Downtime is a significant issue for a Web services provider that offers cloud computing. Such a provider is adversely affected whenever an outage leaves numerous users unable to communicate or perform work. Worse yet, when this happens, blogs and the media can have a field day with it, and once the outage is publicized, it can have a devastating effect on the organization.
Distributed servers are prone to outages because they lack the large blocks of memory, cache, and memory reallocation needed to prevent them. Cache is a significant factor in reliability. A distributed server routinely experiences 200 hours of downtime per year.
The mainframe might have five minutes of downtime per year, if that, and often that is part of a scheduled outage. The mainframe excels at offering cache and large blocks of available memory, making it the ideal cloud machine. In fact, here are the top-10 reasons to look at mainframe cloud computing:
10. Reliability, security, and availability
9. Cloud computing leverages virtualization on the mainframe (another technology that’s also far superior on the mainframe), yielding greater efficiency in processing and utilization of computing assets while delivering increased stabilization.
8. Service breadth and the ability to serve a globally integrated enterprise
7. Service functionality
6. Superior service performance; the ability to handle hundreds of customized Linux Web service images
5. Superior service security
4. Superior service reliability
3. Superior ability to manage virtualized services
2. Speed and ease of deployment on the mainframe mean it uses 25 times fewer people (three people per mainframe compared to hundreds in a distributed data center).
1. Scalable symmetric multi-processors are the only platform capable of handling the thousandfold increase in the quantity of information that Google predicts is coming.
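The downtime figures cited above translate directly into availability percentages, which is how cloud service levels are usually expressed. As a rough illustration (assuming an 8,760-hour year and taking the five-minute and 200-hour figures as given), a short calculation:

```python
# Convert annual downtime into an availability percentage.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours; ignores leap years

def availability_pct(downtime_hours_per_year: float) -> float:
    """Fraction of the year a system is up, as a percentage."""
    return 100.0 * (1 - downtime_hours_per_year / HOURS_PER_YEAR)

mainframe = availability_pct(5 / 60)  # 5 minutes of downtime per year
distributed = availability_pct(200)   # 200 hours of downtime per year

print(f"Mainframe:   {mainframe:.4f}%")    # ~99.9990%
print(f"Distributed: {distributed:.2f}%")  # ~97.72%
```

In other words, five minutes per year is the "five nines" territory cloud providers advertise, while 200 hours per year falls below 98 percent availability.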
The conventional wisdom of our day is that having a massive collection of small, inexpensive servers is better than using a mainframe. Conventional wisdom needs to be periodically reexamined in light of new factors affecting the market.
Conventional wisdom is wrong in the case of cloud platforms. The mainframe is 10 times less expensive to own and operate than a massive collection of distributed servers, a reality that emerges from carefully comparing the costs of each platform type using analyst and enterprise metrics. The majority of top executives are looking at the incredible success of Google and others using server-based systems. Instead, they need to examine whether those server platforms provide a sustainable base for cloud computing going forward, or whether those companies will be vulnerable to competitors using mainframes to do the same thing at far less cost.
The labor component accounts for 70 percent of IT costs for distributed computing and 13 percent for mainframe computing. Top executives should understand these huge cost differences between computing platforms and what drives the savings. Automated processes, shared workloads, and a high level of computing availability make the difference. The mainframe is able to manage shared workloads and efficiently offload processor-intensive workloads while requiring only a small labor force to maintain its operation. Top executives have failed to grasp the real comparative costs of distributed vs. mainframe platforms. The challenge is for them to make an honest, unbiased comparison or risk facing a competitor that does.
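To see how the labor share alone drives total cost, consider a hypothetical back-of-the-envelope comparison. The 70 percent and 13 percent labor shares are the figures cited above; the assumption that non-labor spend (hardware, software, power) is equal on both platforms is purely illustrative:

```python
# If labor is a fixed share of total IT cost, then for a given
# non-labor spend N, the total cost is N / (1 - labor_share).
def total_cost(non_labor: float, labor_share: float) -> float:
    """Total IT cost implied by a non-labor spend and a labor share."""
    return non_labor / (1 - labor_share)

N = 1_000_000  # hypothetical non-labor spend, equal on both platforms

distributed = total_cost(N, 0.70)  # labor is 70% of distributed IT cost
mainframe = total_cost(N, 0.13)    # labor is 13% of mainframe IT cost

print(f"Distributed total: ${distributed:,.0f}")  # $3,333,333
print(f"Mainframe total:   ${mainframe:,.0f}")    # $1,149,425
print(f"Ratio: {distributed / mainframe:.1f}x")   # ~2.9x
```

Even under this deliberately conservative assumption, the labor share alone makes the distributed platform roughly three times as expensive; it does not by itself establish the 10x figure, which rests on the fuller analyst and enterprise metrics mentioned above.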
The mainframe is the least expensive cloud computing hardware platform. Reliability, security, scalability, availability, and accuracy add up to five minutes or less of downtime per year for the mainframe vs. an average of 200 hours of downtime per year for each distributed computer. Cloud computing depends on 24x7 operations. When honest, unbiased cost comparisons are performed, the mainframe proves to be the ideal cloud computing platform, bar none.