Cloud computing promises to displace wholly owned information technology hardware and software. How does this assertion stand up over time?
When the price of a durable item remains stable, the market for that item tends toward equilibrium between the item's cost of ownership and its rental or lease rate. When the price is near that equilibrium, other factors influence the decision to buy or rent. For instance, a person may choose to buy, lease or rent a car. In New York City, a person may dispense with a car entirely, relying instead on taxis or mass transit: the "hassle factor" of parking and insurance overwhelms the value of independent transportation. As another example, economists speak of the balance between the cost of home ownership and the price of renting an apartment. An apartment isn't a home, but the two serve similar enough functions to make a price comparison meaningful. For some people, moving from a home to an apartment or condominium makes sense. Renting reduces the hassle factor.
These scenarios rely on stable prices over time. If the item’s price changes, the balance between ownership and renting shifts. When the price of a home steadily rises, the home becomes a good investment, despite the hassle factor of home ownership. Seeing the value of a new car drop in the first year of ownership, some people choose to lease. Others may buy, holding the car for much longer, and put up with the hassle factor of having an older car.
Information technology faces similar economics. When the price of computing was very high, most users rented (using time-sharing). The lower price of computing over time led more businesses to buy, and those businesses could afford greater amounts of computational power.
Increasingly powerful and affordable computing environments addressed larger sets of data and more complex algorithms. The range of problems itself didn’t change, but the cost of solving those problems dropped as the cost of technology steadily declined. Problems that had been impossible to solve became expensive to solve, and eventually became inexpensive to solve. For instance, American Airlines created a powerful competitive advantage by deploying the Sabre online reservation system in the ’60s. Now, any airline that doesn’t have an online reservation system isn’t a real airline. Companies across many industries applied computing following two mandates: Wherever there were 50 people doing the same thing, automate; and wherever there were 50 people waiting for one person to do something, automate.
Companies with large computing infrastructures discovered their available spare capacity. A company with a million servers running at 98 percent utilization realized it had the equivalent of 20,000 machines sitting idle. The executives knew that spare capacity wasn't waste, but simply the consequence of varying workload. They also realized they could "rent out" small chunks of their available computing resources, as long as they could fence those users off from the internal processing the company needed to run. These companies re-created the time-sharing model and monetized their spare capacity.
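The idle-machine figure is simple arithmetic. A minimal sketch, using the illustrative numbers from the text (not data from any real provider):

```python
# Back-of-envelope check of idle capacity at a given utilization rate.
# The inputs (one million servers, 98 percent utilization) are the
# illustrative figures from the text, not measurements.
servers = 1_000_000
utilization = 0.98

idle_equivalent = round(servers * (1 - utilization))
print(idle_equivalent)  # 20000 machine-equivalents sitting idle
```

The same two-line calculation works for any fleet size: the unrented remainder is simply the fleet times one minus average utilization.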
Smaller organizations valued cloud computing. It eliminated a barrier to entry for software development firms. Software start-ups once raised capital to purchase technology for development and test. With cloud, they could rent just the capacity they needed for the time they required. Larger firms could get additional capacity to deal with workload spikes, and then release that capacity when the demand lessened. Companies of all sizes rediscovered that most of their IT workload could run on a generic computational resource. They realized renting was cheaper than owning, especially when they considered the hassle factor.
This appealing economic model deteriorates as the underlying price of technology drops. Would any venture capital firm fund a start-up that intended to deliver cloud computing? Today’s cloud computing providers rely on the sunk cost of their existing infrastructure. The initial capital expense is an insurmountable barrier to entry, as long as that cost remains high.
The cost of computing continues to drop, eroding that barrier to entry. Today's start-up can acquire multicore computing platforms for a few hundred dollars. A midsized company can acquire computing capacity at one-eighth to one-sixty-fourth the price charged five years ago. This represents the continuing impact of Moore's Law, which halves the unit cost of computing every nine, 12 or 18 months, compounded across a five-year horizon. (Network capacity tends to double per unit cost every nine months, disk storage every year, and processors at the longer end of the scale.)
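The one-eighth-to-one-sixty-fourth range follows from counting whole cost-halvings over a 60-month horizon. A minimal sketch, with the doubling periods taken from the text:

```python
# Unit cost remaining after the whole Moore's Law halvings completed
# within a horizon. Halving periods (in months) follow the text:
# processors ~18, disk storage ~12, network capacity ~9.
def cost_fraction(horizon_months: int, halving_period_months: int) -> str:
    halvings = horizon_months // halving_period_months  # whole halvings completed
    return f"1/{2 ** halvings}"

for component, period in [("processors", 18), ("disk storage", 12), ("network", 9)]:
    print(component, cost_fraction(60, period))
# processors 1/8
# disk storage 1/32
# network 1/64
```

Three halvings in 60 months for processors yields one-eighth; six halvings for network capacity yields one-sixty-fourth, matching the range quoted above.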
The Upside of the Hassle Factor
Today's public cloud providers will presumably continue to grow, but not exponentially. In five years, the cost of their marginal capacity will be minuscule compared with today's prices. Public cloud consumers will realize the hassle factor associated with owning technology is trivial compared with the business risk of depending on someone else's availability, security, recoverability, privacy, service levels and overall care of their Internet-connected generic technology. Companies will rediscover the benefits of having a captive IT supplier staffed by their own employees. When a public cloud fails, all it can give its customers is more capacity at a lower price, later. An executive managing an in-house IT staff has far more options when that staff fails to deliver what the business needs. That's the upside of the hassle factor.
Is there any long-term viable strategy for a public cloud vendor? The first challenge would be to ride the price/performance improvements as rapidly as they arrive, so the business would have to invest continually in new IT, which is costly. Firms such as Google, Microsoft and Amazon, which have already invested heavily in IT, have developed brilliant innovations to contain costs: modular designs with minimal site preparation, power and cooling added as needed, and standardized containers delivered and wired in on demand. Amazon in particular builds modular data centers to minimize construction costs and optimize heating, cooling, cable runs and energy consumption. These businesses strive relentlessly for margin performance by monetizing unused capacity, but the core business drives their IT procurement strategy.
A public cloud vendor has no funding source to support that level of IT investment. If a public cloud vendor could identify a core set of long-term customers guaranteed to spend at least some amount annually, those customers could anchor the vendor the way a large department store anchors a shopping mall. All the existing public cloud vendors have such a customer: their parent company. But a cloud customer that promises to spend a minimum amount annually, regardless of actual utilization, isn't buying cloud computing; it's outsourcing.
The cost/benefit analysis between an external supplier and a captive supplier comes down to this: Can the business run its data center efficiently enough to compete with an outsourcer? The outsourcer has the same capital, software and personnel costs, and must also make a profit. If a business runs its data center inefficiently, cloud is only half the solution. The whole solution is either outsourcing or running the data center more efficiently.
Public cloud vendors exploit the temporary gap between the declining cost of computing and the rising demand for it. As that cost continues to drop, businesses that need computing will find it increasingly affordable. More and more complex problems will become tractable with owned resources. The benefits of ownership will outweigh the apparent simplicity of public cloud. Cloud computing will continue—but as private and community cloud, not public cloud.
For some companies, the great migration to public cloud will flow in reverse. For most companies, it will stop before it even begins. As the market dries up, the end game will evolve as W. Chan Kim and Renée Mauborgne describe in Blue Ocean Strategy. Expect to see frantic attempts at service differentiation and price wars as public cloud providers collapse into a “red ocean.”
Note: This article follows the “NIST Definition of Cloud Computing” as defined in NIST SP 800-145, from http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf. The key elements of this definition are on-demand, self-service, broad network access, resource pooling, rapid elasticity and measured service. Public cloud refers to cloud capabilities available to the general public.