End of the efficiency trade and the coming capacity crunch
Part 3: Cloud and infrastructure article series
In enabling new business and preserving existing value, IT executives are more challenged than ever to balance the agility companies want with the stability they need. Legacy data centers sit increasingly far from cloud platforms and mobile end users, and delivering data to a variety of devices while taking advantage of new platforms typically requires giving up some control over IT infrastructure. Our new series, Cloud and infrastructure, covers these trends and ways IT executives can make the most of emerging technologies. The third article in the series addresses data center strategy, including emerging technology, the colocation market, IT equipment, and transistor technology.
In the world of servers and data centers, IT executives have grown accustomed to buying more computing power each year at lower cost and in ever-smaller packages. This trend has enabled an efficiency trade: the space and power required to deliver a unit of computing capacity and storage have steadily fallen. Since IT equipment and facilities still consume a significant share of IT budgets, changes in these trends could create strategic capacity problems and drive up long-term costs.
In this article we discuss how computing refreshes will likely start to lose value, how technology advances may soon counter standardization moves, and why data center consolidation will likely need to reverse. We will discuss some possible mitigations and look closely at developments in the largest cost component: data center strategy.
How the end of Moore’s Law changes data center strategy
In the past, companies could reserve some CapEx every year for new hardware, and IT OpEx budgets could shrink or remain flat while delivering the same or more business functionality. Businesses essentially became used to getting more for less. This tradeoff, however, is starting to run its course as both advances in transistor technology and improvements in computer equipment pricing slow down, signaling an end to Moore's Law.
Manufacturers have delivered recent performance and cost improvements with features such as multi-core architectures and specialized chips rather than increased transistor density, but the bottom line is that computing power advances are slowing, price improvements are stalling, and replacement technologies are not yet ready.
The internet triggered huge increases in corporate data center capacity, but the slowdown in microprocessor performance gains and size reductions means that once again the capacity curve may invert. If you can no longer easily buy more computing power in the same server footprint, then increasing capacity can only mean more servers of the same size, more floor space needed, and more electricity consumed by IT equipment.
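The relationship above can be made concrete with a short projection sketch. All figures here are illustrative assumptions, not data from this article: if per-server performance stays flat, growth in demand maps linearly onto server count, floor space, and power draw.

```python
def project_footprint(base_servers, demand_growth, years,
                      perf_gain_per_year=1.0,   # 1.0 = flat per-server performance
                      sqft_per_server=0.5,      # assumed floor-space allocation per server
                      kw_per_server=0.4):       # assumed average power draw per server
    """Return (servers, sq_ft, kW) needed after `years` of compound demand growth."""
    demand = demand_growth ** years
    per_server_perf = perf_gain_per_year ** years
    servers = base_servers * demand / per_server_perf
    return servers, servers * sqft_per_server, servers * kw_per_server

# With flat per-server performance, 20% annual demand growth over 5 years
# multiplies the footprint by 1.2**5, roughly 2.5x the servers, space, and power.
servers, sqft, kw = project_footprint(1000, 1.20, 5)
```

When `perf_gain_per_year` is above 1.0, as it was historically under Moore's Law, the same sketch shows the footprint growing far more slowly than demand, which is the trade the article says is ending.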
There may be a coming capacity crunch in the data center. Do you respond by building new capacity or leveraging someone else’s?
Soaring IC costs
As manufacturers move to 16/14nm process nodes, design costs will exceed $250 million for most ICs. To recover those design costs, a manufacturer would need to generate $1.5 billion in revenue during the life of the IC.
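The ratio implied by these two figures can be made explicit with a quick calculation; the 6x multiple is simply what the quoted numbers imply, and any interpretation of it as a margin assumption is ours, not the article's:

```python
design_cost = 250e6        # quoted design cost at 16/14nm, in dollars
required_revenue = 1.5e9   # quoted lifetime revenue needed to recover it

# Revenue must be 6x the design cost; equivalently, design cost alone
# consumes roughly one sixth (about 16.7%) of lifetime revenue.
multiple = required_revenue / design_cost
design_share = design_cost / required_revenue
```

At that multiple, only high-volume parts can justify a leading-edge design, which is consistent with the article's point that cheap yearly performance gains are getting harder to deliver.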
The coming capacity crunch: Build or buy?
With the capacity crunch on the horizon, IT executives should be considering the following dynamics as they assess their data center strategy:
- Economies of scale favor large-scale data center builds
- Data center construction techniques and power sources evolve
- Design lifecycles of data center buildings outlast computing and storage technology improvements by an order of magnitude
- Following shorter term emerging technology trends can result in misaligned long-term capital investments
- Real estate ownership, balance sheet assets, control of capacity, and securing the risk profile internally have advantages for some business models
- When emerging technologies become cost effective in the mainstream, there will likely be an opportunity to right-size capacity to the new dynamic (e.g., x86 virtualization)
Whether you decide to build or buy, close consideration of all service, cost, and risk parameters is a central component of your data center strategy.
Building and maintaining enterprise data centers is usually an industrial-scale undertaking at industrial-scale cost, where economies of scale and lengthy lifecycles are key to managing unit costs. Data center construction continues to grow at a pace largely driven by the primary wholesale colocation market. New builds tend to be on a large scale for data center colocation providers or large-scale internet-based service providers such as Facebook, Microsoft, and Amazon. These providers are investing in site selection, power and network capabilities, a minimum of Tier III resilience, and substantial square footage. It is sensible to have a hard look at the capital and time required to build your own and weigh that against a burgeoning market of substantial scale.
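The build-versus-buy comparison can be framed as a simple cost model. Every figure below is a hypothetical assumption for illustration only, not market data, and the model deliberately ignores financing costs, utilization ramp, and resale value:

```python
def annualized_build_cost(capex_per_kw, capacity_kw, lifetime_years, opex_per_kw_yr):
    """Straight-line annualized cost of an owned data center (no financing costs)."""
    return capacity_kw * (capex_per_kw / lifetime_years + opex_per_kw_yr)

def colo_cost(rate_per_kw_month, capacity_kw):
    """Annual wholesale colocation cost at a given per-kW monthly rate."""
    return rate_per_kw_month * 12 * capacity_kw

# Hypothetical 2 MW requirement: assumed $10,000/kW to build over a 15-year
# life with $1,800/kW-yr operating cost, versus an assumed $180/kW-month
# wholesale colocation rate.
build = annualized_build_cost(capex_per_kw=10_000, capacity_kw=2_000,
                              lifetime_years=15, opex_per_kw_yr=1_800)
buy = colo_cost(rate_per_kw_month=180, capacity_kw=2_000)
```

The point of the sketch is not the specific numbers but the structure: ownership front-loads capital over a building lifecycle an order of magnitude longer than IT refresh cycles, while colocation converts that exposure into an annual rate that can be right-sized as technology shifts.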
Download the full article to read more about these topics, as well as considerations such as power efficiency, new cooling technology, and off-grid power options.
Slowdowns in the development of new transistor technology could drive a number of unintended consequences:
- Refresh cycles could slow
- Capacity needs might increase
- Complexities would likely return
- Prices could rise
If these consequences materialize, their effect on your computing, support, and data center strategies will need to be thought through thoroughly. The next article in our series considers the operating aspects of these and other trends: How does the IT executive establish the optimal operating model for managing emerging and legacy technologies?