Software-defined everything
Much of the operating environment can now be virtualized, automated, and orchestrated, positioning infrastructure as a competitive differentiator that can help fuel innovation.
Amid the fervor surrounding digital, analytics, and cloud, it is easy to overlook advances currently being made in infrastructure and operations. The entire operating environment—server, storage, and network—can now be virtualized and automated. The data center of the future represents the potential for not only lowering costs, but also dramatically improving speeds and reducing the complexity of provisioning, deploying, and maintaining technology footprints. Software-defined everything can elevate infrastructure investments from costly plumbing to competitive differentiators.
Virtualization has been an important background trend, enabling many emerging technologies over the past decade. In fact, we highlighted it in our very first Technology Trends report six years ago.1 While the overall category is mature, much of the adoption to date has focused primarily on the compute layer. Servers have been abstracted from dedicated physical environments to virtual machines, allowing automated provisioning, load balancing, and management processes. Hypervisors—the software, firmware, or hardware that control virtual resources—have advanced to the point where they can individually manage a wide range of virtual components and coordinate among themselves to create breakthroughs in performance and scalability.
Meanwhile, other critical data center components have not advanced at the same pace. Network and storage assets have remained relatively static, becoming bottlenecks that limit the potential of infrastructure automation and dynamic scale.
Enter software-defined everything. Technology advances now allow virtualization of the entire technology stack—compute, network, storage, and security layers. The potential? Beyond cost savings and improved productivity, software-defined everything can create a foundation for building agility into the way companies deliver IT services.
Software-defined networking (SDN) is one of the most important building blocks of software-defined everything. Like the move from physical to virtual machines for compute, SDN adds a level of abstraction to the hardware-defined interconnections of routers, switches, firewalls, and security gear. Though communication gear still exists to drive the physical movement of bits, software drives the data plane (the path along which bits move) and, more importantly, the control plane, which routes traffic and manages the required network configuration to optimize the path. The physical connectivity layer becomes programmable, allowing network managers—or, if appropriate, even applications—to provision, deploy, and tune network resources dynamically, based on system needs.
SDN also helps manage changing connectivity needs for an increasingly complex collection of applications and end-user devices. Traditional network design is optimized for fixed patterns, often in a hierarchical scheme that assumes predictable volume between well-defined end points operating on finite bandwidth. That was acceptable in the early days of distributed computing and the Web. Today, however, many organizations must support real-time integration across multiple servers, services, clouds, and data stores, enable mobile devices initiating requests from anywhere in the world, and process huge and expanding volumes of internal and external data, which can cause traffic spikes. SDN helps manage that complexity by using micro-segmenting, workload monitoring, programmable forwarding, and automated switch configuration for dynamic optimization and scaling.
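As a concrete, hedged illustration of that programmability, the sketch below shows what provisioning a micro-segment might look like through an SDN controller's northbound REST API. The controller URL, endpoint paths, and payload fields are assumptions for illustration, not any specific product's interface.

```python
# Minimal sketch: provisioning an isolated network segment through a
# hypothetical SDN controller's northbound REST API. Endpoint paths and
# payload fields are illustrative, not tied to any specific vendor.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # assumed endpoint

def provision_segment(name: str, vlan_id: int, bandwidth_mbps: int) -> str:
    """Ask the controller to carve out a micro-segment with a QoS cap."""
    payload = {
        "name": name,
        "vlanId": vlan_id,
        "qos": {"maxBandwidthMbps": bandwidth_mbps},
        # The control plane, not the individual switches, decides how
        # this segment is realized across the physical fabric.
        "isolation": "strict",
    }
    resp = requests.post(f"{CONTROLLER}/segments", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["segmentId"]

if __name__ == "__main__":
    seg = provision_segment("analytics-burst", vlan_id=2101, bandwidth_mbps=500)
    print(f"Segment ready: {seg}")
```

The design point is that the same call could come from a network manager's script or, where appropriate, from an application reacting to a traffic spike.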
The network is not the only thing being reimagined. Software-defined storage (SDS) abstracts physical storage into logical arrays that can be dynamically defined, provisioned, managed, optimized, and shared. Coupled with compute and network virtualization, entire operating environments can be abstracted and automated. The software-defined data center (SDDC) is also becoming a reality. A Forrester report estimates that “static virtual servers, private clouds, and hosted private clouds will together support 58 percent of all workloads in 2017, more than double the number installed directly on physical servers.”2 This is where companies should focus their software-defined everything efforts.
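To make the SDS abstraction concrete, here is a minimal sketch assuming a simple pooled model in which logical volumes are carved from shared capacity by policy rather than tied to a physical array. The class and field names are illustrative, not any vendor's API.

```python
# Minimal sketch of the SDS idea: a logical volume is a policy-driven
# slice of a shared pool, not a fixed physical array. Names and fields
# are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    capacity_gb: int
    allocated_gb: int = 0
    volumes: dict = field(default_factory=dict)

    def provision(self, name: str, size_gb: int, tier: str = "standard") -> None:
        """Dynamically define a logical volume if the pool can back it."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted; expand pool or reclaim volumes")
        self.volumes[name] = {"size_gb": size_gb, "tier": tier}
        self.allocated_gb += size_gb

    def reclaim(self, name: str) -> None:
        """Return a volume's capacity to the pool for reuse."""
        self.allocated_gb -= self.volumes.pop(name)["size_gb"]

pool = StoragePool(capacity_gb=10_000)
pool.provision("oltp-db", size_gb=500, tier="flash")
pool.provision("archive", size_gb=2_000, tier="cold")
pool.reclaim("archive")  # capacity flows back without touching hardware
```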
Yet, as you determine scope, it is important to recognize that SDDC cannot and should not be extended to all IT assets. Applications may have deep dependencies on legacy hardware. Likewise, platforms may have hooks into third-party services that will complicate migrations, or complexities across the stack may turn remediation efforts into value eroders. Be deliberate about what is and is not in scope. Try to link underlying infrastructure activities to a broader strategy on application and delivery model modernization.
A recent Computer Economics study found that data center operations and infrastructure consume 18 percent of IT spending, on average.3 Lowering total cost of ownership by reducing hardware and redeploying supporting labor is the primary goal for many SDDC efforts. Savings come from the retirement of gear (servers, racks, disk and tape, routers and switches), the shrinking of data center footprints (lowering power consumption, cooling, and, potentially, facility costs), and the subsequent lowering of ongoing recurring maintenance costs.
Moving beyond pure operational concerns and cost outlays can deliver additional benefits. The new solution stack should become the strategic backbone for new initiatives around cloud, digital, and analytics. Even without systemic changes to the way systems are built and run, projects should see gains through faster environment readiness, the ability to engineer advanced scalability, and the elimination of power/connectivity constraints that may have traditionally lowered team ambitions. Leading IT departments are reimagining themselves by adopting agile methodologies to fuel experimentation and prototyping, creating disciplines around architecture and design, and embracing DevOps.4 These efforts, when paired with platform-as-a-service solutions, provide a strong foundation for reusing shared services and resources. They can also help make the overall operating environment more responsive and dynamic, which is critical as organizations launch digital and other innovative plays and pursue opportunities related to the API economy, dimensional marketing, ambient computing, and the other trends featured in this report.
Business executives should not dismiss software-defined everything as a tactical technical concern. Infrastructure is the supply chain and logistics network of IT. It can be a costly, complex bottleneck—or, if done well, a strategic weapon. SDDC offers ways to remove recurring costs. Organizations should consider modernizing their data centers and operating footprints, if for no other reason than to optimize their total cost of ownership. They should also pursue opportunities to build a foundation for tomorrow’s business by reimagining how technology is developed and maintained and by providing the tools for disruptive digital, cloud, analytics, and other offerings. It’s not just about the cloud; it’s about removing constraints and becoming a platform for growth. First movers will likely benefit initially from greater efficiencies. Soon thereafter, they should be able to use their virtualized, elastic tools to reshape the ways their companies work (within IT and, more importantly, in the field), engage customers, and perhaps even design core products and offerings.
Cisco Systems Inc. is currently developing a suite of products dubbed Application Centric Infrastructure (ACI), featuring tight integration between the physical and virtual elements. The goal is to move software-defined everything beyond hardware and infrastructure into applications and business operations. What’s more, rather than only abstracting and automating network components, ACI provides hooks into compute and storage components, tools, service level agreements, and related services like load balancing, firewalls, and security policies. According to Ishmael Limkakeng, Cisco’s vice president of sales, “ACI will enable cost efficiency, flexibility, and the rapid development and implementation of business apps that drive the bottom line.”
As an alpha customer for its own technology, Cisco is currently in the midst of a three-year internal ACI roll-out. To date, IT teams have constructed ACI infrastructure in a number of data centers and have begun a multi-year journey of migrating the company’s portfolio of 4,067 production applications.5 In deploying ACI internally, Cisco CIO Rebecca Jacoby is looking to achieve productivity gains by simplifying the provisioning, management, and movement of resources. At the heart of this strategy lies an innovative policy model in which approaches for configuring, using, reusing, and deploying the company’s network become standardized.
Cisco’s ACI deployment teams are working toward reducing overall IT operating expenses by 41 percent. This goal includes 58 percent cost savings in network provisioning and 21 percent cost savings in network operations and management. Moreover, they expect a fourfold increase in bandwidth, which, in turn, could lead to a 25 percent savings in capital expenditures, a 45 percent reduction in power usage, and a physical footprint 19 percent smaller than it was before ACI deployment.
Finally, Cisco aims to improve the flexibility with which those resources are used—expanding from productivity in IT alone to productivity for the business. Cisco has already seen performance gains via its CITEIS private cloud—an implementation of the VCE Vblock architecture stack. And the ACI business case includes a potential 12 percent optimization of compute resources and a 20 percent improvement in storage capacity.6
AmerisourceBergen’s IntrinsiQ unit is a leading provider of oncology-focused software. IntrinsiQ’s applications automate nearly every element of oncology—from treatment options to drug prescriptions—and the software is onsite at more than 700 clinics across the United States and Canada. These installations are important for the company’s physician clients, but installing and maintaining software across hundreds of locations comes with significant overhead.
To ease the overhead burden, IntrinsiQ opted to deploy a single-instance, multi-tenant application to reduce software development and long-term maintenance costs. The 80-person company, however, had no experience in multi-tenancy or in developing the underlying layers of a cloud-based application. Moreover, in moving to the cloud, IntrinsiQ had to demonstrate to its customers that the security of patient data would continue to be compliant with strict industry regulations.
To reduce costs, bolster innovation, and accelerate time to market, IntrinsiQ partnered with PaaS provider Apprenda. Apprenda’s platform provided back-end functions including multi-tenancy, provisioning, deployment, and, perhaps most importantly, security. IntrinsiQ continued to focus on developing oncology applications, as well as incorporating the new private cloud offering into its IT delivery and support models.
The division of duties achieved the needed results. The company’s oncology software specialists were able to work at high productivity levels by focusing on the application functionality and leveraging the PaaS software’s out-of-the-box management tools. IntrinsiQ was able to collapse its development schedule and reduce its costs: The new customer-facing, cloud-based application hit the market 18 months earlier than planned. Additionally, the cloud-based solution has made IntrinsiQ more affordable for smaller oncology clinics—expanding the company’s potential market.
Each day, eBay Inc. enables millions of commerce and payments transactions. eBay Marketplaces connects more than 155 million users around the world who, in 2014 alone, transacted $83 billion in gross merchandise volume. And PayPal’s 162 million users transacted more than $228 billion in total payment volume just last year.8 For eBay Inc., scale is not optional: It is the foundation of all its operations and a critical component of the company’s future plans.
Over the last decade, eBay Inc.’s platform and infrastructure group recognized that as the company grew, its infrastructure needs were also growing. The company’s IT footprint now included hundreds of thousands of assets across multiple data centers. Product development teams wanted to get from idea to release in days, but environments often took months to procure and “rack and stack” via traditional methods. Moreover, the scope of eBay Inc.’s business had grown far beyond online auctions to include offline commerce, payment solutions, and mobile commerce—all domains in which reliability and performance are essential.
The company decided to make infrastructure a competitive differentiator by investing in an SDDC to drive agility for innovation and efficiency throughout its operations while simultaneously creating a foundation for future growth. The company kicked off its SDDC efforts by tackling agility through the construction of a private cloud.
The first step in this process was standardization. Standardizing network design, hardware SKUs, and procedures created homogeneity in the infrastructure, setting the foundation for automation and efficiency.
Historically, the company had procured servers, storage, and network equipment on demand when each team or project requested infrastructure. Cloud solutions, on the other hand, made it possible to decouple the acquisition of compute, storage, and network resources from the provisioning cycles. This allowed for better partnership with the vendor ecosystem through disciplined supply-chain practices, and enabled on-demand provisioning for teams and projects that required infrastructure. Today, engineers are able to provision a virtual host in less than a minute and register, provision, and deploy an application in less than 10 minutes via an easy-to-use portal.
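A rough sketch of that self-service flow appears below. It is a hypothetical portal client, not eBay Inc.'s actual API; it simply models the two steps described above, provisioning a virtual host and then registering and deploying an application onto it.

```python
# Hypothetical self-service portal client; endpoints, payload fields,
# and class names are assumptions for illustration only.
import os
import requests

class CloudPortal:
    def __init__(self, base_url: str, token: str):
        self.base = base_url.rstrip("/")
        self.headers = {"Authorization": f"Bearer {token}"}

    def provision_host(self, flavor: str, image: str) -> str:
        """Request a virtual host; returns its identifier."""
        resp = requests.post(
            f"{self.base}/hosts",
            json={"flavor": flavor, "image": image},
            headers=self.headers,
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["hostId"]

    def deploy_app(self, host_id: str, artifact_url: str) -> str:
        """Register and deploy an application artifact onto the host."""
        resp = requests.post(
            f"{self.base}/deployments",
            json={"hostId": host_id, "artifact": artifact_url},
            headers=self.headers,
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["deploymentId"]

if __name__ == "__main__":
    portal = CloudPortal(
        "https://cloud.internal.example.com/api",
        token=os.environ.get("PORTAL_TOKEN", ""),
    )
    host = portal.provision_host(flavor="standard-4cpu", image="ubuntu-lts")
    print(portal.deploy_app(host, "https://artifacts.example.com/shop/v2.tar.gz"))
```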
The infrastructure team took an end-to-end approach to automation, all the way from hardware arriving at the data center dock to a developer deploying an application on a cluster of virtual machines. Automating onboarding, bootstrapping, infrastructure lifecycle management, imaging, resource allocation, repair, metering, and chargeback was core to building on-demand infrastructure with economic efficiency.
Compute on demand was only the beginning of eBay Inc.’s SDDC strategy. Next, the company tackled higher-order functions like load balancing, object storage, databases-as-a-service, configuration management, and application management. By creating a portfolio of internal cloud computing services, the company was able to add software-defined capabilities and automate bigger pieces of its software development infrastructure. The infrastructure team now provides product teams with the software-enabled tools they need to work more like artists and less like mechanics.
Beyond creating agility, the software-defined initiative has also helped drive enforceable standards. From hardware engineering to OS images to control plane configuration, standardization has become an essential part of eBay Inc.’s strategy to scale. The development culture has shifted away from one in which people ask for special kinds of infrastructure. Now, they are learning how to build products for standard infrastructure that comes bundled with all the tools needed to provision, develop, deploy, and monitor applications through all stages of development. IT still receives (and supports) occasional special requests, but the use of container technologies helps it manage these outliers. These technologies provide engineers with a “developer class of service” where they are able to innovate freely as they would in a start-up—but within components that will easily fit into the broader environment. Moreover, knowing beforehand which commodity infrastructure will be provided has helped both development and operations become more efficient.
eBay Inc. is working to bring software-defined automation to every aspect of its infrastructure and operations. In the company’s network operating centers (NOCs), for example, advanced analytics are now applied to the 2 million metrics gathered every second from infrastructure, telemetry, and application platforms. Traditionally, the company’s “mission control” was surrounded by 157 charts displayed on seven large screens that provided real-time visibility into a carefully curated subset of the metrics and indicators engineers deemed most critical to system stability. These same seven screens can now cover more than 5,000 potential scenarios by displaying only those signals that deviate from the norm, making the detection of potential issues much easier than before. And, with software-defined options to segment, provision, and deploy new instances, engineers can take action in less time than it would previously have taken them to determine which of the screens to look at. Through its SDDC initiative, the company is now able to direct its energy toward prevention rather than reaction, making it possible for developers to focus on eBay Inc.’s core disciplines rather than on operational plumbing.
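The "display only deviations" idea can be approximated with a simple statistical filter. The toy sketch below scores each metric sample against a rolling baseline and surfaces only outliers; the window size and z-score threshold are illustrative assumptions, not eBay Inc.'s production analytics.

```python
# Toy version of "show only what deviates from the norm": score each
# incoming metric against its rolling baseline and surface outliers.
from collections import deque
from statistics import mean, stdev

class AnomalyFilter:
    def __init__(self, window: int = 300, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of samples
        self.threshold = threshold           # z-score beyond which we alert

    def observe(self, value: float) -> bool:
        """Return True if the value deviates from its rolling baseline."""
        anomalous = False
        if len(self.history) >= 30:  # require a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

latency = AnomalyFilter()
# Steady ~12 ms samples establish the baseline; the final spike fires.
for sample in [12.0 + 0.1 * (i % 5) for i in range(60)] + [48.5]:
    if latency.observe(sample):
        print(f"surface to NOC screen: {sample} ms")
```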
Since joining Little Rock-based Acxiom, a global enterprise data, analytics, and SaaS company, in 2013, Dennis Self, senior vice president and CIO, has made it his mission to lead the organization into the new world of software-defined everything. “Historically, we’ve propagated physical, dedicated infrastructure to support the marketing databases we provide customers,” he says. “With next-gen technologies, there are opportunities to improve our time to deliver, increase utilization levels, and reduce the overall investment required for implementation and operational activities.”
In early 2014, Self’s team began working to virtualize that infrastructure—including the network and compute environments. The goal is to build next-generation infrastructure-as-a-service and network-as-a-service capabilities that help create economies of scale, improve speed of delivery, and increase overall efficiency through automation. This effort is helping to drive innovation at the application level as well. Software engineers are able to quickly design new solutions to help customers cleanse and integrate the data required to enable their marketing strategies and related activities.
Though there is considerable work still to be completed, Acxiom is already seeing benefits. Provisioning, which once took days or weeks, now takes minutes or hours—allowing teams to quickly jumpstart new product offerings and internal projects.
Self sees Acxiom’s current efforts as part of a disruptive trend that may transform the way businesses approach the infrastructure, databases, and software that they need to succeed. “I can foresee a point in the future when infrastructure will be commoditized and offered at market rates by a handful of utility companies,” he says. “That will allow IT teams to focus on other, more strategic IT services and solutions that will help enable business strategies and operations.”
At Citi, the IT services we provide to our customers are built on top of thousands of physical and virtual systems deployed globally. Citi infrastructure supports a highly regulated, highly secure, highly demanding transactional workload. Because of these demands, performance, scale, and reliability—delivered as efficiently as possible—are essential to our global business operations. The business organizations also task IT with supporting innovation by providing the IT vision, engineering, and operations for new services, solutions, and offerings. Our investments in software-defined data centers are helping on both fronts.
With 21 global data centers and system architectures ranging from scale-up mainframes and storage frames to scale-out commodity servers and storage, dealing effectively with large-scale IT complexity is mission-critical to our business partners. We became early adopters of server virtualization by introducing automation to provisioning several years ago, and we manage thousands of virtual machines across our data centers. The next step was to virtualize the network, which we accomplished by moving to a new two-tier spine-and-leaf IP network fabric similar to what public cloud providers have deployed. That new physical network architecture has enabled our software-defined virtual networking overlays and our next-generation software-defined commodity storage fabrics. We still maintain a large traditional fiber channel storage environment, but many new services are being deployed on the new architecture, such as big data, NoSQL and NewSQL data services, grid computing, virtual desktop infrastructure, and our private cloud services.
Currently, we are pursuing three key objectives to create a secure global private cloud. The first objective focuses on achieving “cloud scale” services. As we move beyond IT as separate compute, network, and storage silos to a scale-out cloud service model, we are building capabilities for end-to-end systems scaled horizontally and elastically within our data centers—and potentially, in the not-too-distant future, hybrid cloud services.

The second objective is about achieving cloud speed of delivery by accelerating environment provisioning, speeding up the deployment of updates and new capabilities, and delivering productivity gains to application teams through streamlined, highly automated release and lifecycle management processes. The results so far are measurable in terms of both client satisfaction and a simplified maintenance and operations scope.

The final objective is to achieve ongoing cloud economics with respect to the cost of IT services to our businesses. More aggressive standardization, re-architecting, and re-platforming to lower-cost infrastructure and services are helping reduce technical debt, and will also help lower IT labor costs. At the same time, the consumption of IT services is increasing year over year, so adopting more agile services allows our businesses to grow while keeping IT costs manageable. Our new CitiCloud platform-as-a-service capabilities—which feature new technologies such as NoSQL/NewSQL and big data solutions along with other rapid delivery technology stacks—help accelerate delivery and time-to-market advantages. Packaging higher-level components and providing them to application teams accelerates the adoption of new technologies as well. Moreover, because the new technology components must meet strict compliance, security, and DevOps standards, offering more tech stacks as part of platform-as-a-service provides stronger reliability and security guarantees.
By introducing commodity infrastructure underneath our software-defined architectures, we have been able to incrementally reduce unit costs without compromising reliability, availability, and scale. Resiliency standards continue to be met through tighter controls and automation, and our responsiveness—measured by how quickly we realize new opportunities and deliver new capabilities to the business—is increasing.
Focusing IT on these three objectives—cloud scale, cloud speed, and cloud economics—has enabled Citi to meet our biggest challenge thus far: fostering organizational behavior and cultural changes that go along with advances in technology. We are confident that our software-defined data center infrastructure investments will continue to be a key market differentiator—for IT, our businesses, our employees, our institutional business clients, and our consumer banking customers.
Risk should be a foundational consideration as servers, storage, networks, and data centers are replatformed. The new infrastructure stack is becoming software-defined, deeply integrated across components, and potentially provisioned through the cloud. Traditional security controls, preventive measures, and compliance initiatives have been challenged from the outset because the technology stack they sit on top of was inherently designed as an open, insecure platform. For a software-defined technology stack to be effective, key concepts such as access, logging and monitoring, encryption, and asset management need to be reassessed and, where necessary, enhanced to remain relevant. There are new layers of complexity, new degrees of volatility, and a growing dependence on assets that may not be fully within your control. The risk profile expands as critical infrastructure and sensitive information are distributed to new and different players. Though software-defined infrastructure introduces risks, it also creates opportunities to address some of the more mundane but significant challenges in day-to-day security operations.
Security components that integrate into the software-defined stack may be different from what you own today—especially considering federated ownership, access, and oversight of pieces of the highly integrated stack. Existing tools may need to be updated, or new tools procured that are built specifically for highly virtual or cloud-based assets. Governance and policy controls will likely need to be modernized. Trust zones should be considered: envelopes that can manage groups of virtual components for policy definition and updates across virtual blocks, virtual machines, and hypervisors. Changes made outside these controls can be automatically denied, and the extended stack can be continuously monitored for incident detection and policy enforcement.
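A minimal sketch of the trust-zone concept follows, assuming a simple policy model in which configuration changes are accepted only from approved channels. The components, channels, and enforcement logic are hypothetical.

```python
# Minimal sketch of a "trust zone": a policy envelope over a group of
# virtual components. Changes that do not arrive through an approved
# channel are denied and logged. All names are illustrative.
from dataclasses import dataclass

APPROVED_CHANNELS = {"ci-pipeline", "change-management"}

@dataclass(frozen=True)
class ChangeRequest:
    component: str   # e.g., a VM, virtual switch, or hypervisor
    setting: str
    value: str
    channel: str     # where the change originated

def enforce(zone_components: set, change: ChangeRequest) -> bool:
    """Allow a change only if it targets the zone via an approved channel."""
    if change.component not in zone_components:
        return False  # out of scope for this trust zone
    if change.channel not in APPROVED_CHANNELS:
        print(f"DENIED and logged: {change.component}.{change.setting} "
              f"via {change.channel}")
        return False
    return True

zone = {"vm-web-01", "vswitch-dmz", "hv-cluster-3"}
enforce(zone, ChangeRequest("vm-web-01", "firewall", "allow-all",
                            channel="ssh-console"))  # denied: unapproved path
```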
Just as importantly, revamped cyber security components should be designed to be consistent with the broader adoption of real-time DevOps. Moves to software-defined infrastructure are often not just about cost reduction and efficiency gains; they can set the stage for more streamlined, responsive IT capabilities. Governance, policy engines, and control points should be designed accordingly—preferably baked into new delivery models at the point of inception. Security and controls considerations can be built into automated approaches to build management, configuration management, asset awareness, deployment, and system automation—allowing risk management to become muscle memory. Requirements and testing automation can also include security and privacy coverage, creating a core discipline aligned with cyber security strategies.
Similarly, standard policies, security elements, and control points can be embedded into new environment templates as they are defined. Leading organizations couple infrastructure modernization with a push for highly standardized physical and logical configurations. Standards that are clearly defined, consistently rolled out, and tightly enforced can be a boon for cyber security. Vulnerabilities abound in unpatched, noncompliant operating systems, applications, and services. Eliminating variances and proactively securing newly defined templates can reduce potential threats and provide a more accurate view of your risk profile.
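As a hedged illustration of embedding control points in templates, the sketch below validates a candidate environment template against a security baseline before it can be published to a catalog. The baseline keys and template shape are assumptions for illustration.

```python
# Illustrative gate: a new environment template must satisfy an embedded
# security baseline before publication. Keys and values are assumptions.
REQUIRED_BASELINE = {
    "os_patch_level": "2015-01",  # minimum acceptable patch set (YYYY-MM)
    "disk_encryption": True,
    "central_logging": True,
}

def validate_template(template: dict) -> list:
    """Return a list of baseline violations; an empty list means compliant."""
    violations = []
    for key, required in REQUIRED_BASELINE.items():
        actual = template.get(key)
        if isinstance(required, bool):
            if actual is not True:
                violations.append(f"{key} must be enabled")
        elif actual is None or actual < required:
            violations.append(f"{key} below baseline ({actual!r} < {required!r})")
    return violations

candidate = {"os_patch_level": "2014-11", "disk_encryption": True}
for problem in validate_template(candidate):
    print("blocked from catalog:", problem)
# -> os_patch_level below baseline; central_logging must be enabled
```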
The potential scope of a software-defined everything initiative can be daunting—every data center, server, network device, and desktop could be affected. What’s more, the potential risk is high, given that the entire business depends on the backbone being overhauled. To complicate matters further, an initiative may deliver real long-term business benefits yet have only vague immediate impacts on line-of-business bottom lines (depending on IT cost models and charge-back policies). Given the magnitude of such an effort and its associated cost, is it worth campaigning for prioritization and budget? If so, where would you start? The following are some considerations based on the experiences of early adopters:
In mature IT organizations, moving eligible systems to an SDDC can reduce spending on those systems by approximately 20 percent, which frees up budget needed to pursue higher-order endeavors.9 These demonstrated returns can help spur the initial investment required to fulfill virtualization’s potential by jump-starting shifts from physical to logical assets and lowering total cost of ownership. With operational costs diminishing and efficiencies increasing, companies will be able to create more scalable, responsive IT organizations that can launch innovative new endeavors quickly and remove performance barriers from existing business approaches. In doing so, they can fundamentally reshape the underlying backbone of IT and business.