Tackle cloud complexity
Returning to positive ROI by taking a proactive approach to complexity management.
A blog post by David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting LLP and Dave Knight, VP of Alliance Relationships and IBM Alliance Global Cloud Lead, Deloitte Consulting LLP
When cloud computing took off in 2007, the migration was a gold rush, as companies raced to launch quick-fix solutions in hopes of transforming their businesses. As 2020 begins, cloud computing projects are in their second or third generation. Instead of delivering the promised ROI, some projects are racking up sky-high operational costs, impeding efficiency, and stifling productivity. That’s in addition to potential system outages, security breaches, and customer-facing stoppages and delays that keep developers and architects up at night.
The problem? Cloud complexity. For some companies, it has become a disaster.
In our Cloud Complexity Management Survey, we found that most large enterprises are running into more complexity than expected—and 47 percent of companies surveyed see complexity as the biggest risk to their ROI. Some are even experiencing negative value from their cloud investments.
The complexity issue comes down to three main factors:
- Several different cloud providers to choose from, each with its own databases and development environments, security systems, governing systems, etc.
- Companies not giving up on-premises systems as quickly as they planned, leaving IT departments to support the complex new systems while still maintaining the old
- The rise of multicloud over the past five years
As organizations move to the cloud, they often find there’s a tipping point where complexity begins to outweigh the gains from innovation. The challenge is recognizing when you have reached that point, then choosing whether to rein in operations or ramp up operational spending to offset the complexity. Most organizations are just beginning to recognize that too much complexity is a problem.
Recovering your cloud ROI
Good architectural planning helps deal with complexity issues proactively, but it’s not too late for companies that find themselves in a tight spot.
The first step is admitting that there’s a problem; then the focus can shift to developing a plan to simplify and get things back on track. That typically means hiring outside specialists or taking on the initiative internally. Although additional costs may be incurred, our experience has shown that spending the time on reducing complexity is a worthwhile investment—in some cases generating double-digit ROI.
Even with that incentive, reducing complexity involves a cultural change: shifting to a proactive, innovative, and more thoughtful culture. This shift can be difficult, but it is a critical step before organizations can take advantage of being proactive, use technology as a force multiplier, and benefit from continuous innovation.
Solving for complexity
Based on our experience, resolving cloud complexity typically costs about 20 percent of the current cloud computing budget, above and beyond your existing spend on IT. In other words, if you spend $4 million a year on cloud services in addition to your normal IT budget, it could cost about $800,000 (20 percent of $4 million) to fix a critical complexity issue. This investment may seem significant, but companies are likely to get that money back once the complexity is addressed.
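The 20 percent rule of thumb above can be expressed as a simple calculation. This is an illustrative sketch only: the 20 percent figure is the authors’ estimate, and the function name and figures below are hypothetical, not a pricing model.

```python
# Illustrative sketch of the remediation-cost estimate described above.
# The 20 percent rate is the rule of thumb from the article; nothing here
# is a real pricing model.

def remediation_cost(annual_cloud_spend: float, rate: float = 0.20) -> float:
    """Estimated one-time cost to fix a critical complexity issue."""
    return annual_cloud_spend * rate

cloud_spend = 4_000_000  # $4M annual cloud services spend, as in the example
cost = remediation_cost(cloud_spend)
print(f"Estimated remediation cost: ${cost:,.0f}")  # Estimated remediation cost: $800,000
```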
To course correct, organizations should break the complex architecture apart into individual domains, grouping them by type—for instance, all cognitive pieces together, all services, all processes, on-premises systems, infrastructure, and so on. Within each of those buckets, you can create an abstraction layer with containerization software, such as Red Hat’s OpenShift. It simplifies data and services by combining them into one central resource, whether the data exists in the cloud or not. Instead of managing three different services, each with its own identity and access management and a proprietary directory service, you can leverage a tool that works across the different cloud environments.
Using a container-based option, you can abstract the sharing of information between the directories to support common applications, such as security systems, and proactively automate how security information is shared within a single management console.
From there, you can automate the use of data so that it’s accessible across various functions and applications without increasing the complexity as more data sources are added. Another benefit is that when something goes down, you can quickly spin it back up. And it’s all under a single management layer with the same interface, no matter where the workload is running.
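The abstraction-layer idea described above can be sketched in code. The classes and provider names below are hypothetical stand-ins, not a real OpenShift or cloud-provider API; the point is simply that callers program against one common interface while the underlying directory services may live in any cloud or on-premises:

```python
# Minimal sketch of an abstraction layer over multiple environments.
# All class names and return values are hypothetical illustrations.

from abc import ABC, abstractmethod

class DirectoryService(ABC):
    """Common interface for identity lookups, wherever the service runs."""

    @abstractmethod
    def lookup_user(self, username: str) -> dict:
        ...

class CloudADirectory(DirectoryService):
    """Stand-in for a directory service hosted in one cloud provider."""
    def lookup_user(self, username: str) -> dict:
        return {"user": username, "source": "cloud-a"}

class OnPremDirectory(DirectoryService):
    """Stand-in for a legacy on-premises directory."""
    def lookup_user(self, username: str) -> dict:
        return {"user": username, "source": "on-prem"}

class UnifiedDirectory:
    """Single management layer: one call fans out across all environments."""

    def __init__(self, backends: list[DirectoryService]):
        self.backends = backends

    def lookup_user(self, username: str) -> list[dict]:
        # Callers never need to know which environment answered.
        return [b.lookup_user(username) for b in self.backends]

unified = UnifiedDirectory([CloudADirectory(), OnPremDirectory()])
print(unified.lookup_user("alice"))
```

Because new backends are added behind the same interface, consumers of `UnifiedDirectory` are untouched when a data source is added, which is how the abstraction keeps complexity from growing with each new environment.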
All considered, leveraging a platform like OpenShift as an abstraction layer for containerized workloads helps hide the complexity from people who are leveraging that domain. You can worry less about where and how things are running and shift your focus to creating value. Plus, from an architect’s perspective, it provides backup across clouds. And from a business perspective, it helps ensure that applications are always available to customers.
The ROI of best practices
Managing complexity means returning to a streamlined, productive state where efficiency and innovation are valued. Once complexity is under control, operating costs decrease: you spend less on cloud services, less on development, and less on IT. In addition, you can reduce your security vulnerabilities and regain the tremendous benefits cloud computing offers, all while building applications that hold their value over a longer period of time.
The most successful companies are the forward-looking ones—the game changers and innovators who see innovation as a core strategic advantage and guard it carefully. They don’t allow complexity or other architectural issues to creep in. If you’re still working out how to address yesterday’s challenges today, you’re more than likely behind the complexity curve. You want to get to a place where you’re focused on tomorrow’s challenges today—and cloud complexity is no longer one of them.
A version of this article originally appeared in VentureBeat.