Cloud Processes
A blog post by Mike Kavis, chief cloud architect
In my last blog, I talked about how, in addition to technology, people and process are really key ingredients of cloud adoption. My mantra is, “Technology is easy; people are hard.” That extends to process too. Cloud technology is much easier to get a handle on than the people and processes that orchestrate how work gets done. In this blog, I’ll examine how process affects cloud transformations.
Building better processes for cloud
In my experience, I’ve found there are five keys to successfully dealing with process issues that invariably arise in a cloud journey:
Look at “what,” not “how”
Processes don’t spring out of thin air. They’re created to fill a need. The reason for the process (the “what”) is usually more important than how it’s performed. That’s especially true for cloud deployments. Take provisioning, for example. In cloud, infrastructure is code, not physical machines or appliances. Of course, infrastructure should be provisioned cost-effectively and securely, but that shouldn’t add weeks or months to the timeline.
What I’m driving at is that the need for a particular process probably won’t disappear during a cloud migration, but the process should be evaluated for relevance and redesigned to meet the need while being optimized for a cloud environment. Otherwise, you’ll waste time and be bogged down from the start.
For example, in the non-cloud world, infrastructure is physical assets: servers, storage devices, network appliances, etc. Typically, there are silos of experts assigned to each type of infrastructure (e.g., server, storage, and network teams). To orchestrate all of the steps needed to install the hardware and then deploy and configure software on top of it, processes are put in place to manage the handoffs between these groups.
But in the cloud, infrastructure is code, and the code required to implement the virtual hardware, software, and configurations can be implemented by a single team, which requires fewer processes and handoffs. So, the requirements (the “what”) for implementing the technologies do not change, but the steps involved to perform the implementation may change drastically. How you satisfy those requirements in the cloud should be far simpler and more automated than how it was previously done. Many of the handoffs, reviews, and tickets between silos become pure waste in the cloud.
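To make that concrete, here's a minimal sketch of what "infrastructure is code" can look like, written in Python with the AWS boto3 SDK purely for illustration (the AMI ID, names, and region are placeholders, and AWS is just an assumed CSP). The network, server, and software-configuration steps that once crossed three silos collapse into one reviewable, repeatable script:

```python
# A minimal sketch of "infrastructure as code": one script (one team)
# provisions the network rule, the server, and its software configuration
# in a single pass, with no tickets or handoffs between silos. Assumes AWS
# credentials are already configured; names and the AMI ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The "network team" step: open inbound HTTPS.
sg = ec2.create_security_group(
    GroupName="web-sg", Description="Allow inbound HTTPS")
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

# The "server team" and "software team" steps: launch the instance and
# configure it at boot via user data, all in the same commit.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1, MaxCount=1,
    SecurityGroupIds=[sg["GroupId"]],
    UserData="#!/bin/bash\nyum install -y nginx && systemctl start nginx",
)
```

Because the whole stack lives in one artifact, a change becomes a code review and a re-run rather than a chain of tickets between teams.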
Remove the waste
Speaking of waste, many companies start their cloud journey with decades of baggage in their IT processes. And, for most of them, those processes are less than ideal—even for the legacy environment. Take that to cloud, and you’re asking for trouble. Cloud provides the opportunity to implement infrastructure and software much more quickly, but you must remove the waste from your legacy processes to take advantage of it.
To remove waste from processes, you must first be able to visualize all the steps involved within a process. Enter value stream mapping (VSM). VSM is a pragmatic approach for analyzing and visualizing processes, designed to ferret out waste and create value end to end by removing low-value steps. You can apply VSM best practices to help stakeholders identify and eliminate inefficiencies and optimize processes to improve flow.
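To see how a value stream map surfaces waste, here's a toy calculation of two standard VSM metrics, lead time and process cycle efficiency. The steps and numbers are invented for illustration; real figures would come from mapping your own provisioning process:

```python
# A toy value-stream-mapping calculation: for each step we record hands-on
# ("value-added") time and the wait time that precedes it. The numbers are
# invented for illustration only.
steps = [
    # (step,                     value-added hrs, wait hrs)
    ("submit ticket",            0.5,   0),
    ("capacity review",          1.0,  80),   # sits in a queue for ~2 weeks
    ("rack & cable server",      4.0,  40),
    ("install OS + middleware",  6.0,  16),
    ("security sign-off",        1.0,  60),
]

value_added = sum(v for _, v, _ in steps)
lead_time = value_added + sum(w for _, _, w in steps)

# Process cycle efficiency: the share of elapsed time that adds value.
pce = value_added / lead_time
print(f"lead time: {lead_time} hrs, value-added: {value_added} hrs, "
      f"efficiency: {pce:.1%}")  # ~6% here; the other ~94% is waste
```

Laying the numbers out this way makes the waste impossible to argue with: most of the elapsed time is spent waiting on handoffs, not doing work.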
Once you identify the bottlenecks and waste in processes, you can redesign them to be more cloud-friendly and create better value for the company. And remember, always optimize processes before attempting to automate them.
Share the responsibility
While you’re redesigning your processes for cloud, it’s essential to keep in mind the shared responsibility model between you and the cloud service provider (CSP). In the public cloud, the CSP owns responsibility for the underlying infrastructure. This changes many roles, responsibilities, processes, and tools, and it requires a redesign of both processes and operating models to take advantage of the new division of labor.
Incident management is a good example of a process where the roles and responsibilities in the cloud differ from those in a traditional data center. In a public cloud, the underlying infrastructure is the responsibility of the CSP(s), but you still own everything built on top of it.
If you don’t define upfront who’s responsible for what, and if you don’t optimize your processes for cloud, you could lose the agility benefits that cloud offers. And, when the first incident happens, you’ll be stuck using legacy processes to solve cloud problems, and you’ll likely see your key metrics like SLOs (service-level objectives) and MTTR (mean time to repair) underperform.
Operational, security, and development processes also need to function under a shared responsibility model. So, it’s essential to identify role and responsibility changes for these processes as well so that, when something goes wrong, it is clear who is responsible and accountable for resolution.
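As a sketch of what "defining upfront who's responsible" can look like in practice, here's a hypothetical responsibility matrix encoded for incident routing. Which layers fall to the CSP versus your own teams varies by service model (IaaS vs. PaaS vs. SaaS), so these assignments are illustrative only:

```python
# A hypothetical shared-responsibility matrix for incident routing.
# The layer names and owners are illustrative assumptions, not a
# definitive model; yours will depend on the service model in use.
RESPONSIBILITY = {
    "physical-datacenter": "CSP",
    "hypervisor":          "CSP",
    "managed-database":    "CSP",            # e.g., patching the engine
    "os-and-runtime":      "platform team",  # yours on IaaS
    "application-code":    "product team",
    "data-and-iam-config": "security team",  # misconfiguration is on you
}

def route_incident(layer: str) -> str:
    """Return who is accountable for resolving an incident at this layer."""
    owner = RESPONSIBILITY.get(layer)
    if owner is None:
        raise ValueError(f"no owner defined for layer {layer!r}; close this "
                         "gap before the incident happens, not during it")
    return ("open a support case with the CSP" if owner == "CSP"
            else f"page the {owner}")

print(route_incident("os-and-runtime"))  # -> page the platform team
print(route_incident("hypervisor"))      # -> open a support case with the CSP
```

The point is not the code itself but the discipline it encodes: every layer has a named owner before anything breaks, so nobody debates accountability mid-incident.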
Automate where you can
And while you’re redesigning legacy processes for cloud, keep one word top of mind: automation. Throughout the software development life cycle (SDLC), there are myriad steps and processes. So many, in fact, that it’s beyond the scope of this blog to cover all of them. I’ll touch on one group as an example—deployment processes—because that’s where many bottlenecks occur.
One problem I frequently see is that companies take their legacy deployment processes to cloud as-is. No. Just no. Why? Because most often, these processes were written at a time when huge, monolithic applications were deployed on physical infrastructure, biannually or quarterly, through a process that required manual steps, review gates, and numerous forms and checklists. That simply won’t work at the speed of cloud.
Enter CI/CD: continuous integration, continuous delivery. The power of CI/CD lies in automation. If you rethink how you deliver software, you can get really creative with how you use CI/CD to streamline processes. Manual reviews and checklists? Gone. Slow delivery? Gone. Infrequent updates? Gone. Pervasive automation? Yes.
Here are just a few aspects of delivery pipelines that you can automate: builds, unit and integration tests, security and compliance scans, environment provisioning, and the deployments themselves.
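To illustrate the shape of such a pipeline, here's a minimal sketch in plain Python standing in for whatever CI/CD tool you use; the stage names and make targets are hypothetical. The point is that a commit flows through an ordered series of automated gates with no human in the loop:

```python
# A minimal sketch of a CI/CD pipeline as an ordered series of automated
# gates. Stage names and commands are hypothetical; in practice each stage
# maps to a declarative step in your CI system.
import subprocess
import sys

STAGES = [
    ("build",          ["make", "build"]),
    ("unit tests",     ["make", "test"]),
    ("security scan",  ["make", "scan"]),
    ("deploy staging", ["make", "deploy-staging"]),
    ("smoke tests",    ["make", "smoke"]),
    ("deploy prod",    ["make", "deploy-prod"]),
]

for name, cmd in STAGES:
    print(f"--- {name} ---")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # A failed gate stops the pipeline; nothing broken reaches prod.
        sys.exit(f"pipeline failed at stage: {name}")
print("commit delivered to production, no forms, no checklists")
```

In a real system each stage would live in your CI tool's configuration, but the principle is the same: a failure at any gate stops the release automatically, and a pass flows straight through.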
Build trust
Automation is great, but it only works if people trust it. Many people have been burned so often by botched deployments that, when you take away manual processes that ostensibly work and replace them with automated processes that are unknown, it’s difficult to gain buy-in. However, over time, with a good CI/CD pipeline that’s fully automated, repeatable, and auditable, people will start to trust automation, and you can start removing some of the obstacles that prevent faster software delivery.
But remember: Even if you automate your deployment process, if you don’t get rid of the manual review gates and approval checklists, you won’t be able to fully automate the end-to-end process. You will encounter wait time (i.e., waste) while people rubber-stamp the process. It all comes down to trust. If you can show that automation delivers secure, high-quality software, people will trust it, and the manual approval processes and checklists can be shelved for good.
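As one example of how a manual gate can be replaced rather than simply deleted, here's a sketch of an automated policy check that enforces the criteria a human approver would look for and records an auditable decision on every run. The thresholds and report fields are illustrative assumptions:

```python
# A hypothetical automated approval gate: codify the criteria a human
# approver would check, fail fast if any is unmet, and write an audit
# record either way. Thresholds and report fields are illustrative.
import json
import sys
from datetime import datetime, timezone

def approve(report: dict) -> list[str]:
    """Return the list of policy violations (empty means approved)."""
    failures = []
    if report["test_coverage"] < 0.80:
        failures.append("test coverage below 80%")
    if report["critical_vulns"] > 0:
        failures.append("critical vulnerabilities present")
    if not report["change_ticket"]:
        failures.append("no change ticket linked")
    return failures

report = {"test_coverage": 0.86, "critical_vulns": 0,
          "change_ticket": "CHG-1234"}
failures = approve(report)

# Auditable: every decision is recorded with a timestamp, so trust can be
# verified after the fact instead of enforced by a rubber stamp up front.
with open("approval-audit.log", "a") as log:
    log.write(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "approved": not failures,
        "failures": failures,
    }) + "\n")

if failures:
    sys.exit("deployment blocked: " + "; ".join(failures))
print("automated approval granted")
```

A gate like this runs in seconds, applies the same standard every time, and leaves a trail that auditors can inspect, which is exactly what earns the trust that lets the manual checklist retire.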
A golden opportunity
When you embark on your cloud journey, you’re really moving to a greenfield virtual data center. There aren’t processes in place, so you have a golden opportunity to create them from the ground up, optimized for cloud. You won’t get a better chance to get your processes “right.” Take advantage of it; don’t bring legacy processes to the cloud. The “what” of your processes is still likely valid, based on your company’s needs. It’s the “how” that you have a chance to transform and streamline. Don’t pass it up.
If you like this blog, you can check out my book, Accelerating Cloud Adoption: Optimizing the Enterprise for Speed and Agility,¹ for a more thorough discussion of how redesigning processes is essential to cloud deployment.
1 Michael Kavis, Accelerating Cloud Adoption: Optimizing the Enterprise for Speed and Agility (O'Reilly Media, December 15, 2020).
Mike is the chief cloud architect in Deloitte Consulting LLP’s Cloud practice, responsible for helping clients implement cloud strategy and architecture to drive digital transformation. Beyond his technology experience, Mike brings an insightful understanding of how to address the organizational change, process improvement, and talent management challenges associated with digital transformation. Mike brings more than 30 years of experience in software development and architecture to his role. Most recently, he was a principal architect with Cloud Technology Partners. A pioneer in cloud computing, Mike led a team that built the world’s first high-speed transaction network in Amazon’s public cloud and won the 2010 AWS Global Startup Challenge. He has written extensively about cloud technologies and the Internet of Things. Mike earned a bachelor’s degree from Rochester Institute of Technology and master’s degrees in business and information systems from Colorado Technical University.