Awakening architecture with cloud innovation core
While more and more businesses understand the value that the cloud brings, achieving the sweet spot of optimally balanced cloud operation remains elusive for most organizations, especially where there is an expectation that simply lifting and shifting workloads to the cloud will deliver business benefits. A cloud strategy that defers the modernization of application workloads as part of a transformation will inevitably underplay, or even marginalize, the opportunity to innovate. The ability to truly innovate through the application of technology has become a critical business enabler.
The most successful cloud implementations are those that offer the now well-recognized potential of reduced cost and increased efficiency while providing a solid innovation platform, allowing businesses to transform in response to market change, whether adverse or opportunistic. The current pandemic highlights the impact of technology enablement in the face of changing market conditions and further supports the case for business agility, underpinned by innovative technology solutions.
This business imperative brings into focus how important it is for organizations to consider the critical design goal of enabling innovation in their cloud strategy, what a foundation for innovation looks like, and what the potential pitfalls along this journey are, a consideration that should be weighed against the diminishing financial benefit of a direct lift-and-shift strategy. On this point, organizations should look to implement cloud native architecture wherever possible across all tiers of the IT landscape (infrastructure, cross-cutting, or application-focused), whether in the context of greenfield development or legacy workload migration.
In this article, we explore how cloud native architecture, focused on establishing core innovation capability across three critical pillars of platform foundations, enterprise data lake, and modern compute services, can help you realize the increased business value of cloud while avoiding the common pitfalls of adoption, concluding that:
For the purposes of this article, a truly innovative cloud architecture should be thought of as an optimally designed and implemented composition of cloud services that meets or exceeds the desired functional, quality, operational, and business goals.
A typical enterprise IT landscape is a diverse mix of the old and the new, spanning legacy and modern technology platforms. It is this complexity that requires a hybrid approach, combining legacy systems with cloud native services (and points in between), in order to achieve the sweet spot of optimal benefit, or as close to it as can realistically be achieved. A simple decision framework, which should be part of any well-defined cloud reference architecture, can be a valuable tool here. It will help strike a good balance against the constraints presented (people, process, or technology) when introducing any type of architectural change.
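As an illustration only, the sketch below shows how such a decision framework might be encoded as a simple scoring function. The workload attributes, thresholds, and disposition labels (rehost, replatform, refactor, retire) are assumptions for the example rather than a prescribed model.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    business_criticality: int   # 1 (low) to 5 (high)
    change_frequency: int       # releases per year
    technical_debt: int         # 1 (low) to 5 (high)
    end_of_life: bool           # is vendor support ending?

def disposition(w: Workload) -> str:
    """Return a suggested migration disposition for a workload (illustrative rules only)."""
    if w.end_of_life and w.business_criticality <= 2:
        return "retire"
    # Frequently changing, high-debt workloads benefit most from a cloud native rebuild.
    if w.change_frequency >= 12 and w.technical_debt >= 3:
        return "refactor (cloud native)"
    # Stable but critical workloads: move with modest optimization.
    if w.business_criticality >= 4:
        return "replatform"
    return "rehost (lift and shift)"

print(disposition(Workload("order-api", 5, 24, 4, False)))  # refactor (cloud native)
```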
Nevertheless, a cloud native, modernized architecture can help the organization reap the benefits of the inherent efficiencies of well-orchestrated, appropriately scoped services executing in an optimized computing environment. Such an architecture also provides platform agility, opening up an easier path to innovation.
“Innovation core” is a technology perspective that has its roots in traditional modes of enterprise IT realigned to the cloud native paradigm through a set of unified cloud native architecture principles. It introduces an architectural paradigm that organizations can adopt to cover key facets of cloud native adoption and better understand potential approaches toward attaining organizational goals (figure 1).
Simply stated, the value proposition here is that getting the core services (referenced in figure 1) deployed successfully and in a shared model will unlock the desired benefits for the majority of innovation opportunities.
We have seen this approach used to great effect at some of the largest global organizations. For example, a large US retail organization needed a demand forecast solution to mitigate the risk of primary supply chain system outage.1 The benefits were significantly lowered total cost of ownership and a drastically shortened time to recovery—from 72 hours to a matter of minutes.
We believe that a well-developed innovation core should be recognizable based on several fundamental characteristics.
So, what actually constitutes an innovation core? Figure 2 depicts an example of how an organization’s innovation core might be composed in tangible form. While not prescriptive, this model is a good representation of the artifacts we might expect to see, noting that cloud native principles are a common thread running through each artifact.
Cloud foundations speaks largely to the tactical, cross-cutting concerns of the enterprise cloud capability, split across key domains (figure 3). While it is critical that these static capabilities are robustly implemented, it is equally important that they are bound by an operating model that provides capabilities related to transformation, governance, and engineering (figure 4). A practical cloud operating model is, of course, an essential part of cloud adoption but is beyond the scope of this article. Concepts such as the cloud business office for business-related concerns and the cloud center of excellence for architecture- and technology-related concerns should also be considered for adoption.
The services depicted in figure 3 span a vast array of options, some examples of which are depicted in figure 5.
Establishing the operational framework can be the most daunting task for organizations, as the touchpoints between people, process, and technology frequently converge within cloud foundations, given that this is typically a shared domain. In our experience, getting the governance aspects right is critical to success. We recommend establishing a cloud business office for strategic guidance, cost control, and business value assurance, and a center of excellence for architectural quality assurance, cloud service management, and technology innovation; this is a significant factor in mitigating the pitfalls (more on this in the “Avoiding the pitfalls” section).
The enterprise data warehouse approach has been a core part of many data strategies since the evolution of online transactional processing and online analytical processing systems and is just as valid, at least conceptually speaking, in today’s cloud-centric landscape. However, the exponential explosion of data creation and subsequent need for timely data processing can prove challenging for traditional, monolithic database products and toolsets. Therefore, the advancement of this critical capability is intrinsically tied to and enabled by cloud native characteristics of instantly variable (and theoretically limitless) scale and capacity.
As a result, siloed data stores are increasingly being subsumed into (or replaced by) a wider enterprise data architecture that offers improved capability for at-scale data storage and processing through techniques of distributed architecture and parallel processing. Hadoop is the widely recognized origin of this approach and still provides the underpinnings of what is now commonly termed the data lake.
The creation of the enterprise data lake achieves several benefits for the business by removing the constraints of siloed data warehouses, back-office systems, and line-of-business systems, thereby providing opportunities for the holistic mining of data for greater insight and business value, most importantly at the massive scale required to both store and process data within the demands of business need. For example, a financial IT enterprise needed to integrate data that was spread across a centralized legacy data warehouse, a back-office system of record, and numerous satellite data systems. This modernization effort was required because the existing approach of batch consolidation and subsequent query was hugely error-prone and lacked timeliness of data insight. The enterprise introduced core innovation data lake capability based on distributed file system services (Hadoop) in the cloud, eradicating several data silos and enabling the benefits of a big data approach, such as parallelized query. This effort lowered the total cost of ownership and operational complexity and shortened the time to insight for the various lines of business.
It should, therefore, be of no surprise that the data lake paradigm fits well with cloud native characteristics of hyperscale storage and horizontally scalable processing. It can also economize processing cycles using automated scaling and ephemeral compute nodes.
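To make the pattern concrete, the sketch below shows a hedged example of a horizontally scalable query over data held in cloud object storage, using PySpark as one representative engine. The bucket, paths, and column names are hypothetical, and a managed cloud service would typically supply the cluster and connector configuration.

```python
from pyspark.sql import SparkSession, functions as F

# Start a (cluster-backed) Spark session; in a managed cloud service the
# underlying compute nodes can be ephemeral and scaled automatically.
spark = SparkSession.builder.appName("enterprise-data-lake-demo").getOrCreate()

# Read partitioned Parquet files directly from object storage (hypothetical bucket).
orders = spark.read.parquet("s3a://example-data-lake/curated/orders/")

# A query that is parallelized across the cluster rather than run on a single database host.
daily_demand = (
    orders
    .where(F.col("order_date") >= "2023-01-01")
    .groupBy("order_date", "region")
    .agg(
        F.sum("quantity").alias("units"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

# Write the derived insight back to the lake for downstream consumers.
daily_demand.write.mode("overwrite").parquet("s3a://example-data-lake/insights/daily_demand/")
```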
Accordingly, cloud service providers’ toolsets are optimized to process “big data” in an expeditious manner using technologies that are able to hyperscale4 and parallelize data processing cycles. All the major cloud service providers now offer various options for data lake implementation that generally include an ecosystem of tools (figure 7) that allow:
The modern application platform is concerned with the delivery of the compute capabilities of the innovation core (arguably the most recognizable shift of the past decade), which has had an immense impact on the application development world through the evolution of virtualization, containerized compute, and serverless architecture.
This recognition often manifests in IT strategy as a desire to lift and shift from, or stretch across, on-premises to public cloud in order to mitigate “big bang” risk. Here we suggest that while a lift-and-shift strategy is appropriate for certain workloads and circumstances, it must be considered part of a cloud native strategy. If not, we have seen a tendency to defer innovation benefit in favor of the diminishing return on investment of the lift-and-shift approach; “wait to innovate” is not an option in today’s market.5
It is worth noting that the fledgling cloud computing offerings from the current top two market share vendors (AWS and Microsoft) came to life as abstracted compute services that offered ephemeral compute and persistent data storage tier services with no recognizable “virtual machine” offerings, a fact that perhaps underscores the original intention and innovative thinking that pervades cloud computing as we know it today.
It is sometimes too easy to immediately equate modern application platforms with the ubiquitous Docker technology. However, we suggest that a modern application platform/containerization capability is better interpreted as any service that offers the aforementioned benefits of compute efficiency, scale, and resiliency, regardless of the underpinning technology. In other words, a “containerized” service does not necessarily equate to a “Dockerized” service.
This is an important consideration for organizations that have not yet explored modern application platforms without an overt reliance on Docker and Kubernetes (such as workflow, low-code/no-code, logic, and serverless services). The dependency on these underlying technologies will continue to become more abstracted over time (another benefit of innovation core adoption) as cloud service providers look to simplify the compute stack and its underlying operation. This is exemplified by the proliferation of low-code systems, highly integrated and visual workflow-driven productivity suites, and plug-and-play service composition (figure 8).
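As a minimal sketch of how compute can be consumed without managing containers or virtual machines, the function below follows the event-handler shape used by serverless platforms such as AWS Lambda. The event fields and the pricing lookup are illustrative assumptions, not a specific provider’s API.

```python
import json

# Hypothetical in-memory lookup standing in for a managed data service.
UNIT_PRICES = {"sku-001": 19.99, "sku-002": 4.50}

def handler(event, context=None):
    """Entry point invoked by the platform for each incoming event.

    The platform, not the application team, handles provisioning, scaling
    (including scale to zero), and per-invocation billing.
    """
    # Accept either a raw event dict or an API-style event with a JSON body.
    order = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
    total = sum(UNIT_PRICES.get(line["sku"], 0.0) * line["qty"] for line in order["lines"])
    return {"statusCode": 200, "body": json.dumps({"order_total": round(total, 2)})}

# Local invocation for testing; in the cloud the platform supplies the event.
if __name__ == "__main__":
    sample = {"lines": [{"sku": "sku-001", "qty": 2}, {"sku": "sku-002", "qty": 3}]}
    print(handler(sample))
```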
Inevitably, the more complex the organization, the higher the chances that cloud adoption and transformation will fail during the journey of maturing innovation capability. What follows is a selection of recurring pitfall themes that we believe can be mitigated with the practical application of innovation core thinking.
Pitfall: Lines of business turn down shared services due to negative perceptions of governed IT processes as bureaucratic or too costly. This triggers a fallback to legacy system usage (building further technical debt) or attempts to circumvent controls by negotiating directly with SaaS providers without IT collaboration. Doing so sometimes ends in noncompliant implementations or, worse still, exposes the organization to increased technical risk.
“Partner, not police” thinking, offered through a reusable and agile service catalog, gives lines of business broader optionality, quicker time to solution, reduced cost, and better risk management. Structuring agile and highly interoperable foundational services to cover cross-cutting concerns in a cloud native fashion removes barriers to adoption, so that critical control requirements, such as identity and access management and data governance, can be rapidly implemented.
Pitfall: Programs stumble or fail entirely due to the misunderstanding of dependencies and/or constraints across multiple streams of innovation delivery.
These misunderstandings or miscommunications can stretch across people, processes, and technology. For example, we have seen project groups spend many iterations on a “perfect” design only to find that they have selected services that are not yet approved or permitted by information security organizations, sometimes due to known issues or unknown technical impact.
Having a set of coherent services that are pre-vetted and approved within defined usage scenarios, sometimes referred to as “guardrails,” will steer solution design groups toward better architecture decisions and promote efficient use of cloud native design patterns with consistent and predictable operational characteristics.
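A guardrail can be as simple as an automated check of a proposed design against the approved catalog before anything is provisioned. The sketch below is a hypothetical, self-contained example; real implementations would typically use the policy-as-code tooling of the chosen cloud provider, and the catalog entries and rules shown are assumptions.

```python
# Hypothetical approved-service catalog: service name -> permitted usage constraints.
APPROVED_SERVICES = {
    "object-storage": {"encryption_required": True, "public_access": False},
    "managed-kubernetes": {"encryption_required": True, "public_access": False},
    "serverless-functions": {"encryption_required": False, "public_access": True},
}

def check_design(components: list) -> list:
    """Return a list of guardrail violations for a proposed solution design."""
    violations = []
    for component in components:
        rules = APPROVED_SERVICES.get(component["service"])
        if rules is None:
            violations.append(f"{component['service']}: not in the approved service catalog")
            continue
        if rules["encryption_required"] and not component.get("encrypted", False):
            violations.append(f"{component['service']}: encryption at rest is required")
        if component.get("public", False) and not rules["public_access"]:
            violations.append(f"{component['service']}: public exposure is not permitted")
    return violations

# A design group submits its intended components and gets immediate feedback.
design = [
    {"service": "object-storage", "encrypted": False, "public": True},
    {"service": "graph-database"},
]
for violation in check_design(design):
    print(violation)
```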
Pitfall: Solution groups expend redundant effort due to siloed solution design and inadequate collaboration across streams, resulting in multiple disparate solutions that implement identical features and functionality, and in unnecessary technology proliferation (i.e., technical debt).
The world of cloud native brings with it huge solution opportunity; there are many ways to solve the same problem with cloud services, and this choice must be controlled if the cloud ecosystem is not to become a technology soup. Diligent selection of fit-for-purpose, reusable solution components that are aligned to enterprise IT principles and architectural quality goals allows the cloud ecosystem to reduce technology proliferation, lower operational complexity and cost, and reduce operational risk.
Pitfall: Platform engineering team skill sets are not well aligned with new paradigms of cloud operations, leading to suboptimal cloud infrastructure implementation based on legacy patterns and practices.
Thankfully for IT professionals, the expansion of cloud infrastructure is still recognizable from the viewpoint of fundamental protocols and technological underpinnings. The Open Systems Interconnection (OSI) model, developed in the late 1970s, is as relevant today as it ever was. Having said that, there are certain infrastructure domains that demand some fresh thinking if the organization is going to enable a cloud native approach and not constrain itself by attempting to migrate ingrained and often unsuitable data center patterns and processes into the cloud. This shift in perspective demands that organizations look at more appropriate, cloud-aligned ways to manage risk, especially with regard to information security control implementation.
Pitfall: Platform engineering teams cannot fully operationalize applications into a production environment as they are blocked by information security, data privacy, or risk management business lines.
Laying your cloud foundations correctly is critical to future success. While it is very easy to deploy isolated proofs of concept in a public cloud, this is a far cry from realizing enterprise, mission-critical capability, especially when sensitive data is involved and the confidentiality, integrity, and availability of data are paramount. We often see organizations with a healthy proof-of-concept portfolio struggle when it comes to transitioning these workloads into an operational mode. The innovation core approach demands that core services are fully approved (governed) by all stakeholders and are of the required architectural quality for current and future operation, easing the path from proof of concept all the way to production.
Pitfall: Solution operational costs in the cloud are higher than anticipated.
Dollar for dollar, we consistently see very little financial benefit in lift-and-shift strategies that favor virtual machine infrastructure buildout over modernization. In fact, it’s common to see an increase in the total cost of ownership when factoring in migration costs. Exploiting cloud native services that are scalable on demand and ephemeral in nature gives vastly increased cost-savings potential and fine-grained service composition (not a strong suit of virtual machines), translating into more efficient resource utilization for cloud native approaches.
It’s essential to have a rigorous approach to planning out your service landscape from a financial perspective. The innovation core enables this by leveraging the analysis toolsets available, such as tagging, anomaly checks, and automatic decommissioning, as part of the service definition landscape. Off-the-shelf, pre-costed services and solutions can be categorized by price, giving a huge advantage to the financial planning cycles of solution implementation.
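As one illustration of the kind of analysis that tagging enables, the sketch below flags resources whose current spend deviates sharply from their recent baseline and marks untagged resources as decommissioning candidates. The data shape, tags, and thresholds are assumptions for the example; in practice the records would come from the cloud provider’s billing export.

```python
from statistics import mean

# Hypothetical daily cost records exported from a cloud billing feed.
costs = [
    {"resource": "vm-batch-01", "owner_tag": "finance", "history": [40, 42, 41], "today": 44},
    {"resource": "gpu-train-07", "owner_tag": "data-science", "history": [90, 95, 88], "today": 310},
    {"resource": "vm-orphaned-3", "owner_tag": None, "history": [12, 12, 12], "today": 12},
]

ANOMALY_FACTOR = 2.0  # flag when today's spend exceeds twice the recent average

for record in costs:
    baseline = mean(record["history"])
    if record["owner_tag"] is None:
        # No owner tag: nobody is accountable for the spend, so queue for review/decommission.
        print(f"{record['resource']}: untagged, candidate for automatic decommissioning")
    elif record["today"] > ANOMALY_FACTOR * baseline:
        print(f"{record['resource']}: cost anomaly ({record['today']:.0f} vs baseline {baseline:.0f})")
```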
Being cloud native is a business imperative regardless of your organization’s mix of legacy and modern technology. Deloitte’s 2018 Global CIO Survey found that only 54% of CIO respondents thought their organization’s existing technology stacks could support current and future business needs.
That’s not to say that organizations should be rushing to “cloud natively” modernize every facet of organizational IT. Nonetheless, sowing the seeds of cloud native thinking, judiciously selecting trailblazing application modernization, and shifting away from virtualization at the virtual machine tier in favor of “born in the cloud” (awakened) architecture will reap critical strategic benefits. Starting with an innovation core is a solid way forward.
An innovation-focused cloud strategy, enabled by a cloud native approach, unlocks the true benefits of cloud, helping organizations:
The authors would like to thank Jay Mavi, Hitesh Jhamb, and Sigrunn Sky of Deloitte Consulting LLP for their contributions to this article.
Cover art by: Daniel Hertzberg