Reengineering technology: Building new IT delivery models from the top down and bottom up Tech Trends 2018

06 December 2017

With business strategies inextricably linked to technology, organizations are rethinking how they envision and deliver technology solutions: transforming IT departments into engines driving business growth, with responsibilities that span back-office systems, operations, and even product offerings. 

For nine years, Deloitte Consulting LLP’s annual Tech Trends report has chronicled the steps that CIOs and their IT organizations have taken to harness disruptive technology forces such as cloud, mobile, and analytics. Throughout, IT has adapted to new processes, expectations, and opportunities. Likewise, it has worked more closely with the business to develop increasingly tech-centric strategies.

Yet as growing numbers of CIOs and enterprise leaders are realizing, adapting incrementally to market shifts and disruptive innovation is no longer enough. At a time when blockchain, cognitive, and digital reality technologies are poised to redefine business models and processes, IT’s traditional reactive, siloed ways of working cannot support the rapid-fire change that drives business today. With technology’s remit expanding beyond the back office and into the product-management and customer-facing realms, the problem is becoming more pressing.

This evolving dynamic carries some risk for CIOs. While they enjoy unprecedented opportunities to impact the business and the greater enterprise, these opportunities go hand-in-hand with growing expectations—and the inevitable challenges that CIOs encounter in meeting these expectations. In a 2016–17 Deloitte survey of executives on the topic of IT leadership transitions, 74 percent of respondents said that CIO transitions happen when there is general dissatisfaction among business stakeholders with the support CIOs provide. Not surprisingly, 72 percent of those surveyed suggested that a CIO’s failure to adapt to a significant change in corporate strategy may also lead to their transition out of the company.1

For years, IT has faithfully helped reengineer the business, yet few shops have reengineered themselves with the same vision, discipline, and rigor. That’s about to change: Over the next 18 to 24 months, we will likely see CIOs begin reengineering not only their IT shops but, more broadly, their approaches to technology. The goal of these efforts will be to transform their technology ecosystems from collections of working parts into high-performance engines that deliver speed, impact, and value.

Reengineering approaches may vary, but expect to see many CIOs deploy a two-pronged strategy. From the bottom up, they can focus on creating an IT environment in which infrastructure is scalable and dynamic and architecture is open and extendable. Importantly, automation (driven by machine learning) will likely be pervasive, which can accelerate the processes of standing up, building on top of, and running the IT stack. These principles are baked into infrastructure and applications, thus becoming elemental to all aspects of the operation. From the top down, CIOs and their teams have an opportunity to transform how the shop budgets, organizes, staffs, and delivers services.

The reengineering technology trend is not an exercise in retooling. Rather, it is about challenging every assumption, designing for better outcomes, and, ultimately, creating an alternate IT delivery model for the future.

Two-pronged reengineering technology approach

Enough with the tasks, already

In their best-selling book Reengineering the Corporation, Michael Hammer and James Champy defined business processes as an entire group of activities that, when effectively brought together, create a result customers value. They went on to argue that by focusing on processes rather than on individual tasks—which, by themselves, accomplish nothing for the customer—companies can achieve desired outcomes more efficiently. “The difference between process and task is the difference between whole and part, between ends and means,” Hammer and Champy wrote.2

Today, many IT organizations take the opposite approach. As IT scaled continuously over the last three decades, it became excruciatingly task-focused, not just in applications and infrastructure but in networks, storage, and administration. Today, IT talent with highly specialized skillsets may work almost exclusively within a single functional area. Because they share few common tools with their highly specialized counterparts in other functional areas, low-bandwidth/high-latency human interfaces proliferate among network engineers, system administrators, and security analysts.

Until recently, efforts to transform IT typically focused on adopting new technologies, outsourcing, or offshoring. Few emphasized the kind of systematic, process-focused reengineering that Hammer and Champy advocated. Meanwhile, consumerization of technology, the public’s enduring fascination with young technology companies, and the participation of some IT functions in greenfield projects have put pressure on CIOs to reengineer. Yet approaches that work well for start-ups and new company spinoffs might be unrealistic for larger companies or agencies. These organizations can tackle reengineering challenges by broadening the frame to include open source, niche platforms, libraries, languages and tools, and by creating the flexibility needed to scale.

Reengineering from the bottom up

One dimension of reengineering focuses on modernizing underlying infrastructure and architecture. To jump-start bottom-up initiatives, forward-thinking companies can focus their planning on three major areas of opportunity:

  • Automation: Automation is often the primary goal of companies’ reengineering efforts. There are automation opportunities throughout the IT life cycle. These include, among others, automated provisioning, testing, building, deployment, and operation of applications as well as large-scale autonomic platforms that are self-monitoring, self-learning, and self-healing. Almost all traditional IT operations can be candidates for automation, including anything that is workflow-driven, repetitive, or policy-based, or that requires reconciliation between systems. Approaches have different names: robotic process automation, cognitive automation, intelligent automation, and even cognitive agents. However, their underlying stories are similar: applying new technologies to automate tasks and help workers handle increasingly complex workloads.3


    As part of their automation efforts, some companies are deploying autonomic platforms that layer in the ability to dynamically manage resources while integrating and orchestrating more of the end-to-end activities required to build and run IT solutions. When discussing the concept of autonomics, we are really talking about automation plus robotics: taking automation to the next level by grounding it in machine learning. Autonomic platforms build upon two important trends in IT: software-defined everything’s climb up the tech stack, and the overhaul of IT operating and delivery models under the DevOps movement. With more of IT becoming expressible as code—from underlying infrastructure to IT department tasks—organizations now have a chance to apply new architecture patterns and disciplines. In doing so, they can remove dependencies between business outcomes and underlying solutions, and redeploy IT talent from rote, low-value work to higher-order capabilities. Organizations also have an opportunity to improve productivity. As one oft-repeated adage reminds us, “The efficiency of an IT process is inversely correlated to the number of unique humans it takes to accomplish it.”

    Another opportunity lies in self-service automation, an important concept popularized by some cloud vendors. Through a web-based portal, users can access IT resources from a catalog of standardized service options. The automated system controls the provisioning process and enforces role-based access, approvals, and policy-based controls. This can help mitigate risk and accelerate the marshaling of resources.

  • Technical debt: Technical debt doesn’t happen just because of poor code quality or shoddy design. Often it’s the result of decisions made over time—actions individually justified by their immediate ROI or the needs of a project. Organizations that regularly repay technical debt by consolidating and revising software as needed will likely be better positioned to support investments in innovation. Companies can also accrue technical debt in physical infrastructure and applications, and maintaining legacy systems carries certain costs over an extended period of time. Re-platforming apps (via bare metal or cloud) can help offset these costs and accelerate speed-to-market and speed-to-service.

    As with financial debt, organizations that don’t “pay it back” may end up allocating the bulk of their budgets to interest (that is, system maintenance), leaving little for new opportunities. Consider taking the following two-step approach to addressing technical debt:
    • Quantify it: Reversal starts with visibility—a baseline of lurking quality and architectural issues. Develop simple, compelling ways to describe the potential impact of these issues so that those who determine IT spending can understand them. Your IT organization should apply a technical debt metric not only to planning and portfolio management but to project delivery as well.
    • Manage it: Determine what tools and systems you will need over the next year or two to achieve your strategic goals. This can help you to identify the parts of your portfolio to address. Also, when it comes to each of your platforms, don’t be afraid to jettison certain parts. Your goal should be to reduce technical debt, not just monitor it.
  • Modernized infrastructure: There is a flexible architecture model whose demonstrated efficiency and effectiveness in start-up IT environments suggest that its broader adoption in the marketplace may be inevitable. In this cloud-first model—and in the leading practices emerging around it—platforms are virtualized, containerized, and treated like malleable, reusable resources, with workloads remaining independent from the operating environment. Systems are loosely coupled and embedded with policies, controls, and automation. Likewise, on-premises, private cloud, or public cloud capabilities can be employed dynamically to deliver any given workload at an effective price and performance point. Taken together, these elements can make it possible to move broadly from managing instances to managing outcomes.

    It’s not difficult to recognize a causal relationship between architectural agility and any number of potential strategic and operational benefits. For example, this architecture provides the foundation needed to support rapid development and deployment of flexible solutions that, in turn, enable innovation and growth. In a competitive landscape being redrawn continuously by technology disruption, time-to-market can be a competitive differentiator.4
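The self-service automation described above—a catalog of standardized options guarded by role-based access and policy controls—can be sketched in a few lines. This is a minimal, illustrative sketch only; the catalog entries, roles, and quota policy are hypothetical assumptions, and a real platform would call a provisioning API where this example merely reports a result.

```python
# Illustrative sketch of a self-service provisioning catalog: users request
# resources from standardized options, and the system enforces role-based
# access and a simple policy-based control before "provisioning."
# All names, specs, and policies here are hypothetical.

CATALOG = {
    "small-vm": {"cpus": 2,  "ram_gb": 8,  "roles": {"developer", "qa", "admin"}},
    "large-vm": {"cpus": 16, "ram_gb": 64, "roles": {"admin"}},
    "test-db":  {"cpus": 4,  "ram_gb": 16, "roles": {"developer", "qa", "admin"}},
}

MAX_CPUS_PER_USER = 24  # example policy-based control


def request_resource(user_role: str, item: str, current_cpus: int) -> str:
    """Validate a catalog request against role and policy, then 'provision'."""
    if item not in CATALOG:
        return f"denied: '{item}' is not in the service catalog"
    spec = CATALOG[item]
    if user_role not in spec["roles"]:
        return f"denied: role '{user_role}' may not provision '{item}'"
    if current_cpus + spec["cpus"] > MAX_CPUS_PER_USER:
        return "denied: request exceeds per-user CPU quota"
    # A real platform would invoke the provisioning API here.
    return f"provisioned: {item} ({spec['cpus']} CPUs, {spec['ram_gb']} GB RAM)"


print(request_resource("developer", "small-vm", current_cpus=4))
print(request_resource("developer", "large-vm", current_cpus=4))
```

Because approvals and quotas are enforced in code rather than by ticket queues, requests that pass policy can be fulfilled immediately—the "mitigate risk and accelerate marshaling of resources" effect described above.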

Reengineering from the top down

Though CIOs’ influence and prestige have grown markedly over the last decade, the primary source of their credibility continues to lie in maintaining efficient, reliable IT operations. This is, by any measure, a full-time job. Yet along with that responsibility, they are expected to harness emerging technology forces. They stay ahead of the technology curve by absorbing the changes that leading-edge tools introduce to operational, organizational, and talent models. Finally, an ever-growing cadre of enterprise leaders with “C” in their titles—think chief digital officer, chief data officer, or chief algorithm officer—demand that CIOs and their teams provide: 1) new products and services to drive revenue growth, 2) new ways to acquire and develop talent, and 3) a means to vet and prototype what the company wants to be in the future.

As growing numbers of overextended CIOs are realizing, the traditional operating model that IT has used to execute its mission is no longer up to the job. Technological advances are creating entirely new ways of getting work done that are, in some cases, upending how we think about people and machines complementing one another. Moreover, the idea that within an organization there are special types of people who understand technology and others who understand business is no longer valid. Technology now lies at the core of the business, which is driving enterprise talent from all operational areas to develop tech fluency.5

The time has come to build a new operating model. As you explore opportunities to reengineer your IT shop from the top down, consider the following areas of opportunity:

  • Reorganizing teams and breaking down silos: In many IT organizations, workers are organized in silos by function or skillset. For example, network engineering is distinct from QA, which is different from system administration. In this all-too-familiar construct, each skill group contributes its own expertise to different project phases. This can result in projects becoming rigidly sequential and trapped in one speed (slow). It also encourages “over the wall” engineering, a situation in which team members work locally on immediate tasks without knowing about downstream tasks, teams, or the ultimate objectives of the initiative.

    Transforming this model begins by breaking down skillset silos and reorganizing IT workers into multi-skill, results-oriented teams. These teams focus not on a specific development step—say, early-stage design or requirements—but more holistically on delivering desired outcomes. A next step might focus on erasing the boundaries between macro IT domains such as applications and infrastructure. Ask yourself: Are there opportunities to share resources and talent? For new capabilities, can you create greenfield teams that allow talent to rotate in or out as needed? Can some teams have budgets that are committed rather than flexible? The same goes for the silos within infrastructure: storage, networks, system administration, and security. What skillsets and processes can be shared across these teams?6

    A final note on delivery models: Much of the hype surrounding Agile and DevOps is merited. Reorganizing teams will likely be wasted effort if they aren’t allowed to develop and deliver products in a more effective way. If you are currently testing the Agile-DevOps waters, it’s time to wade in. Be like the explorer who burned his boat so that he couldn’t return to his familiar life.

  • Budgeting for the big picture: As functional silos disappear, the demarcation line between applications and infrastructure fades, and processes replace tasks, IT shops may have a prime opportunity to liberate their budgets. Many older IT shops have a time-honored budget planning process that goes something like this: Business leaders make a list of “wants” and categorize them by priority and cost. These projects typically absorb most of IT’s discretionary budget, with care and maintenance claiming the rest. This basic budget blueprint will be good for a year, until the planning process begins again.

    We are beginning to see a new budgeting model emerge in which project goals reorient toward achieving a desired outcome. For example, if “customer experience” becomes an area of focus, IT could allocate funds to e-commerce or mobile products or capabilities. Specific features remain undetermined, which gives strategists and developers more leeway to focus effort and budgetary resources on potentially valuable opportunities that support major strategic goals. Standing funding for rolling priorities offers greater flexibility and responsiveness. It also aligns technology spend with measurable, attributable outcomes.

    When revising your budgeting priorities, keep in mind that some capital expenses will become operating expenses as you move to the cloud. Also, keep an eye out for opportunities to replace longstanding procurement policies with outcome-based partner and vendor arrangements or vehicles for co-investment.

  • Managing your portfolio while embracing ambiguity: As IT budgets focus less on specifics and more on broad goals, it may become harder to calculate the internal rate of return (IRR) and return on investment (ROI) of initiatives. Consider a cloud migration. During planning, CIOs can calculate project costs and net savings; moreover, they can be held accountable for these calculations. But if an initiative involves deploying sensors throughout a factory to provide greater operational visibility, things may get tricky: There may be good outcomes, but it’s difficult to project with any accuracy what they might be. Increasingly, CIOs are becoming more deliberate about the way they structure and manage their project portfolios by deploying a 70/20/10 allocation: Seventy percent of projects focus on core systems, 20 percent focus on adjacencies (such as the “live factory” example above), and 10 percent focus on emerging or unproven technologies that may or may not deliver value in the short term. Projects at the core typically offer greater surety of desired outcomes. But the further projects get away from the core, the less concrete their returns become. As CIOs move into more fluid budgeting cycles, they should recognize this ambiguity and embrace it. Effectively balancing surety with ambiguity can help them earn the right over time to explore uncertain opportunities and take more risks.

  • Guiding and inspiring: IT has a unique opportunity—and responsibility—to provide the “bigger picture” as business leaders and strategists prioritize their technology wish lists. For example, are proposed initiatives trying to solve the right problem? Are technology-driven goals attainable, given the realities of the organization’s IT ecosystem? Importantly, can proposals address larger operational and strategic goals? IT can play two roles in the planning process. One is that of shaman who inspires others with the possibilities ahead. The other role is that of the sherpa, who guides explorers to their desired destination using only the tools currently available.
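The 70/20/10 portfolio allocation described above lends itself to a simple drift check. The following is a hedged sketch under stated assumptions: the category names, tolerance, and project spend figures are illustrative, not a prescribed method.

```python
# Sketch of a 70/20/10 portfolio balance check: 70% core systems,
# 20% adjacencies, 10% emerging/unproven technologies. Targets,
# tolerance, and the sample projects are illustrative assumptions.

TARGETS = {"core": 0.70, "adjacent": 0.20, "emerging": 0.10}


def portfolio_mix(projects):
    """Return each category's share of total spend."""
    total = sum(spend for _, spend in projects)
    mix = {cat: 0.0 for cat in TARGETS}
    for category, spend in projects:
        mix[category] += spend / total
    return mix


def drift_report(projects, tolerance=0.05):
    """Flag categories whose share drifts more than `tolerance` from target."""
    mix = portfolio_mix(projects)
    return {cat: round(mix[cat] - TARGETS[cat], 2)
            for cat in TARGETS
            if abs(mix[cat] - TARGETS[cat]) > tolerance}


# Hypothetical portfolio (spend in $M): core-heavy, light on adjacencies.
projects = [("core", 7.5), ("core", 1.0), ("adjacent", 1.0), ("emerging", 0.5)]
print(drift_report(projects))  # categories out of balance, if any
```

A report like this makes the ambiguity explicit: core projects offer surety, while drift away from the adjacent and emerging buckets signals the organization is under-investing in the riskier bets that earn the right to explore.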

Skeptic’s corner

The term “reengineer” may give some CIOs pause. The idea of challenging assumptions and transforming systems may seem like an open invitation to dysfunction, especially as the operational realities of the existing enterprise remain. In the paragraphs that follow, we will try to correct several misconceptions that skeptical CIOs may harbor about the growing reengineering technology trend.

Misconception: Technology will always be complex and require architects and engineers to decipher it for the business.

Reality: When they are new, technologies often seem opaque, as do the possibilities they offer the enterprise. But as we have seen time and again, yesterday’s disruptive enigma quickly evolves into another entry in the tech fluency canon. Consider artificial intelligence, for example. In the beginning, it was the near-exclusive domain of the computer-savvy. Today, kids, their grandparents, and your board members use AI daily in the computer vision that dynamically focuses their smartphone cameras, and in the natural language processing engine powering their virtual personal assistants. Consistently, early adopters have a way of bringing the less technologically dexterous with them on the path to broad adoption.

Misconception: By distributing tech across the business, you lose efficiency that goes with having a centralized enterprise architecture.

Reality: We understand your point, but in fact, the process of reengineering technology can make federated architecture a viable alternative, in terms of efficiency, to traditional centralized models. For example, architectural standards and best practices for security, monitoring, and maintenance can be embedded in the policies and templates of software-defined infrastructure. When a new environment is provisioned, the architecture is built into the stack, becoming automatic and invisible. Instead of enterprise architecture being a religious argument requiring the goodwill of developers, it becomes codified in the fabric of your technology solutions. Rather than playing the thankless role of ivory-tower academic or evangelist (hoping-wishing-praying for converts), architects can focus on evolving platforms and tooling.

Misconception: Breaking down organizational silos sounds like a recipe for organizational chaos. IT functions and teams are delineated for a reason.

Reality: The issue of organizational silos boils down to one question: Should IT remain a collection of function-specific fiefdoms, or should you organize it around processes and outcomes? By focusing on and organizing around outcomes, you are not introducing disorder—you are simply reordering the IT organization so that it can partner more effectively with the business, and maximize the value it brings to the enterprise. This is particularly true with bottom-up investments focusing on standardizing platforms, automation, and delivery.

Lessons from the front lines

Sysco’s secret sauce

Sysco, a leading food marketing and distribution company, took a bold stance to reevaluate a technology transformation initiative that was well under way. Twelve of Sysco’s 72 domestic operating companies had gone live with a new ERP solution meant to standardize processes, improve operations, and protect against outdated legacy systems with looming talent shortages. The problem: Those businesses that were up and running on the new ERP solution were seeing no significant operating advantages. Worse: Even as Sysco was outspending its industry peers in technology, competitors were focusing their investments on new digital capabilities that facilitated and simplified the customer experience. Sysco’s sizable back-office implementation, on the other hand, was perceived as an obstacle by customers doing business with the company.

Sysco’s IT leadership considered an alternate approach. They reevaluated those same legacy systems with an eye on modernizing and amplifying the intellectual property and “secret sauce” embedded in decades of customized order management, inventory management, and warehouse management solutions. At the same time, they recognized the need to fundamentally transform the IT department, shifting from an org that had evolved to support large-scale packaged software configuration to one that could move with more agility to engineer new capabilities and offerings—especially customer-facing solutions.

IT leadership needed the corporate management team’s buy-in to pivot strategies and alter the current trajectory, into which the company had sunk significant time, resources, and dollars. From an architecture perspective, many of the technologies central to the new approach—cloud, application modernization platforms, microservices, and autonomics—didn’t exist or were not mature when the original ERP strategy was formed. Explaining how technology, tools, and methodologies had advanced over the past several years, the IT team made its case to the executive leadership team: modernizing the core with these tested technologies would position Sysco for the future more effectively and with greater flexibility, while costing far less than continuing to roll out the ERP solution to the remaining operating companies.

“Our legacy systems are customized specifically for what we do,” says Wayne Shurts, executive vice president and chief technology officer at Sysco. “The systems are old, but they work great. Operating companies were so happy to be back on familiar ground, even while we were modernizing the underlying technology—the hardware they run on, the language they are built on, the way we manage them.”7

To achieve these results, Shurts also convinced company leadership to completely reorganize IT operations: He wanted software product, platform, and service teams working in an Agile framework embracing DevOps rather than the traditional waterfall processes that were characteristic of Sysco’s IT organization.

“First came the why, then came the how. We are changing everything about the way we work,” Shurts says. “We are changing the technology and methodologies that we use, which requires new tools and processes. Ultimately, it means we change how we are organized.” With more than half of the IT organization having made the shift, teams are embracing new tools, techniques, and methods. Each individual team can stand up a fully functioning new application organized around the team’s product and customer experiences, with a mandate not just to innovate continually but to own both feature/function development and ongoing operational support. Plans are in place to transform the rest of IT in the year ahead.

In addition to reorganizing the internal IT team, Shurts brought in experienced third-party architects, engineers, and developers to build Sysco’s microservices capabilities and help codify the new Agile behavior. His team worked side by side with surgically placed experts, with the goal of “creating our own disciples so we could be self-sufficient.” So began a systemic effort to retool and rewire Sysco IT in order to broaden the organization’s skillset, balanced with teams of veteran employees familiar with the company’s legacy systems.

Shurts continues to evolve the IT processes to meet his team’s goal of delivering new releases daily—to bring new ideas, innovation, and help to customers every day. “Our competition and our customers expect to see things they’ve never seen before in heavy doses. If you believe that the pace of change in the world today will only accelerate, then you need to move to not only a new method but a new mind-set. My advice to other CIOs? Every shop needs to go down this path—from the top down, and the bottom up.”

Vodafone Germany develops great customer experiences

Vodafone Germany is one of the country’s leading telecom operators, offering mobile, broadband, TV, and enterprise services. In order to support its business needs and better integrate its markets, the company launched a multi-year program to modernize its infrastructure and ready its IT stack for digital. The initiative also required implementing new work processes and retraining workers to better support end-to-end customer experiences—reengineering IT to respond to the future of technology.

The first step was virtualizing the infrastructure enabling local market legacy systems. Vodafone Germany migrated from its own data centers to a cloud-dominant model, modernizing IT operations according to the evolved architecture, tools, and potential for automation. The reengineered stack drove down costs while improving resiliency; it made disaster recovery easier, facilitated scaling up to capacity, and gave Vodafone Germany the agility to position IT activities for transformation—not just net-new digital initiatives but areas requiring deep integration to the legacy core.

The organization did face challenges in the migration, including some legacy systems that didn’t fit in a virtualized infrastructure. Those systems would have required significant development costs to prepare them for migration. So, Vodafone Germany coupled the infrastructure effort with a broader modernization mission—changing legacy core applications so that they could serve as the foundation for new products, experiences, and customer engagement, or decommissioning end-of-life legacy systems. In doing so, Vodafone Germany built a new definition of its core and pushed its IT operating model to undergo a similar transformation.

“Most IT organizations are cautious about replacing legacy systems due to the risks and business disruption, but we saw it as a way to accelerate the migration,” says Vodafone Germany chief technology officer Eric Kuisch. “Aging systems presented roadblocks that made it difficult or impossible to meet even four-to-six-month timeframes for new features. Our goal was to deliver initiatives in weeks or a couple of months. We believed that modernization of technology capabilities could improve time to market while lowering cost of ownership for IT.”8

The next step in Vodafone Germany’s modernization is an IT transformation for which it will invest in network virtualization, advanced levels of automation, and making the entire IT stack digital-ready.

To accomplish so much so fast, Kuisch’s team chose a multi-modal IT model, incorporating both Agile and waterfall methodologies. They used an Agile framework for the front-end customer touchpoints and online experience, while implementing the back-end legacy migration with the more traditional waterfall methodology. The company also undertook a massive insourcing initiative, investing in training its own IT team and developing business architects who manage end-to-end service-level agreements for a service rather than for individual systems.

Vodafone Germany’s transformation will enable the company to provide end-to-end customer experiences that were not possible with its legacy systems. The results so far have been increased efficiency and significant cost savings; the infrastructure virtualization alone yielded a 30–40 percent efficiency gain. The potential around improvements to digital experience, new-feature time to market, and even new revenue streams is tougher to quantify but likely even more profound.

Beachbody’s digital reengineering workout

Since 1998, Beachbody, a provider of fitness, nutrition, and weight-loss programs, has offered customers a wide variety of instructional videos, first in VHS and then in DVD format. The company’s business model—the way it priced, packaged, and transacted—was to a large extent built around DVD sales.

Roughly three years ago, Beachbody’s leadership team recognized that people were rapidly changing the way they consumed video programming. Digital distribution technology can serve up a much bigger catalog of choices than DVDs and makes it possible for users to stream their selections directly to mobile devices, TVs, and PCs. As a result of the new technology and changes in consumer behavior, Beachbody subsequently decided to create an on-demand model supported by a digital platform.

From an architectural standpoint, Beachbody built the on-demand platform in the public cloud. And once the cloud-specific tool sets and team skills were in place, other teams began developing business products that also leverage the public cloud.

Beachbody has developed its automation capabilities during the last few years, thanks in part to tools and services available through the public cloud. For example, teams in Beachbody’s data center automated several workload and provisioning tasks that, when performed manually, required the involvement of five or more people. As Beachbody’s data center teams transitioned to the cloud, their roles became more like those of software engineers than of system administrators.

To create the on-demand model, Beachbody established a separate development team that focused exclusively on the digital platform. When the time came to integrate this team back into the IT organization, Beachbody reorganized IT operations to support the new business model. IT reoriented teams around three focal areas to provide customers a consistent view across all channels: the front end, delivering user experiences; the middle, focusing on APIs and governance; and the back end, focusing on enterprise systems.9

My take

Michael Dell, chairman and CEO
Dell Technologies

Digital transformation is not about IT—even though technology often is both the driver and the enabler for dramatic change. It is a boardroom conversation, an event driven by a CEO and a line-of-business executive: How do you fundamentally reimagine your business? How do you embed sensors, connectivity, and intelligence in products? How do you reshape customer engagement and outcomes? The wealth of data mined from the increasing number of sensors and connected nodes, advanced computing power, and improvements in connectivity, along with rapid advances in machine intelligence and neural networks, are motivating companies to truly transform. It’s an overarching priority for a company to quickly evolve into a forward-thinking enterprise. 

Digital is a massive opportunity, to be sure, and most likely to be top of the executive team’s agenda. But there are three other areas in which we’re seeing significant investment, either as stand-alone initiatives or as components of a broader digital transformation journey. We took a look at each of these to determine how we could best assist our customers in meeting their goals.  

Close to our heritage is helping IT itself transform to dramatically improve how organizations harness technology and deliver value. Companies want to use software-defined everything, to automate platforms, and to express infrastructure as code. It is not atypical these days for a company to have thousands of developers and thousands of applications but only a handful of infrastructure or operations resources. Of course, they still need physical infrastructure, but they are automating the management, optimization, and updating of that infrastructure with software. Our customers want to put their money into changing things rather than simply running them; they want to reengineer their IT stacks and organizations to be optimized for speed and results. In doing so, IT is being seen as BT—“business technology,” with priorities directly aligned to customer impact and go-to-market outcomes. As a result, IT moves from chore to core—at the heart of delivering the business strategy.

The changing nature of work is driving the next facet of transformation. Work is no longer a place but, rather, a thing you do. Companies are recognizing they must provide the right tools to their employees to make them more productive. There has been a renaissance in people understanding that the PC and other client devices are important for productivity. For example, we are seeing a rise in popularity of thinner, lighter notebooks with bigger screens, providing people with tools to do great work wherever they are located. Companies are rethinking how work could and should get done, with more intuitive and engaging experiences, as business processes are rebuilt to harness the potential of machine learning, blockchain, the Internet of Things, digital reality, and cloud-native development.  

Last but definitely not least, we are seeing an increased interest in securing networks against cyberattacks and other threats. The nature of the threats is constantly changing, while attack surfaces are growing exponentially due to embedded intelligence and the proliferation of sensors and connected nodes. Security must be woven into infrastructure and operations. Companies are bolstering their own security-operation services with augmented threat intelligence, and they are segmenting, virtualizing, and automating their networks to protect their assets.

We realize we need to be willing to change as well, and our own transformation began with a relentless focus on fulfilling these customer needs. At a time when other companies were downsizing and streamlining, Dell went big. We acquired EMC, which included VMware, and along with other technology assets—Boomi, Pivotal, RSA, SecureWorks, and Virtustream—we became Dell Technologies. We created a family of businesses to provide our customers with what they need to build a digital future for their own enterprise: approaches for hybrid cloud, software-defined data centers, converged infrastructure, platform-as-a-service, data analytics, mobility, and cybersecurity.  

Like our customers, we are using these new capabilities internally to create better products, services, and opportunities. Our own IT organization is a test bed and proof-of-concept center for the people, process, and technology evolution we need to digitally transform Dell and our customers for the future. In our application rationalization and modernization journey, we are architecting global common services such as flexible billing, global trade management, accounts receivable, and indirect taxation to deliver more functionality faster without starting from scratch each time. By breaking some components out of our monolithic ERPs, we significantly improved our time to market. We implemented Agile and DevOps across all projects, which is helping tear down silos between IT and the business. And our new application development follows a cloud-native methodology with scale-out microservices. From a people standpoint, we are also transforming the culture and how our teams work to foster creative thinking and drive faster product deployment.

If we don’t figure it out, our competitors will. The good news: We now have a culture that encourages people to experiment and take risks. I’ve always believed that the IT strategy must emanate from the company’s core strategy. This is especially important as IT is breaking out of IT, meaning that a company can’t do anything—design a product, make a product, have a service, sell something, or manufacture something—without IT. Technology affects everything, not just for giant companies but for all companies today. The time is now to reengineer the critical technology discipline, and to create a foundation to compete in the brave new digital world.

Risk implications

As we modernize technology infrastructure and operations, it is critical to build in modernized risk management strategies from the start. Given that nearly every company is now a technology company at its core, managing cyber risk is not an “IT problem” but an enterprise-wide responsibility:

  • Executives, often with the help of the CIO, should understand how entering a new market, opening a new sales channel, acquiring a new company, or partnering with a new vendor may increase attack surfaces and expose the organization to new threats.
  • CIOs should work with their cyber risk leaders to transform defensive capabilities and become more resilient.
  • Risk professionals should get comfortable with new paradigms and be willing to trade methodical, waterfall-type approaches for context, speed, and agility.

Increasingly, governments and regulators expect executives, particularly those in regulated industries, to understand the risks associated with their decisions—and to put in place the proper governance to mitigate those risks during execution and ongoing operations.

Historically, cyber risk has fallen under the purview of the information or network security team. They shored up firewalls and network routers, seeking to protect internal data and systems from external threats. Today, this approach to cybersecurity may be ineffective or inadequate. In many cases, organizations have assets located outside their walls—in the cloud or behind third-party APIs—and endpoints accessing their networks and systems from around the globe. Additionally, as companies adopt IoT-based models, they may be expanding their ecosystems to literally millions of connected devices. Where we once thought about security at the perimeter, we now expand that thinking to consider managing cyber risk in a far more ubiquitous way.

From an architecture (bottom up) perspective, cloud adoption, software-defined networks, intensive analytics, tighter integration with customers, and digital transformation are driving IT decisions that expand the risk profile of the modern technology stack. However, those same advancements can be leveraged to transform and modernize cyber defense. For example, virtualization, micro-segmentation, and “infrastructure as code” (automation) can enable deployment and teardown of environments in a fashion that is far faster, more secure, more consistent, and more agile than ever before.

Additionally, as part of a top-down reengineering of technology operating and delivery models, risk and cybersecurity evaluation and planning should be the entire organization’s responsibility. Development, operations, information security, and the business should be in lockstep from the beginning of the project life cycle so that everyone collectively understands the exposures, trade-offs, and impact of their decisions.

To manage risk proactively in a modernized infrastructure environment, build in security from the start:

  • Be realistic. From a risk perspective, acknowledge that some things are outside of your control and that your traditional risk management strategy may need to evolve. Understand the broader risk landscape and your priority use cases, and revisit your risk tolerance in light of automation, speed, and agility.
  • Adapt your capabilities to address increased risk. This could mean investing in new tools, revising or implementing technology management processes, and standing up new services, as well as hiring additional talent.
  • Take advantage of enhanced security capabilities enabled by a modern infrastructure. The same changes that can help IT become faster, more agile, and more efficient—automation and real-time testing, for example—can help make your systems and infrastructure more secure.
  • Build secure vendor and partner relationships. Promote resilience across your supply chain, and develop an operating model that determines how you and your partners would address a breach in the ecosystem.

Global impact

The reengineering technology trend is a global phenomenon. In a survey of Deloitte leaders across 10 global regions, respondents consistently report that companies in their markets are looking for opportunities to enhance the speed and impact of technology investments. Several factors make the trend highly relevant across regions: increasing CIO influence, IT’s desire to drive innovation agendas, and the scale and complexity of many existing IT portfolios and technology assets.

Expected timeframes for adoption vary around the globe. Survey respondents in all regions are seeing many companies express an active interest in adopting Agile or implementing DevOps, regardless of whether their investments in ITIL and IT service management are mature. In Asia Pacific and Latin America, this tension between desire and readiness may actually be impeding reengineering progress. In southern Europe, we are seeing some companies building digital teams that operate independently of existing processes and systems. North America is the only region where organizations across many industry sectors are taking on the kind of overarching top-down and bottom-up transformation this chapter describes, though there are some emerging discrete examples elsewhere—for example, in UK financial services and Asia high tech.

Finally, survey results indicate that company readiness to embrace the reengineering technology trend differs from region to region. Regional economic downturns of the last few years and weakened currencies have compressed IT budgets in southern Europe and Latin America. Cultural dynamics and skillsets are also impacting trend readiness. For example, in northern Europe, factors range from potential delays due to hierarchical biases and a lack of executive mandates, to optimism and a clear desire for change in companies where building and teaming leadership styles are the norm. Broadly, however, lack of expertise and landmark proof points are common obstacles to executing ambitious change.

Where do you start?

Reengineering IT shops from the top down and bottom up is no small order. Though a major goal of the reengineering trend is moving beyond incremental deployments and reactions to innovation and market demands, few companies have the resources to reengineer themselves in a single, comprehensive project. Before embarking on your journey, consider taking the following preliminary steps. Each can help you prepare for the transformation effort ahead, whether it be incremental or comprehensive.

  • Know thy organization: People react to change in different ways. Some embrace it enthusiastically; others resist it. The same can be said for organizations. Before committing to any specific reengineering strategy, take a clear-eyed look at the organization you are looking to impact. Failure to understand its culture and workers can undermine your authority and make it difficult to lead the transformation effort ahead.

    Typically, IT organizations fall into one of three categories:
    • “There is a will, but no way.” The organization may operate within strict guidelines or may not react well to change; any shifts should be incremental.
    • “If there is a will, there is a way.” People in these IT shops may be open to change, but actually getting them to learn new tools or approaches may take effort.
    • “Change is the only constant.” These IT organizations embrace transformational change and respond well to fundamental shifts in the way that IT and the business operate.

    By understanding an organization’s culture, working style, and morale drivers, you can tailor your reengineering strategy to accommodate both technological and human considerations. This may mean offering training opportunities to help IT talent become more comfortable with new systems. Or it may mean piloting greenfield development teams that feature rotational staffing to expose workers from across IT to new team models and technologies.
  • Know thyself: Just as CIOs should understand their IT organizations, so should they understand their own strengths and weaknesses as leaders before attempting to reengineer a company’s entire approach to technology. There are three leadership patterns that can add value in distinct ways:
    • Trusted operator. Delivers operational discipline within their organizations by focusing on cost, operational efficiency, and performance reliability. Also provides enabling technologies, supports business transformation efforts, and aligns to business strategy.
    • Change instigator. Takes the lead on technology-enabled business transformation and change initiatives. Allocates significant time to supporting business strategy and delivering emerging technologies.
    • Co-creator. Spends considerable time collaborating with the business, working as a partner in strategy and product development, and executing change across the organization.

    Examining your own strengths and weaknesses as a technology leader is not an academic exercise. With explicit understanding of different leadership patterns and of your own capabilities, you can better set priorities, manage relationships, and juggle responsibilities. Moreover, this leadership framework may even inspire some constructive soul-searching into how you are spending your time, how you would like to spend your time, and how you can shift your focus to deliver more value to your organization.
  • Change your people or change your people? Most tech workers succeed in IT because they like change. Even so, many have gotten stuck in highly specialized niches, siloed functions, and groupthink. As part of any reengineering initiative, these workers should change, or consider being changed out. Given reengineering’s emphasis on automation, there should be plenty of opportunities for IT talent to upskill and thrive. There may well be fewer IT jobs in the future, but the jobs that remain will likely be more satisfying: challenging, analytical, and creative, allowing people to work with technologies that can deliver more impact than ever before.

Bottom line

In many companies, IT’s traditional delivery models can no longer keep up with the rapid-fire pace of technology innovation and the disruptive change it fuels. The reengineering technology trend offers CIOs and their teams a roadmap for fundamentally overhauling IT from the bottom up and the top down. Pursued in concert, these two approaches can help IT address the challenges of today and prepare for the realities of tomorrow.
