API imperative: From IT concern to business mandate
Do you see your technology assets as, well, assets? Companies are architecting and designing their technology as discrete sets of digital building blocks intended to be reused. The “API imperative” involves strategically deploying services and platforms for use both within and beyond the enterprise.
Looking back across successive industrial revolutions, interoperability and modularity have consistently delivered competitive advantage. Eli Whitney’s interchangeable rifle parts gave way to Henry Ford’s assembly lines, which ushered in the era of mass production. Sabre transformed the airline industry by standardizing booking and ticketing processes—which in turn drove unprecedented collaboration. Payment networks simplified global banking, with SWIFT and FIX becoming the backbone of financial exchanges, which in turn made dramatic growth in trade and commerce possible.
The same concept manifests in the digital era as “platforms”—solutions whose value lies not only in their ability to solve immediate business problems but in their effectiveness as launching pads for future growth. Look no further than the core offerings of global digital giants, including Alibaba, Alphabet, Apple, Amazon, Facebook, Microsoft, Tencent, and Baidu. These companies have become dominant in part by offering platforms that their customers can use to extend services to entire ecosystems of end users, third parties, and others—platforms designed around the principles of interoperability and modularity.
In the world of information technology, application programming interfaces (APIs) are one of the key building blocks supporting interoperability and design modularity. APIs, an architectural technique as old as computer science, can help improve the way systems and solutions exchange information, invoke business logic, and execute transactions. In previous editions of Tech Trends, we have tracked the growth of API deployment and the increasingly critical role that APIs are playing in systems architecture, innovation, modernization, and in the burgeoning “API economy.”1 This growth continues apace: As of early 2017, the number of public APIs available surpassed 18,000, representing an increase of roughly 2,000 new APIs over the previous year.2 Across large enterprises globally, private APIs likely number in the millions.
What accounts for such growth? Increasingly, APIs are becoming a strategic mandate. If every company is a technology company, then the idea that technology assets should be built for reuse seems intuitive. Reuse compounds return on technology investments in ways that couldn’t be imagined when IT departments were developing many legacy solutions.
That said, reuse requires new capabilities to manage the exchange of what is essentially an encapsulation of intellectual property. These new capabilities also make it possible to support the flow of information and operations across organizational boundaries, and to manage the discovery, usage, and servicing of API assets. Collectively, the strategic intent behind APIs and the capabilities that enable it represent the API imperative trend.
Because APIs have been around for many years, it is worth separating the tenets of the API imperative trend from previous incarnations and potential biases. Large, complex projects have always featured interfaces that exchange information between systems. The vast majority of these interfaces were, and continue to be, completely bespoke, engineered to meet specific project needs. As point-to-point interfaces proliferated, complex interdependencies between systems begat the spaghetti diagrams that represent too many IT landscapes today. In brittle, custom-built interfaces, customer, order, product, and sales information is often duplicated, and making changes has required trying—often unsuccessfully—to unwind a tangled mess. Meanwhile, each successive project introduces new interfaces and more complexity.
APIs were an attempt to control the chaos by encapsulating logical business concepts such as core data entities (think customer or product) or transactions (for example, “place an order” or “get price”) as services. APIs could be consumed in broad and expanding ways. What’s more, good API design also introduced controls to help manage an API’s own life cycle.
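To make the idea concrete, here is a minimal sketch in Python (using Flask) of a service that encapsulates the “get price” business concept behind a stable, versioned contract. The endpoint path, product data, and names are illustrative assumptions, not drawn from any system described in this chapter:

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Hypothetical catalog standing in for the real system of record.
_CATALOG = {
    "sku-1001": {"name": "Sparkling water 12-pack", "price": 5.99},
    "sku-1002": {"name": "Cola 6-pack", "price": 3.49},
}

@app.route("/v1/products/<sku>/price")
def get_price(sku):
    # Consumers depend only on this versioned URL and response shape;
    # the underlying data store can change without breaking them.
    product = _CATALOG.get(sku)
    if product is None:
        abort(404)
    return jsonify({"sku": sku, "price": product["price"], "currency": "USD"})

if __name__ == "__main__":
    app.run(port=8080)
```

Because the business concept lives behind one well-defined contract, any number of projects can reuse it rather than wiring up yet another point-to-point interface.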
Today, many organizations have yet to fully embrace API opportunities. We know anecdotally that while developing shared APIs inside IT is growing in popularity, traditional project-based, siloed integration approaches remain the rule, not the exception. Much of IT’s budget and effort go to paying back technical debt and maintaining legacy assets that were not designed to gracefully expose data and business logic. Remediating that existing legacy to be API-friendly is akin to open-heart surgery.
At the same time, rebuilding the foundation with greenfield solutions can be challenging, adding cost, time, and complexity to project plans. It also requires a different set of skills to architect and realize the vision. For many companies, the prospect of disrupting established controls, budgeting models, processes, and talent models seems daunting—especially if the “so what” is left as a tactical IT architecture decision.
Yet hesitation carries its own risks: The need for agility, scalability, and speed grows more pressing each month as innovation presents new opportunities, remakes markets, and fuels competition. Over the next 18 to 24 months, expect many heretofore cautious companies to embrace the API imperative—the strategic deployment of application programming interfaces to facilitate self-service publishing and consumption of services within and beyond the enterprise.
In embracing the API imperative, companies are making a strategic choice. They are committing to evolve their expectations of technology investments to include the creation of reusable assets—and committing to build a lasting culture of reuse to inform future project planning. Preparing, both strategically and culturally, to create and consume APIs is key to achieving business agility, unlocking new value in existing assets, and accelerating the process of delivering new ideas to the market.
APIs can deliver a variety of operational and strategic benefits. For example, revitalizing a legacy system with modern APIs encapsulates the intellectual property and data contained within that system, making this information reusable by developers who might not know how to work with the underlying technology directly (and probably would not want to). Likewise, building APIs onto monument systems makes it possible to extract more value from IT assets, while at the same time using valuable existing data to drive new innovations. Finally, incorporating APIs into new applications allows for easier consumption and reuse across new web, mobile, and IoT experiences, not to mention the option of exposing those APIs externally to enable new business models and partner ecosystems.
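A simplified sketch of the legacy-revitalization pattern follows, in Python. The fixed-width record format and every name here are hypothetical stand-ins for whatever a legacy system actually returns; the point is that an API facade translates an awkward internal format into a clean, documented contract:

```python
def legacy_customer_lookup(customer_id: str) -> str:
    # Stand-in for a legacy call (say, a mainframe transaction) that
    # returns a fixed-width record: id (8 chars) | name (20) | status (1).
    return f"{customer_id:<8}" + f"{'JANE EXAMPLE':<20}" + "A"

def get_customer(customer_id: str) -> dict:
    """API facade: consumers receive structured data and never need to
    learn the fixed-width layout buried in the legacy system."""
    record = legacy_customer_lookup(customer_id)
    return {
        "id": record[0:8].strip(),
        "name": record[8:28].strip().title(),
        "active": record[28] == "A",
    }

print(get_customer("c-42"))
# {'id': 'c-42', 'name': 'Jane Example', 'active': True}
```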
APIs’ potential varies by industry and the deploying company’s underlying strategy. In a recent in-depth study of API use in the financial services sector, Deloitte, in collaboration with the Association of Banks in Singapore and the Monetary Authority of Singapore, identified 5,636 system and business processes common to financial services firms, mapping them to a manageable collection of 411 APIs.3 Once created, these building blocks could allow for vastly accelerated development of new solutions and offerings—from blockchain-driven trade finance to a virtual-reality retail branch experience.
As companies evolve their thinking away from project- to API-focused development, they will likely need to design management programs that address new ways of building, deploying, and governing technology assets.
Deploying and scaling APIs requires capabilities that are different from those typically used in established integration and messaging layers. Whether APIs are being consumed internally to orchestrate a new business process or externally as parts of new products, managing APIs deliberately throughout their life cycle can help make them more discoverable, serviceable, and more easily monitored.
As your ambitions evolve, explore how dedicated technology layers—API gateways, developer portals, and management and monitoring platforms—can help you manage APIs more strategically throughout their life cycle.
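As a flavor of what such a layer handles, the Python sketch below wraps any API handler with two cross-cutting life cycle concerns—usage metering and latency monitoring. A commercial API management platform provides these (plus security, throttling, and discovery) as configuration rather than code; the decorator and names are illustrative only:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("api-usage")

def managed_api(name: str, version: str):
    """Adds usage metering and latency monitoring around any handler."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                # Structured usage record: which API, which version, how fast.
                log.info('{"api": "%s", "version": "%s", "latency_ms": %.1f}',
                         name, version, elapsed_ms)
        return wrapper
    return decorator

@managed_api(name="get-price", version="v1")
def get_price(sku: str) -> float:
    return 5.99  # stand-in for real business logic

get_price("sku-1001")
```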
The API imperative trend is a strategic pillar of the reengineering technology trend discussed earlier in Tech Trends 2018. As with reengineering technology, the API imperative embodies a broader commitment not only to developing modern architecture but to enhancing technology’s potential ROI. It offers a way to make broad digital ambitions actionable, introducing management systems and technical architecture to embody a commitment toward business agility, reuse of technology assets, and potentially new avenues for exposing and monetizing intellectual property.
Over the last decade, AT&T embarked upon a series of mergers that united several large companies, leaving the IT organization to manage more than 6,000 applications, along with distinct operating and software development life cycle processes, each of which worked well in its own right. With the ultimate goal of bringing all of these applications and processes under the AT&T umbrella, the organization pursued a transformation effort to integrate the systems, remove duplicate costs, streamline global products and network care, and increase speed—all while delivering an effortless customer experience. To enable this transformation, the company defined a variety of big technology plays, with API platforms as the core, integral component.
The first step was application rationalization, which leaders positioned as an enterprise-wide business initiative. Over that decade, the IT team reduced the number of applications from 6,000-plus to 2,500, with a goal of 1,500 by 2020. When the team started the rationalization process in 2007, it quickly recognized the need for a modern, platform-based architecture designed for reuse rather than purpose-built applications with point-to-point interfaces. The team spent the next couple of years putting a platform architecture in place, and then introduced an API layer in 2009 with a common data model across the wired and wireless business.
“We saw an opportunity to reduce the total cost of ownership by billions of dollars, as well as achieve huge savings for care centers as they were consolidated,” says Sorabh Saxena, president of business operations (formerly, CIO of network and shared services) for AT&T. “APIs also enable more agility and speed to market for product teams. The goal was to motivate both the corporate and technology teams to build a software-driven, platform-based company.”4
AT&T made the API platform the focus of its solutions architecture team, which fields more than 3,000 business project requests each year and lays out a blueprint of how to architect each solution within the platform. Saxena’s team implemented a federated development program so each business unit’s unique needs would be taken into consideration on the API platform. In a $160 billion-plus company, some voiced concerns that business knowledge couldn’t be centralized in one team. AT&T now has close to 200 federated development teams, aligned to the applications themselves. Federated teams develop on the platform, combining the commonality of the platform with the teams’ business knowledge, while the platform teams remain responsible for the environment, development standards, design and test assurance, deployment, and production support.
In the beginning, the team seeded the API platform by building APIs to serve specific business needs. Over time, it shifted from building new APIs to reusing them. In 2017, there were approximately 4,000 instances of reuse, which Saxena values at hundreds of millions of dollars in savings over the years. Likewise, by September 2017, AT&T had 24 billion transactions per month on its API platforms—for internal, developer, and business-to-business applications—compared to 10 billion transactions per month in 2013. The number of APIs has grown more than threefold in that time frame, and cycle time and quality have improved significantly. Though the API platform hasn’t removed all instances of point-to-point application interfaces, the bias is to use APIs.
But in the beginning, the IT team needed to encourage buy-in across the organization for the API strategy. Saxena says teams were reluctant at first, expecting latency to result from a shared services model, so his team cultivated relationships with local champions in each area of the organization and tied their performance to the program. They also zeroed in on potential detractors and proactively provided white-glove service before any issues bubbled up, thereby increasing overall support.
Additionally, the team instituted an exception process that was “made painful on purpose.” Saxena hosted a twice-weekly call in which departments presented a request to build an application outside the API platform, and he would personally approve or deny the exception. The initial 20 percent exception rate eventually stabilized at 4 to 5 percent as teams saw that the upfront investment would quickly pay back, with big dividends. They redirected business funding to build the APIs, which became the architecture standard. By sharing reuse benefits with the business, the API platform has succeeded in speeding up deployment while lowering costs.
The next step in AT&T’s transformation is a microservices journey. The team is taking the monolithic applications with the highest spend, pain points, and total cost of ownership and turning them—and all their layers, such as UI/UX, business logic, workflow, and data—into microservices. At AT&T, the microservices transformation has tangible business goals: Since change is the one constant, the aims are to increase the speed, and reduce the cost and risk, of change to the enterprise suite of APIs. “Right-sizing” microservices relative to the previous monoliths helps componentize distributed business functions, which facilitates change. To ease the microservices transition, the team is deploying a hybrid architecture, putting in place an intelligent routing function to direct services to either the monolith or microservices, and implementing data sharing.
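The routing function AT&T describes resembles the widely used “strangler” migration pattern, in which a thin routing layer decides, per business domain, whether a request goes to the monolith or to a new microservice. The Python sketch below illustrates the idea with entirely hypothetical domains and URLs, not AT&T’s actual implementation:

```python
# Domains already migrated to microservices; everything else still
# lives in the monolith. The routing table grows as migration proceeds.
MIGRATED = {
    "billing": "https://billing.example.internal",
    "orders": "https://orders.example.internal",
}
MONOLITH = "https://monolith.example.internal"

def route(domain: str, path: str) -> str:
    """Return the base URL that should serve this request."""
    if domain in MIGRATED:
        return MIGRATED[domain] + path
    return f"{MONOLITH}/{domain}{path}"

assert route("billing", "/invoices/42").startswith("https://billing")
assert route("catalog", "/items/7").startswith("https://monolith")
```

Because callers only ever see the router’s address, each function can move out of the monolith on its own schedule without consumers changing a line of code.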
The API and microservices platform will deliver a true DevOps experience—an automated continuous integration/continuous delivery pipeline—providing the velocity and scalability needed to increase speed, reduce cost, and improve quality. The platform will support several of AT&T’s strategic initiatives: artificial intelligence, machine learning, cloud development, and automation, among others.
“We positioned the API journey as a business initiative, rather than a technology effort,” Saxena says. “We worked with product partners to educate them on how technology changes would streamline nationwide product launches, with single processes, training programs, and greater flexibility in arranging the workforce. We built the necessary upswell and secured the support across teams. Now, whenever we want to do something new with technology, we think business first.”
What’s the secret to being an industry leader for 131 years? For the Coca-Cola Co., it’s adapting to the needs and desires of its customers, which entails everything from crowdsourcing new sweeteners to delivering summer shipments via drones. More importantly, it means embracing digital, a goal set by the organization’s new CEO, James Quincey. The enterprise architecture team found itself well positioned for the resulting IT modernization push, having already laid the foundation with an aggressive API strategy.
“All APIs are not created equal,” says Michelle Routh, Coca-Cola chief enterprise architect. “It’s one thing to have an API, and another thing to have an API that operates well.”
Coca-Cola’s API journey began several years ago, when Routh was CIO for North America and she and her team put in place a modern marketing technology platform. They moved all of their applications onto the public cloud and based their marketing technology platforms on software-as-a-service solutions. Routh’s team then built a conceptual API layer across the marketing technology stack, facilitating a move from a monolithic to a modern platform. Next, they decomposed and decoupled the platform into a set of easily consumable microservices and made them available to the thousands of marketing agencies with which the company works.
The team leveraged Splunk software to monitor the APIs’ performance, enabling a shift from reactive to proactive: The team could watch performance levels and intervene before degradation or outages occurred. A friendly competition ensued among the teams and departments providing APIs to build the best performer, resulting in even greater efficiencies over time. The marketing agencies could access the services quickly and easily, and Coca-Cola scaled its investment with agility and speed-to-market, resulting in best-in-class digital marketing.
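Proactive monitoring of this kind typically starts with APIs emitting structured telemetry that an indexing tool such as Splunk can ingest and alert on. The Python sketch below, with an invented latency budget and names, shows the basic mechanic:

```python
import json
import logging
import time
import urllib.request

logging.basicConfig(level=logging.WARNING)
LATENCY_BUDGET_MS = 250  # hypothetical service-level threshold

def timed_call(api_name: str, url: str) -> bytes:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read()
    latency_ms = (time.perf_counter() - start) * 1000
    # Structured JSON event that a log indexer can search and chart.
    print(json.dumps({"api": api_name, "latency_ms": round(latency_ms, 1)}))
    if latency_ms > LATENCY_BUDGET_MS:
        # Trending warnings signal degradation before an outage occurs.
        logging.warning("latency budget exceeded for %s", api_name)
    return body
```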
Now the enterprise architecture team is leveraging that experience as it works alongside the chief digital officer to transform Coca-Cola’s business and modernize its core to meet the demands of a digital enterprise. The organization is undergoing a systemwide assessment to gauge its readiness in five areas: data, digital talent, automation innovation, cloud, and cyber. The enterprise architecture team is developing reference architectures to align with each of those five capabilities—mapping all the way to an outcome that builds a solution for a particular business problem. Routh realized that to become more digital, the company needs to do things at scale to drive growth: “For us to provide a technology stack for a truly digital company, we need a set of easily consumable APIs to help the business go to market quickly.”
The modernization program first targeted legacy systems for Foodservice, one of Coca-Cola’s oldest businesses. The challenge was to convince long-established customers—some with contracts dating back a century—that moving away from paper-based data delivery would make it easier to do business with the company. The ability to develop and publish standard APIs facilitated the process and elevated the organization’s engagement with those customers.
“We want to be able to offer a series of services that people can call on, by domain, to start building their own experiences right away,” says Bill Maynard, Coca-Cola global senior director of innovation and enterprise architecture. “We don’t debate the need for APIs. We just do it.”
Indeed, APIs have already become an integral part of the fabric of the new, digital Coca-Cola. “When we look at the business case, we don’t decompose it into parts,” Routh says. “Migrating to the public cloud, embracing Agile methodology and DevOps, and building an API layer were all components of the overall initiative to move to a modern best-in-class technology stack. The collective of all three is enabling our growth and allowing us to achieve a digital Coca-Cola.”5
The State of Michigan’s Department of Technology, Management and Budget (DTMB) provides administrative and technology services and information for departments and agencies in the state government’s executive branch. When the Michigan Department of Health and Human Services (MDHHS) needed to exchange Medicaid-related information across agencies in support of legislative changes mandated by the Affordable Care Act, DTMB implemented an enterprise service bus and established a reusable integration foundation.
Later, the health and human services department embarked on a mission to reform how the agency engages with citizens, seeking to tailor service delivery to specific citizen needs via an Integrated Service Delivery program. In expanding services to help more families achieve self-sufficiency, the department—offering new cloud-based, citizen-facing programs—needed to scale technology to support the increased activity. DTMB decided to evolve its architecture to expand the enterprise service bus and add an API layer. An API layer would allow for reuse and scalability, as well as provide operational stability through service management, helping to prevent outages and performance degradation across the system by monitoring and limiting service consumers.
“Paired with our ongoing cloud initiatives, APIs were a sensible approach for a more effective architecture and reuse across all state agencies,” says DTMB general manager Linda Pung. “They can share APIs with each other to help drive down cost, as well as facilitate a quicker time to market.”6
DTMB has taken a multi-phased approach to leveraging APIs with existing IT assets such as back-end systems, data, enterprise shared services, and infrastructure. Data is the key driver of the entire strategy.
“We need to support sharing data in a standardized and simplified manner between cloud services and on-premises data sources, not only in the department but across multiple agencies to enable better customer service and data security,” says Judy Odett, DTMB’s business relationship manager. “Additionally, the solution must be scalable so it can continue to expand with additional datasets over time.”
The first step was to expand the enterprise service bus to enable the cloud-based portal to leverage existing state assets. This was followed by the deployment of an API management platform, building upon existing architecture and enabling reuse. The team chose a platform that allowed rate limiting and load balancing, as well as the ability to embed the state’s security policies. DTMB recently released its first pilot phase with bounded functionality, and the department plans to roll out the platform enterprise-wide, with full functionality, in the near future. A service management solution will provide a portal for DTMB architects to review and analyze consolidated web services, a responsibility that each individual system owner currently handles. This will reduce the number of duplicate web services and facilitate reuse.
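Rate limiting of the kind DTMB’s platform provides is commonly implemented as a per-consumer token bucket, which permits a steady request rate plus small bursts. A self-contained Python sketch (the rates and consumer names are invented):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Each consumer accrues tokens at a fixed rate, up to a burst cap;
    a request spends one token or is rejected (HTTP 429 in practice)."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens = defaultdict(lambda: float(burst))
        self.last_seen = defaultdict(time.monotonic)

    def allow(self, consumer_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen[consumer_id]
        self.last_seen[consumer_id] = now
        self.tokens[consumer_id] = min(
            self.burst, self.tokens[consumer_id] + elapsed * self.rate)
        if self.tokens[consumer_id] >= 1:
            self.tokens[consumer_id] -= 1
            return True
        return False

limiter = TokenBucket(rate_per_sec=10, burst=20)
print(limiter.allow("citizen-portal"))  # True until the bucket drains
```

Limiting each consumer independently is what lets one misbehaving client be throttled without degrading service for everyone else.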
Development time has decreased through reuse of existing enterprise shared services such as a master person index and address cleansing. The program has also centralized security by allowing citizens to verify their identities through third-party identity management services and by enabling secure data exchange through centralized gateway services. Finally, MDHHS anticipates a reduction in the number of customer inquiries by enabling citizens to access data through mobile applications supported by the APIs.
Reaction to the pilot has been positive, and the faster time to market, improved operational stability, and better data quality are already yielding benefits to consumers.
In the new digital economy, consumer expectations are rapidly evolving: Customers want “frictionless” transactions and rich digital experiences. Like many financial institutions, the 150-year-old Canadian Imperial Bank of Commerce (CIBC) is building new capabilities to help it meet customers’ increasingly sophisticated needs. This means integrating new functionality into its existing infrastructure. However, technology integration—whether it be extending an existing capability or introducing a new one—is often time-consuming and expensive. While CIBC has been on a service-oriented architecture journey for over a decade, it wants to further modernize its architecture to reduce the cost and effort of integration, while continuing to meet customer demands for an end-to-end experience.
Building a platform for integration is not new to CIBC, which has thousands of highly reusable web services running across its platform. But the team recognized that the current SOA-based model is being replaced by a next-gen architecture—one based on RESTful APIs combined with a microservices architecture.
CIBC evaluated different approaches for modernizing its integration architecture, and decided to focus on cloud-native, open-source frameworks. The bank moved to a self-service publishing model, where API consumers can access microservices without a traditional API gateway intermediary. This simplified, democratized model has alleviated the bottlenecks common to more traditional approaches.
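When consumers call microservices directly rather than through a central gateway, responsibilities such as load balancing and failover shift toward the client. The Python sketch below shows one simple version of that pattern; the instance URLs are hypothetical, and a production setup would discover them from a service registry rather than a hard-coded list:

```python
import itertools
import urllib.request
from urllib.error import URLError

# Hypothetical service instances; a registry would supply these in practice.
INSTANCES = itertools.cycle([
    "https://accounts-1.example.internal",
    "https://accounts-2.example.internal",
])

def call_service(path: str, attempts: int = 2) -> bytes:
    """Round-robin across instances, retrying on a different one if a
    call fails—duties a central gateway would otherwise perform."""
    last_error = URLError("no attempts made")
    for _ in range(attempts):
        base = next(INSTANCES)
        try:
            with urllib.request.urlopen(base + path, timeout=2) as resp:
                return resp.read()
        except URLError as err:
            last_error = err
    raise last_error
```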
“From a technology standpoint, the combination of APIs, the cloud, and open source frameworks such as Light4J are creating tremendous benefit,” says Brad Fedosoff, CIBC vice president and head of enterprise architecture. “We currently have APIs implemented across some of our production systems, and implementation has been faster and cheaper, with greater flexibility, than initially thought.”
For example, CIBC internally identified a new technology for its data services; working with the API platform team, the bank had a working version a week later. Traditionally, this request would have taken months to come to fruition. From a business perspective, CIBC has been able to innovate and offer new capabilities in rapid fashion. One example is its Global Money Transfer service, which allows clients in Canada to send money to more than 50 countries for no fee. The IT team quickly integrated internal and external capabilities from third parties to simplify the money transfer and to provide a smooth experience for customers.
As it continues to evolve its customer experience, CIBC is turning its attention to payments and identity as the next areas of opportunity to expand its API footprint.
“We envision an API/microservices-based approach as the heart of the Global Open Banking movement,” Fedosoff says. “Financial services firms will look to open up capabilities and, as a result, will need to develop innovative features and effortless journeys for clients. APIs may be a smart way to do it.”7
When Jeff Bezos started building Amazon, there was nothing else like it from a technology perspective. We were doing iterative development on a monolithic codebase that included everything from content to customer service apps to the logistics of shipping packages. Amazon’s mantra has always been “delight customers,” and that has been the driving force behind our evolutionary journey.
With each stage of growth, we refine our approach. Around 2000, our engineers were building stateless applications whose state was maintained in back-end databases. These databases were shared resources, so employees could easily access the data they needed and not worry about where the data lived. As Amazon rapidly scaled—adding product categories and expanding internationally—these shared resources became shared obstacles, compromising speed.
So the engineers started thinking about a different kind of architecture, one in which each piece of code would own its own database and encapsulated business logic. We called them “services,” well before the popularity of service-oriented architecture. Dependencies were embodied in APIs, giving teams the freedom to make rapid changes to the underlying data model and logic as the business demanded. This allowed an evolutionary approach to engineering and let us carve out the monolith, piece by piece. Performance metrics began ramping up again.
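The essence of that design—each service owning its data and exposing it only through an API contract—can be shown in a few lines of Python. The class and fields below are invented for illustration, not Amazon’s actual code; the point is that the internal data model can change while the contract holds steady:

```python
class CustomerService:
    """Owns its own data store; other teams call get_customer(), never
    the database, so the model can evolve without coordinated releases."""

    def __init__(self):
        # The team split one "name" field into two—an internal change.
        self._db = {"c-1": {"given": "Jane", "family": "Example"}}

    def get_customer(self, customer_id: str) -> dict:
        row = self._db[customer_id]
        # The API contract still promises a single "name" field.
        return {"id": customer_id, "name": f"{row['given']} {row['family']}"}

print(CustomerService().get_customer("c-1"))
# {'id': 'c-1', 'name': 'Jane Example'}
```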
Then, around 2004, we realized that a few of the services had become as big as the monolith had been. Services were organized by data—order, customer, products—which had exploded as the business grew. For example, a single service maintained all of the code that operated on Amazon’s global customer base, even as that base expanded exponentially. Different capabilities needed different levels of service, but because they were grouped together, everything had to be engineered to the highest common need—for scalability, security, reliability, and more. We realized we needed to shift to a functional decomposition, creating what we now call microservices. We ended up with around 600 to 800 services.
After enjoying several years of increased velocity, we observed productivity declining again. Engineers were spending more and more time on infrastructure: managing databases, data centers, network resources, and load balancing. We concluded that a number of capabilities were much better suited to be shared services, in which all of our engineers could reuse technology without having to carry the burden of solving for the underlying platform. This led to the build-out of the technical components that would become Amazon Web Services (AWS).
Amazon is a unique company. It looks like a retailer on the outside, but we truly are a technology company. Senior management is not only supportive of technology initiatives—they are technologists themselves who take part in the architectural review. Technology is not a service group to the business—the two are intertwined. We hire the best engineers and don’t stand in their way: If they decide a solution is best, they are free to move forward with it. To move fast, we removed decision-making from a top-down perspective—engineers are responsible for their teams, their roadmaps, and their own architecture and engineering; that includes oversight for reuse of APIs. Teams are encouraged to do some lightweight discovery to see whether anybody else has solved parts of the problems in front of them, but we allow some duplication to happen in exchange for the ability to move fast.
Our experience with services and APIs has been crucial to building AWS, which turns everything—whether it’s a data center, outbound service, network, or database—into a software component. If we hadn’t experienced the process ourselves, we would have been unable to understand either the value it would have for our customers or the needs our customers would have to build, run, and evolve in such an environment. We realized this technology could help Internet-scale companies be successful, and it completely transformed the technology industry. Now, many of our AWS customers are transforming their worlds as well.
Speed of execution and speed of innovation are crucial to Amazon’s business. The shift to APIs enabled agility, while giving us much better control over scaling, performance, and reliability—as well as the cost profile—for each component. What we learned became, and remains, essential to scaling the business as we continue to innovate and grow.
Historically, organizations secured their siloed and controlled environments by locking down devices, systems, and platforms in order to protect data that lived inside their own four walls. In today’s computing environment, with the proliferation of loosely coupled systems, multi-vendor platforms, integrations across traditional enterprise boundaries, and open APIs, this strategy is likely no longer adequate.
Today’s API imperative is part of a broader move by the enterprise to open architectures—exposing data, services, and transactions in order to build new products and offerings and to enable newer, more efficient business models. But this expansion of channels inherently increases the permeability of an organization’s network, creating new seams and a broader attack surface for adversaries to exploit.
Cyber risk should be at the heart of an organization’s technology integration and API strategy. Organizations should consider how to secure data traveling across and beyond enterprise boundaries—managing API-specific identities, access, data encryption, confidentiality, and security logging and monitoring controls as data travels from one API to another.
An API built with security in mind from the start can be a more solid cornerstone of every application it enables; done poorly, it can multiply application risks. In other words, build it in—don’t bolt it on.
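One way to read “build it in” is that authentication and integrity checks run before any business logic does. A minimal, dependency-free Python sketch using a shared-secret HMAC (the secret and payloads are placeholders; real deployments would typically rely on standards such as OAuth 2.0 tokens and centrally managed keys):

```python
import hashlib
import hmac

SECRET = b"rotate-me"  # placeholder; in practice, centrally managed and rotated

def sign(payload: str) -> str:
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def handle_request(payload: str, signature: str) -> str:
    # Verification happens first—security is part of the API itself,
    # not an afterthought layered on by each consuming application.
    if not hmac.compare_digest(sign(payload), signature):
        return "401 Unauthorized"
    return "200 OK"  # business logic would run here

token = sign("consumer=agency-portal")
print(handle_request("consumer=agency-portal", token))  # 200 OK
print(handle_request("consumer=attacker", token))       # 401 Unauthorized
```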
While APIs can introduce new risks to an ecosystem, they can also help organizations facilitate standardized, dynamic protection against evolving threats.
An open and API-forward architecture can be well suited to address and help standardize the implementation of core security, monitoring, and resiliency requirements in computing environments. Cyber risk capabilities made available to applications, developers, partners, and third parties alike through a standardized API set can help address security policy mandates, minimum security and privacy guidelines, and compliance obligations.
When common cyber risk APIs are implemented effectively, organizations can update, upgrade, or reengineer services such as identity and access management, data encryption, certificate management, and security logging and monitoring, and have this enhanced functionality automatically pushed out across their enterprise, extraprise, or customer base. APIs can also improve an organization’s resiliency posture and enable rapid updates when new threats are identified—within a matter of hours, not days—thereby helping to reduce costs, operational overhead, and overall time to detect and respond.
Many security technology vendors are also moving to open API-based models, which could mean an increasingly integrated security ecosystem in which multi-vendor platforms integrate with one another to present a united front rather than layers of disjointed security solutions that could present exposures for hackers to exploit.
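The consumption side of a common cyber risk API is deliberately thin: An application delegates the security decision to a shared service, so upgrading that one service improves every caller at once. A Python sketch against a hypothetical internal endpoint (the URL and response fields are invented for illustration):

```python
import json
import urllib.request

# Hypothetical central identity service; upgrading its logic (new threat
# intelligence, stronger checks) instantly benefits every application.
IDENTITY_API = "https://security.example.internal/v2/verify"

def is_authorized(token: str) -> bool:
    req = urllib.request.Request(
        IDENTITY_API,
        data=json.dumps({"token": token}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp).get("authorized", False)
```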
As APIs become more common in organizations, the flexibility and scalability they provide can help improve an enterprise’s approach to being more secure, vigilant, and resilient against cyber-attacks.
Findings from a recent survey of Deloitte leaders across 10 regions suggest that several factors are driving the API imperative trend globally. First, with more organizations modernizing IT and reengineering technology delivery models, APIs are becoming centerpieces of digital transformation agendas and complex business models. Likewise, as major software vendors upgrade their solutions to support APIs and microservices, they are providing building blocks for API adoption. Finally, start-ups embracing API-driven architectures and capability models are providing proof points—and some competitive pressure—in regional ecosystems.
Survey respondents see API adoption progressing in several countries, with particular momentum in two industry sectors: financial services in the UK, US, Brazil, Canada, and across Asia Pacific; and media and telecommunications in Germany, Ireland, Italy, and Latin America. Across global markets, public-sector API adoption lags somewhat, perhaps due to ongoing “open government” guidelines that mandate longer time frames for organizing and executing larger-scale API transformation initiatives. Elsewhere, the trend is gaining momentum across verticals. For example, even though APIs are relatively new to the Middle East, a large number of businesses have already demonstrated how APIs can help organizations become leaner. Survey respondents see API adoption accelerating throughout the region, especially in Israel.
Globally, companies are recognizing that API ambitions go hand-in-hand with broader core modernization and data management efforts. Indeed, data management becomes particularly relevant in API projects, because data quality and control challenges may emerge when underlying technology assets become exposed. Survey respondents in Denmark specifically called out an issue that appears to be universal: New systems are being built with APIs incorporated within them, while legacy systems continue to impede information sharing.
Across regions, organizations are working with regulators to shape laws addressing the management and sharing of information, security, and privacy. Even so, regulation will continue to vary greatly by region, particularly in Asia Pacific and EMEA. That said, there are a few encouraging signs on the regulation front: A recent EU ruling makes transparency into all IT services used in a technology project a condition for receiving government funding. The net result? Funding and procurement become forcing functions for the API imperative.
Viewed from the starting block, an API transformation effort may seem daunting, especially for CIOs whose IT environments include legacy systems and extensive technical debt. No checklist constitutes a detailed strategy, but a few deliberate first steps can help lay the groundwork for the journey ahead.