Sooner than many expected, executives will confront the promise—and challenge—of artificial general intelligence and quantum computing, with major ramifications for security and much more. Leading organizations are developing a disciplined innovation response to these and other disruptive forces, creating capabilities to sense, scan, vet, experiment, incubate, and scale.
Science author Steven Johnson once observed that “innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.”1
In a business and technology climate where the ability to innovate has become critical to survival, many companies still struggle to create the disciplined, innovation-nurturing environments that Johnson describes. The process of innovating is, by definition, a hopeful journey into new landscapes. Without a clear destination, some executives can become unsure and frustrated. Where should we focus our innovation efforts? How can we develop breakthrough innovations that will set our business up for success in the future while delivering for the quarter? How can we turn our haphazard, episodic innovation efforts into methodical, productive processes?
With exponential technologies, the challenge becomes more daunting. Unlike many of the emerging tools and systems examined in this report—which demonstrate clear potential for impacting businesses in the next 18 to 24 months—exponentials can appear a bit smaller on the horizon. These are emerging technology forces that we think could manifest in a “horizon 3 to 5” timeframe—between 36 and 60 months. With some exponentials, the time horizon may extend far beyond five years before manifesting broadly in business and government. For example, artificial general intelligence (AGI) and quantum encryption, which we examine later in this chapter, fall into the 5+ category. Others could manifest more quickly; even AGI and quantum encryption are showing breadcrumbs of progress that may lead to breakthroughs in the nearer time horizon. As you begin exploring exponential forces, keep in mind that even though they may appear small on the horizon, you should not assume you have three to five years to put a plan together and get started. Now is the time to begin constructing an exponentials innovation environment in which, as Johnson says, “ideas can connect.”
At present, many enterprises lack the structures, capabilities, and processes required to innovate effectively in the face of exponential change—a reality that carries some risk. Though exponential initiatives may require leaps of faith and longer-term commitments, they can potentially deliver transformative outcomes. For example, in our Tech Trends 2014 report, we collaborated with faculty at Singularity University, a leading research institution, to explore robotics and additive manufacturing. At that time, these emerging technologies were outpacing Moore’s Law: Their performance relative to cost (and size) was more than doubling every 12 to 18 months. Just a few years later, we see these same technologies disrupting industries, business models, and strategies.
Researchers at Doblin, the innovation practice of Deloitte Digital, have studied how effective innovators approach these challenges and risks. They found that companies with the strongest innovation track records clearly articulate their innovation ambitions and maintain a strategically relevant portfolio of initiatives across ambition levels. Some efforts focus on core innovation that optimizes existing products for existing customers. Others pursue adjacent innovation that expands existing markets or develops new products from the existing asset base. Still others target transformational innovation—that is, deploying capital to develop solutions for markets that do not yet exist or for needs that customers may not even recognize they have.
Doblin researchers examined companies in the industrial, technology, and consumer goods sectors, and correlated the pattern of companies’ innovation investments with their share price performance. (See figure 1.) A striking pattern emerged: Outperforming firms typically allocate about 70 percent of their innovation resources to core offerings, 20 percent to adjacent efforts, and 10 percent to transformational initiatives. In contrast, cumulative returns on innovation investments tend to follow an inverse ratio, with 70 percent coming from the transformational initiatives, 20 percent from adjacent, and 10 percent from core.2 These findings suggest that most successful innovators have struck the ideal balance of core, adjacent, and transformational initiatives across the enterprise, and have put in place the tools and capabilities to manage those various initiatives as parts of an integrated whole. To be clear, a 70-20-10 allocation of innovation investments is not a magic formula that works for all companies—it is an average allocation based on cross-industry and cross-geography analysis. The optimum balance will vary from company to company.3
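The inverse relationship between the 70-20-10 investment split and the 10-20-70 return pattern can be put in per-dollar terms with a quick calculation. A sketch, using only the average figures cited above as illustrative inputs:

```python
# Illustrative figures from the cross-industry averages cited above:
# share of innovation investment vs. share of cumulative returns.
investment_share = {"core": 0.70, "adjacent": 0.20, "transformational": 0.10}
return_share     = {"core": 0.10, "adjacent": 0.20, "transformational": 0.70}

# Return generated per dollar invested, relative to the portfolio average.
per_dollar = {zone: return_share[zone] / investment_share[zone]
              for zone in investment_share}

for zone, ratio in per_dollar.items():
    print(f"{zone:17s} returns {ratio:.2f}x the portfolio average per dollar")

# On these averages, a transformational dollar returns 49x a core dollar.
print(per_dollar["transformational"] / per_dollar["core"])
```

The point is not the precise multiple, which will vary by company, but that the small transformational slice does disproportionate work in the portfolio.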
One might assume that innovations derived from exponential technologies will emerge only in the transformational zone. In fact, exponential innovation can occur in all three ambition zones. Author and professor Clayton Christensen observed that truly disruptive technologies are often deployed first to improve existing products and processes—that is, those in the core and nearby adjacent zones. Only later do these technologies find net new whitespace applications.4
Innovation investments allocated to exploring exponentials might be broadly characterized as “unknowable.” Whether targeted at core, adjacent, or transformational returns, exponential investments focus largely on possibilities and vision that work beyond today’s habits of success. Even though an exponential technology’s full potential may not become apparent for several years, relevant capabilities and applications are probably emerging today. If you wait three years before thinking seriously about them, your first non-accidental yield could be three to five years beyond that. Because exponential forces develop at an atypical, nonlinear pace, the longer you wait to begin exploring them, the further your company may fall behind.
As you begin planning the exponentials innovation journey ahead, consider taking a lifecycle approach—building capabilities to sense, scan, vet, experiment, incubate, and scale each exponential force.
Also examine how developing an ecosystem around each exponential force could help you engage external business partners, vendors, and suppliers as well as stakeholders in your own organization. How could such an ecosystem enable exchanges of value among members? What kind of governance and processes would be needed to manage such an ecosystem? How could your enterprise benefit from ecosystem success?
As you and stakeholders across the enterprise gradually deepen your understanding of exponential forces, you can begin exploring “state of the practical.” Specifically, which elements of a given exponential force can potentially benefit the business? To develop a more in-depth understanding of the state of the practical, examine an exponential’s viability through the lens of a balanced breakthrough model: What about this opportunity is desirable from a customer perspective? Is this opportunity viable from a business perspective? And importantly, do you have the critical capabilities and technology assets you will need to capitalize on this opportunity?
To move beyond exploration and into experimentation, try to prioritize use cases, develop basic business cases, and then build initial prototypes. If the business case yields—perhaps with some use case pivots—then you may have found a winning innovation.
As you dive into exponentials and begin thinking more deliberately about the way you approach innovation, it is easy to become distracted or discouraged. You may think, “This is scary and can’t be true” or, “This is only about technology.” It’s important not to lose sight of the fact that for most companies, human beings are the fundamental unit of economic value. For example, people remain at the center of investment processes, and they still make operational decisions about what innovations to test and deploy. Exploring exponential possibilities is first and foremost about driving certain human behaviors—in your operation, and in the marketplace. Moreover, as Steven Johnson suggests, when human ideas connect, innovation surely follows. With humans as the focus of your efforts, you will be able to keep exponentials—in all their mind-blowing grandeur—in a proper perspective.
Humans are not wired to think in an exponential way. We think linearly because our lives are linear journeys: We move from sunup to sundown, from Mondays to Fridays. The idea that something could be evolving so dramatically that its rate of change must be expressed in exponents seems, on a very basic level, nonsensical.
Yet exponential progress is happening, especially in technologies. Consider this very basic example: In 1997, the $46 million ASCI Red supercomputer had 1.3 teraflops of processing power, which at the time made it the world’s fastest computer.5 Today, Microsoft’s $499 Xbox One X gaming console has 6 teraflops of power.6 Mira, a supercomputer at Argonne National Laboratory, is a 10 petaflop machine.7 That’s ten thousand trillion floating point operations per second!
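The figures above can be turned into a rough doubling rate. A back-of-envelope sketch, using list prices and peak flops only (ignoring inflation and architectural differences):

```python
import math

# Cost-performance comparison from the figures above (rough, illustrative):
# 1997 ASCI Red: 1.3 teraflops for $46 million
# 2017 Xbox One X: 6 teraflops for $499
asci_red_flops_per_dollar = 1.3e12 / 46e6   # ~28,000 flops per dollar
xbox_flops_per_dollar     = 6e12 / 499      # ~12 billion flops per dollar

improvement = xbox_flops_per_dollar / asci_red_flops_per_dollar
doublings = math.log2(improvement)
months_per_doubling = 20 * 12 / doublings

print(f"~{improvement:,.0f}x improvement in flops per dollar over 20 years")
print(f"~{doublings:.1f} doublings, or one roughly every {months_per_doubling:.0f} months")
```

Even on this crude arithmetic, cost-performance improved by a factor of several hundred thousand in two decades—a doubling roughly every 13 months.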
Exponential innovation is not new, and there is no indication it will slow or stop. More importantly, exponential advances in computers enable exponential advances—and disruptions—in other areas. And therein lies the challenge for CIOs and other executives. How can companies ultimately harness exponential innovation rather than be disrupted by it? Consider the often-cited cautionary tale of Kodak. In the 1970s, Kodak created a 0.01-megapixel digital camera but decided to sit on the technology rather than market it.8 If you try to do what Kodak did, will somebody eventually come along and disrupt you?
Should you assume that every technology can have exponential potential? In 2011, a group of researchers demonstrated a neural network AI that could recognize a cat in a video—a breakthrough that some people found funny. If they had been able to see five years into the future, they might not have laughed. Today, retailers are projecting store performance and positively impacting revenue by analyzing in-store video feeds to determine how many bags each shopper is carrying.9
Reorienting linear-thinking, quarterly revenue-focused stakeholders and decision-makers toward exponential possibilities can be challenging. Institutional resistance to change only hardens when the change under consideration has a five-year time horizon. But exponential change is already under way, and its velocity only continues to increase. The question that business and agency leaders face is not whether exponential breakthroughs will upset the status quo, but how—and how much, and how soon . . .
In the 2013 Spike Jonze film Her, a sensitive man on the rebound from a broken marriage falls in love with “Samantha,” a new operating system that is intuitive, self-aware, and empathetic.10 Studio marketers advertised the film’s storyline as science fiction. But was it? Ongoing advances in artificial intelligence suggest that at some point in the future, technology may broadly match human intellectual (and social or emotional) capabilities and, in doing so, erase the boundary between humans and machines.11
Known as artificial general intelligence (AGI), this advanced version of today’s AI would have many capabilities that broadly match what humans call our gut instinct—the intuitive understanding we bring to unfamiliar situations that allows us to perceive, interpret, and deduce on the spot.
Consider the disruptive potential of a fully realized AGI solution: Virtual marketers could analyze massive stores of customer data—data from internal systems, fully informed by social media, news, and market feeds—to design, market, and sell products and services. Algorithms working around the clock could replace writers altogether by generating factual, complex, situation-appropriate content free of biases and in multiple languages. The list goes on.
As an exponential force, AGI may someday prove profoundly transformational. However, before that day arrives, AI will have to advance far beyond its current capabilities. Existing variations of AI can do only the things that programmers tell them to do, either explicitly or through machine learning. AI’s current strength lies primarily in “narrow” intelligence—so-called artificial narrow intelligence (ANI), such as natural language processing, image recognition, and deep learning to build expert systems. A fully realized AGI system will feature these narrow component capabilities, plus several others that do not yet exist: the ability to reason under uncertainty, to make decisions and act deliberately in the world, to sense, and to communicate naturally.
These “general” capabilities that may someday make AGI much more human-like remain stubbornly elusive. While there have been breakthroughs in neural networks, computer vision, and data mining, significant research challenges beyond computational power must be overcome for AGI to achieve its potential.12 Indeed, the most formidable challenge may lie in finding a means for technology to reason under uncertainty. This is not about harnessing a spectrum of existing learning, language, and sensing capabilities. It’s about creating something entirely new that enables mechanisms to explore an unfamiliar environment, draw actionable conclusions about it, and use those conclusions to complete an unfamiliar task. Three-year-old humans can do this well. At present, AI cannot.
In all likelihood, AGI’s general capabilities will not appear during some eureka! moment in a lab. Rather, they will emerge over time as part of AI’s ongoing evolution. During the next three to five years, expect to see improvements in AI’s current component capabilities. Likewise, there will likely be progress made toward integrating and orchestrating these capabilities in pairs and multiples. What you probably won’t see in this time horizon is the successful development, integration, and deployment of all AGI component capabilities. We believe that milestone is at least 10+ years away. (See “My take” below for more on this topic.) As AI use cases progress into full deployment and the pace of enterprise adoption accelerates, standards will likely emerge for machine learning and other AI component capabilities, and eventually for AI product suites.
From an enterprise perspective, many companies have already begun narrow intelligence journeys, often by exploring potential applications for ANI components, such as pattern recognition to diagnose skin cancer, or machine learning to improve decision-making in HR, legal, and other corporate functions.
In many cases, these initial steps yield information that becomes part of an internal ANI knowledge base—one that can be refined in the coming years as technologies advance and best practices emerge. For example, in a pioneering ANI initiative, Goldman Sachs is investing in machine learning in what will be an ongoing effort to leverage data as a strategic asset.13 Across the financial and other sectors, expect to see smaller applications as well—for example, applying deep learning to emails to identify patterns and generate insights into best practices and insider threats. Some of these individual successes will likely be launched in greenfield initiatives. Others may be accretive, but they too could illuminate insights that help companies develop and refine their ANI knowledge bases.
The state of the art reflects progress in each sub-problem and innovation in pair-wise integration. Vision + empathy = affective computing. Natural language processing + learning = translation between languages you’ve never seen before. Google TensorFlow can be used to build sentiment analysis and machine translation, but it’s not easy to get one solution to do both well. Generality is difficult. Advancing from one domain to two is a big deal; adding a third is exponentially harder.
John Launchbury, former director of the Information Innovation Office at the Defense Advanced Research Projects Agency, describes a notional artificial intelligence scale with four categories: learning within an environment; reasoning to plan and to decide; perceiving rich, complex, and subtle information; and abstracting to create new meanings.14 He describes the first wave of AI as handcrafted knowledge, in which humans create sets of rules to represent the structure of knowledge in well-defined domains, and machines then explore the specifics. These expert systems and rules engines are strong in the reasoning category and should be important elements of your AI portfolio. Launchbury describes the second wave—which is currently under way—as statistical learning. In this wave, humans create statistical models for specific problem domains and train them on large volumes of labeled data, using neural nets for deep learning. These second-wave AIs are good at perceiving and learning but less so at reasoning. He describes the next wave as contextual adaptation. In this wave, AI constructs contextual explanatory models for classes of real-world phenomena; this third wave balances the intelligence scale across all four categories, including the elusive abstracting.
Though many believe that computers will never be able to accurately recognize or fully understand human emotions, advances in machine learning suggest otherwise. Machine learning, paired with emotion recognition software, has already demonstrated human-level performance in discerning a person’s emotional state from tone of voice or facial expressions.15
These are critical steps in AI’s evolution into AGI. Other breadcrumbs suggest that the evolution may be gaining momentum. For example, a chatbot program reportedly became the first machine to pass the long-established “Turing test” by fooling interrogators into thinking it was a 13-year-old boy, though many experts dispute the result.16 (Other experts proffer more demanding measures, including standardized academic tests.)
Though it made hardly a ripple in the press, a significant AGI breadcrumb appeared on January 20, 2017, when researchers at Google’s AI skunkworks, DeepMind, quietly submitted a paper on arXiv titled “PathNet: Evolution Channels Gradient Descent in Super Neural Networks.” While not exactly beach reading, this paper may be remembered as one of the earliest published architectural designs for a fully realized AGI solution.17
As you work in the nearer time horizons with first- and second-wave ANIs, you may explore combining and composing multiple sub-problem solutions to achieve enterprise systems that balance the intelligence categories, including abstracting. Perhaps in the longer horizons, Samantha, Spike Jonze’s empathetic operating system, is not so fictional after all.
In March 2016, the American Association for Artificial Intelligence and I asked 193 AI researchers how long it would be until we achieve artificial “superintelligence,” defined as an intellect that is smarter than the best human in practically every field. Of the 80 Fellows who responded, 67.5 percent said it could take a quarter century or more, and 25 percent said it would likely never happen.18
Given the sheer number of “AI is coming to take your job” articles appearing across media, these survey findings may come as a surprise to some. Yet they are grounded in certain realities. While psychometrics measure human IQ fairly reliably, AI psychometrics are not nearly as mature. Ill-formed problems are vague and fuzzy, and wrestling them to the ground is a hard problem.
Few interactions in life have clearly defined rules, goals, and objectives, and expectations for artificial general intelligence in areas such as language are correspondingly squishy. How can you tell whether I’ve understood a sentence properly? Improving speech recognition doesn’t necessarily improve language understanding, since even simple communication can quickly become complicated—consider that there are more than 2 million ways to order a coffee at a popular chain. Successfully creating AGI that matches human intellectual capabilities—or artificial superintelligence (ASI) that surpasses them—will require dramatic improvements beyond where we are today.
However, you don’t have to wait for AGI to appear (if it ever does) to begin exploring AI’s possibilities. Some companies are already achieving positive outcomes with so-called artificial narrow intelligence (ANI) applications by pairing and combining multiple ANI capabilities to solve more complex problems. For example, natural language processing integrated with machine learning can expand the scope of language translation; computer vision paired with artificial empathy technologies can create affective computing capabilities. Consider self-driving cars, which have taken the sets of behaviors needed for driving—such as reading signs and figuring out what pedestrians might do—and converted them into something that AI can understand and act upon.
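The pairing idea above can be sketched in a few lines. This is purely illustrative—both components are stand-ins rather than real models, and all names are hypothetical—but it shows the architectural pattern of composing two narrow capabilities into one broader pipeline:

```python
# Illustrative sketch only: both "capabilities" are crude stand-ins,
# not real ML models. The point is the composition pattern.

def recognize_speech(audio: bytes) -> str:
    """Stand-in for a speech-to-text component."""
    return audio.decode("utf-8")  # pretend the audio is already text

def score_sentiment(text: str) -> int:
    """Stand-in for a sentiment model: crude keyword scoring."""
    positives = {"great", "good", "love"}
    negatives = {"bad", "hate", "broken"}
    words = text.lower().split()
    return sum(w in positives for w in words) - sum(w in negatives for w in words)

def voice_sentiment_pipeline(audio: bytes) -> int:
    """Two narrow capabilities chained into a broader one."""
    return score_sentiment(recognize_speech(audio))

print(voice_sentiment_pipeline(b"I love this great product"))  # prints 2
```

In a real system, each stage would be a trained model with its own failure modes; the engineering work lies in handling the errors that compound as stages are chained.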
You need specialized skillsets to achieve this level of progress in your company—and currently there aren’t nearly enough deep learning experts to meet the demand. You also need enormous amounts of labeled data to bring deep learning systems to fruition, while people can learn from just a few examples. We don’t even know how to represent many common concepts to the machine today.
Keep in mind that the journey from ANI to AGI is not just difference in scale. It requires radical improvements and perhaps radically different technologies. Be careful to distinguish what seems intelligent from what is intelligent, and don’t mistake a clear view for a short distance. But regardless, get started. The opportunity may well justify the effort. Even current AI capabilities can offer useful solutions to difficult problems, not just in individual organizations but across entire industries.
At some point in the future—perhaps within a decade—quantum computers that are exponentially more powerful than the most advanced supercomputers in use today could help address real-world business and governmental challenges. In the realm of personalized medicine, for example, they could model drug interactions for all 20,000-plus proteins encoded in the human genome. In climate science, quantum-enabled simulation might unlock new insights into human ecological impact.19
Another possibility: Quantum computers could render many current encryption techniques utterly useless.
How? Many of the most commonly deployed encryption algorithms today rely on the difficulty of integer factorization: decomposing a large composite number (the product of two large primes) back into those primes. Mathematical analysis shows that it would take classical computers millions of years to decompose the more than 500-digit numbers used in popular schemes such as RSA-2048, or to solve the related discrete logarithm problem underlying Diffie-Hellman. Mature quantum computers will likely be able to solve both problems in seconds.20
Thought leaders in the quantum computing and cybersecurity fields offer varying theories on when or how such a mass decryption event might begin, but on one point they agree: Its impact on personal privacy, national security, and the global economy would likely be catastrophic.21
Yet all is not lost. As an exponential force, quantum computing could turn out to be both a curse and a blessing for cryptology. The same computing power that bad actors deploy to decrypt today’s common security algorithms for nefarious purposes could just as easily be harnessed to create stronger quantum-resistant encryption. In fact, work on developing post-quantum encryption around some principles of quantum mechanics is already under way.
In the meantime, private and public organizations should be aware of the quantum decryption threat on the horizon, and that in the long term, they will need new encryption techniques to “quantum-proof” information—including techniques that do not yet exist. There are, however, several interim steps organizations can take to enhance current encryption techniques and lay the groundwork for additional quantum-resistant measures as they emerge.
In Tech Trends 2017, we examined quantum technology, which can be defined broadly as engineering that translates properties of quantum mechanics into practical applications in computing, sensors, cryptography, and simulations. Efforts to harness quantum technology in a general-purpose quantum computer began years ago, though at present, engineering hurdles remain. Nonetheless, there is an active race under way to achieve a state of “quantum supremacy,” in which a quantum computer provably surpasses the problem-solving capability of the world’s most powerful supercomputers.22
To understand the potential threat that quantum computers pose to encryption, one must also understand Shor’s algorithm. In 1994, MIT mathematics professor Peter Shor developed a quantum algorithm that could factor large integers very efficiently. The only problem was that in 1994, there was no computer powerful enough to run it. Even so, Shor’s algorithm basically put “asymmetric” cryptosystems based on integer factorization—in particular, the widely used RSA—on notice that their days were numbered.23
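The structure of Shor's algorithm can be sketched at toy scale. The period-finding step below is done by brute force, which is exactly the part a quantum computer would perform exponentially faster; the rest of the algorithm is classical post-processing. A minimal sketch, workable only for tiny numbers:

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Brute-force the multiplicative order r of a mod n -- the period
    that Shor's quantum subroutine finds exponentially faster."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_sketch(n: int, a: int):
    """Classical post-processing of Shor's algorithm, on a toy modulus n."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g          # lucky guess: a shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None               # odd period: retry with a different a
    f = gcd(pow(a, r // 2) - 1, n)
    if f in (1, n):
        return None               # trivial factor: retry with a different a
    return f, n // f

print(shor_classical_sketch(15, 7))  # prints (3, 5); period of 7 mod 15 is 4
```

For a 2048-bit modulus, the brute-force `order` loop would run longer than the age of the universe; the quantum subroutine is what collapses that step to polynomial time.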
To descramble encrypted information—for example, a document or an email—users need a key. Symmetric (shared-key) encryption uses a single key, held both by the creator of the encrypted information and by anyone the creator authorizes to access it. Asymmetric (public-key) encryption uses two keys: one kept private, the other made public. Anyone can encrypt a message using the public key, but only holders of the associated private key can decrypt it. With sufficient (read: quantum) computing power, Shor’s algorithm would be able to crack two-key asymmetric cryptosystems without breaking a sweat. It is worth noting that another quantum algorithm—Grover’s algorithm, which also demands high levels of quantum computing power—can be used to attack symmetric ciphers, though it weakens rather than outright breaks them.24
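A textbook-scale example makes the two-key relationship, and its dependence on factoring, concrete. The primes here are tiny and trivially factorable; real RSA uses moduli of roughly 2048 bits:

```python
# Textbook RSA with tiny illustrative primes (insecure by design).
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the PUBLIC key
recovered = pow(ciphertext, d, n)  # only the PRIVATE key holder can decrypt
assert recovered == message

# Shor's threat in miniature: anyone who factors n can recompute d.
d_attacker = pow(e, -1, (p - 1) * (q - 1))
assert d_attacker == d
```

The entire security of the scheme rests on the attacker being unable to recover `p` and `q` from `n`—which is precisely the problem Shor's algorithm solves efficiently.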
One common defensive strategy calls for larger key sizes. However, creating larger keys requires more time and computing power. Moreover, larger keys often result in larger encrypted files and signature sizes. Another, more straightforward post-quantum encryption approach uses large symmetric keys. Symmetric keys, though, require some way to securely exchange the shared keys without exposing them to potential hackers. How can you get the key to a recipient of the encrypted information? Existing symmetric key management systems such as Kerberos are already in use, and some leading researchers see them as an efficient way forward. The addition of “forward secrecy”—generating fresh, ephemeral keys for each session’s key agreement—adds strength to the scheme. With forward secrecy, compromising the key of one session doesn’t expose the messages of other sessions.
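Key agreement with forward secrecy is typically built on Diffie-Hellman. A toy sketch with deliberately tiny, insecure parameters (real deployments use roughly 2048-bit groups or elliptic curves); the point is that the secrets are generated fresh per session and then discarded:

```python
import secrets

# Toy Diffie-Hellman key agreement (tiny, insecure demo parameters).
p, g = 23, 5   # public prime modulus and generator

# Fresh ("ephemeral") secrets per session are what provide forward
# secrecy: compromising one session reveals nothing about the others.
a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret

A = pow(g, a, p)   # Alice sends this over the public channel
B = pow(g, b, p)   # Bob sends this over the public channel

alice_key = pow(B, a, p)
bob_key   = pow(A, b, p)
assert alice_key == bob_key   # both sides derive the same shared secret
```

Note that Diffie-Hellman itself falls to Shor's algorithm; in a post-quantum design, the same ephemeral-per-session discipline would be applied to a quantum-resistant key exchange instead.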
Key vulnerability may not last indefinitely. Some of the same laws of quantum physics that enable massive computational power are also driving the growing field of quantum cryptography. In a wholly different approach to encryption, keys are encoded in pairs of entangled photons passed between the two parties sharing information, typically via a fiber-optic cable. The “no-cloning theorem,” closely related to Heisenberg’s uncertainty principle, dictates that a hacker cannot intercept or copy one of the photons without disturbing it. The sharing parties will realize they’ve been hacked when the photon-encrypted keys no longer match.25
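The detection mechanism can be illustrated with a classical simulation of BB84, a prepare-and-measure cousin of the entanglement-based scheme described above. A sketch under simplifying assumptions (ideal channel, no eavesdropper): bits measured in the wrong basis come out random, so only positions where both parties chose the same basis survive into the key, and an interceptor would inject detectable errors into exactly those positions.

```python
import secrets

# Classical simulation of BB84 quantum key distribution (idealized:
# noiseless channel, no eavesdropper present).
N = 2000
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0/1 = two bases
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

def measure(bit: int, prep_basis: int, meas_basis: int) -> int:
    # Matching basis: faithful result. Mismatched basis: random outcome.
    return bit if prep_basis == meas_basis else secrets.randbelow(2)

bob_bits = [measure(bit, ab, bb)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Bases are announced publicly; only matching positions are kept ("sifting").
sifted = [(a, b) for a, b, ab, bb
          in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
errors = sum(a != b for a, b in sifted)
print(f"sifted key length: {len(sifted)}, errors: {errors}")  # errors == 0
```

Adding an intercept-and-resend eavesdropper to the simulation would raise the error rate on the sifted bits to roughly 25 percent, which is exactly the mismatch signal the sharing parties watch for.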
Another option looks to the cryptographic past while leveraging the quantum future. A “one-time pad” system widely deployed during World War II generates a randomly numbered private key that is used only to encrypt a message. The receiver of the message uses the only other copy of the matching one-time pad (the shared secret) to decrypt the message. Historically, it has been challenging to get the other copy of the pad to the receiver. Today, the photonic-perfect quantum communication channel described above can facilitate the key exchange. In fact, it can generate the pad on the spot during an exchange.
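The one-time pad itself is simple enough to sketch in a few lines; as noted above, the hard part has always been delivering the pad, not applying it:

```python
import secrets

def one_time_pad(data: bytes, pad: bytes) -> bytes:
    """XOR each byte with the pad; the same operation both encrypts
    and decrypts, because XOR is its own inverse."""
    if len(pad) != len(data):
        raise ValueError("pad must be exactly as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # random, used once, then destroyed

ciphertext = one_time_pad(message, pad)
assert one_time_pad(ciphertext, pad) == message
```

Used correctly—a truly random pad, as long as the message, never reused—the one-time pad is information-theoretically secure against any computer, quantum or otherwise, which is why pairing it with a quantum key-exchange channel is attractive.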
We don’t know if it will be five, 10, or 20 years before efficient and scalable quantum computers fall into the hands of a rogue government or a black hat hacker. In fact, it’s more likely that instead of the general-purpose quantum computer, special-purpose quantum machines will emerge sooner for this purpose. We also don’t know how long it will take the cryptography community to develop—and prove—an encryption scheme that will be impervious to Shor’s algorithm.
In the meantime, consider shifting from asymmetric encryption to symmetric. Given the vulnerability of asymmetric encryption to quantum hacking, transitioning to a symmetric encryption scheme with shared keys and forward secrecy may help mitigate some “quantum risk.” Also, seek opportunities to collaborate with others within your industry, with cybersecurity vendors, and with start-ups to create new encryption systems that meet your company’s unique needs. Leading practices for such collaborations include developing a new algorithm, making it available for peer review, and sharing results with experts in the field to prove it is effective. No matter what strategy you choose, start now. It could take a decade or more to develop viable solutions, prototype and test them, and then deploy and standardize them across the enterprise. By then, quantum computing attacks could have permanently disabled your organization.
Shihan Sajeed holds a Ph.D. in quantum information science. His research focuses on the emerging fields of quantum key distribution systems (QKD), security analyses on practical QKD, and quantum non-locality. As part of this research, Dr. Sajeed hacks into systems during security evaluations to try to find and exploit vulnerabilities in practical quantum encryption.
Dr. Sajeed sees a flaw in the way many people plan to respond to the quantum computing threat. Because it could be a decade or longer before a general-purpose quantum computer emerges, few feel any urgency to take action. “They think, ‘Today my data is secure, in flight and at rest. I know there will eventually be a quantum computer, and when that day comes, I will change over to a quantum-resistant encryption scheme to protect new data. And then, I’ll begin methodically converting legacy data to the new scheme,’” Dr. Sajeed says. “That is a fine plan if you think that you can switch to quantum encryption overnight—which I do not—and unless an adversary has been intercepting and copying your data over the last five years. In that case, the day the first quantum computer goes live, your legacy data becomes clear text.”
A variety of quantum cryptography solutions available today can help address future legacy data challenges. “Be aware that the technology of quantum encryption, like any emerging technology, still has vulnerabilities and there is room for improvement,” Dr. Sajeed says. “But if implemented properly, this technology can make it impossible for a hacker to steal information without alerting the communicating parties that they are being hacked.”
Dr. Sajeed cautions that the journey to achieve a reliable implementation of quantum encryption takes longer than many people think. “There’s math to prove and new technologies to roll out, which won’t happen overnight,” he says. “Bottom line: The time to begin responding to quantum’s threat is now.”26
Some think it is paradoxical to talk about risk and innovation in the same breath, but coupling those capabilities is crucial when applying new technologies to your business. In the same way that developers don’t typically reinvent the user interface each time they develop an application, there are foundational rules of risk management that, when applied to technology innovation, can both facilitate and even accelerate development rather than hinder it. For example, having common code for core services such as access to applications, logging and monitoring, and data handling can provide a consistent way for developers to build applications without reinventing the wheel each time. To that end, organizations can accelerate the path to innovation by developing guiding principles for risk, as well as developing a common library of modularized capabilities for reuse.
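As a sketch of what a "common library of modularized capabilities" might look like in code, the example below packages two cross-cutting risk controls (an access check and audit logging) so each new application can reuse them rather than reinvent them. All names and the role policy here are hypothetical, not a real framework:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("risk")

def with_risk_controls(required_role: str):
    """Reusable guard: authorize the caller, then audit the call.
    Hypothetical policy model for illustration only."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if required_role not in user.get("roles", []):
                log.warning("DENIED %s -> %s", user.get("name"), fn.__name__)
                raise PermissionError(fn.__name__)
            log.info("ALLOWED %s -> %s", user.get("name"), fn.__name__)
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

# Application code opts in with one line instead of re-implementing controls.
@with_risk_controls(required_role="analyst")
def export_customer_data(user, segment):
    return f"export of {segment} for {user['name']}"

print(export_customer_data({"name": "ana", "roles": ["analyst"]}, "emea"))
```

The design choice is the point: because the control lives in one shared module, hardening it (stronger checks, richer audit trails) upgrades every application that uses it.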
Once you remove the burden of critical and common risks, you can turn your attention to those that are unique to your innovation. You should evaluate the new attack vectors the innovation could introduce, group and quantify them, then determine which risks are truly relevant to you and your customers. Finally, decide which you will address, which you can transfer, and which may be outside your scope. By consciously embracing and managing risks, you actually may move faster in scaling your project and going to market.
Artificial general intelligence. AGI is like a virtual human employee that can learn, make decisions, and understand things. You should think about how you can protect that worker from hackers, as well as put controls in place to help it understand the concepts of security and risk. You should program your AGI to learn and comprehend how to secure data, hardware, and systems.
AGI’s real-time analytics could offer tremendous value, however, when incorporated into a risk management strategy. Today, risk detection typically occurs through analytics that can take days or weeks to complete, leaving your system open to similar risks until it is updated to prevent a recurrence.
With AGI, however, it may be possible to automate and accelerate threat detection and analysis. Then notification of the event and the response can escalate to the right level of analyst to verify the response and speed the action to deflect the threat—in real time.
Quantum computing and encryption. The current Advanced Encryption Standard (AES) has been in place since 2001. In that time, some have estimated that even the most powerful devices and platforms would need an impractically long time to brute-force AES with a 256-bit key. Now, as quantum computing allows higher-level computing in a shorter amount of time, it could become possible to break the codes currently protecting networks and data.
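A hedged rule of thumb puts the quantum threat to symmetric ciphers in perspective: Grover's algorithm offers at most a quadratic speedup on key search, which roughly halves the effective key length—and doubling the key size restores the original margin:

```python
# Back-of-envelope only: Grover's quadratic speedup means searching
# 2**n keys takes on the order of 2**(n/2) quantum steps.
def effective_bits_under_grover(key_bits: int) -> int:
    return key_bits // 2

for key_bits in (128, 256):
    print(f"AES-{key_bits}: ~{effective_bits_under_grover(key_bits)}-bit "
          f"effective strength against a quantum brute-force search")
```

This is why symmetric schemes are generally considered far more quantum-resilient than the asymmetric schemes threatened by Shor's algorithm: migrating from AES-128 to AES-256 largely neutralizes Grover, whereas no key-size increase rescues RSA from polynomial-time factoring.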
Possible solutions may include generating a larger key size or creating a more robust algorithm that is more computing-intensive to decrypt. However, such options could overburden your existing computing systems, which may not have the power to complete these complex encryption functions.
The good news is that quantum computing also could have the power to create new algorithms that are more difficult and computing-intensive to decrypt. For now, quantum computing is primarily still in the experimental stage, and there is time to consider designing quantum-specialized algorithms to protect the data that would be most vulnerable to a quantum-level attack.
Though the promise—and potential challenge—that exponential innovations such as AGI and quantum encryption hold for business is not yet fully defined, there are steps companies can take in the near term to lay the groundwork for their eventual arrival. As with other emerging technologies, exponentials often offer competitive opportunities in adjacent innovation and early adoption. CIOs, CTOs, and other executives can and should begin exploring exponentials’ possibilities today.