Exponentials represent unprecedented opportunities as well as existential threats. Explore five with far-reaching, transformative impact.
Each year, this report analyzes trends in technology put to business use. To be included, a topic should clearly demonstrate its potential to impact businesses in the next 18 to 24 months. We also require a handful of concrete examples that demonstrate how organizations have put the trend to work—either as early adoption of the concept or “bread crumbs” that point toward the fully realized opportunity. Our criteria for choosing trends keep us on the practical side of provocative, as each trend is relevant today and exhibits clear, growing momentum. We encourage executives to explore these concepts and feed them into this year’s planning cycle. Not every topic warrants immediate investment. However, enough have demonstrated potential impact to justify a deeper look.
Because we focus on the nearer-term horizon, our Technology Trends report typically only hints at broader disruptive technology forces. This year, in collaboration with leading researchers at Singularity University, we have added this section on "exponential" technologies, the core area of research and focus at Singularity University. The fields we chose to cover have far-reaching, transformative impact and represent the elemental advances that have formed technology trends both this year and in the past. In this section, we explore five exponentials with wide-ranging impact across geographies and industries: artificial intelligence, robotics, cyber security, additive manufacturing, and advanced computing.
In these pages we provide a high-level introduction to each exponential—a snapshot of what it is, where it comes from, and where it’s going. Each exponential stems from many fields of study and torrents of research. Our goal is to drive awareness and inspire our readers to learn more. Many of these exponentials will likely create industry disruption in 24 months or more, but there can be competitive opportunities for early adoption. At a minimum, we feel executives can begin contemplating how their organizations can embrace exponentials to drive innovation. Exponentials represent unprecedented opportunities as well as existential threats. Don’t get caught unaware—or unprepared.
In 2012 the world experienced what I call “the new Kodak moment”: a moment in time when an exponential technology put a linear-thinking company out of business. Kodak, the company that invented the digital camera in 1976 and had grown into a 145,000-person,1 $28 billion global company at its peak, filed for bankruptcy in 2012, put out of business by the exponential technology of digital imagery. In stark contrast, another company in the digital imagery business, Instagram, was acquired that same year by Facebook for $1 billion. Instagram’s headcount: 13 employees.
These moments are going to become the norm as exponentially minded startups replace linear businesses with unprecedented products and services. Although they pose a daunting challenge, exponential technologies offer extraordinary opportunities to the businesses that can keep pace with them.
The lessons learned from Kodak are the consequences of failing to keep up with what I call the “six Ds.” The first D is digitization. Technology that becomes digitized hops on Moore’s Law and begins its march up the exponential growth curve. Like many companies, Kodak was blindsided by the next D—deceptive growth. When a product, such as imagery, becomes digitized, it jumps from a linear path to an exponential trajectory. The challenge is that early exponential doublings are deceptive. The first Kodak digital camera was only 0.01 megapixels. Even though it was doubling every year, when you double 0.01 to 0.02, then 0.04, 0.08, and 0.16, this doubling of small numbers near zero looks to the mind like linear growth and is dismissed. It’s only when you continue forward past what is called the “knee of the curve” that the pace becomes unmistakable. Double seven times from 1 and you get to 128. Twenty-three more doublings (a total of 30) get you to roughly 1 billion. Business leaders often perceive the early stages as slow, linear progress. Until, of course, the trend hits the third D—disruption.
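To make the doubling arithmetic concrete, here is a minimal sketch of the curve described above; the breakpoints shown (4, 7, and 30 doublings) are chosen purely for illustration.

```python
# Illustrative arithmetic only: early exponential doublings look almost flat,
# then explode past the "knee of the curve."
value = 1.0
for doubling in range(1, 31):
    value *= 2
    if doubling in (4, 7, 30):
        print(f"After {doubling:2d} doublings: {value:,.0f}")

# Output:
# After  4 doublings: 16
# After  7 doublings: 128
# After 30 doublings: 1,073,741,824
```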
By the time a company’s product or service is disrupted, it is difficult to catch up. Disruptive growth ultimately leads to the last three Ds—dematerialization, demonetization, and democratization, which can fundamentally change the market. The smartphone in your pocket has dematerialized many physical products by providing their virtual equivalents—a GPS receiver in your car, books, music, and even flashlights. Once these equivalents gain market traction, the established product’s commercial value can plummet. It becomes demonetized. iTunes®,2 for example, is impacting the value of record stores. eBay is doing the same to specialty retailers. Craigslist has stripped newspapers of classified advertising revenue. Once products become dematerialized and demonetized, they become democratized—spreading around the world through the billions of connected devices we carry around.
Many business leaders confront exponentials with a stress mindset. They realize that the odds of survival aren’t great. Babson College noted that 40 percent of the Fortune 500 companies in 2000 didn’t exist 10 years later.3 However, the other side of the coin is an abundance mindset—awareness of the limitless opportunity. Between now and 2020, the world’s population of digitally connected people will jump from two to five billion.4 That growth will also add tens of trillions of dollars in economic value.
To land on the opportunity side of the coin and avoid shocks down the road, companies can take two immediate steps:
First, recognize that your competition is no longer the multinational powerhouse in China or India. Your competition now is the hyper-connected startup anywhere in the world that is using exponential technologies to dematerialize and demonetize your products and services. Someone in New York can upload a new idea into the cloud, where a kid in Mumbai builds on it and hands it off to a Bangladeshi company to handle production and marketing. Companies need to make sure their plans are in sync with this world and its dynamics.
Second, companies should consider their strategy in the context of leveraging two types of exponentials: pure exponential technologies such as artificial intelligence, synthetic biology, robotics, and 3D printing, and what I call “exponential crowd tools,” namely crowdsourcing, crowdfunding, and prize-based incentive competitions. If companies then marry this portfolio of exponential assets with the understanding that today’s grandest societal and planetary challenges are also today’s most promising commercial market opportunities, it can truly be a formula for abundance.
Computer science researchers have been studying Artificial Intelligence (AI) since John McCarthy introduced the term in 1955.5 Defined loosely as the science of making intelligent machines, AI can cover a wide range of techniques, including machine learning, deep learning, probabilistic inference, neural network simulation, pattern analysis, decision trees and random forests, and others. For our purposes, we focus on how AI can simulate reasoning, develop knowledge, and allow computers to set and achieve goals.
The ubiquity of, and low-cost access to, distributed and cloud computing have fueled the maturation of AI techniques. AI tools are becoming more powerful and simpler to use. This maturity is the first part of the story: how AI is becoming democratized and can be applied across industries, not just in areas such as credit card processing and trading desks, where AI has been gainfully employed for 45 years. The next part of the story focuses on our desire to augment and enhance human intelligence.
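As a small illustration of how accessible some of the techniques named above have become, the sketch below trains a random forest classifier on a bundled sample dataset; the library (scikit-learn) and the iris dataset are illustrative assumptions, not tools referenced in this report.

```python
# Minimal, illustrative sketch: a random forest classifier trained with
# scikit-learn on its bundled iris dataset (choices made for illustration only).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A few lines of code now stand in for what once required specialized expertise and infrastructure, which is precisely the democratization described above.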
We are increasingly overwhelmed by the flood of data in our lives—1.8 zettabytes of information are being created annually.6 But we are saddled with an ancient computing architecture that hasn’t seen a major upgrade in more than 50,000 years: the brain. We suffer from cognitive biases and limitations that restrict the amount of information we can process and the complexity of calculations we can entertain. People are also susceptible to affectations and social perceptions that can muddy logic—anchoring on first impressions to confirm suspicions instead of testing divergent thinking.
AI can help solve specific challenges such as improving the accuracy of predictions, accelerating problem solving, and automating administrative tasks. The reality is that with the right techniques and training, many jobs can be automated. That automation is underway through many applications in several fields, including advanced manufacturing, self-driving vehicles, and self-regulating machines. In addition, the legal profession is availing itself of AI in everything from discovery to litigation support. DARPA is turning to AI to improve military air traffic control as automated, self-piloted aircraft threaten to overrun airspace. In health care, AI is being used in both triage and administration. The world’s first synthetic bacterium was created using AI techniques with sequencing.7 Energy firms are using AI for micro-fossil exploration in deep oil reserves at the bottom of the ocean. AI can also be leveraged for situational assistance and logistics planning for military campaigns or mass relief programs. In sum, AI represents a shift: a move from the computer as a tool for executing tasks to a team member that helps guide thinking and can do work.
Despite these successes, many of today’s efforts focus on specific, niche tasks where machine learning is combined with task and domain knowledge. When we add biologically inspired computing architectures, the ability to reason, infer, understand context, develop evolving conceptual models of cognitive systems, and perform many different flavors of tasks becomes attainable.
In the meantime, AI faces barriers to its widespread adoption. Recognize that in developed nations, its use may encounter obstacles, especially as labor organizations fight its increased use and its potential to decrease employment. The ethics of AI are also rightly a focus of attention, including the need for safeguards, transparency, liability determination, and other guidelines and mechanisms that steer toward responsible adoption of AI. But these realities should not curb the willingness to explore. Companies should experiment and challenge assumptions by seeking out areas where seemingly unachievable productivity could positively disrupt their businesses.
Inspired by lectures given by Neil Jacobstein, artificial intelligence and robotics co-chair, Singularity University
Neil Jacobstein co-chairs the artificial intelligence and robotics track at Singularity University. He served as president of Singularity University from October 2010 to October 2011 and worked as a technical consultant on AI research for a variety of businesses and government agencies.
Mechanical devices that can perform both simple and complex tasks have been a pursuit of mankind for thousands of years. Artificial intelligence and exponential improvements in technology have fueled advances in modern robotics through tremendous power, a shrinking footprint, and plummeting costs. Sensors are a prime example. Those that guided the space shuttle in the 1970s were the size of foot lockers and cost approximately $200,000. Today, they are the size of a fingernail, cost about 10 cents, and are far more reliable.
Robotics is fundamentally changing the nature of work. Every job could potentially be affected—it’s only a matter of when. Menial tasks were the early frontier. Assembly lines, warehouses, and cargo bays have been the enterprise beachheads of robotics. But that was only the beginning. Autonomous drones have become commonplace in militaries, first for surveillance and now with weapon payloads. Amazon fulfillment centers are largely automated, with robots picking, packing, and shipping in more than 18 million square feet of warehouses.8 The next frontier is tasks that involve gathering and interpreting data in real time. Eventually these tasks, too, can be handed to machines, threatening entire job categories with obsolescence. Oxford Martin research predicts that 45 percent of US jobs will be automated in the next 20 years.9
On the not-so-distant horizon, for example, gastroenterologists won’t need to perform colonoscopies. Patients will be able to ingest a pill-sized device with a camera that knows what to look for, photographs it, and can potentially attack diseased tissue or inject new DNA. Boston Dynamics is rolling out Big Dog, Bigger Dog, and Cheetah—robots that can carry cargo over uneven terrain in dangerous surroundings. Exoskeletons can create superhuman strength or restore motor function in the disabled. Remote health care is coming. It will likely arrive first with robotics-assisted virtual consultation, followed by surgical robots that can interpret and translate a surgeon’s hand movements into precise robotic movements thousands of miles away. Companies are also pursuing autonomous cars. Personal drone-based deliveries could disrupt retail. The limits are our imaginations—but not for long.
Robotics should be on many companies’ radars, but businesses should expect workplace tension. To ease concerns, companies should target initial forays into repetitive, unpleasant work. Too often robotics is focused on tasks that people enjoy. Equally important, companies should prepare for the inevitable job losses. Enterprises should identify positions that aren’t likely to exist in 10 years, and leverage attrition and training to prepare employees for new roles. The challenge for business—and society as a whole—is to drive job creation at the same time that technology is making many jobs redundant. Ideally, displaced resources can be deployed in roles requiring creativity and human interaction—a dimension technology can’t replicate. Think of pharmacists. After as much as eight years of education, they spend the majority of their time putting pills into bottles and manually assessing complex drug interactions. When those functions are performed by robots, pharmacists can become more powerful partners to physicians by understanding a patient’s individual situation and modifying drug regimens accordingly.
At the end of the day, there are two things robots can’t help us with. The first is preservation of the human species, a concern more civic and philosophical than organizational. But the second is more practical—indefinable problems. For example, robots can’t find life on Mars because we don’t know what it might look like. Everything else is fair game. Be ready to open the pod bay doors of opportunity—before your competition does.
Inspired by lectures given by Dan Barry, artificial intelligence and robotics co-chair, Singularity University
Dan Barry is a former NASA astronaut and a veteran of three space flights, four spacewalks, and two trips to the International Space Station. He is a licensed physician and his research interests include robotics, signal processing with an emphasis on joint time-frequency methods, and human adaptation to extreme environments.
A few hundred years ago, a robbery consisted primarily of a criminal and an individual victim—a highly personal endeavor with limited options for growth. The advent of railroads and banks provided opportunities to scale, allowing marauders to rob several hundred people in a single heist. Today, cyber criminals have achieved astonishing scale. They can attack millions of individuals at one time with limited risk and exposure.
The same technological advances and entrepreneurial acumen that are creating opportunities for business are also arming the world’s criminals. Criminal organizations are employing an increasing number of highly educated hackers who find motivation in the challenges of cracking sophisticated cyber security systems.10 These entrepreneurial outlaws represent a new crime paradigm, one that is reaching frightening levels of scale and efficiency.
A few examples illustrate the daunting landscape: Hackers are available for hire online and also sell software capable of committing their crimes. A few years ago, for example, INTERPOL caught a Brazilian crime syndicate selling DVD software that could steal customer identities and banking information. The purveyors guaranteed that 80 percent of the credit card numbers pilfered through the software would be valid. Its customers could also contact a call center for support.
Cyber criminals are also leveraging the crowd. Flash Robs, for example, are becoming a new craze where social media is used to bring individuals to a specific store to steal goods before police can arrive. Another crowdsourced crime looted $45 million from a pre-paid debit card network. Hackers removed the card limits. Thieves then bought debit cards for $10 and withdrew what they wanted. In just 10 hours, the crowd made more than 36,000 withdrawals in 27 countries.
What looms on the horizon is even more daunting. With the Internet of Things, every car, consumer appliance, and piece of office equipment could be linked and ready for hacking. As fingerprints become a standard means of authentication, biometric data will become a prime target for ingenious theft.
The experience of the US Chamber of Commerce portends the future. The organization’s photocopiers, like many, are equipped with hard drives that store printed documents. In the past, industrial criminals disguised as repairmen had to physically remove the devices. However, once the chamber installed thermostats connected to the Internet, hackers could breach the network and, through it, the copiers. Officials discovered the attack only through a defect that inadvertently sent the hackers’ documents to the copiers.
There are steps that companies can take to combat cybercrime. The first is to establish risk-prioritized controls that protect against known and emerging threats while complying with standards and regulations. Companies should also identify which of their assets would likely attract criminals and assess the impact of a theft or breach. Organizations should then become vigilant, establishing situational risk and threat awareness programs across the environment. Security information and event management (SIEM) capabilities can be enhanced, and new functionality can be mined from tools including endpoint protection, vulnerability assessment/patch management, content monitoring, data loss prevention, intrusion prevention, and core network services. The final step is building resilience: the ability to handle critical incidents, quickly return to normal operations, and repair damage done to the business.
Companies can also turn to the crowd. Security professionals have knowledge that can help investigations and warn of potential threats. The legal environment is also important. Business leaders should advocate for laws and policies that seek to contain cybercrime and also avail themselves of resources provided by federal agencies.
Cybercrime is accelerating at an exponential pace. In the not-so-distant future, everything from our watches to the EKG monitors in hospitals will be connected to the Internet and ready to be hacked. Companies should be prepared to survive in an environment where these threats are commonplace.
Inspired by lectures given by Marc Goodman, chair for policy, law, and ethics and global security advisor, Singularity University
Marc Goodman is a global strategist, author, and consultant focused on the disruptive impact of advancing technologies on security, business, and international affairs. At Singularity University, he serves as the faculty chair for policy, law, and ethics and the global security advisor, examining the use of advanced science and technology to address humanity’s grand challenges.
The technology that supports additive manufacturing, or 3D printing, is more than 30 years old. Its recent popularity has been fueled in part by patent expirations, which are driving a wave of consumer-oriented printers. Prices have fallen, putting the technology within the reach of early adopters. 3D printing is democratizing the manufacturing process and bringing about a fundamental change in what we can design and what we can create.
But the story goes much deeper than hobbyists and desktop models. The cost of a 3D printer ranges from a few hundred to a few million dollars. The machines can print with hundreds of materials, including nylons, plastics, composites, fully dense metals, rubber-like materials, circuit boards, and even genetic tissue. Breakthroughs in speed, resolution, and reliability demonstrate potential not only for scale but also for unlocking new possibilities.
The real exponential impact, however, is in the simplicity of the supporting tools. They provide a means to digitize existing objects, customize and tweak open source designs, or create brand new designs based on structural and industrial engineering know-how. Intuitive, easy-to-use tools allow “things” to be created, manipulated, and shared.
In essence, 3D printing makes manufacturing complexity free of charge, allowing otherwise impossible designs to be realized. Objects are built one layer at a time, depositing material in increments as small as 100 nanometers exactly where and when it is needed. Mechanical items with moving parts can be printed in one step—no assembly required. Interlocking structures mimicking nature’s design laws are possible with nearly unlimited geometrical freedom—no tooling, set-ups, or change-overs. Moreover, objects can be built just in time, when and where they are needed. The capability unlocks business performance in a highly sustainable manner by reducing inventory, freight, and waste. 3D printing’s value is not limited to complex objects. On-site creation of investment castings or construction molds can supplement traditional manufacturing techniques.
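To give a rough sense of the scale implied by that deposition resolution, the sketch below counts the layers a hypothetical part would require at a few assumed layer thicknesses; the dimensions are illustrative choices, not figures from the report.

```python
# Illustrative arithmetic only: layer counts for a hypothetical 5 cm tall part
# at several assumed layer thicknesses (values chosen for illustration).
part_height_m = 0.05  # 5 cm
for layer_nm in (100, 1_000, 100_000):  # 100 nm, 1 micron, 100 microns
    layers = part_height_m / (layer_nm * 1e-9)
    print(f"{layer_nm:>7,} nm layers -> {layers:,.0f} layers")
```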
3D printing is not just for prototypes and mock-ups. Many sectors already use the technology for finished parts and products. The aerospace industry, for example, has led the charge on additive manufacturing. Jet engine parts such as manifolds traditionally require more than 20 pieces that are individually manufactured, installed, welded, ground, and tested to form a finished product. The 3D-printed alternative is easier to build and service and also reduces overall system weight. Medical device makers use 3D printing to customize and personalize everything from dental crowns to hearing aids to prosthetics.
The potential doesn’t end there. More fantastical use cases are starting to become a reality, such as mass customization of consumer goods, including personalized products ranging from commodities to toys to fashion, with “print at home” purchase options. Even food printers are entering the market, starting with chocolates and other sugar and starch staples, but moving toward meats and other proteins. Organs, nerves, and bones could be fully printed from human tissue, transforming health care from clinical practice to part replacement—and even life extension. Leading thinkers are exploring self-organizing matter and materials with seemingly magical properties. One example is already here: a plane built of composites with the ability to morph and change shape, ending the need for traditional flaps and their associated hydraulic systems and controls.
The enterprise implications are many—and potentially profound. First, organizations should take an honest look at their supply chain and market offerings—and identify where the technology could enhance or replace these offerings. As we discussed in the Digital engagement chapter, intellectual property and rights issues will emerge, along with new paths to monetize and disrupt. Finally, business leaders should embrace the democratized creativity the technology is unleashing. Companies can use 3D printing to drive faster product innovation cycles, especially where it can push the boundaries of possibilities based on materials science and manufacturing techniques.
Inspired by lectures given by Avi Reichental, co-chair for nanotechnology and digital fabrication, Singularity University
Avi Reichental currently serves as faculty co-chair of the additive manufacturing program at Singularity University. He has been the president and chief executive officer of 3D Systems since September 2003.
Advances in raw computing power and connectivity are frequently the building blocks of our annual tech trends report. Core lessons that have guided us through the Internet revolution remain true today, and are steering us toward exponential advances in the future of computing.
The first lesson is the importance of early adopters and how they personally and commercially kick-start industries and adoption. Early adopters have an insatiable demand for improvement and for the doubling of performance. Moore’s Law forecasts how many transistors per dollar could be put onto a chip wafer. Engineering curiosity and scientific prowess have fueled many advances in the field. Nonetheless, to build growth and feed customer demand, companies continue to invest in seismic performance improvements because they know there is a demand for products that are twice as good.
The second lesson is an open, hackable ecosystem with a cost contract that encourages experimentation through its lack of incremental accounting for network usage. From the system kits of the PC revolution to the open source movement to today’s Arduino and Raspberry Pi hobbyists, a culture of innovation and personal discovery is driving advances in open communities instead of proprietary labs. Lessons learned are shared in ways that accelerate new discoveries.
The third lesson is that the magical ingredient of the Internet is not the technology of packet switching or transport protocols. The magic is that the network is necessarily “stupid,” allowing for experimentation and new ideas to be explored on the edges without justifying financial viability on day one.
On the computing side, we are at a fascinating point in history. Rumblings about the end of Moore’s Law are arguing the wrong point. True, chip manufacturers are reaching the limits that materials science and the laws of physics place on the indefinite doubling of performance through traditional architectures and manufacturing techniques. Even if we could keep packing in the transistors, the power and heat profiles would become impractical. However, we have already seen a shift from measuring the performance of a single processor to that of multiple cores or processors on a single chip. We still see performance doubling at a given price point—not because the processor is twice as powerful, but because twice the number of processors are on a chip for the same price. We’re now seeing advances in multidimensional chip architecture, where three-dimensional designs are taking this trend to new extremes. Shifts to biological and quantum computing raise the stakes even further through the potential for exponential expansion of what is computationally possible. Research in the adjacent fields of microelectromechanical systems (MEMS) and nanotechnology is redefining “hardware” in ways that can transform our world. However, as with our modest forays into multi-core traditional architectures, operating systems and software need to be rewritten to take advantage of advances in infrastructure. We’re in the early days of this renaissance.
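A toy calculation makes the core-count point concrete; the per-core speed and generation counts below are assumptions chosen purely for illustration, not vendor benchmarks.

```python
# Illustrative arithmetic only: at a fixed price point, aggregate throughput can
# keep doubling through added cores even when per-core speed stays flat.
per_core_speed = 1.0  # arbitrary performance units, held constant
cores = 1
for generation in range(5):
    total = per_core_speed * cores
    print(f"Generation {generation}: {cores:2d} cores -> {total:4.0f} units at the same price")
    cores *= 2
```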
The network side is experiencing similar exponential advances. Technologies are being developed that offer potentially limitless bandwidth at nearly ubiquitous reach. Scientific and engineering breakthroughs range from ultra-capacity fiber capable of more than 1 petabit per second11 to heterogeneous networks of small cells (micro-, pico-, and femtocells12) to terahertz radiation13 to balloon-powered broadband in rural and remote areas.14
Civic implications are profound, including the ability to provide education, employment, and life-changing utilities to the nearly five billion people without Internet access today. Commercially, the combination of computing and network advances enables investments in the Internet of Things and synthetic biology, fields that also have the ability to transform our world. Organizations should stay aware of these rapidly changing worlds and find ways to participate, harness, and advance early adoption and innovation at the edge. These lessons will likely hold true through this exponential revolution—and beyond.
Inspired by lectures given by Brad Templeton, networks and computing chair, Singularity University
Brad Templeton is a developer of and commentator on self-driving cars, software architect, board member of the Electronic Frontier Foundation, Internet entrepreneur, futurist lecturer, and writer and observer of cyberspace issues. He is noted as a speaker and writer covering copyright law, political and social issues related to computing and networks, and the emerging technology of automated transportation.