Innovation: A chimera no more
Innovation is celebrated far and wide, but the lack of a shared, accurate definition has undermined our collective ability to manage it effectively. The implications are anything but academic. Companies that treat an attack based on differentiation as if it broke important trade-offs may overreact; those that mistake a true innovator for the merely different can feel the pain for decades.
Like the old chestnut of the bumblebee’s flight, innovation seems to work in practice but not in theory. There are myriad examples of success from which we can draw inspiration, yet almost no one seems able to innovate repeatedly and on purpose. Practitioners continue to lament the unpredictability of innovation, the more Zen-like among them embracing the idea that failure is inevitable. Who hasn’t been told something to the effect that if you’re not failing often, you’re not trying hard enough? It’s difficult to know if this is powerful advice or just defeat cloaked in the rhetoric of victory.
In such circumstances, it is common practice to invoke the parable of the six blind men and the elephant, with the hope that progress lies in synthesizing the many and divergent views. Unfortunately, such a path is not available to those who wish to understand innovation, for this field of inquiry faces a much more fundamental problem: Where the blind men knew that they each had purchase on the same animal, when it comes to innovation, many of us hold parts of entirely different beasts.
Think of the variety and diversity of initiatives in most organizations that seek to bask in innovation’s golden light. From disruptive new product initiatives to efforts to introduce recyclable cutlery in the commissary, there is precious little that doesn’t seem to qualify. It is not an elephant we seek to describe, but a menagerie. Imagine now the sightless six grasping variously the wing of a condor, the body of a lion, the horn of a rhino, and the fluke of a whale. It is unsurprising, if disappointing, that our efforts to make innovation manageable have conjured only chimeras.
Few other fields in applied management labor under this burden: Hedging financial risk belongs to finance, while motivating and rewarding employees falls to a subfield within human resources, and reducing the variation in the output of a manufacturing process belongs to operations management. Managers can be effective in these domains largely because the implicit or explicit definitions that limn the boundaries of each tell them what they need to know in order to achieve specifiable outcomes and how to improve over time. If we are to become similarly effective at managing innovation, we need to define what it is in practical, useful terms. Only then can we assemble the parts of the creature that truly belong together.
More than a harmless drudge
Establishing a useful definition to guide any field of inquiry is not an esoteric exercise but the most practical of first steps. Unfortunately, it is a step we have yet to take for innovation, which has been plagued, almost since its inception, with far too broad a notion of what it might encompass.
The trouble began with the seminal work of Joseph Schumpeter in the 1930s and 1940s. Almost single-handedly, the Harvard economist convinced a discipline obsessed with marginal cost competition that what really mattered was innovation, which he defined as, “the introduction of new goods…, new methods of production…, the opening of new markets…, the conquest of new sources of supply…, and the carrying out of a new organization of any industry.”1
Consider now what this definition places within innovation’s remit. Do we really think that finding a Chinese distributor for CAD software (opening new markets) requires the same sort of management processes as shifting from bricks to clicks in the retail sector (establishing a new organization)? Does exploring digital fabrication or additive manufacturing (3-D printing as a new method of production) raise challenges that are sufficiently similar to those arising from finding substitutes for rare earth metals in the high-tech sector (new sources of supply) that they can be treated as one and the same?
A reasonable question is whether having a common definition matters all that much. Can’t we follow the lead of the late US Supreme Court Justice Potter Stewart, who famously averred that when it came to obscenity, he knew it when he saw it?2 As a practical matter, the answer appears to be no. In a seemingly direct riposte to the Potter Stewart school of thought, a recent review of the literature identified 60 distinct definitions of innovation, prompting the derisive conclusion that researchers had collectively abandoned the question of definition entirely, leaving it “to the reader to intuitively understand what is now a popular subject in management literature.”3
When definitions are offered, they collectively lack the coherence necessary to create a solid, common foundation. Is innovation “the creation of new knowledge and ideas to facilitate new business outcomes,”4 “the effective application of processes and products new to the organization and designed to benefit it and its stakeholders,”5 “the generation, acceptance, and implementation of new ideas, processes, products, or services,”6 or something else altogether?
The lack of a shared, accurate definition has undermined our collective ability to manage innovation effectively because we cannot determine what matters and why.7 One study identified 9 factors and 31 subfactors that determined success.8 Another revealed 55 factors, and a metastudy of the field itemized 42 subfactors clustered into 10 factors.9 In short, efforts to understand innovation are looking at phenomena that are the same in name only, so it is no surprise that there are wildly different opinions about what matters most.10
How shall we get out of this muddle? We cannot adopt the lexicographer’s conceit and attempt to derive a definition from how the word is used. Yet on what basis and with what authority would we—or anyone else, for that matter—impose a definition?
No free lunch?
There is perhaps a third way: Rather than infer or impose a definition, we can perhaps derive one by following to its logical conclusion the microeconomic theory at the heart of modern competitive strategy.
In his 1996 article “What is strategy?” Harvard Business School professor Michael E. Porter synthesizes over 20 years of writing, research, and reflection on the implications of microeconomic theory for business competition.11 He concludes that different strategies are defined by the trade-offs in the performance of the activities that define the value created by a business model.12 Porter illustrates this framework using two dimensions of customer value: price and nonprice. (Nonprice value is really a vector of all the different dimensions of performance that customers want. For instance, in the case of automobiles, these might be safety, acceleration, styling, roominess, and so on.)
Delivering any given bundle of nonprice benefits always incurs a cost—it is tough, after all, to get something for nothing. The minimum cost required to achieve a specified nonprice value is not some fixed Platonic ideal: It is whatever cost is incurred by the lowest-cost provider in the market. Similarly, the level of any nonprice value that can be provided at any cost has a maximum: No matter what you’re willing to pay, you cannot have a car that goes from 0 to 60 in 2.8 seconds and gets 75 miles per gallon in the city. The limits of what can be provided at what cost describe the “productivity frontier” for a business model at a point in time.
In figure 1, at point 1, a firm can appear to break trade-offs and deliver greater nonprice value without an increase in cost; that is, it can move “right” to point 2 (an increase in nonprice value) without moving “down” (an increase in cost). This is possible because the firm is merely wringing out inefficiencies that others already know how to avoid. In other words, at point 1, it really can get something for nothing by working smarter rather than harder. Firms that have reached the frontier of what a given business model can do are “operationally excellent,” in Porter’s terms.
Once a firm gets to 2, however, that is as smart as it can work: The frontier defines the limits of what is possible at that moment. Of course, one could exploit different types of trade-offs to reach a different point on the frontier, competing instead at 3 by moving “up” (a reduction in cost) from 1 without moving “left” (a reduction in nonprice value). Once firms are at the frontier, however, changes in cost and nonprice value are inextricably linked: More of one necessarily means less of the other. Thus, 2 and 3 are qualitatively different strategies because they are at different points on the same frontier.
A company’s strategy, then, is defined by the trade-offs inherent in its business model, or the activities it performs in order to deliver value to customers. A company’s business model is strategically differentiated to the extent that it exploits a different set of trade-offs than its competition, choosing, for example, to provide higher quality but at higher cost and hence price.
For all its power, this model is essentially static because it takes the production possibility frontier (PPF) as given and fixed. This is a useful assumption, but like many assumptions, it eventually buckles under the weight of accumulating reality. In the auto industry, for example, the trade-off between cost and power has changed dramatically over time.
Today, for example, one of the least expensive machines that we are willing to call a “car” (a closed-body private transportation device with a given passenger capacity and range) is the Tata Nano. Its price (a proxy for relative cost) is approximately $2,600, and it has 38 horsepower. At the other end of the spectrum is the Bugatti Veyron, which at $1.9 million delivers 987 horsepower. These two automobiles define, to a reasonable approximation, the PPF of the trade-offs between cost and power in the commercial market for automobiles (figure 2).
It will come as no surprise that 90 years ago the industry was subject to different constraints. In 1920, a good candidate for the cheapest car generally available was the Ford Model T, which cost $3,200 (in 2013 dollars) and delivered 20 horsepower. Back then it was still a Bugatti (the Type 35) at the other extreme, which cost $180,000 inflation-adjusted and delivered 140 horsepower.
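The scale of this shift can be seen directly in the figures above. A quick back-of-the-envelope calculation (all prices taken from the text, with the 1920 figures already stated in inflation-adjusted dollars; the comparison is purely illustrative) shows dollars per horsepower at each end of the market, then and now:

```python
# Cost per horsepower at the cheap and expensive ends of the automotive
# market, 1920 vs. today. Prices and horsepower figures are from the text.
cars = {
    "Ford Model T (1920)":    (3_200,      20),
    "Bugatti Type 35 (1920)": (180_000,   140),
    "Tata Nano (today)":      (2_600,      38),
    "Bugatti Veyron (today)": (1_900_000, 987),
}

for name, (price, hp) in cars.items():
    print(f"{name:>24}: ${price / hp:>8,.0f} per horsepower")
```

At the cheap end, cost per horsepower fell from roughly $160 (Model T) to about $68 (Nano) even as absolute power nearly doubled: one concrete measure of how far the frontier has moved.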
It’s worth noting that breaking a trade-off does not necessarily translate into commercial success: Some innovations disappoint when the trade-offs broken are not broken in ways valued by customers. For example, the Nano has faced headwinds in finding marketplace acceptance. March 2013 Nano sales were down 86 percent from a year prior, and only 229,157 units have sold since inception. The reason seems to be that many scooter owners aren’t upgrading to the Nano because it isn’t viewed as a “real” car, and car buyers view the Nano as inexpensive and too akin to a scooter.13 In other words, although the Nano falls between a car and a scooter, it is still too close to a scooter. Consequently, commercial success seems to lie in being more like a car.
Independent of its commercial success, from an engineering standpoint, this outward expansion of the automotive sector’s PPF means that the combination of cost per horsepower and total horsepower readily available in a minivan today would have been unfathomable to the engineers contesting Le Mans during the interwar period. Such movement does not pose a problem for Porter’s notion of strategy since minivans in 2013 do not compete with racing cars from 1923. Yet this somewhat contrived example reveals how the accretion of many small improvements over the years can yield dramatic improvement overall.
Conceptually, of course, there is no difference between any one of those small improvements and their collective impact on automotive performance. How then are we to think of those products or services that expand the frontier compared to their contemporaries and, rather than competing by making different sets of trade-offs, compete by breaking trade-offs?
We propose that strategy is defined by the trade-offs you exploit, while innovation is defined by the trade-offs you break.
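The distinction can be made concrete with a toy model. In the sketch below, every number and the frontier function itself are invented for illustration; a competitor’s position is classified by comparing its cost against the minimum cost the current frontier allows for the nonprice value it delivers:

```python
# Toy model of the exploit-vs-break distinction. The "frontier" maps a
# level of nonprice value to the minimum cost currently achievable for it.
def classify_move(frontier, value, cost, tolerance=0.02):
    """Classify a competitor's (value, cost) position against the frontier."""
    frontier_cost = frontier(value)
    if cost < frontier_cost * (1 - tolerance):
        return "innovation"        # beats the best known trade-off
    if cost <= frontier_cost * (1 + tolerance):
        return "differentiation"   # a different point on the same frontier
    return "operational slack"     # inside the frontier: room to work smarter

# Hypothetical frontier: each extra unit of nonprice value costs more at the margin.
frontier = lambda v: 10 + v ** 1.5

print(classify_move(frontier, 4.0, 18.0))  # on the frontier: "differentiation"
print(classify_move(frontier, 4.0, 12.0))  # well below it: "innovation"
print(classify_move(frontier, 4.0, 25.0))  # well inside it: "operational slack"
```

The point of the sketch is only that the three cases call for different managerial responses; the hard empirical work lies in estimating where the frontier actually sits.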
Establishing the utility of a definition is not something one does with regression equations or purely deductive arguments. This definition will have to prove its worth one case at a time and gain currency only through adoption. To begin to make the case for defining innovation this way, consider four competitive battles and how viewing them through the lens of innovation as “breaking trade-offs” brings into focus what happened and why.
Beer and wings
In an oft-told tale, the structure of today’s American beer market is a legacy of prohibition. With the repeal of 1919’s 18th Amendment to the US Constitution through the passage of the 21st Amendment in 1933, the manufacture and sale of alcohol were once again legal. Americans, so the story goes, wanted their beer cheap, fast, and in large quantities. The only breweries that had managed to stay afloat were those big enough to diversify into other businesses, and so America’s brewing industry has long been dominated by a relatively small number of megabrewers: Today, the two largest, both global players, have 75 percent market share between them.14
Beginning in the 1970s, however, smaller microbreweries began to crop up. Focusing on specialty formulations—bocks, pale ales, wheat or honey beers, and so on—microbreweries brew small batches, distribute locally, and often use highly idiosyncratic ingredients and processes. With 10 percent of the US beer market today, microbreweries see themselves as innovative and are frequently described as such by the popular media.15
In truth, however, they are simply exploiting cost/performance trade-offs to appeal to less price-sensitive segments of the beer market. They have not found a way to make “better beer, cheaper.” Rather, they sacrifice economies of scale in their supply chain, production, and distribution in the pursuit of other dimensions of performance that matter to the customers they court. They have not expanded the frontier of the beer industry, merely staked a claim to a different spot on the same frontier.
Megabrewers have responded by launching their own craft beer brands, addressing increasing market fragmentation with a careful balancing of production efficiencies and product differentiation. Leveraging production facilities and expertise, supply chains, and even marketing spend, the craft beer divisions of the major brewers are really no different from traditional line extensions one might see in any consumer products industry. One of the majors in the United States has a portfolio of over 250 craft labels, and megabrewer craft brands are now growing faster than microbrewery volumes.16 The result has been a new competitive equilibrium in the beer market, with the majors taking constant and careful measure of the craft beer segments of the markets they serve.
Incumbents are not always able to mount such effective responses to competitive incursions, however. Consider the fate of established airlines at the hands of low-cost carriers (LCCs). At one level, it is a mirror image of the same problem the larger brewers faced. New entrants popped up in response to regulatory changes that allowed them to exploit different cost/performance trade-offs that appealed to more price-sensitive segments of the market for air travel. Incumbent airlines typically responded in much the same way the megabrewers responded to microbreweries, comparing the marginal cost of leveraging existing assets such as planes, airport gates, reservation systems, loyalty programs, and staff with the total cost of setting up something from scratch. This strategy led them to launch LCC divisions that were very often closely tied to the core operations, just as the megabrewers have done.
Yet the outcomes were far less favorable. Over a 13-year period, there were six major attempts by incumbent airlines to launch an LCC division, none of which proved successful. Continental was first out of the gate with Continental Lite (1993–1995), followed by United’s Shuttle by United (1994–2001), whose run overlapped with Delta’s Delta Express (1996–2003). US Air took a kick at the can with MetroJet (1998–2001). Delta’s Song (2003–2006) was a second at-bat for the Atlanta-based carrier, and United tried it again with Ted (2004–2009). What kept going wrong?
The problem was that, unlike the microbrewery challenge, the stand-alone LCCs were true innovators, delivering comparable performance at a cost that incumbents could not match (figure 3). They were not merely exploiting trade-offs in the interests of differentiation; they were breaking trade-offs, that is, they were innovating.
Microbreweries opened up new growth opportunities in the beer industry by creating products that appealed more directly to what had been latent, unserved market segments. The megabrewers’ response was effective at least in part—and perhaps in large part—because the organizational context of their response was appropriate to the nature of the challenge. Faced with the need to differentiate their product, they used the organizational tools of differentiation but kept those elements of the underlying business model that did not need to change. This allowed them to exploit their inherent cost and distribution advantages. Incumbent airlines, however, mistook a true innovation for mere differentiation. Consequently, when they too reached for the tools of differentiation, their responses fell dramatically short.
It needn’t have turned out this way. What might have happened had the megabrewers responded to the microbreweries as if they were true innovators? How bad could it have gotten for them? What if the airlines had better understood the nature of the threat they faced? How effective a response might they have mounted? We can never know for sure, of course, but for some insight into these questions, consider the experiences of Intel in microprocessors and incumbent management consulting firms during the dot-com era.
Silicon Valley vs. Silicon Alley
From 1985 to the end of the twentieth century, Intel enjoyed near hegemony in the chip business thanks to its ability to introduce increasingly faster chips on an increasingly shorter life cycle. Yet in 1999, for the first time, Advanced Micro Devices (AMD) had higher market share than Intel in the US retail desktop segment with 43.9 percent, thanks largely to its gains in the sub-$1,000 system segment.17 AMD had gained this lead by beginning early—in the mid-1990s—to focus on less demanding tiers of the market, where chips that were less powerful than the best that Intel had to offer were welcomed with open arms, especially since they were being sold at much lower prices than Intel’s highest-performing products. In other words, AMD captured a different segment of the market by making different trade-offs among dimensions of performance and cost.
So far, this is just the beer example with higher capital intensity. However, unlike the microbreweries and far more similar to the case of the LCCs, AMD had set itself on a trajectory of performance improvements that promised to break the cost/performance trade-offs that, at that time, defined Intel’s product roadmap. What looked in cross section like a segmentation-based attack was actually the beginning of one based on innovation.
Intel’s response was to establish a new unit in Israel, far away from the core operations in Santa Clara, California, to focus on building what would become the Celeron processor. Based on the Pentium “chassis,” the Celeron was a deliberate attempt to fight back with a lower-cost, lower-priced, lower-performance microprocessor. Launched in 1998, the Celeron’s performance improved dramatically even as its price remained constant (figure 4). It quickly became the largest line of processors by revenue in Intel’s history. Only in the last few years has Intel phased out the Celeron and replaced it with Atom, Intel’s new line of low-price microprocessors.
Now cast your mind back to the late 1990s. Venture capital partnerships prowl university campuses, showering millions in seed financing on anyone who can spell “dot com.” (At least it felt that way.) No industry seemed immune from the corrosive yet generative, terrifying yet exhilarating impact of the Internet, including management consulting. The so-called Fast Five (in a dig at the consulting arms of the then Big Five accounting firms) of RazorFish, iXL, Scient, Viant, and marchFirst were scooping up the cream of the business school crop and securing high-profile engagements with not just other start-ups but even the incumbent firms’ major clients. With dot-com era financing to sustain them, the Fast Five were eager to take equity rather than cash in payment, and, unencumbered by established process or allegedly outdated paradigms, they promised a level of creativity and insight mainstream firms couldn’t even aspire to. (At least it felt that way.)
After two or three years of this, even the bluest-blooded consulting firms began to respond in ways Intel would have recognized. They set up new divisions with new names, new brands, new locations, and seemingly unprecedented autonomy. They looked for talent in entirely new places, claiming that they didn’t want all those MBAs after all, and that PhD students in physics and math were just what they needed. They aped the “payment in equity” with some clients and developed new compensation models, sometimes based on ghost equity in the division itself in an effort to create the buzz of a true e-consultancy and the high-powered reward structures that implied.
None of it lasted long or amounted to much. Scient and iXL became part of Razorfish, which is today part of Publicis, a multinational advertising and public relations company. Viant was acquired by divine inc., which went bankrupt in 2003, and marchFirst went public in March 2000 and was defunct by May 2001. Most of the mainstream consulting firms, if they talk about this period at all, do so with some chagrin. Their new divisions were closed, the ping pong tables disposed of, the new business models and compensation systems abandoned.
The major management consultancies of the day overreacted because they mistook mere differentiation for a true innovation. Thanks to the economic and sociological phenomenon of the dot-com bubble, new market segments emerged that wanted, for a time, a different set of price/performance trade-offs. But the e-consultancies that sought to capitalize on those preferences had not created a new frontier. They were at best seeking to exploit trade-offs and were a long way from breaking them.
The end of the beginning
These case studies reveal the importance of understanding at a fundamental level what is and isn’t innovation. Treat an attack based on differentiation as if it were breaking important trade-offs and you will likely overreact, but mistake a true innovator for the merely different and the pain can last for decades.
As these examples illustrate, at least some of what is prescribed for successful innovation can be very effective. Providing high degrees of organizational autonomy and developing new business models seem to increase dramatically the likelihood that one can eventually break the trade-offs that define an industry’s existing frontier. Taking advantage of this insight, however, demands that we apply this advice only where appropriate—that is, where innovation is in fact called for.
Identifying these circumstances means having a practical, accurate definition of innovation, and “breaking trade-offs” would appear to meet these criteria. In each of the four cases examined above, it would have been possible to map the cost/performance profiles of the market opportunities in play and determine with sufficient precision whether innovation or differentiation was likely to be the more effective response (figure 5).
For innovation researchers, we hope our definition will help bring some consistency to the field so that it can emerge from its current pre-paradigmatic welter. By consistently defining the underlying phenomenon, perhaps it will be possible to move beyond arguments over the factors and subfactors of innovation and engage the real question: how to innovate effectively.
For practicing managers, who are deliberate or de facto consumers of management theory, we hope our definition will allow them to screen the advice they receive and identify the nuggets that speak to the problems they actually face. Is it any wonder that so many see “predictable innovation” as an oxymoron when so much of the advice on offer is actually targeted at an entirely different outcome?
Whatever the merits of our definition, we remain convinced that one is needed. Only when we attempt to synthesize our elephant from the parts of an elephant will innovation be a chimera no more.