Can Gillette Disrupt Itself?

On the surface, Gillette looks like a model of innovation success. A flagship brand of innovation champ P&G, Gillette has achieved a remarkable ~70% share of the global men’s razor market, all while maintaining huge margins. The secret to getting so many men to pay so much is a series of new-and-improved razors that – despite making Gillette the butt of endless jokes – has carefully targeted areas of consumer dissatisfaction. Last year’s new Fusion ProGlide was a perfect example: it built on a key insight – men get post-shave irritation due to facial hair “tug and pull” – by using finer blades to slice through tough beard hair more effortlessly. Despite blade cartridges retailing for roughly $4 each, ProGlide sales since its launch last summer – backed by a massive marketing campaign – were some of Gillette’s best ever for a new product. Gillette has followed this innovation playbook for so long that it looks easy: a great business model + big market research + big R&D + big marketing = huge profits.

The Bigger They Are, the Harder They Fall

To a student of Clay Christensen’s theory of disruptive innovation, however, Gillette’s core business looks intensely vulnerable. All the signs are there:

  • A clear consumer “job-to-be-done” (hair removal)
  • A dominant, likely overconfident incumbent
  • Ongoing “sustaining” technological improvement (in blades, lubrication, battery-powered vibration, etc.) that vastly outpaces the rate of change in consumer needs
  • Resulting innovations (next-generation razors) that primarily serve a profitable segment of demanding customers willing to pay ever-higher prices (affluent Western men who shave frequently)
  • An unknown but presumably large number of “overserved” consumers and untapped nonconsumers (those who don’t shave frequently – or at all – because of cost or inconvenience)

The theory’s prediction is clear: some entrant will develop a less effective but simpler and/or cheaper solution to hair removal. It may initially capture only a small – and relatively less profitable – portion of the bottom of the market, but will likely improve its technology over time and relentlessly advance up-market. Gillette would find itself in the innovator’s dilemma, choosing (rationally) to cede less profitable business at the market’s low end and retreat to ever-higher ground, ultimately ending up with only a niche specialty market, if it’s not forced to exit altogether. If a fall from such a lofty position as Gillette’s sounds unlikely, consider the fate of Bethlehem Steel in the 1980s, IBM in the 1990s, Kodak in the 2000s, and, most recently, HP.

Reversing the Process

Fortunately, there are ways to escape this trap. In one prominent 2009 article, Tuck professors Vijay Govindarajan and Chris Trimble, along with GE CEO Jeff Immelt, described GE’s plan to disrupt itself via “reverse innovation.” Rather than develop products for the affluent U.S. market and try to sell them in the developing world, GE’s business units have begun to develop products specifically for the mass Chinese and Indian markets, such as a portable ultrasound device with lower quality and fewer features – but a price tag 80% below a conventional one. To pull this off, the key for GE was, as the authors put it, “shifting the center of gravity” to the overserved emerging market – in customer research, R&D, and organizational decision-making. Even more remarkably, GE has advanced its low-end technology to the point where a version can be sold competitively in the developed world, completing the reverse innovation cycle. GE Healthcare’s PC-based ultrasounds, for example, were developed for rural China but have been introduced into the U.S., where they may have cannibalized sales of GE’s traditional machines – but have also disrupted competitors, as well as preempted other potential developing-world entrants.

P&G isn’t stupid either. Since Gillette was acquired by the global conglomerate in 2005, its approach to market research and product development has been slowly but dramatically transformed. The razor business’s far less visible but perhaps more important 2010 product launch was the Gillette Guard, its first razor developed entirely in and for India and other emerging markets. Through thousands of hours of in-person study, Gillette researchers learned that Indian men primarily sought a safe razor that could be easily rinsed in a bowl of still water, and that was cheap enough to be a reasonable alternative to a barber – or to not shaving at all. The Guard was developed (from a “clean sheet” design) with a safety comb, easy-to-rinse blade cartridges, and a single blade in a plastic housing with 80% fewer parts. Compared with the ProGlide, this simple design likely yields a worse shaving experience by American standards, but the Guard’s replacement blades cost a mere 5 rupees – 95% less than the Indian version of Gillette’s Mach3.

Disrupt or Be Disrupted

But does Gillette’s emerging-market razor solve its innovator’s dilemma? For one thing, Gillette has shown no interest in importing even an improved version of its ultra-cheap, “good enough” product back to the U.S. It’s perfectly reasonable to point out that, in the developed world, Gillette’s share is so dominant (and margins so huge) that the cost of cannibalizing its sales of premium razors would be much higher than GE’s. Competitors won’t care, however, which is why Govindarajan argues that, unless Gillette is willing to risk much of its core business itself, someone else eventually will. And lest Gillette think it can wait until it spies a potential disruptor before developing a U.S. version, it might do well to remember the lessons of Seagate, which developed its own 3.5″ computer hard drive but ignored its unattractive business case relative to its core 5.25″ drives – only to be disrupted by Conner Peripherals, a former Seagate spinoff that focused on 3.5″ drives and rapidly left Seagate behind. As Christensen put it,

“[W]hen established firms wait until a new technology has become commercially mature in its new applications and launch their own version of the technology only in response to an attack on their home markets, the fear of cannibalization can become a self-fulfilling prophecy.”

A deeper question is whether a redesigned low-end razor is really what will ultimately disrupt this market. After all, a durable handle with disposable snap-on blades, scraped across a lathered face every day, is a rather clumsy solution to the job of hair removal (especially when defined broadly). The Gillette Guard made a radical trade-off in relative performance and price attributes, but didn’t fundamentally change Gillette’s model, entrenched as it is by decades of pervasive marketing. It’s easy to imagine how a chemist might develop a cheap cream that stops hair growth entirely, but has some negative side effects or other factors that cause traditional shaving consumers – and therefore Gillette – to ignore it. Until, that is, the kinks begin to be ironed out, and its inexorable march up-market causes Gillette to flee rather than fight.

By then it will be too late. The key question, therefore, is whether Gillette has the courage to truly disrupt its own seemingly invincible core business. If not, disruption will eventually come from without. It’s just a matter of when.

What Should I Watch? The Evolution of Recommendation

One of the great promises of the Digital Age is a better way to figure out the answer to the question above. People love great writing, artwork, film, and music, but no one is going to experience, in their lifetime, more than a fraction of all the content in existence. That’s why we try hard to find the stuff we’ll probably enjoy.

But that’s always been really difficult – as the saying goes, you can’t judge a book by its cover. Even if you could, no one wants to waste time searching through every title ever written to find the ones they’ll like. So for ages we’ve relied on poor solutions for discovery of new content (not to mention food, fashion, software, etc.). The three main ways we’ve done this are:

Curation: Experts decide what the best content is, and we listen to them. That’s why everyone read To Kill A Mockingbird in high school, and why movie critics put out Top 10 lists. Of course, there’s much to be said for being exposed to high culture and different viewpoints, whether we want to be or not. But the nature of art is subjectivity – everyone has different interpretations and tastes, so I might not like the experts’ picks. And who decides who’s an expert anyway – have you ever bought a book from the Staff Favorites rack at a bookstore?

Popularity: TV channels, radio stations, movie theaters, and bookstores offer an array of the most popular content, and we pick from the available options. Pretty simple – they modify their offering based on what sells, and everyone wins, right? But again, there’s no personalization here, and we don’t all have statistically average tastes. Worse, picking based on popularity creates a feedback loop that might misrepresent reality (did anyone actually like Rebecca Black’s Friday video?).

Word of Mouth: The old standby. Our friends and family probably have a better idea of what we’ll like than anyone else, and we’re more inclined to trust them (I’ll read anything my dad or my buddy Tom sends me). But unfortunately their experiences probably overlap significantly with ours (as Mark Granovetter pointed out decades ago), so while you might get fewer false positives (bad recs), you’ll also have more false negatives (missed content). It’s also tedious to poll your friends every time you’re looking for a movie to watch.

Enter the recommendation engine. Of course, in the Digital Age of plentiful data, a lot of companies can get more mileage out of the same basic methods listed above – for instance, the New York Times can now easily measure and display its most popular articles. But technology can also do a much better job of helping us discover new content when our tastes take us beyond the Top Ten (this has also created a revolutionary paradigm for content sellers, which Chris Anderson of Wired termed the Long Tail). Although I’m not an expert in the field, it seems like there are at least three entirely new ways to use consumer data to recommend new content:

Intrinsic algorithms use the actual attributes of the content and combine them with individual user feedback. The best example of this is probably the online radio service Pandora, which uses the Music Genome Project’s 400 attributes to tag every song in its database. If you say you like a song, it cues up more songs with similar traits (beat, vocal pitch, and so on). While this approach is widely praised for helping people discover good music, it’s probably harder to apply to other types of content. There are also certain things we love about great art (like a metaphor in a song’s lyrics) that can’t be reduced to digitized attributes.
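To make the attribute-matching idea concrete, here’s a minimal sketch in Python. The song traits are invented for illustration (nothing here comes from the actual Music Genome Project): each candidate is scored by how closely its attribute vector matches a profile built from the songs a listener has already liked.

```python
# Toy sketch of attribute-based ("intrinsic") recommendation -- not Pandora's
# actual system. Each song is a vector of hand-tagged traits; a listener's
# taste profile is the average of the songs they've liked, and candidates
# are ranked by cosine similarity to that profile.
import math

SONGS = {
    "song_a": {"tempo": 0.9, "vocal_pitch": 0.3, "distortion": 0.8},
    "song_b": {"tempo": 0.8, "vocal_pitch": 0.4, "distortion": 0.7},
    "song_c": {"tempo": 0.2, "vocal_pitch": 0.9, "distortion": 0.1},
}

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def taste_profile(liked_songs):
    profile = {}
    for song in liked_songs:
        for trait, value in SONGS[song].items():
            profile[trait] = profile.get(trait, 0) + value / len(liked_songs)
    return profile

def recommend(liked_songs, n=1):
    profile = taste_profile(liked_songs)
    candidates = [s for s in SONGS if s not in liked_songs]
    return sorted(candidates, key=lambda s: cosine(profile, SONGS[s]), reverse=True)[:n]

print(recommend(["song_a"]))  # -> ['song_b']: the closest traits to what the listener liked
```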

Preference algorithms rely on both our own and others’ ratings. Amazon was a pioneer in using its massive scoring database to shift from just popularity-based discovery (“X is highly rated”) to adding a preference-based algorithm, too (“you liked X, and most people who like X also like Y, so we recommend Y”). But while the logic is simple, the algorithms get incredibly complex. The gold standard is Netflix, whose Cinematch recommendation engine is so critical to its success that it offered a $1 million prize to researchers who could improve it by 10%. But this approach has limits too, many of which have been described by Eli Pariser (example: preference algorithms tend to be risk-averse, so restaurant recommendation engines keep sending people to decent, inoffensive places like Chipotle).
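The core “people who liked X also liked Y” step can itself be sketched in a few lines. This toy version uses invented ratings and simple overlap counts; real engines like Cinematch are vastly more sophisticated, but the intuition is the same.

```python
# Minimal sketch of preference-based (collaborative-filtering) recommendation:
# "you liked X, and people who liked X also liked Y, so we recommend Y."
# Ratings are invented for illustration.
from collections import defaultdict

RATINGS = {  # user -> {item: rating out of 5}
    "ann":  {"x": 5, "y": 4},
    "bob":  {"x": 4, "y": 5, "z": 2},
    "carl": {"x": 5, "z": 1},
    "dina": {"y": 5, "z": 4},
}

def recommend(user, min_rating=4, n=1):
    liked = {i for i, r in RATINGS[user].items() if r >= min_rating}
    scores = defaultdict(float)
    for other, their_ratings in RATINGS.items():
        if other == user:
            continue
        # Does this other user share a liked item with us?
        overlap = len(liked & {i for i, r in their_ratings.items() if r >= min_rating})
        if not overlap:
            continue
        # Credit the items they liked that we haven't rated yet.
        for item, rating in their_ratings.items():
            if item not in RATINGS[user] and rating >= min_rating:
                scores[item] += overlap
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("carl"))  # -> ['y']: people who liked x also tended to like y
```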

Social algorithms – right now, social networks’ role in recommendation is just word of mouth on steroids, but their use for discovery is only just beginning. You could talk about Game of Thrones (or “like” it) on Facebook today, and your friends may be intrigued. But far more powerful would be an automatically generated recommendation if a significant percentage of your closest ties liked or mentioned something.
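A rough sketch of how such an automatic social recommendation might work, assuming the engine had access to tie-strength scores and friends’ likes (both invented here): surface anything liked by a large enough share of your closest ties.

```python
# Toy version of the "social algorithm" idea: recommend an item when a large
# enough share of a user's closest ties have liked or mentioned it.
# Tie strengths and likes are made up for illustration.

TIE_STRENGTH = {"alice": 0.9, "ben": 0.8, "chris": 0.2, "dana": 0.7}  # me -> friend
LIKES = {
    "alice": {"Game of Thrones"},
    "ben": {"Game of Thrones", "Some Cooking Show"},
    "chris": {"Some Cooking Show"},
    "dana": {"Game of Thrones"},
}

def social_recommendations(close_cutoff=0.5, share_needed=0.6):
    close_ties = [f for f, s in TIE_STRENGTH.items() if s >= close_cutoff]
    counts = {}
    for friend in close_ties:
        for item in LIKES.get(friend, ()):
            counts[item] = counts.get(item, 0) + 1
    # Recommend anything liked by a large enough share of close ties.
    return [item for item, c in counts.items() if c / len(close_ties) >= share_needed]

print(social_recommendations())  # -> ['Game of Thrones']: 3 of 3 close ties liked it
```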

So which approach is best? The more interesting question is how these approaches can be combined to produce exponentially better discovery. That’s why it was exciting to hear Facebook’s announcement last week that Reed Hastings, Netflix’s co-founder and CEO, is joining its board. Sure, it may signal Facebook’s preparations for an IPO, or its future addition of streaming video, but it might also pave the way for Netflix to integrate a social element into its recommendations. What if curation and/or intrinsic factors were added too? Google might also be well positioned to offer a killer recommendation engine in the future if Google+ takes off. And at the very least, a better recommendation system could help Amazon win the retail war against Walmart – or vice versa.
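One naive way to picture the combination is as a weighted blend of the three engines’ scores. The weights and placeholder scorers below are purely illustrative; a real hybrid system would learn them from data.

```python
# Illustrative hybrid: blend intrinsic, preference, and social scores with
# tunable weights. The component scorers are placeholders returning fixed
# values in [0, 1] just to show the shape of the calculation.

def hybrid_score(item, user, weights=(0.4, 0.4, 0.2)):
    w_intrinsic, w_preference, w_social = weights
    return (w_intrinsic * intrinsic_score(item, user)      # attribute match
            + w_preference * preference_score(item, user)  # "people like you liked..."
            + w_social * social_score(item, user))         # share of close ties who liked it

def intrinsic_score(item, user):  return 0.7   # placeholder
def preference_score(item, user): return 0.5   # placeholder
def social_score(item, user):     return 1.0   # placeholder

print(hybrid_score("Game of Thrones", "me"))  # -> 0.68
```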

Going further, recommendation engines have been mostly add-ons for content sellers so far (stand-alone recommendation platforms haven’t been widely adopted), but imagine how powerful a universal recommendation engine across all types of content (and other choices we make) could be. Again, there are legitimate concerns about a world of excessively personalized discovery, as Pariser argues in The Filter Bubble – ideally, we’d always be quite conscious of recommendation and decide when to switch it on and off. But at the very least, I bet we’d watch a lot less bad TV.

As always, your feedback is welcome.

Dissatisfaction Is the Mother of Innovation

My friend Dave is a great guy, but terrible to go to restaurants with. Invariably, he ends up peppering the server with endless questions, trying to order things not on the menu, and complaining about the food once it’s served.

People like Dave may be hard to dine with, but they can be great for innovation. That’s because they’re continually dissatisfied with what’s available, looking instead for an ideal experience. The best innovators utilize several techniques to understand consumer dissatisfaction – and then use that understanding to drive innovative ideas.

Listen to problems, not solutions

Recently, many have cited Henry Ford (who famously quipped, “If I had asked people what they wanted, they would have said faster horses”) and Steve Jobs (“You can’t just ask customers what they want and then try to give that to them”) to make the case that listening to customer feedback is pointless. But as Ted Levitt, Tony Ulwick, and others have argued, while customers are notoriously bad at coming up with solutions to their own problems, their actual difficulties and complaints – the problems themselves – are a goldmine for observant researchers. That’s why management gurus like Clay Christensen and Gary Hamel have advocated listening not only to your core (and presumably satisfied) customers, but to those on the fringe – the unhappy non-users and complainers. And the louder they whine, the better.

Map out dissatisfaction

To better understand consumer dissatisfaction, author and consultant Adrian Slywotzky has advocated creating a “hassle map” – laying out the entire customer experience with a product or service to pinpoint where customers become frustrated by wasted time and effort. Far too many companies focus solely on adding exciting features to the product itself; great innovators instead often aim to eliminate irritating aspects of the experience. For example, Apple’s most successful products have often reduced hassle in the customer experience as much as they’ve added new capabilities. Through Visual Voicemail, the iPhone improved the bothersome process of navigating phone messages. The iPad greatly reduced both lengthy computer start-up time and the painful need to frequently recharge (through its hugely extended battery life). Most recently, the iCloud service aims to eliminate the irritating need to sync Apple devices using cords. Contrast these improvements with those of other PC-makers in recent years, who focused on adding security features, hundreds of gigs of storage, cameras, etc.

Imagine the ideal

P&G’s consumer researchers have been known to put on “futurist exhibits” to help spur innovative product concepts. After extensive consumer observation and discussion, researchers mock up nonworking but clever products in answer to the question: “How might consumers solve this problem in 50 years?” For example, rather than using an imperfect product that P&G offers today, perhaps the consumer of the future will simply swallow a pill annually to prevent hair from going gray, press a button to have house walls suck away dirt, or drink a tasty beverage to automatically clean his or her teeth. While these Jetsons-like inventions may seem far-fetched, the brilliance of the “in the future” conceit is that it allows P&G innovators to forget today’s technical limitations and instead imagine what a perfectly simple and effective solution could look like. Who doesn’t like to imagine a frustration-free future?

Through these and other methods, companies can use consumer dissatisfaction to drive better innovation. A twist on the old maxim is appropriate: Don’t let today’s ‘good enough’ be the enemy of ‘better yet…’ And if you learn to love customer dissatisfaction, you may even be able to put up with a whiner like Dave.