Design Is About Intent

The most admired companies of each age are often associated with a certain core competency. Ford popularized assembly line manufacturing in the 1910s. Toyota kicked off the lean revolution with its Toyota Production System in the postwar years. GE’s enthusiastic adoption of Six Sigma in the ’90s spread the mantra of quality. Each of these capabilities is credited with helping transform its company’s industry.

Apple is unquestionably the most admired company in the world today. So what is Apple’s defining capability?

Lest there be any doubt, they told us last summer: Apple is about design. It’s what they value, teach, and celebrate, and it’s what has enabled them to revolutionize industry after industry with innovative products and business models. 

 

Design as the New Management Tool

Largely due to Apple’s unprecedented success, design has recently become extremely fashionable in the broader business imagination:

[Image: a selection of recent headlines about design]

Business gurus like Roger Martin, institutes like Stanford’s d.school, and consultancies like IDEO have all helped spread the gospel. With the worthy aim of making design accessible to the rest of us, they’ve broken down “design thinking” into step-by-step frameworks, which generally involve empathetic understanding, creative ideation, and experimental prototyping.

We saw this pattern with the Lean and Quality movements too – both generated extensive, organized, and widely adopted disciplines (think of Six Sigma’s DMAIC methodology and hierarchy of belt colors). But I fear that “design” has moved too quickly to the tools and techniques stage – the “how,” instead of the “what.” It’s quite evident that even Apple’s close competitors have not come anywhere close to replicating its design capabilities. And the reason is that many companies are missing the forest for the trees.

 

What Design Is Really About

Putting aside all the trappings associated with them, the big management ideas described above can be whittled down to first principles. The core object of the Lean philosophy is waste. Quality is fundamentally about variability. And design is about intent.

Intent means purpose; something highly designed was crafted with intention in every creative decision. Architect Louis Sullivan distilled the idea into the credo “form follows function”; P&G calls this being “purpose-built.” The designer is the person who answers the question “How should it be?”

Overarching intent is easy. The hard part is driving that conscious decision-making throughout every little choice in the creative process. Good designers have a clear sense of the overall purpose of their creation; great designers can say, “This is why we made that decision” about a thousand details.

Which is exactly what Apple does. Their obsession with intentional choice is palpable and personal. When Jony Ive, Apple’s newly titled SVP of Design, criticizes a material selection or feature decision, “he’s known to use ‘arbitrary’ as a term of abuse.” Steve Jobs himself couldn’t even make the most mundane personal design decisions without deep consideration of intent; according to his biographer, this led to a longtime lack of ample furniture in his home:

“We spoke about furniture in theory for eight years,” recalled [wife Laurene] Powell. “We spent a lot of time asking ourselves, ‘What is the purpose of a sofa?'”

 

The Three Design Evasions

The opposite of design, then, is the failure to develop and employ intent in making creative decisions. This doesn’t sound hard, but, astonishingly, no other leading tech company makes intentional design choices like Apple. Instead, they all commit at least one of what I term the Three Design Evasions:

The first evasion: Preserving

The easiest way to avoid a decision is to not ask the question in the first place. Anyone who’s ever led a business project knows the temptation of recycling precedent – why reinvent the wheel? That’s why, for all of Microsoft’s recent design plaudits, the Surface still features a 30-year-old vestigial key. That’s also why BlackBerry’s last-ditch effort at mobile relevance, the Q10, has a physical keyboard yet again.

But great designers know that sacred cows must always be evaluated for slaughter. Apple is famed for aggressively making clean breaks with the past; you can decry any one decision, but to Apple, nothing is ever settled for good. As Christa Mrgan astutely observed in Macworld, “Sentimentality doesn’t make for good design.”

The second evasion: Copying

Copying others’ design choices is the most obvious way to abdicate forming your own intent and making your own decisions. That didn’t stop Google from fundamentally redesigning Android in the iPhone’s image after it was unveiled. Nor did it stop HTC from replicating the iPhone’s UI features or colors. Most shameless of all, of course, is Samsung, whose list of appropriated products, features, and even strategies is so long that one suspects the tendency is deeply entrenched in the company’s culture.

Without a doubt, Apple has copied certain features from its rivals as well. The difference is that Apple seems biased to design based on its own intent first, and copy second; its rivals tend to copy first.

The third evasion: Delegating

Delegating is by far the most subtle, pernicious, and widespread of the three evasions, particularly among tech companies. Under the guise of being “user-driven” or providing “choice,” delegators leave crucial design decisions up to the user. One can even subdivide this tactic into three distinct flavors:

A) Offering a wide range of product choice

Many of the most successful hardware companies seem incapable of deciding how their products should be, so instead they offer variety:

[Image: the sprawling product lineups of PC makers such as Lenovo, HP, and Dell]

The banner of “choice” is always good PR, and may even be good product strategy for many companies. But it’s not design. Design means curating the choice for the consumer. John Gruber summarizes Apple’s starkly limited product line well:

“Apple offers far fewer configurations. Thus, [Apple products] are, to most minds, subjectively better-designed – but objectively, they’re more designed. Apple makes more of the choices than do PC makers.” 

As an analogy, giving someone birthday money instead of taking the time to choose a gift seems eminently logical – why limit the recipient’s choices? But the gifts we remember most fondly are seldom checks.

B) Trying to offer an omni-functional product

Good designers create things with specific uses in mind, which implies making purposeful trade-offs. Another way to abdicate design is refusing to accept those trade-offs; it feels better to make something that could be anything for anyone. Seth Godin calls this a design copout – creating something that “helps the user do whatever the user wants to do,” instead of expressing the creator’s intent.

Once more, Samsung is a prime example; David Pogue summed up his review of the Galaxy S5 thus:

“… if you had to characterize the direction Samsung has chosen for its new flagship phone – well, you couldn’t. There isn’t one … Overall, the sense you get of the S5 is that it was a dish prepared by a thousand cooks. It’s so crammed with features and options and palettes that it nearly sinks under its own weight.” 

This unwillingness to choose, to say no – to exert intent – is also exactly what plagued Microsoft’s Surface, its “no compromises” hybrid tablet/laptop. Unsurprisingly, this jack-of-all-trades device is still a master of none.

Does this mean good design is assertive, ultimately subjective, even restrictive? Absolutely. As Marco Arment put it,

“Apple’s products are opinionated. They say, ‘We know what’s best for you. Here it is. Oh, that thing you want to do? We won’t let you do that because it would suck.'” 

C) Deciding based on user testing

The final flavor of Delegating is a favorite of Internet software and services companies: using A/B testing (or some variant) to see which designs elicit the best metrics from users. Witness the descriptions of how design decisions get made at leading firms:

  • Google: “We think of design as a science. It doesn’t matter who is the favorite or how much you like this aesthetic versus that aesthetic. It all comes down to data. Run a 1% test [on 1% of the audience] and whichever design does best against the user-happiness metrics over a two-week period is the one we launch.”
  • Amazon: “We’ve always operated in a way where we let the data drive what to put in front of customers … We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • Facebook: “It doesn’t matter what any individual person thinks about something new. Everything must be tested. It’s feature echolocation: we throw out an idea, and when the data comes back we look at the numbers. Whatever goes up, that’s what we do. We are slaves to the numbers. We don’t operate around innovation. We only optimize. We do what goes up.”

This kind of user testing – often dressed up as “failing fast” or “experimenting” – can be useful, but it’s not design. You can safely bet that Apple has never tested 41 shades of blue on users to decide the right color for its website links.
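
To make the mechanics concrete, here is a minimal sketch of the kind of small-bucket test these quotes describe – route a sliver of traffic to each variant, gather a metric, and ship whichever wins a significance check. It’s written in Python with hypothetical metric names and numbers; none of these companies’ actual systems are this simple.

```python
# A minimal sketch of a two-variant "bucket test": route a small slice of
# traffic to each design, then compare a success metric (e.g., click-through
# rate) with a two-proportion z-test. All names and numbers are hypothetical.
import random
from math import sqrt
from statistics import NormalDist

def assign_bucket(user_id: int, test_fraction: float = 0.01) -> str:
    """Deterministically place ~test_fraction of users into the experiment,
    split evenly between variants A and B; everyone else sees the control."""
    r = random.Random(user_id).random()  # stable pseudo-hash per user
    if r < test_fraction / 2:
        return "A"
    if r < test_fraction:
        return "B"
    return "control"

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return (z statistic, two-sided p-value) for the difference in rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical two-week results for two shades of blue:
z, p = two_proportion_z_test(clicks_a=1220, views_a=50000,
                             clicks_b=1310, views_b=50000)
print(f"z = {z:.2f}, p = {p:.3f}")  # ship B only if the lift is significant
```

The sketch also illustrates the essay’s point: the test can tell you which shade of blue won, but it can never tell you why either shade should exist.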

Look again at the list of companies cited above – Microsoft, BlackBerry, Google, HTC, Samsung, Lenovo, HP, Dell, Facebook, and Amazon. All ten were or are leading, innovative tech companies; all ten could be considered rivals to Apple in some sense; all ten evade the one capability Apple embraces most.

 

Designing Apple’s Future

What’s noteworthy is that while its competitors avoid design, Apple has been doubling down on it. The clearest example of this last year was iOS 7, Apple’s complete redesign of its most central product. iOS 7’s changes were deeply polarizing, but far from capricious; they were clearly underpinned by deep intent.

Gruber correctly characterized iOS versions 1–6 as prioritizing obviousness, with buttons and app icons so skeuomorphic, shadowed, and shiny that they looked lickable. iOS 7 did away with much of that ornamentation and many of those affordances, and for a clear reason. As Ive explained:

“When we sat down last November (to work on iOS 7), we understood that people had already become comfortable with touching glass, they didn’t need physical buttons, they understood the benefits … So there was an incredible liberty in not having to reference the physical world so literally.”

In other words, as Apple’s intent changed, the design had to also. The new priorities seem to be clarity and order (compensating for the iPhone’s growing capabilities), hardware integration, and what I call “functional delight” – the feeling of joyfully intuitive, effortless actions with immediate, satisfying feedback. You can criticize any of the design decisions they made (and many have), but to do so without considering Apple’s intent is foolish.

This brings us to the present. Many analysts and pundits are puzzling over why Apple is reported to be buying Beats; I suspect Dave Troy and Ben Thompson are on to something:

Troy: The strategy that Apple is undertaking is to reposition the company away from being valued as simply a very good tech company that also happens to have aspirational brand appeal and instead as the world’s most valuable fashion and lifestyle company that provides fashionable, attractive technology through its ecosystem of compatible products.

Thompson: [A]re we witnessing a reinvention, into the sort of company that seeks to transcend computing, demoting technology to an essential ingredient of an aspirational brand that identifies its users as the truly with it? Is Apple becoming a fashion house? 

No outsider knows with certainty why Apple is buying Beats. But consider the following: if design is Apple’s core competency, then that skill should extend beyond computing. And if design can set it apart from all its rivals, then the goal must be to convince the world’s consumers to trust that Apple makes the right design choices for them. “Apple” must mean “great design.” And fashion brands are what we call the signifiers of great design taste.

*          *          *

We tend to think of Ford’s introduction of the assembly line as ushering in an industrywide transition. In reality, the majority of its contemporary competitors struggled to adopt the new system, and were terribly disadvantaged as a result: between 1920 and 1940, over 90% of several hundred U.S. automakers went bankrupt or otherwise vanished.

I don’t expect such a dramatic outcome for Apple’s rivals. But design has lifted Apple to great heights, and I suspect it can take them further. The rest of the world has certainly noticed. But they would do well to think a little harder about what adopting design really means.

 

Can Gillette Disrupt Itself?

On the surface, Gillette looks like a model of innovation success. A flagship brand of innovation champ P&G, Gillette has achieved a remarkable ~70% share of the global men’s razor market, all while maintaining huge margins. The secret to getting so many men to pay so much is a series of new-and-improved razors that – despite making Gillette the butt of endless jokes – has carefully targeted areas of consumer dissatisfaction. Last year’s new Fusion ProGlide was a perfect example: it built on a key insight – men get post-shave irritation due to facial hair “tug and pull” – by using finer blades to slice through tough beard hair more easily. Despite blade cartridges retailing for roughly $4 each, ProGlide sales since launching last summer – backed by a massive marketing campaign – were some of Gillette’s best ever for a new product. They’ve followed their innovation playbook for so long that it looks easy: a great business model + big market research + big R&D + big marketing = huge profits.

The Bigger They Are, the Harder They Fall

To a student of Clay Christensen’s theory of disruptive innovation, however, Gillette’s core business looks intensely vulnerable. All the signs are there:

  • A clear consumer “job-to-be-done” (hair removal)
  • A dominant, likely overconfident incumbent
  • Ongoing “sustaining” technological improvement (in blades, lubrication, battery-powered vibration, etc.) that vastly outpaces the rate of change in consumer needs
  • Resulting innovations (next-generation razors) that primarily serve a profitable segment of demanding customers willing to pay ever-higher prices (affluent Western men who shave frequently)
  • An unknown but presumably large number of “overserved” consumers and untapped nonconsumers (those who don’t shave frequently – or at all – because of cost or inconvenience)

The theory’s prediction is clear: some entrant will develop a less effective but simpler and/or cheaper solution to hair removal. It may initially capture only a small – and relatively less profitable – portion of the bottom of the market, but will likely improve its technology over time and relentlessly advance up-market. Gillette would find itself in the innovator’s dilemma, choosing (rationally) to cede less profitable business at the market’s low end and retreat to ever-higher ground, ultimately ending up with only a niche specialty market, if it’s not forced to exit altogether. If a fall from such a lofty position as Gillette’s sounds unlikely, consider the fate of Bethlehem Steel in the 1980s, IBM in the 1990s, Kodak in the 2000s, and, most recently, HP.

Reversing the Process

Fortunately, there are ways to escape this trap. In one prominent 2009 article, Tuck professors Vijay Govindarajan and Chris Trimble, along with GE CEO Jeff Immelt, described GE’s plan to disrupt itself via “reverse innovation.” Rather than develop products for the affluent U.S. market and try to sell them in the developing world, GE’s business units have begun to develop products specifically for the mass Chinese and Indian markets, such as a portable ultrasound device with lower quality and fewer features – but a price tag 80% below a conventional one. To pull this off, the key for GE was, as the authors put it, “shifting the center of gravity” to the overserved emerging market – in customer research, R&D, and organizational decision-making. Even more remarkably, GE has advanced its low-end technology to the point where a version can be sold competitively in the developed world, completing the reverse innovation cycle. GE Healthcare’s PC-based ultrasounds, for example, were developed for rural China but have been introduced into the U.S., where they may have cannibalized sales of GE’s traditional machines – but have also disrupted competitors, as well as preempted other potential developing-world entrants.

P&G isn’t stupid either. Since Gillette was acquired by the global conglomerate in 2005, its approach to market research and product development has been slowly but dramatically transformed. The razor business’s far less visible but perhaps more important 2010 product launch was the Gillette Guard, its first razor developed entirely in and for the Indian and other emerging markets. Through thousands of hours of in-person study, Gillette researchers learned that Indian men primarily sought a safe razor that could be easily rinsed in a bowl of still water, and that was cheap enough to be a reasonable alternative to a barber – or to not shaving at all. The Guard was developed (from a “clean sheet” design) with a safety comb, easy-to-rinse blade cartridges, and a single blade in a plastic housing with 80% fewer parts. Compared with the ProGlide, this simple design likely yields a relatively worse shaving experience by American standards, but the Guard’s replacement blades cost a mere 5 rupees – 95% less than the Indian version of Gillette’s Mach3.

Disrupt or Be Disrupted

But does Gillette’s emerging-market razor solve its innovator’s dilemma? For one thing, Gillette has shown no interest in importing even an improved version of its ultra-cheap, “good enough” product back to the U.S. It’s perfectly reasonable to point out that, in the developed world, Gillette’s share is so dominant (and margins so huge) that the cost of cannibalizing its sales of premium razors would be much higher than GE’s. Competitors won’t care, however, which is why Govindarajan argues that unless Gillette is willing to risk much of its core business itself, someone else will eventually do the disrupting for it. And lest Gillette think it can wait until it spies a potential disruptor before developing a U.S. version, it might do well to remember the lessons of Seagate, which developed its own 3.5″ computer hard drive but ignored its unattractive business case relative to its core 5.25″ drives – only to be disrupted by Conner Peripherals, a former Seagate spinoff that focused on 3.5″ drives and rapidly left Seagate behind. As Christensen put it,

“[W]hen established firms wait until a new technology has become commercially mature in its new applications and launch their own version of the technology only in response to an attack on their home markets, the fear of cannibalization can become a self-fulfilling prophecy.”

A deeper question is whether a redesigned low-end razor is really what will ultimately disrupt this market. After all, a durable handle with disposable snap-on blades, scraped across a lathered face every day, is a rather clumsy solution to the job of hair removal (especially when defined broadly). The Gillette Guard made a radical trade-off in relative performance and price attributes, but didn’t fundamentally change Gillette’s model, entrenched as it is by decades of pervasive marketing. It’s easy to imagine how a chemist might develop a cheap cream that stops hair growth entirely, but has some negative side effects or other factors that cause traditional shaving consumers – and therefore Gillette – to ignore it. Until, that is, the kinks begin to be ironed out, and its inexorable march up-market causes Gillette to flee rather than fight.

By then it will be too late. The key question, therefore, is whether Gillette has the courage to truly disrupt its own seemingly invincible core business. If not, disruption will eventually come from without. It’s just a matter of when.

What Should I Watch? The Evolution of Recommendation

One of the great promises of the Digital Age is a better way to figure out the answer to the question above. People love great writing, artwork, film, and music, but no one is going to experience, in their lifetime, more than a fraction of all the content in existence. That’s why we try hard to find the stuff we’ll probably enjoy.

But that’s always been really difficult – as the saying goes, you can’t judge a book by its cover. Even if you could, no one wants to waste time searching through every title ever written to find the ones they’ll like. So for ages we’ve relied on poor solutions for discovery of new content (not to mention food, fashion, software, etc.). The three main ways we’ve done this are:

Curation: Experts decide what the best content is, and we listen to them. That’s why everyone read To Kill a Mockingbird in high school, and why movie critics put out Top 10 lists. Of course, there’s much to be said for being exposed to high culture and different viewpoints, whether we want to be or not. But the nature of art is subjectivity – everyone has different interpretations and tastes, so I might not like the experts’ picks. And who decides who’s an expert anyway – have you ever bought a book from the Staff Favorites rack at a bookstore?

Popularity: TV channels, radio stations, movie theaters, and bookstores offer an array of the most popular content, and we pick from the available options. Pretty simple – they modify their offering based on what sells, and everyone wins, right? But again, there’s no personalization here, and we don’t all have statistically average tastes. Worse, picking based on popularity creates a feedback loop that might misrepresent reality (did anyone actually like Rebecca Black’s Friday video?).

Word of Mouth: The old standby. Our friends and family probably have a better idea of what we’ll like than anyone else, and we’re more inclined to trust them (I’ll read anything my dad or my buddy Tom sends me). But unfortunately their experiences probably overlap significantly with ours (as Mark Granovetter pointed out decades ago), so while you might get fewer false positives (bad recs), you’ll also have more false negatives (missed content). It’s also tedious to poll your friends every time you’re looking for a movie to watch.

Enter the recommendation engine. Of course, in the Digital Age of plentiful data, a lot of companies can get more mileage out of the same basic methods listed above – for instance, the New York Times can now easily measure and display its most popular articles. But technology can also do a much better job of helping us discover new content when our tastes take us beyond the Top Ten (this has also created a revolutionary paradigm for content sellers, which Chris Anderson of Wired termed the Long Tail). Although I’m not an expert in the field, it seems like there are at least three entirely new ways to use consumer data to recommend new content:

Intrinsic algorithms use the actual attributes of the content and combine them with individual user feedback. The best example of this is probably online radio Pandora, which uses the Music Genome Project’s 400 attributes to tag every song in its database. If you say you like a song, it cues up more songs with similar traits (e.g., beat, vocal pitch, etc.). While this approach is widely praised for helping discover good music, it’s probably harder to apply to other types of content. There are also certain things we love about great art (like a metaphor in a song’s lyrics) that can’t be reduced to digitized attributes.
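
As an illustration of the attribute-matching idea – not Pandora’s actual algorithm – here is a toy Python sketch with invented song traits: represent each piece of content as a vector of attributes and surface whatever sits closest to something the listener already liked.

```python
# A toy illustration of the intrinsic (attribute-based) approach: score each
# song on a few traits and recommend whatever is most similar to a liked song.
# Trait names and values are invented; Pandora's real system uses hundreds of
# expert-tagged attributes per track.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Each song is scored on [tempo, vocal pitch, distortion, acousticness]
catalog = {
    "Song A": [0.9, 0.6, 0.8, 0.1],
    "Song B": [0.8, 0.5, 0.9, 0.2],
    "Song C": [0.2, 0.7, 0.1, 0.9],
}

liked = "Song A"
ranked = sorted(
    (s for s in catalog if s != liked),
    key=lambda s: cosine(catalog[liked], catalog[s]),
    reverse=True,
)
print(ranked)  # -> ['Song B', 'Song C']: B shares A's traits far more closely
```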

Preference algorithms rely on both our own and others’ ratings. Amazon was a pioneer here, using its massive scoring database to move beyond pure popularity-based discovery (“X is highly rated”) to preference-based recommendations (“you liked X, and most people who like X also like Y, so we recommend Y”). But while the logic is simple, the algorithms get incredibly complex. The gold standard is Netflix, whose Cinematch recommendation engine is so critical to its success that it offered a $1 million prize to any team that could improve it by 10%. But this approach has limits too, many of which have been described by Eli Pariser (example: preference algorithms tend to be risk-averse, so restaurant recommendation engines keep sending people to decent, inoffensive places like Chipotle).
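
The simplest version of that “liked X, so recommend Y” logic can be sketched in a few lines of Python. The ratings data here is invented, and real systems like Cinematch are vastly more sophisticated, but co-occurrence counting gives the flavor:

```python
# A minimal sketch of the preference-based idea: count how often pairs of
# titles are liked by the same users, then suggest the title most co-liked
# with what this user already enjoys. All ratings data here is invented.
from collections import Counter
from itertools import combinations

likes = {  # user -> set of liked titles (hypothetical)
    "ann":  {"X", "Y", "Z"},
    "ben":  {"X", "Y"},
    "cara": {"X", "Z"},
    "dev":  {"Y", "Z"},
}

# Build symmetric co-like counts: how many users liked both a and b?
co_liked = Counter()
for titles in likes.values():
    for a, b in combinations(sorted(titles), 2):
        co_liked[(a, b)] += 1
        co_liked[(b, a)] += 1

def recommend(user):
    """Score every unseen title by its co-likes with the user's titles."""
    seen = likes[user]
    scores = Counter()
    for title in seen:
        for (a, b), n in co_liked.items():
            if a == title and b not in seen:
                scores[b] += n
    return scores.most_common(1)[0][0] if scores else None

print(recommend("ben"))  # -> "Z": people who liked X and Y also liked Z
```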

Social algorithms are still embryonic: right now, social networks’ role in recommendation is just word of mouth on steroids, and their use for automated discovery is only just beginning. You could talk about Game of Thrones (or “like” it) on Facebook today, and your friends may be intrigued. But far more powerful would be an automatically generated recommendation triggered when a significant percentage of your closest ties liked or mentioned something.
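
A hypothetical sketch of that trigger, with invented tie strengths and an arbitrary threshold (no social network exposes exactly this), might look like the following:

```python
# A sketch of the imagined social recommendation: surface an item once a
# weighted share of a user's closest ties likes it. Tie strengths, the
# threshold, and all data here are invented for illustration.
friends = {  # friend -> tie strength (0..1), e.g. from interaction frequency
    "tom": 0.9, "ana": 0.7, "raj": 0.4, "sue": 0.1,
}
likes = {  # item -> set of friends who liked or mentioned it
    "Game of Thrones": {"tom", "ana"},
    "Cat videos": {"sue"},
}

def social_recs(threshold=0.35):
    """Yield items whose weighted share of close-tie likes clears the bar."""
    total = sum(friends.values())
    for item, fans in likes.items():
        score = sum(friends[f] for f in fans) / total
        if score >= threshold:
            yield item, round(score, 2)

print(list(social_recs()))  # -> [('Game of Thrones', 0.76)]
```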

So which approach is best? The more interesting question is how these approaches can be combined to produce exponentially better discovery. That’s why Facebook’s announcement last week that Reed Hastings, Netflix’s co-founder and CEO, was joining its board was exciting. Sure, it may signal Facebook’s preparations for an IPO, or its future addition of streaming video, but it might also pave the way for Netflix to integrate a social element into its recommendations. What if curation and/or intrinsic factors were added too? Google might also be well positioned to offer a killer recommendation engine in the future if Google+ takes off. And at the very least, a better recommendation system could help Amazon win the retail war against Walmart – or vice versa.

Going further, recommendation engines have been mostly add-ons for content sellers so far (stand-alone recommendation platforms haven’t been widely adopted), but imagine how powerful a universal recommendation engine across all types of content (and other choices we make) could be. Again, there are legitimate concerns about a world of excessively personalized discovery, as Pariser argues in The Filter Bubble – ideally, we’d always be quite conscious of recommendation and decide when to switch it on and off. But at the very least, I bet we’d watch a lot less bad TV.

As always, your feedback is welcome.

Dissatisfaction Is the Mother of Innovation

My friend Dave is a great guy, but terrible to go to restaurants with. Invariably, he ends up peppering the server with endless questions, trying to order things not on the menu, and complaining about the food once it’s served.

People like Dave may be hard to dine with, but they can be great for innovation. That’s because they’re continually dissatisfied with what’s available, looking instead for an ideal experience. The best innovators utilize several techniques to understand consumer dissatisfaction – and then use that understanding to drive innovative ideas.

Listen to problems, not solutions

Recently, many have cited Henry Ford (who famously quipped, “If I had asked people what they wanted, they would have said faster horses”) and Steve Jobs (“You can’t just ask customers what they want and then try to give that to them”) to make the case that listening to customer feedback is pointless. But as Ted Levitt, Tony Ulwick, and others have argued, while customers are notoriously bad at coming up with solutions to their own problems, their actual difficulties and complaints – the problems themselves – are a goldmine for observant researchers. That’s why management gurus like Clay Christensen and Gary Hamel have advocated listening not only to your core (and presumably satisfied) customers, but to those on the fringe – the unhappy non-users and complainers. And the louder they whine, the better.

Map out dissatisfaction

To better understand consumer dissatisfaction, author and consultant Adrian Slywotzky has advocated creating a “hassle map” – laying out the entire customer experience with a product or service to pinpoint where customers become frustrated by wasted time and effort. Far too many companies focus solely on adding exciting features to the product itself; great innovators instead often aim to eliminate irritating aspects of the experience. For example, Apple’s most successful products have often reduced hassle in the customer experience as much as they’ve added new capabilities. Through Visual Voicemail, the iPhone improved the bothersome process of navigating phone messages. The iPad greatly reduced both lengthy computer start-up time and the painful need to frequently recharge (through its hugely extended battery life). Most recently, the iCloud service aims to eliminate the irritating need to sync Apple devices using cords. Contrast these improvements with those of other PC-makers in recent years, who focused on adding security features, hundreds of gigs of storage, cameras, etc.

Imagine the ideal

P&G’s consumer researchers have been known to put on “futurist exhibits” to help spur innovative product concepts. After extensive consumer observation and discussion, researchers mock up nonworking but clever products in answer to the question: “How might consumers solve this problem in 50 years?” For example, rather than using an imperfect product that P&G offers today, perhaps the consumer of the future will simply swallow a pill annually to prevent hair from going gray, press a button to have house walls suck away dirt, or drink a tasty beverage to automatically clean his or her teeth. While these Jetson-like inventions may seem far-fetched, the brilliance of the “in the future” conceit is that it allows P&G innovators to forget today’s technical limitations and instead imagine what a perfectly simple and effective solution could look like. Who doesn’t like to imagine a frustration-free future?

Through these and other methods, companies can use consumer dissatisfaction to drive better innovation. A twist on the old maxim is appropriate: Don’t let today’s ‘good enough’ be the enemy of ‘better yet…’ And if you learn to love customer dissatisfaction, you may even be able to put up with a whiner like Dave.