8 Laws Driving Success In Tech: Amazon’s 2-Pizza Rule, The 80/20 Principle, & More – CB Insights Research

What separates success from failure? These 8 laws contain some of the most influential ideas that the biggest tech companies use to run their operations, design business models, and build products.

Since the early days of computers, the rises and falls of tech companies have inspired countless theories about what drives success — and predicts failure — in the fast-moving world of startups. While many such theories fall flat, a few have become well-regarded descriptions of how the tech business works.

These tech “laws” describe why a network’s value grows with the square of its number of participants (Metcalfe’s Law), why most marketing channels eventually produce diminishing returns (the Law of Shitty Clickthroughs), or why starting with simple iterations leads to better long-term products (Gall’s Law).

Some, like Moore’s Law, have been extremely prescient. Others, like Conway’s Law, provide counterintuitive insights, such as why Apple’s corporate structure is key to its user experience.

In this report, we look at where 8 of the most famous tech laws come from, why they matter, and how they help describe some of tech’s biggest successes.

Table Of Contents

  1. Moore’s Law: The self-fulfilling prophecy that ushered in the digital age
  2. Metcalfe’s Law: Why big networks produce colossal winners
  3. Gall’s Law: Why the best products are built from simple systems
  4. The Two-Pizza Rule: Why small teams lead to big success
  5. Conway’s Law: Why corporate structure is vital to product development
  6. The Law of Shitty Clickthroughs: Why innovative marketing is better than expensive marketing
  7. Zimmermann’s Law: How free products can build rich businesses
  8. Pareto Principle: Why startups can raise capital even though most will eventually fail

1. Moore’s Law: The self-fulfilling prophecy that ushered in the digital age

In 1965, Intel co-founder Gordon Moore made a prediction.

Moore had observed that, every year, semiconductor manufacturers were able to double the number of transistors on a single microchip and halve the cost of building them.

When he was asked by Electronics magazine to contribute his perspective on the future of semiconductor manufacturing, Moore said this trend would continue. And thanks in large part to advances in transistor miniaturization — most notably, the advent of the MOSFET transistor type — Moore’s prediction panned out.

Today, we call his observation Moore’s Law.

How Moore’s Law predicted chip production at Intel for decades

Over the decades following Moore’s prediction, the number of transistors on the average Intel chip grew exponentially, from a bit over 2,000 in 1971 to 8B in 2017.

While this progress relied on advances in technology and manufacturing to make ever-smaller transistors, it also became a kind of self-fulfilling prophecy in the industry.

For decades, the heads of rival semiconductor companies used Moore’s Law to construct their annual production goals, according to Ethan Mollick of the MIT Sloan School of Management.

This approach pushed them to increase the number of transistors on their chips — mainly out of the fear of being left behind by companies like Intel.

The number of transistors on a variety of integrated circuit chips since 1970, shown on a log scale. Source: Our World in Data

Gordon Moore did revise his prediction slightly in 1975. Instead of doubling every year, he predicted the power of integrated circuit chips would double every two years. And this math proved a relatively accurate guide to growth in the semiconductor industry until just a few years ago, when Intel itself announced that it would “slow the pace” of new chip releases because of the increasing difficulty of continuing to shrink its transistors in a cost-effective manner.
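
As a back-of-the-envelope check, here is a minimal sketch (assuming a perfectly constant two-year doubling period, which is a simplification) of what Moore’s revised prediction implies for the figures cited above:

```python
# A minimal sketch of Moore's revised prediction: transistor counts
# doubling every two years. The perfectly constant doubling period is
# a simplification for illustration.

def moores_law(base_count: int, base_year: int, year: int,
               doubling_years: float = 2.0) -> float:
    """Project a transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Intel's first microprocessor (1971) had roughly 2,300 transistors --
# the "bit over 2,000" figure cited above.
projected = moores_law(2_300, 1971, 2017)
print(f"Projected for 2017: {projected:,.0f} transistors")  # ~19.3B
# The ~8B actually shipped in 2017 is within the same order of
# magnitude, consistent with the doubling pace having slowed.
```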

Samsung, Nvidia, and AMD — a few of Intel’s biggest competitors — have all since come out with chips that have more than 8B transistors. And in 2019, Amazon showcased its Graviton2 processor, which contained 30B transistors on a single chip.

Despite competitive pressure and the limits of Moore’s Law, Intel argues that the true importance of Moore’s Law is not the literal technological feats it describes but its overall ethos of exponential improvement.

“If you go look at Moore’s Law, Moore’s Law was never one thing. There was transistor architecture, strain, materials, 3D architecture… Every single component of shrinking the transistor changed year over year. […] Imagine a business where a million people wake up every day, working on making things smaller and better. It was that collective belief system about this [that helped] the inventions keep coming.” – Jim Keller, senior vice president of Intel’s Silicon Engineering Group

Quantum computing and the new Moore’s Law

As Moore’s Law slows in the world of traditional computer architecture, one place companies like Intel see the future is quantum computing.

In classical computing, a “bit” can have a value of 0 or 1, and whether it is one or the other is represented by the state of a transistor.

Quantum computers, however, don’t use traditional transistors or bits. The basic unit of processing in quantum computing is the qubit, a quantum state that can represent multiple values at the same time. Stringing qubits together could allow some computations to be run much faster than otherwise possible, as the information represented by the qubits can be processed simultaneously.

Quantum computing is still years away from being useful in practice and won’t replace classical computing in many contexts. But companies like IBM and D-Wave are already providing access to early quantum computing infrastructure — which would be highly expensive for most companies to operate themselves — through the cloud. Google, one of the field’s leaders, wants to build a commercial quantum computer within a few years.

Thanks to the combined efforts of companies like IBM, D-Wave, and Google, as well as researchers around the world, the number of qubits inside quantum computers has increased at a steady rate over the last 2 decades. In 2018, when Google announced it had built a 72-qubit quantum computer, the qubits chart suddenly started to look more and more like the beginning of an exponential curve.

In 2018, Hartmut Neven, director of the Quantum Artificial Intelligence lab at Google, made a similar observation to Moore’s: quantum computers were developing at a “doubly exponential” rate when compared with classical computers.

This double exponential effect comes from a qubit’s ability to represent multiple bits of information at the same time: for every qubit added to a quantum computer, its power increases exponentially. As such, an exponential increase in qubits can, in theory, result in a double exponential increase in computing power.
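
A toy calculation makes the compounding visible. The generation counts below are hypothetical, but they show how an exponentially growing qubit count implies a doubly exponentially growing state space:

```python
# Toy illustration of doubly exponential growth in the spirit of
# Neven's observation; the generation counts are hypothetical.

for generation in range(1, 7):
    qubits = 2 ** generation   # qubit count itself grows exponentially
    states = 2 ** qubits       # each added qubit doubles the state space
    print(f"gen {generation}: {qubits:>2} qubits -> 2^{qubits} = {states:,} states")
```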

Double exponential growth could lead to rapid advancements that far outperform what came immediately before. Neven summed it up as such: “it looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world.”

Some experts believe this observation, now known as Neven’s Law, may turn out to be the new Moore’s Law.

Takeaway

Moore’s eponymous law — what Intel later called “the golden rule for the electronics industry” — presaged and, in some ways, propelled a revolution in both the performance and price of computing power. Even today, when Moore’s Law may finally be reaching its limit, a successor to Moore’s Law — that predicts similar exponential increases in computing power — is emerging to help define the next age of computing.

2. Metcalfe’s Law: Why big networks produce colossal winners

In 1980, Robert Metcalfe — one half of the duo behind Ethernet, a technology for connecting computers together — observed that communications networks increased in value in proportion to the square of the number of connected users.

One telephone in a city was useless. Even a few hundred, dispersed across millions of inhabitants, wasn’t very useful. But once you had friends and family who owned telephones — and restaurants and movie theaters and stores all had telephones — it was suddenly very desirable to own one.

Metcalfe was working at 3Com, a computer network company he co-founded, when he came up with the idea. At the time, he was trying to understand why his company’s local area network (LAN) starter kits — which allowed 3 PC users to share a printer and a hard disk — weren’t selling.

What he found wasn’t necessarily that the cost was too high, but that 3 people didn’t constitute a sufficiently large network to justify the purchase.

Like early installations of the telephone, the cost of connecting the network initially exceeded the value generated — but there would also be a point of critical mass where the benefits of connecting did exceed the upfront investment.

How Facebook leveraged Metcalfe’s Law to become a social media giant

When Facebook first launched at Harvard University in 2004, it was like a single telephone: pretty useless. But it wasn’t useless for long. Within a month, 50% of the student body was on Facebook.

This was partly a case of pent-up demand: every year, Harvard released a physical “face book” that included every student and faculty member on campus, designed to help everyone at the school get to know one another. “TheFacebook,” as the social network was then known, was an attempt to digitize this already existing physical object and make frictionless the process of snooping on other people’s photos.

The effect, however, was this: with every new Harvard student that joined the platform, it became more valuable for other students to join too. Each new student promised new people to look at, learn about, and “poke.”

As Facebook grew, the company added features that were designed to tap into the “more is better” mechanics of Metcalfe’s Law. Photos, groups, likes, comments — they all leveraged this concept to bring users back into Facebook again and again. And the more people there were on Facebook, the more photos were uploaded and tagged, the more groups were created, and so on.

The non-linear effects of Metcalfe’s Law would also contribute to the long-term viability and success of Facebook’s advertising business. The more people on the platform, the more data and connections, the greater the revenue opportunity in targeted advertising.

Metcalfe himself acknowledged the impact of this law on Facebook’s growth, observing that if Facebook’s revenue was used as a proxy for its value, it was impossible to deny that the network’s value had risen exponentially compared to the steady linear growth of its user base.

Death spiral: what happens when networks don’t reach critical mass

According to Metcalfe, a network’s critical mass is a function of the cost of a new connection (e.g. the cost of acquiring a user), the number of users, and the value of each connection.

This approach describes a few mechanics key to network effects. The lower your cost per connection, for example, the lower the number of users that you’ll need to hit critical mass — and the same goes for a higher value per connection.
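
A minimal sketch of that critical-mass arithmetic, assuming (per the usual reading of Metcalfe’s Law) that value grows with the square of the user count while costs grow linearly; the constants here are hypothetical:

```python
# A minimal sketch of Metcalfe-style critical mass. Assumes, per the
# usual reading of the law, that network value grows with the square of
# the user count while connection costs grow linearly. The constants
# are hypothetical.

COST_PER_USER = 50.0    # cost of acquiring/connecting one user
VALUE_PER_LINK = 0.10   # value created per pairwise connection

def network_value(n: int) -> float:
    return VALUE_PER_LINK * n * (n - 1) / 2  # n(n-1)/2 possible links

def network_cost(n: int) -> float:
    return COST_PER_USER * n

# Critical mass: the smallest n at which value overtakes cost.
n = 1
while network_value(n) <= network_cost(n):
    n += 1
print(f"Critical mass under these assumptions: {n:,} users")  # 1,002
```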

Andrew Chen, partner at Andreessen Horowitz, has written not only about how the mechanics of Metcalfe’s law are key to growth at companies like Facebook, Twitter, and Snap, but also how the law can work against these networks if user retention isn’t high enough.

In one scenario, each new user that comes to a platform makes the experience better for everyone else. Growth is self-perpetuating because not only are users encouraged to stick around, but there’s an increasing value proposition to entice new users at a steady clip.

In another, not-so-good scenario, new users may still be joining and helping generate the benefits of network effects (for a time), but the network’s retention isn’t great.

If a network starts losing too many users, it can get stuck in what he calls a “social network death spiral.” Because while the forces described by Metcalfe’s Law can help startups gain lots of users quickly, they can also cause those startups to lose users at the same pace. In other words, “as you lose users, the value of your network decays exponentially.”

What’s lacking in this reverse-network effect scenario is the value offered by joining the network. If that value is too low, retention will be low, and it becomes difficult to reach critical mass. Meanwhile, if the value is high, your retention will likely be high as well.

Chen’s death spiral provides a way of understanding what happens when social networks take off for a time but ultimately don’t give each new user sufficient value to stick around — a fate that has befallen prospective social networks from Ello to Path to Peach. These apps often launch with a bang, attract initial interest, and then rapidly lose their user base as the promise of the network fails to materialize.

Takeaway

The telephone, the fax machine, and early LAN networks had what venture capitalists today call “network effects.” The more people that were on these networks, the more valuable it was for new users to join.

This is one of the most important dynamics for businesses in the 21st century, especially in tech. Many of the most valuable businesses of the last several decades, from Microsoft to Facebook to Airbnb to Uber, have succeeded in part thanks to the power of network effects.

While building a business through network effects was valuable before the internet, the internet has made it possible for companies to grow their user base at exponentially faster speeds, often building their entire business models on this growth.

3. Gall’s Law: Why the best products are built from simple systems

In 1975, the American author and pediatrician John Gall made an observation about systems that would go on to become hugely influential in computing.

“A complex system that works is invariably found to have evolved from a simple system that worked,” he wrote. “A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”

This idea, originating in his book “General Systemantics,” would eventually become known as Gall’s Law. Since then, it has become one of the most important and foundational ideas in tech, especially when it comes to the design of new products.

How Gall’s Law fosters products that people actually want to use

One concept that can help illuminate the power of Gall’s Law is the idea of emergence: the notion that complex systems possess properties that the individual constituents of the system do not.

Powerful technologies can be built when users can manipulate and recombine individual pieces in order to create something of greater value.

Twitter, for example, was founded without reply or retweet functions. All users could do was post 140-character messages to their feeds.

Then Chris Messina proposed the idea of using a hashtag to collect all tweets relating to a single topic, and not long after, hashtags were added into the Twitter platform itself.

Replies and retweets followed later as the Twitter team realized that their users were trying to approximate these actions within their constrained set of possible behaviors.

Twitter could have predicted that users of a public broadcasting app would want some way to share content with their followers, reply to other people’s messages, and even organize information along topical lines.

But cultivating an openness to what users actually did, rather than assuming they knew exactly what to build, was key to how Twitter developed into the product it would become. As co-founder Evan Williams put it, it wasn’t even clear “what [Twitter] was” in those early days, and the product took several turns before settling on what it is now.

This kind of emergence-friendly development process is more or less conventional wisdom in Silicon Valley today, with the creation of a “minimum viable product” (MVP) being standard practice within product businesses.

The thinking behind the MVP is simple and echoes the development processes followed both by Amazon in inventing AWS and by the team at Twitter: build a version of your product with just enough functionality that it can engage your core users. Roll that initial version out and start gathering data and feedback. Then, use that data and feedback to chart the path forward. When done right, the result is what Gall’s Law predicts: a complex system that has arisen from the organic growth of a much simpler system.

How Amazon followed Gall’s Law to dominate cloud services

The development of Amazon Web Services (AWS) from an internal data storage service to a crucial backbone of the internet encapsulates the power of Gall’s Law.

AWS emerged from a relatively simple desire: Amazon needed a more modular system for its internal engineering work.

Every time a team at Amazon wanted to build out a new web feature, whether for internal use or for a retail partner, Amazon developers would end up building everything — from databases to computing power to storage — from scratch.

That’s when the idea emerged to build out each one of these functions as a modularized service. For one, the team had become highly competent at working on these functions. For another, the Amazon team was operating at a unique scale, being one of the biggest early internet companies.

It soon became clear that Amazon had an equally unique opportunity to make databases, computing, and storage easier for other companies and other developers to use.

“We tried to imagine a student in a dorm room who would have at his or her disposal the same infrastructure as the largest companies in the world,” Andy Jassy, head of AWS, told Brad Stone, author of “The Everything Store.” “We thought it was a great playing-field leveler for startups and smaller companies to have the same cost structure as big companies.”

While AWS today appears to be an intimidatingly complex system, it emerged from the interaction of a very simple set of modules designed for Amazon to use internally.

The concept was heavily inspired by “Creation,” a book by game designer Steve Grand. Grand’s design approach centered on building simple creatures, then sitting back and “watching surprising behaviors emerge.” For Amazon CEO Jeff Bezos, this idea proved perfectly applicable to computing.

On the influence of “Creation” across Amazon’s leadership, Brad Stone writes,

“If Amazon wanted to stimulate creativity among its developers, it shouldn’t try to guess what kind of services they might want; such guesses would be based on patterns of the past. Instead, it should be creating primitives — the building blocks of computing — and then getting out of the way. In other words, it needed to break its infrastructure down into the smallest, simplest atomic components and allow developers to freely access them with as much flexibility as possible.”

Today, Amazon Web Services drives most of Amazon’s operating profit and about 15% of its overall revenue.

Feature bloat: what happens when companies flout Gall’s Law

Tech history is full of products that failed because they went the opposite way of Twitter and Amazon: they built over-engineered products, bloated with features, that lacked enough simple utility for users.

One of the classic examples of feature bloat bringing a product down is ICQ, the once-popular instant messaging client. In a 2007 blog post, Robert Scoble attributed the service’s sudden decline mainly to over-engineering: “It got too cluttered and stopped being developed.”

At one point, ICQ had been the king of instant messaging. In 1998, AOL bought the company behind ICQ for $287M upfront and $120M in performance-related payments. The service had more than 100M accounts registered around the time it peaked in 2001.

For Scoble, that’s where the decline, and the clutter, started. ICQ started branching out from its core utility by adding features around shopping, music, games, and even careers, resulting in a busy interface that felt removed from the purpose of the product.

At the same time, new products like Facebook were emerging that allowed people to more organically connect with their friends outside the constraints of a messenger.

“Feature bloat is how most consumer web and desktop products suffocate themselves,” Dropbox co-founder and CEO Drew Houston has said. “ICQ became so comically bloated that they released an ‘ICQ Lite’ version, but by then they were already on the decline.”

Releasing a “lite” version of a product might make sense if you’re differentiating it from a “premium” version that serves an audience with more complex needs, but ICQ and ICQ Lite were the same basic product for the same basic target user.

The release of a “lite” version, in this case, could be seen as a way to actually offer a superior product — an attempt to cut the feature bloat that had crept in over the years.

Takeaway

Today, with free trials, SaaS, and subscription business models in vogue, it’s more sensible than ever to produce a minimum viable product (MVP) and iterate on it based on user feedback.

While companies should avoid feature bloat, aiming for a gradual evolution from simple functionality is a way to build a growing product that actually reflects users’ needs.

And according to Gall’s Law, products that start small — as simple, effective systems — stand the best chance of becoming powerful businesses built on high-functioning, complex systems.

4. The Two-Pizza Rule: Why small teams lead to big success

Jeff Bezos didn’t just leverage the basic insight of Gall’s Law to develop what has become Amazon’s fastest-growing internal unit — he is also responsible for popularizing one of his own tech rules.

In early 2002, Bezos decided that to reduce communication overhead and improve productivity, all of Amazon would be re-organized into so-called “two-pizza teams” — squads small enough that two pizzas could fully feed them when working late in the office.

While this has also been interpreted as a rule for meetings — either capping the maximum size of meetings at Amazon or capping the size of meetings that Bezos himself will attend — the original formulation was an edict about team size.

Why two-pizza teams gave Amazon a strategic advantage

For analyst Benedict Evans, the two-pizza rule has been a crucial structural advantage for Amazon.

The root of the two-pizza team idea was Bezos’ attitude toward both centralization and communication. Bezos wanted Amazon to remain nimble even as it grew, and that started with encouraging independent decision-making rather than an over-reliance on hierarchy.

And Bezos hated the overhead created by excess communication. In “The Everything Store,” Brad Stone quotes Bezos as saying that “communication is a sign of dysfunction. It means people aren’t working together in a close, organic way. We should be trying to figure out a way for teams to communicate less with each other, not more.”

The main advantage this conferred upon Amazon was simple: smaller and independent teams meant the ability to spin up new teams much faster, giving Amazon the power to scale more cheaply, explore new ideas easily, and ultimately ship more products to customers.

While many of the new initiatives spearheaded by these small teams have failed, some — like Prime, Kindle, and AWS — have become core businesses for Amazon.

For Bezos, it comes back to the idea of building a corporate structure that can generate the maximum amount of innovation for customers. Big, centralized teams maintain companies. Small, autonomous teams find new ideas.

“We have the good fortune of a large, inventive team and a patient, pioneering, customer-obsessed culture — great innovations, large and small, are happening every day on behalf of customers, and at all levels throughout the company,” he wrote in his 2013 shareholder letter. “[Decentralized] distribution of invention throughout the company — not limited to the company’s senior leaders — is the only way to get robust, high-throughput innovation.”

In other words, as Benedict Evans writes, “you don’t (in theory) need to fly to Seattle and schedule a bunch of meetings to get people to implement support for launching make-up in Italy, or persuade anyone to add things to their roadmap.”

There is a machine, and then there is a machine to build the machine, says Evans. And Amazon’s two-pizza rule ensures that the company, aside from being a ubiquitous tech leviathan, can operate a machine to build more Amazons.

Spotify’s squads combine autonomy with responsibility

The concept of small, autonomous teams has become extremely popular in the tech world over the years, and some of the biggest and most successful companies have adopted similar ideas to the two-pizza rule to help make their people more efficient and productive.

Spotify is one example of a company that abides by the smaller-is-better rule when it comes to organizational structure.

The music streaming app organizes teams into 8-person squads, with the function of each squad determined autonomously, rather than being directed from above. Each squad functions as a mini-startup inside Spotify: cross-functional, self-sufficient, and sharing the same geographical location.

One unusual feature of Spotify’s squads is the lack of a designated leader or manager, with each squad member equally responsible for results. In the vein of autonomous operations, any leader of these squads emerges organically and informally. The company uses the squad structure to encourage greater innovation, disruptive thinking, and complete accountability, without the burden of excessive control.

Squads with similar functions across the organization are then grouped under tribes, chapters, and guilds. While tribes act as incubators and provide support for the development of squads, chapters connect employees based on specific skill sets such as web development and quality assurance. Guilds are a way to encourage knowledge sharing across the organization, irrespective of squad, function, or location.

This organizational structure is designed to support a bottom-up approach at Spotify. For example, best practices at the music streaming company only emerge after enough squads have adopted them.

Furthermore, by keeping teams highly autonomous, the structure “lowers the cost of failure” through ensuring that “failure has a ‘limited blast radius’ and affects only part of the user experience,” according to Michael Mankins and Eric Garton of the Harvard Business Review.

In the vein of Bezos’ idea of excessive communication being wasteful, Spotify looks to make communication more efficient by seating squads, guilds, and tribes that need to coordinate closer to each other.

Not everyone is satisfied with two pizzas

According to organizational psychologist J. Richard Hackman, the more interconnections you have between people, the slower your decision-making and the higher the management costs for the organization.

As you add people to an organization, the number of possible communication links grows quadratically: a group of n people has n(n-1)/2 pairwise links.

“The cost of coordinating, communicating, and relating with each other snowballs to such a degree that it lowers individual and team productivity,” writes blogger Janet Choi. Two-pizza teams solve this inherent scaling problem by artificially capping the number of links.
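
The arithmetic behind that cap is simple, and a quick sketch shows how fast the overhead piles up:

```python
# Pairwise communication links in a team of n people: n(n-1)/2.

def links(n: int) -> int:
    return n * (n - 1) // 2

for size in (5, 8, 12, 25, 50):
    print(f"{size:>2} people -> {links(size):>5} possible links")
# 5 -> 10, 8 -> 28, 12 -> 66, 25 -> 300, 50 -> 1225
```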

Despite the organizational clarity derived from the two-pizza team, not everyone at Amazon and in the wider tech world is a fan of the rule.

Some former Amazon employees and others believe the strategy was counterproductive, and some think that building products using two-pizza teams creates a disconnected user experience.

According to Brad Stone, the concept of two-pizza teams was unevenly applied throughout Amazon. It took root most of all in engineering, while the idea barely touched the finance and legal departments.

Additionally, each team had to set up its own “fitness function” — some kind of quantitative, linear equation that could be used to judge whether that team succeeded or failed in its mission. For a marketing team, that might be the average email blast open rate multiplied by ensuing order value.

Yet, Stone writes, making some teams define this function for themselves was like “asking a condemned man to decide how he’d like to be executed.” For others, it was merely ineffective.

Kim Rachmeler, former VP at Amazon, said, “Being a two-pizza team was not exactly liberating. […] It was actually kind of a pain in the ass. It did not help you get your job done and consequently the vast majority of engineers and teams flipped the bit on it.”

Small, disconnected teams run the risk of producing disjointed products that don’t contribute to a seamless experience, according to product management consultant Matt LeMay.

For LeMay, the most important aspect of a product for a customer is not the individual features of a product, but how those features come together to deliver a cohesive experience.

Given the vast number of products that Amazon has put out over the years, and how cluttered many of its website’s menus can seem, it’s fair to say that Amazon’s drive for two-pizza teams has not been without its drawbacks.

Takeaway

Jeff Bezos wanted small, autonomous teams that could be “independently set loose on Amazon’s biggest problems,” as Brad Stone writes. These teams wouldn’t have to waste cycles communicating with other teams, and they would each have all the resources and people necessary to launch new products. The result, Bezos thought, would be more creative offerings and faster results for customers.

Though not without its detractors, Bezos’ idea that over-communication between teams risks stoking inefficiencies is now commonly implemented at companies like Google and Spotify.

5. Conway’s Law: Why corporate structure is vital to product development

In 1967, the computer scientist Melvin Conway made a key observation about organizational structure.

The way that a team communicated and the design of that team’s products, Conway argued, were mirrored — one always reflected the other.

A droll illustration of Conway’s Law. Source: Manu Cornet, bonkersworld.net

A simple illustration of Conway’s thesis: imagine two pieces of software, A and B. If the developers of the two don’t communicate, there’s no easy way for the software to integrate.

When communication between the developers occurs often and openly, on the other hand, the odds of a seamless experience are far greater.

How Apple produces an end-to-end customer experience

At Apple, teams are organized according to what’s called the Unitary Organizational Form. The basic idea — rooted in Conway’s Law — is that the company should be organized around functional expertise rather than products.

That means that instead of dedicated teams for products like the iPhone, the Mac, or the iPad, Apple has teams that work on design, teams that work on engineering, teams that work on marketing, and so on.

This structure encourages coordination between teams and helps Apple deliver a unified experience across products. No product is ever released that departs from the predominant Apple design, engineering, or operational paradigm. Even its credit card has an unmistakable Apple “feel.”

After the iPad first launched, Steve Jobs said this “post-PC device” needed to be “even easier to use than a PC” and “even more intuitive than a PC, […] where the software and the hardware and the applications need to intertwine in an even more seamless way than they do on a PC. […] We think we have the right architecture not just in silicon but in our organization to build these kinds of products.”

Reorganizing Apple along functional rather than divisional lines was one of the first things Jobs did upon returning to Apple in 1997, and his successor at Apple, Tim Cook, still attributes the success of the company’s products to this move.

“We’ve found a way to make our products such that the experience is jaw-dropping,” Cook told Businessweek.

Why GitHub is structured like an open-source project

GitHub provides an example of a company that obeys Conway’s Law while doing so in a remarkably different way from Apple.

Instead of integrating in order to promote an end-to-end experience, GitHub is intentionally structured like one of the open-source projects the service hosts: decentralized, autonomous, and asynchronous.

That structure reflects the kind of product GitHub has built — one designed for developers more than managers — as well as how that product works.

The GitHub tool is built for asynchronous collaboration: new code can be submitted anytime from anywhere, then reviewed at the responsible party’s leisure. Developers across the globe can use GitHub to collaborate on a project without having to deal with overlapping codebase changes or inconsistencies between their work.

The company itself is also built for asynchronous collaboration, with many of its basic organizational tenets ripped directly from the processes of open source development.

There are no codified standards about what time to come into the office, and most work is completely self-directed. “If you’re interested in working on something, then work on it,” wrote Zach Holman, one of the first engineers at GitHub.

Members of the team are spread out across the world, there are no daily stand-up meetings, and most of the communication you have with your colleagues happens asynchronously, over chat or email.

Collaborating asynchronously to the extreme is a way for the team at GitHub to stay close to one of the biggest problems they want to solve with the company: the difficulty of collaborating asynchronously when working on software projects. One of the ways they take on this problem is through “open, easy-to-use platforms” — precisely what GitHub itself is trying to build.

Apple, in other words, uses an integrated organization in order to build products that give the customer a seamless, end-to-end experience. GitHub uses an organization structured like an open-source project because its goal is to give its user base of developers a collaboration platform that allows distributed, decentralized teams to build great products.

Takeaway

Conway’s Law helps explain not just how companies operate — and how their structures enable or hinder business activity — but also how they are managed.

As computer scientist Fred Brooks has pointed out, the structure an organization naturally takes on as it grows is unlikely to be the ideal system for delivering its good or service. Staying structurally flexible is therefore essential.

The people that run companies, in other words, must consider organizational design on a similar level as operations, R&D, and products. Much of what we attribute to the latter is rooted in the former.

6. The Law of Shitty Clickthroughs: Why innovative marketing is better than expensive marketing

Coined by Andreessen Horowitz partner Andrew Chen, the Law of Shitty Clickthroughs states that new marketing channels, no matter how useful at first, gradually lose their effectiveness over time.

Emails, waiting lists, referrals, search engine marketing — all obey the Law of Shitty Clickthroughs. Chen gives 3 main reasons for this trend:

  • Novelty: The first time you use a new marketing technique on a customer, you can get them to respond out of curiosity. Humans are attracted to novelty, but they also quickly recognize patterns.
  • Fast followers: Once news of a new channel’s effectiveness gets out, competitors are fast to provide similar services, and customers start to get fatigued.
  • Scale: While early adopters may respond positively to novel marketing efforts, the mass market will be more hesitant.

For example, Chen notes that when banner ads debuted on the website HotWired in 1994, their clickthrough rate (CTR) was 78%. They were novel; no one had ever seen anything like them. But soon, every website had banner ads. By 2011, the CTR for banner ads on Facebook was about 0.05%, according to Chen.
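
As a rough illustration of that decay, and nothing more, fitting a constant annual rate between Chen’s two endpoints implies CTRs shrinking by roughly a third every year:

```python
# A constant annual decay rate fitted to Chen's two endpoints. This is
# purely illustrative; the real trajectory was certainly bumpier.

start_ctr, end_ctr = 0.78, 0.0005  # 78% in 1994 -> 0.05% in 2011
years = 2011 - 1994                # 17 years

annual_retention = (end_ctr / start_ctr) ** (1 / years)
print(f"Implied annual CTR retention: {annual_retention:.2f}")  # ~0.65
print(f"Implied annual decline: {1 - annual_retention:.0%}")    # ~35%
```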

When D2C user acquisition met the Law of Shitty Clickthroughs

With the advent of digital marketing, it became relatively cheap to acquire new customers. Brands no longer needed to spend on billboards, print, and broadcast ads, nor did they need inventories or expensive shiny catalogs to attract customers.

Facebook and Google, with their massive and broad user bases, offered brands a particularly cheap channel for mass customer acquisition — and few industries took to it as well as the burgeoning direct-to-consumer (D2C) market.

For companies like Warby Parker and Bombas, digital ads were pivotal to their rapid growth. Companies could reportedly acquire customers for as little as $10 apiece. In 2015, D2C luggage startup Away was making $5 for every $1 that it spent on Facebook, according to co-founder and former CEO Steph Korey.

A key driver of this success was the fact that these digitally native brands, often led by founders or co-founders with technical backgrounds, were effective at leveraging the powerful targeting options available on platforms like Facebook.

The contact lens startup Hubble, for example, found many of its first customers through Facebook’s Lead Ads, which let users send their already-uploaded email addresses to a company. Running Lead Ads became Hubble’s most powerful early sales generator.

Doing this kind of advertising on a traditional website would have required prompting users for their email address and having them manually type it out. On Facebook, it was all done in a single click.

The problem for Hubble was that Lead Ads eventually started to become less effective. So did all the other ad types that Hubble tried using. “No matter what new ads they put in an ad set, the growth rate of sales declined and the cost per acquisition went up,” Burt Helm wrote in the New York Times.

This gradual decrease in effectiveness — with growth faltering and costs increasing — hasn’t been isolated to Hubble. Customer acquisition costs on Facebook and Google have increased across the board by as much as 3x over the last few years, according to Digiday.

For some companies, the solution has been to get off digital ads altogether. After the beverage brand Iris Nova saw its advertising costs on Facebook increase, it decided to cut its spend entirely and focus on events and physical expansion.

Other brands are increasingly joining forces to acquire customers together, with some teaming up to offer joint product lines, events, and giveaways.

In September 2019, for example, beauty brand Glossier partnered with dog-walking startup Bark to launch a line of Glossier-branded toys for dogs. This approach appeals to both companies because the brands are not direct competitors, the risk of poaching customers from each other is limited, and both brands get to target a demographic that overlaps with their target audience.

However, collaborations are still not a big part of most D2C brands’ customer acquisition strategies, as it can be tough to find complementary product fits that don’t pose a competition risk.

How Axios bucks the Law of Shitty Clickthroughs by focusing on engagement

While email marketing is just as subject to the Law of Shitty Clickthroughs as banner ads, the news startup Axios looks to keep its various email newsletters effective through its focus on customers with high lifetime value.

A big mistake media companies tend to make with email is to treat top-line open rates as a sacred metric, according to Rameez Tase, VP of growth for Axios.

If some readers open a newsletter 100% of the time and another group opens it 0% of the time, the meaningful takeaway isn’t a single 50% open rate, he argues. Instead, there are 2 cohorts: one that is highly engaged and one that isn’t.
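
The arithmetic behind Tase’s point is simple; here is a minimal sketch with hypothetical cohort sizes:

```python
# Hypothetical cohorts: a blended top-line open rate hides two very
# different groups of readers.

cohorts = [
    {"name": "highly engaged", "subscribers": 50_000, "open_rate": 1.00},
    {"name": "disengaged",     "subscribers": 50_000, "open_rate": 0.00},
]

total = sum(c["subscribers"] for c in cohorts)
blended = sum(c["subscribers"] * c["open_rate"] for c in cohorts) / total
print(f"Top-line open rate: {blended:.0%}")  # 50%, yet no individual reader behaves that way
```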

For Axios, focusing on how to improve engagement with each specific cohort — instead of obsessing over a top-line metric like opens — is key to building a sustainable business that can overcome the natural decay of the Law of Shitty Clickthroughs.

What Axios has found is that members of the highly engaged cohort (those with high newsletter open rates) tend to be similar people in terms of profile. They’re of above-average income, they’re professionals, and they tend to be decision-makers in media and government. This is Axios’ ideal customer profile in terms of customer lifetime value.

The more readers like this that Axios gets, the more opportunities it has to sell its premium membership products, acquire big sponsorships, and expand its business.

To grow this high-value segment, the company encourages its most engaged subscribers to share the newsletter with others. For example, Axios uses social retargeting, ambassador reward plans, and surveys — whatever will generate the “motivation […] to share this product with [readers’] personal or professional networks.”

But it’s not just about sharing — it’s about the kinds of people that readers in this cohort tend to share Axios’ newsletters with. Getting high-value subscribers to share the newsletter, Axios has found, tends to bring in other subscribers like them. “An 80 percent open rate user begets another 80 percent open rate user,” Tase says.

Takeaway

In an effort to outrun the Law of Shitty Clickthroughs, marketing channels and advertising platforms are always looking to develop new kinds of targeting methods and formats.

At Facebook, the race has generated new features ranging from geotargeted ads to Custom Audiences to Lead Ads to split testing. Each new type of feature Facebook puts out promises greater specificity, greater results, and less competition.

But what may be more important in determining marketing success is how companies define success in the first place. Campaigns built around a deep understanding of a company’s most valuable and engaged customers, rather than around raw clicks and open rates, could be a way to sidestep the Law of Shitty Clickthroughs and ensure the most reward for each marketing dollar spent.

7. Zimmermann’s Law: How free products can build rich businesses

In 2013, email encryption pioneer Phil Zimmermann stated that the natural trajectory of technology today was to move in the direction of making surveillance easier, with computers’ estimated ability to track users doubling — in a wry nod to Moore’s Law — every two years.

Data is frequently said to be the most valuable commodity in the world, with some of the most successful companies in recent history capitalizing on it — often by offering free services in return for user data.

How data collection made Facebook one of the world’s most valuable companies

Google was the first company that found massive success providing a free service in exchange for valuable user data, but Facebook took that formula and applied it to keep tabs on user behavior on its platform.

Facebook’s success did not lie in being a first mover, but in learning from the mistakes its competitors were making.

Years before Facebook was launched, social networking began with SixDegrees.com. The website’s founder described it as “placing your Rolodex in a central location.”

SixDegrees was followed by companies like LiveJournal, Friendster, and MySpace. At its peak in 2006, MySpace had 100M users. Facebook that year recorded only 12M users.

However, within a decade of its launch, Facebook grew to have over 1.2B monthly active users. Today, it boasts over 2.5B.

Facebook does not charge a single penny for its services. Users pay in kind through their usage data — their likes, dislikes, check-ins — all of which helps Facebook profile its users to fuel its targeted advertising business.

Over the years, Facebook’s data collection ability has steadily grown in power and resolution — as Zimmermann may have predicted — fueling the growth of the company’s ad platform and its increasing revenue.

The company’s mounting ability to extract data and understand its users has gone hand in hand with the increasing data collection potential of smartphones. The company’s founder and chief executive, Mark Zuckerberg, declared in 2014 that Facebook was “a mobile company now.” By Q2’19, nearly 94% of Facebook’s advertising revenue came from mobile ads.

Facebook achieved the counterintuitive feat of becoming one of the most valuable companies in the world by offering a free service. Through gathering behavior data, it has gained a deep understanding of its users that increases the company’s value to advertisers and allows it to deliver a more personalized service to keep users engaged. As tech improves — from health monitoring sensors to virtual reality — so too could the avenues for companies to offer free services in exchange for a better understanding of potential customers.

Snooping toasters offer companies a new stream of revenue

Smart home devices like speakers and security devices have become a part of many people’s homes, with sales of these devices on the rise since 2016.

Over the last decade, we’ve seen physical, internet-connected technology crop up in virtually every corner of the household. There are hubs (Amazon Echo, Google Home), thermostats (Nest), video doorbells (Ring), security cameras (Nest, Netgear), scales (Withings), and more.

These devices can track and store sensitive data, such as how many people live in a household or your buying history. This data improves the system’s responses to user commands and questions, in addition to helping associated services make relevant suggestions.

For example, if a user asks Alexa about the symptoms associated with pregnancy, that data can be fed into other Amazon systems and used to suggest diapers, baby formula, and other baby products when the user shops on Amazon.

These data collecting devices offer convenience and personalization to users, but at the cost of reduced privacy — a trade-off that many consumers seem willing to make.

While Amazon does not reveal detailed sales numbers for Alexa smart home devices, its vice president for devices and services said last year that it had sold 100M of these devices in total. This included third-party devices like Facebook Portal that had the Alexa smart assistant built in.

Many of these products explicitly or implicitly surveil their users. Security cameras and baby monitors do so by definition. Amazon’s and Google’s smart speakers need to always be listening for the command that triggers them to start working. But even products like smart fridges and smart TVs have been known to record their users or collect their watching habits and sell the information to third parties.

None of this should be a surprise, according to Zimmermann’s Law. It is the natural trajectory of business to seek out new ways to drive revenue from products like microwaves, televisions, refrigerators, and speakers. And now that microwaves and TVs can effectively operate as mini-computers, it feels inevitable that manufacturers would look to collect potentially valuable data — whether for resale, for product optimization, or to bring down the sticker price of the device.

The topic of surveillance in regard to the IoT space is a controversial one. A big fear is security.

There have been several high-profile incidents of security cameras and baby monitors with lax security being hacked. New regulations and better security standards may mitigate some of these privacy risks while retaining the benefits, but it remains to be seen if companies will face a backlash from consumers before then.

One thing that seems sure is that the prevalence of these connected devices — and their data-collecting reach — will only increase.

Takeaway

As technology improves, it’s likely that data collection will increase as well. This may help lower consumer prices, enable more personalized services, and boost convenience, but it also raises fears around privacy that could give consumers pause.

Whether or not these kinds of benefits can balance out the negative possibilities will likely continue to define the conversation around tech and surveillance — especially if Zimmermann’s Law holds to be accurate in its predictions about the natural trajectory of technology.

8. Pareto Principle: Why startups can raise capital even though most will eventually fail

In the late 19th century, Italian economist Vilfredo Pareto recorded his observation that just 20% of the population in the country owned about 80% of all the land.

After doing a number of surveys abroad, Pareto was surprised to find that the same was true in a number of other countries.

What Pareto probably didn’t predict is how many different contexts this rough 80/20 distribution would come to describe.

Since Pareto coined his rule of thumb, the 80/20 split has been used to provide insights into everything from optimization efforts in engineering control theory to the overwhelming influence of stars in baseball (approximately 15% of the players in the MLB are responsible for 85% of the total wins).

In 2002, Microsoft found that the Pareto Principle even applied to quality assurance and software development: fixing the top 20% of reported bugs handled 80% of the crashes and errors in a given system.

How the Pareto Principle guides Union Square Ventures’ strategy

For Union Square Ventures (USV) co-founder Fred Wilson, his firm’s entire allocation strategy essentially hinges on the Pareto Principle.

“Every really good venture fund I have been involved in or have witnessed has had one or more investments that paid off so large that one deal single handedly returned the entire fund.” – Fred Wilson, co-founder of USV

At USV, each fund holds about $200M and invests in around 20 to 25 companies. To hit the 3x return that Wilson says venture firms should be shooting for, USV focuses on turning just two or three of each fund’s companies into what it calls “high impact” investments, with the ultimate goal that at least one or two of them exit “at a billion dollars or more.”
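
A back-of-the-envelope sketch shows why one or two huge exits dominate the math. The ownership percentage below is a hypothetical assumption; the rest comes from the figures above:

```python
# Back-of-the-envelope on the fund math above. The ownership-at-exit
# figure is a hypothetical assumption for illustration; the fund size,
# target multiple, and exit threshold come from the text.

fund_size = 200e6
target_returns = fund_size * 3                 # the 3x Wilson describes

ownership_at_exit = 0.15                       # hypothetical stake
proceeds_per_exit = 1e9 * ownership_at_exit    # one "billion-dollar exit"

print(f"Returns needed: ${target_returns / 1e6:.0f}M")
print(f"One $1B exit at 15% ownership: ${proceeds_per_exit / 1e6:.0f}M")
print(f"$1B exits needed if nothing else returns: "
      f"{target_returns / proceeds_per_exit:.0f}")  # 4
```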

And between 2011 and 2016, USV had a billion-dollar exit like that every year.

There are a few different tactics that USV uses to try to encourage these kinds of home runs.

First, cut your losses. “In our 2004 fund, we invested a total of $50M out of $120M of total investment in our nine losers. That wasn’t so good,” Wilson has written. “We could have, and should have, recognized our bad investments earlier and cut them off.”

Second, as you start to discern the winners of a fund, funnel the majority of your follow-on money and operational expertise to them: “A few really good companies can carry a fund to the moon. You must make sure you can get a disproportionate amount of your time and money invested in those great investments.”

This doesn’t mean, however, that the optimal venture strategy is to simply swing for the fences every time. For Wilson, the real goal is to identify a wide swath of “great investments” and nurture them over a period of several years, hoping that a half-dozen or so turn into companies with “home run potential.” After about 10 years, one or two of those could lead to billion-dollar exits.

“There are hitters in baseball, the best hitters in fact, that hit balls out of the park when they are just trying to make good contact,” Wilson writes. “That’s how you have to do it in the venture business.”

How one investment made the fund for Sequoia

Sequoia’s investment in WhatsApp is a great illustration of the Pareto Principle in action.

WhatsApp, which was founded by Brian Acton and Jan Koum in 2009, was not the kind of company many investors would have necessarily expected to become a fund-returning company.

The startup’s founders were iconoclasts who rejected the idea of advertising or of making money by degrading the user experience. Nor were they eager to work with venture capitalists: Koum and Acton insisted on working with one venture capital firm at most.

Fortunately for the venture firm, the founders chose Sequoia.

Sequoia would go on to put about $60M into the company across 3 rounds of funding.

A few years later, in 2014, Facebook acquired WhatsApp in a $22B deal, which remains to date the largest private acquisition of a VC-backed company.

Sequoia’s entire Venture XI Fund, which raised $387M from about 40 limited partners in 2003, returned $3.6B in gains over the 11 years it had been open.

By the time WhatsApp was sold, Sequoia’s stake in the company was worth around $3B — equivalent to just over 80% of the Venture XI Fund’s returns.
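
Running the numbers cited here makes the Pareto dynamic plain:

```python
# A quick check on the WhatsApp figures cited above.

invested = 60e6        # Sequoia's total investment
stake_value = 3e9      # stake value at the $22B acquisition
fund_size = 387e6
fund_gains = 3.6e9     # the fund's total gains over 11 years

print(f"Multiple on WhatsApp alone: {stake_value / invested:.0f}x")   # 50x
print(f"Share of the fund's gains:  {stake_value / fund_gains:.0%}")  # ~83%
print(f"Gross multiple on the fund: {fund_gains / fund_size:.1f}x")   # ~9.3x
```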

Takeaway

Venture capital returns — where, indeed, about 80% of the returns do tend to come from 20% of the companies — present one of the most prominent examples of the Pareto Principle in action in the tech world.

In VC, most investments fail, but entire funds are often made through just a few home run deals.

