“In a few years, men will be able to communicate more effectively through a machine than face to face.”
It was 1968, and J.C.R. Licklider, a director at ARPA, had become convinced that humanity was on the cusp of a computing revolution. In a landmark paper called “The Computer as a Communication Device,” he described “a radically new organization of hardware and software, designed to support many more simultaneous users than the current systems, and to offer them… the fast, smooth interaction required for truly effective man-computer partnership.” For Licklider, this wasn’t just a new technology, but a new way for human beings to exist in the world.
You’re reading this on a website, so you know what happened next: the internet. What initially seemed like a new way to transfer information turned into a revolution that rewrote the basic assumptions of society. Entirely new kinds of economic and social organization evolved on these networks, taking root faster than anyone would have thought possible. For an entire generation — my generation — that process is all we’ve ever known.
Now, that vision is fraying. The social fabric of the internet is built on very specific assumptions, many of which are giving way. Licklider envisioned the internet as a patchwork of decentralized networks, with no sense of how it would work when a handful of companies wrote most of its software and managed most of its traffic. He conceived of a level playing field for different networks and protocols, with no sense that the same openness could enable a new kind of monopoly power. Most painfully, this new network was imagined as a forum for the free exchange of ideas, with no sense of how predatory and oppressive that exchange would become.
These failures are connected, and they leave us in a difficult place. It’s easy to say this was a bad year for Google or Facebook (it was), but the news is actually worse than that. Companies are falling into crisis because the basic social compact of the internet has reached its limit — and begun to break.
FREE SPEECH MINIMALISM
In March 1989, a researcher named Tim Berners-Lee laid out a new system for connecting computers at CERN, a proposal that would ultimately lay the groundwork for the World Wide Web. Information was being lost as CERN grew and projects turned over, so Berners-Lee envisioned a computer system that could accommodate that kind of constant change, a network built on hypertext links that were indifferent to the content they were transmitting.
“The hope would be to allow a pool of information to develop which could grow and evolve with the organisation and the projects it describes,” Berners-Lee wrote. “For this to be possible, the method of storage must not place its own restraints on the information.”
That ideology grew into a set of business practices, codified by Section 230 of the Communications Decency Act. There were still crimes you could commit with just information (particularly content piracy), but 230 meant you could only blame the source of the information, not the networks that delivered it. At the same time, operators developed authentication and filtering methods to deal with basic problems like spam, but it was always an uphill fight, and fighting speech with more speech remained the preferred option.
Persistent, targeted harassment has made that logic harder to defend, and the move to closed platforms like Facebook has scrambled the conversation even further. Abuse is everywhere, and left to their own devices, malicious users can easily make platforms unusable. Even committed speech advocates like Jillian C. York see the end goal as consistent principles and accountable systems on platforms, rather than a lack of moderation itself. And while there are lots of complaints about moderation on Facebook and Twitter, almost no one seems to think the companies should be taking a lighter touch.
The internet is still catching up to that logic. After white nationalists rallied in Charlottesville this August, web providers realized they, too, were in the moderation business, dropping neo-Nazi sites in response to widespread public pressure. But outside easy victories (which are largely Nazi-related), there are still very few moderation principles everyone agrees on, and there’s no higher authority to appeal to when disagreements happen. There’s no law telling platforms how to moderate (such a law would violate the First Amendment), and no mechanisms for consensus or due process to take the law’s place. More practically, nobody’s good at it, and everyone is taking heat for it more or less continuously. With new legislation poised to chip away even more at Section 230, the problem is only getting more complex.
In the early days, it seemed like online anonymity had opened the door to a new kind of identity. Not only could you be a different person online, but you could be more than one person at once, exploring your own personhood from multiple angles. In a 2011 TED Talk, 4chan founder Christopher Poole said the key was to think of identity as a diamond, not a mirror.
“You can look at people from any angle and see something totally different,” he told the crowd, “and yet they’re still the same.” It’s a beautiful idea, although the fact that it came from the founder of 4chan should give you some sense of how it worked out in practice.
For a long time, hardly anyone knew who you were online. Handles replaced real names, and though your service provider certainly knew who you were, massive swaths of the internet (Facebook, e-commerce, etc.) hadn’t developed enough to make the information widely available. Prosecutions for online crime were still relatively rare, stymied by inexperience and jurisdictional issues. There was simply nothing tying you to a single, persistent identity.
Now, nearly everything you do online happens under your name. It started with Facebook, the most popular single product on the internet, which has enforced its real-name policy since the beginning. Today, your Google searches, Netflix history, and any cloud-stored photos and text messages are all only a single link removed from your legal identity. As those services cover more of what we do on the web, it’s become much harder to create a space where anonymity can be maintained. As I type this, my browser is carrying auto-login tokens for at least five web services, each registered under my real name. If I were trying to maintain a secret identity online, any one of those tokens could give me away.
That’s not all bad news. Real names have helped close the gap between online and offline space, clearing space for new kinds of personal branding and online commerce that would have been impossible before. At the same time, you can see the old system withering. Anonymity still exists in certain places, but it’s grown fragile and taken on a different meaning. It’s easy to break through in most cases — an FBI director can’t even keep his Twitter account secret — so it only thrives in mobs where no individual member can be singled out. Using web anonymity for any sustained purpose, like criticizing government officials or organizing political dissent, has become a losing bet.
Four days after the rally in Charlottesville, the content delivery network Cloudflare publicly discontinued service to the neo-Nazi site Daily Stormer. The move came after months of escalating pressure from anti-racist activists, and after finally giving in, CEO Matthew Prince wrote a post explaining what made him so reluctant to drop the site. It wasn’t sympathy for neo-Nazis, Prince wrote, but a fear of how powerful networks like Cloudflare were becoming.
“In a not-so-distant future,” he wrote, “it may be that if you’re going to put content on the Internet you’ll need to use a company with a giant network like Cloudflare, Google, Microsoft, Facebook, Amazon, or Alibaba.” The implication was clear: if those six companies don’t like what you’re doing, they can keep you off the internet.
It wasn’t always like this. An online presence has always required lots of partners (a host, a domain registrar, a caching network), but for most of the history of the internet, no single player was powerful enough to pose a threat. Even if one did, most functions could be brought in-house without any significant reduction in service. The shaggy, decentralized network had given rise to a shaggy, decentralized infrastructure, with no single choke point where a business could be shut down.
Now, the internet is full of choke points. Part of the reason is the shift to the mobile web (which tends to be owned by a handful of carriers per country), but another part is the growing centralization of how we reach things on the web in the first place. After a decade of laughing off AOL as a walled garden, we’ve ended up with a handful of services that have a similar level of power over everything we see online. Google is where the world finds information: if you’re a listing service competing with Google, your days are numbered. Facebook is how people share things: if you can’t share it on Facebook, whatever you’re talking about just won’t travel. Uber is a billion-dollar company, but if iOS and Android decided to delist its software, the product would be inaccessible in a matter of hours.
That centralization causes problems beyond outright blocking. Web users were throwing off just as much personal data 20 years ago, but the data was spread between dozens of different companies and there was no clear infrastructure for coordinating them. Now, it’s entirely plausible for Facebook or Google to collect every website you visit, following logged-in users from site to site. Data collection has become a pivotal part of the internet, used either to target ads or to build products, but there are only a handful of players with the scale to meaningfully pull it off. The result is a series of competing walled gardens that look almost nothing like the idealized internet we started out with.
The first spark of the internet was the open connection. Hosting a website meant anyone with a modem could dial up and stop by — and anyone with a server could set up a website. All the servers ran the same set of protocols, and no provider was favored over any other. In short, everyone connected to the same internet, even if some hosts and connections were better than others.
Those principles have come under immediate threat this month, after the FCC’s official vote to roll back Title II protections. The order is still being challenged in court, but we now face the very real prospect of a tiered internet, as companies aligned with Comcast or Verizon navigate a completely different network than independent competitors. The network can also segment according to types of content, with high-traffic services like Netflix facing throttling and interconnection standoffs that services like Twitter will never have to deal with. There’s no longer one single network, and managing those asymmetric frictions is now just part of running a business online.
In fact, the open network has been closing for far longer than Ajit Pai has been in charge. Today’s technology runs on a string of closed networks — app stores, social networks, and algorithmic feeds. Those networks have become far more powerful than the web, in large part by limiting what you can see and what you can distribute. Services like Fire TV and YouTube are built on the internet, but they’re playing by different rules. As long as Google can block Fire TV’s YouTube access by fiat, we are not dealing with an open network. The basic promise of the internet — the scale, the possibility — can no longer be realized without closed corporate networks. To thrive on today’s internet, you need much more than a server and a dream.
The internet also made a lot of people very, very rich in ways that were difficult to predict or even comprehend. In a 2012 post, Y Combinator co-founder Paul Graham made it sound as if a startup idea could come from almost anywhere. “Pay particular attention to things that chafe you,” Graham wrote. “Live in the future and build what seems interesting. Strange as it sounds, that’s the real recipe.”
In economic terms, this was about tearing down barriers to entry. If you wanted to sell glasses frames or mattresses, now all you needed was a product and a website. You could cut out the intermediaries that had defined the industry pre-internet. Legacy businesses were slow to catch on to the possibilities of the internet, which created a power vacuum and lots of opportunities for entrepreneurs.
The result was a flood of startups, which have attacked incumbent industries more or less indiscriminately for the past 20 years. Not all of the resulting businesses were successful or good (RIP Pets.com), but it’s hard to name a section of the economy that hasn’t been reshaped by them in some way. Internet-fueled disintermediation resulted in profound and lasting shifts in the global economy, and minted a new generation of tech billionaires. When folks like Marc Andreessen get excited about the internet-like properties of the blockchain, this is what they’re talking about, and it’s independent of issues of free speech, or even net neutrality.
But by now, the disintermediating magic of the internet is mostly used up. There’s still plenty of VC money out there, but the easy disruptions have already happened. Any new entrants with real promise are most likely to be acquired or Sherlocked by one of the major tech companies. In either case, they’re plugged up before they can do too much damage to the incumbent order of things.
Occasionally, a startup will make it through the gauntlet to become an independent company — Snapchat and Uber being the most recent examples — but it’s much harder than it was even five years ago. For those that make it, the now-centralized internet means you’ll have a new set of intermediaries to deal with, relying on Apple’s App Store, Google’s search rankings, and Amazon’s server farms. The power vacuum is over. If you’re fighting to save the internet for entrepreneurs, there’s simply nothing left to save.
It feels sad writing all of this down. These were important, world-shaping ideas. They gave us a specific vision of how networks could make society better — a vision I still believe did more good than harm. With no argument for an open web, how do you tell a country not to shut down networks in the run-up to an election, or not to block apps used to organize opposition? We’ve criticized the tech world for hiding behind content neutrality, or for using the gospel of disruption to entrench its power. How will the same companies act when they believe in nothing at all?
Maybe they never did. The last year has toppled many of the old assumptions, but they had been weakening for a long time. The sooner we acknowledge that the old ideas have failed, the sooner we can start building new ones. As technologists look for a way forward, those new ideas are sorely needed. The scary thought is that we may be starting from scratch.