Most people see the Internet as something like a car, a toaster, or a cell phone: we like it and rely on it in our daily lives, but we don’t care to understand how it works. The debate about net neutrality – how to regulate the Internet – has exposed us to many strong opinions about how the Internet does or should work, but they’re mostly wrong. Both regulators and advocates need an in-depth understanding of the Internet if they’re to accomplish their goals without doing harm. Here are some of the major myths circulating around the net neutrality debate that we can do without:
Myth #1: Net neutrality has always been the law of the Internet.
Reality: Net neutrality is a simplified form of the “end-to-end argument,” a technical precept that argues for implementing new Internet functions mainly in the computers attached to the network and only enriching the network proper when it’s absolutely necessary for performance, security, or some equally vital purpose. Because of the way the Internet was designed, the rise of video calling, gaming, and P2P file transfer applications causes performance and fairness issues that can only be resolved by network operators managing packet streams and enriching core network services. The fine print in the end-to-end argument recognizes this, but the net neutrality simplification doesn’t.
Myth #2: All applications are equal on the Internet.
Reality: The Internet is not a level playing field; she’s a bad mother who doesn’t love all her children equally. The Internet favors human-to-machine applications above direct person-to-person interaction. The primary motivation for its design was to speed up the applications it favors over the ones that it disfavors, and hence the Internet’s biases are the mirror image of those built into the traditional telephone network. Now that more of us want to use the Internet as the primary way we communicate with other people – by voice, video call, or Twitter – we need to make the Internet a less biased system. The remedy for the Internet’s structural bias is active management by ISPs. This is not unlike affirmative action, which the Brits call “positive discrimination.” The Internet needs to employ a bias to correct a bias. This bitter pill is fundamental to the Internet’s design.
Myth #3: All content is equal on the Internet.
Reality: The Internet favors the content that’s located close to you over content that’s far away, and it favors content that’s transmitted to your ISP over high-speed lines from very fast computers over content on a general-purpose, low-cost hosting service. Big content publishers like Apple buy accelerator services from companies such as Akamai that stash their content in multiple places around the Web in order to get it to you faster. Even bigger publishers like Google/YouTube build massive hosting complexes around the world for similar purposes. The more you spend, the better equipped you are to score the valuable subscription and advertising fees that come from speedy delivery. Eliminating location bias devalues content accelerator networks, and this is why the owners of such networks want regulations to slow the pace of change.
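To see why content accelerators are worth paying for, consider that round-trip time to a server grows with distance, so caching a copy of the content near the user cuts delivery delay. The sketch below is a toy latency model with made-up illustrative numbers (the distances, per-kilometer delay, and overhead are assumptions, not measurements), not a description of any real CDN.

```python
# Toy model of why content accelerators speed up delivery: round-trip
# time grows with distance, so serving from a nearby cache beats
# fetching from a faraway origin server. All numbers are illustrative.

def rtt_ms(distance_km, per_km_ms=0.01, base_ms=5.0):
    """Rough round-trip time: fixed overhead plus propagation delay."""
    return base_ms + distance_km * per_km_ms

origin_km = 9000  # hypothetical: a European user fetching from a U.S. origin
edge_km = 50      # the same object served from a nearby accelerator node

print(f"origin fetch: {rtt_ms(origin_km):.1f} ms")  # 95.0 ms
print(f"edge fetch:   {rtt_ms(edge_km):.1f} ms")    # 5.5 ms
```

The constants are arbitrary, but the shape of the result is not: the nearby copy wins by roughly the propagation delay saved, which is why publishers who can afford to replicate content widely get it to you faster.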
Myth #4: Net neutrality protects the freedom to innovate.
Reality: It would be nice if enabling innovation were as easy as enacting prescriptive regulations on ISPs. Once again, the traditional Internet is friendly only to the applications that conform to its structural bias. Web services are fine, but if you want to transfer a ton of data, like P2P does, or get your packets delivered with extremely low delay, as gamers want, the cards are stacked against you. There are standardized technical systems to help these applications (called “DiffServ” and “IntServ” by the Internet engineers who created them), but it’s not clear that these systems pass proposed regulatory muster even though they’re Internet standards. They discriminate, you see.
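DiffServ’s “discrimination” is mundane in practice: an application marks the DSCP field in each packet’s IP header so routers along the path can give it a particular per-hop treatment. A minimal sketch, using Python’s standard socket API to request the Expedited Forwarding class (DSCP 46, the standard marking for low-delay traffic like voice); whether any router actually honors the mark is entirely up to the networks in the path, and support for IP_TOS varies by operating system.

```python
import socket

# DiffServ in miniature: ask for the Expedited Forwarding (EF) class by
# setting the DSCP bits in the IP header's TOS byte on a UDP socket.
EF_DSCP = 46
tos = EF_DSCP << 2  # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Packets sent on this socket now carry the EF marking; routers that
# implement DiffServ may queue them ahead of best-effort traffic.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

That one socket option is the whole “discrimination”: the sender expresses a preference, and each network independently decides whether to act on it.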
Myth #5: With net neutrality, Internet-based services can beat ISP-based services.
Reality: Not only do Internet-based applications like Skype and Netflix streaming have to overcome structural bias, if they succeed they’ll have to replace some of the ISP revenue they’ll bleed off or the Internet will stagnate. The reason for this is that triple-play voice and video services subsidize Internet access, helping to pay for the continual speed and coverage upgrades we all expect. We geeks love our Internet like vegans love their tofu, but the average American consumer wants his MTV (or ESPN or Showtime), and we depend on Joe Average to help pay for the Internet.
Myth #6: It’s all better in Europe.
Reality: Isn’t everything? Seriously, though, with the exception of Sweden, where the government has subsidized a massive program to pull fiber to large apartment buildings in big cities and selected rural areas, Europe is a copper cul-de-sac that’s in serious danger of being shut out of the Next-Generation Internet. Aside from Sweden, the fastest Internet service in Europe is provided by cable companies, which regulators in countries such as Germany have until recently allowed the phone company to own. Europe addresses problems that arise from the concentration of ownership by regulation, but it hasn’t solved the problem of deploying fiber to the home nearly as well as Japan and Korea have, or even as well as Verizon has in the U.S.
Myth #7: The wireless Internet is just like the wired Internet.
Reality: Wireless networks are about as different from wired networks as they could possibly be, but the larger problem is that Internet access is only a secondary use for wireless networks. Five billion people use cell networks to talk to each other, and only two billion or so access the Internet from all networks combined. Imposing the net neutrality regulations we’ve been discussing for the past five to ten years on cellular operators can only compromise the cell network’s primary purpose, interpersonal communication. This is too high a price to pay to watch cats on treadmills.
Myth #8: The Internet is the Best of All Possible Networks.
Reality: Nobody is crazy enough to say this, but it’s the subtext behind most of the demand for “Preserving the Internet.” The reality is that the people who designed the Internet back in the Nixon era – engineers and programmers in the U.S. and Europe – did an amazing job of creating a system that could grow to its current size, accommodate a breathtakingly wide range of uses, and take advantage of technical progress, but they didn’t create the perfect network. Internet engineers constantly battle the shortcomings in the Internet’s design, and they’re rapidly running out of bubble gum and baling wire to fix them all. At some point in the near future we’ll need to get serious about replacing the Internet with a system that incorporates some of what we’ve learned in 40 years of networking.
The Internet was originally meant to be a large-scale experiment on network design, not just a system for Web services and social networks. We need to resume the experiment if we’re to have a network that can serve us for the next 40 years, and that will require some regulatory freedom. This is not to say that the FCC and its cousins around the world shouldn’t scrutinize the practices of ISPs, content providers, and application designers; we clearly need them to be capable watchdogs. One area where they can do a lot of good is making companies provide full disclosure about limitations in their service offerings and side effects caused by their applications. We don’t want regulators usurping the role of network engineers by delivering detailed prescriptions about what’s good engineering and what’s bad; the state of the art is not so well defined that anyone can do this yet, so it’s arrogant and destructive even to try.