The speech was quite moderate overall, with concessions to the more nuanced network engineering requirements for managed services. Chairman Genachowski recognizes that the Internet is an evolving system that may need to supplement the traditional “best-effort” delivery system it inherited from early Ethernet with more sophisticated traffic management.
The Internet has changed a great deal since it was first cobbled together to connect the ARPANET with other networks, and we can expect it to keep changing as long as we use it for more things in more places.
The chairman made some interesting observations about the Internet’s history and architecture at the very beginning of the speech. However, his comments slightly missed the mark when he opined that the Internet’s historic openness means that it’s never been “biased in favor of any particular application.”
People who study network architectures and those who’ve followed the net neutrality debate in detail recognize this as more an aspiration than a statement of fact. In his 2003 paper, which introduced the term “network neutrality” into the debate on Internet regulation, Columbia law professor Tim Wu conceded that the Internet is biased by design in favor of certain applications:
Proponents of open access have generally overlooked the fact that, to the extent an open access rule inhibits vertical relationships, it can help maintain the Internet’s greatest deviation from network neutrality. That deviation is favoritism of data applications, as a class, over latency-sensitive applications involving voice or video.
A large part of the net neutrality debate revolves around this built-in bias. Given that we have a network that favors one kind of application over another, and the fact that increasing numbers of users want to employ voice and video applications, isn’t it necessary for the network to actively remedy its bias?
In the first phase of Internet design work, TCP and IP were conceived as a single, unified protocol, but the realization that this unified design would never work for voice led the designers to split it in two and add a real-time enabler, UDP, to the mix. Separating the Siamese twins was a step in the right direction, although it didn’t solve the whole problem. The arrival of extremely high-volume P2P applications on the Internet aggravates the bias even further. Depending on how the rule-making comes out, we may look to “managed services” to deliver us from the content bias.
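The trade-off behind that split is visible right at the socket API: UDP (`SOCK_DGRAM`) gives an application bare datagram delivery with no handshake, retransmission, or ordering, so a late packet is simply dropped rather than stalling the stream the way a lost TCP segment does. That is why real-time voice and video protocols typically run over UDP. A minimal loopback sketch in Python (the port choice and payload are illustrative, not from the speech or any cited source):

```python
import socket

# Receiver: a connectionless UDP socket bound to an OS-chosen free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))        # port 0 = let the OS pick one
port = recv_sock.getsockname()[1]

# Sender: no connect() or handshake needed -- just fire the datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"voice frame 1", ("127.0.0.1", port))

# One recvfrom() returns one whole datagram; if it had been lost in
# transit, nothing in the stack would have retransmitted it.
data, addr = recv_sock.recvfrom(1024)
print(data.decode())

send_sock.close()
recv_sock.close()
```

A TCP version of the same exchange would require a `listen()`/`accept()` handshake before any data moved, and every lost segment would be retried until delivered in order, which is exactly the latency cost real-time applications want to avoid.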
Chairman Genachowski’s remarks, and net neutrality’s entry onto the final road to rule-making, make research into the evolution of the Internet, about which much remains poorly understood, timely and necessary.
Net neutrality advocates tend to emphasize the end-to-end arguments as central to the Internet’s design, but in my research, I’ve found they’re more a means than an end.
The Internet Engineering Task Force (IETF) RFC 1958 on “Architectural Principles of the Internet” captures this ethos most clearly: “The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely.”
There’s compelling evidence that this sort of thinking is more productive than an insistence on adding all new features in the application space.
Network engineers have hard choices to make about how best to support the new applications, and it’s better to make these choices on the basis of engineering than on glib regulations. The current set of FCC commissioners is as good as any we’ve had for several years, so I’m confident they’re taking a serious look at their role in moving the Internet forward.