Scientific American’s Too Simple Solution to Internet Regulation

Scientific American played a big role in my personal career path. In September 1977 the magazine devoted an entire issue to microelectronics, an issue so good that it was sold as a book a few months later. I happened to read it while considering a career change that would take me from the hum-drum routine of writing Medicaid claims-processing software on mainframe computers into the emerging world of desktop computers. It seemed to me that the advances in microelectronics that Sci Am heralded would one day lead us to a new normal in which we would all own personal computers, and that these computers would form a vast network of some kind. That network of personal computers would replace the telephone and television networks, empower people with information and the means to communicate, and usher in a new phase of human society. I already knew about networks and small-scale computers, but the microelectronics insights flipped a switch regarding scale, increases in power, and lower costs year after year.

It’s therefore especially disappointing to read the sort of nonsense that Sci Am publishes on a regular basis about Internet regulation. I appreciate that the editors have their hearts in the right place – just beneath their sternums, where they should be – but it’s clear that the base of information from which they draw in forming their opinions about Internet regulation is deficient. The latest example is an editorial titled “Keep the Internet Fair” that purports to offer a “simple fix” for the shortcomings of the FCC’s Open Internet order.

As a former FCC chief technologist told a group of us in an email exchange last week, “simple solutions to complex problems are usually inadequate,” and Sci Am’s suggestion is no exception. The editorial compares net neutrality to the idea that bridge tolls should be load-independent, and demands that ISPs treat all applications the same, whether they depend on real-time delivery (as Skype does) or on less urgent data such as non-linear entertainment (a.k.a. movie downloads). The editors argue that the only dimension of Internet traffic that should matter to an ISP is volume.

The fallacy in their reasoning is apparent in even an Internet Policy 101 analysis of the problem. Take the shipping analogy just one step further: it makes no sense for me to pay the same price per pound to ship the live marine animals I buy on the Internet from Florida, which need to be delivered overnight, as to ship the 100-pound roto-tiller that spent ten days in transit. Live marine invertebrates aren’t going to do well for ten days in a plastic bag half full of water, and air-shipping a tiller from Ohio costs more than the machine is worth.

The parallel for Internet applications is straightforward: Skype packets that take more than 200 ms to arrive must be discarded, but Netflix packets have several seconds of wiggle room because of their buffering scheme and one-way nature. This was obvious to the fathers of the Internet, who incorporated Type of Service flags in the Internet Protocol’s initial design, and to the latter-day IETF engineers who updated the original scheme in 1998 (RFC 2475, “An Architecture for Differentiated Services”) and have continued to refine it since then. The rationale is straightforward:

Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service.

(Note: This notion is developed further in my recent ITIF report, “Facts of Life: The Citizen’s Guide to Network Engineering.”)
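
To make the Differentiated Services idea concrete, here is a minimal sketch (my illustration, not Sci Am’s or the RFC’s) of how an application can label its own traffic so the network can tell a latency-sensitive stream from a bulk transfer. The DSCP values and socket options are standard; the addresses, ports, and payloads are hypothetical placeholders.

```python
# Minimal sketch: marking traffic with DiffServ code points (RFC 2474/2475)
# so routers can distinguish low-latency flows from bulk transfers.
# Addresses, ports, and payloads below are hypothetical placeholders.
import socket

DSCP_EF = 46    # Expedited Forwarding: real-time, delay-sensitive traffic
DSCP_AF11 = 10  # Assured Forwarding class 1: bulk/background transfers

def udp_socket_with_dscp(dscp: int) -> socket.socket:
    """Create a UDP socket whose outgoing packets carry the given DSCP."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP occupies the upper six bits of the former IPv4 TOS byte.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

# A real-time voice stream asks the network for low delay...
voice = udp_socket_with_dscp(DSCP_EF)
voice.sendto(b"20 ms audio frame", ("192.0.2.10", 5004))

# ...while a background download signals it can tolerate queuing delay.
bulk = udp_socket_with_dscp(DSCP_AF11)
bulk.sendto(b"video segment chunk", ("192.0.2.20", 9000))
```

This is the “user-driven” differentiation the IETF standardized: the application declares what it needs, and the network may (or may not) honor the marking; nothing about it requires an ISP to inspect or discriminate against particular applications.
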

Sci Am has fallen victim to a defective application of Barbara van Schewick’s belief that innovation happens so rapidly in the application space that network operators can never keep up with the pace of change. This notion can be shown to be empirically wrong by examining the industry’s response to malware: no sector of the information economy is more “innovative” than the production of spam, viruses, and malware, yet anti-virus firms manage to update their detection systems quickly enough to catch all significant new attacks within days, if not hours. By comparison, legitimate applications evolve much more slowly, and they enjoy the advantage of standard Internet mechanisms for communicating their needs to network operators. In fact, van Schewick herself has no problem with “user-driven” approaches to differential treatment of traffic by ISPs, such as IETF Differentiated Services.

Scientific American has its head in the sand. It pretends that all Internet applications have the same requirements and compounds the error by claiming that most Americans are victims of a broadband monopoly when 80% of us have a choice of at least two wireline broadband providers. Thirty-five years ago, this magazine helped me take a plunge that led me to collaborate on the development of some of the world’s first personal computers, high-speed commercial packet networks, and practical wireless networks. I’m happy to have seen the 1977 Microelectronics issue, but the current issue is one I could easily have lived without.

 


About the author

Richard Bennett is an ITIF Senior Research Fellow specializing in broadband networking and Internet policy. He has a 30-year background in network engineering and standards. He was vice-chair of the IEEE 802.3 task group that devised the original Ethernet over Twisted Pair standard, and he has contributed to Wi-Fi standards for fifteen years. He was active in OSI, the instigator of RFC 1001, and founder, along with Bob Metcalfe, of the Open Token Foundation, the first network industry alliance to operate an interoperability lab. He has worked for leading applied research labs, where portions of his work were underwritten by DARPA. Richard is also the inventor on four networking patents and a member of the BITAG Technical Working Group.