Byte Counting, Part Two

[Image: Xbox on TV]

Reihan Salam of National Review Online seeks Timothy B. Lee’s reaction to the Comcast/Xbox controversy, and Lee offers a clever analysis that avoids the pertinent facts and reverses some of his former positions. Lee (not to be confused with web guy Tim Berners-Lee) is a recent graduate of the Princeton Center for Information Technology Policy, an interdisciplinary graduate program that tilts heavily towards a “Freedom to Tinker” perspective. He now writes for Ars Technica, a trollish tech blog that tends to lambaste intellectual property enforcement in very harsh terms while advocating for a highly regulated Internet. Before Princeton, Lee was something of a libertarian, but higher education has refined his views and he’s now a conventional left-wing regulatory hawk.

The essential issue is whether Comcast’s byte metering system is unfair to Internet-based video streamers such as Netflix. In order to make this case, it’s necessary for Lee to establish that the system is arbitrary in some basic way and that its arbitrariness is retarding the growth of essential video distribution services generally. In particular, Lee and his new colleagues on the “information wants to be free” wing need to show that Comcast’s decision to exempt its on-demand content from the byte cap is harmful to users and also to competitive content services.

We dealt with the arbitrariness issue in our first post on the Comcast/Xbox deal, and Lee offers no new information. Comcast’s byte limits measure the traffic that passes into the consumer’s home from the Internet. Video content stored within the Comcast (or Time Warner, Cox, Verizon, AT&T, or Century Link) local network doesn’t come from the Internet, so it’s not measured. Byte cap systems are designed to reduce congestion in the Internet gateways of these systems, not to eliminate all forms of congestion in all places at all times.

The Internet is designed in such a way that there will always be congestion somewhere, so it’s perfectly reasonable for networks that connect to it to employ policy measures to limit the congestion at the points where congestion relief is most expensive.

We know there will always be congestion because no transfer of content occurs instantaneously. When you copy a large file from one hard drive to another, the copy takes time because the transfer depends on mechanical features in the hard drives, hard drive interface speed, memory latency, and CPU speed. Add a network to the process, and things tend to slow down even more, as even the fastest common Ethernet is six times slower than a common hard drive. Use a Wi-Fi connection and the speed falls off by another order of magnitude, and we’re still not even outside the home or office.
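
As a rough illustration of those ratios, here is a back-of-the-envelope calculation; the throughput figures (6 Gbps for the drive interface, 1 Gbps for Ethernet, roughly 100 Mbps for Wi-Fi) are assumptions chosen to match the ballpark comparisons above, not measurements of any particular system.

```python
# Back-of-the-envelope transfer times for a 25 GB file at assumed
# sustained throughputs. The figures are illustrative, not benchmarks.
FILE_SIZE_GB = 25  # roughly a large HD video file

# Assumed sustained rates, in megabits per second
links = {
    "Hard drive interface (assumed 6 Gbps SATA)": 6000,
    "Gigabit Ethernet": 1000,
    "Wi-Fi (assumed ~100 Mbps real-world)": 100,
}

for name, mbps in links.items():
    seconds = FILE_SIZE_GB * 8 * 1000 / mbps  # GB -> gigabits -> megabits -> seconds
    print(f"{name}: {seconds / 60:.1f} minutes")
```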

When we transfer data outside the building, we’re sharing a provider network with millions of others before we even reach the Internet and sometimes several hundred thousand of them are active at the same time. To use the Internet, all the providers have to exchange data at a few hundred exchange points, and even with the best technology money can buy there’s going to be noticeable congestion from time to time.

Network bandwidth is most expensive between the neighborhood and the Internet exchange, so it’s natural that such bandwidth would be metered. All the providers who meter count in exactly the same way: they add up the bytes moving down from the Internet and up to the Internet, between end user systems and the Internet. They don’t count bytes moving about within the provider network, only those entering or leaving it from or to the Internet. Their policies aren’t the same, of course.
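
To make that counting rule concrete, here is a minimal sketch of the kind of accounting described above; the address ranges and flow records are hypothetical, and this is not any ISP’s actual metering code.

```python
from ipaddress import ip_address, ip_network

# Hypothetical provider address space; a real ISP would use its own prefixes.
PROVIDER_NETS = [ip_network("10.0.0.0/8"), ip_network("100.64.0.0/10")]

def is_internal(addr: str) -> bool:
    """True if the far end of a flow sits inside the provider's own network."""
    ip = ip_address(addr)
    return any(ip in net for net in PROVIDER_NETS)

def metered_bytes(flows):
    """Sum only the bytes that cross the Internet gateway, in either direction.

    Each flow is (remote_address, bytes_down, bytes_up). Traffic to and from
    hosts inside the provider network (e.g. an on-demand video server) is
    not counted against the cap.
    """
    total = 0
    for remote, down, up in flows:
        if not is_internal(remote):
            total += down + up
    return total

# Example: an Internet video stream counts; an on-demand stream served
# from inside the provider network does not.
flows = [
    ("198.51.100.7", 3_500_000_000, 40_000_000),  # Internet video stream
    ("10.20.30.40", 5_000_000_000, 10_000_000),   # in-network on-demand server
]
print(metered_bytes(flows))  # -> 3540000000
```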

Some providers have a hard cap that they don’t want users to ever exceed, some have soft caps that trigger higher prices or reduced performance, and some have combinations of different policies. You can buy minimal-use plans with low caps or higher-priced unlimited-use plans, and many shades in between. AT&T has a 250 GB limit for its U-verse VDSL2 users and a 150 GB limit for its rural ADSL users, and Time Warner gives you a break if you’re a light user. Comcast has a hard cap of 250 GB that seems to fit the needs of more than 99% of its customers and discourages the massive consumption that’s typical of piracy.
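
A small sketch of how those policies differ in practice; the plan names, thresholds, and overage terms below are invented for illustration and don’t correspond to any provider’s published pricing.

```python
# Hypothetical plan definitions; real providers' terms vary.
PLANS = {
    "hard_cap_250": {"cap_gb": 250, "policy": "block"},  # no use beyond the cap
    "soft_cap_150": {"cap_gb": 150, "policy": "overage",
                     "overage_per_50gb": 10.00},          # billed per extra 50 GB block
    "unlimited": {"cap_gb": None, "policy": "none"},
}

def apply_cap(plan_name: str, usage_gb: float) -> str:
    """Describe what happens when a month's usage is checked against a plan."""
    plan = PLANS[plan_name]
    cap = plan["cap_gb"]
    if cap is None or usage_gb <= cap:
        return "within plan"
    if plan["policy"] == "block":
        return "over hard cap: further use restricted"
    extra_blocks = -(-(usage_gb - cap) // 50)  # ceiling division per 50 GB block
    return f"over soft cap: ${extra_blocks * plan['overage_per_50gb']:.2f} overage"

print(apply_cap("hard_cap_250", 180))  # within plan
print(apply_cap("soft_cap_150", 210))  # over soft cap: $20.00 overage
```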

So it’s difficult to establish that any system of consumption caps is arbitrary simply by distinguishing Internet traffic from local traffic. There are laws of physics at work here: local networks are cheaper per unit of bandwidth than wide area networks, and cable is sold by the foot. Data (video or otherwise) confined to a local network can also be staged close to users to further reduce congestion and distribution costs, especially by distributors who hold licenses to it. License holders stage popular on-demand titles in locations very close to end users, but it’s very hard to legally duplicate this arrangement for arbitrary video streams that originate on the public Internet. So capping is not inherently arbitrary, but there can still be something in the implementation of a cap that’s troublesome.

To make the case that there really is something bad about the specifics of today’s caps, Lee unfortunately turns to a pair of claims that are false and misleading. This is quite sad, of course.

In the first of these, Lee claims that Comcast’s download speeds are no better today than they were in 2008, or 10 Mbps in his experience. I find this quite peculiar, as I’ve been a Comcast customer for many years and haven’t seen the stagnation that Lee complains about. I have the basic 22 Mbps service today, and my Speedtest numbers are generally close to 25 Mbps thanks to Powerboost. In 2008, the same service tier was 12 Mbps, so it’s almost double what it was. I can’t believe Lee has actually measured his service if he thinks it’s only running at 10 Mbps.

Lee’s second and even more outrageous claim is that the current Comcast cap only permits an hour a day of 1080p video streaming. This is based on his intuition about the requirements of Blu-Ray right off the disc, which he puts at 10 GB/hour. If you know Blu-Ray, you know this isn’t a real number. Blu-Ray is a variable bit rate MPEG-2 or MPEG-4 encoding whose bandwidth requirement depends on the nature of the content. 10 GB/hour was probably not a bad rule of thumb for MPEG-2, but it’s not precise; it can be much less or much more. The tendency is for new discs to be more highly compressed with MPEG-4/H.264, consuming only about half as much bandwidth.
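
The arithmetic is easy to check. Converting between GB per hour and sustained bit rate shows how far the off-the-disc figure is from a typical 1080p stream; the 5 Mbps stream rate below is an assumption for illustration, not any service’s published number.

```python
def gb_per_hour_to_mbps(gb_per_hour: float) -> float:
    """Convert a GB/hour consumption figure into a sustained bit rate in Mbps."""
    return gb_per_hour * 8 * 1000 / 3600

def mbps_to_gb_per_hour(mbps: float) -> float:
    """Convert a sustained stream rate in Mbps into GB consumed per hour."""
    return mbps * 3600 / (8 * 1000)

print(f"{gb_per_hour_to_mbps(10):.1f} Mbps")    # ~22.2 -- Lee's off-the-disc figure
print(f"{mbps_to_gb_per_hour(5):.2f} GB/hour")  # ~2.25 -- an assumed 5 Mbps 1080p H.264 stream
```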

The larger error is that nobody streams pure Blu-Ray over the Internet in 2012. You can get a lot of 1080p content compressed with MPEG-4 that moves along quite nicely at less than a quarter of the bandwidth Lee claims it needs. I’m not aware of any legal service that sends raw Blu-Ray over the Internet. So the “one hour a day” claim is false and deceptive. You can easily do four or more hours a day.
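
Putting those consumption rates against the 250 GB cap gives the daily viewing budget. A quick check, using the same assumed per-hour figures as above, shows where the “one hour a day” number comes from and how it moves at realistic stream rates.

```python
CAP_GB = 250
DAYS_PER_MONTH = 30

def hours_per_day(gb_per_hour: float) -> float:
    """Daily viewing budget under a 250 GB monthly cap at a given consumption rate."""
    return (CAP_GB / DAYS_PER_MONTH) / gb_per_hour

print(f"{hours_per_day(10.0):.1f} hours/day")  # ~0.8 -- at Lee's assumed 10 GB/hour
print(f"{hours_per_day(2.25):.1f} hours/day")  # ~3.7 -- at an assumed 5 Mbps stream
print(f"{hours_per_day(1.8):.1f} hours/day")   # ~4.6 -- at an assumed 4 Mbps stream
```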

So what we have in Timothy B. Lee’s argument is a set of facts that aren’t facts, offered in support of a bad policy: apparently, a mandated cap-free Internet service at every price level. Internet-based video content is quite practical on today’s Internet, caps and all, so there’s no need to take this step.

In fact, caps are one area of competition between service providers and I see no reason to take this tool away from them. Certainly, there are many ways to implement a cap – you might exempt off-peak usage for example – but to argue for the total abolition of all limits on Internet usage is simply foolish. In the final analysis, Internet users are affected more by other users than by their ISPs, and Lee’s technical naïveté blinds the policy discourse to this essential fact.

 

Image credit: Flickr user Rogue Soul


About the author

Richard Bennett is an ITIF Senior Research Fellow specializing in broadband networking and Internet policy. He has a 30-year background in network engineering and standards. He was vice-chair of the IEEE 802.3 task group that devised the original Ethernet over Twisted Pair standard, and has contributed to Wi-Fi standards for fifteen years. He was active in OSI, the instigator of RFC 1001, and founder, along with Bob Metcalfe, of the Open Token Foundation, the first network industry alliance to operate an interoperability lab. He has worked for leading applied research labs, where portions of his work were underwritten by DARPA. Richard is also the inventor of four networking patents and a member of the BITAG Technical Working Group.