
Measuring American Broadband

The FCC released an important new report Tuesday, Measuring Broadband America, which shows how actual broadband speeds compare to advertising claims. You can read the report and download the data the FCC collected here. The report is the result of a year of work by the FCC, its contractor SamKnows, and a diverse group of people from the FCC, industry, universities, and public interest advocacy groups. It comes a year after a quick snapshot of broadband speeds conducted during the development of the National Broadband Plan that used a different (and inferior) methodology. This report is significant because it's both comprehensive and rigorous, as I said at the release event at Best Buy in Washington on Tuesday.

It's also significant because the methodology was hammered out by the stakeholder group and the raw data is public, including the source code for the measurement devices. The system was developed in public, the data is public, and the code is public. There can't be any legitimate doubt as to the accuracy and reliability of the data, certainly not from people in Washington who were free to work with the stakeholder group and chose not to, or from people who were part of the stakeholder group and were in a position to examine the data as it came in.

The results are surprising to some because they contradict a widely circulated myth that America's residential broadband users were not getting what they paid for. The FCC's previous study, based on comScore data, claimed that Americans were getting only half the peak download speeds they expected to get, and that story fit the desired narrative of some public interest professionals perfectly. The old report was flawed on several grounds – there weren't enough measurement servers, for one – but mainly because it didn't know which service tiers the measured users were actually on and tried to guess them from the observed speeds:

The trade-off made in applying this methodology is that subscribed speed tiers are inferred from observed speeds, rather than known directly (from, say, subscribers’ bills). For example, some machines in the data were tested more than 100 times: if any one speed read was more than 10% above the actual subscribed tier, the machine would be wrongly identified as subscribing to a higher speed tier. Alternately, if the maximum measured speed was substantially lower than the actual subscribed tier, that machine could be wrongly identified as subscribing to a lower speed tier. Both could bias the advertised tier upward or downward.

It's fairly obvious that you can't estimate advertised speed from observed speed without bias, and this method penalized the ISPs that offered actual performance above the advertised "up to" rates; the new study found that four of America's largest ISPs (Verizon, Comcast, Time Warner Cable, and Cox) are in this group, giving users more than they paid for. This methodology was used because it was expedient: the National Broadband Plan needed the data on a short timeline and couldn't worry too much about accuracy.
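To see why the inference penalizes over-delivering ISPs, consider a minimal simulation. The tier ladder, noise range, and inference rule below are all hypothetical stand-ins for the comScore-style approach, not the actual data or algorithm; the point is only to show the mechanism: a subscriber whose ISP beats the advertised rate gets bumped into a higher inferred tier and then appears to be underperforming it.

```python
import random

# Hypothetical tier ladder in Mbps (illustrative; not the FCC's actual tiers).
TIERS = [10, 15, 20, 30]

def infer_tier(peak_mbps):
    """comScore-style guess: a peak reading more than 10% above a tier
    is treated as evidence of subscribing to the next tier up."""
    for t in TIERS:
        if peak_mbps <= 1.10 * t:
            return t
    return TIERS[-1]

random.seed(1)
true_tier = 15  # the subscriber's actual plan
# An ISP that over-delivers: measured speeds run 100-115% of the tier.
samples = [true_tier * random.uniform(1.00, 1.15) for _ in range(100)]

inferred = infer_tier(max(samples))
mean = sum(samples) / len(samples)
print(f"true tier: {true_tier} Mbps, inferred tier: {inferred} Mbps")
print(f"mean measured speed: {mean:.1f} Mbps")
print(f"apparent % of advertised: {mean / inferred:.0%}")   # looks like a shortfall
print(f"actual % of advertised:   {mean / true_tier:.0%}")  # actually over-delivery
```

With 100 samples, at least one reading almost certainly exceeds 110% of the true tier, so the machine is misclassified upward and its honest over-delivery is reported as a shortfall. The known-tier methodology in the new study avoids this entirely.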

Now that we have a measurement system that focuses on the significant variables under the ISP's control (web server performance, of course, isn't one of them) and for which the subscribed speed tier is known, a very different picture emerges. In fact, a great number of the ISPs outperform advertised claims, and those that underperform are close to expectations.



Figure 1: FCC data, 2011

This is particularly impressive because of the dramatic increase in network traffic that we’ve seen in just the last two years because of the sudden uptake of video streaming. Networks are carrying rapidly increasing data loads, and they’re doing it well.

Studies of this sort become more meaningful as they’re repeated, because they allow us to determine whether the overall ecosystem is going in the right direction or the wrong one. It’s possible that the current data are unusually low because of the recent rise of video streaming, and it’s possible that they’re unusually high because cable companies have recently begun deploying DOCSIS 3 (Comcast is all-in on D3 already.) We don’t know which way we’re headed, but we’re now in a position to know in the near future as the FCC does further testing, which can and will be done with the present system.

So the story here is that we’re not flying blind any more, and the next time this kind of study is done we’ll be able to construct a trend line. This is all good and I praised the FCC for shepherding this process along and showing that government can actually work.

The reaction to Measuring Broadband America has been pretty reasonable and intelligent, given the complexity of broadband speed measurement and the history of these sorts of studies. By any reasonable standard, the fact that the vast majority of America's residential broadband users are experiencing actual performance in the range of 80-100% of advertised "up to" rates means they're getting what they're paying for.

The relevant fact is that residential broadband is sold on an “up to” basis, not on an “absolutely guaranteed” basis, so the customer expectation should be for performance in the general neighborhood of the up-to rate, which is what 80-100% is. The advertised rate is a ceiling, not a floor.

While most of us get this, there are a couple of notable examples out today of people who insist that advertised speeds are floors rather than ceilings, and they're both outraged. The most amusing is former Cranky Geeks host John Dvorak. In a PC Magazine column titled "Scamming Americans with Fake Broadband Speeds," Dvorak goes after the Federal Trade Commission for not sanctioning ISPs for delivering less performance than they've been promising; that's right, he's mad at the FTC for things that are in the Federal Communications Commission's jurisdiction. Dvorak's such a stickler for details that he compares the findings in Measuring Broadband America with the speculations in the 2009 report, Broadband Performance, that used the comScore data:

…today the advertised speeds promoted by DSL, cable, and fiber providers are up to 80 to 90 percent of what is advertised. And somehow what appears like a blatant lie, to me, is perceived as a good thing, because in 2009, these folks were way off.

It seems that Dvorak is serious, as weird as that is. He goes on to work himself up to a fine lather:

I do not care if they are 10 percent or 30 percent or 50 percent off. How is this not bait-and-switch? How is this not deceptive advertising? How is this legal? The FTC is supposed to prevent false advertising. At least, that’s what I thought.

OK, wrong comparison, wrong agency, and wrong claim about the ISPs' advertising, confusing a ceiling with a floor. Other than that, he's right on top of the situation and practicing tabloid journalism at its finest. Given that Dvorak is more entertainer than analyst, it's not too surprising that he goes for a cheap shot instead of offering an actual assessment. It's a bit more interesting that Free Press, one of the largest so-called public interest communications groups, adopts a similar tone in its complaint:

No matter how industry tries to put a positive spin on these results, the report shows conclusively that many Americans are simply not getting what they pay for. This study indicates Comcast, Cox and Verizon FiOS largely perform well, but other companies like Cablevision, AT&T, MediaCom and Frontier all fail to deliver their customers the quality of service promised.

Free Press’ friends are going to look at this outlandish spin and say: “oh well, they’re just saying the glass is half-empty rather than half-full,” but that’s not really what’s going on. Two of America’s largest ISPs, Verizon and Comcast, are providing customers with service that’s consistently higher than the advertised ceiling, and two more of the largest ISPs, Cox and Time Warner Cable, are generally providing customers with better service than they’ve paid for as well; that’s what the chart above shows.

It's also reasonable to consider that any ISP that provides performance above 90% of the advertised ceiling is giving consumers more or less exactly what it sold them. Broadband is a statistical service, there's always going to be some margin of error in any measurement, and it's doubtful that anybody even notices a 10% shortfall from the advertised ceiling on long-running file transfers; I know I wouldn't, and in the past I've been paid to measure network performance for my employers.
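The "within 10% of the ceiling is effectively at spec" argument reduces to a simple check. This is a sketch using made-up tier and speed figures, not the FCC's results; the helper names and the 10% tolerance are my own illustration of the reasoning, not a standard from the report:

```python
def pct_of_advertised(measured_mbps, advertised_mbps):
    """Measured sustained speed as a fraction of the advertised 'up to' rate."""
    return measured_mbps / advertised_mbps

def meets_expectations(measured_mbps, advertised_mbps, tolerance=0.10):
    """Treat anything within `tolerance` of the ceiling as delivering
    what was sold, since the advertised rate is a ceiling, not a floor."""
    return pct_of_advertised(measured_mbps, advertised_mbps) >= 1 - tolerance

# Hypothetical subscribers on a 15 Mbps "up to" tier:
print(meets_expectations(13.8, 15))  # 92% of the ceiling -> True
print(meets_expectations(7.5, 15))   # 50% of the ceiling -> False
```

The point of the tolerance is that a single bright line at 100% would flag nearly every measurement, since no shared statistical service sustains its ceiling continuously.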

In order for the Dvorak/Free Press rants to be considered anywhere close to credible, the study would need to show that a significant number of consumers are experiencing network performance that’s significantly below their expectations, and the study doesn’t show any such thing.

Cablevision is clearly lagging the pack: its 15 Mbps service tier downloads at half the ceiling rate, and its 30 Mbps tier downloads at about 75% of the ceiling (its uploads are fine, and it has raised objections to the study that are worth considering). These figures stand out because they're well outside the norms; the only similar carrier is Frontier, a rural DSL operator whose 3 Mbps tier runs at about 70% of the ceiling rate. Some other smaller carriers such as Windstream, Mediacom, and Qwest aren't meeting the 90 percent level either, with some tiers in the 80% range.

The overall message should be: "Most Americans are getting what they pay for most of the time, but a few aren't." If this is a truth-in-advertising problem, it's easily corrected. All the rural carriers have to do is say "Tier A provides speeds from X to Y or better" and be done with it. That kind of careful statement doesn't actually improve anyone's quality of life, however; it's just another hair to split for busy consumers who already have too many things on their minds.

A better approach would be to look at programs, regulations, or initiatives that might actually improve the performance of the services the rural carriers provide, such as Universal Service Reform. Guess what? We're already doing that. The other good approach is to give the market some time to react to this new information; there could be changes coming as carriers compete with each other on the basis of measured performance, among other factors.

The best reason to publish such reports is to make it easier for consumers to take their business to the providers who offer them the best deals. Carriers certainly notice when they lose customers, and they react accordingly.

It’s going to be very interesting to track these performance measures over time, and even more interesting to look for correlations between performance and customer churn, and we’ll be doing that. It’s also going to be interesting to see how much longer the dead-enders hang onto their unjustified “sky is falling” rhetoric; their credibility is at stake.
