Once again the media is full of reports of the impending collapse of the internet. Apparently we users are to blame, as we are using too much bandwidth watching movies and so forth, or so says yet another study by “respected” think tank Nemertes Research. They tried to push this line of bull on us in 2007, again in 2008, and now in 2009. Only the dates of the “impending” collapse have changed: always a year or two in the future. But anyone who knows much about the internet and the infrastructure behind it knows this “impending doom” is a fallacy. So why is Nemertes repeating it over and over and over?
Follow the money. Nemertes is funded by the Internet Innovation Alliance. The IIA is funded by (wait for it)… AT&T and other telecommunications providers. Prominent in every Nemertes “the sky is falling” report is a need for the ISPs to be allowed to shape traffic (i.e., charge you more for accessing your favorite content), filter (i.e., charge you more for accessing your favorite sites), and impose bandwidth caps (i.e., charge you more if you go over poorly defined and poorly tracked “limits”). In other words, it is a money grab by the service providers.
I am all for companies making money. But what the internet service providers (ISPs) are trying to do is be given carte blanche to lie to and cheat their customers. They want to sell “unlimited bandwidth” connections, then be able to charge you for actually using that bandwidth. Let me repeat that: they want to charge you, and the services you connect to (like Google), once for the bandwidth available, and then charge you again when you actually use that bandwidth. In many cases this thievery is even more attractive to the ISPs that are also cable TV companies: some users are (gasp!) using their internet connections to entertain themselves, and spending less money on over-priced cable channel packages and pay-per-view movies.
If the ISPs were honest, they would do something like change their billing model: perhaps a low flat-rate “base” fee and a small charge per gigabyte, combined with a verifiable, up-to-date traffic meter available in the user’s home. But this would put them in a bind on several fronts. 80% of users are being massively overcharged via their existing flat-rate “unlimited use” packages. If the ISPs switched to a clear and verifiable measure of bandwidth, competitive pressures would push down per-gigabyte prices and the ISPs would end up making billions less per year. What they really want is to continue overcharging most users *AND* be able to charge heavy users even more. That’s their definition of “win-win”.
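Just to make that alternative billing model concrete, here is a minimal sketch; the base fee and per-gigabyte rate below are nothing more than my own illustrative guesses, not numbers any ISP has actually proposed:

```python
# Hypothetical metered-billing sketch for the model described above.
# The base fee and per-gigabyte rate are illustrative assumptions only.

def monthly_bill(gigabytes_used: float,
                 base_fee: float = 10.00,       # assumed flat "base" fee
                 per_gb: float = 0.25) -> float:  # assumed per-gigabyte rate
    """Return a hypothetical monthly charge: flat base fee plus metered usage."""
    return base_fee + per_gb * gigabytes_used

# A light, an average, and a heavy user under the same transparent tariff:
for gb in (5, 40, 250):
    print(f"{gb:4d} GB -> ${monthly_bill(gb):.2f}")
```

The point isn’t the specific numbers; it’s that a bill like this can be checked against a meter in the user’s home, which “unlimited” plans with hidden caps cannot.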
Metered bandwidth and “traffic shaping” would, in my opinion, have a massive chilling effect on the technology industry. Technologists and users are finding new and innovative ways every day to make the internet and their computers do more for them. Picture sharing, streaming video, voice over IP, online games: if people were always watching the cost per byte, the end result would be hesitation to use any of these capabilities. The ISPs, particularly the telcos, have received tens of billions of dollars in tax exemptions and outright funding to build the networks behind the “open” internet, have spent years overcharging for bandwidth, and now want to start deciding what type of traffic is “good” and what is “bad” for their near-monopoly businesses.
The ISPs dream of a chance to make tens of billions of dollars with no investment on their part: just some changes in legislation that would let them continue to feed us “unlimited” packages with ever-increasing fees and limits. Nemertes is one of the tools, along with heavily funded lobbying (read: bribery) programs, that the ISPs are using to make their dreams come true.
This posting is sort of my attempt to be of service to the various professional media outlets. I’m hoping that the next time they receive a pumped-up press release from Nemertes my little site will add one more voice saying “don’t believe them! They aren’t reliable sources! Follow the money!” I am certain that my hope is utterly in vain, but I can still try to be an optimist.
Your representation of Nemertes is inaccurate. Our model is that we have a base of clients who subscribe to our research and advisory services. Our clients include users, makers of, and investors in, technology. Specifically, they include the IT departments of Fortune 200 enterprise organizations, vendors and service providers, not-for-profits, financial services firms/investors, and a couple of publications. We base our insights on best practices which we uncover while conducting our benchmarks. Clients pay to have access to our data, our insights, and us; our data comes from the research projects that we choose to undertake, based on our own best judgment on what makes sense. Again, the cost is shared across a portfolio of players who have diverging agendas, a range of interest levels in any individual topic, and, most importantly, do NOT have line-item veto (or rights of approval) on our topic selection, process, methodology, or findings. All of our clients have exceedingly different agendas, which generally conflict with each other’s. Our role is not to serve any one agenda, but rather to provide objective data that can be used by all.
I urge your readers to read our report and come to their own conclusions:
FAQ: http://nemertes.com/internet_infrastructure_study_2009_frequently_asked_questions_faq
Report: http://www.nemertes.com/studies/internet_interrupted_why_architectural_limitations_will_fracture_net
Ted Ritter
Nemertes Research
I think the best way to describe all the network shaping crap is that it is EXACTLY like an airline overbooking seats. They sell more seats than they have in hopes that not everyone will show up. And when everyone does show up, they try to blame it on the passengers for not volunteering to take a later flight or change their plans.
It should be noted that this practice is so offensive that it is illegal in many places, and that a great many airlines avoid it like the plague.
This refusal to overbook doesn’t seem to hurt their ability to make money (WestJet and Southwest ring a bell?).
The thing that is saving the ISPs is that people don’t understand all the technobabble. They do understand showing up at the airport with a paid ticket and being told they can’t fly.
So when explaining traffic shaping, filtering and all the rest to the uninitiated, and particularly to politicians and lawmakers, just tell them it is all about overbooking – just like an airline.
Agreed, Chris. If the ISPs have a problem with how much bandwidth people are using, it is a problem they created themselves. If you tell me I’m paying for “4 megabits per second unlimited use”, I assume that’s what I’m getting: not “0-4 megabits per second, and no more than 5 gigabytes a month / 500 seconds of actual use, whichever is less”.
The ISPs do what they call “overcommitting”, which as you say is pretty much exactly what those airlines you are referring to do: 500 seats on the airplane, sell 750 of them, because usually that many don’t show up. Except with bandwidth it is more like having 10 megabits per second of capacity available and selling 250 gigabits per second worth of connections, because most of the time people don’t use what they bought. Now people are starting to use their 10 megabits per second (out of the 250 gigabits per second the ISP actually sold), and the ISPs are seeing that at some point their high profit margin might actually shrink a bit.
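To spell out that overcommitting arithmetic, here is a rough sketch; the subscriber count and upstream capacity below are my own illustrative assumptions, not figures from any actual ISP:

```python
# Rough sketch of the "overcommitting" arithmetic described above.
# All capacity and subscriber numbers are illustrative assumptions only.

MBPS_PER_SUBSCRIBER = 10         # advertised speed sold to each customer (assumed)
SUBSCRIBERS = 25_000             # customers sharing one aggregation link (assumed)
UPSTREAM_CAPACITY_MBPS = 10_000  # capacity actually provisioned behind them (assumed)

sold_mbps = MBPS_PER_SUBSCRIBER * SUBSCRIBERS           # 250,000 Mbit/s = 250 Gbit/s sold
contention_ratio = sold_mbps / UPSTREAM_CAPACITY_MBPS   # how many times over-sold
worst_case_share = UPSTREAM_CAPACITY_MBPS / sold_mbps   # share of advertised speed if everyone transmits at once

print(f"Sold: {sold_mbps / 1000:.0f} Gbit/s, provisioned: {UPSTREAM_CAPACITY_MBPS / 1000:.0f} Gbit/s")
print(f"Contention ratio: {contention_ratio:.0f}:1")
print(f"Worst-case share of advertised speed: {worst_case_share:.0%}")
```

With numbers like these everything works fine as long as only a few percent of customers use their connections heavily at the same time, which is exactly why the ISPs start complaining the moment usage patterns change.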
Hopefully the service providers won’t be given the ability to sell one thing, provide much less, and get away with it. Unfortunately, as has been shown by the copyright lobbyists, our politicians are suckers for big donations and fancy dinners.
That comment from Ted of Nemertes up above wasn’t visible earlier because I hadn’t checked my moderation queue yet today. In any case, welcome to my blog, Ted.
Your comment about the nature of your research and published reports may very well be true, Ted. But the fact is that you have allowed your conclusions to be edited, massaged, and apparently misquoted by the media and by the IIA. I assume you care about the reputation of your firm, so I also have to assume that the reason you haven’t issued retractions and sued the hell out of the IIA is that you are being paid by them for more than simply access to your conclusions. Hopefully you are being paid a lot.
My suspicion is that we’ll see another “the sky is falling! Only bandwidth capping and traffic shaping can save us! Abolish network neutrality or your computer will become sluggish and unresponsive!” headline in 2010. It will once again quote “respected think-tank Nemertes Research” with a revised doomsday set (surprise) one year further into the future. I’d be really happy to be proven wrong, and would literally smile if I saw a correction or retraction issued by Nemertes with the same vigor with which the original misinformation was released. But past experience suggests that I won’t be smiling.
Kelly, I know there is nothing I can say to change your opinion. I do want to address a few points:
1) If you read our 2007 and 2008 studies, our estimates of when potential demand and supply cross are about the same. We actually say that in North America the situation is slightly worse – based on new FCC data which shows less capacity growth than we originally projected. As far as 2009 is concerned, we have yet to decide if we will issue a new study.
2) As we say in the FAQ (referenced above) our studies are what they are. We don’t let clients or outsiders influence our research. These studies have been made publicly available which allows anyone to read and reference them for free.
3) Our fundamental assertion is potential demand is growing at a greater rate than supply at the edge. The lines have to cross.
4) What’s ironic is that, if you read the 2008 report, the far greater issue – in our estimation – is the depletion of IPv4 addresses and our analysis that IPv6 adoption/transition is going to be a huge challenge. We know how to solve bandwidth shortages at the edge – add more capacity. The IPv6 issue is a global issue and the resolution is not nearly as clear.
Thank you, Ted, for following up. I appreciate the clarifications.
I would agree with you that the deployment of IPv6 is taking far longer than it should: there are a lot of organizations dragging their feet on that one. Similarly, I agree that the client edge of the major ISP networks is seriously under-provisioned. Overall your report isn’t unreasonable: like any analysis, there are aspects that are debatable, but that isn’t the issue.
I think the real issue is how what you report is being used. IIA and similar organizations are taking your analysis and using it to push for things like bandwidth capping and traffic shaping: *NOT* increasing capacity, but containing use. They are using your report as a bludgeon to hammer a very self-serving objective into place, and your firm isn’t doing anything to contradict them. It is obvious that the core of the network is not collapsing, and given that, the most reasonable/fair/logical solution is to properly provision the edge of the network. Yet Nemertes Research says nothing, leaving the impression that the IIA and friends are merely stating your conclusions, not crafting their own.
To paraphrase Ben Franklin, if you lie down with dogs you will rise up with fleas. And unfortunately that is the situation Nemertes Research is in.
Having had some experience with think tanks in both the medical and natural resource fields, I can point out one of the problems with “independent” research institutes.
They have to sell their reports.
As Mr Ritter says, “research projects that we choose to undertake, based on our own best judgment on what makes sense.”
Few think tanks funded by the petroleum industry, even if that funding is widespread (including automakers, producers, industry manufacturers, safety specialists, accounting firms, and environmental remediation companies), are going to produce studies that say get rid of oil altogether. It just isn’t a question they will look at. The research they do on the questions they do look at may be impeccable… but they have pre-limited themselves by only looking at the questions that will produce studies their client base is interested enough in to pay for.
Those clients are not going to pay for a study that says “give it up, it’s all over.” They may very well pay for a study that tells them how to produce oil cheaper and cleaner, or how to maximize their profit during production, or how to get into new markets. Those are not even all bad things… but they are all very much inside the box.
If you do an analysis of most newspapers you will find that the content is overwhelmingly local, and even international news is slanted to the country in which the paper is published. Why? Because a person in Detroit doesn’t want to read about all the stuff going on in Buenos Aires. The paper doesn’t even bother to look, because it knows that’s not what its customer base wants.
The sins are sins of omission.
If Mr Ritter could point to a number of studies examining the negative effects of traffic shaping, network throttling and controversial marketing of bandwidth he would have a much greater claim to independence. But it doesn’t matter how honestly you answer the questions, if you only pick certain questions.
We have no issue with anyone who questions our research, our findings, or our methodology. We do take issue with questions of our integrity and independence. It is the foundation of our business and it is why we have such a diverse portfolio of clients with very different goals and objectives.
For the Internet Infrastructure research, we went into the project with no bias on what the outcome might be. We knew the only way to project Internet supply and demand was to do it independently, since all other studies had looked at either one issue or the other. If our model had projected no issues with supply and demand, we would have published the findings the same way we’ve published the findings we are discussing now.
If we found no issues, what would you be posting about Nemertes?
Now see, I have a problem accepting any report as unbiased and impartial when it contains commentary like this:
“One of the greatest challenges we face is the perception that our model predicts an Internet failure. This is incorrect. Rather, it predicts the potential for Internet brownouts when access demand is greater than supply. In following on the power grid analogy, most people experience minor disturbances, such as lights flickering, fans slowing, or computers freezing, long before a system-wide brownout or outage occurs. In the Internet, these disturbances may already be occurring, though they are yet to be systemic or even predictable. For example, some access operators are putting caps on usage. The justification is better bandwidth management so the 2% of the population that downloads the equivalent of the Porn Library of Congress each week does not negatively affect the 98% of “normal” users.”
This is full of emotionally charged supposition and judgment, and the only “fact” is that some ISPs are limiting usage. The fact that some ISPs are limiting usage is then taken as proof that “brownouts” occur, and the fact that brownouts occur is taken as proof that the internet is suffering from overload. This is self-referential logic of the worst sort.
The figures for consumer demand seem to be based on theoretical maximums of end user devices. Of course this is like saying that because my car can go 120 mph we will have a crisis if we don’t build residential streets capable of handling 120 mph traffic.
As another example of using self-referential and anecdotal evidence to support the conclusions of those who would see “throttling” as a preferential marketing strategy…
“Though this traffic load is more than typical, it certainly isn’t exceptional. This type of usage will become typical over the next three to five years. The fact that Comcast’s network is, by the company’s own admission, not able to cope with such usage patterns is a clear indication that the crunch we predicted last year is beginning to occur.”
This is hardly independent, rigorous evidence; it is a spoon-fed conclusion seeking validation.
“What this summary doesn’t show and what is fascinating about the analysis is that of the 71 peering points observed until mid-2008, 26 of them show an annual growth rate of less than 1.0. This means that more than one-third of the peering points are showing a decreased traffic rate (1.0 would mean no change in rate). These points include major international gateways, including Korea Internet Exchange (KINX), Amsterdam Internet Exchange (AMS-IX), and London Internet Providers Exchange (LIPEX). It is hard to imagine that these reflect broader traffic trends, especially given that other organizations estimate that global Internet traffic increased at a rate between 50% and 100% from mid-2007 to mid-2008 and globally, MINTS still shows that traffic is increasing at a mean annual growth rate of 66%.”
Here we see a dismissal of truly independent, verifiable data with a wave of the hand because you cannot “imagine” it to be true, and instead it is replaced with reports from “other organizations” (without proper references) to support your conclusion that the internet is under serious stress.
NOTE: not that eventually, if supply is not increased, it will be outpaced by demand – a no-brainer – nor that eventually the average user will be using as much bandwidth as today’s “power user” – again, something self-evident. No, this is supposed to support your conclusion that the internet is under severe stress NOW and that within two years widespread service disruptions – caused not by market-driven throttling but by actual capacity limits – will be occurring.
There is one line in the report that is extremely telling, since all your conclusions are assumptions based upon just that:
“Internet traffic measurements that assess overall growth rates based on the public peering point data are insufficient”
In other words, you have no data; you are guessing. It is dressed up nicely, but that “research” would be tossed out of any peer-reviewed paper without a second glance.
So Mr Ritter, your organization publishes reports that use quotes from companies pushing net throttling as “evidence” to support the contention that the internet is near its breaking point. You accept payment from companies pushing net throttling, and the only verifiable hard data in your report says that the internet is not near its breaking point. But you then dismiss this data and replace it with anecdotes and unreferenced sources. These reports are then used by the same companies quoted in them as justification for throttling.
Your methodology is poor and, yes, your independence is suspect.
Hello all. Sorry for the delay in responding. Chris, thanks for taking the time to read our research. I’d like to respond to a few of your points:
1) We are not using “self-referential logic”, as you say. We are saying that there is a potential for Internet brownouts if demand exceeds supply. Our comment on bandwidth caps was not an endorsement or justification. If there is plenty of capacity then there is no technical need for bandwidth caps or management. We interpret Cox’s comments as an indication of capacity limitations.
2) When you’re talking about a car that can go 120 MPH, you’re talking about distance over time. We’re talking about capacity – bits over time. We also use utilization factors to take into account that a 1 Gigabit/s Ethernet port cannot drive 1 Gigabit/s of Internet traffic today. We are very clear that this is one of the distinctive aspects of our approach.
3) We actually don’t dismiss MINTS research and findings. We acknowledge the growth rates they are tracking. A premise of our study last year is that traffic is shifting away from public peering points. This raises the distinct possibility that traffic growth is higher than what peering point measurements indicate. Hence, our comment about insufficiency.
4) We are clear from the start that we’ve built a model, one that maps adequately to historical traffic trends going back to 2000. We are also very clear that a great challenge for anyone projecting Internet supply and demand is the lack of disclosure from ISPs. If ISPs disclosed their traffic and capacity measurements, we’d be happy to compare our model to their actual growth rates.
5) Despite speculation, we do not disclose the names of our clients. And our clients have no influence over the outcomes of our research.
So, I respect your opinion, but we are confident in our independence and our methodology.
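For readers curious how a “utilization factor” like the one Ted mentions in his second point might be applied, here is a minimal sketch; the factor value and port count are purely my own illustrative assumptions, not anything taken from the Nemertes model:

```python
# Illustrative sketch: scale a port's nominal speed by a utilization factor to
# estimate the Internet traffic it realistically drives, then aggregate.
# The factor and port count are assumed values, not Nemertes' figures.

NOMINAL_PORT_MBPS = 1000     # a 1 Gigabit/s Ethernet port
UTILIZATION_FACTOR = 0.3     # assumed: only ~30% of that is actually driven onto the Internet
PORTS = 1_000_000            # assumed number of such access ports in a region

effective_demand_mbps = NOMINAL_PORT_MBPS * UTILIZATION_FACTOR
aggregate_demand_gbps = PORTS * effective_demand_mbps / 1000

print(f"Per-port effective demand: {effective_demand_mbps:.0f} Mbit/s")
print(f"Aggregate potential demand: {aggregate_demand_gbps:,.0f} Gbit/s")
```

Whether the real factors Nemertes uses look anything like these values is exactly the sort of thing that is hard to check without the ISP disclosure Ted mentions.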