No Neutral Ground: The Problem of Net Neutrality

On November 21, 2017, the Federal Communications Commission announced plans to revisit its Obama-era internet regulations. It seems likely that the resulting vote will repeal the policies commonly referred to as net neutrality. The name is, perhaps, misleading; to support net neutrality is to support placing the internet more fully under government supervision. The political debate often divides traditional allies, pitting arguments for free expression against defenses of small government.
To understand net neutrality, one must see its place in technical history. Traditionally, internet service providers (ISPs) such as Comcast and Verizon have guaranteed their customers a certain quantity of bandwidth, that is, a certain amount of data per unit of time. It was assumed that even a voracious user would rarely use his maximum bandwidth, so a provider could safely sell more total capacity than its network could carry at once, and services were priced accordingly. ISPs also allowed customers, in practice, to access whatever websites they wished; this behavior had no legal protection, but technical complexity made discrimination by website infeasible. The result was a largely open web: anyone with a blog could potentially reach millions.
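
The economics of that pricing assumption are easy to see with a back-of-the-envelope calculation. The Python sketch below uses invented numbers (1,000 subscribers, 25 Mbps plans, 2% average utilization) purely for illustration; it shows why a network provisioned for average demand is exposed once users begin saturating their connections.

# Back-of-the-envelope oversubscription math (all numbers are assumptions).
subscribers = 1_000        # hypothetical subscriber count
plan_mbps = 25             # bandwidth sold to each subscriber
avg_utilization = 0.02     # assumed pre-streaming average usage (2%)

worst_case = subscribers * plan_mbps            # everyone maxed out at once
typical_demand = worst_case * avg_utilization   # expected simultaneous load

print(f"Capacity if every user maxed out: {worst_case:,} Mbps")
print(f"Typical simultaneous demand:      {typical_demand:,.0f} Mbps")

# Hours-long video streams push many users toward 100% utilization at the
# same time, and the gap between these two numbers becomes a cost problem.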
In the early 2000s, the situation changed. Technological innovations, notably deep packet inspection, enabled providers to determine which site a user was visiting and so, potentially, to restrict access. In principle, an ISP could now sell ‘packages’ of websites, in a fashion resembling cable television: ‘basic internet’ for news and Facebook, say, or ‘premium internet’ for those who wanted more. These years also saw the rising popularity of streaming video services like Netflix and YouTube. Users now binge-watched videos, consuming their maximum available bandwidth for hours at a stretch. Such trends raised ISPs’ costs, leading them to investigate new responses: restricting access to high-usage sites, artificially slowing downloads (throttling), and so on.
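
To make ‘artificially slowing downloads’ concrete: throttling is commonly implemented with a rate limiter such as a token bucket, in which tokens accrue at a fixed rate and each forwarded packet spends tokens. The sketch below is a minimal illustration under assumed names and numbers, not any ISP’s actual implementation; "video-service.example" stands in for a hypothetical high-usage destination.

import time

class TokenBucket:
    """Tokens accrue at `rate` bytes/second up to `capacity`;
    sending a packet spends tokens, so throughput is capped at `rate`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_send(self, size):
        """Return True if a packet of `size` bytes may be sent now."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

# Throttle one destination to ~1 MB/s; all other traffic passes freely.
throttled = TokenBucket(rate=1_000_000, capacity=64_000)

def forward(size, destination):
    if destination == "video-service.example":   # hypothetical high-usage site
        return throttled.try_send(size)          # delayed or dropped when over budget
    return True

Because the bucket refills continuously, brief bursts up to `capacity` still go through, while sustained transfers, such as an hours-long stream, are held to the configured rate.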
