DEBORAH D. MCADAMS

01.23.2014 03:00 PM

http://www.tvtechnology.com/mcadams-on/0117/mcadams-on-network-neutrality-and-the-world-of-statecraft/223357

 

McAdams On: Network Neutrality and the World of Statecraft

Conspiracy by shenanigan

 

COINCIDENCE—This will sound like a conspiracy theory, so let me just say that I find most conspiracy theories tedious and not contributive. Most would require tactical organizational skills that do not exist in Washington, D.C. There’s a boatload of shenanigans going on in D.C., to be sure, but no one’s keeping aliens prisoner outside of Roswell. I’m pretty confident on that one.

 

The unraveling of network neutrality is more of a shenanigan. It’s not an earth-shaker that the U.S. Court of Appeals for the D.C. Circuit bounced it back to the Federal Communications Commission. The court ruled that while the FCC has the authority to regulate broadband, it cannot impose network neutrality rules under the current classification of broadband as a Title I service under the Communications Act, as amended in 1996. Title I provides the FCC with “ancillary” jurisdiction over wired and wireless services; Title II provides more explicit authority.

 

Then-FCC Commissioner Robert McDowell called it back in December 2010, when the commission adopted the network neutrality order over his dissent:

 

“The order claims that it does not attempt to classify broadband services as Title II common carrier services. Yet functionally, that is precisely what the majority is attempting to do to Title I information services, Title III licensed wireless services, and Title VI video services by subjecting them to nondiscrimination obligations in the absence of a congressional mandate. What we have before us today is a Title II Order dressed in a threadbare Title I disguise. Thankfully, the courts have seen this bait-and-switch maneuver by the FCC before, and they have struck it down each time.”

 

The court saw it before. Comcast was caught throttling BitTorrent in 2007, and the commission cited the company for it in 2008. Comcast petitioned the D.C. Circuit for review and won in April 2010, on the same grounds on which network neutrality was struck down last week.

 

“For a variety of substantive and procedural reasons, those provisions cannot support its exercise of ancillary authority over Comcast’s network management practices,” the court wrote in its April 2010 decision. “We therefore grant Comcast’s petition for review and vacate the challenged order.”

 

To clarify, network neutrality rules prohibit Internet service providers (ISPs) like Comcast from fiddling with the bits that traverse the networks they paid to build and manage. I am not a fan of network neutrality in part for this very reason—it violates the Fifth Amendment, in my view.

 

It’s no different than buying farmland, building the infrastructure and having the government tell you what you can and cannot grow. Which—behold!—the government does, but under a subsidy program. So why not pay ISPs to abide by network neutrality? Because that would be as ridiculous as comparing U.S. broadband speeds to those of Korea or any other country with a tiny fraction of the landmass of the United States. I’ve commented on this absurdity before, but it persists in what passes for the national dialog about broadband in this country. (Richard Bennett does a cogent takedown at High Tech Forum.)

 

The other reason I’m not all acolytic over network neutrality is that it presumes what I’m getting over the Internet isn’t already being controlled by the search engine of my choice. We’ll call it “Google.” Google is worth $390 billion precisely because it does control what I see on the Internet. How is that different from Comcast throttling BitTorrent streamers? It’s not, really, except that A) we’ve all tacitly accepted Google’s interference as a fact of life, and B) it’s virtually impossible for the average user to quantify Google’s impingement versus throttling by an ISP. To hold up network neutrality as a tenet of a “free and open Internet” is like complimenting the naked emperor on his fine attire.

 

So where’s the conspiracy theory already? It goes like this. The FCC already was under pressure from Google and the file-sharing community to impose network neutrality when Comcast was caught throttling BitTorrent. Here’s Google’s Eric Schmidt in 2006:

 

“The Internet as we know it is facing a serious threat. There’s a debate heating up in Washington, D.C., on something called ‘net neutrality’—and it’s a debate that’s so important, Google is asking you to get involved. We’re asking you to take action to protect Internet freedom.”

 

Thus, “Internet freedom” was already established in the D.C. Lobbyist Lexicon when Comcast handed its proponents a gift. This all came about as a new Democratic Administration entered the scene with the goal of covering the country with the fastest broadband service on earth, upon which Google could do anything it wished.

 

The Obama Administration and Democrats in general pushed for network neutrality while the GOP railed against it. So the Administration came up with a broadband plan that promised to put several billion dollars in the Treasury for Congress to spend six or seven times. The plan involved taking 40 percent of the TV spectrum and selling it to wireless providers.

 

This plan, however, depended on the participation of the very ISPs who would be subject to network neutrality, particularly Verizon and AT&T, because they have all the money. Spectrum decked out in net neutrality regulations would attract less of that money for Congress to have already spent.

 

So then you have an FCC caught in the crossfire between Google and the major ISPs, as well as an expectation from Congress that it cough up $26 billion in spectrum auction proceeds. The agency had to do something, and it was becoming increasingly clear that doing something would not involve regulating broadband under Title II.

 

By May of 2010, more than 275 members of Congress from both sides of the aisle had “urged” then-FCC Chairman Julius Genachowski not to try it. This is more D.C. slang for “we’ll reverse it so fast your head will spin.” This was due in part to the aforementioned spectrum auction proceeds, which Congress has already spent at least once, on extending unemployment benefits. Never mind that both the spectrum and the auction have yet to materialize.

 

Genachowski needed Google because none of the major ISPs were ever going to provide wireless broadband in the boonies, while Google was doing it with nascent white-space technology. He also needed Verizon and AT&T positioned for a pretend bidding war over spectrum so that whatever members of Congress remained alive by the time of the auction could crow about it. Crowing is of paramount importance in Washington, D.C., and must never be underestimated.

 

The result was the 2010 network neutrality order that required “all broadband providers to publicly disclose network management practices, restrict broadband providers from blocking Internet content and applications, and bar fixed broadband providers from engaging in unreasonable discrimination in transmitting lawful network traffic.” It reflected the language of Title II, which authorizes the commission to compel Internet service free of “unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities or services.”

 

Title I services, meanwhile, can be regulated only in ways “reasonably ancillary to the effective performance” of the commission’s statutorily mandated responsibilities. Net neutrality applied under Title I would be a cinch for Verizon’s lawyers to overturn in court, which is exactly what they did. Network neutrality now appears to hang in the balance while the new FCC chief, Tom Wheeler, decides his next move.

 

In the meantime, virtually everyone got what they wanted. The ISPs got out from under network neutrality and will be more likely to bid up spectrum for wireless broadband to the delight of Congress, while supporters can hope that Wheeler’s FCC will keep it alive.

 

Full of sound and fury, signifying nothing.

 


 

What the Heck is “Net Neutrality” Anyhow?

January 21st, 2014 by Richard Bennett

http://www.hightechforum.org/what-the-heck-is-net-neutrality-anyhow/

 

You may have noticed there’s been some talk on the Internet lately about something called “net neutrality”. It’s connected to a court decision against the FCC, in which the court (the DC Circuit) determined that the FCC once again overstepped its bounds and imposed rules it was never legally entitled to make. This is the second time the court has smacked the FCC around on net neutrality. I don’t generally like to define basic terms of Internet policy, since I believe most of my readers are intelligent enough to know what they mean, but in this instance the FCC’s continued pigheadedness convinces me that a definition is needed.

 

The issue came up in the American Enterprise Institute’s discussion of tech policy issues on the agenda for 2014, which you can see on C-SPAN’s web site: American Enterprise Institute Scholars Predict 2014 Tech Policy Issues

 

As I said, the root of net neutrality is the fear that we’re getting a raw deal on Internet service. This fear – grounded in the fact that it’s more expensive to build a network that covers a dispersed population than to cover one that lives in high-rise buildings – combines with a theory about network design and network quality that’s fundamentally defective. Network neutrality advocates believe that broadband information networks are very, very simple, about like the water system. All it takes to supply a city with water is a well, a pump, and some pipes, so hooking the city up to the Internet should just be about some wires, some switches, and a little bit of electricity. The wires may break from time to time, but when that happens you just patch ‘em up and it all works like magic.

 

It would be great if things were really like that, but they simply aren’t. Broadband is like a water system that pumps fifty percent more water each year to each home for the same price. That would be pretty hard for most water systems to do unless they were massively overbuilt to begin with.
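Bennett’s fifty-percent figure compounds quickly, which is the whole point of the analogy. Here is a back-of-the-envelope sketch (my own illustration, not from the article) of what sustained 50 percent annual growth in delivered capacity implies over a decade:

```python
# If demand grows 50% per year at a flat price, the network must
# carry 1.5**n times its original load after n years.
growth = 1.5
capacity = 1.0
for year in range(1, 11):
    capacity *= growth
    print(f"Year {year:2d}: {capacity:5.1f}x the original capacity")

# After a decade that is roughly 58x the original load for the
# same revenue per subscriber.
```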

 

Most broadband networks were actually built for a different job than the one they’re doing now. The cable network was built to share a TV antenna, the telephone network was built for, you guessed it, old black telephones, and the mobile network was built for cell phones. These original tasks were all quite a bit less demanding than the tasks they’re doing now. If a phone call is a trickle of water, a pirated movie is a torrent; that’s much, much more water. Not only were broadband networks not overbuilt, they were quite under-built from today’s point of view.

 

The only broadband technology that was really meant for the Internet from day one is “Passive Optical Network” (PON), the glass fiber stuff that Verizon sells as FiOS (and Google sells as, well, Google Fiber), and even it relies on telephone poles that were built for, you guessed it again, copper telephone wire. FiOS is like a water system that runs a three-foot pipe to each home in order to have capacity for 30 years’ worth of upgrades, and the other stuff is like the current water system plus some engineering magic that makes the water go faster every year by processes most of us don’t want to understand.

 

So the net neutrality people believe our networks are pathetically slow and overpriced, and that they’ll only get better if their owners are prevented from doing anything to them except making them faster and cheaper. The net neutrality rules are actually aimed at foreclosing all the engineering options that might improve network quality and profitability except those options that improve speed and lower prices. Going back to water: net neutrality advocates don’t want the water to be more pure or better tasting, they just want more water for a lower price, period.

 

So net neutrality is both a fear and a plan.

 

The plan happens to be wrong. The idea that networks should emphasize capacity over quality is a classic engineering error that goes back to the 1970s, when the first Local Area Networks (LANs) were designed. In those days, the computer industry was making its first foray into network design; previously, it relied on the telephone company to supply the equipment that hooked computers up to each other, typically over long distances. The phone company would supply us with modems that either worked over regular telephone connections – at low speeds like 300 bits per second – or over dedicated circuits known as “leased lines” that were much faster, like 56,000 bits per second. These devices could cover thousands of miles.

 

Minicomputer companies started making computers so cheap that companies could have more than one of them in any given office, so they invented LANs to connect computers at even higher speeds – like millions of bits per second – over distances of less than a mile. One of the great revelations of that era is that distance is the enemy of speed in communication systems. Actually, that revelation was about a thousand years old, but computer people came to realize it quite intimately in the 1970s.  More on that later.
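Why distance is the enemy of speed comes down to propagation delay, which grows linearly with distance no matter how fast the link is. A minimal sketch (my own, assuming a signal speed of roughly two-thirds the speed of light, a typical figure for copper and fiber):

```python
# One-way propagation delay at various distances. A signal speed of
# ~2/3 c is a rough, typical figure for copper and optical fiber.
C = 299_792_458            # speed of light in a vacuum, m/s
SIGNAL_SPEED = 2 * C / 3   # assumed propagation speed in cable

for label, meters in [("LAN segment", 500),
                      ("Metro link", 50_000),
                      ("Coast-to-coast", 4_000_000)]:
    delay_ms = meters / SIGNAL_SPEED * 1_000
    print(f"{label:>14}: {meters:>9,} m -> {delay_ms:7.3f} ms one way")
```

At LAN distances the delay is a few microseconds; across a continent it is tens of milliseconds, and no amount of added bandwidth can buy it back.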

 

The minicomputer companies were able to try a number of tradeoffs in the design of their LANs. Datapoint (of San Antonio, Texas) was the first to sell a full-featured LAN, which it called ARCNet. ARCNet was the first LAN I heard about, in 1975, and the one that inspired me to do some LAN inventing of my own a decade later. It was an ingenious little system that used some parts and cables designed by IBM for its 3270 terminals, coupled to a micro-controller programmed by Datapoint’s Gordon Peterson, a Wozniak-like character who was larger than life and immensely talented. ARCNet was capable of providing predictable service with a controlled delay, which made the system good for a wide range of factory-floor applications as well as for less demanding office work. Twenty-five years after I first heard about ARCNet, I programmed an ARCNet application for printing presses, and the system still worked perfectly well.
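The “controlled delay” Bennett credits to ARCNet follows from token passing: in the worst case, a station waits while every other station sends one maximum-size frame before the token returns, so the bound can be computed in advance. A rough sketch; the 2.5 Mbit/s rate is ARCNet’s, but the station count, frame size, and token overhead below are illustrative assumptions, not Datapoint’s actual figures:

```python
# Worst-case token-bus access delay: every other station transmits
# one maximum-size frame before the token comes back around.
STATIONS = 32              # assumed number of nodes
LINK_BPS = 2_500_000       # ARCNet ran at 2.5 Mbit/s
MAX_FRAME_BITS = 508 * 8   # assumed maximum frame size
TOKEN_BITS = 24            # assumed token overhead per pass

frame_s = MAX_FRAME_BITS / LINK_BPS
token_s = TOKEN_BITS / LINK_BPS
worst_case_ms = (STATIONS - 1) * (frame_s + token_s) * 1_000

print(f"Worst-case wait for the token: {worst_case_ms:.1f} ms")
# A fixed, predictable bound is exactly what factory-floor
# control applications need.
```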

 

Other LANs of the 1970s used all of their logic circuits for speed and didn’t care about variations in delay. The infamous Ethernet system devised at Xerox was like that, but it was redesigned in the mid-’80s to provide both higher speeds and lower delay. The Ethernet redesign was my first bit of network inventing, but most of the work was done by a fellow named Tim Rock, who worked for AT&T Information Systems, and by Pat Thaler of Hewlett Packard. The redesign enabled Ethernet to run over fiber optics as well as copper wires at a wide range of speeds; the whole panoply runs from 1 megabit/second over plain old telephone wire all the way up to 100 gigabits/second (gigabits are thousands of millions of bits per second) over fiber optics today.

 

In the 1970s it was necessary to make choices between speed and bounded delay because the chips we had to work with were expensive by today’s standards and not very capable. The most up-to-date Ethernet chips – the ones that run at speeds of 1 gigabit/second and more – are also capable of handling multiple levels of priority. If you have an application that needs to push information at a rate that matches the rotation of a hard drive’s platter, it needs both high capacity and bounded delay. If the information you’re pushing doesn’t get to the disk at the right time, the platter needs to rotate another full revolution before the data can be written to it and sometimes this is bad; if your network causes you to miss this rotational window millions of times a day, it’s noticeably bad.
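The rotational-window arithmetic is easy to check. A worked example with my own assumed numbers (a 7,200 RPM drive and one million missed windows a day):

```python
# Missing the rotational window costs one full extra revolution.
RPM = 7_200                  # assumed drive speed
revolution_s = 60 / RPM      # time for one full rotation

misses_per_day = 1_000_000   # "millions of times a day"
wasted_minutes = misses_per_day * revolution_s / 60

print(f"One revolution: {revolution_s * 1_000:.2f} ms")
print(f"Lost to 1M missed windows/day: {wasted_minutes:.0f} minutes")
```

At 7,200 RPM each miss costs about 8.3 ms, so a million misses a day wastes well over two hours of write time: noticeably bad indeed.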

 

A good network has a combination of high capacity and low delay, neither of which is a substitute for the other. An airplane will generally get you where you want to go faster than a car will, because the airplane has higher speed or “more bandwidth” in networking terms. But if the airplane only flies three times a week and you need to arrive on one of the days in between flights, you’re better off driving because the car has lower delay, or “latency” in networking terms.
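The airplane/car tradeoff maps onto the standard formula for transfer time: total time = latency + size / bandwidth. A minimal sketch with made-up link parameters, showing that neither number substitutes for the other:

```python
# Total transfer time = latency + size / bandwidth.
def transfer_time(size_bytes, bandwidth_bps, latency_s):
    return latency_s + size_bytes * 8 / bandwidth_bps

links = {
    "airplane (fast, infrequent)": (100e6, 0.500),  # 100 Mbit/s, 500 ms
    "car (slow, always ready)":    (10e6,  0.010),  # 10 Mbit/s, 10 ms
}

for size in (1_000, 1_000_000_000):   # 1 KB versus 1 GB
    print(f"\nTransferring {size:,} bytes:")
    for name, (bw, lat) in links.items():
        print(f"  {name}: {transfer_time(size, bw, lat):,.3f} s")
```

The low-latency link wins the small transfer and the high-bandwidth link wins the large one, which is why a good network needs both.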

 

Net neutrality people don’t understand that network quality is distinct from network speed, in the same way that speed and delay are distinct for airplanes. So when the network operators want to supply service offerings that limit latency as well as offerings that increase capacity, they think they’re being scammed. They also don’t understand the relationship between speed and distance, or the relationship between cost and distance, so they compare speeds and prices in the US and Korea and once again think they’re being scammed. In fact, the laws of nature are doing the scamming. Most Koreans live in Seoul (more than 65 percent, to be precise), and Seoul is the most densely populated city in the OECD.

 

If US broadband companies invested exactly as much money per connection as Korean companies invest, and if both countries featured the same markup of retail price over investment, would Americans and Koreans pay the same prices for the same broadband speeds? The answer, of course, is “no”. So why do net neutrality advocates complain about the speeds and prices for broadband in the US by comparison with South Korea, Hong Kong, and Stockholm, all more densely populated than any US city? This doesn’t reflect well on them.

 

So net neutrality is a regulation that claims to improve the quality of American broadband networks by imposing a set of conditions that can only make them worse. It also blames the carriers for limitations caused by the way Americans choose to live. The regulation seeks to force carriers to over-invest in secondary network characteristics while ignoring the primary source of the problems our networks really have: the distances between households, and between households and network services. American firms already invest several times more money per capita than those in the city-states with the fastest and cheapest networks, and this investment leads to higher prices.

 

America’s position on the international broadband charts is determined by the way we live. Net neutrality ignores this and tries to make things worse. That should not be allowed to happen.

 

Network neutrality also violates the Internet’s technical standards and its architecture. Consider the text of RFC 2475 from 1998, titled An Architecture for Differentiated Services: (http://www.ietf.org/rfc/rfc2475.txt)

 

"This document defines an architecture for implementing scalable service differentiation in the Internet. A “Service” defines some significant characteristics of packet transmission in one direction across a set of one or more paths within a network. These characteristics may be specified in quantitative or statistical terms of throughput, delay, jitter, and/or loss, or may otherwise be specified in terms of some relative priority of access to network resources. Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service."

 

Now look at the FCC’s Open Internet rule that the court struck down:

(http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-10-201A1.pdf)

 

"2. No Unreasonable Discrimination

 

68. Based on our findings that fixed broadband providers have incentives and the ability to discriminate in their handling of network traffic in ways that can harm innovation, investment, competition, end users, and free expression, we adopt the following rule:

 

A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.

 

69. The rule strikes an appropriate balance between restricting harmful conduct and permitting beneficial forms of differential treatment. As the rule specifically provides, and as discussed below, discrimination by a broadband provider that constitutes “reasonable network management” is “reasonable” discrimination. We provide further guidance regarding distinguishing reasonable from unreasonable discrimination:

76. For a number of reasons, including those discussed above in Part II.B, a commercial arrangement between a broadband provider and a third party to directly or indirectly favor some traffic over other traffic in the broadband Internet access service connection to a subscriber of the broadband provider (i.e., “pay for priority”) would raise significant cause for concern. First, pay for priority would represent a significant departure from historical and current practice. Since the beginning of the Internet, Internet access providers have typically not charged particular content or application providers fees to reach the providers’ retail service end users or struck pay-for-priority deals, and the record does not contain evidence that U.S. broadband providers currently engage in such arrangements. Second, this departure from longstanding norms could cause great harm to innovation and investment in and on the Internet. As discussed above, pay-for-priority arrangements could raise barriers to entry on the Internet by requiring fees from edge providers, as well as transaction costs arising from the need to reach agreements with one or more broadband providers to access a critical mass of potential end users. Fees imposed on edge providers may be excessive because few edge providers have the ability to bargain for lesser fees, and because no broadband provider internalizes the full costs of reduced innovation and the exit of edge providers from the market. Third, pay-for-priority arrangements may particularly harm non-commercial end users, including individual bloggers, libraries, schools, advocacy organizations, and other speakers, especially those who communicate through video or other content sensitive to network congestion. Even open Internet skeptics acknowledge that pay for priority may disadvantage non-commercial uses of the network, which are typically less able to pay for priority, and for which the Internet is a uniquely important platform. Fourth, broadband providers that sought to offer pay-for-priority services would have an incentive to limit the quality of service provided to non-prioritized traffic. In light of each of these concerns, as a general matter, it is unlikely that pay for priority would satisfy the “no unreasonable discrimination” standard. The practice of a broadband Internet access service provider prioritizing its own content, applications, or services, or those of its affiliates, would raise the same significant concerns and would be subject to the same standards and considerations in evaluating reasonableness as third-party pay-for-priority arrangements."

 

See the problem? RFC 2475 says: “Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service.”

 

But the FCC second-guessed the Internet Engineering Task Force and said “…as a general matter, it is unlikely that pay for priority would satisfy the ‘no unreasonable discrimination’ standard.” The FCC also rewrote history with its claim that “First, pay for priority would represent a significant departure from historical and current practice,” despite the plain language of RFC 2475, the specifications for IP, and a host of other specifications for such things as Integrated Services, which are part of LTE.

 

So not only do net neutrality advocates ignore the physical laws of the universe and engage in amateur network engineering; the FCC’s Open Internet rules also directly contradicted the design of the Internet architecture.

 

In a word, net neutrality is hubris, prideful overreach. But it’s motivated by fear, so we mustn’t be too hard on its supporters because they know not what they do.