Closing the Virtual Frontier (Redux)

by Lindy Davies

The issue of “Net Neutrality” has been somewhat shoved to the back burner by the more pressing economic concerns of deep unemployment and financial crisis – but it has exercised many people, for some years now, and it isn’t going away. What is it about? Why would we be talking about it in the Georgist Journal?

The “information superhighway” of the Internet has changed our world in so many ways that it has become hard to remember (or, for our kids, to imagine) life without it. Its rate of growth has been mind-bending by any measure: number of users, commercial importance, journalistic and scholarly attention, hours of our time devoted to it. It has had a “wild frontier” character that has made billionaires out of the geeks who thought up things like Google, Ebay, PayPal, Facebook and Twitter. For a while there it seemed as though just about any Net enterprise would make money — but then things calmed down a bit; around the turn of the century, the Internet settled down to simply being ubiquitous and indispensable. Interestingly, that was about the time that “Net Neutrality” started to become a controversial issue. You see, the neat thing about the Internet is that from the very start, it has been a “many to many” medium. Some sites and services might have become wildly popular, but they all started out with the same access to the same Net as every other. At the end of 2009 there were 234 million websites and 126 million blogs on the Internet.*

*www.royal.pingdom.com offers a fascinating, documented compilation of Net stats

The universality and wide-open character of the Internet was, really, a happy accident of its design. I wrote about this in 1995:

The Internet started out as a free public good – indeed, as arguably the single most important by-product of US military spending. It started out as the ARPAnet, whose original mission was to provide a command-and-control network that was so hyper-redundant that it could not be disabled. The basic design was for a network that could still function even if huge chunks of it (the Washington, DC and New York metro areas, say) were vaporized in a nuclear war. Thus, packet-switching communications technology was born.*

*This paper surveyed the then-current literature on the Internet and its future. Much of its analysis was based on Henry George’s The Science of Political Economy. A PDF copy of the paper is available at http://www.henrygeorge.org/pdfs/netecon.pdf

This technology ended up serving purposes that were (thank goodness) quite different from what it was designed for. Yes, it was an efficient means for exchanging information – but people soon realized that this TCP/IP network was far more than that. In fact, it represented a whole new way of communicating. Douglas Adams explained this in his fascinating essay, “The Four Ages of Sand.”*

*This appeared in Adams’s posthumous book The Salmon of Doubt, and is available on the Web at http://www.douglasadams.se/stuff/sand.html

Adams said that we humans had always had one-to-one communication, of course, from the very dawn of humanity. Eventually we became adept at one-to-many communication, such as was practiced by Greek orators and was greatly accelerated by the printing press. In time, we started experimenting, in a halting, flawed manner, with many-to-one communication in the form of democratic institutions. But never, until the Internet, did we have the potential for many-to-many communication. It was a whole new way for human beings to exchange information.

The technical reason for this amazing potential was a happy accident. To achieve the capacity to get messages from Washington, DC to Boston, despite the physical absence of the Greater New York Metropolitan Area, the network had to be 100% “dumb.” Messages were broken up into packets of data by the Internet Protocol (IP) software, digitally addressed, like little envelopes, and sent off to bounce around the Net through whatever path was open to them. When they reached their destination, they would be reassembled and displayed. It was impossible to know exactly what path the packets of your message would follow during the milliseconds that they went bouncing through the Net. In achieving a network that would function even after large segments of its central trunk were vaporized, we’d also created a network on which each user had exactly the same access as every other user.
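
For readers who like to see the moving parts, here is a minimal sketch of that idea in Python. A message is chopped into addressed, numbered packets; the packets arrive in whatever order the network happens to deliver them; and the receiver puts them back together by sequence number. The function names, packet size and address are illustrative assumptions, not a description of any real IP implementation.

```python
# A toy illustration of packet switching: the message is broken into
# independently addressed packets, each of which may take any path (here
# simulated by shuffling the arrival order), and the receiver reassembles
# them by sequence number. Names like packetize/reassemble are invented
# for this sketch; they are not part of any real networking library.
import random

PACKET_SIZE = 8  # bytes of payload per packet (tiny, for readability)

def packetize(message: bytes, dest: str):
    """Split a message into (destination, sequence number, payload) packets."""
    return [
        (dest, seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets):
    """Sort packets by sequence number and stitch the payloads back together."""
    return b"".join(payload for _, _, payload in sorted(packets, key=lambda p: p[1]))

message = b"Messages bounce through whatever path is open to them."
packets = packetize(message, dest="192.0.2.1")

random.shuffle(packets)  # packets arrive out of order, having taken different routes
assert reassemble(packets) == message
print(reassemble(packets).decode())
```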

Therefore, the Internet also presented an economic problem that wasn’t so new: the “tragedy of the commons.” The year I wrote my paper, 1995, turned out to be an historic one for the Internet: it was the year that the Federal government stopped funding the NSFnet, the fiber-optic network linking supercomputing centers that formed the “backbone” of the Net. Additional capacity had been provided by universities and research facilities. But now the Internet was becoming the Next Big Thing, and demand would soar.

Indeed, this brought about the “dot-com bubble,” which led, among many other things, to telecommunications companies investing in backbone facilities. The profitability of that investment, however, was always doubtful. As the number of Internet users soared in the early 2000s, demand was robust enough for telcos to simply re-sell Internet access to service providers. Now, though, demand for individual internet-access accounts seems to have plateaued — while the bandwidth desired by each user is rapidly increasing.

Why don’t the telecommunications folks just build more capacity to handle the load? The short answer is that if a company builds a new backbone facility, it is open for use by every user on the whole Net – not just the ones who pay for it. Early attempts, by companies such as CompuServe and America Online, to restrict users to “home” content were an utter failure. Those companies ended up as regular portals to the open Internet. If the entire Net is open, why would one want to buy access to just a segment of it? But – if the entire Net is open, how can one recoup private investment in what is, by its nature, a public resource?

Congestion has become a serious problem on the Internet, and it looks likely to get worse. Dire predictions started being made about two years ago that without considerable investment in new backbone facilities, traffic would start to reach the limits of existing capacity by about now. Internet “brownouts” are hard to unambiguously identify because of the great variety of local variables — but no one thinks that broadband access, today, is all that it should be, and average connection speed in the United States currently ranks 31st in the world.*

The obvious solution to the return-on-investment question – if it could be accomplished technically – would be to monopolize the Internet’s infrastructure in such a way as to ensure high-quality delivery of your content to customers, wherever they might be on the Internet. But how can that be done, in an end-to-end, packet-switched, “dumb network”?

You had to figure that eventually somebody would find a way. You’ve noticed, haven’t you, that streaming video, audio – and commercials – are clearer, sharper and more reliable from big media outlets than from small websites? This is being accomplished by the use of edge-server caching technology, whose leading provider is a hot new company called Akamai. For fees that are affordable for outfits like J. C. Penney, Fox Sports or MTV – but not to folks like the Henry George Institute, or thousands of small businesses hoping to sell their wares on the Web – Akamai’s systems pull content from its origin server and copy it, dynamically as required by traffic needs, to proprietary servers near cities and other places with high volumes of end-user traffic. This system dramatically improves performance, and this is especially noticeable in the case of the biggest bandwidth-hog on the Net, streaming video. Unlike email or basic Web pages, for which a half-second delay is no problem, streaming content requires smoothly uninterrupted delivery.*

*In other words it requires what those in the trade call “low latency” or a high guaranteed QoS (quality of service)
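
For the technically curious, here is a toy Python sketch of the general edge-caching idea: a server located near end users keeps a copy of a popular object after the first request, so later requests never have to make the long haul back to the origin. The class name, the file path and the latency figures are all invented for illustration; this is a cartoon of the technique in general, not a description of Akamai’s patented systems.

```python
# A toy sketch of edge caching: an "edge" node near end users keeps copies
# of popular objects so they don't have to cross the whole Net on every
# request. The delays below are made-up illustrative constants, not
# measurements of any real network.
import time

ORIGIN_DELAY = 0.200   # pretend long-haul fetch from the origin: 200 ms
EDGE_DELAY   = 0.020   # pretend fetch from a nearby edge server: 20 ms

ORIGIN_CONTENT = {"/video/clip.mp4": b"...streaming bytes..."}  # hypothetical object

class EdgeServer:
    def __init__(self):
        self.cache = {}

    def fetch_from_origin(self, path):
        time.sleep(ORIGIN_DELAY)               # long haul across the open Net
        return ORIGIN_CONTENT[path]

    def get(self, path):
        if path not in self.cache:             # cache miss: pull from origin once
            self.cache[path] = self.fetch_from_origin(path)
            return self.cache[path], ORIGIN_DELAY
        time.sleep(EDGE_DELAY)                 # cache hit: served from nearby
        return self.cache[path], EDGE_DELAY

edge = EdgeServer()
_, first = edge.get("/video/clip.mp4")   # first viewer pays the long-haul delay
_, later = edge.get("/video/clip.mp4")   # later viewers are served from the edge
print(f"first request ~{first*1000:.0f} ms, subsequent requests ~{later*1000:.0f} ms")
```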

Edge-server caching provides this — but, so far anyway, at too high a price for small players. Will competitors come into this booming market and bring prices down — perhaps making edge-server caching as commonplace as DSL lines? Well, probably not anytime soon. Much of Akamai’s technology is patented. It has already won three lawsuits against would-be competitors, and has recently filed a fourth against a startup named Cotendo.

So we’re left with the situation that new capacity will only be provided if the provider can own that capacity, and charge for its exclusive use. Looks like we really are seeing a “Closing of the Virtual Frontier.” Back in ’95, I envisioned an Internet in which big players could build themselves high-capacity proprietary pipes for their own content…

Everyone gets a taste, apparently: those who can afford to pay for priority service can get near-instant transmission; those with less money can still use internet services, paying in the form of delay. Providers of large-bandwidth information services would clearly be happy with such a system. Receivers of large-bandwidth applications would too…

Only one group of internet users is left in the cold in this scenario: the small-scale internet publisher. Up until now, one of the most important defining characteristics of the internet was that everyone has the same access. Sky Cries Mary [a 90s Grunge band from Seattle] can perform a video concert to the same audience as the Rolling Stones. Any small dissident ’zine can command (potentially) as large an audience on the World Wide Web as a major national publication. This is a vital fact about the way the internet has grown and a pervasive aspect of its character (upon which much of its current marketability depends). But if NBC, say, can afford to pay whatever it costs to make its web page zip through to web-browsers, the local nonprofits, community groups (or geeks-in-basements) will get squeezed further and further into the netherworld of delay. To use congestion pricing to incentivize development of information superhighway infrastructure could do irrevocable harm to the two-way character of the internet.

Indeed, many people who are concerned with the rights of privacy and free expression in cyberspace are very troubled by this possibility, and many feel that the only way the open, democratic character of the internet can be preserved is by maintaining it as a public good…

That was fifteen years ago, and that spirit has re-emerged today in the “Net Neutrality” movement.

This can all get rather abstruse. Let’s quickly sum up what we’ve seen so far — and then get to what Georgist theory might have to offer toward a solution. The original nature of the Internet was a “dumb network,” an end-to-end medium in which every user had the same access to network resources as every other user. This “many-to-many” communication model offered great incentives for innovation and — changed our world. The great increase in demand for Internet resources cannot be supplied in a free market unless investors can be guaranteed the exclusive use of the new capacity they provide. But if they secure that exclusive access while providing content to users of the Internet, they are effectively walling off portions of a public resource for their own profit! It should be noted that these huge video files go to the edge-servers, and go from the edge-servers to the end users, via the same Internet we all use — but it is the proprietary use of these servers that makes them worth paying for, because users get to see stuff clearly, crisply, and without interruption.

Skeptics in the net-neutrality debate wonder why all this is a problem. Companies, they say, are offering to sell users an enhanced experience of the Web. Should they not expect to be paid for this? Conventional economic theory does address such questions (antitrust policy, for example, is informed by the analysis of various degrees of imperfect competition), but it is a highly complex matter that affords no categorical answers. Georgist theory claims to be able to do better. Is it possible to look at the net-neutrality issue in terms of the factors of production, and identify what rent, if any, is being generated?

Additional physical capacity for the Net, whether communal or proprietary, is clearly “capital” in Georgist terms; there’s no doubt about that. But what about the Internet itself? Isn’t it also just a collection of physical equipment that carries bits of data here and there? It looks that way – but if so, then why all this talk about monopolizing and enclosing? There’s something about the Internet that seems very land-like.

That perception is a sound one, I think, and it arises from two facts about the Internet: 1) that it was initially established and maintained by the community and was freely available to all users on an equal basis; and 2) the Internet exhibits a positive network externality: in other words, the more people use it, the more valuable it is to each user. This naturally occurring process, applied to something that started out as a free public good, created a natural opportunity for the users of the Internet. It was, in Georgist terms, “land at the margin.” It was free to all,*

*Users had to pay local providers for access to the Internet, but such charges were for the equipment and services used in gaining access, not for the use of the Net itself.

and the marginal cost of sending a piece of data across the Net was zero. But wait a minute. When we talk about land monopoly, we talk about the monopolization of unique, addressable spaces. What space was getting monopolized on the Internet? Doesn’t the Net transcend space, allowing everybody to interact in real time with everybody else? Perhaps. But Henry George, in The Science of Political Economy, reminds us that the natural limitations on production happen not just in space, but also in time. As the demands for bandwidth increase, and especially as more and more users want streaming applications that don’t tolerate delays, we find that time is the important limiting factor in the economy of the Internet. By providing the means to assure that proprietary content gets to users without delay, Akamai and similar firms are effectively monopolizing particular, valuable bits of time.
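
A back-of-the-envelope simulation, in Python, may make that point more concrete. Suppose a congested link can carry one packet per transmission slot, and prioritized (paid-for) traffic is always scheduled first: the paid packets occupy the earliest slots, and everyone else pays in the currency of delay. The slot length, queue sizes and labels below are invented purely for illustration.

```python
# A rough sketch of "monopolizing time" on a congested link: each slot on
# the link carries one packet, and paying traffic is always scheduled
# first, so best-effort traffic pays in delay. All numbers are illustrative
# assumptions, not measurements.
from collections import deque

SLOT_MS = 10  # pretend each transmission slot on the congested link takes 10 ms

priority_queue = deque(f"paid-{i}" for i in range(5))      # prioritized traffic
best_effort_queue = deque(f"free-{i}" for i in range(5))   # everyone else

clock_ms = 0
finish_times = {}
while priority_queue or best_effort_queue:
    # Strict priority: serve the paid queue whenever it has anything waiting.
    queue = priority_queue if priority_queue else best_effort_queue
    packet = queue.popleft()
    clock_ms += SLOT_MS
    finish_times[packet] = clock_ms

print("paid traffic done by:", max(t for p, t in finish_times.items() if p.startswith("paid")), "ms")
print("best-effort done by: ", max(t for p, t in finish_times.items() if p.startswith("free")), "ms")
# With strict priority, the paid packets occupy the first 50 ms of link time;
# the best-effort packets cannot even begin until the prioritized queue is empty.
```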

Georgist theory goes on to assert that a natural opportunity acquires value due to the actions of the entire community, and its value will be equal to the cost of maintaining the community’s infrastructure that creates it. In this case, the rent is the fees paid to providers of edge-servers to enable high QoS in bandwidth-intensive applications to end users. These fees are high enough to elicit considerable investment in R&D and expensive equipment. Indeed, some commentators predict that edge-server mirroring technology will become the norm for the entire Internet someday. But that won’t happen while the technology for delivering it is exclusive, and its owners can charge a toll for its use — use which, unavoidably, places congestion-load demands on the entire Internet.

So what’s the remedy? The standard net-neutrality position today would be to outlaw practices that lead to time-privatization on the Net — but opponents scream, rightly, that this amounts to outlawing innovation.*

*For example, under the proposed Internet Freedom Preservation Act of 2009 (H.R. 3458), Internet providers shall “not provide or sell to any content, application, or service provider, including any affiliate provider or joint venture, any offering that prioritizes traffic over that of other such providers on an Internet access service…” See, for example, papers by Tim Wu and Chris Yoo.

What would a Georgist solution to this conundrum look like? Suppose we made a national decision to beef up everyone’s broadband access to the same level as those who now pay for the high speeds provided by Akamai’s equipment. That would put folks like Akamai out of business, probably — and the public might have to buy out their current interest. However, rent that big media companies are willing to pay for such access could be reinvested in increased capacity. Users willing to pay for extra-high speeds could still get them, but rather than incentivizing a self-limiting exclusivity, the profits gained therefrom could go toward making such services much more affordable – perhaps even free.

What would that mean, in the real world? It would mean nothing less than a realization of the idealistic early vision of the World Wide Web: that people with important things to say via the Internet would be free to say them as powerfully as they could — limited by nothing but their own imaginations.
