Introduction
Since it was originally commissioned in the late 1960s, the Internet, a computer data network governed by the Transmission Control Protocol/Internet Protocol (TCP/IP), has undergone tremendous expansion. While the Internet has existed for several decades, its commercial expansion occurred in the 1990s, when it was transformed from an exclusive research facility into a network that currently has an estimated 80 million users in the U.S. alone (IntelliQuest, 1999). This growth in the user base has gone hand in hand with capital investment in network capacity. From 1986 to 1996, the capacity of a single backbone line increased 2,500-fold thanks to better technology, while the number of major backbones grew from 2 to more than 30 (Kristula, 1997). Greater capacity has allowed new types of media such as graphics, audio, and video to be transferred over the Internet, which has helped popularize the Internet among casual users.
The expansion of the Internet is primarily due to open standards and positive network externalities. Open standards prevent any single firm from changing the standard or charging licensing fees, and this stability and independence encourage firms to develop applications that use Internet technologies. Positive network externalities arise because additional users increase the interconnection possibilities for current users, and therefore increase the utility all users derive from the network. The original designers of the Internet created flexible standards that could be extended, and made sure that the network could grow. However, because the Internet was envisioned at the time as an academic, government-subsidized network, they did not attempt to address issues of scarcity and congestion through pricing. Instead, early Internet users were mostly researchers who upheld a code of behavior that discouraged any unnecessary usage of network resources. As the Internet grew and became more commercial, new users did not follow these unspoken rules, while the concept of "unnecessary" usage became increasingly difficult to define. Throughout the Internet's expansion, many experts have raised fears that the Internet could not handle all the extra traffic being generated, and that congestion would become unbearable (Gupta, Stahl and Whinston, 1995). While this has yet to happen, economic analysis shows that welfare losses can be significant even without gridlock, and the current pricing model, which does not address the "tragedy of the commons" problem, is to blame. In this paper I hope to show that the current pricing model will not be sufficient in the future, and to evaluate proposed pricing mechanisms in light of current industry organization and future trends.
The earliest Internet, known as ARPANET (Advanced Research Projects Agency Network), was set up by the U.S. Department of Defense in 1969, originally connecting four universities in the western United States and expanding to about thirty university sites by 1971 (Cerf, 1993). Research departments at major corporations wanted to participate in the new network, so corporations connected their internal networks to the Internet. Both universities and corporations covered all costs and gave their users access to the Internet free of charge, while the U.S. government subsidized the major backbone. As new applications made the Internet easier to use, its popularity as a way to share information spread among less technical users.
In the 1980s households began to acquire personal computers, and users began to look for ways to get the same access from home that they were accustomed to at work, especially for e-mail. Private companies known as Internet Service Providers (ISPs) set up equipment that could be dialed into by modem over the telephone line, and began to charge for home Internet access. Because the home market has grown so quickly, today more people in the United States access the Internet from home than from the workplace (Mediamark Research, 1998). Falling prices of personal computers are making Internet access affordable to more and more households. In addition, new low-cost devices such as WebTV, and advertising-supported ventures such as Free-PC, are further reducing the total cost of access. Because most households pay for their access, and pricing information for the home market is widely available, households' demand for Internet access can be directly observed. This paper is most interested in the properties and the future of household access to the Internet, because this appears to be the most rapidly growing and hotly contested segment.
Because of the speed of change in the industry, industry organization is still in flux. New entrants are competing with natural monopolies in telecommunications and cable, even as these monopolies undergo deregulation. Software vendors, content providers, and hardware resellers are forming and breaking alliances in an attempt to win market share. Future pricing will both depend on and help shape the organization and concentration of the ISP industry, making it central to the future of the Internet.
At a time when the Internet had only a small number of hosts and a single backbone, it was possible to set up simple pricing schemes that estimated the congestion externality. This method works quite well as long as the network structure is simple and property rights are well defined. New Zealand provides a very interesting study of how such methods can be used effectively in a simple network system (Brownlee, 1997). New Zealand had a simple settlement scheme, in which all users of a particular segment of the network (which were large universities) partitioned network capacity among themselves and then settled all accounts on a monthly or quarterly basis. Payment was based on the total amount of data that a particular host sent through the backbone during the billing period. The actual payment process was not automated, and if congestion turned out to be too high, prices could only be adjusted at the beginning of the next billing period. However, as the number of hosts has grown, New Zealand's Internet community is now looking for more efficient ways of pricing usage.
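As a rough illustration, the following sketch (in Python) computes such a volume-based settlement for a handful of sites. The site names, traffic volumes, and per-megabyte rate are hypothetical, and are not taken from the actual New Zealand tariffs.

# Sketch of a volume-based settlement of the kind described above.
# All figures are hypothetical illustrations, not actual New Zealand rates.

# Megabytes each site sent through the shared backbone during the billing period.
traffic_sent_mb = {
    "university_a": 12_000,
    "university_b": 8_500,
    "university_c": 4_300,
}

RATE_PER_MB = 0.05  # hypothetical flat charge per megabyte sent

def settle(traffic_mb, rate):
    """Return each site's bill for the period under simple volume pricing."""
    return {site: volume * rate for site, volume in traffic_mb.items()}

for site, amount in settle(traffic_sent_mb, RATE_PER_MB).items():
    print(f"{site}: {amount:,.2f}")

Because accounts are settled only once per billing period, a rate that proves too low to restrain congestion cannot be corrected until the next period, which is precisely the limitation noted above.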
The Internet infrastructure in the United States is much too complicated for the simple settlement scheme used in New Zealand. By 1998 the Internet had more than 20 million hosts, each capable of originating and receiving traffic. Most points are connected by several routes, and packets are often dropped or disappear into private networks. Heavy commercial use has taken away the rationale for public funding, and in 1995 the National Science Foundation announced that it would cease subsidizing NSFNet, the major Internet backbone. Private companies such as MCI, UUNet and AT&T had by this time created their own Internet backbones, so that today almost all traffic on the Internet is carried over private networks. However, private companies had to invest heavily to create large Internet backbones, and they need to recoup their costs. Since technology for monitoring and settling accounts was not developed ahead of time, uncertainty about billing has prevented the owners of Internet lines from raising much revenue. Any pricing scheme would have to be widely accepted to be useful, because the pricing mechanism has to be uniform across the network. However, no single company today has the market power to simply implement a pricing method and force others to adopt it. Unless providers agree on a single pricing mechanism, the only way for a firm to recover costs is to achieve a near-monopoly position and then impose its own pricing on the rest of the market. In the later part of this thesis, we will also look at the various methods of pricing available to ISPs to see what types of pricing packages exist, and whether we can expect such pricing structures to generate efficient outcomes. It has been expected for quite some time that firms within the ISP industry would consolidate, both to achieve greater economies of scale and to obtain market power. Several large mergers in the past year have raised concerns that the largest ISPs will be able to obtain enough market power to raise home access rates.
Before we look at issues of congestion and market structure, let us turn to the technologies that define the Internet, and see how these technologies both shape and are defined by the economics of this new medium.
Chapter I
Economics of Internet Technologies
1.1 Terminology of the Internet
The Internet's flexibility makes precise definitions difficult, but a definition should focus on the standards and design philosophies of the Internet rather than on particular uses. The Federal Networking Council came up with the following definition (FNC 1995):
"'Internet' refers to the global information system that -- (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein."
The Internet is governed by TCP/IP (Transmission Control Protocol/Internet Protocol) and a network infrastructure that proved both powerful and flexible during the Internet's early growth. Created in 1973, TCP/IP was designed from the start to be the common language of the different computer networks that connected to the Internet, and by 1983 every computer connected to the Internet had to use TCP/IP (Cerf, 1993). The flexibility of the protocols and infrastructure designed by the original ARPANET team allows multiple services (text, images, audio, and video) to travel over the same set of wires, creating new economies of scale. The underlying network simply carries data without regard to content or purpose, and this allows companies to create a rich array of products to be sent via this new medium.
Before we can begin a discussion of the economic properties of the Internet, it is important to become familiar with the vocabulary of the Internet and to understand the technical characteristics that are responsible for its economic properties. The Internet consists of many different types of computers and of data lines of various speeds. Some of the computers used on the Internet were designed decades apart, and therefore have very limited interoperability. TCP/IP runs on all of these computers and standardizes the format in which data is sent from one computer to another. The protocol makes sure that the information encoded in a particular message arrives at the receiving computer as intended. The unit of data on the Internet is the packet, which can vary in size but is commonly around 200 bytes (MacKie-Mason and Varian, 1994a). Each packet contains a header section and a data section. The data section simply contains the data being transferred, such as the text of an e-mail. The header contains all the information needed to get the message to its destination, such as the address of the sender, the address of the recipient, the size of the packet, and the order of the packet in the overall message. The Internet Protocol breaks all data to be sent into packets, puts the destination address on each, and sends them separately across the Internet. The packets are numbered in such a way that the receiving computer can put the whole message back together by arranging the packets in the proper order.
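To make this structure concrete, the sketch below (in Python) models a packet as a header, containing sender, recipient, size, and sequence number, plus a data section, and shows a message being broken into packets and reassembled by sequence number. The field names and the fixed 200-byte payload are simplifications for illustration, not the actual layout of an IP header.

from dataclasses import dataclass

PAYLOAD_SIZE = 200  # bytes of data per packet, the typical size cited above

@dataclass
class Packet:
    # Simplified header fields; real IP/TCP headers carry more information.
    sender: str      # address of the sending computer
    recipient: str   # address of the receiving computer
    size: int        # number of data bytes in this packet
    sequence: int    # position of this packet within the overall message
    data: bytes      # the data section itself

def split_into_packets(message, sender, recipient):
    """Break a message into fixed-size packets, numbering each one."""
    return [
        Packet(sender, recipient, len(message[i:i + PAYLOAD_SIZE]), seq,
               message[i:i + PAYLOAD_SIZE])
        for seq, i in enumerate(range(0, len(message), PAYLOAD_SIZE))
    ]

def reassemble(packets):
    """Put the message back together by arranging packets in proper order."""
    return b"".join(p.data for p in sorted(packets, key=lambda p: p.sequence))

message = b"An e-mail message long enough to span several packets. " * 20
packets = split_into_packets(message, "sender.example.edu", "recipient.example.edu")
assert reassemble(packets) == message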
The Internet is best represented as a non-directional graph of lines, the wires that carry information, and routers, the computers that route each packet from line to line. The property that this graph attempts to maintain is that there should always be at least one route between any two nodes on the graph. Thus, any computer that is added to the Internet should be able to communicate with every other computer on the Internet. The simplest representation of this is a singly connected graph, a network that has only one route between any two points. This is much like the original Internet in the United States, with the Internet backbone connecting many local networks. The failure of any link or node within this graph would destroy the connected property of the graph. For purposes of reliability, it is therefore desirable to put in additional links, but these links are costly. Designers therefore compare the cost of possible downtime to the cost of building additional lines when deciding whether a singly linked network is enough. As the cost of both lines and routers has declined, while the importance of staying connected has increased, extra links have been put in to ensure reliability. The modern Internet is much more connected, and a failure of a link, even on the backbone, normally allows the information to be routed another way.
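A minimal sketch of this representation, assuming a handful of hypothetical nodes: the network is stored as a non-directional graph, and a breadth-first search checks that at least one route exists between every pair of nodes.

from collections import deque

# The network as a non-directional graph: each node maps to the set of nodes
# it is directly linked to. The node names are hypothetical.
network = {
    "local_net_1": {"backbone_a"},
    "local_net_2": {"backbone_a"},
    "backbone_a":  {"local_net_1", "local_net_2", "backbone_b"},
    "backbone_b":  {"backbone_a", "local_net_3"},
    "local_net_3": {"backbone_b"},
}

def is_connected(graph):
    """True if there is at least one route between every pair of nodes."""
    start = next(iter(graph))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return len(seen) == len(graph)

print(is_connected(network))  # True for this singly connected example

In this singly connected example, removing any one link or intermediate node leaves some pair of nodes without a route, which is exactly the fragility that additional links are meant to remove.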
For example, figure 1.1 represents the topology of the Internet within the Five Colleges. This figure was obtained by using a program that traces packets as they travel to different destinations and shows all intermediate routers. The University of Massachusetts serves as the center of a star network that connects the five colleges. Amherst College connects to the Internet backbone in Boston through the University of Massachusetts routers and lines. There are four routers between a user at Amherst College and the major backbone entrance in Boston. The Boston router is a major hub that connects to several regional and national backbones. Once packets arrive at the Boston hub, there are multiple ways for them to travel to their destination, so congestion or failure of any one link can be remedied. However, the failure of any of the routers between Amherst College and Boston would completely disconnect Amherst College from the Internet. The designers of this network clearly felt that the danger of lost connectivity due to the failure of one of the intermediate nodes did not justify the expense of setting up an alternative link between Amherst College and the major Boston hub.
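The same graph representation makes the single point of failure easy to see. The sketch below uses a hypothetical chain of routers standing in for the path in figure 1.1 (the names are not the actual Five College equipment) and checks whether the campus can still reach the Boston hub once one intermediate router fails.

from collections import deque

# Hypothetical stand-in for the chain in figure 1.1: the campus reaches the
# Boston hub only through a sequence of intermediate routers.
chain = {
    "amherst_campus": {"router_1"},
    "router_1":       {"amherst_campus", "router_2"},
    "router_2":       {"router_1", "boston_hub"},
    "boston_hub":     {"router_2"},
}

def reachable(graph, start, goal, failed=frozenset()):
    """True if a route from start to goal exists after the failed nodes are removed."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for neighbor in graph.get(node, ()):
            if neighbor not in seen and neighbor not in failed:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

print(reachable(chain, "amherst_campus", "boston_hub"))                       # True
print(reachable(chain, "amherst_campus", "boston_hub", failed={"router_2"}))  # False: one failure cuts the campus off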
Each line and each router has a certain capacity, the maximum amount of information it can transmit per unit of time. This makes it possible to estimate the maximum amount of information any particular link can carry. Without direct connections between most points, information travels over lines and is directed from node to node in the graph by routers, powerful computers that estimate the fastest path to the destination of each message. The capacity of the overall connection is always equal to the capacity of the smallest-capacity link that the information has to cross.
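This bottleneck property amounts to a one-line computation: the end-to-end capacity of a route is the minimum of the capacities of the links it crosses. The link capacities below are hypothetical.

# Hypothetical capacities (in Mbps) of the links a message crosses, in order.
link_capacities_mbps = [10.0, 45.0, 155.0, 1.5]

# The overall connection can carry no more than its narrowest link.
print(min(link_capacities_mbps))  # 1.5 Mbps limits the whole route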
Digital communication consists of sending zeroes and ones over a wire at a certain rate, known as the frequency. Each individual signal travels near the speed of light, but if the signals are sent too close together, it becomes difficult to distinguish between them. Rather than speed, lines are measured by capacity or throughput, the amount of data a line can carry per second. Faster, higher-capacity lines are able to shorten the time between signals, and thus send more information across the wire. The throughput of a wire is measured in bits per second (bps) and, as speeds have increased, in kilobits per second and megabits per second. A standard telephone line has small capacity, so even with the various compression methods of new modems it can only transmit approximately 56 Kbps (kilobits per second, 1 Kbps = 10³ bps). T1 lines are designed to carry more information, and have a throughput of about 1.5 Mbps (1 Mbps = 10⁶ bits per second) (MacKie-Mason and Varian, 1994a). New fiber optic technology allows for lines with much higher capacities, because multiple signals can be sent simultaneously over the same wire at different frequencies of light. The idea of sending multiple signals simultaneously has inspired the term bandwidth, which is a synonym for capacity. Because the precision with which light is broken down into multiple frequencies can be increased with more sophisticated equipment, the capacity of fiber optic lines can be expanded. Early fiber optic lines had a capacity of 45 Mbps, but this has been increased to 155 Mbps, and the same fiber optic lines are currently being upgraded to 900 Mbps of capacity. Fiber optic lines used for Internet data transport are known as T3 lines. The first T3 line was only completed in 1991, yet today much higher-capacity lines are used on regional as well as national backbones.
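A back-of-the-envelope calculation makes these throughput figures concrete: the sketch below estimates how long a one-megabyte file would take to transfer at the line speeds mentioned above, ignoring packet headers, congestion, and other overhead.

# Rough transfer times for a one-megabyte file at the line speeds discussed above.
FILE_SIZE_BITS = 1_000_000 * 8  # one megabyte expressed in bits

line_speeds_bps = {
    "56 Kbps modem":       56_000,
    "T1 (about 1.5 Mbps)": 1_500_000,
    "45 Mbps fiber line":  45_000_000,
}

for line, bps in line_speeds_bps.items():
    print(f"{line}: {FILE_SIZE_BITS / bps:.1f} seconds")

# Output:
# 56 Kbps modem: 142.9 seconds
# T1 (about 1.5 Mbps): 5.3 seconds
# 45 Mbps fiber line: 0.2 seconds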
Corporations generally developed internal networks long before they connected to the Internet. Many companies used network structures based on proprietary standards that had either been bought or developed internally, and did not use TCP/IP. To connect their employees to the Internet, these companies set up computers known as gateways, which act as translators between Internet servers outside the company and local users. Because internal networks are usually high-capacity, once a fast connection to the nearest major backbone is established, corporate users have a fast connection to the Internet. Because of the advantages of having a network based on open standards, many companies have in recent years changed their internal networks to use TCP/IP protocols.