This paper is available in the Taxpayer Assets directory at cpsr.org. To subscribe to TAP information policy notes (about 1 to 3 posts per week), send subscription requests to email@example.com, with the message: subscribe tap-info your name
Address. Department of Economics, University of Michigan, Ann Arbor, MI 48109-1220. E-mail: Hal.Varian@umich.edu and firstname.lastname@example.org.
This paper will appear in the Journal of Economic Perspectives in the Fall of 1994. The most current version of this paper will always be available for anonymous ftp, gopher, or World Wide Web at gopher.econ.lsa.umich.edu. We wish to thank the National Science Foundation for financial support. The definitive version of the paper is available in PostScript; this is a rough ASCII translation provided for the convenience of those who do not have ready access to PostScript. Hal Varian maintains a WWW archive of materials relating to the economics of the Internet at http://gopher.econ.lsa.umich.edu.
from physics to scuba diving to how to contact the White House. They are produced and maintained by volunteers. This FAQ answers questions about the economics of the Internet (and towards the end offers some opinions and forecasts). The companion paper in this Symposium, Goffe (1994), describes Internet resources of interest to economists, including how to find other FAQs.
Some of the regional networks receive subsidies from the NSF; many receive subsidies from state governments. A large share of their funds is collected through connection fees charged to organizations that attach their local networks to the mid-levels. For example, a large university will typically pay $60,000-$100,000 per year to connect to a regional network.
Nowadays the commercial backbones and the NSFNET backbone interconnect so that traffic can flow from one to the other. Given the fact that both research and commercial traffic is now flowing on the same fiber, the NSF's Acceptable Use Policy has become pretty much a dead letter. The charges for these interconnections are currently relatively small lump-sum payments, but there has been considerable debate about whether usage-based settlement charges will have to be put in place in the future.
To give some sense of the scale of this subsidy, add to it the approximately $7 million per year that NSF pays to subsidize various regional networks, for a total of about $20 million. With current estimates that there are approximately 20 million Internet users (most of whom are connected to the NSFNET in one way or another), the NSF subsidy amounts to about $2 per person per year. Of course, this is significantly less than the total cost of the Internet; indeed, it does not even include all of the public funds, which come from state governments, state-supported universities, and other national governments as well. No one really knows how much all this adds up to, although there are some research projects underway to try to estimate the total U.S. expenditures on the Internet. It has been estimated--read guessed--that the NSF subsidy of $20 million per year is less than 10% of the total U.S. expenditure on the Internet.
The main advantage of packet-switching is that it permits statistical multiplexing on the communications lines. That is, the packets from many different sources can share a line, allowing for very efficient use of the fixed capacity. With current technology, packets are generally accepted onto the network on a first-come, first-served basis. If the network becomes overloaded, packets are delayed or discarded (dropped).
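The first-come, first-served behavior described above can be sketched as a toy simulation. This is purely illustrative; the per-tick capacity, buffer size, and arrival process are arbitrary assumptions, not parameters of any real router.

```python
import random

# Illustrative sketch of statistical multiplexing with first-come,
# first-served admission: many bursty sources share one fixed-capacity
# line, and packets arriving when the buffer is full are dropped.
# All constants are assumed for illustration.
random.seed(1)

CAPACITY_PER_TICK = 3   # packets the line can forward each tick (assumed)
BUFFER_SIZE = 5         # router buffer slots (assumed)

queue = []
forwarded = dropped = total_arrivals = 0
for tick in range(100):
    # Six independent sources each offer 0 or 1 packet this tick.
    arrivals = sum(random.randint(0, 1) for _ in range(6))
    total_arrivals += arrivals
    for _ in range(arrivals):
        if len(queue) < BUFFER_SIZE:
            queue.append(tick)   # accepted into the shared queue
        else:
            dropped += 1         # overload: packet is discarded
    # The line serves the queue first-come, first-served.
    served = min(CAPACITY_PER_TICK, len(queue))
    del queue[:served]
    forwarded += served

print(forwarded, dropped, len(queue))
```

Because average offered load here (about 3 packets per tick) matches the line's capacity, most packets get through, but bursts occasionally overflow the buffer; that is exactly the efficiency/congestion trade-off of sharing a line statistically.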
Along the way packets may be broken up into smaller packets, or reassembled into bigger ones. When the packets reach their final destination, they are reassembled at the host computer. The instructions for doing this reassembly are part of the TCP/IP protocol.
Some packet-switching networks are connection-oriented (notably, X.25 networks, such as Tymnet and frame-relay networks). In such a network a connection is set up before transmission begins, just as in a circuit-switched network. A fixed route is defined, and information necessary to match packets to their session and defined route is stored in memory tables in the routers. Thus, connectionless networks economize on router memory and connection set-up time, while connection-oriented networks economize on routing calculations (which have to be redone for every packet in a connectionless network).
The current T-3 45 Mbps lines can move data at a speed of 1,400 pages of text per second; a 20-volume encyclopedia can be sent coast to coast on the NSFNET backbone in half a minute. However, it is important to remember that this is the speed on the superhighway--the access roads via the regional networks usually use the much slower T-1 connections.
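The 1,400-pages-per-second figure can be verified with a back-of-envelope calculation, assuming roughly 4,000 characters (one byte each) per page of text; that per-page figure is our assumption, not a number from the text.

```python
# Back-of-envelope check of the T-3 throughput claim, assuming
# ~4,000 characters (bytes) per page of plain text (an assumed figure).
line_bps = 45_000_000          # T-3 line speed in bits per second
bytes_per_page = 4_000         # assumed characters per page

pages_per_second = line_bps / 8 / bytes_per_page
print(round(pages_per_second))   # roughly 1,400 pages per second
```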
The costs of both communications lines and computers have been declining exponentially for decades. However, since about 1970, switches (computers) have become relatively cheaper than lines. At that point packet switching became economic: lines are shared by multiple connections at the cost of many more routing calculations by the switches. This preference for using many relatively cheap routers to manage a few expensive lines is evident in the topology of the backbone networks. For example, in the NSFNET any packet coming on to the backbone has to pass through two routers at its entry point and again at its exit point. A packet entering at Cleveland and exiting at New York traverses four NSFNET routers but only one leased T-3 communications line.
However, given the high fixed costs of providing a network, the economic incentive to develop an integrated services network is strong. Furthermore, now that all information can be easily digitized, separate networks for separate types of traffic are no longer necessary. Convergence toward a unified, integrated services network is a basic feature in most visions of the much publicized information superhighway. The migration to integrated services networks will have important implications for market structure and competition.
The international telephone community has committed to a future network design that combines elements of both circuit and packet switching to enable the provision of integrated services. The CCITT (an international standards body for telecommunications) has adopted a cell switching technology called ATM (asynchronous transfer mode) for future high-speed networks. Cell switching closely resembles packet switching in that it breaks a data stream into packets which are then placed on lines that are shared by several streams. One major difference is that cells have a fixed size while packets can have different sizes. This makes it possible in principle to offer bounded delay guarantees (since a cell will not get stuck for a surprisingly long time behind an unusually large packet).
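The delay-bound point can be made concrete with a little arithmetic: on a 45 Mbps line, the longest a cell can wait behind one other transmission unit is the time to send that unit. ATM cells are a fixed 53 bytes; the 64 KB "large packet" below is an assumed worst case for illustration.

```python
# Worst-case wait behind ONE transmission unit on a 45 Mbps line:
# a fixed-size 53-byte ATM cell vs. a large variable-size packet
# (64 KB is an assumed worst case, for illustration only).
LINE_BPS = 45_000_000

cell_bits = 53 * 8             # ATM cell: fixed 53 bytes
big_packet_bits = 64_000 * 8   # assumed large packet

cell_wait_us = cell_bits / LINE_BPS * 1e6     # microseconds
packet_wait_ms = big_packet_bits / LINE_BPS * 1e3  # milliseconds

print(f"behind one cell:   {cell_wait_us:.1f} microseconds")
print(f"behind one packet: {packet_wait_ms:.1f} milliseconds")
```

The wait behind one cell is under ten microseconds, while the wait behind one large packet is over a thousand times longer, which is why fixed-size cells make bounded-delay guarantees feasible in principle.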
An ATM network also resembles a circuit-switched network in that it provides connection-oriented service. Each connection has a set-up phase, during which a virtual circuit is created. The fact that the circuit is virtual, not physical, provides two major advantages. First, it is not necessary to reserve network resources for a given connection; the economic efficiencies of statistical multiplexing can be realized. Second, once a virtual circuit path is established, switching time is minimized, which allows much higher network throughput. Initial ATM networks are already being operated at 155 Mbps, while the non-ATM Internet backbones operate at no more than 45 Mbps. The path to 1000 Mbps (gigabit) networks seems much clearer for ATM than for traditional packet switching.
Efforts to develop integrated services networks also have exploded. Several cable companies have already started offering Internet connections to their customers.4 ATT, MCI and all of the Baby Bell operating companies are involved in mergers and joint ventures with cable TV and other specialized network providers to deliver new integrated services such as video-on-demand. ATM-based networks, although initially developed for phone systems, ironically have been first implemented for data networks within corporations and by some regional and backbone providers.
Simple connection pricing still dominates the market, but a number of variants have emerged. The most notable is committed information rate pricing. In this scheme, an organization is charged a two-part fee. One fee is based on the bandwidth of the connection, which is the maximum feasible flow rate; the second fee is based on the maximum guaranteed flow to the customer. The network provider installs sufficient capacity to simultaneously transport the committed rate for all of its customers, and installs flow regulators on each connection. When some customers operate below that rate, the excess network capacity is available on a first-come, first-served basis for the other customers. This type of pricing is more common in private networks than in the Internet because a TCP/IP flow rate can be guaranteed only network by network, greatly limiting its value unless a large number of the 20,000 Internet networks coordinate on offering this type of guarantee.
Networks that offer committed information rate pricing generally have enough capacity to meet the entire guaranteed bandwidth. This is a bit like a bank holding 100% reserves, but it is necessary with existing technology since there is no commonly used way to prioritize packets.
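A flow regulator of the kind described above is commonly implemented as a token bucket; the sketch below is a hedged illustration of that idea, not any particular vendor's mechanism, and the class name, rates, and packet sizes are all assumed.

```python
class FlowRegulator:
    """Token-bucket sketch of a committed-information-rate regulator.

    Tokens accumulate at the committed rate up to a burst allowance.
    A packet covered by tokens is within the guarantee; anything
    beyond is 'excess' and competes first-come, first-served for
    whatever spare capacity exists.  (Illustrative only.)"""

    def __init__(self, cir_bps, burst_bits):
        self.cir_bps = cir_bps        # committed information rate
        self.burst_bits = burst_bits  # maximum token accumulation
        self.tokens = burst_bits

    def tick(self, seconds):
        # Replenish tokens at the committed rate, capped at the burst size.
        self.tokens = min(self.burst_bits,
                          self.tokens + self.cir_bps * seconds)

    def classify(self, packet_bits):
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return "committed"   # within the guaranteed flow
        return "excess"          # served only if spare capacity exists

# Assumed numbers: a 64 kbps committed rate with a 16 kbit burst allowance.
reg = FlowRegulator(cir_bps=64_000, burst_bits=16_000)
labels = [reg.classify(8_000) for _ in range(3)]  # three 1 KB packets
print(labels)   # ['committed', 'committed', 'excess']
```

The provider can then size its capacity to the sum of the committed rates, exactly the "100% reserves" posture described above.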
For most usage, the marginal packet placed on the Internet is priced at zero. At the outer fringes there are a few exceptions. For example, several private networks (such as Compuserve) provide email connections to the Internet. Several of these charge per message above a low threshold. The public networks in Chile and New Zealand charge their customers by the packet for all international traffic. We discuss some implications of this kind of pricing below.
Without an incentive to economize on usage, congestion can become quite serious. Indeed, the problem is more serious for data networks than for many other congestible resources because of the tremendously wide range of usage rates. On a highway, for example, at a given moment a single user is more or less limited to either putting zero or one cars on the road. In a data network, however, a single user at a modern workstation can send a few bytes of e-mail or put a load of hundreds of Mbps on the network. Within a year any undergraduate with a new Macintosh will be able to plug in a video camera and transmit live videos home to mom, demanding as much as 1 Mbps. Since the maximum throughput on current backbones is only 45 Mbps, it is clear that even a few users with relatively inexpensive equipment could bring the network to its knees.
Congestion problems are not just hypothetical. For example, congestion was quite severe in 1987 when the NSFNET backbone was running at much slower transmission speeds (1.5 Mbps). Users running interactive remote terminal sessions were experiencing unacceptable delays. As a temporary fix, the NSFNET programmed the routers to give terminal sessions (using the telnet program) higher priority than file transfers (using the ftp program). (See Goffe (1994) for a description of telnet and ftp.)
More recently, many services on the Internet have experienced severe congestion problems. Large ftp archives, Web servers at the National Center for Supercomputer Applications, the original Archie site at McGill University and many other services have had serious problems with overuse. See Markoff (1993) for more detailed descriptions.
If everyone just stuck to ASCII email, congestion would not likely become a problem for many years, if ever. However, the demand for multi-media services is growing dramatically. New services such as Mosaic and Internet Talk Radio are consuming ever-increasing amounts of bandwidth. The supply of bandwidth is increasing dramatically, but so is the demand. If usage remains unpriced, it is likely that there will be periods in the foreseeable future when the demand for bandwidth exceeds the supply.
What other mechanisms can be used to control congestion? The most obvious approach for economists is to charge some sort of usage price. However, to date, there has been almost no serious consideration of usage pricing for backbone services, and even tentative proposals for usage pricing have been met with strong opposition. We will discuss pricing below but first we examine some non-price mechanisms that have been proposed.
Many proposals rely on voluntary efforts to control congestion. Numerous participants in congestion discussions suggest that peer pressure and user ethics will be sufficient to control congestion costs. For example, recently a single user started broadcasting a 350-450Kbps audio-video test pattern to hosts around the world, blocking the network's ability to handle a scheduled audio broadcast from a Finnish university. A leading network engineer sent a strongly-worded e-mail message to the user's site administrator, and the offending workstation was disconnected from the network. However, this example also illustrates the problem with relying on peer pressure: the inefficient use was not terminated until after it had caused serious disruption. Further, it apparently was caused by a novice user who did not understand the impact of what he had done; as network access becomes ubiquitous there will be an ever-increasing number of unsophisticated users who have access to applications that can cause severe congestion if not properly used. And of course, peer pressure may be quite ineffective against malicious users who want to intentionally cause network congestion.
One recent proposal for voluntary control is closely related to the 1987 method used by the NSFNET (Bohn, Braun, Claffy, and Wolff (1993)). This proposal would require users to indicate the priority they want each of their sessions to receive, and for routers to be programmed to maintain multiple queues for each priority class. Obviously, the success of this scheme would depend on users' willingness to assign lower priorities to some of their traffic. In any case, as long as it is possible for just one or a few abusive users to create crippling congestion, voluntary priority schemes that are not robust to forgetfulness, ignorance, or malice may be largely ineffective.
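The router behavior this proposal requires--one queue per declared priority class, higher classes always served first--can be sketched in a few lines. The class labels and example packets below are illustrative assumptions.

```python
from collections import deque

# Sketch of a router under the voluntary-priority proposal: one queue per
# user-declared priority class, with higher-priority queues always served
# first.  Class numbering and packet names are assumed for illustration.
queues = {0: deque(), 1: deque(), 2: deque()}   # 0 = highest priority

def enqueue(packet, declared_priority):
    """Place a packet in the queue its sender voluntarily declared."""
    queues[declared_priority].append(packet)

def serve_one():
    """Forward one packet from the highest-priority nonempty queue."""
    for prio in sorted(queues):
        if queues[prio]:
            return queues[prio].popleft()
    return None   # all queues empty

enqueue("ftp-chunk", 2)          # bulk transfer, voluntarily low priority
enqueue("telnet-keystroke", 0)   # interactive session, high priority
enqueue("email", 1)

order = [serve_one() for _ in range(3)]
print(order)   # ['telnet-keystroke', 'email', 'ftp-chunk']
```

The sketch also makes the incentive problem visible: nothing in `enqueue` stops a sender from declaring every packet priority 0, which is exactly why the scheme depends on voluntary restraint.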
In fact, a number of voluntary mechanisms are in place today. They are somewhat helpful in part because most users are unaware of them, or because they require some programming expertise to defeat. For example, most implementations of the TCP protocols use a slow start algorithm which controls the rate of transmission based on the current state of delay in the network. Nothing prevents users from modifying their TCP implementation to send full throttle if they do not want to behave nicely.
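The flavor of the slow start idea can be captured in a deliberately simplified sketch: the sending window grows while the network looks healthy and collapses when congestion is detected. Real TCP implementations are considerably more elaborate (they add a congestion-avoidance phase with linear growth, among other things); the constants and the pure doubling rule below are simplifying assumptions.

```python
# Deliberately simplified sketch of TCP's slow-start idea: the congestion
# window (packets allowed in flight) doubles each round while no congestion
# is seen, and is halved when congestion is detected.  Real TCP adds a
# linear congestion-avoidance phase; this sketch omits it.

def slow_start(loss_rounds, total_rounds=10, start_window=1):
    window, history = start_window, []
    for rnd in range(total_rounds):
        history.append(window)
        if rnd in loss_rounds:
            window = max(1, window // 2)   # back off on detected congestion
        else:
            window *= 2                    # ramp up while all is well
    return history

print(slow_start(loss_rounds={4}))
# [1, 2, 4, 8, 16, 8, 16, 32, 64, 128]
```

A user who patched this logic to ignore `loss_rounds` would send full throttle regardless of network state, which is precisely the "nothing prevents users from modifying their TCP implementation" point made above.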
A completely different approach to reducing congestion is purely technological: overprovisioning. Overprovisioning means maintaining sufficient network capacity to support the peak demands without noticeable service degradation.5 This has been the most important mechanism used to date in the Internet. However, overprovisioning is costly, and with both very-high-bandwidth applications and near-universal access fast approaching, it may become too costly. In simple terms, will the cost of capacity decline faster than the growth in capacity demand?
Given the explosive growth in demand and the long lead time needed to introduce new network protocols, the Internet may face serious problems very soon if productivity increases do not keep up. Therefore, we believe it is time to seriously examine incentive-compatible allocation mechanisms, such as various forms of usage pricing.
However, different kinds of data place different demands on network services. E-mail and file transfers require 100% accuracy, but can easily tolerate delay. Real-time voice broadcasts require much higher bandwidth than file transfers, and can tolerate only minor delays, but they can tolerate significant distortion. Real-time video broadcasts have very low tolerance for delay and distortion.
Because of these different requirements, network routing algorithms will want to treat different types of traffic differently--giving higher priority to, say, real-time video than to e-mail or file transfer. But in
order to do this, the user must truthfully indicate what type of traffic he or she is sending. If real-time video bit streams get the highest quality service, why not claim that all of your bit streams are real-time video?
Cocchi, Estrin, Shenker, and Zhang (1992) point out that it is useful to look at network pricing as a mechanism design problem. The user can indicate the type of his transmission, and the workstation in turn reports this type to the network. In order to ensure truthful revelation of preferences, the reporting and billing mechanism must be incentive compatible. The field of mechanism design has been criticized for ignoring the bounded rationality of human subjects. However, in this context, the workstation is doing most of the computation, so quite complex mechanisms may be feasible.
Another accounting problem concerns the granularity of the records. Presumably accounting detail is most useful when it traces traffic to the user. Certainly if the purpose of accounting is to charge prices as incentives, those incentives will be most effective if they affect the person actually making the usage decisions. But the network is at best capable of reliably identifying the originating host computer (just as phone networks only identify the phone number that placed a call, not the caller). Another layer of expensive and complex authorization and accounting software will be required on the host computer in order to track which user accounts are responsible for which packets.6 Imagine, for instance, trying to account for student e-mail usage at a large public computer cluster.
Accounting is more practical and less costly the higher the level of aggregation. For example, the NSFNET already collects some information on usage by each of the subnetworks that connect to its backbone (although these data are based on a sample, not an exhaustive accounting for every packet). Whether accounting at lower levels of aggregation is worthwhile is a different question that depends importantly on cost-saving innovations in internetwork accounting methods.
In any case, voluntary schemes will require substantial overprovisioning to handle the burstiness of demand, and the wide range of bandwidths required by different applications. Excess capacity has been subsidized heavily--directly or indirectly--through public funding. While providing network services as a zero marginal price public good probably made sense during the research, development and deployment phases of the Internet, it is harder to rationalize as the network matures and becomes widely used by commercial interests. Why should data network usage be free even to universities, when telephone and postal usage are not?7
Indeed, the Congress required that the federally-developed gigabit network technology must accommodate usage accounting and pricing. Further, the NSF will no longer provide backbone services, leaving the general purpose public network to commercial and state agency providers. As the net increasingly becomes privatized, competitive forces may necessitate the use of more efficient allocation mechanisms. Thus, it appears that there are both public and private pressures for serious consideration of pricing. The trick is to design a pricing system that minimizes transactions costs.
Charging for connections is conceptually straightforward: a connection requires a line, a router, and some labor effort. The line and the router are reversible investments and thus are reasonably charged for on an annual lease basis (though many organizations buy their own routers). Indeed, this is essentially the current scheme for Internet connection fees.
Charging for incremental capacity requires usage information. Ideally, we need a measure of the organization's demand during the expected peak period of usage over some period, to determine its share of the incremental capacity requirement. In practice, it might seem that a reasonable approximation would be to charge a premium price for usage during predetermined peak periods (a positive price if the base usage price is zero), as is routinely done for electricity. However, casual evidence suggests that peak demand periods are much less predictable than for other utility services. One reason is that it is very easy to use the computer to schedule some activities for off-peak hours, leading to a shifting peaks problem.9
In addition, so much traffic traverses long distances around the globe that time zone differences are important. Network statistics reveal very irregular time-of-day usage patterns (MacKie-Mason and Varian (1994)).
The basic idea is simple. Much of the time the network is uncongested, and the price for usage should be zero. When the network is congested, packets are queued and delayed. The current queuing scheme is FIFO. We propose instead that packets should be prioritized based on the value that the user puts on getting the packet through quickly. To do this, each user assigns her packets a bid measuring her willingness-to-pay for immediate servicing. At congested routers, packets are prioritized based on willingness-to-pay. In order to make the scheme incentive compatible, users are charged not their own willingness-to-pay, however, but the packet price of the lowest priority packet that is admitted to the network. It is well-known that this mechanism provides the right incentives for truthful revelation.
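The proposed "smart market" can be sketched in a few lines: rank packets by bid, admit as many as capacity allows, and charge every admitted packet the bid of the lowest-priority packet that got in. The bids and capacity below are illustrative assumptions.

```python
# Sketch of the proposed smart market.  At a congested point, admit the
# highest-bidding packets up to capacity and charge each admitted packet
# the bid of the lowest-priority packet admitted (the market-clearing
# price), not its own bid.  Bids and capacity are assumed numbers.

def smart_market(bids, capacity):
    ranked = sorted(bids, reverse=True)
    admitted = ranked[:capacity]
    # If the network is uncongested (spare capacity), the price is zero.
    price = admitted[-1] if len(admitted) == capacity else 0.0
    return admitted, price

bids = [0.0, 0.05, 0.5, 0.1, 0.0, 0.25]   # willingness-to-pay per packet
admitted, price = smart_market(bids, capacity=3)
print(admitted, price)   # [0.5, 0.25, 0.1] 0.1 -- all pay the clearing bid
```

Because no user's payment depends on her own bid (only on the marginal admitted bid), overstating or understating willingness-to-pay can only change whether she is admitted, not the price she pays; and when capacity exceeds demand the price is zero, matching the uncongested case above.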
This scheme has a number of nice features. In particular, not only do those with the highest cost of delay get served first, but the prices also send the right signals for capacity expansion in a competitive market for network services. If all of the congestion revenues are reinvested in new capacity, then capacity will be expanded to the point where its marginal value is equal to its marginal cost.
A number of network specialists have suggested that many customers--particularly not-for-profit agencies and schools--will object because they do not know in advance how much network utilization will cost them. We believe that this argument is partially a red herring, since the user's bid always controls the maximum network usage costs. Indeed, since we expect that for most traffic the congestion price will be zero, it should be possible for most users to avoid ever paying a usage charge by simply setting all packet bids to zero.10 When the network is congested enough to have a positive congestion price, these users will pay the cost in units of delay rather than cash, as they do today.
We also expect that in a competitive market for network services, fluctuating congestion prices would usually be a wholesale phenomenon, and that intermediaries would repackage the services and offer them at a guaranteed price to end-users. Essentially this would create a futures market for network services.
There are also auction-theoretic problems that have to be solved. Our proposal specifies a single network entry point with auctioned access. In practice, networks have multiple gateways, each subject to differing states of congestion. Should a smart market be located in a single, central hub, with current prices continuously transmitted to the many gateways? Or should a set of simultaneous auctions operate at each gateway? How much coordination should there be between the separate auctions? All of these questions need not only theoretical models, but also empirical work to determine the optimal rate of market-clearing and inter-auction information sharing, given the costs and delays of real-time communication.
Another serious problem for almost any usage pricing scheme is how to correctly determine whether sender or receiver should be billed. With telephone calls it is clear that in most cases the originator of a call should pay. However, in a packet network, both sides originate their own packets, and in a connectionless network there is no mechanism for identifying party B's packets that were solicited as responses to a session initiated by party A. Consider a simple example: A major use of the Internet is for file retrieval from public archives. If the originator of each packet were charged for that packet's congestion cost, then the providers of free public goods (the file archives) would pay nearly all of the congestion charges induced by a user's file request.11 Either the public archive provider would need a billing mechanism to charge requesters for the (ex post) congestion charges, or the network would need to be engineered so that it could bill the correct party. In principle this problem can be solved by schemes like 800, 900 and collect phone calls, but the added complexity in a packetized network may make these schemes too costly.
The average cost of the Internet is so small today because the technology is so efficient: the packet-switching technology allows for very cost-effective use of existing lines and switches. If everyone only sent ASCII email, there would probably never be congestion problems on the Internet. However, new applications are creating huge demands for additional bandwidth. A video e-mail message could easily use 10^4 times as many bits as a plain text ASCII e-mail with the same information content, and providing this amount of incremental bandwidth could be quite expensive. Well-designed congestion prices would not charge everyone the average cost of this incremental bandwidth, but instead charge those users whose demands create the congestion and the need for additional capacity.
There are vast troves of high-quality information (and probably equally large troves of dreck) currently available on the Internet, all available as free goods. Historically, there has been a strong base of volunteerism to collect and maintain data, software and other information archives. However, as usage explodes, volunteer providers are learning that they need revenues to cover their costs. And of course, careful researchers may be skeptical about the quality of any information provided for free.
Charging for information resources is quite a difficult problem. A service like Compuserve charges customers by establishing a billing account. This requires that users obtain a password, and that the information provider implement a sophisticated accounting and billing infrastructure. However, one of the advantages of the Internet is that it is so
decentralized: information sources are located on thousands of different computers. It would simply be too costly for every information provider to set up an independent billing system and give out separate passwords to each of its registered users. Users could end up with dozens of different authentication mechanisms for different services.
A deeper problem for pricing information services is that our traditional pricing schemes are not appropriate. Most pricing is based on the measurement of replications: we pay for each copy of a book, each piece of furniture, and so forth. This usually works because the high cost of replication generally prevents us from avoiding payment. If you buy a table we like, we generally have to go to the manufacturer to buy one for ourselves; we can't simply copy yours. With information goods the pricing-by-replication scheme breaks down. This has been a major problem for the software industry: once the sunk costs of software development are invested, replication costs essentially nothing. The same is true for any form of information that can be transmitted over the network. Imagine, for example, that copy shops begin to make course packs available electronically. What is to stop a young entrepreneur from buying one copy and selling it at a lower price to everyone else in the class? This is a much greater problem than the one publishers face from unauthorized photocopying, since the cost of replication is essentially zero.
There is a small literature on the economics of copying that examines some of these issues. However, the same network connections that exacerbate the problems of pricing information goods may also help to solve some of them. For example, Cox (1992, 1993) describes the idea of superdistribution of information objects, in which accessing a piece of information automatically sends a payment to the provider via the network. However, several problems remain to be solved before such schemes can become widely used.
Bank debit cards and automatic teller cards work because they have reliable authentication procedures based on both a physical device and knowledge of a private code. Digital currency over the network is more difficult because it is not possible to install physical devices and protect them from tampering on every workstation.14 Therefore, authentication and authorization most likely will be based solely on the use of private codes. Another objective is anonymity, so that individual buying histories cannot be collected and sold to marketing agencies (or Senate confirmation committees).
A number of recent computer science papers have proposed protocols for digital cash, checks and credit, each of which has some desirable features, yet none of which has been widely implemented thus far. The seminal paper is Chaum (1985) which proposed an anonymous form of digital cash, but one which required a single central bank to electronically verify the authenticity of each coin when it was used. Medvinsky and Neuman (1993) propose a form of digital check that is not completely anonymous, but is much more workable for widespread commerce with multiple banks. Low, Maxemchuk, and Paul (1994) suggest a protocol for anonymous credit cards.
As a result, the trend seems to be toward removing of barriers against cross-ownership of local phone and cable TV companies. The regional Bell operating companies have filed a motion to remove the remaining restrictions of the Modified Final Judgement that created them (with the 1984 breakup of ATT). The White House, Congress, and the FCC are all developing new models of regulation, with a strong bias towards deregulation (for example, see the New York Times, 12 January 1994, p. 1).
Internet transport itself is currently unregulated. This is consistent with the principle that common carriers are natural monopolies and must be regulated, but the services provided over those common carriers are not. However, this principle has never been consistently applied to phone companies: the services provided over the phone lines are also regulated. Many public interest groups are now arguing for similar regulatory requirements for the Internet.
One issue is "universal access," the assurance of basic service for all citizens at a very low price. But what is "basic service"? Is it merely a data line, or a multimedia integrated services connection? And in an increasingly competitive market for communications services, where should the money to subsidize universal access be raised? High-value uses which traditionally could be charged premium prices by monopoly providers are increasingly subject to competition and bypass.
A related question is whether the government should provide some data network services as public goods. Some initiatives are already underway. For instance, the Clinton administration has required that all published government documents be available in electronic form. Another current debate concerns the appropriate access subsidy for primary and secondary teachers and students.
One interesting question is the interaction between pricing schemes and market structure. If competing backbones continue to offer only connection pricing, would an entrepreneur be able to skim off high-value users by charging usage prices, but offering more efficient congestion control? Alternatively, would a flat-rate connection price provider be able to undercut usage-price providers, by capturing a large share of low-value baseload customers who prefer to pay for congestion with delay rather than cash? The interaction between pricing and market structure may have important policy implications, because certain types of pricing may rely on compatibilities between competing networks that will enable efficient accounting and billing. Thus, compatibility regulation may be needed, similar to the interconnect rules imposed on regional Bell operating companies.
Scott Shenker and his colleagues have written several papers dealing with pricing problems and the use of mechanism design to address them (Cocchi, Estrin, Shenker, and Zhang, 1991, 1992; Shenker, 1993). Huberman (1988) is a book that discusses computer networks as market economies.
Partridge (1993) has written an excellent book for a general audience interested in network technology now and in the near future. For a detailed discussion of computer networking theory and technologies, see Tanenbaum (1989). The best detailed treatment of the emerging ATM technology is de Prycker (1991), but ATM is evolving so quickly that it is already somewhat dated, and something better may be available by the time this article is published.
Braun, H.-W., and Claffy, K. (1993). Network analysis in support of internet policy requirements. Tech. rep., San Diego Supercomputer Center.
Chaum, D. (1985). Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM, 28(10), 1030-1044.
Cocchi, R., Estrin, D., Shenker, S., and Zhang, L. (1991). A study of priority pricing in multiple service class networks. In Proceedings of Sigcomm '91. Available from: ftp://ftp.parc.xerox.com/pub/net-research/pricing-sc.ps.
Cocchi, R., Estrin, D., Shenker, S., and Zhang, L. (1992). Pricing in computer networks: Motivation, formulation, and example. Tech. rep., University of Southern California.
de Prycker, M. (1991). Asynchronous Transfer Mode: Solution for Broadband ISDN. Ellis Horwood, New York.
Goffe, W. (1994). Internet resources for economists. Tech. rep., University of Southern Mississippi. To appear in Journal of Economic Perspectives Symposium. Available at gopher:niord.shsu.edu.
Huberman, B. (1988). The Ecology of Computation. North-Holland, New York.
Low, S., Maxemchuk, N. F., and Paul, S. (1994). Anonymous credit cards. Tech. rep., AT&T Bell Laboratories, Murray Hill, NJ. Available at ftp://research.att.com/dist/anoncc/anoncc.ps.Z.
MacKie-Mason, J. K., and Varian, H. (1993). Some economics of the internet. Tech. rep., University of Michigan.
MacKie-Mason, J. K., and Varian, H. (1994). Pricing the internet. In Kahin, B., and Keller, J. (Eds.), Public Access to the Internet.
Markoff, J. (1993). Traffic jams already on the information highway. New York Times, November 3, A1.
Medvinsky, G., and Neuman, B. C. (1993). Netcash: A design for practical electronic currency on the Internet. In Proceedings of the First ACM Conference on Computer and Communications Security, New York. ACM Press. Available at: ftp://gopher.econ.lsa.umich.edu/pub/Archive/netcash.ps.Z.
Partridge, C. (1993). Gigabit Networking. Addison-Wesley, Reading, MA.
Shenker, S. (1993). Service models and pricing policies for an integrated services internet. Tech. rep., Palo Alto Research Center, Xerox Corporation.
Tanenbaum, A. S. (1989). Computer Networks. Prentice Hall, Englewood Cliffs, NJ.
1 Current NSFNET statistics are available by anonymous ftp from nic.merit.edu.
2 Transport of TCP/IP packets is considered to be a value-added service and as such is not regulated by the FCC or state public utility commissions.
3 Recall that a byte is one ASCII character.
4 Because the cable network is one-way, these connections use an asymmetric network connector that brings the input in through the TV cable at 10 Mbps, but sends the output out through a regular phone line at about 14.4 Kbps. This scheme may be popular since most users tend to download more information than they upload.
5 The effects of network congestion are usually negligible until usage is very close to capacity.
6 Statistical sampling could lower costs substantially, but its acceptability depends on the level at which usage is measured--e.g., user or organization--and on the statistical distribution of demand. For example, strong serial correlation can cause problems.
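The sampling idea in this footnote can be sketched in a few lines. The following is a hypothetical illustration only, not an operational metering scheme: it estimates a customer's total traffic by metering a random fraction of measurement intervals, and contrasts smooth demand with bursty, serially correlated demand, where a small sample can badly over- or under-shoot.

```python
import random

def estimate_usage(traffic, sample_rate, seed=0):
    """Meter only a random fraction of intervals, then scale up."""
    rng = random.Random(seed)
    sampled = [x for x in traffic if rng.random() < sample_rate]
    if not sampled:
        return 0.0
    # Scale the sample mean up to the full number of intervals.
    return sum(sampled) / len(sampled) * len(traffic)

# Smooth, roughly independent demand: sampling works well.
smooth = [100 + random.Random(i).randint(-10, 10) for i in range(1000)]

# Serially correlated demand: long quiet stretches punctuated by
# sustained bursts, so a small sample is much less reliable.
bursty = [1000 if (i // 50) % 10 == 0 else 1 for i in range(1000)]

for name, traffic in [("smooth", smooth), ("bursty", bursty)]:
    true_total = sum(traffic)
    est = estimate_usage(traffic, sample_rate=0.05)
    print(f"{name}: true={true_total}, estimate={est:.0f}, "
          f"error={100 * (est - true_total) / true_total:+.1f}%")
```

With a 5 percent sample the smooth series is estimated closely, while the bursty series can be far off depending on whether the sampled intervals happen to fall inside a burst.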
7 Many university employees routinely use email rather than the phone to communicate with friends and family at other Internet-connected sites. Likewise, a service is now offered that transmits faxes between cities over the Internet for free, paying only the local phone call charge to deliver them to the intended fax machine.
8 See MacKie-Mason and Varian (1993).
9 The single largest current use of network capacity is file transfer, much of which is distribution of files from central archives to distributed local archives. The timing for a large fraction of file transfer is likely to be flexible. Just as most fax machines allow faxes to be transmitted at off-peak times, large data files could easily be transferred at off-peak times--if users had appropriate incentives to adopt such practices.
10 Since most users are willing to tolerate some delay for email, file transfer and so forth, most traffic should be able to go through with acceptable delays at a zero congestion price, but time-critical traffic will typically pay a positive price.
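The mechanism described in this footnote can be illustrated with a toy single-link "spot market" for packets (a sketch under our own simplifications, not a deployed protocol): each packet carries a bid, the link admits the highest bidders up to capacity, and every admitted packet pays the highest rejected bid, so the price is zero whenever the link is uncongested.

```python
def spot_price_admission(bids, capacity):
    """Admit up to `capacity` packets, highest bids first.

    All admitted packets pay the highest *rejected* bid (zero if
    nothing is rejected): delay-tolerant traffic travels free off-peak,
    and only time-critical traffic pays when the link is congested.
    """
    ranked = sorted(bids, reverse=True)
    admitted = ranked[:capacity]
    price = ranked[capacity] if len(ranked) > capacity else 0.0
    return admitted, price

# Off-peak: capacity exceeds demand, so the congestion price is zero.
print(spot_price_admission([0.0, 0.1, 0.0], capacity=5))

# Peak: five bids compete for three slots; the three winners all pay
# the highest losing bid of 0.1.
print(spot_price_admission([0.0, 0.5, 0.2, 0.9, 0.1], capacity=3))
```

Because winners pay the highest losing bid rather than their own, users have no incentive to understate how much they value prompt delivery.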
11 Public file servers in Chile and New Zealand already face this problem: any packets they send in response to requests from foreign hosts are charged by the network. Network administrators in New Zealand are concerned that this blind charging scheme is stifling the production of information public goods. For now, those public archives that do exist have a sign-on notice pleading with international users to be considerate of the costs they are imposing on the archive providers.
12 If revenue from congestion fees exceeds the cost of the network, it would be profitable to expand the size of the network.
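This footnote's claim follows from a standard argument; here is a sketch, assuming delay per packet depends only on the utilization ratio x/K (traffic over capacity) and capacity costs c per unit:

```latex
% Planner chooses traffic x and capacity K; B(x) is gross benefit,
% d(x/K) is delay cost per packet, c is the unit cost of capacity.
\max_{x,K}\; B(x) - x\, d(x/K) - cK
% First-order condition in x: the optimal congestion fee per packet
% equals the marginal delay a packet imposes on others,
p = \frac{x}{K}\, d'\!\left(\frac{x}{K}\right)
% First-order condition in K:
\frac{x^2}{K^2}\, d'\!\left(\frac{x}{K}\right) = c
% Multiplying the K condition through by K:
p\,x = \frac{x^2}{K}\, d'\!\left(\frac{x}{K}\right) = cK
```

So at the optimal capacity, congestion-fee revenue exactly covers the cost of capacity; revenue in excess of cK signals that expansion would raise welfare.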
13 In our work on pricing for network transport (1994a, 1994b), we have found that some form of secure electronic currency is almost surely necessary if the transactions costs of accounting and billing are to be low enough to justify usage pricing.
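One building block for such a currency is the blind signature of Chaum (1985), cited above, under which a bank can sign a digital coin without seeing its serial number, making the coin unforgeable but untraceable. A toy RSA sketch follows; the tiny textbook key is for illustration only and is wildly insecure.

```python
# Toy Chaum-style blind signature with a tiny RSA key (n = 61 * 53).
n, e, d = 3233, 17, 2753   # modulus, public exponent, private exponent

def blind(m, r):
    """Customer hides coin serial number m behind random factor r."""
    return (m * pow(r, e, n)) % n

def sign(blinded):
    """Bank signs the blinded message without learning m."""
    return pow(blinded, d, n)

def unblind(signed, r):
    """Customer removes r, leaving the bank's signature on m itself."""
    return (signed * pow(r, -1, n)) % n

def verify(m, sig):
    return pow(sig, e, n) == m % n

m, r = 1234, 7                         # coin serial and blinding factor
sig = unblind(sign(blind(m, r)), r)    # bank never saw m, yet...
print(verify(m, sig))                  # ...its signature on m verifies: True
```

The algebra is simple: signing the blinded value yields m^d * r (mod n), so dividing out r leaves the bank's ordinary RSA signature m^d on a serial number the bank never observed.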
14 Traditional credit cards are unlikely to receive wide use over a data network, though there is some use currently. It is very easy to set up an untraceable computer account to fraudulently collect credit card numbers; fraudulent telephone mail order operations are more difficult to arrange.