October 13, 1995
The present debate concerning the National Information Infrastructure (NII)[1] has focused primarily on the introduction of competitive markets for the supply and distribution of information. Although competition will be an important component of the NII, and one which we welcome, we argue that it is inappropriate to frame the debate entirely in terms of competition. Competition can be seen as a consequence of a fundamental development driving technical and economic innovation within the information industries; namely, the adoption of the digital paradigm.
The digital representation and transmission of information enables competition within the infrastructure. For example, competition within the US long distance telephone network is facilitated by software-controlled digital switches and cross-connect facilities. Competition has, in turn, accelerated the development of digital technology, thereby powering the present technology curve, the "perpetual engine" of the information age. However, digitization offers opportunities for innovation that go beyond competition. We hypothesize that the second, and more sweeping, wave of the digital revolution will emanate from the virtualization of the infrastructure. By a Virtual Infrastructure (VI), we mean the adoption of a software perspective that embraces a much more dynamic and heterogeneous approach to the representation of information, the configuration of hardware and software systems that process it, and the binding of resources to its processing and distribution.
In this paper, we offer a vision of what it would mean for the National Information Infrastructure to be a Virtual Infrastructure that takes full advantage of the digital paradigm. We begin by examining various distribution infrastructures - including traditional infrastructures for utilities and packaged goods as well as information. We point out differences among the commodities that these infrastructures are distributing, as well as differences among the infrastructures themselves.
We present a taxonomy of distribution models that could emerge for the National Information Infrastructure and describe the model we believe is the natural outcome. This model is substantially different from the "convergence model" that has been popularized by many NII pundits. In our model, the infrastructure is not only competitive - it is also generic and decoupled. By generic, we mean that the distribution infrastructure can carry many types of information in much the same way as the roads can be used to deliver many types of packaged goods. By decoupled, we mean that the distribution of information is not vertically bundled; there are open markets in which information suppliers and information appliance vendors offer their wares independent of the distribution systems.
We then focus on the fundamental properties of digitized information and describe how they favor the proposed model. These properties include the ability to represent all digital information symbolically, and in a fundamental unit (the bit); the ease and declining expense of switching and converting the information being distributed; the ability to distribute information over a general-purpose, medium-independent network; and the software-based ability to defer the binding of resources until they are needed. Taken together, these properties enable a much more flexible, dynamic, and heterogeneous infrastructure than is currently being realized either in practice, in industry proposals [3, 4], or in proposed legislation [2, 5-9].
We explore several technical issues associated with the emergence of a virtual infrastructure, including:
We conclude with a discussion of the policy implications of this work. We are particularly concerned with the identification of policies that foster innovation by reducing barriers to the insertion of new technology. Topics addressed include decoupling the regulation of information services from the regulation of information distribution, dealing with monopolies and vertical integration, and the publication of interface specifications.
In presenting our vision of a Virtual Infrastructure, we are spanning the gulf between the computer science and technology policy research communities. We have tried to use (or define) terminology and a writing style that will be understood by both communities.
The traditional utilities - electricity, gas, and water - are vertically coupled, as illustrated by the water example in Figure 1. For each utility, there is a fixed supply / distribution / consumption chain, and each industry has its own dedicated distribution channel for which there is no competition. Because each of these industries is vertically coupled, the brokerage function is not needed.

Figure 1. Traditional Infrastructures
The specialized nature of these distribution channels makes sense given the different physical properties of the commodities. It is difficult to imagine electricity flowing over the gas lines, or gas flowing through the water mains. In the case of water, it is partly sheer volume that makes the dedicated distribution channel preferable. It is possible to deliver water over a more generic distribution channel (e.g., by truck), but the amount of water needed would make this alternative unwieldy. Similarly, consumer appliances, such as toilets and toasters, are tailored to the physical properties of the commodity.
The key attribute of this network, which supports the generic delivery of goods, is that it is relatively inexpensive to repackage goods of a wide variety of shapes and sizes as they are distributed. Goods may be packaged, unpacked, and repackaged a number of times as they proceed from the supplier to the consumer, enabling them to be multiplexed, switched and transported over the varying media (sea, air, rail, road, etc.). All of these goods share the cost of the distribution channel.
The resultant industry structure reflects this segregation; the information industries are, largely speaking, vertically coupled, as illustrated in Figure 2. For example, each television station has its own transmission facilities and its own portion of the spectrum - its channel. The same is true for radio. Similarly, the telephone company integrates telephone switching with the local distribution of telephone wires and the cable company bundles its transmission network with program packages. Within the home or business, we also see the separation of the appliances that are used with each distribution channel - radio, television, telephone, etc. Each channel has its own discrete types of appliances that are, largely speaking, not interoperable with other channels.[2]

Figure 2. Traditional Information Infrastructure
Within each of the vertically structured information infrastructures, it is difficult to engender a climate of rapid innovation - the tradition of homogeneity makes it difficult to introduce a second or third technology for the distribution of a given type of information or to introduce a new type of information to an existing distribution substrate.
Although the convergence model is generic, the vertical coupling between suppliers and a few distributors is not necessary, nor is it an innovative means of leveraging digital technology. It is not necessary because there is no physical characteristic that places a small limit on the number of feasible distribution channels; digital channels need not consume large amounts of physical space (as roads or water mains do). There is "room" for many distribution channels, both wired and wireless, to enter a customer's premises.

Figure 3. Convergence-Driven Infrastructure
The convergence model is not hospitable to innovation because it artificially limits the degree and dynamics of competition and choice. The supply chains will be relatively fixed, with consumers subscribing to bundles of information services on a month-to-month basis. Although competition may exist at the national and metropolitan level, individual neighborhoods and houses may not have access to multiple alliances - at least not on a dynamic (e.g., minute-to-minute) basis. Furthermore, the mechanisms and transaction costs associated with interoperation across alliance boundaries are likely to discourage competition and choice on a per transaction basis. The convergence model of information distribution may well be generic and competitive, but because it is not decoupled, it is not innovative.
Decoupled distribution channels must be safe. There must be no danger that the means of distribution will be captured by one competitor and the others will be starved out or acquired. A safe channel must also have ample capacity. If there is a danger that a company will not be able to move enough goods fast enough over the channel, it will not be considered safe. This means that there must be considerable competition within the distribution market. Ideally, the distribution channels will not just be shared among a single group of competing suppliers (sharing with your competitors is far from safe). The channels will also be shared with other groups of suppliers, who compete for the supply of different types of goods or services. Therefore, decoupled channels must also be generic.
The package transport infrastructure satisfies these criteria for a safe, decoupled distribution channel (even though parts of it are not competitive). It is used by many competitors in many areas; it is also used by the government and private sector. There is no danger of a single supplier taking over the system and starving anyone out. The system also has enough capacity for most users' purposes. Even if demand for a supplier's goods quadruples unexpectedly, there is likely to be a way for those goods to be distributed.

Figure 4. Virtual Infrastructure

Table 1. CGD Taxonomy
~C~G~D. This is the utility state. It is monopolistic, specialized, and vertically bundled. Examples are the cable industry and traditional utilities, including the phone system before it was broken up. Complete vertical integration of distribution and service is so commonplace that it is often taken for granted as a necessity of building and operating a distributed service.
C~G~D. We refer to this as the analog state. Examples of competitive, specialized, bundled infrastructures are radio, broadcast television, and the pre-AT&T competitive telephony industry. Due to economies of scale, this infrastructure may lead to a natural monopoly (~C~G~D) unless the start-up costs are low, as is the case with the airwaves (where little physical infrastructure need be built).
~C~GD. We refer to this as the MFJ state.[3] It is a monopolistic, specialized, decoupled infrastructure in which competing same-service providers share a monopoly distribution network. The quintessential example is the regulated "equal access" to local telephone networks that enables competition among the suppliers of long-distance telephony services. Until recently, decoupling has been enforced by barring the distributors from competing with the suppliers. The interface between the monopoly distributor and the competing suppliers is critical in making this model work.
C~GD. We have not identified any good examples of competitive, specialized, decoupled infrastructures. We believe that this state is not stable because a decoupled infrastructure based on specialized distribution does not meet the safety criteria discussed in Section 1.5.[4]
CG~D. This is the convergence state - a competitive, generic, vertically bundled infrastructure. It is emerging as the digitization of information and its distribution channels makes the channels interchangeable. Competition can be viewed as a side effect of a process in which a multiplicity of previously segregated information utilities each deploys a digital distribution substrate. It is also enabled by software, which provides the ability to support network interoperation and features essential to competition, such as telephone number portability. An example of a convergence model currently being planned is an alliance between a cable company and a telephone company that would offer "full service networks" in competition with other such alliances.
~CG~D. This is the de facto monopoly state. It is a monopolistic, generic, integrated infrastructure that could well be the interim result of present convergence efforts (see Section 2.2 and Figure 5). A historic example is the railroad "robber barons," who had a de facto, unregulated monopoly.
~CGD. This is the "born again" regulated monopoly. In this state, competing suppliers share a generic monopoly distribution network that is held in check and decoupled from the suppliers by regulatory forces. This model also applies to the relationship between the local roads system and the higher levels of the packaged goods infrastructure - there is only one local road leading to a given address (monopoly), but many types of goods can travel over that road as long as they are packaged, and many different suppliers deliver goods over the same local road.
CGD. This is the competitive, generic, decoupled state that we believe would naturally emerge in the absence of the historic artifacts that presently surround the information industries. In this state there are many overlapping distribution channels. Each channel supports the distribution of a wide range of services and provides access to competitive suppliers of each type of service. An example is the package transport system as a whole (rail, air, etc.).
Figure 5 depicts the transition process we are presently embarked on, starting from the three states that were stable circa 1990:
We believe that the path leading through convergence and monopoly can be shortened or avoided altogether. Innovation and competition are mutually supportive and the continuous insertion of innovative technology can be used to offset the economies of scale that would otherwise lead to natural monopoly. In a regulated monopoly, carefully crafted interfaces attempt to simulate competition [11]. Instead of trying to simulate competition, we should favor the real thing: a competitive, generic, decoupled infrastructure that supports a high degree of innovation and competition.

Figure 5. Transition Scenarios
Symbolic Representation and Conversion. Information is not a physical commodity. It can be represented symbolically; for example, by sequences of 0s and 1s. Information can be represented in many forms and distributed over many different media, including non-physical media such as the radio spectrum.
There is clearly room for many digital distribution channels - the free space spectrum is already shared; and fibers, unlike roads, occupy so little volume that a single conduit has room for many of them. Utility distribution networks, such as water mains or local roads, are fundamentally different: for these, space limitations and the physical inconvenience of digging up the streets dictate local distribution monopolies.
An important aspect of symbolic representation is that it is possible to convert between different representations. Coding, such as for error detection, can be done to suit the channel. Translations to address appliance incompatibility are also possible; for example, information originating from US telephones is routinely translated into a different format suitable for European telephones.
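As a sketch of coding to suit the channel, the toy scheme below attaches an even-parity bit to each byte; the parity code is purely illustrative, standing in for whatever error-detection coding a real channel would employ:

```python
def with_parity(data):
    """Attach an even-parity bit to each byte - a toy stand-in for
    channel-specific error-detection coding."""
    return [(b, bin(b).count("1") % 2) for b in data]

def check_parity(coded):
    """Verify that each byte still matches its parity bit."""
    return all(bin(b).count("1") % 2 == p for b, p in coded)

coded = with_parity(b"NII")
assert check_parity(coded)

# A single bit flipped in transit is detected by the coding.
b, p = coded[0]
coded[0] = (b ^ 0b00000100, p)
assert not check_parity(coded)
```

Because the information is symbolic, the same bytes could instead be recoded for a noisier or quieter channel without touching the information they represent.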
Encapsulation and Fragmentation. Sequences of bits (e.g., packets whose start and end can be identified) are the digital equivalent of packages. An especially powerful conversion operation involves the encapsulation of information within packets. Since the packets are opaque to the distribution channels, they can be used to transfer any information that is represented symbolically. As is the case with package transport, digital packets can be arbitrarily nested and repackaged, allowing flexible switching and multiplexing. Unlike its physical counterpart, the cost of this digital "packaging", measured in processor cycles and storage, is declining steeply.
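A minimal sketch of nesting packets, assuming a hypothetical format of a four-byte header followed by a length field; real protocols differ, but the principle that each layer treats the payload as opaque bytes is the same:

```python
def encapsulate(payload, header):
    """Wrap an opaque payload in a header plus a 4-byte length field -
    the digital analogue of putting goods in a labeled package."""
    return header + len(payload).to_bytes(4, "big") + payload

def decapsulate(packet, header_len):
    """Strip one layer of packaging and return the payload untouched."""
    length = int.from_bytes(packet[header_len:header_len + 4], "big")
    return packet[header_len + 4:header_len + 4 + length]

# Packets nest arbitrarily, like packages inside shipping crates.
inner = encapsulate(b"any symbolically represented information", b"APP1")
outer = encapsulate(inner, b"LINK")
assert decapsulate(decapsulate(outer, 4), 4) == b"any symbolically represented information"
```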
Unlike physical goods, an item of information represented by a lengthy sequence of bits can easily and cheaply be fragmented into many smaller sequences that can be re-arranged during distribution and eventually put back together to recover the original form. (This could in theory be done with physical goods, too, but not without great expense. Much of the value of a forty story building, for example, is in its assembly. Furthermore, it is not possible to fragment and reassemble some items at all, such as people.)
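Fragmentation and reassembly can be sketched in a few lines; the simple sequence-numbering scheme here is illustrative:

```python
import random

def fragment(data, size):
    """Split a long byte sequence into numbered fragments."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(fragments):
    """Recover the original form regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(fragments))

original = b"a lengthy sequence of bits standing in for an item of information"
frags = fragment(original, 7)
random.shuffle(frags)          # fragments re-arranged during distribution
assert reassemble(frags) == original
```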
Temporal Decoupling. There need not be any fixed timing associated with the bits representing an item of information. The individual bits can be transmitted at any speed; for example, they can be stretched or compressed, and the time between bits can be varied. If the timing of the bits within a sampled signal is important, that timing can be represented digitally and encoded within the digital sequence. This property supports the decoupling of supply, distribution, and consumption by allowing different components in the chain to operate at different speeds. For example, a sequence of bits may be squirted at high speed into a distribution channel, delivered at low speed to the consumer's premises, and played out by the appliance at a rate that is suited to the user. In conjunction with conversion, temporal decoupling also facilitates generic distribution by allowing different types of information, sent at different rates, to be transmitted over the same distribution substrate.
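The rate-matching role of digital memory can be illustrated with a small simulation, in which a supplier feeds a buffer in high-speed bursts while the appliance drains it at its own pace (the rates are chosen arbitrarily):

```python
from collections import deque

bits = [1, 0, 1, 1, 0, 0, 1, 0]
SUPPLY_RATE, PLAYOUT_RATE = 4, 1   # bits moved per simulated tick

buffer = deque()                   # digital memory decoupling the two rates
supplied, consumed = [], []

while len(consumed) < len(bits):
    burst = bits[len(supplied):len(supplied) + SUPPLY_RATE]
    supplied.extend(burst)         # squirted into the channel at high speed
    buffer.extend(burst)
    for _ in range(min(PLAYOUT_RATE, len(buffer))):
        consumed.append(buffer.popleft())   # played out at the user's rate

assert consumed == bits   # same information, different timing
```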

Table 2. Digital Properties
Dynamic Resource Configuration and Allocation. Information processing and transmission resources can be dynamically configured and allocated. Dynamic configuration allows resources to be tailored to the requirements of a particular transaction. Programmable resources, such as general purpose processors, are examples of configurable resources.
These properties, especially when exploited by software, enable virtual supply chains that have the appearance of always being available, yet only consume resources when they are used. These properties are related to the technology curve in two ways: the decline in the transaction overhead associated with dynamic configuration is tied to the declining costs of processors and memory; and with increasing processing power comes the ability to execute software that configures and allocates distribution resources at a faster rate and in more complex ways.
Medium independence and generic distribution. Information can be represented in many forms and distributed over many different types of distribution channels. Encapsulation, fragmentation, and temporal decoupling allow the channels to be generic; that is, they can carry a wide range of services rather than be tailored to a single service. From an architectural perspective, digital distribution channels (as opposed to the bits they deliver) are completely interchangeable; any type of digital information can flow over any type of channel.[5]
Composition. The many types of distribution channels can be transparently composed into complex networks through concatenation and hierarchy. Concatenation leverages the conversion and temporal decoupling properties. In conjunction with digital memory, it is possible to concatenate networks running at different speeds. Suppliers and consumers have the impression of a ubiquitous, seamless, and continually accessible distribution network. Although concatenation and hierarchical embedding are not unique to digital environments, their use is greatly assisted by conversion, temporal decoupling, and the software-based ability to dynamically configure and allocate resources.
Appliance independence. Subject to the availability of suitable adapters (converters), the same information appliances can be used with multiple distribution channels and information suppliers. This is a consequence of the conversion property. There will also be many different types of appliances, some generic and some specialized. However, the latter will be tailored in accordance with the users' functional requirements rather than those of the supplier or distributor. This property of digital information distinguishes it from the utilities: a gas stove cannot be used with the electricity network.
Another driver, which is also tied to the processing and memory technology curves, is the ability to dynamically allocate and configure resources. Systems are increasingly built of virtual resources, which are emulated through software that dynamically allocates and configures the underlying assets. As infrastructure software becomes more pervasive and is implemented on higher performance platforms, the allocation and configuration of resources become more dynamic; for example, the number and frequency of allocation and configuration decisions increase.[6] Furthermore, once a resource has been virtualized, the process through which it becomes more dynamic is institutionalized - the frequency of decisions scales with the underlying technology.
It is easy to conclude that there are a relatively small number of high volume information suppliers providing services to an immense number of comparatively low volume consumers. However, this is not true today and will be even less so in the future. Organizations such as cable programmers consume large volumes of information from many sources and supply it to large numbers of clients. There are many independent suppliers of information, including authors, freelancers, and editors, who work from small business premises and homes.
The appropriate distinction is not between suppliers and consumers, but between high and low volume users of the infrastructure. Large (high volume) users will have sustained requirements for high capacity appliances and channels. Although small (low volume) users, on average, will have more modest needs, they will still have instantaneous requirements to supply and consume information at high rates. Thus the distinction between large and small has more to do with average capacity than instantaneous usage.

Figure 6. VI Building Blocks / Components
Brokerage plays an important role in a competitive interoperable infrastructure. Components at all levels of the infrastructure cooperate in the dynamic composition of supply chains, and brokerage functions are the glue that facilitates the interconnection of the components. These functions provide market and configuration services. In a market environment they support dynamic choice and arbitrage. Brokerage also permits the aggregation of small users into larger pools, providing them with access to information and distribution services that are purchased in bulk, divided into smaller units, and resold. Taken together, these services provide the liquidity that enables competition and lowers the barriers to entry and innovation.
Brokerage configuration services control and configure the points of conversion at which information is transformed from the service-specific form in which it is produced/consumed to formats that are compatible with the chosen distribution channels. These configuration services are essential to the dynamic creation of VI supply chains.
Brokerage functions may be provided by people, implemented in software, or achieved through some combination of both. The functions may be fully embedded within, and controlled by, user software or they may be controlled by independent third parties. Although the brokerage functions exercise control over VI components, they can do so remotely; that is, they need not be collocated with them nor does the data flowing between supplier and consumer need to pass through the brokerage elements.
In the VI, two types of information appliances are likely to emerge. Some appliances will be generic; they will be used for a wide range of information services. These are similar to today's personal computers, laptops, and workstations. Other appliances will continue to be specialized for the manipulation of a specific type of information, such as music. The bulk of these virtual appliances, especially the generic ones, will be role independent (i.e., they will be able to take on both the supplier and consumer roles). However, specialization and role dependence will make sense in cases where the appliances can be produced for an extremely low price and distributed about the home and workplace in large numbers, in much the same way as clocks, radios, and CD players are today.
The truly exciting breakthrough, waiting to happen, is the digitization of the interfaces to these specialized appliances. Over the past decade the internal workings of these appliances have become increasingly digital and software-based. However, the interconnection of appliances is largely analog. As the digitization process continues, we can expect to see the emergence of interfaces that support the digital interconnection of appliances over local area networks (LANs).
For example, within the home one might find a local area network (wired or wireless) to which generic appliances such as personal computers and specialized appliances such as CD players, digital VCRs, televisions, phones, and so forth, are attached. These appliances would be able to interact over the network; for example, one might be able to edit home videos on the personal computer or access music stored in a file server from any audio player in the home.
Digitization. Over the years, many of the real channels buried within these networks have been digitized. This process has been cost-justified through improvements in the price and quality of supplying analog bearer services. The onward march of digital technology has now reached the point where this cost argument applies to almost all real channels, including those that traverse the "last mile" to the premises of the consumer. Through digitization, cable and wired telephony see the potential for improved quality and reduced maintenance costs. The cellular telephony and broadcast television industries can benefit through more efficient use of the spectrum.
Virtualization. As the last mile is digitized, bearer channels that are digital and of high capacity become feasible. More importantly, the configuration of these channels can also be far more dynamic, leading to a virtualization of the distribution substrate. This substrate will provide the appliances, and their users, with the appearance of being simultaneously connected to a large number of peers. Instead of using one bearer channel at a time, they can gain access to a large number of virtual channels that share their point of attachment to the distribution system. Furthermore, the capacity of the individual bearer channel is not fixed, but can vary from zero to the limits imposed by the concatenated real channels and appliances. Typically this is accomplished by dynamically multiplexing and switching the real channels on a fine grain basis. In digital terms, this means leveraging the conversion, temporal decoupling, and dynamic resource allocation properties of digital technologies.
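One way to picture fine-grained multiplexing is a scheduler that divides a fixed real channel among virtual channels in proportion to their instantaneous demand. The proportional policy below is only a sketch of the idea, not a description of any deployed system:

```python
def share_capacity(real_capacity, demands):
    """Fine-grained multiplexing sketch: each virtual channel gets
    capacity varying from zero up to the limit of the underlying real
    channel, allocated in proportion to instantaneous demand."""
    total = sum(demands.values())
    if total == 0:
        return {ch: 0 for ch in demands}
    scale = min(1.0, real_capacity / total)
    return {ch: d * scale for ch, d in demands.items()}

# Three virtual channels share one 100-unit bearer channel.
alloc = share_capacity(100, {"video": 120, "voice": 20, "data": 60})
assert sum(alloc.values()) <= 100 + 1e-9
assert alloc["voice"] < alloc["video"]   # capacity tracks demand

# An idle virtual channel consumes no real capacity at all.
assert share_capacity(100, {"video": 0, "voice": 0})["video"] == 0
```

In this picture, the "large number of virtual channels" sharing one point of attachment is simply a set of entries in the scheduler's demand table, re-evaluated as often as the underlying processors allow.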
Concatenation. The VI's distribution substrate is a network of networks that includes wireless (TV, cellular, radio), fiber optic, coaxial cable, twisted pair, etc. Within their own premises or campus, users can operate private networks, shared only by their own appliances. Between premises, users contract to use shared infrastructure, possibly in the form of predefined virtual private networks, but often on a more dynamic basis.
The private and shared networks are concatenated together so that the combined substrate appears as one ubiquitous, continuous network to its users. Digital conversion supports the interoperation of decoupled components and brokerage functions enable competition at many levels. Users can choose among distribution channels, which rely on different technologies and may have different properties of delay, error correction, etc.
Large and small users. As previously discussed, it may be appropriate to distinguish between large and small users, rather than between suppliers and consumers. We believe that different distribution vendors will emerge to cater to pools of users with similar distribution requirements. Although many different types of pools may emerge, the discussion which follows arbitrarily distinguishes between pools of large and small capacity users.
Small users. Low volume users present a less attractive market for vendors, especially those using enclosed spectrum technologies, such as fiber and copper, which require the installation and maintenance of assets that couple the user to the distribution system. Distribution vendors have adopted a number of optimizations to offset their costs, including asymmetry, sharing, and sharply limiting instantaneous capacity.
Television and traditional cable are good examples of highly asymmetric systems that leverage sharing by broadcasting the same signal to a large geographic area. The difficulty with this arrangement is that it is unidirectional and the capacity of the single broadcast channel limits the vendor's aggregate capacity.
On average, small businesses and homes will consume more information than they produce. However, if they are precluded from transmitting information then they are precluded from acting in the supplier role. In fact, the situation is somewhat worse - if a small supplier is incapable of sending out large bursts of information from time to time, then it is effectively precluded from competing in many VI markets. In short, small users require access to virtual channels with similar capabilities to those made available to large users. The pooling of small users through cellularization and the dynamic allocation of shared resources provides the means to resolve this dilemma.
Cellularization. A common optimization is to pool users based on geographic locality so that they can maximize the sharing of physical assets. For example, users residing in a large building might benefit from an arrangement in which they are serviced by a single fiber. This form of geographic pooling is common in wireless telephony, where base stations are established to service each neighborhood cell and a portion of the RF spectrum is dynamically shared among all of the users within the cell. The channels interconnecting the base stations are also shared, irrespective of whether they are implemented through wireless or wired means. In effect, all of the physical assets are shared among pool users. Although present cellular systems only support fixed and limited capacity bearer channels, they are likely to be virtualized in the future.
Telephone systems are also being cellularized to improve sharing and reduce the maintenance costs of the distribution plant. High capacity fibers are run to each neighborhood, where they are connected to the individual twisted pairs that service each home. This is a less satisfying solution than the wireless case - the fiber plant is shared, but the twisted pairs remain dedicated to individual subscribers. Furthermore, the bandwidth limitations of these pairs place fixed and relatively low limits on the capacity available to each user.
Finally, we note that the terrestrial broadcast of television signals is also adopting digital techniques. To avoid interference with existing analog signals, digital television is likely to adopt low power techniques involving multiple transmitters, each of which covers a reduced geographic area. This might well be a first step towards the cellularization of television, through the transmission of different signals in different neighborhoods.
In summary, cellularization provides a sharing mechanism through which virtualized distribution can be made available to pools of small capacity users. In Section 5 we discuss how to safeguard competition within these markets.
Cross-connection. A further market for distribution vendors will be in the cross-connection of vendors. For telephony, this role is presently played by the long distance carriers, who provide cross-connection among the regional telephone companies, both wired and wireless. In terms of physical assets, this market is similar to the large user market. Traffic from the small users is aggregated by their access networks and presented for cross-connection at a relatively small number of high capacity points of presence. Although the vendor requires additional capabilities, such as the ability to bill large numbers of users, these services can be procured through third parties. Accordingly, we believe that these markets will engender the same vigorous and innovative competition as the large user markets.
To be successful, the VI must push software technology beyond its present limits, especially our ability to mask the complexity associated with heterogeneity. However, computer scientists have been trying to address this problem for quite some time and progress has been slow. Rather than solve the generic problem, we have identified the following specific NII/VI objectives, towards which concrete steps might be taken:
Configuration management. A customer should be able to buy an appliance, plug it in (or not), and expect it to work with whatever surrounding infrastructure is available. The configuration may be achieved by customer managed software or through a service provided by another party. The Macintosh Chooser offers a glimmer of this functionality - when a printer is powered up and attached to the network, the software automatically detects it and adds it to the list of available printers.
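The plug-and-work behavior described above can be sketched as a simple discovery interaction. The sketch below is purely illustrative (the `Network` and `Appliance` classes are our invention, not part of any NII specification): an appliance announces itself when attached, and a Chooser-style client queries the network for devices of a given kind.

```python
# Illustrative sketch of plug-and-work configuration (all names hypothetical).
# Attaching an appliance is all the user does; detection is automatic.

class Appliance:
    def __init__(self, kind, model):
        self.kind = kind      # e.g., "printer"
        self.model = model    # e.g., "LaserWriter"

class Network:
    def __init__(self):
        self._attached = []

    def attach(self, appliance):
        # The infrastructure registers the appliance as soon as it appears.
        self._attached.append(appliance)

    def choose(self, kind):
        # Chooser-style query: list every attached appliance of this kind.
        return [a.model for a in self._attached if a.kind == kind]

net = Network()
net.attach(Appliance("printer", "LaserWriter"))
net.attach(Appliance("fax", "HomeFax"))
print(net.choose("printer"))  # ['LaserWriter']
```

In a real infrastructure the registry would be distributed and the announcement would travel over the network itself, but the division of labor is the same: the appliance declares what it is, and the surrounding software does the rest.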
Publication of interfaces. An NII label allows a user to identify the model, or type, of a component. Given this information, one should be able to obtain a specification of its interface. The interface can be specified in a number of ways:
To automate the processing of published and self-describing interfaces we require mechanisms for the accurate and rigorous specification of component interfaces in a manner that allows users and software to reason about their compatibility (or lack thereof).
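One minimal way to make such reasoning mechanical is to model an interface as a set of named operations with typed signatures, and to define compatibility as signature containment. The following sketch is our own hedged illustration, not a proposed NII formalism; the operation names and signature strings are hypothetical:

```python
# Hedged illustration: an interface modeled as a mapping from operation
# names to signature strings. A component satisfies a requirement if it
# provides at least the required operations with matching signatures.

def compatible(provided, required):
    """True if every required operation is provided with the same signature."""
    return all(provided.get(op) == sig for op, sig in required.items())

vcr = {"play": "() -> video", "stop": "() -> void", "rewind": "() -> void"}
remote_needs = {"play": "() -> video", "stop": "() -> void"}

print(compatible(vcr, remote_needs))  # True: the VCR offers all required ops
print(compatible(remote_needs, vcr))  # False: the remote lacks 'rewind'
```

Genuine formal methods would go well beyond string equality (behavioral specifications, subtyping of signatures), but even this degenerate check shows how software could reason about compatibility given published, machine-readable interface descriptions.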
Within the computer science community there has been considerable research on formal methods for the specification of interfaces. Although progress has been made, the methods that have been developed to date are limited and have not achieved widespread acceptance. However, perhaps progress could be made in a restricted sub-domain, e.g., if a few key NII interfaces were identified and a focused effort were made to develop techniques for their specification. In previous work [19] we have referred to these high leverage points as NII reference points.
Conversion technology. The components used during a VI transaction (or telephone conversation) are dynamically assembled. In some cases it will be possible to assemble a chain whose components have (pair-wise) compatible interfaces. However, it will often be the case that the examination of the component interface specification will identify mismatches between components that might otherwise be well-suited to the task at hand. In these cases, it may be possible to insert adapters into the chain. Much like electrical adapters used when traveling between countries, these adapters would bridge otherwise incompatible interfaces.
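The assembly of such a chain can be sketched mechanically: walk the chain pair-wise and, wherever the output format of one component fails to match the input format of the next, insert an adapter that bridges the two. The component and format names below are hypothetical, chosen only to make the sketch concrete:

```python
# Sketch of dynamic chain assembly with adapter insertion (hypothetical
# formats). Each component declares the format it consumes and produces.

def assemble(chain, adapters):
    """Return the chain with adapters inserted at pair-wise mismatches.

    chain    -- list of (name, in_format, out_format) tuples
    adapters -- dict mapping (from_format, to_format) to an adapter name
    """
    result = [chain[0]]
    for nxt in chain[1:]:
        prev_out, nxt_in = result[-1][2], nxt[1]
        if prev_out != nxt_in:
            key = (prev_out, nxt_in)
            if key not in adapters:
                raise ValueError(f"no adapter for {key}")
            result.append((adapters[key], prev_out, nxt_in))
        result.append(nxt)
    return result

chain = [("camera", None, "ntsc"), ("editor", "mpeg", "mpeg")]
adapters = {("ntsc", "mpeg"): "ntsc-to-mpeg"}
print([name for name, _, _ in assemble(chain, adapters)])
# ['camera', 'ntsc-to-mpeg', 'editor']
```

The interesting engineering questions lie in what the sketch elides: discovering which adapters exist, chaining more than one adapter when no single bridge is available, and weighing the cost (in fidelity or delay) that each conversion imposes.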
Publication vs. standardization. Standards are such a good thing that everyone would like to have their own.[8] In the case of mechanical connectors this is inappropriate - imagine the assortment of adapters you would have to carry if each town had a unique electrical connector. However, many digital components have soft, rather than mechanical, connectors. Furthermore, the marginal cost of software adapters (processing and storage) is declining rapidly. While it remains important to have well-specified published interfaces, there can be many of them.
Competitive markets are not free of all regulation - they simply require a different style of oversight than monopolies. A mosaic involving anti-trust, self-regulation and public commissions is likely to evolve. Accordingly, progress towards a CGD infrastructure does not depend so much on the elimination of regulation as its decoupling. The various parts of the infrastructure - supply, distribution, appliances, and brokerage - should be regulated separately. Some limited monopolies in information distribution may be necessary and these must be organized in a way that preserves the competitive structure of the information market as a whole. Finally, steps may be necessary to ensure the timely and accurate publication of interface specifications.
Much of the (proposed) U.S. legislation [2, 9] is consistent with our detailed prescriptions and we are not arguing against it. However, we would have preferred to see a proactive Communications Act that champions innovation and sets out a vision of the fundamental challenges and the means by which they will be addressed.
Although we do not preclude the vertical bundling of services (see below), the bundling of regulation is another matter. It may present a barrier to entry for an innovative player who wants to provide a new service or deploy a new distribution technology. The present food supply and distribution industry presents an alternative structure that is reasonably successful. Different mechanisms and organizations oversee the safety of different types of food (e.g., baby food), the reliability of the food supply, the safety and reliability of the distribution system, the fairness of the various markets, and the safety of the appliances used.
We presently couple the allocation of spectrum to specific services and to specific modulation/coding technologies that implement the channels over which those services are delivered. This leads to a hard view of spectrum in which each band is used to deliver a specific service with a specific technology. We envision a soft spectrum environment in which spectrum is fairly generic; that is, a service can be delivered over many different bands and technologies.[9] This is already happening in bands that are used with digital technology and the process should be accelerated so as to increase the pool of generic digitized spectrum. Although there may be public policy reasons for allocating capacity to certain purposes, those decisions should be decoupled from the choice of band and technology. This decoupling of policy and mechanism, through the deployment of technologies that can be configured to support a range of policies, is common practice in the design of computer systems and networks.
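The decoupling of policy and mechanism invoked above can be illustrated with the familiar software pattern it refers to: a generic allocation mechanism that accepts the policy as a parameter. Everything in this sketch is hypothetical and serves only as an analogy for soft spectrum, not as a proposal for how spectrum would actually be assigned:

```python
# Analogy only: a generic channel-allocation mechanism with pluggable policy.
# The mechanism knows how to hand out bands; which requests win is policy.

def allocate(bands, requests, policy):
    """Assign bands to requests in policy order; the mechanism stays generic."""
    assignment = {}
    free = list(bands)
    for req in sorted(requests, key=policy):
        if free:
            assignment[req] = free.pop(0)
    return assignment

bands = ["band-A", "band-B"]
requests = [("paging", 2), ("emergency", 0), ("video", 1)]

# Two interchangeable policies over the same mechanism:
by_priority = lambda r: r[1]   # lower priority number wins first
by_name = lambda r: r[0]       # alphabetical order

print(allocate(bands, requests, by_priority))
# {('emergency', 0): 'band-A', ('video', 1): 'band-B'}
```

Swapping `by_priority` for `by_name` changes the public-policy outcome without touching the allocation machinery - the soft spectrum argument in miniature.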
Our approach does not preclude the bundled marketing of appliances and distribution for the purpose of innovation - so long as it does not force an artificial coupling. The cellular telephony industry presents an interesting example. The industry bundles and discounts appliances in order to market their information and distribution services. However, the appliances are built within a competitive environment and are not tailored to a single provider's facilities. The packages are marketed through many different sales channels and there is ample room for competition and innovation. This is in sharp contrast to the settop box market where the appliances are ordered and marketed largely by the service providers. Although there has been some innovation in the settop market, the wholesale cost of the appliances seems very high when compared to other consumer devices of similar complexity.
No single channel will be suited to all of the information services that are offered. However, we envision a mosaic in which, for any given service, there is likely to be more than one channel to choose from. Similarly, any one channel will provide a range of services and compete for each of those services with some subset of the other channels.[10] In practice, each channel may have certain core strengths (e.g., killer applications) and compete with other channels at the fringes of its strengths.
Wireless distribution provides an important safety net within the distribution mosaic. In markets where the demand is not great enough to ensure competition and innovation within wired channels, the low start-up costs of wireless ensure the availability of an alternative channel. However, to fulfill this role, wireless must be a truly generic channel. Although its aggregate capacity may be limited, we should strive to maximize the range of services that can be carried. In Section 4.3.2 we highlighted the need to provide small users with instantaneous access to high bandwidth channels. The wireless spectrum provides a way to do this while maximizing the degree of resource sharing across pools of small users.
Dealing with monopolies. Although competition is our preferred approach, it may sometimes be appropriate to accept limited monopolies within the NII. However, we should establish stringent tests for why monopolies are required and what their scope should be.
Granting a monopoly encourages deployment of today's technology by granting the investor a degree of protection from new technologies. We should be aware that in return for the investment we are, in effect, providing the investors with insurance against innovation [22-24]. If monopolies are necessary to the development of a particular type of distribution channel, then we offer the following prescriptions for their regulation:
The historic advantage of vertical integration has been lowered transaction costs; if a business is tightly coupled across the supply, distribution, and consumer appliance segments, then the transactions between those segments can be made cheap and efficient. However, as we have shown, digital technology reduces the cost of dynamically assembled supply chains, and the degree of human intervention associated with each transaction is declining. As the transaction costs go down, the units of transaction can be smaller and more frequent. Thus we would not expect vertical bundling to confer an advantage.
However, access to a specification is not sufficient if one is precluded from building an adapter by intellectual property rights (IPR) that cannot be licensed in a convenient and cost effective manner. Many researchers are dealing with the overall IPR question. We would like to draw their attention to the narrower question of component interfaces and the question of building adapters that support interoperation with them.
The first question that should be addressed is whether it is appropriate to grant protection on interfaces and, if so, what is the appropriate vehicle (e.g., copyright, patent, trade secret). Although we may have personal views on the matter, we have not identified a policy reason to refuse such protection, provided that it does not apply to interfaces to monopoly assets, and that there is some mechanism through which competitors can be readily licensed to build adapters; the approach proposed for software licensing [26] might be applicable.
It is also possible that the dominant business ethic will come to favor the public and free distribution of interface specifications.[11] In most cases, the vendor of a product benefits from having a good specification of its interfaces. The question is whether or not they publish that specification and do so in a manner that engenders confidence in its completeness and accuracy. Microsoft's Rich Text Format (RTF) provides an interesting example concerning the publication of interface specifications. Microsoft relies on the RTF specification for its own purposes (e.g., to support interoperation across its own product line). RTF also allows Microsoft's competitors to produce adapters that support interoperation with its products. In the case of RTF, Microsoft determined that publishing the specification would encourage others to market value-added products that would increase the market for its own products.
However, we have also shown that, despite the current emphasis on competition, the present convergence activities could inadvertently establish artificial barriers that would lead us back to a regulated monopoly - one that is generic and thus far more pervasive than those of the past. Ultimately, we believe that the digital properties will destabilize such a monopoly, but only after a lengthy interval. It would be preferable to avoid the monopoly state entirely and proceed directly to a VI. Realizing this vision will require a synthesis of technical and policy efforts and Sections 4 and 5 represent our first pass at enumerating the challenges that lie ahead.
Others have identified some of the issues that are raised in this paper. However, we believe that there is novelty in our taxonomy and in the chain of reasoning through which we link CGD to the underlying digital properties and to the VI and its means of realization. The underlying drivers, which are tied to the technology curve, are:
2. 104th Congress, S.652 (Proposed) Telecommunications Competition and Deregulation Act of 1995 (as reported), 1995, United States Senate.
3. Council on Competitiveness, Competition Policy: Unlocking The National Information Infrastructure, 1993, Council on Competitiveness.
4. Cross-Industry Working Team, An Architectural Framework for the National Information Infrastructure, 1994, Corporation For National Research Initiatives.
5. 103d Congress, H.3609 (Proposed) Telecommunications Equipment Research and Manufacturing Competition Act of 1993 (Not enacted), 1993, United States House of Representatives.
6. 103d Congress, S.1822 (Proposed) Communications Act of 1994 (Not enacted), 1994, United States Senate.
7. 104th Congress, H.1275 (Proposed) Competitive Consumer Electronics Availability Act of 1995, 1995, United States House of Representatives.
8. 104th Congress, (Proposed) Universal Service Telecommunications Act of 1995, 1995, United States Senate.
9. 104th Congress, H.411 (Proposed) Antitrust and Communications Reform Act of 1995, 1995, United States House of Representatives.
10. Collis, D.J., P.W. Bane, and S.P. Bradley. Colliding Worlds: Resources and Winners. In Colliding Worlds: The Convergence of Computers, Telecommunications, and Consumer Electronics. 1994. Harvard Business School, October 5-7, 1994.
11. Baumol, W.J. and J.G. Sidak, Toward Competition in Local Telephony. AEI Studies in Telecommunications Deregulation, ed. J.G. Sidak and P.W. MacAvoy. 1994, Cambridge, MA: The MIT Press.
12.
13. Rose, M.T., The Simple Book - An Introduction to Management of TCP/IP-based Internets. 1991, Englewood Cliffs: Prentice Hall.
14. SMPTE Header/Descriptor Task Force, SMPTE Header/Descriptor Task Force: Final Report. SMPTE Journal, (June 1992).
15. Microsoft Corporation, The Component Object Model: Technical Overview, 1994, Microsoft Corporation.
16. Brockschmidt, K., Introducing OLE 2.0, Part I: Windows Objects and the Component Object Model. Microsoft Systems Journal, 1993. (August) p. 15-23.
17. IBM Corporation Object Technology Products Group, The System Object Model (SOM) and the Component Object Model (COM): A comparison of technologies from a developer's perspective, 1994, IBM Corporation.
18. Piersol, K., A Close-Up of OpenDoc, 1994.
19. Tennenhouse, D.L. and D.H. Staelin, Inter-Operability Guidelines for the Home Information Infrastructure. In Workshop on Advanced Digital Television in the National Information Infrastructure. 1994. Washington, DC.
20. High-Level Group on the Information Society, Europe and the global information society: Recommendations to the European Council, 1994, European Council.
21. Tennenhouse, D.L. and V. Bose, SpectrumWare - A Software-Oriented Approach to Wireless Signal Processing. To appear in ACM Mobile Computing and Networking 95, Berkeley, CA, November 1995.
22. Arrow, K.J., Economic Welfare and the Allocation of Resources for Invention, in The Rate and Direction of Inventive Activity, R. Nelson, Editor. 1962, Princeton University Press: New Jersey.
23. Kahn, A.E., The Economics of Regulation: Principles and Institutions. 1988, Cambridge, MA: MIT Press.
24. McNamara, J.R., The Economics of Innovation in the Telecommunications Industry. 1991, New York: Quorum Books.
25. Pool, I.d.S., Technologies of Freedom. 1983, Cambridge, MA: The Belknap Press of Harvard University Press.
26. Samuelson, P., et al., A Manifesto Concerning the Legal Protection of Computer Programs. Columbia Law Review, 1994. 94(8), p. 2308-2431.