Robert ZICH, special assistant to the associate librarian for special projects, Library of Congress, and moderator of this session, first noted the blessed but somewhat awkward circumstance of having four very distinguished people representing networks and networking or at least leaning in that direction, while lacking anyone to speak from the strongest possible background in CD-ROMs. ZICH expressed the hope that members of the audience would join the discussion. He stressed the subtitle of this particular session, "Options for Dissemination," and, concerning CD-ROMs, the importance of determining when it would be wise to consider dissemination in CD-ROM versus networks. A shopping list of issues pertaining to CD-ROMs included: the grounds for selecting commercial publishers, and in-house publication where possible versus nonprofit or government publication. A similar list for networks included: determining when one should consider dissemination through a network, identifying the mechanisms or entities that exist to place items on networks, identifying the pool of existing networks, determining how a producer would choose between networks, and identifying the elements of a business arrangement in a network.
Options for publishing in CD-ROM: an outside publisher versus self-publication. If an outside publisher is used, it can be nonprofit, such as the Government Printing Office (GPO) or the National Technical Information Service (NTIS), in the case of government. The pros and cons associated with employing an outside publisher are obvious. Among the pros, there is no trouble getting accepted. One pays the bill and, in effect, goes one's way. Among the cons, when one pays an outside publisher to perform the work, that publisher will perform the work it is obliged to do, but perhaps without the production expertise and skill in marketing and dissemination that some would seek. There is the body of commercial publishers that do possess that kind of expertise in distribution and marketing but that obviously are selective. In self-publication, one exercises full control, but then one must handle matters such as distribution and marketing. Such are some of the options for publishing in the case of CD-ROM.
In the case of technical and design issues, which are also important, there are many matters which many at the Workshop already knew a good deal about: retrieval system requirements and costs, what to do about images, the various capabilities and platforms, the trade-offs between cost and performance, concerns about local-area networkability, interoperability, etc.
Clifford LYNCH, director, Library Automation, University of California, opened his talk with the general observation that networked information constituted a difficult and elusive topic because it is something just starting to develop and not yet fully understood. LYNCH contended that creating genuinely networked information was different from using networks as an access or dissemination vehicle and was more sophisticated and more subtle. He invited the members of the audience to extrapolate, from what they heard about the preceding demonstration projects, to what sort of a world of electronic information--scholarly, archival, cultural, etc.--they wished to end up with ten or fifteen years from now. LYNCH suggested that to extrapolate directly from these projects would produce unpleasant results.
Putting the issue of CD-ROM in perspective before getting into generalities on networked information, LYNCH observed that those engaged in multimedia today who wish to ship a product, so to speak, probably do not have much choice except to use CD-ROM: networked multimedia on a large scale basically does not yet work because the technology does not exist. For example, anybody who has tried moving images around over the Internet knows that this is an exciting touch-and-go process, a fascinating and fertile area for experimentation, research, and development, but not something that one can become deeply enthusiastic about committing to production systems at this time.
This situation will change, LYNCH said. He differentiated CD-ROM from the practices that have been followed up to now in distributing data on CD-ROM. For LYNCH the problem with CD-ROM is not its portability or its slowness but the two-edged sword of having the retrieval application and the user interface inextricably bound up with the data, which is the typical CD-ROM publication model. It is not a case of publishing data but of distributing a typically stand-alone, typically closed system, all--software, user interface, and data--on a little disk. Hence arise all the between-disk navigational issues, as well as the impossibility in most cases of integrating data on one disk with that on another. Most CD-ROM retrieval software does not network very gracefully at present. However, in the present world of immature standards and lack of understanding of what network information is or what the ground rules are for creating or using it, publishing information on a CD-ROM does add value in a very real sense.
LYNCH drew a contrast between CD-ROM and network pricing and in doing so highlighted something bizarre in information pricing. A large institution such as the University of California has vendors who will offer to sell information on CD-ROM for a price per year in four digits, but for the same data (e.g., an abstracting and indexing database) on magnetic tape, regardless of how many people may use it concurrently, will quote a price in six digits.
What is packaged with the CD-ROM in one sense adds value--a complete access system, not just raw, unrefined information--although it is not generally perceived that way. This is because the access software, although it adds value, is viewed by some people, particularly in the university environment where there is a very heavy commitment to networking, as being developed in the wrong direction.
Given that context, LYNCH described the examples demonstrated as a set of insular information gems--Perseus, for example, offers nicely linked information, but would be very difficult to integrate with other databases, that is, to link seamlessly with source files from other sources. It resembles an island, and in this respect is similar to numerous stand-alone projects that are based on videodiscs, that is, on the single-workstation concept.
As scholarship evolves in a network environment, the paramount need will be to link databases. We must link personal databases to public databases, to group databases, in fairly seamless ways--which is extremely difficult in the environments under discussion with copies of databases proliferating all over the place.
The notion of layering also struck LYNCH as lurking in several of the projects demonstrated. Several databases in a sense constitute information archives without a significant amount of navigation built in. Educators, critics, and others will want a layered structure--one that defines or links paths through the layers to allow users to reach specific points. In LYNCH's view, layering will become increasingly necessary, and not just within a single resource but across resources (e.g., tracing mythology and cultural themes across several classics databases as well as a database of Renaissance culture). This ability to organize resources, to build things out of multiple other things on the network or select pieces of it, represented for LYNCH one of the key aspects of network information.
Contending that information reuse constituted another significant issue, LYNCH commended to the audience's attention Project NEEDS (i.e., National Engineering Education Delivery System). This project's objective is to produce a database of engineering courseware as well as the components that can be used to develop new courseware. In a number of the existing applications, LYNCH said, the issue of reuse (how much one can take apart and reuse in other applications) was not being well considered. He also raised the issue of active versus passive use, one aspect of which is how much information will be manipulated locally by users. Most people, he argued, may do a little browsing and then will wish to print. LYNCH was uncertain how these resources would be used by the vast majority of users in the network environment.
LYNCH next said a few words about X-Windows as a way of differentiating between network access and networked information. A number of the applications demonstrated at the Workshop could be rewritten to use X across the network, so that one could run them from any X-capable device--a workstation, an X terminal--and transact with a database across the network. Although this opens up access a little, assuming one has enough network to handle it, it does not provide an interface to develop a program that conveniently integrates information from multiple databases. X is a viewing technology that has limits. In a real sense, it is just a graphical version of remote log-in across the network. X-type applications represent only one step in the progression towards real access.
LYNCH next discussed barriers to the distribution of networked multimedia information. The heart of the problem is a lack of standards to provide the ability for computers to talk to each other, retrieve information, and shuffle it around fairly casually. At the moment, little progress is being made on standards for networked information; for example, present standards do not cover images, digital voice, and digital video. A useful tool kit of exchange formats for basic texts is only now being assembled. The synchronization of content streams (i.e., synchronizing a voice track to a video track, establishing temporal relations between different components in a multimedia object) constitutes another issue for networked multimedia that is just beginning to receive attention.
Underlying network protocols also need some work; good, real-time delivery protocols on the Internet do not yet exist. In LYNCH's view, highly important in this context is the notion of networked digital object IDs, the ability of one object on the network to point to another object (or component thereof) on the network. Serious bandwidth issues also exist. LYNCH was uncertain if billion-bit-per-second networks would prove sufficient if numerous people ran video in parallel.
LYNCH concluded by offering an issue for database creators to consider, as well as several comments about what might constitute good trial multimedia experiments. In a networked information world the database builder or service builder (publisher) does not exercise the same extensive control over the integrity of the presentation; strange programs "munge" one's data before the user sees it. Serious thought must be given to what guarantees integrity of presentation. Part of that is related to where one draws the boundaries around a networked information service. This question of presentation integrity in client-server computing has not been stressed enough in the academic world, LYNCH argued, though commercial service providers deal with it regularly.
Concerning multimedia, LYNCH observed that good multimedia at the moment is hideously expensive to produce. He recommended producing multimedia with either very high sale value, or multimedia with a very long life span, or multimedia that will have a very broad usage base and whose costs therefore can be amortized among large numbers of users. In this connection, historical and humanistically oriented material may be a good place to start, because it tends to have a longer life span than much of the scientific material, as well as a wider user base. LYNCH noted, for example, that American Memory fits many of the criteria outlined. He remarked on the extensive discussion about bringing the Internet or the National Research and Education Network (NREN) into the K-12 environment as a way of helping the American educational system.
LYNCH closed by noting that the kinds of applications demonstrated struck him as excellent justifications of broad-scale networking for K-12, but that at this time no "killer" application exists to mobilize the K-12 community to obtain connectivity.
During the discussion period that followed LYNCH's presentation, several additional points were made.
LYNCH reiterated even more strongly his contention that, historically, once one goes outside high-end science and the group of those who need access to supercomputers, there is a great dearth of genuinely interesting applications on the network. He saw this situation changing slowly, with some of the scientific databases and scholarly discussion groups and electronic journals coming on as well as with the availability of Wide Area Information Servers (WAIS) and some of the databases that are being mounted there. However, many of those things do not seem to have piqued great popular interest. For instance, most high school students of LYNCH's acquaintance would not qualify as devotees of serious molecular biology.
Concerning the issue of the integrity of presentation, LYNCH believed that a couple of information providers have laid down the law at least on certain things. For example, his recollection was that the National Library of Medicine feels strongly that one needs to employ the identifier field if one is to mount a database commercially. The problem with a real networked environment is that one does not know who is reformatting and reprocessing one's data when one enters a client-server mode. It becomes anybody's guess, for example, if the network uses a Z39.50 server, or what clients are doing with one's data. A data provider can say that his contract will only permit clients to have access to his data after he vets them and their presentation and makes certain it suits him. But LYNCH held out little expectation that the network marketplace would evolve in that way, because it required too much prior negotiation.
CD-ROM software does not network for a variety of reasons, LYNCH said. He speculated that CD-ROM publishers are not eager to have their products really hook into wide area networks, because they fear it will make their data suppliers nervous. Moreover, until relatively recently, one had to be rather adroit to run a full TCP/IP stack plus applications on a PC-size machine, whereas nowadays it is becoming easier as PCs grow bigger and faster. LYNCH also speculated that software providers had not heard from their customers until the last year or so, or had not heard from enough of their customers.
Howard BESSER, School of Library and Information Science, University of Pittsburgh, spoke primarily about multimedia, focusing on images and the broad implications of disseminating them on the network. He argued that planning the distribution of multimedia documents posed two critical implementation problems, which he framed in the form of two questions: 1) What platform will one use and what hardware and software will users have for viewing the material? and 2) How can one deliver a sufficiently robust set of information in an accessible format in a reasonable amount of time? Depending on whether network or CD-ROM is the medium used, this question raises different issues of storage, compression, and transmission.
Concerning the design of platforms (e.g., sound, gray scale, simple color, etc.) and the various capabilities users may have, BESSER maintained that a layered approach was the way to deal with users' capabilities. A result would be that users with less powerful workstations would simply have less functionality. He urged members of the audience to advocate standards and accompanying software that handle layered functionality across a wide variety of platforms.
BESSER also addressed problems in platform design, namely, deciding how large a machine to design for situations when the largest number of users have the lowest level of the machine, and one desires higher functionality. BESSER then proceeded to the question of file size and its implications for networking. He discussed mainly still images. For example, a digital color image that fills the screen of a standard mega-pel workstation (Sun or NeXT) will require one megabyte of storage for an eight-bit image or three megabytes of storage for a true color or twenty-four-bit image. Lossless compression algorithms (that is, computational procedures in which no data is lost in the process of compressing [and decompressing] an image--the exact bit-representation is maintained) might bring storage down to a third of a megabyte per image, but not much further than that. The question of size makes it difficult to fit an appropriately sized set of these images on a single disk or to transmit them quickly enough on a network.
With these full-screen mega-pel images that constitute a third of a megabyte, one gets 1,000-3,000 full-screen images on a one-gigabyte disk; a standard CD-ROM represents approximately 60 percent of that. Storing images the size of a PC screen (just 8-bit color) increases storage capacity to 4,000-12,000 images per gigabyte; 60 percent of that gives one the size of a CD-ROM, which in turn creates a major problem. One cannot fit a sizable set of full-screen, full-color images on a CD-ROM using lossless compression alone; one must compress them further, with some loss, or use a lower resolution. For megabyte-size images, anything slower than a T-1 speed is impractical. For example, on a fifty-six-kilobaud line, it takes three minutes to transfer a one-megabyte file, if it is not compressed; and this speed assumes ideal circumstances (no other user contending for network bandwidth). Thus, questions of disk access, remote display, and current telephone connection speed make transmission of megabyte-size images impractical.
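BESSER's storage and transmission figures can be reproduced with simple arithmetic. The sketch below is a Python illustration added here, not part of the original talk; it assumes decimal units, a 3:1 lossless compression ratio, and an idealized line with no protocol overhead or contention.

```python
# Back-of-the-envelope check of the image-size figures in the talk.
# Assumptions (not from the source): decimal megabytes/gigabytes,
# 3:1 lossless compression, and an idealized, uncontended line.

MB = 1_000_000  # bytes, decimal


def image_bytes(pixels, bits_per_pixel):
    """Uncompressed storage for one image."""
    return pixels * bits_per_pixel // 8


def images_per_disk(disk_bytes, bytes_per_image):
    """How many images of a given size fit on a disk."""
    return disk_bytes // bytes_per_image


def transfer_seconds(n_bytes, bits_per_second):
    """Idealized transfer time: no overhead, no contention."""
    return n_bytes * 8 / bits_per_second


megapel = 1_000_000  # pixels on a full mega-pel workstation screen

eight_bit = image_bytes(megapel, 8)    # 1,000,000 bytes: the 1 MB cited
true_color = image_bytes(megapel, 24)  # 3,000,000 bytes: the 3 MB cited
compressed = eight_bit // 3            # ~1/3 MB after 3:1 lossless compression

gigabyte = 1_000 * MB
print(images_per_disk(gigabyte, eight_bit))   # 1,000 uncompressed images
print(images_per_disk(gigabyte, compressed))  # ~3,000 compressed images

# A 1 MB uncompressed file on a 56-kilobaud line, under ideal conditions:
print(transfer_seconds(1 * MB, 56_000))  # ~143 s, on the order of the
                                         # three minutes cited
```

The calculation confirms the 1,000-3,000 images-per-gigabyte range and shows why anything slower than T-1 is impractical for megabyte-size images.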
BESSER then discussed ways to deal with these large images, for example, compression and decompression at the user's end. In this connection, the issues of how much one is willing to lose in the compression process and what image quality one needs in the first place are unknown. But what is known is that compression entails some loss of data. BESSER urged that more studies be conducted on image quality in different situations, for example, what kind of images are needed for what kind of disciplines, and what kind of image quality is needed for a browsing tool, an intermediate viewing tool, and archiving.
BESSER remarked on two promising trends for compression: from a technical perspective, algorithms that use what is called subjective redundancy employ principles from visual psychophysics to identify and remove information from the image that the human eye cannot perceive; from an interchange and interoperability perspective, the JPEG (i.e., Joint Photographic Experts Group, an ISO standard) compression algorithms also offer promise. These issues of compression and decompression, BESSER argued, resembled those raised earlier concerning the design of different platforms. Gauging the capabilities of potential users constitutes a primary goal. BESSER advocated layering or separating the images from the applications that retrieve and display them, to avoid tying them to particular software.
BESSER detailed several lessons learned from his work at Berkeley with Imagequery, especially the advantages and disadvantages of using X-Windows. In the latter category, for example, retrieval is tied directly to one's data, an intolerable situation in the long run on a networked system. Finally, BESSER described a project of Jim Wallace at the Smithsonian Institution, who is mounting images in an extremely rudimentary way on the CompuServe and GEnie networks and is preparing to mount them on America Online. Although the average user takes over thirty minutes to download these images (assuming a fairly fast modem), nevertheless, images have been downloaded 25,000 times.
BESSER concluded his talk with several comments on the business arrangement between the Smithsonian and CompuServe. He contended that not enough is known concerning the value of images.
During the brief exchange between LESK and BESSER that followed, several clarifications emerged.
LESK argued that the photographers were far ahead of BESSER: It is almost impossible to create such digitized photographic collections except with large organizations like museums, because all the photographic agencies have been going crazy about this and will not sign licensing agreements on any sort of reasonable terms. LESK had heard that National Geographic, for example, had tried to buy the right to use some image in some kind of educational production for $100 per image, but the photographers will not touch it. They want accounting and payment for each use, which cannot be accomplished within the system. BESSER responded that a consortium of photographers, headed by a former National Geographic photographer, had started assembling its own collection of electronic reproductions of images, with the money going back to the cooperative.
LESK contended that BESSER was unnecessarily pessimistic about multimedia images, because people are accustomed to low-quality images, particularly from video. BESSER urged the launching of a study to determine what users would tolerate, what they would feel comfortable with, and what absolutely is the highest quality they would ever need. Conceding that he had adopted a dire tone in order to arouse people about the issue, BESSER closed on a sanguine note by saying that he would not be in this business if he did not think that things could be accomplished.
Ronald LARSEN, associate director for information technology, University of Maryland at College Park, first addressed the issues of scalability and modularity. He noted the difficulty of anticipating the effects of orders-of-magnitude growth, reflecting on the twenty years of experience with the Arpanet and Internet. Recalling the day's demonstrations of CD-ROM and optical disk material, he went on to ask if the field has yet learned how to scale new systems to enable delivery and dissemination across large-scale networks.
LARSEN focused on the geometric growth of the Internet from its inception circa 1969 to the present, and the adjustments required to respond to that rapid growth. To illustrate the issue of scalability, LARSEN considered computer networks as including three generic components: computers, network communication nodes, and communication media. Each component scales (e.g., computers range from PCs to supercomputers; network nodes scale from interface cards in a PC through sophisticated routers and gateways; and communication media range from 2,400-baud dial-up facilities through 45-Mbps backbone links, and eventually to multigigabit-per-second communication lines), and architecturally, the components are organized to scale hierarchically from local area networks to international-scale networks. Such growth is made possible by building layers of communication protocols, as BESSER pointed out. By layering both physically and logically, a sense of scalability is maintained from local area networks in offices, across campuses, through bridges, routers, campus backbones, fiber-optic links, etc., up into regional networks and ultimately into national and international networks.
LARSEN then illustrated the geometric growth over a two-year period--through September 1991--of the number of networks that comprise the Internet. This growth has been sustained largely by the availability of three basic functions: electronic mail, file transfer (ftp), and remote log-on (telnet). LARSEN also reviewed the growth in the kind of traffic that occurs on the network. Network traffic reflects the joint contributions of a larger population of users and increasing use per user. Today one sees serious applications involving moving images across the network--a rarity ten years ago. LARSEN recalled and concurred with BESSER's main point that the interesting problems occur at the application level.
LARSEN then illustrated a model of a library's roles and functions in a network environment. He noted, in particular, the placement of on-line catalogues onto the network and patrons obtaining access to the library increasingly through local networks, campus networks, and the Internet. LARSEN supported LYNCH's earlier suggestion that we need to address fundamental questions of networked information in order to build environments that scale in the information sense as well as in the physical sense.
LARSEN supported the role of the library system as the access point into the nation's electronic collections. Implementation of the Z39.50 protocol for information retrieval would make such access practical and feasible. For example, this would enable patrons in Maryland to search California libraries, or other libraries around the world that are conformant with Z39.50 in a manner that is familiar to University of Maryland patrons. This client-server model also supports moving beyond secondary content into primary content. (The notion of how one links from secondary content to primary content, LARSEN said, represents a fundamental problem that requires rigorous thought.) After noting numerous network experiments in accessing full-text materials, including projects supporting the ordering of materials across the network, LARSEN revisited the issue of transmitting high-density, high-resolution color images across the network and the large amounts of bandwidth they require. He went on to address the bandwidth and synchronization problems inherent in sending full-motion video across the network.
LARSEN illustrated the trade-off between volumes of data in bytes or orders of magnitude and the potential usage of that data. He discussed transmission rates (particularly, the time it takes to move various forms of information), and what one could do with a network supporting multigigabit-per-second transmission. At the moment, the network environment includes a composite of data-transmission requirements, volumes and forms, going from steady to bursty (high-volume) and from very slow to very fast. This aggregate must be considered in the design, construction, and operation of multigigabit networks.
LARSEN's objective is to use the networks and library systems now being constructed to increase access to resources wherever they exist, and thus, to evolve toward an on-line electronic virtual library.
LARSEN concluded by offering a snapshot of current trends: continuing geometric growth in network capacity and number of users; slower development of applications; and glacial development and adoption of standards. The challenge is to design and develop each new application system with network access and scalability in mind.
Edwin BROWNRIGG, executive director, Memex Research Institute, first polled the audience in order to seek out regular users of the Internet as well as those planning to use it some time in the future. With nearly everybody in the room falling into one category or the other, BROWNRIGG made a point regarding access, namely that numerous individuals, especially those who use the Internet every day, take for granted their access to it, the speeds with which they are connected, and how well it all works. However, as BROWNRIGG discovered between 1987 and 1989 in Australia, if one wants access to the Internet but cannot afford it or has some physical boundary that prevents her or him from gaining access, it can be extremely frustrating. He suggested that because of economics and physical barriers we were beginning to create a world of haves and have-nots in the process of scholarly communication, even in the United States.
BROWNRIGG detailed the development of MELVYL in academic year 1980-81 in the Division of Library Automation at the University of California, in order to underscore the issue of access to the system, which at the outset was extremely limited. In short, the project needed to build a network, which at that time entailed use of satellite technology, that is, putting earth stations on campus and also acquiring some terrestrial links from the State of California's microwave system. The installation of satellite links, however, did not solve the problem (which actually formed part of a larger problem involving politics and financial resources). For while the project team could get a signal onto a campus, it had no means of distributing the signal throughout the campus. The solution involved adopting a recent development in wireless communication called packet radio, which combined the basic notion of packet-switching with radio. The project used this technology to get the signal from a point on campus where it came down, an earth station for example, into the libraries, because it found that wiring the libraries, especially the older marble buildings, would cost $2,000-$5,000 per terminal.
BROWNRIGG noted that, ten years ago, the project had neither the public policy nor the technology that would have allowed it to use packet radio in any meaningful way. Since then much had changed. He proceeded to detail research and development of the technology, how it is being deployed in California, and what direction he thought it would take. The design criteria are to produce a high-speed, one-time, low-cost, high-quality, secure, license-free device (packet radio) that one can plug in and play today, forget about it, and have access to the Internet. By high speed, BROWNRIGG meant 1 megabit per second and 1.5 megabits per second. Those units have been built, he continued, and are in the process of being type-certified by an independent underwriting laboratory so that they can be type-licensed by the Federal Communications Commission. As is the case with citizens band, one will be able to purchase a unit and not have to worry about applying for a license.
The basic idea, BROWNRIGG elaborated, is to take high-speed radio data transmission and create a backbone network that at certain strategic points will "gateway" into a medium-speed packet radio (i.e., one that runs at 38.4 kilobits per second), so that perhaps by 1994-1995 people like those in the audience could, for the price of a VCR, purchase a medium-speed radio for the office or home, have full network connectivity to the Internet, and partake of all its services, with no need for an FCC license and no regular bill from the local common carrier. BROWNRIGG presented several details of a demonstration project currently taking place in San Diego and described plans, pending funding, to install a full-bore network in the San Francisco area. This network will have 600 nodes running at backbone speeds, and 100 of these nodes will be libraries, which in turn will be the gateway ports to the 38.4-kilobit-per-second radios that will give coverage for the neighborhoods surrounding the libraries.
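To put the two radio tiers in perspective, a rough calculation helps. The Python sketch below is added for illustration and is not from the talk; it takes the tier speeds to be 38.4 kilobits per second and 1 megabit per second (the conventional units for such radios), uses decimal units, and ignores protocol overhead and contention.

```python
# Idealized time to move a one-megabyte file over each packet-radio
# tier BROWNRIGG describes. Assumptions (not from the source):
# 38.4 kbps medium-speed and 1 Mbps backbone rates, decimal units,
# no protocol overhead, contention, or retransmission.

ONE_MEGABYTE = 1_000_000  # bytes


def seconds_to_send(n_bytes, bits_per_second):
    """Idealized transfer time for n_bytes at a given line rate."""
    return n_bytes * 8 / bits_per_second


medium_speed = seconds_to_send(ONE_MEGABYTE, 38_400)     # ~208 s (about 3.5 min)
backbone = seconds_to_send(ONE_MEGABYTE, 1_000_000)      # 8 s

print(f"medium-speed radio: {medium_speed:.0f} s")
print(f"backbone radio:     {backbone:.0f} s")
```

Under these assumptions the neighborhood-level radios are adequate for text and catalog traffic, while the megabyte-size images discussed earlier in the day would still push users toward the backbone tier.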
BROWNRIGG next explained Part 15.247, a new rule within Title 47 of the Code of Federal Regulations enacted by the FCC in 1985. This rule challenged the industry, which has only now risen to the occasion, to build a radio that would run at no more than one watt of output power and use a fairly exotic method of modulating the radio wave called spread spectrum. Spread spectrum in fact permits the building of networks so that numerous data communications can occur simultaneously, without interfering with each other, within the same wide radio channel.
BROWNRIGG explained that the frequencies at which the radios would run are very short wave signals. They are well above standard microwave and radar. With a radio wave that small, one watt becomes a tremendous punch per bit and thus makes transmission at reasonable speed possible. In order to minimize the potential for congestion, the project is undertaking to reimplement software which has been available in the networking business and is taken for granted now, for example, TCP/IP, routing algorithms, bridges, and gateways. In addition, the project plans to take the WAIS server software in the public domain and reimplement it so that one can have a WAIS server on a Mac instead of a Unix machine. The Memex Research Institute believes that libraries, in particular, will want to use the WAIS servers with packet radio. This project, which has a team of about twelve people, will run through 1993 and will include the 100 libraries already mentioned as well as other professionals such as those in the medical profession, engineering, and law. Thus, the need is to create an infrastructure of radios that do not move around, which, BROWNRIGG hopes, will solve a problem not only for libraries but for individuals who, by and large today, do not have access to the Internet from their homes and offices.
During a brief discussion period, which also concluded the day's proceedings, BROWNRIGG stated that the project was operating at four frequencies. The slow speed is operating at 435 megahertz, and it would later go up to 920 megahertz. As for the high-speed frequencies, the 1-megabit-per-second radios will run at 2.4 gigahertz, and the 1.5-megabit-per-second radios at 5.7 gigahertz. At 5.7 gigahertz, rain can be a factor, but it would have to be tropical rain, unlike what falls in most parts of the United States.