With the emergence of large professional databases, constructing, managing, and providing effective access to such archives, and to collections of archives, become crucial tasks. Malcolm Getz conceives of electronic agents as third parties, or intermediaries, between publishers and libraries or publishers and end-users. He imagines ways such agents might "acquire rights from publishers and sell access to libraries," archive materials from disparate sources, maximize electronic storage resources, diversify payment structures, and develop "superior search interfaces and engines" across meta-hypertexts ("Electronic Publishing in Academia").
In the long run, there is little doubt that the electronic agent model will emerge as a growth industry. Getz boldly states that the role of "electronic distribution agents" is fast becoming more important than that of printers, for two reasons: such agents could solve serious "congestion" problems (an economy of scale), and they could organize journals acquired from publishers, along with "other electronic materials into a coherent database." In fact, few publishers, academics, or even professional organizations have the research and development funds to create innovative resources for managing huge volumes of digital materials efficiently. While lone web editors can spend the time necessary to filter through raw archives as they construct specialized ex post websites, what they produce will always be limited by their own choices and perspectives on the materials, and by exclusionary biases that may not take into account cross-disciplinary, divergent, or innovative study.
Unlike web editors, electronic agents, or agencies, are likely candidates to develop dynamic hypertext linking systems capable of searching vast bodies of disciplinary work to produce much more than a list of unrelated hits. Such archival engines could literally assemble annotated meta-hypertexts on the fly. A search for string theory in quantum physics, for instance, would identify several hundred interrelated articles, book chapters, and webbed resources; rate them on the basis of user-specified criteria; select a predetermined number of texts; embed specified patterns of hypertextual links among the selected texts; and present the searcher with recommended strategies for reading based on the search criteria and on previous search requests by that searcher or by other users. Packaged with this meta web engine would be a range of archival software that would enable users to edit, annotate, and store these on-the-fly hypertexts, and even to submit their results and annotations for other users to access. Every new search would produce a new pattern of collected texts and hypertext link structures.
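The search-rate-select-link pipeline described above can be sketched in a few lines of code. The sketch below is purely illustrative: the names (`Article`, `assemble_metatext`), the keyword-overlap scoring, and the linking rule are all our own assumptions, since no such engine exists in the detail the passage imagines.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    keywords: set
    links: list = field(default_factory=list)  # titles of linked articles

def assemble_metatext(corpus, query_terms, criteria, limit=5):
    """Hypothetical on-the-fly meta-hypertext assembly: search a corpus,
    rate hits against user-specified criteria, select a predetermined
    number of texts, and embed links among them."""
    # 1. Search: keep articles matching at least one query term.
    hits = [a for a in corpus if a.keywords & query_terms]
    # 2. Rate on user-specified criteria (here, weighted keyword overlap).
    def score(a):
        return sum(criteria.get(k, 1) for k in a.keywords & query_terms)
    hits.sort(key=score, reverse=True)
    # 3. Select a predetermined number of texts.
    selected = hits[:limit]
    # 4. Embed hypertextual links among texts that share keywords.
    for a in selected:
        a.links = [b.title for b in selected
                   if b is not a and a.keywords & b.keywords]
    # 5. The returned order doubles as a recommended reading strategy.
    return selected
```

The returned list is both the selected texts and a naive "recommended strategy for reading" (highest-rated first); a real engine would also fold in the searcher's prior requests.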
While the texts themselves would remain on the host server until the user called them up by following a hypertextual node, the on-the-fly linking structure would reside on the user's computer and could be saved for future access to the same search variables. The user could also save as many versions of the same type of search as she wanted, comparing them for usefulness or cross-referencing multiple search patterns for more inclusive results. If the user found that too many hypertextual links made the text harder to read, there would be facilities for hiding link markers while still keeping the links active. In addition, multiple versions of the same article in varying lengths (from abstract, to five-page summary, to full article) would make it possible for readers to choose the level of specificity of a given search or reading session.
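The division of labor sketched here, texts on the host server, link pattern on the client, can be modeled as a small local data structure. Everything in this sketch (`LinkStructure`, `cross_reference`, the JSON format) is a hypothetical illustration of the idea, not a description of any existing system.

```python
from dataclasses import dataclass
import json

@dataclass
class LinkStructure:
    """Client-side record of an on-the-fly hypertext: only the link
    pattern lives locally; the texts stay on the host server and are
    fetched by URL when the user follows a node."""
    query: str
    links: dict                  # source URL -> list of target URLs
    markers_visible: bool = True

    def hide_markers(self):
        # Markers disappear from the display, but the links themselves
        # remain active for navigation.
        self.markers_visible = False

    def save(self, path):
        # Persist the pattern for future access to the same search.
        with open(path, "w") as f:
            json.dump({"query": self.query, "links": self.links,
                       "markers_visible": self.markers_visible}, f)

def cross_reference(a, b):
    """Merge two saved search patterns for more inclusive results."""
    merged = dict(a.links)
    for src, targets in b.links.items():
        merged[src] = sorted(set(merged.get(src, [])) | set(targets))
    return LinkStructure(query=f"{a.query} + {b.query}", links=merged)
```

Saving several `LinkStructure` objects for the same query would give the user the multiple comparable versions the passage describes.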
Furthermore, these engines might be programmed to remember who searched for what patterns and to track who was using the system at any given time. Searchers who happened to be looking for similar materials might expect to be notified automatically that like-minded researchers were making simultaneous inquiries. Spontaneous meetings in cyberspace, as often occur in physical libraries, might become commonplace. This process of socially constructing knowledge in a "professional working space" reflects what we mean by a "living space." Shifting, organic libraries (archives), meeting rooms, and email exchanges form a living web in which academics can produce, play, and work. Of course, certain users would undoubtedly want to do their work undisturbed and would therefore disable the alert function for given searches. At other times, these same researchers might perform a series of searches precisely in order to identify colleagues with whom to discuss their own work in progress.
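The alert mechanism imagined here amounts to matching concurrent searchers by the overlap of their queries, with an opt-out for those who wish to work undisturbed. A minimal sketch, assuming a hypothetical session table and Jaccard similarity of search terms (both our inventions):

```python
def find_like_minded(active_searches, user, threshold=0.5):
    """Hypothetical alert function: list other current users whose
    searches overlap with this user's, honoring opt-outs on both sides."""
    session = active_searches[user]
    if not session["alerts_enabled"]:
        return []  # this user has disabled alerts to work undisturbed
    mine = session["terms"]
    matches = []
    for other, s in active_searches.items():
        if other == user or not s["alerts_enabled"]:
            continue  # skip self and users who opted out
        # Jaccard similarity of the two sets of search terms.
        overlap = len(mine & s["terms"]) / len(mine | s["terms"])
        if overlap >= threshold:
            matches.append(other)
    return matches
```

A notification layer on top of this function would produce the "spontaneous meetings in cyberspace" the passage describes.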
A wide range of on-the-fly meta web software would surely emerge, ranging from small Java programs to full-scale systems run by data corporations that host huge stores of disciplinary and cross-disciplinary knowledge. The larger-scale systems hosted by third-party electronic agents could charge institutional or pay-per-search fees. Users would be paying for speed, professional quality of hits, proficiency of the software, specialized data sources, and insurance against non-professional users simply playing with the system.
Such "smart" search engines would save incredible amounts of time and might replace some of the functions of web editors. They would also challenge many of the assumptions about intellectual property that we currently hold sacred. As Getz suggests, any form of large-scale integration of databases might make "common ownership" of intellectual property "necessary to achieve the control and commonality necessary for high levels of integration." On the other hand, with faster cable modems and Internet 2 infrastructures currently in development, distributed information systems might soon provide the bandwidth needed to overcome the need for centralized databases. Unlike the modernist libraries and museums of the turn of the twentieth century, which worked to collect all that had been written in Western culture, a postmodern distributed system would find better ways to manage previously unimaginable volumes of digital knowledge.
The roles that electronic agents would play would, of course, extend well beyond the development of smart hypertext publishing systems. As Getz suggests, publishing houses are more likely to rely on electronic agents to manage the storage, access, and value-added resources that libraries and end-users would expect. But, like most growth sectors within information management specializations, electronic agents would emerge in all shapes and sizes. Large conglomerate services would likely make corporate deals with high-profile publishers within specialized fields to expand the breadth of their searchable archives. Smaller companies would specialize in storage, distributed management systems, software development, or specific kinds of value-added services for archives. Furthermore, electronic agents are likely candidates to establish corporate arrangements with communications services that couple information management (the problems of contemporary technical work) with human interaction (the problems of living space in cyberspace). Certainly other models will emerge that complement the four we discuss. But in order for any of the archive models we have described to develop into truly heterotopic sites online, the "abnormal discourse" of online archival systems that we propose must result in new models created by the new language.
But in the hope of actually influencing a shift in the "perceived truth[s]" about the non-profit academic publishing industry, we offer a conversion scenario for the composition sub-field of computers and writing, drawing on the four transition models outlined previously. By describing how our own technologically savvy field could develop a disciplinary raw archive and thereby wrest control of our intellectual property from the exclusive hands of print publishers, we hope to offer one way of responding to Stevan Harnad's lament over the lack of a "credible transition scenario" from a print archival culture to an electronic one ("Paper House of Cards").