Heimbürger, Anneli, Paula Silvonen, and Caj Sodergard. "A framework for automatic combination of media contents by minimising information redundancy. Case: Integrated publishing in multimedia networks." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. Information redundancy becomes a crucial problem on the Web when contents from different resources are automatically combined to produce a new WWW publication. Information retrieval, natural language processing and the latest WWW activities offer a challenging framework for approaching the information redundancy problem of automatically combined news articles. It seems reasonable that minimising information redundancy should be performed by a hybrid technique that combines some elements of these approaches. The purpose of this exploratory study is to introduce a theoretical and practical framework for clarifying the information redundancy problem in the case of integrated publishing.
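The abstract does not spell out the hybrid technique, but the core idea of minimising redundancy between automatically combined articles can be illustrated with a toy filter. The following sketch is not the authors' method; it uses simple word-overlap (Jaccard) similarity, and the function names and threshold are invented for illustration.

```python
# Toy redundancy filter: keep an article only if it does not overlap too much
# with an article already selected. This is an illustrative stand-in, not the
# hybrid IR/NLP technique proposed in the paper.

def tokens(text):
    """Lowercase word set of an article body."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def select_non_redundant(articles, threshold=0.6):
    """Keep each article only if it is not too similar to one already kept."""
    kept = []
    for art in articles:
        t = tokens(art)
        if all(jaccard(t, tokens(k)) < threshold for k in kept):
            kept.append(art)
    return kept
```

A real system would replace the token-overlap measure with weighted term vectors or sentence-level analysis, but the selection loop has the same shape.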
Goodman, David. "A year without print at Princeton, and what we plan next." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. Princeton has begun to receive some major titles as electronic only, beginning with the 2000 subscription year. We are doing this in appropriate cases where we have confidence in the stability, archiving, and performance of the publisher, and where the financial advantage is significant. Not only have there been no complaints, but there have been almost no comments. Apparently users now look for current journal articles online, using the paper versions only if online is not available.
Rowland, Fytton, and Iris Rubbert. "An Evaluation of the Information Needs and Practices of Part-Time, Distance Learning and Mature Students in Higher Education." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. This paper evaluates the information needs and practices of part-time, distance learning and mature students in Higher Education (HE) outside the Open University (OU). In recent years, the government has pointed out the importance of individuals engaging in lifelong learning, to remain competitive in a globalised economy, which draws increasingly on successful knowledge creation. In response, the HE sector in the UK offers a growing number of its programmes on a part-time and distance-learning basis for students to remain in full- or part-time employment while studying for further qualifications. We question whether the information-gathering practices of part-time and distance learning students best reflect the pedagogical concept of lifelong learning. Our results show that the majority of universities do not cater well for the specialised needs of part-time and distance learners, which leads to an increasing use of the Internet and employer resources as a substitute for traditional information channels. Students have major problems coping with the complexity of the WWW, and they made recommendations on how to improve existing information services in HE.
Sodergard, Caj, Matti Aaltonen, Christer Bäckström, Ari Heinonen, Timo Järvinen, Timo Kinnunen, Pauliina Koivunen, Sari Lehtola, Ville Ollikainen, Katja Rentto et al. "An integrated multiple media news portal." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001.

The emerging multiple media portals accessed by a variety of terminals require semi- and fully automatic procedures for managing the material. The IMU (Integrated Publishing in Multimedia Network) trial system, developed in this work, automates parts of the news content acquisition and processing work of the portal web master. The IMU active proxy server extracts metadata from news web sites and from television news broadcasts through video analysis, making automatic classification and linking of related articles and TV clips possible. The deeply integrated material is partitioned into news composites called channels, which can be personalised by the user. The automatically computed event and media calendar allows for a new type of integration of news and upcoming events. The news content is refined by setting up filters for monitoring the business environment. Through our interfaces for PC, TV, WAP and MP3 terminals, the user accesses the same news content at work, at home in the living room and on the move. To balance the automatic procedures with journalistic judgement, we created web tools for human editors to control and override the automatic operations and to create new content. The community feature enables groups to share news and to discuss topics internally. The trial, including close to 400 users with PC, TV and WAP terminals, showed a stable interest in the service. The typical user retrieved a few fresh articles, including a TV clip, at prime time in the evening. The television set user retrieved twice as much material as the PC user, but proportionally less news. The most popular channels for the TV user were TV programme schedules and TV clips. The community channels attracted the TV set users. Personalisation was used rarely, and searches even more seldom. The interviews showed that the system was well accepted, except for navigating the TV-IMU application with the remote controller.

Apps, Ann, and Ross MacIntyre. "CABRef: Cross-Referencing into an Abstracts Database." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001.

This paper describes a prototype system, CABRef, which provides seamless linking from a citation in the bibliography of an electronic journal article to a detailed abstract of the cited work, within a particular research domain. Publishers of journal articles will query CABRef to determine the URL links to embed in their article bibliographies which enable end-user cross-reference navigation. The abstracts within CABRef, and their discovery metadata, are encoded in XML and processed with XML-aware software. The CABRef XML abstracts database is updated using data exported from the CABI abstracts production database on a regular basis when abstracts are 'published', involving a simple extra step in the CABI abstracts production process.

Hernández, Francisca, Bob Mulrenin, and Robin Yeates. "Converting heterogenous cultural catalogues and documents to XML - strategies and solutions of the COVAX project." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. COVAX (Contemporary Culture Virtual Archive in XML) is an IST (Information Society Technologies) funded project, launched as part of the first IST call, corresponding to key action 3 (Multimedia content and tools: cultural heritage and digital content) in the action line III.2.3 (Access to scientific and cultural heritage) under the European Community Fifth Framework for research and development. The main objective of COVAX is to test the use of XML to combine document descriptions and digitised surrogates of cultural documents to build a global system for search and retrieval, increasing accessibility via the Internet to electronic resources, regardless of their location. The project duration is 24 months. It started in January 2000 and the partners include content owners (memory institutions) and technological partners (developers: public RTD centres and private companies). COVAX's approach to achieving its objectives is based on the conversion of existing records to homogeneously-encoded document descriptions of bibliographic records, archive finding aids, museum records and catalogues, and electronic texts, and on the application of XML (eXtensible Markup Language) and the various Document Type Definitions (DTDs) currently being used for library resource descriptions (MARC DTD), archives finding aids (EAD), museum materials (AMICO DTD) and electronic versions of cultural texts (TEILite). COVAX is designed to form a network of XML repositories as a distributed database.
This will be accessed as a single database and will act as a meta-search engine, offering access to book references, finding aids, facsimile images, museum items, and other resources. COVAX is constructing a multilingual user interface to access such data. The project does not intend to create standards but to rely on the adoption of existing standards and concepts (XML, DTDs already in use, http...), using the Z39.50 protocol as a conceptual basis for communication between the multilingual user interface and the meta-search engine and Dublin Core Metadata Element Set elements as cross-domain access points. The conversion process has proved to be a crucial one in the COVAX project and we therefore try to concentrate, in this article, on questions concerning our experiences of converting mainly library catalogues of different MARC or proprietary formats to XML.
Role, F., S. Chaudiron, and M. Ihadjadene. "Dealing with large documents in the Digital Library: a case study of ETD documents." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001.

This paper presents the first results of the CodeX project, which aims at conceiving and developing a software platform for visualizing electronic documents according to different contexts of use. We introduce and define the term multiple view, which is the core of our approach, and we present the technical choices we made to develop this user-oriented approach. We conclude by drawing some perspectives based particularly on the generation of multiple and dynamic views and the integration of language engineering tools. The CodeX project demonstrates that the XML format can successfully be used to provide electronic access to theses. While the project focuses on a prototype implementation using CodeX data, the approach is general enough to be adopted by any group implementing a digital library. Discussion of research results and ideas for future work appear in the conclusion.

Krichel, Thomas, and Simeon Warner. "Disintermediation of Academic Publishing through the Internet: An Intermediate Report from the Front Line." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. There has been a lot of discussion about the potential for free access to scholarly documents on the Internet. At the turn of the century, there are two major initiatives. These are arXiv, which covers Physics, Mathematics and Computer Science, and RePEc, which covers Economics. These initiatives work in very different ways. This paper is the fruit of collaboration between authors working for both initiatives. It therefore reflects the perspective of people working to achieve change, rather than an academic perspective of pure observation. We first introduce both arXiv and RePEc, and then consider future scenarios for disintermediated academic publishing. We then discuss the issue of quality control from an e-print archive point of view. Finally, we review recent efforts to improve the interoperability of e-print archives through the Open Archives Initiative (OAI). In particular, we draw on the workshop on OAI and peer review held at CERN in March 2001 to illustrate the level of interest in the OAI protocol as a way to improve scholarly communication on the Internet.
Li, Wei, and Rebecca Dahlin. "Distributed Parallel Multi-Channel Publishing System." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. With more and more electronic media appearing and becoming popularly accepted, the publishing industry is facing a greater challenge than ever before. The only solution to these problems is to extend traditional publishing to cover both the print publishing and the electronic publishing business. This paper proposes an integrated system solution to assist this transition. Distributed Parallel Multi-Channel Publishing, one of the outcomes of the DIP (Digital Information Processing) project, sponsored by SITI (Swedish IT Institute), aims to provide a comprehensive solution covering both paper-based and electronic media publishing, based on an advanced distributed software system architecture and field-related up-to-date standards. In the DIP project, the new standard of the news industry, NewsML, is adopted as the authoring, storing and transforming format. JDF (Job Definition Format) is partly employed to monitor and assist the control of the entire paper-based publishing process. As regards software system architecture, the Java-based distributed network technology Jini acts as the infrastructure of the system.
Costa, Sely, A.A. Silva Wagner, and Marcos B. Costa. "Electronic publishing in Brazilian academic institutions: changes in formal communication, too?" In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. The use of electronic facilities for informal communication amongst academic researchers is already commonplace. That is, according to a number of studies in different disciplines around the world, the adoption and use of electronic communication in the informal stages of the scholarly communication system is prevalent. Nevertheless, the same studies that testify to such ubiquity of informal electronic communication in the academic world also show that the adoption of these media in the formal stages of the communication process, though occurring, has been slow and cautious. The reasons for that are many and have also been extensively discussed. In reality, these studies point out that the adoption and diffusion of electronic media for formal communication are in progress and therefore need to be assessed in order to see to what extent, and in which way, it has been occurring. The study discussed in this paper aims to provide a picture of the adoption and use of electronic facilities for scholarly publishing in Brazil within the academic environment. As exploratory research, the study aims to see whether electronic publications have been produced by academic institutions - universities and learned societies - and, if so, which sorts of publications have mostly been provided, what their major features are in terms of format, content, availability, accessibility, etc., and whether they can provide any basis for the identification of disciplinary differences in publication patterns.
Since no sampling frames were available from departments and institutes within the Ministry of Education and the Ministry of Science and Technology hierarchy, a number of procedures were applied in order to define the sample. It actually consists of those academic institutions that have produced scholarly electronic publications - namely books, journals, serials, annual proceedings and secondary sources such as abstracts, reviews, etc. Some partial results show that, so far, electronic publications have been produced by Brazilian academic institutions, but the adoption of purely electronic formal communication amongst academic researchers seems to be far from becoming ubiquitous, as can be observed in relation to informal communication.
Stasch, Eckhard, Michael Reiche, Arved Hubler, and Klaus Kreulich. "Embracing the Museum Publication." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. This paper deals with a document management application which is being developed for and implemented in the Industrial Museum of Saxony. Whereas most of the common museum documentation projects focus on the issue of collection management, this project is content-centered. It intends to establish an XML-based visitor-information system, which enables the museum to provide on-screen visitor information 'on-the-floor' and in the Internet, as well as print output (PoD), automatically from the same data source. The paper will sketch the system's architecture and take a closer look at the conception and development of the document models, metadata and stylesheets, which allow the creation of rich and diverse multimedia presentations. The discussion will outline some key demands of a publication system which is tailored to the needs and expectations of museum visitors as well as the museum staff.
Schirmbacher, Peter, Susanne Dobratz, and Matthias Schulz. "High quality electronic publishing in universities using XML - the DiDi principle." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. The number of electronic documents which are to be handled, archived and made accessible to the public by university libraries, in cooperation with other institutions, is increasing rapidly. Using an SGML/XML-based publishing concept enables universities not only to unlock their information and research resources but also to set up new services, like document management or printing on demand with higher quality than conventional publishing concepts allow. There are three main arguments for using an SGML/XML-based publishing strategy not only for theses and dissertations at Humboldt-University Berlin but also for university publications like conference proceedings, public readings, technical reports and other material that is subsumed under the term "Grey Literature": 1. Archiving, 2. Retrieval and 3. Reusability of structured documents. This paper will describe the policy and technology that stands behind the local electronic publishing concept.
Hitchcock, Steve, and Wendy Hall. "How Dynamic E-journals can Interconnect Open Access Archives." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. Influential scientists are urging journal publishers to free their published works so they can be accessed in comprehensive digital archives. That would create the opportunity for new services that dynamically interconnect material in the archives. To achieve this, two issues endemic to scholarly journal publishing need to be tackled: decoupling journal content from publishing process, and defragmentation of the control of access to works at the article level. It is not necessary to wait for publishers to act. It was predicted that, enabled by links, e-journal publishing would become more distributed (Hitchcock et al. 1998). An editorially controlled new model e-journal that links material from over 100 distributed, open access sources realises that prediction. Perspectives in Electronic Publishing (PeP) combines the functions of a review journal with original materials and access to full-text papers on a focussed topic, in this case on electronic publishing, in a single coherent package that indexes and links selected works. The paper describes the main features of PeP and how it can be used, and considers whether PeP contributes to the scientists' objective of a dynamic and integrated scientific literature.
Mellon, Scott, and Aldyth Holmes. "Human and Economic Impacts of Electronic Publishing at the NRC Research Press: A Case Study." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. The NRC Research Press has a seventy-year history of provision of high-quality scientific journals, currently totalling 14 titles in various areas of science and technology. These titles were made available to our clients electronically in 1996 (see http://researchpress.nrc.ca) and we are currently engaged in a second phase of what will likely be an ongoing process of technical evolution. This paper focuses on the impacts of such rapid change on our organization and people, and discusses change management principles which have been found to be effective. Such impacts include the recruitment and retention of appropriately qualified staff and the costs associated with each, and the design of space to facilitate new working styles. We will explore the 'build-or-buy' conundrum and present our current philosophy on this issue. Strategies such as matrix management and outsourcing for managing technology workers in a hitherto non-technical operation, and techniques for learning and transferring technology to and among staff, will likewise be addressed. This paper will also discuss our co-evolution with our clients, the journals' readers, authors and editors, and ways in which their development and changing expectations have informed our products, systems and processes.
Jéribi, Lobna. "Improving Information Retrieval Performance by Experience Reuse." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. The goal of building a knowledge base, making "permanent" the users' evaluation experiences on search results, constitutes the main motivation of this paper. In information retrieval systems, an experience is a search instance, represented by a search context and a set of evaluated document results. In this paper, we propose a search context model, characterized by the user's profile and query. Similarity functions between users' profiles and search contexts are defined on the basis of the proposed model. The reuse process consists in expanding the initial user query with the document terms contained in the similar instances. In contrast with the Rocchio method, these documents are those validated beforehand by users having "similar" profiles to the current user and being in a "similar" search situation. Our proposition makes it possible to reduce system interactions with the user during his/her search session. These features were implemented in the COSYDOR (Cooperative System for Document Retrieval) project, based on Intermedia (Oracle 8i). Tests and evaluations are carried out using the test corpus of TREC (Text Retrieval Conference). The results show, for first search iterations, a significant improvement of performance compared to that of Intermedia. This work is carried out within the framework of a regional project, in which visually impaired users constitute our application case. This research represents a considerable contribution for these users, considering the difficulties they experience with current information retrieval systems (overly interactive systems, accessibility problems, etc.).
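The reuse process described above can be sketched in a few lines: expand a query with terms from documents that similar users validated in similar past searches. This is a hedged illustration only; the profile-similarity measure, thresholds and data structures below are invented stand-ins for the paper's actual context model.

```python
# Illustrative experience-reuse query expansion (not COSYDOR's implementation).
# An "experience" is modelled here as (profile, past_query_terms, validated_doc_terms).

def profile_similarity(p1, p2):
    """Toy profile similarity: fraction of shared interest keywords."""
    return len(p1 & p2) / max(len(p1 | p2), 1)

def expand_query(query_terms, user_profile, experiences, sim_threshold=0.5):
    """Add validated-document terms from experiences of similar users
    whose past query shares at least one term with the current query."""
    expanded = set(query_terms)
    for profile, past_query, doc_terms in experiences:
        if (profile_similarity(user_profile, profile) >= sim_threshold
                and set(past_query) & set(query_terms)):
            expanded |= set(doc_terms)
    return expanded
```

The key difference from Rocchio-style feedback, as the abstract notes, is that the expansion terms come from documents already validated by similar users in similar contexts, so no extra relevance judgements are requested from the current user.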
Krottmaier, Harald. "Improving the Usability of a Digital Library." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001.

The Journal of Universal Computer Science (J.UCS) was introduced in 1994 and after more than 7 years of operation the service is still up and running. Articles published on the server are categorized in several ways (e.g. in the ACM Classification Scheme) to simplify browsing and finding articles again. Nevertheless, users want to create their own view of the material and their own categories. Traditionally, a user creates some categories on the client side in a bookmark structure and stores the URIs of the corresponding papers in this structure. But the use of bookmarks binds the user to a specific client on one single machine, and so it is not possible to take full advantage of the restructured view. This is just one reason why it is helpful for the user to manage his personal view of the articles on the server side. In this paper we introduce a personal workspace for registered users of the Journal of Universal Computer Science. It is possible to create and modify a personal view, add personal and/or public comments on the articles, and use the workspace as a personal repository of published articles. We also show other advantages, like a personalized search scope, customization of the order of articles, and some other useful features of the system.

Muñoz-Martín, Guadalupe, Ignacio Aedo, and Paloma Díaz. "Interaction with electronic documents using the library metaphor." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. The information available on the Internet would be more useful if, in addition to being accessible, it were organized as users require. Internet users can feel lost if they cannot find information management and retrieval services like those provided by traditional libraries. In the belief that the use of the traditional library metaphor for documents over the Internet will improve their management, we have developed a model for digital libraries called the VILMA model, and a prototype that implements it. VILMA's user interface utilizes a spatial metaphor where the typical elements (such as books, shelves, etc.) are presented in a three-dimensional world. Moreover, VILMA is composed of the three main elements of a traditional library: information entities, metadata, and processes. In VILMA, the information entities that form the library's collection are documents on the Internet, and the metadata are those required by the Dublin Core Metadata Set. There are two types of processes in VILMA: public, which are all those related to the user, and technical, which have to do with traditional librarians' tasks. Public processes are subscription, identification, searching (analytical, expert, accidental) and customising, while technical processes are selection of documents, acquisition, classification and cataloguing, indexing, maintenance and notification. In this paper, we will present the VILMA model and its prototype.
Cavallin, Mats, and Carin Björklund. "Introducing electronic books at Göteborg University." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. Göteborg University Library has signed a contract with netLibrary providing users on the University's network access to 500 copyrighted electronic books. These electronic books represent "the third wave" at our Digital Library - the earlier waves of resource networking were bibliographic databases and electronic journals. The project is funded by the university board and was started in 2001 after testing and preparations during the fall of 2000. The only realistic alternative for the library was found to be netLibrary. The process started with a collection evaluation resulting in the creation of a list of titles appropriate for acquisition. Negotiations with netLibrary followed and a contract was signed in February 2001. The service was launched over the University's network in March 2001. After the local introduction of the netLibrary electronic books at the library, the project work will continue with the formation of an e-book consortium. A study of other e-book alternatives will also be carried through. The current project will be evaluated on the basis of end-user experience.
Peig, Enric, and Jaime Delgado. "Metadata interoperability for e-commerce of multimedia publishing material." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. Using metadata for referencing multimedia material is becoming more and more usual. This allows better ways of discovering and locating this material published on the Internet. Several initiatives for establishing standards for metadata models are being carried out at the moment, but each focuses on its own requirements when defining metadata attributes, their possible values and the relations between them. From the point of view of someone who wants to seek and buy information (multimedia content in general) in different environments, this is a real problem, because he has to face different metadata sets, and so must have different tools in order to deal with them. In this paper, we present a model for the interoperability of different metadata communities, where neither the publishers nor the customers have to be aware that they all may be working with different metadata models. We are mapping the semantics of different metadata models with the objective of not losing information when the user and the content provider use different metadata schemas. A "metadata agent" is used to carry out the interoperability mapping.
Baptista, Ana Alice, and Altamiro Barbosa Machado. "Metadata Usage in an Online Journal - An Application Profile." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. In a chaotic environment like the Internet, data are not enough anymore. The description of resources is fundamental in order to keep some structure and make Internet services more efficient and more effective. Metadata is, basically, data about data. However, metadata per se is also insufficient: with different kinds of services and software using different metadata and metadata structures, the problem persists. As happens in other areas, standardization is a keystone of metadata usage and implementation. Dublin Core (DC) and RDF are two recommendations from two different initiatives: the DCMI (Dublin Core Metadata Initiative) and the W3C (World Wide Web Consortium). In order to be widely used, the DCMI opted for broadly defining the DC semantics, while leaving the syntax issues open and undefined. This is the reason why RDF and DC match so well: RDF brings the syntax rules in which DC can be embedded. The RDF Schema, in its turn, makes it possible to design and implement, in a consistent way, project-specific metadata vocabularies not covered by DC or other standard metadata vocabularies. In this paper we will illustrate the use of DC, RDF and RDF Schema in the context of an online journal project: Informattica Online. An evaluation of this approach will also be presented.
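The DC-in-RDF pairing described above can be made concrete with a minimal record. The namespace URIs below are the real DCMI and W3C ones; the article metadata itself is invented, and this sketch does not reproduce the journal's actual application profile.

```python
# Build a minimal Dublin Core description embedded in RDF/XML:
# RDF supplies the syntax, DC supplies the element semantics.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"

ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(root, f"{{{RDF}}}Description",
                     {f"{{{RDF}}}about": "http://example.org/article/42"})
# Three of the fifteen DC elements, as an example record
ET.SubElement(desc, f"{{{DC}}}title").text = "An Example Article"
ET.SubElement(desc, f"{{{DC}}}creator").text = "A. Author"
ET.SubElement(desc, f"{{{DC}}}date").text = "2001-07-05"

xml_record = ET.tostring(root, encoding="unicode")
```

Because the DC semantics are syntax-neutral, the same three statements could equally be serialized in HTML meta tags; RDF/XML is simply the syntax the abstract pairs with DC.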
Chapman, Christopher, and David F. Brailsford. "Navigating a corpus of journal papers using Handles." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. For some years now the Internet and World Wide Web communities have envisaged moving to a 'next generation' of Web technologies by promoting a globally unique, and persistent, identifier for identifying and locating many forms of 'published objects'. These identifiers are called Universal Resource Names (URNs) and they hold out the prospect of being able to refer to an object by what it is (signified by its URN), rather than by where it is (the current URL technology). One early implementation of URN ideas is the Unicode-based Handle technology, developed at CNRI in Reston, Virginia. The Digital Object Identifier (DOI) is a specific URN naming convention proposed just over 5 years ago and is now administered by the International DOI organisation, founded by a consortium of publishers and based in Washington DC. The DOI is being promoted for managing electronic content and for intellectual rights management of it, either using the published work itself or, increasingly, via metadata descriptors for the work in question. This paper describes the use of the CNRI handle parser to navigate a corpus of papers for the Electronic Publishing journal. These papers are in PDF format and are held on our server in Nottingham. For each paper in the corpus a metadata descriptor is prepared for every citation appearing in the References Section. The important factor is that the underlying handle is resolved locally in the first instance. In some cases (e.g. cross-citations within the corpus itself and links to known resources elsewhere) the handle can be handed over to CNRI for further resolution.
This work shows the encouraging prospect of being able to use persistent URNs not only for intellectual property negotiations but also for search and discovery. In the test domain of this experiment every single resource, referred to within a given paper, can be resolved, at least to the level of metadata about the referred object. If the Web were to become more fully URN aware then a vast directed graph of linked resources could be accessed, via persistent names. Moreover, if these names delivered embedded metadata when resolved, the way would be open for a new generation of vastly more accurate and intelligent Web search engines.
Delgado, Jaime, and Isabel Gallego. "Negotiation of copyright in e-commerce of multimedia publishing material." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. Intellectual Property Rights (IPR) management in e-publishing is becoming a key issue for its full deployment. There is a real and urgent need to solve the problems associated with multimedia content copyright control that the spread of the Internet and the WWW has created. Although many technical solutions to the problems already exist, there is still a lack of an efficient and agreed business model for e-commerce in e-publishing. Apart from introducing copyright information in the content for sale, there are other new specific needs related to IPR management in e-publishing. One of them is the possibility of negotiating the copyright conditions when buying multimedia material. This material could be bought directly by end-users, or, more interestingly, by other publishers. An example of this situation could be the negotiation between an intellectual rights holder and a prospective publisher. In the paper, we present a specific approach to this IPR need, which is based on the use of metadata for representing IPR features, and on the specification of a protocol for negotiation of copyright at the moment of purchase. Furthermore, the paper introduces the issues associated with IPR negotiation in a multi-broker environment.
Warwick, Claire, and Celine Carty. "Only Connect: A study of the problems caused by platform specificity and researcher isolation in Humanities Computing Projects." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. The paper presents work performed in a collaboration between the Department of Information Studies and the Humanities Research Institute at the University of Sheffield. In it we investigate problems caused by the use of platform-specific software by humanities computing projects. The methodology for the project was that of case studies focusing on projects in the history of natural history and science: The Hartlib Project (University of Sheffield), The Darwin Correspondence Project (University of Cambridge), The Robert Boyle Project (Birkbeck College, London), The Mueller Correspondence Project and The Newton Project (Imperial College, London). All projects had encountered problems with the use of platform-specific formats, and had decided to use SGML as a way of providing a platform-independent format that would also help to preserve the material independent of hardware platforms. This was influenced either by advice from other scholars, contact with humanities computing centres, or with national advisory bodies. In particular, projects considered adopting the DTD developed by the Text Encoding Initiative. Despite this enthusiasm, a number of problems were encountered with the type of material to be encoded. All but one project therefore chose not to adopt it in its original form. The paper argues that problems were caused by researchers working in isolation from each other. This can cause the replication of problems that others have already worked on, and expertise not being shared between projects. Documentation was often not kept up to date because those working on projects had no time to do this.
The paper contends that there is a role for outside reportage in the collection and presentation of documentation and its dissemination throughout the humanities computing community.
Ritz, Thomas. "Personalized Information Services – an Electronic Information Commodity and its Production." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001.

Personalization has become a buzzword in recent months when talking about the Internet and the World Wide Web. Nowadays, a lot of sites promise personalization features, but the services provided differ considerably. Most of the recent publications treat personalization as a marketing tool, which helps to address customers in a one-to-one marketing manner. Apart from this, personalization and more personalized information services can be seen as unique information commodities supplied by content providers. In order to maximize benefits, the production tools have to be fully integrated into the production process used for existing mass publications. In this paper, we present a definition and characterization of personalization and the resulting information goods, and sketch an integrated production framework applicable mainly for publishers to provide personalized information services as a unique product in their portfolio. Finally, the paper discusses usability aspects of personalized information services.

Montgomery, Carol Hansen, and Linda S. Marion. "Print to Electronic: Measuring the Operational and Economic Implications of an Electronic Journal Collection." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. As digital libraries move from demonstration projects to the real world of working libraries, it is critical to assess and to document the impact of the shift. This paper reports the methodology and initial results of an Institute of Museum and Library Services (IMLS) funded research study of the operational and economic impact of an academic library's migration to an all-electronic journal collection. Drexel Library's entire print and electronic journal collections and associated staff are the test bed to study three key research questions: (1) What is the impact on library staffing needs? (2) How have library costs been reduced, increased and/or re-allocated? (3) What other library resources have been affected? We are using quantitative and qualitative methods to answer the research questions operationalized in the following tasks: (1) Measure the staff time, subscription costs and other costs related to each activity required to acquire and maintain print and electronic journals. (2) Compute the per-volume, per-title, and per-use costs of acquiring and maintaining print and electronic subscriptions. (3) Study all impacted library services, including changes in reference service, document delivery, and instructional programs. Initial results of measuring staff time indicate that the Information Services and Systems Operation departments constitute the majority of personnel costs for electronic journals. Technical Services and Circulation account for the majority of staff costs for print journals. Per-title subscription costs appear to be substantially lower for electronic titles obtained through aggregator collections.
Roussey, Catherine, Sylvie Calabretto, and Jean-Marie Pinon. "SyDoM: A Multilingual Information Retrieval System for Digital Libraries." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001.

In this paper, we present a multilingual information retrieval system based on a knowledge representation model. This system allows document indexing and information retrieval in a multilingual document collection where documents are written in different languages, though each individual document may contain text in only one language. The underlying model makes it possible to describe the semantics of documents in a multilingual context. This model, called the semantic graph, is an extension of Sowa's model of conceptual graphs in which different vocabularies are available. Indeed, in this model two kinds of knowledge are identified: domain knowledge, which organises domain entities into two hierarchies of types (concept types and relation types); and lexical knowledge, which associates terms belonging to a vocabulary with concept types or relation types. Thus, the same semantic graph can have different representations depending on the vocabulary chosen. For example, the French representation of a semantic graph uses the French labels of types to display each graph component, and the English representation of the same graph uses the English labels of types to display the graph. Our proposition has been validated in the logical information retrieval system SyDoM. The system is dedicated to the needs of virtual libraries for managing XML documents. Thanks to the semantic graph model, SyDoM provides several functionalities for multilingual information retrieval. A first evaluation gives better results than traditional information retrieval systems.

Gligorov, Zoran. "The Role of the First Macedonian Internet portal "Macedonia Search" in Enhancing of the Development of Digital Communication and the Associated Problems." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. One of the often-overlooked aspects of today's modern electronic information spread is that not all people speak English. Since software applications are mainly made for areas where they sell best, smaller and poorer nations with a local language and alphabet are often ignored and forgotten by the software industry. The situation today is that people who do not speak English and would like to use their own language are somewhat deprived and, as a result, sometimes quite reluctant to start using electronic communication. This paper concerns itself with the sources, consequences and means to solve this problem. The issue is discussed with regard to real-life examples of situations involving the use of the Macedonian language, and a comparative analysis of the solutions applied. The examples concentrate on Internet publishing using the Macedonian language and its effects and popularity among Macedonian-speaking individuals. The paper provides an analysis of the current situation from which conclusions applicable to other cultures can be extracted.
Grygierczyk, Natalia, and Bas Savenije. "The Roquade project: an infrastructure for new models of academic publishing." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. Due to a number of problems, the traditional scientific journal has become an obstacle to efficient scientific communication. Many initiatives have been started to realise alternative ways of scientific publishing using information technology. In various disciplines, however, a relatively large number of scientists are still reluctant to make use of completely new ways of publishing. The extraordinary aspect of the Roquade project is that it offers a variety of possibilities. Together they constitute an expeditious way of gradually changing the publication behaviour of scientists. This project, initiated by the university libraries of the Dutch universities of Delft and Utrecht, aims at creating an infrastructure that combines the swiftness of publication, which hitherto could only be realised by grey publishing, with quality judgement without the serious delay of the traditional review procedures. Roquade offers a wide range of facilities to a broad audience, based on a common organisational and technical infrastructure. So far, 18 journals and other types of scientific work have been published within the Roquade infrastructure, and the project continues until 2002. In the paper, both the project philosophy and organisation are described, as well as the preliminary results, experiences and best practices, technical XML-based choices, web marketing and, of course, the needs and demands of authors and readers. The advantages of co-operation and the role of the university library in facilitating scientific communication are dealt with too. An outline of the Roquade business model after the project stage will also be presented.
Vitiello, Giuseppe. "The State and the Content: electronic publishing and policy measures." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001.

At the beginning is the author, a brilliant individual or group of people who produce, experiment, innovate. Once the work is there, what happens next? The relation between the creator and the created thing is quite obvious. In spite of that, experience shows that the created thing often has an independent life and may be associated, grouped and distributed with other related forms of content and objects. Policies for such dissemination are usually market-driven and follow the law of supply and demand and the innovative impulse of individuals and firms. After all, success may be the result of blunt necessity or mere fate. Experience also shows that governmental action can be applied usefully as a way of regulating, mediating, and balancing the effects of a purely market-oriented approach. Policy measures may be direct and affect market mechanisms (in many countries, for instance, printed matter benefits from an advantageous fiscal regime). Sometimes, governmental action can be equally successful when it is indirect and public authorities act strategically as major owners, providers and users of information.

Sotirova, Kalina Spasova. "Training in Electronic Publishing for Cultural Heritage Specialists: The Bulgarian Experience." In Electronic Publishing '01 - 2001 in the Digital Publishing Odyssey: Proceedings of an ICCC/IFIP Conference. ELPUB. Canterbury, UK: University of Kent, 2001. The issue at the heart of this article is an analysis of the actual condition of the publishing sector in Bulgaria and some predictions about the expected directions of its future development in an electronic environment. As the fourth great technique of writing, electronic text becomes a unique medium for limitless exchange of human knowledge, not only on the old continent, but on a world-wide scale. Background: E-publishing in Bulgaria is not widespread, and most publishers believe that the basic reason for this is financial. In spite of this, many publishers show an interest in publishing in the new multimedia, recognising it as a priority type of publishing, or intend to produce e-products in the future. What are the facts? Most Bulgarian publishers (84%) produce only printed materials, 11% both printed and electronic, and 1% only electronic products. The book sector in Bulgaria survives with incredible difficulties in spite of the increasing number of private publishers (1,800 in 1998). The first Bulgarian e-publications were on display at the Sofia Book Fair in 1998, but most of them were produced with foreign participation. Since then, private companies have shown growing interest in investing in this field, and the tendency for future development is positive.