Scientific Information Retrieval
SCIENTIFIC INFORMATION RETRIEVAL is generally meant to cover the entire problem of recovering from recorded scientific and engineering knowledge those particular pieces of information that may be required for particular purposes at particular times. In this context it usually includes a wide range of techniques and devices used to collect, index, store, retrieve, and disseminate specialized information resulting from scientific research and development. Scientific information may be text- or multimedia-based (for example, sound, video, or imagery combined with text). Scientific information retrieval systems are generally automated by computers and computer networks.
Vannevar Bush was a pioneer in the development of automated retrieval systems. In his seminal article "As We May Think," published in the July 1945 issue of the Atlantic Monthly, he proposed the ground-breaking concept of an easily accessible, individually configurable automated storehouse of knowledge that would operate like the human brain. Bush's article inspired subsequent generations of information systems researchers, including Ted Nelson and Douglas Engelbart.
In the early 1960s Nelson researched methods by which computers could manipulate text. He dubbed this research "Xanadu," and in a 1965 paper on Xanadu he coined the term "hypertext" for a method that allows computer users to retrieve information contained in separate text documents. Today, almost all documents found on the World Wide Web are hypertext. Engelbart's contribution to the field of information retrieval was the oN-Line System (NLS), which he developed in the 1960s at the Stanford Research Institute. NLS was the first computer system to use a mouse; Engelbart invented the mouse so that users could point and click on hypertext links displayed on the system's computer screen.
From the 1960s until the 1990s, computers could store and retrieve information only at individual research sites. Scientists had limited ability to access information stored on a computer at a different facility, or even on a separate system within the same facility. Overcoming these limits on scientists' access to relevant computerized information led Tim Berners-Lee to invent the World Wide Web. In the early 1980s Berners-Lee was hired as a computer consultant by CERN, the European Organization for Nuclear Research, in Switzerland. To help researchers access the center's vast store of information, he first built the Enquire system, which used hypertext and could access documents stored in all of CERN's various information systems.
Berners-Lee wanted to expand the scope of Enquire's capabilities so it could retrieve information stored at other research facilities. He was aware of the Internet and how it was enabling universities throughout the world to exchange information in a limited way through text file transfers and electronic mail. By adapting the Enquire system for use on the Internet, Berners-Lee invented the World Wide Web, which has revolutionized more than just the field of scientific information retrieval.
The World Wide Web uses hypertext, hyperlinks, and the Hypertext Transfer Protocol (HTTP) for retrieving information, and uniform resource locators (URLs) for uniquely identifying pieces of information. Its retrieval capabilities are enhanced by search engines that use keywords and indexes for storing, identifying, and retrieving information.
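The keyword-and-index approach used by search engines can be sketched as a toy inverted index. The document texts and function names below are illustrative assumptions, not any particular engine's implementation; real engines add ranking, stemming, and far larger indexes.

```python
from collections import defaultdict

# A tiny hypothetical document collection.
documents = {
    "doc1": "hypertext links connect web documents",
    "doc2": "search engines index web documents by keyword",
}

# Build an inverted index: each word maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(*keywords):
    """Return the documents containing every keyword (boolean AND retrieval)."""
    results = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*results) if results else set()
```

For example, `search("web", "documents")` returns both documents, while `search("hypertext")` matches only the first; a query term absent from the index yields an empty result set.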
The World Wide Web is very effective at delivering large amounts of information to its users. However, it is not as effective at retrieving the specific information a user requests. As of the early 2000s, Berners-Lee and other researchers at the World Wide Web Consortium, headquartered at the Massachusetts Institute of Technology, were trying to solve this problem by developing the Semantic Web, an enhanced version of the World Wide Web. In the Semantic Web, a user submits a request for information to a search program called an intelligent agent. Semantic Web documents give the information they contain a well-defined meaning, enabling the intelligent agent to determine the relevance of that information to the user's request. Thus, the Semantic Web would be more efficient at retrieving the specific type of information a user may need.
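The idea of giving information a well-defined, machine-readable meaning can be illustrated with a toy store of subject-predicate-object statements. This is a simplified sketch under assumed data, not the actual Semantic Web standards (such as RDF) or a real intelligent agent:

```python
# Hypothetical statements, each a (subject, predicate, object) triple.
triples = [
    ("AsWeMayThink", "writtenBy", "VannevarBush"),
    ("AsWeMayThink", "publishedIn", "AtlanticMonthly"),
    ("WeavingTheWeb", "writtenBy", "TimBernersLee"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard.

    An agent could answer "what did Vannevar Bush write?" by
    querying query(predicate="writtenBy", obj="VannevarBush").
    """
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]
```

Because each statement carries explicit meaning rather than free text, the matching is exact: the agent retrieves precisely the statements relevant to the request instead of ranking loosely related documents.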
BIBLIOGRAPHY
Berners-Lee, Tim, and Mark Fischetti. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor. San Francisco: Harper San Francisco, 1999.
Maybury, Mark T., ed. Intelligent Multimedia Information Retrieval. Cambridge, Mass.: MIT Press, 1997.
Sparck Jones, Karen, and Peter Willett, eds. Readings in Information Retrieval. San Francisco: Morgan Kaufmann, 1997.
John Wyzalek
See also Internet.