and Kwong Bor Ng
We report on a study concerned with understanding people's adaptation to new information searching environments. We have investigated how people with varying degrees of familiarity with information retrieval systems, and varying models of the information retrieval process, interacted in an information retrieval system which did not support exact match retrieval with structured queries, but which did support best match ranked output retrieval with unstructured queries and automatic relevance feedback. Our results include a classification of "normal" information retrieval strategies, the description of several adaptation strategies, and the relationships between type and strength of people's mental models of information retrieval and their searching behaviors in the new information retrieval context. An important aspect of our study is its methodology for understanding and relating cognitive contexts to information seeking behaviors.
We are concerned with the issue of how people will understand and adapt to the new information retrieval (IR) or searching environments which we expect will become the standard for both experienced searchers and ordinary end users of IR systems in the very near future. We consider this issue both in order to understand the nature of people's information seeking goals and the ways in which they attempt to reach them in their interactions with IR systems, and in order to influence the design of IR systems so that they better support people's goals and behaviors. The specific goal of the study which we report here is to understand how people's interactions in the new IR system environments are influenced by the cognitive contexts (that is, their previous searching experiences and their routine information seeking strategies; in summary, their mental models of IR) which they bring to the new searching situation.
There is reasonably substantial understanding of the searching behaviors, strategies, heuristics, and so on of expert searchers in complex Boolean information retrieval systems (e.g. Bates, 1979; Fidel, 1991; Harter, 1985; Saracevic, et al., 1988; Spink, 1993), and some knowledge of non-expert or novice searchers in less complex environments such as OPACs (e.g. Dalrymple, 1990; Mischo & Lee, 1987). All of this knowledge is, however, of how people cope with, and use, very specific types of IR system. That is, exact match systems which encourage the use of highly structured Boolean queries; which return, in response to such queries, sets of documents (or just numbers of documents) retrieved by the query which are ordered only by formal characteristics of the documents such as date or author; which offer minimal support for query reformulation; which often depend upon highly controlled and structured indexing vocabularies or thesauri; and, in which documents are usually surrogates such as titles or abstracts, rather than full texts.
Although this is certainly an important class of systems, we note that it is a class which is rapidly being replaced by radically different types of IR systems. This new class of systems can be roughly characterized as follows. They are: best match systems which encourage the use of unstructured "natural language" queries; which return, in response to such queries, lists of documents ranked according to their probability of relevance to the query; which offer facilities for automatically or semi-automatically reformulating queries through relevance feedback; which depend largely upon uncontrolled indexing; and, which are based upon, and can display, the full texts of documents, rather than surrogates such as abstracts. Lest it be thought that we are describing here only experimental IR systems, we note that this description roughly fits many large-scale commercially available IR systems, including: TARGET from Dialog/Knight-Ridder; FREESTYLE from Lexis/Nexis; WIN from WestLaw; DowQuest from Dow-Jones; AppleSearch; and PLS. Furthermore, many of the search engines on the World Wide Web (WWW) have several, or all, of these features. We can anticipate that, given the rapid penetration of this type of system in the database host, standalone system and networked environments, they will soon become the norm, rather than the exception.
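The distinction between the two classes of systems can be made concrete with a minimal sketch. This is an illustration only, using an invented three-document collection and naive term-overlap scoring; none of the commercial systems named above is claimed to work this way.

```python
# Sketch contrasting exact-match Boolean retrieval (unordered set output)
# with best-match retrieval (ranked list output). Illustrative only; the
# documents and the scoring function are invented for this example.

docs = {
    1: "lower blood pressure through diet and exercise",
    2: "side effects of blood pressure medication",
    3: "exercise benefits for the heart",
}

def boolean_and(terms):
    """Exact match: the unordered set of documents containing ALL terms."""
    return {d for d, text in docs.items()
            if all(t in text.split() for t in terms)}

def best_match(query):
    """Best match: every document ranked by how many query terms it shares."""
    q = set(query.split())
    scored = [(len(q & set(text.split())), d) for d, text in docs.items()]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

print(boolean_and(["blood", "pressure", "exercise"]))  # {1}
print(best_match("lower blood pressure"))              # [1, 2]
```

The point of the contrast: the Boolean query either matches a document or it does not, and document 2 is lost entirely; the best match query returns a ranked list in which partially matching documents still appear, lower down.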
Unfortunately, there is almost no substantive, empirically-based knowledge of how any class of users, or potential users of these new systems, actually understands and uses their features and facilities, and to what effect. One of the few exceptions is the research program concerned with OKAPI at The City University, London (Hancock-Beaulieu & Walker, 1992; Hancock-Beaulieu, Fieldhouse & Do, 1995), which has investigated, in particular, the use of relevance feedback in an OPAC environment; another is the set of studies conducted within the "Interactive Track" of the TREC program (see the relevant papers in Harman, 1995; Harman, 1996), which have tended to concentrate upon the effectiveness of particular experimental system features in interactive searching. Although these studies have begun to give us some characterization of the effectiveness of various system features in these contexts, and some descriptions of the use of these features, they have typically not been able to address the users' understandings of the features, and why they used them in the way that they did. We attempt, in this paper, to address precisely these issues. The results that we report here were inspired by our TREC-3 study, (Koenemann, et al., 1995) and are based on analysis of our TREC-4 investigations (Belkin, et al., 1996).
In our TREC-3 study (Koenemann, et al., 1995), in which we observed ten experienced searchers in their interactions while searching on five different topics each in a system which supported both structured and unstructured searching within a best match, ranked output, full text retrieval environment, we noted that there appeared to be a relationship between the kind and strength of the model of IR held by the searchers, and their behavior (both in terms of search effectiveness, and patterns of interaction). We were able to identify three different ways in which the searchers adapted to this, to them new, searching environment. One was to use the new system features in ways which supported routine searching strategies; another to develop new searching behaviors which matched the capabilities of the system; a third to attempt to use both routine strategies and new features in combination. The first two seem to have been more effective than the third, in terms of performance, but because in this study we had relatively little information about the searchers' models of IR, and because the searchers were relatively homogeneous in terms of their experience in IR systems, we were unable to identify factors which might have led to a person's taking up one or another of these methods, or which might otherwise have influenced their performance or behaviors.
Borgman (1986), among others, has demonstrated that people's mental models of IR can significantly affect their performance and behavior in interaction in an IR system. On the basis of these arguments, and on the basis of our TREC-3 results, we designed our TREC-4 study explicitly to consider the two related issues of:
the relationship of one's mental model of IR to behavior in a new type of IR system; and how people with little or no experience of best match, ranked output, automatic relevance feedback IR would understand and use these features.
For a complete description of the methods used in this study, see Belkin, et al. (1996). Here we give a relatively brief outline of the methods, with special emphasis on how we tried to discover and analyse mental models of IR, and IR searching behavior. Fifty searchers of varying degrees of experience were recruited to perform two searches each on topics which were assigned to them (there were 25 topics in total, which led to four different searches for each topic). An appointment was made with each volunteer to come to the experiment site for a two and one-quarter hour session. Each session consisted of: an initial searcher questionnaire, eliciting data about their previous searching experience and other background characteristics; a pre-search interview with the searcher, in which we elicited information about their normal searching behaviors; a hands-on tutorial in the use of the new IR system; a practice search; the two experimental searches, each lasting a maximum of 30 minutes; search evaluation questionnaires following each search; and, an exit interview in which we elicited information about their searching behavior. The searchers were asked to "think aloud" during their searches, and this verbal protocol was recorded on videotape, together with the interaction on the monitor, and simultaneously with a complete log of the interaction with the IR system.
The system on which the searches were conducted was a version of INQUERY (Callan, Croft & Harding, 1992), a best match retrieval engine which offers ranked output and automatic relevance feedback. Because of our interest in the relationship between previous experience of IR and new system features, this version did not support Boolean (or any other form of) structured searching, apart from the ability to specify phrases. Input was rather unstructured, natural language queries. In addition, the system offered the ability to: save and reuse queries; save documents; mark documents as relevant for automatic relevance feedback; scroll through the full text of any specified documents; and identify query terms (including those added by relevance feedback) through highlighting in the full text display of retrieved documents.
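The general idea behind automatic relevance feedback of the kind the system offered can be sketched as query expansion from the documents a searcher marks relevant. The following is a simplified, Rocchio-style illustration under our own assumptions (invented documents, raw term counts); it is not a description of INQUERY's actual feedback algorithm, which is based on inference networks.

```python
from collections import Counter

# Simplified sketch of relevance feedback as query expansion: terms that
# occur frequently in documents marked relevant are appended to the query.
# Illustrative only; INQUERY's actual mechanism differs.

def expand_query(query_terms, relevant_docs, n_new_terms=3):
    counts = Counter()
    for text in relevant_docs:
        counts.update(text.lower().split())
    # Keep only terms not already in the query, most frequent first.
    candidates = [t for t, _ in counts.most_common()
                  if t not in query_terms]
    return list(query_terms) + candidates[:n_new_terms]

# Two documents the (hypothetical) searcher has marked relevant.
marked = [
    "hypertension treatment with diet and medication",
    "hypertension medication side effects",
]
print(expand_query(["blood", "pressure"], marked))
```

Run on this toy input, the expanded query keeps the searcher's original terms and adds the terms most frequent in the marked documents ("hypertension" and "medication" among them), which is the behavior searcher 024 exploits below when she uses feedback to discover keywords.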
The topics which were assigned for searching were the 25 interactive track TREC-4 adhoc topics (Harman, 1996); an example topic (this was the topic used for the practice search following the tutorial) is:
What can be done to lower blood pressure for people diagnosed with high blood pressure? Include benefits and side effects.
The subjects were instructed that their task was to find as many good documents as they could which addressed the given topic, in up to 30 minutes, either explicitly saving these documents, or constructing and saving a final query which they thought retrieved a lot of good documents at the top of the retrieved list.
The database which they searched on was the TREC-4 adhoc database (see Harman, 1996), which consists of the full text of about 750,000 documents, including documents from the AP Newswire, the Wall Street Journal, the San Jose Mercury-News, selected Ziff-Davis publications, the U.S. Federal Register, and U.S. Patents.
During the pre-search interview, we gave each searcher a hypothetical topic (this was one of the 25 adhoc topics which were not used in the interactive track) and asked them to describe: how they would go about putting together a search on that topic, given that they would be searching a database of the full texts of newspaper articles (a search plan); the initial search statement they would input to the system, in whatever IR system they were most familiar with; what they would do to determine if their query had retrieved any good documents; what they would do if their query did not retrieve any good documents; and, how they would decide that they were finished. In the post-search interview, we asked our subjects to reflect upon their searches, and upon the "normal" searching strategies which they had described to us in the pre-search interview. We then asked them: whether, and to what extent they were able to use their routine strategies; which they were able to use and which they weren't; why they weren't able to use those so specified; what they did instead; whether they used or tried to use automatic relevance feedback, and if not, why not; if they used relevance feedback, what they found useful and not useful about it; what they found helpful and not helpful about both ranked output and full text; and, for their comments on the features of the system in general, including the lack of features which they would have liked. On the basis of the interview data and questionnaires, the search logs, and the thinking-aloud protocols, we were able to construct pictures of the searchers' mental models of IR (that is, their cognitive contexts) and their normal searching behaviors, and to investigate relationships between these factors and the types of interactions and adaptations which took place in the new searching environment.
3.1 Descriptive Characterization of our Searchers
In order to get a first level description of our subjects' models of IR and their strength, we asked them for self reports of their level of experience with different types of IR systems. These data are displayed in Table 1, where level of experience ranges from 1, meaning none, through 3, meaning some, to 5, meaning a great deal. The distribution of responses is indicated in Table 1. From this table, it is clear that the searchers in this study all had at least a little experience with some type of IR system, but overall had very little experience with the specific features of most interest to us in our study, namely ranked output and relevance feedback. This is corroborated by the self-reported years of searching experience, which was a mean of 5.5 years, with a minimum of 6 months, and a maximum of 25 years. Nineteen subjects reported doing searching only for themselves, and 31 for themselves and others.
|Type of System|Mean|1|2|3|4|5|
|Full Text Databases|2.57|10|15|17|5|4|
Table 1. Experience of searchers with IR systems (1=none, 3=some, 5=a great deal); the columns labeled 1-5 give the number of searchers reporting each level of experience.
Other demographic characteristics of our searchers of some interest are their educational level (all had at least a Bachelor's degree, 24 a Masters, 39 had or expected to receive an MLS, 14 had or expected to receive a PhD, and 3 had a JD); their age distribution (10 were between 21 and 30 years, 18 between 31 and 40, 16 between 41 and 50, 5 between 51 and 60, and two were over 60 years old); and their gender (33 Female, 18 Male). Nine of our subjects reported using a computer once or twice a week, and the remainder (41) reported daily use. On the same 1-5 scale, their experience of mouse-based interfaces (our version of INQUERY has a direct manipulation interface which requires using a mouse) was a mean of 4.4.
3.2 Classification of Searchers' Pre-existing Searching Strategies
Before they began their searches, we conducted a pre-search interview with each subject to obtain information about the routine methods they typically employed in their everyday searching environments. Our purpose here was to elicit information about the cognitive context our searchers brought to bear as they interacted with the new IR system we exposed them to. We construe these pre-existing searching strategies to be illustrative of our searchers' mental models of the search process. In order to understand and characterize these mental models, we gave each searcher a hypothetical search problem, and we asked them a number of questions about the searching methods they would typically use if they were given this problem to address. Specifically, we asked about: how they would typically put together a search on such a topic, that is, the type of query they might use; how they would evaluate the effectiveness of their query; and what they would do if their query did not retrieve any good documents. Based upon a content analysis of these open-ended interview data, we have derived a classification of the routine searching strategies employed by our searchers in their everyday environments. This classification of typical searching strategies is presented below in Figure 1.
I. TERM STRATEGIES
1. Identifying keywords
2. Identifying synonyms
3. Identifying controlled vocabulary
4. Specifying phrases
II. DATABASE STRATEGIES
1. Understanding database
III. INTERACTION STRATEGIES
1. Interaction with thesaurus
2. Interaction with documents
3. Magnitude feedback
IV. SEARCH STRATEGY
1. Facet analysis
2. Broad to narrow
4. Specific term search
6. Iterative/interactive searching
7. Structured query
Figure 1. Typical Pre-Existing Searching Strategies
Several of these categories of searching strategies have been reported elsewhere in the literature (see for example, Bates, 1979). The consistency between our results and the earlier findings of other investigators gives us confidence in the validity of our observations. The classification scheme we present in Figure 1 does, in addition to confirming the results of others, suggest some new categories of searching behaviors. In particular, we describe three interaction strategies typically employed by our searchers, and a specific search strategy characterized as "iterative/interactive searching." To the best of our knowledge, this is the first time that these specifically interaction-related searching methods have appeared in classifications of searching strategies, and we suggest that their appearance in our searchers' descriptions of routine searching methods attests to the increasingly interactive nature of online searching in general.
The method most frequently mentioned by the searchers in our study was Boolean searching. Many of the searchers said that in their ordinary searching environments they typically combined a Boolean search procedure with one or more interactive strategies. From the pre-search interview data we have extracted quotes to illustrate such a pattern which begins with Boolean query formulation and incorporates various interactive strategies as the search progresses.
(Searcher is given a written problem description to read):
Question: What would be your initial query?
Answer: Automobile(?) and energy(N) source? (Boolean query)
Question: Would you plan out your search ahead of time?
Answer: Not in the beginning. I would see what develops as it goes. (iterative searching)
Question: After you've run the query, what would you do to determine if it had retrieved any relevant documents?
Answer: First, look at the number of documents. (Magnitude feedback) Look at articles that are relevant, look at the keywords in them. (Interaction with documents)
Question: What would you do if your query did not retrieve any good documents?
Answer: Probably go into the thesaurus, to see if there was something specific that I hadn't found. To get other terms. (Interaction with thesaurus)
We present these examples to illustrate the existing mental models of searching held by our subjects in the pre-experimental condition, and in particular some of the routine interactive strategies they employ in addition to their Boolean query formulation procedures. Only some of these interactive strategies, such as iterative searching, and interaction with documents, were usable in the experimental retrieval environment. In general, the experimental context in which our subjects performed their searches was one in which it was not possible to completely map existing mental models of searching onto the new retrieval environment. Many of the routine strategies or heuristics typically used by our searchers were simply not supported by the INQUERY retrieval system. Figure 2 lists the strategies that our searchers could not use in this new system environment.
Boolean search
Structured query
Identifying controlled vocabulary
Interaction with thesaurus
Figure 2. Search Strategies not supported by INQUERY.
These strategies, especially Boolean search and structured query, were among the most frequently reported routine behaviors in our searchers' typical environments. Below we give some examples from both the pre- and post-search interview data of how the searchers said they use these strategies when they put together and evaluate their searches in their normal searching contexts.
Example use of typical search strategies
I start with a broad based search, from autos, from that narrow it to gasoline, gasoline additives. From there, put in other words, other synonyms. (searcher 003)
I would identify main terms, and think of synonyms or alternative terms (searcher 025)
Identifying controlled vocabulary:
I would look for documentation about database (i.e., controlled vocabulary, syntax), and translate main, alternative terms to the query using the controlled vocabulary and syntax and execute the query. (searcher 025)
Interaction with thesaurus:
Depending on the kind of database, like an Infotrac newspaper index, that system has a topic search mechanism. . . . Infotrac usually have subtopics under the broad topics. (searcher 002)
If I had 2000 hits I would think that I had to narrow; if I had 200 I would begin to look at it. (searcher 003)
Depending on what the number of hits were, I would go on from there. (searcher 004)
First, look at the number of documents. If the number is broad, think about limiting factors. If the number is small, think about how I could make it broader. (searcher 002)
Start by picking out keywords first, combine them. . . For example, automobiles AND gasoline (w) additives (Dialog). (searcher 004)
Try to find out all different terms, combine the terms, Boolean search... (searcher 024)
I would start from general search to get a sense of the database, then look at the documents relevant, and modify search by adding more terms. (searcher 028)
I guess I might look up, first look up automobile and I would expect to find a subheading for energy, so see what comes up over there. (searcher 002)
I could not use connectors (it's nice not to use connectors), operators, limiters, because they are not available, and I did not have to use them. (searcher 032)
I couldn't conduct formal queries. (searcher 025)
3.3 The Nature of Adaptive Behaviors in New Searching Environments
In our discussion above, we have described the routine searching strategies employed by participants in our study, in their usual searching environments. Our particular concern in this study is with the question of how experienced searchers, with pre-existing mental models of searching and the search situation, behave in new system environments which do not completely support their typical searching strategies. In our earlier studies of online searchers in new retrieval environments, conducted under the program of TREC-3 (Koenemann, et al., 1995), we investigated the query formulation procedures used by experienced Boolean searchers using the INQUERY retrieval system. There, we discovered three different patterns of adaptation to new system features. One we characterized as "fitting new tools to old habits." The searcher we used to illustrate this type of adaptation made minimal use of new system features, and tried to apply routine searching behaviors in the new environment. Our example searcher tried to use her routine query formulation strategy of constructing Boolean sets by creating three separate queries, evaluating them individually and then together as one set. A second pattern of adaptation we observed was characterized by partial use of new system features, or, attempting to "combine old and new models of systems and search strategies." The searcher we described to illustrate this strategy tried to use some of her traditional manual query formulation strategies along with automatic query expansion techniques through relevance feedback. It appeared as if this pattern of adaptation was reflective of an incomplete model of the new system, and an attempt to use routine strategies while learning about how the new system works. This particular "nonadaptive" strategy was not successful for the searcher we described, because her old searching strategies were not supported by the system, and the new system features were only partially and ineffectively employed.
The third pattern of adaptation we observed in this study we characterized as effective use of new system features. The searcher we described to illustrate this pattern was able to interact with the new system in ways which enabled her to form a new system model, and to change some pre-existing behaviors to fit the new system environment.
In this study we continue our investigation of patterns of adoption and adaptation to new searching environments by asking two related questions: in general, how do experienced searchers, with well formed mental models of searching, understand and use system features which they have never before encountered? And how do experienced searchers behave in system environments which do not completely support routine searching behaviors, or in which pre-existing mental models of searching are only partially transferable? With respect to the first question, we are particularly interested in understanding searchers' uses and understandings of two new system features: automatic relevance feedback and ranked output.
In order to roughly characterize the extent to which our subjects used the new system features to which they were exposed, we summarize here some of the data from the transaction logs (the use of system features) and the post-search questionnaires (the acceptance of system features). The mean number of iterations (or cycles) in which our searchers engaged was 9.46 (minimum 2, maximum 28, standard deviation 5.23) per search. Over all searchers and all searches, this amounted to a total of 799 queries submitted to the system which retrieved at least one document (the number of iterations in a search being the number of queries plus one). Of all of the queries in which relevance feedback was possible (699), relevance feedback was used in 491 (70.2%), and the mean number of documents marked relevant per query was 4.9. Only one searcher never used relevance feedback (two others replied in the post-search interview that they had not, but according to the search logs had used it). The searchers were close to unanimous in their approval of both ranked output and full text.
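As a trivial arithmetic check, the reported relevance feedback usage rate follows directly from the two counts quoted above; nothing here is new data.

```python
# Sanity check of the reported relevance feedback usage rate, using only
# the counts given in the text above.

queries_with_rf_possible = 699   # queries in which feedback was possible
queries_using_rf = 491           # queries in which feedback was used

rf_usage_rate = queries_using_rf / queries_with_rf_possible
print(f"{rf_usage_rate:.1%}")  # 70.2%
```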
Our investigation of these issues is ongoing. In this paper we present case study examples of two searchers, to illustrate different patterns of adaptation. In the first case (S024), we give examples of a searcher's uses of automatic relevance feedback, full text, and the "save" feature in an attempt to support her typical searching strategies, which included Boolean search, structured query, going from broad to narrow, using magnitude feedback, and identifying keywords. This searcher reported that she was able to use her routine searching methods "Somewhat." In the second case example, we describe a searcher (S009) who was reportedly able to use her routine strategies, which included Boolean searching, identifying keywords, magnitude feedback, going from broad to narrow, and specific term search, "Completely," by using full text, ranked output and automatic relevance feedback.
3.4 Case Studies of Adaptation
3.4.1 Adaptive Behaviors Of Searcher 024
Searcher 024 was a female between 21-30 years old who had five years of online searching experience. She reported having a "great deal" of experience with a variety of online systems. She had "some" experience searching in full-text databases, and "none" with systems offering ranked output or automatic relevance feedback. An analysis of her responses to questions during the pre-search interview revealed the following typical searching strategies: structured, set-based Boolean query formulation; a strategy of starting broad and then narrowing; identifying keywords; and use of magnitude feedback. During the post-search interview searcher 024 said that she had been able to use these typical searching behaviors only "somewhat." In our analysis of S024's search log, we see that she actively used several of the INQUERY features; particularly, full text, automatic relevance feedback, and the "save query" option. We turn now to a selected analysis of searcher 024's "thinking aloud" protocol data to better understand the nature of her uses of these system features as she tried to employ some of her typical searching methods.
In the following example, S024 describes her inability to construct set based queries, and her use of the "save query" feature along with natural language string searches to approximate this usual strategy.
Use of Save Query to Accomplish Set-Based Search:
I have hypertension as another term besides hyper-pressure. Rather than separating these terms, I will try to combine these to save one query.
I just combined, um, to save terms with the other terms.
I put down three sets of the query ... All three are important in being able to retrieve relevant documents.
Searcher 024 told us in the pre-search interview that typically, she tried to think of relevant keywords for her queries. In other words, she usually relied upon her own understanding of the topic to come up with what she thought might be good search terms. However, through her interactions with the INQUERY system, she discovered that automatic relevance feedback, in combination with full text, was a valuable tool for suggesting relevant keywords.
Use of Relevance Feedback to Identify Keywords:
(after running RF): I see keywords in relevant articles. I am trying to elaborate with relevance feedback.
This searcher made frequent use of automatic relevance feedback during her searches, in other ways as well.
Right now, I am looking at the article of different types of hard drugs, but I am not sure whether it's relevant or not. I am trying to look (browse) through to figure out what's relevant to this topic. It looks like there are some with pretty good explanations about different types of drug, what they do. So I will count them as relevant documents, Um, I will rerun the query with relevant documents. So right now, I am moving down to evaluate what I have.
I'm trying to evaluate this article to see whether it's relevant.
I am trying to elaborate with relevance feedback, to see if they can get different articles. Seem to begin getting same articles rather than different. Seem to have six relevant articles so far.
In the new search environment, instead of going back to her routine search strategies, searcher 024 tried multiple features of the interactive system, and was consciously attentive to the new features. Her searching behavior exemplifies "adaptive behavior in a new searching environment."
3.4.2 Adaptive Behaviors Of Searcher 009
Searcher 009 was a female, 41-50 years old, with one year of searching experience. She had less experience with a variety of online searching systems than did searcher 024; the system she reported the most experience with was computerized library catalogs. Searcher 009 said that she had had "Some" experience searching full-text databases, but no experience using ranked-output or automatic relevance feedback.
The typical searching methods used by searcher 009 were Boolean searching, identifying keywords, going from broad to narrow, and specific term search. She reported in the post-search interview that she was able to use these methods "Completely" in the experimental environment. Below we give some examples from her thinking aloud protocol to illustrate uses of relevance feedback to support her typical strategies of identifying keywords and narrowing.
I'm thinking about what terms I could use to get the most relevant terms, rather than tons of things about Microsoft... (while running relevance feedback)
As is shown above, she used relevance feedback not to retrieve relevant documents or to interact with the system, but to find relevant terms that she could put into her query. She was somewhat frustrated during the whole search process, because she was unfamiliar with the topic (Microsoft) and also could not get what she wanted: "I cannot get what I wanted to have ..."
When I submit the query, it seemed to retrieve all the bad ones of before...
While searcher 009 made adaptive use of relevance feedback, she did not make full use of the option to view full text documents. She often browsed through the titles rather than the full-text:
I am browsing the title.
I am browsing through first forty documents...
In general, she could not completely adapt to the new system features, and when she did use them, she did so mainly to support her routine searching strategies.
This study has attempted to investigate the nature of patterns of adaptation to new searching environments. In order to study this, we first looked at the routine strategies typically employed by experienced searchers in their familiar searching environments. We discovered that while most searchers in our study were experienced with structured Boolean searching, they typically combined these methods with a variety of interactive strategies as well.
Next, when we looked at uses of new system features, we found that all but one searcher used automatic relevance feedback, a feature that was new to almost all of them, and that all of the searchers were able to use ranked output (also new to most) in ways which both supported their normal searching behaviors, and supported new kinds of searching behaviors. In particular, these new features were used to support a variety of typical and new strategies which were instantiated in explicitly interactive ways. It appears to be the case that, as features which supported interaction were made available, they were adopted (in different ways) to accomplish routine, ordinarily non-interactive strategies, in interactive ways.
A comparison of the typical search strategies of our two example searchers shows a strong similarity in routine behaviors. However, the ways in which they used relevance feedback and other new system features varied. The first searcher, a person with five years of searching experience, and perhaps a stronger mental model of searching, used more new features, and in ways which both supported existing strategies and explored new uses. We might say that she both supported and expanded upon her routine strategies. The second searcher, with only one year of experience, and perhaps a less well formed mental model of searching, used fewer features, and in more restricted ways. So, although there were some common features in their adaptation and adoption behaviors, they also differed in the types of adaptation/adoption.
These different patterns of behavior suggest that it may be hard to predict, based upon descriptions of routine strategies alone, how searchers will behave in new environments. Some factors appear to remain constant across searchers, no matter what their experience (that is, increasing use of interaction to support a variety of strategies), but others seem to change in what might be regular ways. However, we have to date been unable to identify the aspects of their experience and normal behaviors which we could say characterize their mental models of IR in such a way as to predict specific adoption/adaptation behaviors.
Our first, and most obvious, conclusion is that people with substantial variation in IR experience, but generally without experience of the features of the new class of IR systems, are able to use such features reasonably effectively with a minimum of training. Although the range of performance, and the range of behaviors engaged in with the new system, was rather broad (as expected, given the mental models hypothesis), there was fairly strong agreement amongst all the searchers concerning the utility and value of the new features. Thus, we can perhaps conclude that the appearance of these new types of systems is not only inevitable, but also desirable.
Perhaps more important, and more interesting, is the central role of interaction in the accomplishment of almost all of the searching strategies, whether routine or new. Not only were we able to identify some interactive searching behaviors that people routinely used in what must be characterized as systems which do not actively support interaction, but we could also see that they used whatever features were available to them, in any system, to accomplish by interactive means other strategies which might not normally be thought of as inherently interactive. Our searchers were, of course, purposely put into a new situation in which there was rather little opportunity to engage in many of their normal searching behaviors, and were constrained to the small set of features that were offered. Nevertheless, the general uptake of these features, the generally positive response to them, and the ways in which they were used strongly underscore what we can interpret as the underlying interactive dimension of all information seeking behavior. These results therefore strongly support designing new IR systems explicitly to support maximally effective interaction of the user with the other components of the system, and especially taking explicit advantage of features such as relevance feedback and ranking, which seem inherently to support such interaction.
Finally, we note that we have not yet been able to specify characteristics of people which predict just how they will adapt to or adopt features of the new class of IR systems. We have been able to characterize some classes of adaptation/adoption, and to identify some candidate characteristics of mental models of IR which could be useful in understanding behavior and performance in new IR systems. This seems to us a good start. But being able to relate these results to the design of systems which will effectively adapt to such characteristics is a task which still lies before us. That's good, because it leaves lots more scope for further research, the goal of any good research project.
This research was supported by a Cooperative Agreement Grant from the National Institute of Standards and Technology, No. 70NANB5HOO50. We owe thanks to the Center for Intelligent Information Retrieval at the University of Massachusetts, Amherst, for supporting our use of the INQUERY retrieval system. Our greatest thanks go to the wonderful volunteers who participated in our study.
Bates, M.J. (1979) Information search tactics. Journal of the American Society for Information Science, 30: 205-213.
Belkin, N.J., Cool, C., Koenemann, J., Ng, K.B. & Park, S. (1996) Using relevance feedback and ranking in interactive searching. In: Harman (1996), in press.
Borgman, C.L. (1986) The user's mental model of an information retrieval system: An experiment on a prototype online catalog. International Journal of Man-Machine Studies, 24: 47-64.
Callan, J.P., Croft, W.B. & Harding, S.M. (1992) The INQUERY retrieval system. In: DEXA 3. Proceedings of the Third International Conference on Database and Expert Systems Applications. Berlin: Springer Verlag, 83-97.
Dalrymple, P.W. (1990) Retrieval by reformulation in two library catalogs: Toward a cognitive model of searching behavior. Journal of the American Society for Information Science, 41: 272-281.
Fidel, R. (1991) Searchers' selection of search keys: I-III. Journal of the American Society for Information Science, 42: 490-527.
Hancock-Beaulieu, M., Fieldhouse, M. & Do, T. (1995) An evaluation of interactive query expansion in an online library catalogue with a graphical user interface. Journal of Documentation, 51: 225-243.
Hancock-Beaulieu, M. & Walker, S. (1992) An evaluation of automatic query expansion in an online library catalog. Journal of Documentation, 48: 406-421.
Harman, D., ed. (1995) TREC-3. Proceedings of the Third Text Retrieval Conference. Washington, DC: GPO.
Harman, D., ed. (1996) TREC-4. Proceedings of the Fourth Text Retrieval Conference. Washington, DC: GPO.
Harter, S.P. & Peters, A.R. (1985) Heuristics for online information retrieval: a typology and preliminary listing. Online Review, 9: 407-424.
Koenemann, J., Quatrain, R., Cool, C. & Belkin, N.J. (1995) New tools and old habits: The interactive searching behavior of expert online searchers using INQUERY. In: Harman (1995), 145-117.
Mischo, W.H. & Lee, J. (1987) End-user searching of bibliographic databases. Annual Review of Information Science and Technology, 22: 227-264.
Saracevic, T., Kantor, P. and others (1988) A study of information seeking and retrieving: I-III. Journal of the American Society for Information Science, 39: 161-216.
Spink, A. (1993) Feedback in information retrieval. Ph.D. Dissertation, School of Communication, Information and Library Studies, Rutgers University, New Brunswick, NJ.