that in IR we are searching for relevant documents as opposed to exactly matching items.
The extent of the match in IR is assumed to indicate the likelihood of the relevance of that item.
One simple consequence of this difference is that DR is more sensitive to error: an error in matching means the wanted item is not retrieved, which amounts to a total failure of the system.
In IR, small errors in matching generally do not affect the performance of the system significantly.
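The contrast can be sketched with a toy example. The document contents and the simple overlap-count scoring below are invented purely for illustration; they stand in for whatever representation and matching function a real system would use:

```python
# Toy contrast between data retrieval (exact match) and
# information retrieval (best match). All data is invented.

documents = {
    1: {"information", "retrieval", "evaluation"},
    2: {"database", "query", "language"},
    3: {"automatic", "classification", "retrieval"},
}

def data_retrieval(query, docs):
    """DR: return only items whose terms match the query exactly.
    A single mismatched term means the item is not retrieved at all."""
    return [d for d, terms in docs.items() if terms == query]

def information_retrieval(query, docs):
    """IR: rank items by the extent of the match (here, term overlap),
    taken as an indication of the likelihood of relevance."""
    scored = [(len(terms & query), d) for d, terms in docs.items()]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

query = {"retrieval", "evaluation"}
print(data_retrieval(query, documents))         # no exact match: []
print(information_retrieval(query, documents))  # [1, 3] - ranked by overlap
```

Under exact matching the query retrieves nothing, a total failure; under best matching the partially relevant documents are still returned, merely ranked lower when the match is weaker.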
Many automatic information retrieval systems are experimental. I only make occasional reference to operational systems.
Experimental IR is mainly carried out in a 'laboratory' situation, whereas operational systems are commercial systems which charge for the service they provide.
Naturally the two systems are evaluated differently.
The 'real world' IR systems are evaluated in terms of 'user satisfaction' and the price the user is willing to pay for their service.
Experimental IR systems are evaluated by comparing the results of retrieval experiments against standards specially constructed for the purpose.
I believe that a book on experimental information retrieval, covering the design and evaluation of retrieval systems from a point of view which is independent of any particular system, will be a great help to other workers in the field and indeed is long overdue.
Many of the techniques I shall discuss will not have proved themselves incontrovertibly superior to all other techniques, but they have promise and their promise will only be realised when they are understood.
Information about new techniques has been so scattered through the literature that to find out about them you need to be an expert before you begin to look.
I hope that I will be able to take the reader to the point where he will have little trouble in implementing some of the new techniques.
I also hope that some people will then go on to experiment with them, and so generate new, convincing evidence of their efficiency and effectiveness.
My aim throughout has been to give a complete coverage of the more important ideas current in various special areas of information retrieval.
Inevitably some ideas have been elaborated at the expense of others.
In particular, emphasis is placed on the use of automatic classification techniques and rigorous methods of measurement of effectiveness.
On the other hand, automatic content analysis is given only a superficial coverage.
The reasons are straightforward: firstly, the material reflects my own bias; secondly, no adequate coverage of the first two topics has been given before, whereas automatic content analysis has been well documented elsewhere.
A subsidiary reason for emphasising automatic classification is that little appears to be known or understood about it in the context of IR so that research workers are loath to experiment with it.