linguistics in information science.
Secondly, linguistic analysis has proved expensive to implement, and it is not clear how to use it to enhance information retrieval.
Part of the problem has been that very little progress has been made in formal semantic theory.
However, there is some reason for optimism on this front; see, for example, Keenan [4, 5].
Undoubtedly a theory of language will be of extreme importance to the development of intelligent IR systems.
But to date no such theory has been developed sufficiently for it to be applied successfully to IR. In any case, satisfactory, possibly even very good, document retrieval systems can be built without such a theory.
Thirdly, the statistical approach has been examined and tried ever since the days of Luhn and has been found to be moderately successful.
This chapter therefore starts with the original ideas of Luhn on which much of automatic text analysis has been built, and then goes on to describe a concrete way of generating document representatives.
Furthermore, ways of exploiting and improving document representatives through weighting or classifying keywords are discussed.
In passing, some of the evidence for automatic indexing is presented.
In one of Luhn's early papers he states: 'It is here proposed that the frequency of word occurrence in an article furnishes a useful measurement of word significance.
It is further proposed that the relative position within a sentence of words having given values of significance furnish a useful measurement for determining the significance of sentences.
The significance factor of a sentence will therefore be based on a combination of these two measurements.'
I think this quote fairly summarises Luhn's contribution to automatic text analysis.
His assumption is that frequency data can be used to extract words and sentences to represent a document.
If f is the frequency of occurrence of various word types in a given position of text and r their rank order, that is, the order of their frequency of occurrence, then a plot relating f and r yields a curve similar to the hyperbolic curve in Figure 2.1. This is in fact a curve demonstrating Zipf's Law*, which states that the product of the frequency of use of words and their rank order is approximately constant.
Zipf verified his law on American Newspaper English.
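The rank-frequency relation f × r ≈ constant is easy to compute for any piece of text. The following sketch (the sample sentence and function name are illustrative, not from the original) counts word occurrences, ranks them by descending frequency, and reports the rank-frequency product; on a realistic corpus the products for the mid-ranked words would cluster around a constant, though a toy sample is far too small to show this.

```python
import re
from collections import Counter

def rank_frequency(text):
    """Return (rank, word, frequency, rank * frequency) tuples,
    with words ranked by descending frequency (rank 1 = most frequent)."""
    words = re.findall(r"[a-z']+", text.lower())
    ranked = Counter(words).most_common()
    return [(r, w, f, r * f) for r, (w, f) in enumerate(ranked, start=1)]

sample = "the cat sat on the mat and the dog sat by the door"
for rank, word, freq, product in rank_frequency(sample):
    print(rank, word, freq, product)
```

Plotting frequency against rank for a large corpus (as in Figure 2.1) would reproduce the hyperbolic curve described above.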
Luhn used it as a null hypothesis to enable him to specify two cut-offs, an
* Also see Fairthorne, R. A., 'Empirical hyperbolic distributions (Bradford-Zipf-Mandelbrot) for bibliometric description and prediction,' Journal of Documentation, 25, 319-343 (1969).