This paper describes several experiments that tested the effectiveness and feasibility of using adaptive mechanisms to select improved search strategies in a bibliographic retrieval system. Because different search strategies are effective in different situations, the authors tried to develop a system that could improve or switch its strategy based on prior success. Although this approach seems plausible in theory, two major difficulties arose: (1) they found a “lack of query features that correlate with good search strategy choices,” and (2) they were unable to develop an accurate payoff function, i.e., a function that evaluates query performance by appropriately rewarding effective strategies and penalizing those with lower precision, recall, or both.
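The payoff function the authors sought would map a query's precision and recall to a single score that rewards effective strategies. As a minimal sketch of what such a function might look like, the standard F-beta measure combines the two; this is a hypothetical stand-in, not the authors' function, which the paper reports they could not construct:

```python
def payoff(precision: float, recall: float, beta: float = 1.0) -> float:
    """Score one query's result set; higher is better.

    Combines precision and recall via the F-beta measure.
    beta > 1 weights recall more heavily; beta < 1 weights precision.
    Illustrative only -- not the paper's (unachieved) payoff function.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With beta = 1 this is the harmonic mean, so a strategy weak on either precision or recall is penalized more than an arithmetic mean would penalize it.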
Their conclusion is that neither problem is easily solved. They are continuing the work along different lines, now attempting to develop an “expert intermediary,” controlled by explicit rules, to assist the user.