This reviewer infers that the authors, all educated in French-speaking institutions, are acquainted with English only in written form, and only from technical material. The text abounds with words and phrases that ring false to the native English ear and eye. Examples include “affinity” for “closeness,” “conspicuous” for “apparent,” “localize” for “locate,” and “searchers” for “researchers.” This persistent, slight misuse of English makes the paper difficult to understand.
We turn now from style to substance, recognizing that the reviewer may have sometimes missed the authors’ point. The problem addressed is to identify an unknown strain of bacteria as one of a limited number of members of a sample space (“targets”). The available tools for the identification process are a battery of biochemical tests, each of which has a characteristic response pattern for each of the target strains. Four methods are discussed; the first three are standard. The authors’ new contribution lies in a decision rule based on multicriteria decision-making methods. Consider each available test as generating a decision maker’s utility function that rank-orders the targets. For n different tests, there are n different rank orderings. A group decision function over the ordered ballots may (or possibly may not) then select an overall group favorite from among the sample space of targets.
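The paper does not specify which group decision function is meant here; a Borda count is one of the simplest instances of aggregating n rank-ordered ballots into a group favorite. The sketch below is the reviewer's illustration, not the authors' method; the strain names in the example are hypothetical.

```python
from collections import defaultdict

def borda_winner(ballots):
    """Aggregate rank-ordered ballots (best target first) into a group favorite.

    Each ballot is a list of target names; in an m-target ballot, the target
    in position i earns m - 1 - i points (the classic Borda count).
    Returns the unique winner, or the list of tied winners if there is no
    unique favorite.
    """
    scores = defaultdict(int)
    for ballot in ballots:
        m = len(ballot)
        for i, target in enumerate(ballot):
            scores[target] += m - 1 - i
    top = max(scores.values())
    winners = [t for t, s in scores.items() if s == top]
    return winners[0] if len(winners) == 1 else winners

# Example: three biochemical tests each rank-order four candidate strains.
ballots = [
    ["E. coli", "Shigella", "Salmonella", "Proteus"],
    ["E. coli", "Salmonella", "Shigella", "Proteus"],
    ["Shigella", "E. coli", "Proteus", "Salmonella"],
]
```

Here `borda_winner(ballots)` selects "E. coli" (8 points against 6, 3, and 1), illustrating how a collective favorite can emerge even though no single test's ordering is decisive.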
The authors describe two specific collective decision rules. One is an Analytic Profile Index (API), dating from 1973. An evaluation by NIH in 1974 showed excellent performance for strains whose API test profile could be matched, but the library is incomplete. The API scheme does help establish weighting indices for use with each possible test. These weights can be used in the second scheme, ELECTRE II, defined by Roy and Bertier, also in 1973. (No reference is provided.) It ranks targets as good or bad candidates according to the number of other strains each target outranks. The computer program can lead to selection of a target strain, or to a proposal for another test to resolve a tie. Some rather impressive experience statistics are cited; e.g., for an actual set of 15 unknowns, ELECTRE II had 12 hits, one miss, and two requests for additional test results. By comparison, on the same set of unknowns, another decision rule had 9 hits, 2 misses, and 4 requests for additional tests.
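The outranking idea can be made concrete with a much-simplified concordance test. This is a reviewer's sketch only: Roy and Bertier's full ELECTRE II also applies discordance tests and two outranking relations (strong and weak), none of which appear below, and the weights, threshold, and rank data in the example are invented for illustration.

```python
def outrank_scores(rankings, weights, threshold=0.7):
    """Score each target by how many other targets it outranks.

    rankings:  dict mapping target name -> list of per-test ranks
               (lower rank is better).
    weights:   per-test weights, assumed normalized to sum to 1.
    A target a is taken to outrank b when the total weight of tests on
    which a ranks at least as well as b (the concordance index) meets
    the threshold.
    """
    targets = list(rankings)
    scores = {t: 0 for t in targets}
    for a in targets:
        for b in targets:
            if a == b:
                continue
            concordance = sum(
                w for w, ra, rb in zip(weights, rankings[a], rankings[b])
                if ra <= rb
            )
            if concordance >= threshold:
                scores[a] += 1
    return scores

def decide(scores):
    """Select the unique top-scoring target, or ask for another test on a tie."""
    top = max(scores.values())
    winners = [t for t, s in scores.items() if s == top]
    return winners[0] if len(winners) == 1 else "additional test required"

# Hypothetical data: three weighted tests, three candidate strains.
weights = [0.5, 0.3, 0.2]
rankings = {"A": [1, 1, 2], "B": [2, 2, 1], "C": [3, 3, 3]}
```

With these data, `decide(outrank_scores(rankings, weights))` selects "A" (it outranks both B and C), and a tie at the top would instead trigger the request for an additional test result that the review describes.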
We come at long last to the topic cited in the title. The hardware used was a DEC 20/60, which took 0.5 seconds of CPU time per unknown. In Europe, many biological test labs lack access to such a maxi. To overcome this situation, the authors cite a BASIC version of ELECTRE for an Apple II with one floppy drive. This program can do the ELECTRE II calculation, taking up to 60 seconds per sample.