What makes automatic music classification so difficult is isolating the musical features that actually matter. Genre is clearly an important aspect, but which properties determine a particular genre is far from clear. This is where the strength of Armentano et al.'s paper lies. The authors' contribution is twofold. First, they successfully classify genre using only three features: instrumentation, rhythm, and pitch. Second, they have developed a technique that works on symbolic music and is therefore applicable even when only a score (musical notation) is available rather than an audio sample.
Their strategy comprises the following four steps (the input music file is assumed to be in musical instrument digital interface (MIDI) format):
- (1) Data transformation: various characteristics are extracted from the music file.
- (2) Classification: each classifier assigns a genre by computing the nearest neighbor with respect to the knowledge base.
- (3) Coordination: a voting system assigns the music file a genre, taking into account the confidence of each classifier.
- (4) User feedback: the knowledge base is updated using user feedback.
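Steps 2 and 3 can be illustrated with a minimal sketch: one nearest-neighbor classifier per feature (instrumentation, rhythm, and pitch), whose predictions are then combined by a confidence-weighted vote. The feature vectors, knowledge-base entries, and confidence values below are invented for illustration and do not reproduce the paper's actual representation or learned weights.

```python
import math
from collections import defaultdict

def nearest_genre(query, knowledge_base):
    """Return the genre of the closest stored vector (Euclidean distance)."""
    best_genre, best_dist = None, math.inf
    for vector, genre in knowledge_base:
        dist = math.dist(query, vector)
        if dist < best_dist:
            best_genre, best_dist = genre, dist
    return best_genre

def vote(predictions, confidences):
    """Score each predicted genre by the confidence of the classifier
    that proposed it, and return the highest-scoring genre."""
    scores = defaultdict(float)
    for feature, genre in predictions.items():
        scores[genre] += confidences[feature]
    return max(scores, key=scores.get)

# Toy per-feature knowledge bases (vectors and labels are made up).
kb = {
    "instrumentation": [((1.0, 0.0), "jazz"), ((0.0, 1.0), "rock")],
    "rhythm":          [((0.9, 0.2), "jazz"), ((0.1, 0.8), "rock")],
    "pitch":           [((0.2, 0.9), "rock"), ((0.8, 0.1), "jazz")],
}
# Features extracted from one input piece (also invented).
query = {"instrumentation": (0.9, 0.1), "rhythm": (0.8, 0.3), "pitch": (0.7, 0.2)}
confidence = {"instrumentation": 0.5, "rhythm": 0.3, "pitch": 0.2}

predictions = {f: nearest_genre(q, kb[f]) for f, q in query.items()}
print(vote(predictions, confidence))  # prints "jazz": all three classifiers agree
```

Step 4 would then correct a misassigned genre in the knowledge base when the user disagrees with the vote, so later queries fall closer to the right entries.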
I found the paper absorbing and the use of user feedback quite novel, but it remains to be seen how well the technique will work for extempore music, where a score is not available (in such cases, one must first acquire an audio sample and extract the score from it). Nevertheless, the experimental results on 225 musical pieces spanning three genres and nine sub-genres are encouraging, and I recommend this paper to its intended audience of postgraduates and researchers in computational music science.