Computing Reviews

Labeling data and developing supervised framework for Hindi music mood analysis
Patra B., Das D., Bandyopadhyay S. Journal of Intelligent Information Systems 48(3): 633-651, 2017. Type: Article
Date Reviewed: 09/14/17

The fundamental difference between speech and music is that the content of speech is essentially a message, while the content of music is emotion. Consequently, musical mood identification, or classification, is an important research problem.

This paper focuses on classifying the musical mood of Hindi songs and addresses three tasks: taxonomy development, annotation, and automated mood classification. The five primary moods used for classification are excited, delighted, calm, sad, and angry, drawn from Russell's circumplex model of affect.

Altogether, 1540 song clips, each one minute long, were annotated. The classifiers used in the study include neural networks (such as feedforward neural networks), support vector machines (SVMs), and decision trees. The experimental results are encouraging, and reveal that musical features like timbre, rhythm, and intensity have a strong bearing on increased accuracy in mood classification. The best results were obtained with feedforward neural networks, with an F-measure of 0.725 under tenfold cross-validation.
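The evaluation setup described above can be sketched as follows, assuming scikit-learn is available. The feature values are random stand-ins, since the paper's actual timbre, rhythm, and intensity features are not reproduced here; the feature dimensionality and network size are likewise assumptions for illustration only.

```python
# Minimal sketch of the paper's evaluation protocol: a feedforward neural
# network scored by macro F-measure under tenfold cross-validation.
# Features and labels are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
MOODS = ["excited", "delighted", "calm", "sad", "angry"]

# 1540 one-minute clips, as in the study; 64-dim features are assumed.
X = rng.normal(size=(1540, 64))
y = rng.choice(len(MOODS), size=1540)

# A single hidden layer stands in for the authors' (unspecified) topology.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="f1_macro")
print(f"mean F-measure over 10 folds: {scores.mean():.3f}")
```

With random labels the score will hover near chance; on the authors' annotated features the same protocol yielded the reported 0.725.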

I found the paper interesting and would certainly recommend it to postgraduate students and researchers of music. However, it should be remembered that songs are a composite art form in which the interaction of speech and music is crucial.

Reviewer:  Soubhik Chakraborty Review #: CR145538 (1711-0746)
