Computing Reviews

A regularization approach to learning task relationships in multitask learning
Zhang Y., Yeung D. ACM Transactions on Knowledge Discovery from Data 8(3): 1-31, 2013. Type: Article
Date Reviewed: 07/01/14

Multitask learning is popular in many domains and has been applied in numerous applications. A key challenge is discovering the relationships among the tasks. This paper proposes a novel regularization approach, multitask relationship learning (MTRL), to learn those relationships. The authors place “a matrix-variate normal distribution as a prior on the model parameters of all tasks” and provide a method to “learn the optimal model parameters for each task.” They experimentally study the generality of MTRL under both symmetric and asymmetric settings, and use toy problems and several benchmark datasets to demonstrate its effectiveness and interpretability.
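To make the regularization idea concrete, here is a minimal sketch of an MTRL-style objective under simplifying assumptions: linear models, squared loss, no bias term, and an alternating scheme that updates the task weights with the task covariance fixed and then refreshes the covariance in closed form. The function name mtrl_fit, the hyperparameters lam1 and lam2, and the update schedule are illustrative choices, not the authors' implementation.

```python
import numpy as np

def mtrl_fit(Xs, ys, lam1=0.1, lam2=0.1, n_outer=20, n_inner=200, lr=1e-2):
    """Alternately update task weights W and task covariance Omega.

    Sketch of an objective of the MTRL form (squared loss, linear models):
        sum_i ||X_i w_i - y_i||^2 / (2 n_i)
          + lam1/2 * ||W||_F^2
          + lam2/2 * tr(W Omega^{-1} W^T),  with Omega >= 0, tr(Omega) = 1.
    """
    m = len(Xs)                 # number of tasks
    d = Xs[0].shape[1]          # shared feature dimension
    W = np.zeros((d, m))        # column j holds task j's weight vector
    Omega = np.eye(m) / m       # start from uncorrelated tasks

    for _ in range(n_outer):
        Omega_inv = np.linalg.inv(Omega + 1e-8 * np.eye(m))

        # Step 1: gradient descent on W with Omega fixed (convex in W).
        for _ in range(n_inner):
            grad = np.zeros_like(W)
            for j, (X, y) in enumerate(zip(Xs, ys)):
                grad[:, j] = X.T @ (X @ W[:, j] - y) / len(y)
            grad += lam1 * W + lam2 * W @ Omega_inv
            W -= lr * grad

        # Step 2: Omega update: matrix square root of W^T W, unit trace.
        M = W.T @ W
        eigval, eigvec = np.linalg.eigh(M)
        sqrtM = eigvec @ np.diag(np.sqrt(np.clip(eigval, 0, None))) @ eigvec.T
        Omega = sqrtM / np.trace(sqrtM)

    return W, Omega

# Hypothetical usage on three synthetic, closely related regression tasks:
rng = np.random.default_rng(0)
Xs = [rng.normal(size=(50, 5)) for _ in range(3)]
true_w = rng.normal(size=5)
ys = [X @ (true_w + 0.1 * rng.normal(size=5)) for X in Xs]
W, Omega = mtrl_fit(Xs, ys)
print(Omega)  # off-diagonal entries reflect the learned task correlations
```

The trace constraint on Omega keeps the task covariance bounded, so the relative sizes of its off-diagonal entries can be read as positive, negative, or near-zero task correlations.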

The approach provides a clean way to formulate the multitask learning problem as “a convex optimization problem by [using] the matrix-variate normal distribution as a prior.” The experiments also show that the approach works well across different settings and datasets. The contributions and future work are clearly stated. To generalize the results, the authors may need to scale up to larger datasets and additional sources of data. Overall, it is a well-written and easy-to-follow paper.

Reviewer: De Wang   Review #: CR142460 (1409-0789)
