Computing Reviews
Generalized deep transfer networks for knowledge propagation in heterogeneous domains
Tang J., Shu X., Li Z., Qi G., Wang J. ACM Transactions on Multimedia Computing, Communications, and Applications 12(4s): 1-22, 2016. Type: Article
Date Reviewed: Mar 7 2017

This paper is about the transfer of learned knowledge from one domain to another. Specifically, it considers knowledge transfer from textual domains to image domains. The input consists of some tagged images and their corresponding labels. The authors' goal is to "transfer the labels from the tag set to the images in the target domain for visual concept classification."

The authors propose a method that combines learning across the two domains. In this method, a multi-level neural network is created in which the first few layers (L1) are domain-specific, while the last few layers (L2) transfer knowledge from the source domain to the target domain. The authors propose three schemes for transferring the knowledge:

(1) Representation sharing: a cost term is defined as a function of the differences between corresponding input values at various stages;

(2) Parameter sharing: a cost term is defined based on the differences between corresponding state parameters, weights, and labels; and

(3) Generalized scheme: a sum of the above two costs.

In all of these methods, a neural network is trained to minimize the cost metric. The authors compare the proposed scheme with other algorithms, such as support vector machines (SVM), stacked autoencoders (SAE) using image representations only, heterogeneous transfer learning (HTL), translation from text to image (TTI), and so on, using accuracy as the performance measure. The experiments show that the proposed algorithms work well when there is an insufficient number of labeled training samples, by exploiting the co-occurrence information between texts and images.
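As a rough illustration, the three cost schemes described above might be sketched as follows. The shapes, the use of mean squared differences, and the weighting constant `alpha` are illustrative assumptions for this sketch, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden representations of paired text/image samples at a shared layer
# (sizes are arbitrary for illustration).
h_text = rng.normal(size=(8, 16))   # source-domain (text) activations
h_image = rng.normal(size=(8, 16))  # target-domain (image) activations

# Weights of one corresponding layer in each domain-specific network.
w_text = rng.normal(size=(16, 16))
w_image = rng.normal(size=(16, 16))

def representation_shared_cost(h_src, h_tgt):
    """(1) Penalize differences between corresponding hidden representations."""
    return np.mean((h_src - h_tgt) ** 2)

def parameter_shared_cost(w_src, w_tgt):
    """(2) Penalize differences between corresponding layer parameters."""
    return np.mean((w_src - w_tgt) ** 2)

def generalized_cost(h_src, h_tgt, w_src, w_tgt, alpha=0.5):
    """(3) Combine the two costs above; a network would minimize this term
    alongside its ordinary classification loss."""
    return (alpha * representation_shared_cost(h_src, h_tgt)
            + (1 - alpha) * parameter_shared_cost(w_src, w_tgt))

print(generalized_cost(h_text, h_image, w_text, w_image))
```

In a full training loop, this combined term would be added to the classification loss and minimized by backpropagation, pulling the two domains' representations and parameters toward each other.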

The authors take on an important problem and solve it well, with demonstrated performance. However, there are a couple of issues. First, the paper is difficult to read; it would have been better if the authors had presented the example texts and images (which appear in the performance results section) while introducing the problem. Second, the algorithm still needs a large amount of training data, and it does not handle variation absent from that data well. For example, if the training data contains images of cats of a particular size, the model will not be good at predicting images of cats of different sizes.

Reviewer:  Rajeev Gupta Review #: CR145104 (1705-0303)
Learning (I.2.6)
Database Applications (H.2.8)
Heterogeneous Databases (H.2.5)
Image Representation (I.4.10)
Other reviews under "Learning": Date
Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed) BYTE 10(4): 265-273, 1985. Type: Article
Nov 1 1985
Macro-operators: a weak method for learning
Korf R. Artificial Intelligence 26(1): 35-77, 1985. Type: Article
Feb 1 1986
Inferring (mal) rules from pupils’ protocols
Sleeman D. Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings
Dec 1 1985
