Computing Reviews
Sequential summarization: a full view of Twitter trending topics
Gao D., Li W., Cai X., Zhang R., Ouyang Y. IEEE/ACM Transactions on Audio, Speech and Language Processing 22(2): 293-302, 2014. Type: Article
Date Reviewed: Jul 11 2014

This paper describes an approach to summarizing trending topics on Twitter. The tweets for a given topic are linearly segmented into more granular subtopics, each describing a single action or event.

This linear segmentation is achieved by one of two means: burstiness detection or dynamic topic models (DTMs). The former is not particularly novel, but the latter is definitely interesting. DTMs [1] are an extension of static topic models such as latent Dirichlet allocation (LDA) [2]. In this work, the authors use DTMs to capture subtopics in a stream of tweets as time evolves.
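For readers unfamiliar with DTMs, the sketch below gives a rough feel for how such a model might be fit over time-sliced tweets. It is only an illustration, not the authors' implementation: it assumes gensim's LdaSeqModel and a hypothetical tweets_by_day variable that I introduce purely for this example.

    # Rough sketch only (not the paper's code): a dynamic topic model over
    # time-sliced tweets, using gensim's LdaSeqModel implementation of [1].
    from gensim.corpora import Dictionary
    from gensim.models import LdaSeqModel

    # Hypothetical data: one list of pre-tokenized tweets per day.
    tweets_by_day = [
        [["quake", "hits", "offshore"], ["magnitude", "reported", "quake"]],   # day 1
        [["rescue", "teams", "deployed"], ["aftershock", "felt", "downtown"]]  # day 2
    ]

    all_tweets = [tweet for day in tweets_by_day for tweet in day]
    time_slice = [len(day) for day in tweets_by_day]  # tweets per slice, in order

    dictionary = Dictionary(all_tweets)
    corpus = [dictionary.doc2bow(tweet) for tweet in all_tweets]

    # Topics are chained across slices, so a subtopic's vocabulary can drift
    # as the trending topic evolves (e.g., from "quake hits" to "rescue effort").
    dtm = LdaSeqModel(corpus=corpus, id2word=dictionary,
                      time_slice=time_slice, num_topics=2)
    print(dtm.print_topics(time=1))

The key point is that the topic-word distributions are chained across time slices, which is what lets a subtopic emerge, drift, and fade as the event unfolds.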

The paper also proposes the use of coverage, novelty, and correlation as alternative evaluation metrics in place of the widely used ROUGE, a proposal I find worthwhile and useful. Coverage is a variation of the traditional ROUGE measure that takes into account reorderings in the generated summaries. Novelty measures how much new content appears in successive summaries generated as time evolves. Lastly, correlation uses Kendall's tau coefficient to evaluate how similar the generated summaries are to human-written ones.
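To give a flavor of these metrics, here is a simplified sketch of my own (not the authors' exact formulations): novelty as the fraction of previously unseen tokens in the latest summary, and correlation as Kendall's tau over the positions a system and a human assign to the same subtopics, computed with scipy.

    # Simplified stand-ins for the paper's novelty and correlation metrics (illustration only).
    from scipy.stats import kendalltau

    def novelty(current_summary, previous_summaries):
        """Fraction of tokens in the current summary not seen in any earlier summary."""
        seen = {tok for s in previous_summaries for tok in s.lower().split()}
        tokens = current_summary.lower().split()
        return sum(tok not in seen for tok in tokens) / len(tokens) if tokens else 0.0

    def order_correlation(system_positions, reference_positions):
        """Kendall's tau between the orderings a system and a human give the same subtopics."""
        tau, _ = kendalltau(system_positions, reference_positions)
        return tau

    print(novelty("rescue teams deployed downtown",
                  ["magnitude 7 quake hits offshore"]))   # 1.0: every token is new
    print(order_correlation([1, 2, 3, 4], [1, 3, 2, 4]))  # ~0.67: one subtopic pair swapped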

It would have been good to see a comparison against similar work, such as that of Olariu [3], who had earlier proposed breaking a set of tweets into clusters and summarizing each cluster separately.

That aside, this paper is let down mainly by poor writing. Many sentences are ungrammatical, and sometimes it is frustrating trying to understand the ideas that are being conveyed.

Pick up this paper if you are interested in DTMs or summarization evaluation metrics, but be prepared for a tiring read.

Reviewer: Jun-Ping Ng | Review #: CR142496 (1410-0889)
1) Blei, D.; Lafferty, J. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning. ACM, 2006, 113–120.
2) Blei, D.; Ng, A.; Jordan, M. Latent Dirichlet allocation. The Journal of Machine Learning Research 3 (2003), 993–1022.
3) Olariu, A. Clustering to improve microblog stream summarization. In Proceedings of the 14th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing. IEEE, 2012, 220–226.
Language Parsing And Understanding (I.2.7)
Text Analysis (I.2.7)
Web 2.0 (H.3.4)
Other reviews under "Language Parsing And Understanding":
Computer processing of natural language
Krulee G., Prentice-Hall, Inc., Upper Saddle River, NJ, 1991. Type: Book (9780136102885)
Sep 1 1992
Deep and superficial parsing
Wilks Y., Prentice Hall International (UK) Ltd., Hertfordshire, UK, 1985. Type: Book (9789780131638419)
Dec 1 1987
Compound noun interpretation problems
Jones K., Prentice Hall International (UK) Ltd., Hertfordshire, UK, 1985. Type: Book (9789780131638419)
Dec 1 1987
more...
