Computing Reviews
A multimodal-sensor-enabled room for unobtrusive group meeting analysis
Bhattacharya I., Foley M., Zhang N., Zhang T., Ku C., Mine C., Ji H., Riedl C., Welles B., Radke R. ICMI 2018 (Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, Oct 16-20, 2018), 347-355, 2018. Type: Proceedings
Date Reviewed: Jul 23 2019

Countless meetings fall short of efficiency due to a lack of discussion leadership and meaningful contributions. Advances in motion tracking and voice data analysis enable the development of active automated conferencing systems that process several meeting parameters (for example, head pose, body position, and speech). After analyzing the data, such a system can make recommendations to support the group decision process, with the primary goal of improving the efficiency of the session.

This research focuses on automatically identifying the person who leads the discussion (that is, the leader) and the person who contributes the most to the meeting.

The visual focus of attention during the meeting is the main component of the system. To that end, the head position of each participant is tracked with ceiling-mounted Kinect sensors placed out of the users’ direct view; keeping the tracking system “invisible” in this way makes it noninvasive and lets participants focus on the discussion. The authors report roughly 50 percent accuracy for head pose (position plus orientation), which allows them to approximate each participant’s direction of gaze.
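As a rough illustration of how head pose can be mapped to a visual focus of attention, the sketch below (my own simplification, not the authors’ method) assigns each participant the other participant whose direction best matches the estimated head yaw. The function name, the 2D seating layout, and the `max_angle` tolerance are all assumptions for illustration:

```python
import math

def focus_of_attention(positions, yaws, max_angle=30.0):
    """Estimate each participant's visual focus of attention.

    positions: dict mapping participant id -> (x, y) seat location
    yaws: dict mapping participant id -> head yaw in degrees
    Returns a dict mapping each participant to the other participant
    closest to their gaze direction, or None if no one falls within
    max_angle degrees of the estimated gaze.
    """
    targets = {}
    for p, (px, py) in positions.items():
        best, best_diff = None, max_angle
        for q, (qx, qy) in positions.items():
            if q == p:
                continue
            # Bearing from p to q, in degrees.
            bearing = math.degrees(math.atan2(qy - py, qx - px))
            # Smallest angular difference between bearing and head yaw.
            diff = abs((bearing - yaws[p] + 180) % 360 - 180)
            if diff < best_diff:
                best, best_diff = q, diff
        targets[p] = best
    return targets
```

With participants A, B, and C seated at known positions and A facing B, B facing A, and C facing A, the function would return `{'A': 'B', 'B': 'A', 'C': 'A'}`; aggregating such targets over time gives a crude attention profile per participant.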

To identify the main contributor to the discussion, a verbal speech analysis system is combined with the head pose estimation system. The speech system applies language processing algorithms to each communication channel to extract the set of words spoken by each participant. From the collected data, each participant’s influence within the group is computed and the leader is identified.

Many verbal metrics have been proposed in the past to identify a discussion group leader (for example, word frequency, time used to communicate ideas). In general, such methods allow for the dynamic detection of the group leader as different individuals take on leadership roles, sometimes stimulated by the discussion topic.
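A minimal sketch of such verbal metrics, assuming per-utterance transcripts with speaker labels and durations (the data layout and tie-breaking rule are my assumptions, not the paper’s exact metrics):

```python
from collections import Counter

def verbal_leader(utterances):
    """Pick the likely discussion leader from simple verbal metrics.

    utterances: list of (speaker, text, duration_seconds) tuples.
    Ranks speakers by total word count, breaking ties by total
    speaking time, and returns the top-ranked speaker.
    """
    words = Counter()
    speaking_time = Counter()
    for speaker, text, duration in utterances:
        words[speaker] += len(text.split())
        speaking_time[speaker] += duration
    return max(words, key=lambda s: (words[s], speaking_time[s]))
```

Recomputing these tallies over a sliding window of utterances would support the dynamic leader detection described above, with the top-ranked speaker changing as different individuals take over the discussion.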

In this case, the metrics used are (1) related to the items and ranking-item pairs and (2) related to the efficiency of delivering information. The assumption is that the contributor with the largest number of informative statements is automatically perceived as the group leader. This assumption may not always hold, especially if the discussion topics are very dynamic in nature and the participants come from diverse backgrounds.

The authors report 90 percent accuracy for predicting the meeting leader with the proposed system. They also claim 100 percent accuracy for determining the main contributor using the verbal metrics alone. Hence, a head pose estimation system with roughly 50 percent accuracy, combined with a speech recognition system, can detect the roles of individual participants with very high accuracy.

The current system seems readily available and easy to deploy in a variety of scenarios, from improving meeting efficiency to classroom teaching. Among the important questions still unanswered: how do participants feel about being recorded and monitored during an entire meeting?

Reviewer:  Felix Hamza-Lup Review #: CR146629 (1910-0374)
Computer-Supported Cooperative Work (H.5.3 ...)
Collaborative Computing (H.5.3 ...)
Law (I.2.1 ...)
Other reviews under "Computer-Supported Cooperative Work": Date
sTeam: structuring information in team-distributed knowledge management in cooperative learning environments
Hampel T., Keil-Slawik R. Journal of Educational Resources in Computing 1(2): 3-es, 2001. Type: Article
Feb 1 2002
The social life of avatars: presence and interaction in shared virtual environments
Schroeder R. Springer-Verlag New York, Inc., New York, NY, 2002. Type: Divisible Book
Nov 13 2003
Bringing participatory design to practical application: the interrelation between LCD projection, facilitation, and participatory design
Gärtner J., Hanappi-Egger E. interactions 6(2): 13-22, 1999. Type: Article
Jun 1 1999
