Computing Reviews
Rehumanized crowdsourcing: a labeling framework addressing bias and ethics in machine learning
Barbosa N., Chen M. CHI 2019 (Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, May 4-9, 2019), 1-12, 2019. Type: Proceedings
Date Reviewed: Jun 1 2021

Crowdsourcing is the practice of obtaining information or input on a task from a large number of people, paid or unpaid, typically via the Internet. With its rapid growth, crowdsourcing has produced large volumes of data manually labeled by human crowds. By processing this data with various machine learning algorithms, people expect to derive meaningful information that meets their objectives. The authors refer to the “dehumanization effects” of crowdsourcing because both data collection and processing are carried out by machines.

Due to the open nature of crowdsourcing, the collected data is prone to biases stemming from various human factors, such as age, country of residence, culture, ethics, gender, and knowledge level. Data carrying human bias may affect the quality of the derived information, either positively or negatively. After providing strong evidence of skewed information caused by biased labels, the authors propose a labeling framework that takes human factors into consideration to improve the efficacy of crowdsourcing. The key idea is that different tasks have their own preferences regarding human factors; a requester should therefore specify these settings transparently before launching a task. Making decisions about the tradeoffs among such specifications is a form of rehumanization.
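To make that idea concrete, the following is a minimal sketch of what a requester-side task specification along these lines could look like. It is illustrative only: the field names (demographic_quotas, min_hourly_pay, and so on) are my own assumptions, not the authors' API.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LabelingTaskSpec:
    """Hypothetical requester-declared settings for a crowdsourced labeling task."""
    title: str
    min_hourly_pay: float  # fair-pay floor, in USD per hour
    # Desired share of judgments per attribute value, for each human factor,
    # e.g., {"gender": {"female": 0.5, "male": 0.5}}
    demographic_quotas: Dict[str, Dict[str, float]] = field(default_factory=dict)
    show_settings_to_workers: bool = True  # transparency: disclose settings before launch

# Example: the requester declares the bias-relevant tradeoffs up front.
spec = LabelingTaskSpec(
    title="Label the sentiment of product reviews",
    min_hourly_pay=12.0,
    demographic_quotas={"gender": {"female": 0.5, "male": 0.5},
                        "country": {"US": 0.5, "BR": 0.5}},
)

The particular fields matter less than the principle: the bias-relevant choices are declared, and therefore auditable, before any worker sees the task.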

Furthermore, because of the framework’s transparency, requesters are made aware of any potential issues introduced and can mitigate biases in the process at any point after a task is launched. Having deployed a Python implementation of the framework on a popular crowdsourcing platform, the authors report “experiments with 1,919 workers collecting 160,345 human judgments.” The authors explain:

By routing microtasks to workers based on demographics and appropriate pay, our framework mitigates biases in the contributor sample and increases the hourly pay given to contributors.
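As a rough illustration of that claim (a sketch under my own assumptions about quota bookkeeping and worker profiles, not the paper's implementation), such routing reduces to checking a worker's attributes against the remaining quota for each demographic stratum and routing away workers whose strata are already filled:

def route_microtask(worker, remaining):
    """
    Return True if this worker should receive the next microtask.
    'remaining' maps each human factor (e.g., "gender", "country") to the
    number of judgments still needed per attribute value.
    """
    for factor, counts in remaining.items():
        if counts.get(worker.get(factor), 0) <= 0:
            return False  # stratum already filled, or worker value unknown
    for factor, counts in remaining.items():
        counts[worker[factor]] -= 1  # reserve one slot in each matching stratum
    return True

# Example: only one more judgment is wanted from workers in the US.
remaining = {"country": {"US": 1, "BR": 40}}
route_microtask({"country": "US"}, remaining)   # True; US quota drops to 0
route_microtask({"country": "US"}, remaining)   # False; routed away from this worker

Presumably, appropriate pay can be enforced analogously, by deriving the per-judgment reward from a declared hourly floor and observed completion times, although the exact mechanism is the paper's to describe.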

The quality of crowdsourcing work depends on the quality of the labels collected. While popular mobile computing devices and broadband networks make it easy to collect input from the public, controlling data quality remains a challenge. This paper provides a practical approach to managing human factors in crowdsourcing, with convincing results. Researchers and practitioners working in the area of socially aware computing and machine learning should benefit from reading this paper.

Reviewer: Chenyi Hu   Review #: CR147277 (2111-0271)
Human Factors (H.1.2)
 
