Computing Reviews
Improving fairness in machine learning systems: what do industry practitioners need?
Holstein K., Wortman Vaughan J., Daumé III H., Dudik M., Wallach H. CHI 2019 (Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, May 4-9, 2019) 1-16. 2019. Type: Proceedings
Date Reviewed: Mar 24 2020

Research involving qualitative methods is generally rare in technical domains. Holstein et al. break this trend, however, setting out to explore fairness in machine learning systems and deploying empirical methods to arrive at their conclusions.

Many social ideals, like justice or health, are best defined and understood by exploring their negative forms, injustice or illness. Fairness, too, is a hard-to-define positive construct, one better addressed by looking for unfairness, which in turn shows up in different ways in different situations. In its simplest form, unfairness could be a prediction made by a machine learning algorithm whose suggestions are prejudiced with respect to ethnicity, race, gender, or other social attributes.
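To make the point concrete, one narrow, quantitative reading of unfairness is a gap in positive-prediction rates between demographic groups (demographic parity). The short Python sketch below is not from the paper under review; the arrays y_pred and group are hypothetical stand-ins for a model's outputs and a sensitive attribute.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # Gap between the highest and lowest positive-prediction rates
    # across groups; 0.0 means every group is treated alike.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical predictions and a hypothetical sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5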

To capture such subtle nuances, the researchers use “a series of semi-structured, one-on-one interviews” with a cross section of machine learning practitioners spread across “ten major companies,” with each interview designed to probe in more detail the aspects that surfaced in the preceding ones. The authors hypothesize one major gap in contemporary machine learning practice: the absence of tools to assess fairness metrics. Upon confirming this hypothesis through the interviews, they validate their findings over a larger sample of the machine learning developer population via an online survey administered through Qualtrics.

Significant findings and future directions for machine learning practitioners are listed based on this work. These include: a) curating high-quality datasets with an eye toward fairness; b) building better processes and metrics into the development stages; c) auditing demographic datasets for fairness at a coarse-grained level (a simple version of such an audit is sketched below); d) incorporating debugging mechanisms into development processes that might require detailed investigation; and e) automating auditing tools and newer approaches in the prototyping of machine learning systems.
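As a concrete reading of finding (c), a coarse-grained audit can be as simple as disaggregating a standard metric by demographic group. The following sketch assumes a hypothetical pandas DataFrame with label, prediction, and gender columns; it illustrates the idea only and is not tooling described in the paper.

import pandas as pd

# Hypothetical evaluation records; all column names are assumptions.
df = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1, 0, 1],
    "prediction": [1, 0, 0, 1, 0, 1, 1, 0],
    "gender":     ["f", "f", "f", "f", "m", "m", "m", "m"],
})

# Accuracy disaggregated by group: a large gap between groups flags
# a potential fairness problem that warrants detailed debugging.
per_group = (df["label"] == df["prediction"]).groupby(df["gender"]).mean()
print(per_group)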

The paper urges the machine learning development community to exercise more restraint in order to prevent unfair solutions that might further entrench social inequities.

Reviewer:  CK Raju Review #: CR146941 (2008-0195)
Editor Recommended

Learning (I.2.6)
Sociology (J.4 ...)
User-Centered Design (H.5.2 ...)
General (K.4.0)
General (K.7.0)
General (I.0)
Other reviews under "Learning":

Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed), BYTE 10(4): 265-273, 1985. Type: Article. Reviewed: Nov 1 1985

Macro-operators: a weak method for learning
Korf R., Artificial Intelligence 26(1): 35-77, 1985. Type: Article. Reviewed: Feb 1 1986

Inferring (mal) rules from pupils’ protocols
Sleeman D., Progress in artificial intelligence (Orsay, France, 1985). Type: Proceedings. Reviewed: Dec 1 1985

more...
