Computing Reviews
Biases in AI systems
Srinivasan R., Chander A. Communications of the ACM 64(8): 44-49, 2021. Type: Article
Date Reviewed: Oct 24, 2022

As Srinivasan and Chander discuss, software packages and algorithms are subject to many biases, for example, those related to images on the web. This is thus an article about controlling computer applications, not about controlling human users. Machine learning (ML) and artificial intelligence (AI) architectures are macro-level examples of how inference tools built on large unstructured datasets may produce inaccurate, if not intentionally malicious, predictions. The authors extend this discussion by arguing that, beyond the need for fairly designed AI algorithms, domain and nondomain experts should highlight “practical aspects that can be followed to limit and test for bias during problem formulation, data creation, data analysis, and evaluation.”
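
The authors' recommendations are procedural rather than algorithmic, but the evaluation stage lends itself to a concrete check. The following is a minimal sketch, not taken from the article, of one widely used bias test, the disparate impact ratio; the data, the names, and the binary sensitive attribute are illustrative assumptions.

    # Hypothetical sketch: the disparate impact ratio as an
    # evaluation-stage bias test. Names and data are invented,
    # not taken from the article under review.
    from collections import defaultdict

    def disparate_impact(predictions, groups):
        """Ratio of positive-prediction rates between the least- and
        most-favored groups; values well below 1.0 suggest bias."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred == 1)
        rates = {g: positives[g] / totals[g] for g in totals}
        return min(rates.values()) / max(rates.values())

    # Toy data: group "b" receives positive predictions far less often.
    preds = [1, 1, 0, 1, 1, 0, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(disparate_impact(preds, groups))  # ~0.33, far below the 0.8 rule of thumb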

Piecewise linear reifications would, perhaps, allow systems “to learn numerical function values at a number of equidistant points in the attribute space and use linear interpolation to predict function values at other points” [1]. Furthermore, proxies can only be a snapshot of the real phenomenon in a sample (so-called measurement bias), and labeling may be skewed by the subjective opinions of labelers (so-called label bias). Unknown mechanisms such as the halo effect, “the predisposition of an overall impression to influence the observer” [2], may also play a role.
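
To make the scheme quoted from [1] concrete, here is a minimal sketch under the assumption of a single numeric attribute sampled on an equidistant grid; the function, names, and one-dimensional setting are invented for illustration.

    # Minimal sketch of the interpolation scheme quoted from [1]: store
    # function values at equidistant grid points and linearly interpolate
    # between them. The 1D setting and all names are assumptions.
    def predict(x, x0, step, values):
        """Estimate f(x) from samples taken at x0, x0 + step, x0 + 2*step, ..."""
        i = int((x - x0) // step)            # index of the grid cell's left edge
        i = max(0, min(i, len(values) - 2))  # clamp to the sampled range
        t = (x - (x0 + i * step)) / step     # fractional position inside the cell
        return (1 - t) * values[i] + t * values[i + 1]

    # Equidistant samples of f(x) = x**2 on [0, 4] with step 1.
    vals = [0.0, 1.0, 4.0, 9.0, 16.0]
    print(predict(2.5, 0.0, 1.0, vals))  # 6.5, the linear estimate of 2.5**2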

Computer applications are not evaluated negatively here; rather, a lack of trust is explored, for example, how computers may influence our personal expectations. Though there is much literature on the topic, the authors ask readers to understand “the structural dependencies among various features in datasets.” Creating such “dependencies” implies that our emotions are a predictive expression of whether or not a general cognitive association will lead to an evaluative disadvantage.
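
As a toy illustration of checking one such dependency (everything here, from the feature names to the data, is an invented assumption, not the authors' method), one can measure how strongly a nominally neutral feature tracks a sensitive attribute:

    # Hypothetical sketch: probing a structural dependency between a
    # sensitive attribute and a seemingly neutral proxy feature.
    from statistics import correlation  # Python 3.10+

    sensitive_attr = [0, 0, 0, 1, 1, 1, 0, 1]
    zip_code_income = [72, 68, 75, 41, 39, 44, 70, 40]

    # A strong correlation means the proxy can reintroduce the sensitive
    # attribute into a model even after that attribute is dropped.
    print(correlation(sensitive_attr, zip_code_income))  # close to -1.0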

Reviewer: Romina Fucà
Review #: CR147506 (2301-0007)
1) Šuc, D.; Vladušič, D.; Bratko, I. Qualitatively faithful quantitative prediction. Artificial Intelligence 158 (2004), 189–214.
2) Varona, D.; Suárez, J. L. Discrimination, bias, fairness, and trustworthy AI. Applied Sciences 12 (2022), https://doi.org/10.3390/app12125826.
Software Architectures (D.2.11)
Information Search and Retrieval (H.3.3)
General (H.0)
Mathematical Software (G.4)
Other reviews under "Software Architectures":
Software architecture in practice. Bass L., Clements P., Kazman R., Addison-Wesley Longman Publishing Co., Inc., Boston, MA, 1998. Type: Book (9780201199307). Date: Sep 1 1999
CORBA design patterns. Mowbray T., Malveau R., John Wiley & Sons, Inc., New York, NY, 1997. Type: Book (9780471158820). Date: Sep 1 1998
Developing business systems with CORBA. Sadiq W., Cummins F., Cambridge University Press, New York, NY, 1998. Type: Book (9780521646505). Date: Feb 1 1999
