Computing Reviews

A framework for understanding sources of harm throughout the machine learning life cycle
Suresh H., Guttag J. EAAMO 2021 (New York, NY, Oct 5-9, 2021), 1-9. Type: Proceedings
Date Reviewed: 01/05/23

Recent studies show how machine learning has increasingly begun to affect people and society, for better and for worse, pointing out its main advantages and disadvantages across different areas of activity. Anticipating and preventing problems in this field will help to address a variety of negative consequences, such as harm.

In this paper, the authors show how harm can be traced throughout the machine learning life cycle, and they do an extraordinary job of identifying the possible sources of a variety of harms. They accomplish this goal by following a basic pipeline that includes data collection, model development, and deployment. This structure makes the discussion of the identified issues precise and productive, and it also supports proposing mitigations for them.

The work goes on to trace harmful conduct, including behaviors that cause physical or psychological harm, such as harassment and intimidation, which might provoke fear, alarm, and distress. The idea presented by the authors could also be extended to self-harm and self-neglect behaviors.

The proposed machine learning pipeline is well structured. It starts with the data generation process, whose purpose is to collect data: a specific population is defined and sampled from, and features and labels are measured. The obtained dataset is then split into training and test sets. The overall process is cyclic: the decisions influenced by these models in turn affect the state of the world.
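To make the data generation stage concrete, consider a minimal Python sketch (the paper itself contains no code; the population model, feature dimensions, and the 80/20 split ratio here are illustrative assumptions):

    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=0)

    # Hypothetical data generation: sample n individuals from a target
    # population, then measure features and a label for each of them.
    n = 1000
    features = rng.normal(size=(n, 4))  # measured features
    labels = (features[:, 0] + rng.normal(size=n) > 0).astype(int)  # noisy labels

    # Split the obtained dataset into training and test sets (80/20 here).
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)

Each choice in this sketch, which population is sampled, which features are measured, and how labels are assigned, corresponds to a point in the life cycle where the authors locate a potential source of harm.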

Aptly, the authors point out how “the ability to independently make increasingly complex decisions--such as which financial products to trade, how vehicles react to obstacles, and whether a patient has a disease--and continuously adapt in response to new data” distinguishes machine learning from the digital technologies that came before it [1]. However, these algorithms don’t always operate faultlessly; “they don’t always make ethical or accurate choices” [1].

The proposed model can be extended to other sectors and areas that rely on the likelihood that someone may, for example, default on a debt or contract an illness. Many predictions are made; it therefore stands to reason that some of them will turn out to be inaccurate. By design, the proposed model contains an error component that depends on a variety of factors, “including the amount and quality of the data used to train the algorithms, the specific type of machine-learning method chosen (for example, deep learning, which uses complex mathematical models, versus classification trees that rely on decision rules), and whether the system uses only explainable algorithms,” which may prevent it from achieving maximum accuracy [1].

The authors then focus on another important component: bias, which must be considered when selecting training data. Predictive engines are at the heart of machine learning models, and the authors demonstrate that large datasets enable machine learning algorithms to learn from the past and predict the future for a targeted population. Models can process large amounts of material and infer intent when that intent is labeled. By digesting millions of well-labeled data points, such as times and locations, they can learn to recognize fine distinctions, for example, between different levels of harm.
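As a hedged illustration of how training data selection can introduce bias, the following toy Python simulation (my own sketch, not from the paper; the group definitions and sample sizes are assumptions) underrepresents one subpopulation in the training set and then measures the resulting accuracy gap:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=1)

    def make_group(n, label_col):
        # Simulate a subpopulation whose label depends on one feature column.
        X = rng.normal(size=(n, 2))
        y = (X[:, label_col] > 0).astype(int)
        return X, y

    # Hypothetical representation bias: group B is underrepresented in the
    # training data, so the model mostly learns group A's pattern.
    Xa, ya = make_group(1000, label_col=0)  # group A: well represented
    Xb, yb = make_group(50, label_col=1)    # group B: underrepresented

    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

    # A balanced held-out evaluation exposes the accuracy gap between groups.
    for name, col in [("A", 0), ("B", 1)]:
        X_test, y_test = make_group(2000, label_col=col)
        print(f"group {name} accuracy: {model.score(X_test, y_test):.2f}")

In this sketch, the learned model is accurate for the well-sampled group and close to chance for the underrepresented one, which is precisely the kind of sampling-stage harm the authors' framework is designed to surface.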

In the end, the authors demonstrate that machine learning is one of the most intriguing technological advancements of practical relevance in the last ten years. When paired with big data technology and the enormous processing power made available by the public cloud, machine learning has the potential to transform how individuals interact with technology, and possibly entire industries. However promising it seems, machine learning technology needs careful preparation to prevent unintentional biases.


1) Babic, B.; Cohen, I. G.; Evgeniou, T.; Gerke, S. When machine learning goes off the rails. Harvard Business Review (2021), https://hbr.org/2021/01/when-machine-learning-goes-off-the-rails

Reviewer:  Mihailescu Marius Iulian Review #: CR147532 (2306-0078)
