As Srinivasan and Chander discuss, software packages and algorithms encounter many biases related to images on the web; theirs is therefore an article about controlling computer applications, not human users. Machine learning (ML) and artificial intelligence (AI) architectures are macro-level examples of how inference tools built on large, unstructured data may produce inaccurate, if not intentionally malicious, predictions. The authors go further, arguing that beyond the need for fairly designed AI algorithms, domain and nondomain experts alike should highlight “practical aspects that can be followed to limit and test for bias during problem formulation, data creation, data analysis, and evaluation.”
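The “data analysis” stage the authors mention can begin with checks as simple as comparing label rates across groups. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the sample data, and the group labels are assumptions for the example, not anything from the article.

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """Compute the positive-label rate per group from (group, label) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += int(label)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical labeled sample: (group, label) pairs.
data = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
rates = positive_rate_by_group(data)
# A large gap between groups is a prompt for closer inspection,
# not by itself proof of bias.
gap = max(rates.values()) - min(rates.values())
```

A disparity surfaced this way would then feed back into the earlier stages the authors name, such as revisiting how the data were created or how the labels were assigned.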
Piecewise linear reifications would, perhaps, allow systems “to learn numerical function values at a number of equidistant points in the attribute space and use linear interpolation to predict function values at other points.” Furthermore, proxies can only be a snapshot of the real phenomenon in a sample (so-called measurement bias), and labeling may be affected by the subjective opinions of labelers (so-called label bias). Unknown mechanisms such as the halo effect bias, “the predisposition of an overall impression to influence the observer,” may also have an effect.
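The quoted technique can be sketched directly: store function values at equidistant points, then predict elsewhere by linear interpolation between the two nearest stored points. This is a minimal one-dimensional illustration; the function names and the quadratic test function are assumptions for the example, not from the article.

```python
def fit_equidistant(f, lo, hi, n):
    """Sample f at n equidistant points on [lo, hi]."""
    step = (hi - lo) / (n - 1)
    xs = [lo + i * step for i in range(n)]
    ys = [f(x) for x in xs]
    return xs, ys

def interpolate(xs, ys, x):
    """Predict f(x) by linear interpolation between stored points."""
    if x <= xs[0]:
        return ys[0]          # clamp below the sampled range
    if x >= xs[-1]:
        return ys[-1]         # clamp above the sampled range
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Example: approximate f(x) = x**2 on [0, 4] from 5 stored points.
xs, ys = fit_equidistant(lambda x: x * x, 0.0, 4.0, 5)
print(interpolate(xs, ys, 1.5))  # between f(1)=1 and f(2)=4 -> 2.5
```

The example also shows the limitation implied in the passage: between stored points the prediction is only a straight-line proxy for the true function, which is one more place where a model's view of the data is an approximation of the underlying phenomenon.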
The point here is not that computer applications are evaluated negatively; rather, a lack of trust is explored, for example, in how computers may influence our personal expectations. Though there is much literature on the topic, the authors ask readers to understand “the structural dependencies among various features in datasets.” Creating such “dependencies” implies that our emotions are a predictive expression of whether or not a general cognitive association will lead to an evaluative disadvantage.