As computing technology continues to evolve, artificial intelligence (AI) is playing an increasing role in many aspects of our lives. Although AI dates back many decades (I took an AI subject in my university studies in 1988), only recently have dramatic advances been made, with AI-based systems now deployed in fields as diverse as virtual assistants, recommendation systems, autonomous vehicles, and medical diagnostics. The rapid growth of AI is engaging researchers, engineers, innovators, and most certainly journalists. The success, or otherwise, of this growth will rest largely on the public's acceptance of AI as trustworthy.
In this paper, Li et al. set out to provide AI developers with a comprehensive guide for building trustworthy AI systems. The first section defines AI and machine learning, presents a theoretical framework for the important aspects of AI, and argues that acceptance and trustworthiness are fundamental to the technology's success. The second section examines what trustworthiness means in this context: it is much more than simple predictive accuracy, since the technology and the outcomes it generates must also be explainable, transparent, reproducible, accountable, and fair in the estimation of its users, the general public. The third section recommends a detailed, systematic approach covering development, algorithm design, system security, preparation of training data, and, importantly, bias and anomaly detection. The fourth section presents conclusions and future challenges, including the need for an interdisciplinary approach, opportunities for further research, and, importantly, end-user involvement in the development of trustworthy AI systems. A detailed set of references is provided.
This paper is an excellent, detailed discussion of an important aspect of a very topical subject.