Artificial intelligence (AI) techniques are useful in creating effective computing tools for diverse applications in areas such as transportation, agriculture, medicine, and justice systems. Yet the credibility of AI remains a subject of intense intellectual debate, especially among computing experts. What are the characteristics of a trustworthy AI system? How should trustworthy AI systems be formally verified? How should future trustworthy systems built on AI techniques be constructed? Wing succinctly discusses the problems and solutions surrounding these questions in this inspiring article.
The article clearly documents and critiques the historical record of trustworthy computing and recommends precise metrics for assessing trustworthy AI. Certainly, trustworthy AI systems should provide accurate, fair, robust, and evidence-based data and decision justifications, going well beyond the reliability, security, privacy, and usability requirements of existing trustworthy computing models in the literature.
Wing presents precise operational definitions of accuracy, robustness, fairness, accountability, transparency, interpretability, and ethical compliance for future AI systems. So how should innovative trustworthy AI be designed, realized, and deployed? She succinctly surveys both established and emerging formal techniques for building and verifying trustworthy AI systems. Indeed, computer scientists ought to work with statisticians to help resolve the issues of incorporating unknown and unobserved data into the decision-making models of reliable AI systems.
Wing compellingly outlines the current and future areas of research collaboration among government, industry, and academia required to build and maintain dependable AI systems. Without a doubt, it takes well-resourced companies such as IBM, Google, Microsoft, and Facebook; government agencies like DARPA and NSF; and even the US White House to provide and implement guidance for building AI systems that earn public trust. AI computer scientists, statisticians, and social scientists should read this article and offer insights into the development of robust AI algorithms for investigating issues such as security threats and bias detection.