Have you ever felt unsure about the output of an artificial intelligence (AI)-based program? Frustrated by unintelligible results? This book’s title implies that there is an approach to making such output understandable, which would be very useful. As the subtitle, “philosophical foundations,” indicates, the approach is thoroughly grounded in philosophy; it appears to rest mainly on the theory of content. I was not able to fully understand what the authors mean by “content” or “theory of content.” Even browsing a referenced book, A Theory of Content and Other Essays by Jerry Fodor [1], did not help. So, unfortunately, I must review the book strictly from a computer science (CS) viewpoint.
The main example in this book is a program (SmartCredit) that calculates a credit score (among other things). Most of us know what a credit score is and how it may determine whether we can buy a house. Credit scores used to be calculated by hand, and the process is quite subjective. The score is not a binary decision but a number between 300 and 850, and it has no units (for example, dollars or probability of default). Usually a bank officer uses it as one consideration in a lending decision. Testing such a program can compare its output against human-calculated scores, and data from actual use can be used to refine the decision process. Recent approaches to interpreting large language models might also be able to identify inappropriate bias (for example, gender or age) in the credit score number.
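The testing strategy mentioned above, comparing a scoring program’s output against human-calculated scores, can be sketched in a few lines. This is a minimal, hypothetical illustration: the data, the tolerance threshold, and the function names are all invented here, and SmartCredit’s actual interface is not described in the book.

```python
# Hypothetical sketch: validating a credit-scoring program against
# human-calculated benchmark scores. All names, data, and the
# tolerance value are invented for illustration.

def within_tolerance(program_score, human_score, tolerance=25):
    """Flag whether the program agrees with the human benchmark."""
    return abs(program_score - human_score) <= tolerance

# Paired (program output, human-calculated score) for test applicants.
test_cases = [
    (550, 560),   # close agreement
    (700, 705),   # close agreement
    (620, 710),   # large divergence, worth investigating
]

agreements = [within_tolerance(p, h) for p, h in test_cases]
agreement_rate = sum(agreements) / len(agreements)
print(f"Agreement rate: {agreement_rate:.0%}")
```

In practice, the divergent cases (not the agreement rate itself) are the interesting output: each one is a candidate for the kind of “why this number?” explanation the book’s questions ask for.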
In the presented Socratic dialogue, the philosopher proposes questions that a user might need answered:
- (1) “What does the output ‘550’ that has been assigned to me mean?”
- (2) “Why is the ‘550’ that the computer displays on the screen an assessment of my credit-worthiness? What makes it mean that?”
- (3) “How is the final meaningful state of SmartCredit (the output ‘550’ ...) the result of other meaningful considerations that SmartCredit is taking into account?”
I did not see an example of added content that would have helped to answer those questions. The first question can be answered from general knowledge, the second from the data the program uses, and the third perhaps by the recent approaches to interpreting large language models. However, since I do not completely understand “the theory of content,” I cannot evaluate the authors’ approach.
My recommendation: you need a strong background in philosophy to understand and appreciate this book.