Slides: Explainable Machine Learning: An Integrated Epistemic-Ethical Analysis

Author: Dan Hicks

Published: February 1, 2019

Machine learning (ML) algorithms have become widely adopted over the past decade. Contemporary ML can achieve near-human levels of accuracy, but often operates as an inscrutable “black box.” This has stimulated significant research into methods for explaining the behavior of ML systems. However, many of the proposed methods violate the common assumption that explanations must be true. Arguably, such “explanations” of ML systems are not actually explanations at all. I draw on work in the philosophy of science and political philosophy to clarify the requirements for explainable ML. Recent work on idealization in science suggests that explanations can be false, but only when these falsehoods promote the goals of inquiry. I argue that, at least when ML is used in policy contexts, the goals of explainable ML ultimately trace back to democratic accountability, and I conclude that ML can be explainable only when system development is participatory and democratic.

PDF: https://drive.google.com/open?id=0B6oYmzobonqoM1lNZWdmMFEzTG8