Explainable AI: A guide for making black box machine learning models explainable

In the future, AI will explain itself, and interpretability could boost machine intelligence research. Getting the basics right is the way to get there, and Christoph Molnar’s book is a good place to start.

Machine learning is taking the world by storm, helping automate more and more tasks. As digital transformation expands, the volume and coverage of available data grow, and machine learning sets its sights on tasks of increasing complexity, with ever better accuracy.

But machine learning (ML), which many people conflate with the broader discipline of artificial intelligence (AI), is not without its issues. ML works by feeding historical, real-world data to algorithms that train models. A trained ML model can then be fed new data and produce results of interest, based on the patterns it learned from its historical training data.
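A minimal sketch of this train-then-predict cycle, using scikit-learn and its built-in breast cancer dataset as a stand-in for historical data (the dataset and model are illustrative choices, not taken from the article):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# "Historical" data; the held-out split plays the role of new, unseen cases.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

# Train a model on historical data ...
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# ... then feed it new data to produce results of interest.
predictions = model.predict(X_new)
print(f"Accuracy on unseen data: {model.score(X_new, y_new):.2f}")
```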

A typical example is diagnosing medical conditions. ML models can be trained on data such as X-rays and CT scans, then fed new scans and asked to identify whether a medical condition is present. In situations like these, however, getting an outcome is not enough: we need to know the explanation behind it, and this is where things get tricky.

Christoph Molnar is a data scientist and a PhD candidate in interpretable machine learning. He is the author of “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable”, in which he elaborates on the issue and examines methods for achieving explainability.
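To give a flavor of what such methods look like, here is a minimal sketch of one model-agnostic technique the book covers, permutation feature importance: shuffle one feature at a time and measure how much the model’s score drops. The dataset and model below are illustrative placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()  # illustrative stand-in for medical data
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Any fitted estimator works; a random forest is a typical black box.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times; a large drop in score means the
# model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The five features the model depends on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Because the method only needs predictions and a score, it applies to any black box model, which is exactly the setting the book addresses.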

Read the full article on ZDNet

