Machine learning, the technology behind most of what we call artificial intelligence, guides decisions that impact our daily lives, but are we always aware of it? Applications extend to many sectors, such as entertainment with music or movie recommendations, but also product recommendations on commercial websites. Fields with more serious consequences for our lives use it as well, such as medical diagnosis and bank lending.
The specificity of this technology is that it relies on data rather than on models designed by experts. The algorithm learns behaviors by processing large amounts of data, optimizing an internal model along the way. Once this training phase is complete, it can produce results on new data it is given.
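To make these two phases concrete, here is a minimal sketch of the train-then-predict workflow using scikit-learn. The library, the dataset and the model choice are illustrative assumptions, not something the article prescribes:

```python
# Minimal sketch of the two phases described above, using scikit-learn
# (one possible library among many; chosen here purely for illustration).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # example medical dataset
X_train, X_new, y_train, _ = train_test_split(X, y, random_state=0)

# Training phase: the model optimizes its internal parameters from data,
# instead of applying rules designed by experts.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Inference phase: once trained, the model produces results on new data.
predictions = model.predict(X_new)
```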
This specificity gives it the considerable advantage of being able to reproduce complex behaviors that are difficult for experts to model. In image recognition, for example, machine learning has surpassed the best hand-crafted image analysis algorithms.
The downside, however, is that these machine learning algorithms (or models) are particularly opaque: they rely on hundreds or even thousands of intermediate variables, whose values result from a complex optimization process that no human can follow.
Given the importance of machine learning algorithms in our lives, it becomes crucial to better understand their behavior, biases and limitations. This is the objective of the fairly recent discipline of machine learning explainability, often called eXplainable AI. This discipline, at the crossroads of the social sciences, human-computer interaction and artificial intelligence, aims to make machine learning algorithms more transparent.
"If it works, it's good enough, right?" Well, it's not! Many examples of machine learning abuses have already hit the headlines:
From a less sensational point of view, for many applications, it is crucial to be able to explain the reasons for a machine learning system's prediction. For example:
Explainability, or eXplainable AI (XAI), is the discipline at the intersection of the social sciences and AI that seeks to make machine learning models more transparent. It provides technical tools and recommendations for making machine learning algorithms more understandable to humans.
Explainability serves many purposes, depending on who the explanation is for:
Explainability consists of justifying a result. Because this justification is addressed to a person, several things must be kept in mind when designing a system that aims to explain a result or a model.
Drawing on insights from human psychology, Tim Miller offers several good practices for designing good explanations in his 2017 article:
To be convinced of the value of these recommendations, one only needs to look at how the GAFAs explain their recommendation systems. "Those who bought X also bought Y", "Because you know X" and "Because you looked at X" are all explanations that appeal to the user's intuition. They deliberately limit themselves to a very small number of justifications and are tailored to their audience.
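As a toy illustration of where a "Those who bought X also bought Y" justification can come from, here is a hypothetical sketch based on simple co-purchase counting. Real recommendation systems are far more sophisticated; the baskets and item names below are fabricated for the example:

```python
# Hypothetical sketch: deriving a "Those who bought X also bought Y"
# justification from raw purchase data via pairwise co-occurrence counts.
from collections import Counter
from itertools import combinations

baskets = [  # toy purchase histories (fabricated for illustration)
    {"guitar", "strings", "tuner"},
    {"guitar", "strings"},
    {"guitar", "tuner"},
]

# Count how often each ordered pair of items appears in the same basket.
co_purchases = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_purchases[(a, b)] += 1
        co_purchases[(b, a)] += 1

def explain(item: str) -> str:
    """Return the intuitive, audience-tailored justification."""
    partner, _ = max(
        ((other, n) for (x, other), n in co_purchases.items() if x == item),
        key=lambda pair: pair[1],
    )
    return f"Those who bought {item} also bought {partner}."

print(explain("guitar"))  # -> Those who bought guitar also bought strings.
```

The explanation exposes only one co-purchase statistic rather than the full model, which is exactly the kind of selectivity these recommendations advocate.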
This article was conceived and written by Thibaut Chataing, Yoann Couble and Camille Ponthus.
See you soon for part 2!