Explainable AI in practice
Deep learning approaches enable the creation of highly accurate models for a wide range of application domains, such as manufacturing or medicine. However, these models have a black-box character: the relationships they learn are so complex and abstract that they can no longer be understood by humans, not even by experts. In some applications, for example in safety-critical areas, not only the accuracy of the predictions but also trust in the algorithms is therefore crucially important.
For this reason, Fraunhofer IPA and the Institute for Innovation and Technology (iit) are working together on explainable AI (xAI) in two coordinated studies. The iit examines the specific needs for and usability of xAI in industry and the healthcare sector. On the one hand, this includes an analysis of the requirements for explanations, taking into account different industries and target groups as well as the current and planned future use of AI algorithms. On the other hand, the general strengths and weaknesses of various xAI solution approaches are assessed against these requirements, based on specific use cases from industry and from projects of the "Artificial Intelligence Innovation Competition" funded by the BMWi.
Fraunhofer IPA, in turn, analyzes currently popular explanation methods. The wide range of available methods, as well as the lack of a standardized interface for their practical implementation, makes it difficult to select one. What is missing is an overview and a set of evaluation criteria that allow the available methods to be classified appropriately. The categorization of the selected methods and their evaluation against various assessment criteria is intended to make it easier for readers to narrow down the appropriate method for their application.
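To illustrate the interface problem, the following minimal sketch applies two widely used open-source explanation libraries, shap and lime, to the same scikit-learn classifier. These two libraries are chosen here purely as examples and are not necessarily the methods evaluated in the studies; the point is that even for the same model, explainer construction, inputs, and outputs differ from library to library.

```python
# Minimal sketch (illustrative example, not taken from the studies):
# the same scikit-learn model explained with two popular xAI libraries.
# Note how each library exposes its own, non-standardized interface.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: a tree-specific explainer returning per-feature attributions for all samples
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X)

# LIME: a local surrogate model fitted around one individual prediction
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
lime_explanation = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=5
)
print(lime_explanation.as_list())  # top local feature weights for this one sample
```

Even in this small example, the two workflows differ in how the explainer is built, what data it needs, and what form the explanation takes, which is exactly why an overview with common evaluation criteria is helpful when choosing a method.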