eXplainable Artificial Intelligence

Introduction

Sophisticated machine learning models increasingly assist humans across a wide variety of domains, yet they are often so complex and intricate that no one – not even their designers – can understand the “cognitive” processes leading to their decisions.

The main reason for the current focus on eXplainable AI (XAI) is the growing number of AI applications, especially in critical domains (e.g. medicine, the military, finance) where the interpretability of decisions is a mandatory requirement.

Available Theses

  • Extending the PASTLE framework for image analysis
  • Knowledge probing of language models

Theses in Progress

  • None at the moment

Projects & Collaborations

In progress

Contacts

  • Vincenzo Moscato, Associate Professor
  • Marco Postiglione, PhD student
  • Valerio La Gatta, PhD student