Introduction to XAI

Anandkumar NS
4 min read · Apr 10, 2024


Explainable Artificial Intelligence

Explainable AI (XAI) refers to a set of methods for describing an AI model, its expected impact, and its potential biases. It helps shed light on a model’s accuracy, fairness, and transparency. As AI advances, we face the problem of explaining and retracing the decision flow of a model. Many current models are referred to as black boxes because their internal calculations are effectively impossible to interpret: such models are built directly from their training data, and even the data scientists who build them often cannot say why the algorithm reached a specific result. XAI was created to address this problem.

Why Explainable AI (XAI) Matters

There are many advantages to ‘bringing light to the black box’. Chief among them: once we know the reasons behind a model’s decisions, we can trust those decisions, which allows us to use AI in fields where the stakes are high. XAI is also a key requirement for responsible AI, a methodology for the large-scale implementation of AI in companies with fairness, model explainability, and accountability.

Explainable AI (XAI) Methods

Below are explanations of some common XAI methods; each helps expose the logic behind the decisions a model makes.

1. Local Interpretable Model-agnostic Explanations (LIME):- This is a very popular method for explaining the outputs of black-box models. It takes a local view of the data: rather than explaining the model globally, it approximates an explanation around a single input-output pair of the model.
Local Interpretable Model-agnostic Explanations (LIME)

This allows a localized view. LIME then generates perturbed versions of the instance (copies with random noise or small modifications), feeds this new data back into the black box, and observes how the model’s output changes. By fitting a simple, interpretable model to these perturbed samples, the explainer can report how much weight each feature has on the outcome. A short sketch of this workflow follows.
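To make this concrete, here is a minimal sketch using the Python lime package on tabular data; the iris dataset, the random-forest model, and the parameter choices are illustrative assumptions, not part of the original post.

```python
# Minimal LIME sketch on tabular data. The dataset, model, and
# num_features value are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this single row, queries the model on the perturbed
# copies, and fits a local linear surrogate whose coefficients serve
# as per-feature weights for this one prediction.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # [(feature condition, weight), ...]
```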

2. Shapley Additive Explanations (SHAP):- This is a model-agnostic approach to XAI that quantifies the importance of each feature in a given prediction. These feature importances are computed using the so-called Shapley values, which originate in cooperative game theory.

Shapley Additive Explanations (SHAP)

The idea is to distribute a model’s prediction across its features. SHAP works by running the model on many inputs in which feature values are varied, which lets us measure how the prediction changes when the value of a chosen feature changes; averaging these marginal contributions over combinations of features yields the Shapley value for each feature. A short sketch with the shap library follows.
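As a hedged illustration, the snippet below uses the shap library’s TreeExplainer on a tree ensemble; the breast-cancer dataset and the gradient-boosting model are stand-in choices for demonstration, not part of the original post.

```python
# Minimal SHAP sketch for a tree ensemble. The dataset and model are
# stand-ins chosen for demonstration.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Each row of shap_values holds per-feature contributions that, added
# to the expected value, reconstruct that prediction's raw output.
shap.summary_plot(shap_values, X.iloc[:100])
```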

3. Gradient-Weighted Class Activation Mapping (Grad-CAM):- This is a local, model-specific method used to explain Convolutional Neural Networks (CNNs), which are most commonly used for image classification tasks.

Gradient-Weighted Class Activation Mapping (Grad-CAM)

It produces heat maps that highlight the parts of an image most important to the prediction. The final convolutional layer of a CNN contains feature maps representing learned features such as edges and other shapes; Grad-CAM weights these maps by the gradients of the target class score and combines them into the heat map. A minimal sketch appears below.
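Below is a minimal Grad-CAM sketch in PyTorch, assuming a pretrained torchvision ResNet-18 and hooking its last convolutional block (layer4); the model, layer choice, and input preprocessing are assumptions made for illustration.

```python
# Minimal Grad-CAM sketch. Assumes a pretrained ResNet-18 and that the
# input image tensor is already resized and normalized.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block (layer4 in ResNet-18).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, target_class=None):
    """image: (1, 3, H, W) tensor. Returns an (H, W) heat map in [0, 1]."""
    logits = model(image)
    if target_class is None:
        target_class = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, target_class].backward()

    # Weight each feature map by its average gradient (global average pooling).
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = (weights * activations["feat"]).sum(dim=1)            # (1, h, w)
    cam = F.relu(cam)                                           # keep positive evidence
    cam = cam / (cam.max() + 1e-8)                              # normalize to [0, 1]
    # Upsample to input resolution for overlaying on the image.
    return F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                         mode="bilinear", align_corners=False)[0, 0]
```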

Ethical Implications

Explainable Artificial Intelligence (XAI) brings forth critical ethical implications in the development and application of AI systems. Central among these is the imperative to address fairness and bias, where XAI serves as a tool to identify and mitigate biases present in AI systems, thus preventing unfair treatment of individuals or groups. Additionally, XAI promotes transparency and accountability by providing explanations for AI system decisions, fostering trust and understanding among users. However, concerns arise regarding privacy and consent, as XAI techniques may involve analyzing sensitive data, necessitating clear communication and consent mechanisms.

Furthermore, XAI contributes to the trustworthiness and reliability of AI systems but raises questions about the accuracy and clarity of explanations provided. Accountability and responsibility become paramount, as XAI helps attribute decision-making factors, prompting discussions about the allocation of responsibility among stakeholders.

The societal impact of AI systems is also a concern, with XAI influencing decisions in critical domains like healthcare and criminal justice, demanding careful consideration of fairness, equity, and social justice implications. Ultimately, addressing these ethical implications requires collaboration among AI researchers, ethicists, policymakers, and stakeholders to ensure that AI systems are transparent, fair, accountable, and trustworthy, thereby benefiting individuals and society as a whole.

Resources:

  • “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable” by Christoph Molnar.
  • “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning”, edited by Wojciech Samek et al.
  • “Explainable AI in Healthcare: Concepts, Algorithms, and Applications” by Michael Miller and Peter D. Turney.
  • Towards Data Science, Medium.
  • AI Explainability 360.
