
There is a lot of buzz around AI these days. Almost every company either has plans to incorporate AI, is actively using it, or is rebranding its old rule-based engines as AI-enabled technologies. As more and more companies embed AI and advanced analytics within business processes and automate decisions, the need for transparency into how these models make decisions grows larger and larger. How do we achieve this transparency while harnessing the efficiencies AI brings? This is where the field of Explainable AI (XAI) can help.

Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation. In its simplest form, AI takes some inputs to produce an output. When we talk about Explainable AI, we are really talking about the input variables' impact on the output.

Why Explainability Matters?

AI has the power to automate decisions, and those decisions have business impacts, both positive and negative. Much like hiring decision-makers in an organization, it's important to understand how AI makes decisions. A lot of organizations want to leverage AI but are not comfortable letting the model make more impactful decisions because they do not yet trust it. One of the biggest challenges I've found in my experience isn't building the right model, it's getting stakeholder buy-in that the model makes a better decision than a human. Explainability helps with this, as it provides insight into how models make decisions. A better decision is not the same as a higher accuracy score or a lower RMSE; it's using the right inputs to derive a good answer. Oftentimes the decision-maker needs to understand the decision. How do we make people comfortable with the model? They need to see how the model reasoned before they are comfortable handing the decision off to it. We build trust in AI and models the same way we do in humans.

Let's dive into a practical example of explainability. What if we were to try to predict a car's mpg from its number of cylinders using a linear regression model? This can be easily done in any programming tool, or even Excel, these days. We find the coefficient and the intercept and plot it.

import requests
import pandas as pd
from io import StringIO
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model

# The URL for the dataset
url = ''
r = requests.get(url)

# Read the file in, normalizing tabs, and store as df
file = r.text.replace("\t", " ")

# list_labels written manually (assuming the UCI Auto MPG column order):
list_labels = ["mpg", "cylinders", "displacement", "horsepower", "weight",
               "acceleration", "model_year", "origin", "car_name"]
df = pd.read_csv(StringIO(file), sep=r"\s+", header=None, names=list_labels)

# Modeling
regr = linear_model.LinearRegression()
train_x = np.asanyarray(df[["cylinders"]])
train_y = np.asanyarray(df[["mpg"]])
regr.fit(train_x, train_y)

# Plotting the model
plt.scatter(df.cylinders, df.mpg, color='blue')
plt.plot(train_x, regr.coef_[0][0] * train_x + regr.intercept_[0], '-r')
plt.xlabel("Cylinders")
plt.ylabel("MPG")
plt.show()

The nice thing here is that we can simply see how the overall model makes decisions (global level). At the same time, the individual predictions are consistent with the global-level predictions. When we talk about impact, we differentiate between global and local importance: global importance can be thought of as how the model makes decisions overall; local importance is how the model makes decisions for this one person. These distinctions may seem small, but they have a significant impact as right-to-explanation legislation becomes more prevalent.
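Since the fitted line is the whole model here, we can read both levels of importance straight off of it. Below is a minimal sketch, reusing the regr model fitted above; the example car with 4 cylinders is hypothetical:

# Global level: one coefficient describes the model's behavior everywhere.
slope = regr.coef_[0][0]
intercept = regr.intercept_[0]
print(f"Global: predicted MPG = {slope:.2f} * cylinders + {intercept:.2f}")

# Local level: the same terms explain the prediction for one specific car.
cylinders = 4  # hypothetical single car
print(f"Local: {slope * cylinders:.2f} (cylinders effect) "
      f"+ {intercept:.2f} (baseline) = {slope * cylinders + intercept:.2f} MPG")

For a linear model, the local breakdown is just the global equation applied to one row, which is why the individual predictions stay consistent with the global picture; in more flexible models such as gradient-boosted trees, the two views can tell different stories.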

In this case, we will use the credit card default prediction dataset from UCI. The dataset is from a Taiwanese bank, and the columns include information on each borrower. We can use this dataset to predict payment defaults using XGBoost. Once we have built the model, we need to evaluate its performance, and since this is a credit decision, we are interested in how it makes decisions.
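To make that setup concrete, here is a hedged sketch of the modeling step. The file name default_of_credit_card_clients.csv and the target column name default are assumptions about how the UCI data has been saved locally, not part of the actual download:

import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the UCI credit card default data (file and column names are assumptions).
df = pd.read_csv("default_of_credit_card_clients.csv")
X = df.drop(columns=["default"])  # borrower attributes
y = df["default"]                 # 1 = defaulted on payment, 0 = paid

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit a gradient-boosted tree classifier.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Evaluate performance on held-out borrowers.
preds = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, preds))

# A first look at how the model makes decisions:
# the global feature importances learned by the trees.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))

The feature importances are the credit model's global view, the analogue of the regression coefficient above; explaining a single borrower's score is the local counterpart.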
