Social Construction of XAI: Do We Need One Definition to Rule Them All?
@ehsanSocialConstructionXAI2022
Abstract
In this paper, we argue that a singular definition of XAI is neither feasible nor desirable at this stage of XAI's development.
We view XAI through the lens of the Social Construction of Technology (SCOT) to explicate how diverse stakeholders (relevant social groups) hold different interpretations (interpretive flexibility) that shape the meaning of XAI. Forcing a standardization (closure) on these pluralistic interpretations too early can stifle innovation and lead to premature conclusions.
Of Bicycles & Explainable AI
As we reflect on the evolution of the bicycle, why and how did things evolve the way they did?
We will address this question using three concepts from SCOT. First, we have relevant social groups—stakeholders with skin in the game, such as bikers, families of bikers, and mechanics fixing bikes. These are the people who are involved in or affected by a technological development.
Different relevant social groups have their own interpretive flexibility—interpretations of what it means to be a bicycle.
Different interpretive flexibilities can give rise to different types of bicycles, such as mountain bikes, electric bikes, and BMX bikes.
Finally, we have the notion of closure: over time, some interpretations of the bicycle achieved stability while others withered out (e.g., equal-sized wheels won out over differently-sized wheels).
Just like bicycles, XAI has its relevant social groups.
Let's consider two relevant social groups: the Natural Language Processing (NLP) and Computer Vision (CV) communities.
Given that each group has its own ways of knowing (epistemology), there is interpretive flexibility in how they operationalize the notion of explainability.
In NLP question answering, explanations often take the form of additional text that justifies the ground-truth answer.
In CV, object recognition can use saliency maps that show how visual features correlate with a predicted label.
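As a concrete illustration of the CV interpretation, here is a minimal sketch of an input-gradient saliency map. This is an assumption for illustration only—the paper does not prescribe any particular method; gradient saliency is just one common operationalization. For a linear classifier whose score is w·x, the gradient with respect to each input pixel is simply the corresponding weight, so the saliency of pixel i is |w_i|.

```python
import numpy as np

def saliency(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Absolute input-gradient of the linear score w.x.

    For a linear model the gradient d(w.x)/dx equals w, so the
    saliency map is |w| regardless of x. A deep model would need
    backpropagation here instead of this closed form.
    """
    return np.abs(w)

# Toy 2x2 "image" flattened to 4 pixels: this hypothetical
# classifier only weights the top-left pixel.
w = np.array([3.0, 0.0, 0.0, 0.0])
x = np.ones(4)
s = saliency(w, x).reshape(2, 2)
print(s)  # the top-left entry dominates the map
```

The point of the sketch is that the CV community's "explanation" is a per-pixel importance score over the input, in contrast to the NLP community's free-text justifications—two operationalizations of the same word.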
This is to be expected because, unlike bicycles, we don’t have 200+ years of development to reach clusters of closures yet.
Solving XAI challenges may require more than just "opening the black box" [6].
Human-centered XAI (HCXAI) advocates tackling XAI problems through a sociotechnical view (vs. a purely technical one) [7].
We need to consider who is opening the box just as much as the algorithmic mechanisms of opening it.
Whereas much of the initial focus was on developers and data scientists as end-users of XAI systems, there is growing recognition that we need to accommodate a diverse set of end-users, especially non-AI experts [10, 11].
Making Progress in XAI
XAI is pluralistic.
Given the different epistemic cultures co-existing in the space, we cannot expect monolithic conformity at this stage.
Pluralism, however, does not mean that anything goes; in fact, it’s the opposite—we need to be precise in our articulation of what we mean by explainability when we communicate.
Thus, instead of using the term at face value, whenever we write a paper, we should strive to justify how our conception of explainability satisfies some of the shared goals we have in the space.
Context matters: who is saying what, when, and why. To grasp the flavor of explainability in a given context, we need to pay attention to a relevant social group's interpretation of it and how that informs their operationalization.
While the notion of XAI is in flux, we are fortunate to join the conversation at this stage. We have substantial agency in steering the discourse, a privilege we need to exercise responsibly.