Mental Model Matching

  • A user’s mental model [42] of a technology is their internal understanding of how that technology works.
  • People rely heavily on their mental models of technology to make decisions about it.
  • XAI stakeholders have been found to use their mental models of XAI to decide when to use the technology [10], to evaluate how much to trust the outputted explanations [10, 20, 22], and to make sense of any results [22, 30].
  • While ML practitioners may have received specialized training on how LLMs work, this is decidedly not the case for the vast majority of the general population.
  • How a general user believes an LLM works may differ greatly from how it actually works, and this mismatch can be dangerous.
  • It is not difficult to imagine frightening scenarios in which users anthropomorphize or deify an LLM chatbot, understanding it to be a “magical” source of ground truth. This could very quickly lead to conspiracy theories and the legitimization of disinformation campaigns [see, e.g., 23].