Superposition Catastrophe

  • Bowers et al. (2014)
  • A common claim of connectionist models: they learn the best representations for a given task
  • Learned representations are emergent, not stipulated
  • If a PDP model learns localist codes when it must code for multiple things at the same time, this strongly suggests that the superposition problem pressures models to learn selective (e.g. localist) coding
  • Recurrent network
  • Simple task over a vocabulary of 30 words
  • Banding/selective responses do not appear with distributed letter coding when the chance of ambiguity is null
  • This means: when ambiguity (the superposition catastrophe) is a real possibility, hidden units learn selective responses
  • Selective responses ‘emerge’ in response to the potential for superposition catastrophe
  • Recurrent networks trained to store multiple things at the same time over the same set of units learn highly selective (localist) representations
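The ambiguity the notes describe can be sketched in a few lines. This is not the Bowers et al. simulation (which trained recurrent networks on word lists); the words and the 4-unit vectors below are invented purely to show why superposing distributed codes over shared units can be ambiguous while localist (one-hot) codes are not:

```python
import numpy as np

# Hypothetical distributed codes: each "word" is a pattern over 4 shared units.
distributed = {
    "cat": np.array([1, 1, 0, 0]),
    "dog": np.array([0, 0, 1, 1]),
    "sun": np.array([1, 0, 1, 0]),
    "sky": np.array([0, 1, 0, 1]),
}

def superpose(codes, a, b):
    """Store two words at once over the same units (elementwise co-activation)."""
    return np.maximum(codes[a], codes[b])

# The blend of {cat, dog} is identical to the blend of {sun, sky}:
# nothing downstream can tell which pair was stored.
blend1 = superpose(distributed, "cat", "dog")
blend2 = superpose(distributed, "sun", "sky")
print(np.array_equal(blend1, blend2))  # True: superposition catastrophe

# Localist codes: one dedicated unit per word (one-hot). Superposing any
# two distinct words uniquely identifies the stored set.
localist = {w: np.eye(4, dtype=int)[i] for i, w in enumerate(distributed)}
blend3 = superpose(localist, "cat", "dog")
blend4 = superpose(localist, "sun", "sky")
print(np.array_equal(blend3, blend4))  # False: the blends stay distinct
```

The elementwise-maximum blend is one simple stand-in for co-activating units; the same ambiguity arises with summation. This is the pressure the notes point to: selective units make superposed patterns decodable.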