Common claim of connectionist models: they learn the best representations for a given task
Learned representations are emergent, not stipulated
If a PDP model nevertheless learns localist codes when required to code for multiple things at the same time, this strongly suggests that the superposition problem pressures models to learn selective (e.g. localist) coding.
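A minimal sketch of why superposition pressures models toward selective codes (the item names and code vectors below are illustrative assumptions, not drawn from any specific PDP model): when distributed codes for two items are co-activated, the blended activity can also match a third, non-presented item (a "ghost"), whereas localist codes remain unambiguous.

```python
import numpy as np

# Hypothetical localist codes: one dedicated unit per item.
localist = {"A": np.array([1, 0, 0]),
            "B": np.array([0, 1, 0]),
            "C": np.array([0, 0, 1])}

# Hypothetical distributed codes: items share units.
distributed = {"A": np.array([1, 1, 0, 0]),
               "B": np.array([0, 0, 1, 1]),
               "C": np.array([1, 0, 1, 0])}

def superpose(items, codebook):
    # Co-activate items via elementwise max (OR of patterns).
    return np.maximum.reduce([codebook[i] for i in items])

def decode(active, codebook):
    # An item reads out as "present" if its whole pattern
    # is contained in the superposed activity.
    return sorted(name for name, code in codebook.items()
                  if np.all(active >= code))

print(decode(superpose(["A", "B"], localist), localist))
# -> ['A', 'B']  (exact recovery)
print(decode(superpose(["A", "B"], distributed), distributed))
# -> ['A', 'B', 'C']  (ghost item C: its pattern is covered by the blend)
```

The ghost item is the superposition problem in miniature: with shared units, co-activation destroys information about which conjunctions of features belong together, and selective coding is one way a trained model can escape that ambiguity.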