Inductive Learning

  • Bayesian inference is a form of inductive learning
  • Learning is identifying which hypothesis in a hypothesis space corresponds to the concept
  • Hypotheses don’t disappear, they just become less likely
  • Learning improves as more examples are observed
  • One challenge of Bayesian learning is that any small set of examples is consistent with many hypotheses
  • Different hypotheses have different likelihoods based on the examples we are exposed to
  • But in the end we also prefer smaller hypotheses over larger ones — the size principle: a smaller (more specific) hypothesis assigns higher likelihood to each consistent example
  • Simple clustering methods can be applied to the data to automatically construct the hypothesis space needed for Bayesian modelling
  • The resulting probabilities of different sets then match human judgments surprisingly well
  • Clustering based on biology worked worse!
  • Clustering using linguistic co-occurrences via Latent Semantic Analysis (LSA) also worked worse!
  • Human subject judgements of similarity worked best
  • This suggests that some human reasoning relies on probability
  • Bayesian learning can also learn categories
  • Models are capable of making generalizations about specific objects, as well as the appropriate generalizations about categories in general (superordinate categories!)
  • Advanced learning means learning constraints on what counts as a possible hypothesis
  • Hierarchical Bayesian Modelling (HBM) can explain how we acquire overhypotheses,
  using observations from the lowest level (the data) to drive statistical inference at the higher levels
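The size principle above can be sketched in code, in the spirit of Tenenbaum's "number game". The hypothesis names, extensions, and uniform prior below are illustrative assumptions, not taken from the notes:

```python
# Minimal Bayesian concept learning with the size principle (a sketch).
# Likelihood of an example under hypothesis h is 1/|h| if the example is in
# h's extension, else 0: smaller consistent hypotheses are preferred.

def posterior(hypotheses, priors, data):
    """Return the normalized posterior P(h | data)."""
    scores = {}
    for name, extension in hypotheses.items():
        p = priors[name]
        for x in data:
            p *= (1.0 / len(extension)) if x in extension else 0.0
        scores[name] = p
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()} if total else scores

# Illustrative hypothesis space over the numbers 1..100.
hypotheses = {
    "even":           set(range(2, 101, 2)),
    "powers_of_two":  {2, 4, 8, 16, 32, 64},
    "multiples_of_8": set(range(8, 101, 8)),
}
priors = {h: 1 / len(hypotheses) for h in hypotheses}

# One example is consistent with many hypotheses...
print(posterior(hypotheses, priors, [16]))
# ...but with more experience the posterior sharpens toward the smallest
# consistent hypothesis; the others don't disappear, they become less likely.
print(posterior(hypotheses, priors, [16, 8, 2, 64]))
```

Note that `multiples_of_8` is not eliminated by fiat after seeing 2; its likelihood simply drops to zero, while `even` retains a small but nonzero posterior.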
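The idea of using clustering to build the hypothesis space can also be sketched: each cluster formed during agglomerative (single-linkage) clustering becomes one candidate hypothesis, i.e. a nested set of objects. The object names and distance values below are illustrative; in the studies summarized above, the best-performing distances came from human similarity judgements:

```python
# Building a nested hypothesis space via agglomerative clustering (a sketch).

def cluster_hypotheses(objects, dist):
    """Single-linkage agglomerative clustering; returns every cluster
    created along the way (as frozensets) -- a nested hypothesis space."""
    clusters = [frozenset([o]) for o in objects]
    hypotheses = list(clusters)  # singletons are hypotheses too
    while len(clusters) > 1:
        # merge the closest pair of clusters (single linkage)
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist[a][b] for a in clusters[ij[0]] for b in clusters[ij[1]]),
        )
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        hypotheses.append(merged)
    return hypotheses

objects = ["robin", "sparrow", "trout", "salmon"]
# Symmetric dissimilarity matrix (illustrative numbers).
d = {
    "robin":   {"robin": 0, "sparrow": 1, "trout": 8, "salmon": 8},
    "sparrow": {"robin": 1, "sparrow": 0, "trout": 8, "salmon": 8},
    "trout":   {"robin": 8, "sparrow": 8, "trout": 0, "salmon": 2},
    "salmon":  {"robin": 8, "sparrow": 8, "trout": 2, "salmon": 0},
}
for h in cluster_hypotheses(objects, d):
    print(sorted(h))
```

The nested clusters (birds, fish, all animals) then serve directly as the hypothesis sets for the size-principle computation.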
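Finally, the acquisition of an overhypothesis via HBM can be sketched with a two-level model, loosely in the style of the "bags of marbles" example from the hierarchical-Bayes literature. The grid of alpha values and the bag data are illustrative assumptions:

```python
# Hierarchical Bayesian sketch: infer a higher-level hypothesis (how uniform
# in color bags tend to be) from lowest-level data (draws from several bags).
from math import lgamma, exp

def log_betabinom(k, n, a, b):
    """Log marginal likelihood of k black marbles in n draws, with the bag's
    color proportion integrated out under a Beta(a, b) prior."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

# Lowest-level data: (black marbles, total draws) per bag.
# Every observed bag happens to be uniform in color.
bags = [(10, 10), (0, 10), (10, 10), (0, 10)]

# Higher-level hypotheses: symmetric Beta(alpha, alpha) over bag proportions.
# Small alpha = "bags are pure in color" (the overhypothesis);
# large alpha = "bags are well mixed". Uniform prior over the grid.
alphas = [0.1, 0.5, 1.0, 5.0, 20.0]
log_post = {a: sum(log_betabinom(k, n, a, a) for k, n in bags) for a in alphas}
m = max(log_post.values())
z = sum(exp(v - m) for v in log_post.values())
post = {a: exp(v - m) / z for a, v in log_post.items()}
print(post)  # posterior mass concentrates on small alpha
```

Having learned the overhypothesis (small alpha), a single draw from a brand-new bag already licenses a strong generalization about that bag's other marbles, which is the pattern HBM is meant to explain.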