Misyak et al. (2010)
- Does the ability to learn non-adjacent statistical dependencies correlate with the ability to process non-adjacent dependencies in language?
- Can non-adjacent dependency learning be modeled with a simple recurrent network (SRN)?
- The task makes the continuous time course of statistical processing observable
- Uses both linguistic stimulus tokens and auditory cues
- on-line non-adjacency learning
- Investigation of individual differences in language processing and statistical learning
- Participants were trained over blocks of trials, each consisting of a three-word sequence.
- The first and second words varied freely, but the third word was determined by the first word.
- The intervening second word creates the non-adjacent dependency (see the generator sketch below)
- After the final block: a prediction task in which participants had to say, given the first two words of a sequence, what the third word would be
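A minimal sketch of how the aXb training items described above could be generated. The token names and the filler-set size are placeholders chosen only so that 3 + 3 + 24 = 30 distinct nonwords, matching the 30 localist units noted below; they are not the paper's actual materials.

```python
import random

# Hypothetical head -> tail pairings and filler words; the actual nonwords
# used by Misyak et al. are not reproduced here.
PAIRS = {"a1": "b1", "a2": "b2", "a3": "b3"}
FILLERS = [f"x{i}" for i in range(24)]

def make_trial(rng=random):
    """First and second words vary freely; the third is fixed by the first."""
    first = rng.choice(list(PAIRS))
    second = rng.choice(FILLERS)
    return (first, second, PAIRS[first])

print(make_trial())  # e.g. ('a2', 'x17', 'b2')
```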
- People can learn non-adjacent sequences with only implicit exposure
- An SRN can capture human performance on artificial grammar learning (AGL) tasks
- SRNs can handle temporal structure and associations across time
- Localist representations: 30 input and 30 output units, one unique unit per nonword
- Trained with standard backpropagation, learning rate 0.1, momentum 0.8 (see the SRN sketch below)
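A minimal Elman-style SRN sketch consistent with the specification above (30 localist input/output units, backpropagation with learning rate 0.1 and momentum 0.8). The hidden-layer size, activation functions, weight initialization, and the toy index scheme at the end are assumptions, not details taken from the paper.

```python
import numpy as np

N_WORDS = 30        # localist units: one per nonword (from the notes above)
N_HIDDEN = 10       # hidden/context layer size (assumed, not from the paper)
LR, MOMENTUM = 0.1, 0.8

rng = np.random.default_rng(0)
W_ih = rng.uniform(-0.5, 0.5, (N_HIDDEN, N_WORDS))   # input   -> hidden
W_hh = rng.uniform(-0.5, 0.5, (N_HIDDEN, N_HIDDEN))  # context -> hidden
W_ho = rng.uniform(-0.5, 0.5, (N_WORDS, N_HIDDEN))   # hidden  -> output
weights = [W_ih, W_hh, W_ho]
velocity = [np.zeros_like(W) for W in weights]        # momentum buffers

def one_hot(i):
    v = np.zeros(N_WORDS)
    v[i] = 1.0
    return v

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def train_on_sequence(seq):
    """One pass over a word-index sequence, predicting each next word.
    The copied-back context layer is treated as a fixed extra input
    (standard backpropagation, no backprop through time)."""
    context = np.zeros(N_HIDDEN)
    for t in range(len(seq) - 1):
        x, target = one_hot(seq[t]), one_hot(seq[t + 1])
        hidden = np.tanh(W_ih @ x + W_hh @ context)
        output = softmax(W_ho @ hidden)
        # Output delta for softmax + cross-entropy; hidden delta via tanh'
        d_out = output - target
        d_hid = (W_ho.T @ d_out) * (1.0 - hidden ** 2)
        grads = [np.outer(d_hid, x), np.outer(d_hid, context),
                 np.outer(d_out, hidden)]
        for W, v, g in zip(weights, velocity, grads):
            v *= MOMENTUM          # momentum update, then gradient step
            v -= LR * g
            W += v
        context = hidden           # copy hidden state into the context layer
    return output                  # prediction after the last transition

# Toy usage: indices 0-2 = heads, 3-26 = fillers, 27-29 = dependent tails
for _ in range(500):
    head = rng.integers(3)
    train_on_sequence([head, 3 + rng.integers(24), 27 + head])
```

After training, the output activations over the three tail words given a head and a filler serve as the model's analogue of the human prediction task.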
- The higher the prediction-task accuracy (x-axis), the shorter the reading times for object relative clauses
- Even people who perform poorly at sequential learning are still fluent speakers and listeners
- Is it possible that sequential learning and language learning are unrelated?
- Perhaps children are better at sequential learning, which helps them acquire language
- Adults then lose this ability