DRL and neuroscience (just notes)
- Botvinick, M., Wang, J. X., Dabney, W., Miller, K. J., & Kurth-Nelson, Z. (2020). Deep Reinforcement Learning and Its Neuroscientific Implications. Neuron. Paper
introduction
- deep reinforcement learning gives brain science a new set of research tools
- a relationship between deep reinforcement learning and brain function has been identified
- it turns out that deep neural networks are good models of neural representation
- multiple small neural networks with handcrafted structures can provide good models of how brain regions interact to guide decision making and, eventually, learning
- meta reinforcement learning trains a recurrent deep RL network on forced-choice decision tasks of interrelated types, which lets the network adapt to new tasks without changing its existing weights
- to make decisions under risk, it is essential to account for the distribution and uncertainty of rewards
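The meta-RL point can be caricatured in a few lines: all "weights" (the update rule and its constants below) stay fixed, and adaptation to each new bandit task lives entirely in the agent's internal state. This is a hand-coded stand-in for a trained recurrent network, my own illustration rather than the paper's actual setup:

```python
import random

class FixedWeightBanditAgent:
    """Toy analogue of meta-RL: the update rule, learning rate, and
    exploration rate are fixed; adapting to a new bandit task happens
    only in the hidden state (the per-arm value estimates)."""
    def __init__(self, n_arms, lr=0.3, eps=0.1):
        self.n_arms, self.lr, self.eps = n_arms, lr, eps
        self.reset_state()

    def reset_state(self):
        # re-initialised for each new task; no parameters change
        self.values = [0.0] * self.n_arms

    def act(self, rng):
        if rng.random() < self.eps:
            return rng.randrange(self.n_arms)
        return max(range(self.n_arms), key=lambda a: self.values[a])

    def observe(self, arm, reward):
        # state update plays the role of recurrent dynamics
        self.values[arm] += self.lr * (reward - self.values[arm])

def run_task(agent, probs, steps, rng):
    """Run one bandit task; returns the average reward per step."""
    agent.reset_state()
    total = 0.0
    for _ in range(steps):
        arm = agent.act(rng)
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        agent.observe(arm, reward)
        total += reward
    return total / steps
```

Calling `run_task` on a fresh reward schedule shows the same fixed rule re-adapting through state alone, which is the gist of the meta-RL claim above.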
representation learning
- representation learning is reward-based learning that shapes internal representations, which in turn influence later reward-driven decision making
- in the brain, how visual stimuli are represented in the prefrontal cortex depends on which task the animal is performing, and this also affects neural responses
- reward signals of this kind are generally sparse, which can lead to overfitting
- one solution is to add supervised or unsupervised objectives, which also enables transfer learning
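Why auxiliary objectives help under sparse reward can be seen in a scalar toy model (my own illustration, not from the paper): the reward-based gradient vanishes whenever the reward error is zero, but an added reconstruction term still drives the representation weight toward a useful value.

```python
def combined_grad(w, x, reward_error, aux_weight=1.0):
    """Gradient of rl_loss + aux_weight * reconstruction_loss for a
    scalar model: feature f = w*x, reconstruction x_hat = w*f = w^2 * x.
    With sparse reward (reward_error == 0 most steps), only the
    auxiliary reconstruction term supplies a learning signal."""
    rl_grad = reward_error * x                   # vanishes without reward
    aux_grad = 2 * (w * w * x - x) * 2 * w * x   # d/dw of (x_hat - x)^2
    return rl_grad + aux_weight * aux_grad

# even with zero reward error on every step, gradient descent drives w
# toward a representation that reconstructs the input (w -> 1):
w = 0.5
for _ in range(300):
    w -= 0.05 * combined_grad(w, x=1.0, reward_error=0.0)
```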
model-based reinforcement learning
- systems using model-free RL can sometimes exhibit behavior and processes resembling model-based systems
- the goal is to find the right balance between the two
memory
- experience replay seems similar to the replay events observed in the brain, though biological replay is non-uniform, unlike the uniform sampling typical in machine systems
- aside from online decision making, episodic memory (its maintenance, retrieval, and consolidation) is also important
- working memory maintenance in neural networks seems similar to accounts of working memory in neuroscience
- attention also seems to be important
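The non-uniform replay point is usually linked to prioritized experience replay, where transitions with larger priority (e.g. TD error) are replayed more often. A minimal sketch; the class name and linear-scan sampling are illustrative choices, not from the paper:

```python
import random

class PrioritizedReplay:
    """Minimal non-uniform replay buffer: items are sampled with
    probability proportional to their priority, loosely echoing the
    biased replay events reported in the brain."""
    def __init__(self):
        self.items, self.priorities = [], []

    def add(self, transition, priority):
        self.items.append(transition)
        self.priorities.append(priority)

    def sample(self, rng):
        # pick a point in the cumulative priority mass
        r = rng.random() * sum(self.priorities)
        acc = 0.0
        for item, p in zip(self.items, self.priorities):
            acc += p
            if r <= acc:
                return item
        return self.items[-1]   # guard against float rounding
```

A transition stored with priority 9 will come back roughly nine times as often as one stored with priority 1.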
exploration
- it might be better to allow agents to learn or evolve their own motivations based on experience
- even if changes in weights are suspended, meta-learning would allow activation dynamics that support exploration
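One concrete way agents are given "their own" motivation is a count-based novelty bonus: intrinsic reward shrinks as a state is revisited. This is a standard exploration heuristic offered as an illustration, not something these notes' source prescribes:

```python
import math
from collections import Counter

def intrinsic_bonus(counts, state, scale=1.0):
    """Count-based novelty bonus: rarely visited states earn a larger
    intrinsic reward, nudging the agent toward unexplored territory.
    (Counter returns 0 for states never seen, giving the full bonus.)"""
    return scale / math.sqrt(counts[state] + 1)

visits = Counter()
first = intrinsic_bonus(visits, "room_a")   # never visited: full bonus
visits["room_a"] += 8
later = intrinsic_bonus(visits, "room_a")   # familiar: bonus has decayed
```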
cognition
- hierarchical systems can operate at different time scales, with different levels working at different temporal resolutions
- social cognition, e.g. competitive team games and multiple agents trained in parallel, might also emulate the brain
- since human learners bring a lifetime of experience to every problem, AI systems cannot yet hope to match them
- long-term credit assignment is important
- backpropagation itself might not preserve very old learning when new information becomes available
- the majority of this research is not trying to copy the brain but to recreate its function in some other way
- neuroscience and deep learning have been connected for decades, and reinforcement learning seems a good place to continue that connection
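On the point about backpropagation overwriting very old learning: one well-known remedy (elastic weight consolidation, which these notes do not themselves discuss) adds a quadratic penalty anchoring weights near their old-task values. A scalar sketch of the trade-off:

```python
def anchored_descent(lam, new_opt=2.0, old_w=0.0, steps=500, lr=0.1):
    """Gradient descent on the new-task loss (w - new_opt)^2 plus an
    EWC-style anchor lam * (w - old_w)^2 protecting the old task.
    The minimizer is (new_opt + lam * old_w) / (1 + lam): lam = 0
    forgets the old task entirely; large lam barely moves the weight."""
    w = old_w
    for _ in range(steps):
        grad = 2 * (w - new_opt) + 2 * lam * (w - old_w)
        w -= lr * grad
    return w
```

With `lam=0` the weight moves all the way to the new optimum (full forgetting); with `lam=1` it settles halfway, retaining some of the old solution.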