CBE/ENGR 225 Faculty Seminar: F. Gregory Ashby, Ph.D., Professor, Department of Psychological & Brain Sciences, UCSB

Tuesday, October 11, 2016, ESB, Room #2001

CBE Faculty Seminar Series Presents: 

 

F. Gregory Ashby, Ph.D.

Professor, Department of Psychological & Brain Sciences

University of California, Santa Barbara

 

Tuesday, October 11, 2016

4:00 pm

ESB, Room #2001

 

*Cookies and Coffee will be provided*

 

The Learning and Unlearning of Procedural Skills:

A Computational Cognitive Neuroscience Approach

Computational Cognitive Neuroscience (CCN) is a new field that falls between computational neuroscience on one side and machine learning and neural network theory on the other. The goal of CCN is to build and test neurobiologically accurate computational models that account for human behavior. In this talk I will describe how a CCN approach can be used to better understand how people learn complex skills and why bad habits are so difficult to unlearn. Skills acquired via procedural learning (also called skill or habit learning) improve incrementally and require extensive practice with immediate feedback. Prototypical examples include athletic skills, but many cognitive skills, such as looking for tumors in an X-ray, also meet these criteria. Considerable evidence exists that the learning of procedural skills depends critically on the basal ganglia, and in particular on synaptic plasticity within the striatum. The COVIS theory of category learning (Ashby, Alfonso-Reese, Turken, & Waldron, 1998, Psychological Review) includes a biologically detailed computational model of procedural learning. During the past decade, many strong a priori predictions of COVIS have been tested and supported.

Recently, we have proposed that cholinergic interneurons in the striatum serve as a gate to the striatum. The default state of the gate is closed, but the gate opens in rewarding environments. When the gate is open, procedural learning is possible. When rewards are unavailable, the gate closes, which protects procedural learning from decay and prevents unlearning. The model successfully accounts for a variety of single-unit recording and behavioral data. We have also used the model to design novel unlearning protocols that show promising initial signs of success.
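To make the gating idea above concrete, here is a minimal, hypothetical sketch of a reward-gated plasticity rule. It is not the COVIS model itself; the function name, the simple Hebbian update, and the learning rate are illustrative assumptions. It only shows the qualitative behavior the abstract describes: synaptic weights change when the gate is open (reward available) and are frozen, and thus protected from unlearning, when the gate is closed.

```python
# Hypothetical illustration of a reward-gated plasticity rule,
# loosely inspired by the striatal gating idea described above.
# This is NOT the COVIS model; all specifics here are assumptions.
import numpy as np

def update_weights(w, pre, post, reward_available, lr=0.1):
    """Apply a simple Hebbian update only when the reward gate is open.

    The gate is closed by default: with no reward available, the weights
    are returned unchanged, which protects prior learning from decay and
    prevents unlearning. With reward available, plasticity is enabled.
    """
    if not reward_available:
        return w  # gate closed: weights frozen
    return w + lr * np.outer(post, pre)  # gate open: Hebbian update

# Two postsynaptic units, three presynaptic units.
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])
post = np.array([1.0, 0.5])

w_open = update_weights(w, pre, post, reward_available=True)
w_closed = update_weights(w_open, pre, post, reward_available=False)
assert np.array_equal(w_closed, w_open)  # gate closed: no change
```

The key design point is that withholding reward does not drive the weights back toward zero; it simply disables the update, matching the claim that a closed gate prevents both decay and unlearning.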