Computational Neuroscience | Coursera

Brief information

  • Instructors: Rajesh P. N. Rao, Adrienne Fairhall
  • About this course: This course provides an introduction to basic computational methods for understanding what nervous systems do and for determining how they function. We will explore the computational principles governing various aspects of vision, sensory-motor control, learning, and memory. Specific topics that will be covered include representation of information by spiking neurons, processing of information in neural networks, and algorithms for adaptation and learning. We will make use of Matlab/Octave/Python demonstrations and exercises to gain a deeper understanding of concepts and methods introduced in the course. The course is primarily aimed at third- or fourth-year undergraduates and beginning graduate students, as well as professionals and distance learners interested in learning how the brain processes information.

Syllabus

Week 1: Introduction & Basic Neurobiology (Rajesh Rao)

This module includes an Introduction to Computational Neuroscience, along with a primer on Basic Neurobiology.

  • Reading: Welcome Message & Course Logistics
  • Reading: About the Course Staff
  • Reading: Syllabus and Schedule
  • Reading: Matlab & Octave Information and Tutorials
  • Reading: Python Information and Tutorials
  • Reading: Week 1 Lecture Notes
  • Video: 1.1 Course Introduction
  • Video: 1.2 Computational Neuroscience: Descriptive Models
  • Video: 1.3 Computational Neuroscience: Mechanistic and Interpretive Models
  • Video: 1.4 The Electrical Personality of Neurons
  • Video: 1.5 Making Connections: Synapses
  • Video: 1.6 Time to Network: Brain Areas and their Function
  • Practice Quiz: Matlab/Octave Programming
  • Practice Quiz: Python Programming

Week 2: What do Neurons Encode? Neural Encoding Models (Adrienne Fairhall)

This module introduces you to the captivating world of neural information coding. You will learn about the technologies that are used to record brain activity. We will then develop some mathematical formulations that allow us to characterize spikes from neurons as a code, at increasing levels of detail. Finally, we investigate variability and noise in the brain, and how our models can accommodate them.

  • Reading: Welcome Message
  • Reading: Week 2 Lecture Notes and Tutorials
  • Video: 2.1 What is the Neural Code?
  • Video: 2.2 Neural Encoding: Simple Models
  • Video: 2.3 Neural Encoding: Feature Selection
  • Video: 2.4 Neural Encoding: Variability
  • Video: Vectors and Functions (by Rich Pang)
  • Video: Convolutions and Linear Systems (by Rich Pang)
  • Video: Change of Basis and PCA (by Rich Pang)
  • Video: Welcome to the Eigenworld! (by Rich Pang)
  • Reading: IMPORTANT: Quiz Instructions
  • Graded: Spike Triggered Averages: A Glimpse Into Neural Encoding
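The graded exercise above centers on the spike-triggered average (STA): the mean stimulus window preceding each spike, which estimates a neuron's linear receptive field. A minimal sketch of the idea, using a toy model neuron and illustrative parameters that are not taken from the course materials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: white-noise stimulus drives a model neuron whose spiking
# probability depends on the recently filtered stimulus. The kernel,
# threshold, and window length below are illustrative choices.
n_steps = 50_000
window = 20                                  # stimulus samples preceding each spike
stimulus = rng.standard_normal(n_steps)

kernel = np.exp(-np.arange(window) / 5.0)    # causal filter; index 0 = most recent
drive = np.convolve(stimulus, kernel)[:n_steps]
spike_prob = 1.0 / (1.0 + np.exp(-(drive - 2.0)))   # sigmoidal nonlinearity
spikes = rng.random(n_steps) < spike_prob

# Spike-triggered average: mean stimulus window (oldest sample first)
# preceding each spike.
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= window]
sta = np.mean([stimulus[t - window:t] for t in spike_times], axis=0)
```

Because the model kernel decays with lag, the STA should weight the most recent stimulus samples most heavily, recovering the filter's shape (up to the effects of the nonlinearity and finite data).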

Week 3: Extracting Information from Neurons: Neural Decoding (Adrienne Fairhall)

In this module, we turn the question of neural encoding around and ask: can we estimate what the brain is seeing, intending, or experiencing just from its neural activity? This is the problem of neural decoding and it is playing an increasingly important role in applications such as neuroprosthetics and brain-computer interfaces, where the interface must decode a person’s movement intentions from neural activity. As a bonus for this module, you get to enjoy a guest lecture by well-known computational neuroscientist Fred Rieke.

  • Reading: Welcome Message
  • Reading: Week 3 Lecture Notes and Supplementary Material
  • Video: 3.1 Neural Decoding and Signal Detection Theory
  • Video: 3.2 Population Coding and Bayesian Estimation
  • Video: 3.3 Reading Minds: Stimulus Reconstruction
  • Video: Fred Rieke on Visual Processing in the Retina
  • Video: Gaussians in One Dimension (by Rich Pang)
  • Video: Probability distributions in 2D and Bayes’ Rule (by Rich Pang)
  • Graded: Neural Decoding
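Lectures 3.1 and 3.2 frame decoding as combining a likelihood with a prior via Bayes' rule. A toy sketch of that logic, assuming a single hypothetical neuron whose spike count is Poisson with one rate when a stimulus is absent and a higher rate when it is present (the rates and prior below are illustrative):

```python
import math

# Hypothetical tuning: Poisson rate r0 when the stimulus is absent,
# r1 when present. These numbers are for illustration only.
r0, r1 = 2.0, 8.0
prior_present = 0.5

def poisson_pmf(k, rate):
    """Probability of observing k spikes under a Poisson rate."""
    return rate**k * math.exp(-rate) / math.factorial(k)

def posterior_present(count):
    """P(present | spike count) by Bayes' rule."""
    like_present = poisson_pmf(count, r1) * prior_present
    like_absent = poisson_pmf(count, r0) * (1 - prior_present)
    return like_present / (like_present + like_absent)

def decode(count):
    """MAP decision: report 'present' when the posterior exceeds 0.5."""
    return posterior_present(count) > 0.5
```

With these rates the decision boundary falls between 4 and 5 spikes; shifting the prior or the rates moves the boundary, which is exactly the trade-off signal detection theory formalizes.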

Week 4: Information Theory & Neural Coding (Adrienne Fairhall)

This module will unravel the intimate connections between the venerable field of information theory and that equally venerable object called our brain.

  • Reading: Welcome Message
  • Reading: Week 4 Lecture Notes and Supplementary Material
  • Video: 4.1 Information and Entropy
  • Video: 4.2 Calculating Information in Spike Trains
  • Video: 4.3 Coding Principles
  • Video: What’s up with entropy? (by Rich Pang)
  • Video: Information theory? That’s crazy! (by Rich Pang)
  • Graded: Information Theory & Neural Coding
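The two central quantities of this module, entropy and mutual information, can be computed directly from probability tables. A small self-contained sketch (the example distributions are illustrative):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def mutual_information(joint):
    """I(S; R) in bits from a joint probability table joint[s, r]."""
    joint = np.asarray(joint, dtype=float)
    ps = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    pr = joint.sum(axis=0, keepdims=True)   # marginal over responses
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2((joint / (ps * pr))[mask])))
```

A binary spike/no-spike bin with probability 0.5 carries the maximal one bit of entropy; a noiseless binary channel transmits that full bit, while an independent response transmits none.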

Week 5: Computing in Carbon (Adrienne Fairhall)

This module takes you into the world of biophysics of neurons, where you will meet one of the most famous mathematical models in neuroscience, the Hodgkin-Huxley model of action potential (spike) generation. We will also delve into other models of neurons and learn how to model a neuron’s structure, including those intricate branches called dendrites.

  • Reading: Welcome Message
  • Reading: Week 5 Lecture Notes and Supplementary Material
  • Video: 5.1 Modeling Neurons
  • Video: 5.2 Spikes
  • Video: 5.3 Simplified Model Neurons
  • Video: 5.4 A Forest of Dendrites
  • Video: Eric Shea-Brown on Neural Correlations and Synchrony
  • Video: Dynamical Systems Theory Intro Part 1: Fixed points (by Rich Pang)
  • Video: Dynamical Systems Theory Intro Part 2: Nullclines (by Rich Pang)
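One of the "simplified model neurons" of lecture 5.3 is the leaky integrate-and-fire neuron, which replaces Hodgkin-Huxley spike generation with a threshold and reset. A minimal Euler-integrated sketch (all parameter values are illustrative, not taken from the lectures):

```python
import numpy as np

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-50.0, resistance=10.0):
    """Leaky integrate-and-fire: tau dV/dt = -(V - V_rest) + R * I.

    Returns the membrane trace (mV) and spike time indices for an
    input current array (nA), integrated with step dt (ms).
    """
    v = np.full(len(current), v_rest)
    spikes = []
    for t in range(1, len(current)):
        dv = (-(v[t - 1] - v_rest) + resistance * current[t - 1]) * dt / tau
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:        # threshold crossed: record spike, reset
            spikes.append(t)
            v[t] = v_reset
    return v, spikes

# Constant suprathreshold input (steady state -45 mV > threshold -50 mV)
# produces perfectly regular spiking.
v, spikes = simulate_lif(np.full(5000, 2.0))
```

Because the dynamics are deterministic and the input is constant, every interspike interval is identical; a weaker input whose steady-state voltage stays below threshold produces no spikes at all.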

Week 6: Computing with Networks (Rajesh Rao)

This module explores how models of neurons can be connected to create network models. The first lecture shows you how to model those remarkable connections between neurons called synapses. This lecture will leave you in the company of a simple network of integrate-and-fire neurons which follow each other or dance in synchrony. In the second lecture, you will learn about firing rate models and feedforward networks, which transform their inputs to outputs in a single “feedforward” pass. The last lecture takes you to the dynamic world of recurrent networks, which use feedback between neurons for amplification, memory, attention, oscillations, and more!

  • Reading: Welcome Message
  • Reading: Week 6 Lecture Notes and Tutorials
  • Video: 6.1 Modeling Connections Between Neurons
  • Video: 6.2 Introduction to Network Models
  • Video: 6.3 The Fascinating World of Recurrent Networks
  • Graded: Computing with Networks
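The recurrent amplification mentioned above can be seen in the simplest linear firing-rate network, tau dr/dt = -r + W r + h, which settles to r = (I - W)^(-1) h when the recurrent eigenvalues are below 1. A sketch with an illustrative two-neuron weight matrix:

```python
import numpy as np

# Linear firing-rate network: tau dr/dt = -r + W r + h.
# Mutual excitation of strength 0.8 (eigenvalues +0.8 and -0.8),
# so input along the (1, 1) direction is amplified by 1/(1 - 0.8) = 5.
tau, dt = 10.0, 0.1
W = np.array([[0.0, 0.8],
              [0.8, 0.0]])
h = np.array([1.0, 1.0])          # steady feedforward input

r = np.zeros(2)
for _ in range(20_000):           # Euler-integrate to steady state
    r = r + dt / tau * (-r + W @ r + h)

r_ss = np.linalg.solve(np.eye(2) - W, h)   # analytic steady state
```

The simulated rates converge to the analytic value [5, 5]: feedback has multiplied a unit input fivefold, the selective-amplification mechanism discussed in lecture 6.3.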

Week 7: Networks that Learn: Plasticity in the Brain & Learning (Rajesh Rao)

This module investigates models of synaptic plasticity and learning in the brain, including a Canadian psychologist’s prescient prescription for how neurons ought to learn (Hebbian learning) and the revelation that brains can do statistics (even if we ourselves sometimes cannot)! The next two lectures explore unsupervised learning and theories of brain function based on sparse coding and predictive coding.

  • Reading: Welcome Message
  • Reading: Week 7 Lecture Notes and Tutorials
  • Video: 7.1 Synaptic Plasticity, Hebb’s Rule, and Statistical Learning
  • Video: 7.2 Introduction to Unsupervised Learning
  • Video: 7.3 Sparse Coding and Predictive Coding
  • Video: Gradient Ascent and Descent (by Rich Pang)
  • Graded: Networks that Learn
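A concrete link between Hebbian learning and the unsupervised-learning lectures is Oja's rule, a stabilized Hebbian update dw = eta * y * (x - y * w) whose weight vector converges to the leading principal component of the input distribution. A sketch on illustrative correlated 2D data (all parameters are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated inputs whose principal axis lies along (1, 1) / sqrt(2).
n_samples, eta = 20_000, 0.005
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=n_samples)

w = rng.standard_normal(2)        # random initial synaptic weights
for x in X:
    y = w @ x                     # Hebbian "postsynaptic" response
    w += eta * y * (x - y * w)    # Oja's rule: Hebb term minus decay

# Leading eigenvector of the input covariance, for comparison.
principal = np.linalg.eigh(cov)[1][:, -1]
```

After training, w has roughly unit norm and aligns (up to sign) with the principal eigenvector: a single Hebbian neuron doing PCA, which is the statistical-learning point of lecture 7.1.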

Week 8: Learning from Supervision and Rewards (Rajesh Rao)

In this last module, we explore supervised learning and reinforcement learning. The first lecture introduces you to supervised learning with the help of famous faces from politics and Bollywood, casts neurons as classifiers, and gives you a taste of that bedrock of supervised learning, backpropagation, with whose help you will learn to back a truck into a loading dock. The second and third lectures focus on reinforcement learning. The second lecture will teach you how to predict rewards à la Pavlov’s dog and will explore the connection to that important reward-related chemical in our brains: dopamine. In the third lecture, we will learn how to select the best actions for maximizing rewards, and examine a possible neural implementation of our computational model in the brain region known as the basal ganglia. The grand finale: flying a helicopter using reinforcement learning!

  • Reading: Welcome Message and Concluding Remarks
  • Reading: Week 8 Lecture Notes and Supplementary Material
  • Video: 8.1 Neurons as Classifiers and Supervised Learning
  • Video: 8.2 Reinforcement Learning: Predicting Rewards
  • Video: 8.3 Reinforcement Learning: Time for Action!
  • Video: Eb Fetz on Bidirectional Brain-Computer Interfaces
  • Graded: Learning from Supervision and Rewards
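The reward-prediction idea of lecture 8.2 is usually formalized as temporal-difference (TD) learning, where the prediction error delta = r + V[t+1] - V[t] plays the role attributed to dopamine. A minimal undiscounted TD(0) sketch of Pavlovian conditioning (trial structure and parameters are illustrative):

```python
import numpy as np

# A cue at t = 0 is reliably followed by a unit reward at t = T - 1.
T = 10                    # steps between cue and reward
alpha = 0.1               # learning rate
V = np.zeros(T + 1)       # value estimate for each time step (V[T] is terminal)

for trial in range(500):
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0
        delta = r + V[t + 1] - V[t]   # TD error ("dopamine-like" signal)
        V[t] += alpha * delta
```

Over trials the value estimates propagate backward from the reward to the cue, so V[t] approaches 1 at every step after the cue; the prediction error, initially fired at reward delivery, vanishes there and migrates to cue onset, mirroring the dopamine recordings discussed in lecture.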
