An Introductory Course in Computational Neuroscience

by Paul Miller

ISBN: 9780262364539 | Copyright 2018

Contents (pg. v)
Series Foreword (pg. xiii)
Acknowledgments (pg. xv)
Preface (pg. xvii)
1 Preliminary Material (pg. 1)
1.1 Introduction (pg. 1)
1.1.1 The Cell, the Circuit, and the Brain (pg. 1)
1.1.2 Physics of Electrical Circuits (pg. 1)
1.1.3 Mathematical Preliminaries (pg. 2)
1.1.4 Writing Computer Code (pg. 4)
1.2 The Neuron, the Circuit, and the Brain (pg. 4)
1.2.1 The Cellular Level (pg. 4)
1.2.2 The Circuit Level (pg. 7)
1.2.3 The Regional Level (pg. 8)
1.3 Physics of Electrical Circuits (pg. 11)
1.3.1 Terms and Properties (pg. 11)
1.3.2 Pumps, Reservoirs, and Pipes (pg. 12)
1.3.3 Some Peculiarities of the Electrical Properties of Neurons (pg. 13)
1.4 Mathematical Background (pg. 14)
1.4.1 Ordinary Differential Equations (pg. 15)
1.4.2 Vectors, Matrices, and Their Basic Operations (pg. 24)
1.4.3 Probability and Bayes’ Theorem (pg. 28)
1.5 Introduction to Computing and MATLAB (pg. 36)
1.5.1 Basic Commands (pg. 37)
1.5.2 Arrays (pg. 38)
1.5.3 Allocation of Memory (pg. 40)
1.5.4 Using the Colon (:) Symbol (pg. 41)
1.5.5 Saving Your Work (pg. 42)
1.5.6 Plotting Graphs (pg. 42)
1.5.7 Vector and Matrix Operations in MATLAB (pg. 43)
1.5.8 Conditionals (pg. 44)
1.5.9 Loops (pg. 46)
1.5.10 Functions (pg. 47)
1.5.11 Some Operations Useful for Modeling Neurons (pg. 48)
1.5.12 Good Coding Practice (pg. 49)
1.6 Solving Ordinary Differential Equations (ODEs) (pg. 51)
1.6.1 Forward Euler Method (pg. 51)
1.6.2 Simulating ODEs with MATLAB (pg. 52)
1.6.3 Solving Coupled ODEs with Multiple Variables (pg. 54)
1.6.4 Solving ODEs with Nested for Loops (pg. 55)
1.6.5 Comparing Simulation Methods (pg. 55)
1.6.6 Euler-Maruyama Method: Forward Euler with White Noise (pg. 56)
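
Chapter 1 culminates in the forward Euler method (section 1.6.1). As a flavor of the approach, here is a minimal MATLAB sketch (MATLAB being the book's language, per section 1.5) that integrates the single ODE dV/dt = -(V - E)/tau; all variable names and parameter values are illustrative choices, not taken from the book:

```matlab
% Forward Euler: V(t+dt) = V(t) + dt*dV/dt, applied to dV/dt = -(V - E)/tau.
E = -0.070;            % resting potential (V); illustrative value
tau = 0.010;           % time constant (s)
dt = 1e-4;             % time step (s)
t = 0:dt:0.1;          % time vector, 100 ms
V = zeros(size(t));    % preallocate memory (cf. section 1.5.3)
V(1) = -0.050;         % initial condition (V)
for i = 2:length(t)
    dVdt = -(V(i-1) - E)/tau;   % derivative evaluated at the prior step
    V(i) = V(i-1) + dt*dVdt;    % Euler update
end
plot(t, V); xlabel('Time (s)'); ylabel('V (V)');
```
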
2 The Neuron and Minimal Spiking Models (pg. 59)
2.1 The Nernst Equilibrium Potential (pg. 59)
2.2 An Equivalent Circuit Model of the Neural Membrane (pg. 62)
2.2.1 Depolarization versus Hyperpolarization (pg. 65)
2.3 The Leaky Integrate-and-Fire Model (pg. 66)
2.3.1 Specific versus Absolute Properties of the Cell (pg. 68)
2.3.2 Firing Rate as a Function of Current (f-I Curve) of the Leaky Integrate-and-Fire Model (pg. 69)
2.4 Tutorial 2.1: The f-I Curve of the Leaky Integrate-and-Fire Neuron (pg. 70)
2.5 Extensions of the Leaky Integrate-and-Fire Model (pg. 72)
2.5.1 Refractory Period (pg. 72)
2.5.2 Spike-Rate Adaptation (SRA) (pg. 74)
2.6 Tutorial 2.2: Modeling the Refractory Period (pg. 76)
2.7 Further Extensions of the Leaky Integrate-and-Fire Model (pg. 78)
2.7.1 Exponential Leaky Integrate-and-Fire (ELIF) Model (pg. 78)
2.7.2 Two-Variable Models: The Adaptive Exponential Leaky Integrate-and-Fire (AELIF) Neuron (pg. 79)
2.7.3 Limitations of the LIF Formalism (pg. 81)
2.8 Tutorial 2.3: Models Based on Extensions of the LIF Neuron (pg. 81)
2.9 Appendix: Calculation of the Nernst Potential (pg. 86)
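
Chapter 2's central model is the leaky integrate-and-fire (LIF) neuron (section 2.3). A minimal sketch of a forward Euler LIF simulation with illustrative parameter values, omitting the refractory period and adaptation that sections 2.5-2.7 add:

```matlab
% LIF neuron: tau*dV/dt = E - V + I*R, with reset to Vreset at threshold Vth.
E = -0.070; Vth = -0.050; Vreset = -0.080;   % potentials (V); illustrative
R = 1e7;  tau = 0.010;                       % membrane resistance (ohm), time constant (s)
I = 2.1e-9;                                  % applied current (A)
dt = 1e-4; t = 0:dt:1;                       % 1 s of simulated time
V = E*ones(size(t)); spikes = zeros(size(t));
for i = 2:length(t)
    V(i) = V(i-1) + dt*(E - V(i-1) + I*R)/tau;   % forward Euler step
    if V(i) > Vth                                 % threshold crossing
        V(i) = Vreset;                            % reset the potential
        spikes(i) = 1;                            % record a spike
    end
end
rate = sum(spikes)/t(end);   % firing rate at this current: one point on the f-I curve
```
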
3 Analysis of Individual Spike Trains (pg. 89)
3.1 Responses of Single Neurons (pg. 89)
3.1.1 Receptive Fields (pg. 89)
3.1.2 Time-Varying Responses and the Peristimulus Time Histogram (PSTH) (pg. 92)
3.1.3 Neurons as Linear Filters and the Linear-Nonlinear Model (pg. 93)
3.1.4 Spike-Triggered Average (pg. 96)
3.1.5 White-Noise Stimuli for Receptive Field Generation (pg. 96)
3.1.6 Spatiotemporal Receptive Fields (pg. 98)
3.2 Tutorial 3.1: Generating Receptive Fields with Spike-Triggered Averages (pg. 100)
3.3 Spike-Train Statistics (pg. 104)
3.3.1 Coefficient of Variation (CV) of Interspike Intervals (pg. 105)
3.3.2 Fano Factor (pg. 107)
3.3.3 The Homogeneous Poisson Process: A Random Point Process for Artificial Spike Trains (pg. 108)
3.3.4 Comments on Analyses and Use of Dummy Data (pg. 109)
3.4 Tutorial 3.2: Statistical Properties of Simulated Spike Trains (pg. 110)
3.5 Receiver-Operating Characteristic (ROC) (pg. 113)
3.5.1 Producing the ROC Curve (pg. 113)
3.5.2 Optimal Position of the Threshold (pg. 115)
3.5.3 Uncovering the Underlying Distributions from Binary Responses: Recollection versus Familiarity (pg. 118)
3.6 Tutorial 3.3: Receiver-Operating Characteristic of a Noisy Neuron (pg. 121)
3.7 Appendix A: The Poisson Process (pg. 123)
3.7.1 The Poisson Distribution (pg. 123)
3.7.2 Expected Value of the Mean of a Poisson Process (pg. 125)
3.7.3 Fano Factor of the Poisson Process (pg. 125)
3.7.4 The Coefficient of Variation (CV) of the ISI Distribution of a Poisson Process (pg. 126)
3.7.5 Selecting from a Probability Distribution: Generating ISIs for the Poisson Process (pg. 127)
3.8 Appendix B: Stimulus Discriminability (pg. 128)
3.8.1 Optimal Value of Threshold (pg. 129)
3.8.2 Calculating the Probability of an Error (pg. 130)
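
Chapter 3 uses the homogeneous Poisson process (section 3.3.3) as a reference spike train whose ISI coefficient of variation and Fano factor both equal 1. A sketch of generating such a train bin by bin and computing both statistics; the rate, duration, and counting-window size are illustrative:

```matlab
% Homogeneous Poisson train: in each small bin dt, a spike occurs with
% probability r*dt. Then check the ISI CV and the Fano factor of counts.
r = 20;  dt = 1e-4;  T = 100;            % rate (Hz), bin (s), duration (s)
spikes = rand(1, round(T/dt)) < r*dt;    % 1 where a spike occurred
tspike = find(spikes)*dt;                % spike times (s)
ISIs = diff(tspike);                     % interspike intervals
CV = std(ISIs)/mean(ISIs);               % near 1 for a Poisson process
counts = sum(reshape(spikes, [], T), 1); % spike counts in 1-s windows
Fano = var(counts)/mean(counts);         % also near 1 for a Poisson process
```
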
4 Conductance-Based Models (pg. 133)
4.1 Introduction to the Hodgkin-Huxley Model (pg. 133)
4.1.1 Positive versus Negative Feedback (pg. 134)
4.1.2 Voltage Clamp versus Current Clamp (pg. 136)
4.2 Simulation of the Hodgkin-Huxley Model (pg. 137)
4.2.1 Two-State Systems (pg. 138)
4.2.2 Full Set of Dynamical Equations for the Hodgkin-Huxley Model (pg. 139)
4.2.3 Dynamical Behavior of the Hodgkin-Huxley Model: A Type-II Neuron (pg. 140)
4.3 Tutorial 4.1: The Hodgkin-Huxley Model as an Oscillator (pg. 147)
4.4 The Connor-Stevens Model: A Type-I Model (pg. 150)
4.5 Calcium Currents and Bursting (pg. 154)
4.5.1 Thalamic Rebound and the T-Type Calcium Channel (pg. 155)
4.6 Tutorial 4.2: Postinhibitory Rebound (pg. 156)
4.7 Modeling Multiple Compartments (pg. 159)
4.7.1 The Pinsky-Rinzel Model of an Intrinsic Burster (pg. 160)
4.7.2 Simulating the Pinsky-Rinzel Model (pg. 160)
4.7.3 A Note on Multicompartmental Modeling with Specific Conductances versus Absolute Conductances (pg. 163)
4.7.4 Model Complexity (pg. 166)
4.8 Hyperpolarization-Activated Currents (Ih) and Pacemaker Control (pg. 166)
4.9 Dendritic Computation (pg. 168)
4.10 Tutorial 4.3: A Two-Compartment Model of an Intrinsically Bursting Neuron (pg. 170)
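
Chapter 4 builds conductance-based models from two-state gating variables (section 4.2.1), each obeying dn/dt = alpha(V)(1-n) - beta(V)n. A sketch of one gating variable relaxing to its steady state while the voltage is clamped; the rate constants here are illustrative, not the Hodgkin-Huxley fits:

```matlab
% Gating variable: dn/dt = alpha*(1-n) - beta*n, equivalently
% dn/dt = (n_inf - n)/tau_n with n_inf = alpha/(alpha+beta), tau_n = 1/(alpha+beta).
alpha = 100; beta = 25;          % opening/closing rates (1/s) at the clamped voltage
n_inf = alpha/(alpha + beta);    % steady-state open fraction
tau_n = 1/(alpha + beta);        % relaxation time constant (s)
dt = 1e-5; t = 0:dt:0.05;
n = zeros(size(t));              % start with all channels closed
for i = 2:length(t)
    n(i) = n(i-1) + dt*(alpha*(1 - n(i-1)) - beta*n(i-1));  % Euler step
end
plot(t, n); hold on; plot(t([1 end]), n_inf*[1 1], '--');    % approach to n_inf
xlabel('Time (s)'); ylabel('Open fraction, n');
```
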
5 Connections between Neurons (pg. 173)
5.1 The Synapse (pg. 173)
5.1.1 Electrical Synapses (pg. 173)
5.1.2 Chemical Synapses (pg. 174)
5.2 Modeling Synaptic Transmission through Chemical Synapses (pg. 179)
5.2.1 Spike-Induced Transmission (pg. 179)
5.2.2 Graded Release (pg. 181)
5.3 Dynamical Synapses (pg. 182)
5.3.1 Short-Term Synaptic Depression (pg. 183)
5.3.2 Short-Term Synaptic Facilitation (pg. 183)
5.3.3 Modeling Dynamical Synapses (pg. 184)
5.4 Tutorial 5.1: Synaptic Responses to Changes in Inputs (pg. 185)
5.5 The Connectivity Matrix (pg. 187)
5.5.1 General Types of Connectivity Matrices (pg. 189)
5.5.2 Cortical Connections: Sparseness and Structure (pg. 190)
5.5.3 Motifs (pg. 191)
5.6 Tutorial 5.2: Detecting Circuit Structure and Nonrandom Features within a Connectivity Matrix (pg. 193)
5.7 Oscillations and Multistability in Small Circuits (pg. 196)
5.8 Central Pattern Generators (pg. 197)
5.8.1 The Half-Center Oscillator (pg. 199)
5.8.2 The Triphasic Rhythm (pg. 199)
5.8.3 Phase Response Curve (pg. 200)
5.9 Tutorial 5.3: Bistability and Oscillations from Two LIF Neurons (pg. 203)
5.10 Appendix: Synaptic Input Produced by a Poisson Process (pg. 205)
5.10.1 Synaptic Saturation (pg. 205)
5.10.2 Synaptic Depression (pg. 208)
5.10.3 Synaptic Facilitation (pg. 209)
5.10.4 Notes on Combining Mechanisms (pg. 209)
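
Chapter 5 models spike-induced transmission (section 5.2.1) and short-term depression (section 5.3.1). A sketch combining the two: a synaptic gating variable driven by a regular presynaptic train, with each release scaled by a depleting resource. Time constants and the release fraction are illustrative values:

```matlab
% Gating variable s decays with tau_s and jumps at each presynaptic spike
% by an amount scaled by depression variable D, which is depleted by each
% spike and recovers toward 1 with time constant tau_D.
tau_s = 0.005; tau_D = 0.3; p0 = 0.5;   % decay (s), recovery (s), release fraction
dt = 1e-4; t = 0:dt:2;
prespikes = (mod(round(t/dt), round(0.05/dt)) == 0);  % regular 20-Hz presynaptic train
s = zeros(size(t)); D = ones(size(t));
for i = 2:length(t)
    s(i) = s(i-1) - dt*s(i-1)/tau_s;          % exponential decay of gating
    D(i) = D(i-1) + dt*(1 - D(i-1))/tau_D;    % recovery of synaptic resources
    if prespikes(i)
        s(i) = s(i) + p0*D(i);                % release scaled by available resources
        D(i) = D(i)*(1 - p0);                 % depletion by the spike
    end
end
plot(t, s); xlabel('Time (s)'); ylabel('Synaptic gating, s');  % successive responses shrink
```
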
6 Firing-Rate Models and Network Dynamics (pg. 211)
6.1 Firing-Rate Models (pg. 211)
6.2 Simulating a Firing-Rate Model (pg. 213)
6.2.1 Meaning of a Unit and Dale’s Principle (pg. 216)
6.3 Recurrent Feedback and Bistability (pg. 217)
6.3.1 Bistability from Positive Feedback (pg. 217)
6.3.2 Limiting the Maximum Firing Rate Reached (pg. 221)
6.3.3 Dynamics of Synaptic Response (pg. 222)
6.3.4 Dynamics of Synaptic Depression and Facilitation (pg. 223)
6.3.5 Integration and Parametric Memory (pg. 225)
6.4 Tutorial 6.1: Bistability and Oscillations in a Firing-Rate Model with Feedback (pg. 227)
6.5 Decision-Making Circuits (pg. 229)
6.5.1 Decisions by Integration of Evidence (pg. 232)
6.5.2 Decision-Making Performance (pg. 233)
6.5.3 Decisions as State Transitions (pg. 235)
6.5.4 Biasing Decisions (pg. 235)
6.6 Tutorial 6.2: Dynamics of a Decision-Making Circuit in Two Modes of Operation (pg. 236)
6.7 Oscillations from Excitatory and Inhibitory Feedback (pg. 238)
6.8 Tutorial 6.3: Frequency of an Excitatory-Inhibitory Coupled Unit Oscillator and PING (pg. 242)
6.9 Orientation Selectivity and Contrast Invariance (pg. 245)
6.9.1 Ring Model (pg. 246)
6.10 Ring Attractors for Spatial Memory and Head Direction (pg. 250)
6.10.1 Dynamics of the Ring Attractor (pg. 252)
6.11 Tutorial 6.4: Orientation Selectivity in a Ring Model (pg. 254)
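
Chapter 6's firing-rate units obey tau dr/dt = -r + f(input) (section 6.1), and strong recurrent excitation can make a single unit bistable (section 6.3.1). A sketch with an assumed sigmoid f-I curve, started from two different rates so the trajectories settle into the two stable states; the functional form and parameters are illustrative:

```matlab
% Firing-rate unit with self-excitation: tau*dr/dt = -r + f(W*r + I).
f = @(x) 100./(1 + exp(-(x - 50)/10));  % assumed sigmoid f-I curve (Hz)
tau = 0.010; W = 1.2; I = 0;            % time constant (s), feedback weight, input
dt = 1e-4; t = 0:dt:0.5;
r = zeros(2, length(t));
r(:,1) = [5; 60];                       % two initial rates, one per trajectory
for i = 2:length(t)
    r(:,i) = r(:,i-1) + dt*(-r(:,i-1) + f(W*r(:,i-1) + I))/tau;  % Euler step
end
plot(t, r);                             % one trajectory per stable state
xlabel('Time (s)'); ylabel('Rate (Hz)');
```
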
7 An Introduction to Dynamical Systems (pg. 257)
7.1 What Is a Dynamical System? (pg. 257)
7.2 Single Variable Behavior and Fixed Points (pg. 258)
7.2.1 Bifurcations (pg. 258)
7.2.2 Requirement for Oscillations (pg. 260)
7.3 Models with Two Variables (pg. 261)
7.3.1 Nullclines and Phase-Plane Analysis (pg. 262)
7.3.2 The Inhibition-Stabilized Network (pg. 264)
7.3.3 How Inhibitory Feedback to Inhibitory Neurons Impacts Stability of States (pg. 267)
7.4 Tutorial 7.1: The Inhibition-Stabilized Circuit (pg. 267)
7.5 Attractor State Itinerancy (pg. 269)
7.5.1 Bistable Percepts (pg. 269)
7.5.2 Noise-Driven Transitions in a Bistable System (pg. 270)
7.6 Quasistability and Relaxation Oscillators: The FitzHugh-Nagumo Model (pg. 271)
7.7 Heteroclinic Sequences (pg. 275)
7.8 Chaos (pg. 275)
7.8.1 Chaotic Systems and Lack of Predictability (pg. 277)
7.8.2 Examples of Chaotic Neural Circuits (pg. 279)
7.9 Criticality (pg. 282)
7.9.1 Power-Law Distributions (pg. 283)
7.9.2 Requirements for Criticality (pg. 284)
7.9.3 A Simplified Avalanche Model with a Subset of the Features of Criticality (pg. 287)
7.10 Tutorial 7.2: Diverse Dynamical Systems from Similar Circuit Architectures (pg. 288)
7.11 Appendix: Proof of the Scaling Relationship for Avalanche Sizes (pg. 290)
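
Chapter 7 treats the FitzHugh-Nagumo model as a relaxation oscillator (section 7.6). A sketch using standard textbook parameter choices, integrated by forward Euler and viewed in the phase plane:

```matlab
% FitzHugh-Nagumo: dv/dt = v - v^3/3 - w + I,  dw/dt = eps*(v + a - b*w).
I = 0.5; a = 0.7; b = 0.8; eps = 0.08;  % common textbook values
dt = 0.01; t = 0:dt:200;                % dimensionless time
v = zeros(size(t)); w = zeros(size(t));
v(1) = -1; w(1) = 1;                    % initial condition
for i = 2:length(t)
    v(i) = v(i-1) + dt*(v(i-1) - v(i-1)^3/3 - w(i-1) + I);  % fast variable
    w(i) = w(i-1) + dt*eps*(v(i-1) + a - b*w(i-1));         % slow variable
end
plot(v, w);                             % limit cycle in the (v, w) phase plane
xlabel('v (fast variable)'); ylabel('w (slow variable)');
```
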
8 Learning and Synaptic Plasticity (pg. 293)
8.1 Hebbian Plasticity (pg. 293)
8.1.1 Modeling Hebbian Plasticity (pg. 296)
8.2 Tutorial 8.1: Pattern Completion and Pattern Separation via Hebbian Learning (pg. 297)
8.3 Spike-Timing Dependent Plasticity (STDP) (pg. 300)
8.3.1 Model of STDP (pg. 302)
8.3.2 Synaptic Competition via STDP (pg. 304)
8.3.3 Sequence Learning via STDP (pg. 305)
8.3.4 Triplet STDP (pg. 305)
8.3.5 A Note on Spike-Timing Dependent Plasticity (pg. 308)
8.3.6 Mechanisms of Spike-Timing Dependent Synaptic Plasticity (pg. 309)
8.4 More Detailed Empirical Models of Synaptic Plasticity (pg. 309)
8.5 Tutorial 8.2: Competition via STDP (pg. 311)
8.6 Homeostasis (pg. 313)
8.6.1 Firing-Rate Homeostasis (pg. 314)
8.6.2 Homeostasis of Synaptic Input (pg. 316)
8.6.3 Homeostasis of Intrinsic Properties (pg. 317)
8.7 Supervised Learning (pg. 319)
8.7.1 Conditioning (pg. 321)
8.7.2 Reward Prediction Errors and Reinforcement Learning (pg. 322)
8.7.3 The Weather-Prediction Task (pg. 324)
8.7.4 Calculations Required in the Weather-Prediction Task (pg. 325)
8.8 Tutorial 8.3: Learning the Weather-Prediction Task in a Neural Circuit (pg. 326)
8.9 Eyeblink Conditioning (pg. 329)
8.10 Tutorial 8.4: A Model of Eyeblink Conditioning (pg. 331)
8.11 Appendix A: Rate-Dependent Plasticity via STDP between Uncorrelated Poisson Spike Trains (pg. 335)
8.12 Appendix B: Rate-Dependence of Triplet STDP between Uncorrelated Poisson Spike Trains (pg. 336)
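
Chapter 8 models pair-based STDP (section 8.3.1): a pre-before-post pairing potentiates the synapse, a post-before-pre pairing depresses it, each effect decaying exponentially with the spike-time difference. A sketch of the resulting weight-change curve; the amplitudes and time constants are illustrative:

```matlab
% Pair-based STDP: dw = +A_plus*exp(-delta/tau_plus) for delta > 0 (LTP),
%                  dw = -A_minus*exp(delta/tau_minus) for delta < 0 (LTD),
% where delta = t_post - t_pre.
A_plus = 0.005; A_minus = 0.00525;     % illustrative amplitudes
tau_plus = 0.020; tau_minus = 0.020;   % STDP time constants (s)
delta = -0.1:0.001:0.1;                % spike-time differences (s)
dw = zeros(size(delta));
dw(delta > 0) =  A_plus *exp(-delta(delta > 0)/tau_plus);   % potentiation branch
dw(delta < 0) = -A_minus*exp( delta(delta < 0)/tau_minus);  % depression branch
plot(delta*1000, dw);
xlabel('t_{post} - t_{pre} (ms)'); ylabel('\Delta w');
```
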
9 Analysis of Population Data (pg. 339)
9.1 Principal Component Analysis (PCA) (pg. 340)
9.1.1 PCA for Sorting of Spikes (pg. 341)
9.1.2 PCA for Analysis of Firing Rates (pg. 342)
9.1.3 PCA in Practice (pg. 342)
9.1.4 The Procedure of PCA (pg. 345)
9.2 Tutorial 9.1: Principal Component Analysis of Firing-Rate Trajectories (pg. 346)
9.3 Single-Trial versus Trial-Averaged Analyses (pg. 348)
9.4 Change-Point Detection (pg. 349)
9.4.1 Computational Note (pg. 351)
9.5 Hidden Markov Modeling (HMM) (pg. 351)
9.6 Tutorial 9.2: Change-Point Detection for a Poisson Process (pg. 355)
9.7 Decoding Position from Multiple Place Fields (pg. 357)
9.8 Appendix A: How PCA Works: Choosing a Direction to Maximize the Variance of the Projected Data (pg. 362)
9.8.1 Carrying out PCA without a Built-in Function (pg. 364)
9.9 Appendix B: Determining the Probability of Change Points for a Poisson Process (pg. 366)
9.9.1 Optimal Rate (pg. 366)
9.9.2 Evaluating the Change Point, Method 1 (pg. 367)
9.9.3 Evaluating the Change Point, Method 2 (pg. 367)
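
Chapter 9 performs PCA by diagonalizing the data covariance matrix (appendix 9.8). A sketch on synthetic firing-rate data, done without a built-in PCA function (cf. section 9.8.1); the data-generation step is fabricated purely for illustration:

```matlab
% PCA via eigendecomposition of the covariance of centered data:
% rows are time points, columns are neurons.
rng(1);                                  % reproducible synthetic data
T = 500; N = 10;
latent = sin(2*pi*(1:T)'/100);           % one shared 1-D latent trajectory
rates = latent*randn(1, N) + 0.1*randn(T, N);  % project onto N neurons, add noise
X = rates - mean(rates, 1);              % center each neuron's rate
C = (X'*X)/(T - 1);                      % covariance matrix (N x N)
[V, D] = eig(C);                         % eigenvectors (columns) and eigenvalues
[evals, order] = sort(diag(D), 'descend');
V = V(:, order);                         % principal components, ranked by variance
proj = X*V(:, 1:2);                      % data projected onto the first two PCs
fracvar = evals(1)/sum(evals);           % fraction of variance explained by PC1
```
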
References (pg. 369)
Index (pg. 381)

Paul Miller

Paul Miller is Associate Professor in the Department of Biology and the Volen National Center for Complex Systems at Brandeis University, where he is also Undergraduate Advising Head for the Neuroscience Program.

