An Introductory Course in Computational Neuroscience
by Paul Miller
ISBN: 9780262364539  Copyright 2018

Contents (pg. v)  
Series Foreword (pg. xiii)  
Acknowledgments (pg. xv)  
Preface (pg. xvii)  
1 Preliminary Material (pg. 1)  
1.1 Introduction (pg. 1)  
1.1.1 The Cell, the Circuit, and the Brain (pg. 1)  
1.1.2 Physics of Electrical Circuits (pg. 1)  
1.1.3 Mathematical Preliminaries (pg. 2)  
1.1.4 Writing Computer Code (pg. 4)  
1.2 The Neuron, the Circuit, and the Brain (pg. 4)  
1.2.1 The Cellular Level (pg. 4)  
1.2.2 The Circuit Level (pg. 7)  
1.2.3 The Regional Level (pg. 8)  
1.3 Physics of Electrical Circuits (pg. 11)  
1.3.1 Terms and Properties (pg. 11)  
1.3.2 Pumps, Reservoirs, and Pipes (pg. 12)  
1.3.3 Some Peculiarities of the Electrical Properties of Neurons (pg. 13)  
1.4 Mathematical Background (pg. 14)  
1.4.1 Ordinary Differential Equations (pg. 15)  
1.4.2 Vectors, Matrices, and Their Basic Operations (pg. 24)  
1.4.3 Probability and Bayes’ Theorem (pg. 28)  
1.5 Introduction to Computing and MATLAB (pg. 36)  
1.5.1 Basic Commands (pg. 37)  
1.5.2 Arrays (pg. 38)  
1.5.3 Allocation of Memory (pg. 40)  
1.5.4 Using the Colon (:) Symbol (pg. 41)  
1.5.5 Saving Your Work (pg. 42)  
1.5.6 Plotting Graphs (pg. 42)  
1.5.7 Vector and Matrix Operations in MATLAB (pg. 43)  
1.5.8 Conditionals (pg. 44)  
1.5.9 Loops (pg. 46)  
1.5.10 Functions (pg. 47)  
1.5.11 Some Operations Useful for Modeling Neurons (pg. 48)  
1.5.12 Good Coding Practice (pg. 49)  
1.6 Solving Ordinary Differential Equations (ODEs) (pg. 51)  
1.6.1 Forward Euler Method (pg. 51)  
1.6.2 Simulating ODEs with MATLAB (pg. 52)  
1.6.3 Solving Coupled ODEs with Multiple Variables (pg. 54)  
1.6.4 Solving ODEs with Nested for Loops (pg. 55)  
1.6.5 Comparing Simulation Methods (pg. 55)  
1.6.6 Euler-Maruyama Method: Forward Euler with White Noise (pg. 56)  
2 The Neuron and Minimal Spiking Models (pg. 59)  
2.1 The Nernst Equilibrium Potential (pg. 59)  
2.2 An Equivalent Circuit Model of the Neural Membrane (pg. 62)  
2.2.1 Depolarization versus Hyperpolarization (pg. 65)  
2.3 The Leaky Integrate-and-Fire Model (pg. 66)  
2.3.1 Specific versus Absolute Properties of the Cell (pg. 68)  
2.3.2 Firing Rate as a Function of Current (f-I Curve) of the Leaky Integrate-and-Fire Model (pg. 69)  
2.4 Tutorial 2.1: The f-I Curve of the Leaky Integrate-and-Fire Neuron (pg. 70)  
2.5 Extensions of the Leaky Integrate-and-Fire Model (pg. 72)  
2.5.1 Refractory Period (pg. 72)  
2.5.2 Spike-Rate Adaptation (SRA) (pg. 74)  
2.6 Tutorial 2.2: Modeling the Refractory Period (pg. 76)  
2.7 Further Extensions of the Leaky Integrate-and-Fire Model (pg. 78)  
2.7.1 Exponential Leaky Integrate-and-Fire (ELIF) Model (pg. 78)  
2.7.2 Two-Variable Models: The Adaptive Exponential Leaky Integrate-and-Fire (AELIF) Neuron (pg. 79)  
2.7.3 Limitations of the LIF Formalism (pg. 81)  
2.8 Tutorial 2.3: Models Based on Extensions of the LIF Neuron (pg. 81)  
2.9 Appendix: Calculation of the Nernst Potential (pg. 86)  
3 Analysis of Individual Spike Trains (pg. 89)  
3.1 Responses of Single Neurons (pg. 89)  
3.1.1 Receptive Fields (pg. 89)  
3.1.2 Time-Varying Responses and the Peristimulus Time Histogram (PSTH) (pg. 92)  
3.1.3 Neurons as Linear Filters and the Linear-Nonlinear Model (pg. 93)  
3.1.4 Spike-Triggered Average (pg. 96)  
3.1.5 White-Noise Stimuli for Receptive Field Generation (pg. 96)  
3.1.6 Spatiotemporal Receptive Fields (pg. 98)  
3.2 Tutorial 3.1: Generating Receptive Fields with Spike-Triggered Averages (pg. 100)  
3.3 Spike-Train Statistics (pg. 104)  
3.3.1 Coefficient of Variation (CV) of Interspike Intervals (pg. 105)  
3.3.2 Fano Factor (pg. 107)  
3.3.3 The Homogeneous Poisson Process: A Random Point Process for Artificial Spike Trains (pg. 108)  
3.3.4 Comments on Analyses and Use of Dummy Data (pg. 109)  
3.4 Tutorial 3.2: Statistical Properties of Simulated Spike Trains (pg. 110)  
3.5 Receiver-Operating Characteristic (ROC) (pg. 113)  
3.5.1 Producing the ROC Curve (pg. 113)  
3.5.2 Optimal Position of the Threshold (pg. 115)  
3.5.3 Uncovering the Underlying Distributions from Binary Responses: Recollection versus Familiarity (pg. 118)  
3.6 Tutorial 3.3: Receiver-Operating Characteristic of a Noisy Neuron (pg. 121)  
3.7 Appendix A: The Poisson Process (pg. 123)  
3.7.1 The Poisson Distribution (pg. 123)  
3.7.2 Expected Value of the Mean of a Poisson Process (pg. 125)  
3.7.3 Fano Factor of the Poisson Process (pg. 125)  
3.7.4 The Coefficient of Variation (CV) of the ISI Distribution of a Poisson Process (pg. 126)  
3.7.5 Selecting from a Probability Distribution: Generating ISIs for the Poisson Process (pg. 127)  
3.8 Appendix B: Stimulus Discriminability (pg. 128)  
3.8.1 Optimal Value of Threshold (pg. 129)  
3.8.2 Calculating the Probability of an Error (pg. 130)  
4 Conductance-Based Models (pg. 133)  
4.1 Introduction to the Hodgkin-Huxley Model (pg. 133)  
4.1.1 Positive versus Negative Feedback (pg. 134)  
4.1.2 Voltage Clamp versus Current Clamp (pg. 136)  
4.2 Simulation of the Hodgkin-Huxley Model (pg. 137)  
4.2.1 Two-State Systems (pg. 138)  
4.2.2 Full Set of Dynamical Equations for the Hodgkin-Huxley Model (pg. 139)  
4.2.3 Dynamical Behavior of the Hodgkin-Huxley Model: A Type-II Neuron (pg. 140)  
4.3 Tutorial 4.1: The Hodgkin-Huxley Model as an Oscillator (pg. 147)  
4.4 The Connor-Stevens Model: A Type-I Model (pg. 150)  
4.5 Calcium Currents and Bursting (pg. 154)  
4.5.1 Thalamic Rebound and the T-Type Calcium Channel (pg. 155)  
4.6 Tutorial 4.2: Postinhibitory Rebound (pg. 156)  
4.7 Modeling Multiple Compartments (pg. 159)  
4.7.1 The Pinsky-Rinzel Model of an Intrinsic Burster (pg. 160)  
4.7.2 Simulating the Pinsky-Rinzel Model (pg. 160)  
4.7.3 A Note on Multicompartmental Modeling with Specific Conductances versus Absolute Conductances (pg. 163)  
4.7.4 Model Complexity (pg. 166)  
4.8 Hyperpolarization-Activated Currents (Ih) and Pacemaker Control (pg. 166)  
4.9 Dendritic Computation (pg. 168)  
4.10 Tutorial 4.3: A Two-Compartment Model of an Intrinsically Bursting Neuron (pg. 170)  
5 Connections between Neurons (pg. 173)  
5.1 The Synapse (pg. 173)  
5.1.1 Electrical Synapses (pg. 173)  
5.1.2 Chemical Synapses (pg. 174)  
5.2 Modeling Synaptic Transmission through Chemical Synapses (pg. 179)  
5.2.1 Spike-Induced Transmission (pg. 179)  
5.2.2 Graded Release (pg. 181)  
5.3 Dynamical Synapses (pg. 182)  
5.3.1 Short-Term Synaptic Depression (pg. 183)  
5.3.2 Short-Term Synaptic Facilitation (pg. 183)  
5.3.3 Modeling Dynamical Synapses (pg. 184)  
5.4 Tutorial 5.1: Synaptic Responses to Changes in Inputs (pg. 185)  
5.5 The Connectivity Matrix (pg. 187)  
5.5.1 General Types of Connectivity Matrices (pg. 189)  
5.5.2 Cortical Connections: Sparseness and Structure (pg. 190)  
5.5.3 Motifs (pg. 191)  
5.6 Tutorial 5.2: Detecting Circuit Structure and Nonrandom Features within a Connectivity Matrix (pg. 193)  
5.7 Oscillations and Multistability in Small Circuits (pg. 196)  
5.8 Central Pattern Generators (pg. 197)  
5.8.1 The Half-Center Oscillator (pg. 199)  
5.8.2 The Triphasic Rhythm (pg. 199)  
5.8.3 Phase Response Curve (pg. 200)  
5.9 Tutorial 5.3: Bistability and Oscillations from Two LIF Neurons (pg. 203)  
5.10 Appendix: Synaptic Input Produced by a Poisson Process (pg. 205)  
5.10.1 Synaptic Saturation (pg. 205)  
5.10.2 Synaptic Depression (pg. 208)  
5.10.3 Synaptic Facilitation (pg. 209)  
5.10.4 Notes on Combining Mechanisms (pg. 209)  
6 Firing-Rate Models and Network Dynamics (pg. 211)  
6.1 Firing-Rate Models (pg. 211)  
6.2 Simulating a Firing-Rate Model (pg. 213)  
6.2.1 Meaning of a Unit and Dale’s Principle (pg. 216)  
6.3 Recurrent Feedback and Bistability (pg. 217)  
6.3.1 Bistability from Positive Feedback (pg. 217)  
6.3.2 Limiting the Maximum Firing Rate Reached (pg. 221)  
6.3.3 Dynamics of Synaptic Response (pg. 222)  
6.3.4 Dynamics of Synaptic Depression and Facilitation (pg. 223)  
6.3.5 Integration and Parametric Memory (pg. 225)  
6.4 Tutorial 6.1: Bistability and Oscillations in a Firing-Rate Model with Feedback (pg. 227)  
6.5 Decision-Making Circuits (pg. 229)  
6.5.1 Decisions by Integration of Evidence (pg. 232)  
6.5.2 Decision-Making Performance (pg. 233)  
6.5.3 Decisions as State Transitions (pg. 235)  
6.5.4 Biasing Decisions (pg. 235)  
6.6 Tutorial 6.2: Dynamics of a Decision-Making Circuit in Two Modes of Operation (pg. 236)  
6.7 Oscillations from Excitatory and Inhibitory Feedback (pg. 238)  
6.8 Tutorial 6.3: Frequency of an Excitatory-Inhibitory Coupled Unit Oscillator and PING (pg. 242)  
6.9 Orientation Selectivity and Contrast Invariance (pg. 245)  
6.9.1 Ring Model (pg. 246)  
6.10 Ring Attractors for Spatial Memory and Head Direction (pg. 250)  
6.10.1 Dynamics of the Ring Attractor (pg. 252)  
6.11 Tutorial 6.4: Orientation Selectivity in a Ring Model (pg. 254)  
7 An Introduction to Dynamical Systems (pg. 257)  
7.1 What Is a Dynamical System? (pg. 257)  
7.2 Single Variable Behavior and Fixed Points (pg. 258)  
7.2.1 Bifurcations (pg. 258)  
7.2.2 Requirement for Oscillations (pg. 260)  
7.3 Models with Two Variables (pg. 261)  
7.3.1 Nullclines and Phase-Plane Analysis (pg. 262)  
7.3.2 The Inhibition-Stabilized Network (pg. 264)  
7.3.3 How Inhibitory Feedback to Inhibitory Neurons Impacts Stability of States (pg. 267)  
7.4 Tutorial 7.1: The Inhibition-Stabilized Circuit (pg. 267)  
7.5 Attractor State Itinerancy (pg. 269)  
7.5.1 Bistable Percepts (pg. 269)  
7.5.2 Noise-Driven Transitions in a Bistable System (pg. 270)  
7.6 Quasistability and Relaxation Oscillators: The FitzHugh-Nagumo Model (pg. 271)  
7.7 Heteroclinic Sequences (pg. 275)  
7.8 Chaos (pg. 275)  
7.8.1 Chaotic Systems and Lack of Predictability (pg. 277)  
7.8.2 Examples of Chaotic Neural Circuits (pg. 279)  
7.9 Criticality (pg. 282)  
7.9.1 Power-Law Distributions (pg. 283)  
7.9.2 Requirements for Criticality (pg. 284)  
7.9.3 A Simplified Avalanche Model with a Subset of the Features of Criticality (pg. 287)  
7.10 Tutorial 7.2: Diverse Dynamical Systems from Similar Circuit Architectures (pg. 288)  
7.11 Appendix: Proof of the Scaling Relationship for Avalanche Sizes (pg. 290)  
8 Learning and Synaptic Plasticity (pg. 293)  
8.1 Hebbian Plasticity (pg. 293)  
8.1.1 Modeling Hebbian Plasticity (pg. 296)  
8.2 Tutorial 8.1: Pattern Completion and Pattern Separation via Hebbian Learning (pg. 297)  
8.3 Spike-Timing-Dependent Plasticity (STDP) (pg. 300)  
8.3.1 Model of STDP (pg. 302)  
8.3.2 Synaptic Competition via STDP (pg. 304)  
8.3.3 Sequence Learning via STDP (pg. 305)  
8.3.4 Triplet STDP (pg. 305)  
8.3.5 A Note on Spike-Timing-Dependent Plasticity (pg. 308)  
8.3.6 Mechanisms of Spike-Timing-Dependent Synaptic Plasticity (pg. 309)  
8.4 More Detailed Empirical Models of Synaptic Plasticity (pg. 309)  
8.5 Tutorial 8.2: Competition via STDP (pg. 311)  
8.6 Homeostasis (pg. 313)  
8.6.1 Firing-Rate Homeostasis (pg. 314)  
8.6.2 Homeostasis of Synaptic Input (pg. 316)  
8.6.3 Homeostasis of Intrinsic Properties (pg. 317)  
8.7 Supervised Learning (pg. 319)  
8.7.1 Conditioning (pg. 321)  
8.7.2 Reward Prediction Errors and Reinforcement Learning (pg. 322)  
8.7.3 The Weather-Prediction Task (pg. 324)  
8.7.4 Calculations Required in the Weather-Prediction Task (pg. 325)  
8.8 Tutorial 8.3: Learning the Weather-Prediction Task in a Neural Circuit (pg. 326)  
8.9 Eyeblink Conditioning (pg. 329)  
8.10 Tutorial 8.4: A Model of Eyeblink Conditioning (pg. 331)  
8.11 Appendix A: Rate-Dependent Plasticity via STDP between Uncorrelated Poisson Spike Trains (pg. 335)  
8.12 Appendix B: Rate-Dependence of Triplet STDP between Uncorrelated Poisson Spike Trains (pg. 336)  
9 Analysis of Population Data (pg. 339)  
9.1 Principal Component Analysis (PCA) (pg. 340)  
9.1.1 PCA for Sorting of Spikes (pg. 341)  
9.1.2 PCA for Analysis of Firing Rates (pg. 342)  
9.1.3 PCA in Practice (pg. 342)  
9.1.4 The Procedure of PCA (pg. 345)  
9.2 Tutorial 9.1: Principal Component Analysis of Firing-Rate Trajectories (pg. 346)  
9.3 Single-Trial versus Trial-Averaged Analyses (pg. 348)  
9.4 Change-Point Detection (pg. 349)  
9.4.1 Computational Note (pg. 351)  
9.5 Hidden Markov Modeling (HMM) (pg. 351)  
9.6 Tutorial 9.2: Change-Point Detection for a Poisson Process (pg. 355)  
9.7 Decoding Position from Multiple Place Fields (pg. 357)  
9.8 Appendix A: How PCA Works: Choosing a Direction to Maximize the Variance of the Projected Data (pg. 362)  
9.8.1 Carrying out PCA without a Built-in Function (pg. 364)  
9.9 Appendix B: Determining the Probability of Change Points for a Poisson Process (pg. 366)  
9.9.1 Optimal Rate (pg. 366)  
9.9.2 Evaluating the Change Point, Method 1 (pg. 367)  
9.9.3 Evaluating the Change Point, Method 2 (pg. 367)  
References (pg. 369)  
Index (pg. 381) 
Paul Miller
Paul Miller is Associate Professor in the Department of Biology and the Volen National Center for Complex Systems at Brandeis University, where he is also Undergraduate Advising Head for the Neuroscience Program.