Deep Learning
by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
ISBN: 9780262337434 | Copyright 2016
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning.
The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models.
Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
“Written by three experts in the field, Deep Learning is the only comprehensive book on the subject. It provides much-needed broad perspective and mathematical preliminaries for software engineers and students entering the field, and serves as a reference for authorities.”
—Elon Musk, co-chair of OpenAI; co-founder and CEO of Tesla and SpaceX
“This is the definitive textbook on deep learning. Written by major contributors to the field, it is clear, comprehensive, and authoritative. If you want to know where deep learning came from, what it is good for, and where it is going, read this book.”
—Geoffrey Hinton FRS, Emeritus Professor, University of Toronto; Distinguished Research Scientist, Google
“Deep learning has taken the world of technology by storm since the beginning of the decade. There was a need for a textbook for students, practitioners, and instructors that includes basic concepts, practical aspects, and advanced research topics. This is the first comprehensive textbook on the subject, written by some of the most innovative and prolific researchers in the field. This will be a reference for years to come.”
—Yann LeCun, Director of AI Research, Facebook; Silver Professor of Computer Science, Data Science, and Neuroscience, New York University

Contents (pg. v)  
Website (pg. xiii)  
Acknowledgments (pg. xv)  
Notation (pg. xix)  
1 Introduction (pg. 1)  
1.1 Who Should Read This Book? (pg. 8)  
1.2 Historical Trends in Deep Learning (pg. 12)  
I Applied Math and Machine Learning Basics (pg. 27)  
2 Linear Algebra (pg. 29)  
2.1 Scalars, Vectors, Matrices and Tensors (pg. 29)  
2.2 Multiplying Matrices and Vectors (pg. 32)  
2.3 Identity and Inverse Matrices (pg. 34)  
2.4 Linear Dependence and Span (pg. 35)  
2.5 Norms (pg. 36)  
2.6 Special Kinds of Matrices and Vectors (pg. 38)  
2.7 Eigendecomposition (pg. 39)  
2.8 Singular Value Decomposition (pg. 42)  
2.9 The Moore-Penrose Pseudoinverse (pg. 43)  
2.10 The Trace Operator (pg. 44)  
2.11 The Determinant (pg. 45)  
2.12 Example: Principal Components Analysis (pg. 45)  
3 Probability and Information Theory (pg. 51)  
3.1 Why Probability? (pg. 52)  
3.2 Random Variables (pg. 54)  
3.3 Probability Distributions (pg. 54)  
3.4 Marginal Probability (pg. 56)  
3.5 Conditional Probability (pg. 57)  
3.6 The Chain Rule of Conditional Probabilities (pg. 57)  
3.7 Independence and Conditional Independence (pg. 58)  
3.8 Expectation, Variance and Covariance (pg. 58)  
3.9 Common Probability Distributions (pg. 60)  
3.10 Useful Properties of Common Functions (pg. 65)  
3.11 Bayes’ Rule (pg. 68)  
3.12 Technical Details of Continuous Variables (pg. 68)  
3.13 Information Theory (pg. 70)  
3.14 Structured Probabilistic Models (pg. 74)  
4 Numerical Computation (pg. 77)  
4.1 Overflow and Underflow (pg. 77)  
4.2 Poor Conditioning (pg. 79)  
4.3 Gradient-Based Optimization (pg. 79)  
4.4 Constrained Optimization (pg. 89)  
4.5 Example: Linear Least Squares (pg. 92)  
5 Machine Learning Basics (pg. 95)  
5.1 Learning Algorithms (pg. 96)  
5.2 Capacity, Overfitting and Underfitting (pg. 107)  
5.3 Hyperparameters and Validation Sets (pg. 117)  
5.4 Estimators, Bias and Variance (pg. 119)  
5.5 Maximum Likelihood Estimation (pg. 128)  
5.6 Bayesian Statistics (pg. 132)  
5.7 Supervised Learning Algorithms (pg. 136)  
5.8 Unsupervised Learning Algorithms (pg. 142)  
5.9 Stochastic Gradient Descent (pg. 147)  
5.10 Building a Machine Learning Algorithm (pg. 149)  
5.11 Challenges Motivating Deep Learning (pg. 151)  
II Deep Networks: Modern Practices (pg. 161)  
6 Deep Feedforward Networks (pg. 163)  
6.1 Example: Learning XOR (pg. 166)  
6.2 Gradient-Based Learning (pg. 171)  
6.3 Hidden Units (pg. 185)  
6.4 Architecture Design (pg. 191)  
6.5 Back-Propagation and Other Differentiation Algorithms (pg. 197)  
6.6 Historical Notes (pg. 217)  
7 Regularization for Deep Learning (pg. 221)  
7.1 Parameter Norm Penalties (pg. 223)  
7.2 Norm Penalties as Constrained Optimization (pg. 230)  
7.3 Regularization and Under-Constrained Problems (pg. 232)  
7.4 Dataset Augmentation (pg. 233)  
7.5 Noise Robustness (pg. 235)  
7.6 Semi-Supervised Learning (pg. 236)  
7.7 Multitask Learning (pg. 237)  
7.8 Early Stopping (pg. 239)  
7.9 Parameter Tying and Parameter Sharing (pg. 246)  
7.10 Sparse Representations (pg. 247)  
7.11 Bagging and Other Ensemble Methods (pg. 249)  
7.12 Dropout (pg. 251)  
7.13 Adversarial Training (pg. 261)  
7.14 Tangent Distance, Tangent Prop and Manifold Tangent Classifier (pg. 263)  
8 Optimization for Training Deep Models (pg. 267)  
8.1 How Learning Differs from Pure Optimization (pg. 268)  
8.2 Challenges in Neural Network Optimization (pg. 275)  
8.3 Basic Algorithms (pg. 286)  
8.4 Parameter Initialization Strategies (pg. 292)  
8.5 Algorithms with Adaptive Learning Rates (pg. 298)  
8.6 Approximate Second-Order Methods (pg. 302)  
8.7 Optimization Strategies and Meta-Algorithms (pg. 309)  
9 Convolutional Networks (pg. 321)  
9.1 The Convolution Operation (pg. 322)  
9.2 Motivation (pg. 324)  
9.3 Pooling (pg. 330)  
9.4 Convolution and Pooling as an Infinitely Strong Prior (pg. 334)  
9.5 Variants of the Basic Convolution Function (pg. 337)  
9.6 Structured Outputs (pg. 347)  
9.7 Data Types (pg. 348)  
9.8 Efficient Convolution Algorithms (pg. 350)  
9.9 Random or Unsupervised Features (pg. 351)  
9.10 The Neuroscientific Basis for Convolutional Networks (pg. 353)  
9.11 Convolutional Networks and the History of Deep Learning (pg. 359)  
10 Sequence Modeling: Recurrent and Recursive Nets (pg. 363)  
10.1 Unfolding Computational Graphs (pg. 365)  
10.2 Recurrent Neural Networks (pg. 368)  
10.3 Bidirectional RNNs (pg. 383)  
10.4 Encoder-Decoder Sequence-to-Sequence Architectures (pg. 385)  
10.5 Deep Recurrent Networks (pg. 387)  
10.6 Recursive Neural Networks (pg. 388)  
10.7 The Challenge of Long-Term Dependencies (pg. 390)  
10.8 Echo State Networks (pg. 392)  
10.9 Leaky Units and Other Strategies for Multiple Time Scales (pg. 395)  
10.10 The Long Short-Term Memory and Other Gated RNNs (pg. 397)  
10.11 Optimization for Long-Term Dependencies (pg. 401)  
10.12 Explicit Memory (pg. 405)  
11 Practical Methodology (pg. 409)  
11.1 Performance Metrics (pg. 410)  
11.2 Default Baseline Models (pg. 413)  
11.3 Determining Whether to Gather More Data (pg. 414)  
11.4 Selecting Hyperparameters (pg. 415)  
11.5 Debugging Strategies (pg. 424)  
11.6 Example: Multi-Digit Number Recognition (pg. 428)  
12 Applications (pg. 431)  
12.1 Large-Scale Deep Learning (pg. 431)  
12.2 Computer Vision (pg. 440)  
12.3 Speech Recognition (pg. 446)  
12.4 Natural Language Processing (pg. 448)  
12.5 Other Applications (pg. 465)  
III Deep Learning Research (pg. 475)  
13 Linear Factor Models (pg. 479)  
13.1 Probabilistic PCA and Factor Analysis (pg. 480)  
13.2 Independent Component Analysis (ICA) (pg. 481)  
13.3 Slow Feature Analysis (pg. 484)  
13.4 Sparse Coding (pg. 486)  
13.5 Manifold Interpretation of PCA (pg. 489)  
14 Autoencoders (pg. 493)  
14.1 Undercomplete Autoencoders (pg. 494)  
14.2 Regularized Autoencoders (pg. 495)  
14.3 Representational Power, Layer Size and Depth (pg. 499)  
14.4 Stochastic Encoders and Decoders (pg. 500)  
14.5 Denoising Autoencoders (pg. 501)  
14.6 Learning Manifolds with Autoencoders (pg. 506)  
14.7 Contractive Autoencoders (pg. 510)  
14.8 Predictive Sparse Decomposition (pg. 514)  
14.9 Applications of Autoencoders (pg. 515)  
15 Representation Learning (pg. 517)  
15.1 Greedy Layer-Wise Unsupervised Pretraining (pg. 519)  
15.2 Transfer Learning and Domain Adaptation (pg. 526)  
15.3 Semi-Supervised Disentangling of Causal Factors (pg. 532)  
15.4 Distributed Representation (pg. 536)  
15.5 Exponential Gains from Depth (pg. 543)  
15.6 Providing Clues to Discover Underlying Causes (pg. 544)  
16 Structured Probabilistic Models for Deep Learning (pg. 549)  
16.1 The Challenge of Unstructured Modeling (pg. 550)  
16.2 Using Graphs to Describe Model Structure (pg. 554)  
16.3 Sampling from Graphical Models (pg. 570)  
16.4 Advantages of Structured Modeling (pg. 572)  
16.5 Learning about Dependencies (pg. 572)  
16.6 Inference and Approximate Inference (pg. 573)  
16.7 The Deep Learning Approach to Structured Probabilistic Models (pg. 575)  
17 Monte Carlo Methods (pg. 581)  
17.1 Sampling and Monte Carlo Methods (pg. 581)  
17.2 Importance Sampling (pg. 583)  
17.3 Markov Chain Monte Carlo Methods (pg. 586)  
17.4 Gibbs Sampling (pg. 590)  
17.5 The Challenge of Mixing between Separated Modes (pg. 591)  
18 Confronting the Partition Function (pg. 597)  
18.1 The Log-Likelihood Gradient (pg. 598)  
18.2 Stochastic Maximum Likelihood and Contrastive Divergence (pg. 599)  
18.3 Pseudolikelihood (pg. 607)  
18.4 Score Matching and Ratio Matching (pg. 609)  
18.5 Denoising Score Matching (pg. 611)  
18.6 Noise-Contrastive Estimation (pg. 612)  
18.7 Estimating the Partition Function (pg. 614)  
19 Approximate Inference (pg. 623)  
19.1 Inference as Optimization (pg. 624)  
19.2 Expectation Maximization (pg. 626)  
19.3 MAP Inference and Sparse Coding (pg. 627)  
19.4 Variational Inference and Learning (pg. 629)  
19.5 Learned Approximate Inference (pg. 642)  
20 Deep Generative Models (pg. 645)  
20.1 Boltzmann Machines (pg. 645)  
20.2 Restricted Boltzmann Machines (pg. 647)  
20.3 Deep Belief Networks (pg. 651)  
20.4 Deep Boltzmann Machines (pg. 654)  
20.5 Boltzmann Machines for Real-Valued Data (pg. 667)  
20.6 Convolutional Boltzmann Machines (pg. 673)  
20.7 Boltzmann Machines for Structured or Sequential Outputs (pg. 675)  
20.8 Other Boltzmann Machines (pg. 677)  
20.9 Back-Propagation through Random Operations (pg. 678)  
20.10 Directed Generative Nets (pg. 682)  
20.11 Drawing Samples from Autoencoders (pg. 701)  
20.12 Generative Stochastic Networks (pg. 704)  
20.13 Other Generation Schemes (pg. 706)  
20.14 Evaluating Generative Models (pg. 707)  
20.15 Conclusion (pg. 710)  
Bibliography (pg. 711)  
Index (pg. 767) 
Ian Goodfellow is Research Scientist at OpenAI.
Yoshua Bengio is Professor of Computer Science at the Université de Montréal.
Aaron Courville is Assistant Professor of Computer Science at the Université de Montréal.
