Probabilistic Machine Learning for Civil Engineers
by James-A. Goulet
ISBN: 9780262538701 | Copyright 2020
An introduction to key concepts and techniques in probabilistic machine learning for civil engineering students and professionals, with many step-by-step examples, illustrations, and exercises.
This book introduces probabilistic machine learning concepts to civil engineering students and professionals, presenting key approaches and techniques in a way that is accessible to readers without a specialized background in statistics or computer science. It presents different methods clearly and directly, through step-by-step examples, illustrations, and exercises. Having mastered the material, readers will be able to understand the more advanced machine learning literature from which this book draws.
The book presents key approaches in the three subfields of probabilistic machine learning: supervised learning, unsupervised learning, and reinforcement learning. It first covers the background knowledge required to understand machine learning, including linear algebra and probability theory. It goes on to present Bayesian estimation, which underlies the formulation of both supervised and unsupervised learning methods, and Markov chain Monte Carlo methods, which enable Bayesian estimation in certain complex cases. The book then covers approaches associated with supervised learning, including regression and classification methods, and notions associated with unsupervised learning, including clustering, dimensionality reduction, Bayesian networks, state-space models, and model calibration. Finally, the book introduces fundamental concepts of rational decision-making, first in uncertain contexts and then in sequential ones. Building on this, it describes the basics of reinforcement learning, whereby a virtual agent learns how to make optimal decisions through trial and error while interacting with its environment.
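As a taste of the Bayesian estimation covered in Part II (Learning from Data, Conjugate Priors), the sketch below shows a conjugate Beta-Binomial update. The inspection scenario and function name are illustrative assumptions, not taken from the book itself.

```python
# Minimal sketch of Bayesian parameter estimation with a conjugate prior:
# estimating a defect probability theta from pass/fail inspections, using
# a Beta prior (conjugate to the Binomial likelihood).

def beta_binomial_update(alpha, beta, successes, failures):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + successes, beta + failures

# Uniform prior Beta(1, 1); observe 3 defects in 10 inspections.
a, b = beta_binomial_update(1, 1, 3, 7)
posterior_mean = a / (a + b)  # posterior mean of theta
print(a, b, round(posterior_mean, 3))  # → 4 8 0.333
```

Because the Beta prior is conjugate to the Binomial likelihood, the posterior is available in closed form; the MCMC methods of chapter 7 address the cases where no such closed form exists.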
Contents (pg. v)
List of Figures (pg. xi)
List of Algorithms (pg. xix)
Acknowledgments (pg. xxi)
Nomenclature & Abbreviations (pg. xxiii)
1. Introduction (pg. 1)
I. Background (pg. 7)
2. Linear Algebra (pg. 9)
2.1 Notation (pg. 9)
2.2 Operations (pg. 10)
2.3 Norms (pg. 12)
2.4 Transformations (pg. 12)
3. Probability Theory (pg. 17)
3.1 Set Theory (pg. 18)
3.2 Probability of Events (pg. 19)
3.3 Random Variables (pg. 22)
3.4 Functions of Random Variables (pg. 29)
4. Probability Distributions (pg. 35)
4.1 Normal Distribution (pg. 35)
4.2 Log-Normal Distribution (pg. 41)
4.3 Beta Distribution (pg. 44)
5. Convex Optimization (pg. 47)
5.1 Gradient Ascent (pg. 48)
5.2 Newton-Raphson (pg. 50)
5.3 Coordinate Ascent (pg. 52)
5.4 Numerical Derivatives (pg. 53)
5.5 Parameter-Space Transformation (pg. 54)
II. Bayesian Estimation (pg. 57)
6. Learning from Data (pg. 59)
6.1 Bayes (pg. 59)
6.2 Discrete State Variables (pg. 61)
6.3 Continuous State Variables (pg. 66)
6.4 Parameter Estimation (pg. 71)
6.5 Monte Carlo (pg. 74)
6.6 Conjugate Priors (pg. 79)
6.7 Approximating the Posterior (pg. 82)
6.8 Model Selection (pg. 85)
7. Markov Chain Monte Carlo (pg. 89)
7.1 Metropolis (pg. 90)
7.2 Metropolis-Hastings (pg. 92)
7.3 Convergence Checks (pg. 92)
7.4 Space Transformation (pg. 97)
7.5 Computing with MCMC Samples (pg. 99)
III. Supervised Learning (pg. 105)
8. Regression (pg. 107)
8.1 Linear Regression (pg. 107)
8.2 Gaussian Process Regression (pg. 115)
8.3 Neural Networks (pg. 126)
9. Classification (pg. 139)
9.1 Generative Classifiers (pg. 140)
9.2 Logistic Regression (pg. 144)
9.3 Gaussian Process Classification (pg. 146)
9.4 Neural Networks (pg. 150)
9.5 Regression versus Classification (pg. 152)
IV. Unsupervised Learning (pg. 155)
10. Clustering and Dimension Reduction (pg. 157)
10.1 Clustering (pg. 157)
10.2 Principal Component Analysis (pg. 163)
11. Bayesian Networks (pg. 167)
11.1 Graphical Models Nomenclature (pg. 169)
11.2 Conditional Independence (pg. 170)
11.3 Inference (pg. 171)
11.4 Conditional Probability Estimation (pg. 173)
11.5 Dynamic Bayesian Network (pg. 177)
12. State-Space Models (pg. 181)
12.1 Linear Gaussian State-Space Models (pg. 182)
12.2 State-Space Models with Regime Switching (pg. 194)
12.3 Linear Model Structures (pg. 198)
12.4 Anomaly Detection (pg. 208)
13. Model Calibration (pg. 213)
13.1 Least-Squares Model Calibration (pg. 215)
13.2 Hierarchical Bayesian Estimation (pg. 218)
V. Reinforcement Learning (pg. 227)
14. Decisions in Uncertain Contexts (pg. 229)
14.1 Introductory Example (pg. 229)
14.2 Utility Theory (pg. 230)
14.3 Utility Functions (pg. 232)
14.4 Value of Information (pg. 236)
15. Sequential Decisions (pg. 241)
15.1 Markov Decision Process (pg. 244)
15.2 Model-Free Reinforcement Learning (pg. 252)
Bibliography (pg. 259)
Index (pg. 267)