Bayesian Models of Cognition
Reverse Engineering the Mind
by Thomas L. Griffiths, Nick Chater, and Joshua Tenenbaum
ISBN: 9780262381055 · Copyright 2024
The definitive introduction to Bayesian cognitive science, written by pioneers of the field.
How does human intelligence work, in engineering terms? How do our minds get so much from so little? Bayesian models of cognition provide a powerful framework for answering these questions by reverse-engineering the mind. This textbook offers an authoritative introduction to Bayesian cognitive science and a unifying theoretical perspective on how the mind works. Part I introduces the key mathematical ideas and illustrates them with examples from the psychological literature, including detailed derivations of specific models and references for learning more about the underlying principles. Part II details more advanced topics and their applications before engaging with critiques of the reverse-engineering approach. Written by experts at the forefront of new research, this comprehensive text brings the fields of cognitive science and artificial intelligence back together and establishes a firmly grounded mathematical and computational foundation for understanding human intelligence.
· The only textbook to comprehensively introduce the Bayesian approach to cognition
· Written by pioneers in the field
· Offers cutting-edge coverage of Bayesian cognitive science's research frontiers
· Suitable for advanced undergraduate and graduate students, and for researchers across the sciences with an interest in the mind, brain, and intelligence
· Features short tutorials and case studies of specific Bayesian models

Contents (pg. vii)  
Preface (pg. ix)  
I: The Basics (pg. 1)  
1. Introducing the Bayesian Approach to Cognitive Science (pg. 3)  
1.1. Generalization and Induction (pg. 3)  
1.2. From One Question to Many (pg. 10)  
1.2.1. The Role of Abstract Knowledge (pg. 13)  
1.2.2. The Form of Abstract Knowledge (pg. 15)  
1.2.3. The Origins of Abstract Knowledge (pg. 18)  
1.2.4. Using Knowledge to Inform Action (pg. 24)  
1.2.5. Making the Most of Limited Cognitive Resources (pg. 26)  
1.2.6. From Abstract Models to Universal Languages and Their Physical Implementation (pg. 28)  
1.2.7. Capturing the Contents of the Minds of Infants (pg. 29)  
1.2.8. Learning from and About Other People (pg. 30)  
1.3. A Reverse-Engineering View of the Mind and Brain (pg. 32)
1.4. The Promise of the Bayesian Approach to Cognitive Science (pg. 34)  
2. Probabilistic Models of Cognition in Historical Context (pg. 37)  
2.1. Symbolic Cognitive Science (pg. 38)  
2.2. Connectionism (pg. 45)  
2.3. Rational Approaches (pg. 49)  
2.4. The Bayesian Synthesis (pg. 55)  
2.5. Explaining the Mind, from the Top Down (pg. 56)  
2.6. Summary and Prospectus (pg. 57)  
3. Bayesian Inference (pg. 59)  
3.1. What Is Bayes’ Rule, and Why Be Bayesian? (pg. 59)  
3.2. Bayesian Inference with a Discrete Set of Hypotheses (pg. 66)  
3.3. Bayesian Inference with a Continuous Hypothesis Space (pg. 77)  
3.4. Bayesian Inference for Gaussians (pg. 85)  
3.5. Bayesian Inference for Other Distributions (pg. 90)  
3.6. Bayesian Model Selection (pg. 96)  
3.7. Summary (pg. 100)  
4. Graphical Models (pg. 101)  
4.1. Bayesian Networks (pg. 102)  
4.2. Probabilistic Inference in Graphical Models (pg. 111)  
4.3. Causal Graphical Models (pg. 115)  
4.4. Learning Graphical Models (pg. 117)  
4.5. Summary (pg. 126)  
5. Building Complex Generative Models (pg. 129)  
5.1. Mixture Models and Density Estimation (pg. 130)  
5.2. Mixture Models as Priors (pg. 134)  
5.3. Estimating Parameters in the Presence of Latent Variables (pg. 136)  
5.4. Topic Models (pg. 142)  
5.5. Hidden Markov Models (pg. 147)  
5.6. The Bayes-Kalman Filter and Linear Dynamical Systems (pg. 152)
5.7. Combining Probabilistic Models (pg. 156)  
5.8. Summary (pg. 158)  
6. Approximate Probabilistic Inference (pg. 159)  
6.1. Simple Monte Carlo (pg. 160)  
6.2. When Does Simple Monte Carlo Fail? (pg. 161)  
6.3. Rejection Sampling (pg. 162)  
6.4. Importance Sampling (pg. 164)  
6.5. Sequential Monte Carlo (pg. 168)  
6.6. Markov Chain Monte Carlo (pg. 171)  
6.7. Variational Inference (pg. 180)  
6.8. Summary (pg. 187)  
7. From Probabilities to Actions (pg. 189)  
7.1. Minimizing Losses: Statistical Decision Theory (pg. 190)  
7.2. Utilities and Beliefs (pg. 194)  
7.3. When Can a Utility Scale Be Defined? (pg. 195)  
7.4. The Accumulation of Evidence (pg. 201)  
7.5. Sequential Decision-Making (pg. 204)
7.6. Active Learning (pg. 216)  
7.7. Forward and Inverse Models (pg. 222)  
7.8. The Limits of Reason (pg. 223)  
7.9. Summary (pg. 225)  
II: Advanced Topics (pg. 227)  
Interlude (pg. 229)  
8. Learning Inductive Bias with Hierarchical Bayesian Models (pg. 231)  
8.1. A Hierarchical Beta-Binomial Model (pg. 233)
8.2. Causal Learning (pg. 237)
8.3. Property Induction (pg. 240)
8.4. Beyond Strict Hierarchies (pg. 242)
8.5. Future Directions (pg. 243)
8.6. Conclusion (pg. 244)
9. Capturing the Growth of Knowledge with Nonparametric Bayesian Models (pg. 245)  
9.1. Infinite Models for Categorization (pg. 246)  
9.2. Infinite Models for Feature Representations (pg. 256)  
9.3. Infinite Models for Function Learning (pg. 259)  
9.4. Future Directions (pg. 264)
9.5. Conclusion (pg. 265)
10. Estimating Subjective Probability Distributions (pg. 267)  
10.1. Elicitation of Probabilities (pg. 268)
10.2. Iterated Learning (pg. 269)
10.3. Serial Reproduction (pg. 273)
10.4. Markov Chain Monte Carlo with People (pg. 278)
10.5. Gibbs Sampling with People (pg. 281)
10.6. Future Directions (pg. 283)
10.7. Conclusion (pg. 284)
11. Sampling as a Bridge Across Levels of Analysis (pg. 285)  
11.1. A Strategy for Bridging Levels of Analysis (pg. 286)
11.2. Monte Carlo as a Psychological Mechanism (pg. 287)
11.3. Exemplar Models as Importance Samplers (pg. 287)
11.4. Particle Filters and Order Effects (pg. 291)
11.5. Markov Chain Monte Carlo and Stochastic Search (pg. 292)
11.6. A More Bayesian Approach to Sampling (pg. 294)
11.7. Making Connections to the Implementational Level (pg. 295)
11.8. Future Directions (pg. 296)
11.9. Conclusion (pg. 297)
12. Bayesian Models and Neural Networks (pg. 299)  
12.1. What Is a Neural Network? (pg. 300)  
12.2. Bayesian Inference by Neural Networks (pg. 301)  
12.3. Bayesian Inference for Neural Networks (pg. 306)  
12.4. Future Directions (pg. 311)  
12.5. Conclusion (pg. 313)  
13. Resource-Rational Analysis (pg. 315)
13.1. The Rational Use of Cognitive Resources (pg. 316)  
13.2. The Process of Resource-Rational Analysis (pg. 319)
13.3. Cognition as a Sequential Decision Problem (pg. 330)  
13.4. Future Directions (pg. 338)  
13.5. Conclusion (pg. 339)  
14. Theory of Mind and Inverse Planning (pg. 341)  
14.1. Representing and Inferring Desires (pg. 341)  
14.2. Representing and Inferring Beliefs (pg. 346)  
14.3. Action Understanding in Space and Time (pg. 349)  
14.4. Minds Thinking About Themselves and Other Minds (pg. 359)  
14.5. Future Directions (pg. 366)  
14.6. Conclusion (pg. 368)
15. Intuitive Physics as Probabilistic Inference (pg. 369)  
15.1. The Ecological Nature of Physical Reasoning (pg. 369)  
15.2. The Psychological Nature of Physical Reasoning (pg. 370)  
15.3. A Mental Model of Physics (pg. 371)  
15.4. Human Physical Reasoning (pg. 375)  
15.5. Efficient Physical Reasoning (pg. 384)  
15.6. Future Directions (pg. 389)  
15.7. Conclusion (pg. 393)
16. Language Processing and Language Learning (pg. 395)  
16.1. Language Processing (pg. 397)  
16.2. Language Acquisition (pg. 406)  
16.3. Ascending the Chomsky Hierarchy (pg. 412)  
16.4. Have Deep Neural Networks Solved the Problem of Processing and Learning Language? (pg. 418)  
16.5. Future Directions (pg. 421)  
16.6. Conclusion (pg. 422)  
17. Bayesian Inference over Logical Representations (pg. 423)  
17.1. Logical Theories (pg. 424)
17.2. A Hierarchical Bayesian Account of Theory Learning (pg. 425)
17.3. Learning a Kinship Theory (pg. 426)
17.4. Learning Relational Categories (pg. 428)
17.5. Specifying Priors over Logical Theories Using Grammars (pg. 431)
17.6. Future Directions (pg. 434)
17.7. Conclusion (pg. 435)
18. Probabilistic Programs as a Unifying Language of Thought (pg. 437)  
18.1. Probabilistic Programs and the Stochastic Lambda Calculus (pg. 438)  
18.2. A Probabilistic Programming Language: Church (pg. 440)  
18.3. Universality (pg. 443)  
18.4. Conditional Inference (pg. 445)  
18.5. From Probabilistic Programs to a Probabilistic Language of Thought (pg. 447)  
18.6. Putting the PLoT to Work: Bayesian Tug-of-War and Ping Pong (pg. 450)
18.7. Intuitive Theories (pg. 459)  
18.8. Concept Acquisition (pg. 467)
18.9. Future Directions (pg. 469)
18.10. Conclusion (pg. 470)
19. Learning as Bayesian Inference over Programs (pg. 473)  
19.1. Background (pg. 473)  
19.2. The Hypothesis Space (pg. 475)  
19.3. Likelihoods for Program Models (pg. 484)  
19.4. Computability Concerns (pg. 486)  
19.5. Markov Chain Monte Carlo for Programs (pg. 487)  
19.6. Example Model Runs (pg. 489)  
19.7. Future Directions (pg. 493)  
19.8. Conclusion (pg. 498)  
20. Bayesian Models of Cognitive Development (pg. 499)  
20.1. Causal Inference and Intuitive Theories (pg. 500)  
20.2. The Sampling Hypothesis (pg. 504)  
20.3. Core Knowledge (pg. 509)  
20.4. Future Directions (pg. 513)  
20.5. Conclusion (pg. 514)  
21. The Limits of Inference and Algorithmic Probability (pg. 517)  
21.1. A Universal Recipe for Priors (pg. 518)  
21.2. From Programs to Priors (pg. 520)  
21.3. Kolmogorov Complexity and the Universal Prior (pg. 522)  
21.4. Bayes and Simplicity (pg. 524)  
21.5. Applying a Code-Minimizing Perspective in Cognition (pg. 527)
21.6. Future Directions (pg. 539)
21.7. Conclusion (pg. 541)
22. A Bayesian Conversation (pg. 543)  
Conclusion (pg. 557)  
Acknowledgments (pg. 561)  
References (pg. 565)  
Index (pg. 615) 