The OpenMP Common Core

Making OpenMP Simple Again

by Mattson, Koniges, He

ISBN: 9780262538862 | Copyright 2019


How to become a parallel programmer by learning the twenty-one essential components of OpenMP.

This book guides readers through the most essential elements of OpenMP—the twenty-one components that most OpenMP programmers use most of the time, known collectively as the “OpenMP Common Core.” Once they have mastered these components, readers with no prior experience writing parallel code will be effective parallel programmers, ready to take on more complex aspects of OpenMP. The authors, drawing on twenty years of experience in teaching OpenMP, introduce material in discrete chunks ordered to support effective learning. OpenMP was created in 1997 to make it as simple as possible for applications programmers to write parallel code; since then, it has grown into a huge and complex system. The OpenMP Common Core goes back to basics, capturing the inherent simplicity of OpenMP.
After introducing the fundamental concepts of parallel computing and the history of OpenMP's development, the book covers topics including the core design pattern of parallel computing, the parallel and worksharing-loop constructs, the OpenMP data environment, and tasks. Two chapters on the OpenMP memory model are uniquely valuable for their pedagogic approach. The key for readers is to work through the material, use an OpenMP-enabled compiler, and write programs to experiment with each OpenMP directive or API routine as it is introduced. The book's website, updated continuously, offers a wide assortment of programs and exercises.
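The pi program mentioned in Chapter 5 (Section 5.6) gives a feel for the kind of exercise the book builds toward. The following is a minimal sketch, written for this description rather than taken from the book or its website, of that classic numerical-integration program: it approximates pi as the integral of 4/(1+x^2) over [0,1], using the combined parallel worksharing-loop construct and a reduction clause, two of the Common Core components; the step count is an arbitrary illustrative choice.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const long num_steps = 100000000;           /* illustrative step count */
        const double step = 1.0 / (double)num_steps;
        double sum = 0.0;

        /* Combined parallel worksharing-loop construct with a reduction,
           both part of the Common Core covered in Chapter 5. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < num_steps; i++) {
            double x = ((double)i + 0.5) * step;    /* midpoint of each slice */
            sum += 4.0 / (1.0 + x * x);
        }

        printf("pi ~= %.12f\n", step * sum);
        return 0;
    }

Any OpenMP-enabled compiler will build it (for example, gcc -fopenmp pi.c), which is all the setup the book's exercises require.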

Contents (pg. vii)
Series Foreword (pg. xiii)
Foreword (pg. xv)
Preface (pg. xvii)
I Setting the Stage (pg. 1)
1 Parallel Computing (pg. 5)
1.1 Fundamental Concepts of Parallel Computing (pg. 5)
1.2 The Rise of Concurrency (pg. 7)
1.3 Parallel Hardware (pg. 9)
1.4 Parallel Software for Multiprocessor Computers (pg. 16)
2 The Language of Performance (pg. 21)
2.1 The Basics: FLOPS, Speedup, and Parallel Efficiency (pg. 21)
2.2 Amdahl's Law (pg. 23)
2.3 Parallel Overhead (pg. 25)
2.4 Strong Scaling vs. Weak Scaling (pg. 27)
2.5 Load Balancing (pg. 29)
2.6 Understanding Hardware with the Roofline Model (pg. 31)
3 What is OpenMP? (pg. 35)
3.1 OpenMP History (pg. 35)
3.2 The Common Core (pg. 38)
3.3 Major Components of OpenMP (pg. 38)
II The OpenMP Common Core (pg. 43)
4 Threads and the OpenMP Programming Model (pg. 47)
4.1 Overview of OpenMP (pg. 47)
4.2 The Structure of OpenMP Programs (pg. 47)
4.3 Threads and the Fork-Join Pattern (pg. 50)
4.4 Working with Threads (pg. 56)
4.5 Closing Comments (pg. 72)
5 Parallel Loops (pg. 75)
5.1 Worksharing-Loop Construct (pg. 76)
5.2 Combined Parallel Worksharing-Loop Construct (pg. 79)
5.3 Reductions (pg. 79)
5.4 Loop Schedules (pg. 83)
5.5 Implicit Barriers and the Nowait Clause (pg. 90)
5.6 Pi Program with Parallel Loop Worksharing (pg. 92)
5.7 A Loop-Level Parallelism Strategy (pg. 94)
5.8 Closing Comments (pg. 96)
6 OpenMP Data Environment (pg. 99)
6.1 Default Storage Attributes (pg. 100)
6.2 Modifying Storage Attributes (pg. 102)
6.3 Data Environment Examples (pg. 109)
6.4 Arrays and Pointers (pg. 116)
6.5 Closing Comments (pg. 119)
7 Tasks in OpenMP (pg. 121)
7.1 The Need for Tasks (pg. 121)
7.2 Explicit Tasks (pg. 125)
7.3 Our First Example: Schrödinger's Program (pg. 125)
7.4 The Single Construct (pg. 128)
7.5 Working with Tasks (pg. 130)
7.6 Task Data Environment (pg. 132)
7.7 Fundamental Design Patterns with Tasks (pg. 135)
7.8 Closing Comments (pg. 143)
8 OpenMP Memory Model (pg. 145)
8.1 Memory Hierarchies Revisited (pg. 146)
8.2 The OpenMP Common Core Memory Model (pg. 149)
8.3 Working with Shared Memory (pg. 152)
8.4 Closing Comments (pg. 156)
9 Common Core Recap (pg. 159)
9.1 Managing Threads (pg. 160)
9.2 Worksharing Constructs (pg. 161)
9.3 Parallel Worksharing-Loop Combined Construct (pg. 162)
9.4 OpenMP Tasks (pg. 163)
9.5 Synchronization and Memory Consistency Models (pg. 164)
9.6 Data Environment Clauses (pg. 166)
9.7 The Reduction Clause (pg. 167)
9.8 Environment Variables and Runtime Library Routines (pg. 168)
III Beyond the Common Core (pg. 171)
10 Multithreading beyond the Common Core (pg. 175)
10.1 Additional Clauses for OpenMP Common Core Constructs (pg. 175)
10.2 Multithreading Functionality Missing from the Common Core (pg. 190)
10.3 Closing Comments (pg. 201)
11 Synchronization and the OpenMP Memory Model (pg. 203)
11.1 Memory Consistency Models (pg. 204)
11.2 Pairwise Synchronization (pg. 210)
11.3 Locks and How to Use Them (pg. 217)
11.4 The C++ Memory Model and OpenMP (pg. 220)
11.5 Closing Comments (pg. 224)
12 Beyond OpenMP Common Core Hardware (pg. 225)
12.1 Nonuniform Memory Access (NUMA) Systems (pg. 226)
12.2 SIMD (pg. 247)
12.3 Device Constructs (pg. 256)
12.4 Closing Comments (pg. 262)
13 Your Continuing Education in OpenMP (pg. 265)
13.1 Programmer Resources from the ARB (pg. 265)
13.2 How to Read the OpenMP Specification (pg. 267)
13.3 The Structure of the OpenMP Specification (pg. 272)
13.4 Closing Comments (pg. 275)
Glossary (pg. 277)
References (pg. 289)
Subject Index (pg. 291)
Timothy G. Mattson

Timothy G. Mattson is Senior Principal Engineer at Intel Corporation.

Alice E. Koniges

Alice E. Koniges is a Computer Scientist and Research Affiliate at the University of Hawaii.

Yun (Helen) He

Yun (Helen) He is a High Performance Computing Consultant at the National Energy Research Supercomputing Center of Lawrence Berkeley National Laboratory.
