Introduction to Online Convex Optimization, 2e

by Elad Hazan

ISBN: 9780262370134 | Copyright 2022

In many practical applications, the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and apply classical algorithmic theory or mathematical optimization. Introduction to Online Convex Optimization presents a robust machine learning approach that combines elements of mathematical optimization, game theory, and learning theory: an optimization method that learns from experience as more aspects of the problem are observed. This view of optimization as a process has led to spectacular successes in modeling, and in systems that have become part of our daily lives.
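
To make the setting concrete: in online convex optimization, a learner repeatedly commits to a decision, observes a convex loss chosen only afterward, and is judged by regret, the gap between its cumulative loss and that of the best fixed decision in hindsight. The sketch below illustrates this loop with online gradient descent, the subject of chapter 3; the quadratic losses, unit-ball decision set, and step-size schedule are illustrative assumptions, not taken from the book.

    import numpy as np

    # Minimal OCO loop: online gradient descent with projection onto a ball.
    # Losses, decision set, and step sizes are illustrative assumptions.

    def project_to_ball(x, radius=1.0):
        # Euclidean projection onto the ball of the given radius.
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    rng = np.random.default_rng(0)
    d, T = 5, 1000
    x = np.zeros(d)                    # learner's first decision
    targets = rng.normal(size=(T, d))  # environment's per-round choices
    online_loss = 0.0

    for t in range(1, T + 1):
        z = targets[t - 1]
        # Convex loss revealed only after committing to x: f_t(x) = ||x - z||^2 / 2
        online_loss += 0.5 * np.sum((x - z) ** 2)
        grad = x - z                   # gradient of f_t at the played point
        # Standard O(1/sqrt(t)) step size, then project back into the decision set
        x = project_to_ball(x - grad / np.sqrt(t))

    # Regret: online loss minus the loss of the best fixed decision in hindsight
    # (for these quadratic losses, the projection of the mean of the targets).
    best = project_to_ball(targets.mean(axis=0))
    best_loss = 0.5 * np.sum((targets - best) ** 2)
    print(f"regret after {T} rounds: {online_loss - best_loss:.2f}")

Under standard assumptions (bounded gradients and a bounded decision set), online gradient descent guarantees regret growing only as the square root of the number of rounds, the rate analyzed in chapter 3.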

Based on the “Theoretical Machine Learning” course taught by the author at Princeton University, the second edition of this widely used graduate-level text features thoroughly updated material throughout; new chapters on boosting, adaptive regret, and approachability; expanded exposition on optimization; examples of applications offered throughout, including prediction from expert advice, portfolio selection, matrix completion and recommendation systems, and SVM training; and exercises that guide students in completing parts of proofs.
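
Among the applications listed above, prediction from expert advice is the book's "gentle start" (section 1.3). The following sketch of a multiplicative-weights-style learner for that problem is illustrative only; the random loss sequence and the learning rate are assumptions, not taken from the text.

    import numpy as np

    # Sketch of a multiplicative-weights learner for prediction from expert
    # advice. Losses and learning rate are illustrative assumptions.

    rng = np.random.default_rng(1)
    n, T = 10, 500
    eps = np.sqrt(np.log(n) / T)       # standard learning rate for T rounds
    weights = np.ones(n)
    learner_loss, expert_losses = 0.0, np.zeros(n)

    for _ in range(T):
        p = weights / weights.sum()    # current distribution over experts
        losses = rng.uniform(size=n)   # adversary reveals losses in [0, 1]
        learner_loss += p @ losses     # expected loss of following p
        expert_losses += losses
        weights *= 1.0 - eps * losses  # penalize experts in proportion to loss

    print(f"learner: {learner_loss:.1f}  best expert: {expert_losses.min():.1f}")

With losses in [0, 1], this style of update guarantees regret on the order of the square root of T times log n against the best single expert in hindsight.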

Contents (pg. vii)
Preface (pg. xi)
Acknowledgments (pg. xv)
List of Figures (pg. xvii)
List of Symbols (pg. xix)
1. Introduction (pg. 1)
1.1 The Online Convex Optimization Setting (pg. 1)
1.2 Examples of Problems That Can Be Modeled via Online Convex Optimization (pg. 3)
1.3 A Gentle Start: Learning from Expert Advice (pg. 7)
1.4 Bibliographic Remarks (pg. 12)
1.5 Exercises (pg. 13)
2. Basic Concepts in Convex Optimization (pg. 15)
2.1 Basic Definitions and Setup (pg. 15)
2.2 Gradient Descent (pg. 19)
2.3 Constrained Gradient/Subgradient Descent (pg. 25)
2.4 Reductions to Nonsmooth and Nonstrongly Convex Functions (pg. 27)
2.5 Example: Support Vector Machine Training (pg. 31)
2.6 Bibliographic Remarks (pg. 33)
2.7 Exercises (pg. 34)
3. First-Order Algorithms for Online Convex Optimization (pg. 37)
3.1 Online Gradient Descent (pg. 38)
3.2 Lower Bounds (pg. 40)
3.3 Logarithmic Regret (pg. 41)
3.4 Application: Stochastic Gradient Descent (pg. 43)
3.5 Bibliographic Remarks (pg. 46)
3.6 Exercises (pg. 46)
4. Second-Order Methods (pg. 49)
4.1 Motivation: Universal Portfolio Selection (pg. 49)
4.2 Exp-Concave Functions (pg. 52)
4.3 Exponentially Weighted Online Convex Optimization (pg. 53)
4.4 The Online Newton Step Algorithm (pg. 55)
4.5 Bibliographic Remarks (pg. 60)
4.6 Exercises (pg. 61)
5. Regularization (pg. 63)
5.1 Regularization Functions (pg. 64)
5.2 The RFTL Algorithm and Its Analysis (pg. 65)
5.3 Online Mirror Descent (pg. 69)
5.4 Application and Special Cases (pg. 72)
5.5 Randomized Regularization (pg. 74)
5.6 Adaptive Gradient Descent (pg. 81)
5.7 Bibliographic Remarks (pg. 86)
5.8 Exercises (pg. 87)
6. Bandit Convex Optimization (pg. 89)
6.1 The Bandit Convex Optimization Setting (pg. 89)
6.2 The Multiarmed Bandit (MAB) Problem (pg. 90)
6.3 A Reduction from Limited Information to Full Information (pg. 94)
6.4 Online Gradient Descent without a Gradient (pg. 99)
6.5 Optimal Regret Algorithms for Bandit Linear Optimization (pg. 101)
6.6 Bibliographic Remarks (pg. 105)
6.7 Exercises (pg. 106)
7. Projection-Free Algorithms (pg. 107)
7.1 Review: Relevant Concepts from Linear Algebra (pg. 107)
7.2 Motivation: Recommender Systems (pg. 108)
7.3 The Conditional Gradient Method (pg. 110)
7.4 Projections versus Linear Optimization (pg. 114)
7.5 The Online Conditional Gradient Algorithm (pg. 116)
7.6 Bibliographic Remarks (pg. 119)
7.7 Exercises (pg. 120)
8. Games, Duality, and Regret (pg. 123)
8.1 Linear Programming and Duality (pg. 124)
8.2 Zero-Sum Games and Equilibria (pg. 125)
8.3 Proof of von Neumann Theorem (pg. 128)
8.4 Approximating Linear Programs (pg. 130)
8.5 Bibliographic Remarks (pg. 131)
8.6 Exercises (pg. 131)
9. Learning Theory, Generalization, and Online Convex Optimization (pg. 133)
9.1 Statistical Learning Theory (pg. 133)
9.2 Agnostic Learning Using Online Convex Optimization (pg. 138)
9.3 Learning and Compression (pg. 143)
9.4 Bibliographic Remarks (pg. 145)
9.5 Exercises (pg. 145)
10. Learning in Changing Environments (pg. 147)
10.1 A Simple Start: Dynamic Regret (pg. 148)
10.2 The Notion of Adaptive Regret (pg. 149)
10.3 Tracking the Best Expert (pg. 151)
10.4 Efficient Adaptive Regret for Online Convex Optimization (pg. 153)
10.5 Computationally Efficient Methods (pg. 155)
10.6 Bibliographic Remarks (pg. 159)
10.7 Exercises (pg. 160)
11. Boosting and Regret (pg. 163)
11.1 The Problem of Boosting (pg. 164)
11.2 Boosting by Online Convex Optimization (pg. 165)
11.3 Bibliographic Remarks (pg. 169)
11.4 Exercises (pg. 170)
12. Online Boosting (pg. 171)
12.1 Motivation: Learning from a Huge Set of Experts (pg. 171)
12.2 The Contextual Learning Model (pg. 173)
12.3 The Extension Operator (pg. 174)
12.4 The Online Boosting Method (pg. 175)
12.5 Bibliographic Remarks (pg. 179)
12.6 Exercises (pg. 180)
13. Blackwell Approachability and Online Convex Optimization (pg. 181)
13.1 Vector-Valued Games and Approachability (pg. 181)
13.2 From OCO to Approachability (pg. 183)
13.3 From Approachability to OCO (pg. 186)
13.4 Bibliographic Remarks (pg. 189)
13.5 Exercises (pg. 190)
Notes (pg. 191)
Chapter 1 (pg. 191)
Chapter 2 (pg. 191)
Chapter 5 (pg. 191)
Chapter 8 (pg. 192)
Chapter 10 (pg. 192)
Chapter 12 (pg. 192)
References (pg. 193)
Index (pg. 207)

Elad Hazan

Elad Hazan is Professor of Computer Science at Princeton University and cofounder and director of Google AI Princeton.
