Nonlinear Programming

Athena Scientific, Sep 1, 2016 - 1100 pages

This book provides a comprehensive and accessible presentation of algorithms for solving continuous optimization problems. It relies on rigorous mathematical analysis, but also aims at an intuitive exposition that makes use of visualization where possible. It places particular emphasis on modern developments, and their widespread applications in fields such as large-scale resource allocation problems, signal processing, and machine learning.

The 3rd edition brings the book in closer harmony with the companion works Convex Optimization Theory (Athena Scientific, 2009), Convex Optimization Algorithms (Athena Scientific, 2015), Convex Analysis and Optimization (Athena Scientific, 2003), and Network Optimization (Athena Scientific, 1998).

These works are complementary in that they deal primarily with convex, possibly nondifferentiable, optimization problems and rely on convex analysis. By contrast, the nonlinear programming book focuses primarily on analytical and computational methods for possibly nonconvex differentiable problems. It relies primarily on calculus and variational analysis, yet it still contains a detailed presentation of duality theory and its uses for both convex and nonconvex problems.

This online edition contains detailed solutions to all the theoretical exercises in the book.


Among its special features, the book:


Provides extensive coverage of iterative optimization methods within a unifying framework

Covers duality theory in depth, from both a variational and a geometric point of view

Provides a detailed treatment of interior point methods for linear programming

Includes much new material on a number of topics, such as proximal algorithms, alternating direction methods of multipliers, and conic programming

Focuses on large-scale optimization topics of much current interest, such as first order methods, incremental methods, and distributed asynchronous computation, and their applications in machine learning, signal processing, neural network training, and big data applications

Includes a large number of examples and exercises

Was developed through extensive classroom use in first-year graduate courses

 

Contents

Unconstrained Optimization: Basic Methods ..... 1
Unconstrained Optimization: Additional Methods ..... 119
Optimization over a Convex Set ..... 235
Lagrange Multiplier Theory ..... 343
Lagrange Multiplier Algorithms ..... 445
Duality and Convex Programming ..... 553
Dual Methods ..... 663
Mathematical Background ..... 745
Convex Analysis ..... 783
Line Search Methods ..... 809
Implementation of Newton's Method ..... 815
References ..... 821
Index ..... 857


About the author (2016)

Dimitri P. Bertsekas' undergraduate studies were in engineering at the National Technical University of Athens, Greece. He obtained his M.S. in electrical engineering from George Washington University, Washington, DC, in 1969, and his Ph.D. in system science from the Massachusetts Institute of Technology in 1971.

Dr. Bertsekas has held faculty positions with the Engineering-Economic Systems Department, Stanford University (1971-1974) and the Electrical Engineering Department of the University of Illinois, Urbana (1974-1979). From 1979 to 2019 he was with the Electrical Engineering and Computer Science Department of the Massachusetts Institute of Technology (M.I.T.), where he served as McAfee Professor of Engineering. In 2019, he was appointed Fulton Professor of Computational Decision Making and a full-time faculty member in the Department of Computer, Information, and Decision Systems Engineering at Arizona State University (ASU), Tempe, while maintaining a research position at MIT. His research spans several fields, including optimization, control, large-scale computation, and data communication networks, and is closely tied to his teaching and book-authoring activities. He has written numerous research papers, as well as eighteen books and research monographs, several of which are used as textbooks in MIT and ASU classes. Most recently, Dr. Bertsekas has been focusing on reinforcement learning; he authored a textbook on the subject in 2019 and a research monograph on its distributed and multiagent implementation aspects in 2020.

Professor Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book "Neuro-Dynamic Programming", the 2000 Greek National Award for Operations Research, the 2001 ACC John R. Ragazzini Education Award, the 2009 INFORMS Expository Writing Award, the 2014 ACC Richard E. Bellman Control Heritage Award for "contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control," the 2014 Khachiyan Prize for Life-Time Accomplishments in Optimization, the SIAM/MOS 2015 George B. Dantzig Prize, and the 2022 IEEE Control Systems Award. In 2018, he was awarded, jointly with his coauthor John Tsitsiklis, the INFORMS John von Neumann Theory Prize, for the contributions of the research monographs "Parallel and Distributed Computation" and "Neuro-Dynamic Programming". In 2001, he was elected to the United States National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks."

Dr. Bertsekas' recent books are "Introduction to Probability: 2nd Edition" (2008), "Convex Optimization Theory" (2009), "Dynamic Programming and Optimal Control," Vol. I (2017) and Vol. II (2012), "Abstract Dynamic Programming" (2018), "Convex Optimization Algorithms" (2015), "Reinforcement Learning and Optimal Control" (2019), and "Rollout, Policy Iteration, and Distributed Reinforcement Learning" (2020), all published by Athena Scientific.
