BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Marc Teboulle (Tel Aviv University)
DTSTART:20200420T130000Z
DTEND:20200420T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 1/">Hidden Convexity in Nonconvex Quadratic Optimization</a>\nby Marc Tebo
 ulle (Tel Aviv University) as part of One World Optimization seminar\n\n\n
 Abstract\nThe address and password of the Zoom room of the seminar are se
 nt by e-mail to the mailing list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexandre d'Aspremont (École Normale Supérieure Paris (ENS))
DTSTART:20200615T130000Z
DTEND:20200615T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 2/">Naive feature selection: Sparsity in naive Bayes</a>\nby Alexandre d'A
 spremont (École Normale Supérieure Paris (ENS)) as part of One World Opt
 imization seminar\n\n\nAbstract\nDue to its linear complexity\, naive Baye
 s classification remains an attractive supervised learning method\, especi
 ally in very large-scale settings. We propose a sparse version of naive Ba
 yes\, which can be used for feature selection. This leads to a combinatori
 al maximum-likelihood problem\, for which we provide an exact solution in 
 the case of binary data\, or a bound in the multinomial case. We prove tha
 t our bound becomes tight as the marginal contribution of additional featu
 res decreases. Both binary and multinomial sparse models are solvable in t
 ime almost linear in problem size\, representing a very small extra relati
 ve cost compared to the classical naive Bayes. Numerical experiments on te
 xt data show that the naive Bayes feature selection method is as statistic
 ally effective as state-of-the-art feature selection methods such as recur
 sive feature elimination\, l1-penalized logistic regression and LASSO\, wh
 ile being orders of magnitude faster. For a large data set\, having more t
 han with 1.6 million training points and about 12 million features\, and w
 ith a non-optimized CPU implementation\, our sparse naive Bayes model can 
 be trained in less than 15 seconds.\n\nThe talk is based on joint work wit
 h Armin Askari and Laurent El Ghaoui that can be found at https://arxiv.or
 g/abs/1905.09884\n\nThe address and password of the Zoom room of the semin
 ar are sent by e-mail to the mailing list of the seminar one day before ea
 ch talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Peter Richtárik (KAUST)
DTSTART:20200427T130000Z
DTEND:20200427T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 3/">On Second Order Methods and Randomness</a>\nby Peter Richtárik (KAUST
 ) as part of One World Optimization seminar\n\n\nAbstract\nThe address an
 d password of the Zoom room of the seminar are sent by e-mail to the maili
 ng list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Russell Luke (University of Göttingen)
DTSTART:20200504T130000Z
DTEND:20200504T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 4/">Iterated self-mappings in nonlinear spaces: the case of random functi
 on iterations and inconsistent stochastic feasibility</a>\nby Russell Luke
  (University of Göttingen) as part of One World Optimization seminar\n\n\
 nAbstract\nThe address and password of the Zoom room of the seminar are se
 nt by e-mail to the mailing list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wotao Yin (UCLA)
DTSTART:20200511T130000Z
DTEND:20200511T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 5/">Scaled relative graph</a>\nby Wotao Yin (UCLA) as part of One World Op
 timization seminar\n\n\nAbstract\nThe address and password of the Zoom roo
 m of the seminar are sent by e-mail to the mailing list of the seminar on
 e day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Francis Bach (INRIA)
DTSTART:20200518T130000Z
DTEND:20200518T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 6/">On the convergence of gradient descent for wide two-layer neural netwo
 rks</a>\nby Francis Bach (INRIA) as part of One World Optimization seminar
 \n\n\nAbstract\n[joint talk with the One World Seminar: Mathematical Metho
 ds for Arbitrary Data Sources]\n\nThe address and password of the Zoom roo
 m of the seminar are sent by e-mail to the mailing list of the seminar on
 e day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panayotis Mertikopoulos (CNRS / INRIA)
DTSTART:20200713T130000Z
DTEND:20200713T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 7/">Games\, dynamics and optimization</a>\nby Panayotis Mertikopoulos (CNR
 S / INRIA) as part of One World Optimization seminar\n\n\nAbstract\nThis t
 alk aims to survey the triple-point interface between optimization\, game 
 theory\, and dynamical systems. In the first part of the talk\, we will di
 scuss how the ordinary differential equation (ODE) method of stochastic ap
 proximation can be used to analyze the trajectories of stochastic first-or
 der algorithms in non-convex programs – both in terms of convergence t
 o the problem's critical set and the avoidance of non-minimizing crit
 ical manifolds. Subsequently\, we will examine the behavior of these algor
 ithms in a game-theoretic context involving \\emph{several} optimizing age
 nts\, each with their individual objective. In this multi-agent setting\, 
 the situation is considerably more involved: On the one hand\, if the game
  being played satisfies a monotonicity condition known as "diagonal strict
  convexity" (Rosen\, Econometrica\, 1965)\, the induced sequence of play c
 onverges to Nash equilibrium with probability $1$. On the other hand\, in 
 non-monotone games\, the sequence of play may converge with arbitrarily hi
 gh probability to spurious attractors that are in no way unilaterally stab
 le (or even stationary). "Traps" of this type can arise even in simple two
 -player zero-sum games with one-dimensional action sets and polynomial pay
 offs\, a fact which highlights the fundamental gap between min-min and min
 -max problems.\n\nWe will discuss both classical and recent results – bu
 t not the proofs thereof.\n\n[joint talk with the One World Mathematical G
 ame Theory Seminar]\n\nThe address and password of the Zoom room of the se
 minar are sent by e-mail to the mailing list of the seminar one day befor
 e each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrick L. Combettes (North Carolina State University)
DTSTART:20200525T130000Z
DTEND:20200525T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 8/">Back to single-resolvent Iterations\, with warping</a>\nby Patrick L. 
 Combettes (North Carolina State University) as part of One World Optimizat
 ion seminar\n\n\nAbstract\nThe scope of the classical proximal point algor
 ithm for finding a zero of a monotone operator may seem rather limited. Fo
 r this reason\, the field of operator splitting has moved away from single
 -resolvent iterations and significantly expanded in various directions. We
  introduce a generalization of the standard resolvent\, called warped reso
 lvent\, which is constructed with the help of an auxiliary operator. This 
 notion will be shown to be a central tool which not only underlies a broad
  range of existing algorithms\, but also serves as a platform to design ne
 w classes of splitting methods. The discussion will include Bregman-based 
 splitting in reflexive spaces\, primal-dual methods\, inertial methods\, s
 ystems of monotone inclusions\, and best approximation methods.\n\nBased o
 n preprints and ongoing work with M. N. Bui.\n\nThe address and password o
 f the Zoom room of the seminar are sent by e-mail to the mailing list of t
 he seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adrien Taylor (INRIA)
DTSTART:20200601T130000Z
DTEND:20200601T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 9/">Computer-aided worst-case analyses and design of first-order methods f
 or convex optimization</a>\nby Adrien Taylor (INRIA) as part of One World 
 Optimization seminar\n\n\nAbstract\nIn this presentation\, I want to provi
 de a high-level overview of recent approaches for analyzing and designing 
 first-order methods using symbolic computations and/or semidefinite progra
 mming. A particular emphasis will be given to the "performance estimation"
  approach and some of its variants\, which enjoy comfortable tightness gu
 arantees: the approach fails only when the target results are impossible t
 o prove. In particular\, it allows obtaining (tight) worst-case guarantees
  for fixed-step first-order methods involving a variety of oracles - that 
 includes explicit\, projected\, proximal\, conditional\, inexact\, or stoc
 hastic (sub)gradient steps - and a variety of convergence measures.\n\nThe
  presentation will be example-based\, as the main ingredients necessary fo
 r understanding the methodologies are already present in the analysis of b
 ase optimization schemes. To convince the audience\, and if time allows\, w
 e will provide other examples that include analyses of the Douglas-Rachfo
 rd splitting\, and of a variant of the celebrated conjugate gradient metho
 d in its most naive form.\n\nThe methodology is implemented within the pac
 kage "PESTO" (for "Performance EStimation TOolbox"\, available at https://
 github.com/AdrienTaylor/Performance-Estimation-Toolbox)\, which allows usi
 ng the framework without the SDP modelling steps.\n\nThis talk is based on
  joint works with great collaborators (who will be mentioned during the pr
 esentation).\n\nThe address and password of the Zoom room of the seminar a
 re sent by e-mail to the mailing list of the seminar one day before each t
 alk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Volkan Cevher (EPFL)
DTSTART:20200608T130000Z
DTEND:20200608T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 10/">Scalable semidefinite programming</a>\nby Volkan Cevher (EPFL) as par
 t of One World Optimization seminar\n\n\nAbstract\nThis talk first introdu
 ces new convex optimization methods based on linear minimization oracles t
 o obtain numerical solutions to semidefinite programs with a low-rank matr
 ix streaming model. This streaming model provides us with an opportunity t
 o integrate\nsketching as a new tool for developing storage-optimal conve
 x optimization methods that can solve semidefinite programs (SDP) efficie
 ntly within the space required to write down the problem and its solution
 .\n\nIn part
 icular\, for SDP formulations\, we obtain an approximate solution within a
 n $\\epsilon$-error region in the objective residual and distance to feasi
 ble set\, after a total of $\\texttt{Const}\\cdot \\epsilon^{-5/2}\\log(n/
 \\epsilon)$ matrix vector multiplications for the linear minimization orac
 le (approximate eigenvalue calculation)\, and an additional $\\mathcal{O}(
 \\max(n\,d)/\\epsilon^2)$ arithmetic operations for the remaining arithmet
 ics. $\\texttt{Const}$ is problem independent.\n\nWe then discuss a practi
 cal inexact augmented Lagrangian method for non-convex problems with nonli
 near constraints and contrast this approach to the convex one for solving 
 SDPs. We characterize the total computational complexity of the non-convex
  method subject to a verifiable geometric condition\, followed by numerica
 l demonstrations that include max-cut\, unsupervised clustering\, and qu
 adratic assignment problems. \n\nThe talk is based on joint work with seve
 ral collaborators\, including Alp Yurtsever\, Olivier Fercoq\, Joel A. Tro
 pp\, Madeleine Udell\, Fatih Sahin\, Armin Eftekhari\, and Ahmet Alacaoglu
 .\n\nThe address and password of the Zoom room of the seminar are sent by e
 -mail to the mailing list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Claudia Sagastizábal (University of Campinas)
DTSTART:20200622T130000Z
DTEND:20200622T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 11/">Revisiting Augmented Lagrangian Duals</a>\nby Claudia Sagastizábal (
 University of Campinas) as part of One World Optimization seminar\n\n\nAbs
 tract\nFor nonconvex optimization problems\, possibly having mixed-integer
  variables\, a convergent primal-dual solution algorithm is proposed. The 
 approach applies a proximal bundle method to a certain augmented Lagrangia
 n dual that arises in the context of the so-called generalized augmented L
 agrangians. We recast these Lagrangians into the framework of a classical 
 Lagrangian\, by means of a special reformulation of the original problem. 
 Thanks to this insight\, the methodology yields zero duality gap. Lagrangi
 an subproblems can be solved inexactly without hindering the primal-dual c
 onvergence properties of the algorithm. Primal convergence is ensured even
  when the dual solution set is empty. The interest of the new method is as
 sessed on several problems\, including unit commitment in energy optimizat
 ion. These problems are solved to optimality by solving separable Lagrangi
 an subproblems. \n\nThe talk is based on joint work with Marcelo Cordova a
 nd Welington de Oliveira.\n\nThe address and password of the Zoom room of
  the seminar are sent by e-mail to the mailing list of the seminar one day
  before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yurii Nesterov (University of Louvain)
DTSTART:20200629T130000Z
DTEND:20200629T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 12/">Superfast Second-Order Methods for Unconstrained Convex Optimization<
 /a>\nby Yurii Nesterov (University of Louvain) as part of One World Optimi
 zation seminar\n\n\nAbstract\nIn this talk\, we present new second-order m
 ethods with convergence rate $O( 1/k^4)$\, where $k$ is the iteration coun
 ter. This is faster than the existing lower bound for this type of scheme
 \, which is $O ( 1/k^{7/2} )$. Our progress can be explained by a finer sp
 ecification of the problem class. The main idea of this approach consists 
 in implementing an accelerated third-order scheme using a second-orde
 r oracle. At each iteration of our method\, we solve a nontrivial auxiliar
 y problem by a linearly convergent scheme based on the relative non-degene
 racy condition. During this process\, the Hessian of the objective functio
 n is computed once\, and the gradient is computed $O (\\ln {1 \\over \\eps
 ilon})$ times\, where $\\epsilon$ is the desired accuracy of the solution 
 for our problem.\n\nThe address and password of the Zoom room of the semin
 ar are sent by e-mail to the mailing list of the seminar one day before ea
 ch talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jérôme Bolte (Toulouse 1 University Capitole)
DTSTART:20200706T130000Z
DTEND:20200706T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 13/">A Variational Model for Automatic Differentiation with Applications t
 o Deep Learning</a>\nby Jérôme Bolte (Toulouse 1 University Capitole) as
  part of One World Optimization seminar\n\n\nAbstract\nAutomatic different
 iation is an automated implementation of differential calculus\; it play
 s a key computational role in several fields such as machine learning\, d
 esign optimization\, fluid dynamics\, physical modeling\, mechanics\, and
  finance. It is also efficient for nonsmooth problems despite the occurre
 nce of spurious behaviors. In that case\, one indeed observes the appeara
 nce of calculu
 s artifacts and artificial critical points that have no variational nature
 .  Our goal is to provide a simple mathematical model for this differentia
 tion process. Our motivation comes from deep learning which will also serv
 e as an illustrative model for our ideas and results.\nThe first easy\, bu
 t somehow unexpected fact\, is that there is no\n«subdifferentiation» op
 erator modeling nonsmooth nonconvex automatic differentiation. This fact m
 otivates the introduction of a family of multivalued mappings generalizing
  gradient-like behaviors  that we call conservative fields. We shall revie
 w their salient properties and show how they allow us to study rigorously 
 forward and backward automatic differentiation. We will also try to clarif
 y the spurious behavior of automatic differentiation and study the role of
  what we call «artificial critical points». We apply our findings to sho
 w that the training of feedforward neural networks through mini-batch stoc
 hastic «subgradient» methods comes with rigorous convergence guarantees.
 \nJoint work with E. Pauwels\n\nThe address and password of the Zoom room
  of the seminar are sent by e-mail to the mailing list of the seminar one
  day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xiaoming Yuan (University of Hong Kong)
DTSTART:20200720T130000Z
DTEND:20200720T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 14/">From Optimization to Optimal Control: An Algorithmic Design Perspecti
 ve</a>\nby Xiaoming Yuan (University of Hong Kong) as part of One World Op
 timization seminar\n\n\nAbstract\nOptimal control problems model the proce
 dures of controlling some\nphysical processes with certain objectives\; us
 ually they are modeled as\noptimization problems with PDE and other constr
 aints. It is generally\nnontrivial to find efficient numerical solvers for
  these problems\,\nespecially for time-dependent cases. Typical difficulti
 es include the\nextremely high dimensionality after discretization\, ill-c
 onditioned\nmatrices of the resulting systems of linear equations\, and po
 ssibly\ncomplicated coupling of PDEs with some other simple constraints. W
 e will\nshow how to extend some well-developed efficient operator splittin
 g\nalgorithms in the context of convex optimization problems to some ellip
 tic\nand parabolic optimal control problems. Particularly\, we will highli
 ght some\ncomputational techniques such as preconditioning to derive trust
 worthy\nnumerical schemes for various optimal control problems.\n\nThe add
 ress and password of the Zoom room of the seminar are sent by e-mail to th
 e mailing list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Coralia Carțiș (University of Oxford)
DTSTART:20200727T130000Z
DTEND:20200727T140000Z
DTSTAMP:20260404T110822Z
UID:OWOS/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 15/">Tensor Methods for Non-convex Optimization Problems</a>\nby Coralia C
 arțiș (University of Oxford) as part of One World Optimization seminar\n
 \n\nAbstract\nWe investigate the evaluation complexity of finding high-ord
 er critical points of non-convex smooth optimization problems when high-or
 der derivatives of the objective are available.\nAdaptive regularisation (
 and time permitting trust region) methods are presented that use high-degr
 ee Taylor local models in a simple framework. Their global rates of conver
 gence to\nstandard notions of first\, second and third order critical poin
 ts of the objective are presented\, and observed to be natural generalisat
 ions of the optimal bounds known for cubic regularisation.\nHowever\, goin
 g beyond third-order criticality is challenging\, requiring new notions of
  (approximate) high-order optimality. A strong\, stable notion of a high-o
 rder local minimizer is presented\,\nalong with associated regularisation 
 and trust-region variants that can find such points in a quantifiable way 
 from a complexity viewpoint. Extensions of these methods and results to co
 mposite\noptimization\, as well as to special structure functions (such as
  those satisfying the PL inequality) may also be discussed\, time permitti
 ng. This work is joint with Nick Gould (Rutherford Appleton\nLaboratory\, 
 UK) and Philippe Toint (University of Namur\, Belgium).\n\nThe address an
 d password of the Zoom room of the seminar are sent by e-mail to the maili
 ng list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Amir Beck (Tel-Aviv University)
DTSTART:20200907T133000Z
DTEND:20200907T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 16/">Dual Randomized Coordinate Descent Method for Solving a Class of Nonc
 onvex Problems</a>\nby Amir Beck (Tel-Aviv University) as part of One Worl
 d Optimization seminar\n\n\nAbstract\nWe consider a nonconvex optimization
  problem consisting of maximizing the difference of two convex functions. 
 We present a randomized method that requires low computational effort at e
 ach iteration. The described method is a randomized coordinate descent met
 hod employed on the so-called Toland-dual problem. We prove subsequence co
 nvergence to dual stationarity points\, a new notion that we introduce an
 d show to be tighter than standard criticality. An almost sure rate of co
 nvergence of an optimality measure of the dual sequence is proven. We dem
 onstrate the potential of our results on three Principal Component Analys
 is (PCA) models\, resulting in extremely simple algorithms.\n\nJoint work
  with Marc Teboulle.\n\nThe address and password of the Zoom room of the
  seminar are sent by e-mail to the mailing list of the seminar one day be
 fore each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jong-Shi Pang (University of Southern California)
DTSTART:20200914T133000Z
DTEND:20200914T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 17/">The Era of "Non"-Optimization Problems</a>\nby Jong-Shi Pang (Univers
 ity of Southern California) as part of One World Optimization seminar\n\n\
 nAbstract\nThis talk presents a systematic discussion of our comprehensive
  efforts in the study of modern “non”-optimization problems from the c
 ombined perspectives of motivation\, theory\, and algorithms.  The study i
 s documented in a forthcoming 700-page research monograph with the title: 
 “Modern Nonconvex Nondifferentiable Optimization”\, jointly authored b
 y the speaker and Dr. Ying Cui at the University of Minnesota. Beginning w
 ith 100 pages of prerequisite mathematics and optimization background\, t
 he monograph introduces the combined paradigm of structured learning and c
 omputational optimization\, with illustrations drawn from contemporary pro
 blems in statistical estimation\, operations research\, optimization\, and
  their diverse subfields. The goal of this monograph is multi-fold: \na)	p
 lace the foundational and algorithmic treatment of nonconvexity and nondif
 ferentiability on a rigorous footing\, focusing in particular on problems 
 where these two features are coupled\; \nb)	provide the basic concepts and
  powerful tools of nonsmooth analysis for this purpose\; \nc)	present a ho
 st of surrogation algorithms with convergence guarantees for computing sta
 tionary solutions of the appropriate kind\; \nd)	understand the roles of t
 hese computed solutions in the context of the source problems\, and \ne)	s
 et a forward path for advanced research and to reach out to extended probl
 ems such as nonconvex noncooperative games\, nonconvex stochastic programs
 \, and robustification of nonconvex problems.\nIn short\, our efforts aim 
 to put in action the monumental treatise of Rockafellar and Wets on Variat
 ional Analysis\, open it up for practical applications\, and cement its su
 stained contributions in the era of “non”-optimization problems.\n\nTh
 e address and password of the Zoom room of the seminar are sent by e-mail
  to the mailing list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adrian Lewis (ORIE Cornell)
DTSTART:20200921T133000Z
DTEND:20200921T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 18/">Smoothness in Nonsmooth Optimization</a>\nby Adrian Lewis (ORIE Corne
 ll) as part of One World Optimization seminar\n\n\nAbstract\nFast black-bo
 x nonsmooth optimization\, while theoretically out of reach in the worst c
 ase\, has long been an intriguing goal in practice.  Generic concrete nons
 mooth objectives are "partly" smooth:  their subdifferentials have locally
  smooth graphs with powerful constant-rank properties\, often associated w
 ith hidden structure in the objective.  One typical example is the proxima
 l mapping for the matrix numerical radius\, whose output is surprisingly o
 ften a "disk" matrix.  Motivated by this expectation of partial smoothness
 \, this talk describes a Newtonian black-box algorithm for general nonsmoo
 th optimization.  Local convergence is provably superlinear on a represent
 ative class of objectives\, and early numerical experience is promising mo
 re generally.\n\nJoint work with Dima Drusvyatskiy\, XY Han\, Alex Ioffe\,
  Jingwei Liang\, Michael Overton\, Tonghua Tian\, Calvin Wylie.\n\nThe add
 ress and password of the Zoom room of the seminar are sent by e-mail to th
 e mailing list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephen Wright (University of Wisconsin)
DTSTART:20200928T133000Z
DTEND:20200928T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 19/">Second-Order Methods for Nonconvex Optimization with Complexity Guara
 ntees</a>\nby Stephen Wright (University of Wisconsin) as part of One Worl
 d Optimization seminar\n\n\nAbstract\nWidely used algorithms for smooth no
 nconvex optimization problems - unconstrained\, bound-constrained\, and ge
 neral equality-constrained - can be modified slightly to ensure that appro
 ximate first- and\nsecond-order optimal points are found\, with complexity
  guarantees that depend on the desired accuracy. We discuss methods constr
 ucted from Newton's method\, conjugate gradients\, randomized Lanczos\, tr
 ust-region\nframeworks\, log-barrier\, and augmented Lagrangians. We deriv
 e upper bounds on various measures of complexity in terms of the tolerance
 s required. Our methods use Hessian information only in the form of Hessia
 n-vector products - an operation that does not require the Hessian itself 
 to be evaluated or stored explicitly.\n\nThe address and password of the Z
 oom room of the seminar are sent by e-mail to the mailing list of the semi
 nar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aris Daniilidis (University of Chile / Autonomous University of Ba
 rcelona)
DTSTART:20201005T133000Z
DTEND:20201005T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 20/">Asymptotic Study of the Sweeping Process</a>\nby Aris Daniilidis (Uni
 versity of Chile / Autonomous University of Barcelona) as part of One Worl
 d Optimization seminar\n\n\nAbstract\nLet $r\\mapsto S(r)$ be a set-valued
  mapping with nonempty values and a closed semi-algebraic graph (or more g
 enerally\, with a graph which is definable in some o-minimal structure). W
 e shall be interested in the asymptotic behavior of the orbits of the so-c
 alled sweeping process $$\\dot x(r) \\in - N_{S(r)}(x(r))\, \\quad r>0.\\hspace{
 2cm} (SPO)$$ \n\nKurdyka (Ann. Inst. Fourier\, 1998)\, in the framework of
  a gradient dynamics of a $C^1$-smooth definable function $f$\, generalize
 d the Lojasiewicz inequality and obtained a control of the asymptotic beha
 vior of the gradient orbits in terms of a desingularizing function $\\Psi$
  depending on $f$. We shall show that an analogous technique to the one us
 ed by Kurdyka can be adapted to our setting for the sweeping dynamics. Ou
 r method recovers the aforementioned result of Kurdyka by simply consi
 dering the sweeping process defined by the sublevel sets of the function $
 f$: indeed\, in this case setting $S(r) = [f \\leq r]$\, we deduce that th
 e orbits of (SPO) are in fact gradient orbits for $f$\, and\nthe nowadays 
 called (smooth) Kurdyka-Lojasiewicz inequality is recovered.\n\nThis talk
  is based on work in collaboration with D. Drusvyatskiy (Seattle).\n\nThe
  address and password of the Zoom room of the seminar are sent by e-mail t
 o the mailing list of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:R. Tyrrell Rockafellar (University of Washington)
DTSTART:20201012T133000Z
DTEND:20201012T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 21/">Augmented Lagrangians and Hidden Convexity in Sufficient Conditions f
 or Local Optimality</a>\nby R. Tyrrell Rockafellar (University of Washingt
 on) as part of One World Optimization seminar\n\n\nAbstract\nSecond-order 
 sufficient conditions for local optimality have long been central to desig
 ning solution algorithms and justifying claims about their convergence.  I
 n this talk a far-reaching extension of such conditions\, called variation
 al sufficiency\, will be explained in territory beyond just nonlinear prog
 ramming.  Variational sufficiency is already known to support multiplier m
 ethods that are able\, even without convexity\, to achieve problem decompo
 sition\, but further insight has been needed to see how it coordinates wit
 h other sufficient conditions.  In fact it characterizes local optimality 
 in terms of having a convex-concave-type local saddle point of an augmente
 d Lagrangian function.  \n\nA stronger version of variational sufficiency 
 corresponds in turn to local strong convexity in the primal argument of th
 at function and a property of augmented tilt stability which offers crucia
 l aid to Lagrange multiplier methods at a fundamental level of analysis.  
 Moreover\, that strong version can be translated through second-order vari
 ational analysis into statements which may readily be compared to existing
  sufficient conditions in nonlinear programming\, second-order cone progra
 mming\, and other problem formulations that are able to incorporate nonsmo
 oth objectives and regularization terms.\n\nThe address and password of th
 e zoom room of the seminar are sent by e-mail on the mailinglist of the se
 minar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boris Polyak (Institute for Control Science Moscow)
DTSTART:20201019T133000Z
DTEND:20201019T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 22/">Static Linear Feedback for Control as Optimization Problem</a>\nby Bo
 ris Polyak (Institute for Control Science Moscow) as part of One World Opt
 imization seminar\n\n\nAbstract\nIf we fix the control as static linear fe
 edback\, an optimal control problem reduces to an optimization problem wit
 h respect to the feedback gain matrix. We consider properties of the arisi
 ng performance functions (smoothness\, convexity\, connectedness of sublev
 el sets) and provide gradient-like methods for optimization. The following
  examples are addressed: linear quadratic regulator\; static output feedba
 ck\; design of low-order controllers. Possible extensions are discussed.\n
 \nTalk in collaboration with I. Fatkhullin\, P. Scherbakov\, M. Khlebnikov
 \n\nThe address and password of the zoom room of the seminar are sent by e
 -mail on the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dmitriy Drusvyatskiy (University of Washington)
DTSTART:20201123T143000Z
DTEND:20201123T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 23/">Stochastic Optimization with Decision-dependent Distributions</a>\nby
  Dmitriy Drusvyatskiy (University of Washington) as part of One World Opti
 mization seminar\n\n\nAbstract\nStochastic optimization problems often inv
 olve data distributions that change in reaction to the decision variables.
  For example\, deployment of a classifier by a learning system\, when made
  public\, often causes the population to adapt their attributes in order t
 o increase the likelihood of being favorably labeled---a process called ``
 gaming''. Even when the population is agnostic to the classifier\, the dec
 isions made by the learning system (e.g. loan approval) may inadvertently 
 alter the profile of the population (e.g. credit score). Recent works have
  identified an intriguing solution concept for such problems as an ``equil
 ibrium'' of a certain game. Continuing this line of work\, we show that ty
 pical stochastic algorithms---originally designed for static problems---ca
 n be applied directly for finding such equilibria with little loss in effi
 ciency. The reason is simple to explain: the main consequence of the distr
 ibutional shift is that it corrupts the algorithms with a bias that decays
  linearly with the distance to the solution. Using this perspective\, we o
 btain sharp convergence guarantees for popular algorithms\, such as stocha
 stic gradient\, clipped gradient\, proximal point\, and dual averaging met
 hods\, along with their accelerated and proximal variants.\n\nJoint work w
 ith Lin Xiao (Facebook AI Research)\n\nThe address and
  password of the zoom room of the seminar are sent by e-mail on the mailin
 glist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Friedlander (University of British Columbia)
DTSTART:20201026T143000Z
DTEND:20201026T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 24/">Polar deconvolution of mixed signals</a>\nby Michael P. Friedlander (
 University of British Columbia) as part of One World Optimization seminar\
 n\n\nAbstract\nThe signal demixing problem seeks to separate the superposi
 tion of multiple signals into its constituent components. We model the sup
 erposition process as the polar convolution of atomic sets\, which allows 
 us to use the duality of convex cones to develop an efficient two-stage al
 gorithm with sublinear iteration complexity and linear storage. If the sig
 nal measurements are random\, the polar deconvolution approach stably reco
 vers low-complexity and mutually-incoherent signals with high probability 
 and with optimal sample complexity. Numerical experiments on both real and
  synthetic data confirm the theory and efficiency of the proposed approach
 .\n\nJoint work with Zhenan Fan\, Halyun Jeong\, and Babhru Joshi at the U
 niversity of British Columbia.\n\nThe address and password of the zoom roo
 m of the seminar are sent by e-mail on the mailinglist of the seminar one 
 day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Defeng Sun (Hong Kong Polytechnic University)
DTSTART:20201130T143000Z
DTEND:20201130T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/25
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 25/">Several Observations about Using the ALM + Semismooth Newton Method f
 or Solving Large Scale Semidefinite Programming and Beyond</a>\nby Defeng 
 Sun (Hong Kong Polytechnic University) as part of One World Optimization s
 eminar\n\n\nAbstract\nSemidefinite Programming (SDP) has been one of the m
 ajor research fields in optimization during the last three decades and int
 erior point methods (IPMs) are perhaps the most robust and efficient algor
 ithms for solving small to medium sized SDP problems. For large scale SDPs
 \, IPMs are no longer viable due to their inherent high memory requirement
 s and computational costs at each iteration.  In this talk\, we will summa
 rize what we observed during the last 15 years or so in combining the augm
 ented Lagrangian algorithm with the semismooth Newton method for solving t
 he dual of large-scale SDP and convex quadratic SDP. We will emphasiz
 e the importance of the constraint non-degeneracy in numerical implementat
 ions and the quadratic growth condition in convergence rate analysis. Easy
 -to-implement stopping criteria for the augmented Lagrangian subproblems w
 ill also be introduced. All these features are implemented in the publicly
  available software packages SDPNAL/SDPNAL+ and QSDPNAL.\n\nThe address 
 and password of the zoom room of the seminar are sent by e-mail on the mai
 linglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guoyin Li (University of New South Wales)
DTSTART:20201207T143000Z
DTEND:20201207T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 26/">Estimating the Exponents of Kurdyka-Łojasiewicz (KL) Inequality and 
 Error Bounds for Optimization Models</a>\nby Guoyin Li (University of New 
 South Wales) as part of One World Optimization seminar\n\n\nAbstract\nThe 
 Kurdyka-Łojasiewicz (KL) inequality and error bounds are two fundamental 
 tools for establishing convergence of many numerical methods. In particula
 r\, the exponents of the KL inequality and error bounds play an important 
 role in estimating the convergence rate of many contemporary first-order m
 ethods. Nevertheless\, these exponents are extremely hard to estimate in g
 eneral\, particularly in the case where the associated mappings are not po
 lyhedral. In this talk\, we will outline some strategies in estimating or 
 identifying these exponents by exploiting the so-called inf-projection ope
 ration and specific structure such as polynomial structure\, semi-definite
  cone program representability and $C^2$-cone reducible structures.\n\nThe
  address and password of the zoom room of the seminar are sent by e-mail o
 n the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ya-xiang Yuan (Chinese Academy of Sciences)
DTSTART:20201214T143000Z
DTEND:20201214T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/27
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 27/">Orthogonality-free Approaches for Optimization Problems on Stiefel Ma
 nifold</a>\nby Ya-xiang Yuan (Chinese Academy of Sciences) as part of One 
 World Optimization seminar\n\n\nAbstract\nIn this talk\, I will discuss so
 me orthogonality-free approaches for optimization problems on the Stiefel 
 manifold. The Stiefel manifold consists of matrices with orthonormal colum
 ns. Optimization problems with orthogonality constraints appear in many im
 portant applications such as leading eigenvalue computation\, discretized 
 Kohn-Sham total energy minimization\, and sparse principal component analy
 sis. We present new algorithms for solving optimization problems on the St
 iefel manifold. These algorithms are based on penalty functions\, so there
  is no need to carry out orthogonalization calculations in each iteration.
  The major computation cost of orthogonality-free algorithms is in the for
 m of mat
 rix-matrix multiplication\, which has the advantage of being parallelized 
 easily. Problems with both smooth and nonsmooth objective functions are co
 nsidered. Theoretical properties of our algorithms are discussed and numer
 ical experiments are also presented.\n\nThe address and password of the zo
 om room of the seminar are sent by e-mail on the mailinglist of the semina
 r one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Uday V. Shanbhag (Pennsylvania State University)
DTSTART:20201102T143000Z
DTEND:20201102T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 28/">Inexact and Distributed Best-Response Schemes for Stochastic Nash Equ
 ilibrium Problems</a>\nby Uday V. Shanbhag (Pennsylvania State University)
  as part of One World Optimization seminar\n\n\nAbstract\nWe consider the 
 class of Nash equilibrium problems where players solve convex optimization
  problems with expectation-valued objectives. In the first part of the pre
 sentation\, we discuss a class of inexact best-response schemes in which a
 n inexact best-response step is computed via stochastic approximation. We 
 consider synchronous\, asynchronous\, and randomized schemes and provide r
 ate and complexity guarantees in each instance. In the second part of the 
 presentation\, we consider distributed best-response schemes for aggregati
 ve games. In such settings\, an (inexact) best-response step is overlaid w
 ith a consensus step. In addition to the oracle and iteration complexity\,
  we examine the communication complexity of such schemes for computing sui
 tably defined ϵ-stochastic Nash equilibria.\n\nThe first part of this wor
 k is joint with Jinlong Lei\, Jong-Shi Pang\, and Suvrajeet Sen\, while th
 e second part is joint with Jinlong Lei.\n\nThe address and pass
 word of the zoom room of the seminar are sent by e-mail on the mailinglist
  of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ingrid Daubechies (Duke University)
DTSTART:20201109T143000Z
DTEND:20201109T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/29
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 29/">Discovering low-dimensional manifolds in high-dimensional data sets</
 a>\nby Ingrid Daubechies (Duke University) as part of One World Optimizati
 on seminar\n\n\nAbstract\nDiffusion methods help understand and denoise da
 ta sets\; when there is additional structure (as is often the case)\, one 
 can use (and get additional benefit from) a fiber bundle model.\n\nThe a
 ddress and password of the zoom room of the seminar are sent by e-mail on 
 the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Asu Ozdaglar (Massachusetts Institute of Technology)
DTSTART:20201116T143000Z
DTEND:20201116T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/30
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 30/">Robustness in Machine Learning and Optimization: A Minmax Approach</a
 >\nby Asu Ozdaglar (Massachusetts Institute of Technology) as part of One 
 World Optimization seminar\n\n\nAbstract\nMinmax problems arise in a large
  number of problems in optimization\, including worst-case design\, dualit
 y theory\, and zero-sum games\, but also have become popular in machine le
 arning in the context of adversarial robustness and Generative Adversarial
  Networks (GANs). This talk will review our recent work on solving minmax 
 problems using discrete-time gradient based optimization algorithms. We fo
 cus on Optimistic Gradient Descent Ascent (OGDA) and Extra-gradient (EG) m
 ethods\, which have attracted much attention in the recent literature beca
 use of their superior empirical performance in GAN training.  We show that
  OGDA and EG can be seen as approximations of the classical proximal point
  method and use this interpretation to establish convergence rate guarante
 es for these algorithms. These guarantees are provided for the ergodic (av
 eraged) iterates of the algorithms. We also consider the last iterate of E
 G  and present convergence rate guarantees for the last iterate for smooth
  convex-concave saddle point problems. We finally turn to analysis of gene
 ralization properties of gradient based minmax algorithms using the algori
 thmic stability framework defined by Bousquet and Elisseeff. Our generaliz
 ation analysis suggests the superiority of gradient descent ascent (GDA) c
 ompared to the GDmax algorithm (which involves exact solution of the maxim
 ization 
 problem at each iteration) in the nonconvex-concave case provided that sim
 ilar learning rates are used in the descent and ascent steps.\n\nThe addre
 ss and password of the zoom room of the seminar are sent by e-mail on the 
 mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Monique Laurent (CWI Amsterdam & Tilburg University)
DTSTART:20210125T143000Z
DTEND:20210125T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/31
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 31/">Sum-of-Squares Approximation Hierarchies for Polynomial Optimization<
 /a>\nby Monique Laurent (CWI Amsterdam & Tilburg University) as part of On
 e World Optimization seminar\n\n\nAbstract\nMinimizing a polynomial functi
 on f over a compact set K is a computationally hard problem\, already when res
 tricting to quadratic polynomials and to simple sets like a ball\, a hyper
 cube\, or a simplex. We discuss some hierarchies of bounds introduced by L
 asserre\, that are based on searching for an optimal sum-of-squares densit
 y function minimizing the expected value of f over K.\n\nWe will discuss s
 everal techniques that allow us to analyse the performance guarantee of th
 ese bounds depending on the degree of the sum-of-squares density. This inc
 lud
 es using an eigenvalue reformulation of the bounds\, links to extremal roo
 ts of orthogonal polynomials\, and reducing to the univariate case by mean
 s of push-forward measures.\n\nBased on joint works with Etienne de Klerk 
 and Lucas Slot.\n\nThe address and password of the zoom room of the semina
 r are sent by e-mail on the mailinglist of the seminar one day before each
  talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hedy Attouch (University of Montpellier)
DTSTART:20210118T143000Z
DTEND:20210118T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/32
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 32/">Acceleration of First-Order Optimization Algorithms via Inertial Dyna
 mics with Hessian Driven Damping</a>\nby Hedy Attouch (University of Montp
 ellier) as part of One World Optimization seminar\n\n\nAbstract\nIn a Hilb
 ert space\, for convex optimization\, we report on recent advances regardi
 ng the acceleration of first-order algorithms. We rely on inertial dynamic
 s with damping driven by the Hessian\, and the link between continuous dyn
 amic systems and algorithms obtained by temporal discretization. We first 
 review the classical results\, from Polyak's heavy ball with friction meth
 od to Nesterov's accelerated gradient method. Then we introduce the dampin
 g driven by the Hessian\, which enters the dynamics in the form $\\nab
 la^2f(x(t))\\dot{x}(t)$. By treating this term as the time derivative of $
 \\nabla f(x(t))$\, this gives\, in discretized form\, first-order algorith
 ms. As a fundamental property\, this geometric damping makes it possible t
 o attenuate the oscillations. In addition to the fast convergence of the v
 alues\, the algorithms thus obtained show a rapid convergence towards zero
  of the gradients. The introduction of time scale factors further accelera
 tes these algorithms. On the basis of a regularization technique using the
  Moreau envelope\, we extend the method to non-smooth convex functions wit
 h extended real values. Numerical results for structured optimization prob
 lems support our theoretical findings. Finally\, we mention recent develop
 ments concerning the extension of these results to the case of general mon
 oto
 ne inclusions\, inertial ADMM algorithms\, dry friction\, inexact/stochast
 ic case\, thus showing the versatility of the method.\n\nThis lecture is b
 ased on the recent collaborative article:\nH. Attouch\, Z. Chbani\, J. Fad
 ili\, H. Riahi\, First-order optimization algorithms via inertial  systems
  with Hessian driven damping\, Math. Program.\, (2020)\, https://doi.org/1
 0.1007/s10107-020-01591-1\,  preprint available at hal-02193846.\n\nThe ad
 dress and password of the zoom room of the seminar are sent by e-mail on t
 he mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boris Mordukhovich (Wayne State University)
DTSTART:20210201T143000Z
DTEND:20210201T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/33
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 33/">Generalized Newton Algorithms For Nonsmooth Systems With Applications
  To Lasso</a>\nby Boris Mordukhovich (Wayne State University) as part of O
 ne World Optimization seminar\n\n\nAbstract\nWe propose and develop severa
 l generalized Newton-type algorithms to solve nonsmooth optimization probl
 ems and subgradient systems that are based on constructions and results of
  (mainly second-order) variational analysis and generalized differentiatio
 n. Solvability of these algorithms is proved in rather broad settings\, an
 d then verifiable conditions for their local and global superlinear conver
 gence are obtained. Special attention is paid to problems of convex comp
 osite optimization for which a generalized damped Newton algorithm exhibit
 ing global superlinear convergence is designed. The efficiency of the latt
 er algorithm is demonstrated by solving a class of Lasso problems that are
  well-recognized in applications to machine learning and statistics. For t
 his class of nonsmooth optimization problems\, we conduct numerical experi
 ments and compare the obtained results with those achieved by using other 
 first-order and second-order methods.\n\nThis talk is based on recent join
 t works with P. D. Khanh (HCMUE)\, V. T. Phat (WSU)\, M. E. Sarabi (Miami 
 Univ.)\, and D. B. Tran (WSU).\n\nThe address and password of the zoom roo
 m of the seminar are sent by e-mail on the mailinglist of the seminar one 
 day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Katya Scheinberg (ORIE Cornell)
DTSTART:20210215T143000Z
DTEND:20210215T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/34
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 34/">Complexity Analysis Framework of Adaptive Optimization Methods via Ma
 rtingales</a>\nby Katya Scheinberg (ORIE Cornell) as part of One World Opt
 imization seminar\n\n\nAbstract\nWe will present a very general framework 
 for unconstrained adaptive optimization which encompasses standard methods
  such as line search and trust region methods that use stochastic function
  measurements and/or derivatives. In particular\, methods that fall in thi
 s framework retain desirable practical features such as step acceptance cr
 iteria\, trust region adjustment\, and the ability to utilize second order
  models\, and enjoy the same convergence rates as their deterministic coun
 terpart
 s. The framework is based on bounding the expected stopping time of a stoc
 hastic process\, which satisfies certain assumptions. Thus this framework 
 provides strong convergence analysis under weaker conditions than alternat
 ive approaches in the literature. We will conclude with a discussion about
  some interesting open questions.\n\nThe address and password of the zoom 
 room of the seminar are sent by e-mail on the mailinglist of the seminar o
 ne day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Antonin Chambolle (CMAP / École Polytechnique Palaiseau)
DTSTART:20210301T143000Z
DTEND:20210301T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/35
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 35/">Derivatives of Solutions of Saddle-Point Problems</a>\nby Antonin Cha
 mbolle (CMAP / École Polytechnique Palaiseau) as part of One World Optimi
 zation seminar\n\n\nAbstract\nIn a recent paper\, we have been interested 
 in optimizing the quality of the solutions of convex optimization problems
  among a class of consistent approximations of the total variation. Such a
 problem requires an efficient way to differentiate a loss function with respec
 t to the solution of a convex problem\, computed by an iterative algorithm
  for which classical back-propagation is not always possible\, due to memo
 ry limitation. We will describe in this talk a simple way to compute the a
 djoint states\, which allows us to estimate such gradients\, and discuss issues
  relative to the smoothness of the objective.\n\nJoint work with T. Pock (
 TU Graz).\n\nThe address and password of the zoom room of the seminar are 
 sent by e-mail on the mailinglist of the seminar one day before each talk\
 n
LOCATION:https://stable.researchseminars.org/talk/OWOS/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roberto Cominetti (Adolfo Ibáñez University)
DTSTART:20210208T143000Z
DTEND:20210208T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/36
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 36/">Convergence Rates for Krasnoselskii-Mann Fixed-Point Iterations</a>\n
 by Roberto Cominetti (Adolfo Ibáñez University) as part of One World Opt
 imization seminar\n\n\nAbstract\nA popular method to approximate a fixed p
 oint of a non-expansive map $T : C \\to C$ is the Krasnoselskii-Mann itera
 tion \n\n$$(KM)\\ \\ \\ \\hspace{3cm}  x_{n+1} = (1 - \\alpha_{n+1}) x_n 
 + \\alpha_{n+1} T x_n.$$\n\nThis covers a wide range of iterative metho
 ds in convex minimization\, equilibria\, and beyond. In the Euclidean sett
 ing\, a flexible method to obtain convergence rates for this iteration is 
 the PEP methodology introduced by Drori and Teboulle (2012)\, which is bas
 ed on semi-definite programming. When the underlying norm is no longer Hil
 bertian\, PEP can be replaced by an approach based on recursive estimates 
 obtained by using optimal transport. This approach can be traced back to e
 arly work by Baillon and Bruck (1992\, 1996). In this talk we describe thi
 s optimal transport technique\, and we survey some recent progress that se
 ttles two conjectures by Baillon and Bruck\, and yields the following tigh
 t metric estimate for the fixed-point residuals\n\n$$\\|x_n - T x_n\\| \\l
 eq \\frac{\\mathrm{diam}(C)}{\\sqrt{\\pi \\sum_{k=1}^n \\alpha_k (1-\\alph
 a_k)}}.$$\n\nThe recursi
 ve estimates exhibit a very rich structure and induce a very peculiar metr
 ic over the integers. The analysis exploits an unexpected connection with 
 discrete probability and combinatorics\, related to the Gambler’s ruin f
 or sums of non-homogeneous Bernoulli trials. If time allows\, we will brie
 fly discuss the extension to inexact iterations\, and a connection to Mark
 ov chains with rewards.\n\nNote: The talk will be based on joint work with
  Mario Bravo\, Matías Pavez-Signé\, José Soto\, and José Vaisman. Pape
 rs are available at https://sites.google.com/site/cominettiroberto/.\n\nTh
 e address and password of the zoom room of the seminar are sent by e-mail 
 on the mailinglist of the seminar one day before each talk.\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jonathan Eckstein (Rutgers University)
DTSTART:20210222T143000Z
DTEND:20210222T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/37
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 37/">Progressive Hedging and Asynchronous Projective Hedging for Convex St
 ochastic Programming</a>\nby Jonathan Eckstein (Rutgers University) as par
 t of One World Optimization seminar\n\n\nAbstract\nOperator splitting meth
 ods for convex optimization and monotone inclusions have their roots in th
 e solution of partial differential equations\, and have since become popul
 ar in machine learning and image processing applications.  Their applicati
 on to "operations-research-style" optimization problems has been somewhat 
 limited.\n\nA notable exception is their application to stochastic program
 ming.  In a paper published in 1991\, Rockafellar and Wets proposed the pr
 ogressive hedging (PH) algorithm to solve large-scale convex stochastic pr
 ogramming problems.  Although they proved the convergence of the method fr
 om first principles\, it was already known to them that PH was an operator
  splitting method.\n\nThis talk will present a framework for convex stocha
 stic programming and show that applying the ADMM (and thus Douglas-Rachfor
 d splitting) to it yields the PH algorithm.  The equivalence of PH to ADMM
  has long been known but not explicitly published.\n\nNext\, the talk will
  apply the projective splitting framework of Combettes and Eckstein to the
  same formulation\, yielding a method which is similar to PH but can be im
 plemented in a partially asynchronous manner.  We call this method "asynchr
 onous projective hedging" (APH). Unlike most decomposition methods\, it do
 es not need to solve every subproblem at every iteration\; instead\, each 
 iteration may solve just a single subproblem or a small subset of the avai
 lable subproblems.\n\nFinally\, the talk will describe work integrating th
 e APH algorithm into mpi-sppy\, a Python package for modeling and distribu
 ted parallel solution of stochastic programming problems. Mpi-sppy uses th
 e Pyomo Python-based optimization modeling system.  Our experience includes
  using 8\,000 processor cores to solve a test problem instance with 1\,000
 \,000 scenarios.\n\nThis talk presents joint research with Jean-Paul Watso
 n (Lawrence Livermore National Laboratory\, USA)\, and David Woodruff (Uni
 versity of California\, Davis).\n\nThe address and password of the zoom ro
 om of the seminar are sent by e-mail on the mailinglist of the seminar one
  day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gabriel Peyré (CNRS & École Normale Supérieure)
DTSTART:20210322T143000Z
DTEND:20210322T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/38
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 38/">Scaling Optimal Transport for High Dimensional Learning</a>\nby Gabri
 el Peyré (CNRS & École Normale Supérieure) as part of One World Optimiz
 ation seminar\n\n\nAbstract\nOptimal transport (OT) has recently gained lo
 t of interest in machine learning. It is a natural tool to compare in a ge
 ometrically faithful way probability distributions. It finds applications 
 in both supervised learning (using geometric loss functions) and unsupervi
 sed learning (to perform generative model fitting). OT is however plagued 
 by the curse of dimensionality\, since it might require a number of sample
 s which grows exponentially with the dimension. In this talk\, I will revi
 ew entropic regularization methods which define geometric loss functions a
 pproximating OT with a better sample complexity. More information and refe
 rences can be found on the website of our book "Computational Optimal Tran
 sport" https://optimaltransport.github.io/\n\nThe address and password of 
 the zoom room of the seminar are sent by e-mail on the mailinglist of the 
 seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lieven Vandenberghe (UCLA)
DTSTART:20210308T143000Z
DTEND:20210308T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/39
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 39/">Bregman Proximal Methods for Semidefinite Optimization</a>\nby Lieven
  Vandenberghe (UCLA) as part of One World Optimization seminar\n\n\nAbstra
 ct\nGeneralized proximal methods based on Bregman distances offer the poss
 ibility of matching the distance to the structure in the problem\, with th
 e goal of reducing the complexity per iteration. In semidefinite optimizat
 ion\, the use of a generalized distance can allow us to avoid expensive ei
 gendecompositions\, needed in standard proximal methods for Euclidean proj
 ections on the positive semidefinite cone. We discuss applications to spar
 se semidefinite optimization\, and to other types of structure that are co
 mmon in control and signal processing\, such as Toeplitz structure.\n\nThe
  address and password of the zoom room of the seminar are sent by e-mail o
 n the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frank E. Curtis (Lehigh University)
DTSTART:20210315T143000Z
DTEND:20210315T153000Z
DTSTAMP:20260404T110822Z
UID:OWOS/40
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 40/">SQP Methods for Deterministically Constrained Stochastic Optimization
 </a>\nby Frank E. Curtis (Lehigh University) as part of One World Optimiza
 tion seminar\n\n\nAbstract\nStochastic gradient and related methods for so
 lving stochastic optimization problems have been studied extensively in re
 cent years.  It has been shown that such algorithms and much of their conv
 ergence and complexity guarantees extend in straightforward ways when one 
 considers problems involving simple constraints\, such as when one can per
 form projections onto the feasible region of the problem.  However\, setti
 ngs with general nonlinear constraints have received less attention\, and 
 many of the approaches that have been proposed for solving such problems r
 esort to using penalty or (augmented) Lagrangian methods\, which are often
  not the most effective strategies.  In this work\, we propose and analyze
  stochastic optimization algorithms for deterministically constrained prob
 lems based on the sequential quadratic optimization (commonly known as SQP
 ) methodology.  We discuss the rationale behind our proposed techniques\, 
 convergence in expectation and complexity guarantees for our algorithms\, 
 and the results of preliminary numerical experiments that we have performe
 d.\n\nThe address and password of the zoom room of the seminar are sent by
  e-mail on the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sylvain Sorin (CNRS & Sorbonne University)
DTSTART:20210329T133000Z
DTEND:20210329T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/41
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 41/">No-Regret Algorithms in On-Line Learning\, Games and Convex Optimiza
 tion</a>\nby Sylvain Sorin (CNRS & Sorbonne University) as part of One Wor
 ld Optimization seminar\n\n\nAbstract\nThe purpose of this talk is to unde
 rline links between no-regret algorithms used in learning\, games and conv
 ex optimization.\nWe will describe and analyze Projected Dynamics\, Mirr
 or Descent and Dual Averaging.\nIn particular we will study continuous a
 nd discrete time versions and their connections.\nWe will discuss:\n- li
 nk with variational inequalities\,\n- speed of convergence of the no-reg
 ret evaluation\,\n- convergence of the trajectories.\n
 \nThe address and password of the zoom room of the seminar are sent by e-m
 ail on the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Henry Wolkowicz (University of Waterloo)
DTSTART:20210405T133000Z
DTEND:20210405T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/42
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 42/">Robust Interior Point Methods for Quantum Key Rate Computation for Qu
 antum Key Distribution</a>\nby Henry Wolkowicz (University of Waterloo) as
  part of One World Optimization seminar\n\n\nAbstract\nWe use facial reduc
 tion on the nonlinear objective function and derive a stable reformulation
  of the quantum key rate for finite dimensional quantum key distribution (
 QKD) problems. This avoids the difficulties for current algorithms from si
 ngularities that arise due to loss of positive definiteness for the distri
 butions. This allows for the derivation of an efficient Gauss-Newton inter
 ior point approach. We provide provable lower and upper bounds for the har
 d nonlinear semidefinite programming problem.\n\nEmpirical evidence illust
 rates the strength of this approach as we obtain high accuracy solutions a
 nd theoretically guaranteed upper and lower bounds for QKD. We compare wit
 h other current approaches in the literature.\n\nJoint work with: Hao Hu\,
  Jiyoung Im\, Jie Lin\, Norbert Lutkenhaus.\n\nThe address and password of
  the zoom room of the seminar are sent by e-mail on the mailinglist of the
  seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Ulbrich (Technical University of Munich)
DTSTART:20210412T133000Z
DTEND:20210412T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/43
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 43/">An Approximation Scheme for Distributionally Robust Nonlinear Optimiz
 ation with Applications to PDE-Constrained Problems under Uncertainty</a>\
 nby Michael Ulbrich (Technical University of Munich) as part of One World 
 Optimization seminar\n\n\nAbstract\nWe present a sampling-free approximati
 on scheme for distributionally robust nonlinear optimization (DRO). The DR
 O problem can be written in a bilevel form that involves maximal (i.e.\, w
 orst case) value functions of expectation of nonlinear functions that depe
 nd on the optimization variables and random parameters. The maximum values
  are taken over an ambiguity set of probability measures which is defined 
 by moment constraints. To achieve a good compromise between tractability a
 nd accuracy we approximate nonlinear dependencies of the\ncost / constrain
 t functions on the random parameters by quadratic Taylor expansions. This 
 results in an approximate DRO problem which on the lower level then involv
 es value functions of parametric trust-region problems and of parametric s
 emidefinite programs. Using trust-region duality\, a barrier approach\, an
 d other techniques we construct gradient consistent smoothing functions fo
 r these value functions and show global convergence of a corresponding hom
 otopy method. We discuss the application of our approach to PDE constraine
 d optimization under uncertainty and present numerical results.\n\nThe add
 ress and password of the zoom room of the seminar are sent by e-mail on th
 e mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Asen L. Dontchev (University of Michigan)
DTSTART:20210419T133000Z
DTEND:20210419T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/44
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 44/">Sensitivity Analysis without Derivatives</a>\nby Asen L. Dontchev (Un
 iversity of Michigan) as part of One World Optimization seminar\n\n\nAbstr
 act\nThe classical sensitivity analysis developed in the early days of op
 timization and control revolves around determining derivatives of optima
 l values and solutions with respect to parameters in the problem conside
 red. In problems with constraints\, however\, (standard) differentiabili
 ty typically fails. The idea to obtain implicit function theorems withou
 t differentiability goes back to Hildebrandt and Graves in their paper f
 rom 1927 and was developed for optimization problems in the 1980s.\nIn t
 his talk\, some major developments in sensitivity analysis of optimizati
 on problems over the last several decades are outlined. Estimates for so
 lution dependence on various perturbations are derived based on regulari
 ty properties of the mappings involved in the description of the problem
 . Applications to mathematical programming\, numerical optimization\, an
 d optimal control illustrate the theoretical findings.\n\nThe address an
 d password of the zoom room of the seminar are sent by e-mail on the mai
 linglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bernd Sturmfels (MPI Leipzig)
DTSTART:20210426T133000Z
DTEND:20210426T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/45
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 45/">Wasserstein Distance to Independence Models</a>\nby Bernd Sturmfels (
 MPI Leipzig) as part of One World Optimization seminar\n\n\nAbstract\nAn i
 ndependence model for discrete random variables is a variety in a probabil
 ity simplex. \nGiven any data distribution\, we seek to minimize the Wasse
 rstein distance to the model.\nThat distance comes from a polyhedral norm 
 whose unit ball is dual to an alcoved polytope.\nThe solution to our optim
 ization problem is a piecewise algebraic function of the data.\nIn this ta
 lk we discuss the algebraic and geometric structure of this function.\n\nR
 eference: arXiv:2003.06725\n\nThe address and password of the zoom room of
  the seminar are sent by e-mail on the mailinglist of the seminar one day 
 before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/45/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jane J. Ye (University of Victoria)
DTSTART:20210503T133000Z
DTEND:20210503T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/46
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 46/">On Solving Bilevel Programming Problems</a>\nby Jane J. Ye (Universit
 y of Victoria) as part of One World Optimization seminar\n\n\nAbstract\nA 
 bilevel programming problem is a sequence of two optimization problems whe
 re the constraint region of the upper level problem is determined implicit
 ly by the solution set to the lower level problem. It can be used to model
  a two-level hierarchical system where the two decision makers have differ
 ent objectives and make their decisions on different levels of hierarchy. 
 Recently more and more applications including those in machine learning ha
 ve been modelled as bilevel optimization problems. In this talk\, I will r
 eport some recent developments in optimality conditions and numerical algo
 rithms for solving this class of very difficult optimization problems.\n\n
 The address and password of the zoom room of the seminar are sent by e-mai
 l on the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/46/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marco Antonio López Cerdá (University of Alicante)
DTSTART:20210517T133000Z
DTEND:20210517T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/47
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 47/">Properties of Inequality Systems and Their Influence in Convex Optimi
 zation</a>\nby Marco Antonio López Cerdá (University of Alicante) as par
 t of One World Optimization seminar\n\n\nAbstract\nIn this seminar\, diffe
 rent optimality conditions are presented for the  convex optimization prob
 lem with an arbitrary number of constraints. One  possible approach is to 
 replace the set of constraints by a single constraint  involving the supre
 mum function\, and to appeal to different characterizations of  its subdif
 ferential. With a view to this goal\, we extend to infinite convex  system
 s two constraint qualifications that are crucial in semi-infinite linear  
 programming. The first one\, called the Farkas-Minkowski property\, is glo
 bal in  nature\, while the other one is a local property\, which is known 
 as the local  Farkas-Minkowski property. Four different KKT-type optimalit
 y conditions are  then deduced\, either exact or asymptotic\, under progre
 ssively weaker constraint  qualifications.\n\nThe address and password of 
 the zoom room of the seminar are sent by e-mail on the mailinglist of the 
 seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/47/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mikhail Solodov (IMPA Rio de Janeiro)
DTSTART:20210524T133000Z
DTEND:20210524T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/48
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 48/">Regularized Smoothing for Solution Mappings of Convex Problems\, with
  Applications to Two-Stage Stochastic Programming and Some Hierarchical Pr
 oblems</a>\nby Mikhail Solodov (IMPA Rio de Janeiro) as part of One World 
 Optimization seminar\n\n\nAbstract\nMany modern optimization problems invo
 lve\, in the objective function\, solution mappings or optimal-value funct
 ions of other optimization problems.\nIn many cases\, those solution map
 pings and optimal-value functions are nonsmooth\,\nand the optimal-value f
 unction is also possibly nonconvex (even if the defining data\nis smooth a
 nd convex).\nMoreover\, stemming from solving optimization problems\, thos
 e solution mappings and\nvalue-functions are usually not known explicitly\
 , via any closed formulas. Hence\,\nthere is no formula to differentiate (
 even in the sense of generalized derivatives).\nThis presents an obvious c
 hallenge for solving the "upper" optimization problem\,\nas derivatives th
 erein cannot be computed.\n\nWe present an approach to regularize and appr
 oximate solution mappings of fully\nparametrized convex optimization probl
 ems that combines interior penalty (log-barrier)\nwith Tikhonov regulariza
 tion. Because the regularized solution mappings are single-valued\nand smo
 oth under reasonable conditions\, they can also be used to build a computa
 tionally\npractical smoothing for the associated optimal-value function.\n
 \nOne motivating application of interest is two-stage (possibly nonconvex)
  stochastic\nprogramming. In addition to theoretical properties\, numerica
 l experiments are presented\,\ncomparing the approach with the bundle meth
 od for nonsmooth optimization.\n\nAnother application is a certain class o
 f hierarchical decision problems\nthat can be viewed as single-leader mult
 i-follower games.\nThe objective function of the leader involves the decis
 ions of the followers (agents)\,\nwhich are taken independently by solving
  their own convex optimization problems.\nWe show how our approach is appl
 icable to derive both agent-wise and scenario-wise\ndecomposition algorith
 ms for problems of this kind.\nNumerical experiments and some comparison
 s with the complementarity solver PATH are shown for the two-stage stochast
 ic Walrasian equilibrium problem.\n\nThe address and password of the zoom 
 room of the seminar are sent by e-mail on the mailinglist of the seminar o
 ne day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/48/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heinz H. Bauschke (University of British Columbia Okanagan)
DTSTART:20210628T133000Z
DTEND:20210628T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/49
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 49/">Compositions of Projection Mappings: Fixed Point Sets and Difference 
 Vectors</a>\nby Heinz H. Bauschke (University of British Columbia Okanagan
 ) as part of One World Optimization seminar\n\n\nAbstract\nProjection oper
 ators and associated projection algorithms are fundamental building blocks
  in fixed point theory and optimization.\nIn this talk\, I will survey rec
 ent results on the displacement mapping of the right-shift operator and sk
 etch a new application deepening our understanding of the geometry of the 
 fixed point set of the composition of projection operators in Hilbert spac
 e.\nBased on joint works with Salha Alwadani\, Julian Revalski\, and Shawn
  Wang.\n\nThe address and password of the zoom room of the seminar are sen
 t by e-mail on the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Hintermüller (Weierstrass Institute / Humboldt University
  of Berlin)
DTSTART:20210510T133000Z
DTEND:20210510T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/50
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 50/">Optimization with Learning-Informed Differential Equation Constraints
  and Its Applications</a>\nby Michael Hintermüller (Weierstrass Institute
  / Humboldt University of Berlin) as part of One World Optimization semina
 r\n\n\nAbstract\nInspired by applications in optimal control of semilinear
  elliptic partial differential equations and physics-integrated imaging\, 
 differential equation constrained optimization problems with constituents 
 that are only accessible through data-driven techniques are studied. A par
 ticular focus is on the analysis and on numerical methods for problems wit
 h machine-learned components. For a rather general context\, an error anal
 ysis is provided\, and particular properties resulting from artificial neu
 ral network based approximations are addressed. Moreover\, for each of the
  two inspiring applications analytical details are presented and numerical
  results are provided.\n\nThe address and password of the zoom room of the
  seminar are sent by e-mail on the mailinglist of the seminar one day befo
 re each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Constantin Zălinescu (Octav Mayer Institute of Mathematics Iași)
DTSTART:20210607T133000Z
DTEND:20210607T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/51
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 51/">On the Role of Interiority Notions in Convex Analysis and Optimizatio
 n</a>\nby Constantin Zălinescu (Octav Mayer Institute of Mathematics Iaș
 i) as part of One World Optimization seminar\n\n\nAbstract\nIt is well kno
 wn that in finite dimensions\, in order to get the formulae for the subdif
 ferentials and conjugates of functions obtained by operations which prese
 rve convexity or for getting strong duality results\, the sufficient condi
 tions are expressed by using the relative interiors of the domains of the 
 involved functions.\n\nIn the infinite-dimensional case\, several interi
 ority notions are used: the algebraic (relative) interior\, the quasi (rel
 ative) interior and mixtures of these. It is the aim of our talk to discu
 ss the advantages and limitations of these notions.\n\nThe address and pas
 sword of the zoom room of the seminar are sent by e-mail on the mailinglis
 t of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/51/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Darinka Dentcheva (Stevens Institute of Technology)
DTSTART:20210614T133000Z
DTEND:20210614T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/52
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 52/">Subregular Recourse in Multistage Stochastic Optimization</a>\nby Dar
 inka Dentcheva (Stevens Institute of Technology) as part of One World Opti
 mization seminar\n\n\nAbstract\nWe discuss nonlinear multistage stochastic
  optimization problems in the spaces of integrable functions.\nThe problem
 s may include nonlinear dynamics and general objective functionals with dy
 namic risk measures as a particular case.\nWe present an analysis of the c
 ausal operators describing the dynamics of the system and the Clarke subdi
 fferential for a penalty function involving such operators. We introduce t
 he concept of subregular recourse in nonlinear multistage stochastic optim
 ization and establish subregularity of the resulting systems in two formu
 lations:\nwith built-in nonanticipativity and with explicit nonanticipativ
 ity constraints.\nOptimality conditions for both formulations and their re
 lations will be presented.\nThis is a joint work with Andrzej Ruszczynski\
 , Rutgers University\, New Jersey.\n\nThe address and password of the zoom
  room of the seminar are sent by e-mail on the mailinglist of the seminar 
 one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/52/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Pock (Graz University of Technology)
DTSTART:20210621T133000Z
DTEND:20210621T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/53
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 53/">Learning with Markov Random Field Models for Computer Vision</a>\nby 
 Thomas Pock (Graz University of Technology) as part of One World Optimizat
 ion seminar\n\n\nAbstract\nIn this talk I will show how learning technique
 s can be used to significantly improve the quality of discrete Markov Rand
 om Field (MRF) models. I will start by discussing fast algorithms that com
 bine dynamic programming with continuous optimization for solving MRF mode
 ls. I then show how their potentials can be learned from data to achieve s
 tate-of-the-art performance for computer vision tasks such as stereo\, opt
 ical flow and image segmentation.\n\nThe address and password of the zoom 
 room of the seminar are sent by e-mail on the mailinglist of the seminar o
 ne day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Kuhn (EPFL)
DTSTART:20210531T133000Z
DTEND:20210531T143000Z
DTSTAMP:20260404T110822Z
UID:OWOS/54
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OWOS/
 54/">A General Framework for Optimal Data-Driven Optimization</a>\nby Dani
 el Kuhn (EPFL) as part of One World Optimization seminar\n\n\nAbstract\nWe
  propose a statistically optimal approach to construct data-driven decisio
 ns for stochastic optimization problems. Fundamentally\, a data-driven dec
 ision is simply a function that maps the available training data to a feas
 ible action. It can always be expressed as the minimizer of a surrogate op
 timization model constructed from the data. The quality of a data-driven d
 ecision is measured by its out-of-sample risk. An additional quality measu
 re is its out-of-sample disappointment\, which we define as the probabilit
 y that the out-of-sample risk exceeds the optimal value of the surrogate o
 ptimization model. The crux of data-driven optimization is that the data-g
 enerating probability measure is unknown. An ideal data-driven decision sh
 ould therefore minimize the out-of-sample risk simultaneously with respect
  to every conceivable probability measure (and thus in particular with res
 pect to the unknown true measure). Unfortunately\, such ideal data-driven 
 decisions are generally unavailable. This prompts us to seek data-driven d
 ecisions that minimize the out-of-sample risk subject to an upper bound on
  the out-of-sample disappointment - again simultaneously with respect to e
 very conceivable probability measure. We prove that such Pareto-dominant d
 ata-driven decisions exist under conditions that allow for interesting app
 lications: the unknown data-generating probability measure must belong to 
 a parametric ambiguity set\, and the corresponding parameters must admit a
  sufficient statistic that satisfies a large deviation principle. If these
  conditions hold\, we can further prove that the surrogate optimization mo
 del generating the optimal data-driven decision must be a distributionally
  robust optimization problem constructed from the sufficient statistic and
  the rate function of its large deviation principle. This shows that the o
 ptimal method for mapping data to decisions is\, in a rigorous statistical
  sense\, to solve a distributionally robust optimization model. Maybe surp
 risingly\, this result holds irrespective of whether the original stochast
 ic optimization problem is convex or not and holds even when the training 
 data is non-i.i.d. As a byproduct\, our analysis reveals how the structura
 l properties of the data-generating stochastic process impact the shape of
  the ambiguity set underlying the optimal distributionally robust optimiza
 tion model.\n\nThis is joint work with Tobias Sutter and Bart Van Parys.\n
 \nThe address and password of the zoom room of the seminar are sent by e-m
 ail on the mailinglist of the seminar one day before each talk\n
LOCATION:https://stable.researchseminars.org/talk/OWOS/54/
END:VEVENT
END:VCALENDAR
