BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Tiến-Sơn Phạm (University of Dalat)
DTSTART:20200603T070000Z
DTEND:20200603T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/1/">Openness\, Hölder metric regularity and Hölder continuity prope
 rties of semialgebraic set-valued maps</a>\nby Tiến-Sơn Phạm (Univers
 ity of Dalat) as part of Variational Analysis and Optimisation Webinar\n\n
 \nAbstract\nGiven a semialgebraic set-valued map with closed graph\, we sh
 ow that it is Hölder metrically subregular and that the following conditi
 ons are equivalent:\n\n(i) the map is an open map from its domain into it
 s range and the range of the map is locally closed\;\n\n(ii) the map is H
 ölder met
 rically regular\;\n\n(iii) the inverse map is pseudo-Hölder continuous\;\
 n\n(iv) the inverse map is lower pseudo-Hölder continuous.\n\nAn applicat
 ion\, via Robinson’s normal map formulation\, leads to the following res
 ult in the context of semialgebraic variational inequalities: if the solut
 ion map (as a map of the parameter vector) is lower semicontinuous\, then th
 e solution map is finite and pseudo-Hölder continuous. In particular\, we 
 obtain a negative answer to a question mentioned in the paper of Dontchev 
 and Rockafellar [Characterizations of strong regularity for variational in
 equalities over polyhedral convex sets. SIAM J. Optim.\, 6(4):1087–1105\
 , 1996]. As a byproduct\, we show that for a (not necessarily semialgebrai
 c) continuous single-valued map\, the openness and the non-extremality are
  equivalent. This fact improves the main result of Pühl [Convexity and op
 enness with linear rate. J. Math. Anal. Appl.\, 227:382–395\, 1998]\, wh
 ich requires the convexity of the map in question.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michel Théra (University of Limoges)
DTSTART:20200617T070000Z
DTEND:20200617T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/2/">Old and new results on equilibrium and quasi-equilibrium problems
 </a>\nby Michel Théra (University of Limoges) as part of Variational Anal
 ysis and Optimisation Webinar\n\n\nAbstract\nIn this talk I will briefly s
 urvey some old results going back to Ky Fan and to Brezis\, Nirenberg and
  Stampacchia. Then I will give some new results related to the existence o
 f solutions to equilibrium and quasi-equilibrium problems without any
  convexity assumption. Coverage includes some equivalences to the Ekeland 
 variational principle for bifunctions and basic facts about transfer lower
  continuity. An application is given to systems of quasi-equilibrium probl
 ems.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marco A. López-Cerdá (University of Alicante)
DTSTART:20200624T070000Z
DTEND:20200624T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/3/">Optimality conditions in convex semi-infinite optimization. An ap
 proach based on the subdifferential of the supremum function</a>\nby Marco
  A. López-Cerdá (University of Alicante) as part of Variational Analysis
  and Optimisation Webinar\n\n\nAbstract\nWe present a survey on optimality
 conditions (of Fritz John and KKT type) for semi-infinite convex optimiza
 tion problems. The methodology is based on the use of the subdifferential 
 of the supremum of the infinite family of constraint functions. Our approa
 ch aims to establish weak constraint qualifications and\, in the last step
 \, to drop the usual continuity/closedness assumptions which are standard
  in the literature. The material in this survey is extracted from the
  following papers:\n\nR. Correa\, A. Hantoute\, M. A. López\, Weaker cond
 itions for subdifferential calculus of convex functions. J. Funct. Anal. 2
 71 (2016)\, 1177-1212.\n\nR. Correa\, A. Hantoute\, M. A. López\, Moreau-
 Rockafellar type formulas for the subdifferential of the supremum function
 . SIAM J. Optim. 29 (2019)\, 1106-1130.\n\nR. Correa\, A. Hantoute\, M. A.
  López\, Valadier-like formulas for the supremum function II: the compact
 ly indexed case. J. Convex Anal. 26 (2019)\, 299-324.\n\nR. Correa\, A. Ha
 ntoute\, M. A. López\, Subdifferential of the supremum via compactificati
 on of the index set. To appear in Vietnam J. Math. (2020).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hoa Bui (Curtin University)
DTSTART:20200708T070000Z
DTEND:20200708T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/4/">Zero Duality Gap Conditions via Abstract Convexity</a>\nby Hoa Bu
 i (Curtin University) as part of Variational Analysis and Optimisation Web
 inar\n\n\nAbstract\nUsing tools provided by the theory of abstract convexi
 ty\, we extend conditions for zero duality gap to the context of nonconvex
  and nonsmooth optimization. Generalizing the classical setting\, an abstra
 ct convex function is the upper envelope of a subset of a family of abstrac
 t affine functions (conventional vertical translations of the abst
 ract linear functions). We establish new characterizations of the zero dua
 lity gap under no assumptions on the topology on the space of abstract lin
 ear functions. Endowing the latter space with the topology of pointwise co
 nvergence\, we extend several fundamental facts of the conventional convex
  analysis. In particular\, we prove that the zero duality gap property can
  be stated in terms of an inclusion involving ε-subdifferentials\, which 
 are shown to possess a sum rule. These conditions are new even in conventi
 onal convex cases. The Banach-Alaoglu-Bourbaki theorem is extended to the 
 space of abstract linear functions. The latter result extends a fact recen
 tly established by Borwein\, Burachik and Yao in the conventional convex c
 ase.\n\nThis talk is based on a joint work with Regina Burachik\, Alex Kru
 ger and David Yost.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Saunderson (Monash University)
DTSTART:20200715T070000Z
DTEND:20200715T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/5/">Lifting for simplicity: concise descriptions of convex sets</a>\n
 by James Saunderson (Monash University) as part of Variational Analysis an
 d Optimisation Webinar\n\n\nAbstract\nThis talk will give a selective tour
  through the theory and applications of lifts of convex sets. A lift of a 
 convex set is a higher-dimensional convex set that projects onto the origi
 nal set. Many interesting convex sets have lifts that are dramatically sim
 pler to describe than the original set. Finding such simple lifts has sign
 ificant algorithmic implications\, particularly for associated optimizatio
 n problems. We will consider both the classical case of polyhedral lifts\,
  which are described by linear inequalities\, as well as spectrahedral lif
 ts\, which are defined by linear matrix inequalities. The tour will includ
 e discussion of ways to construct lifts\, ways to find obstructions to the
  existence of lifts\, and a number of interesting examples from a variety 
 of mathematical contexts. (Based on joint work with H. Fawzi\, J. Gouveia\
 , P. Parrilo\, and R. Thomas).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Akiko Takeda (University of Tokyo)
DTSTART:20200729T070000Z
DTEND:20200729T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/6/">Deterministic and Stochastic Gradient Methods for Non-Smooth Non
 -Convex Regularized Optimization</a>\nby Akiko Takeda (University of Tokyo
 ) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\n
 Our work focuses on deterministic/stochastic gradient methods for optimizi
 ng a smooth non-convex loss function with a non-smooth non-convex regulari
 zer. Research on stochastic gradient methods is quite limited\, and until 
 recently no non-asymptotic convergence results had been reported. After s
 howing a deterministic approach\, we present simple stochastic gradient al
 gorithms\, for finite-sum and general stochastic optimization problems\, w
 hich have superior convergence complexities compared to the current state-
 of-the-art. We also compare our algorithms’ performance in practice for 
 empirical risk minimization.\n\nThis is based on joint work with Tianxia
 ng Liu\, Ting Kei Pong and Michael R. Metel.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evgeni Nurminski (Far Eastern Federal University)
DTSTART:20200805T070000Z
DTEND:20200805T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/7/">Practical Projection with Applications</a>\nby Evgeni Nurminski (
 Far Eastern Federal University) as part of Variational Analysis and Optimi
 sation Webinar\n\n\nAbstract\nProjection of a point onto a given set is a
  very common computational operation in countless algorithms and applicat
 ions. However\, except for the simplest sets\, it is itself a nontrivial o
 peration\, often complicated by large dimension\, computational degeneracy
 \, nonuniqueness (even for orthogonal projection onto convex sets in certa
 in situations)\, and so on. This talk presents some practical solutions\,
  i.e. finite algorithms\, for projection onto polyhedral sets\, among them
  simplices\, polytopes\, polyhedra\, and finitely generated cones\, with s
 ome discussion of “nonlinearities”\, decomposition and parallel comput
 ation. We also consider the application of the projection operation in lin
 ear optimization and an epi-projection algorithm for convex optimization.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xiaoqi Yang (The Hong Kong Polytechnic University)
DTSTART:20200812T070000Z
DTEND:20200812T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/8/">On error bound moduli for locally Lipschitz and regular functions
 </a>\nby Xiaoqi Yang (The Hong Kong Polytechnic University) as part of Var
 iational Analysis and Optimisation Webinar\n\n\nAbstract\nWe first introdu
 ce for a closed and convex set two classes of subsets: the near and far en
 ds relative to a point\, and give some full characterizations for these en
 d sets by virtue of the face theory of closed and convex sets. We provide 
 some connections between closedness of the far (near) end and the relative
  continuity of the gauge (cogauge) for closed and convex sets. We illustra
 te that the distance from 0 to the outer limiting subdifferential of the s
 upport function of the subdifferential set\, which is essentially the dist
 ance from 0 to the end set of the subdifferential set\, is an upper estima
 te of the local error bound modulus. This upper estimate becomes tight for
  a convex function under some regularity conditions. We show that the dist
 ance from 0 to the outer limiting subdifferential set of a lower C^1 funct
 ion is equal to the local error bound modulus.\n\n\nReferences:\nLi\, M.H.
 \, Meng K.W. and Yang X.Q.\, On far and near ends of closed and convex set
 s. Journal of Convex Analysis. 27 (2020) 407–421.\nLi\, M.H.\, Meng K.W.
  and Yang X.Q.\, On error bound moduli for locally Lipschitz and regular f
 unctions\, Math. Program. 171 (2018) 463–487.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marián Fabian (Czech Academy of Sciences)
DTSTART:20200701T070000Z
DTEND:20200701T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/9/">Can Pourciau’s open mapping theorem be derived from Clarke’s 
 inverse mapping theorem?</a>\nby Marián Fabian (Czech Academy of Sciences
 ) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\n
 We discuss the possibility of deriving Pourciau’s open mapping theorem f
 rom Clarke’s inverse mapping theorem. These theorems work with the Clark
 e generalized Jacobian. In our journey\, we will face several interesting 
 phenomena and pitfalls in the world of (just) 2 by 3 matrices.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Oliver Stein (Karlsruhe Institute of Technology)
DTSTART:20200722T070000Z
DTEND:20200722T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/10/">A general branch-and-bound framework for global multiobjective o
 ptimization</a>\nby Oliver Stein (Karlsruhe Institute of Technology) as pa
 rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe deve
 lop a general framework for branch-and-bound methods in multiobjective opt
 imization. Our focus is on natural generalizations of notions and techniqu
 es from the single objective case. In particular\, after the notions of up
 per and lower bounds on the globally optimal value from the single objecti
 ve case have been transferred to upper and lower bounding sets on the set 
 of nondominated points for multiobjective programs\, we discuss several po
 ssibilities for discarding tests. They compare local upper bounds of the p
 rovisional nondominated sets with relaxations of partial upper image sets\
 , where the latter can stem from ideal point estimates\, from convex relax
 ations\, or from relaxations by a reformulation-linearization technique.\n
 \nThe discussion of approximation properties of the provisional nondo
 minated set suggests a natural selection rule along wit
 h a natural termination criterion. Finally\, we discuss some issues which do
  not occur in the single objective case and which impede some desirable co
 nvergence properties\, thus also motivating a natural generalization of th
 e convergence concept.\n\nThis is joint work with Gabriele Eichfelder\, Pe
 ter Kirst\, and Laura Meng.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christiane Tammer (Martin Luther University Halle-Wittenberg)
DTSTART:20200909T070000Z
DTEND:20200909T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/11/">Subdifferentials and Lipschitz properties of translation invaria
 nt functionals and applications</a>\nby Christiane Tammer (Martin Luther U
 niversity Halle-Wittenberg) as part of Variational Analysis and Optimisati
 on Webinar\n\n\nAbstract\nIn the talk\, we are dealing with translation in
 variant functionals and their application for deriving necessary condition
 s for minimal solutions of constrained and unconstrained optimization prob
 lems with respect to general domination sets.\n\nTranslation invariant fun
 ctionals are a natural and powerful tool for the separation of not necessa
 rily convex sets and scalarization. There are many applications of transla
 tion invariant functionals in nonlinear functional analysis\, vector optim
 ization\, set optimization\, optimization under uncertainty\, mathematical
  finance as well as consumer and production theory.\n\nThe primary objecti
 ve of this talk is to establish formulas for basic and singular subdiffere
 ntials of translation invariant functionals and to study important propert
 ies such as monotonicity\, the PSNC property\, the Lipschitz behavior\, et
 c. of these nonlinear functionals without assuming that the shifted set in
 volved in the definition of the functional is convex. The second objective
  is to propose a new way to scalarize a set-valued optimization problem. I
 t allows us to study necessary conditions for minimal solutions in a very 
 broad setting in which the domination set is not necessarily convex or sol
 id or conical. The third objective is to apply our results to vector-value
 d approximation problems.\n\nThis is a joint work with T.Q. Bao (Northern 
 Michigan University).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gerd Wachsmuth (BTU)
DTSTART:20200902T070000Z
DTEND:20200902T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/12/">New Constraint Qualifications for Optimization Problems in Banac
 h Spaces based on Asymptotic KKT Conditions</a>\nby Gerd Wachsmuth (BTU) a
 s part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nOpt
 imization theory in Banach spaces suffers from the lack of available const
 raint qualifications. Not only are there very few constraint qualification
 s\, but those that exist are often violated even in simple applications. T
 his is very much in contrast to finite-dimensional n
 onlinear programs\, where a large number of constraint qualifications is k
 nown. Since these constraint qualifications are usually defined using the 
 set of active inequality constraints\, it is difficult to extend them to t
 he infinite-dimensional setting. One exception is a recently introduced se
 quential constraint qualification based on asymptotic KKT conditions. This
  paper shows that this so-called asymptotic KKT regularity allows suitable
  extensions to the Banach space setting in order to obtain new constraint 
 qualifications. The relation of these new constraint qualifications to exi
 sting ones is discussed in detail. Their usefulness is also shown by sever
 al examples as well as an algorithmic application to the class of augmente
 d Lagrangian methods.\n\nThis is a joint work with Christian Kanzow (Würz
 burg) and Patrick Mehlitz (Cottbus).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Regina Burachik (UniSA)
DTSTART:20200923T070000Z
DTEND:20200923T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/13/">A Primal–Dual Penalty Method via Rounded Weighted-$L_1$ Lagran
 gian Duality</a>\nby Regina Burachik (UniSA) as part of Variational Analys
 is and Optimisation Webinar\n\n\nAbstract\nWe propose a new duality scheme
  based on a sequence of smooth minorants of the weighted-$l_1$ penalty fun
 ction\, interpreted as a parametrized sequence of augmented Lagrangians\,
  to solve nonconvex constrained optimization problems. For the induced seq
 uence of dual problems\, we establish strong asymptotic duality properties
 . Namely\, we show that (i) the sequence of dual problems is convex and (
 ii) the dual values monotonically increase to the optimal primal value. We
  use these properties to devise a subgradient based primal–dual method\,
  and show that the generated primal sequence accumulates at a solution of 
 the original problem. We illustrate the performance of the new method with
  three different types of test problems: A polynomial nonconvex problem\, 
 large-scale instances of the celebrated kissing number problem\, and the M
 arkov–Dubins problem. Our numerical experiments demonstrate that\, when 
 compared with the traditional implementation of a well-known smooth solver
 \, our new method (using the same well-known solver in its subproblem) can
  find better quality solutions\, i.e.\, “deeper” local minima\, or sol
 utions closer to the global minimum. Moreover\, our method seems to be mor
 e time efficient\, especially when the problem has a large number of const
 raints.\n\nThis is a joint work with C. Y. Kaya (UniSA) and C. J. Price (U
 niversity of Canterbury\, Christchurch\, New Zealand).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christopher Price (University of Canterbury)
DTSTART:20200916T070000Z
DTEND:20200916T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/14/">A direct search method for constrained optimization via the roun
 ded $l_1$ penalty function.</a>\nby Christopher Price (University of Cante
 rbury) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstr
 act\nThis talk looks at the constrained optimization problem in which the
  objective and constraints are Lipschitz continuous black-box functions.
  The approach uses a sequence of smoothed and offset $\\ell_1$ penalty fu
 nctions. The method generates an approximate minimizer of each penalty fu
 nction\, and then adjusts the offsets and other parameters. The smoothing
  is steadily reduced\, ultimately revealing the exact $\\ell_1$ penalty f
 unction. The method preferentially uses a discrete quasi-Newton step\, ba
 cked up by a global direction search. Theoretical convergence results are
  given for the smooth and non-smooth cases under relevant conditions. Num
 erical results are presented on a variety of problems with non-smooth obj
 ective or constraint functions. These results show the method is effectiv
 e in practice.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yalçın Kaya (UniSA)
DTSTART:20200930T070000Z
DTEND:20200930T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/15/">Constraint Splitting and Projection Methods for Optimal Control<
 /a>\nby Yalçın Kaya (UniSA) as part of Variational Analysis and Optimisa
 tion Webinar\n\n\nAbstract\nWe consider a class of optimal control problem
 s with constrained control variable. We split the ODE constraint and the c
 ontrol constraint of the problem so as to obtain two optimal control subpr
 oblems for each of which solutions can be written simply.  Employing these
  simpler solutions as projections\, we find numerical solutions to the ori
 ginal problem by applying four different projection-type methods: (i) Dyks
 tra’s algorithm\, (ii) the Douglas–Rachford (DR) method\, (iii) the Ar
 agón Artacho–Campoy (AAC) algorithm and (iv) the fast iterative shrinka
 ge-thresholding algorithm (FISTA).  The problem we study is posed in infin
 ite-dimensional Hilbert spaces. The behaviour of the DR and AAC algorithms
  is explored via numerical experiments with respect to their parameters. An e
 rror analysis is also carried out numerically for a particular instance of
  the problem for each of the algorithms.  This is joint work with Heinz Ba
 uschke and Regina Burachik.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hieu Thao Nguyen (TU Delft)
DTSTART:20200819T070000Z
DTEND:20200819T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/16/">Projection algorithms for phase retrieval with high numerical ap
 erture</a>\nby Hieu Thao Nguyen (TU Delft) as part of Variational Analysis
  and Optimisation Webinar\n\n\nAbstract\nWe develop the mathematical frame
 work in which the class of projection algorithms can be applied to high nu
 merical aperture (NA) phase retrieval. Within this framework we first anal
 yze the basic steps of solving this problem by projection algorithms and e
 stablish the closed forms of all the relevant prox-operators. We then stud
 y the geometry of the high-NA phase retrieval problem and the obtained res
 ults are subsequently used to establish convergence criteria of projection
  algorithms. Making use of the vectorial point-spread-function (PSF) is\, 
 on the one hand\, the key difference between this work and the literature 
 of phase retrieval mathematics which mostly deals with the scalar PSF. The
  results of this paper\, on the other hand\, can be viewed as extensions o
 f those concerning projection methods for low-NA phase retrieval. Importan
 tly\, the improved performance of projection methods over the other classe
 s of phase retrieval algorithms in the low-NA setting now also becomes app
 licable to the high-NA case. This is demonstrated by the accompanying nume
 rical results which show that all available solution approaches for high-N
 A phase retrieval are outperformed by projection methods.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reinier Diaz Millan (Deakin University)
DTSTART:20201007T060000Z
DTEND:20201007T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/17/">An algorithm for pseudo-monotone operators with application to r
 ational approximation</a>\nby Reinier Diaz Millan (Deakin University) as p
 art of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nThe mo
 tivation of this paper is the development of an optimisation method for so
 lving optimisation problems appearing in Chebyshev rational and generalise
 d rational approximation problems\, where the approximations are construct
 ed as ratios of linear forms (linear combinations of basis functions). The 
 coefficients of the linear forms are subject to optimisation and the basis
  functions are continuous functions. It is known that the objective functio
 ns in generalised rational approximation problems are quasi-convex. In thi
 s paper we also prove a stronger result: the objective functions are pseud
 o-convex. Then we develop numerical methods that are efficient for a wi
 de range of pseudo-convex functions and test them on generalised rational 
 approximation problems.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jein-Shan Chen (NTNU)
DTSTART:20200826T070000Z
DTEND:20200826T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/18/">Two approaches for absolute value equation by using smoothing fu
 nctions</a>\nby Jein-Shan Chen (NTNU) as part of Variational Analysis and 
 Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Björn Rüffer (University of Newcastle)
DTSTART:20201014T060000Z
DTEND:20201014T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/19/">A Lyapunov perspective to projection algorithms</a>\nby Björn R
 üffer (University of Newcastle) as part of Variational Analysis and Optim
 isation Webinar\n\n\nAbstract\nThe operator theoretic point of view has be
 en very successful in the study of iterative splitting methods under a uni
 fied framework. These algorithms include the Method of Alternating Project
 ions as well as the Douglas-Rachford Algorithm\, which is dual to the Alte
 rnating Direction Method of Multipliers\, and they allow nice geometric in
 terpretations. While convergence results for these algorithms have been kn
 own for decades when problems are convex\, for non-convex problems progres
 s on convergence results has significantly increased once arguments based 
 on Lyapunov functions were used. In this talk we give an overview of the u
 nderlying techniques in Lyapunov's direct method and look at convergence o
 f iterative projection methods through this lens.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wilfredo Sosa (UCB)
DTSTART:20201021T060000Z
DTEND:20201021T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/20/">On diametrically maximal sets\, maximal premonotone maps and pre
 monotone bifunctions</a>\nby Wilfredo Sosa (UCB) as part of Variational Anal
 ysis and Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Radek Cibulka (University of West Bohemia)
DTSTART:20201028T060000Z
DTEND:20201028T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/21/">Continuous selections for inverse mappings in Banach spaces</a>\
 nby Radek Cibulka (University of West Bohemia) as part of Variational Anal
 ysis and Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ernest Ryu (Seoul National University)
DTSTART:20201125T060000Z
DTEND:20201125T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/22/">Scaled Relative Graph: Nonexpansive operators via 2D Euclidean G
 eometry</a>\nby Ernest Ryu (Seoul National University) as part of Variatio
 nal Analysis and Optimisation Webinar\n\n\nAbstract\nMany iterative method
 s in applied mathematics can be thought of as fixed-point iterations\, and
  such algorithms are usually analyzed analytically\, with inequalities. In
  this work\, we present a geometric approach to analyzing contractive and 
 nonexpansive fixed point iterations with a new tool called the scaled rela
 tive graph (SRG). The SRG provides a rigorous correspondence between nonli
 near operators and subsets of the 2D plane. Under this framework\, a geome
 tric argument in the 2D plane becomes a rigorous proof of contractiveness 
 of the corresponding operator.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vinesha Peiris (Swinburne University of Technology)
DTSTART:20201111T060000Z
DTEND:20201111T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/23/">The extension of linear inequality method for generalised ration
 al Chebyshev approximation</a>\nby Vinesha Peiris (Swinburne University of
  Technology) as part of Variational Analysis and Optimisation Webinar\n\n\
 nAbstract\nIn this talk we will demonstrate the correspondence between the
  linear inequality method developed for rational Chebyshev approximation a
 nd the bisection method used in quasiconvex optimisation. It naturally con
 nects rational and generalised rational Chebyshev approximation problems w
 ith modern developments in the area of quasiconvex functions. Moreover\, t
 he linear inequality method can be extended to a broader class of Chebyshe
 v approximation problems\, where the corresponding objective functions rem
 ain quasiconvex.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chayne Planiden (University of Wollongong)
DTSTART:20201104T060000Z
DTEND:20201104T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/24/">New gradient and Hessian approximation methods for derivative-fr
 ee optimisation</a>\nby Chayne Planiden (University of Wollongong) as part
  of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nIn genera
 l\, derivative-free optimisation (DFO) uses approximations of first- and s
 econd-order information in minimisation algorithms. DFO is found in direct
 -search\, model-based\, trust-region and other mainstream optimisation tec
 hniques and is gaining popularity in recent years. This work discusses pre
 vious results on some particular uses of DFO: the proximal bundle method a
 nd the VU-algorithm\, and then presents improvements made this year on the
  gradient and Hessian approximation techniques. These improvements can be 
 inserted into any routine that requires such estimations.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aram Arutyunov and S.E. Zhukovskiy (Moscow State Uni/ICS RAS)
DTSTART:20201118T060000Z
DTEND:20201118T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/25
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/25/">Local and Global Inverse and Implicit Function Theorems</a>\nby 
 Aram Arutyunov and S.E. Zhukovskiy (Moscow State Uni/ICS RAS) as part of V
 ariational Analysis and Optimisation Webinar\n\n\nAbstract\nIn the talk\, 
 we present a local inverse function theorem on a cone in a neighbourhood o
 f an abnormal point. We present a global inverse function theorem in the f
 orm of a theorem on a trivial bundle\, guaranteeing that if a smooth mappi
 ng of finite-dimensional spaces is uniformly nonsingular\, then it has a s
 mooth right inverse satisfying an a priori estimate. We also present a glo
 bal implicit function theorem guaranteeing the existence and continuity of
  a global implicit function under the condition that the mappings in quest
 ion are uniformly nonsingular. The generalization of these results to mapp
 ings of Hilbert and Banach spaces is also discussed.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nam Ho-Nguyen (University of Sydney)
DTSTART:20210210T000000Z
DTEND:20210210T010000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/26/">Coordinate Descent Without Coordinates: Tangent Subspace Descent
  on Riemannian Manifolds</a>\nby Nam Ho-Nguyen (University of Sydney) as p
 art of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe con
 sider an extension of the coordinate descent algorithm to manifold domains
 \, and provide convergence analyses for geodesically convex and non-convex
  smooth objective functions. Our key insight is to draw an analogy between
  coordinate blocks in Euclidean space and tangent subspaces of a manifold.
  Hence\, our method is called tangent subspace descent (TSD). The core pri
 nciple behind ensuring convergence of TSD is the appropriate choice of sub
 space at each iteration. To this end\, we propose two novel conditions: th
 e gap-ensuring and $C$-randomized norm conditions on deterministic and ran
 domized modes of subspace selection respectively. These ensure convergence
  for smooth functions\, and are satisfied in practical contexts. We propos
 e two subspace selection rules of particular practical interest that satis
 fy these conditions: a deterministic one for the manifold of square orthog
 onal matrices\, and a randomized one for the more general Stiefel manifold
 .\n(This is joint work with David Huckleberry Gutman\, Texas Tech Universi
 ty.)\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Javier Peña (Carnegie Mellon University)
DTSTART:20210303T000000Z
DTEND:20210303T010000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/27
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/27/">The condition number of a function relative to a set</a>\nby Jav
 ier Peña (Carnegie Mellon University) as part of Variational Analysis and
  Optimisation Webinar\n\n\nAbstract\nThe condition number of a differentia
 ble convex function\, namely the ratio of its smoothness to strong convexi
 ty constants\, is closely tied to fundamental properties of the function. 
 In particular\, the condition number of a quadratic convex function is the
  square of the aspect ratio of a canonical ellipsoid associated to the fun
 ction. Furthermore\, the condition number of a function bounds the linear 
 rate of convergence of the gradient descent algorithm for unconstrained co
 nvex minimization.\n\nWe propose a condition number of a differentiable co
 nvex function relative to a reference set and distance function pair. This
  relative condition number is defined as the ratio of a relative smoothnes
 s constant to a relative strong convexity constant. We show that the relative cond
 ition number extends the main properties of the traditional condition numb
 er both in terms of its geometric insight and in terms of its role in char
 acterizing the linear convergence of first-order methods for constrained c
 onvex minimization.\n\nThis is joint work with David H. Gutman at Texas Te
 ch University.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Russell Luke (University of Göttingen)
DTSTART:20210407T070000Z
DTEND:20210407T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/28/">Inconsistent Stochastic Feasibility: the Case of Stochastic Tomo
 graphy</a>\nby Russell Luke (University of Göttingen) as part of Variatio
 nal Analysis and Optimisation Webinar\n\n\nAbstract\nIn an X-FEL experimen
 t\, high-energy x-ray pulses are shot with high repetition rates on a str
 eam of identical single biomolecules and the scattered photons are record
 ed on a pixelized detector. These experiments provide a new and unique ro
 ute to macromolecular structure determination at room temperature\, witho
 ut the need for crystallization\, and at low material usage. The main cha
 llenges in these experiments are the extremely low signal-to-noise ratio d
 ue to the very low expected photon count per scattering image (10-50) an
 d the unknown orientation of the molecules in each scattering image.\n\nM
 athematically\, this is a stochastic computed tomography problem where th
 e goal is to reconstruct a three-dimensional object from noisy two-dimens
 ional images of a nonlinear mapping whose orientation relative to the obj
 ect is both random and unobservable. The idea is to develop a two-step pr
 ocedure for solving this problem. In the first step\, we numerically comp
 ute a probability distribution associated with the observed patterns (tak
 en together) as the stationary measure of a Markov chain whose generator i
 s constructed from the individual observations. Correlation in the data a
 nd other a priori information is used to further constrain the problem an
 d accelerate convergence to a stationary measure. With the stationary mea
 sure in hand\, the second step involves solving a phase retrieval proble
 m for the mean electron density relative to a fixed reference orientatio
 n.\n\nThe focus of this talk is conceptual\, and involves re-envisioning p
 rojection algorithms as Markov chains. We present some new routes to "old
 " results\, and a fundamentally new approach to understanding and accoun
 ting for numerical computation on conventional computers.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Huynh Van Ngai (University of Quy Nhon)
DTSTART:20210324T060000Z
DTEND:20210324T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/29
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/29/">Generalized Nesterov's accelerated proximal gradient algorithms 
 with convergence rate of order $o(1/k^2)$</a>\nby Huynh Van Ngai (Universi
 ty of Quy Nhon) as part of Variational Analysis and Optimisation Webinar\n
 \n\nAbstract\nThe accelerated gradient method initiated by Nesterov is now
  recognized to be one of the most powerful tools for solving smooth convex
  optimization problems. This method improves significantly the convergence
  rate of function values from $O(1/k)$ of the standard gradient method dow
 n to $O(1/k^2).$ In this paper\, we present two generalized variants of Ne
 sterov's accelerated proximal gradient method for solving composite conv
 ex optimization problems in which the objective function is represented by
  the sum of a smooth convex function and a nonsmooth convex part. We show 
 that with suitable ways to pick the sequences of parameters\, the converge
 nce rate for the function values of this proposed method is actually of o
 rder $o(1/k^2).$ In particular\, when the objective function is $p$-uniformly
  convex for $p>2\,$ the convergence rate is of order $O\\left(\\ln k/k^{2p
 /(p-2)}\\right)\,$ and the convergence is linear if the objective function i
 s strongly convex. As a by-product\, we derive a forward-backward algorithm g
 eneralizing the one by Attouch-Peypouquet [SIAM J. Optim.\, 26(3)\, 1824-1
 834\, (2016)]\, which produces a convergent sequence with a convergence r
 ate of the function values of order $o(1/k^2).$\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yboon Garcia Ramos (Universidad del Pacífico)
DTSTART:20210331T000000Z
DTEND:20210331T010000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/30
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/30/">Characterizing quasiconvexity of the pointwise infimum of a fami
 ly of  arbitrary translations of quasiconvex functions</a>\nby Yboon Garci
 a Ramos (Universidad del Pacífico) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nIn this talk we will present some result
 s concerning the problem of preserving quasiconvexity when summing up qua
 siconvex functions and we will relate it to the problem of preserving quas
 iconvexity when taking the infimum of a family of quasiconvex functions. T
 o develop our study\, the notion of quasiconvex family is introduced\, and
  we establish various characterizations of such a concept.\n\nJoint work w
 ith Fabián Flores\, Universidad de Concepción and Nicolas Hadjisavvas\, 
 University of the Aegean.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ewa Bednarczuk (Warsaw University of Technology and Systems Resear
 ch Institute of the PAS)
DTSTART:20210421T070000Z
DTEND:20210421T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/31
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/31/">On  duality for nonconvex minimization problems within the frame
 work of abstract convexity</a>\nby Ewa Bednarczuk (Warsaw University of Te
 chnology and Systems Research Institute of the PAS) as part of Variational
  Analysis and Optimisation Webinar\n\n\nAbstract\nBy applying the perturba
 tion function approach\, we propose the Lagrangian and the conjugate dua
 ls for minimization problems of the sum of two\, generally nonconvex\, fu
 nctions. The main tool is the abstract convexity theory\, called $\\Phi$
 -convexity\, and minimax theorems for $\\Phi$-convex functions. We provid
 e conditions ensuring zero duality gap and introduce generalized Karush-Ku
 hn-Tucker conditions that characterize solutions to primal and dual proble
 ms. We also discuss the relationship between the dual problems proposed in th
 e present investigation and some conjugate-type duals existing in the lite
 rature. The presentation is based on joint works with Monika Syga.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roger Behling (Fundação Getúlio Vargas)
DTSTART:20210414T010000Z
DTEND:20210414T020000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/32
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/32/">Circumcentering projection type methods</a>\nby Roger Behling (F
 undação Getúlio Vargas) as part of Variational Analysis and Optimisatio
 n Webinar\n\n\nAbstract\nEnforcing successive projections\, averaging the 
 composition of reflections and barycentering projections are established techn
 iques for solving convex feasibility problems. These schemes are called th
 e method of alternating projections (MAP)\, the Douglas-Rachford method (D
 RM) and the Cimmino method (CimM)\, respectively. Recently\, we have devel
 oped the circumcentered-reflection method (CRM)\, whose iterations employ 
 generalized circumcenters that are able to accelerate the aforementioned c
 lassical approaches both theoretically and numerically. In this talk\, the
  main results on CRM are presented and a glimpse of future work will be pr
 ovided as well.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander J. Zaslavski (The Technion - Israel Institute of Technol
 ogy)
DTSTART:20210217T060000Z
DTEND:20210217T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/33
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/33/">Subgradient Projection Algorithm with Computational Errors</a>\n
 by Alexander J. Zaslavski (The Technion - Israel Institute of Technology) 
 as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe
  study the subgradient projection algorithm for the minimization of conve
 x nonsmooth functions in the presence of computational errors. We show th
 at our algorithms generate a good approximate solution if the computation
 al errors are bounded from above by a small positive constant. Moreover\, f
 or a known computational error\, we determine what approximate solution c
 an be obtained and how many iterates one needs for this.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yura Malitsky (Linköping University)
DTSTART:20210519T070000Z
DTEND:20210519T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/34
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/34/">Adaptive gradient descent without descent</a>\nby Yura Malitsky 
 (Linköping University) as part of Variational Analysis and Optimisation W
 ebinar\n\n\nAbstract\nIn this talk I will present some recent results for 
 the most classical optimization method — gradient descent. We will show 
 that a simple zero cost rule is sufficient to completely automate gradient
  descent. The method adapts to the local geometry\, with convergence guara
 ntees depending only on the smoothness in a neighborhood of a solution. Th
 e presentation is based on a joint work with K. Mishchenko\, see\nhttps://
 arxiv.org/abs/1910.09529.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nguyen Duy Cuong (Federation University)
DTSTART:20210224T060000Z
DTEND:20210224T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/35
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/35/">Necessary conditions for transversality properties</a>\nby Nguye
 n Duy Cuong (Federation University) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nTransversality properties of collections
  of sets play an important role in optimization and variational analysis\,
  e.g.\, as constraint qualifications\, qualification conditions in subdiff
 erential\, normal cone and coderivative calculus\, and convergence analysi
 s of computational algorithms. In this talk\, we present some new results 
 on primal (geometric\, metric\, slope) and dual (subdifferential\, normal 
 cone) necessary (in some cases also sufficient) conditions for transversal
 ity properties in both linear and nonlinear settings. Quantitative relatio
 ns between transversality properties and the corresponding regularity prop
 erties of set-valued mappings are also discussed.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lyudmila Polyakova (Saint-Petersburg State University)
DTSTART:20210505T070000Z
DTEND:20210505T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/36
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/36/">Smooth approximations of D.C. functions</a>\nby Lyudmila Polyako
 va (Saint-Petersburg State University) as part of Variational Analysis and
  Optimisation Webinar\n\n\nAbstract\nAn investigation of properties of dif
 ference of convex functions is based on the basic facts and theorems of co
 nvex analysis\, as the class of convex functions is one of the most invest
 igated among nonsmooth functions. For an arbitrary convex function a famil
 y of continuously differentiable approximations is constructed using the i
 nfimal convolution operation. If the domain of the considered function is 
 compact then such smooth convex approximations are uniform in the Chebyshe
 v metric. Using this technique\, a smooth approximation is constructed for d
 .c. functions. The optimization properties of these approximations are
  studied.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander Kruger (Federation University Australia)
DTSTART:20210310T060000Z
DTEND:20210310T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/37
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/37/">Error bounds revisited</a>\nby Alexander Kruger (Federation Univ
 ersity Australia) as part of Variational Analysis and Optimisation Webinar
 \n\n\nAbstract\nWe propose a unifying general framework of quantitative pr
 imal and dual sufficient error bound conditions covering linear and nonlin
 ear\, local and global settings. We expose the roles of the assumptions in
 volved in the error bound assertions\, in particular\, on the underlying s
 pace: general metric\, Banach or Asplund. Employing special collections of
  slope operators\, we introduce a succinct form of sufficient error bound 
 conditions\, which allows one to combine in a single statement several dif
 ferent assertions: nonlocal and local primal space conditions in complete 
 metric spaces\, and subdifferential conditions in Banach and Asplund space
 s. In the nonlinear setting\, we cover both the conventional and the ‘al
 ternative’ error bound conditions.\n\nThis is joint work with Nguyen Duy
  Cuong (Federation University). The talk is based on the paper:\nN. D. Cuo
 ng and A. Y. Kruger\, Error bounds revisited\, arXiv: 2012.03941 (2020).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Bartl (Silesian University in Opava)
DTSTART:20210317T060000Z
DTEND:20210317T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/38
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/38/">Every compact convex subset of matrices is the Clarke Jacobian o
 f some Lipschitzian mapping</a>\nby David Bartl (Silesian University in Op
 ava) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstrac
 t\nGiven a non-empty compact convex subset $P$ of $m \\times n$ matrices\,
  we show constructively that there exists a Lipschitzian mapping $g\\colon
  {\\bf R}^n \\to {\\bf R}^m$ such that its Clarke Jacobian $\\partial g(0)
  = P$.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jiri Outrata (Institute of Information Theory and Automation of th
 e Czech Academy of Sciences)
DTSTART:20210428T070000Z
DTEND:20210428T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/39
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/39/">On the solution of static contact problems with Coulomb friction
  via the semismooth* Newton method</a>\nby Jiri Outrata (Institute of Info
 rmation Theory and Automation of the Czech Academy of Sciences) as part of
  Variational Analysis and Optimisation Webinar\n\n\nAbstract\nThe lecture 
 deals with application of a new Newton-type method to the numerical soluti
 on of discrete 3D contact problems with Coulomb friction. This method is w
 ell suited to the solution of inclusions and the resulting conceptual algo
 rithm exhibits\, under appropriate conditions\, local superlinear converge
 nce. After a description of the method a new model for the considered cont
 act problem\, amenable to the application of the new method\, will be pres
 ented. The second part of the talk is then devoted to an efficient impleme
 ntation of the general algorithm and to numerical tests. Throughout the wh
 ole lecture\, various tools of modern variational analysis will be employe
 d.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hung Phan (University of Massachusetts Lowell)
DTSTART:20210512T010000Z
DTEND:20210512T020000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/40
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/40/">Adaptive splitting algorithms for the sum of operators</a>\nby H
 ung Phan (University of Massachusetts Lowell) as part of Variational Analy
 sis and Optimisation Webinar\n\n\nAbstract\nA general optimization problem
  can often be reduced to finding a zero of a sum of multiple (maximally) m
 onotone operators\, which creates challenging computational tasks as a who
 le. This motivates the development of splitting algorithms in order to simpl
 ify the computations by dealing with each operator separately\, hence the 
 name "splitting". Some of the most successful splitting algorithms in appl
 ications are the forward-backward algorithm\, the Douglas-Rachford algorit
 hm\, and the alternating directions method of multipliers (ADMM). In this 
 talk\, we discuss some adaptive splitting algorithms for finding a zero of
  the sum of operators. The main idea is to adapt the algorithm parameters 
 to the generalized monotonicity of the operators so that the generated seq
 uence converges to a fixed point.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guoyin Li (The University of New South Wales)
DTSTART:20210526T070000Z
DTEND:20210526T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/41
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/41/">Proximal methods for nonsmooth and nonconvex fractional programs
 : when sparse optimization meets fractional programs</a>\nby Guoyin Li (Th
 e University of New South Wales) as part of Variational Analysis and Optim
 isation Webinar\n\n\nAbstract\nNonsmooth and nonconvex fractional programs
  are ubiquitous and also highly challenging. This class includes the composite opt
 imization problems studied extensively lately\, and encompasses many impor
 tant modern optimization problems arising from diverse areas such as the r
 ecently proposed scale-invariant sparse signal reconstruction problem in sig
 nal processing\, the robust Sharpe ratio optimization problems in finance 
 and the sparse generalized eigenvalue problem in discriminant analysis.
  In this talk\, we will introduce extrapolated proximal methods for solvin
 g nonsmooth and nonconvex fractional programs and analyse their convergenc
 e behaviour. Interestingly\, we will show that the proposed algorithm exhi
 bits linear convergence for the sparse generalized eigenvalue problem with eit
 her cardinality regularization or sparsity constraints. This is achieved b
 y identifying the explicit desingularization function of the Kurdyka-Lojas
 iewicz inequality for the merit function of the fractional optimization mo
 dels. Finally\, if time permits\, we will present some preliminary encoura
 ging numerical results for the proposed methods for sparse signal reconstr
 uction and sparse Fisher discriminant analysis.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vuong Phan (University of Southampton)
DTSTART:20210609T070000Z
DTEND:20210609T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/42
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/42/">The Boosted Difference of Convex Functions Algorithm</a>\nby Vuo
 ng Phan (University of Southampton) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nWe introduce a new algorithm for solving
  Difference of Convex functions (DC) programming\, called Boosted Differen
 ce of Convex functions Algorithm (BDCA). BDCA accelerates the convergence 
 of the classical difference of convex functions algorithm (DCA) thanks to 
 an additional line search step. We prove that any limit point of the BDCA 
 iterative sequence is a critical point of the problem under consideration 
 and that the corresponding objective value is monotonically decreasing and
  convergent. The global convergence and convergence rate of the iterations
  are obtained under the Kurdyka-Lojasiewicz property. We provide applicati
 ons and numerical experiments for a hard problem in biochemistry and two c
 hallenging problems in machine learning\, demonstrating that BDCA outperfo
 rms DCA. For the biochemistry problem\, BDCA was five times faster than DC
 A\, for the Minimum Sum-of-Squares Clustering problem\, BDCA was on averag
 e sixteen times faster than DCA\, and for the Multidimensional Scaling pro
 blem\, BDCA was three times faster than DCA.\n\nJoint work with Francisco 
 J. Aragon Artacho (University of Alicante\, Spain).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Scott B. Lindstrom (Curtin University)
DTSTART:20210602T070000Z
DTEND:20210602T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/43
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/43/">A primal/dual computable approach to improving spiraling algorit
 hms\, based on minimizing spherical surrogates for Lyapunov functions</a>\
 nby Scott B. Lindstrom (Curtin University) as part of Variational Analysis
  and Optimisation Webinar\n\n\nAbstract\nOptimization problems are frequen
 tly tackled by iterative application of an operator whose fixed points all
 ow for fast recovery of locally optimal solutions. Under light-weight assu
 mptions\, stability is equivalent to existence of a function---called a Ly
 apunov function---that encodes structural information about both the probl
 em and the operator. Lyapunov functions are usually hard to find\, but if 
 a practitioner had a priori knowledge---or a reasonable guess---about it
 s structure\, they could equivalently tackle the problem by seeking to min
 imize the Lyapunov function directly. We introduce a class of methods that
  does this. Interestingly\, for certain feasibility problems\, the circumcente
 red-reflection method (CRM) is an existing member of this class. However\, CRM
 may not lend itself well to primal/dual adaptation\, for reasons we show. 
 Motivated by the discovery of our new class\, we experimentally demonstrat
 e the success of one of its other members\, implemented in a primal/dual f
 ramework.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adil Bagirov (Federation University)
DTSTART:20210623T070000Z
DTEND:20210623T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/44
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/44/">Nonsmooth DC optimization: recent developments</a>\nby Adil Bagi
 rov (Federation University) as part of Variational Analysis and Optimisati
 on Webinar\n\n\nAbstract\nIn this talk we consider unconstrained optimizat
 ion problems where the objective functions are represented as a difference
  of two convex (DC) functions. Various applications of DC optimization in 
 machine learning are presented. We discuss two different approaches to des
 ign methods of nonsmooth DC optimization: an approach based on the extensi
 on of bundle methods and an approach based on the DCA (difference of conve
 x algorithm). We also discuss numerical results obtained using these metho
 ds.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bruno F. Lourenço (Institute of Statistical Mathematics)
DTSTART:20210616T070000Z
DTEND:20210616T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/45
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/45/">Error bounds\, amenable cones and beyond</a>\nby Bruno F. Louren
 ço (Institute of Statistical Mathematics) as part of Variational Analysis
  and Optimisation Webinar\n\n\nAbstract\nIn this talk we present an overvi
 ew of the theory of amenable cones\, facial residual functions \nand their
  applications to error bounds for conic linear systems. A feature of our r
 esults is that no constraint qualifications are ever assumed\, so they are
  applicable even to some problems with unfavourable theoretical propertie
 s. Time allowing\, we will discuss some recent findings on the geometry of
  amenable cones and also some extensions for non-amenable cones.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/45/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrew Eberhard (RMIT University)
DTSTART:20210630T070000Z
DTEND:20210630T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/46
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/46/">Bridges between Discrete and Continuous Optimisation in Stochasti
 c Programming</a>\nby Andrew Eberhard (RMIT University) as part of Variati
 onal Analysis and Optimisation Webinar\n\n\nAbstract\nFor many years the
 re has been a divide between the theoretical underpinning of algorithmi
 c analysis in discrete and continuous optimisation. As a case study\, st
 ochastic optimisation provides a classic example. Here the theoretical f
 oundations of continuous stochastic optimisation lie in the theory of mo
 notone operators\, operator splitting and nonsmooth analysis\, none of w
 hich appear to be applicable to discrete problems. In this talk we will d
 iscuss the application of ideas from continuous optimisation and variati
 onal analysis to the study of progressive hedging-like methods for discr
 ete optimisation models. The key to the success of such approaches is th
 e acceptance of the existence of MIP and QMIP solvers that can be integr
 ated into the analysis as "black box solvers" that return solutions with
 in a broader algorithmic analysis. Here methods more familiar to continu
 ous optimisers and nonsmooth analysts can be used to provide proofs of c
 onvergence of both primal and dual methods. Unlike in continuous optimis
 ation\, there still exist separate primal and dual methods and analyses i
 n the discrete context. We will discuss this aspect and some convergent m
 odifications that yield robust and effective versions of these methods\, a
 long with numerical validation of their effectiveness.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/46/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Walaa Moursi (University of Waterloo)
DTSTART:20210915T010000Z
DTEND:20210915T020000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/47
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/47/">The Douglas-Rachford algorithm for solving possibly inconsistent
  optimization problems</a>\nby Walaa Moursi (University of Waterloo) as pa
 rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nMore th
 an 40 years ago\, Lions and Mercier introduced in a seminal paper the Doug
 las–Rachford algorithm. Today\, this method is well recognized as a clas
 sical and highly successful splitting method to find minimizers of the sum
  of two (not necessarily smooth) convex functions. While the underlying th
 eory has matured\, one case remains a mystery: the behaviour of the shadow
  sequence when the given functions have disjoint domains. Building on prev
 ious work\, we establish for the first time weak and value convergence of 
 the shadow sequence generated by the Douglas–Rachford algorithm in a set
 ting of unprecedented generality. The weak limit point is shown to solve t
 he associated normal problem which is a minimal perturbation of the origin
 al optimization problem. We also present new results on the geometry of th
 e minimal displacement vector.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/47/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nghia Tran (Oakland University)
DTSTART:20211027T000000Z
DTEND:20211027T010000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/48
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/48/">Sharp and strong minima for robust recovery</a>\nby Nghia Tran (
 Oakland University) as part of Variational Analysis and Optimisation Webin
 ar\n\n\nAbstract\nIn this talk\, we show the important roles of sharp mini
 ma and strong minima for robust recovery. We also obtain several character
 izations of sharp minima for convex regularized optimization problems. Our
  characterizations are quantitative and verifiable especially for the case
  of decomposable norm regularized problems including sparsity\, group-spar
 sity\, and low-rank convex problems. For group-sparsity optimization probl
 ems\, we show that a unique solution is a strong solution and obtain quant
 itative characterizations for solution uniqueness.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/48/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Yost (Federation University Australia)
DTSTART:20211201T060000Z
DTEND:20211201T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/49
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/49/">Minimising the number of faces of a class of polytopes</a>\nby D
 avid Yost (Federation University Australia) as part of Variational Analysi
 s and Optimisation Webinar\n\n\nAbstract\nPolytopes are the natural domain
 s of many optimisation problems. We consider a "higher order" optimisatio
 n problem\, whose domain is a class of polytopes\, asking what is the mini
 mum number of faces (of a given dimension) for this class\, and which poly
 topes are the minimisers. Generally we consider the class of d-dimensional
  polytopes with V vertices\, for fixed V and d. The corresponding maximis
 ation problem was solved decades ago\, but serious progress on the minimis
 ation question has only been made in recent years.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dominikus Noll (Institut de Mathématiques de Toulouse)
DTSTART:20211006T060000Z
DTEND:20211006T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/50
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/50/">Alternating projections with applications to Gerchberg-Saxton er
 ror reduction</a>\nby Dominikus Noll (Institut de Mathématiques de Toulou
 se) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract
 \nWe discuss alternating projections between closed non-convex sets $A\,B$
  in $R^n$ and obtain criteria for convergence when $A\,B$ do not intersect
  transversally. The infeasible case\, $A \\cap B = \\emptyset$\, is also a
 ddressed\, and here we expect convergence toward a gap between $A\,B$. Fo
 r sub-analytic sets $A\,B$\, sub-linear convergence rates depending on th
 e Lojasiewicz exponent of the distance function can be computed. We then prese
 nt applications to the Gerchberg-Saxton error reduction algorithm\, to Cad
 zow's denoising algorithm\, and to instances of the Gaussian EM-algorithm.
 \n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nadezda Sukhorukova (Swinburne University of Technology)
DTSTART:20211103T060000Z
DTEND:20211103T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/51
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/51/">Rational approximation and its role in different branches of mat
 hematics and applications</a>\nby Nadezda Sukhorukova (Swinburne Universit
 y of Technology) as part of Variational Analysis and Optimisation Webinar\
 n\n\nAbstract\nRational approximation is a powerful function approximation
  tool. Rational approximation is approximation by a ratio of two polynomia
 ls\, whose coefficients are subject to optimisation. Numerical methods for
  rational approximation have been developed independently in different bra
 nches of mathematics. In this talk\, I will present the interconnections b
 etween different numerical methods developed for rational approximation. Mo
 st of them can be extended to the case of the so-called generalised ration
 al approximation\, where the approximation is a ratio of two linear forms a
 nd the basis functions are not limited to monomials. Finally\, I am going t
 o talk about real-life applications of rational and generalised rational ap
 proximation.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/51/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jane Ye (University of Victoria\, British Columbia)
DTSTART:20211110T000000Z
DTEND:20211110T010000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/52
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/52/">Difference of convex algorithms for bilevel programs with applic
 ations in hyperparameter selection</a>\nby Jane Ye (University of Victoria
 \, British Columbia) as part of Variational Analysis and Optimisation Webi
 nar\n\n\nAbstract\nA bilevel program is a sequence of two optimization pro
 blems where the constraint region of the upper level problem is determined
 implicitly by the solution set of the lower level problem. In this talk\
 , I will present difference of convex algorithms for solving bilevel progr
 ams in which the upper level objective functions are difference of convex 
 functions and the lower level programs are fully convex. This nontrivial c
 lass of bilevel programs provides a powerful modelling framework for deali
 ng with  applications arising from hyperparameter selection in machine lea
 rning. Thanks to the full convexity of the lower level program\,  the valu
 e function of the lower level program turns out to be convex and hence the
  bilevel program can be reformulated as a difference of convex bilevel pro
 gram. We propose two algorithms for solving the reformulated difference of
  convex program and show their convergence to stationary points under very
  mild assumptions. Finally\, we conduct numerical experiments on a bilevel m
 odel of support vector machine classification.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/52/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sidney Morris (Federation University Australia)
DTSTART:20211020T060000Z
DTEND:20211020T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/53
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/53/">Tweaking Ramanujan's Approximation of n!</a>\nby Sidney Morris (
 Federation University Australia) as part of Variational Analysis and Optim
 isation Webinar\n\n\nAbstract\nIn 1730 James Stirling\, building on the wo
 rk of Abraham de Moivre\, published what is known as Stirling's approximat
 ion of n!. He gave a good formula which is asymptotic to n!. Since then hu
 ndreds of papers have given alternative proofs of his result and improved 
 upon it\, notably by Burnside\, Gosper\, and Mortici. However\, Sri
 nivasa Ramanujan gave a remarkably better asymptotic formula. Hirschhorn a
 nd Villarino gave a nice proof of Ramanujan's result and an error estimate
  for the approximation. \n\nThis century there have been several improveme
 nts of Stirling's formula including by Nemes\, Windschitl\, and Chen. In t
 his presentation it is shown \n\n(i)	how all these asymptotic results can 
 be easily verified\; \n\n(ii)	how Hirschhorn and Villarino's argument allo
 ws a tweaking of Ramanujan's result to give a better approximation\; \n\n(
 iii)	that a new asymptotic formula can be obtained by further tweaking of 
 Ramanujan's result\;\n\n(iv)	that Chen's asymptotic formula is better than
  the others mentioned here\, and the new asymptotic formula is comparable 
 with Chen's.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Maxim Dolgopolik (Institute for Problems in Mechanical Engineering
  of the Russian Academy of Sciences)
DTSTART:20210922T070000Z
DTEND:20210922T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/54
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/54/">DC Semidefinite Programming</a>\nby Maxim Dolgopolik (Institute 
 for Problems in Mechanical Engineering of the Russian Academy of Sciences)
  as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nD
 C (Difference-of-Convex) optimization has been an active area of research 
 in nonsmooth nonlinear optimization for over 30 years. The interest in thi
 s class of problems is based on the fact that one can efficiently utilize 
 ideas and methods of convex analysis/optimization to solve DC optimization
  problems. The main results of DC optimization can be extended to the case
  of nonlinear semidefinite programming problems\, i.e. problems with matri
 x-valued constraints\, in several different ways. We will discuss two poss
 ible generalizations of the notion of DC function to the case of matrix-va
 lued functions and show how these generalizations lead to two different DC
  optimization approaches to nonlinear semidefinite programming.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/54/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rubén Campoy (University of Valencia)
DTSTART:20211013T060000Z
DTEND:20211013T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/55
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/55/">A product space reformulation with reduced dimension</a>\nby Rub
 én Campoy (University of Valencia) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nThe product space reformulation is a pow
 erful trick when tackling monotone inclusions defined by finitely many ope
 rators with splitting algorithms. This technique constructs an equivalent 
 two-operator problem\, embedded in a product Hilbert space\, that preserve
 s computational tractability. Each operator in the original problem requir
 es one dimension in the product space. In this talk\, we propose a new ref
 ormulation that reduces the dimension of the resulting product Hilb
 ert space. We shall discuss the case of not necessarily convex feasibility
  problems. As an application\, we obtain a new parallel variant of the Dou
 glas-Rachford algorithm with a reduction in the number of variables. The c
 omputational advantage is illustrated through some numerical experiments.\
 n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/55/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quoc Tran-Dinh (University of North Carolina)
DTSTART:20210929T010000Z
DTEND:20210929T020000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/56
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/56/">Randomized Douglas-Rachford Splitting Algorithms for Federated C
 omposite Optimization</a>\nby Quoc Tran-Dinh (University of North Carolina
 ) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\n
 In this talk\, we present two randomized Douglas-Rachford splitting algori
 thms to solve a class of composite nonconvex finite-sum optimization probl
 ems arising from federated learning. Our algorithms rely on a combination 
 of three main techniques: Douglas-Rachford splitting scheme\, randomized b
 lock-coordinate technique\, and asynchronous strategy. We show that our al
 gorithms achieve the best-known communication complexity bounds under stan
 dard assumptions in the nonconvex setting\, while allowing inexact updates
  of local models with only a subset of users in each round\, and handling n
 onsmooth convex regularizers. Our second algorithm can be implemented in a
 n asynchronous mode using a general probabilistic model to capture differe
 nt computational architectures. We illustrate our algorithms with many num
 erical examples and show that the new algorithms have a promising performa
 nce compared to common existing methods.\n\nThis talk is based on the coll
 aboration with Nhan Pham (UNC)\, Lam M. Nguyen (IBM)\,\nand Dzung Phan (IB
 M).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/56/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fred Roosta-Khorasani (The University of Queensland)
DTSTART:20211201T000000Z
DTEND:20211201T010000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/57
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/57/">A Newton-MR Algorithm with Complexity Guarantee for Non-Convex P
 roblems</a>\nby Fred Roosta-Khorasani (The University of Queensland) as pa
 rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nClassic
 ally\, the conjugate gradient (CG) method has been the dominant solver in 
 most inexact Newton-type methods for unconstrained optimization. In this t
 alk\, we consider replacing CG with the minimum residual method (MINRES)\,
  which is often used for symmetric but possibly indefinite linear systems.
  We show that MINRES has an inherent ability to detect negative-curvature 
 directions. Equipped with this advantage\, we discuss algorithms\, under t
 he general name of Newton-MR\, which can be used for optimization of gener
 al non-convex objectives\, and that come with favourable complexity guaran
 tees. We also give numerical examples demonstrating the performance of the
 se methods for large-scale non-convex machine learning problems.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/57/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Majid Abbasov (Saint-Petersburg State University)
DTSTART:20211117T060000Z
DTEND:20211117T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/58
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/58/">Converting exhausters and coexhausters</a>\nby Majid Abbasov (Sa
 int-Petersburg State University) as part of Variational Analysis and Optim
 isation Webinar\n\n\nAbstract\nExhausters and coexhausters are notions of 
 constructive nonsmooth analysis which are used to study extremal propertie
 s of functions. An upper exhauster (coexhauster) is used to get an approxi
 mation of a considered function in the neighborhood of a point in the form
  of $\\min\\max$ of linear (affine) functions. A lower exhauster (coexhaus
 ter) is used to represent the approximation in the form of $\\max\\min$ of
  linear (affine) functions. Conditions for a minimum in a most simple way 
 are expressed by means of upper exhausters and coexhausters\, while condit
 ions for a maximum are described in terms of lower exhausters and coexhaus
 ters. Thus the problem of obtaining an upper exhauster or coexhauster when
  the lower one is given\, and vice versa\, arises. In the talk I will cons
 ider this problem and present a new method for such a conversion. All nee
 ded auxiliary information will also be provided.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/58/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Janosch Rieger (Monash University)
DTSTART:20220316T060000Z
DTEND:20220316T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/59
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/59/">Generalized Gearhart-Koshy acceleration for the Kaczmarz method<
 /a>\nby Janosch Rieger (Monash University) as part of Variational Analysis
  and Optimisation Webinar\n\n\nAbstract\nThe Kaczmarz method is an iterati
 ve numerical method for solving large and sparse rectangular systems of li
 near equations. Gearhart and Koshy have developed an acceleration techniqu
 e for the Kaczmarz method for homogeneous linear systems  that minimises t
 he distance to the desired solution in the direction of a full Kaczmarz st
 ep. Matthew Tam has recently generalised this acceleration technique to in
 homogeneous linear systems.\n\nIn this talk\, I will develop this techniqu
 e into an acceleration scheme that minimises the Euclidean norm error over
  an affine subspace spanned by a number of previous iterates and one addit
 ional cycle of the Kaczmarz method. The key challenge is to find a formula
 tion in which all parameters of the least-squares problem defining the uni
 que minimizer are known\, and to solve this problem efficiently.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/59/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shawn Wang (The University of British Columbia)
DTSTART:20220323T000000Z
DTEND:20220323T010000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/60
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/60/">Roots of the identity operator and proximal mappings: (classical
  and phantom) cycles and gap vectors</a>\nby Shawn Wang (The University of
  British Columbia) as part of Variational Analysis and Optimisation Webina
 r\n\n\nAbstract\nRecently\, Simons provided a lemma for a support function
  of a closed convex set in a general Hilbert space and used it to prove th
 e geometry conjecture on cycles of projections. We extend Simons's lemma t
 o closed convex functions\, show its connections to Attouch-Théra duality
 \, and use it to characterize (classical and phantom) cycles and gap vecto
 rs of proximal mappings. \n\nJoint work with H. Bauschke\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/60/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pham Ky Anh (Vietnam National University)
DTSTART:20220330T060000Z
DTEND:20220330T070000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/61
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/61/">Regularized dynamical systems associated with structured monoton
 e inclusions</a>\nby Pham Ky Anh (Vietnam National University) as part of 
 Variational Analysis and Optimisation Webinar\n\n\nAbstract\nIn this repor
 t\, we consider two dynamical systems associated with additively structure
 d monotone inclusions involving a multi-valued maximally monotone operator
  $\\mathcal{A}$ and a single-valued operator $\\mathcal{B}$ in real Hilber
 t spaces.\n\nWe establish strong convergence of the regularized forward-b
 ackward and regularized forward-backward-forward dynamics to an “optima
 l” solution of the original inclusion under a weak assumption on the
  single-valued operator $\\mathcal{B}$.\n\nConvergence estimates are obtai
 ned if the composite operator $\\mathcal{A} + \\mathcal{B}$ is maximally m
 onotone and strongly (pseudo)monotone. Time-discretization of the correspo
 nding continuous dynamics provides an iterative regularization forward-bac
 kward method or an iterative regularization forward-backward-forward metho
 d with relaxation parameters. Some simple numerical examples are given to
  illustrate the agreement between analytical and numerical results as well
  as the performance of the proposed algorithms.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/61/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sorin-Mihai Grad (ENSTA Paris)
DTSTART:20220406T070000Z
DTEND:20220406T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/62
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/62/">Extending the proximal point algorithm beyond convexity</a>\nby 
 Sorin-Mihai Grad (ENSTA Paris) as part of Variational Analysis and Optimis
 ation Webinar\n\n\nAbstract\nIntroduced in in the 1970's by Martinet for m
 inimizing convex functions and extended shortly afterwards by Rockafellar 
 towards monotone inclusion problems\, the proximal point algorithm turned 
 out to be a viable computational method for solving various classes of (st
 ructured) optimization problems even beyond the convex framework. \n\nIn t
 his talk we discuss some extensions of proximal point type algorithms beyo
 nd convexity. First we propose a relaxed-inertial proximal point type algo
 rithm for solving optimization problems that consist in minimizing strong
 ly quasiconvex functions whose variables lie in finite-dimensional linear s
 ubspaces\, which can be extended to equilibrium problems involving such fu
 nctions. \nThen we briefly discuss another generalized convexity notion fo
 r functions we called prox-convexity for which the proximity operator is s
 ingle-valued and firmly nonexpansive\, and see that the standard proximal 
 point algorithm and Malitsky’s Golden Ratio Algorithm (originally propos
 ed for solving convex mixed variational inequalities) remain convergent wh
 en the involved functions are taken prox-convex\, too.\n\nThe talk contain
 s joint work with Felipe Lara and Raúl Tintaya Marcavillaca (both from Un
 iversity of Tarapacá).\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/62/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andreas Löhne (Friedrich Schiller University Jena)
DTSTART:20220427T070000Z
DTEND:20220427T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/63
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/63/">Approximating convex bodies using multiple objective optimizatio
 n</a>\nby Andreas Löhne (Friedrich Schiller University Jena) as part of V
 ariational Analysis and Optimisation Webinar\n\n\nAbstract\nThe problem of
  computing a polyhedral outer and inner approximation of a convex body can
  be reformulated as the problem of approximately solving a convex multiple
  objective optimization problem. This extends a previous result showing th
 at multiple objective linear programming is equivalent to computing a $V$-
 representation of the projection of an $H$-polyhedron. These results are also disc
 ussed with respect to duality\, solution methods and error bounds.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/63/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Héctor Ramírez (Universidad de Chile)
DTSTART:20220413T010000Z
DTEND:20220413T020000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/64
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/64/">Extensions of the Constant Rank Constraint Qualification to Non
 linear Conic Programming</a>\nby Héctor Ramírez (Universidad de Chi
 le) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract
 \nWe present new constraint qualification conditions for nonlinear conic p
 rogramming that extend some of the constant rank-type conditions from nonl
 inear programming. As an application of these conditions\, we provide a un
 ified global convergence proof of a class of algorithms to stationary poin
 ts without assuming either uniqueness of the Lagrange multiplier or boun
 dedness of the Lagrange multipliers set. This class of algorithms includes
 \, for instance\, general forms of augmented Lagrangian\, sequential quadr
 atic programming\, and interior point methods. We also compare these new c
 onditions with some of the existing ones\, including the nondegeneracy con
 dition\, Robinson's constraint qualification\, and the metric subregularit
 y constraint qualification. Finally\, we propose a more general and geomet
 ric approach for defining a new extension of this condition to the conic c
 ontext. The main advantage of the latter is that we are able to recast the
  strong second-order properties of the constant rank condition in a conic 
 context. In particular\, we obtain a second-order necessary optimality con
 dition that is stronger than the classical one obtained under Robinson’s
  constraint qualification\, in the sense that it holds for every Lagrange 
 multiplier\, even though our condition is independent of Robinson’s cond
 ition.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/64/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lars Grüne (University of Bayreuth)
DTSTART:20220504T070000Z
DTEND:20220504T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/65
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/65/">The turnpike property: a classical feature of optimal control pr
 oblems revisited</a>\nby Lars Grüne (University of Bayreuth) as part of V
 ariational Analysis and Optimisation Webinar\n\n\nAbstract\nThe turnpike p
 roperty describes a particular behavior of optimal control problems that w
 as first observed by Ramsey in the 1920s and by von Neumann in the 1930s.
  Since then it has attracted widespread attention in mathematical economic
 s and control theory alike. In recent years it received renewed interest\
 , on the one hand in optimization with partial differential equations and
  on the other hand in model predictive control (MPC)\, one of the most po
 pular optimization-based control schemes in practice.\n\nIn this talk we wi
 ll first give a general introduction to and a brief history of the turnpi
 ke property\, before we look at it from a systems and control theoretic p
 oint of view. In particular\, we will clarify its relation to dissipativi
 ty\, detectability\, and sensitivity properties of optimal control proble
 ms in both finite and infinite dimensions. In the final part of the talk we w
 ill explain why the turnpike property is important for analyzing the perfo
 rmance of MPC.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/65/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mareike Dressler (University of New South Wales)
DTSTART:20220511T070000Z
DTEND:20220511T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/66
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/66/">Algebraic Perspectives on Signomial Optimization</a>\nby Mareike
  Dressler (University of New South Wales) as part of Variational Analysis 
 and Optimisation Webinar\n\n\nAbstract\nSignomials are obtained by general
 izing polynomials to allow for arbitrary real exponents. This generalizati
 on offers great expressive power\, but has historically sacrificed the org
 anizing principle of “degree” that is central to polynomial optimizati
 on theory. In this talk\, I introduce the concept of signomial rings that 
 allows us to reclaim that principle and explain how this leads to complete co
 nvex relaxation hierarchies of upper and lower bounds for signomial optimi
 zation via sums of arithmetic-geometric exponentials (SAGE) nonnegativity 
 certificates. In the first part of the talk\, I discuss the Positivstellen
 satz underlying the lower bounds. It relies on the concept of conditional 
 SAGE and removes regularity conditions required by earlier works\, such as
  convexity of the feasible set or Archimedeanity of its representing signo
 mial inequalities. Numerical examples are provided to illustrate the perfo
 rmance of the hierarchy on problems in chemical engineering and reaction n
 etworks.\n\nIn the second part\, I provide a language for and basic result
 s in signomial moment theory that are analogous to those in the rich momen
 t-SOS literature for polynomial optimization. That theory is used to turn 
 (hierarchical) inner-approximations of signomial nonnegativity cones into 
 (hierarchical) outer-approximations of the same\, which eventually yields 
 the upper bounds for signomial optimization.\n\nThis talk is based on join
 t work with Riley Murray.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/66/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alberto De Marchi (Universität der Bundeswehr München)
DTSTART:20220525T070000Z
DTEND:20220525T080000Z
DTSTAMP:20260404T111108Z
UID:VAWebinar/67
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/VAWeb
 inar/67/">Constrained Structured Optimization and Augmented Lagrangian Pro
 ximal Methods</a>\nby Alberto De Marchi (Universität der Bundeswehr Münc
 hen) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstrac
 t\nIn this talk we discuss finite-dimensional constrained structured optim
 ization problems and explore methods for their numerical solution. Featuri
 ng a composite objective function and set-membership constraints\, this pr
 oblem class offers a modeling framework for a variety of applications. A g
 eneral and flexible algorithm is proposed that interlaces proximal methods
  and safeguarded augmented Lagrangian schemes. We provide a theoretical ch
 aracterization of the algorithm and its asymptotic properties\, deriving c
 onvergence results for fully nonconvex problems. Adopting a proximal gradi
 ent method with an oracle as a formal tool\, it is demonstrated how the in
 ner subproblems can be solved by off-the-shelf methods for composite optim
 ization\, without introducing slack variables and despite the appearance o
 f set-valued projections. Illustrative examples show the versatility of co
 nstrained structured programs as a modeling tool and highlight benefits of
  the implicit approach developed.\nA preprint paper is available at arXiv:
 2203.05276.\n
LOCATION:https://stable.researchseminars.org/talk/VAWebinar/67/
END:VEVENT
END:VCALENDAR
