BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Alex Townsend (Cornell University)
DTSTART:20200427T200000Z
DTEND:20200427T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/1/">The ultraspherical spectral method</a>\nby Alex Townsend
  (Cornell University) as part of CRM Applied Math Seminar\n\nLecture held 
 in Webinar.\n\nAbstract\nPseudospectral methods\, based on high degree pol
 ynomials\, have spectral accuracy when solving differential equations but 
 typically lead to dense and ill-conditioned matrices. The ultraspherical s
 pectral method is a numerical technique to solve ordinary and partial diff
 erential equations\, leading to almost banded well-conditioned linear syst
 ems while maintaining spectral accuracy. In this talk\, we introduce the u
 ltraspherical spectral method and develop it into a spectral element metho
 d using a modification to a hierarchical Poincaré-Steklov domain decompos
 ition method.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Bury (McGill University)
DTSTART:20200511T200000Z
DTEND:20200511T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/2/">Detecting and distinguishing bifurcations from noisy tim
 e series data</a>\nby Thomas Bury (McGill University) as part of CRM Appli
 ed Math Seminar\n\nLecture held in Webinar.\n\nAbstract\nNumerous systems 
 in the natural sciences have the capacity to undergo an abrupt change in t
 heir dynamical behaviour as a threshold is crossed. Prominent examples inc
 lude the collapse of fisheries\, algal blooms and paleoclimatic transition
 s. Mathematical models reveal such transitions as the result of crossing a
  bifurcation and help to elucidate the underlying mechanisms. However\, th
 e number of unknowns is often large\, making it difficult to infer where t
 he bifurcation occurs in the real system.\nIn this talk\, we will look at 
 methods for detecting bifurcations using data-driven approaches. These met
 hods exploit generic dynamical phenomena that occur prior to bifurcations\
 , such as critical slowing down\, in order to infer their approach. We wil
 l show how the power spectrum of noisy time series data provides informati
 on on the type of bifurcation and validate this approach with an empirica
 l predator-prey experiment that undergoes a Hopf bifurcation. Finally\, w
 e will explore deep learning methods for the detection of bifurcations an
 d compare them with more traditional statistical methods in their abilit
 y to detect bifurcations.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bamdad Hosseini (California Institute of Technology)
DTSTART:20200622T200000Z
DTEND:20200622T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/3/">Data-driven supervised learning: Neural networks and unc
 ertainty quantification</a>\nby Bamdad Hosseini (California Institute of T
 echnology) as part of CRM Applied Math Seminar\n\n\nAbstract\nIn this talk
  I will discuss some ideas at the intersection of machine learning and unc
 ertainty quantification with a particular focus on data-driven methods tha
 t do not require explicit knowledge of processes that generate the data.  
 In the first half of the talk I will discuss supervised learning on Banach
  spaces for emulation of PDE based models and outline a method that combin
 es principal component analysis with neural network regression for mesh-in
 dependent approximation of PDE solutions.  In the second half I will take 
 a different approach to supervised learning viewing it as a conditional sa
 mpling problem.  I will then introduce a measure transport framework based
  on generative adversarial networks (GANs) for data-driven conditional sam
 pling.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Theodore Kolokolnikov (Dalhousie University)
DTSTART:20200629T200000Z
DTEND:20200629T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/4/">Simple agent-based models and their continuum limit</a>\
 nby Theodore Kolokolnikov (Dalhousie University) as part of CRM Applied Ma
 th Seminar\n\n\nAbstract\nWe discuss several very different ABM models and
  their continuum limits.\n\nFirst\, consider the following agent-based mod
 el of coronavirus spread: people move randomly and infection occurs with s
 ome nonzero probability when an infected individual comes within a certain
  "infection radius" of a susceptible individual. The question is how the
  infection radius affects the reproduction number. At low infection rates\
 , this model leads to the classical S-I-R ODE model as its continuum limit
 . However\, higher infection rates lead to a saturation effect\, which we co
 mpute explicitly using basic probability theory. Its continuum limit leads
  to an S-I-R type model with a specific saturation term. We also show t
 hat this modified model gives a much better fit to the real-world data tha
 n the classical SIR model.\n\nNext\, we will look at a very simple stochas
 tic model of bacterial aggregation which leads to a novel fourth-order non
 linear PDE in its continuum limit. This PDE admits soliton-type solutions 
 corresponding to bacterial aggregation patterns\, which we explicitly cons
 truct. \n\nIf time allows\, we will consider a spatial model of wealth exc
 hange which leads to novel integro-differential equations.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephen Becker (University of Colorado Boulder\, USA)
DTSTART:20200921T183000Z
DTEND:20200921T193000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/5/">Algorithmic stability for generalization guarantees in m
 achine learning</a>\nby Stephen Becker (University of Colorado Boulder\, U
 SA) as part of CRM Applied Math Seminar\n\n\nAbstract\nInspired by the pra
 ctical success of deep learning\, the broader math community has been ener
 gized recently to find theoretical justification for these methods. There 
 is a large amount of theory from the computer science community\, dating t
 o the 1980s and earlier\, but usually the quantitative guarantees are too 
 loose to be helpful in practice\, and it is rare that theory can predict s
 omething useful (such as at which iteration to perform early stopping in order
  to prevent over-fitting). \nMany of these theories are less well-known in
 side applied math\, so we briefly review essential results before focusing
  on the notion of algorithmic stability\, popularized in the early 2000s\,
  which is an alternative to the more mainstream VC dimension approach\, an
 d is one avenue that might give sharper theoretical guarantees. Algorithmi
 c stability is appealing to applied mathematicians\, and in particular ana
 lysts\, since a lot of the technical work is similar to analysis used for 
 convergence proofs. \nWe give an overview of the fundamental results of al
 gorithmic stability\, focusing on the stochastic gradient descent (SGD) me
 thod in the context of a nonconvex loss function\, and give the latest sta
 te-of-the-art bounds\, including some of our own work (joint with L. Madde
 n and E. Dall'Anese) which is one of the first results that suggests when 
 to do early-stopping.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yuji Nakatsukasa (Oxford University\, UK)
DTSTART:20200928T200000Z
DTEND:20200928T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/6/">Fast and stable randomized low-rank matrix approximation
 </a>\nby Yuji Nakatsukasa (Oxford University\, UK) as part of CRM Applied 
 Math Seminar\n\n\nAbstract\nRandomized SVD has become an extremely success
 ful approach for efficiently computing a low-rank approximation of matrice
 s. In particular the paper by Halko\, Martinsson\, and Tropp (SIREV 2011) 
 contains extensive analysis\, and has made it a very popular method. The t
 ypical complexity for a rank-r approximation of mxn matrices is O(mn log
  n + (m+n)r^2) for dense matrices. The classical Nystrom method is much faster\
 , but only applicable to positive semidefinite matrices. This work studies
  a generalization of Nystrom's method applicable to general matrices\, and
  shows that (i) it has near-optimal approximation quality comparable to co
 mpeting methods\, (ii) the computational cost is the near-optimal O(mn log
  n + r^3) for dense matrices\, with small hidden constants\, and (iii) crucia
 lly\, it can be implemented in a numerically stable fashion despite the pr
 esence of an ill-conditioned pseudoinverse. Numerical experiments illustra
 te that generalized Nystrom can significantly outperform state-of-the-art 
 methods\, especially when r>>1\, achieving up to a 10-fold speedup. The me
 thod is also well suited to updating and downdating the matrix.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Russell Luke (Universität Göttingen\, Germany)
DTSTART:20201005T200000Z
DTEND:20201005T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/7/">Optimization on Spheres : Models and Proximal Algorithms
  with Computational Performance Comparisons</a>\nby David Russell Luke (Un
 iversität Göttingen\, Germany) as part of CRM Applied Math Seminar\n\n\n
 Abstract\nWe present a unified treatment of the abstract problem of findin
 g the best approximation between a cone and spheres in the image of affine
  transformations. Prominent instances of this problem are phase retrieval 
 and source localization. The common geometry binding these problems permit
 s a generic application of algorithmic ideas and abstract convergence resu
 lts for nonconvex optimization. We organize variational models for this pr
 oblem into three different classes and derive the main algorithmic approac
 hes within these classes (13 in all). We identify the central ideas underl
 ying these methods and provide thorough numerical benchmarks comparing the
 ir performance on synthetic and laboratory data. The software and data of 
 our experiments are all publicly accessible. We also introduce one new alg
 orithm\, a cyclic relaxed Douglas-Rachford algorithm\, which outperforms a
 ll other algorithms by every measure: speed\, stability and accuracy. The 
 analysis of this algorithm remains open.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sheehan Olver (Imperial College London\, UK)
DTSTART:20201019T200000Z
DTEND:20201019T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/8/">Sparse Spectral Methods for Power-Law Interactions</a>\n
 by Sheehan Olver (Imperial College London\, UK) as part of CRM Applied Mat
 h Seminar\n\n\nAbstract\nAttractive-repulsive power law equilibria are an
  important tool in modelling phenomena in collective behaviour: picture a 
 flock of birds which simultaneously group together\, but not too closely (
 i.e.\, they practice social distancing)\, until an equilibrium distributio
 n is reached. In this talk we show that orthogonal polynomials have sparse
  recurrence relationships for power law (Riesz) kernels. This leads to hig
 hly structured and efficiently solvable linear systems for the attractive-
 repulsive case with two such kernels of opposite sign\, giving an effectiv
 e numerical method for computing such equilibrium distributions. This link
 s to and builds on related work in logarithmic potential theory\, singular
  integral equations\, and fractional differential equations.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Johannes Royset (Naval Postgraduate School\, USA)
DTSTART:20201026T200000Z
DTEND:20201026T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/9/">Variational Perspectives on Mathematical Optimization</a
 >\nby Johannes Royset (Naval Postgraduate School\, USA) as part of CRM App
 lied Math Seminar\n\n\nAbstract\nThe mathematical tools for building optim
 ization models and algorithms grow out of linear algebra\, differential ca
 lculus and real analysis. However\, the needs of applications have led to 
 a new area of mathematics that can handle systems of inequalities and func
 tions that are neither smooth nor well-defined in a traditional sense. Var
 iational analysis is the broad term for this area of mathematics. In this 
 presentation\, we show its crucial role in the development of optimization
  models and algorithms in finite dimensions. First\, we examine variationa
 l geometry and definitions of normal and tangent vectors that extend the c
 lassical notions for smooth manifolds. This in turn leads to subdifferenti
 ability\, a wide range of calculus rules and optimality conditions for arb
 itrary functions. Second\, we develop an approximation theory for optimiza
 tion problems that leads to consistent approximations\, error bounds and r
 ates of convergence even in the nonconvex and nonsmooth setting.\n\nDr. Jo
 hannes O. Royset is Professor of Operations Research at the Naval Postgrad
 uate School. Dr. Royset's research focuses on formulating and solving stoc
 hastic and deterministic optimization problems arising in data analytics\,
  sensor management\, and reliability engineering. He was awarded a Nationa
 l Research Council postdoctoral fellowship in 2003\, a Young Investigator 
 Award from the Air Force Office of Scientific Research in 2007\, and the B
 archi Prize as well as the MOR Journal Award from the Military Operations 
 Research Society in 2009. He received the Carl E. and Jessie W. Menneken F
 aculty Award for Excellence in Scientific Research in 2010 and the Goodeve
  Medal from the Operational Research Society in 2019. Dr. Royset was a ple
 nary speaker at the International Conference on Stochastic Programming in 
 2016 and at the SIAM Conference on Uncertainty Quantification in 2018. He 
 has a Doctor of Philosophy degree from the University of California at Ber
 keley (2002). Dr. Royset has been an associate or guest editor of Operatio
 ns Research\, Mathematical Programming\, Journal of Optimization Theory an
 d Applications\, Journal of Convex Analysis\, Set-Valued and Variational A
 nalysis\, Naval Research Logistics\, and Computational Optimization and Ap
 plications.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heather Harrington (Oxford University\, UK)
DTSTART:20201116T210000Z
DTEND:20201116T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/10/">Algebraic Systems Biology</a>\nby Heather Harrington (O
 xford University\, UK) as part of CRM Applied Math Seminar\n\n\nAbstract\n
 Signalling pathways in molecular biology can be modelled by polynomial dyn
 amical systems. I will present models describing two biological systems in
 volved in development and cancer. I will overview approaches to analyse th
 ese models with data using computational algebraic geometry\, differential
  algebra and statistics. Finally\, I will present how topological data ana
 lysis can provide additional information to distinguish wild-type and muta
 nt molecules in one pathway. These case studies showcase how computational
  geometry\, topology and dynamics can provide new insights in the biologic
 al systems\, specifically how changes at the molecular scale (e.g. molecul
 ar mutations) result in kinetic differences that are observed as phenotypi
 c changes (e.g. mutations in fruit fly wings).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Surowiec (Philipps-Universität Marburg\, Germany)
DTSTART:20201123T210000Z
DTEND:20201123T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/11/">A Primal-Dual Algorithm for Risk Minimization in PDE-Co
 nstrained Optimization</a>\nby Thomas Surowiec (Philipps-Universität Marb
 urg\, Germany) as part of CRM Applied Math Seminar\n\n\nAbstract\nWe prese
 nt an algorithm for the solution of risk-averse optimization problems. The
  setting is sufficiently general so as to encompass both finite-dimensiona
 l and PDE-constrained stochastic optimization problems. Due to a lack of s
 moothness of many popular risk measures and non-convexity of the objective
  functions\, both the numerical approximation and the numerical solution
  pose a major computational challenge. The proposed algorithm addresses these issu
 es in part by making use of the favorable dual properties of coherent risk
  measures. The algorithm itself is motivated by the classical method of mu
 ltipliers and exploits recent results on epigraphical regularization of ri
 sk measures. Consequently\, the algorithm requires the solution of a seque
 nce of smooth problems using derivative-based methods. We prove convergenc
 e of the algorithm in the fully continuous setting and conclude with sever
 al numerical examples. The algorithm is seen to outperform a popular bundl
 e-trust method and a direct smoothing-plus-continuation approach.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Damek Davis (Cornell University\, USA)
DTSTART:20201130T210000Z
DTEND:20201130T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/12/">Nonconvex Optimization for Estimation and Learning: Dyn
 amics\, Conditioning\, and Nonsmoothness</a>\nby Damek Davis (Cornell Univ
 ersity\, USA) as part of CRM Applied Math Seminar\n\n\nAbstract\nNonconvex
  optimization algorithms play a major role in solving statistical estimati
 on and learning problems. Indeed\, simple nonconvex heuristics\, such as t
 he stochastic gradient method\, often provide satisfactory solutions in pr
 actice\, despite such problems being NP hard in the worst case. Key exampl
 es include deep neural network training and signal estimation from nonline
 ar measurements. While practical success stories are common\, strong theor
 etical guarantees are rarer. The purpose of this talk is to overview a fe
 w (highly non-exhaustive!) settings where rigorous performance guarantees
  can be established for nonconvex optimization\, focusing on the interplay
  of algorithm dynamics\, problem conditioning\, and nonsmoothness.\n\nBio:
  Damek Davis received his Ph.D. in mathematics from the University of Cali
 fornia\, Los Angeles in 2015. In July 2016 he joined Cornell University's 
 School of Operations Research and Information Engineering as an Assistant 
 Professor. Damek is broadly interested in the mathematics of data science\
 , particularly the interplay of optimization\, signal processing\, statist
 ics\, and machine learning. He is the recipient of several awards\, includ
 ing the INFORMS Optimization Society Young Researchers Prize (2019) and
  a Sloan Research Fellowship in Mathematics (2020).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tim Hoheisel (McGill University)
DTSTART:20210111T210000Z
DTEND:20210111T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/13/">Halting Time is Predictable for Large Models: A Univers
 ality Property and Average-case Analysis</a>\nby Tim Hoheisel (McGill Univ
 ersity) as part of CRM Applied Math Seminar\n\n\nAbstract\nAverage-case an
 alysis computes the complexity of an algorithm averaged over all possible 
 inputs. Compared to worst-case analysis\, it is more representative of the
  typical behavior of an algorithm\, but remains largely unexplored in optim
 ization. One difficulty is that the analysis can depend on the probability
  distribution of the inputs to the model. However\, we show that this is n
 ot the case for a class of large-scale problems trained with first-order m
 ethods including random least squares and one-hidden-layer neural networks
  with random weights.  In fact\, the halting time exhibits a universality 
 property: it is independent of the probability distribution. With this bar
 rier for average-case analysis removed\, we provide the first explicit ave
 rage-case convergence rates showing a tighter complexity not captured by t
 raditional worst-case analysis. Finally\, numerical simulations suggest th
 is universality property holds for a more general class of algorithms and 
 problems.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Friedlander (University of British Columbia)
DTSTART:20210118T210000Z
DTEND:20210118T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/14/">Polar deconvolution of mixed signals</a>\nby Michael P.
  Friedlander (University of British Columbia) as part of CRM Applied Math 
 Seminar\n\n\nAbstract\nThe signal demixing problem seeks to separate the s
 uperposition of multiple signals into its constituent components.  We mode
 l the superposition process as the polar convolution of atomic sets\, whic
 h allows us to use the duality of convex cones to develop an efficient two
 -stage algorithm with sublinear iteration complexity and linear storage.  
 If the signal measurements are random\, the polar deconvolution approach s
 tably recovers low-complexity and mutually-incoherent signals with high pr
 obability and with optimal sample complexity.  Numerical experiments on bo
 th real and synthetic data confirm the theory and efficiency of the propos
 ed approach.  Joint work with Zhenan Fan\, Halyun Jeong\, and Babhru Joshi
  at the University of British Columbia.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan Kutz (University of Washington)
DTSTART:20210125T210000Z
DTEND:20210125T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/15/">Targeted use of deep learning for physics and engineeri
 ng</a>\nby Nathan Kutz (University of Washington) as part of CRM Applied M
 ath Seminar\n\n\nAbstract\nMachine learning and artificial intelligence al
 gorithms are now being used to automate the discovery of governing physica
 l equations and coordinate systems from measurement data alone.  However\,
  positing a universal physical law from data is challenging: (i) An approp
 riate coordinate system must also be advocated and (ii) simultaneously pro
 posing an accompanying discrepancy model to account for the inevitable mis
 match between theory and measurements must be considered.  Using a combina
 tion of deep learning and sparse regression\, specifically the sparse iden
 tification of nonlinear dynamics (SINDy) algorithm\, we show how a robust 
 mathematical infrastructure can be formulated for simultaneously learning 
 physics models and their coordinate systems.  This can be done with limite
 d data and sensors.  We demonstrate the methods on a diverse number of exa
 mples\, showing how data can maximally be exploited for scientific and eng
 ineering applications.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Zhaojun Bai (UC Davis)
DTSTART:20210201T210000Z
DTEND:20210201T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/16/">Rayleigh quotient optimizations and eigenvalue problems
 </a>\nby Zhaojun Bai (UC Davis) as part of CRM Applied Math Seminar\n\n\nA
 bstract\nMany computational science and data analysis techniques lead to o
 ptimizing Rayleigh quotient (RQ) and RQ-type objective functions\, such as
  computing excitation states (energies) of electronic structures\, robust 
 classification to handle uncertainty and constrained data clustering to in
 corporate domain knowledge.  We will discuss emerging RQ optimization prob
 lems\, variational principles\, and reformulations to algebraic linear and
  nonlinear eigenvalue problems.  We will show how to exploit underlying pr
 operties of these eigenvalue problems for designing fast solvers\, and ill
 ustrate the efficacy of these solvers in applications.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marwa El Halabi (MILA)
DTSTART:20210208T210000Z
DTEND:20210208T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/17/">Optimal approximation for unconstrained non-submodular 
 minimization</a>\nby Marwa El Halabi (MILA) as part of CRM Applied Math Se
 minar\n\n\nAbstract\nSubmodular function minimization is well studied\, an
 d existing algorithms solve it exactly or up to arbitrary accuracy.  Howev
 er\, in many applications\, such as structured sparse learning or batch Ba
 yesian optimization\, the objective function is not exactly submodular\, b
 ut close.  In this case\, no theoretical guarantees exist.  Indeed\, submo
 dular minimization algorithms rely on intricate connections between submod
 ularity and convexity.  We show how these relations can be extended to obt
 ain approximation guarantees for minimizing non-submodular functions\, cha
 racterized by how close the function is to submodular.  We also extend thi
 s result to noisy function evaluations.  Our approximation results are the
  first for minimizing non-submodular functions\, and are optimal\, as esta
 blished by our matching lower bound.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrick Combettes (NC State)
DTSTART:20210215T210000Z
DTEND:20210215T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/18/">Perspective Functions and Applications</a>\nby Patrick 
 Combettes (NC State) as part of CRM Applied Math Seminar\n\n\nAbstract\nIn
  this talk I will discuss mathematical and computational issues pertaining
  to perspective functions\, a powerful concept that permits one to extend a co
 nvex function to a jointly convex one in terms of an additional scale vari
 able. Applications in inverse problems and statistics will be presented.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heinz Bauschke (UBC)
DTSTART:20210222T210000Z
DTEND:20210222T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/19/">Compositions of projection mappings: fixed point sets a
 nd difference vectors</a>\nby Heinz Bauschke (UBC) as part of CRM Applied 
 Math Seminar\n\n\nAbstract\nProjection operators and associated projection
  algorithms are fundamental building blocks in fixed point theory and opti
 mization.  In this talk\, I will survey recent results on the displacement
  mapping of the right-shift operator and sketch a new application deepenin
 g our understanding of the geometry of the fixed point set of the composit
 ion of projection operators in Hilbert space.  Based on joint works with S
 alha Alwadani\, Julian Revalski\, and Shawn Wang.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Paul E. Hand (Northeastern University)
DTSTART:20210308T210000Z
DTEND:20210308T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/20/">Signal Recovery with Generative Priors</a>\nby Paul E. 
 Hand (Northeastern University) as part of CRM Applied Math Seminar\n\n\nAb
 stract\nRecovering images from very few measurements is an important task 
 in imaging problems.  Doing so requires assuming a model of what makes som
 e images natural.  Such a model is called an image prior.  Classical prior
 s such as sparsity have led to the speedup of Magnetic Resonance Imaging i
 n certain cases.  With the recent developments in machine learning\, neura
 l networks have been shown to provide efficient and effective priors for i
 nverse problems arising in imaging.  In this talk\, we will discuss the us
 e of neural network generative models for inverse problems in imaging.  We
  will present a rigorous recovery guarantee at optimal sample complexity f
 or compressed sensing and other inverse problems under a suitable random m
 odel.  We will see that generative models enable an efficient algorithm fo
 r phase retrieval from generic measurements with optimal sample complexity
 .  In contrast\, no efficient algorithm is known for this problem in the c
 ase of sparsity priors.  We will discuss strengths\, weaknesses\, and futu
 re opportunities of neural networks and generative models as image priors.
   These works are in collaboration with Vladislav Voroninski\, Reinhard He
 ckel\, Ali Ahmed\, Wen Huang\, Oscar Leong\, Jorio Cocola\, Muhammad Asim\
 , and Max Daniels.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Clarice Poon (University of Bath)
DTSTART:20210315T200000Z
DTEND:20210315T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/21/">Off-the-grid sparse estimation</a>\nby Clarice Poon (Un
 iversity of Bath) as part of CRM Applied Math Seminar\n\n\nAbstract\nThe b
 ehaviour of sparse regularization using the Lasso method is well understoo
 d when dealing with discretized linear models.  However\, the behaviour o
 f Lasso is poor when dealing with models with very large parameter spaces 
 and in recent years\, there has been much interest in the use of "off-the
 -grid" approaches\, using a continuous parameter space in conjunction with
  a convex optimization problem over measures.  In my talk\, I will presen
 t some recent results which explain the behaviour of this method in arbitrar
 y dimensions.  Some highlights include the use of the Fisher metric to st
 udy the performance of Blasso over general domains and the application of 
 this for quantitative MRI.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Olga Mula (Paris Dauphine)
DTSTART:20210322T200000Z
DTEND:20210322T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/22/">Depth-Adaptive Neural Networks from the Optimal Control
  viewpoint</a>\nby Olga Mula (Paris Dauphine) as part of CRM Applied Math 
 Seminar\n\n\nAbstract\nIn recent years\, deep learning has been connected 
 with optimal control as a way to define a notion of a continuous underlyin
 g learning problem.  In this view\, neural networks can be interpreted as
  a discretization of a parametric Ordinary Differential Equation which\, i
 n the limit\, defines a continuous-depth neural network.  The learning 
 task then consists in finding the best ODE parameters for the problem unde
 r consideration\, and their number increases with the accuracy of the tim
 e discretization.  Although important steps have been taken to realize the
  advantages of such continuous formulations\, most current learning techn
 iques fix a discretization (i.e. the number of layers is fixed).  In this
  work\, we propose an iterative adaptive algorithm where we progressively
  refine the time discretization (i.e. we increase the number of layers).  P
 rovided that certain tolerances are met across the iterations\, we prove 
 that the strategy converges to the underlying continuous problem.  One sal
 ient advantage of such a shallow-to-deep approach is that it helps to ben
 efit in practice from the higher approximation properties of deep networks
  by mitigating over-parametrization issues.  The performance of the approa
 ch is illustrated in several numerical examples.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sasha Aravkin (University of Washington)
DTSTART:20210412T200000Z
DTEND:20210412T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/23/">A tale of two models for Covid-19 scenarios</a>\nby Sas
 ha Aravkin (University of Washington) as part of CRM Applied Math Seminar\
 n\n\nAbstract\nThe Covid-19 pandemic is a defining global health event in
  the 21st century.  Forecasting the evolution of the pandemic is a key pr
 oblem for anyone trying to plan ahead.  Since March 2020\, IHME has been generat
 ing Covid-19 scenarios\, first for US states and then for all Admin-1 loca
 tions around the world.  These scenarios have been intensively used\; resu
 lts are uploaded weekly to an interactive website: https://covid19.healthd
 ata.org/ \nIn this talk\, we describe two core mathematical models underly
 ing the IHME scenarios.  The first model\, dubbed CurveFit\, used strong a
 ssumptions to get useful predictions using extremely limited data\, and wa
 s used during March and April of 2020.  The second model\, a data-driven S
 EIIR model\, was put in play in June 2020\, and provides a flexible way to
  incorporate relationships with key drivers such as mobility\, mask use\, 
 and pneumonia seasonality.  We describe the mathematics underlying both mo
 dels\, and discuss the interplay between stability\, scalability\, and com
 plexity in mathematical modeling.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Kuhn (EPFL)
DTSTART:20210913T183000Z
DTEND:20210913T193000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/26/">Mathematical Foundations of Robust and Distributionally
  Robust Optimization</a>\nby Daniel Kuhn (EPFL) as part of CRM Applied Mat
 h Seminar\n\n\nAbstract\nRobust and distributionally robust optimization a
 re modeling paradigms for decision-making under uncertainty where the unce
 rtain parameters are only known to reside in an uncertainty set or are gov
 erned by any probability distribution from within an ambiguity set\, respe
 ctively\, and a decision is sought that minimizes a cost function under th
 e most adverse outcome of the uncertainty.  In this paper\, we develop a r
 igorous and general theory of robust and distributionally robust nonlinear
  optimization using the language of convex analysis.  Our framework is bas
 ed on a generalized 'primal-worst-equals-dual-best' principle that establi
 shes strong duality between a semi-infinite primal worst and a non-convex 
 dual best formulation\, both of which admit finite convex reformulations. 
  This principle offers an alternative formulation for robust optimization 
 problems that may be computationally advantageous\, and it obviates the ne
 ed to mobilize the machinery of abstract semi-infinite duality theory to p
 rove strong duality in distributionally robust optimization.  We illustrat
 e the modeling power of our approach through convex reformulations for dis
 tributionally robust optimization problems whose ambiguity sets are define
 d through general optimal transport distances\, which generalize earlier r
 esults for Wasserstein ambiguity sets.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Diane Guignard (University of Ottawa)
DTSTART:20210920T200000Z
DTEND:20210920T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/27
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/27/">Nonlinear reduced models for parametric PDEs</a>\nby Di
 ane Guignard (University of Ottawa) as part of CRM Applied Math Seminar\n\
 n\nAbstract\nWe consider model reduction methods for parametric partial di
 fferential equations.  The usual approach to model reduction is to constru
 ct a low dimensional linear space which accurately approximates the parame
 ter-to-solution map\, and use it to build an efficient forward solver.  Ho
 wever\, the construction of a suitable linear space is not always feasible
  numerically.  It is well-known that nonlinear methods may provide improve
 d efficiency.  In a so-called library approximation\, the idea is to repla
 ce the linear space by a collection of linear (or affine) spaces of smalle
 r dimension.  In this talk\, we first review standard linear methods for m
 odel reduction.  Then\, we present a strategy which can be used to generat
 e a nonlinear reduced model\, namely a library based on piecewise (Taylor)
  polynomials.  We provide an analysis of the method\, in particular the de
 rivation of an upper bound on the size of the library\, and illustrate its
  performance through several numerical experiments.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gitta Kutyniok (LMU Munich)
DTSTART:20210927T183000Z
DTEND:20210927T193000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/28/">The Modern Mathematics of Deep Learning</a>\nby Gitta K
 utyniok (LMU Munich) as part of CRM Applied Math Seminar\n\n\nAbstract\nDe
 spite the outstanding success of deep neural networks in real-world applic
 ations\, ranging from science to public life\, most of the related researc
 h is empirically driven and a comprehensive mathematical foundation is sti
 ll missing.  At the same time\, these methods have already shown their imp
 ressive potential in mathematical research areas such as imaging sciences\
 , inverse problems\, or numerical analysis of partial differential equatio
 ns\, sometimes by far outperforming classical mathematical approaches for 
 particular problem classes.  The goal of this lecture is to first provide 
 an introduction into this new vibrant research area.  We will then survey 
 recent advances in two directions\, namely the development of a mathematic
 al foundation of deep learning and the introduction of novel deep learning
 -based approaches to mathematical problem settings.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jason Bramburger (George Mason University)
DTSTART:20211004T200000Z
DTEND:20211004T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/29
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/29/">Deep learning of conjugate mappings</a>\nby Jason Bramb
 urger (George Mason University) as part of CRM Applied Math Seminar\n\n\nA
 bstract\nDespite many of the most common chaotic dynamical systems being c
 ontinuous in time\, it is through discrete time mappings that much of the 
 understanding of chaos is formed. Henri Poincaré first made this connecti
 on by tracking consecutive intersections of the continuous flow with a lower-
 dimensional\, transverse subspace. The mapping that iterates the dynamics 
 through consecutive intersections of the flow with the subspace is now ref
 erred to as a Poincaré map\, and it is the primary method available for i
 nterpreting and classifying chaotic dynamics. Unfortunately\, in all but t
 he simplest systems\, an explicit form for such a mapping remains outstand
 ing. In this talk I present a method of discovering explicit Poincaré map
 pings using deep learning to construct an invertible coordinate transforma
 tion into a conjugate representation where the dynamics are governed by a 
 relatively simple chaotic mapping. The invertible change of variable is ba
 sed on an autoencoder\, which allows for dimensionality reduction\, and ha
 s the advantage of classifying chaotic systems using the equivalence relat
 ion of topological conjugacies. We illustrate with low-dimensional systems
  such as the Rössler and Lorenz systems\, while also demonstrating the ut
 ility of the method on the infinite-dimensional Kuramoto--Sivashinsky equa
 tion.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Terry Rockafellar (University of Washington)
DTSTART:20211018T200000Z
DTEND:20211018T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/30
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/30/">Hidden convexity in nonconvex optimization</a>\nby Terr
 y Rockafellar (University of Washington) as part of CRM Applied Math Semin
 ar\n\n\nAbstract\nIn nonconvex optimization\, not only the objective but e
 ven the feasible set may lack convexity.  It may seem therefore that the c
 oncepts and methodology of convex optimization can no longer have a fundam
 ental role\, but this is actually wrong.  Standard sufficient conditions f
 or local optimality in nonlinear programming and its extensions turn out t
 o correspond to characterizing optimality in terms of a local convex-conca
 ve-type saddle point of an augmented Lagrangian function.  Algorithms that
  operate effectively in both primal and dual elements are thereby reveale
 d as working just as they would in the convex case.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aaron Berk (University of British Columbia)
DTSTART:20211025T200000Z
DTEND:20211025T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/31
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/31/">On LASSO parameter sensitivity</a>\nby Aaron Berk (Univ
 ersity of British Columbia) as part of CRM Applied Math Seminar\n\n\nAbstr
 act\nCompressed sensing theory explains why LASSO programs recover structu
 red high-dimensional signals with minimax order-optimal error.  Yet\, the 
 optimal choice of the program's governing parameter is often unknown in pr
 actice.  It is still unclear how variation of the governing parameter impa
 cts recovery error in compressed sensing\, which is otherwise provably sta
 ble and robust.  We provide an overview of parameter sensitivity in LASSO 
 programs in the setting of proximal denoising\; and of compressed sensing 
 with subgaussian measurement matrices and gaussian noise.  We demonstrate 
 how two popular ell-1 minimization programs exhibit sensitivity with respe
 ct to their parameter choice and illustrate the theory with numerical simu
 lations.  For example\, a 1% error in the estimate of a parameter can caus
 e the error to increase by a factor of 10^9\, while choosing a different L
 ASSO program avoids such sensitivity issues.  We hope that revealing param
 eter sensitivity regimes of LASSO programs helps to inform a practitioner'
 s choice.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frank E. Curtis (Lehigh University)
DTSTART:20211101T200000Z
DTEND:20211101T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/32
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/32/">Algorithms for Deterministically Constrained Stochast
 ic Optimization</a>\nby Frank E. Curtis (Lehigh University) as part of CR
 M Applied Math Seminar\n\n\nAbstract\nI will present the recent work by my
  research group on the design\, analysis\, and implementation of algorithm
 s for solving nonlinear optimization problems that involve a stochastic 
 objective function and deterministic constraints.  The talk will focus o
 n our sequential quadratic optimization (commonly known as SQP) methods fo
 r cases when the constraints are defined by nonlinear systems of equatio
 ns\, which arise in various applications including optimal control\, PDE-c
 onstrained optimization\, and network optimization problems.  One might 
 also consider our techniques for training machine learning (e.g.\, deep le
 arning) models with constraints.  I will also discuss the various extens
 ions that my group is exploring along with other related open questions.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rainer Groh (Bristol Composites Institute (ACCIS))
DTSTART:20211108T210000Z
DTEND:20211108T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/33
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/33/">Experimental continuation of nonlinear load-bearing str
 uctures</a>\nby Rainer Groh (Bristol Composites Institute (ACCIS)) as part
  of CRM Applied Math Seminar\n\n\nAbstract\nThe drive for lightweighting i
 n structural engineering leads to ever thinner structures that deform in n
 onlinear ways and that are prone to sudden instabilities.  Simultaneously\
 , a renewed interest in structural instability revolves around purposefull
 y embedding instabilities in structures to add functionality beyond struct
 ural load-carrying capability (e.g.  dynamic shape adaptivity).  To date\,
  the design of nonlinear structures is guided almost entirely by computati
 onal modelling\, in particular the use of numerical continuation tools.  A
 dvances in experimental testing of nonlinear structures\, on the other han
 d\, are significantly lagging behind numerical methods.  While numerical c
 ontinuation principles such as path-following\, calculation of bifurcation
 s\, branch-switching\, and bifurcation tracking are now well established\,
  nonlinear experimental methods of structures have not advanced beyond sim
 ple displacement and force control.  This means that the nonlinear respons
 e of even simple nonlinear structures cannot be fully characterised\, as e
 stablished techniques induce dynamic snaps at limit points and subcritical
  bifurcations.  There is thus huge potential for devising novel and non-de
 structive ways of testing nonlinear structures by applying concepts from t
 he field of continuation to experimental mechanics.  At the University of 
 Bristol\, we have developed a testing method based on adding control point
 s with auxiliary sensors and actuators to a structure to: (i) stabilise ot
 herwise unstable equilibria\; (ii) control the shape of the structure to t
 ransition between different stable equilibria\; and (iii) compute an exper
 imental "tangential" stiffness matrix (the Jacobian)\, which u
 ltimately allows Newton's root-finding algorithm to be implemented experim
 entally.  With this approach all the features of the numerical techniques 
 mentioned above can (theoretically) be replicated.  The testing method has
  been applied to laboratory scale test specimens such as the snap-through 
 of a shallow arch\, and this seminar will provide an overview of the mathe
 matical background to experimental continuation\, its application\, and ou
 tlook to future experiments.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wolfgang Dahmen (University of South Carolina)
DTSTART:20211115T210000Z
DTEND:20211115T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/34
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/34/">Some Thoughts on Physics Informed Neural Networks</a>\n
 by Wolfgang Dahmen (University of South Carolina) as part of CRM Applied M
 ath Seminar\n\n\nAbstract\nEmploying Deep Learning concepts to "learn" phy
 sical laws has recently been attracting significant attention. In partic
 ular\, so-called "Physics Informed Neural Networks" (PINN) refers to a p
 aradigm where the training of model surrogates is based on empirical risks
  that require only point-wise evaluation of residuals. This avoids the exp
 ensive computation of a sufficiently large number of training data\, typic
 ally given in terms of high fidelity approximations of model states. \n\nT
 he core issue addressed in this talk is the prediction capability of such 
 methods for models given in terms of parameter-dependent families of parti
 al differential equations. Related specific questions concern\, for instan
 ce\, the choice of "variationally correct" training risks\, that convey ce
 rtifiable information about the achieved accuracy in problem relevant metr
 ics\, the role of a priori versus a posteriori error bounds\, connections 
 with Generative Adversarial Networks\, as well as related implications on 
 training strategies and network adaptation.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Soizic Terrien (CNRS - Université du Mans)
DTSTART:20211122T210000Z
DTEND:20211122T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/35
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/35/">Equidistant and non equidistant pulsing patterns in an 
 excitable microlaser with delayed feedback</a>\nby Soizic Terrien (CNRS - 
 Université du Mans) as part of CRM Applied Math Seminar\n\n\nAbstract\nEx
 citability is observed in many natural and artificial systems\, from spiki
 ng neurons to cardiac cells and semiconductor lasers.  It corresponds to t
 he all-or-none pulse-shaped response of a system to an external perturbati
 on\, depending on whether or not the perturbation amplitude exceeds the so-ca
 lled excitable threshold.  When subject to delayed feedback\, an excitable
  system can regenerate its own excitable response when it is reinjected af
 ter a delay time τ.  As the process repeats\, this results in sustained p
 ulsing regimes\, which can be of interest for many applications\, from dat
 a transmission to all-optical signal processing or neuromorphic photonic n
 etworks.  \n\nHere we investigate the short-term and long-term dynamics of
  an excitable microlaser subject to delayed optical feedback.  This is don
 e both experimentally and numerically through a bifurcation analysis of a 
 suitable model written in the form of three delay-differential equations (
 DDEs) with one fast and two slow variables.  We show that almost any pulse
  sequence can be excited and regenerated by the system over short periods 
 of time.  In the long-term\, on the other hand\, the system settles down t
 o one of the coexisting\, slowly-attracting periodic orbits\, which corres
 pond to different numbers of pulses in the feedback cavity.  We show that\
 , depending on the internal timescales of the excitable system\, these pul
 ses appear to be either equidistant (i.e.  with equalized pulse intervals)
  or non-equidistant in the feedback cavity.  A bifurcation analysis demons
 trates that non-equidistant pulsing patterns originate in resonance phenom
 ena.  The mechanism for the emergence of very large locking regions in the
  parameter space is investigated.  \n\nJoint work with Bernd Krauskopf (Un
 iversity of Auckland)\, Neil Broderick (University of Auckland) and Sylvai
 n Barbay (C2N\, CNRS / Univ.  Paris Saclay)\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrus Giraldo (The University of Auckland)
DTSTART:20211129T210000Z
DTEND:20211129T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/36
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/36/">Degenerate singular cycles and chaotic switching in the
  two-site open Bose--Hubbard model</a>\nby Andrus Giraldo (The University 
 of Auckland) as part of CRM Applied Math Seminar\n\n\nAbstract\nThe two-si
 te open Bose--Hubbard dimer model is a celebrated fundamental quantum opti
 cal model that accounts for the dynamics of bosons at two lossy interactin
 g sites. Recently\, two coupled\, driven\, and lossy photonic crystal nano
 cavities ---which are optical devices that operate with only a few hundred
  photons due to their extremely small size--- have been shown to realise t
 his model experimentally. Thus\, there is much interest in understanding t
 he different behaviours that such a model exhibits for theoretical and pract
 ical purposes.\nThis talk will show the different dynamics in the semiclas
 sical approximation of this quantum optical system by presenting a compreh
 ensive bifurcation analysis. We characterised different transitions of ch
 aotic attractors in the parameter plane by numerically computing tangency
  bifurcations between stable and unstable manifolds of saddle equilibria
  and periodic orbits. By doing so\, we identify codimension-two degenerat
 e singular cycles\, and their generalisations\, as responsible for the or
 ganisation of different tangency and heteroclinic bifurcations between sa
 ddle equilibria and periodic orbits in the parameter plane. Thus\, we pro
 vide a roadmap for observable chaotic dynamics in the semiclassical appro
 ximation of the two-
 site Bose--Hubbard dimer model\, which connects novel results in bifurcati
 on theory with novel applications through numerical continuation technique
 s.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Haesun Park (Georgia Institute of Technology)
DTSTART:20211206T210000Z
DTEND:20211206T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/37
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/37/">Multi-view Unsupervised and Semi-Supervised Clustering 
 based on Content and Connection Information</a>\nby Haesun Park (Georgia I
 nstitute of Technology) as part of CRM Applied Math Seminar\n\n\nAbstract\
 nConstrained Low Rank Approximation (CLRA) is a powerful framework for a v
 ariety of important tasks in large scale data analytics such as topic disc
 overy in text data and community detection in social network data. In this
  talk\, a hybrid method called Joint Nonnegative Matrix Factorization (Joi
 ntNMF) is introduced for latent information discovery from multi-view data
  sets that contain both text content and connection structure information.
  The method jointly optimizes an integrated objective function\, which is 
 a combination of the Nonnegative Matrix Factorization (NMF) objective func
 tion for handling text content/attribute information and the Symmetric NMF
  (SymNMF) objective function for handling relation/connection information.
  An effective algorithm for the joint NMF objective function is proposed u
 tilizing the block coordinate descent (BCD) method.\nThe proposed hybrid m
 ethod simultaneously discovers content associations and related latent con
 nections without any need for post-processing or additional clustering. In
  addition\, known partial label information can be incorporated into the
  JointNMF framework for semi-supervised clustering. The experimental results f
 rom several real-life application problems illustrate the advantages of th
 e proposed approaches.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/37/
END:VEVENT
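
To make the JointNMF formulation above concrete, here is a minimal numerical sketch, assuming a nonnegative content matrix X (features by items) and a symmetric nonnegative connection matrix S (items by items). It is an illustration only, not the speaker's implementation: it uses simple projected-gradient block updates in place of the full block coordinate descent scheme, and the weight alpha, sizes, and step size are arbitrary choices.

    import numpy as np

    def joint_nmf(X, S, k, alpha=1.0, iters=500, step=1e-3, seed=0):
        # Sketch objective: ||X - W H||_F^2 + alpha * ||S - H^T H||_F^2 with W, H >= 0.
        rng = np.random.default_rng(seed)
        m, n = X.shape            # X: features x items (content), S: items x items (connections)
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            gW = 2.0 * (W @ H - X) @ H.T                    # gradient of the content term in W
            W = np.maximum(W - step * gW, 0.0)              # projected-gradient block update
            gH = 2.0 * W.T @ (W @ H - X) + 4.0 * alpha * H @ (H.T @ H - S)
            H = np.maximum(H - step * gH, 0.0)
        return W, H               # columns of H give soft memberships of the items

    # Tiny usage example on random nonnegative data.
    rng = np.random.default_rng(1)
    X = rng.random((30, 20))
    S = rng.random((20, 20)); S = (S + S.T) / 2
    W, H = joint_nmf(X, S, k=4)
    print("item cluster labels:", H.argmax(axis=0))
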
BEGIN:VEVENT
SUMMARY:Alex Bihlo (Memorial University)
DTSTART:20220110T210000Z
DTEND:20220110T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/38
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/38/">Deep neural networks for solving differential equations
  on general orientable surfaces</a>\nby Alex Bihlo (Memorial University) a
 s part of CRM Applied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matus Benko (Johannes Kepler University Linz)
DTSTART:20220117T210000Z
DTEND:20220117T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/39
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/39/">Variational Analysis: Basics\, Calculus\, and Semismoot
 hness*</a>\nby Matus Benko (Johannes Kepler University Linz) as part of CR
 M Applied Math Seminar\n\n\nAbstract\nThe purpose of this talk is to offer
  a brief introduction to set-valued and variational analysis and to try
  to motivate the study of this area. To this end\, we first discuss some ba
 sic notions and ideas. Namely\, we try to explain why set-valued mappings 
 should be analyzed\, what properties of such mappings seem to be useful a
 nd are typically studied\, as well as how one can analyze them\, i.e.\, wh
 at are the available tools. It should not be very surprising that\, just l
 ike in the standard analysis of functions\, derivatives play a crucial rol
 e. Thus\, we clarify how to differentiate set-valued mappings using the ma
 chinery of variational geometry (tangent and normal cones). Then we discus
 s in more depth the topic of calculus rules that enable one to properly ma
 nipulate generalized derivatives and apply them to practically releva
 nt problems. We conclude with some remarks about the new property of semis
 moothness* for set-valued mappings and the related Newton method for solvi
 ng generalized equations (inclusions).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Rolnick (McGill University)
DTSTART:20220124T210000Z
DTEND:20220124T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/40
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/40/">Expressivity and learnability in deep neural networks</
 a>\nby David Rolnick (McGill University) as part of CRM Applied Math Semin
 ar\n\n\nAbstract\nIn this talk\, we show that there is a large gap between
  the maximum complexity of the functions that a neural network can express
  and the expected complexity of the functions that it learns in practice. 
  Deep ReLU networks are piecewise linear functions\, and the number of dis
 tinct linear regions is a natural measure of their expressivity.  It is we
 ll-known that the maximum number of linear regions grows exponentially wit
 h the depth of the network\, and this has often been used to explain the s
 uccess of deeper networks.  We show that the expected number of linear reg
 ions in fact grows polynomially in the size of the network\, far below the
  exponential upper bound and independent of the depth of the network.  Thi
 s statement holds true both at initialization and after training\, under n
 atural assumptions for gradient-based learning algorithms.  We also show t
 hat the linear regions of a ReLU network reveal information about the netw
 ork's parameters.  In particular\, it is possible to reverse-engineer the 
 weights and architecture of an unknown deep ReLU network merely by queryin
 g it.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/40/
END:VEVENT
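
The idea of counting linear regions of a ReLU network can be illustrated with a small experiment: sample a random network and count the distinct ReLU activation patterns met along a one-dimensional line of inputs, each pattern corresponding to one linear region crossed. The sketch below is illustrative only (it is not the paper's code), and the widths, depth, and sampling density are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    widths = [2, 16, 16, 1]            # input dim 2, two hidden ReLU layers, scalar output (arbitrary)
    Ws = [rng.standard_normal((widths[i + 1], widths[i])) for i in range(len(widths) - 1)]
    bs = [rng.standard_normal(w) for w in widths[1:]]

    def activation_pattern(x):
        # Concatenated on/off pattern of every hidden ReLU unit for input x.
        pattern, h = [], x
        for W, b in zip(Ws[:-1], bs[:-1]):
            pre = W @ h + b
            pattern.append(pre > 0)
            h = np.maximum(pre, 0.0)
        return tuple(np.concatenate(pattern))

    # Sweep the line x(t) = (t, 0.3); each distinct pattern is one linear region crossed.
    ts = np.linspace(-5, 5, 20001)
    patterns = {activation_pattern(np.array([t, 0.3])) for t in ts}
    print("linear regions met along the line:", len(patterns))
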
BEGIN:VEVENT
SUMMARY:Sebastien Le Digabel (Polytechnique Montreal)
DTSTART:20220214T210000Z
DTEND:20220214T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/41
DESCRIPTION:by Sebastien Le Digabel (Polytechnique Montreal) as part of CR
 M Applied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tom Trogdon (U of Washington)
DTSTART:20220221T210000Z
DTEND:20220221T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/42
DESCRIPTION:by Tom Trogdon (U of Washington) as part of CRM Applied Math S
 eminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guy Wolf (UdeM)
DTSTART:20220307T210000Z
DTEND:20220307T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/43
DESCRIPTION:by Guy Wolf (UdeM) as part of CRM Applied Math Seminar\n\nAbst
 ract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Margarida Carvalho (UdeM)
DTSTART:20220321T200000Z
DTEND:20220321T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/44
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/44/">Mathematical Programming Games: pushing the limits of e
 quilibria computation</a>\nby Margarida Carvalho (UdeM) as part of CRM App
 lied Math Seminar\n\n\nAbstract\nMathematical programming games (MPGs) enc
 ompass flexible problem modeling when decision makers interact.  Through t
 hem\, we can capture each player's goal in the game as a parametric o
 ptimization problem.  In this talk\, we will first provide examples of suc
 h games and their MPG formulation.  Then\, we will focus on MPGs where dec
 isions can take integer values\, the so-called integer programming games. 
  We will also discuss Nash games among Stackelberg leaders.  The theoretic
 al intractability of these games will be presented as well as algorithmic 
 schemes to solve them in practice.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Soledad Villar (Johns Hopkins)
DTSTART:20220404T200000Z
DTEND:20220404T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/45
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/45/">Units-equivariant machine learning</a>\nby Soledad Vill
 ar (Johns Hopkins) as part of CRM Applied Math Seminar\n\n\nAbstract\nWe c
 ombine ideas from dimensional analysis and from equivariant machine learni
 ng to provide an approach for units-equivariant machine learning. Units eq
 uivariance is the exact symmetry that follows from the requirement that re
 lationships among measured quantities must obey self-consistent dimensiona
 l scalings. Our approach is to construct a dimensionless version of the le
 arning task\, using classic results from dimensional analysis\, and then p
 erform the learning task in the dimensionless space. This approach can be 
 used to impose units equivariance on almost all contemporary machine-learn
 ing methods\, including those that are equivariant to rotations and other 
 groups. Units equivariance is expected to be particularly valuable in the 
 contexts of symbolic regression and emulation. We discuss the in-sample an
 d out-of-sample prediction accuracy gains one can obtain if exact units eq
 uivariance is imposed\; the symmetry is extremely powerful in some context
 s. We illustrate these methods with simple numerical examples involving dy
 namical systems in physics and ecology.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/45/
END:VEVENT
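
The step of constructing a dimensionless version of the learning task can be illustrated on a toy problem. The sketch below is not the authors' construction; it assumes a pendulum-period example in which the only relevant dimensionless combination of the target T and the inputs L and g is T/sqrt(L/g), and it fits the model in that dimensionless space before mapping predictions back.

    import numpy as np

    rng = np.random.default_rng(1)
    L = rng.uniform(0.1, 2.0, 500)       # pendulum lengths in metres
    g = rng.uniform(1.0, 25.0, 500)      # gravitational accelerations in m/s^2
    T = 2 * np.pi * np.sqrt(L / g) * (1 + 0.01 * rng.standard_normal(500))  # noisy periods

    scale = np.sqrt(L / g)               # characteristic time built from the inputs
    pi_target = T / scale                # dimensionless target
    c = pi_target.mean()                 # the "model" in dimensionless space is just a constant fit

    T_pred = c * scale                   # map the dimensionless prediction back to seconds
    print("fitted constant:", c, " (true value 2*pi =", 2 * np.pi, ")")
    print("mean relative error:", np.mean(np.abs(T_pred - T) / T))
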
BEGIN:VEVENT
SUMMARY:Giang Tran (University of Waterloo)
DTSTART:20220411T200000Z
DTEND:20220411T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/46
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/46/">Sparse Random Feature Models: Theoretical Guarantees an
 d Applications</a>\nby Giang Tran (University of Waterloo) as part of CRM 
 Applied Math Seminar\n\n\nAbstract\nRandom feature methods have been succe
 ssful in various machine learning tasks\, are easy to compute\, and come w
 ith theoretical accuracy bounds.  They serve as an alternative approach to
  standard neural networks since they can represent similar function spaces
  without a costly training phase.  However\, for accuracy\, random feature
  methods require more measurements than trainable parameters\, limiting th
 eir use for data-scarce applications or problems in scientific machine lea
 rning.  In this talk\, we will introduce the sparse random feature expansi
 on to obtain parsimonious random feature models.  Specifically\, we levera
 ge ideas from compressive sensing to generate random feature expansions wi
 th theoretical guarantees even in the data-scarce setting.  We also presen
 t a random feature model for approximating high-dimensional sparse additiv
 e functions and a sparse random mode decomposition to extract intrinsic mo
 des from challenging time-series data.  Comparisons show that our proposed
  approaches perform better than or are comparable to other state-of-the-art
  or popular methods.  Applications of our methods on identifying important var
 iables in high-dimensional settings as well as on decomposing music pieces
  and visualizing black-hole mergers will be addressed.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/46/
END:VEVENT
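
A minimal sketch of the sparse-random-feature idea, assuming random Fourier features and a plain l1-penalized least-squares solve (ISTA) as the compressive-sensing step; it is illustrative only, not the speaker's method, and the data, feature count, and regularization weight are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(2)
    n, d, N = 200, 5, 400                                 # samples, input dim, random features
    X = rng.uniform(-1, 1, (n, d))
    y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2          # target depends on only two variables

    omega = rng.standard_normal((N, d))                   # random frequencies
    b = rng.uniform(0, 2 * np.pi, N)                      # random phases
    A = np.cos(X @ omega.T + b) / np.sqrt(N)              # random feature matrix (n x N)

    # ISTA for min_c 0.5 * ||A c - y||^2 + lam * ||c||_1 (the sparsifying step).
    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(N)
    for _ in range(2000):
        z = c - step * A.T @ (A @ c - y)
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

    print("random features kept:", np.count_nonzero(c), "of", N)
    print("training RMSE:", np.sqrt(np.mean((A @ c - y) ** 2)))
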
BEGIN:VEVENT
SUMMARY:Fabian Pedregosa (Google)
DTSTART:20220207T210000Z
DTEND:20220207T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/47
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/47/">Efficient and Modular Implicit Differentiation</a>\nby 
 Fabian Pedregosa (Google) as part of CRM Applied Math Seminar\n\n\nAbstrac
 t\nAutomatic differentiation (autodiff) has revolutionized machine learnin
 g.  It allows expressing complex computations by composing elementary ones
  in creative ways and removes the burden of computing their derivatives by
  hand.  More recently\, differentiation of optimization problem solutions 
 has attracted widespread attention with applications such as optimization 
 layers\, and in bi-level problems such as hyper-parameter optimization and
  meta-learning.  However\, so far\, implicit differentiation remained diff
 icult to use for practitioners\, as it often required case-by-case tedious
  mathematical derivations and implementations.  In this paper\, we propose
  a unified\, efficient and modular approach for implicit differentiation o
 f optimization problems.  In our approach\, the user defines directly in P
 ython a function F capturing the optimality conditions of the problem to b
 e differentiated.  Once this is done\, we leverage autodiff of F and impli
 cit differentiation to automatically differentiate the optimization proble
 m.  Our approach thus combines the benefits of implicit differentiation an
 d autodiff.  It is efficient as it can be added on top of any state-of-the
 -art solver and modular as the optimality condition specification is decou
 pled from the implicit differentiation mechanism.  We show that seemingly 
 simple principles allow one to recover many existing implicit differentiation m
 ethods and create new ones easily.  We demonstrate the ease of formulating
  and solving bi-level optimization problems using our framework.  We also 
 showcase an application to the sensitivity analysis of molecular dynamics.
 \n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/47/
END:VEVENT
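
The core mechanism described above (the user supplies an optimality-condition map F and implicit differentiation does the rest) can be sketched by hand on ridge regression, where F(w, lam) = X^T (X w - y) + lam * w and the implicit function theorem gives dw/dlam = -(X^T X + lam I)^{-1} w. The numpy sketch below is illustrative only; it does not use the paper's library or automatic differentiation of F, and the data are random.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((50, 5))
    y = rng.standard_normal(50)
    lam = 0.1

    def solve(lam):
        # Ridge solution, i.e. the root of F(w, lam) = X^T (X w - y) + lam * w.
        return np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

    w = solve(lam)
    # Implicit function theorem: dw/dlam = -(dF/dw)^{-1} dF/dlam = -(X^T X + lam I)^{-1} w.
    dw_dlam = -np.linalg.solve(X.T @ X + lam * np.eye(5), w)

    # Sanity check against central finite differences.
    eps = 1e-6
    fd = (solve(lam + eps) - solve(lam - eps)) / (2 * eps)
    print("max abs difference vs finite differences:", np.max(np.abs(dw_dlam - fd)))
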
BEGIN:VEVENT
SUMMARY:Anna Ma (University of California\, Irvine)
DTSTART:20220314T200000Z
DTEND:20220314T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/48
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/48/">The Kaczmarz Algorithm: Greed\, Randomness\, and Tensor
 s</a>\nby Anna Ma (University of California\, Irvine) as part of CRM Appli
 ed Math Seminar\n\n\nAbstract\nIn settings where data sets become extremel
 y large-scale\, stochastic iterative methods such as the Kaczmarz algorith
 m and Randomized Coordinate Descent become advantageous due to their low m
 emory footprint.  The Randomized Kaczmarz algorithm in particular has garn
 ered attention owing to its applicability in large-scale settings and its 
 elegant geometric interpretation.  In this talk\, we will discuss the Rand
 omized Kaczmarz algorithm\, its connection to the popular Stochastic Grad
 ient Descent algorithm and its greedy counterpart: Motzkin's Method.  Th
 is presentation contains joint work with Jamie Haddock and Denali Molitor.
 \n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/48/
END:VEVENT
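
For reference, the randomized Kaczmarz iteration itself is short: at each step a row a_i of A is sampled (here with probability proportional to ||a_i||^2) and the iterate is projected onto the hyperplane a_i . x = b_i. The sketch below is a generic illustration on a random consistent system, not material from the talk.

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((500, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true                                 # consistent system A x = b

    row_norms2 = np.sum(A ** 2, axis=1)
    probs = row_norms2 / row_norms2.sum()

    x = np.zeros(20)
    for _ in range(5000):
        i = rng.choice(len(b), p=probs)            # sample a row with prob ~ ||a_i||^2
        a = A[i]
        x += (b[i] - a @ x) / row_norms2[i] * a    # project onto the hyperplane a.x = b_i
    print("error after randomized Kaczmarz:", np.linalg.norm(x - x_true))
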
BEGIN:VEVENT
SUMMARY:Tom Trogdon (U of Washington)
DTSTART:20220328T200000Z
DTEND:20220328T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/49
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/49/">Perturbations of orthogonal polynomials: Riemann-Hilber
 t problems\, random matrices and numerical linear algebra</a>\nby Tom Trog
 don (U of Washington) as part of CRM Applied Math Seminar\n\n\nAbstract\nW
 e consider the perturbation of orthogonal polynomials (OPs) with respect t
 o changes in the orthogonality measure.  While the transformation from a m
 easure to its orthogonal polynomials is typically ill-conditioned as the d
 egree of the polynomial grows\, using the Fokas-Its-Kitaev Riemann--Hilber
 t problem\, we show that in certain settings this mapping is well-conditio
 ned.  A usable perturbation theory can then be obtained.  The results are 
 strengthened when the asymptotics of the OPs with respect to a limiting me
 asure are known. Our main applications are to random matrices and to numer
 ical algorithms and dynamical systems applied to these random matrices.  T
 his is joint work with Percy Deift\, Xuicai Ding and Elliot Paquette.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bruno Després (Jacques-Louis Lions Laboratory)
DTSTART:20220519T200000Z
DTEND:20220519T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/50
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/50/">Neural Networks from the viewpoint of Numerical Analysi
 s</a>\nby Bruno Després (Jacques-Louis Lions Laboratory) as part of CRM A
 pplied Math Seminar\n\n\nAbstract\nThe presentation will focus on the inte
 rplay between\, on the one hand\, Neural Networks and Machine Learning\,
  which are emerging hot topics\, and\, on the other hand\, Numerical Analysis\,
  which is now a classical topic. The Yarotsky theorem will be discussed
  together with a recent alternative to the polarisation formula (D.-Ancellin 19'). T
 he stability of the Adam algorithm will be shown with a particular Lyapuno
 v function. Discretization of transport equations for CFD will serve as an
  applicative illustration.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Forbes (McGill)
DTSTART:20220912T200000Z
DTEND:20220912T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/51
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/51/">Regularization Techniques in Koopman-based System Ident
 ification</a>\nby James Forbes (McGill) as part of CRM Applied Math Semina
 r\n\n\nAbstract\nUsing the Koopman operator\, nonlinear systems can be exp
 ressed as infinite-dimensional linear systems. Data-driven methods can the
 n be used to approximate a finite-dimensional Koopman operator\, which is 
 particularly useful for system identification\, control\, and state estima
 tion tasks. However\, approximating large Koopman operators is numerically
  challenging\, leading to unstable Koopman operators being identified for 
 otherwise stable systems.\nThis talk will present a selection of technique
 s to regularize the Koopman regression problem\, including a novel H-infin
 ity norm regularizer. In particular\, how to re-pose the system identifica
 tion problem in order to leverage numerically efficient optimization tools
 \, such as linear matrix inequalities\, will be presented. This talk is ba
 sed on a preprint available on arXiv.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/51/
END:VEVENT
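
An EDMD-style regression of the kind regularized in the talk can be sketched in a few lines: lift state snapshots with a dictionary psi, then solve a regularized least-squares problem for the finite-dimensional Koopman matrix K. The sketch below is illustrative only; it substitutes a plain Tikhonov penalty for the H-infinity-norm regularizer and LMI formulation described above, and the dynamical system and dictionary are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(5)

    def step(x):
        # A simple nonlinear discrete-time system (arbitrary choice).
        return np.array([0.9 * x[0], 0.8 * x[1] + 0.1 * x[0] ** 2])

    def psi(x):
        # Lifting dictionary: monomials up to degree 2 (arbitrary choice).
        return np.array([1.0, x[0], x[1], x[0] ** 2, x[0] * x[1], x[1] ** 2])

    X = [rng.uniform(-1, 1, 2) for _ in range(300)]
    Psi0 = np.array([psi(x) for x in X])           # lifted states
    Psi1 = np.array([psi(step(x)) for x in X])     # lifted next states

    gamma = 1e-6                                   # Tikhonov weight (stand-in regularizer)
    G = Psi0.T @ Psi0 + gamma * np.eye(Psi0.shape[1])
    K = np.linalg.solve(G, Psi0.T @ Psi1).T        # regularized least-squares Koopman matrix
    print("spectral radius of K:", max(abs(np.linalg.eigvals(K))))
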
BEGIN:VEVENT
SUMMARY:Quentin Bertrand (Mila)
DTSTART:20220919T200000Z
DTEND:20220919T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/52
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/52/">Implicit Differentiation in Non-Smooth Convex Learning<
 /a>\nby Quentin Bertrand (Mila) as part of CRM Applied Math Seminar\n\n\nA
 bstract\nFinding the optimal hyperparameters of a model can be cast as a b
 ilevel optimization problem\, typically solved with zero-order techniques. In t
 his work we study first-order methods when the inner optimization problem 
 is convex but non-smooth. We show that the forward-mode differentiation of
  proximal gradient descent and proximal coordinate descent yield sequences
  of Jacobians converging toward the exact Jacobian. Using implicit differe
 ntiation\, we show it is possible to leverage the non-smoothness of the in
 ner problem to speed up the computation. Finally\, we provide a bound on t
 he error made on the hypergradient when the inner optimization problem is 
 solved approximately. Results on regression and classification problems re
 veal computational benefits for hyperparameter optimization\, especially w
 hen multiple hyperparameters are required.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/52/
END:VEVENT
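
The forward-mode idea mentioned above (differentiating the proximal gradient iterations themselves with respect to the regularization parameter) can be sketched for the Lasso: propagate the Jacobian d(beta)/d(lambda) through each soft-thresholding step, then contract it with the gradient of a validation loss to obtain a hypergradient. The sketch below is illustrative only, not the paper's code; the data, step size, and validation loss are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.standard_normal((100, 30)); y = rng.standard_normal(100)     # training split
    Xv = rng.standard_normal((50, 30)); yv = rng.standard_normal(50)     # validation split
    lam = 0.1
    step = 1.0 / np.linalg.norm(X, 2) ** 2

    beta = np.zeros(30)
    J = np.zeros(30)                                   # J = d(beta)/d(lam), propagated forward
    for _ in range(500):
        z = beta - step * X.T @ (X @ beta - y)
        dz = J - step * X.T @ (X @ J)
        active = np.abs(z) > step * lam                # soft-threshold is differentiable here
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        J = active * dz - np.sign(z) * active * step   # chain rule through the prox

    # Hypergradient of the validation loss 0.5 * ||Xv beta - yv||^2 with respect to lam.
    hypergrad = (Xv @ beta - yv) @ Xv @ J
    print("hypergradient d(val loss)/d(lambda):", hypergrad)
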
BEGIN:VEVENT
SUMMARY:Akitoshi Takayasu (University of Tsukuba)
DTSTART:20220926T200000Z
DTEND:20220926T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/53
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/53/">A general approach for rigorously integrating PDEs usin
 g semigroup theory</a>\nby Akitoshi Takayasu (University of Tsukuba) as pa
 rt of CRM Applied Math Seminar\n\n\nAbstract\nIn this talk we introduce a 
 general rigorous PDE integrator that proves the existence of a solution to
  the Cauchy problem of time-dependent PDEs. We derive a fixed-point formul
 ation to prove the existence of a solution locally in time\, which is base
 d on the solution map of the linearized problem\, called the evolution
  operator. Using rigorous numerics\, we validate that the fixed-point operator
  is a contraction on a neighborhood of a numerically computed approximate
  solution. Then we extend the time interval of existence of the solution via
  time stepping. The main advantage of our approach is that the rigorous
  integrator can be applied to a general class of PDEs\, including PDEs in
  higher spatial dimensions.\nThis is joint work with Jean-Philippe Lessard and Gabriel Duchesn
 e.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Goluskin (U. of Victoria)
DTSTART:20221003T200000Z
DTEND:20221003T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/54
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/54/">Verifying global stability of fluid flows despite trans
 ient growth of energy</a>\nby David Goluskin (U. of Victoria) as part of C
 RM Applied Math Seminar\n\n\nAbstract\nVerifying nonlinear stability of a 
 laminar fluid flow against all perturbations is a classic challenge in flu
 id dynamics. All past results rely on monotonic decrease of a perturbation
  energy or a similar quadratic generalized energy. This "energy method" ca
 nnot show global stability of any flow in which perturbation energy may gr
 ow transiently. For the many flows that allow transient energy growth but 
 seem to be globally stable (e.g. pipe flow and other parallel shear flows 
 at certain Reynolds numbers) there has been no way to mathematically verif
 y global stability. After explaining why the energy method was the only wa
 y to verify global stability of fluid flows for over 100 years\, I will de
 scribe a different approach that is broadly applicable but more technical.
  This approach\, proposed in 2012 by Goulart and Chernyshenko\, uses sum-o
 f-squares polynomials to computationally construct non-quadratic Lyapunov 
 functions that decrease monotonically for all flow perturbations. I will p
 resent a computational implementation of this approach for the example of 
 2D plane Couette flow\, where we have verified global stability at Reynold
 s numbers above the energy stability threshold. This energy stability resu
 lt for 2D Couette flow had not been improved upon since being found by Orr
  in 1907. The results I will present are the first verification of global 
 stability – for any fluid flow – that surpasses the energy m
 ethod. This is joint work with Federico Fuentes (Universidad Católica d
 e Chile) and Sergei Chernyshenko (Imperial College London).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/54/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicola Guglielmi (Gran Sasso Science Institute)
DTSTART:20221017T200000Z
DTEND:20221017T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/55
DESCRIPTION:by Nicola Guglielmi (Gran Sasso Science Institute) as part of 
 CRM Applied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/55/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert Baraldi (Sandia National Labs)
DTSTART:20221024T200000Z
DTEND:20221024T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/56
DESCRIPTION:by Robert Baraldi (Sandia National Labs) as part of CRM Applie
 d Math Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/56/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Serge Prudhomme (Polytechnique Montreal)
DTSTART:20221107T210000Z
DTEND:20221107T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/57
DESCRIPTION:by Serge Prudhomme (Polytechnique Montreal) as part of CRM App
 lied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/57/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kimon Fountoulakis (U. of Waterloo)
DTSTART:20221128T210000Z
DTEND:20221128T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/58
DESCRIPTION:by Kimon Fountoulakis (U. of Waterloo) as part of CRM Applied 
 Math Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/58/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephanie Dodson (Colby College)
DTSTART:20221205T210000Z
DTEND:20221205T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/59
DESCRIPTION:by Stephanie Dodson (Colby College) as part of CRM Applied Mat
 h Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/59/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tatiana Bubba (University of Bath)
DTSTART:20221114T200000Z
DTEND:20221114T210000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/60
DESCRIPTION:by Tatiana Bubba (University of Bath) as part of CRM Applied M
 ath Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/60/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ashwin Pananjady (Georgia Tech)
DTSTART:20221121T210000Z
DTEND:20221121T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/61
DESCRIPTION:by Ashwin Pananjady (Georgia Tech) as part of CRM Applied Math
  Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/61/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Test (test)
DTSTART:20221212T210000Z
DTEND:20221212T220000Z
DTSTAMP:20260404T110654Z
UID:AppliedMathematics/62
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMathematics/62/">test</a>\nby Test (test) as part of CRM Applied Math Se
 minar\n\n\nAbstract\nA tensor T(x_1\, ...\, x_n) is a multilinear function
  of the input vectors x_j in F_q^n\, F_q  a finite field. T has a small an
 alytic rank if its output distribution is far from uniform. It has partiti
 on rank `r' if we can write T = f_1 * g_1 + ... + f_r * g_r\, where f_r an
 d g_r are tensors in fewer variables. Analytic rank measures the amount of
  randomness\, and partition rank measures the amount of structure. It is k
 nown that if `f' has small partition rank\, it must have small analytic ra
 nk. Green and Tao proved an inverse theorem stating that if `f' has small 
 analytic rank then it has small partition rank. Their bound was qualitativ
 e\, however\, and several authors gave quantitative improvements. Janzer a
 nd Milicevic independently proved a polynomial dependence. We prove an opt
 imal inverse theorem: the analytic rank and partition rank are equivalent 
 up to constant factors over large enough fields. Our techniques are very d
 ifferent from the usual methods in this area: we rely on algebraic geomet
 ry rather than additive combinatorics. This is joint work with Guy Moshkov
 itz.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMathematics/62/
END:VEVENT
END:VCALENDAR
