BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:David Williams (Penn State University)
DTSTART:20221104T223000Z
DTEND:20221104T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/2/">Space-Time Finite Element Methods: Challenges and Perspectives<
 /a>\nby David Williams (Penn State University) as part of SFU Mathematics 
 of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture held in
  K9509.\n\nAbstract\nSpace-time finite element methods (FEMs) are likely t
 o grow in popularity due to the ongoing growth in the size\, speed\, and p
 arallelism of modern computing platforms. The allure of space-time FEMs is
  both intuitive and practical. From the intuitive standpoint\, there is co
 nsiderable elegance and simplicity in accommodating both space and time us
 ing the same numerical discretization strategy. From the practical standpo
 int\, there are considerable advantages in efficiency and accuracy that ca
 n be gained from space-time mesh adaptation: i.e. adapting the mesh in bot
 h space and time to resolve important solution features. However\, despite
  these considerable advantages\, there are numerous challenges that must b
 e overcome before space-time FEMs can realize their full potential. These 
 challenges are primarily associated with four-dimensional geometric obstac
 les (hypersurface and hypervolume mesh generation)\, four-dimensional appr
 oximation theory (basis functions and quadrature rules)\, four-dimensional
  boundary condition enforcement (well-posed\, moving boundary conditions)\
 , and iterative-solution techniques for large-scale linear systems. In thi
 s presentation\, we will provide a brief overview of space-time FEMs\, and
  discuss some of the latest research developments and ongoing issues.\n\nD
 avid M. Williams is an assistant professor at The Pennsylvania State Unive
 rsity in the Mechanical Engineering Department. He came to Penn State from
  the Flight Sciences division of Boeing Commercial Airplanes and Boeing Re
 search and Technology\, where he worked for several years as a computation
 al fluid dynamics engineer. Williams received his M.S. and Ph.D. in Aeron
 autics and Astronautics at Stanford University. He holds a B.S.E. in Aeros
 pace Engineering from the University of Michigan. He has made significant 
 advances in the design of numerical algorithms for computational fluid dyn
 amic simulations. Currently\, his research focuses on employing high-order
  finite element schemes to more accurately predict unsteady flows.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Messenger (University of Colorado Boulder)
DTSTART:20221007T223000Z
DTEND:20221007T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/4/">Weak-form sparse identification of differential equations from 
 noisy measurements</a>\nby Daniel Messenger (University of Colorado Boulde
 r) as part of SFU Mathematics of Computation\, Application and Data ("MOCA
 D") Seminar\n\nLecture held in K9509.\n\nAbstract\nData-driven modeling re
 fers to the use of measurement data to infer the parameters and structure 
 of a mathematical model\, or to aid in forward simulations of a partially 
 known mathematical model. Motivated by problems in collective cell biology
 \, this talk will explore algorithms which automate the map from experimen
 tal data to governing differential equations\, specifically using weak for
 mulations of the dynamics. We will show that the weak form is an ideal fra
 mework for identifying models from data if the performance criteria are ro
 bustness to data corruptions\, highly accurate model recovery when corrupt
 ion levels are low\, and computational efficiency. We will first demonstra
 te the advantages of the resulting weak-form sparse identification for non
 linear dynamics algorithm (WSINDy) in the discovery of correct underlying 
 model equations across several key modeling paradigms\, including ordinary
  differential equations (ODEs)\, partial differential equations (PDEs)\, a
 nd interacting particle systems (IPS). We will then discuss more recent ex
 tensions of this framework\, including weak-form identification of PDEs fr
 om streaming data\, enabling identification of time-varying coefficients\,
  and the use of weak-form model selection as a classifier to determine spe
 cies membership in a heterogeneous population of initially unlabeled cells
 . We will conclude with an overview of possible next directions\, includin
 g open questions related to numerical analysis and theoretical recovery gu
 arantees.\n\nPasscode 696604\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hansol Park (SFU)
DTSTART:20221014T223000Z
DTEND:20221014T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/5/">The Watanabe-Strogatz transform and constant of motion function
 als for kinetic vector models</a>\nby Hansol Park (SFU) as part of SFU Mat
 hematics of Computation\, Application and Data ("MOCAD") Seminar\n\n\nAbst
 ract\nWe present a kinetic version of the Watanabe-Strogatz (WS) transform
  for vector models in this paper. From the generalized WS-transform\, we c
 an reduce the kinetic vector model into an ODE system. We also obtain the 
 cross-ratio type constant of motion functionals for kinetic vector models 
 under suitable conditions. We present the sufficient and necessary conditi
 ons for the existence of the suggested constant of motion functionals. As 
 an application of the constant of motion functional\, we provide the insta
 bility of bipolar states of the kinetic swarm sphere model. We also provid
 e the WS-transform and constant of motion functionals for non-identical ki
 netic vector models.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Justin Solomon (MIT)
DTSTART:20221021T220000Z
DTEND:20221021T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/6/">Volumetric Methods for Modeling\, Deformation\, and Corresponde
 nce</a>\nby Justin Solomon (MIT) as part of SFU Mathematics of Computation
 \, Application and Data ("MOCAD") Seminar\n\n\nAbstract\nIn 3D modeling\, 
 medical imaging\, and other disciplines\, popular techniques for geometry 
 processing often rely on mathematical models for surface geometry\, viewin
 g shapes as thin sheets embedded in $\\mathbb{R}^3$\; this construction ne
 glects the fact that many of these surfaces are "boundary representations\
 ," intended to represent boundaries of volumes.  As an alternative\, in th
 is talk we will explore how calculations on the extrinsic space around a s
 urface can benefit geometry processing applications---as well as the mathe
 matical\, numerical\, and computational challenges of this extension to th
 ree dimensions.  Our algorithms for these problems will build on machinery
  from differential geometry\, geometric measure theory\, vector field desi
 gn\, and machine learning.\n\n(Joint work with several members of the MIT 
 Geometric Data Processing Group.)\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Wiedemann (Universitaet Augsburg)
DTSTART:20220916T223000Z
DTEND:20220917T000000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/11/">Homogenization in evolving porous media</a>\nby David Wiedeman
 n (Universitaet Augsburg) as part of SFU Mathematics of Computation\, Appl
 ication and Data ("MOCAD") Seminar\n\n\nAbstract\nNumerical simulations o
 f physical or chemical processes in heterogeneous media require a resoluti
 on of the heterogeneous structure. If\, however\, this heterogeneity is mi
 croscopically small while the object under consideration is large\, a dim
 ensional mismatch occurs and classical numerical methods become infeasibl
 e.\n\nAt this point\, analytical homogenization provides effective homogen
 eous substitute models\, which can be simulated numerically much more easi
 ly. One class of problems that can be treated is processes in porous medi
 a. In many biological or chemical applications\, the pore structure evolve
 s in time\, which impedes classical homogenization. By means of the two-sc
 ale transformation method\, we can overcome this difficulty and derive ne
 w effective models for problems in evolving heterogeneous media.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jingwei Hu (University of Washington)
DTSTART:20220923T223000Z
DTEND:20220924T000000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/12/">Dynamical low-rank methods for high-dimensional collisional ki
 netic equations</a>\nby Jingwei Hu (University of Washington) as part of S
 FU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\n
 \nAbstract\nKinetic equations describe the nonequilibrium dynamics of a co
 mplex system using a probability density function. Despite their import
 ant role in multiscale modeling to bridge microscopic and macroscopic scal
 es\, numerically solving kinetic equations is computationally demanding as
  they lie in the six-dimensional phase space. Dynamical low-rank method is
  a dimension-reduction technique that has been recently applied to kinetic
  theory\, yet most of the endeavor is devoted to linear or collisionless p
 roblems. In this talk\, we introduce efficient dynamical low-rank methods 
 for Boltzmann type collisional kinetic equations\, building on certain pri
 or knowledge about the low-rank structure of the solution.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mark Iwen (Michigan State University)
DTSTART:20221207T233000Z
DTEND:20221208T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/15/">Low-Distortion Embeddings of Submanifolds of $R^n$: Lower Boun
 ds\, Faster Realizations\, and Applications</a>\nby Mark Iwen (Michigan St
 ate University) as part of SFU Mathematics of Computation\, Application an
 d Data ("MOCAD") Seminar\n\nLecture held in ASB10908.\n\nAbstract\nLet M b
 e a smooth submanifold of R^n equipped with the Euclidean (chordal) metric.
  This talk will consider the smallest dimension\, m\, for which there exis
 ts a bi-Lipschitz function f:M → R^m with bi-Lipschitz constants close to
  one. We will begin by presenting a bound for the embedding dimension m fr
 om below in terms of the bi-Lipschitz constants of f and the reach\, volum
 e\, diameter\, and dimension of M. We will then discuss how this lower bou
 nd can be applied to show that prior upper bounds by Eftekhari and Wakin o
 n the minimal low-distortion embedding dimension of such manifolds using r
 andom matrices achieve near-optimal dependence on dimension\, reach\, and 
 volume (even when compared against nonlinear competitors). Next\, we will 
 discuss a new class of linear maps for embedding arbitrary (infinite) subs
 ets of R^n with sufficiently small Gaussian width which can both (i) achie
 ve near-optimal embedding dimensions of submanifolds\, and (ii) be multipl
 ied by vectors in faster than FFT-time. When applied to d-dimensional subm
 anifolds of R^n we will see that these new constructions improve on prior 
 fast embedding matrices in terms of both runtime and embedding dimension w
 hen d is sufficiently small. Time permitting\, we will then conclude with 
 a discussion of non-linear so-called “terminal embeddings” of manifold
 s which allow for extensions of the famous Johnson-Lindenstrauss Lemma bey
 ond what any linear map can achieve.\n\nThis talk will draw on joint work 
 with various subsets of Mark Roach (MSU)\, Benjamin Schmidt (MSU)\, and Ar
 man Tavakoli (MSU).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert Corless (University of Western Ontario)
DTSTART:20221012T223000Z
DTEND:20221012T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/16/">Compact cubic splines and compact finite differences</a>\nby R
 obert Corless (University of Western Ontario) as part of SFU Mathematics o
 f Computation\, Application and Data ("MOCAD") Seminar\n\nLecture held in 
 K9509.\n\nAbstract\nIn this paper we introduce an apparently new spline-li
 ke interpolant that we call a compact cubic interpolant or compact cubic s
 pline\; this is similar to a cubic spline introduced in 1972 by Swartz and
  Varga\, but has higher order accuracy at the edges. We argue that for nea
 rly uniform meshes the compact cubic approach offers some potential advant
 ages\, and offers a simple way to treat the edge conditions\, relieving th
 e user of the burden of deciding to use one of the three standard options:
  free (natural)\, complete (clamped)\, or “not-a-knot” conditions. Fin
 ally\, we establish that the matrices defining the compact cubic splines (
 equivalently\, the fourth-order compact finite difference formulæ) are to
 tally nonnegative\, if all mesh widths have the same sign\, for instance if
  the mesh is real and nodes are numbered in increasing order.\n\nThe talk 
 will be in-person and use chalk\, in the wonderful multi-board room that S
 FU has for the purpose.  The YouTube version linked above was a computer v
 ersion of the same talk\, with slides\, which has some advantages (run it 
 at double speed!).  But the chalk version offers a chance to slow down and
  appreciate more of the "big picture".  The topic will be accessible if th
 e listener has heard what a "spline" is\, but the main point is to prove t
 otal nonnegativity of a certain tridiagonal matrix.  I'll also make a conn
 ection to the (very useful) subject of compact finite differences.\n\nThis
  is joint work with Dr. Leili Rafiee Sevyeri (CS University of Waterloo)\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:TBA
DTSTART:20230106T233000Z
DTEND:20230107T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/18
DESCRIPTION:by TBA as part of SFU Mathematics of Computation\, Application
  and Data ("MOCAD") Seminar\n\nLecture held in K9509.\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ruiwen Shu (University of Georgia)
DTSTART:20230120T233000Z
DTEND:20230121T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/19/">Global Minimizers of a Large Class of Anisotropic Attractive-R
 epulsive Interaction Energies in 2D</a>\nby Ruiwen Shu (University of Geor
 gia) as part of SFU Mathematics of Computation\, Application and Data ("MO
 CAD") Seminar\n\n\nAbstract\nI will discuss my joint work with José Carri
 llo on a large family of Riesz-type singular interaction potentials with a
 nisotropy in two dimensions. Their associated global energy minimizers are
  given by explicit formulas whose supports are determined by ellipses unde
 r certain assumptions. More precisely\, by parameterizing the strength of 
 the anisotropic part we characterize the sharp range in which these explic
 it ellipse-supported configurations are the global minimizers based on lin
 ear convexity arguments. Moreover\, for certain anisotropic parts\, we pro
 ve that for large values of the parameter the global minimizer is only giv
 en by vertically concentrated measures corresponding to one dimensional mi
 nimizers. We also show that these ellipse-supported configurations generic
 ally do not collapse to a vertically concentrated measure at the critical 
 value for convexity\, leading to an interesting gap of the parameters in b
 etween. In this intermediate range\, we conclude by infinitesimal concavit
 y that any superlevel set of any local minimizer in a suitable sense does 
 not have interior points. Furthermore\, for certain anisotropic parts\, th
 eir support cannot contain any vertical segment for a restricted range of 
 parameters\, and moreover the global minimizers are expected to exhibit a 
 zigzag behavior. All these results hold for the limiting case of the logar
 ithmic repulsive potential\, extending and generalizing previous results i
 n the literature.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matias Delgadino (UT Austin)
DTSTART:20230217T233000Z
DTEND:20230218T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/24/">Phase transitions and log Sobolev inequalities</a>\nby Matias 
 Delgadino (UT Austin) as part of SFU Mathematics of Computation\, Applicat
 ion and Data ("MOCAD") Seminar\n\nLecture held in Remote.\n\nAbstract\nIn 
 this talk\, we will study the mean field limit of weakly interacting diffu
 sions for confining and interaction potentials that are not necessarily co
 nvex. We explore the relationship between the large N limit of the constan
 t in the logarithmic Sobolev inequality (LSI) for the N-particle system\, 
 and the presence or absence of phase transitions for the mean field limit.
  The non-degeneracy of the LSI constant will be shown to have far reaching
  consequences\, especially in the context of uniform-in-time propagation o
 f chaos and the behaviour of equilibrium fluctuations. This will be done b
 y employing techniques from the theory of gradient flows in the 2-Wasserst
 ein distance\, specifically the Riemannian calculus on the space of probab
 ility measures.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Maria Pia Gualdani (UT Austin)
DTSTART:20230314T223000Z
DTEND:20230314T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/27
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/27/">Recent progresses in kinetic equations.</a>\nby Maria Pia Gual
 dani (UT Austin) as part of SFU Mathematics of Computation\, Application a
 nd Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nWe will d
 iscuss recent mathematical results for the Landau and Boltzmann equations
 . Kinetic equations are used to describe the evolution of interacting par
 ticles. The most famous kinetic equation is the Boltzmann equation: formu
 lated by Ludwig Boltzmann in 1872\, this equation describes the motion o
 f a large class of gases. Later\, in 1936\, Lev Landau derived a new math
 ematical model for the motion of plasma. This latter equation was named t
 he Landau equation.
  While many important questions are still partially unanswered due to thei
 r mathematical complexity\, many others have been solved thanks to novel c
 ombinations of analytical techniques\, in particular the ones developed by
  Hoermander\, J. Nash\, E. De Giorgi and Moser.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan King (The Cheriton School of Computer Science\, University 
 of Waterloo)
DTSTART:20230324T223000Z
DTEND:20230324T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/28/">A Closest Point Method with Interior Boundary Conditions for G
 eometry Processing</a>\nby Nathan King (The Cheriton School of Computer Sc
 ience\, University of Waterloo) as part of SFU Mathematics of Computation\
 , Application and Data ("MOCAD") Seminar\n\nLecture held in AQ5008.\n\nAbs
 tract\nMany geometry processing tasks can be performed by solving partial 
 differential equations (PDEs) on surfaces. These PDEs usually involve boun
 dary conditions (e.g.\, Dirichlet or Neumann) defined anywhere on the surf
 ace\, not just on the physical (exterior) boundary of an open surface. Thi
 s talk discusses how to handle BCs on the interior of a surface while solv
 ing PDEs with the closest point method (CPM).\n\nThe CPM is an embedding m
 ethod\, i.e.\, it solves the surface PDE by solving a PDE defined in a spa
 ce surrounding the surface. The PDE is commonly solved using standard Cart
 esian numerical methods (e.g.\, finite-differences and Lagrange interpolat
 ion). Complex surfaces with high curvatures and/or thin regions impose res
 trictions on the size of the embedding space. Therefore\, for complex surf
 aces\, fine resolution grids must be used to fit within the embedding spac
 e. We develop a matrix-free solver that can scale to millions of degrees o
 f freedom to allow for PDEs to be solved on complex shapes.\n\nOur use of 
 a closest point surface representation provides a general framework to han
 dle any surface that allows closest point computation\, e.g.\, parametriza
 tions\, point clouds\, level-sets\, neural implicits\, etc. The surface ca
 n be open or closed\, orientable or not\, of any codimension\, and even mi
 xed-codimension. Therefore\, the approach presented provides a general fra
 mework for geometry processing on complex surfaces given by general surfac
 e representations.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aleks Donev (Courant Institute\, NYU)
DTSTART:20230519T223000Z
DTEND:20230519T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/32
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/32/">Hydrodynamics and rheology of fluctuating\, semiflexible\, ine
 xtensible\, and slender filaments in Stokes flow</a>\nby Aleks Donev (Cour
 ant Institute\, NYU) as part of SFU Mathematics of Computation\, Applicati
 on and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nEvery
  animal cell is filled with a cytoskeleton\, a dynamic gel made of inexten
 sible filaments / bio-polymers\, such as microtubules\, actin filaments\, 
 and intermediate filaments\, all suspended in a viscous fluid. Similar sus
 pensions of elastic filaments or polymers are widely used in materials pro
 cessing. Numerical simulation of such gels is challenging because the fila
 ment aspect ratios are very large.\n\nWe have recently developed new metho
 ds for rapidly computing the dynamics of non-Brownian and Brownian inexten
 sible slender filaments in periodically-sheared Stokes flow [1\,2\,4]. We 
 apply our formulation to a permanently [1] and dynamically cross-linked a
 ctin mesh [3] in a background oscillatory shear flow. We find that nonloca
 l hydro
 dynamics can change the visco-elastic moduli by as much as 40% at certain 
 frequencies\, especially in partially bundled networks [3\,4].\n\nI will f
 ocus on accounting for bending thermal fluctuations of the filaments by fi
 rst establishing a mathematical formulation and numerical methods for simu
 lating the dynamics of stiff but not rigid Brownian fibers in Stokes flow 
 [4]. I will emphasize open questions for the community such as whether the
 re is a continuum limit of the Brownian contribution to the stress tensor 
 from the filaments.\n\nThis is joint work with Ondrej Maxian and Brennan S
 prinkle.\n\nReferences:\n\n1. O. Maxian et al.\, Integral-based spectral me
 thod for inextensible slender fibers in Stokes flow\, Phys. Rev. Fluids\, 6
 :014102\, 2021.\n2. O. Maxian et al.\, Hydrodynamics of a twisting\, bendin
 g\, inextensible fiber in Stokes flow\, Phys. Rev. Fluids\, 7:074101\, 2022
 .\n3. O. Maxian et al.\, Interplay between Brownian motion and cross-linkin
 g controls bundling dynamics in actin networks\, Biophysical J.\, 121:1230
 –1245\, 2022.\n4. O. Maxian et al.\, Bending fluctuations in semiflexibl
 e\, inextensible\, slender filaments in Stokes flow: towards a spectral dis
 cretization\, arXiv:2301.11123\, to appear in J. Chem. Phys.\, 2023.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anotida Madzvamuse (UBC)
DTSTART:20230922T223000Z
DTEND:20230922T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/33
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/33/">Image-based modelling using geometric surface PDEs for single 
 and collective cell migration</a>\nby Anotida Madzvamuse (UBC) as part of 
 SFU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\
 nLecture held in K9509.\n\nAbstract\nIn this lecture\, I will focus on form
 ulating a dynamical geometric surface partial differential equation for mod
 elling static images during the process of single or collective cell migra
 tion. In the absence of detailed experimental molecular and mechanical obse
 rvations\, a question asked by experimentalists is: Given a sequence of ima
 ges following single or collective cell migration\, is there an optimal dyn
 amic mathematical model that evolves static images at one time point into s
 tatic images at a later time point? I will employ both sharp- and diffuse-i
 nterface formulations based on phase-fields for geometric surface partial d
 ifferential equations to derive a dynamical spatiotemporal model for the mi
 gration of cells in 2- and 3-D. The model is solved efficiently using nove
 l high-performance computing techniques based on finite differences and mul
 ti-grid methods. Such an approach allows us to solve\, in realistic times
 \, 2- and 3-D computations which are otherwise unfeasible without such inno
 vative numerical analysis computing strategies. To demonstrate the applicab
 ility of the computational algorithm\, cell migration forces such as polari
 sation will be exhibited. A by-product of the computational algorithm is it
 s ability to quantify automatically cell proliferation rates which are gene
 rally obtained through cumbersome and error-prone manual counting.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Miranda Holmes-Cerfon (UBC)
DTSTART:20231027T223000Z
DTEND:20231027T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/34
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/34/">Numerically simulating particles with short-ranged interaction
 s</a>\nby Miranda Holmes-Cerfon (UBC) as part of SFU Mathematics of Comput
 ation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n
 \nAbstract\nParticles with diameters of nanometres to micrometres form the
  building blocks of many of the materials around us\, and can be designed 
 in a multitude of ways to form new ones. Such particles commonly live in f
 luids\, where they jiggle about randomly because of thermal fluctuations i
 n the fluid\, and interact with each other via numerous mechanisms. One ch
 allenge in simulating such particles is that the range over which they int
 eract attractively is often much shorter than their diameters\, so the equ
 ations describing the particles’ dynamics are stiff\, requiring timestep
 s much smaller than the timescales of interest. I will introduce methods t
 o accelerate these simulations\, which instead solve the limiting equation
 s as the range of the attractive interaction goes to zero. In this limit a
  system of particles is described by a diffusion process on a collection o
 f manifolds of different dimensions\, connected by “sticky” boundary c
 onditions. I will describe our progress in simulating low-dimensional stic
 ky diffusion processes\, explain how these algorithms give us insight into
  sticky diffusions’ unusual mathematical properties\, and then discuss s
 ome ongoing challenges such as extending these methods to high dimensions\
 , incorporating friction and hydrodynamic interactions\, and capturing the
  anomalous diffusion that is sometimes observed experimentally.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Blaise Bourdin (McMaster University)
DTSTART:20231103T223000Z
DTEND:20231103T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/35
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/35/">Recent developments in variational and phase-field models of b
 rittle fracture</a>\nby Blaise Bourdin (McMaster University) as part of SF
 U Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\nL
 ecture held in K9509.\n\nAbstract\nVariational phase-field models of fract
 ure have been at the center of a multidisciplinary effort involving a larg
 e community of mathematicians\, mechanicians\, engineers\, and computation
 al scientists over the last 25 years or so. I will start with a modern int
 erpretation of Griffith's classical criterion as a variational principle f
 or a free discontinuity energy and will recall some of the milestones in i
 ts analysis. Then\, I will introduce the phase-field approximation per se 
 and describe its numerical implementation. I illustrate how phase-field mo
 dels have led to major breakthroughs in the predictive simulation of fract
 ure in complex situations. I then will turn my attention to current issues
 \, including crack nucleation in nominally brittle materials\, fracture of
  heterogeneous materials\, and inverse problems.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Smith (Yale-NUS College)
DTSTART:20230929T223000Z
DTEND:20230929T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/36
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/36/">Fokas Diagonalization</a>\nby David Smith (Yale-NUS College) a
 s part of SFU Mathematics of Computation\, Application and Data ("MOCAD") 
 Seminar\n\nLecture held in K9509.\n\nAbstract\nWe describe a new form of d
 iagonalization for linear two point constant coefficient differential oper
 ators with arbitrary linear boundary conditions. Although the diagonalizat
 ion is in a weaker sense than that usually employed to solve initial bound
 ary value problems (IBVP)\, we show that it is sufficient to solve IBVP wh
 ose spatial parts are described by such operators. We argue that the metho
 d described may be viewed as a reimplementation of the Fokas transform met
 hod for linear evolution equations on the finite interval. The results are
  extended to multipoint and interface operators\, including operators defi
 ned on networks of finite intervals\, in which the coefficients of the dif
 ferential operator may vary between subintervals\, and arbitrary interface
  and boundary conditions may be imposed\; differential operators with piec
 ewise constant coefficients are thus included.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Argyrios Petras (Johann Radon Institute for Computational and Appl
 ied Mathematics)
DTSTART:20231011T223000Z
DTEND:20231011T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/37
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/37/">Numerical methods for the solution of PDEs on static and movin
 g surfaces</a>\nby Argyrios Petras (Johann Radon Institute for Computation
 al and Applied Mathematics) as part of SFU Mathematics of Computation\, Ap
 plication and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract
 \nPartial differential equations (PDEs) on surfaces arise throughout the n
 atural and applied sciences. The solution of such equations poses a signi
 ficant challenge for general surfaces\, for which no parametrization is possible
 . In this talk\, we will give an overview of some methods that are based o
 n the closest point concept and use finite difference stencils based on ra
 dial basis functions (RBF-FD).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephanie Ross (University of Calgary)
DTSTART:20231020T223000Z
DTEND:20231020T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/38
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/38/">A multimodal approach to understanding skeletal muscle mechani
 cs in health and disease</a>\nby Stephanie Ross (University of Calgary) as
  part of SFU Mathematics of Computation\, Application and Data ("MOCAD") S
 eminar\n\nLecture held in K9509.\n\nAbstract\nSkeletal muscle is the motor
  that drives human and animal movement\; however\, our understanding of ho
 w muscle performs this function is limited because of challenges in direct
 ly measuring muscle deformation and force output in living beings. In this
  talk\, I will share my previous work using continuum models of muscle and
  complementary experimental measures to determine the mechanisms underlyin
 g skeletal muscle function. I will then present my current research that e
 xtends this fundamental work to probe how changes in the material prope
 rties of muscle with diseases such as stroke and cerebral palsy impact mus
 cle function and mobility.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christoph Ortner (UBC)
DTSTART:20231006T223000Z
DTEND:20231006T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/39
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/39/">Geometric Shallow Learning with the Atomic Cluster Expansion (
 or\, Efficient Parameterization of Many-body Interaction)</a>\nby Christop
 h Ortner (UBC) as part of SFU Mathematics of Computation\, Application and
  Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nAlthough my
 talk is arguably about machine learning\, I will mostly use ideas and lan
 guage from mathematical modelling and numerical analysis. I will introduce
  a natural geometric learning framework\, the atomic cluster expansion (AC
 E)\, which focuses on linear and shallow models\, and adds a new dimensio
 n to the design space of geometric deep learning. ACE is particularly well
 -suited for parameterising surrogate models of particle systems where it i
 s important to incorporate symmetries and geometric priors into models wit
 hout sacrificing systematic improvability.\nMy main focus will be on “le
 arning” interatomic potentials (or\, force fields): in this context\, AC
 E models arise naturally from a few systematic modelling and approximation
  theoretic steps that can be made reasonably rigorous.\nHowever\, the appl
 icability is much broader and\, time permitting\, I will also show how the
  ACE framework can be adapted to other contexts such as electronic structu
 re (parameterising Hamiltonians)\, quantum chemistry (wave functions)\, or
  elementary particle physics (e.g.\, jet tagging).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sam Stechmann (University of Wisconsin-Madison)
DTSTART:20240306T233000Z
DTEND:20240307T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/40
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/40/">Element learning: a systematic approach of accelerating finite
  element-type methods via machine learning</a>\nby Sam Stechmann (Universi
 ty of Wisconsin-Madison) as part of SFU Mathematics of Computation\, Appli
 cation and Data ("MOCAD") Seminar\n\nLecture held in SFU K9509.\n\nAbstrac
 t\nIn the past decade\, (artificial) neural networks and machine learning 
 tools have surfaced as game-changing technologies across numerous fields\,
  resolving an array of challenging problems. Even for the numerical soluti
 on of partial differential equations (PDEs) or other scientific computing 
 problems\, results have shown that machine learning can speed up some comp
 utations. However\, many machine learning approaches tend to lose some of 
 the advantageous features of traditional numerical PDE methods\, such as i
 nterpretability and applicability to general domains with complex geometry
 .\n\nIn this talk\, we introduce a systematic approach (which we call elem
 ent learning) with the goal of accelerating finite element-type methods vi
 a machine learning\, while also retaining the desirable features of finite
  element methods. The derivation of this new approach is closely related t
 o hybridizable discontinuous Galerkin (HDG) methods in the sense that the 
 local solvers of HDG are replaced by machine learning approaches. Numerica
 l tests are presented for an example PDE\, the radiative transfer equation
 \, in a variety of scenarios with idealized or realistic cloud fields\, wi
 th smooth or sharp gradient in the cloud boundary transition. Comparisons 
 are set up with either a fixed number of degrees of freedom or a fixed acc
 uracy level of $10^{-3}$ in the relative $L^2$ error\, and we observe a si
 gnificant speed-up with element learning compared to a classical finite el
 ement-type method.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Charles Cheung (NVIDIA)
DTSTART:20231023T223000Z
DTEND:20231023T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/41
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/41/">Generative AI and AI for Science and Mathematics</a>\nby Charl
 es Cheung (NVIDIA) as part of SFU Mathematics of Computation\, Application
  and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nIn this
 talk\, I will discuss a few directions and use cases of recent Generati
 ve AI development for the metaverse and science. In the second part of t
 he talk\, I will discuss PINNs and neural operators\, which have been us
 ed to solve many engineering problems involving differential equations w
 ith neural networks. We will walk through the basic concepts of PINNs an
 d neural operators and introduce NVIDIA Modulus\, an SDK for training th
 em.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chunyi Gai (UBC)
DTSTART:20231121T233000Z
DTEND:20231122T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/42
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/42/">Pattern formation and Spike Dynamics in the Presence of Noise<
 /a>\nby Chunyi Gai (UBC) as part of SFU Mathematics of Computation\, Appli
 cation and Data ("MOCAD") Seminar\n\nLecture held in ASB10908.\n\nAbstract
 \nNoise plays a crucial role in the formation and evolution of spatial pat
 terns in various reaction-diffusion systems in mathematical biology and ec
 ology. In this talk\, I give two examples where noise significantly influe
 nces spatial patterning.  The first example describes how patterned states
  can provide a refuge and prevent extinction under stressed conditions. It
  also illustrates the importance of not only the absolute level of climate
  change\, but also the speed with which it occurs. The second example stud
 ies the effect of noise on dynamics of a single spike pattern for the clas
 sical Gierer--Meinhardt model on a finite interval.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Liam Madden (UBC)
DTSTART:20240126T233000Z
DTEND:20240127T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/43
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/43/">Memory capacity of two-layer neural networks</a>\nby Liam Madd
 en (UBC) as part of SFU Mathematics of Computation\, Application and Data 
 ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nThe memory capaci
 ty of a statistical model is the largest size of generic data that the mod
 el can memorize and has important implications for both training and gener
 alization. In this talk\, we will prove a tight memory capacity result for
  two-layer neural networks with general activations. In order to do so\, w
 e will use tools from linear algebra\, combinatorics\, differential topolo
 gy\, and the theory of real analytic functions of several variables. In pa
 rticular\, we will show how to get memorization if the model is a local su
 bmersion and we will show that the Jacobian has generically full rank. The
  perspective that is developed also opens up a path towards deeper archite
 ctures\, alternative models\, and training.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hansol Park (SFU)
DTSTART:20231201T233000Z
DTEND:20231202T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/44
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/44/">Emergent behavior of mathematical models on manifolds</a>\nby 
 Hansol Park (SFU) as part of SFU Mathematics of Computation\, Application 
 and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nIn this 
 talk\, I introduce several first- and second-order models for self-collect
 ive behaviour on general manifolds and discuss their emergent behaviors. F
 or the first-order model\, we consider attractive-repulsive and purely att
 ractive interaction potentials\, and investigate the equilibria and the as
 ymptotic behaviour of the solutions. In particular\, we quantify the appro
 ach to asymptotic consensus in terms of the convergence rate of the diamet
 er of the solution’s support. For the second-order model (known as the C
 ucker-Smale model)\, velocity alignment interactions are considered. To an
 alyze the emergent behaviors of the two models\, the LaSalle invariance pr
 inciple is used. Also\, various geometric tools used to analyze the aggreg
 ation models on manifolds are presented.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Craig Fraser (University of Toronto)
DTSTART:20231211T230000Z
DTEND:20231212T000000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/45
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/45/">The Clebsch-Mayer Theory of the Second Variation in the Calcul
 us of Variations: A Case Study in the Influence of Dynamical Analysis on P
 ure Mathematics</a>\nby Craig Fraser (University of Toronto) as part of SF
 U Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\nL
 ecture held in SFU AQ5025.\n\nAbstract\nCarl Jacobi worked in the 1830s at
  the University of Königsberg on what became known as Hamilton-Jacobi the
 ory\, and also on the theory of the second variation in the calculus of va
 riations. The first was a subject in dynamical analysis\, while the second
  was a subject in pure mathematics. Insofar as the calculus of variations 
 was concerned\, Jacobi’s contributions were seminal and highly original 
 but presented in an incomplete and programmatic form. Together his writing
 s stimulated active but independent traditions of research in both subject
 s. In the late 1850s and 1860s Alfred Clebsch and Adolph Mayer – mathema
 ticians associated with the Königsberg school – established a new approac
 h to the investigation of sufficient conditions in the calculus of variati
 ons by bringing methods from Hamilton-Jacobi theory to bear on the transfo
 rmation of the second variation. In doing so they established the basis fo
 r research on the subject that was eventually codified in writings around 
 1900 of Camille Jordan\, Gustav von Escherich and Oskar Bolza.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/45/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Timon S. Gutleb (UBC)
DTSTART:20240216T233000Z
DTEND:20240217T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/46
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/46/">A frame approach for equations involving the fractional Laplac
 ian</a>\nby Timon S. Gutleb (UBC) as part of SFU Mathematics of Computatio
 n\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAb
 stract\nI will be presenting a frame approach for computing solutions of d
 ifferential equations inspired by recent progress in frame theory and spar
 se spectral methods. The primary case study for our method will be a very 
 general family of equations involving the fractional Laplacian.\n\nThis is
  joint work with I. Papadopoulos\, J.A. Carrillo and S. Olver.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/46/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lisa Kreusser (University of Bath)
DTSTART:20240226T233000Z
DTEND:20240227T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/47
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/47/">Unlocking the Full Potential of Data: From Applied Analysis an
 d Optimisation to Applications</a>\nby Lisa Kreusser (University of Bath) 
 as part of SFU Mathematics of Computation\, Application and Data ("MOCAD")
  Seminar\n\nLecture held in K9509.\n\nAbstract\nRecent and rapid breakthro
 ughs in contemporary biology\, climate science\, and data science have unv
 eiled a spectrum of intricate mathematical challenges which can be tackled
  through the fusion of applied and numerical analysis\, as well as optimis
 ation. In this talk\, I will begin by delving into a class of interacting 
 particle models with anisotropic interaction forces and their correspondin
 g continuum limit. These models find their inspiration in the simulation o
 f fingerprint patterns\, which play a critical role in databases in forens
 ic science and biometric applications. I will showcase our recent findings
 \, including the development of a mean-field optimal control algorithm to 
 tackle an inverse problem arising in parameter identification. Transitioni
 ng from interaction-focused models to the realm of transport networks\, I 
 will introduce an optimization approach tailored for a unique coupling of 
 differential equations that arises in the context of biological network fo
 rmation. Additionally\, I will provide insights into my recent research in
  data science\, encompassing topics such as image segmentation\, non-conve
 x optimisation algorithms for machine learning\, Wasserstein Generative Ad
 versarial Networks (WGANs)\, score-based diffusion models and semi-supervi
 sed learning techniques.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/47/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Silas Polani
DTSTART:20240318T223000Z
DTEND:20240318T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/48
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/48/">Intraguild Predation in homogeneous and heterogeneous landscap
 es</a>\nby Silas Polani as part of SFU Mathematics of Computation\, Applic
 ation and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nA
 bstract\nIntraguild predation (IGP) occurs when two (or more) consumers o
 f the same shared resource exhibit a predator-prey relation among themsel
 ves\, and is a very common phenomenon in terrestrial\, freshwater and mar
 ine ecological systems. Theoretical work shows that IGP allows for coexis
 tence between two consumers of the same guild\, as long as the IG prey is
  a more effective consumer than the IG predator\, revealing an important 
 mechanism for consumer coexistence in food chains. Here we explore biolog
 ical invasions forming IGP communities\, by introducing either the IG pre
 y or the IG predator to established (single) consumer-resource population
 s in homogeneous and heterogeneous landscapes. We use reaction-diffusion 
 equations as our modeling framework\, and explore them through numerical 
 simulations and homogenization techniques. In homogeneous landscapes\, we
  find that asymptotic spreading speeds are linearly determinate and that 
 the formation of traveling wave solutions and dynamical stabilization reg
 imes is possible. In heterogeneous landscapes\, we find that coexistence 
 regimes in highly heterogeneous landscapes can occur even when the IG pre
 y is the less effective consumer\, or be hindered even when the IG prey r
 emains the dominant competitor\, depending on the habitat preferences of 
 each of the species involved. We conclude with a summary of the work and 
 avenues for future research.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/48/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Siting Liu (University of California\, Los Angeles)
DTSTART:20240308T233000Z
DTEND:20240309T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/49
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/49/">An inverse problem in mean field game from partial boundary me
 asurement</a>\nby Siting Liu (University of California\, Los Angeles) as p
 art of SFU Mathematics of Computation\, Application and Data ("MOCAD") Sem
 inar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nMean-field game (MF
 G) systems provide a powerful framework for modeling the collective behavi
 or of multi-agent systems with diverse applications. However\, unknown par
 ameters pose challenges. In this work\, we tackle an inverse problem\, rec
 overing MFG parameters from limited\, noisy boundary observations. Despite
  the problem's ill-posed nature\, we aim to efficiently retrieve these par
 ameters to understand population dynamics. Our focus is on recovering runn
 ing cost and interaction energy in MFG equations from boundary measurement
 s. We formalize the problem as a constrained optimization problem with L1 
 regularization. We then develop a fast and robust operator splitting algor
 ithm to solve the optimization using techniques\, including harmonic exten
 sions\, a three-operator splitting scheme\, and the primal-dual hybrid gra
 dient method. Numerical experiments illustrate the effectiveness and robus
 tness of the algorithm. This is joint work with Yat Tin Chow (UCR)\, Samy 
 Wu Fung (Colorado School of Mines)\, Levon Nurbekyan (Emory)\, and Stanley
  J. Osher (UCLA).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicolas Boullé (Imperial College London)
DTSTART:20241108T230000Z
DTEND:20241109T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/50
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/50/">Elliptic PDE learning is data-efficient</a>\nby Nicolas Boull
 é (Imperial College London) as part of SFU Mathematics of Computation\, A
 pplication and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.
 \n\nAbstract\nOperator learning is an emerging field at the intersection o
 f machine learning\, physics\, and mathematics\, that aims to discover pro
 perties of unknown physical systems from experimental data. Popular techni
 ques exploit the approximation power of deep learning to learn solution op
 erators\, which map source terms to solutions of the underlying PDE. Solut
 ion operators can then produce surrogate data for data-intensive machine l
 earning approaches such as learning reduced order models for design optimi
 zation in engineering and PDE recovery. In most deep learning applications
 \, a large amount of training data is needed\, which is often unrealistic 
 in engineering and biology. However\, PDE learning is shockingly data-effi
 cient in practice. We provide a theoretical explanation for this behavior 
 by constructing an algorithm that recovers solution operators associated w
 ith elliptic PDEs and achieves an exponential convergence rate with respec
 t to the size of the training dataset. The proof technique combines prior 
 knowledge of PDE theory and randomized numerical linear algebra techniques
  and may lead to practical benefits such as improving dataset and neural n
 etwork architecture designs.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gregor Maier (University of Bonn)
DTSTART:20240524T223000Z
DTEND:20240524T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/51
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/51/">On the Approximation of Gaussian Lipschitz Functionals</a>\nby
  Gregor Maier (University of Bonn) as part of SFU Mathematics of Computati
 on\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 and H
 ybrid.\n\nAbstract\nOver the past few years\, operator learning – the ap
 proximation of mappings between infinite-dimensional function spaces using
  ideas from machine learning – has attracted increased research attentio
 n. Approximate operators\, learned from data\, hold promise to serve as ef
 ficient surrogate models for problems in scientific computing. Multiple mo
 del designs have been proposed so far and their efficiency has been demons
 trated in various practical applications.\nThe empirical findings are supp
 orted by a (slowly) growing body of theoretical approximation guarantees
 . The latter focus to a large extent on linear and holomorphic operators. Ho
 wever\, far less is known about the approximation of (nonlinear) operators
  which are merely Lipschitz continuous. \n\nIn this talk\, I will focus on
  (scalar-valued) Lipschitz functionals in a Gaussian setting. I will first
  consider their polynomial approximation by Hermite polynomials and presen
 t lower and upper bounds on the best $s$-term error. This will be followed
  by a discussion on the approximation of Lipschitz functionals by arbitrar
 y (adaptive) sampling algorithms\, which will result in sharp error bounds
 . Finally\, I will conclude by also addressing the problem of recovering L
 ipschitz functionals from i.i.d. pointwise samples.\n\nThis is joint work 
 with Ben Adcock (SFU).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/51/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Williams (Pennsylvania State University)
DTSTART:20240718T223000Z
DTEND:20240718T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/52
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/52/">Finite element exterior calculus in four-dimensional space</a>
 \nby David Williams (Pennsylvania State University) as part of SFU Mathema
 tics of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture he
 ld in K9509 and Hybrid.\n\nAbstract\nThe purpose of this talk is to explai
 n the key differences between standard finite element methods for 3D appli
 cations\, and space-time finite element methods for 4D applications. These
  differences are elucidated through the lens of finite element exterior ca
 lculus (FEEC). Through FEEC\, we can leverage the language of differential
  geometry and algebraic topology to construct finite element spaces in any
  number of dimensions. In this work\, we use techniques from FEEC to const
 ruct derivative operators in 3D and 4D space. We explain the differences b
 etween these operators\, and the associated Sobolev spaces. Thereafter\, w
 e construct conforming\, high-order\, finite element spaces on the tessera
 ct\, pentatope\, and tetrahedral prism in 4D. These shapes are fundamental
  geometric quantities in 4D\, as they correspond to the four-dimensional a
 nalogs of the cube\, tetrahedron\, and triangular prism\, respectively.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/52/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xuefeng Liu (Tokyo Woman's Christian University)
DTSTART:20240906T220000Z
DTEND:20240906T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/53
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/53/">Rigorous evaluation of the Hadamard derivative for shape optim
 ization problems</a>\nby Xuefeng Liu (Tokyo Woman's Christian University) 
 as part of SFU Mathematics of Computation\, Application and Data ("MOCAD")
  Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nThis talk intro
 duces a newly developed computational method for rigorously evaluating the
  Hadamard derivative of Laplacian eigenvalues\, which plays an important r
 ole in studying shape optimization problems.\n\nTo evaluate the Hadamard d
 erivative\, this method employs state-of-the-art algorithms for eigenvalue
 s and eigenfunctions via the finite element method (Liu'2013\,2015\; Liu-V
 ejchodsky'2022)\, effectively handling cases of repeated or closely spaced
  eigenvalues.\n\nWe also present a computer-assisted proof for the optimiz
 ation and simplicity of Laplacian eigenvalues over triangular domains (End
 o-Liu'2023\,2024)\, demonstrating the impact of these computational advanc
 ements in spectral geometry.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Isaac Harris (Purdue University)
DTSTART:20240913T220000Z
DTEND:20240913T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/54
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/54/">Transmission Eigenvalue Problems for a Scatterer with a Conduc
 tive Boundary</a>\nby Isaac Harris (Purdue University) as part of SFU Math
 ematics of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture
  held in K9509 and Hybrid.\n\nAbstract\nIn this talk\, we will investigate
  the acoustic transmission eigenvalue problem associated with an inhomoge
 neous medium with a conductive boundary. These are a new class of eigenva
 lue problems that are not elliptic\, not self-adjoint\, and nonlinear\, w
 hich gives the possibility of complex eigenvalues. The talk will consider
  the cases of an isotropic and an anisotropic scatterer. We will discuss 
 the existence of the eigenvalues as well as their dependence on the mater
 ial parameters. Because this is a non-standard eigenvalue problem\, the n
 umerical calculation of the eigenvalues will also be discussed. Lastly\, 
 we will discuss recovering the scatterer using a monotonicity method that
  is independent of the transmission eigenvalues.\nThis is joint work with
  O. Bondarenko\, V. Hughes\, A. Kleefeld\, H. Lee\, and J. Sun.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/54/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Denis Grebenkov (CNRS - Ecole Polytechnique)
DTSTART:20240927T220000Z
DTEND:20240927T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/55
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/55/">Probabilistic insights on the Steklov spectral problem: theory
 \, numerics and applications</a>\nby Denis Grebenkov (CNRS - Ecole Polytec
 hnique) as part of SFU Mathematics of Computation\, Application and Data (
 "MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nIn this
  overview talk\, I will present the encounter-based approach to diffusive 
 processes in Euclidean domains and highlight its fundamental relation to t
 he Steklov spectral problem. In particular\, the Steklov eigenfunctions turn out to b
 e particularly useful for representing heat kernels with Robin boundary co
 ndition and disentangling diffusive dynamics from reaction events on the b
 oundary. I will also discuss applications of this approach in physical che
 mistry (to describe diffusion-controlled reactions) and in statistical phy
 sics (to determine the statistics of encounters and various first-passage 
 times). Some open questions related to spectral\, probabilistic and numeri
 cal aspects of this spectral problem will be outlined.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/55/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wuyang Chen (Simon Fraser University)
DTSTART:20241101T220000Z
DTEND:20241101T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/56
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/56/">Towards Data-Efficient OOD Generalization of Scientific Machin
 e Learning Models</a>\nby Wuyang Chen (Simon Fraser University) as part of
  SFU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n
 \nLecture held in K9509 and Hybrid.\n\nAbstract\nIn recent years\, there h
 as been growing promise in coupling machine learning methods with domain-s
 pecific physical insights to solve scientific problems based on partial di
 fferential equations (PDEs). However\, there are two critical bottlenecks 
 that must be addressed before scientific machine learning (SciML) can beco
 me practically useful. First\, SciML requires extensive pretraining data t
 o cover diverse physical systems and real-world scenarios. Second\, SciML 
 models often perform poorly when confronted with unseen data distributions
  that deviate from the training source\, even when dealing with samples fr
 om the same physical systems that have only slight differences in physical
  parameters. In this line of work\, we aim to address these challenges usi
 ng data-centric approaches. To enhance data efficiency\, we have developed
  the first unsupervised learning method for neural operators. Our approach
  involves mining unlabeled PDE data without relying on heavy numerical sim
 ulations. We demonstrate that unsupervised pretraining can consistently re
 duce the number of simulated samples required during fine-tuning across a 
 wide range of PDEs and real-world problems. Furthermore\, to evaluate and 
 improve the out-of-distribution (OOD) generalization of neural operators\,
  we have carefully designed a benchmark that includes diverse physical par
 ameters to emulate real-world scenarios. By evaluating popular architectur
 es across a broad spectrum of PDEs\, we conclude that neural operators ach
 ieve more robust OOD generalization when pretrained on physical dynamics w
 ith high-frequency patterns rather than smooth ones. This suggests that da
 ta-driven SciML methods will benefit more from learning from challenging s
 amples.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/56/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christina Runkel (University of Cambridge)
DTSTART:20240920T220000Z
DTEND:20240920T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/57
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/57/">Learning posterior distributions in underdetermined inverse pr
 oblems</a>\nby Christina Runkel (University of Cambridge) as part of SFU M
 athematics of Computation\, Application and Data ("MOCAD") Seminar\n\nLect
 ure held in K9509 and Hybrid.\n\nAbstract\nIn recent years\, classical kno
 wledge-driven approaches for inverse problems have been complemented by da
 ta-driven methods exploiting the power of machine and especially deep lear
 ning. Purely data-driven methods\, however\, come with the drawback of dis
 regarding prior knowledge of the problem even though it has been shown to
  be beneficial to incorporate this knowledge into the problem-solving proc
 ess.\n
 \nIn this talk\, we introduce an unpaired learning approach for learning p
 osterior distributions of underdetermined inverse problems. It combines ad
 vantages of deep generative modeling with established ideas of knowledge-d
 riven approaches by incorporating prior information about the inverse prob
 lem. We develop a new neural network architecture 'UnDimFlow' (short f
 or Unequal Dimensionality Flow) consisting of two normalizing flows\, one 
 from the data space to the latent space\, and one from the latent space t
 o the solution space. Additionally\, we incorporate the forward operator 
 to develop an unpaired learning method for the UnDimFlow architecture and
  propose a tailored p
 oint estimator to recover an optimal solution during inference. We evaluat
 e our method on the two underdetermined inverse problems of image inpainti
 ng and super-resolution.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/57/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Antoine Cerfon (Type One Energy Group)
DTSTART:20241018T220000Z
DTEND:20241018T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/58
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/58/">Open math problems for optimized fusion reactors</a>\nby Antoi
 ne Cerfon (Type One Energy Group) as part of SFU Mathematics of Computatio
 n\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hy
 brid.\n\nAbstract\nStellarators are promising magnetic fusion devices for 
 electricity generation\, because the dynamics of the hot fusion fuel - cal
 led a plasma - is largely determined by external control\, as opposed to d
 ynamical self-organization\, as is the case for other magnetic fusion conc
 epts. Computer design and simulations reliably predict experimental perfor
 mance\, which opens a lower risk and more cost efficient path to fusion po
 wer. In this talk\, I will present the mathematical challenges one faces w
 hen designing stellarators with optimized performance. I will show how rec
 ent progress in our mathematical understanding of stellarators and in nume
 rical methods for reactor optimization have led to the discovery of reacto
 r designs with outstanding physical properties. I will also highlight open
  problems in pure mathematics\, scientific computing\, numerical optimizat
 ion\, and reduced-order modeling\, whose solutions could further improve r
 eactor performance.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/58/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chiara Saffirio (UBC)
DTSTART:20241025T220000Z
DTEND:20241025T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/60
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/60/">Uniqueness criteria for the Vlasov-Poisson system and applicat
 ions to semiclassical problems.</a>\nby Chiara Saffirio (UBC) as part of S
 FU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\n
 Lecture held in K9509 and Hybrid.\n\nAbstract\nThe Vlasov-Poisson system i
 s a non-linear PDE describing the mean-field time-evolution of particles f
 orming a plasma or a galaxy.\nIn this talk I will present uniqueness crite
 ria for the Vlasov-Poisson equation in the classical and semi-relativistic
  setting\, emerging as corollaries of stability estimates in strong (L^p) 
 topologies or in weak topologies (induced by Wasserstein distances)\, and 
 show how they serve as a guideline to solve semiclassical problems. Differ
 ent topologies will allow us to treat different classes of quantum states.
 \n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/60/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anjali Nair (University of Chicago)
DTSTART:20241115T230000Z
DTEND:20241116T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/61
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/61/">From Schrödinger to diffusion – speckle formation of light in 
 random media and the Gaussian conjecture</a>\nby Anjali Nair (University o
 f Chicago) as part of SFU Mathematics of Computation\, Application and Dat
 a ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nA we
 ll-known conjecture in the physics literature states that high frequency 
 waves propagating over long distances through turbulence eventually becom
 e com
 plex Gaussian distributed. The intensity of such wave fields then follows 
 an exponential law\, consistent with speckle formation observed in physica
 l experiments. Though fairly well-accepted and intuitive\, this conjecture
  is not entirely supported by any detailed mathematical derivation. In thi
 s talk\, I will discuss some recent results demonstrating the Gaussian con
 jecture in a weak-coupling regime of the paraxial approximation.\n\nThe
  paraxial approximation is a high frequency approximation of the Helmholtz
  equation\, where backscattering is ignored. This takes the form of a Schr
 ödinger equation with a random potential and is often used to model laser
  propagation through turbulence. The proof relies on the asymptotic closen
 ess of statistical moments of the wavefield under the paraxial approximati
 on\, its white noise limit and the complex Gaussian distribution itself. I
  will describe two scaling regimes: a kinetic scaling\, where the second 
 moment is given by a transport equation\, and a diffusive scaling\, where
  the second moment follows an anomalous diffusion. In both cases\
 , the limiting complex Gaussian distribution is fully characterized by its
  first and second moments. An additional stochastic continuity/tightness c
 riterion allows us to show the convergence of these distributions over sp
 aces of Hölder-continuous functions.\n\nThis is joint work with Guillaum
 e Bal.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/61/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael W. Mahoney (ICSI\, LBNL\, and Department of Statistics\, U
 C Berkeley)
DTSTART:20241211T223000Z
DTEND:20241211T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/62
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/62/">Foundational Methods for Foundation Models for Scientific Mach
 ine Learning</a>\nby Michael W. Mahoney (ICSI\, LBNL\, and Department of S
 tatistics\, UC Berkeley) as part of SFU Mathematics of Computation\, Appli
 cation and Data ("MOCAD") Seminar\n\nLecture held in Big Data Hub ASB10900
  and Hybrid.\n\nAbstract\nThe remarkable successes of ChatGPT in natural l
 anguage processing (NLP) and related developments in computer vision (CV) 
 motivate the question of what foundation models would look like and what n
 ew advances they would enable\, when built on the rich\, diverse\, multimo
 dal data that are available from large-scale experimental and simulational
  data in scientific computing (SC)\, broadly defined.  Such models could p
 rovide a robust and principled foundation for scientific machine learning 
 (SciML)\, going well beyond simply using ML tools developed for internet a
 nd social media applications to help solve future scientific problems.  I 
 will describe recent work demonstrating the potential of the "pre-train an
 d fine-tune" paradigm\, widely-used in CV and NLP\, for SciML problems\, d
 emonstrating a clear path towards building SciML foundation models\; as we
 ll as recent work highlighting multiple "failure modes" that arise when tr
 ying to interface data-driven ML methodologies with domain-driven SC metho
 dologies\, demonstrating clear obstacles to traversing that path successfu
 lly.  I will also describe initial work on developing novel methods to add
 ress several of these challenges\, as well as their implementations at sca
 le\, a general solution to which will be needed to build robust and reliab
 le SciML models consisting of millions or billions or trillions of paramet
 ers.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/62/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marta Ghirardelli (NTNU)
DTSTART:20250331T220000Z
DTEND:20250331T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/63
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/63/">Conditional Stability of the Euler Method on Riemannian Manifo
 lds</a>\nby Marta Ghirardelli (NTNU) as part of SFU Mathematics of Computa
 tion\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 and
  Hybrid.\n\nAbstract\nWe consider neural networks (NN) as discretizations 
 of continuous dynamical systems. There are two relevant systems: the NN ar
 chitecture on one side and the gradient flow for optimizing the parameters
  on the other. In both cases\, stability properties of the discretization 
 methods can be relevant e.g. for adversarial robustness. Moreover\, to pre
 vent the problem of exploding or vanishing gradients\, it is common to con
 sider NNs whose feature space and/or parameter space is a Riemannian manif
 old. We investigate the stability of the explicit Euler method defined on 
 Riemannian manifolds\, namely the Geodesic Explicit Euler (GEE). We provid
 e a general sufficient condition which ensures stability in any Riemannian
  manifold. Whenever the manifold has constant sectional curvature\, this c
 ondition can be turned into a rule for choosing the stepsize.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/63/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Brendan Pass (University of Alberta)
DTSTART:20250303T230000Z
DTEND:20250304T000000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/64
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/64/">An ODE characterization of regularized optimal transport and v
 ariants with linear constraints</a>\nby Brendan Pass (University of Albert
 a) as part of SFU Mathematics of Computation\, Application and Data ("MOCA
 D") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nI will discu
 ss various joint works with Luca Nenna and PhD student Joshua Hiew. We sho
 w that entropically regularized optimal transport with discrete marginals 
 and general cost functions can be characterized by a well-posed ordinary d
 ifferential equation.   The techniques adapt easily to a wide range of var
 iants of optimal transport\, with additional linear constraints\, includin
 g multi-marginal optimal transport and martingale optimal transport.  For 
 all of these problems\, the ODE can be solved by standard schemes\, yieldi
 ng a new computational method.  This method has the advantage of simultane
 ously yielding the solution for all values of the regularization parameter
 .\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/64/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert John Baraldi (Sandia National Laboratories)
DTSTART:20250310T220000Z
DTEND:20250310T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/65
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/65/">A Nonsmooth Trust-Region Framework for Applications in Data Sc
 ience and PDE Constrained Optimization</a>\nby Robert John Baraldi (Sandia
  National Laboratories) as part of SFU Mathematics of Computation\, Applic
 ation and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nA
 bstract\nWe introduce an inexact trust-region method for efficiently solvi
 ng a class of problems in which the objective is the sum of a smooth\, non
 convex function and a nonsmooth\, convex function. Such objectives are pe
 rvasive in the literature\, with examples being machine learning\, basis 
 purs
 uit\, inverse problems\, and topology optimization. The inclusion of nonsm
 ooth regularizers and constraints is critical\, as they often preserve phy
 sical properties or promote sparsity in the control. \nEnforcing these pro
 perties in an efficient manner is critical given the computationally inten
 sive nature of solving PDEs or of machine learning applications. We develop
  a novel trust-region method to minimize the sum of a smooth nonconvex fun
 ction and a nonsmooth convex function. Our method is unique in that it per
 mits and systematically controls the use of inexact objective function and
  derivative evaluations. When using a quadratic Taylor model for the trust
 -region subproblem\, our algorithm is an inexact\, matrix-free proximal Ne
 wton-type method that permits indefinite Hessians. Moreover\, we provide e
 xtensions of this method to adaptive mesh refinement\, stochastic optimiza
 tion as well as multilevel procedures. We prove global convergence of our
  method in Hilbert space and demonstrate its efficacy on examples from da
 ta science and PDE-constrained optimization.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/65/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Astrid Herremans (KU Leuven)
DTSTART:20250317T220000Z
DTEND:20250317T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/66
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/66/">Function Approximation with Numerical Redundancy</a>\nby Astri
 d Herremans (KU Leuven) as part of SFU Mathematics of Computation\, Applic
 ation and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nA
 bstract\nIn function approximation\, it is standard to assume the availabi
 lity of an orthonormal basis for computations\, ensuring that numerical er
 rors are negligible. However\, this assumption is often unmet in practice.
  For instance\, multivariate approximation schemes might use basis functio
 ns defined on a tensor-product domain\, while the function to be approxima
 ted only exists on an irregular subdomain. When restricted to such a subdo
 main\, the basis loses its orthogonality. This work discards the orthogona
 lity assumption\, enabling more flexible design of computational methods t
 hrough the use of non-orthogonal spanning sets. To precisely identify when 
 numerical phenomena become significant\, we introduce the concept of numer
 ical redundancy. A set of functions is numerically redundant if it spans a
  lower-dimensional space when analysed numerically rather than analyticall
 y. This talk explores the key aspects of computing with such numerically r
 edundant spanning sets\, including convergence behaviour\, solver requirem
 ents\, and data efficiency.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/66/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Elena Celledoni (NTNU)
DTSTART:20250407T220000Z
DTEND:20250407T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/67
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/67/">Shape analysis\, structure preservation and deep learning</a>\
 nby Elena Celledoni (NTNU) as part of SFU Mathematics of Computation\, App
 lication and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n
 \nAbstract\nShape analysis is a framework for treating complex data and o
 btaining metrics on spaces of data. Examples are spaces of unparametrized
  curves\, time-signals\, surfaces and images. In this talk we discuss str
 ucture preservation and deep learning for classifying\, analysing and man
 ipulating shapes.\nA computationally demanding task for estimating distan
 ces between shapes\, e.g. in object recognition\, is the computation of o
 ptimal reparametrizations. This is an optimisation problem on the infinit
 e-dimensional group of orientation-preserving diffeomorphisms.\nWe approx
 imate diffeomorphisms with neural networks and apply the optimal control 
 and dynamical systems point of view to deep learning. We will discuss use
 ful geometric properties in this context\, e.g. reparametrization invaria
 nce of the distance function and the inherent geometric structure of the 
 data.\nAnother interesting set of related problems arises when learning d
 ynamical systems from (human motion) data.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/67/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Brynjulf Owren (NTNU)
DTSTART:20250416T190000Z
DTEND:20250416T200000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/68
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/68/">A dynamical systems approach for designing stable neural netwo
 rks on Euclidean spaces and Riemannian manifolds.</a>\nby Brynjulf Owren (
 NTNU) as part of SFU Mathematics of Computation\, Application and Data ("M
 OCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nRecently\
 , Sherry et al. (2024) reconsidered the pioneering work of Dahlquist and J
 eltsch (1979) on circle-contractivity for the study of neural networks. Th
 is theory can be used to analyse and improve the robustness of architectur
 es that are devised by a dynamical systems approach.\nThe main idea is to 
 start with a continuous dynamical system which satisfies a certain monoton
 icity condition. Then we need to discretize the system in a way that prese
 rves the non-expansive behavior of the associated flow. The theory is old\
 , but not necessarily widely known because Dahlquist and Jeltsch only publ
 ished the results in the form of a preprint. The application to neural net
 works is new as far as we know\, and we shall present some results and exa
 mples from Sherry et al. (2024).\nThe importance of neural networks defin
 ed on Riemannian manifolds seems to be increasing\, and there is a need t
 o develop the theory of non-expansive numerical methods also in such a se
 tting.\nWe
  present some ideas from Arnold et al. (2024) where a few simple numerical
  methods for Riemannian manifolds are studied. We consider whether these m
 ethods can be non-expansive when applied to non-expansive vector fields. F
 or the geodesic implicit Euler method\, which also features in the proximal
  gradient method for optimisation\, we find that its behaviour is strongly
  dependent on the sectional curvature of the manifold. As opposed to the E
 uclidean case\, we now also have to be careful about whether the nonlinear
 equations to be solved in each time step have a unique solution or not.\n\
 nArnold\, Celledoni\, Çokaj\, Owren\, Tumiotto: B-stability of numerical 
 integrators on Riemannian manifolds. Journal of Computational Dynamics\, 2
 024\, 11(1): 92-107. doi: 10.3934/jcd.2024002\n\nDahlquist and Jeltsch: Ge
 neralized disks of contractivity for explicit and implicit Runge-Kutta met
 hods.\nDept. of Numerical Analysis and Computer Science\, The Royal Instit
 ute of Technology\, Stockholm\, Report TRITA-NA-7906\, 1979.\n\nSherry\, 
 Celledoni\, Ehrhardt\, Murari\, Owren\, Schönlieb: Designing Stable Neura
 l Networks using Convex Analysis and ODEs\, Physica D: Nonlinear Phenomena
 \, (463) 2024\, Paper No. 134159\, 13 pp.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/68/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jay Gopalakrishnan (Portland State University)
DTSTART:20250411T220000Z
DTEND:20250411T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/69
DESCRIPTION:by Jay Gopalakrishnan (Portland State University) as part of S
 FU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\n
 Lecture held in SFU West Mall 2830.\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/69/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Palmer (Harvard University)
DTSTART:20250522T220000Z
DTEND:20250522T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/70
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/70/">From Geometry Processing to Topological Defects and Beyond</a>
 \nby David Palmer (Harvard University) as part of SFU Mathematics of Compu
 tation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 a
 nd Hybrid.\n\nAbstract\nPractical problems from computer graphics\, comput
 er vision\, and computational engineering reveal surprising connections to
  the physics of crystals\, knot theory\, minimal surfaces\, and algebraic 
 geometry. Borrowing tools from math and physics helps us devise more robus
 t and efficient algorithms\, and conversely\, computational exploration wi
 th these tools can provide mathematical insight and elucidate new theoreti
 cal questions.\n\nIn optimization over surfaces\, local methods can get st
 uck when the incorrect topology is chosen at initialization. Current relax
 ation\, an idea borrowed from the analysis of minimal surfaces\, provides 
 an alternative convex language for surface optimization that avoids these 
 barriers. This idea inspires our new representation\, DeepCurrents\, for l
 earning families of surfaces with boundary.\n\nNext we turn to computation
 al meshing\, an essential geometric prerequisite to many techniques for si
 mulating continuous physical systems. Surprisingly\, meshing itself involv
 es structures analogous to topological defects found in physics\, and thes
 e defects are at the heart of what makes meshing problems challenging. Thr
 ough exploring the geometry of defects\, we devise two different approache
 s to surmounting these barriers\, based on current relaxation and semidefi
 nite relaxation\, respectively.\n\nThese examples serve as a microcosm of 
 how thinking carefully about the geometry and topology of optimization lan
 dscapes can unlock more robust and reliable algorithms\, suggesting a path
  forward in interdisciplinary applied geometry.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/70/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kthim Imeri
DTSTART:20250530T220000Z
DTEND:20250530T230000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/71
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/71/">Spectral Solutions to Robin Problems using Steklov Eigenfuncti
 ons and their Relations with the Smoothness of Domains</a>\nby Kthim Imeri
  as part of SFU Mathematics of Computation\, Application and Data ("MOCAD"
 ) Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nBoundary-valu
 e problems for the Laplace operator with Dirichlet or Robin boundary cond
 itions can be solved via a spe
 ctral series of Steklov eigenfunctions\, which converges exponentially fas
 t for smooth domains and data. The rate at which the Steklov eigenfunction
 s themselves can be approximated numerically depends critically on the bou
 ndary’s regularity.\n\nKey idea: Reformulate the boundary-value problem 
 so that the solution is recovered from a rapidly converging series of Stek
 lov modes.\n\nTheoretical Insights: On smoothly shaped domains (with smoot
 h boundary data)\, the series converges exponentially\, requiring very few
  terms for high accuracy. Even for irregular domains or rough data\, the m
 ethod retains algebraic (polynomial-rate) convergence.\n\nNumerical Implem
 entation: We present three complementary schemes for computing Steklov eig
 enfunctions and assembling the spectral expansion.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/71/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Brendan Keith (Brown University)
DTSTART:20251009T203000Z
DTEND:20251009T213000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/72
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/72/">Proximal Galerkin: A Unified Framework for Variational Problem
 s with Inequality Constraints</a>\nby Brendan Keith (Brown University) as 
 part of SFU Mathematics of Computation\, Application and Data ("MOCAD") Se
 minar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nThis talk presents
  the Proximal Galerkin (PG) method\, a high-order numerical method for sol
 ving variational problems with inequality constraints. PG combines two fou
 ndational ideas from applied mathematics: Galerkin discretizations of part
 ial differential equations and Bregman proximal point algorithms for nonsm
 ooth or constrained optimization. Each iteration of the method solves a re
 gularized subproblem formulated as a nonlinear saddle-point system. Concep
 tually\, PG is a discretized gradient flow within a finite-dimensional fun
 ction space\, such as a finite element subspace\, yielding robust and conv
 ergent solution approximations. The unified framework systematically handl
 es a broad class of variational inequalities\, enabling high-order\, const
 raint-preserving solutions without the need for specialized basis function
 s. This talk will outline the theoretical foundations of PG\, highlight it
 s connections to convex analysis\, and showcase recent applications in con
 tact mechanics\, fracture\, and multi-phase flows\, among others.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/72/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexandre Girouard (Université Laval)
DTSTART:20250912T223000Z
DTEND:20250912T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/73
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/73/">The exterior Steklov problem for Euclidean domains</a>\nby Ale
 xandre Girouard (Université Laval) as part of SFU Mathematics of Computat
 ion\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 and 
 Hybrid.\n\nAbstract\nWe investigate the Steklov eigenvalue problem in the 
 exterior of a bounded Euclidean domain. In particular\, we prove the equiv
 alence of several formulations of this problem previously proposed in the
  literature. We derive geometric eigenvalue inequalities and examine othe
 r properties of the exterior Steklov eigenvalues and eigenfunctions. Our 
 results reveal that while there are many similarities between the exterio
 r and the interior Steklov problems\, certain spectral phenomena differ s
 ignificantly. We also emphasise the distinctions between the properties o
 f the exterior Steklov problem in two dimensions and in higher dimensions
 .\n\nJoint work with Lukas Bundrock\, Denis Grebenkov\, Michael Levitin a
 nd Iosif Polterovich.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/73/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan Kutz (University of Washington)
DTSTART:20251114T223000Z
DTEND:20251114T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/74
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/74/">Modern Sensing and Physics Discovery with Machine Learning</a>
 \nby Nathan Kutz (University of Washington) as part of SFU Mathematics of 
 Computation\, Application and Data ("MOCAD") Seminar\n\nLecture held in AQ
 3149.\n\nAbstract\nSensing is a universal task in science and engineering.
  Downstream tasks from sensing include learning dynamical models\, inferri
 ng full state estimates of a system (system identification)\, control deci
 sions\, and forecasting. These tasks are exceptionally challenging to achi
 eve with limited sensors\, noisy measurements\, and corrupt or missing dat
 a. Existing techniques typically use current (static) sensor measurements 
 to perform such tasks and require principled sensor placement or an abunda
 nce of randomly placed sensors. In contrast\, we propose a SHallow REcurre
 nt Decoder (SHRED) neural network structure which incorporates (i) a recur
 rent neural network (LSTM) to learn a latent representation of the tempora
 l dynamics of the sensors\, and (ii) a shallow decoder that learns a mappi
 ng between this latent representation and the high-dimensional state space
 . By explicitly accounting for the time-history\, or trajectory\, of the s
 ensor measurements\, SHRED enables accurate reconstructions with far fewer
  sensors\, outperforms existing techniques when more measurements are avai
 lable\, and is agnostic towards sensor placement. In addition\, a compress
 ed representation of the high-dimensional state is directly obtained from 
 sensor measurements\, which provides an on-the-fly compression for modelin
 g physical and engineering systems. Forecasting is also achieved from the 
 sensor time-series data alone\, producing an efficient paradigm for predic
 ting temporal evolution with an exceptionally limited number of sensors. I
 n the example cases explored\, including turbulent flows\, complex spatio-
 temporal dynamics can be characterized with exceedingly limited sensors th
 at can be randomly placed with minimal loss of performance.\n\nThis event 
 is a joint Physics Colloquium and MOCAD seminar.  Please note 2:30 p.m. st
 art time.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/74/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sheehan Olver (Imperial College London)
DTSTART:20250922T223000Z
DTEND:20250922T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/75
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/75/">Numerical Analysis Meets Representation Theory</a>\nby Sheehan
  Olver (Imperial College London) as part of SFU Mathematics of Computation
 \, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hyb
 rid.\n\nAbstract\nIn this talk we see how representation theory can be use
 d in numerical methods for partial differential equations (PDEs) and how n
 umerics can give more efficient methods for computational problems in repr
 esentation theory. In particular\, we will see that representation theory 
 tells us the ways symmetry can present itself\, and building that informat
 ion into discretisations of PDEs leads to trivial parallelisation. We will
  also see that numerical linear algebra can be used to construct a polynom
 ial time algorithm for decomposing representations of the symmetric group.
  Finally\, we discuss potential applications of the ideas to the Schrödi
 nger equation with multiple particles.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/75/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Charles Cheung (NVIDIA)
DTSTART:20250908T223000Z
DTEND:20250908T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/77
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/77/">From PhysicsML to Physical AI</a>\nby Charles Cheung (NVIDIA) 
 as part of SFU Mathematics of Computation\, Application and Data ("MOCAD")
  Seminar\n\nLecture held in K9509.\n\nAbstract\nMachine learning is transf
 orming the way we approach the laws of nature. PhysicsML — the fusion of
  machine learning with physics-based modeling — is rapidly advancing fie
 lds from computational biology and climate forecasting to product design a
 nd high-fidelity CFD simulations. These breakthroughs are pushing the boun
 daries of what we can model\, predict\, and design. But where do we go fro
 m here?\n\nIn this talk\, we explore the emerging frontier of Physical AI:
  intelligent systems that understand and interact with the physical world 
 through the combined power of machine learning\, physics-based models\, an
 d physically accurate simulations. We will share NVIDIA’s vision for ena
 bling this future—where PhysicsML serves as the engine\, simulation plat
 forms provide realistic virtual worlds\, and the results drive the next ge
 neration of robotics\, autonomous vehicles\, and beyond.\nThe era of machi
 nes that not only think but also reason about the physical world has begun
 . Let’s see what comes next.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/77/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jethro Warnett (Oxford)
DTSTART:20251017T223000Z
DTEND:20251017T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/78
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/78/">CANCELLED - Well-posedness and mean-field limit estimate of a 
 consensus-based algorithm for multiplayer games</a>\nby Jethro Warnett (Ox
 ford) as part of SFU Mathematics of Computation\, Application and Data ("M
 OCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nRecently\
 , a derivative-free consensus-based particle method was introduced that fi
 nds the Nash equilibrium of non-convex multiplayer games\, for which glob
 al exponential convergence was proved in the sense of mean-field law. We 
 provide a quantitative estimate of the mean-field limit with respect to t
 he number of particles\, and establish the well-posedness of both th
 e finite particle model and the corresponding mean-field dynamics.\n\nDue 
 to a medical emergency\, today's 3:30 p.m MOCAD Seminar is cancelled.  We 
 are attempting to reschedule (and it seems another opportunity might be th
 e same time\, 3:30 p.m.\, Monday afternoon).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/78/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cliff Stoll (Acme Klein Bottles)
DTSTART:20251017T163000Z
DTEND:20251017T173000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/79
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/79/">Glass in Math\; Math in Glass</a>\nby Cliff Stoll (Acme Klein 
 Bottles) as part of SFU Mathematics of Computation\, Application and Data 
 ("MOCAD") Seminar\n\nLecture held in ASB10900 and Hybrid.\n\nAbstract\nGla
 ss Klein bottles?  Sure!  How about knots and knot-complements? A Boy's S
 urface?  Plenty of topological manifolds work well in glass.\n\nWith goo
 d fortune\, SFU's glass-blower\, Lucas Clarke\, will demonstrate his art
  in making mathematical manifolds in glass.\n\nC'mon over for hot math a
 nd hot
  glass!\n\nThis expository MOCAD seminar is a special wide-audience semina
 r held in ASB10900 (the Big Data Hub lecture theatre).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/79/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrew Warren (University of British Columbia)
DTSTART:20251121T233000Z
DTEND:20251122T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/80
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/80/">Unsupervised learning of 1d branching structures</a>\nby Andre
 w Warren (University of British Columbia) as part of SFU Mathematics of Co
 mputation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K950
 9.\n\nAbstract\nSuppose we have unlabeled data where we believe there is a
 n unknown\, latent branching (or tree-like) structure. Can we infer that s
 tructure? This type of unsupervised learning problem arises in a wide rang
 e of biological applications\, including in evolutionary and developmental
  settings. \n\nIn this talk\, I will present a variational approach to thi
 s problem\, whereby the latent branching structure can be estimated by way
  of a discretization of the "average-distance problem" of Buttazzo\, Oudet
 \, and Stepanov. The resulting estimator is shown to be consistent in the 
 zero-noise limit\, and can be cheaply approximated numerically by a Lloyd-
  or EM-type algorithm. This work is joint with Anton Afanassiev\, Forest K
 obayashi\, and Geoff Schiebinger.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/80/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Rowbottom (Cambridge)
DTSTART:20251024T223000Z
DTEND:20251024T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/81
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/81/">Physics inspired GNNs and some applications in scientific comp
 uting</a>\nby James Rowbottom (Cambridge) as part of SFU Mathematics of Co
 mputation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K950
 9.\n\nAbstract\nIn this talk I will present a series of works derived from
  the framework of physics inspired graph neural networks (GNN). The centra
 l premise is that a GNN can be seen as the discretisation of a learnabl
 e dynamical system over a graph\, which allows us to leverage the stand
 ard tools of numerical analysis to design and optimise in this model sp
 ace. Firstly\, I w
 ill demonstrate how this provides desirable architectural properties which
  lead to SOTA performance in common GNN node classification tasks. In the 
 latter part of the talk\, I will show how the same architectures emerge as
  natural candidates in a range of applications found in scientific computi
 ng including adaptive mesh refinement for finite element methods and mesh 
 based graph inverse problems.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/81/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deanna Needell (UCLA)
DTSTART:20251128T233000Z
DTEND:20251129T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/82
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/82/">Fairness\, theory\, and sampling paradigms in machine learning
 </a>\nby Deanna Needell (UCLA) as part of SFU Mathematics of Computation\,
  Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstr
 act\nIn this talk\, we will discuss several areas of recent work centered 
 around the themes of fairness and foundations in machine learning as well 
 as highlight the challenges in this area. We will discuss recent results i
 nvolving linear algebraic tools for learning\, such as methods in non-nega
 tive matrix factorization that include tailored approaches for fairness. T
 hen\, we will discuss new foundational results that theoretically justify 
 phenomena like benign overfitting in neural networks.  Lastly\, we will me
 ntion some recent results on observational multiplicity\, and how those ca
 n be utilized to improve equity. Throughout the talk\, we will include exa
 mple applications from collaborations with community partners\, using mach
 ine learning to help organizations with fairness and justice goals. This t
 alk includes joint work with Erin George\, Kedar Karhadkar\, Lara Kassab\,
  and Guido Montufar.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/82/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Simone Brugiapaglia (Concordia University)
DTSTART:20260410T223000Z
DTEND:20260410T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/83
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/83/">From compression to depth: generative compressive sensing and 
 deep greedy unfolding for signal reconstruction</a>\nby Simone Brugiapagli
 a (Concordia University) as part of SFU Mathematics of Computation\, Appli
 cation and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nS
 ince its inception in the early 2000s\, compressive sensing has become a w
 ell-established paradigm for efficient signal recovery\, with applications
  ranging from medical imaging to scientific computing. More recently\, dat
 a-driven reconstruction methods based on deep neural networks have attract
 ed considerable attention and shown great promise as an alternative approa
 ch. In this talk\, we will review recent progress in signal reconstruction
  techniques that combine principles from compressive sensing and deep lear
 ning. First\, we will discuss recent advances in generative compressive se
 nsing\, where the traditional sparsity prior is replaced by the assumption
  that the signal to be reconstructed lies in the range of a deep generativ
 e neural network. Second\, we will explore deep greedy unfolding\, which i
 nvolves designing deep neural network architectures by "unrolling" the ite
 rations of a sparse recovery algorithm onto the layers of a trainable neur
 al network. In both cases\, we will present numerical results in tandem wi
 th theoretical guarantees.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/83/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ricardo Baptista (University of Toronto)
DTSTART:20260417T223000Z
DTEND:20260417T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/84
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/84/">Processing Language\, Images and Other Data Modalities</a>\nby
  Ricardo Baptista (University of Toronto) as part of SFU Mathematics of Co
 mputation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K950
 9.\n\nAbstract\nA fundamental problem in artificial intelligence is how to
  simultaneously deploy data from different sources\, such as audio\, image
 s\, text\, and video\, collectively known as multimodal data. In this talk
 \, I will present a mathematical framework for studying this question\, fo
 cusing primarily on text and images. I will begin by describing how large 
 language models (LLMs) operate\, addressing the challenging issue of using
  real-number algorithms to process language. In particular\, I will explai
 n next-token prediction\, the core of current LLM methodology. I will then
  focus on the canonical problem of measuring alignment between image and t
 ext data (contrastive learning). Finally\, I will describe how images can 
 be generated from text prompts (conditional generative modeling). From a m
 athematical perspective\, a unifying theme underlying this work is the min
 imization of divergences defined on spaces of probability measures. A seco
 nd key mathematical idea is the attention mechanism—a form of nonlinear 
 correlation between vector-valued sequences. I aim to explain these concep
 ts and their relevance to modern machine learning algorithms in an accessi
 ble fashion for a broad audience from the mathematical and computational s
 ciences.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/84/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Harish S. Bhat (UC Merced)
DTSTART:20260320T223000Z
DTEND:20260320T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/85
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/85/">Learning and Control Problems for Electron Dynamics</a>\nby Ha
 rish S. Bhat (UC Merced) as part of SFU Mathematics of Computation\, Appli
 cation and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nT
 o compute the quantum dynamics of a molecule's electrons\, one tractable w
 ay to proceed is via time-dependent density functional theory (TDDFT). TDD
 FT gives equations of motion that\, in principle\, yield the same electron
  density as the full but intractable time-dependent Schrodinger equation. 
 However\, there is one term in the TDDFT Hamiltonian whose functional form
  is unknown: the exchange-correlation potential (Vxc). This motivates the 
 idea of trying to learn Vxc (or\, at least\, an improved model of Vxc) fro
 m data. I will review progress on this problem that includes (i) generatio
 n of suitable training data\, (ii) direct learning of Vxc neural network m
 odels in one spatial dimension\, and (iii) PDE-constrained optimization te
 chniques to learn Vxc in two spatial dimensions. A key ingredient in (ii) 
 and (iii) will be the adjoint method\, which connects our work to quantum 
 optimal control. We will conclude by briefly describing how to use the adj
 oint method (together with small neural networks) to solve quantum optimal
  control problems for molecules driven by electric fields.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/85/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stefania Fresca (University of Washington)
DTSTART:20260206T233000Z
DTEND:20260207T003000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/86
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/86/">Handling geometric variability and multi-scale optimization in
  surrogate models</a>\nby Stefania Fresca (University of Washington) as pa
 rt of SFU Mathematics of Computation\, Application and Data ("MOCAD") Semi
 nar\n\nLecture held in K9509.\n\nAbstract\nSolving differential problems u
 sing full order models (FOMs)\, such as the finite element method\, can re
 sult in prohibitive computational costs\, particularly in real-time simul
 ations and multi-query routines. Surrogate modeling aims to replace FOMs w
 ith models characterized by much lower complexity but still able to expres
 s the physical features of the system under investigation.\n\nIn many appl
 ications\, the available data are inherently multi-resolution\, either due
  to geometric variability\, where solutions are defined on parametrized do
 mains\, or due to the need to capture phenomena across different spatial s
 cales. Motivated by this observation\, two complementary approaches to sur
 rogate modeling for parametrized PDEs are introduced and analyzed.\n\nFirs
 t\, Continuous Geometry-Aware DL-ROMs (CGA-DL-ROMs) are introduced. The sp
 ace-continuous formulation of the proposed architecture makes it possibl
 e to deal with multi-resolution datasets\, which commonly arise in the p
 resence of geo
 metrical parametrizations. Furthermore\, CGA-DL-ROMs are endowed with a st
 rong inductive bias that explicitly accounts for geometric parameters\, al
 lowing the distinct impact of geometric variability on the solution manifo
 ld to be captured. This geometrical awareness leads to improved compressio
 n properties and enhanced overall performance of the surrogate model.\n\nS
 econd\, a Multi-Level Monte Carlo (MLMC) training strategy for operator le
 arning is proposed\, exploiting hierarchies of resolutions of function d
 iscretizations. The approach combines inexpensive gradient estimates obt
 ained
  from coarse-resolution data with corrective contributions from a limited 
 number of fine-resolution samples\, thereby reducing the overall training 
 cost while preserving accuracy. The MLMC training framework is architectur
 e-agnostic and applicable to any architecture capable of handling multi-re
 solution data. Numerical experiments highlight the existence of a Pareto t
 rade-off between accuracy and computational cost governed by the distribut
 ion of samples across resolution levels.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/86/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jordan Sawchuk (SFU)
DTSTART:20260313T223000Z
DTEND:20260313T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/87
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/87/">A (nearly) random walk through thermodynamic geometry: Frictio
 n\, optimal transport\, and curvature</a>\nby Jordan Sawchuk (SFU) as part
  of SFU Mathematics of Computation\, Application and Data ("MOCAD") Semina
 r\n\nLecture held in K9509.\n\nAbstract\nMinimizing energy dissipation in 
 driven stochastic systems is a fundamental goal in nonequilibrium thermody
 namics. In the linear-response (slow driving) regime\, this becomes a prob
 lem of Riemannian geometry: The control space is equipped with a metric (t
 he "generalized friction tensor") and optimal protocols are geodesics. Thi
 s talk follows one physicist's (nearly) random walk through the mathematic
 al landscape in an effort to understand this thermodynamic geometry. \n\nI
  will demonstrate that the generalized friction tensor is deeply connected
  to the network topology of the controlled system\, revealing unexpected l
 inks to previously established graph-theoretic geometries. Treating the fr
 iction tensor as a metric on the probability simplex\, I show that the met
 ric tensor is directly related to the mean first-passage times between sta
 tes\, and that dissipation is equivalently seen as a discrete $L^2$-Wasser
 stein transport cost or as Joule heating in a resistor network. \n\nFinall
 y\, I will share recent results\, open questions\, and grand ambitions reg
 arding an extrinsic geometry of control. I will discuss how the "cost of c
 onstraint" can be framed using the second fundamental form and normal curv
 ature\, how graph automorphisms map onto manifold isometries\, and highlig
 ht how geometric stability analysis (via Jacobi fields) can be used to pre
 dict when symmetry-breaking protocols become energetically optimal.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/87/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Olivier Lafitte (Université Sorbonne Paris Nord)
DTSTART:20260427T223000Z
DTEND:20260427T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/88
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/88/">Resonances in a cold plasma</a>\nby Olivier Lafitte (Universit
 é Sorbonne Paris Nord) as part of SFU Mathematics of Computation\, Applic
 ation and Data ("MOCAD") Seminar\n\nInteractive livestream: https://sfu.zo
 om.us/j/88232824688?pwd=SSwf2Nk28PAmzRguQcYrdLYaXKHml9.1\nLecture held in 
 K9509.\n\nAbstract\nWe consider a magnetized plasma (in the case of a tok
 amak) where the density of ions $n_0$ as well as the imposed vertical mag
 netic field $B_0$ depend on the horizontal variable $x$. The linearized s
 ystem of Euler-Maxwell equations (system of 10 PDEs of order 1) around th
 e solution $(E\,B\, v\,n)_0=(0\, B_0(x)\,0\, n_0(x))$ is characterized by
  the two frequencies denoted by $\\omega_p(x)$ and $\\omega_c(x)$ (respec
 tively the plasma and cyclotron frequencies). Classically\, the cyclotron
  frequency is associated with Landau damping\, but this frequency does no
 t appear to be a resonance in the cold plasma model (as we prove). Instea
 d\, another frequency of interest\, called the hybrid frequency $\\omega_
 h(x)=\\sqrt{\\omega_p^2(x)+\\omega_c^2(x)}$\, is a resonance for the syst
 em: at any point $x_h$ where $\\omega$\, the imposed oscillation frequenc
 y\, is equal to $\\omega_h(x_h)$\, we have energy transfer (from the elec
 trons to the electric field). We prove it using Bessel functions for the s
 tudy of the corresponding linear system of ODEs near $x_h$.\n\nJoint work
  with Bruno Despres (Sorbonne Université) and Lise-Marie Imbert-Gerard (
 University of Arizona).\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/88/
URL:https://sfu.zoom.us/j/88232824688?pwd=SSwf2Nk28PAmzRguQcYrdLYaXKHml9.1
END:VEVENT
BEGIN:VEVENT
SUMMARY:Laura Weidensager (Simon Fraser University)
DTSTART:20260327T223000Z
DTEND:20260327T233000Z
DTSTAMP:20260414T235650Z
UID:AppliedMath/89
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Appli
 edMath/89/">Fast high-dimensional approximation: ANOVA methods for wavelet
 s and random Fourier features</a>\nby Laura Weidensager (Simon Fraser Univ
 ersity) as part of SFU Mathematics of Computation\, Application and Data (
 "MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nIn this talk\, we 
 focus on the problem of reconstructing a multivariate function from discre
 te d-dimensional samples. Beyond achieving accurate function recovery\, we
  aim to enhance interpretability by identifying how individual variables a
 nd their interactions influence the target function. To this end\, we deve
 lop several efficient hybrid methods that combine the ANOVA decomposition\
 , wavelet techniques\, and random Fourier features. The multi-resolution c
 apabilities of wavelets and the scalability of random Fourier features\, p
 aired with the interpretability provided by the ANOVA decomposition\, enab
 le a robust framework for high-dimensional function approximation. The app
 roaches in this talk address both computational efficiency and transpare
 ncy.\n\nThe total approximation error is influenced by three main componen
 ts. First\, the ANOVA truncation to a function of low effective dimension 
 is the basis for the construction of ANOVA-boosting algorithms\, which exp
 loit the structure of the function. Second\, the projection onto a finite-
 dimensional subspace is determined by the choice of basis functions. To an
 alyze the projection error\, we explore and discuss wavelet characterizati
 ons of functions in certain function spaces\, like Sobolev and Besov spac
 es. Finally\, for the regression from samples\, we give error bounds for t
 he least squares approximation\, which asymptotically coincide with the b
 ehavior of the projection error.\n
LOCATION:https://stable.researchseminars.org/talk/AppliedMath/89/
END:VEVENT
END:VCALENDAR
