BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Nicholas J. Higham (University of Manchester\, UK)
DTSTART:20200429T140000Z
DTEND:20200429T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /2/">Are Numerical Linear Algebra Algorithms Accurate at Extreme Scale and
  at Low Precisions?</a>\nby Nicholas J. Higham (University of Manchester\,
  UK) as part of E-NLA - Online seminar series on numerical linear algebra\
 n\n\nAbstract\nThe advent of exascale computing will bring the capability 
 to solve dense linear systems of order $10^8$. At the same time\, computer
  hardware is increasingly supporting low precision floating-point arithmet
 ics\, such as the IEEE half precision and bfloat16 arithmetics.  The stand
 ard rounding error bound for the inner product of two $n$-vectors $x$ and 
 $y$ is $|fl(x^Ty) - x^Ty| \\le n u |x|^T|y|$\,   where $u$ is the unit rou
 ndoff\, and the bound is approximately attainable.  This bound provides us
 eful information only if $nu < 1$.  Yet $nu > 1$ for exascale-size problem
 s solved in single precision and also for problems of order $n > 2048$ sol
 ved in half precision. Standard error bounds for matrix multiplication\, L
 U factorization\, and so on\, are equally uninformative in these situation
 s. Yet the supercomputers in the TOP500 are there by virtue of having succ
 essfully solved linear systems of orders up to $10^7$\, and deep learning 
 implementations routinely use half precision with apparent success.\n\nHav
 e we reached the point where our techniques for analyzing rounding errors\
 , honed over 70 years of digital computation\,  are unable to predict the 
 accuracy of numerical linear algebra computations that are now routine? I 
 will show that the answer is "no": we can understand the behaviour of extr
 eme-scale and low accuracy computations. The explanation lies in algorithm
 ic design techniques (both new and old) that help to reduce error growth a
 long with a new probabilistic approach to rounding error analysis.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michele Benzi (Scuola Normale Superiore Pisa\, Italy)
DTSTART:20200506T140000Z
DTEND:20200506T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /3/">Nonlocal dynamics on networks via fractional graph Laplacians: theory
  and numerical methods</a>\nby Michele Benzi (Scuola Normale Superiore Pis
 a\, Italy) as part of E-NLA - Online seminar series on numerical linear al
 gebra\n\n\nAbstract\nNonlocal diffusive dynamics on large\, sparse network
 s can be modeled by means of systems of differential equations involving f
 ractional graph Laplacians. The solution of such systems leads to non-anal
 ytic matrix functions\, due to the singularity of the graph Laplacian. Off
 -diagonal decay estimates for these and related matrix functions will be p
 resented\, together with explicit (closed form) expressions for some simpl
 e but important examples. The case of directed networks (leading to nonsym
 metric Laplacians) will also be discussed.\n\nThe numerical approximation 
 of the dynamics can be implemented by means of Krylov subspace methods. Th
 e lack of smoothness of the underlying function suggests the use of ration
 al approximation techniques. Some results using a shift-and-invert approac
 h will be presented.\n\nApplications include the efficient exploration of 
 large spatial networks and consensus dynamics in multi-agent systems.\n\nT
 his is joint work with Daniele Bertaccini (U. of Rome ‘Tor Vergata’)\,
  Fabio Durastante (IAC-CNR)\, and Igor Simunec (Scuola Normale Superiore).\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Volker Mehrmann (Technische Universität Berlin\, Germany)
DTSTART:20200513T140000Z
DTEND:20200513T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /4/">Robustness of linear algebra properties for Port-Hamiltonian systems<
 /a>\nby Volker Mehrmann (Technische Universität Berlin\, Germany) as part
  of E-NLA - Online seminar series on numerical linear algebra\n\n\nAbstrac
 t\nPort-Hamiltonian systems are an important class of control systems that
  arise in all areas of science and engineering. When the system is lineari
 zed around a stationary solution one gets a linear port-Hamiltonian system
 . Despite the fact that the system looks unstructured at first sight\, it 
 has remarkable properties. Stability and passivity are automatic\; spectral
  structures for purely imaginary eigenvalues\, eigenvalues at infinity\,
  and even singular blocks in the Kronecker canonical form are very
  restricted\; furthermore\, the structure leads to fast and efficient
  iterative solution methods for associated linear systems. When
  port-Hamiltonian system
 s are subject to (structured) perturbations\, then it is important to dete
 rmine the smallest perturbations under which these properties are lost.
  The computation of these structured distances to instability\, n
 on-passivity\, or non-regularity\, is typically a very hard optimization p
 roblem. However\, in the context of port-Hamiltonian systems\, the computa
 tion becomes much easier and can even be implemented efficiently for large
  scale problems in combination with model reduction techniques. We will di
 scuss these distances and the computational methods and illustrate the res
 ults via an industrial problem in the context of noise reduction for disk 
 brakes.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ilse Ipsen (North Carolina State University\, USA)
DTSTART:20200520T140000Z
DTEND:20200520T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /5/">Probabilistic numerical linear solvers</a>\nby Ilse Ipsen (North Caro
 lina State University\, USA) as part of E-NLA - Online seminar series on n
 umerical linear algebra\n\n\nAbstract\nWe formulate iterative methods for 
 the solution of nonsingular linear systems as statistical inference proces
 ses by modeling the epistemic uncertainty in the iterates due to a limited
  computational budget. The goal is to obtain well-calibrated uncertainty  
 that is more insightful than traditional worst-case bounds\, and to produc
 e a  probabilistic description of the error that can be propagated coheren
 tly through a computational pipeline.\n\nOur Bayesian Conjugate Gradient M
 ethod (BayesCG) for real symmetric positive-definite linear systems posits
  a prior distribution for the solution\, and conditions on the finite amou
 nt of information obtained during the iterations to  produce a posterior d
 istribution that reflects the reduced uncertainty.  The following topics w
 ill be addressed:  (i) choice of prior for fast convergence and well-calib
 rated uncertainty\; (ii) error estimation through test statistics that mit
 igate the effect of BayesCG's nonlinear dependence on the solution\; and (
 iii) numerical stability to maintain positive semi-definiteness of the pos
 teriors\, and prevent convergence slowdown from loss of orthogonality
  in residuals and search directions.\n\nThis is joint work with Jon
  Cockayne (
 http://www.joncockayne.com/)\, Chris J. Oates (http://oates.work/)\, and T
 imothy W. Reid (https://math.sciences.ncsu.edu/people/twreid/).\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cleve Moler (MathWorks\, Inc.)
DTSTART:20200527T140000Z
DTEND:20200527T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /6/">The Evolution of "The Evolution of MATLAB"</a>\nby Cleve Moler (MathW
 orks\, Inc.) as part of E-NLA - Online seminar series on numerical linear 
 algebra\n\n\nAbstract\nWe show how MATLAB has evolved over more than 40 ye
 ars from a simple matrix calculator to a powerful technical computing envi
 ronment. We demonstrate several examples of MATLAB applications.  We concl
 ude with a discussion of current developments\, including machine learning
 \, automated driving and parallel computation.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nick Trefethen (University of Oxford)
DTSTART:20200603T140000Z
DTEND:20200603T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /7/">Vandermonde with Arnoldi</a>\nby Nick Trefethen (University of Oxford
 ) as part of E-NLA - Online seminar series on numerical linear algebra\n\n
 \nAbstract\nVandermonde matrices are exponentially ill-conditioned\, rende
 ring the familiar “polyval(polyfit)” algorithm for polynomial interpol
 ation and least-squares fitting ineffective at higher degrees. We show tha
 t Arnoldi orthogonalization fixes the problem.\n\nIt's remarkable how wide
 ly this trick is applicable.  Half a dozen examples will be presented.\n\n
 This is joint work with Pablo Brubeck and Yuji Nakatsukasa.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Martin Gander (University of Geneva)
DTSTART:20200610T140000Z
DTEND:20200610T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /8/">A Linear Algebra Approach to Time Parallelization: Parareal\, ParaExp
 \, ParaDiag\, ParaOpt and ParaStieltjes</a>\nby Martin Gander (University 
 of Geneva) as part of E-NLA - Online seminar series on numerical linear al
 gebra\n\n\nAbstract\nTime parallelization has been a very active research 
 area over the past decade\, largely because on massively parallel computer
  architectures\, parallelization in the spatial direction alone rarely
  suffices to take full advantage of such systems when solving evolution
  problems. T
 ime parallelization is however quite different from spatial parallelizatio
 n\, since information only propagates forward in time\, never backward. Ti
 me parallelization algorithms are often derived at the PDE level\, but whe
 never they are used\, they take the form of solvers for linear algebra pro
 blems. I will give in my presentation an introduction to such algorithms a
 t the linear algebra level\, starting with two simple but typical model pr
 oblems\, namely a heat equation and a transport equation. At the linear al
 gebra level\, these two problems look deceptively similar\, but time paral
 lel algorithms need different features when solving one or the other in pa
 rallel. I will explain the reason for this at the linear algebra level\, a
 nd then show how Parareal\, ParaExp and ParaDiag address them. If time per
 mits\, I will also briefly explain the newer classes of ParaOpt and ParaSt
 ieltjes algorithms.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mark Embree (Virginia Tech)
DTSTART:20200617T140000Z
DTEND:20200617T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /9/">Contour Integral Methods for Nonlinear Eigenvalue Problems: A Systems
  Theory Perspective</a>\nby Mark Embree (Virginia Tech) as part of E-NLA -
  Online seminar series on numerical linear algebra\n\n\nAbstract\nContour 
 integral methods for nonlinear eigenvalue problems seek to compute a subse
 t of the spectrum in a bounded region of the complex plane. We briefly sur
 vey this class of algorithms\, establishing a relationship to system reali
 zation techniques in control theory. This connection motivates new contour
  integral methods that build on recent developments in rational interpolat
 ion of dynamical systems. The resulting techniques\, which replace the usu
 al Hankel matrices with Loewner matrix pencils\,  incorporate general inte
 rpolation schemes and permit ready recovery of eigenvectors.  Numerical ex
 amples illustrate the potential of this approach.\n\nThis talk describes j
 oint work with Michael Brennan (MIT) and Serkan Gugercin (Virginia Tech).\
 n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Demmel (University of California at Berkeley)
DTSTART:20200624T140000Z
DTEND:20200624T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /10/">Communication-Avoiding Algorithms for Linear Algebra\, Machine Learn
 ing\, and Beyond</a>\nby James Demmel (University of California at Berkele
 y) as part of E-NLA - Online seminar series on numerical linear algebra\n\
 n\nAbstract\nAlgorithms have two costs: arithmetic and communication\, i.e
 . moving data between levels of a memory hierarchy or processors over a ne
 twork. Communication costs (measured in time or energy per operation) alre
 ady greatly exceed arithmetic costs\, and the gap is growing over time fol
 lowing technological trends. Thus our goal is to design algorithms that mi
 nimize communication. We present new algorithms that communicate asymptoti
 cally less than their classical counterparts\, for a variety of linear alg
 ebra and machine learning problems\, demonstrating large speedups on a var
 iety of architectures. Some of these algorithms attain provable lower boun
 ds on communication. We describe generalizations of these bounds\, and opt
 imal algorithms\, to arbitrary code that can be expressed as nested loops 
 accessing arrays\, and to account for arrays having different precisions.\
 n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tamara G. Kolda (Sandia National Laboratories)
DTSTART:20200701T140000Z
DTEND:20200701T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /11/">Practical Leverage-Based Sampling for Low-Rank Tensor Decomposition<
 /a>\nby Tamara G. Kolda (Sandia National Laboratories) as part of E-NLA - 
 Online seminar series on numerical linear algebra\n\n\nAbstract\nConventio
 nal algorithms for finding low-rank canonical polyadic (CP) tensor decompo
 sitions are unwieldy for large sparse tensors. The CP decomposition can be
  computed by solving a sequence of overdetermined least squares problems
  with special Khatri-Rao structure. In this work\, we present an
  application of rand
 omized algorithms to fitting the CP decomposition of sparse tensors\, solv
 ing a significantly smaller sampled least squares problem at each iteratio
 n with probabilistic guarantees on the approximation errors. Prior work ha
 s shown that sketching is effective in the dense case\, but the prior appr
 oach cannot be applied to the sparse case because a fast Johnson-Lindenstr
 auss transform (e.g.\, using a fast Fourier transform) must be applied in 
 each mode\, causing the sparse tensor to become dense. Instead\, we perfor
 m sketching through leverage score sampling\, crucially relying on the fac
 t that the structure of the Khatri-Rao product allows sampling from overes
 timates of the leverage scores without forming the full product or the cor
 responding probabilities. Naïve application of leverage score sampling is
  ineffective because we often have cases where a few scores are quite larg
 e\, so we propose a novel hybrid of deterministic and random leverage-scor
 e sampling which consistently yields improved fits. Numerical results on r
 eal-world large-scale tensors show the method is significantly faster than
  competing methods without sacrificing accuracy.  This is joint work with 
 Brett Larsen\, Stanford University.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Laura Grigori (INRIA Paris)
DTSTART:20200708T140000Z
DTEND:20200708T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /12/">Communication avoiding low rank matrix approximation\, a unified
  perspective on deterministic and randomized approaches</a>\nby Laura
  Grigori
  (INRIA Paris) as part of E-NLA - Online seminar series on numerical linea
 r algebra\n\n\nAbstract\nIn this talk we present a unified perspective on
  deterministic and randomized approaches for computing the low rank
  approximation of a matrix. We survey recent approaches that minimize
  communication and discuss a generalized LU factorization that unifies
  several existing algorithms. For this factorization we present an improv
 ed analysis which combines deterministic guarantees with sketching ensembl
 es satisfying Johnson-Lindenstrauss properties. We then extend some of the
  algorithms to computing the low rank approximation of a tensor by using H
 OSVD while also avoiding communication.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christian Lubich (University of Tübingen)
DTSTART:20200715T140000Z
DTEND:20200715T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /13/">Dynamical low-rank approximation</a>\nby Christian Lubich (Universit
 y of Tübingen) as part of E-NLA - Online seminar series on numerical line
 ar algebra\n\n\nAbstract\nThis talk reviews differential equations and the
 ir numerical solution on manifolds of low-rank matrices or of tensors with
  a rank structure such as tensor trains or general tree tensor networks. T
 hese low-rank differential equations serve to approximate\, in a data-comp
 ressed format\, large time-dependent matrices and tensors or multivariate 
 functions that are either given explicitly via their increments or are unk
 nown solutions to high-dimensional evolutionary differential equations\, w
 ith multi-particle time-dependent Schrödinger equations and kinetic equat
 ions such as Vlasov equations as noteworthy examples of applications.\n\nR
 ecently developed numerical time integrators are  based on splitting the p
 rojection onto the tangent space of the low-rank manifold at the current a
 pproximation. In contrast to all standard integrators\, these projector-sp
 litting methods are robust to the unavoidable presence of small singular v
 alues in the low-rank approximation. This robustness relies on exploiting 
 geometric properties of the manifold of low-rank matrices or tensors: in e
 ach substep of the projector-splitting algorithm\, the approximation moves
  along a flat subspace of the low-rank manifold. In this way\, high curvat
 ure due to small singular values does no harm.\n\nThis talk is based on wo
 rk done intermittently over the last decade with Othmar Koch\, Bart Vander
 eycken\, Ivan Oseledets\, Emil Kieri\, Hanna Walach and Gianluca Ceruti.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Ng (University of Hong Kong)
DTSTART:20200722T140000Z
DTEND:20200722T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /14/">Nonnegative low rank matrix approximation and its applications</a>\n
 by Michael Ng (University of Hong Kong) as part of E-NLA - Online seminar 
 series on numerical linear algebra\n\n\nAbstract\nIn this talk\, we study 
 nonnegative low rank matrix approximation (NLRM) for matrices arising from
  many data mining and pattern recognition applications. Our approach is di
 fferent from classical nonnegative matrix factorization (NMF) which has be
 en studied for some time. For a given nonnegative matrix\, the usual NMF a
 pproach is to determine two nonnegative low rank matrices such that the di
 stance between their product and the given nonnegative matrix is as small 
 as possible. However\, the proposed NLRM approach is to determine a nonneg
 ative low rank matrix such that the distance between this matrix and the g
 iven nonnegative matrix is as small as possible. There are two advantages.
  (i) The minimized distance can be smaller. (ii) The proposed method can i
 dentify important singular basis vectors\, while this information may not 
 be obtained in the classical NMF. Numerical results are reported to demons
 trate the performance of the proposed method. Several extensions and
  related research directions are also presented.\n\nThis talk describes
  joint work with Tai-X
 iang Jiang (Southwestern University of Finance and Economics)\, JunJun Pan
  (Universite de Mons)\, Guang-Jing Song (Weifang University) and Hong Zhu 
 (Jiangsu University).\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Keyes (KAUST)
DTSTART:20200909T140000Z
DTEND:20200909T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /15/">Data-sparse Linear Algebra Algorithms for Large-scale Applications o
 n Emerging Architectures</a>\nby David Keyes (KAUST) as part of E-NLA - On
 line seminar series on numerical linear algebra\n\n\nAbstract\nA tradition
 al goal of algorithmic optimality\, squeezing out operations\, has been su
 perseded because of evolution in architecture. Algorithms must now squeeze
  memory\, data transfers\, and synchronizations\, while extra operations o
 n locally cached data cost relatively little time or energy. Hierarchicall
 y low-rank matrices realize a rarely achieved combination of optimal stora
 ge complexity and high computational intensity in approximating a wide cla
 ss of formally dense operators that arise in exascale applications. They m
 ay be regarded as algebraic generalizations of the fast multipole method. 
 Methods based on hierarchical tree-based data structures and their simpler
  cousins\, tile low-rank matrices\, are well suited for early exascale arc
 hitectures\, which are provisioned for high processing power relative to m
 emory capacity and memory bandwidth. These data-sparse algorithms are ushe
 ring in a renaissance of numerical linear algebra. We describe modules of 
 a software toolkit\, Hierarchical Computations on Manycore Architectures (
 HiCMA)\, that illustrate these features on several applications. Early mod
 ules of this open-source project are distributed in software libraries of 
 major vendors. A recent addition\, H2Opus\, extends H2 hierarchical matrix
  operations to distributed memory and GPUs.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Elisabeth Ullmann (TU Munich)
DTSTART:20200916T140000Z
DTEND:20200916T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /16/">Approximation of parametric covariance matrices</a>\nby Elisabeth Ul
 lmann (TU Munich) as part of E-NLA - Online seminar series on numerical li
 near algebra\n\n\nAbstract\nCovariance operators model the spatial\, tempo
 ral or other correlation between collections of random variables. In moder
 n applications these random variables are often associated with an infinit
 e-dimensional or high-dimensional function space. Examples are the solutio
 n of a partial differential equation with random coefficients in uncertain
 ty quantification (UQ)\, or Gaussian process regression in machine learnin
 g. When a suitable discretization of the function space has been applied\,
  the discretized covariance operator becomes a very large matrix - the cov
 ariance matrix - with a size that is of the order of the dimension of the 
 discrete space squared.\n\nCovariance matrices are naturally symmetric and
  positive semi-definite\, but in the applications we are interested in\, t
 hey are typically dense. To avoid the enormous cost of creating and handli
 ng these dense matrices\, efficient low-rank approximations such as the pi
 voted Cholesky decomposition\, or the adaptive cross approximation (ACA) h
 ave been developed during the last decade.\n\nBut the story does not end
  here: recently\, attention has shifted to parameterized covariance
  operators. This is due to their increased modeling capacity\, e.g.\, in B
 ayesian inverse problems or Gaussian process regression with hyperparamete
 rs in machine learning. Now we face the task of approximating a pa
 rametric covariance matrix where the parameter itself is a random process.
  Simply repeating the ACA or pivoted Cholesky decomposition for different 
 parameter values is inefficient and most certainly too expensive in
  practice.\n\nWe introduce and study two algorithms for the approximation
  of para
 metric families of covariance matrices. The first approach is a (non-certi
 fied) approximation\, and employs a reduced basis associated with a collec
 tion of eigenvectors for specific parameter values. The second approach is
  a certified extension of the ACA where the approximation error is control
 led in the Wasserstein-2 distance of two Gaussian measures. Both approache
 s rely on an affine linear expansion of the covariance operator with respe
 ct to the parameter. This keeps the computational cost under control. Nota
 bly\, both algorithms do not require regular meshes in the covariance oper
 ator discretization and can be used on irregular domains.\n\nThis talk des
 cribes joint work with Daniel Kressner (EPFL)\, Jonas Latz (University of 
 Cambridge)\, Stefano Massei (TU/e) and Marvin Eisenberger (TUM).\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Howard Elman (University of Maryland)
DTSTART:20200930T140000Z
DTEND:20200930T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /17/">Multigrid Methods for Computing Low-Rank Solutions to Parameter-Depe
 ndent Partial Differential Equations</a>\nby Howard Elman (University of M
 aryland) as part of E-NLA - Online seminar series on numerical linear alge
 bra\n\n\nAbstract\nThe collection of solutions of discrete parameter-depen
 dent partial differential equations often takes the form of a low-rank mat
 rix. We show that in this scenario\, iterative algorithms for computing th
 ese solutions can take advantage of low-rank structure to reduce both comp
 utational effort and memory requirements. Implementation of such solvers r
 equires that explicit rank-compression computations be done to truncate th
 e ranks of intermediate quantities that must be computed. We prove that wh
 en truncation strategies are used as part of a multigrid solver\, the resu
 lting algorithms retain "textbook" (grid-independent) convergence rates\, 
 and we demonstrate how the truncation criteria affect convergence behavior
 . In addition\, we show that these techniques can be used to construct eff
 icient solution algorithms for computing the eigenvalues of parameter-depe
 ndent operators. In this setting\, parameterized eigenvectors can be group
 ed into matrices of low-rank structure\, and we introduce a variant of inv
 erse subspace iteration for computing them.  We demonstrate the utility of
  this approach on two benchmark problems\, a stochastic diffusion problem 
 with some poorly separated eigenvalues\, and an operator derived from a di
 screte Stokes problem whose minimal eigenvalue is related to the inf-sup s
 tability constant.\n\nThis is joint work with Tengfei Su.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sherry Li (Lawrence Berkeley National Laboratory)
DTSTART:20201014T140000Z
DTEND:20201014T150000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /18/">Autotuning exascale applications with Gaussian process regression</a
 >\nby Sherry Li (Lawrence Berkeley National Laboratory) as part of E-NLA -
  Online seminar series on numerical linear algebra\n\n\nAbstract\nSignific
 ant effort has been invested to develop highly scalable numerical librarie
 s and high-fidelity modeling and simulation for the upcoming exascale comp
 uters. These codes typically involve many parameters which need to be sele
 cted properly to optimize performance on the underlying parallel machine. 
 They are also expensive to run\, allowing only a limited number of
  "function evaluations"\, which poses significant challenges to efficient
  performance tuning on diverse architectures.\n\nBayesian optimization
  with Gaussian process
  regression is an attractive machine learning framework to build surrogate
  models with limited function evaluation points. In order to fully utilize
  all the available data\, we leverage multitask learning and multi-armed b
 andit strategies to build a more advanced Bayesian optimization framework.
 \n\nWe have developed an open-source software tool\, called GPTune\, for o
 ptimizing expensive large-scale HPC codes. We will show several features o
 f GPTune\, e.g.\, incorporation of coarse performance models to improve th
 e Bayesian model\, multi-objective tuning such as tuning a hybrid of time\
 , memory and accuracy\, and reuse of a historical database for model
  portability.\n\nWe will demonstrate the efficiency and effectiveness of
  GPTune w
 hen it is applied to numerical linear algebra libraries\, such as ScaLAPAC
 K\, SuperLU and Hypre\, as well as fusion simulation codes M3D-C1 and NIMR
 OD.\n\nThis talk describes joint work with James Demmel\, Yang Liu\, Osni 
 Marques\, Wissam Sid-Lakhdar and Xianran Zhu.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yuji Nakatsukasa (University of Oxford)
DTSTART:20201028T150000Z
DTEND:20201028T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /19/">Fast and stable randomized low-rank matrix approximation</a>\nby Yuj
 i Nakatsukasa (University of Oxford) as part of E-NLA - Online seminar ser
 ies on numerical linear algebra\n\n\nAbstract\nRandomized SVD has become a
 n extremely successful approach for efficiently computing a low-rank appro
 ximation of matrices. In particular the paper by Halko\, Martinsson\, and 
 Tropp (SIREV 2011) contains extensive analysis\, and has made it a very po
 pular method. The typical complexity for a rank-r approximation of m x n
  matrices is O(mn log n + (m+n)r^2) for dense matrices. The classical
  Nyström method is much faster\, but only applicable to positive
  semidefinite matrices. This work studies a generalization of Nyström's
  method applicable to general matrices\, and shows that (i) it has
  near-optimal approximation quality comparable to competing methods\, (ii)
  the computational cost is the near-optimal O(mn log n + r^3) for dense
  matrices\, with small hidden constants\, and (iii) crucially\, it can be
  implemented in a numerically stable fashion despite the presence of an
  ill-conditioned pseudoinverse. Numerical experiments illustrate that
  generalized Nyström can significantly outperform state-of-the-art
  methods\, especially when r>>1\, achieving up to a 10-fold speedup. The
  method is also well suited to updating and downdating the matrix.\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Des Higham (University of Edinburgh)
DTSTART:20201111T150000Z
DTEND:20201111T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /20/">Concepts and Algorithms for Higher Order Networks: Beyond Pairwise I
 nteractions</a>\nby Des Higham (University of Edinburgh) as part of E-NLA 
 - Online seminar series on numerical linear algebra\n\n\nAbstract\nNetwork
  scientists have shown that there is great value in studying pairwise inte
 ractions between components in a system. From a linear algebra point of vi
 ew\, this involves defining and evaluating functions of the associated adj
 acency matrix. Recently\, there has been increased interest in the idea of
  accounting directly for higher order features. Such features may be built
  from the adjacency matrix---for example\, a triangle involving nodes i\, 
 j and k arises when the three edges\, i<->j\, j<->k and k<->i are present.
  In other contexts\, higher order information appears explicitly---for exa
 mple\, in a coauthorship network\, a document involving three authors form
 s a natural triangle. I will discuss the use of tensor-based definitions a
 nd algorithms to exploit such higher order information. The algorithms als
 o incorporate nonlinearities that increase flexibility. I will focus on s
 pectral methods that extend classical concepts of node centrality and clus
 tering coefficients. The underlying object of study will be a constrained 
 nonlinear eigenvalue problem associated with a tensor. Using recent result
 s from nonlinear Perron--Frobenius theory\, we can establish existence and
  uniqueness under mild conditions\, and show that such spectral measures c
 an be computed efficiently and robustly with a nonlinear power method.\n\n
 The talk is based on joint work with Francesca Arrigo (University of Strat
 hclyde) and Francesco Tudisco (Gran Sasso Science Institute).\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Francoise Tisseur (Manchester University)
DTSTART:20201125T150000Z
DTEND:20201125T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/21
DESCRIPTION:by Francoise Tisseur (Manchester University) as part of E-NLA 
 - Online seminar series on numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chen Greif (The University of British Columbia)
DTSTART:20201209T150000Z
DTEND:20201209T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/22
DESCRIPTION:by Chen Greif (The University of British Columbia) as part of 
 E-NLA - Online seminar series on numerical linear algebra\n\nAbstract: TBA
 \n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anne Greenbaum (University of Washington)
DTSTART:20210113T150000Z
DTEND:20210113T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /23/">Spectral Sets: Numerical Range and Beyond</a>\nby Anne Greenbaum (Un
 iversity of Washington) as part of E-NLA - Online seminar series on numeri
 cal linear algebra\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicolas Gillis (University of Mons)
DTSTART:20210127T150000Z
DTEND:20210127T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /24/">Identifiability and Computation of Nonnegative Matrix Factorizations
 </a>\nby Nicolas Gillis (University of Mons) as part of E-NLA - Online sem
 inar series on numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jim Nagy (Emory University)
DTSTART:20210210T150000Z
DTEND:20210210T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/25
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /25/">Krylov Subspace Regularization for Inverse Problems</a>\nby Jim Nagy
  (Emory University) as part of E-NLA - Online seminar series on numerical 
 linear algebra\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Erin Carson (Charles University)
DTSTART:20210224T150000Z
DTEND:20210224T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /26/">What do we know about block Gram-Schmidt?</a>\nby Erin Carson (Charl
 es University) as part of E-NLA - Online seminar series on numerical linea
 r algebra\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gunnar Martinsson (UT Austin)
DTSTART:20210310T150000Z
DTEND:20210310T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/27
DESCRIPTION:by Gunnar Martinsson (UT Austin) as part of E-NLA - Online sem
 inar series on numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cameron Musco (University of Massachusetts Amherst)
DTSTART:20210324T150000Z
DTEND:20210324T160000Z
DTSTAMP:20260404T111324Z
UID:E-NLA/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/E-NLA
 /28/">Hutch++: Optimal Stochastic Trace Estimation</a>\nby Cameron Musco (
 University of Massachusetts Amherst) as part of E-NLA - Online seminar ser
 ies on numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/E-NLA/28/
END:VEVENT
END:VCALENDAR
