BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Yu Bai (Salesforce Research)
DTSTART:20201028T170000Z
DTEND:20201028T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/2/">How Important is the Train-Validation Split in Meta-Learning?</a
 >\nby Yu Bai (Salesforce Research) as part of One World Seminar Series on 
 the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ryan Murray (NC State University)
DTSTART:20201021T160000Z
DTEND:20201021T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/3/">Consistency of Cheeger cuts: Total Variation\, Isoperimetry\, an
 d Clustering</a>\nby Ryan Murray (NC State University) as part of One Worl
 d Seminar Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\
 n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jonas Latz (University of Cambridge)
DTSTART:20201104T170000Z
DTEND:20201104T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/4/">Analysis of Stochastic Gradient Descent in Continuous Time</a>\n
 by Jonas Latz (University of Cambridge) as part of One World Seminar Serie
 s on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Zhengdao Chen (New York University)
DTSTART:20201111T170000Z
DTEND:20201111T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/5/">A Dynamical Central Limit Theorem for Shallow Neural Networks</a
 >\nby Zhengdao Chen (New York University) as part of One World Seminar Ser
 ies on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bamdad Hosseini (Caltech)
DTSTART:20201118T170000Z
DTEND:20201118T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/6/">Conditional Sampling with Monotone GANs: Modifying Generative Mo
 dels to Solve Inverse Problems</a>\nby Bamdad Hosseini (Caltech) as part o
 f One World Seminar Series on the  Mathematics of Machine Learning\n\nAbst
 ract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Felix Voigtlaender (University of Vienna)
DTSTART:20201125T170000Z
DTEND:20201125T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/7/">Neural network performance for classification problems with boun
 daries of Barron class</a>\nby Felix Voigtlaender (University of Vienna) a
 s part of One World Seminar Series on the  Mathematics of Machine Learning
 \n\n\nAbstract\nWe study classification problems in which the distances be
 tween the different classes are not necessarily positive\, but for which t
 he boundaries between the classes are well-behaved. More precisely\, we as
 sume these boundaries to be locally described by graphs of functions of Ba
 rron-class. ReLU neural networks can approximate and estimate classificati
 on functions of this type with rates independent of the ambient dimension.
  More formally\, three-layer networks with $N$ neurons can approximate suc
 h functions with $L^1$-error bounded by $O(N^{-1/2})$. Furthermore\, given
  $m$ training samples from such a function\, and using ReLU networks of a 
 suitable architecture as the hypothesis space\, any empirical risk minimiz
 er has generalization error bounded by $O(m^{-1/4})$. All implied constant
 s depend only polynomially on the input dimension. We also discuss the opt
 imality of these rates. Our results mostly rely on the "Fourier-analytic" 
 Barron spaces that consist of functions with finite first Fourier moment. 
 But since several different function spaces have been dubbed "Barron space
 s" in the recent literature\, we discuss how these spaces relate to each
  other. We will see that they differ more than the existing literature sugg
 ests.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nadia Drenska (University of Minnesota)
DTSTART:20201209T170000Z
DTEND:20201209T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/8/">A PDE Interpretation of Prediction with Expert Advice</a>\nby Na
 dia Drenska (University of Minnesota) as part of One World Seminar Series 
 on the  Mathematics of Machine Learning\n\n\nAbstract\nWe study the proble
 m of prediction of binary sequences with expert advice in the online setti
 ng\, which is a classic example of online machine learning. We interpret t
 he binary sequence as the price history of a stock\, and view the predicto
 r as an investor\, which converts the problem into a stock prediction prob
 lem. In this framework\, an investor\, who predicts the daily movements of
  a stock\, and an adversarial market\, who controls the stock\, play again
 st each other over N turns. The investor combines the predictions of n ≥
  2 experts in order to make a decision about how much to invest at each tu
 rn\, and aims to minimize their regret with respect to the best-performing
  expert at the end of the game. We consider the problem with history-depen
 dent experts\, in which each expert uses the previous d days of history of
  the market in making their predictions. The prediction problem is played 
 (in part) over a discrete graph called the d-dimensional de Bruijn graph.\
 n\nWe focus on an appropriate continuum limit and using methods from optim
 al control\, graph theory\, and partial differential equations\, we discus
 s strategies for the investor and the adversarial market. We prove that th
 e value function for this game\, rescaled appropriately\, converges as
  N → ∞ at a rate of O(N^{-1/2}) (for C^4 payoff functions) to the
  viscosity solution of a nonlinear degenerate parabolic PDE. It can be
  understood as the Hamilton-Jacobi-Isaacs equation for the two-person
  game. As a result
 \, we are able to deduce asymptotically optimal strategies for the investo
 r. \n\nThis is joint work with Robert Kohn and Jeff Calder.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ziwei Ji (University of Illinois)
DTSTART:20201216T170000Z
DTEND:20201216T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/9/">The dual of the margin: improved analyses and rates for gradient
  descent’s implicit bias</a>\nby Ziwei Ji (University of Illinois) as pa
 rt of One World Seminar Series on the  Mathematics of Machine Learning\n\n
 \nAbstract\nThe implicit bias of gradient descent\, and specifically its
  margin maximization properties\, have arisen as a promising explanatio
 n for the good generalization of deep networks. The purpose of this talk i
 s to demonstrate the effectiveness of a dual problem to smoothed margin ma
 ximization. Concretely\, this talk will develop this dual\, as well as a v
 ariety of consequences in linear and nonlinear settings.\n\nIn the linear 
 case\, this dual perspective firstly will yield fast 1/t rates for margin 
 maximization and implicit bias. This is faster than any prior first-order 
 hard-margin SVM solver\, which achieves 1/√t at best.\n\nSecondly\, t
 he dual analysis also allows a characterization of the implicit bias\, eve
 n outside the standard setting of exponentially-tailed losses\; in this se
 nse\, it is gradient descent\, and not a particular loss structure which l
 eads to implicit bias.\n\nIn the nonlinear case\, duality will enable the 
 proof of a gradient alignment property: asymptotically\, the parameters an
 d their gradients become collinear. Although abstract\, this property in tu
 rn implies various existing and new margin maximization results.\n\nJoint 
 work with Matus Telgarsky.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Carola Bibiane Schönlieb (University of Cambridge)
DTSTART:20210113T170000Z
DTEND:20210113T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/10/">Machine Learned Regularization for Solving Inverse Problems</a>
 \nby Carola Bibiane Schönlieb (University of Cambridge) as part of One Wo
 rld Seminar Series on the  Mathematics of Machine Learning\n\n\nAbstract\n
 Inverse problems are about the reconstruction of an unknown physical quant
 ity from indirect measurements. Most inverse problems of interest are ill-
 posed and require appropriate mathematical treatment for recovering meanin
 gful solutions. Regularization is one of the main mechanisms to turn inver
 se problems into well-posed ones by adding prior information about the unk
 nown quantity to the problem\, often in the form of assumed regularity of 
 solutions. Classically\, such regularization approaches are handcrafted. E
 xamples include Tikhonov regularization\, the total variation and several 
 sparsity-promoting regularizers such as the L1 norm of Wavelet coefficient
 s of the solution. While such handcrafted approaches deliver mathematicall
 y and computationally robust solutions to inverse problems\, providing a u
 niversal approach to their solution\, they are also limited by our ability
  to model solution properties and to realise these regularization approach
 es computationally.\n\n\n\nRecently\, a new paradigm has been introduced t
 o the regularization of inverse problems\, which derives regularization ap
 proaches for inverse problems in a data driven way. Here\, regularization 
 is not mathematically modelled in the classical sense\, but modelled by hi
 ghly over-parametrised models\, typically deep neural networks\, that are 
 adapted to the inverse problems at hand by appropriately selected (and usu
 ally plenty of) training data.\n\n\n\nIn this talk\, I will review some ma
 chine learning based regularization techniques\, present some work on unsu
 pervised and deeply learned convex regularisers and their application to i
 mage reconstruction from tomographic and blurred measurements\, and finish
  by discussing some open mathematical problems.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Melanie Weber (Princeton University)
DTSTART:20210120T170000Z
DTEND:20210120T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/11/">Geometric Methods for Machine Learning and Optimization</a>\nby
  Melanie Weber (Princeton University) as part of One World Seminar Series 
 on the  Mathematics of Machine Learning\n\n\nAbstract\nMany machine learni
 ng applications involve non-Euclidean data\, such as graphs\, strings or m
 atrices. In such cases\, exploiting Riemannian geometry can deliver algori
 thms that are computationally superior to standard (Euclidean) nonlinear p
 rogramming approaches. This observation has resulted in an increasing inte
 rest in Riemannian methods in the optimization and machine learning commun
 ity.\n\nIn the first part of the talk\, we consider the task of learning a
  robust classifier in hyperbolic space. Such spaces have received a surge 
 of interest for representing large-scale\, hierarchical data\, due to the 
 fact that they achieve better representation accuracy with lower dimension
 s. We present the first theoretical guarantees for the (robust) large-marg
 in learning problem in hyperbolic space and discuss conditions under which
  hyperbolic methods are guaranteed to surpass the performance of their Euc
 lidean counterparts. In the second part\, we introduce Riemannian Frank-Wo
 lfe (RFW) methods for constraint optimization on manifolds. Here\, the goa
 l of the theoretical analysis is two-fold: We first show that RFW converge
 s at a nonasymptotic sublinear rate\, recovering the best-known guarantees
  for its Euclidean counterpart. Secondly\, we discuss how to implement the
  method efficiently on matrix manifolds. Finally\, we consider application
 s of RFW to the computation of Riemannian centroids and Wasserstein baryce
 nters\, which are crucial subroutines in many machine learning methods.\n\
 nBased on joint work with Suvrit Sra (MIT) and Manzil Zaheer\, Ankit Singh
  Rawat\, Aditya Menon and Sanjiv Kumar (all Google Research).\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathaniel Trask
DTSTART:20210127T170000Z
DTEND:20210127T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/12/">Structure preservation and convergence in scientific machine le
 arning</a>\nby Nathaniel Trask as part of One World Seminar Series on the 
  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrea Bertozzi
DTSTART:20210203T170000Z
DTEND:20210203T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/13
DESCRIPTION:by Andrea Bertozzi as part of One World Seminar Series on the 
  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrea Agazzi (Duke University)
DTSTART:20210210T170000Z
DTEND:20210210T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/14/">Convergence and optimality of single-layer neural networks for 
 reinforcement learning</a>\nby Andrea Agazzi (Duke University) as part of 
 One World Seminar Series on the  Mathematics of Machine Learning\n\nAbstra
 ct: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frederic Koehler
DTSTART:20210217T170000Z
DTEND:20210217T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/15
DESCRIPTION:by Frederic Koehler as part of One World Seminar Series on the
   Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bubacarr Bah
DTSTART:20210224T170000Z
DTEND:20210224T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/16
DESCRIPTION:by Bubacarr Bah as part of One World Seminar Series on the  Ma
 thematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathaniel Trask
DTSTART:20210303T170000Z
DTEND:20210303T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/17/">Structure preservation and convergence in scientific machine le
 arning</a>\nby Nathaniel Trask as part of One World Seminar Series on the 
  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boris Hanin
DTSTART:20210310T170000Z
DTEND:20210310T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/18
DESCRIPTION:by Boris Hanin as part of One World Seminar Series on the  Mat
 hematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rachel Ward
DTSTART:20210317T170000Z
DTEND:20210317T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/19
DESCRIPTION:by Rachel Ward as part of One World Seminar Series on the  Mat
 hematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jeff Calder
DTSTART:20210324T170000Z
DTEND:20210324T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/20
DESCRIPTION:by Jeff Calder as part of One World Seminar Series on the  Mat
 hematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicolas Garcia Trillos (Wisconsin Madison)
DTSTART:20210505T160000Z
DTEND:20210505T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/21/">Adversarial Classification\, Optimal Transport\, and Geometric 
 Flows</a>\nby Nicolas Garcia Trillos (Wisconsin Madison) as part of One Wo
 rld Seminar Series on the  Mathematics of Machine Learning\n\nAbstract: TB
 A\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Clarice Poon (University of Bath)
DTSTART:20210519T160000Z
DTEND:20210519T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/22/">Smooth bilevel programming for sparse regularisation</a>\nby Cl
 arice Poon (University of Bath) as part of One World Seminar Series on the
   Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert Nowak (University of Wisconsin-Madison)
DTSTART:20211013T160000Z
DTEND:20211013T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/23/">TBC</a>\nby Robert Nowak (University of Wisconsin-Madison) as p
 art of One World Seminar Series on the  Mathematics of Machine Learning\n\
 nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christoph Schwab (ETH Zürich)
DTSTART:20211201T170000Z
DTEND:20211201T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/24/">Deep Learning in High Dimension: Neural Network Approximation o
 f Analytic Maps of Gaussians</a>\nby Christoph Schwab (ETH Zürich) as par
 t of One World Seminar Series on the  Mathematics of Machine Learning\n\n\
 nAbstract\nFor artificial deep neural networks with ReLU activation\, we
  prove new expression rate bounds for parametric\, analytic functions
  where the parameter dimension may be infinite. Approximation rates are
  in mean square on the unbounded parameter range with respect to product
  Gaussian measure. The approximation rate bounds are free from the curse
  of dimensionality (CoD) and are determined by the summability of
  Wiener-Hermite PC expansion coefficients. Sufficient conditions for
  summability are quantified holomorphy on products of strips in the
  complex domain. Applications comprise DNN expression rate bounds for
  response surfaces of elliptic PDEs with log-Gaussian random field
  inputs\, and for the posterior densities of the corresponding Bayesian
  inverse problems. Constructive variants of the proofs are outlined.\n\n
 (joint work with Jakob Zech\, University of Heidelberg\, Germany\, and
  with Dinh Dung and Nguyen Van Kien\, Hanoi\, Vietnam)\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Houman Owhadi (Caltech)
DTSTART:20220420T160000Z
DTEND:20220420T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/25
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/25/">Computational Graph Completion</a>\nby Houman Owhadi (Caltech) 
 as part of One World Seminar Series on the  Mathematics of Machine Learnin
 g\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephan Wäldchen (TU Berlin)
DTSTART:20220427T160000Z
DTEND:20220427T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/26/">Explaining Neural Network Classifiers: Hurdles and Progress</a>
 \nby Stephan Wäldchen (TU Berlin) as part of One World Seminar Series on 
 the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hongyang Zhang
DTSTART:20220504T160000Z
DTEND:20220504T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/27
DESCRIPTION:by Hongyang Zhang as part of One World Seminar Series on the  
 Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matthew Colbrook (University of Cambridge)
DTSTART:20220511T160000Z
DTEND:20220511T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/28/">Smale’s 18th Problem and the Barriers of Deep Learning</a>\nb
 y Matthew Colbrook (University of Cambridge) as part of One World Seminar 
 Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Denny Wu (University of Toronto)
DTSTART:20220914T160000Z
DTEND:20220914T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/29
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/29/">High-dimensional asymptotics of feature learning in the early p
 hase of NN training</a>\nby Denny Wu (University of Toronto) as part of On
 e World Seminar Series on the  Mathematics of Machine Learning\n\nAbstract
 : TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gal Vardi (Toyota Technological Institute at Chicago)
DTSTART:20220921T160000Z
DTEND:20220921T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/30
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/30/">Implications of the implicit bias in neural networks</a>\nby Ga
 l Vardi (Toyota Technological Institute at Chicago) as part of One World S
 eminar Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sophie Langer (University of Twente)
DTSTART:20221012T160000Z
DTEND:20221012T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/31
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/31/">Circumventing the curse of dimensionality with deep neural netw
 orks</a>\nby Sophie Langer (University of Twente) as part of One World Sem
 inar Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Peter Richtarik (KAUST)
DTSTART:20221005T160000Z
DTEND:20221005T170000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/32
DESCRIPTION:by Peter Richtarik (KAUST) as part of One World Seminar Series
  on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Johannes Brandstetter (Microsoft)
DTSTART:20221109T170000Z
DTEND:20221109T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/33
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/33/">Towards a New Generation of Neural PDE Surrogates</a>\nby Johan
 nes Brandstetter (Microsoft) as part of One World Seminar Series on the  M
 athematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Simone Brugiapaglia (Concordia University)
DTSTART:20221116T170000Z
DTEND:20221116T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/34
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/34/">The Mathematical Foundations of Deep Learning: From Rating Impos
 sibility to Practical Existence Theorems</a>\nby Simone Brugiapaglia (Conc
 ordia University) as part of One World Seminar Series on the  Mathematics 
 of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Francis Bach (Ecole Normale Superieure)
DTSTART:20221130T170000Z
DTEND:20221130T180000Z
DTSTAMP:20260404T110745Z
UID:OneWorldML/35
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldML/35/">Information Theory Through Kernel Methods</a>\nby Francis Bach 
 (Ecole Normale Superieure) as part of One World Seminar Series on the  Mat
 hematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldML/35/
END:VEVENT
END:VCALENDAR
