BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Andrea Bertozzi (UCLA)
DTSTART:20200423T183000Z
DTEND:20200423T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/1/">Epidemic modeling – basics and challenges</a>\nby Andrea Be
 rtozzi (UCLA) as part of One World MINDS seminar\n\n\nAbstract\nI will rev
 iew the basics of epidemic modeling\, including exponential growth\, comp
 artmental models\, and self-exciting point process models.  I will illust
 rate how su
 ch models have been used in the past for previous pandemics and what the c
 hallenges are for forecasting the current COVID-19 pandemic.  I will show 
 some examples of fitting the models to data from US states and what one c
 an do with those results.  Overall\, model prediction has a degree of unc
 ertainty especi
 ally with early time data and with many unknowns.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Strohmer (UC Davis)
DTSTART:20200430T183000Z
DTEND:20200430T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/2/">Pandemics\, Privacy\, and Paradoxes - Why We need a new parad
 igm for data science and AI</a>\nby Thomas Strohmer (UC Davis) as part of 
 One World MINDS seminar\n\n\nAbstract\nPioneered by giant internet corpora
 tions and powered by machine learning\, a new economic system is emerging 
 that pushes for relentless data capture and analysis\, usually without use
 rs' consent. Surveillance capitalism pursues the exploitation and control 
 of human nature\, thereby threatening our social fabric. To counter these 
 developments\, we need to rethink the role of data science and artificial 
 intelligence. We must urgently develop a new paradigm of what data is. Thi
 s urgency is aggravated by the current pandemic\, which amplifies fundamen
 tal paradoxes underlying data science and AI. I will argue that the key li
 es in understanding the trialectic nature of data\, the careful balance of
  which will be essential to tackling the aforementioned disturbing developments\
 , while still reaping the benefits of data science and AI. Based on this t
 rialectic nature\, I will draw consequences for the role of mathematics in
  data science and indicate how mathematicians can directly contribute to a
  more just digital revolution.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anna Gilbert (University of Michigan)
DTSTART:20200507T183000Z
DTEND:20200507T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/3/">Metric representations: Algorithms and geometry</a>\nby Anna 
 Gilbert (University of Michigan) as part of One World MINDS seminar\n\n\nA
 bstract\nGiven a set of distances amongst points\, determining what metric
  representation is most “consistent” with the input distances or the m
 etric that best captures the relevant geometric features of the data is a 
 key step in many machine learning algorithms. In this talk\, we focus on 3
  specific metric constrained problems\, a class of optimization problems w
 ith metric constraints: metric nearness (Brickell et al. (2008))\, weighte
 d correlation clustering on general graphs (Bansal et al. (2004))\, and me
 tric learning (Bellet et al. (2013)\; Davis et al. (2007)).\n\nBecause of 
 the large number of constraints in these problems\, however\, these and ot
 her researchers have been forced to restrict either the kinds of metrics l
 earned or the size of the problem that can be solved. We provide an algori
 thm\, PROJECT AND FORGET\, that uses Bregman projections with cutting plan
 es to solve metric constrained problems with many (possibly exponentiall
 y many) inequality constraints. We also prove that our algorithm converge
 s to t
 he global optimal solution. Additionally\, we show that the optimality err
 or decays asymptotically at an exponential rate. We show that using our me
 thod we can solve large problem instances of three types of metric constra
 ined problems\, outperforming all state-of-the-art methods with respect t
 o CPU times and problem sizes.\n\nFinally\, we discuss the adaptation of P
 ROJECT AND FORGET to specific types of metric constraints\, namely tree an
 d hyperbolic metrics.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ilya Razenshteyn (Microsoft Research)
DTSTART:20200514T183000Z
DTEND:20200514T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/4/">Scalable Nearest Neighbor Search for Optimal Transport</a>\nb
 y Ilya Razenshteyn (Microsoft Research) as part of One World MINDS seminar
 \n\n\nAbstract\nThe Optimal Transport (aka Wasserstein) distance is an inc
 reasingly popular similarity measure for structured data domains\, such as
  images or text documents. This raises the necessity for fast nearest neig
 hbor search with respect to this distance\, a problem that poses a substan
 tial computational bottleneck for various tasks on massive datasets. In th
 is talk\, I will discuss fast tree-based approximation algorithms for sear
 ching nearest neighbors with respect to the Wasserstein-1 distance. I will
  start by describing a standard tree-based technique\, known as QuadTree
 \, which has been previously shown to obtain good results. Then I'll intro
 duce a variant of this algorithm\, called FlowTree\, and show that it achi
 eves better accuracy\, both in theory and in practice. In particular\, the
  accuracy of FlowTree is in line with previous high-accuracy methods\, whi
 le its running time is much faster.\n\nThe talk is based on joint work wi
 th Arturs Backurs\, Yihe Dong\, Piotr Indyk\, and Tal Wagner. The paper c
 an be found at https://arxiv.org/abs/1910.04126 and the code at https://g
 ithub.com/ilyaraz/ot_estimators\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Spielman (Yale)
DTSTART:20200521T183000Z
DTEND:20200521T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/5/">Balancing covariates in randomized experiments using the Gram
 –Schmidt walk</a>\nby Daniel Spielman (Yale) as part of One World MINDS 
 seminar\n\n\nAbstract\nIn randomized experiments\, such as medical trials
 \, we randomly assign the treatment\, such as a drug or a placebo\, that
  each experimental subject receives. Randomization can help us accurately e
 stimate the difference in treatment effects with high probability. We also
  know that we want the two groups to be similar: ideally the two groups wo
 uld be similar in every statistic we can measure beforehand. Recent advanc
 es in algorithmic discrepancy theory allow us to divide subjects into grou
 ps with similar statistics.\n\nBy exploiting the recent Gram-Schmidt Walk 
 algorithm of Bansal\, Dadush\, Garg\, and Lovett\, we can obtain random as
 signments of low discrepancy. These allow us to obtain more accurate estim
 ates of treatment effects when the information we measure about the subjec
 ts is predictive\, while also bounding the worst-case behavior when it is 
 not.\n\nWe will explain the experimental design problem we address\, the G
 ram-Schmidt walk algorithm\, and the major ideas behind our analyses. This
  is joint work with Chris Harshaw\, Fredrik Sävje\, and Peng Zhang.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ronald Coifman (Yale)
DTSTART:20200528T183000Z
DTEND:20200528T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/6/">The Analytic Geometries of Data</a>\nby Ronald Coifman (Yale)
  as part of One World MINDS seminar\n\n\nAbstract\nWe will describe method
 ologies to build data geometries designed to simultaneously analyze and pr
 ocess databases.  The different geometries or affinity metrics arise natu
 rally as we learn to contextualize and conceptualize\, i.e.\, relate data
  regions and data features (which we extend to data tensors).  Moreover\,
  we generate tensorial multiscale structures.\n\nWe will indicate connect
 ions to analysis by deep nets and describe applications to modeling observatio
 ns of dynamical systems\, from stochastic molecular dynamics to calcium im
 aging of brain activity.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ben Adcock (Simon Fraser University)
DTSTART:20200604T183000Z
DTEND:20200604T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/7/">The troublesome kernel: instabilities in deep learning for in
 verse problems</a>\nby Ben Adcock (Simon Fraser University) as part of One
  World MINDS seminar\n\n\nAbstract\nDue to their stunning success in tradi
 tional machine learning applications such as classification\, techniques b
 ased on deep learning have recently begun to be actively investigated for 
 problems in computational science and engineering. One of the key areas at
  the forefront of this trend is inverse problems\, and specifically\, inve
 rse problems in imaging. The last few years have witnessed the emergence o
 f many neural network-based algorithms for important imaging modalities su
 ch as MRI and X-ray CT. These claim to achieve competitive\, and sometimes
  even superior\, performance to current state-of-the-art techniques.\n\nHo
 wever\, there is a problem. Techniques based on deep learning are typicall
 y unstable. For example\, small perturbations in the data can lead to a my
 riad of artifacts in the recovered images. Such artifacts can be hard to d
 ismiss as obviously unphysical\, meaning that this phenomenon has potentia
 lly serious consequences for the safe deployment of deep learning in pract
 ice. In this talk\, I will first showcase the instability phenomenon empir
 ically in a range of examples. I will then focus on its mathematical under
 pinnings\, the consequences of these insights when it comes to potential r
 emedies\, and the future possibilities for computing genuinely stable neur
 al networks for inverse problems in imaging.\n\nThis is joint work with Ve
 gard Antun\, Nina M. Gottschling\, Anders C. Hansen\, Clarice Poon\, and F
 rancesco Renna.\n\nPapers:\n\nhttps://www.pnas.org/content/early/2020/05/08
 /1907377117\n\nhttps://arxiv.org/abs/2001.01258\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jelani Nelson (UC Berkeley)
DTSTART:20200611T183000Z
DTEND:20200611T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/8/">terminal dimensionality reduction in Euclidean space</a>\nby 
 Jelani Nelson (UC Berkeley) as part of One World MINDS seminar\n\n\nAbstra
 ct\nThe Johnson-Lindenstrauss lemma states that for any subset $X$ of $R
 ^d$ with $|X| = n$ and any $\\epsilon$\, there exists a map $f:X\\to R^m$ 
 for $m = O(\\log n / \\epsilon^2)$ such that: for all $x \\in X$\, for all
  $y \\in X$\, $(1-\\epsilon)|x - y|_2 \\le |f(x) - f(y)|_2 \\le (1+\\epsil
 on)|x - y|_2$. We show that this statement can be strengthened. In particu
 lar\, the above claim holds true even if "for all $y \\in X$" is replaced 
 with "for all $y \\in R^d$". Joint work with Shyam Narayanan.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gitta Kutyniok (TU Berlin)
DTSTART:20200618T183000Z
DTEND:20200618T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/9/">Understanding Deep Neural Networks: From Generalization to In
 terpretability</a>\nby Gitta Kutyniok (TU Berlin) as part of One World MIN
 DS seminar\n\n\nAbstract\nDeep neural networks have recently seen an impre
 ssive comeback with applications both in the public sector and the science
 s. However\, despite their outstanding success\, a comprehensive theoretic
 al foundation of deep neural networks is still missing.\n\nFor deriving a 
 theoretical understanding of deep neural networks\, one main goal is to an
 alyze their generalization ability\, i.e.\, their performance on unseen d
 ata sets. In the case of graph convolutional neural networks\, which are
  today heavily used\, for instance\, for recommender systems\, the genera
 lization capability to signals on graphs unseen in the training set\, typ
 ically coined transferability\, had not been rigorously analyzed. In this talk\, w
 e will prove that spectral graph convolutional neural networks are indeed 
 transferable\, thereby also debunking a common misconception about this ty
 pe of graph convolutional neural networks.\n\nIf such theoretical approach
 es fail or if one is just given a trained neural network without knowledge
  of how it was trained\, interpretability approaches become necessary. Th
 ese aim to "break open the black box" in the sense of identifying the fe
 atures of the input that are most relevant for the observed output. Ai
 ming to derive a theoretically founded approach to this problem\, we intro
 duced a novel approach based on rate-distortion theory coined Rate-Distort
 ion Explanation (RDE)\, which not only provides state-of-the-art explanati
 ons\, but in addition allows first theoretical insights into the complexit
 y of such problems. In this talk\, we will discuss this approach and show th
 at it also gives a precise mathematical meaning to the previously vague te
 rm of relevant parts of the input.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Richard Baraniuk (Rice University)
DTSTART:20200625T183000Z
DTEND:20200625T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/10/">Affine spline insights into deep learning</a>\nby Richard Ba
 raniuk (Rice University) as part of One World MINDS seminar\n\n\nAbstract\
 nWe build a rigorous bridge between deep networks (DNs) and approximation 
 theory via spline functions and operators. Our key result is that a large 
 class of DNs can be written as a composition of max-affine spline operator
 s (MASOs)\, which provide a powerful portal through which to view and anal
 yze their inner workings. For instance\, conditioned on the input signal\,
  the output of a MASO DN can be written as a simple affine transformation 
 of the input. This implies that a DN constructs a set of signal-dependent\
 , class-specific templates against which the signal is compared via a simp
 le inner product\; we explore the links to the classical theory of optimal
  classification via matched filters and the effects of data memorization. 
 Going further\, we propose a simple penalty term that can be added to the 
 cost function of any DN learning algorithm to force the templates to be or
 thogonal to each other\; this leads to significantly improved classifica
 tion performance and reduced overfitting with no change to the DN architec
 ture. The spline partition of the input signal space that is implicitly in
 duced by a MASO directly links DNs to the theory of vector quantization (V
 Q) and K-means clustering\, which opens up a new geometric avenue to study h
 ow DNs organize signals in a hierarchical fashion. To demonstrate the utility
  of the VQ interpretation\, we develop and validate a new distance metric 
 for signals and images that quantifies the difference between their VQ enc
 odings.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stéphane Mallat (École Normale Supérieure)
DTSTART:20200702T183000Z
DTEND:20200702T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/11/">Beyond sparsity: Non-linear harmonic analysis with phase for
  deep networks</a>\nby Stéphane Mallat (École Normale Supérieure) as par
 t of One World MINDS seminar\n\n\nAbstract\nUnderstanding the properties o
 f deep neural networks is not just about applying standard harmonic analys
 is tools with a bit of optimization. It is shaking our understanding of no
 n-linear harmonic analysis and opening new horizons. By considering comple
 x image generation and classification problems of different complexities\,
  I will show that sparsity is not always the answer and that phase plays
  an important role in capturing key structures\, including symmetrie
 s\, within mu
 ltiscale representations. This talk will raise more questions than answers
 .\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Holger Rauhut (RWTH Aachen University)
DTSTART:20200709T183000Z
DTEND:20200709T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/12/">Convergence of gradient flows for learning deep linear neura
 l networks</a>\nby Holger Rauhut (RWTH Aachen University) as part of One W
 orld MINDS seminar\n\n\nAbstract\nLearning neural networks amounts to mini
 mizing a loss function over given training data. Often gradient descent al
 gorithms are used for this task\, but their convergence properties are not
  yet well-understood. In order to make progress we consider the simplified
  setting of linear networks optimized via gradient flows. We show that suc
 h a gradient flow defined with respect to the layers (factors) can be reinte
 rpreted as a Riemannian gradient flow on the manifold of rank-$r$ matrices
  in certain cases. The gradient flow always converges to a critical point 
 of the underlying loss functional and\, for almost all initializations\, i
 t converges to a global minimum on the manifold of rank-$k$ matrices for s
 ome $k$.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Caroline Uhler (MIT)
DTSTART:20200716T183000Z
DTEND:20200716T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/13/">Multi-domain data integration: from observations to mechanis
 tic insights</a>\nby Caroline Uhler (MIT) as part of One World MINDS semin
 ar\n\n\nAbstract\nMassive data collection holds the promise of a better un
 derstanding of complex phenomena and ultimately\, of better decisions. An 
 exciting opportunity in this regard stems from the growing availability of
  perturbation / intervention data (manufacturing\, advertisement\, educati
 on\, genomics\, etc.). In order to obtain mechanistic insights from such d
 ata\, a major challenge is the integration of different data modalities (v
 ideo\, audio\, interventional\, observational\, etc.). Using genomics and 
 in particular the problem of identifying drugs for repurposing against
  COVID-19 as an example\, I will first discuss our recent work on coupling
  autoencoders in the latent space to integrate and translate between data 
 of very different modalities such as sequencing and imaging. I will then p
 resent a framework for integrating observational and interventional data f
 or causal structure discovery and characterize the causal relationships th
 at are identifiable from such data. We end with a theoretical analysis of au
 toencoders linking overparameterization to memorization. In particular\, I
  will characterize the implicit bias of overparameterized autoencoders and
  show that such networks trained using standard optimization methods imple
 ment associative memory. Collectively\, our results have major implication
 s for planning and learning from interventions in various application doma
 ins.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tamara Kolda (Sandia National Laboratories)
DTSTART:20200723T183000Z
DTEND:20200723T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/14/">Practical Leverage-Based Sampling for Low-Rank Tensor Decomp
 osition</a>\nby Tamara Kolda (Sandia National Laboratories) as part of One
  World MINDS seminar\n\n\nAbstract\nConventional algorithms for finding lo
 w-rank canonical polyadic (CP) tensor decompositions are unwieldy for larg
 e sparse tensors. The CP decomposition can be computed by solving a sequen
 ce of overdetermined least squares problems with special Khatri-Rao struc
 ture. In 
 this work\, we present an application of randomized algorithms to fitting 
 the CP decomposition of sparse tensors\, solving a significantly smaller s
 ampled least squares problem at each iteration with probabilistic guarante
 es on the approximation errors. Prior work has shown that sketching is eff
 ective in the dense case\, but the prior approach cannot be applied to the
  sparse case because a fast Johnson-Lindenstrauss transform (e.g.\, using 
 a fast Fourier transform) must be applied in each mode\, causing the spars
 e tensor to become dense. Instead\, we perform sketching through leverage 
 score sampling\, crucially relying on the fact that the structure of the K
 hatri-Rao product allows sampling from overestimates of the leverage score
 s without forming the full product or the corresponding probabilities. Nai
 ve application of leverage score sampling is ineffective because we often 
 have cases where a few scores are quite large\, so we propose a novel hybr
 id of deterministic and random leverage-score sampling which consistently 
 yields improved fits. Numerical results on real-world large-scale tensors 
 show the method is significantly faster than competing methods without sac
 rificing accuracy. This is joint work with Brett Larsen\, Stanford Univers
 ity.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mary Wootters (Stanford University)
DTSTART:20200730T183000Z
DTEND:20200730T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/15
DESCRIPTION:by Mary Wootters (Stanford University) as part of One World MI
 NDS seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tselil Schramm (MIT)
DTSTART:20200806T183000Z
DTEND:20200806T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/16/">Reconciling Statistical Queries and the Low Degree Likelihoo
 d Ratio</a>\nby Tselil Schramm (MIT) as part of One World MINDS seminar\n\
 n\nAbstract\nIn many high-dimensional statistics problems\, we observe inf
 ormation-computation tradeoffs: given access to more data\, statistical es
 timation and inference tasks require fewer computational resources. Though
  this phenomenon is ubiquitous\, we lack rigorous evidence that it is inhe
 rent. Today\, to prove that a statistical estimation task is 
 computationally intractable\, researchers must prove lower bounds against 
 each type of algorithm\, one by one\, resulting in a "proliferation of low
 er bounds". We scientists dream of a more general theory which unifies the
 se lower bounds and explains computational intractability in an algorithm-
 independent way.\n\nIn this talk\, I will make one small step towards real
 izing this dream. I will demonstrate general conditions under which two po
 pular frameworks yield the same information-computation tradeoffs for high
 -dimensional hypothesis testing: the first being statistical queries in th
 e "SDA" framework\, and the second being hypothesis testing with low-degre
 e hypothesis tests\, also known as the low-degree-likelihood ratio. Our eq
 uivalence theorems capture numerous well-studied high-dimensional learning
  problems: sparse PCA\, tensor PCA\, community detection\, planted clique\
 , and more.\n\nBased on joint work with Matthew Brennan\, Guy Bresler\, Sa
 muel B. Hopkins and Jerry Li.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dustin Mixon (Ohio State)
DTSTART:20200813T183000Z
DTEND:20200813T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/17/">Ingredients matter: Quick and easy recipes for estimating cl
 usters\, manifolds\, and epidemics</a>\nby Dustin Mixon (Ohio State) as pa
 rt of One World MINDS seminar\n\n\nAbstract\nData science resembles the cu
 linary arts in the sense that better ingredients allow for better results.
  We consider three instances of this phenomenon. First\, we estimate clust
 ers in graphs\, and we find that more signal allows for faster estimation.
  Here\, "signal" refers to having more edges within planted communities th
 an across communities. Next\, in the context of manifolds\, we find that a
 n informative prior allows for estimates of lower error. In particular\, w
 e apply the prior that the unknown manifold enjoys a large\, unknown symme
 try group. Finally\, we consider the problem of estimating parameters in e
 pidemiological models\, where we find that a certain diversity of data all
 ows one to design estimation algorithms with provable guarantees. In this 
 case\, data diversity refers to certain combinatorial features of the soci
 al network. Joint work with Jameson Cahill\, Charles Clum\, Hans Parshall\
 , and Kaiying Xie.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Helmut Bölcskei (ETH Zürich)
DTSTART:20200820T183000Z
DTEND:20200820T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/OneWo
 rldMINDS/18/">Fundamental limits of learning in deep neural networks</a>\n
 by Helmut Bölcskei (ETH Zürich) as part of One World MINDS seminar\n\n\n
 Abstract\nWe develop a theory that allows us to characterize the fundamental 
 limits of learning in deep neural networks. Concretely\, we consider Kolmo
 gorov-optimal approximation through deep neural networks with the guiding 
 theme being a relation between the epsilon-entropy of the hypothesis class
  to be learned and the complexity of the approximating network in terms of
  connectivity and memory requirements for storing the network topology and
  the quantized weights and biases. The theory we develop educes remarkable
  universality properties of deep networks. Specifically\, deep networks ca
 n Kolmogorov-optimally learn essentially any hypothesis class. In addition
 \, we find that deep networks provide exponential approximation accuracy
 —i.e.\, the approximation error decays exponentially in the number of no
 n-zero weights in the network—of widely different functions including th
 e multiplication operation\, polynomials\, sinusoidal functions\, general 
 smooth functions\, and even one-dimensional oscillatory textures and fract
 al functions such as the Weierstrass function\, neither of which has any
  previously known method achieving exponential approximation accuracy. We
  also sh
 ow that in the approximation of sufficiently smooth functions\, finite-width
  deep networks require strictly smaller connectivity than finite-depth wid
 e networks. We conclude with an outlook on the further role our theory cou
 ld play.\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nir Sochen (University of Tel Aviv)
DTSTART:20200827T183000Z
DTEND:20200827T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/19
DESCRIPTION:by Nir Sochen (University of Tel Aviv) as part of One World MI
 NDS seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Potts (TU Chemnitz)
DTSTART:20200903T183000Z
DTEND:20200903T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/20
DESCRIPTION:by Daniel Potts (TU Chemnitz) as part of One World MINDS semin
 ar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rima Alaifari ()
DTSTART:20200910T183000Z
DTEND:20200910T193000Z
DTSTAMP:20260404T110824Z
UID:OneWorldMINDS/21
DESCRIPTION:by Rima Alaifari () as part of One World MINDS seminar\n\nAbst
 ract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/OneWorldMINDS/21/
END:VEVENT
END:VCALENDAR
