BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Henry Adams (Colorado State University)
DTSTART:20210825T133000Z
DTEND:20210825T143000Z
DTSTAMP:20260404T094310Z
UID:Danger2021/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/1/">Applied topology: from global to local</a>\nby Henry Adams (Colo
 rado State University) as part of DANGER: Data\, Numbers\, and Geometry\n\
 n\nAbstract\nThrough the use of examples\, I will explain one way in which
  applied topology has evolved since the birth of persistent homology in th
 e early 2000s. The first applications of topology to data emphasized the g
 lobal shape of a dataset\, such as the three-circle model for 3 x 3 pixel 
 patches from natural images\, or the configuration space of the cyclo-octa
 ne molecule\, which is a sphere with a Klein bottle attached via two circl
 es of singularity. More recently\, persistent homology is being used to me
 asure the local geometry of data. How do you vectorize geometry for use in
  machine learning problems? Persistent homology\, and its vectorization te
 chniques including persistence landscapes and persistence images\, provide
  popular techniques for incorporating geometry in machine learning. I will
  survey applications arising from machine learning tasks in agent-based mo
 deling\, shape recognition\, materials science\, and biology.\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sonja Petrovic (Illinois Institute of Technology)
DTSTART:20210825T143000Z
DTEND:20210825T153000Z
DTSTAMP:20260404T094310Z
UID:Danger2021/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/2/">Learning in commutative algebra & models for random algebraic st
 ructures</a>\nby Sonja Petrovic (Illinois Institute of Technology) as part
  of DANGER: Data\, Numbers\, and Geometry\n\n\nAbstract\nA commutative alg
 ebraist's interest in randomness has many facets\, of which this talk high
 lights two. Namely\, we will discuss 1) how to use basic statistics and le
 arning for improving Buchberger's algorithm and 2) how to generate samples
  of ideals in a `controlled' way. The two topics\, based on joint work wit
 h various collaborators and students\, form a two-step process in learning
  on algebraic structures\, designed with the aim of avoiding the 'danger z
 one' of blind machine learning over uninteresting distributions. \nFor lea
 rning\, we show that a multiple linear regression model built from a set o
 f easy-to-compute ideal generator statistics can predict the number of pol
 ynomial additions somewhat well\, better than an uninformed model\, and be
 tter than regression models built on some intuitive commutative algebra in
 variants that are more difficult to compute. We also train a simple recurs
 ive neural network that outperforms these linear models. Our work serves a
 s a proof of concept\, demonstrating that predicting the number of polynom
 ial additions in Buchberger's algorithm is a feasible problem from the poi
 nt of view of machine learning.\nAs a first example of sampling\, we prese
 nt random monomial ideals\, using which we prove theorems about the probab
 ility distributions\, expectations and thresholds for events involving mon
 omial ideals with given Hilbert function\, Krull dimension\, first graded 
 Betti numbers\, and present several experimentally-backed conjectures abou
 t regularity\, projective dimension\, strong genericity\, and Cohen-Macaul
 ayness of random monomial ideals. The models for monomial ideals can be us
 ed as a basis for generating other types of algebraic objects\, and provin
 g existence of desired properties.\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Siu-Cheong Lau (Boston University)
DTSTART:20210825T160000Z
DTEND:20210825T170000Z
DTSTAMP:20260404T094310Z
UID:Danger2021/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/3/">Deep learning over the moduli space of quiver representations</a
 >\nby Siu-Cheong Lau (Boston University) as part of DANGER: Data\, Numbers
 \, and Geometry\n\n\nAbstract\nIt is interesting to observe that neural
  networks in machine learning have a basic setup similar to that of
  quiver representation theory. In this talk\, I will build an
  algebro-geometric formulation of
  a computing machine\, which is well-defined over the moduli space of repr
 esentations.  I will also explain a uniformization between spherical\, Euc
 lidean and hyperbolic moduli of framed quiver representations\, and constr
 uct a learning algorithm over these moduli spaces.\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Riccardo Finotello (CEA Paris-Saclay)
DTSTART:20210826T120000Z
DTEND:20210826T130000Z
DTSTAMP:20260404T094310Z
UID:Danger2021/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/4/">Algebraic geometry and computer vision: inception neural network
  for Calabi-Yau manifolds</a>\nby Riccardo Finotello (CEA Paris-Saclay) as
  part of DANGER: Data\, Numbers\, and Geometry\n\n\nAbstract\nComputing to
 pological properties of Calabi-Yau manifolds is\, in general\, a challengi
 ng mathematical task: traditional methods lead to complicated algorithms\,
  without expressions in closed form in most cases. At the same time\, rece
 nt years have witnessed the rising use of deep learning as a method for ex
 ploration of large sets of data\, to learn their patterns and properties. 
 This is particularly interesting when it comes to unraveling complicated geom
 etrical structures\, as it is a central issue both in mathematics and theo
 retical physics\, as well as in the development of trustworthy AI methods.
  Motivated by their distinguished role in string theory for the study of c
 ompactifications\, we compute the Hodge numbers of Complete Intersection C
 alabi-Yau (CICY) manifolds using deep neural networks. Specifically\, we i
 ntroduce new regression architectures\, inspired by Google's Inception net
 work and multi-task learning\, which combine theoretical knowledge of the
  inputs with recent advancements in AI.  This shows the potential of d
 eep learning to learn from geometrical data\, and it proves the versatilit
 y of architectures developed in different contexts\, which may therefore f
 ind their way into theoretical physics and mathematics for exploration and i
 nference.\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kyu-Hwan Lee (University of Connecticut)
DTSTART:20210826T133000Z
DTEND:20210826T143000Z
DTSTAMP:20260404T094310Z
UID:Danger2021/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/5/">Applications of machine learning to data from number theory</a>\
 nby Kyu-Hwan Lee (University of Connecticut) as part of DANGER: Data\, Num
 bers\, and Geometry\n\n\nAbstract\nIn this talk\, we apply machine learnin
 g techniques to various data from the L-functions and modular forms databa
 se (LMFDB) and show that a machine can be trained to distinguish objects i
 n number theory according to their standard invariants. The applications i
 n this talk will include class numbers of quadratic number fields\, ranks 
 of elliptic curves\, and Sato-Tate groups of genus 2 curves. This is joint wor
 k with Yang-Hui He and Thomas Oliver.\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roozbeh Yousefzadeh (Yale University)
DTSTART:20210826T143000Z
DTEND:20210826T153000Z
DTSTAMP:20260404T094310Z
UID:Danger2021/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/6/">Deep learning generalization\, extrapolation\, over-parameteriza
 tion and decision boundaries</a>\nby Roozbeh Yousefzadeh (Yale University)
  as part of DANGER: Data\, Numbers\, and Geometry\n\n\nAbstract\nDeep neura
 l networks have achieved great success\, most notably in learning to class
 ify images. Yet\, the phenomenon of learning images is not well understood
 \, and generalization of deep networks is considered a mystery. Recent stu
 dies have explained the generalization of deep networks within the framewo
 rk of interpolation. In this talk\, we will see that the task of classifyi
 ng images requires extrapolation capability\, and interpolation by itself 
 is not adequate to understand the functional task of deep networks. We study i
 mage classification datasets in the pixel space\, the internal representat
 ions of images learned throughout the layers of trained networks\, and als
 o in the low-dimensional feature space that one can derive using wavelets/
 shearlets. We show that in all these spaces\, image classification remains
  an extrapolation task to a moderate (yet considerable) degree outside the
  convex hull of the training set. From the mathematical perspective\, a deep l
 earning image classifier is a function that partitions its domain and assi
 gns a class to each partition. Partitions are defined by decision boundari
 es and so is the model. Therefore\, the extensions of decision boundaries 
 outside the convex hull of the training set are crucial in the model's generalizat
 ion. From this perspective\, over-parameterization is a necessary conditio
 n for the ability to control the extensions of decision boundaries\, a nov
 el way of explaining why deep networks need to be over-parameterized. I wi
 ll also present a homotopy algorithm for computing points on the decision 
 boundaries of deep networks\, and finally\, I will explain how we can leve
 rage the decision boundaries to audit and debug ML models used in social a
 pplications.\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ruriko Yoshida (Naval Postgraduate School)
DTSTART:20210826T160000Z
DTEND:20210826T170000Z
DTSTAMP:20260404T094310Z
UID:Danger2021/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/7/">Tree topologies along a tropical line segment</a>\nby Ruriko Yos
 hida (Naval Postgraduate School) as part of DANGER: Data\, Numbers\, and G
 eometry\n\n\nAbstract\nTropical geometry with the max-plus algebra has bee
 n applied to statistical learning models over the spaces of phylogenetic t
 rees because geometry with the tropical metric over tree spaces has some n
 ice properties such as convexity in terms of the tropical metric.  One of 
 the challenges in applications of tropical geometry to tree spaces is the 
 difficulty of interpreting outcomes of statistical models with the tropical m
 etric.  This talk focuses on combinatorics of tree topologies along a trop
 ical line segment\, an intrinsic geodesic with the tropical metric\, betwe
 en two phylogenetic trees over the tree space\, and we show some properties
  of a tropical line segment between two trees.  Specifically\, we show that
  the probability that the tropical line segment between two randomly
  chosen trees passes through the origin (the star tree) is zero\, and we
  also show that if two given trees differ by only one nearest neighbor
  interchange (NNI) move\, then the tree topology of any tree in the
  tropical line segment between them is the same as the tree topology of
  one of the two given trees\, with possibly zero branch lengths.\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Minhyong Kim (Warwick)
DTSTART:20210825T121500Z
DTEND:20210825T131500Z
DTSTAMP:20260404T094310Z
UID:Danger2021/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Dange
 r2021/8/">How hard is it to learn a mathematical structure?</a>\nby Minhyo
 ng Kim (Warwick) as part of DANGER: Data\, Numbers\, and Geometry\n\nAbstr
 act: TBA\n
LOCATION:https://stable.researchseminars.org/talk/Danger2021/8/
END:VEVENT
END:VCALENDAR
