BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Tom Oliver (Nottingham)
DTSTART:20210929T120000Z
DTEND:20210929T130000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/1/">Supervised learning of arithmetic invariants</a>\nby Tom Oliver (No
 ttingham) as part of Machine Learning Seminar\n\n\nAbstract\nWe explore th
 e utility of standard supervised learning algorithms for a range of classi
 fication problems in number theory. In particular\, we will consider class
  numbers of real quadratic fields\, ranks of elliptic curves over Q\, and 
 endomorphism types for genus 2 curves over Q. Each case is motivated by it
 s appearance in an open conjecture. Throughout\, the basic strategy is the s
 ame: we vectorize the underlying objects via the coefficients of their L-f
 unctions.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexei Vernitski (Essex)
DTSTART:20220701T130000Z
DTEND:20220701T140000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/2/">Using machine learning to solve mathematical problems and to search
  for examples and counterexamples in pure maths research</a>\nby Alexei Ve
 rnitski (Essex) as part of Machine Learning Seminar\n\n\nAbstract\nOur rec
 ent research can be generally described as applying state-of-the-art techn
 ologies of machine learning to suitable mathematical problems. As to machi
 ne learning\, we use both reinforcement learning and supervised learning (
 underpinned by deep learning). As to mathematical problems\, we mostly con
 centrate on knot theory\, for two reasons: firstly\, we have a positive e
 xperience of applying another kind of artificial intelligence (automated r
 easoning) to knot theory\; secondly\, examples and counter-examples in kno
 t theory are finite and\, typically\, not very large\, so they are conveni
 ent for the computer to work with.\n\nHere are some successful examples of
  our recent work\, which I plan to talk about.\n\n1. Some recent studies u
 sed machine learning to untangle knots using Reidemeister moves\, but they
  do not describe in detail how they implemented untangling on the computer
 . We invested effort into implementing untangling in one clearly defined s
 cenario\, and were successful\, and made our computer code publicly availa
 ble.\n2. We found counterexamples showing that some recent publications cl
 aiming to give new descriptions of realisable Gauss diagrams contain an er
 ror. We trained several machine learning agents to recognise realisable Ga
 uss diagrams and noticed that they fail to recognise correctly the same co
 unterexamples which human mathematicians failed to spot.\n3. One problem r
 elated to (and "almost" equivalent to) recognising the trivial knot is col
 ouring the knot diagram by elements of algebraic structures called quandle
 s (I will define them). We considered\, for some types of knot diagrams (i
 ncluding petal diagrams)\, how supervised learning copes with this problem
 .\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anindita Maiti (Northeastern University)
DTSTART:20220912T140000Z
DTEND:20220912T150000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/3/">Non-perturbative Non-Lagrangian Neural Network Field Theories</a>\n
 by Anindita Maiti (Northeastern University) as part of Machine Learning Se
 minar\n\n\nAbstract\nEnsembles of Neural Network (NN) output functions des
 cribe field theories. These Neural Network Field Theories become free\,
  i.e. Gaussian\, in the limit of infinite width and independent parameter
  distributions\, due to the Central Limit Theorem (CLT). Interaction term
 s\, i.e. non-Gaussianities\, arise in these field theories from violation
 s of the CLT at finite width and/or correlated parameter distributions. I
 n general\, non-Gaussianities render Neural Network Field Theories non-pe
 rturbative and non-Lagrangian. In this talk\, I will describe methods to
  study non-perturbative non-Lagrangian field theories in Neural Networks
 \, via a dual framework over parameter distributions. This duality lets
  us study correlation functions and symmetries of NN field theories in t
 he absence of an action\; further\, the partition function can be approx
 imated as a series sum over conn
 ected correlation functions. Thus\, Neural Networks allow us to study non-
 perturbative non-Lagrangian field theories through their architectures\, a
 nd can be beneficial to both Machine Learning and physics.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Manolis Tsakiris (Chinese Academy of Sciences)
DTSTART:20230208T100000Z
DTEND:20230208T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/4/">Unlabelled Principal Component Analysis</a>\nby Manolis Tsakiris (C
 hinese Academy of Sciences) as part of Machine Learning Seminar\n\n\nAbstr
 act\nThis talk will consider the problem of recovering a matrix of bounded
  rank from a corrupted version of it\, where the corruption consists of an
  unknown permutation of the matrix entries. Exploiting the theory of Groeb
 ner bases for determinantal ideals\, recovery theorems will be given. For 
 a special instance of the problem\, an algorithmic pipeline will be demons
 trated\, which employs methods for robust principal component analysis wit
 h respect to outliers and methods for linear regression without correspond
 ences.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guido Montufar (UCLA)
DTSTART:20230222T160000Z
DTEND:20230222T170000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/5/">Geometry and convergence of natural policy gradient methods</a>\nby
  Guido Montufar (UCLA) as part of Machine Learning Seminar\n\n\nAbstract\n
 We study the convergence of several natural policy gradient (NPG) methods 
 in infinite-horizon discounted Markov decision processes with regular poli
 cy parametrizations. For a variety of NPGs and reward functions we show th
 at the trajectories in state-action space are solutions of gradient flows 
 with respect to Hessian geometries\, based on which we obtain global conve
 rgence guarantees and convergence rates. In particular\, we show linear co
 nvergence for unregularized and regularized NPG flows with the metrics pro
 posed by Kakade and Morimura and co-authors by observing that these arise 
 from the Hessian geometries of conditional entropy and entropy respectivel
 y. Further\, we obtain sublinear convergence rates for Hessian geometries 
 arising from other convex functions like log-barriers. Finally\, we interp
 ret the discrete-time NPG methods with regularized rewards as inexact Newt
 on methods if the NPG is defined with respect to the Hessian geometry of t
 he regularizer. This yields local quadratic convergence rates of these met
 hods for step size equal to the penalization strength. This is work with J
 ohannes Müller.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kathlén Kohn (KTH)
DTSTART:20230215T100000Z
DTEND:20230215T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/6/">The Geometry of Linear Convolutional Networks</a>\nby Kathlén Kohn
  (KTH) as part of Machine Learning Seminar\n\n\nAbstract\nWe discuss linea
 r convolutional neural networks (LCNs) and their critical points. We obser
 ve that the function space (i.e.\, the set of functions represented by LCN
 s) can be identified with polynomials that admit certain factorizations\, 
 and we use this perspective to describe the impact of the network’s arch
 itecture on the geometry of the function space. For instance\, for LCNs wi
 th one-dimensional convolutions having stride one and arbitrary filter siz
 es\, we provide a full description of the boundary of the function space. 
 We further study the optimization of an objective function over such LCNs:
  We characterize the relations between critical points in function space a
 nd in parameter space and show that there do exist spurious critical point
 s. We compute an upper bound on the number of critical points in function 
 space using Euclidean distance degrees and describe dynamical invariants f
 or gradient descent. This talk is based on joint work with Thomas Merkh\, 
 Guido Montúfar\, and Matthew Trager.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nick Vannieuwenhoven (KU Leuven)
DTSTART:20230308T100000Z
DTEND:20230308T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/7/">Group-invariant tensor train networks for supervised learning</a>\n
 by Nick Vannieuwenhoven (KU Leuven) as part of Machine Learning Seminar\n\
 n\nAbstract\nInvariance under selected transformations has recently proven
  to be a powerful inductive bias in several machine learning models. One c
 lass of such models are tensor train networks. In this talk\, we impose in
 variance relations on tensor train networks. We introduce a new numerical 
 algorithm to construct a basis of tensors that are invariant under the act
 ion of normal matrix representations of an arbitrary discrete group. This 
 method can be up to several orders of magnitude faster than previous appro
 aches. The group-invariant tensors are then combined into a group-invarian
 t tensor train network\, which can be used as a supervised machine learnin
 g model. We applied this model to a protein binding classification problem
 \, taking into account problem-specific invariances\, and obtained predict
 ion accuracy in line with state-of-the-art invariant deep learning approac
 hes. This is joint work with Brent Sprangers.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yang-Hui He (LIMS)
DTSTART:20230405T090000Z
DTEND:20230405T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/8/">Universes as Bigdata: Physics\, Geometry and Machine-Learning</a>\n
 by Yang-Hui He (LIMS) as part of Machine Learning Seminar\n\n\nAbstract\nT
 he search for the Theory of Everything has led to superstring theory\, whi
 ch then led physics\, first to algebraic/differential geometry/topology\, 
 then to computational geometry\, and now to data science. With a concr
 ete playground of the geometric landscape\, accumulated by the collaborati
 on of physicists\, mathematicians and computer scientists over the last 4 
 decades\, we show how the latest techniques in machine-learning can help e
 xplore problems of interest to theoretical physics and to pure mathematics
 . At the core of our programme is the question: how can AI help us with ma
 thematics?\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Julia Lindberg (UT Austin)
DTSTART:20230315T150000Z
DTEND:20230315T160000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/9/">Estimating Gaussian mixtures using sparse polynomial moment systems
 </a>\nby Julia Lindberg (UT Austin) as part of Machine Learning Seminar\n\
 n\nAbstract\nThe method of moments is a statistical technique for density 
 estimation that solves a system of moment equations to estimate the parame
 ters of an unknown distribution. A fundamental question critical to unders
 tanding identifiability asks how many moment equations are needed to get f
 initely many solutions and how many solutions there are. We answer this qu
 estion for classes of Gaussian mixture models using the tools of polyhedra
 l geometry. Using these results\, we present a homotopy method to perform 
 parameter recovery\, and therefore density estimation\, for high dimension
 al Gaussian mixture models. The number of paths tracked in our method scal
 es linearly in the dimension.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Eduardo Paluzo-Hidalgo (Seville)
DTSTART:20230329T140000Z
DTEND:20230329T150000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/10/">An introduction to Simplicial-map Neural Networks</a>\nby Eduardo 
 Paluzo-Hidalgo (Seville) as part of Machine Learning Seminar\n\n\nAbstract
 \nIn the recently accepted project RexasiPro\, we deal with a critical en
 vironment where trustworthiness is decisive. One of our approaches is sim
 plicial-map neural networks (SMNNs)\, which are explicitly defined using
  simplicial maps between triangulations of the input and output spaces. T
 heir combinatorial definition lets us prove and guarantee several nice pr
 operties following trustworthy AI principles. In "Two-hidden-layer feed-f
 orward networks are universal approximators: A constructive approach"\,
  the first definition of SMNNs was given and their universal approximatio
 n property was proved. Later\, in "Simplicial-Map Neural Networks Robust
  to Adversarial Examples"\, their robustness against adversarial example
 s was described.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrizio Frosini (Bologna)
DTSTART:20230322T100000Z
DTEND:20230322T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/11/">Some recent results on the theory of GENEOs and its application to
  Machine Learning</a>\nby Patrizio Frosini (Bologna) as part of Machine Le
 arning Seminar\n\n\nAbstract\nGroup equivariant non-expansive operators (G
 ENEOs) were introduced a few years ago as mathematical tools for appr
 oximating data observers when data are represented by real-valued or vecto
 r-valued functions. The use of these operators is based on the assumption 
 that the interpretation of data depends on the geometric properties of the
  observers. In this talk we will illustrate some recent results in the the
 ory of GENEOs\, showing how these operators can make available a new appro
 ach to topological data analysis and geometric deep learning.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christoph Hertrich (LSE)
DTSTART:20230419T150000Z
DTEND:20230419T160000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/12/">Understanding Neural Network Expressivity via Polyhedral Geometry<
 /a>\nby Christoph Hertrich (LSE) as part of Machine Learning Seminar\n\n\n
 Abstract\nNeural networks with rectified linear unit (ReLU) activations ar
 e one of the standard models in modern machine learning. Despite their pra
 ctical importance\, fundamental theoretical questions concerning ReLU netw
 orks remain open to this day. For instance\, what is the precise set of (p
 iecewise linear) functions exactly representable by ReLU networks with a g
 iven depth? Even the special case asking for the number of layers to compu
 te a function as simple as $\\max\\{0\, x_1\, x_2\, x_3\, x_4\\}$ has not 
 been solved yet. In this talk we will explore the relevant background to u
 nderstand this question and report on recent progress using tropical an
 d polyhedral geometry as well as a computer-aided approach based on mixed-
 integer programming. This is based on joint works with Amitabh Basu\, Marc
 o Di Summa\, and Martin Skutella (NeurIPS 2021)\, as well as Christian Haa
 se and Georg Loho (ICLR 2023).\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vasco Portilheiro (UCL)
DTSTART:20230412T150000Z
DTEND:20230412T160000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/13/">Barriers to Learning Symmetries</a>\nby Vasco Portilheiro (UCL) as
  part of Machine Learning Seminar\n\n\nAbstract\nGiven the success of equi
 variant models\, there has been increasing interest in models which can le
 arn a symmetry from data\, rather than it being imposed a priori. We prese
 nt work which formalizes a tradeoff between (a) the simultaneous learnabil
 ity of symmetries and equivariant functions\, and (b) universal approximat
 ion of equivariant functions. The work is motivated by an experiment which
  modifies the Equivariant Multilayer Perceptron (EMLP) of Finzi et al. (20
 21) in an attempt to learn a group together with an equivariant function. 
 Additionally\, the tradeoff is shown to not exist for group-convolutional 
 networks.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bastian Rieck (Munich)
DTSTART:20230426T090000Z
DTEND:20230426T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/14/">Curvature for Graph Learning</a>\nby Bastian Rieck (Munich) as par
 t of Machine Learning Seminar\n\n\nAbstract\nCurvature bridges geometry an
 d topology\, using local information to derive global statements. While we
 ll-known in a differential topology context\, it was recently extended to 
 the domain of graphs. In fact\, graphs give rise to various notions of cur
 vature\, which differ in expressive power and purpose. We will give a brie
 f overview of curvature in graphs\, define some relevant concepts\, and sh
 ow their utility for data science and machine learning applications. In pa
 rticular\, we shall discuss two applications: first\, the use of curvature
  to distinguish between different models for synthesising new graphs from 
 some unknown distribution\; second\, a novel framework for defining curvat
 ure for hypergraphs\, whose structural properties require a more generic s
 etting. We will also describe new applications that are specifically geare
 d towards a treatment by curvature\, thus underlining the utility of this 
 concept for data science.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Taejin Paik (Seoul National University)
DTSTART:20230524T090000Z
DTEND:20230524T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/15/">Isometry-Invariant and Subdivision-Invariant Representations of Em
 bedded Simplicial Complexes</a>\nby Taejin Paik (Seoul National University
 ) as part of Machine Learning Seminar\n\n\nAbstract\nGeometric objects suc
 h as meshes and graphs are commonly used in various applications\, but ana
 lyzing them can be challenging due to their complex structures. Traditiona
 l approaches may not be robust to transformations like subdivision or isom
 etry\, leading to inconsistent results. We present a novel approach that
  addresses these limitations by using only topological and geometric dat
 a to analyz
 e simplicial complexes in a subdivision-invariant and isometry-invariant w
 ay. This approach involves using a graph neural network to create an $O(3)
 $-equivariant operator and the Euler curve transform to generate sufficien
 t statistics that describe the properties of the object.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Platt (KCL)
DTSTART:20230503T090000Z
DTEND:20230503T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/16/">Group invariant machine learning by fundamental domain projections
 </a>\nby Daniel Platt (KCL) as part of Machine Learning Seminar\n\n\nAbstr
 act\nIn many applications one wants to learn a function that is invariant 
 under a group action. For example\, classifying images of digits\, no matt
 er how they are rotated. There exist many approaches in the literature to 
 do this. I will mention two approaches that are very useful in many applic
 ations\, but struggle if the group is big or acts in a complicated way. I 
 will then explain our approach which does not have these two problems. The
  approach works by finding some "canonical representative" of each input e
 lement. In the example of images of digits\, one may rotate the digit so t
 hat the brightest quarter is in the top-left\, which would define a "canon
 ical representative". In the general case\, one has to define what that me
 ans. Our approach is useful if the group is big\, and I will present exper
 iments on the Complete Intersection Calabi-Yau and Kreuzer-Skarke datasets
  to show this. Our approach is useless if the group is small\, and the cas
 e of rotated images of digits is an example of this. This is joint work wi
 th Benjamin Aslan and David Sheard.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vasco Brattka (Bundeswehr München)
DTSTART:20230614T090000Z
DTEND:20230614T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/17/">On the Complexity of Computing Gödel Numbers</a>\nby Vasco Brattk
 a (Bundeswehr München) as part of Machine Learning Seminar\n\n\nAbstract\
 nGiven a computable sequence of natural numbers\, it is a natural task to 
 find a Gödel number of a program that generates this sequence. It is easy
  to see that this problem is neither continuous nor computable. In algorit
 hmic learning theory this problem is well studied from several perspective
 s and one question studied there is for which sequences this problem is at
  least learnable in the limit. Here we study the problem on all computable
  sequences and we classify the Weihrauch complexity of it. For this purpos
 e we can\, among other methods\, utilize the amalgamation technique known 
 from learning theory. As a benchmark for the classification we use closed 
 and compact choice problems and their jumps on natural numbers\, and we ar
 gue that these problems correspond to induction and boundedness principles
 \, as they are known from the Kirby-Paris hierarchy in reverse mathematics
 . We provide a topological as well as a computability-theoretic classifica
 tions\, which reveal some significant differences.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Edward Pearce-Crump (Imperial)
DTSTART:20230621T090000Z
DTEND:20230621T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/18/">Exploring group equivariant neural networks using set partition di
 agrams</a>\nby Edward Pearce-Crump (Imperial) as part of Machine Learning 
 Seminar\n\n\nAbstract\nWhat do jellyfish and an 11th century Japanese nove
 l have to do with neural networks? In recent years\, much attention has be
 en given to developing neural network architectures that can efficiently l
 earn from data with underlying symmetries. These architectures ensure that
  the learned functions maintain a certain geometric property called group 
 equivariance\, which determines how the output changes based on a change t
 o the input under the action of a symmetry group. In this talk\, we will d
 escribe a number of new group equivariant neural network architectures tha
 t are built using tensor power spaces of $\\mathbb{R}^n$ as their layers. 
 We will show that the learnable\, linear functions between these layers ca
 n be characterised by certain subsets of set partition diagrams. This talk
  will be based on several papers that are to appear in ICML 2023.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alvaro Torras Casas (Cardiff)
DTSTART:20230531T090000Z
DTEND:20230531T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/19/">Dataset comparison using persistent homology morphisms</a>\nby Alv
 aro Torras Casas (Cardiff) as part of Machine Learning Seminar\n\n\nAbstra
 ct\nPersistent homology summarizes geometrical information of data by mean
 s of a barcode. Given a pair of datasets\, $X$ and $Y$\, one might obtain 
 their respective barcodes $B(X)$ and $B(Y)$. Thanks to stability results\,
  if $X$ and $Y$ are similar enough one deduces that the barcodes $B(X)$ an
 d $B(Y)$ are also close enough\; however\, the converse is not true in gen
 eral. In this talk we consider the case when there is a known relation bet
 ween $X$ and $Y$ encoded by a morphism between persistence modules. For ex
 ample\, this is the case when $Y$ is a finite subset of Euclidean space a
 nd $X$ is a sample taken from $Y$. As in linear algebra\, a morphism betw
 een persistence modules can be represented\, given a choice of a pair of
  bases\, by an associated matrix. I will explain how to use this matrix t
 o obtain barcodes for images\, kernels and cokernels. Additionally\, I wi
 ll explain how to compute an induced block function that relates the barc
 odes $B(X)$ and $B(Y)$. I will finish the talk by reviewing some applicat
 ions of this theory as well as future research directions.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Gebhart (Minnesota)
DTSTART:20230705T150000Z
DTEND:20230705T160000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/20/">Specifying Local Constraints in Representation Learning with Cellu
 lar Sheaves</a>\nby Thomas Gebhart (Minnesota) as part of Machine Learning
  Seminar\n\n\nAbstract\nMany machine learning algorithms constrain their l
 earned representations by imparting inductive biases based on local smooth
 ness assumptions. While these constraints are often natural and effective\
 , there are situations in which their simplicity is misaligned with the r
 epresentation structure required by the task\, leading to a lack of expres
 sivity and pathological behaviors like representational oversmoothing or i
 nconsistency. Without a broader theoretical framework for reasoning about 
 local representational constraints\, it is difficult to conceptualize and 
 move beyond such representational misalignments. In this talk\, we will se
 e that cellular sheaf theory offers an ideal algebro-topological framework
  for both reasoning about and implementing machine learning models on data
  which are subject to such local-to-global constraints over a topological 
 space. We will introduce cellular sheaves from a categorical perspective\,
  observing the relationship between their definition as a limit object and
  the consistency objectives underlying representation learning. We will th
 en turn to a discussion of sheaf (co)homology as a semi-computable tool fo
 r implementing these categorical concepts. Finally\, we will observe two p
 ractical applications of these ideas in the form of sheaf neural networks\
 , a generalization of graph neural networks for processing sheaf-valued si
 gnals\; and knowledge sheaves\, a sheaf-theoretic reformulation of knowled
 ge graph embedding.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Challenger Mishra (Cambridge)
DTSTART:20230712T090000Z
DTEND:20230712T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/21/">Mathematical conjecture generation and Machine Intelligence</a>\nb
 y Challenger Mishra (Cambridge) as part of Machine Learning Seminar\n\n\nA
 bstract\nConjectures hold a special status in mathematics. Good conjecture
 s epitomise milestones in mathematical discovery\, and have historically i
 nspired new mathematics and shaped progress in theoretical physics. Hilber
 t’s list of 23 problems and André Weil’s conjectures oversaw major de
 velopments in mathematics for decades. Crafting conjectures can often be u
 nderstood as a problem in pattern recognition\, for which Machine Learning
  (ML) is tailor-made. In this talk\, I will propose a framework that allow
 s a principled study of a space of mathematical conjectures. Using this fr
 amework and exploiting domain knowledge and machine learning\, we generate
  a number of conjectures in number theory and group theory. I will present
  evidence in support of some of the resulting conjectures and present a ne
 w theorem. I will lay out a vision for this endeavour\, and conclude by po
 sing some general questions about the pipeline.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Le Quoc Tung (ENS Lyon)
DTSTART:20230726T090000Z
DTEND:20230726T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/22/">Algorithmic and theoretical aspects of sparse deep neural networks
 </a>\nby Le Quoc Tung (ENS Lyon) as part of Machine Learning Seminar\n\n\n
 Abstract\nSparse deep neural networks offer a compelling practical opportu
 nity to reduce the cost of training\, inference and storage\, which are gr
 owing exponentially in state-of-the-art deep learning. In this pres
 entation\, we will introduce an approach to study sparse deep neural netwo
 rks through the lens of another related problem: sparse matrix factorizati
 on\, i.e.\, the problem of approximating a (dense) matrix by the product o
 f (multiple) sparse factors. In particular\, we identify and investigate i
 n detail some theoretical and algorithmic aspects of a variant of sparse m
 atrix factorization named fixed support matrix factorization (FSMF) in whi
 ch the set of non-zero entries of sparse factors is known. Several fundam
 ental questions of sparse deep neural networks such as the existence of op
 timal solutions of the training problem or topological properties of its f
 unction space can be addressed using the results on FSMF. In addition\, b
 y applying these results\, we also study butterfly parametrizatio
 n\, an approach that consists of replacing (large) weight matrices with th
 e products of extremely sparse and structured ones in sparse deep neural n
 etworks.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Honglu Fan (Geneva)
DTSTART:20230802T090000Z
DTEND:20230802T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/23/">Local uniformization\, Hilbert scheme of points and reinforcement 
 learning</a>\nby Honglu Fan (Geneva) as part of Machine Learning Seminar\n
 \n\nAbstract\nIn this talk\, I will give a brief tour of how local unif
 ormization\, the Hilbert scheme of points\, and reinforcement learning com
 e together in a joint work (arXiv:2307.00252 [cs.LG]) with Gergely Berczi 
 and Mingcong Zeng.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Charlotte Aten (Denver)
DTSTART:20230906T140000Z
DTEND:20230906T150000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/24/">Discrete neural nets and polymorphic learning</a>\nby Charlotte At
 en (Denver) as part of Machine Learning Seminar\n\n\nAbstract\nClassical n
 eural network learning techniques have primarily been focused on optimizat
 ion in a continuous setting. Early results in the area showed that many ac
 tivation functions could be used to build neural nets that represent any f
 unction\, but of course this also allows for overfitting. In an effort to 
 ameliorate this deficiency\, one seeks to reduce the search space of possi
 ble functions to a special class which preserves some relevant structure. 
 I will propose a quite general solution to this problem\, whic
 h is to use polymorphisms of a relevant discrete relational structure as a
 ctivation functions. I will give some concrete examples of this\, then hin
 t that this specific case is actually of broader applicability than one mi
 ght guess.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Felix Schremmer (Hong Kong)
DTSTART:20231018T090000Z
DTEND:20231018T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/25
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/25/">Machine learning assisted exploration for affine Deligne-Lusztig v
 arieties</a>\nby Felix Schremmer (Hong Kong) as part of Machine Learning S
 eminar\n\n\nAbstract\nIn this interdisciplinary study\, we describe a proc
 edure to assist and accelerate research in pure mathematics by using machi
 ne learning. We study affine Deligne-Lusztig varieties\, certain geometric
  objects related to a number of mathematical questions\, by carefully deve
 loping a number of machine learning models. This iterated pipeline yields 
 well-interpretable and highly accurate models\, thus producing strongly su
 pported mathematical conjectures. We explain how this method could have dr
 amatically accelerated the research in the past. A completely new mathemat
 ical theorem\, found by our ML-assisted method and proved using the classi
 cal mathematical tools of the field\, concludes this study. This is joint 
 work with Bin Dong\, Pengfei Jin\, Xuhua He and Qingchao Yu.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bruno Gavranović (Strathclyde)
DTSTART:20230920T090000Z
DTEND:20230920T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/26/">Fundamental Components of Deep Learning: A category-theoretic appr
 oach</a>\nby Bruno Gavranović (Strathclyde) as part of Machine Learning S
 eminar\n\n\nAbstract\nDeep learning\, despite its remarkable achievements\
 , is still a young field. Like the early stages of many scientific discipl
 ines\, it is permeated by ad-hoc design decisions. From the intricacies of
  the implementation of backpropagation\, through new and poorly understood
  phenomena such as double descent\, scaling laws or in-context learning\, 
 to a growing zoo of neural network architectures - there are few unifying 
 principles in deep learning\, and no uniform and compositional mathematica
 l foundation. In this talk I'll present a novel perspective on deep learni
 ng by utilising the mathematical framework of category theory. I'll identi
 fy two main conceptual components of neural networks\, report on progress 
 made over the last few years by the research community in formalising the
 m\, and show how they've been used to describe backpropagation\, architec
 tures\, and supervised learning in general\, shedding new light on the ex
 isting field.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rahul Sarkar (Stanford)
DTSTART:20230927T140000Z
DTEND:20230927T150000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/27
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/27/">A framework for generating inequality conjectures</a>\nby Rahul Sa
 rkar (Stanford) as part of Machine Learning Seminar\n\n\nAbstract\nIn this
  talk\, I'll present some recent and ongoing work\, where we propose a sys
 tematic approach to finding abstract patterns in mathematical data\, in or
 der to generate conjectures about mathematical inequalities. We focus on s
 trict inequalities of type $f < g$ and associate them with a Banach manifo
 ld. We develop a structural understanding of this conjecture space by stud
 ying linear automorphisms of this manifold. Next\, we propose an algorithm
 ic pipeline to generate novel conjectures. As proof of concept\, we give
  a toy algorithm to generate conjectures about the prime counting function an
 d diameters of Cayley graphs of non-abelian simple groups. Some of these c
 onjectures were proved while others remain unproven.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Martina Scolamiero (KTH)
DTSTART:20231108T100000Z
DTEND:20231108T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/28/">Machine Learning with Topological Data Analysis features</a>\nby M
 artina Scolamiero (KTH) as part of Machine Learning Seminar\n\n\nAbstract\
 nIn Topological Data Analysis\, Persistent Homology has been widely used t
 o extract features from data. Such features are then used for clustering\,
  visualization and classification. In this talk I will describe how we def
 ine Lipschitz continuous persistence features starting from pseudo metrics
  to compare topological representations of data. Special emphasis will be 
 on the variety of different features that can be constructed in this way a
 nd how they can be used in machine learning pipelines. Joint work with the
  TDA group at KTH.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Agnese Barbensi (Queensland)
DTSTART:20231122T100000Z
DTEND:20231122T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/29
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/29/">Persistent homology\, hypergraphs and geometric cycle matching</a>
 \nby Agnese Barbensi (Queensland) as part of Machine Learning Seminar\n\n\
 nAbstract\nTopological data analysis has been demonstrated to be a powerfu
 l tool to describe topological signatures in real-life data\, and to extra
 ct complex patterns arising in natural systems. An important challenge in 
 topological data analysis is to find robust ways of computing and analysin
 g persistent generators\, and to match significant topological signals acr
 oss distinct systems. In this talk\, I will present some recent work deali
 ng with these problems. Our method is based on an interpretation of persis
 tent homology summaries with network theoretical tools\, combined with sta
 tistical and optimal transport techniques.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kyu-Hwan Lee (Connecticut)
DTSTART:20231206T150000Z
DTEND:20231206T160000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/30
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/30/">Data-scientific study of Kronecker coefficients</a>\nby Kyu-Hwan L
 ee (Connecticut) as part of Machine Learning Seminar\n\n\nAbstract\nThe Kr
 onecker coefficients are the decomposition multiplicities of the tensor pr
 oduct of two irreducible representations of the symmetric group. Unlike th
 e Littlewood--Richardson coefficients\, which are the analogues for the ge
 neral linear group\, there is no known combinatorial description of the Kr
 onecker coefficients\, and it is an NP-hard problem to decide whether a gi
 ven Kronecker coefficient is zero or not. In this talk\, we take a data-sc
 ientific approach to study whether Kronecker coefficients are zero or not.
  We show that standard machine-learning classifiers may be trained to pred
 ict whether a given Kronecker coefficient is zero or not. Motivated by pri
 ncipal component analysis and kernel methods\, we also define loadings of 
 partitions and use them to describe a sufficient condition for Kronecker c
 oefficients to be nonzero.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shailesh Lal (BIMSA)
DTSTART:20240320T100000Z
DTEND:20240320T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/31
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/31/">Neural Network solvers for the Yang-Baxter Equation</a>\nby Shaile
 sh Lal (BIMSA) as part of Machine Learning Seminar\n\n\nAbstract\nWe devel
 op a novel neural network architecture that learns solutions to the Yang-
 Baxter equation for R-matrices of difference form. This method already en
 ables us to learn all solution classes of the 2d Yang-Baxter equation. We
  propose and test paradigms for exploring the landscape of the Yang-Baxte
 r equation solution space aided by these methods. Further\, we shall also
  comment on the application of these methods to generating new solutions
  of the Yang-Baxter equation. The talk is based on joint work with Suvaji
 t Majumder and Evgeny Sobko\, available in part in arXiv:2304.07247.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniele Angella (Università di Firenze)
DTSTART:20240313T100000Z
DTEND:20240313T110000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/32
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/32/">Constructing and Machine Learning Calabi-Yau Five-Folds</a>\nby Da
 niele Angella (Università di Firenze) as part of Machine Learning Seminar
 \n\n\nAbstract\nThe significance of Calabi-Yau manifolds transcends both C
 omplex Geometry and String Theory. One possible approach to constructing C
 alabi-Yau manifolds involves intersecting hypersurfaces within the product
  of projective spaces\, defined by polynomials of a specific degree. We sh
 ow a method to construct all possible complete intersection Calabi-Yau fi
 ve-folds within a product of four or fewer complex projective spaces\, w
 ith up to four constraints. This results in a comprehensive set of 27\,068
  distinct spaces. For approximately half of these constructions\, excludin
 g the product spaces\, we can compute the cohomological data\, yielding 2\
 ,375 distinct Hodge diamonds. We present distributions of the invariants a
 nd engage in a comparative analysis with their lower-dimensional counterpa
 rts. Supervised machine learning techniques are applied to the cohomologic
 al data. The Hodge number $h^{1\,1}$ can be learnt with high efficiency\; 
 however\, accuracy diminishes for other Hodge numbers due to the extensive
 ranges of potential values. The talk is based on joint work with Rashid
  Alawadhi\, Andrea Leonardo\, and Tancredi Schettini Gherardini.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ellie Heyes (City\, University of London)
DTSTART:20240417T090000Z
DTEND:20240417T100000Z
DTSTAMP:20260404T094319Z
UID:CompAlg/33
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CompA
 lg/33/">Generating Calabi–Yau Manifolds with Machine Learning</a>\nby El
 lie Heyes (City\, University of London) as part of Machine Learning Semina
 r\n\n\nAbstract\nCalabi–Yau n-folds can be obtained as hypersurfaces in 
 toric varieties built from (n+1)-dimensional reflexive polytopes. Calabi
 –Yau 3-folds are of particular interest in string theory as they reduce 
 10-dimensional superstring theory to 4-dimensional quantum field theories 
 with N=1 supersymmetry. We generate Calabi–Yau 3-folds by generating 4-d
 imensional reflexive polytopes and their triangulations using genetic algo
 rithms and reinforcement learning\, respectively. We show how\, by modifying
  the fitness function of the genetic algorithm\, one can generate Calabi
 –Yau manifolds with specific properties that give rise to certain string
  models of particular interest.\n
LOCATION:https://stable.researchseminars.org/talk/CompAlg/33/
END:VEVENT
END:VCALENDAR
