BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Juergen Jost (MPI MIS)
DTSTART:20211025T140000Z
DTEND:20211025T144500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/1
DESCRIPTION:by Juergen Jost (MPI MIS) as part of CMO-Bound-Geometry & Lear
 ning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anna Seigal (Harvard University)
DTSTART:20211025T150000Z
DTEND:20211025T154500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/2/">Groups and symmetries in Gaussian graphical models</a>\nby Anna
  Seigal (Harvard University) as part of CMO-Bound-Geometry & Learning from
  Data\n\n\nAbstract\nWe can use groups and symmetries to define new statis
 tical models\, and to investigate them. In this talk\, I will discuss two 
 families of multivariate Gaussian models:\n1. RDAG models: graphical model
 s on directed graphs with coloured vertices and edges\,\n2. Gaussian group
  models: multivariate Gaussian models that are parametrised by a group.\nI
  will focus on maximum likelihood estimation\, an optimisation problem to 
 obtain parameters in the model that best fit observed data. For RDAG model
 s and Gaussian group models\, the existence of the maximum likelihood esti
 mate relates to linear algebra conditions and to stability notions from in
 variant theory. This talk is based on joint work with Carlos Améndola\, K
 athlén Kohn\, Visu Makam\, and Philipp Reichenbach.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shantanu Joshi (UCLA)
DTSTART:20211025T160000Z
DTEND:20211025T164500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/3
DESCRIPTION:by Shantanu Joshi (UCLA) as part of CMO-Bound-Geometry & Learn
 ing from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nancy Arana-Daniel (Universidad de Guadalajara)
DTSTART:20211025T180000Z
DTEND:20211025T184500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/4
DESCRIPTION:by Nancy Arana-Daniel (Universidad de Guadalajara) as part of 
 CMO-Bound-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Benjamin Sanchez-Lengeling (Google Research)
DTSTART:20211025T190000Z
DTEND:20211025T194500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/5
DESCRIPTION:by Benjamin Sanchez-Lengeling (Google Research) as part of CMO
 -Bound-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sophie Achard (CNRS University of Grenoble)
DTSTART:20211026T140000Z
DTEND:20211026T144500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/6/">Learning from brain data</a>\nby Sophie Achard (CNRS University
  of Grenoble) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbst
 ract\nNoninvasive neuroimaging of the brain while functioning is providing
 \nvery promising data sets to study the complex organisation of brain\nare
 as. It is not only possible to identify responses of brain areas to a\ncog
 nitive stimulus but also to model the interactions between brain\nareas.
  The human brain can be modelled as a network or graph where\nbrain area
 s are nodes of the graph and interactions of pairs are the\nedges of the
  graph. The brain connectivity network is small-world with\na combinatio
 n of segregation and integration characteristics. In this\ntalk\, I will
  presen
 t recent advances to understand and compare brain\ndata using learning app
 roaches. A particular focus on the reliability of\nthe methods will be giv
 en. Finally\, examples on various pathologies will\nhighlight the possible
  alterations and resilience of the brain network.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nihat Ay (TUHH)
DTSTART:20211026T150000Z
DTEND:20211026T154500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/7
DESCRIPTION:by Nihat Ay (TUHH) as part of CMO-Bound-Geometry & Learning fr
 om Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pratik Chaudhari (University of Pennsylvania)
DTSTART:20211026T160000Z
DTEND:20211026T164500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/8/">(Towards the) Foundations of Small Data</a>\nby Pratik Chaudhar
 i (University of Pennsylvania) as part of CMO-Bound-Geometry & Learning fr
 om Data\n\n\nAbstract\nThe relevant limit for machine learning is not N 
 → infinity but\ninstead N → 0. The human visual system is proof that i
 t is possible to\nlearn categories with extremely few samples. This talk w
 ill discuss\nsteps towards building such systems and it is structured in t
 hree\nparts. The first part will discuss algorithms to adapt representatio
 ns\nof deep networks to new categories with few labeled data. The second\n
 part will discuss when such adaptation works well and while doing so\,\nit
  will develop a method to compute the information-theoretically\noptimal d
 istance between two learning tasks. The third part will\ndiscuss tools to 
 learn tasks that are "far away" from each other and\nwill point to new met
 hods for multi-task and continual learning.\n\nThis talk will discuss resu
 lts from the following papers.\n1. An Information-Geometric Distance on th
 e Space of Tasks. Yansong\nGao\, Pratik Chaudhari. ICML 2021. https://arxi
 v.org/abs/2011.00613.\nCode: https://github.com/Yansongga/An-Information-G
 eometric-Distance-on-the-Space-of-Tasks\n2. Boosting a Model Zoo for Multi
 -Task and Continual Learning. Rahul\nRamesh\, Pratik Chaudhari. https://ar
 xiv.org/abs/2106.03027. Code:\nhttps://github.com/rahul13ramesh/MultitTask
 _ModelZoo\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ruriko Yoshida (Naval Postgraduate School)
DTSTART:20211026T180000Z
DTEND:20211026T184500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/9/">Tree Topologies along a Tropical Line Segment</a>\nby Ruriko
  Yoshida (Naval Postgraduate School) as part of CMO-Bound-Geometry & Lea
 rning from Data\n\n\nAbstract\nTropical geometry with the max-plus algebra
  has been applied to statistical learning models over tree spaces because 
 geometry with the tropical metric over tree spaces has some nice propertie
 s such as convexity in terms of the tropical metric.  One of the challenge
 s in applications of tropical geometry to tree spaces is the difficulty in
 terpreting outcomes of statistical models with the tropical metric.  We fo
 cus on combinatorics of tree topologies along a tropical line segment\, an
  intrinsic geodesic with the tropical metric\, between two phylogenetic tr
 ees over the tree space and we show some properties of a tropical line seg
 ment between two trees.  Specifically\, we show that the probability of
  a tropical line segment of two randomly chosen trees going through the
  origin (the star tree) is zero if the number of leaves is greater than
  four\, and we also show that if two given trees differ by only one nea
 rest neighbor interchange (NNI) move\, then the tree topology of a tree
  in the tropical line s
 egment between them is the same tree topology of one of these given two tr
 ees with possibly zero branch lengths.  This is joint work with Shelby Co
 x.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jun Zhang (University of Michigan)
DTSTART:20211026T190000Z
DTEND:20211026T194500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/10/">Information Geometry: A Tutorial</a>\nby Jun Zhang (University
  of Michigan) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbs
 tract\nInformation Geometry is the differential geometric study of the set
  of all probability distributions on a given sample space\, modeled as a d
 ifferentiable manifold where each point represents one probability distrib
 ution with its parameter serving as local coordinates. Such a manifold i
 s eq
 uipped with a natural Riemannian metric (Fisher-Rao metric) and a family o
 f affine connections (alpha-connections) that define parallel transport of
  score functions as tangent vectors. Starting from the motivating example 
 of the family of univariate normal distributions on a continuous support
  and of the probability simplex as a family on discrete support\, I will
  expl
 ain how divergence functions (or contrast functions) measuring directed di
 stance on a manifold\, e.g.\, Kullback-Leibler divergence\, Bregman diverg
 ence\, f-divergence\, etc. are tied to Legendre duality and convex analysi
 s\, and how they in turn generate the underlying dualistic geometry of
  what is known as the “statistical manifold”. The case of maximum entr
 opy (or minimum divergence) inference will be highlighted\, since it is li
 nked to the exponential family and the dually-flat (Hessian) geometric str
 ucture\, the simplest and the most well-understood example of information 
 geometry. If time permits\, I will introduce new development including the
  state-of-the-art understanding of deformation models\, in which generaliz
 ed entropy (for instance\, Tsallis entropy\, Renyi entropy\, phi-entropy) 
 replaces Shannon entropy and deformed divergence replaces KL and Bregman d
 ivergences. Deformed exponential families reveal an “escort statistics
 ” and “gauge freedom” that are buried in the standard exponential fam
 ily. This tutorial attempts to give a gentle introduction to information g
 eometry to a non-geometric audience.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yalbi Itzel Balderas-Martinez (Instituto Nacional de Enfermedades 
 Respiratorias)
DTSTART:20211026T213000Z
DTEND:20211026T223000Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/11/">Panel: AI & Public Institutions</a>\nby Yalbi Itzel Balderas-M
 artinez (Instituto Nacional de Enfermedades Respiratorias) as part of CMO-
 Bound-Geometry & Learning from Data\n\n\nAbstract\nA conversation with pub
 lic actors and stakeholders\, with a focus on AI use cases in Mexican Pu
 blic Institutions (Government\, Science Planning\, and Healthcare). With
  Dr. Eduardo Ulises Moya\, Dra. Paola Villareal\, and Dra. Yalbi Itzel Bal
 deras Martinez.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Maks Ovsjanikov (LIX Ecole Polytechnique)
DTSTART:20211027T140000Z
DTEND:20211027T144500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/12/">Efficient learning on curved surfaces via diffusion</a>\nby Ma
 ks Ovsjanikov (LIX Ecole Polytechnique) as part of CMO-Bound-Geometry & Le
 arning from Data\n\n\nAbstract\nIn this talk I will describe several appro
 aches for learning on curved surfaces\, represented as point clouds or tri
 angle meshes. I will first give a brief overview of geodesic convolutional
  neural networks (GCNNs) and their variants and then present a recent appr
 oach that replaces this paradigm with an efficient framework that is based
  on diffusion. The key property of this approach is that it avoids poten
 tially error-prone and costly operations\, such as local patch discretiz
 ation\, replacing them with robust and efficient building blocks based o
 n learned diffusion and gradient computation. I will then show several a
 pplications\, ranging from RNA surface segmentation to non-rigid shape c
 orrespondence\, while highlighting the invariance of this technique to s
 ampling and triangle mesh structure.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xavier Pennec (Université Côte d'Azur and INRIA)
DTSTART:20211027T150000Z
DTEND:20211027T154500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/13/">Curvature effects in Geometric statistics : empirical Frechet 
 mean and parallel transport accuracy</a>\nby Xavier Pennec (Université C
 ôte d'Azur and INRIA) as part of CMO-Bound-Geometry & Learning from Data\
 n\n\nAbstract\nTwo fundamental tools for statistics on objects living in n
 on-linear manifolds are the Fréchet mean and the parallel transport. We p
 resent in this talk new results based on Gavrilov's tensorial series expan
 sions that allow us to quantify the accuracy of these two fundamental to
 ols and
  to put forward the impact of the manifold curvature.\n\nA central limit t
 heorem for the empirical Fréchet mean was established in Riemannian manif
 olds by Bhattacharya & Patrangenaru in 2005. We propose an asymptotic deve
 lopment valid in Riemannian and affine cases which better explains the ro
 le
  of the curvature in the concentration of the empirical Fréchet mean towa
 rds the population mean with a finite number of samples. We also establish
  a new non-asymptotic (small sample) expansion in high concentration condi
 tions which shows a statistical bias on the empirical mean in the directio
 n of the average gradient of the curvature. These curvature effects become
  important with large curvature and can drastically modify the estimation 
 of the mean. They could partly explain the phenomenon of sticky means rece
 ntly put into evidence in stratified spaces with negative curvature\, and 
 smeary means in positive curvature.\n\nParallel transport is a second majo
 r tool\, for instance to transport longitudinal deformation trajectories
  from each individual towards a template brain shape before performing g
 roup-wise statistics in longitudinal analyses. More generally\, parallel t
 ransport should be the natural geometric formulation for domain adaptation
  in machine learning in non-linear spaces. In previous works\, we have b
 uilt on the Schild's ladder principle to engineer a more symmetric discr
 ete parallel transport scheme based on iterated geodesic parallelogram
 s\, call
 ed pole ladder. This scheme is surprisingly exact in only one step on symm
 etric spaces\, which makes it quite interesting for many applications invo
 lving simple symmetric manifolds. For general manifolds\, Schild's and pol
 e ladders were thought to be of first order with respect to the number of 
 steps\, similarly to other schemes based on Jacobi fields. However\, the l
 iterature was lacking a real convergence performance analysis when the sch
 eme is iterated. We show that pole ladder naturally converges with quadrat
 ic speed\, and that Schild's ladder can be modified to perform identically
  even when geodesics are approximated by numerical schemes. This contrasts
  with Jacobi fields approximations that are bound to linear convergence. T
 he extra computational cost of ladder methods is thus easily compensated b
 y a drastic reduction of the number of steps needed to achieve the request
 ed accuracy.\n\n\n* Xavier Pennec. Curvature effects on the empirical mean
  in Riemannian and affine Manifolds: a non-asymptotic high concentration e
 xpansion in the small-sample regime. Note: Working paper or preprint\, Jun
 e 2019. arXiv:1906.07418\n* Nicolas Guigui and Xavier Pennec. Numerical
  Accuracy of Ladder Schemes for Parallel Transport on Manifolds. Founda
 tions of Computational Mathematics\, June 2021. arXiv:2007.07585\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chris Connell (Indiana University Bloomington)
DTSTART:20211027T160000Z
DTEND:20211027T164500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/14/">Tensor decomposition based network embedding algorithms for pr
 ediction tasks on dynamic networks.</a>\nby Chris Connell (Indiana Univers
 ity Bloomington) as part of CMO-Bound-Geometry & Learning from Data\n\n\nA
 bstract\nClassical network embeddings create a low dimensional representat
 ion of the learned relationships between features across nodes. Such embed
 dings are important for tasks such as link prediction and node classificat
 ion. We consider low dimensional embeddings of “dynamic networks” -- a
  family of time varying networks where there exist both temporal and spati
 al link relationships between nodes. We present novel embedding methods fo
 r a dynamic network based on higher order tensor decompositions for tensor
 ial representations of the dynamic network. Our embeddings are analogous t
 o certain classical spectral embedding methods for static networks. We dem
 onstrate the effectiveness of our approach by comparing our algorithms' pe
 rformance on the link prediction task against an array of current baseline
  methods across three distinct real-world dynamic networks. Finally\, we p
 rovide a mathematical rationale for this effectiveness in the regime of sm
 all incremental changes. This is joint work with Yang Wang.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nina Miolane (UC Santa Barbara)
DTSTART:20211027T180000Z
DTEND:20211027T184500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/15
DESCRIPTION:by Nina Miolane (UC Santa Barbara) as part of CMO-Bound-Geomet
 ry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Katy Craig (University of California Santa Barbara)
DTSTART:20211027T190000Z
DTEND:20211027T194500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/16/">A Blob Method for Diffusion and Applications to Sampling and T
 wo Layer Neural Networks</a>\nby Katy Craig (University of California Sant
 a Barbara) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbstrac
 t\nGiven a desired target distribution and an initial guess of that distri
 bution\, composed of finitely many samples\, what is the best way to evolv
 e the locations of the samples so that they accurately represent the desir
 ed distribution? A classical solution to this problem is to allow the samp
 les to evolve according to Langevin dynamics\, a stochastic particle metho
 d for the Fokker-Planck equation. In today’s talk\, I will contrast this
  classical approach with a deterministic particle method corresponding to 
 the porous medium equation. This method corresponds exactly to the mean-fi
 eld dynamics of training a two layer neural network for a radial basis fun
 ction activation function. We prove that\, as the number of samples increa
 ses and the variance of the radial basis function goes to zero\, the parti
 cle method converges to a bounded entropy solution of the porous medium eq
 uation. As a consequence\, we obtain both a novel method for sampling prob
 ability distributions as well as insight into the training dynamics of two
  layer neural networks in the mean field regime.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alex Cloninger (University of California San Diego)
DTSTART:20211028T140000Z
DTEND:20211028T144500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/17/">Learning with Optimal Transport</a>\nby Alex Cloninger (Univer
 sity of California San Diego) as part of CMO-Bound-Geometry & Learning fro
 m Data\n\n\nAbstract\nDiscriminating between distributions is an important
  problem in a number of scientific fields. This motivated the introduction
  of Linear Optimal Transportation (LOT)\, which has a number of benefits w
 hen it comes to speed of computation and to determining classification bou
 ndaries. We characterize a number of settings in which the LOT embeds fami
 lies of distributions into a space in which they are linearly separable. T
 his is true in arbitrary dimensions\, and for families of distributions ge
 nerated through a variety of actions on a fixed distribution.  We also est
 ablish results on discrete spaces using Entropically Regularized Optimal T
 ransport\, and establish results about active learning with a small number
  of labels in the space of LOT embeddings.  This is joint work with Caroli
 ne Moosmueller (UCSD).\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ron Kimmel (Technion-Israel Institute of Technology)
DTSTART:20211028T150000Z
DTEND:20211028T154500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/18/">On Geometry and Learning</a>\nby Ron Kimmel (Technion-Israel I
 nstitute of Technology) as part of CMO-Bound-Geometry & Learning from Data
 \n\n\nAbstract\nGeometry means understanding in the sense that it involves
  finding the most basic invariants or Ockham’s razor explanation for a g
 iven phenomenon. At the other end\, modern Machine Learning has little to 
 do with explanation or interpretation of solutions to a given problem.\nI
 ’ll try to give some examples about the relation between learning and ge
 ometry\, focusing on learning geometry\, starting with the most basic noti
 on of planar shape invariants\, efficient distance computation on surfaces
 \, and treating surfaces as metric spaces within a deep learning framewor
 k. I will introduce some links between these two seemingly orthogonal phil
 osophical directions.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Alvarez Melis (Microsoft Research\, MIT\, Harvard)
DTSTART:20211028T180000Z
DTEND:20211028T184500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/19/">Principled Data Manipulation with Optimal Transport</a>\nby Da
 vid Alvarez Melis (Microsoft Research\, MIT\, Harvard) as part of CMO-Boun
 d-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Elizabeth Gross (University of Hawaii at Manoa)
DTSTART:20211028T190000Z
DTEND:20211028T194500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/20/">Learning phylogenetic networks using invariants</a>\nby Elizab
 eth Gross (University of Hawaii at Manoa) as part of CMO-Bound-Geometry & 
 Learning from Data\n\n\nAbstract\nPhylogenetic networks provide a means of
  describing the evolutionary history of sets of species believed to have u
 ndergone hybridization or gene flow during the course of their evolution. 
 The mutation process for a set of such species can be modeled as a Markov 
 process on a phylogenetic network. Previous work has shown that site-pat
 tern probability distributions from a Jukes-Cantor phylogenetic network mo
 del must satisfy certain algebraic invariants. As a corollary\, aspects of
  the phylogenetic network are theoretically identifiable from site-pattern
  frequencies. In practice\, because of the probabilistic nature of sequenc
 e evolution\, the phylogenetic network invariants will rarely be satisfied
 \, even for data generated under the model. Thus\, using network invariant
 s for inferring phylogenetic networks requires some means of interpreting 
 the residuals\, or deviations from zero\, when observed site-pattern frequ
 encies are substituted into the invariants. In this work\, we propose a ma
 chine learning algorithm utilizing invariants to infer small\, level-one p
 hylogenetic networks. Given a data set\, the algorithm is trained on model
  data to learn the patterns of residuals corresponding to different networ
 k structures to classify the network that produced the data.  This is join
 t work with Travis Barton\, Colby Long\, and Joseph Rusinko.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Eliza O'Reilly (Caltech)
DTSTART:20211028T204500Z
DTEND:20211028T213000Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/21/">Random Tessellation Features and Forests</a>\nby Eliza O'Reill
 y (Caltech) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbstra
 ct\nThe Mondrian process in machine learning is a recursive partition of s
 pace with random axis-aligned cuts used to build random forests and Laplac
 e kernel approximations.  The construction allows for efficient online alg
 orithms\, but the restriction to axis-aligned cuts does not capture depend
 encies between features. By viewing the Mondrian as a special case of the 
 stable under iteration (STIT) process in stochastic geometry\, we resolv
 e o
 pen questions about the generalization of cut directions. We utilize the t
 heory of stationary random tessellations to show that STIT processes appro
 ximate a large class of stationary kernels and STIT forests achieve minima
 x rates for Lipschitz functions (forests and trees) and C^2 functions (for
 ests only). This work opens many new questions at the novel intersection o
 f stochastic geometry and machine learning. Based on joint work with Ngoc 
 Tran.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ilke Demir (Intel)
DTSTART:20211028T213000Z
DTEND:20211028T221500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5239/22/">Panel: AI & Industry</a>\nby Ilke Demir (Intel) as part of CM
 O-Bound-Geometry & Learning from Data\n\n\nAbstract\nA conversation with s
 everal actors and researchers about their roles in AI & Industry\, with Il
 ke Demir (Intel)\, Juan Carlos Catana (HP Labs Mx) and David Alvarez Melis
  (Microsoft Research).\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nina Otter (Queen Mary University London)
DTSTART:20211029T150000Z
DTEND:20211029T154500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/23
DESCRIPTION:by Nina Otter (Queen Mary University London) as part of CMO-Bo
 und-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Facundo Memoli (The Ohio State University)
DTSTART:20211029T160000Z
DTEND:20211029T164500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/24
DESCRIPTION:by Facundo Memoli (The Ohio State University) as part of CMO-B
 ound-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Soledad Villar (Johns Hopkins University)
DTSTART:20211029T180000Z
DTEND:20211029T184500Z
DTSTAMP:20260404T041640Z
UID:CMO-21w5239/25
DESCRIPTION:by Soledad Villar (Johns Hopkins University) as part of CMO-Bo
 und-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5239/25/
END:VEVENT
END:VCALENDAR
