BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Taco Cohen (Qualcomm AI Research)
DTSTART:20200506T160000Z
DTEND:20200506T173000Z
DTSTAMP:20260404T110744Z
UID:PhysicsMeetsML/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Physi
 csMeetsML/1/">Natural Graph Networks</a>\nby Taco Cohen (Qualcomm AI Resea
 rch) as part of Physics ∩ ML\n\n\nAbstract\nMessage passing algorithms a
 re the core of most neural networks that process information on graphs. Co
 nventionally\, such methods are invariant under permutation of the message
 s and hence forget how the information flows through the network. Analyzin
 g the local symmetries of the graph\, we show that a more general message 
 passing network can in fact be sensitive to the flow of information by using
  different kernels on different edges. This leads to an equivariant message
  passing algorithm that is more expressive than conventional invariant mes
 sage passing\, overcoming fundamental limitations of the latter. We derive
  the weight sharing and kernel constraints by modelling the symmetries usi
 ng elementary category theory and show that equivariant kernels are “jus
 t” natural transformations between two functors. This general formulatio
 n\, which we call Natural Networks\, gives a unified theory to model many 
 distinct forms of equivariant neural networks.\n
LOCATION:https://stable.researchseminars.org/talk/PhysicsMeetsML/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Phiala Shanahan (MIT)
DTSTART:20200520T160000Z
DTEND:20200520T173000Z
DTSTAMP:20260404T110744Z
UID:PhysicsMeetsML/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Physi
 csMeetsML/2/">Building symmetries into generative flow models</a>\nby Phia
 la Shanahan (MIT) as part of Physics ∩ ML\n\n\nAbstract\nI will discuss 
 recent work to incorporate symmetries\, in particular gauge symmetries (lo
 cal symmetry transformations that form Lie groups)\, into generative flow 
 models. This work is motivated by applications of generative models for
  physics simulation\, in particular for lattice field theory.\n
LOCATION:https://stable.researchseminars.org/talk/PhysicsMeetsML/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ard Louis (Oxford)
DTSTART:20200603T160000Z
DTEND:20200603T173000Z
DTSTAMP:20260404T110744Z
UID:PhysicsMeetsML/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Physi
 csMeetsML/3/">Why do neural networks generalise in the overparameterised r
 egime?</a>\nby Ard Louis (Oxford) as part of Physics ∩ ML\n\n\nAbstract\
 nOne of the most surprising properties of deep neural networks (DNNs) is t
 hat they typically perform best in the overparameterised regime. Physicist
 s are taught from a young age that having more parameters than datapoints 
 is a terrible idea. This intuition can be formalised in standard learning 
 theory approaches\, based for example on model capacity\, which also predi
 ct that DNNs should heavily over-fit in this regime\, and therefore not ge
 neralise at all. So why do DNNs work so well? We use a version of the codi
 ng theorem from Algorithmic Information Theory to argue that DNNs are gene
 rically biased towards simple solutions. Such an inbuilt Occam’s razor m
 eans that they are biased towards solutions that typically generalise well
 . We further explore the interplay between this simplicity bias and the er
 ror spectrum on a dataset to develop a detailed Bayesian theory of trainin
 g and generalisation that explains why and when SGD-trained DNNs generalis
 e\, and when they should not. This picture also allows us to derive tight 
 PAC-Bayes bounds that closely track DNN learning curves and can be used to
  rationalise differences in performance across architectures. Finally\, we
  will discuss some deep analogies between the way DNNs explore function sp
 ace\, and biases in the arrival of variation that explain certain trends o
 bserved in biological evolution.\n
LOCATION:https://stable.researchseminars.org/talk/PhysicsMeetsML/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Koji Hashimoto (Osaka University)
DTSTART:20200617T160000Z
DTEND:20200617T173000Z
DTSTAMP:20260404T110744Z
UID:PhysicsMeetsML/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/Physi
 csMeetsML/4/">Deep learning and quantum gravity</a>\nby Koji Hashimoto (Os
 aka University) as part of Physics ∩ ML\n\n\nAbstract\nFormulating quant
 um gravity is one of the final goals of fundamental physics. Recent progre
 ss in string theory brought a concrete formulation called the AdS/CFT
  correspondence\, in which a gravitational spacetime emerges from
  lower-dimensional non-gravitational quantum systems\, but we still lack
  an understanding of how the correspondence works. I discuss similarities
  between quantum gravity and deep learning architectures by regarding the
  neural network as a discretized spacetime. In particular\, questions such
  as when\, why\, and how a neural network can be a space or a spacetime
  may lead to a novel way to look at machine learning. I concretely
  implement the AdS/CFT framework in a deep learning architecture and show
  the emergence of a curved spacetime as a neural network\, from given
  training data of quantum systems.\n
LOCATION:https://stable.researchseminars.org/talk/PhysicsMeetsML/4/
END:VEVENT
END:VCALENDAR
