BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Jinglai Li (University of Birmingham)
DTSTART:20200707T120000Z
DTEND:20200707T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /1/">Maximum conditional entropy Hamiltonian Monte Carlo sampler</a>\nby J
 inglai Li (University of Birmingham) as part of Data Science and Computati
 onal Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jinming Duan (University of Birmingham)
DTSTART:20200714T130000Z
DTEND:20200714T140000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /2/">Cardiac Magnetic Resonance Image Segmentation with Anatomical Knowled
 ge</a>\nby Jinming Duan (University of Birmingham) as part of Data Science
  and Computational Statistics Seminar\n\n\nAbstract\nThis talk focuses on 
 segmentation of cardiac magnetic resonance (CMR) images from both healthy 
 and pathological subjects. Specifically\, we will propose three different 
 approaches that explicitly consider geometry (anatomy) information of the 
 heart.\n\nFirst\, we introduce a novel deep level set method\, which expli
 citly considers the image features learned from a deep neural network. To 
 this end\, we estimate joint probability maps over both region and edge lo
 cations in CMR images using a fully convolutional network. Due to the dist
 inct morphology of pulmonary hypertension (PH) hearts\, these probability 
 maps can then be incorporated in a single nested level set optimisation fr
 amework to achieve multi-region segmentation with high efficiency. We show
  results on CMR cine images and demonstrate that the proposed method leads
  to substantial improvements for CMR image segmentation in PH patients.\n\
 nSecond\, we propose a multi-task deep learning approach with atlas propag
 ation to develop a shape-refined bi-ventricular segmentation pipeline for 
 short-axis CMR volumetric images. The pipeline combines the computational 
 advantage of 2.5D FCNs and the capability of addressing 3D spatia
 l consistency without compromising segmentation accuracy. A refinement ste
 p is introduced for overcoming image artefacts (e.g.\, due to different br
 eath-hold positions and large slice thickness)\, which preclude the creati
 on of anatomically meaningful 3D cardiac shapes. Extensive numerical exper
 iments on two large datasets show that our method is robust and capabl
 e of producing accurate\, high-resolution\, and anatomically smooth bi-ven
 tricular 3D models\, despite the presence of artefacts in input CMR volume
 s.\n\nLastly\, accelerating the CMR acquisition is essential. However\, re
 constructing high-quality images from accelerated CMR acquisition is a non
 trivial problem. As such\, I will show how deep neural networks can be dev
 eloped to bypass the usual image reconstruction stage. The method applies 
 shape prior knowledge through an auto-encoder. Exploiting this prior kn
 owledge\, we improve both the CMR acquisition time and segmentation acc
 uracy.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wei Zhang (Zuse Institute Berlin)
DTSTART:20200721T120000Z
DTEND:20200721T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /3/">Recent developments of Monte Carlo sampling strategies for probabilit
 y distributions on submanifolds</a>\nby Wei Zhang (Zuse Institute Berlin) 
 as part of Data Science and Computational Statistics Seminar\n\n\nAbstract
 \nMonte Carlo sampling for probability distributions on submanifolds is in
 volved in many applications in molecular dynamics\, statistical mechanics 
 and Bayesian computation. In this talk\, I will discuss two types of Mon
 te Carlo schemes developed in recent years. The first type is based on t
 he ergodicity of stochastic differential equations (SDEs) on submanifold
 s and is asymptotically unbiased as the step-size vanishes. The second t
 ype consists of Markov chain Monte Carlo (MCMC) algorithms that are unbi
 ased even when finite step-sizes are used. I will discuss the role of pr
 ojections onto submanifolds\, as well as the necessity of the so-called
  "reversibility check" step in MCMC schemes on submanifolds\, which was
  first pointed out by Goodman\, Holmes-Cerfon and Zappa. During t
 he talk\, I will illustrate both types of schemes with some numerical exam
 ples.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Long Tran-Thanh (University of Warwick)
DTSTART:20200728T120000Z
DTEND:20200728T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /4/">On COPs\, Bandits\, and AI for Good</a>\nby Long Tran-Thanh (Universi
 ty of Warwick) as part of Data Science and Computational Statistics Semina
 r\n\n\nAbstract\nIf you have a question about this talk\, please contact H
 ong Duong.\n\nIn recent years there has been increasing interest in
  applying techniques from artificial intelligence (AI) to tackle societal 
 and environmental challenges\, ranging from climate change and natural dis
 asters\, to food safety and disease spread. These efforts are typically kn
 own under the name AI for Good. While much research in this area has foc
 used on designing machine learning algorithms to learn new insights or p
 redict future events from previously collected data\, there is anothe
 r domain where AI has been found useful\, namely resource allocatio
 n and decision making. In particular\, a key step in addressing societal/e
 nvironmental challenges is to efficiently allocate a set of scarce resourc
 es to mitigate the problem(s). For example\, in the case of wildfire\, a d
 ecision maker has to adaptively and sequentially allocate a limited number
  of firefighting units to stop the spread of the fire as soon as possible.
  Another example comes from the problem of housing management for people i
 n need\, where a limited number of housing units have to be allocated to a
 pplicants in an online manner over time.\n\nWhile sequential resource al
 location can often be cast as (online) combinatorial optimisation probl
 ems (COPs)\, these problems can differ from standard COPs when the deci
 sion maker h
 as to perform under uncertainty (e.g.\, the value of the action is not kno
 wn in advance\, or future events are unknown at the decision making stage)
 . In the presence of such uncertainty\, a popular tool from the decision m
 aking literature\, called multi-armed bandits\, comes in handy. In this ta
 lk\, I will demonstrate how to efficiently combine COPs with bandit models
  to tackle some AI for Good problems. In particular\, I first show how to 
 combine knapsack models with combinatorial bandits to efficiently allocate
  firefighting units and drones to mitigate wildfires. In the second part o
 f the talk\, I will demonstrate how interval scheduling\, paired up with b
 locking bandits\, can be a useful approach as a housing assignment method 
 for people in need.\n\nShort bio of the speaker:\n\nLong is a Hungarian-Vi
 etnamese computer scientist at the University of Warwick\, UK\, where he i
 s currently an Associate Professor. He obtained his PhD in Computer Scienc
 e from Southampton in 2012\, under the supervision of Nick Jennings and Al
 ex Rogers. Long has been doing active research in a number of key areas of
  Artificial Intelligence and multi-agent systems\, mainly focusing on mult
 i-armed bandits\, game theory\, and incentive engineering\, and their appl
 ications to crowdsourcing\, human-agent learning\, and AI for Good. He has
  published more than 60 papers at top AI conferences (AAAI\, AAMAS\,
  ECAI\, IJCAI\, NeurIPS\, UAI) and journals (JAAMAS\, AIJ)\, and has receiv
 ed a number of national/international awards\, such as:\n\n(i) BCS/CPHC
  Best Computer Science PhD Dissertation Award (2012/13) – Honourable M
 ention\; (ii) ECCAI/EurAI Best Artificial Intelligence Dissertation Awa
 rd (2012/13) – Honourable Mention\; (iii) AAAI Outstanding Paper Award
  (2012) – Honourable Mention (out of more than 1000 submissions)\; (iv
 ) ECAI Best Student Paper Award (2012) – Runner-Up (out of more than 6
 00 submissions)\; and (v) IJCAI 2019 Early Career Spotlight Talk – inv
 ited\n\nLong curre
 ntly serves as a board member (2018-2024) of the IFAAMAS Board of Direc
 tors\,
  the main international governing body of the International Federation for
  Autonomous Agents and Multiagent Systems\, a major sub-field of the AI co
 mmunity. He is also the local chair of the AAMAS 2021 conference\, which w
 ill be held in London\, UK.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xin Tong (National University of Singapore)
DTSTART:20200804T130000Z
DTEND:20200804T140000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /5/">Can algorithms collaborate? The replica exchange method</a>\nby Xin T
 ong (National University of Singapore) as part of Data Science and Computa
 tional Statistics Seminar\n\n\nAbstract\nGradient descent (GD) is known to
  converge quickly for convex objective functions\, but it can be trapped a
 t local minima. On the other hand\, Langevin dynamics (LD) can explore the
  state space and find global minima\, but in order to give accurate estima
 tes\, LD needs to run with a small discretization step size and weak stoch
 astic force\, which in general slow down its convergence. This paper shows
  that these two algorithms can "collaborate" through a simple exchange m
 echanism\, in which they swap their current positions if LD yields a lowe
 r objective function. This idea can be seen as the singular limit of the r
 eplica-exchange technique from the sampling literature. We show that this 
 new algorithm converges to the global minimum linearly with high probabili
 ty\, assuming the objective function is strongly convex in a neighborhood 
 of the unique global minimum. By replacing gradients with stochastic gradi
 ents\, and adding a proper threshold to the exchange mechanism\, our algor
 ithm can also be used in online settings. We further verify our theoretica
 l results through some numerical experiments\, and observe superior perfor
 mance of the proposed algorithm over running GD or LD alone.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matthias Sachs (Duke University)
DTSTART:20200811T120000Z
DTEND:20200811T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /6/">Non-reversible Markov chain Monte Carlo for sampling of districting m
 aps</a>\nby Matthias Sachs (Duke University) as part of Data Science and C
 omputational Statistics Seminar\n\n\nAbstract\nFollowing the 2010 censu
 s\, excessive gerrymandering (i.e.\, the design of electoral districting m
 aps in such a way that outcomes are tilted in favor of a certain political
  power/party) has become an increasingly prevalent practice in several US 
 states. Recent approaches to quantify the degree of such partisan district
 ing use a random ensemble of districting plans which are drawn from a pres
 cribed probability distribution that adheres to certain non-partisan crite
 ria. In this talk I will discuss the construction of non-reversible Markov
  chain Monte-Carlo (MCMC) methods for sampling of such districting plans a
 s instances of what we term the Mixed skewed Metropolis-Hastings algorithm
  (MSMH)—a novel construction of non-reversible Markov chains which relie
 s on a generalization of what is commonly known as skew detailed balance.\
 n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yunwen Lei (University of Kaiserslautern)
DTSTART:20200818T120000Z
DTEND:20200818T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /7/">Statistical Learning by Stochastic Gradient Descent</a>\nby Yunwen Le
 i (University of Kaiserslautern) as part of Data Science and Computational
  Statistics Seminar\n\n\nAbstract\nStochastic gradient descent (SGD) has b
 ecome the workhorse behind many machine learning problems. Optimization an
 d estimation errors are two contradictory factors responsible for the pred
 iction behavior of SGD. In this talk\, we report our generalization analy
 sis of SGD by considering simultaneously the optimization and estimation e
 rrors. We remove some restrictive assumptions in the literature and signif
 icantly improve the existing generalization bounds. Our results help to un
 derstand how to stop SGD early to achieve the best generalization perfo
 rmance.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrew Duncan (Imperial College London)
DTSTART:20200825T120000Z
DTEND:20200825T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /8/">On the geometry of Stein variational gradient descent</a>\nby Andrew 
 Duncan (Imperial College London) as part of Data Science and Computational
  Statistics Seminar\n\n\nAbstract\nBayesian inference problems require sam
 pling or approximating high-dimensional probability distributions. The foc
 us of this talk is on the recently introduced Stein variational gradient d
 escent methodology\, a class of algorithms that rely on iterated steepest 
 descent steps with respect to a reproducing kernel Hilbert space norm. Thi
 s construction leads to interacting particle systems\, the mean-field limi
 t of which is a gradient flow on the space of probability distributions eq
 uipped with a certain geometrical structure. We leverage this viewpoint to
  shed some light on the convergence properties of the algorithm\, in parti
 cular addressing the problem of choosing a suitable positive definite kern
 el function. Our analysis leads us to considering certain singular kernels
  with adjusted tails. This is joint work with N. Nusken (U. of Potsdam) an
 d L. Szpruch (U. Edinburgh).\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nikolas Nüsken (University of Potsdam)
DTSTART:20200901T120000Z
DTEND:20200901T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /9/">Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural ne
 tworks: perspectives from the theory of controlled diffusions and measures
  on path space</a>\nby Nikolas Nüsken (University of Potsdam) as part o
 f Data Science and Computational Statistics Seminar\n\n\nAbstract\nThe first
  part of this presentation will review connections between problems in the
  optimal control of diffusion processes\, Hamilton-Jacobi-Bellman equation
 s and forward-backward SDEs\, having in mind applications in rare event si
 mulation and stochastic filtering. The second part will explain a recent a
 pproach based on divergences between probability measures on path space an
 d variational inference that can be used to construct appropriate loss fun
 ctions in a machine learning framework. This is joint work with Lorenz Ric
 hter.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Anh Han (Teesside University)
DTSTART:20201027T150000Z
DTEND:20201027T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /10/">To Regulate or Not: A Social Dynamics Analysis of an Idealised Artif
 icial Intelligence Race</a>\nby The Anh Han (Teesside University) as part 
 of Data Science and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panayiota Touloupou (University of Birmingham)
DTSTART:20201110T150000Z
DTEND:20201110T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /11/">Scalable inference for epidemic models with individual level data.</
 a>\nby Panayiota Touloupou (University of Birmingham) as part of Data Scie
 nce and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wil Ward (University of Sheffield)
DTSTART:20201124T150000Z
DTEND:20201124T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /12/">Gaussian process techniques for non-linear multidimensional dynami
 cal systems</a>\nby Wil Ward (University of Sheffield) as part of Data Sci
 ence and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dennis Sun (California Polytechnic State University)
DTSTART:20201208T150000Z
DTEND:20201208T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/13
DESCRIPTION:by Dennis Sun (California Polytechnic State University) as par
 t of Data Science and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boumediene Hamzi (Imperial College London)
DTSTART:20201217T150000Z
DTEND:20201217T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /14/">Machine Learning and Dynamical Systems meet in Reproducing Kernel Hi
 lbert Spaces</a>\nby Boumediene Hamzi (Imperial College London) as part of
  Data Science and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lequan Yu (Stanford University)
DTSTART:20210202T160000Z
DTEND:20210202T170000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /15/">Medical Image Analysis with Data-efficient Learning</a>\nby Lequan Y
 u (Stanford University) as part of Data Science and Computational Statisti
 cs Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Sanz-Alonso (University of Chicago)
DTSTART:20210216T150000Z
DTEND:20210216T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/16
DESCRIPTION:by Daniel Sanz-Alonso (University of Chicago) as part of Data 
 Science and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Manfred Opper (University of Birmingham)
DTSTART:20210302T150000Z
DTEND:20210302T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/17
DESCRIPTION:by Manfred Opper (University of Birmingham) as part of Data Sc
 ience and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Oanh Nguyen (University of Illinois at Urbana-Champaign)
DTSTART:20210316T150000Z
DTEND:20210316T160000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /18/">Roots of random functions</a>\nby Oanh Nguyen (University of Illinoi
 s at Urbana-Champaign) as part of Data Science and Computational Statistic
 s Seminar\n\n\nAbstract\nRandom functions are linear combinations of deter
 ministic functions using independent random coefficients. Several importan
 t examples are the Kac polynomial\, Weyl polynomial\, and random orthogona
 l polynomials. Random functions appear naturally in physics and approximat
 ion theory and remain mysterious despite decades of intensive research. We
  will present our approaches via the local universality method to study qu
 estions about the roots. As one of the applications\, we prove that the nu
 mber of real roots of a wide class of random polynomials satisfies the Cen
 tral Limit Theorem.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Franca Hoffmann (University of Bonn)
DTSTART:20210427T120000Z
DTEND:20210427T130000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /19/">Kalman-Wasserstein Gradient Flows</a>\nby Franca Hoffmann (Universit
 y of Bonn) as part of Data Science and Computational Statistics Seminar\n\
 n\nAbstract\nWe study a class of interacting particle systems that may be 
 used for optimization. By considering the mean-field limit one obtains a n
 onlinear Fokker-Planck equation. This equation exhibits a gradient structu
 re in probability space\, based on a modified Wasserstein distance which r
 eflects particle correlations: the Kalman-Wasserstein metric. This setting
  gives rise to a methodology for calibrating and quantifying uncertainty f
 or parameters appearing in complex computer models which are expensive to 
 run\, and cannot readily be differentiated. This is achieved by connecting
  the interacting particle system to ensemble Kalman methods for inverse pr
 oblems. This is joint work with Alfredo Garbuno-Inigo (Caltech)\, Wuchen L
 i (UCLA) and Andrew Stuart (Caltech).\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Furqan Aziz (University of Birmingham)
DTSTART:20210525T140000Z
DTEND:20210525T150000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /20/">Backtrackless walks on a graph</a>\nby Furqan Aziz (University of Bi
 rmingham) as part of Data Science and Computational Statistics Seminar\n\n
 \nAbstract\nThe aim of this talk is to explore the use and applications of
  backtrackless walks on a graph. We will discuss how the backtrackless wal
 ks and the coefficients of the reciprocal of the Ihara zeta function\, whi
 ch are related to the frequencies of prime cycles in the graph\, can be us
 ed to implement graph kernels. We will further present explicit methods fo
 r computing the eigensystem of the edge-based Laplacian of a graph. This r
 eveals a connection between the eigenfunctions of the edge-based Laplacian
  and both the classical random walk and the backtrackless random walk on a
  graph. The definition of the edge-based Laplacian allows us to define
  and implement more complex partial differential equations on graphs\,
  such as the s
 econd order wave equation.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aretha Teckentrup (University of Edinburgh)
DTSTART:20210608T140000Z
DTEND:20210608T150000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /21/">Convergence\, Robustness and Flexibility of Gaussian Process Regress
 ion</a>\nby Aretha Teckentrup (University of Edinburgh) as part of Data Sc
 ience and Computational Statistics Seminar\n\n\nAbstract\nWe are intereste
 d in the task of estimating an unknown function from a set of point evalua
 tions. In this context\, Gaussian process regression is often used as a Ba
 yesian inference procedure. However\, hyper-parameters appearing in the me
 an and covariance structure of the Gaussian process prior\, such as smooth
 ness of the function and typical length scales\, are often unknown and lea
 rnt from the data\, along with the posterior mean and covariance.\n\nIn th
 e first part of the talk\, we will study the robustness of Gaussian proces
 s regression with respect to mis-specification of the hyper-parameters\, a
 nd provide a convergence analysis of the method applied to a fixed\, unkno
 wn function of interest [1].\n\nIn the second part of the talk\, we discus
 s deep Gaussian processes as a class of flexible non-stationary prior dist
 ributions [2].\n\n[1] A.L. Teckentrup. Convergence of Gaussian process reg
 ression with estimated hyper-parameters and applications in Bayesian inver
 se problems. SIAM/ASA Journal on Uncertainty Quantification\, 8(4)\, p. 13
 10-1337\, 2020.\n\n[2] M.M. Dunlop\, M.A. Girolami\, A.M. Stuart\, A.L. Te
 ckentrup. How deep are deep Gaussian processes? Journal of Machine Learnin
 g Research\, 19(54)\, 1-46\, 2018.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yulong Lu (University of Massachusetts)
DTSTART:20210629T140000Z
DTEND:20210629T150000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /22/">A priori generalization error analysis of neural network methods for
  solving high dimensional elliptic PDEs</a>\nby Yulong Lu (University of M
 assachusetts) as part of Data Science and Computational Statistics Seminar
 \n\n\nAbstract\nNeural network-based machine learning methods\, including 
 most notably deep learning\, have achieved extraordinary successes in nu
 merous fields. Despite the rapid development of learning algorithms based 
 on neural networks\, their mathematical analysis is far from complete.
  In particular\, it remains a mystery why neural network-based machine
  learning methods work extremely well for solving high dimensional problem
 s.\n\nIn this talk\, we will demonstrate the power of neural network metho
 ds for solving high dimensional elliptic PDEs. Specifically\, we will disc
 uss an a priori generalization error analysis of the Deep Ritz Method for 
 solving two classes of high dimensional Schrödinger problems: the stati
 onary Schrödinger equation and the ground state of the Schrödinger oper
 ator. Assuming the exact solution or the ground state lies in a low-com
 plexity f
 unction space called spectral Barron space\, we show that the convergence 
 rate of the generalization error is independent of dimension. We also deve
 lop a new regularity theory for the PDEs of consideration on the spectral 
 Barron space. This can be viewed as an analog of the classical Sobolev reg
 ularity theory for PDEs.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Neil Chada (King Abdullah University of Science and Technology)
DTSTART:20210706T140000Z
DTEND:20210706T150000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /23/">Unbiased Inference for Discretely observed Hidden Markov Model Diffu
 sions</a>\nby Neil Chada (King Abdullah University of Science and Technolo
 gy) as part of Data Science and Computational Statistics Seminar\n\n\nAbst
 ract\nWe develop a Bayesian inference method for diffusions observed discr
 etely and with noise\, which is free of discretisation bias. Unlike existi
 ng unbiased inference methods\, our method does not rely on exact simulati
 on techniques. Instead\, our method uses standard time-discretised approxi
 mations of diffusions\, such as the Euler–Maruyama scheme. Our approach
  is based on particle marginal Metropolis–Hastings\, a particle filter
 \, randomised multilevel Monte Carlo\, and importance sampling type cor
 rectio
 n of approximate Markov chain Monte Carlo. The resulting estimator leads t
 o inference without a bias from the time-discretisation as the number of M
 arkov chain iterations increases. We give convergence results and recommen
 d allocations for algorithm inputs. Our method admits a straightforward pa
 rallelisation\, and can be computationally efficient. The user-friendly ap
 proach is illustrated in three examples\, where the underlying diffusion i
 s an Ornstein–Uhlenbeck process\, a geometric Brownian motion\, and a 2D
  non-reversible Langevin equation.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Allen Hart (University of Bath)
DTSTART:20210720T140000Z
DTEND:20210720T150000Z
DTSTAMP:20260404T110643Z
UID:DSCSS/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/DSCSS
 /24/">Echo state networks applied to market making problems</a>\nby Allen 
 Hart (University of Bath) as part of Data Science and Computational Statis
 tics Seminar\n\n\nAbstract\nIn this talk\, we discuss how a special type o
 f recurrent neural network called an Echo State Network (ESN) can be appli
 ed to supervised learning problems involving time series. We train the ESN
  using linear regression\, and despite the training process being entirely
  linear\, the ESN retains the universal approximation property.\n\nWe disc
 uss briefly how an ESN can be used to solve supervised learning problems\,
  before moving on to the more complex problem of reinforcement learning. We
  demonstrate the theory by applying the ESN to a simple market making prob
 lem that appears in mathematical finance.\n
LOCATION:https://stable.researchseminars.org/talk/DSCSS/24/
END:VEVENT
END:VCALENDAR
