BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Igor Pruenster (Bocconi University)
DTSTART:20211129T160000Z
DTEND:20211129T164500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/1/">Nonparametric priors for partially exchangeable data: dependenc
 e structure and borrowing of information</a>\nby Igor Pruenster (Bocconi U
 niversity) as part of CMO-Foundations of Objective Bayesian Methodology\n
 \nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beatrice Franzolini (Bocconi University\, Italy)
DTSTART:20211129T164500Z
DTEND:20211129T173000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/2/">Nonparametric priors with full-range borrowing of information</
 a>\nby Beatrice Franzolini (Bocconi University\, Italy) as part of CMO-Fou
 ndations of Objective Bayesian Methodology\n\n\nAbstract\nWhen data are gr
 ouped into distinct samples\, they typically are homogeneous within and he
 terogeneous across groups. In this case\, the Bayesian paradigm requires a
  prior law over a collection of distributions. From a modelling point of v
 iew\, it is essential to study how this structure reflects on the observab
 les\, especially in nonparametric models. We introduce the notion of hyper
 -ties and show that they play the same role as actual ties in the exchange
 able setting\, driving the dependence between observations. Using hyper-ti
 es\, we can compute correlation between observables and show how its sign 
 depends on the joint specification. Finally\, we propose a novel class o
 f dependent nonparametric priors\, which may induce either positive or neg
 ative correlation across samples.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marta Catalano (University of Warwick\, UK)
DTSTART:20211129T180000Z
DTEND:20211129T184500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/3/">A Wasserstein index of dependence for Bayesian nonparametric mo
 deling</a>\nby Marta Catalano (University of Warwick\, UK) as part of CMO-
 Foundations of Objective Bayesian Methodology\n\n\nAbstract\nOptimal trans
 port (OT) methods and Wasserstein distances are flourishing in many scient
 ific fields as an effective means for comparing and connecting different r
 andom structures. In this talk we describe the first use of an OT distance
  between Lévy measures with infinite mass to solve a statistical problem.
  Complex phenomena often yield data from different but related sources\, w
 hich are ideally suited to Bayesian modeling because of its inherent borro
 wing of information. In a nonparametric setting\, this is regulated by the
  dependence between random measures: we derive a general Wasserstein index
  for a principled quantification of the dependence\, gaining insight into th
 e models’ deep structure. It also allows for an informed prior elicitati
 on and provides a fair ground for model comparison. Our analysis unravels 
 many key properties of the OT distance between Lévy measures\, whose inte
 rest goes beyond Bayesian statistics\, spanning to the theory of partial d
 ifferential equations and of Lévy processes.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Isadora Antoniano-Villalobos (Ca' Foscari University of Venice)
DTSTART:20211129T184500Z
DTEND:20211129T193000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/4/">Bayesian mixture models for the prediction of extreme observati
 ons</a>\nby Isadora Antoniano-Villalobos (Ca' Foscari University of Venice
 ) as part of CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstra
 ct\nIn many applications with interest in large or extreme observations\, 
 usual inferential methods may fail to reproduce the tail behaviour of the 
 variables involved. Recent literature has proposed the use of multivariate
  extreme value theory to predict an unobserved component of a random vecto
 r given large observed values of the rest. This is achieved through the es
 timation of the angular measure controlling the dependence structure in th
 e tail of the distribution. The idea can be extended and used for predicti
 on of multiple components at adequately large levels\, provided the model 
 used for the angular measure is sufficiently flexible to capture co
 mplex dependence structures. The use of Bernstein polynomials ensures such
  flexibility and their interpretation as mixture models allows the use of 
 current trans-dimensional MCMC posterior simulation methods for inference.
 \n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Julyan Arbel (Inria Grenoble\, France)
DTSTART:20211129T220000Z
DTEND:20211129T224500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/5/">Improving MCMC convergence diagnostic with a local version of R
 -hat</a>\nby Julyan Arbel (Inria Grenoble\, France) as part of CMO-Foundat
 ions of Objective Bayesian Methodology\n\n\nAbstract\nDiagnosing convergen
 ce of Markov chain Monte Carlo (MCMC) is crucial in Bayesian analysis. Amo
 ng the most popular methods\, the potential scale reduction factor (common
 ly named R-hat) is an indicator that monitors the convergence of all chain
 s to the stationary distribution\, based on a comparison of the between- a
 nd within-variance of the chains. Several improvements have been suggested
  since its introduction by Gelman & Rubin (1992). Here\, we analyse some p
 roperties of the theoretical value R associated with R-hat in the case of
  a localized version that focuses on quantiles of the distribution. This lead
 s to proposing a new indicator\, which is shown to allow both for localizi
 ng the MCMC convergence in different quantiles of the distribution\, and a
 t the same time for handling some convergence issues not detected by other
  R-hat versions.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Trevor Campbell (University of British Columbia\, Canada)
DTSTART:20211129T224500Z
DTEND:20211129T230000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/6/">Parallel Tempering on Optimized Paths</a>\nby Trevor Campbell (
 University of British Columbia\, Canada) as part of CMO-Foundations of Obj
 ective Bayesian Methodology\n\n\nAbstract\nParallel tempering (PT) is a cl
 ass of Markov chain Monte Carlo algorithms that constructs a path of distr
 ibutions annealing between a tractable reference and an intractable target
 \, and then interchanges states along the path to improve mixing in the ta
 rget. The performance of PT depends on how quickly a sample from the refer
 ence distribution makes its way to the target\, which in turn depends on t
 he particular path of annealing distributions. However\, past work on PT h
 as used only simple paths constructed from convex combinations of the refe
 rence and target log-densities. In this talk I'll show that this path perf
 orms poorly in the common setting where the reference and target are nearl
 y mutually singular. To address this issue\, I'll present an extension of 
 the PT framework to general families of paths\, formulate the choice of pa
 th as an optimization problem that admits tractable gradient estimates\, a
 nd present a flexible new family of spline interpolation paths for use in 
 practice. Theoretical and empirical results will demonstrate that the prop
 osed methodology breaks previously-established upper performance limits fo
 r traditional paths.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:María Fernanda Gil Leyva Villa (Bocconi University)
DTSTART:20211130T000000Z
DTEND:20211130T004500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/7/">Gibbs sampling for mixtures in order of appearance: the ordered
  allocation sampler</a>\nby María Fernanda Gil Leyva Villa (Bocconi Unive
 rsity) as part of CMO-Foundations of Objective Bayesian Methodology\n\n\nA
 bstract\nGibbs sampling methods for mixture models are based on data augme
 ntation schemes that account for the unobserved partition in the data. The
 y have been broadly classified into two categories: marginal and condition
 al samplers. Marginal samplers are termed this way because they integrate 
 out part of the mixing distribution and directly model the partition struc
 ture. They can be used to implement mixture models with a tractable exchan
 geable partition probability function (EPPF) associated with the mixing dist
 ribution. However\, if the EPPF is not available in closed form\, marginal
  samplers are hard to adapt. In contrast\, conditional samplers rely on al
 location variables that identify each observation with a mixture component
 .  While conditional samplers are more broadly applicable and allow direct
  inference on the mixing distribution\, they are known to suffer from slow
 mixing. Moreover\, for mixture models with infinitely many components so
 me form of truncation\, either deterministic or random\, is required. As f
 or mixtures with a random number of components\, the exploration of parame
 ter spaces of different dimensions can also be challenging. We tackle thes
 e issues by expressing the mixture components in the random order of appea
 rance in an exchangeable sequence directed by the mixing distribution. We 
 derive a sampler\, called the ordered allocation sampler\, that is straigh
 tforward to implement for mixing distributions with tractable size-biased 
 ordered weights. In infinite mixtures\, no form of truncation is necessary
 . As for finite mixtures with random dimension\, a simple updating of the 
 number of components is obtained by a blocking argument\, thus easing chal
 lenges found in trans-dimensional moves via Metropolis-Hastings steps. Alth
 ough the ordered allocation sampler is a conditional sampler\, sampling oc
 curs in the space of ordered partitions with blocks labelled in the least 
 element order. This improves mixing and promotes a consistent labelling of
  mixture components throughout iterations.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anirban Bhattacharya (Texas A&M University)
DTSTART:20211130T004500Z
DTEND:20211130T013000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/8/">Coupling-based convergence assessment of some Gibbs samplers fo
 r high-dimensional Bayesian regression with shrinkage priors</a>\nby Anirb
 an Bhattacharya (Texas A&M University) as part of CMO-Foundations of Obje
 ctive Bayesian Methodology\n\n\nAbstract\nWe consider Markov chain Monte C
 arlo (MCMC) algorithms for Bayesian high-dimensional regression with conti
 nuous shrinkage priors. A common challenge with these algorithms is the ch
 oice of the number of iterations to perform. This is critical when each it
 eration is expensive\, as is the case when dealing with modern data sets\,
  such as genome-wide association studies with thousands of rows and up to 
 hundreds of thousands of columns. We develop coupling techniques tailored 
 to the setting of high-dimensional regression with shrinkage priors\, whic
 h enable practical\, non-asymptotic diagnostics of convergence without rel
 ying on traceplots or long-run asymptotics. By establishing geometric drif
 t and minorization conditions for the algorithm under consideration\, we p
 rove that the proposed couplings have finite expected meeting time. Focusi
 ng on a class of shrinkage priors which includes the 'Horseshoe'\, we empi
 rically demonstrate the scalability of the proposed couplings. A highlight
  of our findings is that less than 1000 iterations can be enough for a Gib
 bs sampler to reach stationarity in a regression on 100\,000 covariates. T
 he numerical results also illustrate the impact of the prior on the comput
 ational efficiency of the coupling\, and suggest the use of priors where t
 he local precisions are Half-t distributed with degrees of freedom larger t
 han one. (Joint work with Niloy Biswas\, Pierre Jacob\, and James Johndrow
 )\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Helen Ogden (University of Southampton\, UK)
DTSTART:20211130T160000Z
DTEND:20211130T164500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/9/">Approximate cross validation for mixture models</a>\nby Helen O
 gden (University of Southampton\, UK) as part of CMO-Foundati
 ons of Objective Bayesian Methodology\n\n\nAbstract\nChoosing appropriate 
 priors and hyperparameters to control the number of components used by a m
 ixture model is often challenging: it is typically hard to interpret such 
 parameters directly\, which makes it difficult to use subjective prior kno
 wledge. I will focus instead on how to choose these quantities to give a m
 odel with good frequentist properties. In principle\, models could be asse
 ssed by cross validation\, but in practice direct calculation of a cross v
 alidation criterion is computationally expensive and numerically unstable.
  I will discuss methods for approximating cross validation criteria for mi
 xture models\, which aim to address both of these issues.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander Ly (University of Amsterdam/CWI Amsterdam)
DTSTART:20211130T164500Z
DTEND:20211130T173000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/10/">Default Bayes Factors for Testing the (In)equality of Several 
 Population Variances</a>\nby Alexander Ly (University of Amsterdam/CWI Ams
 terdam) as part of CMO-Foundations of Objective Bayesian Methodology\n\n\n
 Abstract\nThe goal of this presentation is to elaborate on the notion of o
 bjectivity in Bayesian tests. Concretely\, I’ll discuss Harold Jeffreys’s
  desiderata for objective Bayes factors that were formalised by Bayarri\, 
 Berger\, Forte and García-Donato (2012) within the context of testing the
  (in)equality of several population variances. I’ll also put forth the d
 esideratum of across-sample consistency for K-sample problems\, and show t
 hat for this problem\, such an objective Bayes factor adhering to all thes
 e desiderata (1) exists\, (2) is easily calculable\, and (3) has good freq
 uentist properties. If time allows\, I’ll also discuss the sequential pr
 operties of the resulting Bayes factor.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Luis E. Nieto-Barajas (ITAM Mexico)
DTSTART:20211130T180000Z
DTEND:20211130T184500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/11/">Characterizing variation of nonparametric random probability m
 easures using the Kullback–Leibler divergence</a>\nby Luis E. Nieto-Bara
 jas (ITAM Mexico) as part of CMO-Foundations of Objective Bayesian Methodo
 logy\n\n\nAbstract\nThis work characterizes the dispersion of some popular
  random probability measures\, including the bootstrap\, the Bayesian boot
 strap\, and the Pólya tree prior. This dispersion is measured in terms of
  the variation of the Kullback–Leibler divergence of a random draw from 
 the process to its baseline centring measure. By providing a quant
 itative expression of this dispersion around the baseline distribution\, o
 ur work provides insight for comparing different parameterizations of the 
 models and for the setting of prior parameters in applied Bayesian setting
 s. This highlights some limitations of the existing canonical choice of pa
 rameter settings in the Pólya tree process.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chris Holmes (Oxford University)
DTSTART:20211130T184500Z
DTEND:20211130T193000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/12/">Predictive Inference: a view towards objectivity</a>\nby Chris
  Holmes (Oxford University) as part of CMO-Foundations of Objective Bayesi
 an Methodology\n\n\nAbstract\nWe revisit the predictive approach to Bayesi
 an statistics\, advocated by Geisser and others\, as a framework to facili
 tate objective inference. We explore the predictive viewpoint of Bayesian 
 nonparametric learning as a means to improve robustness in the M-open set
 ting and we point to future research directions.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Judith Rousseau (University of Oxford)
DTSTART:20211130T220000Z
DTEND:20211130T224500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/13/">Using cut posterior in semi parametric inference with applicat
 ions to semiparametric and nonparametric Bayesian inference in hidden Mark
 ov models</a>\nby Judith Rousseau (University of Oxford) as part of CMO-Fo
 undations of Objective Bayesian Methodology\n\n\nAbstract\nWhile the theor
 y of Bayesian approaches in standard nonparametric or high-dimensional mod
 els is beginning to be well developed\, much less is known in the context o
 f semi-parametric models outside very specific priors and models. We propo
 se in this talk a pseudo Bayesian approach\, based on the cut posterior wh
 ich allows for the construction of a distribution on the whole parameter a
 nd is constructed such that the marginal posterior on the parameter of int
 erest has optimal properties. We apply this approach to the setup of nonpa
 rametric hidden Markov models with finite state space and nonparametric em
 ission distributions. Since the seminal paper of Gassiat et al. (2016)\, i
 t is known that in such models the transition matrix $Q$ and the emission 
 distributions $F_1\, \\dots\, F_K$ are identifiable\, up to label switchi
 ng. We use a cut posterior to simultaneously estimate $Q$ at the rate $\\
 sqrt{n}$ and the emission distributions at the usual nonparametric rates.
  To do so\, we first consider a prior $\\pi_1$ on $Q$ and $F_1\, \\dots\,
  F_K$ which leads to a marginal posterior distribution on $Q$ which verif
 ies the Bernstein–von Mises property and thus to an estimator of $Q$ whic
 h is efficient. We then combine the marginal posterior on $Q$ with anothe
 r po
 sterior distribution on the emission distributions\, following the cut-pos
 terior approach\, to obtain a posterior which also concentrates around the
  emission distributions at the minimax rates. In addition an important int
 ermediate result of our work is an inversion inequality which allows us to
  upper bound the $L_1$ norms between the emission densities by the $L_1$ n
 orms between marginal densities of three consecutive observations.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sinead Williamson (University of Texas at Austin)
DTSTART:20211130T224500Z
DTEND:20211130T230000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/14/">Posterior normalizing flows</a>\nby Sinead Williamson (Univers
 ity of Texas at Austin) as part of CMO-Foundations of Objective Bayesian M
 ethodology\n\n\nAbstract\nNormalizing flows allow us to construct complex 
 probability distributions $\\mathbb{P}(X)$ by transforming simpler distrib
 utions $\\mathbb{Q}(Z)$\, via a change of variables $X=f_\\theta(Z)$. If w
 e model the change-of-variables transformation $f_\\theta$ using an invert
 ible neural network with an analytically tractable Jacobian\, we can evalu
 ate likelihoods under the resulting distribution $\\mathbb{P}(X)$\, allowi
 ng us to perform maximum likelihood density estimation. Such maximum likel
 ihood density estimation is likely to overfit\, particularly if the number
  of observations is small. Rather than creating a mapping between a pair o
 f distributions\, we use normalizing flows to describe the relationship be
 tween two families of distributions. This allows us to use nonparametric l
 earning techniques to learn posterior distributions in a lightweight manne
 r.  (Joint work with Evan Ott)\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michele Guindani (University of California\, USA)
DTSTART:20211201T000000Z
DTEND:20211201T004500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/15/">A Common Atom Model for the Bayesian Nonparametric Analysis of
  Nested Data</a>\nby Michele Guindani (University of California\, USA) as 
 part of CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nT
 he use of large datasets for targeted therapeutic interventions requires n
 ew ways to characterize the heterogeneity observed across subgroups of a s
 pecific population. In particular\, models for partially exchangeable data
  are needed for inference on nested datasets\, where the observations are 
 assumed to be organized in different units and some sharing of information
  is required to learn distinctive features of the units. In this talk\, we
  propose a nested Common Atoms Model (CAM) that is particularly suited for
  the analysis of nested datasets where the distributions of the units are 
 expected to differ only over a small fraction of the observations sampled 
 from each unit. The proposed CAM allows a two-layered clustering at the di
 stributional and observational level and is amenable to scalable posterior
  inference through the use of a computationally efficient nested slice sam
 pler algorithm. We further discuss how to extend the proposed modeling fra
 mework to handle discrete measurements\, and we conduct posterior inferenc
 e on a real microbiome dataset from a diet swap study to investigate how t
 he alterations in intestinal microbiota composition are associated with di
 fferent eating habits. If time allows\, we will also discuss an applicatio
 n to the analysis of time series calcium imaging experiments in awake beha
 ving animals. We further investigate the performance of our model in captu
 ring true distributional structures in the population by means of simulati
 on studies.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Giovanni Rebaudo (University of Texas at Austin)
DTSTART:20211201T004500Z
DTEND:20211201T013000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/16/">Graph-Aligned Random Partition Model</a>\nby Giovanni Rebaudo 
 (University of Texas at Austin) as part of CMO-Foundations of Objective Ba
 yesian Methodology\n\n\nAbstract\nBayesian nonparametric mixtures and rand
 om partition models are effective tools to perform probabilistic clusteri
 ng. However\, standard independent mixture models can be restrictive in so
 me applications such as inference on cell-lineage due to the biological re
 lations of the clusters. The increasing availability of large genomics dat
 a and studies requires new statistical tools to perform model-based cluster
 ing and infer the relationship between the homogeneous subgroups of units.
  Motivated by single-cell RNA applications we develop a novel dependent mi
 xture model to jointly perform cluster analysis and align the cluster on a
  graph. Our flexible graph-aligned random partition model (gRPM) cleverly 
 exploits Gibbs-type priors as building blocks\, allowing us to derive analy
 tical results on the probability mass function of the random partition. Fr
 om the pmf of the random partition\, we derive a generalization of the wel
 l-known Chinese restaurant process and a related efficient MCMC algorithm
  to perform Bayesian inference. We perform posterior inference on real sing
 le-cell RNA data from mice stem cells. We further investigate the performa
 nce of our model in capturing underlying clustering structure as well as t
 he underlying graph by means of a simulation study.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Rossell (Universitat Pompeu Fabra\, Spain)
DTSTART:20211201T160000Z
DTEND:20211201T164500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/17/">Confounder importance learning for treatment effect inference<
 /a>\nby David Rossell (Universitat Pompeu Fabra\, Spain) as part of CMO-Fo
 undations of Objective Bayesian Methodology\n\n\nAbstract\nAn important ba
 sic problem is to estimate the association between an outcome and a set o
 f covariates of interest (treatments) while accounting for many potential
  confounders. It has b
 een shown that standard high-dimensional Bayesian and penalized likelihood
  methods perform poorly in practice. The sparsity embedded in such methods
  leads to low power when there are strong correlations between treatments 
 and confounders\, or among confounders\, which causes an under-selection
  (or omitted variable) bias. Current solutions encourage the inclusion of 
 confounders to increase power\, but as we show this can lead to serious ov
 er-selection problems. To address these issues\, we propose an empirical B
 ayes framework to learn which confounders should be encouraged (or discou
 raged) to feature in the regression. We develop exact computations and a f
 aster expectation-propagation strategy for the family of exponential regre
 ssion models. We illustrate the applied impact of these issues to study th
 e association between salary and potentially discriminatory factors such a
 s gender\, race and place of birth.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jack Jewson (Universitat Pompeu Fabra\, Spain)
DTSTART:20211201T164500Z
DTEND:20211201T173000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/18/">General Bayesian Loss Function Selection and the use of Improp
 er Models</a>\nby Jack Jewson (Universitat Pompeu Fabra\, Spain) as part o
 f CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nStatist
 icians often face the choice between using probability models or a paradig
 m defined by minimising a loss function.  Both approaches are useful and\,
  if the loss can be re-cast into a proper probability model\, there are ma
 ny tools to decide which model or loss is more appropriate for the observ
 ed data\, in the sense of explaining the data’s nature. However\, when
  the loss leads to an improper model\, there are no principled ways to gu
 ide this choice. We address this task by combining the Hyvarinen score\, w
 hich naturally targets infinitesimal relative probabilities\, and general 
 Bayesian updating\, which provides a unifying framework for inference on l
 osses and models. Specifically we propose the H-score\, a general Bayesian
  selection criterion and prove that it consistently selects the (possibly 
 improper) model closest to the data-generating truth in Fisher’s diver
 gence. We also prove that an associated H-posterior consistently learns op
 timal hyper-parameters featuring in loss functions\, including a challengi
 ng tempering parameter in generalised Bayesian inference. As salient examp
 les\, we consider robust regression and non-parametric density estimation 
 where popular loss functions define improper models for the data and hence
  cannot be dealt with using standard model selection tools. These examples
  illustrate advantages in robustness-efficiency tradeoffs and provide a Ba
 yesian implementation for kernel density estimation\, opening a new avenue
  for Bayesian non-parametrics.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Veronika Rockova (University of Chicago)
DTSTART:20211201T180000Z
DTEND:20211201T184500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/19/">Metropolis-Hastings via Classification</a>\nby Veronika Rockov
 a (University of Chicago) as part of CMO-Foundations of Objective Bayesian
  Methodology\n\n\nAbstract\nThis paper develops a Bayesian computational p
 latform at the interface between posterior sampling and optimization in mo
 dels whose marginal likelihoods are difficult to evaluate. Inspired by con
 trastive learning and Generative Adversarial Networks (GAN)\, we reframe t
 he likelihood function estimation problem as a classification problem. Pit
 ting a Generator\, who simulates fake data\, against a Classifier\, who tr
 ies to distinguish them from the real data\, one obtains likelihood (ratio
 ) estimators which can be plugged into the Metropolis-Hastings algorithm. 
 The resulting Markov chains generate\, at a steady state\, samples from an
  approximate posterior whose asymptotic properties we characterize. Drawin
 g upon connections with empirical Bayes and Bayesian mis-specification\, w
 e quantify the convergence rate in terms of the contraction speed of the a
 ctual posterior and the convergence rate of the Classifier.  Asymptotic no
 rmality results are also provided which justify the inferential potential 
 of our approach. We illustrate the usefulness of our approach on examples
  that have proved challenging for existing Bayesian likelihood-free appro
 aches.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rajesh Ranganath (Courant Institute NYU\, USA)
DTSTART:20211201T184500Z
DTEND:20211201T193000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/20/">Where did my Bayes Go?</a>\nby Rajesh Ranganath (Courant Insti
 tute NYU\, USA) as part of CMO-Foundations of Objective Bayesian Methodolo
 gy\n\n\nAbstract\nI've spent time working on Bayesian methods\, especially
  scalable computation. However\, my recent work has developed algorithms t
 ailored to problems in healthcare that do not easily translate to standard
  Bayesian computation. In this talk\, I will highlight two such methods\, 
 one for survival analysis based on multiplayer games and another for build
 ing predictive models in the presence of spurious correlations. At the end
 \, I'll highlight thoughts on how Bayesian analysis might play a role in t
 hese problems.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Noirrit Chandra (The University of Texas at Austin\, USA)
DTSTART:20211202T160000Z
DTEND:20211202T164500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/21/">Bayesian Scalable Precision Factor Analysis for Massive Sparse
  Gaussian Graphical Models</a>\nby Noirrit Chandra (The University of Texas
  at Austin\, USA) as part of CMO-Foundations of Objective Bayesian Methodo
 logy\n\n\nAbstract\nWe propose a novel approach to estimating the precisi
 on matrix of multivariate Gaussian data that relies on decomposing them in
 to a low-rank and a diagonal component. Such decompositions are very popul
 ar for modeling large covariance matrices as they admit a latent factor ba
 sed representation that allows easy inference. The same is however not tru
 e for precision matrices due to the lack of computationally convenient rep
 resentations\, which restricts inference to low-to-moderate dimensional prob
 lems. We address this remarkable gap in the literature by building on a la
 tent variable representation for such decomposition for precision matrices
 . The construction leads to an efficient Gibbs sampler that scales very we
 ll to high-dimensional problems far beyond the limits of the current state
 -of-the-art. The ability to efficiently explore the full posterior space a
 lso allows the model uncertainty to be easily assessed. The decomposition 
 additionally allows us to adapt sparsity-inducing priors to shr
 ink the insignificant entries of the precision matrix toward zero\, making
  the approach adaptable to high-dimensional small-sample-size sparse setti
 ngs. Exact zeros in the matrix encoding the underlying conditional indepen
 dence graph are then determined via a novel posterior false discovery rate
  control procedure. A near minimax optimal posterior concentration rate fo
 r estimating precision matrices is attained by our method under mild regul
 arity assumptions.\nWe evaluate the method's empirical performance through
  synthetic experiments and illustrate its practical utility in data sets f
 rom two different application domains.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniele Durante (Bocconi University\, Italy)
DTSTART:20211202T164500Z
DTEND:20211202T173000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/22/">Advances in Bayesian inference for regression models with bina
 ry\, categorical and partially-discretized data</a>\nby Daniele Durante (B
 occoni University\, Italy) as part of CMO-Foundations of Objective Bayesia
 n Methodology\n\n\nAbstract\nA broad class of models that routinely appear
  in several fields of application can be expressed as partially or fully d
 iscretized Gaussian linear regressions. Besides including the classical Ga
 ussian response setting\, this class crucially encompasses probit\, multin
 omial probit and tobit models\, among others\, and further includes key ex
 tensions to dynamic\, skewed and multivariate contexts. The relevance of s
 uch representations has motivated decades of research in the Bayesian fie
 ld. The main reason for this active interest is that\, unlike for the Gaus
 sian response setting\, the posterior distribution induced by these models
  does not apparently belong to a known and tractable class\, under the com
 monly-assumed Gaussian priors. This has motivated the development of sever
 al alternative solutions for posterior inference relying either on samplin
 g-based strategies or on deterministic approximations\, which\, however\, 
 still experience scalability\, mixing and accuracy issues\, especially in 
 high dimension. The scope of this talk is to review\, unify and extend rec
 ent advances in Bayesian inference and computation for such a class of mod
 els. To address this goal\, I will prove that the likelihoods induced by a
 ll these formulations crucially share a common analytical structure which 
 implies conjugacy with a broad class of distributions\, namely the unified
  skew-normals (SUN)\, that generalize multivariate Gaussians to skewed con
 texts\, and include these variables as a special case. This result unifies
  and extends recent conjugacy properties for specific models within the cl
 ass analyzed\, and opens new avenues for improved posterior inference\, un
 der a broader class of core formulations and prior distributions\, via nov
 el closed-form expressions\, tractable Monte Carlo methods based on indepe
 ndent and identically distributed samples from the exact SUN posteriors\, 
 and more accurate and scalable approximations from variational Bayes and e
 xpectation-propagation. These advantages are illustrated in extensive simu
 lation studies and applications\, and are expected to boost the routine use
  of these core Bayesian models\, while providing a novel framework f
 or studying general theoretical properties and developing future extension
 s.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Filippo Ascolani (Bocconi University\, Italy)
DTSTART:20211202T180000Z
DTEND:20211202T184500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/23/">Trees of random probability measures and Bayesian nonparametri
 c modelling</a>\nby Filippo Ascolani (Bocconi University\, Italy) as part 
 of CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nWe int
 roduce a way to generate trees of random probability measures\, where the 
 link between two nodes is given by a hierarchical procedure: starting from
  a common root\, each node of the tree is endowed with a random probabilit
 y measure\, whose baseline distribution is again random and given by the a
 ssociated node in the previous layer.  The data can be observed at any nod
 e of the tree and different branches may have different lengths: the split
  mechanism can also be considered random or based on covariates of interest
 . When the branches have the same length and the observations are linked o
 nly to the leaves\, we recover the well-known family of discrete hierarchi
 cal processes. We prove that\, if the distribution at each node is given by
  the normalization of a completely random measure (NRMI)\, the model is an
 alytically tractable: conditional on a suitable latent structure\, the pos
 terior is still given by a deep NRMI. Furthermore\, the asymptotic behavio
 ur of the number of clusters is derived\, when either the sample size at a
  particular layer diverges or the number of levels grows. Finally\, the ex
 tension to kernel mixtures is discussed.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yang Ni (Texas A&M University\, USA)
DTSTART:20211202T184500Z
DTEND:20211202T193000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/24/">Individualized Causal Discovery with Latent Trajectory Embedde
 d Bayesian Networks</a>\nby Yang Ni (Texas A&M University\, USA) as part o
 f CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nBayesia
 n networks have been widely used for generating causal hypotheses from mul
 tivariate data. Despite their popularity\, the vast majority of existing c
 ausal discovery approaches make the strong assumption of a (partially) hom
 ogeneous sampling scheme. However\, such an assumption can be seriously vio
 lated\, causing significant biases when the underlying population is inherentl
 y heterogeneous. To explicitly account for the heterogeneity\, we propose 
 a novel Bayesian network model\, termed BN-LTE\, that embeds the heterogen
 eous data onto a low-dimensional manifold and builds Bayesian networks con
 ditional on the embedding. This new framework allows for more precise netw
 ork inference by improving the estimation resolution from population level
  to observation level (individualized causal models). Moreover\, while Bay
 esian networks are in general not identifiable with purely observational\,
  cross-sectional data due to Markov equivalence\, with the blessing of het
 erogeneity\, we prove that the proposed BN-LTE is uniquely identifiable un
 der common causal assumptions. Through extensive experiments\, we demonstr
 ate the superior performance of BN-LTE in discovering causal relationships
  as well as inferring observation-specific gene regulatory networks from o
 bservational data.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:José Antonio Perusquía (University of Kent\, UK)
DTSTART:20211202T220000Z
DTEND:20211202T224500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/25
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/25/">A Bayesian Approach to Anomaly Detection in Computer Systems: 
 A Review</a>\nby José Antonio Perusquía (University of Kent\, UK) as par
 t of CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nComp
 uter systems are vast\, complex and dynamic objects that have become cruci
 al in modern life. To ensure their correct performance\, there is a need t
 o efficiently detect vulnerabilities and anomalies that could shut them do
 wn with potentially catastrophic consequences. Nowadays\, a wide range of
  classical and machine learning models is used for such an important task.
  However\, these approaches lack the flexibility and the inherent probabili
 stic characterisation of uncertainty that Bayesian statistics offers. In re
 cent years\, Bayesian anomaly detection models applied specifically to comp
 uter systems have therefore gained considerable attention\, in particular i
 n the field of cyber security. In this talk\, we centre our attention on ho
 w these models have been used\, the specific challenges\, and interesting a
 reas of opportunity.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Katherine Heller (Google Research)
DTSTART:20211202T224500Z
DTEND:20211202T233000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/26/">Towards Trustworthy Machine Learning in Medicine and the Role 
 of Uncertainty</a>\nby Katherine Heller (Google Research) as part of CMO-F
 oundations of Objective Bayesian Methodology\n\n\nAbstract\nAs ML is incre
 asingly used in society\, we need methods that we can confidently rely on\,
  particularly in the medical domain. In this talk I discuss three pieces of
  work\, the role uncertainty plays in understanding and combatin
 g issues with generalization and bias\, and particular mitigations that we
  can take into consideration.\n\n1) Sepsis Watch - I present a Gaussian Pr
 ocess (GP) + Recurrent Neural Network (RNN) model for predicting sepsis in
 fections in Emergency Department patients. I will discuss the benefit of u
 ncertainty given by the GP. I will then discuss the social context in intr
 oducing such a system into a hospital setting.\n\n2) Uncertainty and Elect
 ronic Health Records (EHR) - I will discuss Bayesian RNN models developed 
 for mortality prediction\, and the distinction between population level pr
 edictive performance and individual level predictive performance\, and its
  implications for bias.\n\n3) Underspecification and the credibility impli
 cations of hyperparameter choices in ML models -- I will discuss medical i
 maging applications and how using the uncertainty of model performance con
 ditioned on choice of hyperparameters can help identify situations in whic
 h methods may not generalize well outside the training domain.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mengyang Gu (University of California Santa Barbara\, USA)
DTSTART:20211203T000000Z
DTEND:20211203T004500Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/27
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/27/">Marginalization of latent variables for correlated data</a>\nb
 y Mengyang Gu (University of California Santa Barbara\, USA) as part of CM
 O-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nWe will dis
 cuss marginalization of latent variables for correlated outcomes\, such as
  multiple time series\, spatio-temporal processes\, and computer simulatio
 ns. We first review the Kalman filter and its connection to Gaussian proce
 sses with Matern covariance. Then we discuss vector autoregressive models\, li
 near models of coregionalization\, and their connections to Gaussian proce
 sses with product covariance. We show marginalizing correlated latent vari
 ables leads to efficient estimation of model parameters and predictions. A
 s an example\, we will introduce generalized probabilistic principal compo
 nent analysis (GPPCA) to study the latent factor model for multiple correl
 ated outcomes. Our method generalizes the previous probabilistic formulati
 on of principal component analysis (PPCA) by providing the closed-form max
 imum marginal likelihood estimator of the factor loadings and other parame
 ters\, where each factor is modeled by a Gaussian process. Lastly we will 
 introduce efficient representation of Gaussian processes with product Mate
 rn covariance and its applications on emulating massive computer simulatio
 ns. We will present numerical studies of simulated and real data that confi
 rm good predictive accuracy and computational efficiency of the proposed app
 roaches.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alan Riva-Palacio (IIMAS-UNAM\, Mexico)
DTSTART:20211203T004500Z
DTEND:20211203T013000Z
DTSTAMP:20260404T060945Z
UID:CMO-21w5107/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CMO-2
 1w5107/28/">Bayesian analysis of vectors of subordinators</a>\nby Alan Riv
 a-Palacio (IIMAS-UNAM\, Mexico) as part of CMO-Foundations of Objective Ba
 yesian Methodology\n\n\nAbstract\nNon-decreasing additive processes\, also
  called subordinators\, have many applications throughout mathematical mod
 eling\; for instance\, they are widely used in risk and finance. Well-known
  examples of subordinators are the stable\, gamma and compound Pois
 son processes with positive jumps. The extension to a multivariate setting
 \, obtained by considering vectors of subordinators to study heterogeneous
  data\, has been studied in a frequentist setting. In this talk w
 e will discuss the challenges for the Bayesian analysis of models based on
  such vectors of subordinators.\n
LOCATION:https://stable.researchseminars.org/talk/CMO-21w5107/28/
END:VEVENT
END:VCALENDAR
