BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Chao Wang (UC Davis)
DTSTART:20201013T231000Z
DTEND:20201014T000000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/1/">From telescope to computed tomography via sparse recovery ap
 proaches</a>\nby Chao Wang (UC Davis) as part of Mathematics of Data and D
 ecisions at Davis (MADDD) Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cynthia Rudin (Duke)
DTSTART:20201020T231000Z
DTEND:20201021T000000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/2/">Current Approaches in Interpretable Machine Learning</a>\nby
  Cynthia Rudin (Duke) as part of Mathematics of Data and Decisions at Davi
 s (MADDD) Seminar\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrice Koehl (UC Davis)
DTSTART:20201027T231000Z
DTEND:20201028T000000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/3/">Light speed computation of exact solutions to generic and to
  degenerate assignment problems</a>\nby Patrice Koehl (UC Davis) as part o
 f Mathematics of Data and Decisions at Davis (MADDD) Seminar\n\n\nAbstract
 \nThe linear assignment problem is a fundamental problem in combinatori
 al optimization with a wide range of applications\, from operational re
 search to data science. It consists of assigning "agents" to "tasks" o
 n a one-to-one basis\, while minimizing the total cost associated with t
 he assignment. While many exact algorithms have been developed to ident
 ify such an optimal assignment\, most of these methods are computational
 ly prohibitive for large problems. In this talk\, I will describe a nove
 l approach to solving the assignment problem using techniques adapted f
 rom statistical physics. In particular\, I will derive a strongly concav
 e effective free energy function that captures the constraints of the a
 ssignment problem at a finite temperature. This free energy decreases m
 onotonically as a function of $\\beta$\, the inverse temperature\, to the o
 ptimal assignment cost\, providing a robust framework for temperature an
 nealing. For large enough values of $\\beta$\, the exact solution to th
 e generic assignment problem can be derived by simply rounding the elem
 ents of the computed assignment matrix to the nearest integer. I will a
 lso describe a provably convergent method to handle degenerate assignme
 nt problems. Finally\, I will describe computer implementations of thi
 s framework that are optimized for parallel architectures\, one based o
 n CPUs\, the other on GPUs. These implementations enable solving large a
 ssignment problems (on the order of a few tens of thousands of agents) i
 n wall-clock times on the order of minutes.\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrick Flaherty (UMass)
DTSTART:20201104T001000Z
DTEND:20201104T010000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/4/">MAP Clustering under the Gaussian Mixture Model via Mixed In
 teger Programming</a>\nby Patrick Flaherty (UMass) as part of Mathematics 
 of Data and Decisions at Davis (MADDD) Seminar\n\n\nAbstract\nIn the appli
 cation of clustering models to real data there is often rich prior informa
 tion that constrains the relationships among the samples\, or the relation
 ships between the samples and the parameters. For example\, in biological 
 or clinical experiments\, it may be known that two samples are technical r
 eplicates and should be assigned to the same cluster\, or it may be known 
 that the mean value for control samples is in a certain range. However\, s
 tandard model-based clustering methods make it difficult to enforce such h
 ard logical constraints and may fail to provide a globally optimal cluster
 ing. We present a global optimization approach for solving the maximum a p
 osteriori (MAP) clustering problem under the Gaussian mixture model. Our a
 pproach can accommodate side constraints and preserves the combinatorial s
 tructure of the MAP clustering problem by its formulation as a mixed-integ
 er nonlinear optimization problem (MINLP). We approximate the MINLP throug
 h a mixed-integer quadratic program (MIQP) transformation that improves co
 mputational aspects while guaranteeing $\\epsilon$-global optimality. An i
 mportant benefit of our approach is the explicit quantification of the deg
 ree of suboptimality\, via the optimality gap\, en route to finding the gl
 obally optimal MAP clustering. Numerical experiments comparing our method 
 to other approaches show that our method finds better optima than standard
  clustering methods. Finally\, we cluster a real breast cancer gene exp
 ression data set incorporating intrinsic subtype information\; the indu
 ced constraints substantially improve the computational performance an
 d produce more coherent and biologically meaningful clusters.\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Zhi Ding (UC Davis)
DTSTART:20201111T001000Z
DTEND:20201111T010000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/5/">Deep Learning: Not a Simple Hammer for Massive MIMO Wireless
  Communication Systems</a>\nby Zhi Ding (UC Davis) as part of Mathematics 
 of Data and Decisions at Davis (MADDD) Seminar\n\n\nAbstract\nThe prolife
 ration of advanced wireless services\, such as virtual reality\, autonomo
 us driving\, and the internet of things\, has generated increasingly inte
 nse pressure to develop intelligent wireless communication systems tha
 t meet the networking needs posed by extremely high data rates\, massiv
 e numbers of connected devices\, and ultra-low latency. Deep learning (D
 L) has recently emerged as an exciting design tool for advancing the de
 velopment of wireless communication systems\, with some demonstrated suc
 cesses. In this talk\, we introduce the principles of applying DL to imp
 rove wireless network performance by integrating the underlying charact
 eristics of channels in practical massive MIMO deployments. We develo
 p important insights derived from the physical RF channel properties an
 d present a comprehensive overview of the application of DL to accurate
 ly estimating channel state information (CSI) of forward channels wit
 h low feedback overhead. We provide examples of successful DL applicati
 ons in CSI estimation for massive MIMO wireless systems and highlight s
 everal promising directions for future research.\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chelsea Weaver (Amazon)
DTSTART:20201118T001000Z
DTEND:20201118T010000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/6/">Natural Language Understanding at Amazon Music</a>\nby Chels
 ea Weaver (Amazon) as part of Mathematics of Data and Decisions at Davis (
 MADDD) Seminar\n\n\nAbstract\nIn this talk\, I'll discuss what happens wh
 en you ask an Alexa device to play music. I'll focus on the Natural Lan
 guage Understanding (NLU) component\, which deals with categorizing an
 d labeling transcribed requests. In particular\, I'll discuss two proje
 cts I've worked on that were designed to improve upon the initial label
 ing. The first uses a BERT-based language model to "correct" requests t
 hat appear to be mislabeled. The second is an online learning model tha
 t selects from different NLU interpretations using implicit customer fe
 edback. I'll conclude the talk with a few tips for the industry job sea
 rch.\n\nLinks: Amazon Jobs Page\; Amazon Science Page\n\nPapers:\nPersona
 lizing natural-language understanding using multi-armed bandits and imp
 licit feedback - Moerchen et al. (2020)\nCounterfactual Risk Minimizati
 on: Learning from Logged Bandit Feedback - Swaminathan & Joachims (2015
 )\nAnalysis of Thompson Sampling for the Multi-Armed Bandit Problem - A
 grawal et al. (2012)\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wolfgang Polonik (UC Davis)
DTSTART:20201125T001000Z
DTEND:20201125T010000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/7/">Multiscale Geometric Feature Extraction</a>\nby Wolfgang Pol
 onik (UC Davis) as part of Mathematics of Data and Decisions at Davis (MAD
 DD) Seminar\n\n\nAbstract\nA method for extracting multiscale geometric fe
 atures from a data cloud is presented. Each pair of data points is mapped 
 into a real-valued feature function\, whose construction is based on geome
 tric considerations. The collection of these feature functions is then use
 d for further data analysis. Applications include classification\, a
 nomaly detection and data visualization. In contrast to the popular kernel
  trick\, the construction of the feature functions is based on geometric c
 onsiderations. The performance of the methodology is illustrated through a
 pplications to real data sets\, and some theoretical guarantees supporting
  the performance of the novel methodology are presented. This is joint wor
 k with G. Chandler.\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guido Montufar (UCLA)
DTSTART:20201202T001000Z
DTEND:20201202T010000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/8/">Optimal Transport to Independence Models</a>\nby Guido Montu
 far (UCLA) as part of Mathematics of Data and Decisions at Davis (MADDD) S
 eminar\n\n\nAbstract\nAn independence model for discrete random variables 
 is a Segre-Veronese variety in a probability simplex. Any metric on the se
 t of joint states of the random variables induces a Wasserstein metric on 
 the probability simplex. The unit ball of this polyhedral norm is dual to 
 the Lipschitz polytope. Given any data distribution\, we seek to minimize 
 its Wasserstein distance to a fixed independence model. The solution to th
 is optimization problem is a piecewise algebraic function of the data. We 
 compute this function explicitly in small instances\, we examine its combi
 natorial structure and algebraic degrees in the general case\, and we pres
 ent some experimental case studies. This talk is based on joint work with 
 Türkü Özlüm Çelik\, Asgar Jamneshan\, Bernd Sturmfels\, Lorenzo Ventu
 rello.\n\nhttps://arxiv.org/abs/1909.11716\n\nhttps://arxiv.org/abs/2003.0
 6725\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Samir Chowdhury (Stanford)
DTSTART:20201209T001000Z
DTEND:20201209T010000Z
DTSTAMP:20260404T094308Z
UID:MADDD_Fall2020/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MADDD
 _Fall2020/9/">Gromov-Wasserstein Learning in a Riemannian Framework</a>\nb
 y Samir Chowdhury (Stanford) as part of Mathematics of Data and Decisions 
 at Davis (MADDD) Seminar\n\n\nAbstract\nGeometric and topological data ana
 lysis methods are increasingly being used to derive insights from data ari
 sing in the empirical sciences. We start with a particular case where such
  techniques are applied to human neuroimaging data to obtain graphs which 
 can then yield insights connecting neurobiology to human task performance.
  Reproducing such insights across populations requires statistical learnin
 g techniques such as averaging and PCA across graphs without known node co
 rrespondences. We formulate this problem using the Gromov-Wasserstein (GW)
 distance and present a recently developed Riemannian framework for GW-ave
 raging and tangent PCA. Beyond graph adjacency matrices\, this framework p
 ermits consuming derived network representations such as distance or kerne
 l matrices\, and each choice leads to additional structure on the GW probl
 em that can be exploited for theoretical and/or computational advantages. 
 In particular\, we show how replacing the adjacency matrix representation 
 with a spectral representation leads to theoretical guarantees allowing ef
 ficient use of the Riemannian framework. Additionally we present numerics 
 showing how the spectral representation achieves state-of-the-art accuracy
  and runtime in graph learning tasks such as matching and partitioning on 
 a variety of real and simulated datasets.\n
LOCATION:https://stable.researchseminars.org/talk/MADDD_Fall2020/9/
END:VEVENT
END:VCALENDAR
