BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:János Pach (Alfréd Rényi Institute\, Budapest\, Hungary and MIPT\
 , Moscow\, Russia)
DTSTART:20201010T120000Z
DTEND:20201010T124000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/1/">Bounded VC-dimension and Ramsey-type questions</a>\nby 
 János Pach (Alfréd Rényi Institute\, Budapest\, Hungary and MIPT\, Moscow
 \, Russia) as part of Machine Learning and Combinatorics Workshop\n\n\nAbs
 tract\nThere is a growing body of results in Ramsey theory which provide b
 etter bounds or stronger conclusions under the additional assumption of bo
 unded VC-dimension. After giving a short survey of results of this type\, 
 we prove the following theorem which settles a special case of the famous 
 Schur-Erdős problem. There exists a constant $c=c(d)$ such that any $m$-co
 loring of the edges of a complete graph with at least $2^{cm}$ vertices\,
  for which the family of the neighborhoods of the vertices in each color h
 as VC-dimension at most $d$\, contains a monochromatic triangle. This resul
 t is best possible up to the value of $c$.\n\nJoint work with Jacob Fox an
 d Andrew Suk.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Omri Ben-Eliezer (Harvard University\, US)
DTSTART:20201010T124500Z
DTEND:20201010T132500Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/2/">Adversarially Robust Streaming Algorithms</a>\nby Omri 
 Ben-Eliezer (Harvard University\, US) as part of Machine Learning and Comb
 inatorics Workshop\n\n\nAbstract\nStreaming algorithms are traditionally d
 ivided into two categories: deterministic algorithms and randomized ones. 
 These two types of algorithms exhibit a tradeoff between robustness and ef
 ficiency: on one hand\, deterministic algorithms always return a correct o
 utput\, but for many important streaming problems\, efficient deterministi
 c algorithms do not exist. On the other hand\, efficient randomized algori
 thms are known for a very wide range of problems\, but their correctness t
 ypically relies on the assumption that the stream is fixed in advance\, wh
 ich is often unrealistic in practice.\n\nIn this talk\, I will focus on
  a middle ground that combines both robustness and efficiency: adversarial
 ly robust algorithms\, whose output is correct with high probability even 
 when the stream elements are adaptively chosen as a function of previous o
 utputs. This regime has received surprisingly little attention until recen
 tly\, and many intriguing problems are still open. In particular\, I shall
  discuss the robustness of random sampling\, mention its close connections
  to combinatorial online learning\, and if time permits\, present generic 
 techniques to transform “standard” randomized streaming algorithms for
  various problems into robust ones.\n\nBased on joint works with Noga Alo
 n\, Yuval Dagan\, Rajesh Jayaram\, Shay Moran\, Moni Naor\, David Woodruff
 \, and Eylon Yogev.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wolfgang Mulzer (Freie Universität Berlin\, Germany)
DTSTART:20201010T133000Z
DTEND:20201010T141000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/3/">Asymmetric Convex Intersection Testing (ACIT)</a>\nby W
 olfgang Mulzer (Freie Universität Berlin\, Germany) as part of Machine Le
 arning and Combinatorics Workshop\n\n\nAbstract\nLet $P$ be a set of $n$ p
 oints and $H$ a set of $m$ halfspaces in $d$ dimensions. We denote by $ch(
 P)$ the polytope obtained by taking the convex hull of $P$\, and by $fh(H)
 $ the polytope obtained by taking the intersection of the halfspaces in $H
 $. Our goal is to decide whether the polytopes $fh(H)$ and $ch(P)$ are dis
 joint. Even though ACIT is a natural variant of classic LP-
 type problems that have been studied at length in the literature\, and des
 pite its applications in the analysis of high-dimensional data sets\, it a
 ppears that the problem has not been studied before.\n\nWe discuss how kno
 wn approaches can be used to attack the ACIT problem\, and we provide a ve
 ry simple strategy that leads to a deterministic algorithm\, linear in $n$
  and $m$\, whose running time depends reasonably on the dimension $d$.\n\n
 Based on joint work with Luis Barba.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Open problem session
DTSTART:20201010T150000Z
DTEND:20201010T160000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/4/">Open problem session</a>\nas part of Machine Learning a
 nd Combinatorics Workshop\n\n\nAbstract\nDuring t
 he open problem session\, all participants are welcome to propose an open 
 problem related to the topic of the workshop.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shay Moran (Technion\, Haifa\, Israel)
DTSTART:20201010T160000Z
DTEND:20201010T164000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/5/">On the expressiveness of comparison queries</a>\nby Sha
 y Moran (Technion\, Haifa\, Israel) as part of Machine Learning and Combin
 atorics Workshop\n\n\nAbstract\nComparisons are a classical and well-studi
 ed algorithmic tool used in a variety of contexts and applications
 . We will discuss two manifestations of the expressiveness of these querie
 s in machine learning and algorithms (a more detailed overview is given be
 low). Both manifestations are based on the notion of “inference dimensio
 n” which can be viewed as another instance of the fruitful link between 
 machine learning and discrete mathematics – a link dating back to the di
 scovery of the VC dimension.\n\n1. Active classification with comparison q
 ueries [Kane\, Lovett\, Moran\, Zhang\, FOCS ’17] — Active learning is
  a model for semi-supervised learning that captures situations in which un
 labeled data is abundant and manually labelling it is expensive. We cons
 ider an extension of active learning in which the learning algorithm may a
 sk the annotator to compare the distances of two examples from the boundar
 y of their label-class. For example\, in a recommendation system applicati
 on (say for restaurants)\, the annotator may be asked whether she liked or
  disliked a specific restaurant (a label query)\; or which of two restaura
 nts she liked more (a comparison query). We prove that the usage of
  comparison queries leads to an exponential improvement in the query compl
 exity of some well-studied problems. Specifically\, for the class of half-
 spaces\, we show that under natural assumptions\, such as large margin or 
 bounded bit-description of the input examples\, it is possible to reveal a
 ll the labels of a sample of size $n$ using approximately $O(\\log n)$ que
 ries.\n\n2. Nearly optimal linear decision trees for $k$-SUM and related p
 ro
 blems [Kane\, Lovett\, Moran\, JACM ’19] — We use the above framework t
 o construct linear decision trees for a variety of decision problems in c
 ombinatorics and discrete geometry. For example\, for any $k$\, we constru
 ct linear decision trees that solve the $k$-SUM problem on $n$ elements us
 in
 g $O(n k \\log n)$ linear queries. Moreover\, the queries we use are compa
 rison queries\, which compare the sums of two $k$-subsets\; when viewed as
  linear queries\, comparison queries are $2k$-sparse and have only $\\{-1
 \,+1\\}$ coefficients. We give similar constructions for sorting sumsets $A
 +B$ and for solving the SUBSET-SUM problem\, both with optimal number of q
 ueries\, up to poly-logarithmic terms.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shachar Lovett (University of California San Diego\, US)
DTSTART:20201010T164500Z
DTEND:20201010T172500Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/6/">Point location with near-optimal bounds</a>\nby Shachar
  Lovett (University of California San Diego\, US) as part of Machine Learn
 ing and Combinatorics Workshop\n\n\nAbstract\nThe point location problem i
 s a classical problem in discrete and computational geometry. Given a know
 n partition of $\\mathbb R^d$ by $n$ hyperplanes into cells\, and an unkno
 wn point\, find as efficiently as possible the cell in which the point lie
 s. Access to the unknown point is done indirectly\, via linear queries wit
 h respect to hyperplanes.\n\nThis problem is equivalent to another well-st
 udied problem\, this time in machine learning: active learning of linear c
 lassifiers. Here\, there are $n$ known points in $\\mathbb R^d$\, which ar
 e labeled by an unknown hyperplane. The goal is to find\, as efficiently a
 s possible\, the labeling of all $n$ points. Similarly here\, access to th
 e un
 known hyperplane is done indirectly\, in the form of label queries.\n\nFor
  both problems\, there is a simple information-theoretic lower bound of $\
 \Omega(d \\log n)$ on the number of queries needed. Despite 40 years of wo
 rk\, the best known upper bounds had a polynomial dependence on the dimens
 ion $d$. In this work\, we achieve for the first time a near-linear depend
 ence on the dimension. The proof combines tools from geometry\, analysis a
 nd machine learning.\n\nJoint work with Max Hopkins\, Daniel Kane and Gaur
 av Mahajan.\n\nThe paper will appear in FOCS 2020.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Amir Yehudayoff (Technion\, Haifa\, Israel)
DTSTART:20201011T120000Z
DTEND:20201011T124000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/7/">Trichotomy of rates in supervised learning</a>\nby Amir
  Yehudayoff (Technion\, Haifa\, Israel) as part of Machine Learning and Co
 mbinatorics Workshop\n\n\nAbstract\nWe show that in supervised learning th
 ere are only three possible learning rates: exponential\, linear\, and arb
 itrarily slow.\n\nJoint work with Bousquet\, Hanneke\, Moran\, and van Han
 del.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roi Livni (Tel Aviv University\, Israel)
DTSTART:20201011T124500Z
DTEND:20201011T132500Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/8/">Graph-Based Discrimination</a>\nby Roi Livni (Tel Aviv 
 University\, Israel) as part of Machine Learning and Combinatorics Worksho
 p\n\n\nAbstract\nA basic question in learning theory is to decide whether
  two distributions are identical when we have access only to examples sam
 pled from them. This basic task arises in the context of learning
 \, but also in the context of Generative Adversarial Networks (GANs)\, whe
 re a discriminator is trained to distinguish between a real-life distribut
 ion and a synthetic distribution. Classically\, we use a hypothesis class 
 H and claim that the two distributions are distinct if for some hypothesis
  in $H$ the expected value on the two distributions is (significantly) dif
 ferent. Our starting point is the following fundamental problem: “is hav
 ing the hypothesis dependent on more than a single random example benefici
 al?”. To address this challenge\, we introduce $k$-ary based discriminat
 ors
  which can be modeled by a family of (hyper)-graphs. Each hypergraph is us
 ed to test if the distributions are distinct by estimating the probability
  of a (hyper)-edge. We study the expressiveness of such graph-based discr
 iminators and compare them to the classical setting of learning\, which is
  $k=1$. We show a separation in the expressiveness of $(k+1)$-ary vs $k$-a
 ry graph-based discriminators and introduce a combinatorial measure\, call
 ed g
 raph-VC dimension\, and show that it controls the sample complexity.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrey Kupavskii (MIPT\, Moscow\, Russia and CNRS\, Grenoble\, Fra
 nce)
DTSTART:20201011T133000Z
DTEND:20201011T141000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/9/">VC-dimension of polytopes</a>\nby Andrey Kupavskii (MIP
 T\, Moscow\, Russia and CNRS\, Grenoble\, France) as part of Machine Learn
 ing and Combinatorics Workshop\n\n\nAbstract\nWe discuss recent progress o
 n questions about the VC-dimension of $d$-dimensional polytopes with $k$ f
 acets or $k$ vertices. For the former class\, we determine the order of gr
 owth of the VC-dimension\, which turns out to be superlinear in $k$ (joint
  with Csikós and Mustafa). For the latter class\, we show the first polyno
 mial upper bound on the VC-dimension.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Steve Hanneke (Toyota Technological Institute at Chicago\, US)
DTSTART:20201011T150000Z
DTEND:20201011T154000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/10/">Proper Learning\, Helly Number\, and an Optimal SVM Bo
 und</a>\nby Steve Hanneke (Toyota Technological Institute at Chicago\, US)
  as part of Machine Learning and Combinatorics Workshop\n\n\nAbstract\nThe
  classical PAC sample complexity bounds are stated for any Empirical Risk 
 Minimizer (ERM) and contain an extra logarithmic factor $\\log(1/\\varepsi
 lon)$ which is known to be necessary for ERM in general. It has been recen
 tly shown by Hanneke (2016) that the optimal sample complexity of PAC lear
 ning for any VC class $C$ does not include this $\\log$ factor and is achi
 eved by a particular improper learning algorithm\, which outputs a specifi
 c majority-vote of hypotheses in $C$. This leaves the question of when thi
 s bound can be achieved by proper learning algorithms\, which are restrict
 ed to always output a hypothesis from $C$.\n\nIn this work we aim to chara
 cterize the classes for which the optimal sample complexity can be achieve
 d by a proper learning algorithm. We identify that these classes can be ch
 aracterized by the dual Helly number\, which is a combinatorial parameter 
 that arises in discrete geometry and abstract convexity. In particular\, u
 nder general conditions on $C$\, we show that the dual Helly number is bou
 nded if and only if there is a proper learner that obtains the optimal dep
 endence on $\\varepsilon$.\n\nAs further implications of our techniques we
  resolve a long-standing open problem posed by Vapnik and Chervonenkis (19
 74) on the performance of the Support Vector Machine by proving that the s
 ample complexity of SVM in the realizable case is $\\Theta((n/\\varepsilon
 )+(1/\\varepsilon)\\log(1/\\delta))$\, where $n$ is the dimension. This gi
 ves the first optimal PAC bound for halfspaces achieved by a proper learni
 ng algorithm\, which is moreover computationally efficient.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hunter Chase (University of Illinois at Chicago\, US)
DTSTART:20201011T154500Z
DTEND:20201011T162500Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/11/">Query learning\, Littlestone dimension\, and consisten
 cy dimension</a>\nby Hunter Chase (University of Illinois at Chicago\, US)
  as part of Machine Learning and Combinatorics Workshop\n\n\nAbstract\n
 We use Littlestone dimension coupled with consistency dimension to charact
 erize efficient learning by equivalence and membership queries. We also br
 iefly describe how these combinatorial measures connect machine learning w
 ith model theory\, where many examples\, for query learning and other kind
 s of learning\, can be found.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jacob Fox (Stanford University\, US)
DTSTART:20201011T163000Z
DTEND:20201011T171000Z
DTSTAMP:20260404T094834Z
UID:MLC_Workshop_Moscow/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/MLC_W
 orkshop_Moscow/12/">Bounded VC-dimension and Extremal Graph Theory</a>\nby
  Jacob Fox (Stanford University\, US) as part of Machine Learning and Comb
 inatorics Workshop\n\n\nAbstract\nWe survey a variety of tools and results
  on extremal problems for graphs of bounded VC-dimension. This is based on
  joint works with János Pach and Andrew Suk.\n
LOCATION:https://stable.researchseminars.org/talk/MLC_Workshop_Moscow/12/
END:VEVENT
END:VCALENDAR
