BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Huacheng Yu (Princeton University)
DTSTART:20200422T170000Z
DTEND:20200422T180000Z
DTSTAMP:20260404T110823Z
UID:TCSPlus/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/1/">Nearly Optimal Static Las Vegas Succinct Dictionary</a>\nby Huachen
 g Yu (Princeton University) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sepideh Mahabadi (TTIC)
DTSTART:20200429T170000Z
DTEND:20200429T180000Z
DTSTAMP:20260404T110823Z
UID:TCSPlus/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/2/">Non-Adaptive Adaptive Sampling in Turnstile Streams</a>\nby Sepideh
  Mahabadi (TTIC) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan Klein (University of Washington)
DTSTART:20200506T170000Z
DTEND:20200506T180000Z
DTSTAMP:20260404T110823Z
UID:TCSPlus/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/3/">An improved approximation algorithm for TSP in the half integral ca
 se</a>\nby Nathan Klein (University of Washington) as part of TCS+\n\nAbst
 ract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mark Bun (Boston University)
DTSTART:20200520T170000Z
DTEND:20200520T180000Z
DTSTAMP:20260404T110823Z
UID:TCSPlus/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/4/">An Equivalence between Private Classification and Online Predictabi
 lity</a>\nby Mark Bun (Boston University) as part of TCS+\n\nAbstract: TBA
 \n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rahul Ilango (MIT)
DTSTART:20200527T170000Z
DTEND:20200527T180000Z
DTSTAMP:20260404T110823Z
UID:TCSPlus/5
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/5/">Is it (NP) hard to distinguish order from chaos?</a>\nby Rahul Ilan
 go (MIT) as part of TCS+\n\n\nAbstract\nThe Minimum Circuit Size Problem (
 MCSP) roughly asks what the "complexity" of a given string is. Informally\
 , one can think of this as determining the degree of "computational order"
  a string has.\n\nIn the past several years\, there has been a resurgence 
 of interest in MCSP. A series of exciting results have begun unraveling wh
 at looks to be a fascinating story. This story already reveals deep connec
 tions between MCSP and a growing list of fields\, including cryptography\,
  learning theory\, structural complexity theory\, average-case complexity\
 , and circuit complexity. As an example\, Santhanam recently proved a cond
 itional equivalence between the complexity of MCSP and the existence of on
 e-way functions.\n\nThis talk is split into two parts. The first part is a
  broad introduction to MCSP\, answering the following questions: What is t
 his problem? Why is it interesting? What do we know so far\, and where mig
 ht the story go next? The second part discusses recent joint work with Bru
 no Loff and Igor Oliveira showing that the "multi-output version" of MCSP 
 is NP-hard.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Kim (Stanford University)
DTSTART:20200603T170000Z
DTEND:20200603T180000Z
DTSTAMP:20260404T110823Z
UID:TCSPlus/6
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/6/">Learning from outcomes: evidence-based rankings</a>\nby Michael P. 
 Kim (Stanford University) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sahil Singla (Princeton University)
DTSTART:20200513T170000Z
DTEND:20200513T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/7
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/7/">Online vector balancing and geometric discrepancy</a>\nby Sahil Sin
 gla (Princeton University) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Clifford Stein (Columbia University)
DTSTART:20200618T170000Z
DTEND:20200618T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/8
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/8/">Parallel approximate undirected shortest paths via low hop emulato
 rs</a>\nby Clifford Stein (Columbia University) as part of TCS+\n\n\nAbst
 ract\nAlthough sequential algorithms with (nearly) optimal running time f
 or finding shortest paths in undirected graphs with non-negative edge wei
 ghts have been known for several decades\, near-optimal parallel algorith
 ms have turned out to be a much tougher challenge. In this talk\, we pres
 ent a $(1+\\varepsilon)$-approximate parallel algorithm for computing sho
 rtest paths in undirected graphs\, achieving polylog depth and near-linea
 r work. All prior $(1+\\varepsilon)$-approximate algorithms with polylog 
 depth perform at least superlinear work. Improving this long-standing upp
 er bound obtained by Cohen (STOC'94) has been open for 25 years.\n\nOur a
 lgorithm uses several new tools. Prior work uses hopsets to introduce sho
 rtcuts in the graph. We introduce a new notion that we call low hop emula
 tors. We also introduce compressible preconditioners\, which we use in co
 njunction with Sherman's framework (SODA '17) for the uncapacitated minim
 um cost flow problem.\n\nJoint work with Alex Andoni and Peilin Zhong.\n\
 nRescheduled to 06/18/2020 (originally scheduled for 06/10/2020).\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Richard Peng (Georgia Tech)
DTSTART:20200917T170000Z
DTEND:20200917T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/9
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/9/">Solving Sparse Linear Systems Faster than Matrix Multiplication</a>
 \nby Richard Peng (Georgia Tech) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fotis Iliopoulos (IAS)
DTSTART:20200923T170000Z
DTEND:20200923T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/10
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/10/">Stochastic Local Search and the Lovasz Local Lemma</a>\nby Fotis I
 liopoulos (IAS) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alex Wein (NYU)
DTSTART:20200930T170000Z
DTEND:20200930T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/11
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/11/">Low-Degree Hardness of Random Optimization Problems</a>\nby Alex W
 ein (NYU) as part of TCS+\n\n\nAbstract\nIn high-dimensional statistical p
 roblems (including planted clique\, sparse PCA\, community detection\, etc
 .)\, the class of "low-degree polynomial algorithms" captures many leading
  algorithmic paradigms such as spectral methods\, approximate message pass
 ing\, and local algorithms on sparse graphs. As such\, lower bounds agains
 t low-degree algorithms constitute concrete evidence for average-case hard
 ness of statistical problems. This method has been widely successful at ex
 plaining and predicting statistical-to-computational gaps in these setting
 s.\n\nWhile prior work has understood the power of low-degree algorithms f
 or problems with a "planted" signal\, we consider here the setting of "ran
 dom optimization problems" (with no planted signal)\, including the proble
 m of finding a large independent set in a random graph\, as well as the pr
 oblem of optimizing the Hamiltonian of mean-field spin glass models. I wil
 l define low-degree algorithms in this setting\, argue that they capture t
 he best known algorithms\, and explain new proof techniques for giving low
 er bounds against low-degree algorithms in this setting. The proof involve
 s a variant of the so-called "overlap gap property"\, which is a structura
 l property of the solution space.\n\nBased on joint work with David Gamarn
 ik and Aukosh Jagannath\, available at: https://arxiv.org/abs/2004.12063\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Susanna F. de Rezende (Mathematical Institute of the Czech Academy
  of Sciences)
DTSTART:20201007T170000Z
DTEND:20201007T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/12
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/12/">Lifting with Simple Gadgets and Applications to Circuit and Proof 
 Complexity</a>\nby Susanna F. de Rezende (Mathematical Institute of the Cz
 ech Academy of Sciences) as part of TCS+\n\n\nAbstract\nLifting theorems i
 n complexity theory are a method of transferring lower bounds in a weak co
 mputational model into lower bounds for a more powerful computational mode
 l\, via function composition. There has been an explosion of lifting theor
 ems in the last ten years\, essentially reducing communication lower bound
 s to query complexity lower bounds. These theorems only hold for compositi
 on with very specific "gadgets" such as indexing and inner product.\n\nIn
  this talk\, we will present a generalization of the theorem lifting Nulls
 tellensatz degree to monotone span program size by Pitassi and Robere (201
 8) so that it works for any gadget with high enough rank\, in particular\,
  for useful gadgets such as equality and greater-than. We will then explai
 n how to apply our generalized theorem to solve three open problems:\n\n- 
 We present the first result that demonstrates a separation in proof power 
 for cutting planes with unbounded versus polynomially bounded coefficients
 . Specifically\, we exhibit CNF formulas that can be refuted in quadratic 
 length and constant line space in cutting planes with unbounded coefficien
 ts\, but for which there are no refutations in subexponential length and s
 ubpolynomial line space if coefficients are restricted to be of polynomial
  magnitude.\n\n- We give the strongest separation to-date between monotone
  Boolean formulas and monotone Boolean circuits. Namely\, we show that the
  classical GEN problem\, which has polynomial-size monotone Boolean circui
 ts\, requires monotone Boolean formulas of size $2^{\\Omega(n / \\text{pol
 ylog}(n))}$.\n\n- We give the first explicit separation between monotone B
 oolean formulas and monotone real formulas. Namely\, we give an explicit f
 amily of functions that can be computed with monotone real formulas of nea
 rly linear size but require monotone Boolean formulas of exponential size.
  Previously only a non-explicit separation was known.\n\nThis talk is base
 d on joint work with Or Meir\, Jakob Nordström\, Toniann Pitassi\, Robert
  Robere\, and Marc Vinyals.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jayadev Acharya (Cornell)
DTSTART:20201014T170000Z
DTEND:20201014T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/13
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/13/">Distributed Statistical Inference under Local Information Constrai
 nts</a>\nby Jayadev Acharya (Cornell) as part of TCS+\n\n\nAbstract\nWe co
 nsider statistical inference tasks in a distributed setting where access t
 o data samples is subjected to strict "local constraints\," through a unif
 ied framework that captures communication limitations and (local) privacy 
 constraints as special cases. We study estimation (learning) and goodness-
 of-fit (testing) for both discrete and high-dimensional distributions. Our
  goal is to understand how the sample complexity increases under the infor
 mation constraints.\n\nIn this talk we will provide an overview of this fi
 eld and a sample of some of our results. We will discuss the role of (publ
 ic) randomness and interactivity in information-constrained inference\, a
 nd make a case for thinking about randomness and interactivity as resource
 s.\n\nThe work is part of a long-term ongoing collaboration with Clément 
 Canonne (IBM Research) and Himanshu Tyagi (IISc)\, and includes works done
  with Cody Freitag (Cornell)\, Yanjun Han (Stanford)\, Yuhan Liu (Cornell)
 \, and Ziteng Sun (Cornell).\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aayush Jain (UCLA)
DTSTART:20201021T170000Z
DTEND:20201021T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/14
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/14/">Indistinguishability Obfuscation from Well-Founded Assumptions</a>
 \nby Aayush Jain (UCLA) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Omar Montasser (TTIC)
DTSTART:20201028T170000Z
DTEND:20201028T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/15
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/15/">Adversarially Robust Learnability: Characterization and Reductions
 </a>\nby Omar Montasser (TTIC) as part of TCS+\n\n\nAbstract\nWe study the
  question of learning an adversarially robust predictor from uncorrupted s
 amples. We show that any VC class H is robustly PAC learnable\, but we als
 o show that such learning must sometimes be improper (i.e. use predictors 
 from outside the class)\, as some VC classes are not robustly properly lea
 rnable.  In particular\, the popular robust empirical risk minimization ap
 proach (also known as adversarial training)\, which is proper\, cannot rob
 ustly learn all VC classes.  After establishing learnability\, we turn to 
 ask whether having a tractable non-robust learning algorithm is sufficient
  for tractable robust learnability and give a reduction algorithm for robu
 stly learning any hypothesis class H using a non-robust PAC learner for H\
 , with nearly-optimal oracle complexity.\n\nThis is based on joint work wi
 th Steve Hanneke and Nati Srebro\, available at https://arxiv.org/abs/1902
 .04217 and https://arxiv.org/abs/2010.12039.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shalev Ben-David (U Waterloo)
DTSTART:20201104T180000Z
DTEND:20201104T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/16
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/16/">Forecasting Algorithms\, Minimax Theorems\, and Randomized Lower B
 ounds</a>\nby Shalev Ben-David (U Waterloo) as part of TCS+\n\n\nAbstract\
 nI will present a new approach to randomized lower bounds\, particularly i
 n the setting where we wish to give a fine-grained analysis of randomized 
 algorithms that achieve small bias. The approach is as follows: instead of
  considering ordinary randomized algorithms which give an output in {0\,1}
  and may err\, we switch models to look at "forecasting" randomized algori
 thms which output a confidence in [0\,1] for whether they think the answer
  is 1. When scored by a proper scoring rule\, the performance of the best 
 forecasting algorithm is closely related to the bias of the best (ordinary
 ) randomized algorithm\, but is more amenable to analysis.\n\nAs an applic
 ation\, I'll present a new minimax theorem for randomized algorithms\, whi
 ch can be viewed as a strengthening of Yao's minimax theorem. Yao's minima
 x theorem guarantees the existence of a hard distribution for a function f
  such that solving f against this distribution (to a desired error level) 
 is as hard as solving f in the worst case (to that same error level). Howe
 ver\, the hard distribution provided by Yao's theorem depends on the chose
 n error level. Our minimax theorem removes this dependence\, giving a dist
 ribution which certifies the hardness of f against all bias levels at once
 . In recent work\, we used this minimax theorem to give a tight compositio
 n theorem for randomized query complexity.\n\nBased on joint work with Eri
 c Blais.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shuai Shao (University of Wisconsin-Madison)
DTSTART:20201111T180000Z
DTEND:20201111T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/17
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/17/">A Dichotomy for Real Boolean Holant Problems</a>\nby Shuai Shao (U
 niversity of Wisconsin-Madison) as part of TCS+\n\n\nAbstract\nIn this tal
 k\, we present a complexity dichotomy for Holant problems on the
  boolean domain with arbitrary sets of real-valued constraint functions. T
 hese constraint functions need not be symmetric nor do we assume any auxil
 iary functions as in previous results. It is proved that for every set F o
 f real-valued constraint functions\, Holant(F) is either P-time computable
  or #P-hard. The classification has an explicit criterion. This is a culmi
 nation of much research on a decade-long classification program for Holant
  problems\, and it uses previous results and techniques from many research
 ers.  However\, as it turned out\, the journey to the present theorem has 
 been arduous. Some particularly intriguing concrete functions f6\, f8 and 
 their associated families with extraordinary closure properties related to
  Bell states in quantum information theory play an important role in this 
 proof. \n\nBased on joint work with Jin-Yi Cai.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sumegha Garg (Harvard)
DTSTART:20201125T180000Z
DTEND:20201125T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/18
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/18/">The Coin Problem with Applications to Data Streams</a>\nby Sumegha
  Garg (Harvard) as part of TCS+\n\n\nAbstract\nConsider the problem of com
 puting the majority of a stream of $n$ i.i.d. uniformly random bits. This 
 problem\, known as the coin problem\, is central to a number of counting p
 roblems in different data stream models. We show that any streaming algori
 thm for solving this problem with large constant advantage (over the unifo
 rm distribution) must use $\\Omega(\\log n)$ bits of space. Previously\, i
 t was known that computing the majority on every input with a constant pro
 bability takes $\\Omega(\\log n)$ space. We extend our lower bound to prov
 ing tight lower bounds for solving multiple\, randomly interleaved copies 
 of the coin problem\, as well as for solving the OR of multiple copies of 
 a variant of the coin problem. Our proofs involve new measures of informat
 ion complexity that are well-suited for data streams.\n\nWe use these lowe
 r bounds to obtain a number of new results for data streams. In each case 
 there is an underlying $d$-dimensional vector x with additive updates to i
 ts coordinates given in a stream of length $m$. The input streams arising 
 from our coin lower bound have nice distributional properties\, and conseq
 uently for many problems for which we only had lower bounds in general tur
 nstile streams\, we now obtain the same lower bounds in more natural model
 s\, such as the bounded deletion model\, in which $\\|x\\|_2$ never drops 
 by a constant fraction of what it was earlier\, or in the random order mod
 el\, in which the updates are ordered randomly.\n\nBased on joint work wit
 h Mark Braverman and David P. Woodruff.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yang Liu (Stanford University)
DTSTART:20201202T180000Z
DTEND:20201202T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/19
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/19/">Faster Algorithms for Unit Maximum Flow</a>\nby Yang Liu (Stanford
  University) as part of TCS+\n\n\nAbstract\nThe maximum flow problem is on
 e of the most well-studied problems in combinatorial optimization\, encomp
 assing a broad range of cut\, matching\, and scheduling problems. Here we 
 present a recent line of work obtaining provably faster algorithms for sol
 ving the maximum flow problem using interior point methods. In particular\
 , we show how to solve the maximum flow problem in m-edge unit capacity gr
 aphs in time almost $m^{4/3}$\, improving over the breakthrough $m^{10/7}$
  time algorithm of Mądry.\n\nThis is based on joint work with Aaron Sidfo
 rd (https://arxiv.org/abs/1910.14276 and https://arxiv.org/abs/2003.08929)
 .\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:William Hoza (UT Austin)
DTSTART:20210217T180000Z
DTEND:20210217T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/20
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/20/">Fooling Constant-Depth Threshold Circuits</a>\nby William Hoza (UT
  Austin) as part of TCS+\n\n\nAbstract\nWe present the first non-trivial p
 seudorandom generator (PRG) for linear threshold (LTF) circuits of arbitra
 ry constant depth and super-linear size. This PRG fools circuits with dept
 h $d$ and $n^{1 + \\delta}$ wires\, where $\\delta = \\exp(-O(d))$\, using
  seed length $O(n^{1 - \\delta})$ and with error $\\exp(-n^{\\delta})$. Th
 is tightly matches the best known lower bounds for this circuit class. As 
 a consequence of our result\, all the known hardness for LTF circuits has 
 now effectively been translated into pseudorandomness. This brings the ext
 ensive effort in the last decade to construct PRGs and deterministic circu
 it-analysis algorithms for this class to the point where any subsequent im
 provement would yield breakthrough lower bounds.\n\nA key ingredient in ou
 r construction is a pseudorandom restriction procedure that has tiny failu
 re probability\, but simplifies the function to a non-natural "hybrid comp
 utational model" that combines decision trees and LTFs. As part of our pro
 of we also construct an "extremely low-error" PRG for the class of functio
 ns computable by an arbitrary function of s linear threshold functions tha
 t can handle even the extreme setting of parameters $s = n/\\mathrm{polylo
 g}(n)$ and $\\epsilon = \\exp(-n/\\mathrm{polylog}(n))$.\n\nJoint work wit
 h Pooya Hatami\, Avishay Tal\, and Roei Tell.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Steve Hanneke (TTIC)
DTSTART:20210303T180000Z
DTEND:20210303T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/21
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/21/">A Theory of Universal Learning</a>\nby Steve Hanneke (TTIC) as par
 t of TCS+\n\n\nAbstract\nHow quickly can a given class of concepts be lear
 ned from examples? It is common to measure the performance of a supervised
  machine learning algorithm by plotting its "learning curve"\, that is\, t
 he decay of the error rate as a function of the number of training example
 s. However\, the classical theoretical framework for understanding learnab
 ility\, the PAC model of Vapnik-Chervonenkis and Valiant\, does not explai
 n the behavior of learning curves: the distribution-free PAC model of lear
 ning can only bound the upper envelope of the learning curves over all pos
 sible data distributions. This does not match the practice of machine lear
 ning\, where the data source is typically fixed in any given scenario\, wh
 ile the learner may choose the number of training examples on the basis of
  factors such as computational resources and desired accuracy.\n\nIn this 
 work\, we study an alternative learning model that better captures such pr
 actical aspects of machine learning\, but still gives rise to a complete t
 heory of the learnable in the spirit of the PAC model. More precisely\, we
  consider the problem of universal learning\, which aims to understand the
  performance of learning algorithms on every data distribution\, but witho
 ut requiring uniformity over the distribution. The main result of this wor
 k is a remarkable trichotomy: there are only three possible rates of unive
 rsal learning. More precisely\, we show that the learning curves of any gi
 ven concept class decay either at an exponential\, linear\, or arbitrarily
  slow rate. Moreover\, each of these cases is completely characterized by
  appropriate combinatorial parameters\, and we exhibit optimal learning al
 gorithms that achieve the best possible rate in each case.\n\nJoint work w
 ith Olivier Bousquet\, Shay Moran\, Ramon van Handel\, and Amir Yehudayoff
 .\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Avishay Tal (UC Berkeley)
DTSTART:20210317T170000Z
DTEND:20210317T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/22
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/22/">Junta Distance Approximation with Sub-Exponential Queries</a>\nby 
 Avishay Tal (UC Berkeley) as part of TCS+\n\n\nAbstract\nJoint Work with V
 ishnu Iyer and Michael Whitmeyer\n\nA Boolean function $f:\\{0\,1\\}^n \\t
 o \\{0\,1\\}$ is called a k-junta if it depends only on k out of the n inp
 ut bits.
  Junta testing is the task of distinguishing between k-juntas and function
 s that are far from k-juntas. A long line of work settled the query comple
 xity of testing k-juntas\, which is O(k log(k)) [Blais\, STOC 2009\; Sagla
 m\, FOCS 2018]. Suppose\, however\, that f is not a perfect k-junta but ra
 ther correlated with a k-junta. How well can we estimate this correlation?
  This question was asked by De\, Mossel\, and Neeman [FOCS 2019]\, who gav
 e an algorithm for the task making ~exp(k) queries. We present an improved
  algorithm that makes ~exp(sqrt{k}) many queries. \n\nAlong the way\, we a
 lso give an algorithm\, making poly(k) queries\, that provides "implicit o
 racle access" to the underlying k-junta. Our techniques are Fourier analyt
 ical and introduce the notion of "normalized influences" that might be of 
 independent interest.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jasper Lee (Brown University)
DTSTART:20210331T170000Z
DTEND:20210331T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/23
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/23/">Optimal Sub-Gaussian Mean Estimation in $\\mathbb{R}$</a>\nby Jasp
 er Lee (Brown University) as part of TCS+\n\n\nAbstract\nWe revisit and se
 ttle a fundamental problem in statistics: given access to independent samp
 les from a 1D random variable (with finite but unknown mean and variance)\
 , what is the best way to estimate the mean in the high probability regime
 \, in terms of error convergence with respect to sample size? The conventi
 onal wisdom is to use the empirical mean as our estimate. However\, it is 
 known that the empirical mean can in fact have exponentially sub-optimal c
 onvergence for certain heavy-tailed distributions. On the other hand\, the
  median-of-means estimator (invented and reinvented in various literature)
  does have sub-Gaussian convergence for all finite-variance distributions\
 , albeit only in the big-O sense with a sub-optimal multiplicative constan
 t. The natural remaining question\, then\, is whether it is possible to brid
 ge the gap\, and have an estimator that has optimal convergence with the r
 ight constant for all finite-variance distributions.\n\nIn this talk\, we 
 answer the question affirmatively by giving an estimator that converges wi
 th the optimal constant inside the big-O\, up to a 1+o(1) multiplicative f
 actor. The estimator is also easy to compute. The convergence analysis inv
 olves deriving tail bounds using linear and convex-concave programming dua
 lities\, which may be of independent interest.\n\nBased on joint work with
  Paul Valiant.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Santhoshini Velusamy (Harvard University)
DTSTART:20210512T170000Z
DTEND:20210512T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/24
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/24/">Classification of the approximability of all finite Max-CSPs in th
 e dynamic streaming setting</a>\nby Santhoshini Velusamy (Harvard Universi
 ty) as part of TCS+\n\n\nAbstract\nA maximum constraint satisfaction probl
 em\, Max-CSP(F)\, is specified by a finite family of constraints F\, where
  each constraint is of arity k. An instance of the problem on n variables 
 is given by m applications of constraints from F to length-k subsequences 
 of the n variables\, and the goal is to find an assignment to the n variab
 les that satisfies the maximum number of constraints. The class of Max-CSP
 (F) includes optimization problems such as Max-CUT\, Max-DICUT\, Max-3SAT\
 , Max-q-Coloring\, Unique Games\, etc.\n\nIn this talk\, I will present ou
 r recent dichotomy theorem on the approximability of Max-CSP(F) for every 
 finite family F\, in the single-pass dynamic streaming setting. In this se
 tting\, at each time step\, a constraint is either added to or deleted fro
 m the stream. In the end\, the streaming algorithm must estimate the maxim
 um number of constraints that can be satisfied using space that is only po
 lylogarithmic in n. No background in streaming algorithms or constraint sa
 tisfaction problems will be needed to enjoy this talk!\n\nThe talk will be
  based on this paper (https://eccc.weizmann.ac.il/report/2021/011/)\, and 
 this paper (https://eccc.weizmann.ac.il/report/2021/063/) with Chi-Ning Ch
 ou\, Alexander Golovnev\, and Madhu Sudan.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrea Lincoln (UC Berkeley)
DTSTART:20210414T170000Z
DTEND:20210414T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/25
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/25/">New Techniques for Proving Fine-Grained Average-Case Hardness</a>\
 nby Andrea Lincoln (UC Berkeley) as part of TCS+\n\n\nAbstract\nIn this ta
 lk I will cover a new technique for worst-case to average-case reductions.
  There are two primary concepts introduced in this talk: "factored" proble
 ms and a framework for worst-case to average-case fine-grained (WCtoACFG) 
 self reductions.\n\nWe will define new versions of OV\, kSUM and zero-k-cl
 ique that are both worst-case and average-case fine-grained hard assuming 
 the core hypotheses of fine-grained complexity. We then use these as a bas
 is for fine-grained hardness and average-case hardness of other problems. 
 Our hard factored problems are also simple enough that we can reduce them 
 to many other problems\, e.g. to edit distance\, k-LCS and versions of Max
 -Flow. We further consider counting variants of the factored problems and 
 give WCtoACFG reductions for them for a natural distribution.\n\nTo show h
 ardness for these factored problems we formalize the framework of [Boix-Ad
 sera et al. 2019]  that was used to give a WCtoACFG reduction for counting
  k-cliques. We define an explicit property of problems such that if a prob
 lem has that property one can use the framework on the problem to get a WC
 toACFG self reduction.\nIn total these factored problems and the framework
  together give tight fine-grained average-case hardness for various proble
 ms including the counting variant of regular expression matching.\n\nBased
  on joint work with Mina Dalirrooyfard and Virginia Vassilevska Williams.\
 n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ronen Eldan (Weizmann Institute)
DTSTART:20210428T170000Z
DTEND:20210428T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/26
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/26/">Localization\, stochastic localization\, and Chen's recent breakth
 rough on the Kannan-Lovasz-Simonovits conjecture</a>\nby Ronen Eldan (Weiz
 mann Institute) as part of TCS+\n\n\nAbstract\nThe Kannan\, Lovasz\, and Si
 monovits (KLS) conjecture considers the following isoperimetric problem on hi
 gh-dimensional convex bodies: Given a convex body $K$\, consider the optim
 al way to partition it into two pieces of equal volume so as to minimize t
 heir interface. Is it true that up to a universal constant\, the minimal p
 artition is attained via a hyperplane cut? Roughly speaking\, this questio
 n can be thought of as asking "to what extent is a convex set a good expan
 der"?\n\nIn analogy to expander graphs\, such lower bounds on the capacity
  would imply bounds on mixing times of Markov chains associated with the c
 onvex set\, and so this question has direct implications on the complexity
  of many computational problems on convex sets. Moreover\, it was shown th
 at a positive answer would imply Bourgain's slicing conjecture.  \n\nVery 
 recently\, Yuansi Chen obtained a striking breakthrough\, nearly solving t
 his conjecture. In this talk\, we will overview some of the central ideas 
 used in the proof. We will start with the classical concept of "localizati
 on" (a very useful tool to prove concentration inequalities) and its exten
 sion\, stochastic localization - the main technique used in the proof.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kira Goldner (Columbia University)
DTSTART:20210526T170000Z
DTEND:20210526T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/27
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/27/">An Overview of Using Mechanism Design for Social Good</a>\nby Kira
  Goldner (Columbia University) as part of TCS+\n\n\nAbstract\nIn order to 
 accurately predict an algorithm's outcome and quality when it interacts wi
 th participants who have a stake in the outcome\, we must design it to be 
 robust to strategic manipulation.  This is the subject of algorithmic mech
 anism design\, which borrows ideas from game theory and economics to desig
 n robust algorithms.  In this talk\, I will show how results from the theo
 retical foundations of algorithmic mechanism design can be used to solve p
 roblems of societal concern.  \n\nI will overview recent work in this area
  in many different applications — housing\, labor markets\, carbon licen
 se allocations\, health insurance markets\, and more — as well as discus
 s open problems and directions ripe for tools from both mechanism design a
 nd general TCS.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pravesh Kothari and Ankur Moitra (CMU and MIT)
DTSTART:20210609T170000Z
DTEND:20210609T183000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/28
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/28/">Robustly Learning Mixtures of Gaussians</a>\nby Pravesh Kothari an
 d Ankur Moitra (CMU and MIT) as part of TCS+\n\n\nAbstract\nFor a while no
 w the problem of robustly learning a high-dimensional mixture of Gaussians
  has had a target on its back. The first works in algorithmic robust stati
 stics gave provably robust algorithms for learning a single Gaussian. Sinc
 e then there has been steady progress\, including algorithms for robustly 
 learning mixtures of spherical Gaussians\, mixtures of Gaussians under sep
 aration conditions\, and arbitrary mixtures of two Gaussians. In this talk
  we will discuss two recent works that essentially resolve the general pro
 blem. There are important differences in their techniques\, setup\, and ov
 erall quantitative guarantees\, which we will discuss.\n\nThe talk will co
 ver the following independent works:\n- Liu\, Moitra\, "Settling the Robus
 t Learnability of Mixtures of Gaussians"\n- Bakshi\, Diakonikolas\, Jia\, 
 Kane\, Kothari\, Vempala\, "Robustly Learning Mixtures of $k$ Arbitrary Ga
 ussians"\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Audra McMillan (Apple)
DTSTART:20210929T170000Z
DTEND:20210929T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/29
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/29/">Hiding among the clones: a simple and nearly optimal analysis of p
 rivacy amplification by shuffling</a>\nby Audra McMillan (Apple) as part o
 f TCS+\n\n\nAbstract\nDifferential privacy (DP) is a model of privacy-pres
 erving machine learning that has garnered significant interest in recent y
 ears due to its rigorous privacy guarantees. An algorithm is differentially
  private if the output is stable under small changes in the input database.
  While DP has been adopted in a variety of applications\, most applications
  of DP in industry actually satisfy a stronger notion called local differe
 ntial privacy. In local differential privacy data subjects perturb their d
 ata before it reaches the data analyst. While this requires less trust\, it
  comes at a substantial cost to accuracy. Recent work of Erlingsson\, Feldma
 n\, Mironov\, Raghunathan\, Talwar\, and Thakurta [EFMRTT19] demonstrated 
 that random shuffling amplifies differential privacy guarantees of locally
  randomized data. Such amplification implies substantially stronger privac
 y guarantees for systems in which data is contributed anonymously [BEMMRLR
 KTS17] and has led to significant interest in the shuffle model of privacy
  [CSUZZ19\, EFMRTT19]. In this talk\, we will discuss a new result on priv
 acy amplification by shuffling\, which achieves the asymptotically optimal
  dependence in the local privacy parameter. Our result is based on a new p
 roof strategy which is simpler than previous approaches\, and extends to a sl
 ightly weaker notion known as approximate differential privacy with near
 ly the same guarantees. \n\nBased on joint work with Vitaly Feldman and Ku
 nal Talwar.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nutan Limaye (IT University of Copenhagen)
DTSTART:20211013T170000Z
DTEND:20211013T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/30
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/30/">Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits<
 /a>\nby Nutan Limaye (IT University of Copenhagen) as part of TCS+\n\n\nAb
 stract\nEvery multivariate polynomial P(X) can be written as a sum of mon
 omials\, i.e. a sum of products of variables and field constants. In gener
 al\, the size of such an expression is the number of monomials that have a
  non-zero coefficient in P.\n\nWhat happens if we add another layer of com
 plexity\, and consider sums of products of sums (of variables and field co
 nstants) expressions? Now\, it becomes unclear how to prove that a given p
 olynomial P(X) does not have small expressions. In this result\, we solve 
 exactly this problem.\n\nMore precisely\, we prove that certain explicit p
 olynomials have no polynomial-sized "Sigma-Pi-Sigma" (sums of products of 
 sums) representations. We can also show similar results for Sigma-Pi-Sigma
 -Pi\, Sigma-Pi-Sigma-Pi-Sigma and so on for all "constant-depth" expressio
 ns. \n\nThe talk is based on a joint work of Nutan Limaye\, Srikanth Srini
 vasan\, and Sébastien Tavenas.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shravas Rao (Northwestern University)
DTSTART:20211027T170000Z
DTEND:20211027T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/31
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/31/">Degree vs. Approximate Degree and Quantum Implications of Huang's 
 Sensitivity Theorem</a>\nby Shravas Rao (Northwestern University) as part 
 of TCS+\n\n\nAbstract\nBased on the recent breakthrough of Huang (2019)\, 
 we show that for any total Boolean function $f$\,\n\n-  The degree of $f$ 
 is at most quadratic in the approximate degree of $f$. This is optimal as 
 witnessed by the OR function.\n\n- The deterministic query complexity of $
 f$ is at most quartic in the quantum query complexity of $f$. This matches
  the known separation (up to log factors) due to Ambainis\, Balodis\, Belo
 vs\, Lee\, Santha\, and Smotrovs (2017).\n\nWe apply these results to reso
 lve the quantum analogue of the Aanderaa--Karp--Rosenberg conjecture. We s
 how that if f is a nontrivial monotone graph property of an $n$-vertex gra
 ph specified by its adjacency matrix\, then $Q(f)=\\Omega(n)$\, which is a
 lso optimal. We also show that the approximate degree of any read-once for
 mula on n variables is $\\Theta(\\sqrt{n})$.\n\nBased on joint work with S
 cott Aaronson\, Shalev Ben-David\, Robin Kothari\, and Avishay Tal.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kuikui Liu (University of Washington)
DTSTART:20211110T180000Z
DTEND:20211110T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/32
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/32/">Spectral Independence: A New Tool to Analyze Markov Chains</a>\nby
  Kuikui Liu (University of Washington) as part of TCS+\n\n\nAbstract\nMark
 ov chain Monte Carlo is a widely used class of algorithms for sampling fro
 m high-dimensional probability distributions\, both in theory and in pract
 ice. While simple to implement\, analyzing the rate of convergence to stat
 ionarity\, i.e. the "mixing time"\, remains a challenging problem in many 
 settings. We introduce a new technique to bound mixing times called "spect
 ral independence"\, which says that certain pairwise correlation matrices 
 all have bounded spectral norm. This surprisingly powerful technique origi
 nates in the emerging study of high-dimensional expanders\, and has allowe
 d us to "unify" nearly all existing approaches to approximate counting and
  sampling by building new connections with other areas\, including statist
 ical physics\, geometry of polynomials\, functional analysis\, and more. T
 hrough these connections\, several long-standing open problems have recent
 ly been answered\, including counting bases of matroids and optimal mixing
  of the Glauber dynamics/Gibbs sampler up to the algorithmic phase transit
 ion threshold. \n\nBased on several joint works with Dorna Abdolazimi\, Ni
 ma Anari\, Zongchen Chen\, Shayan Oveis Gharan\, Eric Vigoda\, Cynthia Vin
 zant\, and June Vuong.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:William Kuszmaul (MIT)
DTSTART:20211201T180000Z
DTEND:20211201T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/33
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/33/">Linear Probing Revisited: Tombstones Mark the Demise of Primary Cl
 ustering</a>\nby William Kuszmaul (MIT) as part of TCS+\n\n\nAbstract\nThe
  linear-probing hash table is one of the oldest and most widely used data 
 structures in computer science. However\, linear probing also famously com
 es with a major drawback: as soon as the hash table reaches a high memory 
 utilization\, elements within the hash table begin to cluster together\, c
 ausing insertions to become slow. This phenomenon\, now known as "primary 
 clustering"\, was first captured by Donald Knuth in 1963\; at a load facto
 r of $1 - 1/x$\, the expected time per insertion becomes $\\Theta(x^2)$\, 
 rather than the more desirable $\\Theta(x)$.\n\nWe show that there is more
  to the story than the classic analysis would seem to suggest. It turns ou
 t that small design decisions in how deletions are implemented have dramat
 ic effects on the asymptotic performance of insertions. If these design de
 cisions are made correctly\, then even a hash table that is continuously a
 t a load factor $1 - \\Theta(1/x)$ can achieve average insertion time $\\t
 ilde{O}(x)$. A key insight is that the tombstones left behind by deletions
  cause a surprisingly strong "anti-clustering" effect\, and that when inse
 rtions and deletions are one-for-one\, the anti-clustering effects of dele
 tions actually overpower the clustering effects of insertions.\n\nWe also 
 present a new variant of linear probing\, which we call "graveyard hashing
 "\, that completely eliminates primary clustering on any sequence of opera
 tions. If\, when an operation is performed\, the current load factor is $1
  - 1/x$ for some $x$\, then the expected cost of the operation is $O(x)$. 
 One corollary is that\, in the external-memory model with a data block siz
 e of $B$\, graveyard hashing offers the following remarkable guarantee: at
  any load factor $1-1/x$ satisfying $x = o(B)$\, graveyard hashing achieve
 s $1 + o(1)$ expected block transfers per operation. Past external-memory 
 hash tables have only been able to offer a $1+o(1)$ guarantee when the blo
 ck size $B$ is at least $\\Omega(x^2)$.\n\nBased on joint work with Michae
 l A. Bender and Bradley C. Kuszmaul. To appear in FOCS 2021.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guy Blanc (Stanford University)
DTSTART:20211208T180000Z
DTEND:20211208T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/34
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/34/">Properly learning decision trees in almost polynomial time</a>\nby
  Guy Blanc (Stanford University) as part of TCS+\n\n\nAbstract\nWe give an
  $n^{O(\\log\\log n)}$-time membership query algorithm for properly and ag
 nostically learning decision trees under the uniform distribution over $\\
 {-1\,1\\}^n$. Even in the realizable setting\, the previous fastest runtim
 e was $n^{O(\\log n)}$\, a consequence of a classic algorithm of Ehrenfeuc
 ht and Haussler.  \n\nOur algorithm shares similarities with practical heu
 ristics for learning decision trees\, which we augment with additional ide
 as to circumvent known lower bounds against these heuristics. To analyze o
 ur algorithm\, we prove a new structural result for decision trees that st
 rengthens a theorem of O'Donnell\, Saks\, Schramm\, and Servedio. While th
 e OSSS theorem says that every decision tree has an influential variable\,
  we show how every decision tree can be "pruned" so that every variable i
 n the resulting tree is influential.\n\nJoint work with Jane Lange\, Mingd
 a Qiao\, and Li-Yang Tan. To appear in FOCS 2021.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Merav Parter (Weizmann Institute of Science)
DTSTART:20220223T180000Z
DTEND:20220223T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/35
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/35/">New Diameter Reducing Shortcuts: Breaking the $O(\\sqrt{n})$ Barri
 er</a>\nby Merav Parter (Weizmann Institute of Science) as part of TCS+\n\
 n\nAbstract\nFor an $n$-vertex digraph $G=(V\,E)$\, a \\emph{shortcut set}
  is a (small) subset of edges $H$ taken from the transitive closure of $G$
  that\, when added to $G$\, guarantees that the diameter of $G \\cup H$ is s
 mall. Shortcut sets\, introduced by Thorup in 1993\, have a wide range of 
 applications in algorithm design\, especially in the context of parallel\,
  distributed and dynamic computation on directed graphs. A folklore result
  in this context shows that every $n$-vertex digraph admits a shortcut set
  of linear size (i.e.\, of $O(n)$ edges) that reduces the diameter to $\\w
 idetilde{O}(\\sqrt{n})$. Despite extensive research over the years\, the q
 uestion of whether one can reduce the diameter to $o(\\sqrt{n})$ with $\\w
 idetilde{O}(n)$ shortcut edges has been left open.\n\nIn this talk\, I wil
 l present the first improved diameter-sparsity tradeoff for this problem\,
  breaking the $\\sqrt{n}$ diameter barrier. Specifically\, we show an $O(n
 ^{\\omega})$-time randomized algorithm for computing a linear shortcut set
  that reduces the diameter of the digraph to $\\widetilde{O}(n^{1/3})$. We
  also extend our algorithms to provide improved $(\\beta\,\\epsilon)$ hops
 ets for $n$-vertex weighted directed graphs.\n\nJoint work with Shimon Kog
 an.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roei Tell (Institute for Advanced Study (IAS))
DTSTART:20220309T180000Z
DTEND:20220309T190000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/36
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/36/">Hardness vs Randomness\, Revised: Uniform\, Non-Black-Box\, and In
 stance-Wise</a>\nby Roei Tell (Institute for Advanced Study (IAS)) as part
  of TCS+\n\n\nAbstract\nIn this talk I'll show how to revise the classical
  hardness-vs-randomness framework\, so that it can work in a non-black-box
  fashion. Specifically\, we will construct derandomization algorithms that
  don't rely on classical PRGs\, and instead "extract pseudorandomness" fro
 m the given input on which we want to simulate the probabilistic machine.\
 n\nUsing a non-black-box approach allows us to deduce stronger conclusions
  (or alternatively rely on weaker hypotheses)\, compared to classical appr
 oaches. In one instantiation of the new framework\, we reveal a close conn
 ection between the promiseBPP = promiseP conjecture and a new type of unif
 orm lower bounds. In another instantiation\, we simulate probabilistic alg
 orithms with essentially no observable runtime overhead\, under plausible 
 hypotheses.\n\nBased on a joint work with Lijie Chen.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shuichi Hirahara (NII (Japan))
DTSTART:20220323T220000Z
DTEND:20220323T230000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/37
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/37/">Excluding PH Pessiland</a>\nby Shuichi Hirahara (NII (Japan)) as p
 art of TCS+\n\n\nAbstract\nPessiland is the worst of Impagliazzo's five po
 ssible worlds: it is a world where NP is hard on average and pseudorandom 
 generators do not exist. Excluding Pessiland (i.e.\, showing the equivalen
 ce between average-case hardness of NP and the existence of pseudorandom g
 enerators) is one of the most important open questions in theoretical comp
 uter science.\n\nIn this talk\, we propose to consider PH (Polynomial Hier
 archy) variants of Impagliazzo's five possible worlds.  Our main result is
  to unconditionally rule out PH variants of Pessiland. I will also mention
  recent progress on excluding PH Heuristica: average-case hardness of PH f
 ollows from exponential worst-case hardness of PH.\n\nBased on joint works
  with Rahul Santhanam.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jessica Sorrell (UCSD)
DTSTART:20220406T170000Z
DTEND:20220406T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/38
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/38/">Reproducibility in Learning</a>\nby Jessica Sorrell (UCSD) as part
  of TCS+\n\n\nAbstract\nReproducibility is vital to ensuring scientific co
 nclusions are reliable\, but failures of reproducibility have been a major
  issue in nearly all scientific areas of study in recent decades. A key is
 sue underlying the reproducibility crisis is the explosion of methods for 
 data generation\, screening\, testing\, and analysis\, where\, crucially\,
  only the combinations producing the most significant results are reported
 . Such practices (also known as p-hacking\, data dredging\, and researcher
  degrees of freedom) can lead to erroneous findings that appear to be sign
 ificant\, but that don’t hold up when other researchers attempt to repli
 cate them.  \n\nIn this talk\, we introduce a new notion of reproducibilit
 y for randomized algorithms. This notion ensures that with high probabilit
 y\, an algorithm returns exactly the same output when run with two samples
  from the same distribution. Despite the exceedingly strong demand of repr
 oducibility\, there are efficient reproducible algorithms for several fund
 amental problems in statistics and learning. We show that any statistical 
 query algorithm can be made reproducible with a modest increase in sample 
 complexity\, and we use this to construct reproducible algorithms for find
 ing approximate heavy-hitters and medians. Using these ideas\, we give the
  first reproducible algorithm for learning halfspaces via a reproducible w
 eak learner and a reproducible boosting algorithm. We initiate the study o
 f lower bounds and inherent tradeoffs for reproducible algorithms\, giving
  nearly tight sample complexity upper and lower bounds for reproducible ve
 rsus non-reproducible SQ algorithms. Finally\, we discuss connections to o
 ther well-studied notions of algorithmic stability\, such as differential 
 privacy.\n\nJoint work with Russell Impagliazzo (UCSD)\, Rex Lei (UCSD)\, 
 and Toniann Pitassi (Columbia University)\, to appear in STOC 2022.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rasmus Kyng (ETH Zürich)
DTSTART:20220420T170000Z
DTEND:20220420T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/39
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/39/">Almost-Linear Time Algorithms for Maximum Flow and More</a>\nby Ra
 smus Kyng (ETH Zürich) as part of TCS+\n\n\nAbstract\nWe give the first a
 lmost-linear time algorithm for computing exact maximum flows and minimum-
 cost flows on directed graphs. By well-known reductions\, this implies alm
 ost-linear time algorithms for several problems including bipartite matchi
 ng\, optimal transport\, and undirected vertex connectivity.\n\nOur algori
 thm uses a new Interior Point Method (IPM) that builds the optimal flow as
  a sequence of an almost-linear number of approximate undirected minimum-r
 atio cycles\, each of which is computed and processed very efficiently usi
 ng a new dynamic data structure.\n\nOur framework extends to give an almos
 t-linear time algorithm for computing flows that minimize general edge-sep
 arable convex functions to high accuracy. This gives the first almost-line
 ar time algorithm for several problems including entropy-regularized optim
 al transport\, matrix scaling\, p-norm flows\, and isotonic regression.\n\
 nJoint work with Li Chen\, Yang Liu\, Richard Peng\, Maximilian Probst Gut
 enberg\, and Sushant Sachdeva.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thatchaphol Saranurak (University of Michigan)
DTSTART:20220518T170000Z
DTEND:20220518T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/40
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/40/">All-pairs minimum cuts in nearly quadratic time: a tutorial</a>\nb
 y Thatchaphol Saranurak (University of Michigan) as part of TCS+\n\n\nAbst
 ract\nWe recently showed an algorithm for computing all-pairs minimum cuts
  (or\, more precisely\, the Gomory-Hu tree) in $\\tilde{O}(n^2)$ time in w
 eighted graphs and even almost-linear time in unweighted graphs. For weigh
 ted graphs\, this is the first improvement over the 60-year-old algorithm 
 by Gomory and Hu. Thus\, surprisingly\, computing all-pairs minimum cuts s
 eems to be strictly easier than computing all-pairs shortest paths\, which
  is conjectured to require $n^{3-o(1)}$ time.\n\nI will give a tutorial on
  the techniques behind our new result\, one of which is called "isolating 
 cuts". Then\, I will survey recent progress in fast minimum cut algorithms
  and discuss open problems.\n\nJoint work with Amir Abboud\, Robert Krauth
 gamer\, Jason Li\, Debmalya Panigrahi\, and Ohad Trabelsi.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vera Traub (ETH Zürich)
DTSTART:20220504T170000Z
DTEND:20220504T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/41
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/41/">Recent Developments in Graph Augmentation Problems</a>\nby Vera Tr
 aub (ETH Zürich) as part of TCS+\n\n\nAbstract\nAugmentation problems are
  a fundamental class of network design problems. They ask about the cheape
 st way to increase the (edge-)connectivity of a graph by adding edges amon
 g a given set of options. One of the most elementary and intensely studied
  augmentation problems is the (Weighted) Tree Augmentation Problem. Here\,
  a spanning tree has to be augmented into a 2-edge-connected graph.\n\nCla
 ssic techniques for network design yield 2-approximation algorithms for a 
 wide class of augmentation problems. For the Unweighted Tree Augmentation 
 Problem\, better-than-2 approximations have been known for more than 20 yea
 rs. However\, only recently have the first better-than-2 approximations bee
 n found for the more general Unweighted Connectivity Augmentation Problem an
 d Weighted Tree Augmentation Problem. In this talk we will discuss these rec
 ent advances.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Inbal Talgam-Cohen (Technion – Israel Institute of Technology)
DTSTART:20220601T170000Z
DTEND:20220601T180000Z
DTSTAMP:20260404T110824Z
UID:TCSPlus/42
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/TCSPl
 us/42/">Algorithmic contract theory</a>\nby Inbal Talgam-Cohen (Technion 
 – Israel Institute of Technology) as part of TCS+\n\n\nAbstract\nAlgorit
 hms increasingly interact with strategic\, self-interested players\; their
  design must take player incentives into account or risk being "gamed" and
  failing miserably. The algorithmic game theory literature traditionally f
 ocused on "mechanisms" - algorithms that incentivize players to truthfully
  report the input. In this talk we shift focus from mechanisms to "contrac
 ts"\, which are concerned with the algorithm's output and players' incenti
 ves to carry it out. The goal of this talk is to describe where we're at i
 n forming a new algorithmic theory of contract design.\n\nI will demonstra
 te how algorithmic approaches – in particular the approach of beyond wor
 st-case analysis – can shed light on a classic economic puzzle regarding
  simple contracts. Within the realm of incentives and learning\, I will di
 scuss how classifiers induce incentives and show a formal relation to cont
 racts.\n\nBased on joint works with Tal Alon\, Magdalen Dobson\, Paul Duet
 ting\, Ron Lavi\, Ariel Procaccia\, Tim Roughgarden\, Elisheva Shamash and
  Jamie Tucker-Foltz.\n
LOCATION:https://stable.researchseminars.org/talk/TCSPlus/42/
END:VEVENT
END:VCALENDAR
