BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Amihay Hanany (Imperial College London)
DTSTART:20211021T070000Z
DTEND:20211021T081500Z
DTSTAMP:20260404T110656Z
UID:UNISTMath/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/UNIST
 Math/1/">Magnetic Quivers and Physics at Strongly Coupled Quantum Field Th
 eories</a>\nby Amihay Hanany (Imperial College London) as part of UNIST Ma
 thematical Sciences Seminar Series\n\n\nAbstract\nSupersymmetric gauge the
 ories are an excellent medium for studying problems in both mathematics an
 d physics. Quiver gauge theories have seen a surge of activity driven by t
 wo important concepts\, called magnetic quivers and Hasse (phase) diagrams
 .\nThe first helps in understanding the physics of strongly coupled gauge t
 heories and exotic theories with tensionless strings. The second gives inv
 aluable information about the phase structure of gauge theories\, in analo
 gy with phases of water. The talk will review these developments and expla
 in their significance.\n
LOCATION:https://stable.researchseminars.org/talk/UNISTMath/1/
END:VEVENT
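
An aside on the second concept named in the abstract above: in the general
order-theoretic sense, a Hasse diagram records only the covering relations
of a partial order. The Python sketch below illustrates just that
combinatorial construction, not the talk's physics-specific phase diagrams;
the example poset (subsets of {0, 1, 2} under inclusion) is an illustrative
assumption, not taken from the talk.

from itertools import combinations

def hasse_edges(elements, leq):
    """Covering pairs (a, b): a < b with no element strictly in between."""
    def lt(a, b):
        return leq(a, b) and a != b
    return [(a, b)
            for a in elements for b in elements
            if lt(a, b) and not any(lt(a, c) and lt(c, b) for c in elements)]

# Illustrative poset: all subsets of {0, 1, 2} ordered by inclusion.
subsets = [frozenset(s) for r in range(4) for s in combinations(range(3), r)]
for a, b in hasse_edges(subsets, lambda a, b: a <= b):
    print(sorted(a), "->", sorted(b))

Dropping the non-covering comparisons is what makes the diagram readable:
the full order relation is recovered as the transitive closure of these
edges.
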
BEGIN:VEVENT
SUMMARY:Fabian Ruehle (Northeastern University)
DTSTART:20240517T070000Z
DTEND:20240517T080000Z
DTSTAMP:20260404T110656Z
UID:UNISTMath/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/UNIST
 Math/2/">Kolmogorov-Arnold Networks</a>\nby Fabian Ruehle (Northeastern Un
 iversity) as part of UNIST Mathematical Sciences Seminar Series\n\n\nAbstr
 act\nWe introduce Kolmogorov-Arnold Networks (KANs) as an alternative to s
 tandard feed-forward neural networks. KANs are based on the Kolmogorov-Arn
 old representation theorem\, which means for our purposes that we can repr
 esent any function we want to learn by a weighted sum over basis functions
 \, taken to be splines. In contrast to standard MLPs\, the function basis o
 f KANs is fixed to be piecewise polynomial rather than a combination of we
 ights and non-linearities\, and we only learn the parameters that control t
 he individual splines. While this is more expensive than a standard MLP\, K
 ANs have two properties that can offset this cost. First\, KANs can typica
 lly work with far fewer parameters. Second\, they exhibit better neural sc
 aling laws\, meaning the error decreases faster than for MLPs as the numbe
 r of parameters increases. Fewer parameters also mean that KANs are much m
 ore interpretable\, especially when combined with the sparsification and p
 runing techniques we introduce. This makes KANs interesting as tools for s
 ymbolic regression and for scientific discovery. We discuss an example fro
 m knot theory\, where we could recover (trivial and non-trivial) relations a
 mong knot invariants.\n\nRegister for Zoom link:\nhttps://us06web.zoom.us/m
 eeting/register/tZMkcOqtrDMuGNAQcMlvp3-MJwcWXVU6fzXl\n
LOCATION:https://stable.researchseminars.org/talk/UNISTMath/2/
END:VEVENT
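
A hedged aside on the mechanism the abstract above describes: a KAN edge
learns only the coefficients of a fixed spline basis. The Python sketch
below is a minimal illustration of that idea, not the speaker's code; it
uses degree-1 "hat" splines fitted by least squares, whereas practical KANs
use higher-order B-splines trained by gradient descent, and the target
function, grid size, and sample count are illustrative assumptions.

import numpy as np

def hat_basis(x, knots):
    """Degree-1 spline (hat) basis on a uniform grid, shape (len(x), len(knots))."""
    h = knots[1] - knots[0]  # uniform knot spacing
    # Each hat equals 1 at its own knot and falls to 0 at the neighbours.
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

# Illustrative 1-D target the learned "edge function" should approximate.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 200))
y = np.sin(3.0 * x) + 0.05 * rng.normal(size=x.size)

knots = np.linspace(-1.0, 1.0, 12)              # fixed basis: 12 hat splines
B = hat_basis(x, knots)                         # design matrix, shape (200, 12)
coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)  # learn spline weights only

print("max abs fit error:", np.max(np.abs(B @ coeffs - y)))

Composing many such learned univariate functions along the edges of a
network, with plain sums at the nodes, is what replaces the MLP's
combination of weight matrices and fixed non-linearities.
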
END:VCALENDAR
