BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:M. Graham\, A. Linot (Univ. of Wisconsin-Madison)
DTSTART:20211020T190000Z
DTEND:20211020T200000Z
DTSTAMP:20260404T111214Z
UID:CNSwebinar/1
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CNSwe
 binar/1/">Data-driven dimension reduction\, dynamic modeling\, and control
  of complex chaotic systems</a>\nby M. Graham\, A. Linot (Univ. of Wiscons
 in-Madison) as part of Georgia Tech CNS Nonlinear Webinar\n\n\nAbstract\nO
 ur overall aim is to combine ideas from dynamical systems theory and machi
 ne learning to develop and apply reduced-order models of flow processes wi
 th complex chaotic dynamics. A particular aim is a minimal description of 
 dynamics on manifolds of dimension much less than the nominal state dimens
 ion and use of these models to develop effective control strategies for re
 ducing energy dissipation.\n\nAlec Linot: Modeling chaotic spatiotemporal 
 dynamics with a minimal representation using Neural ODEs\n\nSolutions to d
 issipative partial differential equations that exhibit chaotic dynamics of
 ten evolve to attractors that exist on finite-dimensional manifolds. We de
 scribe a data-driven reduced order modelling (ROM) method to find the coor
 dinates on this manifold and an ordinary differential equation (ODE) in
  these coordinates. The manifold coordinates are found by reducing the s
 ystem dimension via an undercomplete autoencoder – a neural network that
  reduces then expands dimension – and an ODE is learned in this coordina
 te system with a Neural ODE. Learning an ODE\, instead of a discrete time-
 map\, allows us to evolve trajectories arbitrarily far forward\, and allow
 s for training on unevenly and/or widely spaced data in time. We test on t
 he Kuramoto-Sivashinsky equation for domain sizes that exhibit spatiotem
 poral chaos\, and find the ROM gives accurate short- and long-time statist
 ics with training data separated up to 0.7 Lyapunov times.\nhttps://arxiv.
 org/abs/2109.00060\n
LOCATION:https://stable.researchseminars.org/talk/CNSwebinar/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:K. Zeng\, Carlo Perez de Jesus\, Daniel Floryan (Univ. of Wisconsi
 n-Madison\, Univ. of Houston)
DTSTART:20211027T190000Z
DTEND:20211027T200000Z
DTSTAMP:20260404T111214Z
UID:CNSwebinar/2
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CNSwe
 binar/2/">Charting dynamics from data</a>\nby K. Zeng\, Carlo Perez de Jes
 us\, Daniel Floryan (Univ. of Wisconsin-Madison\, Univ. of Houston) as par
 t of Georgia Tech CNS Nonlinear Webinar\n\n\nAbstract\nKevin Zeng: Deep Re
 inforcement Learning Using Data-Driven Reduced-Order Models Discovers and 
 Stabilizes Low Dissipation Equilibria\n\nDeep reinforcement learning (RL)\
 , a data-driven method capable of discovering complex control strategies f
 or high-dimensional systems\, requires substantial interactions with the t
 arget system\, making it costly when the system is computationally or expe
 rimentally expensive (e.g. flow control). We mitigate this challenge by co
 mbining dimension reduction via an autoencoder with a neural ODE framework
  to learn a low-dimensional dynamical model\, which we substitute in place
  of the true system during RL training to efficiently estimate the control
  policy. We apply our method to data from the Kuramoto-Sivashinsky equatio
 n. With a goal of minimizing dissipation\, we extract control policies fro
 m the model using RL and show that the model-based strategies perform well
  on the full dynamical system and highlight that the RL agent discovers an
 d stabilizes a forced equilibrium solution\, despite never having been giv
 en explicit information about this state’s existence.\nhttps://arxiv.org
 /abs/2104.05437\n\nCarlo Perez de Jesus\nDept. of Chemical and Biological 
 Engineering\, Univ. of Wisconsin-Madison\n\nData-driven estimation of iner
 tial manifold dimension for chaotic Kolmogorov flow and time evolution on 
 the manifold\n\nModel reduction techniques have previously been applied to
  evolve the Navier-Stokes equations in time\; however\, finding the minimal
  dimension needed to correctly capture the key dynamics is not a trivial ta
 sk. To estimate this dimension we trained an undercomplete autoencoder on 
 weakly chaotic vorticity data (32x32 grid) from Kolmogorov flow simulation
 s\, tracking the reconstruction error as a function of dimension. We also 
 trained a discrete time stepper that evolves the reduced order model with 
 a nonlinear dense neural network. The trajectory travels in the vicinity o
 f relative periodic orbits (RPOs) followed by sporadic bursting events. At
  a dimension of five (as opposed to the full state dimension of 1024)\, the
  power input-dissipation probability density function is well-approximated\;
  Fourier coefficient evolution shows that the trajectory correctly captures
  the heteroclinic connections (bursts) between the different RPOs\, and th
 e prediction and true data track each other for approximately a Lyapunov t
 ime.\n\nDaniel Floryan https://dfloryan.github.io/\nMechanical Engineering
  at the University of Houston\n\nCharting dynamics from data\n\nWe often f
 ind ourselves working with systems for which governing equations are unkno
 wn\, or if they are known\, they may be high-dimensional to the point of b
 eing difficult to analyze and prohibitively expensive to make predictions 
 with. These difficulties\, together with the ever-increasing availability 
 of data\, have led to the new paradigm of data-driven model discovery. I w
 ill present recent work that fruitfully combines a classical idea from app
 lied mathematics with modern methods of machine learning to learn minimal 
 dynamical models directly from time series data. In full analogy with cart
 ography\, we learn a representation of a system as an atlas of charts. Thi
 s approach allows us to obtain dynamical models of the lowest possible dim
 ension\, leads to computational benefits\, and can separate state space in
 to regions of distinct behaviors.\nhttps://arxiv.org/abs/2108.05928\n
LOCATION:https://stable.researchseminars.org/talk/CNSwebinar/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vladimir Rosenhaus
DTSTART:20220412T150000Z
DTEND:20220412T160000Z
DTSTAMP:20260404T111214Z
UID:CNSwebinar/3
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CNSwe
 binar/3/">Feynman rules for wave turbulence</a>\nby Vladimir Rosenhaus as
  part of Georgia Tech CNS Nonlinear Webinar\n\n\nAbstract\nIt has long bee
 n known that weakly nonlinear field theories can have a late-time stationa
 ry state that is not the thermal state\, but a wave turbulent state (the K
 olmogorov-Zakharov state) with a far-from-equilibrium cascade of energy. W
 e go beyond the existence of the wave turbulent state\, studying fluctuati
 ons about the wave turbulent state. Specifically\, we take a classical fie
 ld theory with an arbitrary quartic interaction and add dissipation and Ga
 ussian-random forcing. Employing the path integral relation between stoch
 astic classical field theories and quantum field theories\, we give a pres
 cription\, in terms of Feynman diagrams\, for computing correlation funct
 ions in this system.  We explicitly compute the two-point and four-point f
 unctions of the field to next-to-leading order in the coupling. Through an
  appropriate choice of forcing and dissipation\, these correspond to corre
 lation functions in the wave turbulent state. As a check\, we reproduce t
 he next-to-leading order term in the kinetic equation.  The correlation fu
 nctions and corrections to the KZ state that we compute should\, in princi
 ple\, be experimentally measurable quantities.\n
LOCATION:https://stable.researchseminars.org/talk/CNSwebinar/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Zeb Rocklin
DTSTART:20220419T150000Z
DTEND:20220419T160000Z
DTSTAMP:20260404T111214Z
UID:CNSwebinar/4
DESCRIPTION:Title: <a href="https://stable.researchseminars.org/talk/CNSwe
 binar/4/">Rigidity percolation in a random tensegrity via analytic graph t
 heory</a>\nby Zeb Rocklin as part of Georgia Tech CNS Nonlinear Webinar\n\
 n\nAbstract\nTensegrities are mechanical structures that include cable-lik
 e elements that are strong and lightweight relative to rigid rods yet supp
 ort only extensile stress. From suspension bridges to the musculoskeletal 
 system to individual biological cells\, humanity makes excellent use of te
 nsegrities\, yet the sharply nonlinear response of cables presents serious
  challenges to analytical theory. Here we consider large tensegrity struct
 ures with randomly placed cables (and struts) overlaid on a regular rigid 
 backbone whose corresponding system of inequalities is reduced via analyti
 c theory to an exact graph theory. We identify a novel coordination number
  that controls two rigidity percolation transitions: one in which global i
 nteractions between cables first support external loads and one in which t
 he structure becomes fully rigid.  We show that even the addition of a few
  cables strongly modifies conventional rigidity percolation\, both by modi
 fying the sharpness of the transition and by introducing avalanche effects
  in which a single constraint can eliminate multiple floppy modes.\n
LOCATION:https://stable.researchseminars.org/talk/CNSwebinar/4/
END:VEVENT
END:VCALENDAR
