


Poster Session 1 (June 15, Thursday, 13:00-14:00)


Poster # | Paper # | Authors | Paper title
1 | 77 | SooJean Han, Soon-Jo Chung, Johanna Gustafson | Congestion Control of Vehicle Traffic Networks by Learning Structural and Temporal Patterns
2 | 72 | Panagiotis Vlantis, Leila Bridgeman, Michael Zavlanos | Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe
3 | 104 | Muhammad Abdullah Naeem, Miroslav Pajic | Transportation-Inequalities, Lyapunov Stability and Sampling for Dynamical Systems on Continuous State Space
4 | 136 | Zhuoyuan Wang, Yorie Nakahira | A Generalizable Physics-informed Learning Framework for Risk Probability Estimation
5 | 16 | Yue Meng, Chuchu Fan | Hybrid Systems Neural Control with Region-of-Attraction Planner
6 | 171 | Adithya Ramesh, Balaraman Ravindran | Physics-Informed Model-Based Reinforcement Learning
7 | 53 | Kong Yao Chee, M. Ani Hsieh, Nikolai Matni | Learning-enhanced Nonlinear Model Predictive Control using Knowledge-based Neural Ordinary Differential Equations and Deep Ensembles
8 | 124 | Paul Griffioen, Alex Devonport, Murat Arcak | Probabilistic Invariance for Gaussian Process State Space Models
9 | 7 | Sourya Dey, Eric William Davis | DLKoopman: A deep learning software package for Koopman theory
10 | 15 | Killian Reed Wood, Emiliano Dall'Anese | Online Saddle Point Tracking with Decision-Dependent Data
11 | 51 | Yingying Li, James A Preiss, Na Li, Yiheng Lin, Adam Wierman, Jeff S Shamma | Online switching control with stability and regret guarantees
12 | 115 | Mingyu Cai, Calin Belta, Cristian Ioan Vasile, Erfan Aasi | Time-Incremental Learning of Temporal Logic Classifiers Using Decision Trees
13 | 18 | Aritra Mitra, Hamed Hassani, George J. Pappas | Linear Stochastic Bandits over a Bit-Constrained Channel
14 | 8 | Alireza Farahmandi, Brian Reitz, Mark Debord, Douglas Philbrick, Katia Estabridis, Gary Hewer | Hyperparameter Tuning of an Off-Policy Reinforcement Learning Algorithm for H∞ Tracking Control
15 | 150 | Keyan Miao, Konstantinos Gatsis | Learning Robust State Observers using Neural ODEs
16 | 28 | Patricia Pauli, Dennis Gramlich, Frank Allgöwer | Lipschitz constant estimation for 1D convolutional neural networks
17 | 148 | Tejas Pagare, Konstantin Avrachenkov, Vivek Borkar | Full Gradient Deep Reinforcement Learning for Average-Reward Criterion
18 | 61 | Alex Devonport, Peter Seiler, Murat Arcak | Frequency Domain Gaussian Process Models for H∞ Uncertainties
19 | 110 | Srinath Tankasala, Mitch Pryor | Accelerating Trajectory Generation for Quadrotors Using Transformers
20 | 101 | Cyrus Neary, Ufuk Topcu | Compositional Learning of Dynamical System Models Using Port-Hamiltonian Neural Networks
21 | 158 | Yaofeng Desmond Zhong, Jiequn Han, Biswadip Dey, Georgia Olympia Brikis | Improving Gradient Computation for Differentiable Physics Simulation with Contacts
22 | 126 | Kaiyuan Tan, Jun Wang, Yiannis Kantaros | Targeted Adversarial Attacks against Neural Network Trajectory Predictors
23 | 149 | Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato | End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization
24 | 125 | Xiaobing Dai, Armin Lederer, Zewen Yang, Sandra Hirche | Gaussian Process-Based Event-Triggered Online Learning with Computational Delays for Control of Unknown Systems
25 | 66 | Deepan Muthirayan, Chinmay Maheshwari, Pramod Khargonekar, Shankar Sastry | Competing Bandits in Time Varying Matching Markets
26 | 119 | Spencer Hutchinson, Berkay Turan, Mahnoosh Alizadeh | The Impact of the Geometric Properties of the Constraint Set in Safe Optimization with Bandit Feedback
27 | 134 | Wenqi Cui, Linbin Huang, Weiwei Yang, Baosen Zhang | Efficient Reinforcement Learning Through Trajectory Generation
28 | 106 | Swaminathan Gurumurthy, J Zico Kolter, Zachary Manchester | Value-Gradient Updates Using Approximate Simulator Gradients to Speed Up Model-free Reinforcement Learning
29 | 163 | Kai-Chieh Hsu, Duy Phuong Nguyen, Jaime Fernández Fisac | ISAACS: Iterative Soft Adversarial Actor-Critic for Safety
30 | 121 | Sampada Deglurkar, Michael H Lim, Johnathan Tucker, Zachary N Sunberg, Aleksandra Faust, Claire Tomlin | Compositional Learning-based Planning for Vision POMDPs