Poster sessions


The poster sessions will be hosted in Houston Hall.

Session 1 (June 15, Thursday, 13:00-14:00)
Session 2 (June 15, Thursday, 16:30-17:30)
Session 3 (June 16, Friday, 13:00-14:00)
Session 4 (June 16, Friday, 16:15-17:15)

Paper # | Session # | Poster # | Authors | Title
2 | 3 | 2 | Kwangjun Ahn, Zakaria Mhammedi, Horia Mania, Zhang-Wei Hong, Ali Jadbabaie | Model Predictive Control via On-Policy Imitation Learning
3 | 4 | 25 | Michelle Guo, Yifeng Jiang, Andrew Everett Spielberg, Jiajun Wu, Karen Liu | Benchmarking Rigid Body Contact Models
7 | 1 | 9 | Sourya Dey, Eric William Davis | DLKoopman: A deep learning software package for Koopman theory
8 | 1 | 14 | Alireza Farahmandi, Brian Reitz, Mark Debord, Douglas Philbrick, Katia Estabridis, Gary Hewer | Hyperparameter Tuning of an Off-Policy Reinforcement Learning Algorithm for H∞ Tracking Control
10 | 3 | 26 | Baris Kayalibay, Atanas Mirchev, Ahmed Agha, Patrick van der Smagt, Justin Bayer | Filter-Aware Model-Predictive Control
13 | 4 | 3 | Bence Zsombor Hadlaczky, Noémi Friedman, Béla Takarics, Balint Vanek | Wing shape estimation with Extended Kalman filtering and KalmanNet neural network of a flexible wing aircraft
15 | 1 | 10 | Killian Reed Wood, Emiliano Dall’Anese | Online Saddle Point Tracking with Decision-Dependent Data
16 | 1 | 5 | Yue Meng, Chuchu Fan | Hybrid Systems Neural Control with Region-of-Attraction Planner
18 | 1 | 13 | Aritra Mitra, Hamed Hassani, George J. Pappas | Linear Stochastic Bandits over a Bit-Constrained Channel
21 | 2 | 6 | Guanchun Tong, Michael Muehlebach | A Dynamical Systems Perspective on Discrete Optimization
22 | 4 | 7 | Ian Char, Joseph Abbate, Laszlo Bardoczi, Mark Boyer, Youngseog Chung, Rory Conlin, Keith Erickson, Viraj Mehta, Nathan Richner, Egemen Kolemen, Jeff Schneider | Offline Model-Based Reinforcement Learning for Tokamak Control
25 | 2 | 16 | Gautam Goel, Naman Agarwal, Karan Singh, Elad Hazan | Best of Both Worlds in Online Control: Competitive Ratio and Policy Regret
27 | 3 | 3 | Hengquan Guo, Zhu Qi, Xin Liu | Rectified Pessimistic-Optimistic Learning for Stochastic Continuum-armed Bandit with Constraints
28 | 1 | 16 | Patricia Pauli, Dennis Gramlich, Frank Allgöwer | Lipschitz constant estimation for 1D convolutional neural networks
29 | 4 | 1 | Han Wang, Leonardo Felipe Toso, James Anderson | FedSysID: A Federated Approach to Sample-Efficient System Identification
30 | 2 | 9 | Doumitrou Daniil Nimara, Mohammadreza Malek-Mohammadi, Petter Ogren, Jieqiang Wei, Vincent Huang | Model-Based Reinforcement Learning for Cavity Filter Tuning
32 | 2 | 10 | Tobias Enders, James Harrison, Marco Pavone, Maximilian Schiffer | Hybrid Multi-agent Deep Reinforcement Learning for Autonomous Mobility on Demand Systems
33 | 3 | 5 | Tahiya Salam, Alice Kate Li, M. Ani Hsieh | Online Estimation of the Koopman Operator Using Fourier Features
36 | 2 | 13 | Nick-Marios Kokolakis, Kyriakos G Vamvoudakis, Wassim Haddad | Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning
38 | 4 | 16 | Zifan Wang, Yulong Gao, Siyi Wang, Michael Zavlanos, Alessandro Abate, Karl Henrik Johansson | Policy Evaluation in Distributional LQR
39 | 4 | 13 | Sophia Huiwen Sun, Robin Walters, Jinxi Li, Rose Yu | Probabilistic Symmetry for Multi-Agent Dynamics
40 | 4 | 5 | Majid Khadiv, Avadesh Meduri, Huaijiang Zhu, Ludovic Righetti, Bernhard Schölkopf | Learning Locomotion Skills from MPC in Sensor Space
41 | 2 | 3 | Daniel Tabas, Ahmed S Zamzam, Baosen Zhang | Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning
43 | 4 | 21 | Rahel Rickenbach, Elena Arcari, Melanie Zeilinger | Time Dependent Inverse Optimal Control using Trigonometric Basis Functions
44 | 3 | 22 | Antoine Leeman, Johannes Köhler, Samir Bennani, Melanie Zeilinger | Predictive safety filter using system level synthesis
45 | 2 | 14 | Hancheng Min, Enrique Mallada | Learning Coherent Clusters in Weakly-Connected Network Systems
48 | 2 | 4 | Elie Aljalbout, Maximilian Karl, Patrick van der Smagt | CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces
51 | 1 | 11 | Yingying Li, James A Preiss, Na Li, Yiheng Lin, Adam Wierman, Jeff S Shamma | Online switching control with stability and regret guarantees
53 | 1 | 7 | Kong Yao Chee, M. Ani Hsieh, Nikolai Matni | Learning-enhanced Nonlinear Model Predictive Control using Knowledge-based Neural Ordinary Differential Equations and Deep Ensembles
54 | 2 | 5 | Arnob Ghosh | Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation
55 | 3 | 19 | Tsun-Hsuan Wang, Wei Xiao, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus | Learning Stability Attention in Vision-based End-to-end Driving Policies
56 | 4 | 18 | Alessio Russo, Alexandre Proutiere | Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems
58 | 3 | 14 | Lukas Kesper, Sebastian Trimpe, Dominik Baumann | Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control
59 | 2 | 23 | Sydney Dolan, Siddharth Nayak, Hamsa Balakrishnan | Satellite Navigation and Coordination with Limited Information Sharing
61 | 1 | 18 | Alex Devonport, Peter Seiler, Murat Arcak | Frequency Domain Gaussian Process Models for H∞ Uncertainties
62 | 3 | 10 | Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan | Regret Guarantees for Online Deep Control
66 | 1 | 25 | Deepan Muthirayan, Chinmay Maheshwari, Pramod Khargonekar, Shankar Sastry | Competing Bandits in Time Varying Matching Markets
67 | 3 | 18 | Zhaolin Ren, Yang Zheng, Maryam Fazel, Na Li | On Controller Reduction in Linear Quadratic Gaussian Control with Performance Bounds
68 | 2 | 29 | Rishi Rani, Massimo Franceschetti | Detection of Man-in-the-Middle Attacks in Model-Free Reinforcement Learning
69 | 3 | 17 | Guanru Pan, Ruchuan Ou, Timm Faulwasser | Data-driven Stochastic Output-Feedback Predictive Control: Recursive Feasibility through Interpolated Initial Conditions
70 | 4 | 15 | Joshua Pilipovsky, Vignesh Sivaramakrishnan, Meeko Oishi, Panagiotis Tsiotras | Probabilistic Verification of ReLU Neural Networks via Characteristic Functions
72 | 1 | 2 | Panagiotis Vlantis, Leila Bridgeman, Michael Zavlanos | Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe
73 | 2 | 1 | Francesco De Lellis, Marco Coraggio, Giovanni Russo, Mirco Musolesi, Mario di Bernardo | CT-DQN: Control-Tutored Deep Reinforcement Learning
75 | 4 | 26 | Reza Khodayi-mehr, Pingcheng Jian, Michael Zavlanos | Physics-Guided Active Learning of Environmental Flow Fields
76 | 3 | 13 | Xiyu Deng, Christian Kurniawan, Adhiraj Chakraborty, Assane Gueye, Niangjun Chen, Yorie Nakahira | A Learning and Control Perspective for Microfinance
77 | 4 | 11 | SooJean Han, Soon-Jo Chung, Johanna Gustafson | Congestion Control of Vehicle Traffic Networks by Learning Structural and Temporal Patterns
77 | 1 | 1 | SooJean Han, Soon-Jo Chung, Johanna Gustafson | Congestion Control of Vehicle Traffic Networks by Learning Structural and Temporal Patterns
78 | 4 | 27 | Adrien Banse, Licio Romao, Alessandro Abate, Raphael Jungers | Data-driven memory-dependent abstractions of dynamical systems
79 | 2 | 28 | Jan Achterhold, Philip Tobuschat, Hao Ma, Dieter Büchler, Michael Muehlebach, Joerg Stueckler | Black-Box vs. Grey-Box: A Case Study on Learning Ping Pong Ball Trajectory Prediction with Spin and Impacts
80 | 3 | 4 | Kehan Long, Yinzhuang Yi, Jorge Cortes, Nikolay Atanasov | Distributionally Robust Lyapunov Function Search Under Uncertainty
82 | 3 | 27 | Saminda Wishwajith Abeyruwan, Alex Bewley, Nicholas Matthew Boffi, Krzysztof Marcin Choromanski, David B D’Ambrosio, Deepali Jain, Pannag R Sanketi, Anish Shankar, Vikas Sindhwani, Sumeet Singh, Jean-Jacques Slotine, Stephen Tu | Agile Catching with Whole-Body MPC and Blackbox Policy Learning
83 | 3 | 1 | Tanya Veeravalli, Maxim Raginsky | Nonlinear Controllability and Function Representation by Neural Stochastic Differential Equations
86 | 3 | 20 | Harrison Delecki, Anthony Corso, Mykel Kochenderfer | Model-based Validation as Probabilistic Inference
87 | 4 | 17 | Xu Zhang, Marcos Vasconcelos | Top-k data selection via distributed sample quantile inference
90 | 4 | 20 | An Thai Le, Kay Hansel, Jan Peters, Georgia Chalvatzaki | Hierarchical Policy Blending As Optimal Transport
91 | 4 | 22 | Weiye Zhao, Tairan He, Changliu Liu | Probabilistic Safeguard for Reinforcement Learning Using Safety Index Guided Gaussian Process Models
94 | 4 | 12 | Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots | Continuous Versatile Jumping Using Learned Action Residuals
95 | 2 | 18 | Armand Comas, Christian Fernandez Lopez, Sandesh Ghimire, Haolin Li, Mario Sznaier, Octavia Camps | Learning Object-Centric Dynamic Modes from Video and Emerging Properties
96 | 4 | 19 | Valentin Duruisseaux, Thai P. Duong, Melvin Leok, Nikolay Atanasov | Lie Group Forced Variational Integrator Networks for Learning and Control of Robot Systems
97 | 4 | 4 | Luigi Campanaro, Daniele De Martini, Siddhant Gangapurwala, Wolfgang Merkt, Ioannis Havoutis | Roll-Drop: accounting for observation noise with a single parameter
98 | 3 | 12 | Wenliang Liu, Kevin Leahy, Zachary Serlin, Calin Belta | CatlNet: Learning Communication and Coordination Policies from CaTL+ Specifications
100 | 3 | 24 | Yuyang Zhang, Runyu Zhang, Gen Li, Yuantao Gu, Na Li | Multi-Agent Reinforcement Learning with Reward Delays
101 | 1 | 20 | Cyrus Neary, Ufuk Topcu | Compositional Learning of Dynamical System Models Using Port-Hamiltonian Neural Networks
102 | 4 | 24 | Prithvi Akella, Skylar X. Wei, Joel W. Burdick, Aaron Ames | Learning Disturbances Online for Risk-Aware Control: Risk-Aware Flight with Less Than One Minute of Data
104 | 1 | 3 | Muhammad Abdullah Naeem, Miroslav Pajic | Transportation-Inequalities, Lyapunov Stability and Sampling for Dynamical Systems on Continuous State Space
106 | 1 | 28 | Swaminathan Gurumurthy, J Zico Kolter, Zachary Manchester | Value-Gradient Updates Using Approximate Simulator Gradients to Speed Up Model-free Reinforcement Learning
107 | 3 | 28 | Swaminathan Gurumurthy, Zachary Manchester, J Zico Kolter | Practical Critic Gradient based Actor Critic for On-Policy Reinforcement Learning
109 | 4 | 8 | Yaqi Duan, Martin J. Wainwright | Policy evaluation from a single path: Multi-step methods, mixing and mis-specification
110 | 1 | 19 | Srinath Tankasala, Mitch Pryor | Accelerating Trajectory Generation for Quadrotors Using Transformers
112 | 2 | 27 | Thomas TCK Zhang, Katie Kang, Bruce D Lee, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni | Multi-Task Imitation Learning for Linear Dynamical Systems
113 | 3 | 6 | Zihao Zhou, Rose Yu | Automatic Integration for Fast and Interpretable Neural Point Processes
114 | 2 | 7 | Paula Gradu, Elad Hazan, Edgar Minasyan | Adaptive Regret for Control of Time-Varying Dynamics
115 | 1 | 12 | Mingyu Cai, Calin Belta, Cristian Ioan Vasile, Erfan Aasi | Time-Incremental Learning of Temporal Logic Classifiers Using Decision Trees
116 | 3 | 7 | Leilei Cui, Tamer Başar, Zhong-Ping Jiang | A Reinforcement Learning Look at Risk-Sensitive Linear Quadratic Gaussian Control
117 | 3 | 25 | Thomas Beckers, Qirui Wu, George J. Pappas | Physics-enhanced Gaussian Process Variational Autoencoder
118 | 2 | 12 | Guillaume O Berger, Sriram Sankaranarayanan | Template-Based Piecewise Affine Regression
119 | 1 | 26 | Spencer Hutchinson, Berkay Turan, Mahnoosh Alizadeh | The Impact of the Geometric Properties of the Constraint Set in Safe Optimization with Bandit Feedback
120 | 2 | 25 | Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Yannis Kevrekidis, Mahyar Fazlyab | Certified Invertibility in Neural Networks via Mixed-Integer Programming
121 | 1 | 30 | Sampada Deglurkar, Michael H Lim, Johnathan Tucker, Zachary N Sunberg, Aleksandra Faust, Claire Tomlin | Compositional Learning-based Planning for Vision POMDPs
124 | 1 | 8 | Paul Griffioen, Alex Devonport, Murat Arcak | Probabilistic Invariance for Gaussian Process State Space Models
125 | 1 | 24 | Xiaobing Dai, Armin Lederer, Zewen Yang, Sandra Hirche | Gaussian Process-Based Event-Triggered Online Learning with Computational Delays for Control of Unknown Systems
126 | 1 | 22 | Kaiyuan Tan, Jun Wang, Yiannis Kantaros | Targeted Adversarial Attacks against Neural Network Trajectory Predictors
128 | 2 | 22 | Lauren E Conger, Sydney Vernon, Eric Mazumdar | Designing System Level Synthesis Controllers for Nonlinear Systems with Stability Guarantees
131 | 2 | 17 | Taha Entesari, Mahyar Fazlyab | Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes
132 | 4 | 6 | Yashaswini Murthy, Mehrdad Moharrami, R. Srikant | Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs
133 | 2 | 19 | Muhammad Abdullah Naeem | Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach
134 | 1 | 27 | Wenqi Cui, Linbin Huang, Weiwei Yang, Baosen Zhang | Efficient Reinforcement Learning Through Trajectory Generation
136 | 1 | 4 | Zhuoyuan Wang, Yorie Nakahira | A Generalizable Physics-informed Learning Framework for Risk Probability Estimation
137 | 3 | 15 | Luke Bhan, Yuanyuan Shi, Miroslav Krstic | Operator Learning for Nonlinear Adaptive Control
139 | 3 | 9 | Yan Jiang, Wenqi Cui, Baosen Zhang, Jorge Cortes | Equilibria of Fully Decentralized Learning in Networked Systems
140 | 2 | 21 | Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo Jovanovic | Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning
142 | 2 | 8 | Anushri Dixit, Lars Lindemann, Skylar Wei, Matthew Cleaveland, George J. Pappas, Joel W. Burdick | Adaptive Conformal Prediction for Motion Planning among Dynamic Agents
144 | 4 | 2 | Fernando Castañeda, Haruki Nishimura, Rowan Thomas McAllister, Koushil Sreenath, Adrien Gaidon | In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States
145 | 2 | 20 | Songyuan Zhang, Yumeng Xiu, Guannan Qu, Chuchu Fan | Compositional Neural Certificates for Networked Dynamical Systems
146 | 3 | 16 | Yecheng Jason Ma, Kausik Sivakumar, Jason Yan, Osbert Bastani, Dinesh Jayaraman | Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching
147 | 2 | 26 | Yitian Chen, Timothy Molloy, Tyler Summers, Iman Shames | Regret Analysis of Online LQR Control via Trajectory Prediction and Tracking
148 | 1 | 17 | Tejas Pagare, Konstantin Avrachenkov, Vivek Borkar | Full Gradient Deep Reinforcement Learning for Average-Reward Criterion
149 | 1 | 23 | Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato | End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization
150 | 1 | 15 | Keyan Miao, Konstantinos Gatsis | Learning Robust State Observers using Neural ODEs
151 | 4 | 28 | Serban Sabau, Yifei Zhang, Sourav Kumar Ukil, Andrei Sperila | Sample Complexity for Evaluating the Robust Linear Observer’s Performance under Coprime Factors Uncertainty
153 | 3 | 23 | Sarper Aydin, Ceyhun Eksin | Policy Gradient Play with Networked Agents in Markov Potential Games
154 | 2 | 11 | Sheng Cheng, Lin Song, Minkyung Kim, Shenlong Wang, Naira Hovakimyan | DiffTune+: Hyperparameter-Free Auto-Tuning using Auto-Differentiation
156 | 3 | 8 | Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn | Contrastive Example-Based Control
157 | 2 | 15 | Orhan Eren Akgun, Arif Kerem Dayi, Stephanie Gil, Angelia Nedich | Learning Trust Over Directed Graphs in Multiagent Systems
158 | 1 | 21 | Yaofeng Desmond Zhong, Jiequn Han, Biswadip Dey, Georgia Olympia Brikis | Improving Gradient Computation for Differentiable Physics Simulation with Contacts
160 | 4 | 23 | Xunbi Ji, Gabor Orosz | Learning the dynamics of autonomous nonlinear delay systems
161 | 4 | 9 | Yikun Cheng, Pan Zhao, Naira Hovakimyan | Safe Model-Free Reinforcement Learning using Disturbance-Observer-Based Control Barrier Functions
163 | 1 | 29 | Kai-Chieh Hsu, Duy Phuong Nguyen, Jaime Fernández Fisac | ISAACS: Iterative Soft Adversarial Actor-Critic for Safety
165 | 3 | 21 | Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee | Guaranteed Conformance of Neurosymbolic Models to Natural Constraints
166 | 4 | 10 | Pengzhi Yang, Shumon Koga, Arash Asgharivaskasi, Nikolay Atanasov | Policy Learning for Active Target Tracking over Continuous SE(3) Trajectories
167 | 2 | 24 | Yi Tian, Kaiqing Zhang, Russ Tedrake, Suvrit Sra | Can Direct Latent Model Learning Solve Linear Quadratic Gaussian Control?
168 | 2 | 2 | Bilgehan Sel, Ahmad Tawaha, Yuhao Ding, Ruoxi Jia, Bo Ji, Javad Lavaei, Ming Jin | Learning-to-Learn to Guide Random Search: Derivative-Free Meta Blackbox Optimization on Manifold
171 | 1 | 6 | Adithya Ramesh, Balaraman Ravindran | Physics-Informed Model-Based Reinforcement Learning
172 | 4 | 14 | Saber Jafarpour, Akash Harapanahalli, Samuel Coogan | Interval Reachability of Nonlinear Dynamical Systems with Neural Network Controllers
173 | 3 | 11 | Karthik Elamvazhuthi, Xuechen Zhang, Samet Oymak, Fabio Pasqualetti | Learning on Manifolds: Universal Approximations Properties using Geometric Controllability Conditions for Neural ODEs