The poster sessions will be held in Houston Hall.
Session 1 (June 15, Thursday, 13:00-14:00)
Session 2 (June 15, Thursday, 16:30-17:30)
Session 3 (June 16, Friday, 13:00-14:00)
Session 4 (June 16, Friday, 16:15-17:15)
| Paper # | Session # | Poster # | Authors | Title |
|---|---|---|---|---|
2 | 3 | 2 | Kwangjun Ahn, Zakaria Mhammedi, Horia Mania, Zhang-Wei Hong, Ali Jadbabaie | Model Predictive Control via On-Policy Imitation Learning |
3 | 4 | 25 | Michelle Guo, Yifeng Jiang, Andrew Everett Spielberg, Jiajun Wu, Karen Liu | Benchmarking Rigid Body Contact Models |
7 | 1 | 9 | Sourya Dey, Eric William Davis | DLKoopman: A deep learning software package for Koopman theory |
8 | 1 | 14 | Alireza Farahmandi, Brian Reitz, Mark Debord, Douglas Philbrick, Katia Estabridis, Gary Hewer | Hyperparameter Tuning of an Off-Policy Reinforcement Learning Algorithm for H∞ Tracking Control |
10 | 3 | 26 | Baris Kayalibay, Atanas Mirchev, Ahmed Agha, Patrick van der Smagt, Justin Bayer | Filter-Aware Model-Predictive Control |
13 | 4 | 3 | Bence Zsombor Hadlaczky, Noémi Friedman, Béla Takarics, Balint Vanek | Wing shape estimation with Extended Kalman filtering and KalmanNet neural network of a flexible wing aircraft |
15 | 1 | 10 | Killian Reed Wood, Emiliano Dall’Anese | Online Saddle Point Tracking with Decision-Dependent Data |
16 | 1 | 5 | Yue Meng, Chuchu Fan | Hybrid Systems Neural Control with Region-of-Attraction Planner |
18 | 1 | 13 | Aritra Mitra, Hamed Hassani, George J. Pappas | Linear Stochastic Bandits over a Bit-Constrained Channel |
21 | 2 | 6 | Guanchun Tong, Michael Muehlebach | A Dynamical Systems Perspective on Discrete Optimization |
22 | 4 | 7 | Ian Char, Joseph Abbate, Laszlo Bardoczi, Mark Boyer, Youngseog Chung, Rory Conlin, Keith Erickson, Viraj Mehta, Nathan Richner, Egemen Kolemen, Jeff Schneider | Offline Model-Based Reinforcement Learning for Tokamak Control |
25 | 2 | 16 | Gautam Goel, Naman Agarwal, Karan Singh, Elad Hazan | Best of Both Worlds in Online Control: Competitive Ratio and Policy Regret |
27 | 3 | 3 | Hengquan Guo, Zhu Qi, Xin Liu | Rectified Pessimistic-Optimistic Learning for Stochastic Continuum-armed Bandit with Constraints |
28 | 1 | 16 | Patricia Pauli, Dennis Gramlich, Frank Allgöwer | Lipschitz constant estimation for 1D convolutional neural networks |
29 | 4 | 1 | Han Wang, Leonardo Felipe Toso, James Anderson | FedSysID: A Federated Approach to Sample-Efficient System Identification |
30 | 2 | 9 | Doumitrou Daniil Nimara, Mohammadreza Malek-Mohammadi, Petter Ogren, Jieqiang Wei, Vincent Huang | Model-Based Reinforcement Learning for Cavity Filter Tuning |
32 | 2 | 10 | Tobias Enders, James Harrison, Marco Pavone, Maximilian Schiffer | Hybrid Multi-agent Deep Reinforcement Learning for Autonomous Mobility on Demand Systems |
33 | 3 | 5 | Tahiya Salam, Alice Kate Li, M. Ani Hsieh | Online Estimation of the Koopman Operator Using Fourier Features |
36 | 2 | 13 | Nick-Marios Kokolakis, Kyriakos G Vamvoudakis, Wassim Haddad | Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning |
38 | 4 | 16 | Zifan Wang, Yulong Gao, Siyi Wang, Michael Zavlanos, Alessandro Abate, Karl Henrik Johansson | Policy Evaluation in Distributional LQR |
39 | 4 | 13 | Sophia Huiwen Sun, Robin Walters, Jinxi Li, Rose Yu | Probabilistic Symmetry for Multi-Agent Dynamics |
40 | 4 | 5 | Majid Khadiv, Avadesh Meduri, Huaijiang Zhu, Ludovic Righetti, Bernhard Schölkopf | Learning Locomotion Skills from MPC in Sensor Space |
41 | 2 | 3 | Daniel Tabas, Ahmed S Zamzam, Baosen Zhang | Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning |
43 | 4 | 21 | Rahel Rickenbach, Elena Arcari, Melanie Zeilinger | Time Dependent Inverse Optimal Control using Trigonometric Basis Functions |
44 | 3 | 22 | Antoine Leeman, Johannes Köhler, Samir Bennani, Melanie Zeilinger | Predictive safety filter using system level synthesis |
45 | 2 | 14 | Hancheng Min, Enrique Mallada | Learning Coherent Clusters in Weakly-Connected Network Systems |
48 | 2 | 4 | Elie Aljalbout, Maximilian Karl, Patrick van der Smagt | CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces |
51 | 1 | 11 | Yingying Li, James A Preiss, Na Li, Yiheng Lin, Adam Wierman, Jeff S Shamma | Online switching control with stability and regret guarantees |
53 | 1 | 7 | Kong Yao Chee, M. Ani Hsieh, Nikolai Matni | Learning-enhanced Nonlinear Model Predictive Control using Knowledge-based Neural Ordinary Differential Equations and Deep Ensembles |
54 | 2 | 5 | Arnob Ghosh | Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation |
55 | 3 | 19 | Tsun-Hsuan Wang, Wei Xiao, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus | Learning Stability Attention in Vision-based End-to-end Driving Policies |
56 | 4 | 18 | Alessio Russo, Alexandre Proutiere | Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems |
58 | 3 | 14 | Lukas Kesper, Sebastian Trimpe, Dominik Baumann | Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control |
59 | 2 | 23 | Sydney Dolan, Siddharth Nayak, Hamsa Balakrishnan | Satellite Navigation and Coordination with Limited Information Sharing |
61 | 1 | 18 | Alex Devonport, Peter Seiler, Murat Arcak | Frequency Domain Gaussian Process Models for H∞ Uncertainties |
62 | 3 | 10 | Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan | Regret Guarantees for Online Deep Control |
66 | 1 | 25 | Deepan Muthirayan, Chinmay Maheshwari, Pramod Khargonekar, Shankar Sastry | Competing Bandits in Time Varying Matching Markets |
67 | 3 | 18 | Zhaolin Ren, Yang Zheng, Maryam Fazel, Na Li | On Controller Reduction in Linear Quadratic Gaussian Control with Performance Bounds |
68 | 2 | 29 | Rishi Rani, Massimo Franceschetti | Detection of Man-in-the-Middle Attacks in Model-Free Reinforcement Learning |
69 | 3 | 17 | Guanru Pan, Ruchuan Ou, Timm Faulwasser | Data-driven Stochastic Output-Feedback Predictive Control: Recursive Feasibility through Interpolated Initial Conditions |
70 | 4 | 15 | Joshua Pilipovsky, Vignesh Sivaramakrishnan, Meeko Oishi, Panagiotis Tsiotras | Probabilistic Verification of ReLU Neural Networks via Characteristic Functions |
72 | 1 | 2 | Panagiotis Vlantis, Leila Bridgeman, Michael Zavlanos | Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe |
73 | 2 | 1 | Francesco De Lellis, Marco Coraggio, Giovanni Russo, Mirco Musolesi, Mario di Bernardo | CT-DQN: Control-Tutored Deep Reinforcement Learning |
75 | 4 | 26 | Reza Khodayi-mehr, Pingcheng Jian, Michael Zavlanos | Physics-Guided Active Learning of Environmental Flow Fields |
76 | 3 | 13 | Xiyu Deng, Christian Kurniawan, Adhiraj Chakraborty, Assane Gueye, Niangjun Chen, Yorie Nakahira | A Learning and Control Perspective for Microfinance |
77 | 4 | 11 | SooJean Han, Soon-Jo Chung, Johanna Gustafson | Congestion Control of Vehicle Traffic Networks by Learning Structural and Temporal Patterns |
78 | 4 | 27 | Adrien Banse, Licio Romao, Alessandro Abate, Raphael Jungers | Data-driven memory-dependent abstractions of dynamical systems |
79 | 2 | 28 | Jan Achterhold, Philip Tobuschat, Hao Ma, Dieter Büchler, Michael Muehlebach, Joerg Stueckler | Black-Box vs. Grey-Box: A Case Study on Learning Ping Pong Ball Trajectory Prediction with Spin and Impacts |
80 | 3 | 4 | Kehan Long, Yinzhuang Yi, Jorge Cortes, Nikolay Atanasov | Distributionally Robust Lyapunov Function Search Under Uncertainty |
82 | 3 | 27 | Saminda Wishwajith Abeyruwan, Alex Bewley, Nicholas Matthew Boffi, Krzysztof Marcin Choromanski, David B D’Ambrosio, Deepali Jain, Pannag R Sanketi, Anish Shankar, Vikas Sindhwani, Sumeet Singh, Jean-Jacques Slotine, Stephen Tu | Agile Catching with Whole-Body MPC and Blackbox Policy Learning |
83 | 3 | 1 | Tanya Veeravalli, Maxim Raginsky | Nonlinear Controllability and Function Representation by Neural Stochastic Differential Equations |
86 | 3 | 20 | Harrison Delecki, Anthony Corso, Mykel Kochenderfer | Model-based Validation as Probabilistic Inference |
87 | 4 | 17 | Xu Zhang, Marcos Vasconcelos | Top-k data selection via distributed sample quantile inference |
90 | 4 | 20 | An Thai Le, Kay Hansel, Jan Peters, Georgia Chalvatzaki | Hierarchical Policy Blending As Optimal Transport |
91 | 4 | 22 | Weiye Zhao, Tairan He, Changliu Liu | Probabilistic Safeguard for Reinforcement Learning Using Safety Index Guided Gaussian Process Models |
94 | 4 | 12 | Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots | Continuous Versatile Jumping Using Learned Action Residuals |
95 | 2 | 18 | Armand Comas, Christian Fernandez Lopez, Sandesh Ghimire, Haolin Li, Mario Sznaier, Octavia Camps | Learning Object-Centric Dynamic Modes from Video and Emerging Properties |
96 | 4 | 19 | Valentin Duruisseaux, Thai P. Duong, Melvin Leok, Nikolay Atanasov | Lie Group Forced Variational Integrator Networks for Learning and Control of Robot Systems |
97 | 4 | 4 | Luigi Campanaro, Daniele De Martini, Siddhant Gangapurwala, Wolfgang Merkt, Ioannis Havoutis | Roll-Drop: accounting for observation noise with a single parameter |
98 | 3 | 12 | Wenliang Liu, Kevin Leahy, Zachary Serlin, Calin Belta | CatlNet: Learning Communication and Coordination Policies from CaTL+ Specifications |
100 | 3 | 24 | Yuyang Zhang, Runyu Zhang, Gen Li, Yuantao Gu, Na Li | Multi-Agent Reinforcement Learning with Reward Delays |
101 | 1 | 20 | Cyrus Neary, Ufuk Topcu | Compositional Learning of Dynamical System Models Using Port-Hamiltonian Neural Networks |
102 | 4 | 24 | Prithvi Akella, Skylar X. Wei, Joel W. Burdick, Aaron Ames | Learning Disturbances Online for Risk-Aware Control: Risk-Aware Flight with Less Than One Minute of Data |
104 | 1 | 3 | Muhammad Abdullah Naeem, Miroslav Pajic | Transportation-Inequalities, Lyapunov Stability and Sampling for Dynamical Systems on Continuous State Space |
106 | 1 | 28 | Swaminathan Gurumurthy, J Zico Kolter, Zachary Manchester | Value-Gradient Updates Using Approximate Simulator Gradients to Speed Up Model-free Reinforcement Learning |
107 | 3 | 28 | Swaminathan Gurumurthy, Zachary Manchester, J Zico Kolter | Practical Critic Gradient based Actor Critic for On-Policy Reinforcement Learning |
109 | 4 | 8 | Yaqi Duan, Martin J. Wainwright | Policy evaluation from a single path: Multi-step methods, mixing and mis-specification |
110 | 1 | 19 | Srinath Tankasala, Mitch Pryor | Accelerating Trajectory Generation for Quadrotors Using Transformers |
112 | 2 | 27 | Thomas TCK Zhang, Katie Kang, Bruce D Lee, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni | Multi-Task Imitation Learning for Linear Dynamical Systems |
113 | 3 | 6 | Zihao Zhou, Rose Yu | Automatic Integration for Fast and Interpretable Neural Point Processes |
114 | 2 | 7 | Paula Gradu, Elad Hazan, Edgar Minasyan | Adaptive Regret for Control of Time-Varying Dynamics |
115 | 1 | 12 | Mingyu Cai, Calin Belta, Cristian Ioan Vasile, Erfan Aasi | Time-Incremental Learning of Temporal Logic Classifiers Using Decision Trees |
116 | 3 | 7 | Leilei Cui, Tamer Başar, Zhong-Ping Jiang | A Reinforcement Learning Look at Risk-Sensitive Linear Quadratic Gaussian Control |
117 | 3 | 25 | Thomas Beckers, Qirui Wu, George J. Pappas | Physics-enhanced Gaussian Process Variational Autoencoder |
118 | 2 | 12 | Guillaume O Berger, Sriram Sankaranarayanan | Template-Based Piecewise Affine Regression |
119 | 1 | 26 | Spencer Hutchinson, Berkay Turan, Mahnoosh Alizadeh | The Impact of the Geometric Properties of the Constraint Set in Safe Optimization with Bandit Feedback |
120 | 2 | 25 | Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Yannis Kevrekidis, Mahyar Fazlyab | Certified Invertibility in Neural Networks via Mixed-Integer Programming |
121 | 1 | 30 | Sampada Deglurkar, Michael H Lim, Johnathan Tucker, Zachary N Sunberg, Aleksandra Faust, Claire Tomlin | Compositional Learning-based Planning for Vision POMDPs |
124 | 1 | 8 | Paul Griffioen, Alex Devonport, Murat Arcak | Probabilistic Invariance for Gaussian Process State Space Models |
125 | 1 | 24 | Xiaobing Dai, Armin Lederer, Zewen Yang, Sandra Hirche | Gaussian Process-Based Event-Triggered Online Learning with Computational Delays for Control of Unknown Systems |
126 | 1 | 22 | Kaiyuan Tan, Jun Wang, Yiannis Kantaros | Targeted Adversarial Attacks against Neural Network Trajectory Predictors |
128 | 2 | 22 | Lauren E Conger, Sydney Vernon, Eric Mazumdar | Designing System Level Synthesis Controllers for Nonlinear Systems with Stability Guarantees |
131 | 2 | 17 | Taha Entesari, Mahyar Fazlyab | Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes |
132 | 4 | 6 | Yashaswini Murthy, Mehrdad Moharrami, R. Srikant | Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs |
133 | 2 | 19 | Muhammad Abdullah Naeem | Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach |
134 | 1 | 27 | Wenqi Cui, Linbin Huang, Weiwei Yang, Baosen Zhang | Efficient Reinforcement Learning Through Trajectory Generation |
136 | 1 | 4 | Zhuoyuan Wang, Yorie Nakahira | A Generalizable Physics-informed Learning Framework for Risk Probability Estimation |
137 | 3 | 15 | Luke Bhan, Yuanyuan Shi, Miroslav Krstic | Operator Learning for Nonlinear Adaptive Control |
139 | 3 | 9 | Yan Jiang, Wenqi Cui, Baosen Zhang, Jorge Cortes | Equilibria of Fully Decentralized Learning in Networked Systems |
140 | 2 | 21 | Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo Jovanovic | Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning |
142 | 2 | 8 | Anushri Dixit, Lars Lindemann, Skylar Wei, Matthew Cleaveland, George J. Pappas, Joel W. Burdick | Adaptive Conformal Prediction for Motion Planning among Dynamic Agents |
144 | 4 | 2 | Fernando Castañeda, Haruki Nishimura, Rowan Thomas McAllister, Koushil Sreenath, Adrien Gaidon | In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States |
145 | 2 | 20 | Songyuan Zhang, Yumeng Xiu, Guannan Qu, Chuchu Fan | Compositional Neural Certificates for Networked Dynamical Systems |
146 | 3 | 16 | Yecheng Jason Ma, Kausik Sivakumar, Jason Yan, Osbert Bastani, Dinesh Jayaraman | Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching |
147 | 2 | 26 | Yitian Chen, Timothy Molloy, Tyler Summers, Iman Shames | Regret Analysis of Online LQR Control via Trajectory Prediction and Tracking |
148 | 1 | 17 | Tejas Pagare, Konstantin Avrachenkov, Vivek Borkar | Full Gradient Deep Reinforcement Learning for Average-Reward Criterion |
149 | 1 | 23 | Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato | End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization |
150 | 1 | 15 | Keyan Miao, Konstantinos Gatsis | Learning Robust State Observers using Neural ODEs |
151 | 4 | 28 | Serban Sabau, Yifei Zhang, Sourav Kumar Ukil, Andrei Sperila | Sample Complexity for Evaluating the Robust Linear Observer’s Performance under Coprime Factors Uncertainty |
153 | 3 | 23 | Sarper Aydin, Ceyhun Eksin | Policy Gradient Play with Networked Agents in Markov Potential Games |
154 | 2 | 11 | Sheng Cheng, Lin Song, Minkyung Kim, Shenlong Wang, Naira Hovakimyan | DiffTune+: Hyperparameter-Free Auto-Tuning using Auto-Differentiation |
156 | 3 | 8 | Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn | Contrastive Example-Based Control |
157 | 2 | 15 | Orhan Eren Akgun, Arif Kerem Dayi, Stephanie Gil, Angelia Nedich | Learning Trust Over Directed Graphs in Multiagent Systems |
158 | 1 | 21 | Yaofeng Desmond Zhong, Jiequn Han, Biswadip Dey, Georgia Olympia Brikis | Improving Gradient Computation for Differentiable Physics Simulation with Contacts |
160 | 4 | 23 | Xunbi Ji, Gabor Orosz | Learning the dynamics of autonomous nonlinear delay systems |
161 | 4 | 9 | Yikun Cheng, Pan Zhao, Naira Hovakimyan | Safe Model-Free Reinforcement Learning using Disturbance-Observer-Based Control Barrier Functions |
163 | 1 | 29 | Kai-Chieh Hsu, Duy Phuong Nguyen, Jaime Fernández Fisac | ISAACS: Iterative Soft Adversarial Actor-Critic for Safety |
165 | 3 | 21 | Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee | Guaranteed Conformance of Neurosymbolic Models to Natural Constraints |
166 | 4 | 10 | Pengzhi Yang, Shumon Koga, Arash Asgharivaskasi, Nikolay Atanasov | Policy Learning for Active Target Tracking over Continuous SE(3) Trajectories |
167 | 2 | 24 | Yi Tian, Kaiqing Zhang, Russ Tedrake, Suvrit Sra | Can Direct Latent Model Learning Solve Linear Quadratic Gaussian Control? |
168 | 2 | 2 | Bilgehan Sel, Ahmad Tawaha, Yuhao Ding, Ruoxi Jia, Bo Ji, Javad Lavaei, Ming Jin | Learning-to-Learn to Guide Random Search: Derivative-Free Meta Blackbox Optimization on Manifold |
171 | 1 | 6 | Adithya Ramesh, Balaraman Ravindran | Physics-Informed Model-Based Reinforcement Learning |
172 | 4 | 14 | Saber Jafarpour, Akash Harapanahalli, Samuel Coogan | Interval Reachability of Nonlinear Dynamical Systems with Neural Network Controllers |
173 | 3 | 11 | Karthik Elamvazhuthi, Xuechen Zhang, Samet Oymak, Fabio Pasqualetti | Learning on Manifolds: Universal Approximations Properties using Geometric Controllability Conditions for Neural ODEs |