Three talks on Control + RL by UDarmstadt researchers

Event Details

We will have three talks by researchers from TU Darmstadt, Germany. Each talk will be 30 minutes long, including questions.

Title: Stochastic Optimal Control as Approximate Input Inference

Abstract:
Control-as-inference is a body of research concerning the use of probabilistic inference methods for control and reinforcement learning, with the goal of developing principled algorithms that provide robustness, uncertainty quantification and synergy with related inference problems (e.g. state estimation). This work presents Input Inference for Control (i2c), a formulation that frames Optimal Control as Input Estimation by treating the control cost function as the likelihood of a specified observation model. Under the linear Gaussian assumption, strong relations to Kalman filtering, LQR, Dual Control and Maximum Entropy control can be derived. Moreover, extending the algorithm to nonlinear systems via approximate inference yields a Gauss-Newton method akin to iLQR. Empirical evaluations demonstrate that, compared to equivalent optimal-control-derived approaches, i2c does indeed produce more consistent controllers for nonlinear, constrained, stochastic environments.
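As a toy illustration of the linear-Gaussian equivalence the abstract alludes to (this is not the i2c algorithm itself; the problem and all variable names are invented for the sketch), a one-step quadratic control problem can be solved either by minimising the cost directly or by posterior inference over the input, with identical results:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.normal(size=(n, n))      # dynamics: x1 = A x0 + B u
B = rng.normal(size=(n, m))
Q = 2.0 * np.eye(n)              # state cost weight -> pseudo-observation precision
R = 0.5 * np.eye(m)              # control cost weight -> prior precision on u
x0 = rng.normal(size=n)

# Optimal-control view: minimise (A x0 + B u)^T Q (A x0 + B u) + u^T R u
u_oc = -np.linalg.solve(B.T @ Q @ B + R, B.T @ Q @ A @ x0)

# Inference view: prior u ~ N(0, R^{-1}); a pseudo-observation y = 0 of the
# next state x1 = A x0 + B u with noise covariance Q^{-1}.
# Standard linear-Gaussian update gives the posterior mean of u:
y = np.zeros(n)
u_inf = np.linalg.solve(R + B.T @ Q @ B, B.T @ Q @ (y - A @ x0))

assert np.allclose(u_oc, u_inf)  # same controller from both views
```

Extending this single step to a trajectory turns the inference view into a Kalman smoothing pass, which is the kind of connection the talk develops.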

Short Bio:
Joe joined the Intelligent Autonomous Systems Group at TU Darmstadt as a Ph.D. researcher in December 2018, supervised by Prof. Jan Peters. He studied Information & Computer Engineering at the University of Cambridge, where he received his BA, MEng and the Charles Babbage Senior Scholarship. His Master’s thesis “Vision-Based Learning for Robotic Grasping”, which investigated the use of Convolutional Neural Networks for real-world grasp prediction, was undertaken at the Bio-Inspired Robotics Lab (BIRL) under the supervision of Dr Fumiya Iida. For two years, Joe worked at CMR Surgical, a medical device startup in Cambridge, UK, on the development of Versius, a bespoke robotic manipulator platform for laparoscopic surgery. Now working on the SKILLS4ROBOTS project, Joe is researching the development of principled algorithms that facilitate robot learning of complex tasks in unstructured settings. He is currently interested in Probabilistic Inference, Optimal Control, Model-based Reinforcement Learning and inductive biases for robotics.

Title: The Hybrid-System Paradigm: Learning Decomposition and Control of Nonlinear Systems

Abstract:

Recent developments in the domain of learning-for-control have pushed towards deploying more complex and highly sophisticated representations, e.g. (deep) neural networks (DNN) and Gaussian processes (GP), to capture the structure of both dynamics and optimal controllers, leading to unprecedented successes in the domain of RL. However, this new sophistication has come at the cost of an overall reduction in our ability to interpret the resulting policies from a classical theoretical perspective. Inspired by recent in-depth analyses of piece-wise linear (PWL) activation functions, which show that such representations effectively divide the state space into linear sub-regions, we revive the idea of combining local dynamics and controllers to build up complexity, and investigate whether simpler representations of the dynamics and policies may be sufficient for solving certain control tasks. We take inspiration from the classical control community and apply the principles of hybrid switching systems for modeling and controlling general non-linear dynamics, in order to break down complex representations into simpler components. We present an expectation-maximization (EM) algorithm for learning a generative model and automatically decomposing non-linear dynamics into stochastic switching linear dynamical systems. Based on this representation, we introduce a new hybrid and model-based relative entropy policy search technique (Hybrid-REPS) for learning time-invariant local linear feedback controllers and corresponding local polynomial value function approximations.
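To make the decomposition idea concrete, here is a heavily simplified, hard-assignment caricature of splitting nonlinear dynamics into local linear models. This is not the talk's full generative EM over stochastic switching systems; the toy dynamics, the number of regimes, and all names are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonlinear scalar dynamics to decompose: x_{t+1} = f(x_t)
f = lambda x: np.tanh(2.0 * x)
x = rng.uniform(-2.0, 2.0, size=500)
y = f(x)

K = 4                                    # number of local linear regimes
# Initialise regime labels by slicing the state space into K slabs
z = np.digitize(x, np.linspace(-2.0, 2.0, K + 1)[1:-1])
theta = np.zeros((K, 2))                 # per-regime (slope, intercept)

for _ in range(20):
    # "M-step": least-squares refit of each local linear model
    for k in range(K):
        idx = z == k
        if idx.sum() < 2:                # regime emptied out; keep old fit
            continue
        X = np.column_stack([x[idx], np.ones(idx.sum())])
        theta[k] = np.linalg.lstsq(X, y[idx], rcond=None)[0]
    # hard "E-step": reassign each point to its best-predicting model
    pred = theta[:, 0][:, None] * x[None, :] + theta[:, 1][:, None]
    z = np.argmin((pred - y[None, :]) ** 2, axis=0)

mse = np.mean((theta[z, 0] * x + theta[z, 1] - y) ** 2)
```

The proper EM algorithm in the talk replaces the hard assignments with posterior responsibilities under a stochastic switching model, but the alternation between fitting local models and crediting data to them is the same basic loop.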

Short Bio:

Hany Abdulsamad is a Ph.D. student at the TU Darmstadt Institute for Intelligent Autonomous Systems (IAS). He graduated with a Master's degree in Automation and Control from the faculty of Electrical Engineering and Information Technology at TU Darmstadt. His Master's thesis, supervised by Gerhard Neumann and Jan Peters, focused on novel techniques of Trajectory Optimization in the context of Model-Based Reinforcement Learning and was honored with the Datenlotsen Prize. His research interests range from Optimal Control and Trajectory Optimization to Reinforcement Learning and Robotics, seeking a wide understanding that unifies these fields. His current research focuses on learning hierarchical structures for system identification and control; more specifically, using insights from the field of Hybrid System Control to decompose complex dynamics and controls and to deconstruct opaque complex systems into a collection of simpler and more interpretable models.

Title: Inductive Biases for Learning Robot Control

Abstract:

In order to leave the factory floors and research labs, future robots must abandon their stiff and pre-programmed movements and become capable of learning complex policies. These control policies must be adaptive to the inevitable changes of the physical world and must select only feasible actions. Current black-box function approximation approaches are not sufficient for such tasks, as they overfit to the training domain and commonly perform damaging actions. In this talk, I want to introduce my research, which focuses on robot learning for physical robots using inductive biases and deep learning. This approach combines the representational power of deep networks with feasibility and robustness included directly as inductive biases within the learning problem, enabling application to the physical system. Using this approach, we showed that deep networks can be constrained to learn physically plausible models and optimal feedback controllers by incorporating domain knowledge, and we demonstrated their applicability to physical systems.
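One way such an inductive bias can be baked into a network, sketched here with NumPy, is to make a learned inertia (mass) matrix symmetric positive definite by construction via a Cholesky factor, so that no possible network output can violate this physical property. Whether the talk uses exactly this construction is an assumption; the function name and sizes are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3  # degrees of freedom of the hypothetical robot

def mass_matrix(raw):
    """Map n*(n+1)/2 unconstrained network outputs `raw` to a symmetric
    positive-definite inertia matrix via a Cholesky factor L."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = raw
    # Softplus on the diagonal keeps its entries strictly positive,
    # so L @ L.T is positive definite for ANY network output.
    d = np.diag_indices(n)
    L[d] = np.logaddexp(0.0, L[d]) + 1e-6
    return L @ L.T

# Whatever the upstream network emits, the physical constraint holds:
for _ in range(100):
    M = mass_matrix(rng.normal(scale=2.0, size=n * (n + 1) // 2))
    assert np.allclose(M, M.T)
    assert np.all(np.linalg.eigvalsh(M) > 0.0)
```

The learning problem then optimises the unconstrained parameters freely, while the feasibility constraint is enforced by the parameterization rather than by a penalty term.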

Short Bio:

Michael Lutter is a Ph.D. student at the TU Darmstadt Institute for Intelligent Autonomous Systems (IAS). His research focuses on combining domain knowledge and deep learning to enable learning of complex robot controllers that are feasible for the physical system. Prior to this, Michael held a researcher position at the Technical University of Munich (TUM). At TUM, he worked for the Human Brain Project and taught various lectures within the Elite Master Program Neuroengineering. His educational background covers a Bachelor's in Engineering Management from the University of Duisburg-Essen and a Master's in Electrical Engineering from the Technical University of Munich. During his undergraduate studies he also spent one semester at the Massachusetts Institute of Technology (MIT) studying electrical engineering and computer science. Besides his studies, Michael worked for ThyssenKrupp, Siemens and General Electric, and received multiple scholarships for academic excellence.
