AUTOREG 2019, Mannheim, Germany.
Motion control, in particular path following control (PFC), is an important function of autonomous vehicles. PFC controls the propulsion, steering and braking such that the vehicle follows a parametric path at a reference velocity. To design a traditional model-based PFC, a sufficiently accurate synthesis model of the vehicle must be available in order to obtain a performant controller. However, constructing, parametrizing and testing such model-based PFCs, as well as deriving the synthesis model, is known to be time-consuming. Recently, the application of reinforcement learning (RL) methods to solve control problems without a synthesis model, relying instead on high-fidelity simulation models, has gained increasing interest. In this paper we investigate the application of RL methods to solve the path following problem for DLR's ROboMObil, an over-actuated robotic vehicle. Simulation results demonstrate that the RL-based PFC achieves tracking performance similar to that of a model-based controller when executed on the path used for training. Moreover, the RL-based PFC shows encouraging generalization capabilities when facing unseen reference paths.
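To make the path following objective concrete, the sketch below shows one common way to shape a reward for an RL-based PFC: penalize the cross-track error to the reference path and the deviation from the reference velocity. This is a minimal illustrative example, not the reward design used in the paper; the function name, the nearest-point error approximation, and the weights `w_e` and `w_v` are all assumptions.

```python
import numpy as np

def path_following_reward(pos, vel, path_pts, v_ref, w_e=1.0, w_v=0.1):
    """Illustrative reward: quadratic penalty on cross-track and velocity error.

    pos      -- current vehicle position, shape (2,)
    vel      -- current longitudinal speed (m/s)
    path_pts -- discretized reference path, shape (N, 2)
    v_ref    -- reference velocity (m/s)
    The weights w_e, w_v are hypothetical tuning parameters.
    """
    # approximate the cross-track error by the distance to the
    # nearest discretized path point
    d = np.linalg.norm(path_pts - pos, axis=1)
    e_ct = d.min()
    e_v = vel - v_ref
    return -(w_e * e_ct**2 + w_v * e_v**2)

# toy example: a straight reference path along the x-axis
path = np.stack([np.linspace(0.0, 10.0, 101), np.zeros(101)], axis=1)
r_on = path_following_reward(np.array([5.0, 0.0]), 5.0, path, 5.0)
r_off = path_following_reward(np.array([5.0, 1.0]), 5.0, path, 5.0)
```

An RL agent maximizing a reward of this shape is driven toward staying on the path at the reference speed; a vehicle exactly on the path at `v_ref` receives the maximum reward of zero, while lateral offsets are penalized quadratically.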