
Deep Learning Models for Dynamical Systems: Applications and Methods
Explore the use of deep learning models in analyzing dynamical systems through transformations like Koopman operators and feedback linearization. Learn how data generation and analysis play crucial roles in understanding and controlling nonlinear systems efficiently.
Presentation Transcript
Deep Learning Models for Dynamical Systems Group 29: Alan Williams, Imoleayo Abel
Background - Koopman Operator Given a nonlinear dynamical system dx/dt = f(x), where x ∈ R^n is the state and f is a vector field, we desire a transformation y = φ(x) that renders the dynamics of the system linear, i.e. dy/dt = K y for some matrix K. Example system (from [1]): dx1/dt = μ x1, dx2/dt = λ (x2 − x1²). The transformation y = (x1, x2, x1²) gives the linear system dy1/dt = μ y1, dy2/dt = λ (y2 − y3), dy3/dt = 2μ y3. Goal: learn the transformation φ and the linear matrix K from data.
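The example above can be checked numerically. The sketch below (with assumed illustrative values μ = −0.05, λ = −1, and forward-Euler integration for both systems) evolves the nonlinear state and the lifted linear state side by side and compares them:

```python
import numpy as np

mu, lam = -0.05, -1.0          # illustrative eigenvalues (assumed values)
dt, steps = 0.001, 5000

x = np.array([1.0, 0.5])              # nonlinear state (x1, x2)
y = np.array([x[0], x[1], x[0]**2])   # lifted state y = (x1, x2, x1^2)
K = np.array([[mu,  0.0,  0.0],
              [0.0, lam, -lam],
              [0.0, 0.0,  2*mu]])     # linear dynamics dy/dt = K y

for _ in range(steps):
    # nonlinear system: x1' = mu*x1, x2' = lam*(x2 - x1^2)
    x = x + dt * np.array([mu * x[0], lam * (x[1] - x[0]**2)])
    # linear system in the lifted coordinates
    y = y + dt * (K @ y)

# the lifted nonlinear state (x1, x2, x1^2) should match the
# linearly evolved y up to discretization error
err = np.abs(np.array([x[0], x[1], x[0]**2]) - y).max()
```

The agreement is exact up to the O(dt) error of the integrator, which is the point of the Koopman view: the nonlinear flow becomes linear in the lifted coordinates.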
Background - Feedback Linearization Suppose we have a nonlinear system affine in control, dx/dt = f(x) + h(x) u. Using a state transformation z = T(x) and an input transformation v = g(x, u), the dynamics in z take the linear form dz/dt = A z + B v. This is an example of exact full-state feedback linearization, a technique for designing nonlinear controllers. We can now use linear control theory to design a controller with precise criteria, in z space. Say we suspect a system is feedback linearizable - can we learn the transformations T and g simply using data from the system? With zero-order-hold inputs, the continuous-time system above is equivalent to a discrete-time linear system z(k+1) = A z(k) + B v(k).
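A minimal numerical sketch of this idea, using an assumed pendulum-style system x1' = x2, x2' = −sin(x1) + u for illustration (here T is the identity, and choosing u = sin(x1) + v cancels the nonlinearity so the closed loop is a double integrator in z):

```python
import numpy as np

dt, steps = 0.001, 2000
x = np.array([0.3, 0.0])   # pendulum-style state (angle, velocity); assumed example
z = x.copy()               # here the state transform T is the identity
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator dynamics in z
B = np.array([0.0, 1.0])

for _ in range(steps):
    v = -2.0 * z[0] - 2.0 * z[1]     # any linear feedback designed in z-space
    u = np.sin(x[0]) + v             # input transform cancels the nonlinearity
    x = x + dt * np.array([x[1], -np.sin(x[0]) + u])   # nonlinear system
    z = z + dt * (A @ z + B * v)                       # linearized model

# the nonlinear trajectory should track the linear one (up to round-off)
err = np.abs(x - z).max()
```

Because the cancellation is exact, the nonlinear and linear trajectories coincide; any pole placement or LQR design done on (A, B) carries over to the original system.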
Literature Review: Koopman Operator The figures shown here, taken from Lusch et al. [1], illustrate the networks used for learning the embedding φ and the Koopman operator K. Our project is based on this paper: we are attempting to recreate its results and extend them to a dynamical system with a control input. Lusch et al. were able to generate a coordinate transform that puts the system in linear form; the trajectories shown are of the original nonlinear system and the transformed linear system [1]. Methods like Dynamic Mode Decomposition (DMD) precede these approaches and are based on fitting high-order linear models to time-delay coordinates [6]. Extended DMD augments the measurements used in DMD with nonlinear functions of the state [7].
Literature Review: Feedback Linearization [3] and [4] outline the current analytical methods for solving for these transformations directly. Obtaining the linearizing transformation directly requires solving a set of partial differential equations. One can also determine in advance whether exact linearization is possible; conditions for this are given in [5]. There is a well-developed feedback linearization theory for nonlinear systems affine in control, which extends to systems with multiple inputs. For systems for which a full-state linearizing transform does not exist, partial feedback linearization can in some cases be used to control the unstable states linearly, provided the remaining nonlinear states are stable.
Data Generation - MATLAB Koopman: generate data from the explicit solution of the dynamical system in the linear φ(x) space. FBL: use a Simulink model with a random input u between -5 and 5, keeping trajectories that lie within ±π/2. Data size of (5000, 3, 21): 5000 trajectories, 2 states + 1 input (for FBL), simulated for 21 timesteps. Trajectory data lying outside the bounds where the analytical transformation is invertible is discarded.
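The procedure (random zero-order-hold inputs, bound check, rejection of out-of-bounds trajectories) can be sketched as below. This is an illustrative Python version, not the Simulink model; the pendulum-style dynamics, timestep, and initial-condition range are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_traj = 0.02, 21, 5000      # timestep is illustrative; sizes match the (5000, 3, 21) array

data = np.zeros((n_traj, 3, T))     # 2 states + 1 input per timestep
kept = 0
while kept < n_traj:
    x = rng.uniform(-np.pi / 4, np.pi / 4, size=2)  # random initial condition (assumed range)
    u = rng.uniform(-5, 5, size=T)                  # zero-order-hold input per step
    traj = np.zeros((3, T))
    ok = True
    for k in range(T):
        traj[:, k] = [x[0], x[1], u[k]]
        # assumed pendulum-style dynamics, integrated with forward Euler
        x = x + dt * np.array([x[1], -np.sin(x[0]) + u[k]])
        if abs(x[0]) > np.pi / 2:   # discard trajectories leaving the valid region
            ok = False
            break
    if ok:
        data[kept] = traj
        kept += 1
```

Rejecting whole trajectories (rather than truncating them) keeps every sample inside the region where the analytical transformation is invertible.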
Loss Function: Koopman Operator Following [1], we train with a weighted sum of three mean-squared-error terms: a reconstruction loss ||x − φ⁻¹(φ(x))||² for autoencoder accuracy, a state prediction error ||x(k+1) − φ⁻¹(K φ(x(k)))||², and a linearization error ||φ(x(k+1)) − K φ(x(k))||² in the lifted coordinates.
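The three terms can be sketched as follows (a minimal NumPy version; `phi`, `phi_inv`, and `K` stand in for the learned encoder, decoder, and Koopman matrix):

```python
import numpy as np

def koopman_losses(phi, phi_inv, K, x1, x2):
    """Three MSE terms of a Lusch-style loss (sketch).
    phi, phi_inv: encoder/decoder callables; K: Koopman matrix;
    x1, x2: batches of consecutive states, shape (batch, n)."""
    y1, y2 = phi(x1), phi(x2)
    recon = np.mean((x1 - phi_inv(y1))**2)          # reconstruction accuracy
    pred  = np.mean((x2 - phi_inv(y1 @ K.T))**2)    # one-step state prediction
    lin   = np.mean((y2 - y1 @ K.T)**2)             # one-step linearization error
    return recon + pred + lin

# sanity check on an already-linear system: with the identity embedding
# and K equal to the system matrix, all three terms should vanish
A = np.array([[0.9, 0.1], [0.0, 0.8]])
x1 = np.random.default_rng(0).normal(size=(100, 2))
x2 = x1 @ A.T
total = koopman_losses(lambda x: x, lambda y: y, A, x1, x2)
```

In training, the three terms are weighted before summing; the weights are hyperparameters.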
Loss Function: Feedback Linearization We use a loss similar to Lusch's, built from multi-step prediction errors: the (m+1)-step-ahead prediction of the state is computed from the initial condition and the inputs applied up until that time step, and is compared against the true state. For example, for m = 2, x(3) is predicted from x(1) and the inputs u(1), u(2). Note: the feedback linearization loss is slightly simpler, because the linearity losses that appear in the Koopman case are not needed; linearity is inherent in the assumed feedback-linearizable model structure.
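The multi-step prediction can be sketched as a rollout of the learned linear model (an illustrative NumPy version; `T`, `T_inv`, and `g` are assumed interfaces for the learned transformations):

```python
import numpy as np

def m_step_prediction(T, T_inv, g, A, B, x0, us):
    """Sketch of the (m+1)-step-ahead prediction: lift the initial condition
    with T, transform each input with g, roll the linear model forward,
    and decode with T_inv."""
    z = T(x0)
    x_hat = x0
    for u in us:
        v = g(x_hat, u)          # transformed input, evaluated at the predicted state
        z = A @ z + B * v        # one step of the learned linear model
        x_hat = T_inv(z)
    return x_hat

def prediction_loss(T, T_inv, g, A, B, x_traj, u_traj, m):
    """Mean-squared error of the m-step prediction against the true state."""
    x_hat = m_step_prediction(T, T_inv, g, A, B, x_traj[0], u_traj[:m])
    return np.mean((x_traj[m] - x_hat)**2)

# sanity check: with identity transforms and the true (A, B), the
# m-step prediction of a linear trajectory is exact
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([0.0, 0.1])
u_traj = np.random.default_rng(0).uniform(-5, 5, size=5)
x_traj = [np.array([0.2, 0.0])]
for u in u_traj:
    x_traj.append(A @ x_traj[-1] + B * u)
loss = prediction_loss(lambda x: x, lambda z: z, lambda x, u: u, A, B, x_traj, u_traj, 3)
```

The training loss sums this error over several horizons m, exactly as the Koopman loss sums its 1-step, 2-step, ... terms.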
High Level Description in TF 2.0: Koopman We design a network with multiple outputs and shared layers to make use of the Keras functional API of TF 2.0. Each output is paired with an MSE loss against its label: φ⁻¹(φ(x1)) against x1 (reconstruction), K φ(x1) against φ(x2) (1-step linearization), φ⁻¹(K φ(x1)) against x2 (1-step prediction), K² φ(x1) against φ(x3) (2-step linearization), φ⁻¹(K² φ(x1)) against x3 (2-step prediction), and so on. φ, its inverse, and K are all submodels with shared weights, and we can connect them together in any desired combination as a Directed Acyclic Graph (DAG) using the Keras functional API. Each output is assigned its respective loss function, and the total loss is the sum over all outputs.
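A minimal sketch of this wiring with the Keras functional API (dimensions, widths, and output names are illustrative; reusing the same submodel object on different tensors is what shares the weights):

```python
import tensorflow as tf
from tensorflow import keras

n, d, width = 2, 3, 120   # state dim, lifted dim, hidden width (illustrative)

def mlp(out_dim, name):
    """Two hidden layers per function, as in our setup."""
    return keras.Sequential(
        [keras.layers.Dense(width, activation="relu"),
         keras.layers.Dense(width, activation="relu"),
         keras.layers.Dense(out_dim)], name=name)

phi, phi_inv = mlp(d, "phi"), mlp(n, "phi_inv")
K = keras.layers.Dense(d, use_bias=False, name="K")  # linear Koopman map

x1 = keras.Input(shape=(n,), name="x1")
y1 = phi(x1)
outputs = {
    "recon": phi_inv(y1),         # label: x1      (reconstruction)
    "lin1":  K(y1),               # label: phi(x2) (1-step linearization)
    "pred1": phi_inv(K(y1)),      # label: x2      (1-step prediction)
    "lin2":  K(K(y1)),            # label: phi(x3) (2-step linearization)
    "pred2": phi_inv(K(K(y1))),   # label: x3      (2-step prediction)
}
# every branch reuses phi, phi_inv, and K, so their weights are shared
model = keras.Model(inputs=x1, outputs=outputs)
model.compile(optimizer="adam", loss={k: "mse" for k in outputs})
```

Keras sums the per-output losses automatically; per-output `loss_weights` can be passed to `compile` to weight the terms.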
High Level Description in TF 2.0: Feedback Linearization The network mirrors the Koopman setup: T⁻¹(T(x1)) is compared against x1 (state reconstruction) and the input transform is inverted to reconstruct u1 (input reconstruction), while repeatedly applying T and g, rolling the learned linear model (A, B) forward in z-space, and decoding with T⁻¹ gives the 1-step, 2-step, ... predictions compared against x2, x3, ... The total loss sums all terms. Details: batch size = 500, width = 120 (2 hidden layers per function), alpha1 = 1, alpha2 = 5, alpha3 = 0, epochs = 150, Sp = 20.
Current and Future Work:
1. Use cross-validation in training
2. Attempt new example systems for both methods
3. Tune hyperparameters further (batch size, loss weights, layer widths)
4. Results are sensitive to re-initializing network weights (this was true of [1] as well)
5. Results may be improved if we had more time to enable a max-norm type loss, which [1] incorporated
References
[1] B. Lusch, J. N. Kutz, and S. L. Brunton, "Deep learning for universal linear embeddings of nonlinear dynamics," Nature Communications, vol. 9, no. 1, 2018. https://arxiv.org/pdf/1712.09707.pdf
[2] S. E. Otto and C. W. Rowley, "Linearly Recurrent Autoencoder Networks for Learning Dynamics," SIAM Journal on Applied Dynamical Systems, vol. 18, no. 1, pp. 558-593, 2019. https://arxiv.org/pdf/1712.01378.pdf
[3] H. K. Khalil, Nonlinear Systems, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 1996.
[4] A. Isidori, Nonlinear Control Systems, 3rd ed., ser. Communications and Control Engineering. Springer, 1995.
[5] J. Cortes, "MAE 281B Lecture 3: Feedback linearization of MIMO systems," University of California, San Diego, 2019.
[6] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor, Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2016.
[7] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, "A data-driven approximation of the Koopman operator: extending dynamic mode decomposition," Journal of Nonlinear Science, vol. 25, no. 6, pp. 1307-1346, 2015.