Control Theory
Control theory is a field that uses mathematical methods and tools for the design and analysis of technical systems. It is concerned with the modeling and control of dynamics in physical and man-made processes and systems. Control theory has applications in engineering, the life sciences, economics, sociology, and operations research.
Within the broad spectrum of control theory, our focus is on developing new model-based methods for the stability analysis, robustness, and optimization-based control of (i) hybrid dynamical systems (in particular, impulsive and switched systems), (ii) systems governed by partial differential equations, (iii) nonlinear systems, (iv) stochastic systems, and (v) time-delay systems.
Motivated by modern mathematical methods enabled by the unprecedented availability of data and computational resources, we are also interested in data-driven approaches for complex systems that are not amenable to empirical models or derivations from first principles. Here we follow the two main emerging approaches: (i) algebraic approaches, including those boosted by behavioral systems theory, and (ii) the growing application of artificial neural networks to the simulation and control of wide classes of dynamical systems.
Some of these research topics are accompanied by courses offered by the chair.
Learning-Based Control
Neural networks (NNs) are statistical estimators used to provide a functional relationship between inputs and outputs. Due to their universal approximation power and their computational flexibility and scalability, they are now considered a suitable method not only for the numerical solution of complex dynamical systems but also for their control. They are naturally applicable to stochastic control problems, where a network is trained to solve the corresponding Hamilton-Jacobi-Bellman (HJB) equation, thereby learning to generate optimal Markovian control policies. The method is flexible enough to be applicable to both continuous and discrete state-space problems.
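For reference, a standard form of the finite-horizon stochastic HJB equation, for controlled dynamics dX_t = b(X_t, u_t) dt + sigma(X_t, u_t) dW_t with running cost l and terminal cost g (generic notation, a sketch rather than the exact formulation used in our projects):

    \partial_t V(t,x) + \min_{u} \Big\{ \ell(x,u) + b(x,u)^\top \nabla_x V(t,x) + \tfrac{1}{2} \mathrm{tr}\big( \sigma\sigma^\top(x,u)\, \nabla_x^2 V(t,x) \big) \Big\} = 0, \qquad V(T,x) = g(x).

A minimizer u in the curly braces defines an optimal Markovian feedback policy u*(t,x); a neural network can parameterize V or u* and be trained to drive the residual of this equation to zero.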
Data-Driven Control
Control applications are classically model-based and involve two steps: system identification (SYSID) and controller design for the specific control problem. This approach to solving a control problem is called model-based design (MBD). The model-based approach, albeit quite successful, has certain limitations. For instance, if human decision-making is involved, first-principles models may not be available. Moreover, for some problems, for example climate dynamics or robotics, identifying a model from the available data is almost infeasible or computationally very expensive.
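A cornerstone of the behavioral, algebraic route mentioned above is Willems' fundamental lemma: for a controllable linear system driven by a persistently exciting input, every length-L trajectory lies in the column span of a Hankel matrix built from a single recorded trajectory, so raw data can stand in for an identified model. A minimal sketch in Python (the system matrices, signal lengths, and names below are illustrative assumptions, not taken from the text):

    import numpy as np

    def block_hankel(w, L):
        """Depth-L block-Hankel matrix of a signal w with shape (T, m)."""
        T, m = w.shape
        cols = T - L + 1
        return np.vstack([w[i:i + cols].T for i in range(L)])

    # Illustrative controllable LTI system (an assumption for this sketch).
    A = np.array([[0.9, 0.2],
                  [0.0, 0.8]])
    B = np.array([[0.0],
                  [1.0]])

    rng = np.random.default_rng(0)
    T, L, n, m = 60, 4, 2, 1
    u = rng.standard_normal((T, m))       # persistently exciting input
    x = np.zeros((T, n))
    for t in range(T - 1):
        x[t + 1] = A @ x[t] + B @ u[t]

    H = np.vstack([block_hankel(u, L),    # depth-L input Hankel matrix
                   x[:T - L + 1].T])      # initial state of each data window
    # Rank version of the fundamental lemma: the stacked matrix has full
    # row rank m*L + n, so length-L trajectories are parameterized by data.
    print(np.linalg.matrix_rank(H), m * L + n)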
Optimal Stochastic Control
Stochastic control theory constitutes a rapidly growing research area within stochastic analysis, with many applications in, e.g., biology, physics, engineering, finance, and economics. Stochastic differential equations (SDEs) are a useful tool for modeling diverse random processes, such as the time evolution of stock prices, outdoor temperatures, precipitation amounts, infection numbers, or the population size of tumor cells.
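As a small illustration, the Euler-Maruyama scheme approximates a sample path of an SDE; here for geometric Brownian motion dS_t = mu S_t dt + sigma S_t dW_t, a classical stock-price model (all parameter values are illustrative assumptions):

    import numpy as np

    # Euler-Maruyama simulation of geometric Brownian motion
    # dS_t = mu * S_t dt + sigma * S_t dW_t (illustrative parameters).
    mu, sigma = 0.05, 0.2
    T, n = 1.0, 1000
    dt = T / n
    rng = np.random.default_rng(42)

    S = np.empty(n + 1)
    S[0] = 100.0                           # initial price
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        S[k + 1] = S[k] + mu * S[k] * dt + sigma * S[k] * dW

    print(f"terminal value S_T = {S[-1]:.2f}")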
Stability of Hybrid Dynamical Systems
Nonlinear systems theory is an active research area in which questions of stability, robustness, and optimal control are of both theoretical and practical interest. Hybrid systems combine continuous evolution with discontinuous behavior (jumps). Stability is an important property of hybrid systems, decisive for their performance and reliable operation. In this project, we focus on developing new non-conservative tools for stability verification.
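For intuition, one standard sufficient condition of Lyapunov/dwell-time type for an impulsive system with flow \dot x = f(x) between jump times t_k and jumps x(t_k^+) = g(x(t_k^-)) reads as follows (generic notation, a sketch rather than our exact conditions): find V > 0 such that

    \nabla V(x)^\top f(x) \le -c\, V(x) \quad (\text{decay along the flow}), \qquad V(g(x)) \le e^{d}\, V(x) \quad (\text{bounded change at jumps}),

with c > 0. If d > 0 (destabilizing jumps), asymptotic stability follows whenever consecutive jump times satisfy the dwell-time bound t_{k+1} - t_k > d/c. Conditions of this kind are sufficient but conservative, which motivates the search for non-conservative verification tools.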
Network Dynamics and Control
Network control comprises two closely related areas. The first addresses the design of controllers for networked control systems in a broad sense: centralized control, decentralized control, distributed control, adaptive control, robust control, etc. The second addresses stability conditions for networks composed of interconnected systems. The latter is equally important, because many problems of the first type (regulation, tracking, stabilization) ultimately reduce to the stability of the corresponding error dynamics.
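A canonical example in which a network control problem reduces to a stability question is the consensus protocol: the agents agree if and only if the disagreement error is asymptotically stable. A minimal sketch in Python over an assumed line graph of four agents (graph, step size, and initial values are illustrative):

    import numpy as np

    # Discrete-time consensus over a line graph of 4 agents:
    # x_{k+1} = x_k - eps * L x_k, with L the graph Laplacian.
    L = np.array([[ 1, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)
    eps = 0.25                            # step size; eps < 1/max_degree ensures convergence
    x = np.array([4.0, -1.0, 2.0, 7.0])   # initial agent states

    for _ in range(200):
        x = x - eps * (L @ x)

    print(x)  # every entry approaches the average of the initial states (3.0)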
Adaptive Control and System Resilience
In many real-world processes there is an interplay between software and physical components, which makes these processes vulnerable to malicious cyber-attacks. Attackers can gain access to the network layer and manipulate system measurement data and control input commands, severely compromising system performance. We aim to develop efficient control architectures that foil such malicious sensor and actuator attacks and recover the system performance.
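A common abstraction of this setting models the attacks as additive signals injected into the sensing and actuation channels (generic notation, an illustrative assumption rather than our specific model):

    \tilde y(t) = C\, x(t) + a_y(t), \qquad \tilde u(t) = u(t) + a_u(t),

where a_y and a_u denote the unknown sensor and actuator attack signals; a resilient architecture must detect, estimate, or reject these signals while preserving closed-loop stability.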
Model Predictive Control of Hybrid Systems
Impulsive systems are a class of hybrid systems whose continuous-time state, the flow, is subject to instantaneous changes at discrete time instants, called jumps. This research proposes an MPC scheme that schedules optimal jumps at arbitrary discrete times, generating non-periodic jump sequences for impulsive linear systems. Such systems with non-fixed jump times are important due to their ability to model, among others, cyber-physical systems, for instance multi-agent communication systems.
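In generic notation (a sketch, not necessarily the exact model of the project), an impulsive linear system with controlled jumps reads

    \dot x(t) = A\, x(t), \quad t \neq t_k, \qquad x(t_k^+) = J\, x(t_k^-) + B\, u_k,

where the MPC optimizes both the jump inputs u_k and the non-periodic jump times t_k over the prediction horizon.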
Quantifier Elimination and Semi-Algebraic Systems in Control
Efficient implementations of symbolic computation with computer algebra tools present an important research direction for the analysis and design of control systems. Our goal follows this line: we formulate algebraic conditions for system stability and then apply efficient semi-algebraic techniques to compute the stability region of the system in parameter space.
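As a small illustration of such an algebraic condition, the Routh-Hurwitz criterion turns the stability of a parametric characteristic polynomial into polynomial inequalities, i.e., a semi-algebraic description of the stability region that quantifier-elimination tools can then simplify or project. A sketch with SymPy for the illustrative polynomial p(s) = s^3 + a s^2 + b s + c:

    import sympy as sp

    # Routh-Hurwitz conditions for p(s) = s**3 + a*s**2 + b*s + c:
    # all roots lie in the open left half-plane iff a > 0, c > 0, a*b > c.
    a, b, c = sp.symbols('a b c', real=True)

    # Hurwitz matrix of the polynomial and its leading principal minors.
    H = sp.Matrix([[a, c, 0],
                   [1, b, 0],
                   [0, a, c]])
    minors = [H[:k, :k].det() for k in (1, 2, 3)]
    stability_region = sp.And(*[sp.Gt(m, 0) for m in minors])
    print(stability_region)  # (a > 0) & (a*b - c > 0) & (c*(a*b - c) > 0)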
Stability and Control of Linear Parameter Varying Systems
The framework of Linear Parameter-Varying (LPV) systems has proven to be a systematic way to model nonlinear real-world phenomena and to synthesize gain-scheduled controllers for nonlinear systems. Applications of LPV systems include the automotive industry, turbofan engines, robotics, and aerospace systems.
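In standard form (generic notation), an LPV system reads

    \dot x(t) = A(\rho(t))\, x(t) + B(\rho(t))\, u(t), \qquad \rho(t) \in \mathcal{P},

where the scheduling parameter \rho(t) evolves in a compact set \mathcal{P} and is measured online; a gain-scheduled controller then selects its gains as functions of \rho(t).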