Lehrstuhl für Mechatronik in Maschinenbau und Fahrzeugtechnik (MEC)

Research Associate in "Learning based stochastic control with application in process engineering" (m/w/d)

About us

The chair of Prof. Bajcinca focuses on modern methods and advanced applications of control and systems theory, built on three main pillars: cyber-physical systems, complex dynamical systems, and machine learning. Through networking with a large number of national and international research, academic, and industrial partners, funded projects with challenging and highly interesting tasks in model-based and data-driven control are acquired on a regular basis. The research work is supported by excellent laboratory equipment and high-performance computing facilities in the areas of autonomous systems, robotics, and energy systems, which are continuously being further developed.

https://www.mv.uni-kl.de/mec/home
 

Research Scope

The carbonation process chain comprises multiple stages of material processing, such as dissolution, filtration, precipitation, and centrifugation, which are seamlessly connected to function as one continuous process chain. Its objective is to convert mineral waste material into consumable carbonate byproducts. Classical process control approaches such as MOMs, AMOM, MPC, and their variants suffer from one or more of the following disadvantages:

  • Lack of autonomy: since these control methods rely on ex-situ measurements and/or actuation, more often than not the controllers have to be manually tuned and/or retuned by a human operator.
  • Lack of robustness: live processes are in general subject to various high- and low-frequency stochastic disturbances, so the absence of a noise model leaves the above methods unable to cope robustly with uncertain perturbations.
  • Lack of learning: most classical control methods are based on static or dynamic optimization formulations rather than statistical optimization formulations, so the ability to learn from data is out of scope.

On the other hand, (deep) neural networks (NNs/DNNs), owing to their universal approximation power and their computational flexibility and scalability, offer a promising approach not only for the numerical solution of complex dynamical systems but also for their control. Similarly, when online learning is important, (D)NN-based reinforcement learning (RL), thanks to its effective generalisation to continuous-space Markov decision processes (MDPs), also offers great potential for process control applications. Although DNN and RL methods have shown great success, they are far from perfect and are vulnerable to failures; various studies have highlighted their vulnerabilities, including a lack of mathematical explainability and a lack of risk/uncertainty quantification. The concept of model-based RL is therefore well suited, which in turn necessitates the use of DNNs as numerical solvers for the simulation of dynamical systems.
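For orientation, a generic discrete-time stochastic optimal control problem of the kind underlying such model-based RL formulations can be sketched as follows (the notation is illustrative only and is not taken from the project documents):

```latex
\min_{\pi} \;\; \mathbb{E}\!\left[ \sum_{t=0}^{T} \gamma^{t}\, c(x_t, u_t) \right]
\quad \text{s.t.} \quad
x_{t+1} = f(x_t, u_t) + w_t, \qquad
y_t = h(x_t) + v_t, \qquad
u_t = \pi(y_{0:t}),
```

with state $x_t$, control $u_t$, noisy measurements $y_t$, process and measurement noise $w_t, v_t$, stage cost $c$, and a parametric (e.g. DNN) output-feedback policy $\pi$ optimized over the noise distributions.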
 

Research Task / Work Description

One of the primary goals of the project is to develop an algorithm that autonomously controls the operation of the carbonation process chain. To this end, the aim is to develop a Self-Learning Robust Autonomous Control(ler) (SLARC) algorithm. SLARC is an adaptive stochastic algorithm consisting of three interconnected modules: the plant/process simulation module, the observer module, and the control generation module. Each of these modules is viewed as a parametric statistical estimator to be implemented using (deep) neural networks (DNNs). These networks are to be trained using process-specific data and a process model (PDE simulation). The SLARC controller is thus to be understood as a kind of hybrid model that combines physical laws with information gathered from data. In the first stage, SLARC is developed for each process separately, e.g. the filtration and precipitation processes. After verifying and validating the closed-loop performance, the individual controllers will be connected to work as one single controller. Since autonomy is the primary objective, reinforcement learning (RL) methods have to be incorporated to enable continuous (online) learning of the process dynamics and, consequently, of its control.
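The three-module structure described above can be sketched schematically. The following minimal closed-loop composition is purely illustrative: the module internals, dimensions, and dynamics are invented placeholders (simple linear maps standing in for DNNs), not project code.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearModule:
    """Stand-in for a (deep) neural network: a single parametric linear map."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))

    def __call__(self, z):
        return self.W @ z

# Three interconnected modules, mirroring the SLARC description:
plant    = LinearModule(3, 2)   # process simulation: (state, control) -> next state
observer = LinearModule(1, 2)   # observer: measurement -> state estimate
policy   = LinearModule(2, 1)   # control generation: state estimate -> control

def closed_loop_rollout(x0, steps=5, noise=0.01):
    """Roll the three modules out in closed loop under stochastic disturbances."""
    x, traj = x0, []
    for _ in range(steps):
        y = np.array([x.sum()]) + noise * rng.normal()   # noisy scalar measurement
        x_hat = observer(y)                              # state estimate
        u = policy(x_hat)                                # control action
        x = plant(np.concatenate([x, u])) + noise * rng.normal(size=2)  # plant step
        traj.append(x.copy())
    return traj

traj = closed_loop_rollout(np.zeros(2))
```

In the actual project the three maps would be trained networks (e.g. in PyTorch) and the plant module would be fitted to PDE-simulation and process data; the point here is only how the modules connect into one closed loop.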

The research comprises the following tasks.

  • Formulation of a process specific stochastic optimization problem
  • Formulation of a stochastic optimization problem for the full process chain
  • To analyze the well-posedness of the optimization problem (controllability and observability properties)
  • To develop and implement a process-specific controller (SLARC) using (D)NNs
    • Implementation of the process network
    • Implementation of the observer network
    • Implementation of the control generation network
    • Connecting the three to form a single network
    • Incorporation of RL methodology to facilitate online learning
  • To establish input/output-to-state stability (IOSS) conditions for SLARC operating in closed loop
  • Verification and validation of the controller in simulation and closed-loop setting
  • Collaborate with process engineers to obtain relevant process specific information and measurements
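Since the task list closes with incorporating RL for online learning, a toy score-function (REINFORCE-style) policy update on a scalar linear-Gaussian problem illustrates the kind of continuous learning meant. Everything here (dynamics, cost, parameter values) is an invented minimal example, not the project's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: scalar state x, Gaussian policy u ~ N(theta * x, sigma^2),
# one-step cost c = (x + u)^2, i.e. the control should cancel the state.
theta, sigma, lr = 0.0, 0.5, 0.02

for episode in range(5000):
    x = rng.normal()                    # sample an initial state
    mean = theta * x
    u = mean + sigma * rng.normal()     # stochastic policy sample
    cost = (x + u) ** 2
    # Score-function (REINFORCE) estimate of d E[cost] / d theta:
    grad = cost * (u - mean) / sigma**2 * x
    theta -= lr * grad                  # stochastic gradient descent on expected cost

# The optimal parameter is theta = -1: then u cancels x in expectation.
```

In SLARC the policy would be a DNN-based control generation module updated online from live process data rather than a single scalar gain, but the sampling-based gradient update follows the same pattern.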
     

Qualification

  • A Master's degree in mathematics with outstanding grades, preferably specialising in stochastic control and/or (nonlinear) PDE control and/or stochastic analysis
  • Knowledge in nonlinear PDEs and stochastic analysis is expected.
  • Basic knowledge in Machine learning or mathematical statistics is expected. 
  • Fluency in Python and MATLAB programming; basic knowledge of the PyTorch library is expected.
  • Proficiency in English is essential; knowledge of German is an advantage.
  • Highly motivated and able to work both within a team and independently
     

We offer

  • Payment according to TV-L E13 with an initial one-year time limit
  • Given scientific aptitude, the possibility to pursue a PhD and to teach
  • TUK strongly encourages qualified female academics to apply
  • Severely disabled persons will be given preference in the case of appropriate suitability (please enclose proof)
  • Electronic application is preferred. Please attach only one coherent PDF.

You can expect interesting, diversified, and responsible tasks within a young, highly motivated, and interdisciplinary team at a growing chair, with great personal creative freedom.

Contact

Prof. Dr.-Ing. Naim Bajcinca
Phone: +49 (0)631/205-3230
Mobile: +49 (0)172/614-8209
Fax:  +49 (0)631/205-4201
Email: mec-apps(at)mv.uni-kl.de

 

Keywords

CO2 carbonation from mine waste
Belt filtration
Stochastic controller
Reinforcement learning
Autonomous control
SLARC
 

Application Papers

Cover Letter
CV
University Certificates
References
List of Publications

 

Application Deadline

31 October 2023

 

Job Availability

Immediate

 
