Journal articles

Nonfragile Output Feedback Tracking Control for Markov Jump Fuzzy Systems Based on Integral Reinforcement Learning Scheme

Abstract: In this article, a novel integral reinforcement learning (RL)-based nonfragile output feedback tracking control algorithm is proposed for uncertain Markov jump nonlinear systems represented by the Takagi–Sugeno fuzzy model. The nonfragile control problem is converted into a zero-sum game, in which the control input and the uncertain disturbance input are regarded as two rival players. Based on the RL architecture, an offline parallel output feedback tracking learning algorithm is first designed to solve the fuzzy stochastic coupled algebraic Riccati equations for Markov jump fuzzy systems. To remove the requirement of precise system information and transition probabilities, an online parallel integral RL-based algorithm is then designed. Moreover, the tracking objective is achieved, and the stochastic asymptotic stability and expected performance of the considered systems are ensured via Lyapunov stability theory and stochastic analysis. Finally, the effectiveness of the proposed control algorithm is verified on a robot arm system.
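The abstract's offline step solves coupled algebraic Riccati equations by iterative policy evaluation and improvement. As a hedged illustration only, the sketch below applies classical Kleinman policy iteration to a single-mode, single-rule continuous-time ARE with hypothetical matrices A, B, Q, R; the paper's setting instead couples one such equation per Markov mode and fuzzy rule, which this toy example does not capture.

```python
# Sketch: Kleinman policy iteration for a single algebraic Riccati
# equation (ARE). A, B, Q, R below are hypothetical illustration
# matrices, not from the paper; the paper's fuzzy stochastic coupled
# AREs would require one such iteration per mode/rule with coupling.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_pi(A, B, Q, R, K0, iters=30):
    """Alternate policy evaluation / improvement toward the ARE solution."""
    K = K0
    for _ in range(iters):
        Ak = A - B @ K  # closed-loop system matrix under current gain
        # Policy evaluation: solve Ak' P + P Ak + Q + K' R K = 0
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K = R^{-1} B' P
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Hypothetical stable second-order example
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
P, K = kleinman_pi(A, B, Q, R, K0=np.zeros((1, 2)))

# ARE residual A'P + PA - PBR^{-1}B'P + Q should be near zero
res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

The online integral RL step described in the abstract replaces the model-based Lyapunov solves above with evaluations of measured state/output trajectories over sampling intervals, which is how the precise system matrices and transition probabilities are avoided.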
https://hal.archives-ouvertes.fr/hal-03825983
Contributor: Frédéric Davesne
Submitted on: Sunday, October 23, 2022 - 4:02:53 PM
Last modification on: Tuesday, October 25, 2022 - 3:40:39 AM

Citation

Jing Wang, Jiacheng Wu, Jinde Cao, Mohammed Chadli, Hao Shen. Nonfragile Output Feedback Tracking Control for Markov Jump Fuzzy Systems Based on Integral Reinforcement Learning Scheme. IEEE Transactions on Cybernetics, In press, pp.1-10. ⟨10.1109/TCYB.2022.3203795⟩. ⟨hal-03825983⟩
