A Reinforcement Learning Approach to Quadrotor Stability in Windy Conditions

Session Number

CMPS 27

Advisor(s)

Dr. Ankit Agrawal, Saint Louis University

Discipline

Computer Science

Start Date

17-4-2025 11:25 AM

End Date

17-4-2025 11:40 AM

Abstract

Unmanned Aerial Vehicles (UAVs), particularly quadrotors, have diverse applications in logistics, agriculture, surveillance, and search and rescue. However, quadrotor stability is highly sensitive to variable environmental conditions such as wind. The Proportional-Integral-Derivative (PID) controller, the traditional control method for quadrotors, performs well in calm conditions but struggles to maintain drone attitude in more turbulent environments. Additionally, existing reinforcement learning (RL) research on quadrotor stability has focused primarily on simulated environments with low wind speeds and unrealistic wind dynamics, limiting its practical applicability. In this work, we apply reinforcement learning to the problem of quadrotor stabilization to improve performance under wind disturbances of varying direction and intensity. Specifically, we employ a Deep Q-Network (DQN) trained with a Computational Fluid Dynamics (CFD)-based wind model and compare its performance against a traditional PID controller. The study leverages the Gym reinforcement learning library, the Gazebo simulator, and the PX4 flight control stack to define a custom wind environment and train a quadrotor agent. Our results demonstrate the potential of this reinforcement learning approach to enhance quadrotor stability in dynamic wind conditions.
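
The sketch below (not the authors' implementation) illustrates, under stated assumptions, how a custom Gym wind environment and a DQN agent might be wired together. The actual study couples Gym with the Gazebo simulator and the PX4 flight stack and draws wind disturbances from a CFD-based model; here a simplified roll-attitude model and a random gust torque stand in for both, and all names and parameters (WindyQuadEnv, gust variance, reward weights) are illustrative only.

"""
Minimal sketch of a custom wind environment and DQN training loop.
Assumptions: the gymnasium package (maintained successor to Gym) and
stable-baselines3 provide the environment API and the DQN agent; the
toy 1-DOF roll dynamics and Gaussian gust replace Gazebo/PX4 and the
CFD-based wind model used in the study.
"""
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class WindyQuadEnv(gym.Env):
    """Toy roll-attitude stabilization task under a random wind torque."""

    def __init__(self, max_steps=500):
        super().__init__()
        # Observation: [roll angle (rad), roll rate (rad/s)]
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)
        # Discrete torque commands, since a DQN requires a discrete action space
        self.torques = np.array([-0.2, -0.05, 0.0, 0.05, 0.2], dtype=np.float32)
        self.action_space = spaces.Discrete(len(self.torques))
        self.dt = 0.02
        self.max_steps = max_steps

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(-0.1, 0.1, size=2).astype(np.float32)
        self.steps = 0
        return self.state.copy(), {}

    def step(self, action):
        roll, roll_rate = self.state
        # Placeholder Gaussian gust torque; the study instead samples a CFD-based wind field
        wind_torque = self.np_random.normal(0.0, 0.05)
        torque = self.torques[action] + wind_torque
        roll_rate += torque * self.dt
        roll += roll_rate * self.dt
        self.state = np.array([roll, roll_rate], dtype=np.float32)
        self.steps += 1
        # Reward attitude hold: penalize deviation from level and excessive roll rate
        reward = -(roll ** 2 + 0.1 * roll_rate ** 2)
        terminated = bool(abs(roll) > np.pi / 4)   # treat a large tilt as failure
        truncated = self.steps >= self.max_steps
        return self.state.copy(), float(reward), terminated, truncated, {}


if __name__ == "__main__":
    # Train a DQN agent on the toy environment (requires stable-baselines3)
    from stable_baselines3 import DQN

    env = WindyQuadEnv()
    model = DQN("MlpPolicy", env, learning_rate=1e-3, verbose=1)
    model.learn(total_timesteps=50_000)

In the study itself, the step function would forward the selected command to PX4 and read the resulting attitude back from Gazebo, with the reward computed from the simulated drone's deviation from its setpoint; the structure above is only meant to show where the wind model, action discretization, and reward shaping fit.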

