MS Defense by Taehwan Seo

Event Details
  • Date/Time: Wednesday, November 30, 2022, 10:30 am - 12:30 pm
  • Location: Weber 200

Summary Sentence: Verification of Adversarially Robust Reinforcement Learning Mechanisms in Autonomous Systems


Taehwan Seo
(Advisor: Prof. Kyriakos G. Vamvoudakis)

will defend a master’s thesis entitled,

Verification of Adversarially Robust Reinforcement Learning Mechanisms in Autonomous Systems


 Wednesday, November 30 at 10:30 a.m.

Weber 200



Artificial Intelligence (AI) is effective at achieving both optimality and adaptability in autonomous control systems. However, the policy an AI produces is a black box that cannot be analyzed in advance, which motivates measuring the model's performance through verification. The performance and safety of a Cyber-Physical System (CPS) are subject to cyberattacks that aim either to make the system fail in operation or to disrupt learning by corrupting the training data. For safety and reliability, verifying the impact of such attacks on a CPS with a learning component is therefore critical. This thesis proposes a verification framework for adversarially robust Reinforcement Learning (RL) policies built on the software toolkit VerifAI, providing robustness measures over adversarial attack perturbations. It equips an algorithm engineer with an RL control-model verification toolbox that can be used to evaluate the reliability of a given attack-mitigation algorithm and the performance of nonlinear control algorithms against their objectives. Specifically, we developed an attack-mitigating RL scheme for nonlinear dynamics by interconnecting off-policy RL with an on-off adversarially robust mechanism, and then connected it to the simulation and verification toolkit to test both the verification framework and the integrated algorithm. The full verification process was evaluated on two control problems: the cart-pole problem from OpenAI Gym, and the attitude control of a Cessna 172 in X-Plane 11. From these experiments, we analyzed how the attack-mitigating RL algorithm performed under specific adversarial attacks of varying gain, and evaluated the performance of the resulting models over changing environmental parameters.
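The gain-sweep style of robustness measurement described above can be illustrated with a self-contained toy, not the thesis's actual VerifAI pipeline: a simulated cart-pole under a fixed policy, where an observation-bias attack of varying gain is applied and the survival time is recorded as the robustness measure. The dynamics constants follow the classic OpenAI Gym cart-pole; the linear feedback gains, the constant-bias attack model, and the Euler integration are illustrative assumptions, not the defended algorithm.

```python
import math

# Classic cart-pole constants (as in OpenAI Gym's CartPole dynamics).
GRAVITY, M_CART, M_POLE, HALF_LEN, TAU = 9.8, 1.0, 0.1, 0.5, 0.02
TOTAL_M = M_CART + M_POLE
POLE_ML = M_POLE * HALF_LEN

def step(state, force):
    """One explicit-Euler step of the cart-pole equations of motion."""
    x, x_dot, th, th_dot = state
    sin_t, cos_t = math.sin(th), math.cos(th)
    temp = (force + POLE_ML * th_dot ** 2 * sin_t) / TOTAL_M
    th_acc = (GRAVITY * sin_t - cos_t * temp) / (
        HALF_LEN * (4.0 / 3.0 - M_POLE * cos_t ** 2 / TOTAL_M))
    x_acc = temp - POLE_ML * th_acc * cos_t / TOTAL_M
    return (x + TAU * x_dot, x_dot + TAU * x_acc,
            th + TAU * th_dot, th_dot + TAU * th_acc)

def survival_steps(attack_gain, horizon=500, fail_angle=0.2):
    """Run a fixed linear policy while the adversary biases the observed
    pole angle by `attack_gain` radians; return how many steps the pole
    stays within `fail_angle` radians of upright."""
    state = (0.0, 0.0, 0.02, 0.0)               # small initial tilt
    for k in range(horizon):
        _, _, th, th_dot = state
        th_obs = th + attack_gain               # observation-bias attack
        force = 50.0 * th_obs + 10.0 * th_dot   # illustrative linear policy
        state = step(state, force)
        if abs(state[2]) > fail_angle:
            return k + 1                        # pole fell at this step
    return horizon                              # survived the whole episode

# Robustness measure: survival time swept over the adversarial gain.
for gain in (0.0, 0.1, 0.3):
    print(f"attack gain {gain:.1f}: survived {survival_steps(gain)} steps")
```

Sweeping the attack gain (and, analogously, environment parameters such as pole length) yields a robustness profile of the policy; a falsification toolkit like VerifAI automates this search over the parameter space rather than sampling a fixed grid.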


  • Prof. Kyriakos G. Vamvoudakis – School of Aerospace Engineering (advisor)
  • Prof. Dimitri Mavris – School of Aerospace Engineering
  • Prof. Yongxin Chen – School of Aerospace Engineering


Additional Information

In Campus Calendar: Graduate Studies

Invited Audience: Faculty/Staff, Public, Undergraduate students

Categories: ms defense
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Nov 15, 2022 - 12:56pm
  • Last Updated: Nov 15, 2022 - 12:56pm