Ph.D. Proposal Oral Exam - Duo Xu
Title: Bridging Human and AI: Interpretable Human-in-the-loop Reinforcement Learning via Inductive Logic Programming
Committee:
Dr. Fekri, Advisor
Dr. F. Zhang, Chair
Dr. Sivakumar
Abstract: The objective of the proposed research is to investigate human-in-the-loop reinforcement learning (RL) in an interpretable way via inductive logic programming (ILP). Although state-of-the-art deep RL algorithms have achieved success in many fields, from robotic control to video games, few have been deployed in real-world applications, primarily because of data inefficiency and a lack of interpretability. To tackle these problems, the proposed research investigates interpretable RL with a human in the loop. Since ILP is a popular and powerful tool for inducing logic rules from data, we realize the interpretability of deep RL by adopting ILP models built on modern differentiable deep neural architectures to learn interpretable policies or transition models. Further, to improve sample efficiency, we incorporate human assistance into the learning loop of the RL agent. We propose to investigate interpretable hierarchical RL, interpretable RL with human feedback, and text-based games as an application example.
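For readers unfamiliar with the approach, the sketch below is a minimal, hypothetical illustration of the general idea behind learning a differentiable weighting over candidate logic rules to shape an interpretable policy, and adjusting those weights from a demonstrated (e.g. human-provided) action. The predicates, rules, and update scheme are assumptions made for illustration only and are not the proposal's actual method.

```python
# Minimal sketch (NumPy only): learnable weights over candidate logic rules
# score actions; a demonstrated action nudges the weights of the rules that
# fired. All rules/atoms below are hypothetical examples.
import numpy as np

# Candidate rules: each maps a body (set of state atoms) to an action atom.
RULES = [
    ({"at(door)", "has(key)"}, "open(door)"),
    ({"at(door)"},             "move(left)"),
    ({"sees(goal)"},           "move(right)"),
]
ACTIONS = ["open(door)", "move(left)", "move(right)"]

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=len(RULES))  # one learnable weight per rule

def action_logits(state_atoms):
    """Score each action by summing weights of rules whose bodies hold in the state."""
    logits = np.zeros(len(ACTIONS))
    for wi, (body, head) in zip(w, RULES):
        if body <= state_atoms:                      # rule body satisfied
            logits[ACTIONS.index(head)] += wi
    return logits

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def update(state_atoms, demo_action, lr=0.5):
    """One supervised step: shift rule weights so the policy prefers the demonstrated action."""
    global w
    p = softmax(action_logits(state_atoms))
    grad_logits = p.copy()
    grad_logits[ACTIONS.index(demo_action)] -= 1.0   # d(cross-entropy)/d(logits)
    for i, (body, head) in enumerate(RULES):
        if body <= state_atoms:                      # only fired rules receive gradient
            w[i] -= lr * grad_logits[ACTIONS.index(head)]

state = {"at(door)", "has(key)"}
for _ in range(50):
    update(state, "open(door)")
print(softmax(action_logits(state)))  # probability mass shifts toward open(door)
```

The learned weights remain attached to human-readable rules, which is the sense in which such a policy is interpretable; in the differentiable ILP setting described in the abstract, rule selection itself would also be learned rather than fixed as above.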