PhD Proposal by Takuma Nakamura

Event Details
  • Date/Time:
    • Wednesday November 15, 2017 - Thursday November 16, 2017
      1:45 pm - 3:59 pm
  • Location: College of Computing (CoC) Room 053

Summary Sentence: Multiple-Hypothesis Vision-Based Landing Autonomy


Ph.D. Thesis Proposal by

Takuma Nakamura

(Advisor: Professor Eric N. Johnson)


Multiple-Hypothesis Vision-Based Landing Autonomy


1:45 PM, Wednesday, November 15, 2017

College of Computing (CoC) Room 053



Unmanned Aerial Vehicles (UAVs) still need humans in the mission loop for many tasks, and landing is one that typically involves a human pilot. This is because of the complexity of the maneuver itself and flight-critical factors such as recognition of a landing zone, collision avoidance, assessment of landing sites, and the decision to abort the maneuver. Another critical consideration is the reliance of UAVs on GPS. GPS is not a reliable solution for landing in some scenarios (e.g., delivering a package in an urban environment, or a surveillance UAV returning to a home ship under signal jamming), and a landing based solely on GPS severely restricts the UAV operation envelope. Vision is a promising route to fully autonomous landing because a camera is a rich, lightweight, and affordable sensor that functions without any external resource.

Although vision is a powerful tool for autonomous landing, its use for state estimation requires extensive consideration. First, vision-based landing faces the problem of occlusion: a target detected at high altitude may be lost as the vehicle descends, while a small visual target cannot be recognized from high altitude in the first place. Second, the measurement errors are highly nonlinear and non-Gaussian because of the discrete pixel space, the conversion from pixels to physical units, the complex camera model, and the complexity of the detection algorithms. The vision sensor produces a varying number of measurements with each image, and the measurements may include false positives. Moreover, the estimation system is heavily tasked under realistic conditions: the landing site may be moving, tilted, or close to an obstacle, and there may be more than one available landing location. In addition to assessing these conditions, the vision system must also quantify the confidence of its estimates, since the decisions to initiate, continue, and abort the mission are made based on the estimated states and their confidence. A system that handles these issues and consistently produces a navigation solution while the vehicle lands removes one of the limitations on autonomous UAV operation.
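Why pixel measurements are nonlinear and non-Gaussian can be illustrated with a standard pinhole-camera projection followed by quantization to the pixel grid. The camera parameters below are illustrative assumptions, not values from the proposal:

```python
import numpy as np

# Hypothetical pinhole-camera intrinsics (illustrative only): focal
# lengths and principal point of a 640x480 camera, in pixels.
FX = FY = 500.0
CX, CY = 320.0, 240.0

def project(p_cam):
    """Project a 3-D point in the camera frame to integer pixel coordinates.

    The perspective divide (x/z, y/z) makes the measurement nonlinear in
    the state, and rounding to the discrete pixel grid makes the residual
    error non-Gaussian.
    """
    x, y, z = p_cam
    u = FX * x / z + CX
    v = FY * y / z + CY
    return np.array([round(u), round(v)])  # quantization to the pixel grid

# The same lateral offset maps to very different pixel displacements
# depending on depth, so a fixed pixel error corresponds to a
# depth-dependent physical error:
near = project([1.0, 0.5, 10.0])
far = project([1.0, 0.5, 25.0])
```

Note how the point at 25 m depth lands much closer to the principal point than the one at 10 m, even though both have the same lateral offset; this depth dependence is one source of the nonlinearity the estimator must absorb.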

This proposal presents initial work on the development of a state estimation system for UAV landing. Two vision algorithms are developed to allow the system to observe a landing site across a range of altitudes. One uses a nested visual fiducial architecture, and the other uses a sliding-window approach with a partial target. The first algorithm assumes that an arbitrary visual marker can be placed at the site; the second is more flexible and can be used with any known target. An extended Kalman particle filter (PF-EKF) fuses these visual measurements with other on-board sensors, such as an inertial measurement unit (IMU), GPS, a magnetometer, and a barometer, to estimate the states of the vehicle and the landing location in a paradigm known as simultaneous localization and mapping (SLAM). The PF-EKF not only copes well with the highly nonlinear and non-Gaussian distribution of the vision measurement errors but also hypothesizes and evaluates different scenarios, such as multiple targets being in the line of sight. Preliminary results from a numerical simulation and an image-in-the-loop simulation are provided.
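The full PF-EKF is beyond a short excerpt, but its core idea, weighting hypotheses by a non-Gaussian measurement likelihood that admits false positives, can be sketched with a bootstrap particle filter. Everything here (the 1-D static landing-site model, the noise parameters, the false-positive rate) is an illustrative assumption, not the proposal's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: estimate a static 1-D landing-site position from
# detections that are usually Gaussian-distributed around the true site
# but are sometimes false positives drawn uniformly over the search area.
TRUE_SITE = 3.0          # ground-truth site position, meters (for simulation)
P_FALSE = 0.2            # probability a detection is a false positive
HALF_WIDTH = 10.0        # half-width of the search area, meters
SIGMA = 0.5              # std. dev. of a true detection, meters

def likelihood(z, particles):
    """Mixture likelihood: true detection (Gaussian) or false positive
    (uniform). This non-Gaussian error model is what a plain EKF cannot
    represent but a particle filter handles directly."""
    gauss = np.exp(-0.5 * ((z - particles) / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))
    uniform = 1.0 / (2 * HALF_WIDTH)
    return (1 - P_FALSE) * gauss + P_FALSE * uniform

# Initialize particles uniformly over the search area.
N = 2000
particles = rng.uniform(-HALF_WIDTH, HALF_WIDTH, N)
weights = np.full(N, 1.0 / N)

for _ in range(50):
    # Simulated detection: near the site, or occasionally a false positive.
    if rng.random() < P_FALSE:
        z = rng.uniform(-HALF_WIDTH, HALF_WIDTH)
    else:
        z = TRUE_SITE + SIGMA * rng.standard_normal()

    # Weight update and normalization (Bayes rule over the particle set).
    weights *= likelihood(z, particles)
    weights /= weights.sum()

    # Systematic resampling when the effective sample size gets low.
    if 1.0 / np.sum(weights**2) < N / 2:
        u = (rng.random() + np.arange(N)) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), N - 1)
        particles = particles[idx] + 0.01 * rng.standard_normal(N)  # jitter
        weights = np.full(N, 1.0 / N)

estimate = np.sum(weights * particles)  # posterior-mean site position
```

The mixture likelihood is the key design choice: occasional false positives merely flatten the update rather than dragging the estimate, which is the behavior a Gaussian-only filter lacks. The PF-EKF described in the proposal goes further by running an EKF within each particle over the joint vehicle and landing-site state.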

The focus of this work is to improve the accuracy and consistency of the estimates of the vehicle and landing-site states while a UAV is attempting a landing. Additional validation of the proposed system with real flight-test data is planned, and the suggested algorithm for detecting an occluded target will be tested with other detection frameworks.



Professor Eric N. Johnson, School of Aerospace Engineering (Advisor)

Professor Eric Feron, School of Aerospace Engineering

Professor James Hays, School of Computer Science

Professor Patricio Antonio Vela, School of Electrical Engineering

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
  • Created By: Tatianna Richardson
  • Created On: Nov 10, 2017 - 9:59am
  • Last Updated: Nov 10, 2017 - 9:59am