Ph.D. Thesis Proposal: Zahoor Zafrulla

Event Details
  • Date/Time:
    • Friday December 14, 2012 - Saturday December 15, 2012
      11:00 am - 12:59 pm
  • Location: TBD - contact student


Ph.D. Thesis Proposal Announcement
Title: Recognition of American Sign Language Classifiers

Zahoor Zafrulla
Computer Science Ph.D. Student
School of Interactive Computing
College of Computing
Georgia Institute of Technology

Date: December 14, 2012 (Friday)
Time: 12p - 2p EST
Location: TBD

Committee:
  • Dr. Thad Starner (Advisor, School of Interactive Computing, Georgia Tech)
  • Dr. Irfan Essa (Co-Advisor, School of Interactive Computing, Georgia Tech)
  • Dr. Jim Rehg (School of Interactive Computing, Georgia Tech)
  • Dr. Harley Hamilton (School of Interactive Computing, Georgia Tech)
  • Dr. Vassilis Athitsos (Computer Science and Engineering Department, University of Texas at Arlington)

In this proposal I address the problem of automatically recognizing selected classifier-based grammatical structures of American Sign Language (ASL). Classifiers in ASL use surrogate handshapes for people or objects and convey information about their location, movement, and appearance. In the past, researchers have focused on recognition of fingerspelling, isolated signs, facial expressions, and WH-questions. Challenging problems such as the recognition of sign-based grammatical structures, which consist of basic signs strung together to form phrases or sentences, and of classifier-based grammatical structures remain relatively unexplored in the field of ASL recognition.

In our work we have developed CopyCat, an educational ASL game designed to help deaf children improve their language abilities. CopyCat requires children to describe a graphic in a progressively more difficult expressive signing task as they advance through the game.
I will show that by leveraging context we can use verification, in place of recognition, to boost machine performance in determining whether the signed responses in an expressive signing task, such as the CopyCat game, are correct or incorrect. I propose to improve the machine verifier's ability to identify sign boundaries by using a novel two-pass technique that combines signed input in both the forward and reverse directions. Additionally, I will reduce CopyCat's dependence on custom-manufactured hardware by using an off-the-shelf Kinect camera to achieve similar verification performance. I propose that we can extend our ability to recognize sign language by leveraging depth maps and creating an architecture to recognize selected classifier-based grammatical structures of ASL. I will also demonstrate the flexibility of this architecture by showing that it can spot and recognize ASL classifier constructions embedded within an ASL narrative.
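The verification-with-context idea above can be illustrated with a minimal, hypothetical sketch: because the game already knows which phrase the child is expected to sign, the system only needs to verify that response rather than recognize it from an open vocabulary. In the sketch below, per-sign alignment scores from a forward pass and a reverse pass are combined and thresholded; the function name, scores, and threshold are all invented for illustration and are not the actual CopyCat implementation.

```python
# Hypothetical verification sketch (not the author's implementation).
# Given the phrase the game context expects, accept or reject a signed
# response by thresholding per-sign alignment log-likelihoods obtained
# from a forward pass and a reverse pass over the input.

def verify_phrase(forward_scores, reverse_scores, threshold=-5.0):
    """Accept the response if every expected sign's combined
    (forward + reverse averaged) alignment score clears the threshold.

    forward_scores / reverse_scores: dicts mapping each expected sign
    to its alignment log-likelihood in that pass direction.
    """
    if forward_scores.keys() != reverse_scores.keys():
        return False  # the two passes disagree on the signs found
    combined = {
        sign: (forward_scores[sign] + reverse_scores[sign]) / 2.0
        for sign in forward_scores
    }
    # Reject if any expected sign scores poorly overall.
    return all(score >= threshold for score in combined.values())

# Example with made-up scores for the phrase "ALLIGATOR BEHIND CHAIR".
fwd = {"ALLIGATOR": -2.1, "BEHIND": -3.4, "CHAIR": -1.8}
rev = {"ALLIGATOR": -2.5, "BEHIND": -3.0, "CHAIR": -2.2}
print(verify_phrase(fwd, rev))  # True: every combined score is above -5.0
```

Combining the two pass directions, as in the averaged scores here, is one simple way a reverse pass could sharpen boundary decisions: a sign whose start was poorly localized going forward may align better when scored from the end of the phrase backward.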

Additional Information

In Campus Calendar

College of Computing, School of Interactive Computing

  • Created On: Dec 4, 2012 - 5:00am
  • Last Updated: Oct 7, 2016 - 10:01pm