PhD Defense by Kristin Siu


Title: Design and Evaluation of Intelligent Reward Structures in Human Computation Games


Kristin Siu

School of Interactive Computing

College of Computing

Georgia Institute of Technology


Date: Tuesday, December 8, 2015

Time: 1:00 PM to 3:00 PM EST

Location: TSRB 222




Committee:

Dr. Mark Riedl (Advisor), School of Interactive Computing, Georgia Institute of Technology
Dr. Blair MacIntyre, School of Interactive Computing, Georgia Institute of Technology
Dr. Brian Magerko, School of Literature, Communication and Culture/School of Interactive Computing, Georgia Institute of Technology
Dr. Betsy DiSalvo, School of Interactive Computing, Georgia Institute of Technology
Dr. Seth Cooper, College of Computer and Information Science, Northeastern University




Human computation games (HCGs) allow people to solve crowdsourcing tasks, which are often too difficult to model or solve algorithmically, through gameplay. These games have been used to solve a wide variety of problems, from image labeling to protein folding, while attempting to provide engaging experiences for their participants. However, turning a human computation task into a successful game is often a difficult, daunting process. This is complicated by the fact that design knowledge for human computation games remains limited and that HCGs must simultaneously pursue two often-orthogonal design goals: solving the task and engaging the players. Among the many elements of these games that warrant deeper investigation are reward structures, which are imperative for providing feedback to players.


In this dissertation, I plan to show how manipulating different aspects of reward mechanics in HCGs can improve player engagement without negatively affecting the results of the corresponding human computation task. I will demonstrate this improvement by running controlled experiments that simultaneously measure and evaluate task performance and player engagement across three aspects of reward systems: alternative scoring functions, player choice among different reward types, and automatic adaptation of rewards based on player performance and available reward content. First, I will discuss the results of a study showing how a simple change from a collaborative to a competitive scoring function improves player engagement with no significant differences in the human computation task results. Next, I will discuss ongoing research, including a current experiment that uses three reward systems to examine how giving players a choice of rewards affects task completion and player engagement. Finally, I will propose two kinds of adaptive systems that change reward distribution functions, and I will describe how I intend to implement and evaluate them.

