
PhD Defense by Yi Yang


Title: Robust Adaptation of Natural Language Processing for Language Variation

 

Yi Yang

School of Interactive Computing

College of Computing

Georgia Institute of Technology

 

Date: Friday, November 18th, 2016

Time: 12 PM – 2 PM EST

Location: KACB 1212

 

Committee:

Dr. Jacob Eisenstein, School of Interactive Computing, Georgia Tech (Advisor)

Dr. James Rehg, School of Interactive Computing, Georgia Tech

Dr. Byron Boots, School of Interactive Computing, Georgia Tech

Dr. Polo Chau, School of Computational Science and Engineering, Georgia Tech

Dr. Hal Daumé III, Department of Computer Science, University of Maryland

 

Abstract

 

Natural language processing (NLP) technology has been applied in domains ranging from social media and digital humanities to public health. Unfortunately, existing NLP techniques often perform poorly when adopted in these areas. The language of new datasets and settings can differ significantly from standard NLP training corpora, and modern NLP techniques are usually vulnerable to the lexical, syntactic, and semantic variation found in non-standard language. Previous approaches to this problem suffer from three major weaknesses. First, they often employ supervised methods that require expensive annotations and quickly become outdated given the dynamic nature of language. Second, they usually fail to leverage the valuable metadata associated with the target text in these areas. Third, they treat language as uniform, ignoring differences in language use across individuals.

 

In this thesis, we propose several novel techniques to overcome these weaknesses and build NLP systems that are robust to language variation. These approaches are driven by co-occurrence statistics and rich metadata rather than costly annotations, and can easily adapt to new settings. First, we can transform lexical variation into text that better matches standard datasets. We present a unified unsupervised statistical model for text normalization, in which the relationship between standard and non-standard tokens is characterized by a log-linear model that permits arbitrary features. Text normalization tackles variation at the lexical level, thereby improving underlying NLP tasks. Second, we can overcome language variation by adapting standard NLP tools to fit the varied text directly. We propose a novel yet simple feature embedding approach that learns joint feature representations for domain adaptation by exploiting the feature template structure common to NLP problems. We also show how to incorporate metadata attributes into feature embeddings, which helps distill the domain-invariant properties of each feature across multiple related domains. Because domain adaptation can handle the full range of linguistic phenomena, it often yields better performance than text normalization. Finally, a subtle challenge posed by variation is that language is not uniformly distributed among individuals, while traditional NLP systems usually treat texts from different authors identically. Both text normalization and domain adaptation follow standard NLP settings and fail to handle this problem. We propose to address this difficulty by exploiting the sociological theory of homophily (the tendency of socially linked individuals to behave similarly) to build models that account for linguistic variation at the individual or social-community level.
We investigate both label homophily and linguistic homophily to build socially adapted information extraction and sentiment analysis systems, respectively. Our work delivers state-of-the-art NLP systems for social media and historical texts on various standard benchmark datasets.
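The log-linear view of normalization can be sketched in a few lines: score each candidate standard token against the non-standard token with a weighted feature function, and pick the argmax (the softmax is monotone in the score, so normalizing constants can be skipped). The features, weights, and candidate list below are invented placeholders, not the features used in the thesis:

```python
def is_subseq(s, t):
    """True if s is a subsequence of t (membership test consumes the iterator)."""
    it = iter(t)
    return all(ch in it for ch in s)

# Hypothetical string-pair features; the thesis's actual feature set differs.
def features(nonstd, cand):
    return {
        "subseq": float(is_subseq(nonstd, cand)),
        "same_first": float(nonstd[:1] == cand[:1]),
        "len_diff": float(abs(len(nonstd) - len(cand))),
    }

def score(weights, nonstd, cand):
    """Linear score w . f(nonstd, cand) of a log-linear model."""
    return sum(weights[k] * v for k, v in features(nonstd, cand).items())

def normalize(weights, nonstd, candidates):
    """Return the candidate with the highest model probability."""
    return max(candidates, key=lambda c: score(weights, nonstd, c))

# Toy weights; in the unsupervised model these would be learned.
weights = {"subseq": 3.0, "same_first": 1.0, "len_diff": -0.5}
print(normalize(weights, "tmrw", ["tomorrow", "tumor"]))  # tomorrow
```

Because the scorer permits arbitrary features, richer signals (edit distance, phonetic similarity, contextual statistics) can be dropped in without changing the decision rule.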
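The homophily idea can be illustrated with a generic neighbor-averaging sketch: a per-author sentiment score is blended with the mean score of that author's social neighbors. This is only an illustration of the homophily assumption, not the thesis's actual model; all names and weights are invented:

```python
def smooth_scores(scores, graph, alpha=0.5):
    """Blend each user's sentiment score with the mean score of their
    social neighbors, encoding the assumption that linked users behave
    similarly (homophily)."""
    smoothed = {}
    for user, s in scores.items():
        neighbors = graph.get(user, [])
        if neighbors:
            mean = sum(scores[n] for n in neighbors) / len(neighbors)
            smoothed[user] = (1 - alpha) * s + alpha * mean
        else:
            smoothed[user] = s  # isolated users keep their own score
    return smoothed

# Toy example: user "b" is linked to "a" and "c", who both look positive.
scores = {"a": 1.0, "b": 0.0, "c": 1.0}
graph = {"b": ["a", "c"]}
print(smooth_scores(scores, graph))  # "b" moves from 0.0 to 0.5
```

The parameter `alpha` controls how strongly the social graph pulls a user's prediction toward their community.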

 

