{"634737":{"#nid":"634737","#data":{"type":"event","title":"Ph.D. Dissertation Defense - Min-Hung Chen","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETitle\u003C\/strong\u003E:\u0026nbsp;\u003Cem\u003EBridging Distributional Discrepancy with Temporal Dynamics for Video Understanding\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECommittee:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Ghassan AlRegib, ECE, Chair, Advisor\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Zsolt Kira, CoC\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Patricio Vela, ECE\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Eva Dyer, BME\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Yi-Chang Tsai, CEE\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAbstract:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVideo has become one of the major media in our society, attracting considerable interest in the development of video analysis techniques for various applications.\u0026nbsp;\u003Cstrong\u003ETemporal dynamics\u003C\/strong\u003E, which represent how information changes over time, are the key component of videos. However, it is still unclear how temporal dynamics benefit video tasks, especially in the cross-domain setting, which is closer to real-world scenarios. Therefore, the objective of this thesis is to effectively exploit temporal dynamics from videos to tackle distributional discrepancy problems in video understanding. To achieve this objective, I first proposed two approaches to exploit spatio-temporal dynamics: 1)\u0026nbsp;\u003Cem\u003ETemporal Segment LSTM (TS-LSTM)\u003C\/em\u003E\u0026nbsp;and 2)\u0026nbsp;\u003Cem\u003EInception-style Temporal-ConvNet (Temporal-Inception)\u003C\/em\u003E. 
Second,\u0026nbsp;I collected two large-scale datasets for cross-domain action recognition,\u0026nbsp;\u003Cem\u003EUCF-HMDB\u003Csub\u003Efull\u003C\/sub\u003E\u003C\/em\u003E\u0026nbsp;and\u0026nbsp;\u003Cem\u003EKinetics-Gameplay\u003C\/em\u003E, to facilitate cross-domain video research, and proposed the\u0026nbsp;\u003Cem\u003ETemporal Attentive Adversarial Adaptation Network (TA\u003Csup\u003E3\u003C\/sup\u003EN)\u003C\/em\u003E\u0026nbsp;to simultaneously attend, align, and learn temporal dynamics across domains. Finally,\u0026nbsp;to utilize temporal dynamics from unlabeled videos for action segmentation, I proposed\u0026nbsp;\u003Cem\u003ESelf-Supervised Temporal Domain Adaptation (SSTDA)\u003C\/em\u003E\u0026nbsp;to jointly align cross-domain feature spaces embedded with local and global temporal dynamics.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Bridging Distributional Discrepancy with Temporal Dynamics for Video Understanding "}],"uid":"28475","created_gmt":"2020-04-24 19:57:58","changed_gmt":"2020-04-24 19:57:58","author":"Daniela Staiculescu","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2020-05-06T13:00:00-04:00","event_time_end":"2020-05-06T15:00:00-04:00","event_time_end_last":"2020-05-06T15:00:00-04:00","gmt_time_start":"2020-05-06 17:00:00","gmt_time_end":"2020-05-06 19:00:00","gmt_time_end_last":"2020-05-06 19:00:00","rrule":null,"timezone":"America\/New_York"},"extras":[],"groups":[{"id":"434381","name":"ECE Ph.D. 
Dissertation Defenses"}],"categories":[],"keywords":[{"id":"100811","name":"Phd Defense"},{"id":"1808","name":"graduate students"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78771","name":"Public"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}