{"478901":{"#nid":"478901","#data":{"type":"news","title":"Georgia Tech Researchers Demonstrate How the Brain Can Handle So Much Data","body":[{"value":"\u003Cp\u003EHumans learn to very quickly identify complex objects and variations of them. We generally recognize an \u201cA\u201d no matter what the font, texture or background, for example, or the face of a coworker even if she puts on a hat or changes her hairstyle. We also can identify an object when just a portion is visible, such as the corner of a bed or the hinge of a door. But how? Are there simple techniques that humans use across diverse tasks? And can such techniques be computationally replicated to improve computer vision, machine learning or robotic performance?\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EResearchers at Georgia Tech discovered that humans can categorize data using less than 1 percent of the original information, and validated an algorithm to explain human learning -- a method that also can be used for machine learning, data analysis and computer vision.\u003C\/p\u003E\u003Cp\u003E\u201cHow do we make sense of so much data around us, of so many different types, so quickly and robustly?\u201d said \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/people\/santosh-vempala\u0022\u003E\u003Cstrong\u003ESantosh Vempala\u003C\/strong\u003E\u003C\/a\u003E, Distinguished Professor of Computer Science at the Georgia Institute of Technology and one of four researchers on the project. \u201cAt a fundamental level, how do humans begin to do that? 
It\u2019s a computational problem.\u201d\u003C\/p\u003E\u003Cp\u003EResearchers \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/people\/rosa-arriaga\u0022\u003E\u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E\u003C\/a\u003E, \u003Cstrong\u003EMaya Cakmak\u003C\/strong\u003E, \u003Cstrong\u003EDavid Rutter\u003C\/strong\u003E, and Vempala at Georgia Tech\u2019s College of Computing studied human performance in \u201crandom projection\u201d tests to understand how well humans learn an object. They presented test subjects with original, abstract images and then asked whether they could correctly identify that same image when randomly shown just a small portion of it.\u003C\/p\u003E\u003Cp\u003E\u201cWe hypothesized that random projection could be one way humans learn,\u201d explains Arriaga, a senior research scientist and developmental psychologist. \u201cThe short story is, the prediction was right. Just 0.15 percent of the total data is enough for humans.\u201d\u003C\/p\u003E\u003Cp\u003ENext, the researchers tested a computational algorithm to allow machines (very simple neural networks) to complete the same tests. Machines performed as well as humans, which provides a new understanding of how humans learn. \u201cWe found evidence that, in fact, the human and the neural network behave very similarly,\u201d Arriaga said.\u003C\/p\u003E\u003Cp\u003EThe researchers wanted to come up with a mathematical definition of what typical and atypical stimuli look like and, from that, predict which data would be hardest for the human and the machine to learn. Humans and machines performed equally, demonstrating that indeed one can predict which data will be hardest to learn over time.\u003C\/p\u003E\u003Cp\u003EResults were recently published in the journal \u003Cem\u003ENeural Computation\u003C\/em\u003E (MIT Press). 
It is believed to be the first study of \u201crandom projection,\u201d the core component of the researchers\u2019 theory, with human subjects.\u003C\/p\u003E\u003Cp\u003ETo test their theory, researchers created three families of abstract images \u2014 some originally as large as 500 x 500 pixels \u2014 and extracted very small, random samples from them, ranging in size from 6 to 20 pixels square. Humans and simple neural networks were shown the whole image for 10 seconds, then randomly shown 16 smaller sketches and asked to identify the original image. Using abstract images ensured that neither humans nor machines had any prior knowledge of what the objects were.\u003C\/p\u003E\u003Cp\u003E\u201cWe were surprised by how close the performance was between extremely simple neural networks and humans,\u201d Vempala said. \u201cThe design of neural networks was inspired by how we think humans learn, but it\u2019s a weak inspiration. To find that it matches human performance is quite a surprise.\u201d\u003C\/p\u003E\u003Cp\u003E\u201cThis fascinating paper introduces a localized random projection that compresses images while still making it possible for humans and machines to distinguish broad categories,\u201d said \u003Cstrong\u003ESanjoy Dasgupta\u003C\/strong\u003E, professor of computer science and engineering at the University of California San Diego and an expert on machine learning and random projection. \u201cIt is a creative combination of insights from geometry, neural computation, and machine learning.\u201d\u003C\/p\u003E\u003Cp\u003EAlthough researchers cannot definitively claim that the human brain actually engages in random projection, the results support the notion that random projection is a plausible explanation, the authors conclude. 
In addition, the results suggest a useful technique for machine learning: large datasets are a formidable challenge today, and random projection is one way to make them manageable without losing essential content, at least for basic tasks such as categorization and decision making.\u003C\/p\u003E\u003Cp\u003EThe algorithmic theory of learning based on random projection has already been cited more than 300 times and has become a commonly used technique in machine learning for handling large datasets of diverse types.\u003C\/p\u003E\u003Cp\u003EThe complete research paper, \u201cVisual Categorization with Random Projection,\u201d can be \u003Ca href=\u0022http:\/\/www.mitpressjournals.org\/doi\/abs\/10.1162\/NECO_a_00769#.VeIchPmq\u0022\u003Efound here\u003C\/a\u003E\u0026nbsp;and in the October edition of \u003Cem\u003ENeural Computation\u003C\/em\u003E.\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EThis work is partially funded by the National Science Foundation (CCF-0915903 and CCF-1217793). Any conclusions expressed are those of the principal investigator and may not necessarily represent the official views of the funding organizations.\u003C\/em\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers at Georgia Tech discovered that humans can categorize data using less than 1 percent of the original information, and validated an algorithm to explain human learning -- a method that also can be used for machine learning, data analysis and computer vision.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Researchers find a general-purpose method to explain how the brain could possibly process huge amounts of data on the fly."}],"uid":"27490","created_gmt":"2015-12-15 11:38:02","changed_gmt":"2016-10-08 03:20:16","author":"Tara La 
Bouff","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-12-15T00:00:00-05:00","iso_date":"2015-12-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"478911":{"id":"478911","type":"image","title":"Vempala - human cognition abstracts","body":null,"created":"1450285200","gmt_created":"2015-12-16 17:00:00","changed":"1475895232","gmt_changed":"2016-10-08 02:53:52","alt":"Vempala - human cognition abstracts","file":{"fid":"204140","name":"screen_shot_2015-12-15_at_9.49.43_am.png","image_path":"\/sites\/default\/files\/images\/screen_shot_2015-12-15_at_9.49.43_am_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/screen_shot_2015-12-15_at_9.49.43_am_0.png","mime":"image\/png","size":200160,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/screen_shot_2015-12-15_at_9.49.43_am_0.png?itok=Gw_pfFI4"}},"478921":{"id":"478921","type":"image","title":"Vempala - human cognition samples","body":null,"created":"1450285200","gmt_created":"2015-12-16 17:00:00","changed":"1475895232","gmt_changed":"2016-10-08 02:53:52","alt":"Vempala - human cognition samples","file":{"fid":"204141","name":"screen_shot_2015-12-15_at_9.49.55_am.png","image_path":"\/sites\/default\/files\/images\/screen_shot_2015-12-15_at_9.49.55_am_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/screen_shot_2015-12-15_at_9.49.55_am_0.png","mime":"image\/png","size":155268,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/screen_shot_2015-12-15_at_9.49.55_am_0.png?itok=FJjrMS1F"}},"478931":{"id":"478931","type":"image","title":"Vempala - human cognition graphs","body":null,"created":"1450285200","gmt_created":"2015-12-16 17:00:00","changed":"1475895232","gmt_changed":"2016-10-08 02:53:52","alt":"Vempala - human cognition 
graphs","file":{"fid":"204142","name":"screen_shot_2015-12-15_at_9.51.10_am.png","image_path":"\/sites\/default\/files\/images\/screen_shot_2015-12-15_at_9.51.10_am_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/screen_shot_2015-12-15_at_9.51.10_am_0.png","mime":"image\/png","size":79740,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/screen_shot_2015-12-15_at_9.51.10_am_0.png?itok=kuEb31Z_"}}},"media_ids":["478911","478921","478931"],"groups":[{"id":"47223","name":"College of Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"5660","name":"algorithms"},{"id":"1912","name":"brain"},{"id":"5637","name":"Computational"},{"id":"91641","name":"human cognition"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETara La Bouff, 404.769.5408\u003C\/p\u003E","format":"limited_html"}],"email":["tlabouff@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}