{"663767":{"#nid":"663767","#data":{"type":"news","title":"ECE Research Group \u0026 Lab Spotlight: OLIVES","body":[{"value":"\u003Cp\u003EThe human brain is made up of around 100 billion neurons, meaning it can process complex, multi-parallel information at a tantalizing speed. Through the sense of sight alone, humans can gather a treasure trove of information in the blink of an eye and apply that information to make split-second decisions. With \u0026lsquo;smart\u0026rsquo; technology becoming more and more ingrained in everyday life and further trusted to keep humans safe, it is crucial for machines to have human-like vision and rapid information processing capabilities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor Ghassan AlRegib\u0026rsquo;s OLIVES (Omni Lab for Intelligent Visual Engineering and Science) research group in the Georgia Tech School for Electrical and Computer Engineering (ECE) is at the forefront of today\u0026rsquo;s computer vision and visual machine learning research and its deployment in everyday life applications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re providing machines with the robust algorithms and datasets they need to better see, learn from, and respond to the world around them,\u0026rdquo; said AlRegib.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorking in the areas of autonomous driving, machine learning in the wild (outside a lab setting), subsurface interpretation, and healthcare, the team is advancing the field with safe, reliable, and predictive tools. Read about a few exciting research projects and themes taking place in OLIVES below.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAutonomous Vehicles (AVs) and Smart Mobility in Any Condition\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As researchers endeavor for higher levels of autonomy in technology, safety-critical functions demand powerful algorithms,\u0026rdquo; said AlRegib. 
\u0026ldquo;Without understanding data, machine learning is\u0026nbsp;lacking. Autonomous driving may be the most obvious current example of this in practice.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor autonomous driving to work, a vehicle\u0026rsquo;s cameras must supply the car\u0026rsquo;s computers with information to make incredibly fast decisions. However, most existing datasets and benchmarks contain little data captured under challenging environmental conditions. For example, think of how a human driver adapts to driving in rain or sun glare.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo overcome the shortcomings in existing research, AlRegib\u0026rsquo;s team introduced the most comprehensive\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1908.11262.pdf\u0022\u003E\u0026nbsp;traffic sign detection dataset\u003C\/a\u003E\u0026nbsp;ever published that contains controlled challenging conditions. It facilitates the building of deep learning models \u0026mdash; a form of machine learning with algorithms inspired by the structure and function of the brain \u0026mdash; for solving various computer vision tasks that help the car make the best and safest decision possible, no matter the weather. 
Currently, the group is working with a leader in AVs to publish a new comprehensive dataset.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWATCH: \u003C\/strong\u003E\u003Ca href=\u0022https:\/\/mediaspace.gatech.edu\/media\/Robust%20Autonomous%20Driving%20Under%20Challenging%20Conditions%20DEMO\/1_n1c8i26o\u0022\u003ERobust Autonomous Driving Under Challenging Conditions DEMO\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EManufacturing Trust in Deployed Intelligent Systems\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Trust in a system translates to its ability to explain, generalize, and become reliable,\u0026rdquo;\u0026nbsp;said\u0026nbsp;Mohit Prabhushankar, a postdoctoral fellow in the OLIVES Lab. \u0026ldquo;Systems must know what they don\u0026rsquo;t know, and more importantly, when they don\u0026rsquo;t know.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn this context, explainability is the act of involving humans in a system\u0026rsquo;s decision-making process by contextually providing reasons for its internal processes. Reliability then requires systems to perform dependably under all conditions of deployment, including when they encounter aberrant events they have not seen before. 
This is where generalizability \u0026ndash; the ability of a trained model to classify or forecast unseen data \u0026ndash; is used to make the best decision possible.\u0026nbsp;Equally important is the uncertainty score associated with that decision.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe OLIVES group\u0026rsquo;s state-of-the-art research introduces a two-stage decision-making process coined\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2209.08425.pdf\u0022\u003E\u003Cem\u003EIntrospective Learning\u003C\/em\u003E\u003C\/a\u003E.\u0026nbsp;The first stage is a fast, instinctive process that can be provided by any existing system that makes a decision.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe second stage is a slower reflection stage in which the system is asked to reflect on its decision by considering and evaluating all available choices. The team demonstrates the value of such processes in generalizability and uncertainty estimation with applications in robust recognition and prediction confidence calibration. This concept is\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2206.08255.pdf\u0022\u003Efurther expanded\u003C\/a\u003E\u0026nbsp;to detect adversarial and out-of-distribution data during deployment.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EContextually relevant explanations are tackled in\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2202.11838.pdf\u0022\u003Epublished findings\u003C\/a\u003E\u0026nbsp;in the Signal Processing Magazine that provide a user-centric policy for intelligent systems. 
The team\u0026rsquo;s findings have been demonstrated across several applications ranging from AVs to medical imaging, entertainment systems, and subsurface imaging.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Interpretability and trust are precursors to effectively deploying intelligent systems in everyday life applications,\u0026rdquo; said AlRegib.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWATCH:\u003C\/strong\u003E \u003Ca href=\u0022https:\/\/mediaspace.gatech.edu\/media\/CURE-OR%3A%20Challenging%20Unreal%20and%20Real%20Environment%20for%20Object%20Recoginition%20DEMO\/1_eraxkrla\u0022\u003EChallenging Unreal and Real Environment for Object Recognition DEMO\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHuman-In-the-Loop Solutions to Big Data\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe amount of data that researchers can capture and generate in today\u0026rsquo;s world can be put to work in nearly every field imaginable. For example, the OLIVES lab was the first to introduce modern visual machine learning to seismic interpretation\u0026nbsp;\u0026mdash; the analysis of data used to study earthquakes and vibrations of the earth.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn recent years, AlRegib helped establish a new consortium called Machine Learning for Seismic (ML4Seismic), designed to foster research partnerships that drive innovations in\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1812.08756.pdf\u0022\u003E\u0026nbsp;artificial-intelligence assisted seismic imaging\u003C\/a\u003E. ML4Seismic also provides smart analyses of the Earth\u0026rsquo;s subsurface for geothermal and oil and gas applications in the energy sector.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFurther, the OLIVES team\u0026rsquo;s work extends to healthcare, specifically ophthalmology. 
One of their first products is a\u0026nbsp;\u003Ca href=\u0022https:\/\/patentimages.storage.googleapis.com\/10\/cb\/4e\/525bd751f92ee0\/US11185224.pdf\u0022\u003Eportable eye exam device\u003C\/a\u003E\u0026nbsp;that can provide access to eyecare anytime and anywhere using a headset and cloud-based AI technology. AlRegib has partnered with the Retina Consultants of Texas to release a\u003Ca href=\u0022https:\/\/zenodo.org\/record\/6622145#.Ytg7XC-B30p\u0022\u003E\u0026nbsp;comprehensive ophthalmology dataset\u003C\/a\u003E\u0026nbsp;with multiple data modalities and has developed an\u003Ca href=\u0022https:\/\/ieeexplore.ieee.org\/stamp\/stamp.jsp?arnumber=8790783\u0026amp;tag=1\u0022\u003E\u0026nbsp;automated framework to detect\u003C\/a\u003E\u0026nbsp;relative afferent pupillary defect (RAPD) in eyes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In a way, our research comes full circle with our ophthalmology work. We can now utilize computer vision to literally benefit human vision,\u0026rdquo; said AlRegib.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn such applications, the domain expert \u0026mdash; a person with a strong theoretical foundation in the specific field for which the data was collected \u0026mdash; is at the center, and the expert\u0026rsquo;s interactions with the data and the decision-making process are instrumental. Hence, intelligent solutions must incorporate two-way communication between the domain expert and the decision-making system. 
The OLIVES team has introduced solutions to this through active learning in the fields of both\u0026nbsp;\u003Ca href=\u0022https:\/\/ieeexplore.ieee.org\/document\/9506657\u0022\u003Eseismic interpretation\u003C\/a\u003E\u0026nbsp;and\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2206.10120.pdf\u0022\u003Eophthalmology healthcare\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;By incorporating humans and their interactions with Big Data, we are able to effectively analyze and better understand Earth\u0026rsquo;s subsurface structures, provide personalized healthcare, and\u0026nbsp;capitalize on the greatest value domain experts provide, which is their knowledge and experience,\u0026rdquo; said\u0026nbsp;AlRegib.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWATCH:\u003C\/strong\u003E \u003Ca href=\u0022http:\/\/mediaspace.gatech.edu\/media\/Limitless%20Eyecare%20through%20Artificial%20Intelligence%20%26%20Imaging\/1_i5aygis8\u0022\u003ELimitless Eyecare through Artificial Intelligence \u0026amp; Imaging\u0026nbsp;DEMO\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EWant to learn more? 
Check out the\u003C\/em\u003E\u003Ca href=\u0022https:\/\/alregib.ece.gatech.edu\/\u0022\u003E\u003Cem\u003E\u0026nbsp;OLIVES website\u003C\/em\u003E\u003C\/a\u003E\u003Cem\u003E.\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Ghassan AlRegib\u0027s Omni Lab for Intelligent Visual Engineering and Science (OLIVES) is at the forefront of today\u2019s computer vision and visual machine learning research."}],"uid":"36172","created_gmt":"2022-12-09 02:15:04","changed_gmt":"2022-12-15 17:39:16","author":"dwatson71","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-12-08T00:00:00-05:00","iso_date":"2022-12-08T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"663761":{"id":"663761","type":"image","title":"ECE Lab Profile OLIVES","body":null,"created":"1670547837","gmt_created":"2022-12-09 01:03:57","changed":"1670547837","gmt_changed":"2022-12-09 01:03:57","alt":"","file":{"fid":"251246","name":"Research Group \u0026 Lab Spotlight_OLIVES Ghassan_Header.jpg","image_path":"\/sites\/default\/files\/images\/Research%20Group%20%26%20Lab%20Spotlight_OLIVES%20Ghassan_Header.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Research%20Group%20%26%20Lab%20Spotlight_OLIVES%20Ghassan_Header.jpg","mime":"image\/jpeg","size":461547,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Research%20Group%20%26%20Lab%20Spotlight_OLIVES%20Ghassan_Header.jpg?itok=f0AYyu2a"}},"663766":{"id":"663766","type":"image","title":"Ghassan and Mohit","body":null,"created":"1670550850","gmt_created":"2022-12-09 01:54:10","changed":"1670550850","gmt_changed":"2022-12-09 01:54:10","alt":"ECE Professor Ghassan AlRegib and postdoctoral research fellow Mohit Prabhushankar discussing the group\u2019s latest machine learning work in autonomous vehicles (AVs). 
AVs are one of the most\u00a0obvious examples of OLIVES research in everyday life.\u00a0","file":{"fid":"251250","name":"Ghassan \u0026 Mohit.jpg","image_path":"\/sites\/default\/files\/images\/Ghassan%20%26%20Mohit.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Ghassan%20%26%20Mohit.jpg","mime":"image\/jpeg","size":1074958,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Ghassan%20%26%20Mohit.jpg?itok=2reml-b6"}},"663764":{"id":"663764","type":"image","title":"OLIVES CURE-OR","body":null,"created":"1670550163","gmt_created":"2022-12-09 01:42:43","changed":"1670550163","gmt_changed":"2022-12-09 01:42:43","alt":"","file":{"fid":"251248","name":"CURE-OR.png","image_path":"\/sites\/default\/files\/images\/CURE-OR.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/CURE-OR.png","mime":"image\/png","size":1067635,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/CURE-OR.png?itok=0ZaWa8C2"}},"663765":{"id":"663765","type":"image","title":"OLIVES AV","body":null,"created":"1670550259","gmt_created":"2022-12-09 01:44:19","changed":"1670550259","gmt_changed":"2022-12-09 01:44:19","alt":"Example of the OLIVES group\u2019s autonomous vehicle dataset in action.","file":{"fid":"251249","name":"OLIVES AV.png","image_path":"\/sites\/default\/files\/images\/OLIVES%20AV.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/OLIVES%20AV.png","mime":"image\/png","size":1063707,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/OLIVES%20AV.png?itok=2nWQenjE"}},"663763":{"id":"663763","type":"image","title":"OLIVES Group Photo","body":null,"created":"1670550025","gmt_created":"2022-12-09 01:40:25","changed":"1671121313","gmt_changed":"2022-12-15 16:21:53","alt":"Members of the OLIVES team in the lab (L-R): Ghazal Kaviani (Ph.D. 
candidate), Jinsol Lee (Ph.D. candidate), unknown, Ryan Benkert (Ph.D. candidate), Chen Zhou (Ph.D. candidate), Zoe Fowler (Ph.D. candidate), Yash-yee Logan (Ph.D. candidate), Professor Ghassan AlRegib, Mohit Prabhushankar (postdoctoral research fellow), Kiran Kokilepersaud (Ph.D. candidate), and Ahmad Mustafa (Ph.D. candidate).","file":{"fid":"251247","name":"NEW OLIVES LAB PHOTO.jpg","image_path":"\/sites\/default\/files\/images\/NEW%20OLIVES%20LAB%20PHOTO.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/NEW%20OLIVES%20LAB%20PHOTO.jpg","mime":"image\/jpeg","size":1338246,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/NEW%20OLIVES%20LAB%20PHOTO.jpg?itok=rr34KkWk"}}},"media_ids":["663761","663766","663764","663765","663763"],"related_links":[{"url":"https:\/\/ghassanalregib.info","title":"OLIVES (Omni Lab for Intelligent Visual Engineering and Science) "},{"url":"https:\/\/www.ece.gatech.edu\/faculty-staff-directory\/ghassan-alregib","title":"Ghassan AlRegib"}],"groups":[{"id":"1255","name":"School of Electrical and Computer Engineering"}],"categories":[{"id":"129","name":"Institute and Campus"},{"id":"134","name":"Student and Faculty"},{"id":"8862","name":"Student Research"},{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"145","name":"Engineering"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"44681","name":"Ghassan AlRegib"},{"id":"182572","name":"OLIVES"},{"id":"178069","name":"Omni Lab for Intelligent Visual Engineering and Science"},{"id":"191729","name":"School for Electrical and Computer Engineering"},{"id":"2435","name":"ECE"},{"id":"191730","name":"Intelligent Computer Vision"},{"id":"191731","name":"Machine Learning for Seismic"},{"id":"174666","name":"autonomous driving"},{"id":"191732","name":"machine learning in the wild"},{"id":"191733","name":"subsurface 
interpretation"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39451","name":"Electronics and Nanotechnology"},{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"},{"id":"39541","name":"Systems"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Cstrong\u003EDan Watson\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:dwatson@ece.gatech.edu\u0022\u003Edwatson@ece.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["dwatson@ece.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}