<node id="663767">
  <nid>663767</nid>
  <type>news</type>
  <uid>
    <user id="36172"><![CDATA[36172]]></user>
  </uid>
  <created>1670552104</created>
  <changed>1671125956</changed>
  <title><![CDATA[ECE Research Group & Lab Spotlight: OLIVES]]></title>
  <body><![CDATA[<p>The human brain is made up of around 100 billion neurons, enabling it to process complex information in parallel at remarkable speed. Through the sense of sight alone, humans can gather a treasure trove of information in the blink of an eye and apply that information to make split-second decisions. With &lsquo;smart&rsquo; technology becoming more and more ingrained in everyday life and increasingly trusted to keep humans safe, it is crucial for machines to have human-like vision and rapid information processing capabilities.</p>

<p>Professor Ghassan AlRegib&rsquo;s OLIVES (Omni Lab for Intelligent Visual Engineering and Science) research group in the Georgia Tech School of Electrical and Computer Engineering (ECE) is at the forefront of today&rsquo;s computer vision and visual machine learning research and its deployment in everyday life applications.</p>

<p>&ldquo;We&rsquo;re providing machines with the robust algorithms and datasets they need to better see, learn from, and respond to the world around them,&rdquo; said AlRegib.</p>

<p>Working in the areas of autonomous driving, machine learning in the wild (outside a lab setting), subsurface interpretation, and healthcare, the team is advancing the field with safe, reliable, and predictive tools. Read about a few exciting research projects and themes taking place in OLIVES below.</p>

<p><strong>Autonomous Vehicles (AVs) and Smart Mobility in Any Condition</strong></p>

<p>&ldquo;As researchers endeavor for higher levels of autonomy in technology, safety-critical functions demand powerful algorithms,&rdquo; said AlRegib. &ldquo;Without understanding data, machine learning is&nbsp;lacking. Autonomous driving may be the most obvious current example of this in practice.&rdquo;</p>

<p>For autonomous driving to work, a vehicle&rsquo;s cameras must supply the car&rsquo;s computers with the information needed to make incredibly fast decisions. However, most existing datasets and benchmarks contain little data captured in challenging environmental conditions. For example, think of how a human driver adapts to rain or sun glare.</p>

<p>To overcome the shortcomings in existing research, AlRegib&rsquo;s team introduced the most comprehensive<a href="https://arxiv.org/pdf/1908.11262.pdf">&nbsp;traffic sign detection dataset</a>&nbsp;ever published that contains controlled challenging conditions. It facilitates the building of deep learning models &mdash; a form of machine learning with algorithms inspired by the structure and function of the brain &mdash; for solving various computer vision tasks that help the car make the best and safest decision possible no matter the weather. Currently, the group is working with a leader in AVs to publish a new comprehensive dataset.</p>

<p><strong>WATCH: </strong><a href="https://mediaspace.gatech.edu/media/Robust%20Autonomous%20Driving%20Under%20Challenging%20Conditions%20DEMO/1_n1c8i26o">Robust Autonomous Driving Under Challenging Conditions DEMO</a></p>

<p><strong>Manufacturing Trust in Deployed Intelligent Systems&nbsp;</strong></p>

<p>&ldquo;Trust in a system translates to its ability to explain, generalize, and become reliable,&quot;&nbsp;said&nbsp;Mohit Prabhushankar, a postdoctoral fellow in the OLIVES Lab. &quot;Systems must know what they don&rsquo;t know, and more importantly, when they don&#39;t know.&quot;</p>

<p>In this context, explainability is the act of involving humans in a system&rsquo;s decision-making process by contextually providing reasons for its internal processes. Reliability requires systems to perform dependably under all conditions of deployment, including when they encounter aberrant events not seen previously. This is where generalizability &ndash; the ability of a trained model to classify or forecast unseen data &ndash; is used to make the best decision possible.&nbsp;Equally important is the uncertainty score associated with the decision.</p>

<p>The OLIVES group&rsquo;s state-of-the-art research introduces a two-stage decision making process that is coined as&nbsp;<a href="https://arxiv.org/pdf/2209.08425.pdf"><em>Introspective Learning</em></a>.&nbsp;The first stage is a fast and instinctive process and can be available in any existing system that makes a decision.</p>

<p>The second stage is a slower reflection stage where the system is asked to reflect on its decision by considering and evaluating all available choices. The team demonstrates the value of such processes in generalizability and uncertainty estimation with applications in robust recognition and prediction confidence calibration. This concept is&nbsp;<a href="https://arxiv.org/pdf/2206.08255.pdf">further expanded</a>&nbsp;to detect adversarial and out-of-distribution data during deployment.&nbsp;</p>

<p>Contextually relevant explanations are addressed in&nbsp;<a href="https://arxiv.org/pdf/2202.11838.pdf">published findings</a>&nbsp;in the IEEE Signal Processing Magazine, which provide a user-centric policy for intelligent systems. The team&rsquo;s findings have been demonstrated across several applications ranging from AVs to medical imaging, entertainment systems, and subsurface imaging.&nbsp;</p>

<p>&ldquo;Interpretability and trust are precursors to effectively deploy intelligent systems in everyday life applications,&rdquo; said AlRegib.</p>

<p><strong>WATCH:</strong> <a href="https://mediaspace.gatech.edu/media/CURE-OR%3A%20Challenging%20Unreal%20and%20Real%20Environment%20for%20Object%20Recoginition%20DEMO/1_eraxkrla">Challenging Unreal and Real Environment for Object Recognition DEMO</a></p>

<p><strong>Human-In-the-Loop Solutions to Big Data</strong></p>

<p>The amount of data that researchers can capture and generate in today&rsquo;s world can be put to work in nearly every field imaginable. For example, the OLIVES lab was the first to introduce modern visual machine learning to seismic interpretation&nbsp;&mdash; data used to study earthquakes or vibrations of the earth.</p>

<p>In recent years, AlRegib helped establish a new consortium called Machine Learning for Seismic (ML4Seismic), designed to foster research partnerships that drive innovations in<a href="https://arxiv.org/pdf/1812.08756.pdf">&nbsp;artificial-intelligence-assisted seismic imaging</a>. ML4Seismic also provides smart analyses of the Earth&rsquo;s subsurface for geothermal and oil and gas applications in the energy sector.</p>

<p>Further, the OLIVES team&rsquo;s work extends to healthcare, specifically ophthalmology. One of their first products is a&nbsp;<a href="https://patentimages.storage.googleapis.com/10/cb/4e/525bd751f92ee0/US11185224.pdf">portable eye exam device</a>&nbsp;that can provide access to eyecare anytime and anywhere using a headset and a cloud-based AI technology. AlRegib has partnered with the Retina Consultants of Texas to release a<a href="https://zenodo.org/record/6622145#.Ytg7XC-B30p">&nbsp;comprehensive ophthalmology dataset</a>&nbsp;with multiple data modalities and has developed an<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8790783&amp;tag=1">&nbsp;automated framework to detect</a>&nbsp;relative afferent pupillary defect (RAPD) in eyes.</p>

<p>&ldquo;In a way our research comes full circle with our ophthalmology work. We can now utilize computer vision to literally benefit human vision,&rdquo; said AlRegib.</p>

<p>In such applications, the domain expert &mdash; a person with a strong theoretical foundation in the specific field for which the data was collected &mdash; is at the center, and the expert&rsquo;s interactions with the data and the decision-making process are instrumental. Hence, intelligent solutions must incorporate a two-way communication between the domain expert and the decision-making system. The OLIVES team has introduced solutions to this through active learning in both the fields of&nbsp;<a href="https://ieeexplore.ieee.org/document/9506657">seismic interpretation</a>&nbsp;and&nbsp;<a href="https://arxiv.org/pdf/2206.10120.pdf">ophthalmology healthcare</a>.</p>

<p>&ldquo;By incorporating humans and their interactions with Big Data, we are able to effectively analyze and better understand Earth&rsquo;s subsurface structures, provide personalized healthcare, and&nbsp;capitalize on the greatest value domain experts provide, which is their knowledge and experience,&rdquo; said&nbsp;AlRegib.</p>

<p><strong>WATCH:</strong> <a href="http://mediaspace.gatech.edu/media/Limitless%20Eyecare%20through%20Artificial%20Intelligence%20%26%20Imaging/1_i5aygis8">Limitless Eyecare through Artificial Intelligence &amp; Imaging&nbsp;DEMO</a></p>

<p><em>Want to learn more? Check out the</em><a href="https://alregib.ece.gatech.edu/"><em>&nbsp;OLIVES website</em></a><em>.</em></p>
]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2022-12-08T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Ghassan AlRegib's Omni Lab for Intelligent Visual Engineering and Science (OLIVES) is at the forefront of today’s computer vision and visual machine learning research.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="663761">
            <nid>663761</nid>
            <type>image</type>
            <title><![CDATA[ECE Lab Profile OLIVES]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>251246</fid>
                  <filename><![CDATA[Research Group &amp; Lab Spotlight_OLIVES Ghassan_Header.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/Research%20Group%20%26%20Lab%20Spotlight_OLIVES%20Ghassan_Header.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Research%20Group%20%26%20Lab%20Spotlight_OLIVES%20Ghassan_Header.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
          <item>
        <nid>
          <node id="663766">
            <nid>663766</nid>
            <type>image</type>
            <title><![CDATA[Ghassan and Mohit]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>251250</fid>
                  <filename><![CDATA[Ghassan &amp; Mohit.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/Ghassan%20%26%20Mohit.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Ghassan%20%26%20Mohit.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[ECE Professor Ghassan AlRegib and postdoctoral research fellow Mohit Prabhushankar discussing the group’s latest machine learning work in autonomous vehicles (AVs). AVs are one of the most obvious examples of OLIVES research in everyday life.]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
          <item>
        <nid>
          <node id="663764">
            <nid>663764</nid>
            <type>image</type>
            <title><![CDATA[OLIVES CURE-OR]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>251248</fid>
                  <filename><![CDATA[CURE-OR.png]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/CURE-OR.png]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/CURE-OR.png]]></file_full_path>
                  <filemime>image/png</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
          <item>
        <nid>
          <node id="663765">
            <nid>663765</nid>
            <type>image</type>
            <title><![CDATA[OLIVES AV]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>251249</fid>
                  <filename><![CDATA[OLIVES AV.png]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/OLIVES%20AV.png]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/OLIVES%20AV.png]]></file_full_path>
                  <filemime>image/png</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Example of the OLIVES group’s autonomous vehicle dataset in action.]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
          <item>
        <nid>
          <node id="663763">
            <nid>663763</nid>
            <type>image</type>
            <title><![CDATA[OLIVES Group Photo]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>251247</fid>
                  <filename><![CDATA[NEW OLIVES LAB PHOTO.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/NEW%20OLIVES%20LAB%20PHOTO.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/NEW%20OLIVES%20LAB%20PHOTO.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Members of the OLIVES team in the lab (L-R): Ghazal Kaviani (Ph.D. candidate), Jinsol Lee (Ph.D. candidate), unknown, Ryan Benkert (Ph.D. candidate), Chen Zhou (Ph.D. candidate), Zoe Fowler (Ph.D. candidate), Yash-yee Logan (Ph.D. candidate), Professor Ghassan AlRegib, Mohit Prabhushankar (postdoctoral research fellow), Kiran Kokilepersaud (Ph.D. candidate), and Ahmad Mustafa (Ph.D. candidate).]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[dwatson@ece.gatech.edu]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p><strong>Dan Watson</strong><br />
<a href="mailto:dwatson@ece.gatech.edu">dwatson@ece.gatech.edu</a></p>
]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <og_groups>
          <item>1255</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Institute and Campus]]>
      </item>
          <item>
        <![CDATA[Student and Faculty]]>
      </item>
          <item>
        <![CDATA[Student Research]]>
      </item>
          <item>
        <![CDATA[Research]]>
      </item>
          <item>
        <![CDATA[Computer Science/Information Technology and Security]]>
      </item>
          <item>
        <![CDATA[Engineering]]>
      </item>
          <item>
        <![CDATA[Robotics]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>129</tid>
        <value><![CDATA[Institute and Campus]]></value>
      </item>
          <item>
        <tid>134</tid>
        <value><![CDATA[Student and Faculty]]></value>
      </item>
          <item>
        <tid>8862</tid>
        <value><![CDATA[Student Research]]></value>
      </item>
          <item>
        <tid>135</tid>
        <value><![CDATA[Research]]></value>
      </item>
          <item>
        <tid>153</tid>
        <value><![CDATA[Computer Science/Information Technology and Security]]></value>
      </item>
          <item>
        <tid>145</tid>
        <value><![CDATA[Engineering]]></value>
      </item>
          <item>
        <tid>152</tid>
        <value><![CDATA[Robotics]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>
          <term tid="39451"><![CDATA[Electronics and Nanotechnology]]></term>
          <term tid="39501"><![CDATA[People and Technology]]></term>
          <term tid="39521"><![CDATA[Robotics]]></term>
          <term tid="39541"><![CDATA[Systems]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <links_related>
          <link>
      <url>https://ghassanalregib.info</url>
      <title></title>
      </link>
          <link>
      <url>https://www.ece.gatech.edu/faculty-staff-directory/ghassan-alregib</url>
      <title></title>
      </link>
      </links_related>
  <files>
      </files>
  <og_groups_both>
          <item><![CDATA[School of Electrical and Computer Engineering]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>44681</tid>
        <value><![CDATA[Ghassan AlRegib]]></value>
      </item>
          <item>
        <tid>182572</tid>
        <value><![CDATA[OLIVES]]></value>
      </item>
          <item>
        <tid>178069</tid>
        <value><![CDATA[Omni Lab for Intelligent Visual Engineering and Science]]></value>
      </item>
          <item>
        <tid>191729</tid>
        <value><![CDATA[School for Electrical and Computer Engineering]]></value>
      </item>
          <item>
        <tid>2435</tid>
        <value><![CDATA[ECE]]></value>
      </item>
          <item>
        <tid>191730</tid>
        <value><![CDATA[Intelligent Computer Vision]]></value>
      </item>
          <item>
        <tid>191731</tid>
        <value><![CDATA[Machine Learning for Seismic]]></value>
      </item>
          <item>
        <tid>174666</tid>
        <value><![CDATA[autonomous driving]]></value>
      </item>
          <item>
        <tid>191732</tid>
        <value><![CDATA[machine learning in the wild]]></value>
      </item>
          <item>
        <tid>191733</tid>
        <value><![CDATA[subsurface interpretation]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
