{"661799":{"#nid":"661799","#data":{"type":"news","title":"Whole-brain Functional Imaging Takes New Leaps with Deep Learning ","body":[{"value":"\u003Cp\u003EImaging neural activity at high speeds, over long durations, and across large regions of the brain simultaneously is critical to understanding the brain\u0026rsquo;s underlying computations \u0026ndash; in normal function as well as in developmental and neurodegenerative diseases, according to Georgia Tech researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EModel organisms in neuroscience such as the round worm \u003Cem\u003EC. elegans\u003C\/em\u003E enable these kinds of studies because researchers can record the activity of their entire brains simultaneously with new microscopy techniques. However, collecting such data is difficult with commonly available setups due to several technical constraints.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut a \u003Ca href=\u0022https:\/\/www.nature.com\/articles\/s41467-022-32886-w\u0022 target=\u0022_blank\u0022\u003Enew study\u003C\/a\u003E published in \u003Cem\u003ENature Communications\u003C\/em\u003E shows that deep learning (a machine-learning technique) can overcome these technical constraints in whole-brain imaging, enabling experiments that were previously not possible. Furthermore, the method will allow any lab with a common microscopy setup to do whole-brain imaging, accelerating discovery.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe study\u0026rsquo;s authors \u0026ndash; Professor \u003Ca href=\u0022https:\/\/www.chbe.gatech.edu\/people\/hang-lu\u0022 target=\u0022_blank\u0022\u003EHang Lu\u003C\/a\u003E, Cecil J. \u0026ldquo;Pete\u0026rdquo; Silas Chair in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.chbe.gatech.edu\/\u0022\u003ESchool of Chemical and Biomolecular Engineering\u003C\/a\u003E (ChBE), and Shivesh Chaudhary, ChBE PhD 2022 \u0026ndash; are interested in uncovering the fundamental building blocks of intelligence. 
For this, they focus on the nervous system of \u003Cem\u003EC. elegans\u003C\/em\u003E, small and compact with only 302 neurons, yet capable of generating complex behaviors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETechnical Constraints\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhole-brain optical imaging methods allow researchers to record neuronal activity at single-cell resolution, providing unprecedented amounts of data. And yet, commonly available confocal microscopes cannot meet the technical constraints of whole-brain imaging, the researchers said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor example, the imaging must be performed at high speeds to capture the dynamics of neuron activity and to minimize motion artifacts. In addition, laser power must be minimized to prevent photobleaching of the fluorescent markers used to label neurons and to avoid phototoxicity to the animals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBecause of these constraints, the videos can be extremely noisy. As a result, several critical downstream processing tasks, such as cell detection and tracking, become extremely complicated, and the neuron activity extracted from these videos is of poor quality.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDeep Learning Strategies\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo overcome these challenges, the research team wondered whether they could use advances in deep learning to reduce the noise in images from lower-quality but experimentally accessible imaging techniques.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn principle, this can be done in two ways. 
First, using a very large collection of data, unsupervised methods (which do not require labeled examples) can be \u0026ldquo;taught\u0026rdquo; to learn features in the noisy images that can be turned into the signals of interest to researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESupervised deep learning, in comparison, would take advantage of knowing both what the signal-enhanced images should ideally look like and how the corresponding noisy images appear, instructing the machine what it\u0026rsquo;s supposed to see.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Supervised deep learning uses a smaller set of data to teach the machine what to look for, and with the right set of data, is really good at the task,\u0026rdquo; Lu explained.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut both approaches pose significant challenges for the whole-brain imaging problem \u0026ndash; the amount of training data required by unsupervised methods is insurmountably large, and collecting training datasets for traditional supervised methods is technically infeasible, because it is not possible to obtain high-signal (high-resolution) and noisy images with exact correspondence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDeveloping a New Framework\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo solve these technical challenges, the research team designed a new supervised deep learning method called Neuro-Imaging Denoising via Deep Learning (NIDDL).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NIDDL framework offers advantages over previous work. The method simplifies the training data collection strategy because the network can be trained on still images that are consistent with the neural activity captured on video.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo explain, Professor Lu uses the analogy of processing a blurry video of someone playing tennis. 
Traditional algorithms would require exactly matching shots for comparison (one blurry, one not), but that is not possible because the camera cannot provide both images at once. The tennis player would have moved by the time of the second photo, no matter how fast the exposure. But if the tennis player held a pose, you could take both images and use them to train the algorithm.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith \u003Cem\u003EC. elegans\u003C\/em\u003E in place of the tennis player, the researchers can use still shots of sedated organisms to train the deep learning algorithm, Lu said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThus, unlike in previous studies, ultrafast imaging rates are not required for training data collection. In addition, the NIDDL framework requires far less training data (approximately 500 pairs of images) than previous methods, which require 3,000 to 30,000 frames.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENIDDL can be trained in a supervised fashion across images of a variety of strains, labeling markers, and noise levels, making it more generalizable.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAdoptable Technology\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe manageable amount of data required for training could encourage labs to set up their own denoising pipelines using the framework, the researchers said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlso, more labs could adopt the NIDDL framework without having access to powerful graphics processing units (GPUs). 
NIDDL has been extensively optimized to achieve a 20 to 30 times smaller memory footprint and a three to four times faster inference time (the time needed for the model to make a prediction) compared to previous methods.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Labs could run experiments faster, easier, cheaper, and better,\u0026rdquo; Lu said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFuture Applications\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers believe that NIDDL can be applied to many neural imaging scenarios: experimentalists would only need to curate a small set of data specific to their experiments to deploy the algorithm to denoise their data.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor example, NIDDL would enable researchers to circumvent experimental bottlenecks to make faster recordings that resolve brain dynamics, longer recordings that capture a variety of animal behaviors, and recordings that cover larger brain areas.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These applications would push forward our fundamental understanding of how the brain works, and guide understanding of brain disease mechanisms and the discovery of therapeutics,\u0026rdquo; Chaudhary said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECITATION\u003C\/strong\u003E: \u003Ca href=\u0022https:\/\/www.nature.com\/articles\/s41467-022-32886-w\u0022 target=\u0022_blank\u0022\u003Ehttps:\/\/www.nature.com\/articles\/s41467-022-32886-w\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFUNDING\u003C\/strong\u003E: The authors acknowledge the funding support of the U.S. NIH (R01NS096581, R01MH130064, and R01NS115484) and the U.S. NSF (1764406 and 1707401) to H.L. Some nematode strains used in this work were provided by the Caenorhabditis Genetics Center (CGC), which is funded by the NIH (P40 OD010440), National Center for Research Resources, and the International \u003Cem\u003EC. 
elegans\u003C\/em\u003E Knockout Consortium. This research was supported in part through research cyberinfrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at the Georgia Institute of Technology, Atlanta, Georgia, USA.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EAny opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the sponsoring agency.\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A deep learning-based supervised denoising algorithm enables high-signal neuron activity recovery from noisy but easy to acquire videos."}],"uid":"27271","created_gmt":"2022-10-03 22:25:22","changed_gmt":"2022-10-04 13:18:21","author":"Brad Dixon","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-10-03T00:00:00-04:00","iso_date":"2022-10-03T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"661798":{"id":"661798","type":"image","title":"NIDDL Technology","body":null,"created":"1664835715","gmt_created":"2022-10-03 22:21:55","changed":"1664835715","gmt_changed":"2022-10-03 22:21:55","alt":"Left: Noisy images showing neuronal structures. 
Right: NIDDL Deep Denoised image of neuronal structures.","file":{"fid":"250681","name":"neurite.png","image_path":"\/sites\/default\/files\/images\/neurite.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/neurite.png","mime":"image\/png","size":320966,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/neurite.png?itok=YFxCkyya"}},"661800":{"id":"661800","type":"image","title":"NIDDL Technology","body":null,"created":"1664836072","gmt_created":"2022-10-03 22:27:52","changed":"1664836072","gmt_changed":"2022-10-03 22:27:52","alt":"Left: Activity traces of neurons from noisy videos, with noise masking real activities of interest. Right: Activity traces of neurons from NIDDL Deep Denoised videos unmasking the activities of interest.","file":{"fid":"250682","name":"niddl2.png","image_path":"\/sites\/default\/files\/images\/niddl2.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/niddl2.png","mime":"image\/png","size":621312,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/niddl2.png?itok=vuBAOkY0"}}},"media_ids":["661798","661800"],"groups":[{"id":"1183","name":"Home"},{"id":"1188","name":"Research Horizons"}],"categories":[],"keywords":[{"id":"68361","name":"brain imaging"},{"id":"191375","name":"neuro activity"},{"id":"11638","name":"C. 
elegans"},{"id":"109581","name":"deep learning"},{"id":"9167","name":"machine learning"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39441","name":"Bioengineering and Bioscience"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBrad Dixon, \u003Ca href=\u0022mailto:braddixon@gatech.edu\u0022\u003Ebraddixon@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["braddixon@gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}