$25 Million Project Will Advance DNA-Based Archival Data Storage

The demand for archival data storage has been skyrocketing, and if a new research initiative reaches its goals, that need could be met by taking advantage of an efficient and robust information storage medium that has proven itself through the centuries: the biopolymer DNA.

The Intelligence Advanced Research Projects Activity’s (IARPA) Molecular Information Storage (MIST) program has awarded a multi-phase contract worth up to $25 million to develop scalable DNA-based molecular storage techniques. The goal of the project, which will be led by the Georgia Tech Research Institute (GTRI), is to use DNA as the basis for deployable storage technologies that can eventually scale into the exabyte regime and beyond with reduced physical footprint, power and cost requirements relative to conventional storage technologies. 

The technology already exists for writing information into DNA — which also encodes the genetic blueprint for living organisms — and reading it back out, but significant advances will be needed to make it commercially practical and cost-competitive with established magnetic tape and optical disk storage. While current archival media have limited lifetimes, information stored in DNA could last for hundreds of years.

“The goal is to significantly reduce the size, weight and power required for archival data storage,” said Alexa Harter, director of GTRI’s Cybersecurity, Information Protection, and Hardware Evaluation Research (CIPHER) Laboratory. “What would take acres in a data farm today could be kept in a device the size of a tabletop. We want to significantly improve all kinds of metrics for long-term data storage.”

The Scalable Molecular Archival Software and Hardware (SMASH) project resulted from a proposal prepared by GTRI, San Francisco-based Twist Bioscience, San Diego-based Roswell Biotechnologies, and the University of Washington in collaboration with Microsoft. 

In the project plans, Twist will engineer a DNA synthesis platform on silicon that “writes” the DNA strands carrying the data. Roswell will provide the DNA sequencing, or “reading,” technology, and the University of Washington – in collaboration with Microsoft – will bring system architecture, data analysis and coding expertise to the project. At Georgia Tech, the project will involve fabrication facilities at the Institute for Electronics and Nanotechnology, along with researchers in specialties such as chemistry and information theory drawn from four of GTRI’s eight laboratories.

“The reason people are looking at DNA for storage is that it has evolved over the ages as a very compact and reliable means of information storage,” said Nicholas Guise, a GTRI senior research scientist. “It’s so compact that a practical DNA archive could store an exabyte of data—equivalent to a million terabyte hard drives—in a volume about the size of a sugar cube. Scientists have been able to read DNA from animals that died centuries ago, so the data lasts essentially forever under the right conditions.”

Technology for encoding and decoding DNA works at small scales today, but to be useful for commercial archival purposes, researchers will have to scale up the production of synthetic DNA, reliably connect it to established computing systems and improve the speed of the data writing and reading process. The project goal would be to encode and decode terabytes of data in a day at costs and rates more than 100 times better than current technologies.
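As a minimal illustration of the underlying idea (this is not SMASH's actual coding scheme, which is not described in the article), binary data can be mapped two bits at a time onto DNA's four bases; real systems layer addressing and error correction on top of a mapping like this:

```python
# Minimal illustration (not the project's actual encoding): map every 2 bits
# of data onto one of DNA's four bases, and decode back. Production schemes
# add addressing and error-correcting codes on top of a mapping like this.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {b: bits for bits, b in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    bits = "".join(BITS_FOR_BASE[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"GT")
print(strand)          # CACTCCCA
print(decode(strand))  # b'GT'
```

At two bits per base, the density ceiling is what makes the sugar-cube exabyte plausible; the engineering challenge the project tackles is doing this writing and reading at scale and speed.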

DNA data storage won’t initially replace server farms for information that must be accessed quickly and often. Because of the time required for reading and decoding, the technique would be useful for information that must be kept indefinitely, but accessed infrequently.  

Part of the technical challenge is interfacing the DNA with standard CMOS electronic technologies. The researchers plan to build hybrid chips in which the DNA grows above layers containing the electronics. The overall project will leverage the efficiencies of current semiconductor technologies, said Brooke Beckert, a GTRI research engineer. 

“We’ll be working with commercial foundries, so when we get the processing right, it should be much easier to transition the technology over to them,” she said. “Connecting to the existing technology infrastructure is a critical part of this project, but we’ll have to custom-make most of the components in the first stage.”

Among the challenges will be managing the tradeoffs between speed and error rates, said Guise. “The issue is how far down we can scale this without introducing too many errors,” he said. “The basic synthesis is proven at a scale of hundreds of microns. We want to shrink that by a factor of 100, which leads us to worry about such issues as crosstalk between different DNA strands in adjacent locations on the chips.”

Current technology uses modified inkjet printing to produce the DNA strands, but the SMASH project plans to grow the biopolymer more rapidly and in larger quantities using parallelized synthesis on the hybrid chips.

To achieve the required major advances in reading cost and speed, the program will rely on the molecular electronic DNA reader chips under development at Roswell. The data will be read from DNA strands using a molecular electronic sensor array chip, on which single molecules are drawn through nanoscale current meters that measure the electrical signature of each letter in the sequence. For biomedical applications, the sequencing industry has been focused on a goal of achieving a $1,000 human genome. The DNA reading goals of this program amount to delivering a $10 genome, and that will require a major technology disruption.

The researchers acknowledge the challenges ahead in bringing their devices to commercial scale.

“We don’t see any killers ahead for this technology,” said Adam Meier, a GTRI senior research scientist. “There is a lot of emerging technology and doing this commercially will require many orders of magnitude improvement. Magnetic tape for archival storage has been improving steadily for 60 years, and this investment from IARPA will power the advancements needed to make DNA storage competitive with that."

Research News
Georgia Institute of Technology
177 North Avenue
Atlanta, Georgia  30332-0181  USA

Media Relations Contact: John Toon (404-894-6986) (jtoon@gatech.edu).

Writer: John Toon

Cyber Hygiene Keeps Your Email Safe from Virtual Viruses

The email is from someone you think is a co-worker in another department at your company who, like you, has suddenly found herself teleworking from home without the usual group of colleagues to help review things. She’s asking your advice on a document attached to the email.

Being the helpful person that you are, you should just open up that file and give it a look, right? Wrong, says Brendan Saltaformaggio, a cybersecurity expert and assistant professor in the Georgia Tech School of Electrical and Computer Engineering.

Cyber criminals are taking advantage of the fact that more of us are working from home, away from the online safeguards we may have at work. What appears to be an email from a colleague in another department could be an attack that tricks recipients into opening attachments and silently installs malware.

“People will be doing a lot more over email when they work from home,” Saltaformaggio noted. “They will be corresponding more with co-workers, sending potentially sensitive documents and interacting with people they may not necessarily know using computer systems that may not have been intended for secure use. That increases risk.”

To stay safe from viruses and other malware, Saltaformaggio offers five tips for dealing with email while teleworking under these special conditions – and during the normal office conditions that we hope to resume soon.

“Teleworking can help companies get their staffs safely through this crisis, but we all need to be careful to practice good cyber hygiene, just as we are washing our hands to avoid an infection from viruses in the physical world,” Saltaformaggio said.


Considering Cybersecurity When Social Circles Share Digital Resources

Everyone who has ever shared an apartment in college has experienced some form of this challenge: You move in with a group of three other people. You think you set proper boundaries and rules for the house, but everyone swears they’re laid back and not too worried about any of the others getting out of line. After a while, everyone settles in, and next thing you know one roommate is taking a little too much advantage of your goodwill.

Their boyfriend starts showing up at the apartment all the time, whether you know he is coming or not. Sometimes, he’s even there when your roommate isn’t. He has a key to the front door, and helps himself to what’s in the fridge. All of a sudden, you essentially have a new roommate that you never approved and don’t know a whole lot about.

Sure, it’s annoying, but consider some of the bigger implications: Shared resources are becoming vulnerable to people outside of the intended group of users. Privacy is being compromised by one member of the group who is practicing poor security behavior. There is a disconnect amongst members of the group about what should and should not be acceptable. As the world becomes more computing intensive, this type of behavior has even larger implications in the context of technology.

New research by members of Georgia Tech’s School of Interactive Computing has found that existing technical controls for shared digital resources fall short in facilitating collaborative governance and decision making. The paper, which was accepted and awarded a Best Paper Honorable Mention at the 2020 CHI Conference on Human Factors in Computing Systems, examines why they fall short and offers guidance on ways to improve them.

“The idea here is that as computing infiltrates more and more of our social lives, we are lacking in ability for groups of people to come together and collaboratively think about the access control policies, the threat models we share, and decide together,” said Sauvik Das, an assistant professor in the school and the lead on the research. “What’s a fair way to control access to this resource?”

To examine these factors, Das and his team – which includes co-authors Hue Watson, Eyitemi Moju-Igbene, and Akanksha Kumari – spoke with several kinds of groups about their behaviors and processes: roommates, long-term friends, work colleagues – any group of three to five people that had been socially connected for about three to six months. They looked at the resources each group collectively owns and shares and would rather not have outsiders access – a smart fridge, a conference room, or a Google Doc, for example – and asked about the threat models each member had.

“What are they afraid of?” Das said. “Who might they be against having access? How do they jointly come up with strategies? What conversations do they have about security and privacy? How does that impact their behaviors inside and outside of the group setting?”

They also had subjects fill out a diary to probe deeper individually – has something come up about security and privacy since they spoke as a group? What are the pain points and emergent threats on a day-to-day basis?

The research uncovered that the existing controls for digital resources drop the ball.

“They’re all designed under the understanding that people either don’t care enough or that only one person should handle the privacy,” Das said. “Most of these strategies were socially constructed, but implicit, because technical controls didn’t allow them to do it differently.”

One conclusion Das reached is that there need to be tools of shared governance, a tangible way for each member of the group to engage constructively with the shared privacy structure.

“What it looks like now is sort of a dictatorship,” Das explained. “Or, it can be egalitarian where anybody can adjust anything, and that causes problems too. There’s nuance of how different groups would like to control different resources. People are concerned about outsider threats, the types of information you share with people within your group, and with the reliability of people within the group to practice good behavior.

“We need a simple, fun, and easy way for people to come together and approach security and privacy as a group.”

Researchers Use Machine Learning to Fight COVID-19 Disinformation

Disinformation on COVID-19 spreads almost faster than the disease.

To ensure Americans can find the most accurate information, College of Computing researchers are creating machine-learning (ML) and data science tools to help fact-checkers be more efficient.

The Disinformation Dilemma

Although having high-quality news is important any time, the ever-changing nature of COVID-19 makes it even more vital that users have access to vetted information. Many Americans receive their news from social media, where rumors can spread as readily as memes.

“Rumors, hoaxes, fake cures, bioweapon claims, and disinformation campaigns about COVID-19 are prevalent on social media,” said School of Computational Science and Engineering Assistant Professor Srijan Kumar. “These induce anger, anxiety, and stress in readers, and in many cases, have even led to fatalities, such as hydroxychloroquine overdose.”

Newsroom fact-checkers are at the forefront of the fight against false information, but manually verifying every fact is time-consuming at best and nearly impossible in the age of COVID-19. So Kumar and School of Computer Science Professor Mustaque Ahamad are building data-driven, secure solutions for fact-checking.

A Next-Generation Solution

Kumar and Ahamad are a well-matched team. In his past cybersecurity research, Ahamad has worked with professional fact-checkers to determine what they need to complete their work at news organizations. Kumar, for his part, has been building ML and data-driven tools to detect disinformation. COVID-19 seemed like a natural pairing for the two.

“Together, we started collaborating to build the next generation of data-driven and security-minded solutions for effective fact checking,” Kumar said.

Their solution is to detect disinformation early, before it even reaches the fact-checkers. With this in mind, they plan to develop ML techniques to remove deliberately misleading information from the news.

Their ML models will be able to learn the difference between true and false information with only a few training data points.
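As a toy illustration of learning from limited labeled data (entirely hypothetical examples and a deliberately simple model, not the researchers' system), even a word-count classifier with smoothing can pick up a true-versus-false signal from a handful of examples:

```python
# Toy illustration (hypothetical data, not the researchers' model): a tiny
# naive-Bayes-style word scorer trained on only four labeled examples,
# showing the idea of separating classes from few training data points.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text, smoothing=1.0):
    """Pick the label whose word distribution best explains the text
    (log-likelihood with add-one smoothing over the shared vocabulary)."""
    vocab = len({w for c in counts.values() for w in c})
    best_label, best_score = None, float("-inf")
    for label, words in counts.items():
        total = sum(words.values())
        score = sum(
            math.log((words[w] + smoothing) / (total + smoothing * vocab))
            for w in text.lower().split()
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts = train([
    ("miracle cure doctors hate this trick", "false"),
    ("bioweapon conspiracy secret cure", "false"),
    ("health officials report new case counts", "true"),
    ("study published in peer reviewed journal", "true"),
])
print(classify(counts, "secret miracle cure"))     # false
print(classify(counts, "officials report study"))  # true
```

The researchers' actual models would be far more sophisticated; the sketch only shows why a few well-chosen labeled points can be enough to start triaging.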

“Our models will triage the cases that are most likely to be false in order of their impact on the readers,” Kumar said.

The models will also be customizable to the individual fact-checker’s topical, geographical, and language preferences. As the project develops, Kumar and Ahamad will collaborate with professional fact-checkers to ensure the models are effective throughout the research.

“Our framework will bring together a one-stop shop for groups of fact-checkers to collaboratively identify false information,” Kumar said. “This information can then be shipped to appropriate stakeholders, so that the readers can be appropriately alerted when they view it and the hoaxes can be removed from social media circulation.”

For more coverage of Georgia Tech’s response to the coronavirus pandemic, please visit our Responding to COVID-19 page.

School of Computer Science Professor Wins Award for Influential Cryptography Research

Professor Alexandra Boldyreva has won a Test of Time Award from the International Conference on Practice and Theory in Public Key Cryptography (PKC) for her work on new multi-user digital signatures.

Boldyreva, who is a professor and associate chair in the School of Computer Science, wrote the single-author winning paper, “Threshold Signatures, Multisignatures and Blind Signatures Based on the Gap-Diffie-Hellman-Group Signature Scheme,” in 2003 as a Ph.D. student at the University of California San Diego.

“My advisor had to convince me to write the paper because I thought the results were too simple to deserve a publication,” Boldyreva said. “Of course, back then I would have never believed that the paper would do so well.”

Her new multi-user digital signature schemes were simpler and more efficient than existing schemes at the time. One of the most useful was a multisignature that allows a number of users to jointly digitally sign the same message while keeping the final signature as short and computationally efficient as a single signature. In effect, the length of the multisignature doesn’t grow as more users sign.
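In essence (with notation simplified here, not quoted from the paper): given a hash-to-group function H, group generator g, and bilinear pairing e, each signer i holds secret key x_i and public key pk_i = g^{x_i}, and the joint signature on a message m stays a single group element no matter how many parties sign:

```latex
% Core equations of the pairing-based multisignature (simplified notation).
\sigma_i = H(m)^{x_i}
\qquad
\sigma = \prod_{i=1}^{n} \sigma_i = H(m)^{\sum_{i} x_i}
\qquad
e(\sigma, g) \stackrel{?}{=} e\!\Big(H(m), \prod_{i=1}^{n} pk_i\Big)
```

Because the product of the individual signatures collapses into one element of the same group, both the signature size and the verification cost stay constant as n grows.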

Secure networking protocols as well as blockchains and cryptocurrencies commonly rely on efficient multi-user signatures. Yet, at the same time, the schemes Boldyreva created are still simple enough that introductory cryptography courses teach them.

Boldyreva gave a five-minute presentation on the paper at this year’s virtually held PKC.

“Now, I’m certain that simplicity is a big advantage as long as the result is useful or interesting.”


FraudScope Raises $7 Million to Tackle Health-Care Fraud

Research Team Develops 'Hidden Property Abusing', Presenting at National Security Conference

Baking and Boiling Botnets Could Drive Energy Market Swings and Damage

Evil armies of internet-connected EV chargers, ovens, hot-water heaters, air-conditioners, and other high-wattage appliances could be hijacked to slightly manipulate energy demand, potentially driving price swings and creating financial damage to deregulated energy markets, warns a new report scheduled to be presented Aug. 5 at the Black Hat USA 2020 conference.

By turning the compromised equipment on or off to artificially increase or decrease power demand, botnets made up of these energy-consuming devices might help an unscrupulous energy supplier or retailer (electric utility) alter prices to create a business advantage, or give a nation-state a way to remotely harm the economy of another country by causing financial damage to its electricity market. If done within the bounds of normal power demand variation, such an attack would be difficult to detect, the researchers said.

“If an attacker can slightly affect electricity market prices in their favor, it would be like knowing today what’s going to happen in tomorrow’s stock market,” said Tohid Shekari, a graduate research assistant in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. “If the manipulation stays within a certain range, it would be stealthy and difficult to differentiate from a typical load forecasting error.”

Believed to be the first proposed energy market manipulation cyberattack, the operation would depend on botnets composed of thousands of appliances that could be controlled centrally by attackers who had taken over their Internet of Things (IoT) controllers. Malicious actors have already demonstrated IoT botnet attacks such as Mirai, which used a network of compromised internet-connected cameras and routers to launch attacks on key internet infrastructure.

The attack, dubbed “IoT Skimmer,” would be made possible by the deregulation of energy markets, which has created a system to efficiently supply electrical power. To meet the demand for electrical energy, utility companies must predict future demand and purchase power from the day-ahead wholesale energy market at competitive prices. If the predictions turn out to be wrong, the utilities may have to pay more or less for the energy they need to meet the demands of their customers by participating in the real-time market, which has more volatile prices in general. Creating erroneous demand data to manipulate forecasts could be profitable to the suppliers selling energy to meet the unexpected demand, or the retailers or utilities buying cheaper energy from the real-time market.
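The two-settlement mechanics above can be sketched in a few lines (the prices and load figures below are hypothetical, chosen only to show the direction of the effect, not numbers from the study):

```python
# Illustrative sketch (hypothetical prices and loads, not figures from the
# study): a utility buys its forecast load in the day-ahead market, and any
# deviation is settled at the more volatile real-time price. A small botnet-
# induced demand shift shows up as extra real-time cost.

def settlement_cost(actual_mwh, forecast_mwh, day_ahead_price, real_time_price):
    """Total cost: forecast load bought day-ahead, plus the deviation
    (surplus or shortfall) settled at the real-time price."""
    day_ahead_cost = forecast_mwh * day_ahead_price
    deviation = actual_mwh - forecast_mwh
    return day_ahead_cost + deviation * real_time_price

# Normal day: actual demand matches the forecast.
baseline = settlement_cost(1000, 1000, day_ahead_price=30.0, real_time_price=45.0)

# Attack day: a botnet adds 2% unexpected load, forcing real-time purchases.
attacked = settlement_cost(1020, 1000, day_ahead_price=30.0, real_time_price=45.0)

print(baseline)             # 30000.0
print(attacked)             # 30900.0
print(attacked - baseline)  # 900.0 extra, paid at the higher real-time price
```

A 2% load shift that stays inside normal forecasting error is hard to flag, yet anyone positioned on the right side of the resulting price movement profits.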

The researchers weren’t able to determine whether such an attack might have already taken place because IoT devices – beyond being insecure – also lack the kind of monitoring that would be necessary to detect such hijacking. But they used real data sets from two of the largest U.S. energy markets – New York and California – to evaluate the feasibility of their proposed attack.

“We did a lot of simulation and mathematical analysis to show that this kind of transfer could occur,” said Raheem Beyah, the Motorola Foundation Professor in the School of Electrical and Computer Engineering who is also Georgia Tech’s vice president for Interdisciplinary Research and co-founder of the company Fortiphyd Logic. “We also did a feasibility analysis of the supporting areas to show that this would be possible from various perspectives.”

The researchers assume that such botnets already exist, and that attackers could simply rent their use on the dark web. More than 20 million smart thermostats already exist in the North American market, and they are connected to at least one high-wattage device – a heating and air-conditioning system that could be controlled by attackers on an intermittent basis.

“If you consider all of the smart thermostats and internet-connected electric ovens, water heaters, and electric vehicle chargers that are already in use, there are plenty of devices to be compromised,” Shekari said. “Homeowners would likely never notice if the EV charger turns on when electricity demand is highest, or if the air conditioning cools a little more than they expected when they are not home.”

To counter the potential attack, researchers suggest both detection and prevention steps. Through integrated monitoring of the normal power use of high-wattage IoT-connected devices, unexpected peaks or valleys in power consumption triggered by an attacker could be detected. And access to data on expected energy demand – which is now made available publicly – could be restricted to those who actually need it.
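The detection idea can be sketched simply (this is an illustrative statistical check, not the researchers' actual monitoring system): flag any interval whose aggregate device power draw jumps far outside its recent history.

```python
# Illustrative sketch (not the researchers' detector): flag intervals whose
# aggregate IoT-device power draw deviates sharply from the recent average --
# the kind of unexpected peak or valley coordinated switching would cause.
from statistics import mean, stdev

def flag_anomalies(readings_kw, window=8, threshold=3.0):
    """Return indices of readings more than `threshold` standard deviations
    away from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings_kw)):
        history = readings_kw[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings_kw[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

normal = [100.0, 101.0, 99.5, 100.5, 100.0, 99.0, 101.5, 100.0]
spiked = normal + [130.0]  # many devices switched on at once
print(flag_anomalies(spiked))  # [8] -- the coordinated spike is flagged
```

A real deployment would need per-device baselines and seasonal adjustment, but the principle is the same: coordinated switching leaves a statistical footprint that ordinary usage does not.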

The primary factor that makes this attack possible is the detailed online data sharing of electricity market information, which is usually updated every five minutes. 

“This energy demand information is really a data privacy issue, and we need to think long and hard about the balance between transparency and security,” Beyah said. “There’s always a tension there, but limiting the amount of detail could make it more difficult for attackers who want to hide their manipulations to know what the normal variations are.”

The potential attack highlights the need for considering cybersecurity threats in technology areas where they had perhaps never been possible before.  

“This is an interesting intersection between the IoT security world and energy markets,” said Beyah. “Right now, it seems that there is a large gap between the two worlds. Our point is that there are implications for combining IoT technology and high-wattage devices that can compromise markets in ways we would never have thought of before.”

The presentation, “IoT Skimmer: Energy Market Manipulation Through High-Wattage IoT Botnets,” will be presented on Wednesday, Aug. 5, at 2:30 p.m. as part of the Black Hat USA 2020 conference.


Smartphone Apps May Connect to Vulnerable Backend Cloud Servers

Cybersecurity researchers have discovered vulnerabilities in the backend systems that feed content and advertising to smartphone applications through a network of cloud-based servers that most users probably don’t even know exists.

In research to be reported August 15 at the 2019 USENIX Security Symposium, researchers from the Georgia Institute of Technology and The Ohio State University identified more than 1,600 vulnerabilities in the support ecosystem behind the top 5,000 free apps available in the Google Play Store. The vulnerabilities, affecting multiple app categories, could allow hackers to break into databases that include personal information – and perhaps into users’ mobile devices.

To help developers improve the security of their mobile apps, the researchers have created an automated system called SkyWalker to vet the cloud servers and software library systems. SkyWalker can examine the security of the servers supporting mobile applications, which are often operated by cloud hosting services rather than individual app developers.

“A lot of people might be surprised to learn that their phone apps are communicating with not just one, but likely tens or even hundreds of servers in the cloud,” said Brendan Saltaformaggio, an assistant professor in Georgia Tech’s School of Electrical and Computer Engineering. “Users don’t know they are communicating with these servers because only the apps interact with them and they do so in the background. Until now, that has been a blind spot where nobody was looking for vulnerabilities.”

The Air Force Office of Scientific Research and the National Science Foundation supported the research.

In their study, the researchers discovered 983 instances of known vulnerabilities and another 655 instances of zero-day vulnerabilities spanning the software layers – operating systems, software services, communications modules and web apps – of the cloud-based systems supporting the apps. The researchers are still investigating whether attackers could get into individual mobile devices connected to vulnerable servers.

“These vulnerabilities affect the servers that are in the cloud, and once an attacker gets on the server, there are many ways they can attack,” Saltaformaggio said. “It’s a whole new question whether or not they can jump from the server to a user’s device, but our preliminary research on that is very concerning.”

The researchers identified three types of attack that could be made on the backend servers: SQL injection, XML external entity and cross-site scripting, explained Omar Alrawi, a Georgia Tech graduate research assistant and co-first author with Chaoshun Zuo at Ohio State. By taking control of these machines in the cloud, attackers could gain access to personal data, delete or alter information or even redirect financial transactions to deposit funds in their own accounts. 
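SQL injection, the first attack class named above, is easy to demonstrate (a generic illustration with a throwaway in-memory database, not code from the study): the flaw is building a query by string concatenation, and the fix is a bound parameter.

```python
# Illustrative sketch (not from the study): the classic SQL-injection flaw
# the researchers probed for, and the parameterized query that prevents it,
# using a throwaway in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-controlled value from an app request

# Vulnerable: string concatenation lets the input rewrite the query logic,
# so the injected predicate matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 1 -- the attacker dumped the table
print(len(safe))        # 0 -- no user is literally named "' OR '1'='1"
```

The same data-versus-code confusion underlies XML external entity and cross-site scripting attacks: untrusted input is interpreted in a context where it can carry instructions.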

To study the system, Alrawi and Zuo ran applications in a controlled environment on a mobile device that connected to backend servers. They then watched the communications between the device and servers, and repeated the process for all of the applications studied.

“We found that a lot of applications don’t encrypt the communications between the mobile app and the cloud service, so an attacker that is between the two points or on the same network as the mobile could get information about the user – their location and user name – and potentially execute password resets,” Alrawi said.

The vulnerabilities were not easy to spot. “You have to understand the context through which the app communicates with the cloud server,” he said. “These are very deep bugs that cannot be identified by simply scanning and using traditional tools that are used for web application security.”

The operators of vulnerable systems were notified of the findings. Concerns about who is responsible for securing those backend servers are among the issues to come out of the study.

“It’s actually a significant problem because of how many different software developers may have their hands in building these cloud servers,” Saltaformaggio said. “It’s not always clear who is responsible for doing the patching and who is responsible for the vulnerabilities. It’s tough to track down these vulnerabilities, but it’s also tough to get them patched.”

To save app developers from having to do the security research they did, the researchers are offering SkyWalker, an analysis pipeline to study mobile backends. Developers will be able to submit their apps to SkyWalker at https://mobilebackend.vet and get a report on what it finds. 

“SkyWalker will watch how the application communicates with those cloud servers, and then it will try to communicate with the servers to find vulnerabilities,” said Alrawi. “This information can give an app developer a heads-up about potential problems before they make their application public.”

The researchers studied only applications in the Google Play Store, but applications designed for iOS may share the same backend systems.

“These servers provide backend services for mobile apps that any device could use,” Alrawi said. “These cloud services are essential components of modern mobile apps. They are part of the always-connected world.”

For the future, the researchers hope to study how the vulnerabilities could affect smartphone users, and to check on whether the problems they identified have been addressed.

“We are going to keep doing these sorts of studies and will revisit them later to see how the attack landscape has improved,” said Saltaformaggio. “We will keep looking for more blind spots that need to be studied. In the new world of smartphones and mobile applications, there are unique problems that need to be rooted out.”

In addition to those already mentioned, the research team included Ruian Duan and Ranjita Pai Kasturi from Georgia Tech and Zhiqiang Lin from Ohio State.

This work was partially supported by the Air Force Office of Scientific Research (AFOSR) under grant FA9550-14-1-0119 and by National Science Foundation (NSF) awards 1834215 and 1834216. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsoring organizations.

CITATION: Omar Alrawi, Chaoshun Zuo, Ruian Duan, Ranjita Pai Kasturi, Zhiqiang Lin and Brendan Saltaformaggio, “The Betrayal at Cloud City: An Empirical Analysis of Cloud-Based Mobile Backends,” 2019 USENIX Security Symposium. https://www.usenix.org/conference/usenixsecurity19/presentation/alrawi

Research News
Georgia Institute of Technology
177 North Avenue
Atlanta, Georgia  30332-0181  USA

Media Relations Contact: John Toon (404-894-6986) (jtoon@gatech.edu)

Writer: John Toon

Summary: Cybersecurity researchers have discovered vulnerabilities in the backend systems that feed content and advertising to smartphone applications through a network of cloud-based servers that most users probably don’t even know exists.

Published: August 12, 2019

[Image: Schematic of SkyWalker]
[Image: Vulnerable apps by genre]
NOVID Exposure Notification App Enlists Smartphones in Coronavirus Battle

The smartphones in everyone’s purse or pocket could soon become powerful tools in the effort to control coronavirus in the campus community.

Georgia Tech has begun using NOVID, an exposure notification app that anonymously notifies students, staff, and faculty if they have potentially been exposed to Covid-19. Use of the app is voluntary, and it is available at no cost to members of the Georgia Tech community. A link to information about the app is available from the COVID Central portal (covidcentral.gatech.edu).

Developed by researchers at Carnegie Mellon University, NOVID captures no personally identifiable information from people using it. Instead, smartphones running the app exchange synthetic codes with other smartphones that are nearby for more than a brief period of time. If the owner of one of the phones tests positive for the virus, they can notify other app users with whom they have been in contact without identifying themselves or sharing any personal information.

That rapid notification can facilitate early testing, slowing spread of the virus from infected individuals who may not be showing symptoms yet – or who may be asymptomatic.

Manual contact tracing will continue to be done by the Georgia Department of Public Health with support from Georgia Tech. Contact tracing makes initial rapid notifications of close contacts in the Georgia Tech community based on information gathered from the individual who tests positive. For contact tracing purposes, a close contact is defined as anyone who, for 15 minutes or more, was within 6 feet of the person who tested positive, anyone who had physical contact with the person, or anyone who was coughed or sneezed on by the person. 
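That close-contact definition can be expressed as a simple predicate. The sketch below uses our own parameter names and is an illustration of the stated criteria, not public health department code:

```python
def is_close_contact(distance_feet, duration_minutes,
                     physical_contact=False, coughed_or_sneezed_on=False):
    """Apply the close-contact definition used for contact tracing:
    within 6 feet of the person who tested positive for 15 minutes or
    more, any physical contact, or being coughed or sneezed on."""
    if physical_contact or coughed_or_sneezed_on:
        return True
    return distance_feet <= 6 and duration_minutes >= 15

print(is_close_contact(5, 20))    # True: within 6 ft for 15+ minutes
print(is_close_contact(10, 60))   # False: long exposure, but too far away
print(is_close_contact(12, 1, coughed_or_sneezed_on=True))  # True
```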

In a large community, exposure notification apps can fill in the gaps by finding individuals who might have been close enough to be exposed to the virus but not known to the individual with a positive test result. These could include, for instance, someone working nearby in a makerspace or lab – or working out on nearby equipment at the gym.

“Manual contact tracing has been proven over time to be extremely useful for tracking all of the contacts that you know about,” said Alexa Harter, director of the Georgia Tech Research Institute’s (GTRI) Cybersecurity, Information Protection, and Hardware Evaluation Research (CIPHER) Lab. “An exposure notification app is useful when you are in contact with someone you don’t know. It’s very good at the kinds of interactions that manual contact tracing doesn’t do as well.”

Installed on an iOS or Android smartphone, the app exchanges information with other phones also running the app. It transmits a frequently changing code to other devices so they can be alerted if necessary, but without sharing any personally identifiable information. Pairs of code interactions are stored on the NOVID server for a limited period of time.
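The general rotating-code idea can be sketched as follows. This is a minimal illustration of the anonymous-exchange concept, not NOVID's actual protocol (which, as noted above, stores code pairs on a server rather than matching on the phone):

```python
import secrets

class Phone:
    def __init__(self):
        self.current_code = secrets.token_hex(16)  # random, non-identifying
        self.heard = set()                         # codes from nearby phones

    def rotate(self):
        # Codes change frequently so no single code can track a user.
        self.current_code = secrets.token_hex(16)

    def encounter(self, other):
        # Two phones within range exchange their current codes.
        self.heard.add(other.current_code)
        other.heard.add(self.current_code)

a, b, c = Phone(), Phone(), Phone()
a.encounter(b)                    # a and b were near each other; c was not

# If a tests positive, a's broadcast code is published (still anonymous)...
published = {a.current_code}
# ...and each phone checks locally whether it heard any published code.
print(bool(b.heard & published))  # True  -> b is notified of exposure
print(bool(c.heard & published))  # False -> c is not
```

The key property is that notification happens by matching random tokens, so neither the infected user nor the exposed user learns the other's identity.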

On a college campus, faculty, staff, and students may walk past hundreds of people on sidewalks and in hallways, but those brief encounters are not used for exposure notification because evidence suggests that such interactions carry a very low risk of disease transmission.

NOVID leverages a combination of ultrasound and Bluetooth technology to note other devices that are within 6 feet, and only if they remain that close for 15 minutes or more. By briefly using the device’s microphone and measuring the time sound takes to travel, ultrasound can accurately measure the distance between devices. The benefit of this combined method is avoiding false positives that would be generated from Bluetooth alone, such as contacts that may be on the other side of a wall – nearby, but not creating a risk of exposure.
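The distance estimate follows directly from the speed of sound: distance equals speed times travel time. A sketch of the arithmetic (illustrative only; NOVID's actual ranging combines ultrasound with Bluetooth and is more sophisticated):

```python
SPEED_OF_SOUND_M_PER_S = 343.0   # in air at roughly 20 degrees C
FEET_PER_METER = 3.281

def distance_from_time_of_flight(seconds):
    """Estimate device separation from a one-way ultrasound travel time."""
    return SPEED_OF_SOUND_M_PER_S * seconds * FEET_PER_METER

def within_exposure_range(seconds, threshold_feet=6.0):
    """Is the estimated separation within the 6-foot exposure threshold?"""
    return distance_from_time_of_flight(seconds) <= threshold_feet

# Sound covers 6 feet (about 1.83 m) in roughly 5.3 milliseconds.
print(round(distance_from_time_of_flight(0.00533), 2))
print(within_exposure_range(0.004))   # about 4.5 ft -> True
print(within_exposure_range(0.010))   # about 11 ft  -> False
```

Because sound travels far more slowly than radio waves, even millisecond-scale timing yields foot-scale distance resolution, which is why ultrasound can separate a contact in the same room from one behind a wall when Bluetooth signal strength alone cannot.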

If a student receives a positive Covid-19 test at Stamps Health Services, they will be given a one-time code that they can enter into the app, which will send a notification to other phones the app has recorded as potential exposures. Community members who have received positive tests elsewhere on campus or off campus are required, as part of public health regulations, to report their positive Covid-19 status to Stamps Health Services. They will receive an app notification code at that time.

Persons being warned through NOVID of a potential exposure will be encouraged to isolate themselves, monitor for symptoms, and be tested for the virus. The app will provide directions for how to contact relevant campus services when alerting a user that they have potentially been exposed.

Researchers from the GTRI CIPHER Lab’s Software Assurance Branch have evaluated NOVID for privacy protections to make sure it doesn’t record personal information that could identify users – and for cybersecurity issues to make sure it protects the device. “They found it was not collecting anything it shouldn’t be collecting or transmitting anything it shouldn’t be transmitting,” said Harter.

When launched for the first time, the app will ask users of iOS phones to give it permission to use the device’s microphone. That allows the ultrasound system to determine distance from other phones. The GTRI researchers found that the app did not store or transmit sound received via the device's microphone, and did not send sound via the device's speaker other than what is necessary for detecting potential contacts. On iOS devices, the app must be operating in standby mode to detect contacts.

Android users will be asked to allow the app to access location information. That’s necessary to use the device’s Bluetooth radio, which is part of the distance-finding process.

Because NOVID sends out signals only intermittently, it has minimal effect on battery life. Battery usage is slightly higher on Android than iOS because of Android's background mode.

The effectiveness of exposure notification apps grows with the percentage of community members using it. The app can also be helpful to smaller groups, such as fraternities or sororities that may be able to encourage a high percentage of members to use it.

“The more members of the Georgia Tech community that are using this, the more effective it will be in helping us stop the spread of infection,” said Jon Duke, director of Georgia Tech’s Center for Health Analytics and Informatics who led adoption of the software with Harter. “Addressing the pandemic will take a community-wide effort involving testing, wearing masks, maintaining distance from others, and frequent handwashing. NOVID is another part of that effort.”  

As of Aug. 17, more than 2,000 users had joined the Georgia Tech NOVID community.

NOVID can be downloaded from Apple’s App Store or Android’s Google Play Store. Georgia Tech staff, faculty and students should enter the community code JACKETS on the NOVID settings page to join the Georgia Tech NOVID community. Additional information about using the system is available at covidcentral.gatech.edu.


Published: August 17, 2020

[Image: Georgia Tech campus community]
[Image: Using the NOVID exposure notification app]
New School Builds on Georgia Tech's Commitment to Advancing Cybersecurity and Privacy Education

Georgia Tech, which has been named No. 2 in undergraduate cybersecurity education by U.S. News and World Report, is building upon its success by launching a new School of Cybersecurity and Privacy. The new school is the first of its kind among top research universities.

The new School will build on Georgia Tech’s considerable investments in cybersecurity and privacy education and research. The Institute already has three cybersecurity degree programs. The School will weave them together with other important interdisciplinary programs.  

“The new School of Cybersecurity and Privacy is a reflection of Georgia Tech’s strengths and commitment to serving the needs of our society and our state,” said Georgia Tech President Ángel Cabrera.

“Georgia Tech’s new School of Cybersecurity and Privacy will focus on applied research collaborations with the fast-growing cybersecurity industry in Georgia and meeting a critical workforce need,” Cabrera said. “It will bring together Georgia Tech’s expertise across disciplines to advance technology and find new solutions to protect our personal privacy and support our national security.”

There are more than 500 cybersecurity researchers spread across Georgia Tech who bring in more than $180 million in research awards annually. Georgia Tech’s faculty are ranked No. 2 in the world for publications in top security conferences.

The School of Cybersecurity and Privacy at Georgia Tech is the first School of its kind at a top university, drawing together faculty and researchers across various disciplines from practically all six colleges at Georgia Tech and the Georgia Tech Research Institute.

The new School will be intercollegiate and interdisciplinary because cybersecurity and privacy problems typically play out across multiple dimensions.

“Cybersecurity includes not only technology but the law, business processes, and cultural considerations,” said Charles Isbell, dean and John P. Imlay, Jr. chair of the College of Computing. “The School of Cybersecurity and Privacy will bring together thought leaders from all of those areas to push the envelope of technical innovation and produce the workforce of the future."

To that end, the School will bring in not only computer scientists and engineers but business experts and behaviorists.

“Solving tomorrow’s toughest cybersecurity problems will require not only a thorough understanding of the technologies and threats involved. It also will require deep expertise in behavioral and policy considerations that must increasingly inform the development and use of new cybersecurity approaches and technologies,” said Kaye Husbands Fealing, dean of the Ivan Allen College of Liberal Arts.

“By drawing on Georgia Tech’s globally respected expertise in the technology and policy arenas, the School of Cybersecurity and Privacy will extend our leadership in this area with exactly the kind of innovative, interdisciplinary, human-centered thinking and research we need to advance socially responsible and technically practical solutions to these critical issues.”

The solutions and the workforce produced by the new School will not only benefit business, but also protect every aspect of our online lives and our national infrastructure.

“Cybersecurity is not just a personal issue — our credit cards or identities quickly come to mind — but it has an even larger impact on national security, financial markets, even power grids,” said Provost and Executive Vice President for Academic Affairs Steve McLaughlin. “That is why the new School of Cybersecurity and Privacy is so important at this time. Cybersecurity, privacy, and related policies dominate the priorities of many organizations, and the need for advanced research and talent is outpacing supply. Georgia Tech is already a leader in this area, and the new School will take us to even greater heights and impact.”

The creation of the School has been welcomed by industry leaders in cybersecurity and privacy, both in Atlanta and nationwide. Atlanta in particular is a center of financial technology, an area that goes hand-in-hand with cybersecurity.

[RELATED: Georgia Tech Institute for Information Security and Privacy]

"Financial technology companies leverage technology and data to fuel innovation, which makes cybersecurity and privacy vital to their success,” said Ryan Graciano (B.S. Computer Science ’04), co-founder and chief technology officer of Credit Karma. “This means not only staying on the cutting-edge technologically but also building systems that work with multiple sets of privacy regulations in different jurisdictions. The School of Cybersecurity will be an essential resource for fintech companies in Atlanta and worldwide."

The new School will strengthen Atlanta’s tech economy, said Alan Taetle, general partner at the venture-capital firm Noro-Moseley Partners.

“The creation of a new School of Cybersecurity and Privacy will help the Atlanta tech ecosystem build on two of its greatest strengths: internet security and payment processing,” Taetle said. “Georgia Tech is a world-class university that creates world-class technology, and I expect the new school to both produce new companies and also provide vital support to existing ones both locally and nationally.”

Of course, financial companies are not the only ones to benefit from new cybersecurity and privacy technologies.

“Consumer demands for digital service are transforming business, forcing new innovations in cloud infrastructure and other areas for businesses to compete or remain relevant. Every one of these innovations must be accompanied by new cybersecurity technologies and policies in order to keep both corporations and consumers safe,“ said Tony Spinelli, chief information officer for Urban One, Inc. and former chief information officer for Capital One. “The School of Cybersecurity and Privacy, in other words, is addressing a vast and growing demand from all sectors of the modern digital economy.”

The School of Cybersecurity and Privacy will be led by interim chair Rich DeMillo (Ph.D. ’72), the Charlotte B. and Warren C. Chair of Computer Science and Professor of Management. DeMillo previously served as the founder and executive director of Georgia Tech’s Center for 21st Century Universities, and as the dean of the College of Computing. He is the author of more than 100 articles, books, and patents, and is a fellow of the American Association for the Advancement of Science and the Association for Computing Machinery. Prior to arriving at the Institute, DeMillo served as Hewlett-Packard’s first chief technology officer.

“Its roots are in computer science and engineering, mathematics, public policy, and national security, but the last twenty years has seen the birth of cybersecurity as a profession in its own right,” DeMillo said. “I am pleased that the Institute recognized this shift by launching an academic unit that will be home to a new generation of scholars.  Building that culture will be challenging, but we will be judged by the success of our people and our ability to meet the growing demand for Georgia Tech-caliber experts with unique skills.”

The new School will launch a nationwide search this fall for multiple faculty members and for its founding chair. For more information about the school, please visit its website.

Published: September 15, 2020
Writer: Ann Claycombe, Communications Director

[Image: Source Code Closeup]
[Image: School of Cybersecurity & Privacy logo]
Virtual Assistant Stops Robocalls

Americans receive 4.8 million robocalls a year, but what if they didn’t have to be interrupted by them? Georgia Tech researchers developed a virtual assistant that screens calls to block 97 percent of scammers.

Currently, people rely on blacklist apps to stop spam calls, but these are only up to 60 percent effective, according to School of Computer Science Ph.D. student Sharbani Pandit. This is because of the prevalence of neighbor spoofing, in which scammers use numbers similar to a person’s own.

“The caller number will look very similar to your own number, so you’re more likely to pick up, and each time they call, they use a new number, so it doesn’t show on the blacklist,” said Pandit.
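As an illustration of why neighbor spoofing defeats blacklists, here is a hypothetical heuristic (our own, not from the paper) that flags callers whose numbers share their leading digits with the user's own number, the very similarity spoofers exploit:

```python
def looks_like_neighbor_spoof(own_number, caller_number, shared_digits=6):
    """Flag callers whose number shares its leading digits (by default the
    area code plus exchange) with the user's own number -- the familiar-
    looking pattern that neighbor spoofing exploits."""
    own = "".join(ch for ch in own_number if ch.isdigit())
    caller = "".join(ch for ch in caller_number if ch.isdigit())
    return own != caller and own[:shared_digits] == caller[:shared_digits]

print(looks_like_neighbor_spoof("404-555-0123", "404-555-9876"))  # True
print(looks_like_neighbor_spoof("404-555-0123", "212-867-5309"))  # False
```

The catch, as Pandit notes, is that each spoofed number is used once and then discarded, so list-based defenses never catch up, which is what motivated the team's conversational approach instead.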

So Pandit and her team tried a more direct approach: a virtual assistant (VA). Like a smart home device or a secretary, the VA determines whether a call is from a person before passing it to the user.

How It Works

Whether a call is spam or not can be determined by a simple question: does the caller know the name of the person they’re calling?

“The idea is someone who is a wanted caller would know the full name of the person they’re trying to reach,” Pandit said.

If the caller can confirm the name, the VA would forward the call and a transcript of how the conversation went to the user. The entire interaction takes 10 seconds.

For callers who may not know the name but are not bots, the VA engages in a conversation and interrupts the caller mid-speech to see whether the caller stops talking. This process takes a maximum of 30 seconds.

“It’s very natural in human conversation that you would stop to listen to what the other party is saying on the call, but a bot wouldn’t do that,” Pandit said.

For calls judged to be spam, a notification is sent to the user with a label indicating whether each caller was a human or a robot.
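The screening flow described above can be sketched as a simple decision function. This is illustrative logic only; the parameter and label names are our own, not from the prototype:

```python
def screen_call(caller_confirms_name, stops_when_interrupted):
    """Sketch of the virtual assistant's screening decisions:
    - caller knows the callee's full name -> forward the call (wanted caller)
    - caller yields when interrupted      -> likely human, notify with label
    - caller talks over the interruption  -> label as a robocall
    """
    if caller_confirms_name:
        return "forward with transcript"    # roughly a 10-second interaction
    if stops_when_interrupted:
        return "notify user: human caller"  # up to about 30 seconds
    return "notify user: robocall"

print(screen_call(True, True))
print(screen_call(False, True))
print(screen_call(False, False))
```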

How Many Calls It Stops

The researchers conducted a user study with 21 people, who reported being comfortable talking to the VA. They also tested the VA on a database of 8,000 robocall recordings, of which 97.8 percent were correctly labeled as robocalls.

Pandit presented the research in “Fighting Voice Spam with a Virtual Assistant Prototype.” She co-wrote the paper with the University of Georgia’s Jienan Liu and Associate Professor Roberto Perdisci, and SCS Professor Mustaque Ahamad. The work has already garnered media attention from New Scientist and Digital Trends.

The researchers are still making a few adjustments before they release the Android app. Robocalls are constantly evolving and getting more sophisticated, which introduces new challenges to researchers. Some robocalls even have lists of phone numbers and names, so the researchers are adding more questions in the screening conversation that only a human could answer.

They believe that anything that reduces the efficacy of robocalls will be a benefit.

“The goal of robocalls is to make as many calls as possible to target victims and make a profit from it, but as security researchers, if we can add hurdles in their step, then sometimes the cost isn’t worth it for the attacker,” said Pandit.

Published: September 28, 2020
Writer: Tess Malone, Communications Officer

[Image: Robocall]