USENIX Security 2024
USENIX Security Symposium | Aug 14 – 16, 2024
The USENIX Security Symposium brings together researchers, practitioners, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks.
Georgia Tech is a leading contributor to the technical program. Explore the work and people, and find out how Tech experts are shaping the field as AI and other emerging technologies impact security.
Cybersecurity research helps protect sensitive information and systems from attacks, ensuring safety and privacy for individuals and organizations. Meet the Georgia Tech experts who are charting a path forward.
Georgia Tech at USENIX Security 2024
Explore Georgia Tech’s experts and the organizations they are working with at USENIX Security.
By the Numbers
Partner Organizations
Ben Gurion University of the Negev • Carnegie Mellon University • Cornell University • Meta • Georgia Tech • German International University • Google DeepMind • Intel • Kyung Hee University • NVIDIA • Ohio State University • Palo Alto Networks • Pennsylvania State University • Ruhr University Bochum • Samsung Research • Sungkyunkwan University • The University of Adelaide • University of California, Berkeley • University of Georgia • University of Illinois Urbana-Champaign • University of Michigan • University of North Carolina at Chapel Hill • University of Pennsylvania • University of Texas at Austin • University of Washington • Virginia Tech
Tracks by Number of Tech Authors
Faculty with Number of Papers
The Big Picture
Tech’s Investment in Cybersecurity Research
- Explore Georgia Tech at USENIX Security in a single view.
- Search for individual authors.
- Sort the chart by track, Tech 1st authors, or team size.
- Click on any author to see paper details.
- Highlight all the Tech faculty with one click.
Research Roundup
Teams led by Georgia Tech experts | USENIX Security ’24
I Experienced More than 10 DeFi Scams: On DeFi Users’ Perception of Security Breaches and Countermeasures
This paper investigates the security perceptions and behaviors of Decentralized Finance (DeFi) users.
The researchers conducted interviews and surveys to understand why users continue to engage with DeFi despite the prevalence of scams and hacks. The paper highlights that while DeFi users are drawn to its decentralized nature and potential for profit, they often demonstrate a lack of awareness regarding security risks and effective mitigation strategies.
Worryingly, the study finds that many victims of DeFi scams do not learn from their experiences and continue engaging with DeFi platforms without improving their security practices. The authors ultimately suggest that stronger regulations, potentially in a decentralized format, might be necessary to better protect DeFi users.
Why It Matters:
Previous research by Georgia Tech has shown just how much money has been lost in decentralized finance on the Ethereum blockchain; this paper adds a better understanding of why users continue to use cryptocurrency platforms despite those losses. Georgia Tech's researchers also found evidence supporting earlier claims that cryptocurrency users can exhibit behavior similar to that of gambling addicts.
Pictured: Lead author Mingyi Liu.
AI Psychiatry: Forensic Investigation of Deep Learning Networks in Memory Images
The paper introduces AiP, a novel memory forensic technique designed to recover and rehost deployed deep learning models for forensic investigation.
This addresses the challenge investigators face in obtaining the unique, in-the-field versions of deep learning models for analysis, especially in cases of online learning or potential attacks.
AiP works by first identifying high-level components of the deep learning model in memory, such as the root model object. It then recovers low-level data structures, including tensors and their weights, from both CPU and GPU memory.
Critically, AiP leverages the generic characteristics of deep neural networks (DNNs), like their representation as directed acyclic graphs and the structured nature of tensors, to enable model recovery across different frameworks and platforms.
Finally, AiP rehosts the recovered model into a live process, enabling investigators to apply white-box testing methodologies to detect attacks or vulnerabilities. The researchers demonstrated AiP’s efficacy through evaluation on various models and datasets, highlighting its accuracy, speed, and robustness, particularly in online learning scenarios where models are continuously updated.
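To make the rehosting step concrete, here is a minimal sketch (in PyTorch) of loading recovered weights into a live model and running a white-box backdoor probe. The model class, tensor dictionary, and trigger input are hypothetical stand-ins, not the AiP implementation.

```python
# Hypothetical sketch: rehost weights recovered from a memory image into a
# live PyTorch model, then run a white-box backdoor probe. Not the AiP code;
# TrafficSignNet, recovered_tensors, and the trigger are illustrative stand-ins.
import torch
import torch.nn as nn

class TrafficSignNet(nn.Module):
    """Stand-in for the architecture reconstructed from the memory image."""
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def rehost(recovered_tensors: dict) -> nn.Module:
    """Load tensors recovered from CPU/GPU memory into a live model object."""
    model = TrafficSignNet()
    model.load_state_dict(recovered_tensors)
    model.eval()
    return model

def backdoor_probe(model: nn.Module, clean_sign: torch.Tensor,
                   trigger_patch: torch.Tensor):
    """White-box check: does a suspected trigger patch change the prediction?"""
    with torch.no_grad():
        before = model(clean_sign).argmax(dim=1)
        after = model((clean_sign + trigger_patch).clamp(0, 1)).argmax(dim=1)
    return before, after
```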
Why It Matters:
In the paper, the authors present a hypothetical case study involving a self-driving car accident.
They simulate a scenario where a car’s DL model, responsible for traffic sign recognition, misidentifies a sign and causes a collision. Using AiP, a forensic investigator can recover the specific model deployed on the car at the time of the accident and, by studying the internal workings of the software—a method known as white-box techniques—determine if the misidentification was caused by an implanted backdoor.
This method significantly reduces the amount of work digital forensic investigators have to do when investigating an incident like the one above.
Pictured: Lead author David Oygenblik.
6Sense: Internet-Wide IPv6 Scanning and its Security Applications
Due to the massive size of the Internet Protocol version six (IPv6) address space, the exhaustive scanning methods used on Internet Protocol version four (IPv4) were not up to the challenge of finding security vulnerabilities at this new scale.
To address this, Georgia Tech researchers introduce 6SENSE, a new system for scanning IPv6 to identify active hosts and analyze their security. This new system uses reinforcement learning to identify promising address ranges and then employs a combination of online and offline de-aliasing techniques to avoid wasting resources on large, unresponsive IP address blocks (known as aliases).
The authors demonstrate that 6SENSE significantly outperforms existing IPv6 scanning methods, discovering substantially more active hosts and unique networks. They further illustrate 6SENSE’s practical value by conducting the first scan-driven security analysis of IPv6 hosts, revealing a concerning prevalence of security misconfigurations and vulnerabilities.
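As a rough illustration of learning-guided scanning, and only that, the sketch below shows a generic epsilon-greedy loop that concentrates probes on address ranges that have yielded responsive hosts. It is not the 6SENSE algorithm, and probe_prefix is a placeholder.

```python
# Minimal sketch of learning-guided prefix selection (epsilon-greedy).
# Illustrates the general idea of focusing probes on productive ranges;
# NOT the 6Sense algorithm. probe_prefix() is a placeholder.
import random
from collections import defaultdict

def probe_prefix(prefix: str, budget: int) -> int:
    """Placeholder: send `budget` probes into `prefix`, return responsive hosts found."""
    raise NotImplementedError

def scan(prefixes: list, rounds: int, budget: int, eps: float = 0.1):
    hits, tries = defaultdict(int), defaultdict(int)
    for _ in range(rounds):
        if random.random() < eps or not tries:
            prefix = random.choice(prefixes)   # explore a random range
        else:                                  # exploit the best-yielding range so far
            prefix = max(prefixes,
                         key=lambda p: hits[p] / tries[p] if tries[p] else 0.0)
        hits[prefix] += probe_prefix(prefix, budget)
        tries[prefix] += budget
    return hits
```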
Why It Matters:
Internet Protocol (IP) addresses are numerical labels assigned to devices connected to a network that uses the Internet Protocol for communication. They primarily act to identify devices on a network as well as where they are located. IPv6 addresses are a newer and more extensive addressing system than the older IPv4 system.
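For a sense of the scale gap: IPv4 addresses are 32 bits long, IPv6 addresses are 128 bits, so the IPv6 space is larger by a factor of 2^96.

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses, so brute-force
# scanning that is feasible for IPv4 is hopeless for IPv6.
ipv4_space = 2 ** 32    # about 4.3 billion addresses
ipv6_space = 2 ** 128   # about 3.4 * 10**38 addresses
print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:.2e}")
print(f"IPv6 is 2**96 = {2 ** 96:.2e} times larger")
```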
While previous methods of scanning IPv4 addresses have been highly beneficial in identifying security vulnerabilities, they cannot be applied to the vastly larger IPv6 address space. The researchers’ 6SENSE scanning system handled the larger scale of IPv6 addresses without disrupting or harming the internet community and was proven to be the most effective system to date.
DVa: Extracting Victims and Abuse Vectors from Android Accessibility Malware
The Cyber Forensics Innovation Laboratory (CyFI Lab) describes DVa, a novel malware analysis pipeline designed to expose malware that abuses Android’s accessibility (a11y) services.
A11y malware is a type of Android malware that exploits the a11y service, which is designed to assist users with disabilities.
DVa identifies the malware’s targeted victims, analyzes its victim-specific abuse vectors, and detects any persistence mechanisms the malware deploys.
The researchers highlight DVa’s unique ability to overcome the limitations of standard malware analysis by mimicking the presence of vulnerable applications to trigger malware behaviors.
CyFI Lab’s paper details DVa’s use of dynamic victim-guided execution and abuse-vector-guided symbolic analysis to uncover intricate attack routines that target specific apps and user data.
Why It Matters:
With this new tool, users and developers now have crucial information that enables better protection against the growing threat of Android a11y malware. Previous methods focused solely on finding malware that is currently deployed, while DVa alerts users to both past and present malware activity.
This malware was found to have bypassed the security measures in the Google Play Store and, in some cases, was dormant in an early version of apps before being activated when the app was updated.
A11y malware poses as an accessibility service and, after gaining access to the Android operating system, can steal sensitive information such as user credentials, approve automatic transactions, steal two-factor authentication codes, and execute ransomware.
Arcanum: Detecting and Evaluating the Privacy Risks of Browser Extensions on Web Pages and Web Content
The paper describes Arcanum, a dynamic taint tracking system for modern Chrome extensions that detects and evaluates privacy risks associated with data exfiltration. Arcanum works by tracking user data flows from various sources, like Chrome and Web APIs and user-annotated DOM elements, to potential exit points called taint sinks, such as network requests and storage APIs.
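Arcanum instruments the browser engine itself, but the source-to-sink idea can be pictured with a toy model: values read from sensitive sources carry labels, operations propagate those labels, and a labeled value reaching a sink is flagged. The sketch below is purely illustrative, with hypothetical names.

```python
# Toy model of dynamic taint tracking: sensitive sources attach labels,
# computations propagate them, and labeled data reaching a sink (e.g. a
# network request) is flagged. Illustration only; not Arcanum's in-browser
# instrumentation.
from dataclasses import dataclass, field

@dataclass
class Tainted:
    value: str
    labels: set = field(default_factory=set)

def source_read_dom(element_text: str) -> Tainted:
    return Tainted(element_text, {"user-annotated-DOM"})

def concat(a: Tainted, b: Tainted) -> Tainted:
    # Propagation rule: the result carries the union of both labels.
    return Tainted(a.value + b.value, a.labels | b.labels)

def sink_network_request(url: str, payload: Tainted):
    if payload.labels:   # tainted data reached an exit point
        print(f"ALERT: {payload.labels} data sent to {url}")

page_title = source_read_dom("Your bank statement")
body = concat(Tainted("title="), page_title)
sink_network_request("https://tracker.example/collect", body)
```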
The researchers deployed Arcanum to analyze all functional Chrome extensions, focusing on seven popular websites, and discovered that a significant number of extensions collect and transmit sensitive user data, often without clear disclosure in their privacy policies. These findings highlight the pervasive privacy risks posed by browser extensions and the need for stricter privacy controls.
Why It Matters:
Georgia Tech scientists used Arcanum on more than 100,000 Google Chrome browser extensions and found that more than 3,000 of them were collecting sensitive data, including URLs, page titles, device information, and the content of web pages. These browser extensions are used by around 144 million users and send the data they collect to third-party servers without proper user awareness or consent.
Over half of the extensions flagged by Arcanum either lack privacy policies or fail to disclose their data collection practices accurately. This lack of transparency makes it difficult for users to make informed decisions about the extensions they install.
These findings have major implications for the future of browser extension security and web privacy research as well as showing the current state of internet user privacy.
Pictured: Lead author Qinge Xie.
Towards Generic Database Management System Fuzzing
In this paper, Georgia Tech researchers introduce BUZZBEE, a new fuzzing framework designed to enhance the security of diverse Database Management Systems (DBMSs).
Existing fuzzing techniques struggle to effectively test the wide range of DBMS interfaces, particularly non-relational ones. The paper describes the implementation of BUZZBEE and presents a thorough evaluation of its performance on eight popular DBMSs across four categories—key-value, graph, document, and relational.
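As background, a mutation-based fuzzing loop in its simplest form looks roughly like the sketch below. This is a generic illustration, not BUZZBEE; run_query and its coverage signal are placeholders.

```python
# Generic mutation-based fuzzing loop against a DBMS interface: mutate seed
# queries, run them on the target, keep inputs that crash it or exercise new
# behavior. Simplified illustration only; run_query() is a placeholder.
import random

SEEDS = ['{"insert": {"k": 1}}', "MATCH (n) RETURN n", "SELECT 1"]

def mutate(query: str) -> str:
    ops = [lambda q: q.replace("1", str(random.randint(-2**31, 2**31))),
           lambda q: q + random.choice([";", " --", "\x00"]),
           lambda q: q[: random.randint(0, len(q))]]
    return random.choice(ops)(query)

def run_query(query: str):
    """Placeholder: return (crashed, new_coverage) for the target DBMS."""
    raise NotImplementedError

def fuzz(iterations: int):
    corpus, crashes = list(SEEDS), []
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        crashed, new_cov = run_query(candidate)
        if crashed:
            crashes.append(candidate)
        elif new_cov:
            corpus.append(candidate)
    return crashes
```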
Why It Matters:
BUZZBEE significantly outperforms existing automated software testing techniques, also known as fuzzers. Researchers discovered 40 real-world vulnerabilities, including several previously unknown bugs.
Two Shuffles Make a RAM: Improved Constant Overhead Zero Knowledge RAM
Two researchers describe a new and more efficient technique for constructing Zero Knowledge (ZK) proofs, especially for statements expressed as RAM programs.
The authors achieve this efficiency by optimizing the arithmetic circuit used to implement read/write memory operations. They also explain how to implement their techniques in the context of Vector Oblivious Linear Evaluation (VOLE) based ZK proofs, achieving a 2-20x speedup over the previous best VOLE-ZK RAM. Finally, the authors present related ZK data structures, including improved read-only memory and set ZK data structures that can be used to further optimize performance.
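Permutation-based memory checking is the classic idea behind constructions like this: the consistency of a RAM access transcript can be verified after reordering the accesses by address and time. The sketch below checks that relation in the clear; it is only an illustration of the property, not the paper's VOLE-based protocol, which proves it in zero knowledge.

```python
# Plaintext illustration of permutation-based memory checking: a time-ordered
# access transcript is consistent iff, after reordering by (address, time),
# every read returns the value of the most recent write to that address.
# The ZK construction proves a relation like this inside the proof system.

# Each access: (time, op, address, value) with op in {"write", "read"}.
def ram_consistent(transcript, initial=0):
    by_addr = sorted(transcript, key=lambda a: (a[2], a[0]))  # permute accesses
    last = {}
    for _, op, addr, val in by_addr:
        if op == "read" and val != last.get(addr, initial):
            return False
        last[addr] = val
    return True

print(ram_consistent([(0, "write", 5, 42), (1, "read", 5, 42), (2, "read", 7, 0)]))  # True
print(ram_consistent([(0, "write", 5, 42), (1, "read", 5, 7)]))                      # False
```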
Why It Matters:
This paper offers advancements in cryptography, especially in zero-knowledge proofs.
Zero-knowledge proofs are cryptographic schemes that allow someone to prove a statement is true without revealing any additional information to the verifier. In this paper, the researchers improved the arithmetic circuitry used to prove statements about RAM programs.
Teams with Georgia Tech contributors | USENIX Security ’24
Go Go Gadget Hammer: Flipping Nested Pointers for Arbitrary Data Leakage
This paper describes GadgetHammer, a novel exploit technique that leverages the Rowhammer vulnerability to gain arbitrary read access to a victim’s address space.
Unlike previous Rowhammer attacks that targeted specific sensitive structures like Page Table Entries, GadgetHammer exploits common code patterns involving nested pointer dereferences.
Why it Matters:
This research highlights the broader attack surface exposed by Rowhammer and emphasizes the need for more comprehensive defenses that address vulnerable code patterns, not just specific memory structures.
GoFetch: Breaking Constant-Time Cryptographic Implementations Using Data Memory-Dependent Prefetchers
A team of researchers from across the country examines the security implications of a hardware component called a Data Memory-dependent Prefetcher (DMP). The team specifically focused on the DMP in Apple’s M-series CPUs.
A vulnerability arises because DMPs can interpret data stored in memory as pointers and prefetch those pointers’ targets, thus revealing information about the data through cache timing side-channel attacks.
The authors demonstrate the severity of this vulnerability by crafting attacks against several constant-time cryptographic implementations. They propose several countermeasures, including restricting cryptographic operations to specific CPU cores, employing blinding techniques to obscure sensitive data, and advocating for hardware support to disable or constrain DMP behavior.
Why it Matters:
The authors argue that DMPs pose a greater security risk than previously thought because they can leak information about a program’s data, even if that data is never directly used as a memory address.
“I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products
This research paper investigates how practitioners who develop consumer AI products approach user privacy. The authors interviewed 35 industry AI practitioners to understand how they define, approach, and conduct privacy work.
They found that while practitioners view privacy as crucial for ethical AI, their understanding of privacy risks was often limited to pre-defined intrusions, with few aware of the nuanced privacy challenges posed by AI technologies. The study also revealed that privacy work was often driven by compliance requirements rather than a proactive, user-centered approach, with practitioners facing limitations due to a lack of AI-specific tools and resources.
Why it Matters:
This paper shows an urgent need for better tools, resources, and support systems to empower AI practitioners to develop more privacy-conscious AI products.
Pixel Thief: Exploiting SVG Filter Leakage in Firefox and Chrome
This paper presents a novel pixel-stealing attack, dubbed “Pixel Thief,” that exploits vulnerabilities in the way Firefox and Chrome render SVG filters.
The authors demonstrate how attackers can leverage cache-based side-channel attacks to extract sensitive information, such as text content and browsing history, from web pages. Unlike previous timing-based attacks, this technique allows for the extraction of multiple bits of information per screen refresh, achieving significantly higher data rates.
Why it Matters:
The paper details two specific applications of their attack: recovering text from embedded pages and performing high-speed history sniffing. The authors conclude by highlighting the effectiveness of their attack and urging browser vendors to implement robust countermeasures against cache-based side-channel attacks to enhance user privacy.
SledgeHammer: Amplifying Rowhammer via Bank-level Parallelism
A novel Rowhammer attack technique called Sledgehammer will be presented at USENIX Security 2024. The attack amplifies the vulnerability’s effectiveness by exploiting bank-level parallelism in DDR memory.
Why it Matters:
The authors demonstrate that Sledgehammer can significantly increase the number of bit flips achievable in a given time period, even on newer architectures like Intel’s 12th generation Alder Lake CPUs, where previous Rowhammer techniques were ineffective.
WEBRR: A Forensic System for Replaying and Investigating Web-Based Attacks in The Modern Web
In this paper, researchers introduce WEBRR, a novel forensic system designed to record and replay web-based attacks in Chromium-based web browsers.
The authors argue that existing forensic analysis systems rely too heavily on system-level auditing, making it difficult to reconstruct the nuanced steps involved in web-based attacks. WEBRR addresses this challenge by introducing a novel design that leverages JavaScript Execution Unit Partitioning to capture and deterministically replay the sequence of events that occur during a browsing session.
This approach overcomes the limitations of previous systems, which are either record-only or struggle to accurately replay complex web applications. Through extensive evaluation, the authors demonstrate that WEBRR is capable of successfully replaying a variety of sophisticated web-based attacks, including those that utilize Service Workers.
Why It Matters:
There is currently a gap between system-level forensic analysis and the ability to understand attacks that occur in web browsers. WEBRR bridges this gap by recording and replaying web-based attacks for digital forensic investigators.
Digital Discrimination of Users in Sanctioned States: The Case of the Cuba Embargo
This paper presents one of the first in-depth and systematic end-user centered investigations into the effects of sanctions on geoblocking, specifically in the case of Cuba.
The team of researchers conducted network measurements on the Tranco Top 10K domains and complemented their findings with a small-scale user study using a questionnaire. They identified 546 domains subject to geoblocking across all layers of the network stack, ranging from DNS failures to HTTP(S) response pages with a variety of status codes.
Through this work, the researchers discovered a lack of user-facing transparency: 88% of geoblocked domains do not serve an informative notice explaining why they are blocked. The authors also highlighted a lack of measurement-level transparency, even among HTTP(S) blockpage responses. Notably, they identified 32 instances of blockpage responses served with 200 OK status codes despite not returning the requested content.
Finally, the team notes the inefficacy of current improvement strategies and makes recommendations to both service providers and policymakers to reduce Internet fragmentation.
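One practical difficulty the study points to is that a 200 OK status code can hide a blockpage. A hypothetical heuristic for flagging such cases might look like the sketch below; the keyword list and size threshold are made-up examples, and the authors' measurement pipeline is far more involved.

```python
# Illustrative heuristic for spotting a blockpage served with "200 OK": the
# HTTP status looks fine, but the body is a short notice rather than the real
# content. Hypothetical sketch only; keyword list and threshold are examples.
import requests

BLOCK_HINTS = ("not available in your country", "access denied", "sanction")

def looks_like_blockpage(url: str) -> bool:
    resp = requests.get(url, timeout=10)
    body = resp.text.lower()
    return resp.status_code == 200 and (
        any(hint in body for hint in BLOCK_HINTS) or len(body) < 2048
    )
```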
Information Flow Control in Machine Learning through Modular Model Architecture
In today’s machine learning (ML) models, any part of the training data can affect the model output. This lack of control for information flow from training data to model output is a major obstacle in training models on sensitive data when access control only allows individual users to access a subset of data.
To enable secure machine learning for access-controlled data, researchers propose the notion of information flow control (IFC) for machine learning and develop an extension to the Transformer language model architecture that strictly adheres to the IFC definition they propose.
The team’s architecture controls information flow by limiting the influence of training data from each security domain to a single expert module, and enables only a subset of experts at inference time based on the access control policy. The evaluation, using large text and code datasets, shows that the proposed parametric IFC architecture has minimal (1.9%) performance overhead and can significantly improve model accuracy (by 38% for the text dataset and between 44% and 62% for the code datasets) by enabling training on access-controlled data.
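A toy version of the gating idea, written here as a small PyTorch module, may help picture it: each security domain gets its own expert, and only the experts permitted by the caller's access-control policy contribute to the output. This is an illustrative sketch of the concept, not the paper's Transformer extension; the domain names and layer sizes are arbitrary.

```python
# Conceptual sketch of IFC via modular experts: during training, each domain's
# data would update only its own expert; at inference, only experts allowed by
# the access-control policy are activated. Toy illustration, not the paper's
# architecture; domain names and sizes are arbitrary.
import torch
import torch.nn as nn

class DomainGatedModel(nn.Module):
    def __init__(self, domains, dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.experts = nn.ModuleDict({d: nn.Linear(dim, dim) for d in domains})
        self.shared_head = nn.Linear(dim, num_classes)  # trained on public data only

    def forward(self, x: torch.Tensor, allowed: set) -> torch.Tensor:
        # Only experts named in the caller's policy contribute to the output.
        active = [self.experts[d](x) for d in allowed if d in self.experts]
        h = torch.stack(active).mean(dim=0) if active else x
        return self.shared_head(torch.relu(h))

model = DomainGatedModel(domains=["hr", "finance", "public"])
x = torch.randn(4, 64)
restricted_out = model(x, allowed={"public"})             # no flow from hr/finance experts
full_out = model(x, allowed={"hr", "finance", "public"})  # all experts permitted
```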
RESEARCH
Forensics
AI Psychiatry: Forensic Investigation of Deep Learning Networks in Memory Images
Authors: David Oygenblik, Georgia Institute of Technology; Carter Yagemann, Ohio State University; Joseph Zhang, University of Pennsylvania; Arianna Mastali, Georgia Institute of Technology; Jeman Park, Kyung Hee University; Brendan Saltaformaggio, Georgia Institute of Technology
WEBRR: A Forensic System for Replaying and Investigating Web-Based Attacks in The Modern Web
Authors: Joey Allen, Palo Alto Networks; Zheng Yang, Feng Xiao, and Matthew Landen, Georgia Institute of Technology; Roberto Perdisci, Georgia Institute of Technology and University of Georgia; Wenke Lee, Georgia Institute of Technology
Fuzzing I: Software
Towards Generic Database Management System Fuzzing
Authors: Yupeng Yang and Yongheng Chen, Georgia Institute of Technology; Rui Zhong, Palo Alto Networks; Jizhou Chen and Wenke Lee, Georgia Institute of Technology
Hardware Security II: Architecture and Microarchitecture
GoFetch: Breaking Constant-Time Cryptographic Implementations Using Data Memory-Dependent Prefetchers
Authors: Boru Chen, University of Illinois Urbana-Champaign; Yingchen Wang, University of Texas at Austin; Pradyumna Shome, Georgia Institute of Technology; Christopher Fletcher, University of California, Berkeley; David Kohlbrenner, University of Washington; Riccardo Paccagnella, Carnegie Mellon University; Daniel Genkin, Georgia Institute of Technology
Measurement II: Network
6Sense: Internet-Wide IPv6 Scanning and its Security Applications
Authors: Grant Williams, Mert Erdemir, Amanda Hsu, Shraddha Bhat, Abhishek Bhaskar, Frank Li, and Paul Pearce, Georgia Institute of Technology
Measurement III: Auditing and Best Practices I
Digital Discrimination of Users in Sanctioned States: The Case of the Cuba Embargo
Authors: Anna Ablove, Shreyas Chandrashekaran, Hieu Le, Ram Sundara Raman, and Reethika Ramesh, University of Michigan; Harry Oppenheimer, Georgia Institute of Technology; Roya Ensafi, University of Michigan
Measurement IV: Web
Arcanum: Detecting and Evaluating the Privacy Risks of Browser Extensions on Web Pages and Web Content
Authors: Qinge Xie, Manoj Vignesh Kasi Murali, Paul Pearce, and Frank Li, Georgia Institute of Technology
Mobile Security I
DVa: Extracting Victims and Abuse Vectors from Android Accessibility Malware
Authors: Haichuan Xu, Mingxuan Yao, and Runze Zhang, Georgia Institute of Technology; Mohamed Moustafa Dawoud, German International University; Jeman Park, Kyung Hee University; Brendan Saltaformaggio, Georgia Institute of Technology
Security Analysis V: ML
Information Flow Control in Machine Learning through Modular Model Architecture
Authors: Trishita Tiwari, Cornell University; Suchin Gururangan, University of Washington; Chuan Guo, FAIR at Meta; Weizhe Hua, Google DeepMind; Sanjay Kariyappa, Georgia Institute of Technology; Udit Gupta, Cornell University; Wenjie Xiong, Virginia Tech; Kiwan Maeng, Pennsylvania State University; Hsien-Hsin S. Lee, Intel; G. Edward Suh, NVIDIA/Cornell University
Side Channel II: RowHammer
Go Go Gadget Hammer: Flipping Nested Pointers for Arbitrary Data Leakage
Authors: Youssef Tobah, University of Michigan; Andrew Kwong, UNC Chapel Hill; Ingab Kang, University of Michigan; Daniel Genkin, Georgia Tech; Kang G. Shin, University of Michigan
SledgeHammer: Amplifying Rowhammer via Bank-level Parallelism
Authors: Ingab Kang, University of Michigan; Walter Wang and Jason Kim, Georgia Tech; Stephan van Schaik and Youssef Tobah, University of Michigan; Daniel Genkin, Georgia Tech; Andrew Kwong, UNC Chapel Hill; Yuval Yarom, Ruhr University Bochum
Side Channel IV
Pixel Thief: Exploiting SVG Filter Leakage in Firefox and Chrome
Authors: Sioli O’Connell, The University of Adelaide; Lishay Aben Sour and Ron Magen, Ben-Gurion University of the Negev; Daniel Genkin, Georgia Institute of Technology; Yossi Oren, Ben-Gurion University of the Negev and Intel Corporation; Hovav Shacham, UT Austin; Yuval Yarom, Ruhr University Bochum
User Studies V: Policies and Best Practices II
“I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products
Authors: Hao-Ping (Hank) Lee, Carnegie Mellon University; Lan Gao and Stephanie Yang, Georgia Institute of Technology; Jodi Forlizzi and Sauvik Das, Carnegie Mellon University
User Studies VII: Policies and Best Practices III
I Experienced More than 10 DeFi Scams: On DeFi Users’ Perception of Security Breaches and Countermeasures
Authors: Mingyi Liu, Georgia Institute of Technology; Jun Ho Huh, Samsung Research; HyungSeok Han, Jaehyuk Lee, Jihae Ahn, and Frank Li, Georgia Institute of Technology; Hyoungshick Kim, Sungkyunkwan University; Taesoo Kim, Georgia Institute of Technology
Zero-Knowledge Proof I
Two Shuffles Make a RAM: Improved Constant Overhead Zero Knowledge RAM
Authors: Yibin Yang, Georgia Institute of Technology; David Heath, University of Illinois Urbana-Champaign
See you in Philadelphia!
Development: College of Computing
Project Lead/Data Graphics: Joshua Preston
News: John “JP” Popham
Data Management: Joni Isbell