HACKER DOUBLE SUMMER 2022 GUIDES — Part Eleven: USENIX + SOUPS
Welcome to the DCG 201 guide to Hacker Double Summer! This is part of a series where we are going to cover all the various hacker conventions and shenanigans from the start of July to the end of August, both In Person & Digital! 2022 is a GIGANTIC year for hacker hysteria, with so many events that this will set our record for the most guides we have ever written. As more blog posts are uploaded, you will be able to jump through the guide via these links:
HACKER DOUBLE SUMMER — Part One: Surviving Las Vegas, New York & Virtually Anywhere
HACKER DOUBLE SUMMER — Part Two: Capture The Flags & MLH INIT Hackathon
HACKER DOUBLE SUMMER — Part Three: SummerC0n
HACKER DOUBLE SUMMER — Part Four: ToorCamp
HACKER DOUBLE SUMMER — Part Five: A New HOPE (HACKERS ON PLANET EARTH)
HACKER DOUBLE SUMMER — Part Six: SCaLE 19X
HACKER DOUBLE SUMMER — Part Seven: Back2Vegas by RingZero
HACKER DOUBLE SUMMER — Part Eight: BSides Las Vegas
HACKER DOUBLE SUMMER — Part Nine: Black Hat USA
HACKER DOUBLE SUMMER — Part Ten: The Diana Initiative
HACKER DOUBLE SUMMER — Part Eleven: USENIX + SOUPS
HACKER DOUBLE SUMMER — Part Twelve: DEFCON 30
HACKER DOUBLE SUMMER — Part Thirteen: Wiki World’s Fair
HACKER DOUBLE SUMMER — Part Fourteen: Blue Team Con
HACKER DOUBLE SUMMER — Part Fifteen: SIGS, EVENTS & PARTIES IN LAS VEGAS
USENIX 31ST SECURITY SYMPOSIUM + Eighteenth Symposium on Usable Privacy and Security
Date: Wednesday, August 10th (12:30 PM EDT) — Friday, August 12th (8:00 PM EDT)
Location: Boston Marriott Copley Place (110 Huntington Ave Boston, MA 02116)
USENIX — https://www.usenix.org/conference/usenixsecurity22
SOUPS — https://www.usenix.org/conference/soups2022
Platform(s): Unknown Custom Platform
USENIX — https://www.usenix.org/conference/usenixsecurity22/technical-sessions
SOUPS — https://www.usenix.org/conference/soups2022/technical-sessions
Accessibility: USENIX Security ’22 Technical Sessions cost $950 ($500 for Students) In-Person but are currently SOLD OUT! Virtual Attendance costs $450 ($200 for Students). SOUPS 2022 will cost $600 ($400 for Students) In-Person and $300 ($150 for Students) Virtually. After their formal presentations, talks (including white paper, slides and video) are archived and posted online for FREE.
USENIX — https://www.usenix.org/conference/259192/registration/form
SOUPS — https://www.usenix.org/conference/soups2022/registration-information
Code Of Conduct: https://www.usenix.org/conferences/coc
The USENIX Association is a 501(c)(3) nonprofit organization dedicated to supporting the advanced computing systems communities and furthering the reach of innovative research. It was founded in 1975 under the name “Unix Users Group,” focusing primarily on the study and development of Unix and similar systems. It has since grown into a respected organization among practitioners, developers, and researchers of computer operating systems more generally. Since its founding, it has published a technical journal entitled ;login:. Its stated goals are to:
- Foster technical excellence and innovation
- Support and disseminate research with a practical bias
- Provide a neutral forum for discussion of technical issues
- Encourage computing outreach into the community at large
Join us for the 31st USENIX Security Symposium, which will be held on August 10–12, 2022. USENIX Security brings together researchers, practitioners, system administrators, system programmers, and others to share and explore the latest advances in the security and privacy of computer systems and networks.
A decently priced option for the technically minded, organized by a long-standing organization and sponsored by the EFF, No Starch Press, the FreeBSD Foundation, and others.
The Eighteenth Symposium on Usable Privacy and Security (SOUPS 2022) will take place at the Boston Marriott Copley Place in Boston, MA, USA, and also as a virtual event on August 7–9, 2022. SOUPS brings together an interdisciplinary group of researchers and practitioners in human-computer interaction, security, and privacy.
A long-standing institution, this convention is focused on the Security & Privacy side of hacking viewed through an academic lens. Steeped in Boston’s academic culture, these two back-to-back conventions are for you if you like to read white papers on security research!
OH CRAP IT’S IN BEANTOWN? HOW DO I SURVIVE THESE MASSHOLES!?!
Boston Travel Guide
Boston is not only one of America's oldest cities, it's also one of the most walkable, and we'd even go as far as…
21 Ultimate Things to Do in Boston
Looking for what to do in Boston? Check out our list of top attractions and best places to visit in Boston. From…
Boston Cab Association - Proudly providing transportation services to the city of Boston.
Since 1950, Boston Cab Dispatch has been proudly providing transportation services to the city of Boston. Our fleet of…
The 24 Most Iconic Boston Food Experiences
Updated on 5/9/2022 at 5:58 PM When it comes to cuisine, all of New England's many states have something special to…
The 25 Best Bars in Boston
We raise a glass to the local wine bars, drag clubs, cocktail joints, and more that are making drinking exciting again…
Where to Eat Vegetarian and Vegan Food in Greater Boston
It's not too difficult to dine well as a vegetarian or vegan in Greater Boston. There's a growing number of great…
Happy Cow Vegan: https://www.happycow.net/best-vegan-restaurants/boston-massachusetts
Six of Boston's Best Kosher Restaurants
A couple weeks ago, we talked bagels: chewy, crumby, stretchy. You had definite opinions! This week, let's evaluate…
14 Excellent Halal-Friendly Restaurants in Greater Boston
The Greater Boston area is home to a bounty of restaurants that cater to halal dietary restrictions while covering a…
Are you a sex worker impacted by COVID-19? Email email@example.com or call 617-431-6174 to see if you are…
USENIX Conference Policies
We encourage you to learn more about USENIX’s values and how they put them into practice at their conferences.
Refunds and Cancellations
They are unable to offer refunds, cancellations, or substitutions for any registrations for this event. Please contact the Conference Department at firstname.lastname@example.org with any questions.
For general information, call USENIX at +1 510.528.8649 or send direct queries via email:
Sponsorship: email@example.com or x17
Student Grants: firstname.lastname@example.org
Proceedings Papers: email@example.com or x32
USENIX Papers and Proceedings
The full Proceedings published by USENIX for the symposium are available for download below. Individual papers can also be downloaded from their respective presentation pages. Copyright to the individual works is retained by the author[s].
Proceedings Front Matter
Proceedings Cover | Title Page and List of Organizers | Message from the Program Co-Chairs | Table of Contents
Full Proceedings PDFs
USENIX Security ’22 Full Proceedings (PDF, 346 MB)
USENIX Security ’22 Proceedings Interior (PDF, 344.1 MB, best for mobile devices)
SOUPS 2022 Technical Sessions
All sessions will be held in Salon E unless otherwise noted.
All the times listed below are in Eastern Daylight Time (EDT).
The technical sessions will be presented both in person and live streamed. Other program components that are available for virtual attendees are marked as “Virtual” below, and in-person attendees are welcome to join in.
Proceedings and Papers
The symposium papers and full proceedings are available to registered attendees now and will be available to everyone beginning Monday, August 8, 2022. Paper abstracts and proceedings front matter are available to everyone now. Copyright to the individual works is retained by the author[s].
Proceedings Front Matter
Proceedings Cover | Title Page, Copyright Page, and List of Organizers | Message from the Program Co-Chairs | Table of Contents
Full Proceedings PDF Files
SOUPS 2022 Full Proceedings (PDF, 42 MB)
SOUPS 2022 Full Proceedings Interior (PDF, 41 MB, best for mobile devices)
SOUPS 2022 Mentoring Program
SOUPS is proud to offer a mentoring program as part of the conference. There will be several opportunities to participate in mentoring:
Monday, August 8 (Lunch Tables)
A collection of in-person and virtual tables with 4–5 students/faculty/professionals will be held on the first day of the main conference and be organized by interest and career goals. We invite interested mentees and mentors to register to participate in either the in-person or virtual mentoring lunch tables.
The registration deadline is August 3.
- 12:30–13:15 and 17:00–17:45 (Virtual)
Virtual lunch tables will be held during the Monday luncheon and after the last paper session.
- 12:30–13:45 (In-Person)
In-person lunch tables will be held during the Monday luncheon.
Tuesday, August 9 (Speed Mentoring)
One-on-one speed-mentoring sessions will be held on the second day of the main conference and are designed to encourage broader interaction between mentors and mentees. We invite interested mentees and mentors to register to participate in the speed mentoring.
The registration deadline is August 8 at 12:00 EDT.
- 8:00–8:45 (Virtual)
Virtual speed mentoring will be held prior to the start of the paper presentation on Tuesday.
- 12:30–13:45 (In-Person)
In-person speed mentoring will be held during the Tuesday luncheon.
Please contact firstname.lastname@example.org for more details and questions.
DCG 201 TALK HIGHLIGHTS FOR USENIX 31 & SOUPS 2022
This is the section where we have combed through the entire list of talks at both conventions and picked out our highlights. Note that this does not invalidate any talks we didn’t list; in fact, we highly recommend you take a look at the full USENIX & SOUPS convention schedules beforehand and make up your own talk highlight lists. These are just the talks that for us had something stand out, either by being informative, unique or bizarre. (Sometimes, all three!)
Monday, August 8
9:00 am–9:15 am
Opening Remarks and Awards
General Chairs: Sonia Chiasson, Carleton University, and Apu Kapadia, Indiana University Bloomington
9:15 am–10:30 am
Understanding Non-Experts’ Security- and Privacy-Related Questions on a Q&A Site
Ayako A. Hasegawa, NICT; Naomi Yamashita, NTT / Kyoto University; Tatsuya Mori, Waseda University / NICT / RIKEN AIP; Daisuke Inoue, NICT; Mitsuaki Akiyama, NTT
Non-expert users are often forced to make decisions about security and privacy in their daily lives. Prior research has shown that non-expert users ask strangers for advice about digital media use online. In this study, to clarify the security and privacy concerns of non-expert users in their daily lives, we investigated security- and privacy-related question posts on a Question-and-Answer (Q&A) site for non-expert users. We conducted a thematic analysis of 445 question posts. We identified seven themes among the questions and found that users asked about cyberattacks the most, followed by authentication and security software. We also found that there was a strong demand for answers, especially for questions related to privacy abuse and account/device management. Our findings provide key insights into what non-experts are struggling with when it comes to privacy and security and will help service providers and researchers make improvements to address these concerns.
11:00 am–12:30 pm
Comparing User Perceptions of Anti-Stalkerware Apps with the Technical Reality
Matthias Fassl and Simon Anell, CISPA Helmholtz Center for Information Security; Sabine Houy, Umeå University; Martina Lindorfer, TU Wien; Katharina Krombholz, CISPA Helmholtz Center for Information Security
Every year an increasing number of users face stalkerware on their phones. Many of them are victims of intimate partner surveillance (IPS) who are unsure how to identify or remove stalkerware from their phones. An intuitive approach would be to choose anti-stalkerware from the app store. However, a mismatch between user expectations and the technical capabilities can produce an illusion of security and risk compensation behavior (i.e., the Peltzman effect).
We compare users’ perceptions of anti-stalkerware with the technical reality. First, we applied thematic analysis to app reviews to analyze user perceptions. Then, we performed a cognitive walkthrough of two prominent anti-stalkerware apps available on the Google PlayStore and reverse-engineered them to understand their detection features.
Our results suggest that users base their trust on the look and feel of the app, the number and type of alerts, and the apps’ affordances. We also found that app capabilities do not correspond to the users’ perceptions and expectations, impacting their practical effectiveness. We discuss different stakeholders’ options to remedy these challenges and better align user perceptions with the technical reality.
1:45 pm–2:45 pm
Keynote Address: Understanding and Reducing Online Misinformation Across 16 Countries on Six Continents
The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across 6 continents (N = 33,480) to investigate predictors of susceptibility to misinformation and interventions to combat misinformation. In every country, participants with a more analytic cognitive style and stronger accuracy-related motivations were better at discerning truth from falsehood; valuing democracy was also associated with greater truth discernment whereas political conservatism was negatively associated with truth discernment in most countries. Subtly prompting people to think about accuracy was broadly effective at improving the veracity of news that people were willing to share, as were minimal digital literacy tips. Finally, crowdsourced accuracy evaluation was able to differentiate true from false headlines with high accuracy in all countries. The consistent patterns we observe suggest that the psychological factors underlying the misinformation challenge are similar across the globe, and that similar solutions may be broadly effective.
Pre-print PDF: https://psyarxiv.com/a9frz/
Summary tweet thread: https://twitter.com/DG_Rand/status/1493946312353619981?s=20
David Rand is the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at MIT. Bridging the fields of cognitive science, behavioral economics, and social psychology, David’s research combines behavioral experiments and online/field studies with mathematical/computational models to understand human decision-making. His work focuses on illuminating why people believe and share misinformation and “fake news”; understanding political psychology and polarization; and promoting human cooperation. He has published over 170 articles in peer-reviewed journals such as Nature, Science, PNAS, the American Economic Review, Psychological Science, Management Science, New England Journal of Medicine, and the American Journal of Political Science, and his work has received widespread media attention. David regularly advises technology companies such as Google, Facebook, and Twitter in their efforts to combat misinformation, and has provided testimony about misinformation to the US and UK governments. He has also written for popular press outlets including the New York Times, Wired, and New Scientist. He was named to Wired magazine’s Smart List 2012 of “50 people who will change the world,” chosen as a 2012 Pop!Tech Science Fellow, awarded the 2015 Arthur Greer Memorial Prize for Outstanding Scholarly Research, chosen as fact-checking researcher of the year in 2017 by the Poynter Institute’s International Fact-Checking Network, awarded the 2020 FABBS Early Career Impact Award from the Society for Judgment and Decision Making, and selected as a 2021 Best 40-Under-40 Business School Professor by Poets & Quants. Papers he has coauthored have been awarded Best Paper of the Year in Experimental Economics, Social Cognition, and Political Methodology.
2:45 pm–3:15 pm
Session Chair: Marvin Ramokapane, University of Bristol
- Informed Consent: Are your participants aware of what they share
Noreen Whysel, Internet Safety Net
- E-Commerce Payment Security Evaluation and Literature Review
Urvashi Kishnani, University of Denver
- Moving Usable Security and Privacy Research Out of the Lab: Adding Virtual Reality to the Research Arsenal
Florian Mathis, University of Glasgow/University of Edinburgh/Bundeswehr University Munich
Detecting iPhone Security Compromise in Simulated Stalking Scenarios: Strategies and Obstacles
Andrea Gallardo, Hanseul Kim, Tianying Li, Lujo Bauer, and Lorrie Cranor, Carnegie Mellon University
Mobile phones can be abused for stalking, through methods such as location tracking, account compromise, and remote surveillance. We conducted eighteen remote semi-structured interviews in which we presented four hypothetical iPhone compromise scenarios that simulated technology-enabled abuse. We asked participants to provide advice for detecting and resolving each type of compromise. Using qualitative coding, we analyzed the interview data and identified the strategies of non-expert participants and the difficulties they faced in each scenario. We found that participants could readily delete an app and search in iOS settings or the home screen, but they were generally unable to identify or turn off location sharing in Google Maps or determine whether the iCloud account was improperly accessed. When following online advice for jailbreak detection, participants had difficulty finding a root checker app and resetting the phone. We identify underlying factors contributing to these difficulties and recommend improvements to iOS, Google Maps, and online advice to reduce the difficulties we identified.
Tuesday, August 9
If You Can’t Get Them to the Lab: Evaluating a Virtual Study Environment with Security Information Workers
Nicolas Huaman, Alexander Krause, and Dominik Wermke, CISPA Helmholtz Center for Information Security; Jan H. Klemmer and Christian Stransky, Leibniz University Hannover; Yasemin Acar, George Washington University; Sascha Fahl, CISPA Helmholtz Center for Information Security
Usable security and privacy researchers use many study methodologies, including interviews, surveys, and laboratory studies. Of those, lab studies allow for particularly flexible setups, including programming experiments or usability evaluations of software. However, lab studies also come with challenges: Often, it is particularly challenging to recruit enough skilled participants for in-person studies. Especially researchers studying security information workers reported on similar recruitment challenges in the past. Additionally, situations like the COVID-19 pandemic can make in-person lab studies even more challenging. Finally, institutions with limited resources may not be able to conduct lab studies. Therefore, we present and evaluate a novel virtual study environment prototype, called OLab, that allows researchers to conduct lab-like studies remotely using a commodity browser. Our environment overcomes lab-like study challenges and supports flexible setups and comprehensive data collection. In an iterative engineering process, we design and implement a prototype based on requirements we identified in previous work and conduct a comprehensive evaluation including a cognitive walkthrough with usable security experts, a guided and supervised online study with DevOps, and an unguided and unsupervised online study with computer science students. We can confirm that our prototype supports a wide variety of lab-like study setups and received positive feedback from all study participants.
10:00 am–10:30 am
Session Chair: Marvin Ramokapane, University of Bristol
- IoT Inspector: a platform for real-world smart home research
Danny Yuxing Huang, New York University
- Skilled? Gullible? Likely to Install Software Updates and verify HTTPS?
Miranda Wei, University of Washington
11:00 am–12:30 pm
Aunties, Strangers, and the FBI: Online Privacy Concerns and Experiences of Muslim-American Women
Tanisha Afnan and Yixin Zou, University of Michigan School of Information; Maryam Mustafa, Lahore University of Management Sciences; Mustafa Naseem and Florian Schaub, University of Michigan School of Information
Women who identify with Islam in the United States come from many different race, class, and cultural communities. They are also more likely to be first or second-generation immigrants. This combination of different marginal identities (religious affiliation, gender, immigration status, and race) exposes Muslim-American women to unique online privacy risks and consequences. We conducted 21 semi-structured interviews to understand how Muslim-American women perceive digital privacy risks related to three contexts: government surveillance, Islamophobia, and social surveillance. We find that privacy concerns held by Muslim-American women unfolded with respect to three dimensions of identity: as a result of their identity as Muslim-Americans broadly (e.g., Islamophobic online harassment), as Muslim-American women more specifically (e.g., reputational harms within one’s cultural community for posting taboo content), and as a product of their own individual practices of Islam (e.g., constructing female-only spaces to share photos of oneself without a hijab). We discuss how these intersectional privacy concerns add to and expand on existing pro-privacy design principles, and lessons learned from our participants’ privacy-protective strategies for improving the digital experiences of this community.
Investigating How University Students in the United States Encounter and Deal With Misinformation in Private WhatsApp Chats During COVID-19
K. J. Kevin Feng, Princeton University; Kevin Song, Kejing Li, Oishee Chakrabarti, and Marshini Chetty, University of Chicago
Misinformation can spread easily in end-to-end encrypted messaging platforms such as WhatsApp where many groups of people are communicating with each other. Approaches to combat misinformation may also differ amongst younger and older adults. In this paper, we investigate how young adults encountered and dealt with misinformation on WhatsApp in private group chats during the first year of the COVID-19 pandemic. To do so, we conducted a qualitative interview study with 16 WhatsApp users who were university students based in the United States. We uncovered three main findings. First, all participants encountered misinformation multiple times a week in group chats, often attributing the source of misinformation to be well-intentioned family members. Second, although participants were able to identify misinformation and fact-check using diverse methods, they often remained passive to avoid negatively impacting family relations. Third, participants agreed that WhatsApp bears a responsibility to curb misinformation on the platform but expressed concerns about its ability to do so given the platform’s steadfast commitment to content privacy. Our findings suggest that conventional content moderation techniques used by open platforms such as Twitter and Facebook are unfit to tackle misinformation on WhatsApp. We offer alternative design suggestions that take into consideration the social nuances and privacy commitments of end-to-end encrypted group chats. Our paper also contributes to discussions between platform designers, researchers, and end users on misinformation in privacy-preserving environments more broadly.
Anti-Privacy and Anti-Security Advice on TikTok: Case Studies of Technology-Enabled Surveillance and Control in Intimate Partner and Parent-Child Relationships
Miranda Wei, Eric Zeng, Tadayoshi Kohno, and Franziska Roesner, Paul G. Allen School of Computer Science & Engineering, University of Washington
Modern technologies including smartphones, AirTags, and tracking apps enable surveillance and control in interpersonal relationships. In this work, we study videos posted on TikTok that give advice for how to surveil or control others through technology, focusing on two interpersonal contexts: intimate partner relationships and parent-child relationships. We collected 98 videos across both contexts and investigate (a) what types of surveillance or control techniques the videos describe, (b) what assets are being targeted, (c) the reasons that TikTok creators give for using these techniques, and (d) defensive techniques discussed. Additionally, we make observations about how social factors — including social acceptability, gender, and TikTok culture — are critical context for the existence of this anti-privacy and anti-security advice. We discuss the use of TikTok as a rich source of qualitative data for future studies and make recommendations for technology designers around interpersonal surveillance and control.
1:45 pm–3:15 pm
Password policies of most top websites fail to follow best practices
Kevin Lee, Sten Sjöberg, and Arvind Narayanan, Department of Computer Science and Center for Information Technology Policy, Princeton University
We examined the policies of 120 of the most popular websites for when a user creates a new password for their account. Despite well-established advice that has emerged from the research community, we found that only 13% of websites followed all relevant best practices in their password policies. Specifically, 75% of websites do not stop users from choosing the most common passwords — like “abc123456” and “P@$$w0rd”, while 45% burden users by requiring specific character classes in their passwords for minimal security benefit. We found low adoption of password strength meters — a widely touted intervention to encourage stronger passwords, appearing on only 19% of websites. Even among those sites, we found nearly half misusing them to steer users to include certain character classes, and not for their intended purpose of encouraging freely-constructed strong passwords.
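To make the best practices from the abstract above concrete, here is a minimal sketch (not the paper’s methodology) of a password policy that follows them: block the most common passwords and enforce a length minimum instead of character-class rules. The tiny blocklist is an illustrative stand-in for a real breached-password corpus.

```python
# Hypothetical policy check illustrating the study's best practices:
# blocklist common passwords, require length, skip character-class rules.
COMMON_PASSWORDS = {"abc123456", "p@$$w0rd", "password", "123456", "qwerty"}

def password_allowed(password: str, min_length: int = 8) -> tuple[bool, str]:
    """Return (allowed, reason). No character-class requirements on purpose."""
    if len(password) < min_length:
        return False, f"too short (minimum {min_length} characters)"
    # Compare case-insensitively so trivial variants of common passwords fail too.
    if password.lower() in COMMON_PASSWORDS:
        return False, "matches a commonly used password"
    return True, "ok"

print(password_allowed("abc123456"))                   # rejected: common password
print(password_allowed("short1"))                      # rejected: too short
print(password_allowed("correct horse battery staple"))  # allowed
```

Note the deliberate absence of “must contain a digit and a symbol” rules, which the paper found 45% of sites still impose for minimal security benefit.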
Let The Right One In: Attestation as a Usable CAPTCHA Alternative
Tara Whalen, Thibault Meunier, and Mrudula Kodali, Cloudflare Inc.; Alex Davidson, Brave; Marwan Fayed and Armando Faz-Hernández, Cloudflare Inc.; Watson Ladd, Sealance Corp.; Deepak Maram, Cornell Tech; Nick Sullivan, Benedikt Christoph Wolters, Maxime Guerreiro, and Andrew Galloni, Cloudflare Inc.
CAPTCHAs are necessary to protect websites from bots and malicious crawlers, yet are increasingly solvable by automated systems. This has led to more challenging tests that require greater human effort and cultural knowledge; they may prevent bots effectively but sacrifice usability and discourage the human users they are meant to admit. We propose a new class of challenge: a Cryptographic Attestation of Personhood (CAP) as the foundation of a usable, pro-privacy alternative. Our challenge is constructed using the open Web Authentication API (WebAuthn) that is supported in most browsers. We evaluated the CAP challenge through a public demo, with an accompanying user survey. Our evaluation indicates that CAP has a strong likelihood of adoption by users who possess the necessary hardware, showing good results for effectiveness and efficiency as well as a strong expressed preference for using CAP over traditional CAPTCHA solutions. In addition to demonstrating a mechanism for more usable challenge tests, we identify some areas for improvement for the WebAuthn user experience, and reflect on the difficult usable privacy problems in this domain and how they might be mitigated.
Being Hacked: Understanding Victims’ Experiences of IoT Hacking
Asreen Rostami, RISE Research Institutes of Sweden & Stockholm University; Minna Vigren, Stockholm University; Shahid Raza, RISE Research Institutes of Sweden; Barry Brown, Stockholm University & Department of Computer Science, University of Copenhagen
From light bulbs to smart locks, IoT is increasingly embedded into our homes and lives. This opens up new vulnerabilities as IoT devices can be hacked and manipulated to cause harm or discomfort. In this paper we document users’ experiences of having their IoT systems hacked through 210 self-reports from Reddit, device support forums, and Amazon review pages. These reports and the discussion around them show how uncertainty is at the heart of ‘being hacked’. Hacks are sometimes difficult to detect, and users can mistake unusual IoT behaviour as evidence of a hack, yet this can still cause considerable emotional hurt and harm. In discussion, we shift from seeing hacks as technical system failings to be repaired, to seeing them as sites for care and user support. Such a shift in perspective opens a new front in designing for hacking — not just prevention but alleviating harm.
5:00 pm–5:15 pm
General Chairs: Sonia Chiasson, Carleton University, and Apu Kapadia, Indiana University Bloomington
Wednesday, August 10
8:45 am–9:15 am
Opening Remarks and Awards
Kevin Butler, University of Florida, and Kurt Thomas, Google
9:30 am–10:30 am
Exploring the Unchartered Space of Container Registry Typosquatting
Guannan Liu, Virginia Tech; Xing Gao, University of Delaware; Haining Wang, Virginia Tech; Kun Sun, George Mason University
With the increasing popularity of containerized applications, container registries have hosted millions of repositories that allow developers to store, manage, and share their software. Unfortunately, they have also become a hotbed for adversaries to spread malicious images to the public. In this paper, we present the first in-depth study on the vulnerability of container registries to typosquatting attacks, in which adversaries intentionally upload malicious images with an identification similar to that of a benign image so that users may accidentally download malicious images due to typos. We demonstrate that such typosquatting attacks could pose a serious security threat in both public and private registries as well as across multiple platforms. To shed light on the container registry typosquatting threat, we first conduct a measurement study and a 210-day proof-of-concept exploitation on public container registries, revealing that human users indeed make random typos and download unwanted container images. We also systematically investigate attack vectors on private registries and reveal that its naming space is open and could be easily exploited for launching a typosquatting attack. In addition, for a typosquatting attack across multiple platforms, we demonstrate that adversaries can easily self-host malicious registries or exploit existing container registries to manipulate repositories with similar identifications. Finally, we propose CRYSTAL, a lightweight extension to existing image management, which effectively defends against typosquatting attacks from both container users and registries.
Mistrust Plugins You Must: A Large-Scale Study Of Malicious Plugins In WordPress Marketplaces
Ranjita Pai Kasturi, Jonathan Fuller, Yiting Sun, Omar Chabklo, Andres Rodriguez, Jeman Park, and Brendan Saltaformaggio, Georgia Institute of Technology
Modern websites owe most of their aesthetics and functionalities to Content Management Systems (CMS) plugins, which are bought and sold on widely popular marketplaces. Driven by economic incentives, attackers abuse the trust in this economy: selling malware on legitimate marketplaces, pirating popular plugins, and infecting plugins post-deployment. This research studied the evolution of CMS plugins in over 400K production webservers dating back to 2012. We developed YODA, an automated framework to detect malicious plugins and track down their origin. YODA uncovered 47,337 malicious plugins on 24,931 unique websites. Among these, $41.5K had been spent on 3,685 malicious plugins sold on legitimate plugin marketplaces. Pirated plugins cheated developers out of $228K in revenues. Post-deployment attacks infected $834K worth of previously benign plugins with malware. Lastly, YODA informs our remediation efforts, as over 94% of these malicious plugins are still active today.
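As a flavor of what post-deployment plugin auditing looks like, here is a minimal sketch (unrelated to the paper’s YODA framework) that greps a plugin directory for constructs commonly abused by malicious PHP plugins. The indicator patterns are illustrative; real detection needs far more than string matching.

```python
# Hypothetical indicator scan over a CMS plugin directory tree.
import re
from pathlib import Path

SUSPICIOUS = [
    re.compile(rb"eval\s*\(\s*base64_decode"),   # packed payloads
    re.compile(rb"eval\s*\(\s*gzinflate"),       # compressed payloads
    re.compile(rb"\$_(GET|POST|REQUEST)\s*\[[^\]]+\]\s*\("),  # request-driven variable functions
]

def scan_plugin_dir(root: str) -> list[tuple[str, str]]:
    """Return (file, matched pattern) pairs for every suspicious PHP file."""
    hits = []
    for php_file in Path(root).rglob("*.php"):
        data = php_file.read_bytes()
        for pattern in SUSPICIOUS:
            if pattern.search(data):
                hits.append((str(php_file), pattern.pattern.decode()))
    return hits
```

String indicators like these are trivially evaded by obfuscation, which is exactly why the paper builds a dedicated framework rather than relying on pattern matching.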
Trust Dies in Darkness: Shedding Light on Samsung’s TrustZone Keymaster Design
Alon Shakevsky, Eyal Ronen, and Avishai Wool, Tel-Aviv University
ALSO AT BLACK HAT USA 2022
ARM-based Android smartphones rely on the TrustZone hardware support for a Trusted Execution Environment (TEE) to implement security-sensitive functions. The TEE runs a separate, isolated TrustZone Operating System (TZOS) in parallel to Android. The implementation of the cryptographic functions within the TZOS is left to the device vendors, who create proprietary undocumented designs.
In this work, we expose the cryptographic design and implementation of Android’s Hardware-Backed Keystore in Samsung’s Galaxy S8, S9, S10, S20, and S21 flagship devices. We reverse-engineered the implementation, provide a detailed description of the cryptographic design and code structure, and unveil severe design flaws. We present an IV reuse attack on AES-GCM that allows an attacker to extract hardware-protected key material, and a downgrade attack that makes even the latest Samsung devices vulnerable to the IV reuse attack. We demonstrate working key extraction attacks on the latest devices. We also show the implications of our attacks on two higher-level cryptographic protocols between the TrustZone and a remote server: we demonstrate a working FIDO2 WebAuthn login bypass and a compromise of Google’s Secure Key Import.
We discuss multiple flaws in the design flow of TrustZone based protocols. Although our specific attacks only apply to the ≈100 million devices made by Samsung, they raise the much more general need for open and proven standards for critical cryptographic and security designs.
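Why is IV reuse in AES-GCM so devastating? GCM encrypts with a CTR-mode keystream derived from the key and IV, so reusing an IV reuses the keystream, and XORing two ciphertexts cancels it entirely. The toy stream cipher below uses SHA-256 as a stand-in for AES-CTR (so this is the arithmetic of the flaw, not Samsung's code), but the cancellation works identically:

```python
import hashlib

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Toy CTR-style keystream (SHA-256 stand-in for AES-CTR)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, iv, plaintext):
    ks = keystream(key, iv, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, iv = b"device-key", b"reused-iv"    # same IV used twice -- the flaw
p1 = b"wrapped key material"             # secret the attacker wants
p2 = b"attacker-known bytes"             # plaintext the attacker controls
c1, c2 = encrypt(key, iv, p1), encrypt(key, iv, p2)

# The keystreams cancel: c1 XOR c2 == p1 XOR p2, so knowing p2
# recovers p1 without ever touching the key.
```

In the actual attack the "known plaintext" role is played by key material the attacker can coax the Keymaster into wrapping under the same IV.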
11:00 am–12:00 pm
“Like Lesbians Walking the Perimeter”: Experiences of U.S. LGBTQ+ Folks With Online Security, Safety, and Privacy Advice
Christine Geeng and Mike Harris, University of Washington; Elissa Redmiles, Max Planck Institute for Software Systems; Franziska Roesner, University of Washington
Given stigma and threats surrounding being gay or transgender, LGBTQ+ folks often seek support and information on navigating identity and personal (digital and physical) safety. While prior research on digital security advice focused on a general population and general advice, our work focuses on queer security, safety, and privacy advice-seeking to determine population-specific needs and takeaways for broader advice research. We conducted qualitative semi-structured interviews with 14 queer participants diverse across race, age, gender, sexuality, and socioeconomic status. We find that participants turn to their trusted queer support groups for advice, since they often experienced similar threats. We also document reasons that participants sometimes reject advice, including that it would interfere with their material livelihood and their potential to connect with others. Given our results, we recommend that queer-specific and general security and safety advice focus on specificity — why and how — over consistency, because advice cannot be one-size-fits-all. We also discuss the value of intersectionality as a framework for understanding vulnerability to harms in security research, since our participants’ overlapping identities affected their threat models and advice perception.
OpenVPN is Open to VPN Fingerprinting
Diwen Xue, Reethika Ramesh, and Arham Jain, University of Michigan; Michalis Kallitsis, Merit Network, Inc.; J. Alex Halderman, University of Michigan; Jedidiah R. Crandall, Arizona State University/Breakpointing Bad; Roya Ensafi, University of Michigan
VPN adoption has seen steady growth over the past decade due to increased public awareness of privacy and surveillance threats. In response, certain governments are attempting to restrict VPN access by identifying connections using “dual use” DPI technology. To investigate the potential for VPN blocking, we develop mechanisms for accurately fingerprinting connections using OpenVPN, the most popular protocol for commercial VPN services. We identify three fingerprints based on protocol features such as byte pattern, packet size, and server response. Playing the role of an attacker who controls the network, we design a two-phase framework that performs passive fingerprinting and active probing in sequence. We evaluate our framework in partnership with a million-user ISP and find that we identify over 85% of OpenVPN flows with only negligible false positives, suggesting that OpenVPN-based services can be effectively blocked with little collateral damage. Although some commercial VPNs implement countermeasures to avoid detection, our framework successfully identified connections to 34 out of 41 “obfuscated” VPN configurations. We discuss the implications of the VPN fingerprintability for different threat models and propose short-term defenses. In the longer term, we urge commercial VPN providers to be more transparent about their obfuscation approaches and to adopt more principled detection countermeasures, such as those developed in censorship circumvention research.
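One of the byte-pattern features the paper leverages comes from OpenVPN's packet framing: in a UDP packet, the high 5 bits of the first byte carry an opcode and the low 3 bits a key ID, and a fresh handshake starts with a client hard-reset opcode. The sketch below is a simplified illustration of that single fingerprint (the paper combines it with packet-size features and active probing, and real classifiers must contend with obfuscation):

```python
# Opcode from the OpenVPN protocol: high 5 bits of the first byte of a
# UDP payload; the low 3 bits are the key ID.
HARD_RESET_CLIENT_V2 = 7   # opcode opening a typical handshake

def looks_like_openvpn_client_hello(udp_payload: bytes) -> bool:
    """Passive check in the spirit of the paper's byte-pattern
    fingerprint: does the first byte carry a client hard-reset
    opcode with key ID 0?"""
    if not udp_payload:
        return False
    opcode, key_id = udp_payload[0] >> 3, udp_payload[0] & 0x07
    return opcode == HARD_RESET_CLIENT_V2 and key_id == 0
```

A TLS ClientHello, by contrast, starts with the handshake record byte 0x16 and fails this check, which is exactly the kind of protocol asymmetry a censor's DPI box can exploit.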
1:30 pm–2:30 pm
An Audit of Facebook’s Political Ad Policy Enforcement
Victor Le Pochat, imec-DistriNet, KU Leuven; Laura Edelson, New York University; Tom Van Goethem and Wouter Joosen, imec-DistriNet, KU Leuven; Damon McCoy and Tobias Lauinger, New York University
Major technology companies strive to protect the integrity of political advertising on their platforms by implementing and enforcing self-regulatory policies that impose transparency requirements on political ads. In this paper, we quantify whether Facebook’s current enforcement correctly identifies political ads and ensures compliance by advertisers. In a comprehensive, large-scale analysis of 4.2 million political and 29.6 million non-political ads from 215,030 advertisers, we identify ads correctly detected as political (true positives), ads incorrectly detected (false positives), and ads missed by detection (false negatives). Facebook’s current enforcement appears imprecise: 61% more ads are missed than are detected worldwide, and 55% of U.S. detected ads are in fact non-political. Detection performance is uneven across countries, with some having up to 53 times higher false negative rates among clearly political pages than in the U.S. Moreover, enforcement appears inadequate for preventing systematic violations of political advertising policies: for example, advertisers were able to continue running political ads without disclosing them while they were temporarily prohibited in the U.S. We attribute these flaws to five gaps in Facebook’s current enforcement and transparency implementation, and close with recommendations to improve the security of the online political ad ecosystem.
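The audit's headline numbers map onto standard detection metrics. The sketch below shows how, with counts chosen only to mirror the reported percentages (they are not the paper's raw figures):

```python
def enforcement_stats(tp, fp, fn):
    """Audit metrics for political-ad detection:
    tp -- political ads correctly detected
    fp -- non-political ads wrongly flagged as political
    fn -- political ads the detector missed
    """
    precision = tp / (tp + fp)        # share of detected ads truly political
    missed_over_detected = fn / tp    # >1 means more ads missed than caught
    return precision, missed_over_detected

# Illustrative counts mirroring the headline findings (~55% of detected
# U.S. ads non-political; ~61% more ads missed than detected).
precision, missed = enforcement_stats(tp=100, fp=122, fn=161)
```

Framing enforcement this way makes the two failure modes explicit: false positives burden legitimate advertisers, while false negatives let undisclosed political ads run unchecked.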
Hertzbleed: Turning Power Side-Channel Attacks Into Remote Timing Attacks on x86
Yingchen Wang, University of Texas at Austin; Riccardo Paccagnella and Elizabeth Tang He, University of Illinois Urbana-Champaign; Hovav Shacham, University of Texas at Austin; Christopher W. Fletcher, University of Illinois Urbana-Champaign; David Kohlbrenner, University of Washington
Power side-channel attacks exploit data-dependent variations in a CPU’s power consumption to leak secrets. In this paper, we show that on modern Intel (and AMD) x86 CPUs, power side-channel attacks can be turned into timing attacks that can be mounted without access to any power measurement interface. Our discovery is enabled by dynamic voltage and frequency scaling (DVFS). We find that, under certain circumstances, DVFS-induced variations in CPU frequency depend on the current power consumption (and hence, data) at the granularity of milliseconds. Making matters worse, these variations can be observed by a remote attacker, since frequency differences translate to wall time differences!
The frequency side channel is theoretically more powerful than the software side channels considered in cryptographic engineering practice today, but it is difficult to exploit because it has a coarse granularity. Yet, we show that this new channel is a real threat to the security of cryptographic software. First, we reverse engineer the dependency between data, power, and frequency on a modern x86 CPU — finding, among other things, that differences as seemingly minute as a set bit’s position in a word can be distinguished through frequency changes. Second, we describe a novel chosen-ciphertext attack against (constant-time implementations of) SIKE, a post-quantum key encapsulation mechanism, that amplifies a single key-bit guess into many thousands of high- or low-power operations, allowing full key extraction via remote timing.
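The leakage chain is easy to model: data with more set bits draws more power, DVFS responds by lowering the clock, and a lower clock stretches wall time for the same cycle count. The toy model below makes that chain concrete; the base frequency and droop numbers are illustrative, not measured:

```python
def hamming_weight(x: int) -> int:
    """Number of set bits -- a classic proxy for dynamic power draw."""
    return bin(x).count("1")

def toy_frequency_hz(data: int, base_hz=4.0e9, droop_hz=1.0e6):
    """Toy DVFS model: higher Hamming weight -> more power -> lower
    frequency. Constants are for illustration only."""
    return base_hz - droop_hz * hamming_weight(data)

def wall_time_s(cycles: int, data: int) -> float:
    return cycles / toy_frequency_hz(data)

# The same computation (identical cycle count) takes longer on
# high-Hamming-weight data -- a power difference turned into a
# remotely observable timing difference.
heavy = wall_time_s(10**9, 0xFFFF_FFFF)   # 32 set bits
light = wall_time_s(10**9, 0x0000_0001)   # 1 set bit
```

The real attack has to average over millions of operations to lift the tiny frequency shifts above noise, which is why the SIKE chosen-ciphertext amplification step matters.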
The Dangers of Human Touch: Fingerprinting Browser Extensions through User Actions
Konstantinos Solomos, Panagiotis Ilia, and Soroush Karami, University of Illinois at Chicago; Nick Nikiforakis, Stony Brook University; Jason Polakis, University of Illinois at Chicago
OpenSSLNTRU: Faster post-quantum TLS key exchange
Daniel J. Bernstein, University of Illinois at Chicago and Ruhr University Bochum; Billy Bob Brumley, Tampere University; Ming-Shing Chen, Ruhr University Bochum; Nicola Tuveri, Tampere University
Google’s CECPQ1 experiment in 2016 integrated a post-quantum key-exchange algorithm, newhope1024, into TLS 1.2. The Google-Cloudflare CECPQ2 experiment in 2019 integrated a more efficient key-exchange algorithm, ntruhrss701, into TLS 1.3.
This paper revisits the choices made in CECPQ2, and shows how to achieve higher performance for post-quantum key exchange in TLS 1.3 using a higher-security algorithm, sntrup761. Previous work had indicated that ntruhrss701 key generation was much faster than sntrup761 key generation, but this paper makes sntrup761 key generation much faster by generating a batch of keys at once.
Batch key generation is invisible at the TLS protocol layer, but raises software-engineering questions regarding the difficulty of integrating batch key exchange into existing TLS libraries and applications. This paper shows that careful choices of software layers make it easy to integrate fast post-quantum software, including batch key exchange, into TLS with minor changes to TLS libraries and no changes to applications.
As a demonstration of feasibility, this paper reports successful integration of its fast sntrup761 library, via a lightly patched OpenSSL, into an unmodified web browser and an unmodified TLS terminator. This paper also reports TLS 1.3 handshake benchmarks, achieving more TLS 1.3 handshakes per second than any software included in OpenSSL.
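The batching trick that makes sntrup761 key generation fast is amortization of the expensive inversion. A standard way to share one inversion across a batch is Montgomery's trick, sketched here over a prime field (sntrup761 inverts in polynomial rings, but the amortization idea is the same):

```python
def batch_inverse(values, p):
    """Montgomery's trick: invert n nonzero elements mod prime p with
    ONE modular inversion plus O(n) multiplications."""
    n = len(values)
    prefix = [1] * (n + 1)
    for i, v in enumerate(values):
        prefix[i + 1] = prefix[i] * v % p      # running products
    inv_all = pow(prefix[n], p - 2, p)         # the single inversion
    inverses = [0] * n
    for i in range(n - 1, -1, -1):
        inverses[i] = prefix[i] * inv_all % p  # peel off one element
        inv_all = inv_all * values[i] % p
    return inverses
```

One inversion costs far more than a multiplication, so spreading it across a batch of keys slashes the per-key cost, and, as the paper notes, the batching stays invisible at the TLS protocol layer.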
3:00 pm–4:00 pm
TLB;DR: Enhancing TLB-based Attacks with TLB Desynchronized Reverse Engineering
Andrei Tatar, Vrije Universiteit, Amsterdam; Daniël Trujillo, Vrije Universiteit, Amsterdam, and ETH Zurich; Cristiano Giuffrida and Herbert Bos, Vrije Universiteit, Amsterdam
Translation Lookaside Buffers, or TLBs, play a vital role in recent microarchitectural attacks. However, unlike CPU caches, we know very little about the exact operation of these essential microarchitectural components. In this paper, we introduce TLB desynchronization as a novel technique for reverse engineering TLB behavior from software. Unlike previous efforts that rely on timing or performance counters, our technique relies on fundamental properties of TLBs, enabling precise and fine-grained experiments. We use desynchronization to shed new light on TLB behavior, examining previously undocumented features such as replacement policies and handling of PCIDs on commodity Intel processors. We also show that such knowledge allows for more and better attacks.
Our results reveal a novel replacement policy on the L2 TLB of modern Intel CPUs as well as behavior indicative of a PCID cache. We use our new insights to design adversarial access patterns that massage the TLB state into evicting a target entry in the minimum number of steps, then examine their impact on several classes of prior TLB-based attacks. Our findings enable practical side channels à la TLBleed over L2, with much finer spatial discrimination and at a sampling rate comparable to L1, as well as an even finer-grained variant that targets both levels. We also show substantial speed gains for other classes of attacks that rely on TLB eviction.
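The payoff of knowing the replacement policy is a minimal eviction pattern: touch exactly as many fresh pages as the set has ways and the victim's translation is gone. The toy model below uses LRU for illustration (the whole point of the paper is that the real L2 TLB policy had to be reverse engineered, and is not plain LRU):

```python
from collections import OrderedDict

class TLBSet:
    """Toy model of one set of a set-associative TLB with LRU
    replacement -- illustration only, not Intel's real policy."""
    def __init__(self, ways=4):
        self.ways = ways
        self.entries = OrderedDict()   # page -> None, kept in LRU order

    def access(self, page):
        hit = page in self.entries
        if hit:
            self.entries.move_to_end(page)       # refresh recency
        else:
            if len(self.entries) >= self.ways:
                self.entries.popitem(last=False)  # evict LRU entry
            self.entries[page] = None
        return hit

# Minimal eviction pattern: `ways` fresh pages push the victim out.
tlb = TLBSet(ways=4)
tlb.access("victim")
for page in ["a", "b", "c", "d"]:
    tlb.access(page)
```

Against an unknown policy, an attacker guessing LRU may need many more accesses per eviction, which is exactly the overhead the paper's reverse engineering removes.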
Lumos: Identifying and Localizing Diverse Hidden IoT Devices in an Unfamiliar Environment
Rahul Anand Sharma, Elahe Soltanaghaei, Anthony Rowe, and Vyas Sekar, Carnegie Mellon University
Hidden IoT devices are increasingly being used to snoop on users in hotel rooms or AirBnBs. We envision empowering users entering such unfamiliar environments to identify and locate (e.g., hidden camera behind plants) diverse hidden devices (e.g., cameras, microphones, speakers) using only their personal handhelds.
What makes this challenging is the limited network visibility and physical access that a user has in such unfamiliar environments, coupled with the lack of specialized equipment.
This paper presents Lumos, a system that runs on commodity user devices (e.g., phone, laptop) and enables users to identify and locate WiFi-connected hidden IoT devices and visualize their presence using an augmented reality interface. Lumos addresses key challenges in: (1) identifying diverse devices using only coarse-grained wireless layer features, without IP/DNS layer information and without knowledge of the WiFi channel assignments of the hidden devices; and (2) locating the identified IoT devices with respect to the user using only phone sensors and wireless signal strength measurements. We evaluated Lumos across 44 different IoT devices spanning various types, models, and brands across six different environments. Our results show that Lumos can identify hidden devices with 95% accuracy and locate them with a median error of 1.5m within 30 minutes in a two-bedroom, 1000 sq. ft. apartment.
4:15 pm–5:15 pm
LTrack: Stealthy Tracking of Mobile Phones in LTE
Martin Kotuliak, Simon Erni, Patrick Leu, Marc Röschlin, and Srdjan Čapkun, ETH Zurich
We introduce LTrack, a new tracking attack on LTE that allows an attacker to stealthily extract user devices’ locations and permanent identifiers (IMSI). To remain stealthy, the localization of devices in LTrack is fully passive, relying on our new uplink/downlink sniffer. Our sniffer records both the times of arrival of LTE messages and the contents of the Timing Advance Commands, based on which LTrack calculates locations. LTrack is the first to show the feasibility of passive localization in LTE through implementation on software-defined radio.
Passive localization attacks reveal a user’s location traces but can at best link these traces to a device’s pseudonymous temporary identifier (TMSI), making tracking in dense areas or over a long time-period challenging. LTrack overcomes this challenge by introducing and implementing a new type of IMSI Catcher named IMSI Extractor. It extracts a device’s IMSI and binds it to its current TMSI. Instead of relying on fake base stations like existing IMSI Catchers, which are detectable due to their continuous transmission, IMSI Extractor relies on our uplink/downlink sniffer enhanced with surgical message overshadowing. This makes our IMSI Extractor the stealthiest IMSI Catcher to date.
We evaluate LTrack through a series of experiments and show that in line-of-sight conditions, the attacker can estimate the location of a phone with less than 6m error in 90% of the cases. We successfully tested our IMSI Extractor against a set of 17 modern smartphones connected to our industry-grade LTE testbed. We further validated our uplink/downlink sniffer and IMSI Extractor in a test facility of an operator.
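A back-of-envelope calculation shows why the Timing Advance alone is coarse, and why the sniffer's time-of-arrival measurements matter. In LTE, the basic time unit is Ts = 1/(15000 × 2048) s and a Timing Advance command is granular to 16·Ts; since TA corrects the round trip, one step corresponds to roughly 78 m of distance:

```python
C = 299_792_458            # speed of light, m/s
TS = 1 / (15_000 * 2048)   # LTE basic time unit, seconds
TA_STEP = 16 * TS          # granularity of a Timing Advance command

def ta_distance_m(ta: int) -> float:
    """Distance to the base station implied by a TA value. TA corrects
    the ROUND trip, so one step maps to C * TA_STEP / 2 metres."""
    return ta * C * TA_STEP / 2

# One TA step is ~78 m -- coarse on its own, which is why LTrack also
# exploits message times of arrival to reach the ~6 m errors reported.
```

This is only the textbook TA geometry; the paper's sub-10 m accuracy comes from combining it with fine-grained arrival-time measurements across messages.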
FLAME: Taming Backdoors in Federated Learning
Thien Duc Nguyen and Phillip Rieger, Technical University of Darmstadt; Huili Chen, University of California San Diego; Hossein Yalame, Helen Möllering, and Hossein Fereidooni, Technical University of Darmstadt; Samuel Marchal, Aalto University and F-Secure; Markus Miettinen, Technical University of Darmstadt; Azalia Mirhoseini, Google; Shaza Zeitouni, Technical University of Darmstadt; Farinaz Koushanfar, University of California San Diego; Ahmad-Reza Sadeghi and Thomas Schneider, Technical University of Darmstadt
Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to so-called backdoor attacks, in which an adversary injects manipulated model updates into the federated model aggregation process so that the resulting model will provide targeted false predictions for specific adversary-chosen inputs. Proposed defenses against backdoor attacks based on detecting and filtering out malicious model updates consider only very specific and limited attacker models, whereas defenses based on differential privacy-inspired noise injection significantly deteriorate the benign performance of the aggregated model. To address these deficiencies, we introduce FLAME, a defense framework that estimates the sufficient amount of noise to be injected to ensure the elimination of backdoors. To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach. This ensures that FLAME can maintain the benign performance of the aggregated model while effectively eliminating adversarial backdoors. Our evaluation of FLAME on several datasets stemming from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models.
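The clipping-plus-noising stage at FLAME's core can be sketched in a few lines. This is a simplification for intuition only: FLAME additionally clusters out anomalous updates first and estimates the noise level adaptively, neither of which is shown here, and all constants are illustrative:

```python
import math, random

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def clip(update, bound):
    """Scale an update down so its L2 norm is at most `bound`."""
    norm = l2_norm(update)
    if norm <= bound:
        return update
    return [x * bound / norm for x in update]

def aggregate(updates, bound=1.0, sigma=0.05, seed=0):
    """Clip each client update, average, then add Gaussian noise --
    the clipping/noising stage of backdoor defenses like FLAME."""
    rng = random.Random(seed)
    clipped = [clip(u, bound) for u in updates]
    n, dim = len(clipped), len(clipped[0])
    avg = [sum(u[i] for u in clipped) / n for i in range(dim)]
    return [a + rng.gauss(0, sigma) for a in avg]

# A poisoned update with a huge norm is tamed by clipping before it
# can dominate the average.
updates = [[0.1, -0.2, 0.05], [0.0, 0.1, -0.1], [50.0, 50.0, 50.0]]
model_delta = aggregate(updates)
```

The design tension the paper addresses is visible even here: a tighter bound and more noise kill backdoors faster but also erode benign accuracy, so estimating the *minimum* sufficient noise is the hard part.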
Thursday, August 11
9:00 am–10:15 am
Targeted Deanonymization via the Cache Side Channel: Attacks and Defenses
Mojtaba Zaheri, Yossi Oren, and Reza Curtmola, New Jersey Institute of Technology
Targeted deanonymization attacks let a malicious website discover whether a website visitor bears a certain public identifier, such as an email address or a Twitter handle. These attacks were previously considered to rely on several assumptions, limiting their practical impact. In this work, we challenge these assumptions and show the attack surface for deanonymization attacks is drastically larger than previously considered. We achieve this by using the cache side channel for our attack, instead of relying on cross-site leaks. This makes our attack oblivious to recently proposed software-based isolation mechanisms, including cross-origin resource policies (CORP), cross-origin opener policies (COOP) and the SameSite cookie attribute. We evaluate our attacks on multiple hardware microarchitectures, multiple operating systems and multiple browser versions, including the highly secure Tor Browser, and demonstrate practical targeted deanonymization attacks on major sites, including Google, Twitter, LinkedIn, TikTok, Facebook, Instagram and Reddit. Our attack runs in less than 3 seconds in most cases, and can be scaled to target an exponentially large number of users.
To stop these attacks, we present a full-featured defense deployed as a browser extension. To minimize the risk to vulnerable individuals, our defense is already available on the Chrome and Firefox app stores. We have also responsibly disclosed our findings to multiple tech vendors, as well as to the Electronic Frontier Foundation. Finally, we provide guidance to websites and browser vendors, as well as to users who cannot install the extension.
GhostTouch: Targeted Attacks on Touchscreens without Physical Touch
Kai Wang, Zhejiang University; Richard Mitev, Technical University of Darmstadt; Chen Yan and Xiaoyu Ji, Zhejiang University; Ahmad-Reza Sadeghi, Technical University of Darmstadt; Wenyuan Xu, Zhejiang University
ALSO AT BLACK HAT USA 2022
Capacitive touchscreens have become the primary human-machine interface for personal devices such as smartphones and tablets. In this paper, we present GhostTouch, the first active contactless attack against capacitive touchscreens. GhostTouch uses electromagnetic interference (EMI) to inject fake touch points into a touchscreen without the need to physically touch it. By tuning the parameters of the electromagnetic signal and adjusting the antenna, we can inject two types of basic touch events, taps and swipes, into targeted locations of the touchscreen and control them to manipulate the underlying device. We successfully launch the GhostTouch attacks on nine smartphone models. We can inject targeted taps continuously with a standard deviation of as low as 14.6 × 19.2 pixels from the target area, a delay of less than 0.5 s and a distance of up to 40 mm. We show the real-world impact of the GhostTouch attacks in a few proof-of-concept scenarios, including answering an eavesdropping phone call, pressing the button, swiping up to unlock, and entering a password. Finally, we discuss potential hardware and software countermeasures to mitigate the attack.
DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks
Jaron Mink, Licheng Luo, and Natã M. Barbosa, University of Illinois at Urbana-Champaign; Olivia Figueira, Santa Clara University; Yang Wang and Gang Wang, University of Illinois at Urbana-Champaign
Fabricated media from deep learning models, or deepfakes, have been recently applied to facilitate social engineering efforts by constructing a trusted social persona. While existing works are primarily focused on deepfake detection, little is done to understand how users perceive and interact with deepfake personas (e.g., profiles) in a social engineering context. In this paper, we conduct a user study (n=286) to quantitatively evaluate how deepfake artifacts affect the perceived trustworthiness of a social media profile and the profile’s likelihood to connect with users. Our study investigates artifacts isolated within a single media field (images or text) as well as mismatched relations between multiple fields. We also evaluate whether user prompting (or training) benefits users in this process. We find that artifacts and prompting significantly decrease the trustworthiness and request acceptance of deepfake profiles. Even so, users still appear vulnerable with 43% of them connecting to a deepfake profile under the best-case conditions. Through qualitative data, we find numerous reasons why this task is challenging for users, such as the difficulty of distinguishing text artifacts from honest mistakes and the social pressures entailed in the connection decisions. We conclude by discussing the implications of our results for content moderators, social media platforms, and future defenses.
Hand Me Your PIN! Inferring ATM PINs of Users Typing with a Covered Hand
Matteo Cardaioli, Stefano Cecconello, Mauro Conti, and Simone Milani, University of Padua; Stjepan Picek, Delft University of Technology; Eugen Saraci, University of Padua
Automated Teller Machines (ATMs) represent the most used system for withdrawing cash. The European Central Bank reported more than 11 billion cash withdrawals and loading/unloading transactions on the European ATMs in 2019. Although ATMs have undergone various technological evolutions, Personal Identification Numbers (PINs) are still the most common authentication method for these devices. Unfortunately, the PIN mechanism is vulnerable to shoulder-surfing attacks performed via hidden cameras installed near the ATM to capture the PIN pad. To counter this, many people cover the typing hand with the other hand. While such users probably believe this behavior is safe enough to protect against such attacks, there is no clear assessment of this countermeasure in the scientific literature.
This paper proposes a novel attack to reconstruct PINs entered by victims covering the typing hand with the other hand. We consider the setting where the attacker can access an ATM PIN pad of the same brand/model as the target one. Afterward, the attacker uses that model to infer the digits pressed by the victim while entering the PIN. Our attack owes its success to a carefully selected deep learning architecture that can infer the PIN from the typing hand position and movements. We run a detailed experimental analysis including 58 users. With our approach, we can guess 30% of the 5-digit PINs within three attempts — the number usually allowed by ATMs before the card is blocked. In a comparison survey, 78 human participants reached an average accuracy of only 7.92% in the same setting. Finally, we evaluate a shielding countermeasure that proved to be rather ineffective unless the whole keypad is shielded.
10:45 am–12:00 pm
Rolling Colors: Adversarial Laser Exploits against Traffic Light Recognition
Chen Yan, Zhejiang University; Zhijian Xu, Zhejiang University and The Chinese University of Hong Kong; Zhanyuan Yin, The University of Chicago; Xiaoyu Ji and Wenyuan Xu, Zhejiang University
Traffic light recognition is essential for fully autonomous driving in urban areas. In this paper, we investigate the feasibility of fooling traffic light recognition mechanisms by shedding laser interference on the camera. By exploiting the rolling shutter of CMOS sensors, we manage to inject a color stripe overlapped on the traffic light in the image, which can cause a red light to be recognized as a green light or vice versa. To increase the success rate, we design an optimization method to search for effective laser parameters based on empirical models of laser interference. Our evaluation in emulated and real-world setups on 2 state-of-the-art recognition systems and 5 cameras reports a maximum success rate of 30% and 86.25% for Red-to-Green and Green-to-Red attacks. We observe that the attack is effective in continuous frames from more than 40 meters away against a moving vehicle, which may cause end-to-end impacts on self-driving such as running a red light or emergency stop. To mitigate the threat, we propose redesigning the rolling shutter mechanism.
SWAPP: A New Programmable Playground for Web Application Security
Phakpoom Chinprutthiwong, Jianwei Huang, and Guofei Gu, SUCCESS Lab, Texas A&M University
Client-side web attacks are one of the major battlefields for cybercriminals today. To mitigate such attacks, researchers have proposed numerous defenses that can be deployed on a server or client. Server-side defenses can be easily deployed and modified by web developers, but they lack the context of client-side attacks such as DOM-XSS attacks. On the other hand, client-side defenses, especially in the form of modified browsers or browser extensions, require constant vendor support or user involvement to stay up to date.
In this work, we explore the feasibility of using a new execution context, the service worker context, as a platform for web security defense development that is programmable, browser agnostic, and runs at the client side without user involvement. To this end, we propose and develop SWAPP (Service Worker APplication Platform), a framework for implementing security mechanisms inside a service worker. As the service worker is supported by most browsers, our framework is compatible with most clients. Furthermore, SWAPP is designed to enable the extensibility and programmability of the apps. We demonstrate the versatility of SWAPP by implementing various apps that can mitigate web attacks including a recent side-channel attack targeting websites that deploy a service worker. SWAPP allows websites to offload a part of the security tasks from the server to the client and also enables the possibility to deploy or retrofit emerging security features/prototypes before they are officially supported by browsers. Finally, we evaluate the performance overhead of our framework and show that deploying defenses on a service worker is a feasible option.
1:30 pm–2:30 pm
Behind the Tube: Exploitative Monetization of Content on YouTube
Andrew Chu, University of Chicago; Arjun Arunasalam, Muslum Ozgur Ozmen, and Z. Berkay Celik, Purdue University
The YouTube video sharing platform is a prominent online presence that delivers various genres of content to society today. As the viewership and userbase of the platform grow, both individual users and larger companies have recognized the potential for monetizing this content. While content monetization is a native capability of the YouTube service, a number of requirements are enforced on the platform to prevent its abuse. Yet, methods to circumvent these requirements exist, many of which are potentially harmful to viewers and other users. In this paper, we present the first comprehensive study on exploitative monetization of content on YouTube. To do this, we first create two datasets: one using thousands of user posts from eleven forums whose users discuss monetization on YouTube, and one using listing data from five active sites that facilitate the purchase and sale of YouTube accounts. We then perform both manual and automated analysis to develop a view of illicit monetization exploits used on YouTube by both individual users and larger channel collectives. We discover six distinct exploits used to execute illicit content monetization on YouTube; four used by individual users, and two used by channel collectives. Further, we identify real-world evidence of each exploit on YouTube message board communities and provide insight into how each is executed. Through this, we present a comprehensive view of illicit monetization exploits on the YouTube platform that can motivate future investigation into mitigating these harmful endeavors.
How to Peel a Million: Validating and Expanding Bitcoin Clusters
George Kappos and Haaroon Yousaf, University College London and IC3; Rainer Stütz and Sofia Rollet, AIT — Austrian Institute of Technology; Bernhard Haslhofer, Complexity Science Hub Vienna; Sarah Meiklejohn, University College London and IC3
One of the defining features of Bitcoin and the thousands of cryptocurrencies that have been derived from it is a globally visible transaction ledger. While Bitcoin uses pseudonyms as a way to hide the identity of its participants, a long line of research has demonstrated that Bitcoin is not anonymous. This has been perhaps best exemplified by the development of clustering heuristics, which have in turn given rise to the ability to track the flow of bitcoins as they are sent from one entity to another.
In this paper, we design a new heuristic that is designed to track a certain type of flow, called a peel chain, that represents many transactions performed by the same entity; in doing this, we implicitly cluster these transactions and their associated pseudonyms together. We then use this heuristic to both validate and expand the results of existing clustering heuristics. We also develop a machine learning-based validation method and, using a ground-truth dataset, evaluate all our approaches and compare them with the state of the art. Ultimately, our goal is to not only enable more powerful tracking techniques but also call attention to the limits of anonymity in these systems.
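The peel-chain intuition is simple to sketch: at each hop, a transaction peels off a small payment and sends the large remainder onward as change spent by the same entity. The follower below uses "larger output = change" as a stand-in heuristic on a hypothetical transaction graph; the paper's actual heuristic is considerably more careful:

```python
def follow_peel_chain(txs, start_txid, max_hops=100):
    """Follow a peel chain through a transaction graph, treating the
    larger of two outputs as the change that continues the chain.
    A deliberate simplification of the paper's heuristic."""
    chain, txid = [], start_txid
    for _ in range(max_hops):
        tx = txs.get(txid)
        if tx is None or len(tx["outputs"]) != 2:
            break  # peel chains have exactly two outputs per hop
        chain.append(txid)
        change = max(tx["outputs"], key=lambda o: o["value"])
        txid = change.get("spent_by")  # next hop, if the change is spent
    return chain

# Hypothetical three-hop chain: each tx peels off a small payment and
# forwards the large remainder.
txs = {
    "t1": {"outputs": [{"value": 0.1}, {"value": 9.9, "spent_by": "t2"}]},
    "t2": {"outputs": [{"value": 0.2}, {"value": 9.7, "spent_by": "t3"}]},
    "t3": {"outputs": [{"value": 9.5}, {"value": 0.2}]},
}
```

Every transaction the follower links implicitly joins the same cluster, which is how peel-chain tracking both validates and expands the output of classic clustering heuristics.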
3:00 pm–4:00 pm
Creating a Secure Underlay for the Internet
Henry Birge-Lee, Princeton University; Joel Wanner, ETH Zürich; Grace H. Cimaszewski, Princeton University; Jonghoon Kwon, ETH Zürich; Liang Wang, Princeton University; François Wirz, ETH Zürich; Prateek Mittal, Princeton University; Adrian Perrig, ETH Zürich; Yixin Sun, University of Virginia
Adversaries can exploit inter-domain routing vulnerabilities to intercept communication and compromise the security of critical Internet applications. Meanwhile, deployment of secure routing solutions such as Border Gateway Protocol Security (BGPsec) and Scalability, Control and Isolation On Next-generation networks (SCION) remains limited. How can we leverage emerging secure routing backbones and extend their security properties to the broader Internet?
We design and deploy an architecture to bootstrap secure routing. Our key insight is to abstract the secure routing backbone as a virtual Autonomous System (AS), called Secure Backbone AS (SBAS). While SBAS appears as one AS to the Internet, it is a federated network where routes are exchanged between participants using a secure backbone. SBAS makes BGP announcements for its customers’ IP prefixes at multiple locations (referred to as Points of Presence or PoPs) allowing traffic from non-participating hosts to be routed to a nearby SBAS PoP (where it is then routed over the secure backbone to the true prefix owner). In this manner, we are the first to integrate a federated secure non-BGP routing backbone with the BGP-speaking Internet.
We present a real-world deployment of our architecture that uses SCIONLab to emulate the secure backbone and the PEERING framework to make BGP announcements to the Internet. A combination of real-world attacks and Internet-scale simulations shows that SBAS substantially reduces the threat of routing attacks. Finally, we survey network operators to better understand optimal governance and incentive models.
Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era
Changjiang Li, Pennsylvania State University and Zhejiang University; Li Wang, Shandong University; Shouling Ji and Xuhong Zhang, Zhejiang University; Zhaohan Xi, Pennsylvania State University; Shanqing Guo, Shandong University; Ting Wang, Pennsylvania State University
Facial Liveness Verification (FLV) is widely used for identity authentication in many security-sensitive domains and offered as Platform-as-a-Service (PaaS) by leading cloud vendors. Yet, with the rapid advances in synthetic media techniques (e.g., deepfake), the security of FLV is facing unprecedented challenges, about which little is known thus far.
To bridge this gap, in this paper, we conduct the first systematic study on the security of FLV in real-world settings. Specifically, we present LiveBugger, a new deepfake-powered attack framework that enables customizable, automated security evaluation of FLV. Leveraging LiveBugger, we perform a comprehensive empirical assessment of representative FLV platforms, leading to a set of interesting findings. For instance, most FLV APIs do not use anti-deepfake detection; even for those with such defenses, their effectiveness is concerning (e.g., it may detect high-quality synthesized videos but fail to detect low-quality ones). We then conduct an in-depth analysis of the factors impacting the attack performance of LiveBugger: a) the bias (e.g., gender or race) in FLV can be exploited to select victims; b) adversarial training makes deepfakes more effective at bypassing FLV; c) the input quality has a varying influence on different deepfake techniques' ability to bypass FLV. Based on these findings, we propose a customized, two-stage approach that can boost the attack success rate by up to 70%. Further, we run proof-of-concept attacks on several representative applications of FLV (i.e., the clients of FLV APIs) to illustrate the practical implications: due to the vulnerability of the APIs, many downstream applications are vulnerable to deepfakes. Finally, we discuss potential countermeasures to improve the security of FLV. Our findings have been confirmed by the corresponding vendors.
Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction
Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler, and Patrick Traynor, University of Florida
Generative machine learning models have made convincing voice synthesis a reality. While such tools can be extremely useful in applications where people consent to their voices being cloned (e.g., patients losing the ability to speak, actors not wanting to have to redo dialog, etc), they also allow for the creation of nonconsensual content known as deepfakes. This malicious audio is problematic not only because it can convincingly be used to impersonate arbitrary users, but because detecting deepfakes is challenging and generally requires knowledge of the specific deepfake generator. In this paper, we develop a new mechanism for detecting audio deepfakes using techniques from the field of articulatory phonetics. Specifically, we apply fluid dynamics to estimate the arrangement of the human vocal tract during speech generation and show that deepfakes often model impossible or highly-unlikely anatomical arrangements. When parameterized to achieve 99.9% precision, our detection mechanism achieves a recall of 99.5%, correctly identifying all but one deepfake sample in our dataset. We then discuss the limitations of this approach, and how deepfake models fail to reproduce all aspects of speech equally. In so doing, we demonstrate that subtle, but biologically constrained aspects of how humans generate speech are not captured by current models, and can therefore act as a powerful tool to detect audio deepfakes.
4:15 pm–5:15 pm
RE-Mind: a First Look Inside the Mind of a Reverse Engineer
Alessandro Mantovani and Simone Aonzo, EURECOM; Yanick Fratantonio, Cisco Talos; Davide Balzarotti, EURECOM
When a human activity requires a lot of expertise and very specialized cognitive skills that are poorly understood by the general population, it is often considered “an art.” Different activities in the security domain have fallen in this category, such as exploitation, hacking, and the main focus of this paper: binary reverse engineering (RE).
However, while experts in many areas (ranging from chess players to computer programmers) have been studied by scientists to understand their mental models and capture what is special about their behavior, the “art” of understanding binary code and solving reverse engineering puzzles remains to date a black box.
In this paper, we present a measurement of the different strategies adopted by expert and beginner reverse engineers while approaching the analysis of x86 (dis)assembly code, a typical static RE task. We do that by performing an exploratory analysis of data collected over 16,325 minutes of RE activity of two unknown binaries from 72 participants with different experience levels: 39 novices and 33 experts.
Pacer: Comprehensive Network Side-Channel Mitigation in the Cloud
Aastha Mehta, University of British Columbia (UBC); Mohamed Alzayat, Roberta De Viti, Björn B. Brandenburg, Peter Druschel, and Deepak Garg, Max Planck Institute for Software Systems (MPI-SWS)
Network side channels (NSCs) leak secrets through packet timing and packet sizes. They are of particular concern in public IaaS Clouds, where any tenant may be able to colocate and indirectly observe a victim’s traffic shape. We present Pacer, the first system that eliminates NSC leaks in public IaaS Clouds end-to-end. It builds on the principled technique of shaping guest traffic outside the guest to make the traffic shape independent of secrets by design. However, Pacer also addresses important concerns that have not been considered in prior work — it prevents internal side-channel leaks from affecting reshaped traffic, and it respects network flow control, congestion control and loss recovery signals. Pacer is implemented as a paravirtualizing extension to the host hypervisor, requiring modest changes to the hypervisor and the guest kernel and optional, minimal changes to applications. We present Pacer’s key abstraction of a cloaked tunnel, describe its design and implementation, and show through an experimental evaluation that Pacer imposes moderate overheads on bandwidth, client latency, and server throughput, while thwarting attacks using state-of-the-art CNN classifiers.
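Pacer's core principle, shaping traffic so its shape is independent of secrets, can be illustrated with a toy shaper that emits one fixed-size packet per tick regardless of how much real data is queued. The packet size, schedule, and interface below are illustrative choices of ours, not Pacer's actual design.

```python
# Toy secret-independent traffic shaper: exactly one fixed-size packet
# leaves per tick, padded with dummy bytes when needed, so an observer
# sees an identical traffic shape for any payload. Parameters are
# illustrative, not Pacer's.

PACKET_SIZE = 8  # toy value; real systems use MTU-sized packets

def shape(payload, ticks):
    """Return the packet emitted at each of `ticks` ticks."""
    packets = []
    offset = 0
    for _ in range(ticks):
        chunk = payload[offset:offset + PACKET_SIZE]
        offset += len(chunk)
        # Pad with zero bytes so every packet has the same size.
        packets.append(chunk + b"\x00" * (PACKET_SIZE - len(chunk)))
    return packets
```

Note that a short payload and an empty one produce byte-for-byte identical packet sizes and timing, which is exactly the property that defeats a traffic-shape observer; the cost is the bandwidth spent on padding, which Pacer's evaluation quantifies.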
Friday, August 12
9:00 am–10:15 am
Playing Without Paying: Detecting Vulnerable Payment Verification in Native Binaries of Unity Mobile Games
Chaoshun Zuo and Zhiqiang Lin, The Ohio State University
Modern mobile games often contain in-app purchasing (IAP) for players to purchase digital items such as virtual currency, equipment, or extra moves. In theory, IAP should be implemented securely; but in practice, we have found that many game developers have failed to do so, particularly by misplacing the trust of payment verification, e.g., by either verifying the payment transactions locally or not using any verification at all, leading to playing-without-paying vulnerabilities. This paper presents PAYMENTSCOPE, a static binary analysis tool to automatically identify vulnerable IAP implementations in mobile games. By modeling IAP protocols with the SDK-provided APIs using a payment-aware data flow analysis, PAYMENTSCOPE directly pinpoints untrusted payment verification vulnerabilities in game native binaries. We have implemented PAYMENTSCOPE on top of the binary analysis framework Ghidra and tested it on 39,121 Unity (the most popular game engine) mobile games, among which PAYMENTSCOPE has identified 8,954 (22.89%) vulnerable games. Of these, 8,233 games do not verify the validity of payment transactions and 721 games simply verify the transactions locally. We have disclosed the identified vulnerabilities to developers of vulnerable games, and many of them have acknowledged our findings.
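The vulnerability class is easy to illustrate. In this minimal sketch (all function and field names are hypothetical, not PAYMENTSCOPE's or any store's actual API), local verification trusts a receipt the client itself supplies, while server-side verification checks the transaction against the store's backend:

```python
# The vulnerability class in a nutshell: trusting the client's device to
# verify an in-app purchase. All names and fields here are hypothetical.

def verify_locally(receipt):
    """VULNERABLE: a modified client can hand over a receipt whose
    status field says 'paid'; nothing binds it to a real transaction."""
    return receipt.get("status") == "paid"

def verify_with_server(receipt, store_api):
    """Safer pattern: the game's server asks the store's backend whether
    the transaction ID was actually paid for."""
    return store_api.lookup(receipt.get("transaction_id")) == "paid"
```

A forged receipt passes the local check but fails the server-side one, which is precisely the distinction between the 721 locally-verifying games and a correct implementation.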
Rendering Contention Channel Made Practical in Web Browsers
Shujiang Wu and Jianjia Yu, Johns Hopkins University; Min Yang, Fudan University; Yinzhi Cao, Johns Hopkins University
Browser rendering utilizes hardware resources shared within and across browsers to display web contents, thus inevitably being vulnerable to side channel attacks. Prior works have studied rendering side channels that are caused by rendering time differences of one frame, such as URL color change. However, it still remains unclear how rendering contentions play a role in side-channel attacks and covert communications.
In this paper, we design a novel rendering contention channel. Specifically, we stress the browser’s rendering resource with stable, self-adjustable pressure and measure the time taken to render a sequence of frames. The measured time sequence is further used to infer any co-rendering event of the browser.
To better understand the channel, we study its cause via a method called single variable testing. That is, we keep all variables the same but only change one to test whether the changed variable contributes to the contention. Our results show that CPU, GPU and screen buffer are all part of the contention.
To demonstrate the channel’s feasibility, we design and implement a prototype, open-source framework, called SIDER, to launch four attacks using the rendering contention channel, which are (i) cross-browser, cross-mode cookie synchronization, (ii) history sniffing, (iii) website fingerprinting, and (iv) keystroke logging. Our evaluation shows the effectiveness and feasibility of all four attacks.
Practical Privacy-Preserving Authentication for SSH
Lawrence Roy, Stanislav Lyakhov, Yeongjin Jang, and Mike Rosulek, Oregon State University
Public-key authentication in SSH reveals more information about the participants’ keys than is necessary. (1) The server can learn a client’s entire set of public keys, even keys generated for other servers. (2) The server learns exactly which key the client uses to authenticate, and can further prove this fact to a third party. (3) A client can learn whether the server recognizes public keys belonging to other users. Each of these problems leads to tangible privacy violations for SSH users.
In this work we introduce a new public-key authentication method for SSH that reveals essentially the minimum possible amount of information. With our new method, the server learns only whether the client knows the private key for some authorized public key. If multiple keys are authorized, the server does not learn which one the client used. The client cannot learn whether the server recognizes public keys belonging to other users. Unlike traditional SSH authentication, our method is fully deniable.
Our method supports existing SSH keypairs of all standard flavors — RSA, ECDSA, EdDSA. It does not require users to generate new key material. As in traditional SSH authentication, clients and servers can use a mixture of different key flavors in a single authentication session.
We integrated our new authentication method into OpenSSH, and found it to be practical and scalable. For a typical client and server with at most 10 ECDSA/EdDSA keys each, our protocol requires 9 kB of communication and 12.4 ms of latency. Even for a client with 20 keys and server with 100 keys, our protocol requires only 12 kB of communication and 26.7 ms of latency.
10:45 am–12:00 pm
Security and Privacy Perceptions of Third-Party Application Access for Google Accounts
David G. Balash, Xiaoyuan Wu, and Miles Grant, The George Washington University; Irwin Reyes, Two Six Technologies; Adam J. Aviv, The George Washington University
Online services like Google provide a variety of application programming interfaces (APIs). These online APIs enable authenticated third-party services and applications (apps) to access a user’s account data for tasks such as single sign-on (SSO), calendar integration, and sending email on behalf of the user, among others. Despite their prevalence, API access could pose significant privacy and security risks, where a third-party could have unexpected privileges to a user’s account. To gauge users’ perceptions and concerns regarding third-party apps that integrate with online APIs, we performed a multi-part online survey of Google users. First, we asked n = 432 participants to recall if and when they allowed third-party access to their Google account: 89% recalled using at least one SSO and 52% remembered at least one third-party app. In the second survey, we re-recruited n = 214 participants to ask about specific apps and SSOs they’ve authorized on their own Google accounts. We collected in-the-wild data about users’ actual SSOs and authorized apps: 86% used Google SSO on at least one service, and 67% had at least one third-party app authorized. After examining their apps and SSOs, participants expressed the most concern about access to personal information like email addresses and other publicly shared info. However, participants were less concerned with broader (and perhaps more invasive) access to calendars, emails, or cloud storage (as needed by third-party apps). This discrepancy may be due in part to trust transference to apps that integrate with Google, forming an implied partnership. Our results suggest opportunities for design improvements to the current third-party management tools offered by Google; for example, tracking recent access, automatically revoking access due to app disuse, and providing permission controls.
MaDIoT 2.0: Modern High-Wattage IoT Botnet Attacks and Defenses
Tohid Shekari, Georgia Institute of Technology; Alvaro A. Cardenas, University of California, Santa Cruz; Raheem Beyah, Georgia Institute of Technology
The widespread availability of vulnerable IoT devices has resulted in IoT botnets. A particularly concerning IoT botnet can be built around high-wattage IoT devices such as EV chargers because, in large numbers, they can abruptly change the electricity consumption in the power grid. These attacks are called Manipulation of Demand via IoT (MaDIoT) attacks. Previous research has shown that the existing power grid protection mechanisms prevent any large-scale negative consequences to the grid from MaDIoT attacks. In this paper, we analyze this assumption and show that an intelligent attacker with extra knowledge about the power grid and its state can launch more sophisticated attacks. Rather than attacking all locations at random times, our adversary uses an instability metric that lets the attacker know when and where to activate the high-wattage bots for maximum impact.
Post-Quantum Cryptography with Contemporary Co-Processors: Beyond Kronecker, Schönhage-Strassen & Nussbaumer
Joppe W. Bos, Joost Renes, and Christine van Vredendaal, NXP Semiconductors
There are currently over 30 billion IoT (Internet of Things) devices installed worldwide. To secure these devices from various threats, one often relies on public-key cryptographic primitives whose operations can be costly to compute on resource-constrained IoT devices. To support such operations, these devices often include a dedicated co-processor for cryptographic procedures, typically in the form of a big integer arithmetic unit. Such existing arithmetic co-processors do not offer the functionality that is expected by upcoming post-quantum cryptographic primitives. Regardless, contemporary systems may exist in the field for many years to come.
In this paper we propose the Kronecker+ algorithm for polynomial multiplication in rings of the form Z[X]/(X^n +1): the arithmetic foundation of many lattice-based cryptographic schemes. We discuss how Kronecker+ allows for re-use of existing co-processors for post-quantum cryptography, and in particular directly applies to the various finalists in the post-quantum standardization effort led by NIST. We demonstrate the effectiveness of our algorithm in practice by integrating Kronecker+ into Saber: one of the finalists in the ongoing NIST standardization effort. On our target platform, a RV32IMC with access to a dedicated arithmetic co-processor designed to accelerate RSA and ECC, Kronecker+ performs the matrix multiplication 2.8 times faster than regular Kronecker substitution and 1.7 times faster than Harvey’s negated-evaluation-points method.
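Classic Kronecker substitution, the baseline that Kronecker+ refines, reduces one polynomial multiplication to a single big-integer multiplication by evaluating both polynomials at a large power of two. A minimal sketch follows; the 32-bit slot width is a toy choice of ours, valid only while product coefficients stay small enough not to overflow a slot.

```python
# Classic Kronecker substitution: pack polynomial coefficients into a
# big integer, multiply once, then unpack. Product coefficients must
# stay below 2**bits to avoid carries between slots; the 32-bit width
# here is a toy choice, not taken from the paper.

def kronecker_multiply(a, b, bits=32):
    """Multiply coefficient lists a and b (in Z[X]) via one integer
    multiplication, evaluating both polynomials at X = 2**bits."""
    base = 1 << bits
    A = sum(c << (bits * i) for i, c in enumerate(a))
    B = sum(c << (bits * i) for i, c in enumerate(b))
    C = A * B  # the single big-integer multiplication the co-processor does
    # Unpack the product's coefficients slot by slot.
    out = []
    for _ in range(len(a) + len(b) - 1):
        out.append(C & (base - 1))
        C >>= bits
    return out
```

For rings of the form Z[X]/(X^n + 1), the reduction afterwards simply subtracts the upper half of the product's coefficients from the lower half, since X^n ≡ -1; Kronecker+'s contribution is squeezing such multiplications more efficiently through an RSA/ECC-style arithmetic unit.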
1:30 pm–2:30 pm
Total Eclipse of the Heart — Disrupting the InterPlanetary File System
Bernd Prünster, Institute of Applied Information Processing and Communications (IAIK), Graz University of Technology; Alexander Marsalek, A-SIT Secure Information Technology Center Austria; Thomas Zefferer, A-SIT Plus GmbH
Peer-to-peer networks are an attractive alternative to classical client-server architectures in several fields of application such as voice-over-IP telephony and file sharing. Recently, a new peer-to-peer solution called the InterPlanetary File System (IPFS) has attracted attention, with its promise of re-decentralising the Web. Being increasingly used as a stand-alone application, IPFS has also emerged as the technical backbone of various other decentralised solutions and was even used to evade censorship. Decentralised applications serving millions of users rely on IPFS as one of their crucial building blocks. This popularity also makes IPFS attractive for large-scale attacks. We have identified a conceptual issue in one of IPFS’s core libraries and demonstrate its exploitation by means of a successful end-to-end attack. We evaluated this attack against the IPFS reference implementation on the public IPFS network, which is used by the average user to share and consume IPFS content. The results obtained from mounting this attack on live IPFS nodes show that arbitrary IPFS nodes can be eclipsed, i.e. isolated from the network, with moderate effort and limited resources. Compared to similar works, we show that our attack scales well even beyond current network sizes and can disrupt the entire public IPFS network with alarmingly low effort. The vulnerability set described in this paper has been assigned CVE-2020-10937. Responsible disclosure procedures have led to mitigations being deployed. The issues presented in this paper were publicly disclosed in October 2020 together with Protocol Labs, the company coordinating IPFS development.
Dos and Don’ts of Machine Learning in Computer Security
Daniel Arp, Technische Universität Berlin; Erwin Quiring, Technische Universität Braunschweig; Feargus Pendlebury, King’s College London and Royal Holloway, University of London and The Alan Turing Institute; Alexander Warnecke, Technische Universität Braunschweig; Fabio Pierazzi, King’s College London; Christian Wressnegger, KASTEL Security Research Labs and Karlsruhe Institute of Technology; Lorenzo Cavallaro, University College London; Konrad Rieck, Technische Universität Braunschweig
With the growing processing power of computing systems and the increasing availability of massive datasets, machine learning algorithms have led to major breakthroughs in many different areas. This development has influenced computer security, spawning a series of work on learning-based security systems, such as for malware detection, vulnerability discovery, and binary code analysis. Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance and render learning-based systems potentially unsuitable for security tasks and practical deployment.
In this paper, we look at this problem with critical eyes. First, we identify common pitfalls in the design, implementation, and evaluation of learning-based security systems. We conduct a study of 30 papers from top-tier security conferences within the past 10 years, confirming that these pitfalls are widespread in the current security literature. In an empirical analysis, we further demonstrate how individual pitfalls can lead to unrealistic performance and interpretations, obstructing the understanding of the security problem at hand. As a remedy, we propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible. Furthermore, we identify open problems when applying machine learning in security and provide directions for further research.
3:00 pm–4:00 pm
“The Same PIN, Just Longer”: On the (In)Security of Upgrading PINs from 4 to 6 Digits
Collins W. Munyendo, The George Washington University; Philipp Markert, Ruhr University Bochum; Alexandra Nisenoff, University of Chicago; Miles Grant and Elena Korkes, The George Washington University; Blase Ur, University of Chicago; Adam J. Aviv, The George Washington University
With the goal of improving security, companies like Apple have moved from requiring 4-digit PINs to 6-digit PINs in contexts like smartphone unlocking. Users with a 4-digit PIN thus must “upgrade” to a 6-digit PIN for the same device or account. In an online user study (n=1010), we explore the security of such upgrades. Participants used their own smartphone to first select a 4-digit PIN. They were then directed to select a 6-digit PIN with one of five randomly assigned justifications. In an online attack that guesses a small number of common PINs (10–30), we observe that 6-digit PINs are, at best, marginally more secure than 4-digit PINs. To understand the relationship between 4- and 6-digit PINs, we then model targeted attacks for PIN upgrades. We find that attackers who know a user’s previous 4-digit PIN perform significantly better than those who do not at guessing their 6-digit PIN in only a few guesses using basic heuristics (e.g., appending digits to the 4-digit PIN). Participants who selected a 6-digit PIN when given a “device upgrade” justification selected 6-digit PINs that were the easiest to guess in a targeted attack, with the attacker successfully guessing over 25% of the PINs in just 10 attempts, and more than 30% in 30 attempts. Our results indicate that forcing users to upgrade to 6-digit PINs offers limited security improvements despite adding usability burdens. System designers should thus carefully consider this tradeoff before requiring upgrades.
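The appending-style heuristics the abstract mentions can be sketched as a targeted guess generator for an attacker who knows the old 4-digit PIN. The suffix list and guess ordering below are our own illustrative choices, not the paper's fitted attack model.

```python
# Targeted 6-digit PIN guesses derived from a known 4-digit PIN, in the
# spirit of the heuristics the paper models. The suffix list and guess
# ordering are illustrative, not the paper's actual attack.

COMMON_SUFFIXES = ["00", "11", "12", "69", "99"]  # illustrative choices

def targeted_guesses(pin4):
    """Generate 6-digit candidates from a known 4-digit PIN."""
    guesses = []
    # Append two digits to the old PIN (the most natural "upgrade").
    for suffix in COMMON_SUFFIXES:
        guesses.append(pin4 + suffix)
    # Prepend two digits instead.
    for prefix in COMMON_SUFFIXES:
        guesses.append(prefix + pin4)
    # Repeat part of the old PIN to reach six digits.
    guesses.append(pin4 + pin4[:2])
    # Deduplicate while preserving guess order.
    return list(dict.fromkeys(guesses))
```

Even this crude generator yields around ten high-value guesses, which lines up with the abstract's point that a 10-to-30-guess budget already succeeds against a large fraction of upgraded PINs.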
RegexScalpel: Regular Expression Denial of Service (ReDoS) Defense by Localize-and-Fix
Yeting Li, CAS-KLONAT, Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences; SKLCS, Institute of Software, Chinese Academy of Sciences; Yecheng Sun, SKLCS, Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Zhiwu Xu, College of Computer Science and Software Engineering, Shenzhen University; Jialun Cao, The Hong Kong University of Science and Technology; Yuekang Li, School of Computer Science and Engineering, Nanyang Technological University; Rongchen Li, SKLCS, Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Haiming Chen, SKLCS, Institute of Software, Chinese Academy of Sciences; CAS-KLONAT, Institute of Information Engineering, Chinese Academy of Sciences; Shing-Chi Cheung, The Hong Kong University of Science and Technology; Yang Liu, School of Computer Science and Engineering, Nanyang Technological University; Yang Xiao, CAS-KLONAT, Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences
The Regular expression Denial of Service (ReDoS) is a class of denial-of-service attacks that exploit vulnerable regular expressions (regexes) whose execution time can grow superlinearly with input size. A common approach to defending against ReDoS attacks is to repair the vulnerable regexes. Techniques have recently been proposed to synthesize repaired regexes using programming-by-example (PBE) techniques. However, these existing techniques may generate regexes that are not semantically equivalent or similar to the original ones, or that are still vulnerable to ReDoS attacks.
To address these challenges, we propose RegexScalpel, an automatic regex repair framework that adopts a localize-and-fix strategy. RegexScalpel first localizes the vulnerabilities by leveraging our fine-grained vulnerability patterns to analyze their vulnerable patterns, their sources (i.e., the pathological sub-regexes), and their root causes (e.g., the overlapping sub-regexes). Then, RegexScalpel fixes the pathological sub-regexes according to our predefined repair patterns and the localized vulnerability information. Furthermore, our repair patterns ensure that the repaired regexes are semantically either equivalent or similar to the original ones. Our iterative repair method also keeps vulnerabilities out of the repaired regexes. In an experiment on 448 vulnerable regexes, we demonstrate that RegexScalpel outperforms all existing automatic regex-fixing techniques, fixing 348 more regexes than the best existing work. We also applied RegexScalpel to ten popular projects, including Python and NLTK, and revealed 16 vulnerable regexes. RegexScalpel successfully repaired all of them, and these repairs were merged into later releases by the maintainers, resulting in 8 confirmed CVEs.
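A textbook example of the vulnerability class, with a semantics-preserving repair, looks like this (a hand-written illustration of the general idea, not an actual RegexScalpel output):

```python
# A textbook ReDoS case: nested quantifiers cause superlinear
# backtracking on non-matching input. The repair is semantics-preserving
# for this pattern; it is a hand-written illustration, not an actual
# RegexScalpel repair.
import re

vulnerable = re.compile(r"^(a+)+$")   # backtracks explosively on "aaa...b"
repaired   = re.compile(r"^a+$")      # same language, linear time

# Both regexes accept exactly the strings of one or more 'a' characters
# (inputs kept short so the vulnerable one still returns quickly).
for s in ["a", "aaaa", "", "ab", "ba"]:
    assert bool(vulnerable.match(s)) == bool(repaired.match(s))
```

On a longer non-matching input such as thirty `a`s followed by `b`, the vulnerable pattern takes time exponential in the input length while the repaired one rejects immediately, which is the asymmetry ReDoS attacks exploit.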
Zero-Knowledge Middleboxes
Paul Grubbs, Arasu Arun, Ye Zhang, Joseph Bonneau, and Michael Walfish, NYU
This paper initiates research on zero-knowledge middleboxes (ZKMBs). A ZKMB is a network middlebox that enforces network usage policies on encrypted traffic. Clients send the middlebox zero-knowledge proofs that their traffic is policy-compliant; these proofs reveal nothing about the client’s communication except that it complies with the policy. We show how to make ZKMBs work with unmodified encrypted-communication protocols (specifically TLS 1.3), making ZKMBs invisible to servers. As a contribution of independent interest, we design optimized zero-knowledge proofs for TLS 1.3 session keys.
We apply the ZKMB paradigm to several case studies. Experimental results suggest that in certain settings, performance is in striking distance of practicality; an example is a middlebox that filters domain queries (each query requiring a separate proof) when the client has a long-lived TLS connection with a DNS resolver. In such configurations, the middlebox’s overhead is 2–5 ms of running time per proof, and client latency to create a proof is several seconds. On the other hand, clients may have to store hundreds of MBs depending on the underlying zero-knowledge proof machinery, and for some applications, latency is tens of seconds.
4:15 pm–5:15 pm
Debloating Address Sanitizer
Yuchen Zhang, Stevens Institute of Technology; Chengbin Pang, Nanjing University; Georgios Portokalidis, Nikos Triandopoulos, and Jun Xu, Stevens Institute of Technology
Address Sanitizer (ASan) is a powerful memory error detector. It can detect various errors ranging from spatial issues like out-of-bound accesses to temporal issues like use-after-free. However, ASan has the major drawback of high runtime overhead. With every functionality enabled, ASan incurs an overhead of more than 1x.
This paper first presents a study to dissect the operations of ASan and inspects the primary sources of its runtime overhead. The study unveils (or confirms) that the high overhead is mainly caused by the extensive sanitizer checks on memory accesses. Inspired by the study, the paper proposes ASan--, a tool assembling a group of optimizations to reduce (or “debloat”) sanitizer checks and improve ASan’s efficiency. Unlike existing tools that remove sanitizer checks with harm to the capability, scalability, or usability of ASan, ASan-- fully maintains those decent properties of ASan.
Our evaluation shows that ASan-- presents high promise. It reduces the overhead of ASan by 41.7% on SPEC CPU2006 and by 35.7% on Chromium. If only considering the overhead incurred by sanitizer checks, the reduction rates increase to 51.6% on SPEC CPU2006 and 69.6% on Chromium. In the context of fuzzing, ASan-- increases the execution speed of AFL by over 40% and the branch coverage by 5%. Combined with orthogonal, fuzzing-tailored optimizations, ASan-- can speed up AFL by 60% and increase the branch coverage by 9%. Running in Chromium to support our daily work for four weeks, ASan-- did not present major usability issues or significant slowdown, and it detected all the bugs we reproduced from previous reports.
Lamphone: Passive Sound Recovery from a Desk Lamp’s Light Bulb Vibrations
Ben Nassi, Yaron Pirutin, and Raz Swisa, Ben-Gurion University of the Negev; Adi Shamir, Weizmann Institute of Science; Yuval Elovici and Boris Zadov, Ben-Gurion University of the Negev
In this paper, we introduce “Lamphone,” an optical side-channel attack used to recover sound from desk lamp light bulbs; such lamps are commonly used in home offices, which became a primary work setting during the COVID-19 pandemic. We show how fluctuations in the air pressure on the surface of a light bulb, which occur in response to sound and cause the bulb to vibrate very slightly (a millidegree vibration), can be exploited by eavesdroppers to recover speech passively, externally, and using equipment that provides no indication regarding its application. We analyze a light bulb’s response to sound via an electro-optical sensor and learn how to isolate the audio signal from the optical signal. We compare Lamphone to related methods presented in other studies and show that Lamphone can recover sound at high quality and lower volume levels than those methods. Finally, we show that eavesdroppers can apply Lamphone to recover speech at the sound level of a virtual meeting with fair intelligibility, from a distance of 35 meters, when the victim is sitting/working at a desk that contains a desk lamp with a light bulb.
XDRI Attacks — and — How to Enhance Resilience of Residential Routers
Philipp Jeitner, Fraunhofer Institute for Secure Information Technology SIT and National Research Center for Applied Cybersecurity ATHENE; Haya Shulman, Fraunhofer Institute for Secure Information Technology SIT, National Research Center for Applied Cybersecurity ATHENE, and Goethe-Universität Frankfurt; Lucas Teichmann, Fraunhofer Institute for Secure Information Technology SIT; Michael Waidner, Fraunhofer Institute for Secure Information Technology SIT, National Research Center for Applied Cybersecurity ATHENE, and Technische Universität Darmstadt
We explore the security of residential routers and find a range of critical vulnerabilities. Our evaluations show that 10 out of 36 popular routers are vulnerable to injections of fake records via misinterpretation of special characters. We also find that in 15 of the 36 routers, the mechanisms that are meant to prevent cache poisoning attacks can be circumvented.
In our Internet-wide study with an advertisement network, we identified and analyzed 976 residential routers used by web clients, out of which more than 95% were found vulnerable to our attacks. Overall, vulnerable routers are prevalent and are distributed among 177 countries and 4830 networks.
To understand the core factors causing the vulnerabilities we perform black- and white-box analyses of the routers. We find that many problems can be attributed to incorrect assumptions on the protocols’ behaviour and the Internet, misunderstanding of the standard recommendations, bugs, and simplified DNS software implementations.
We provide recommendations to mitigate our attacks. We also set up a tool to enable everyone to evaluate the security of their routers at https://xdi-attack.net/.
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, and Michael Backes, CISPA Helmholtz Center for Information Security; Emiliano De Cristofaro, UCL and Alan Turing Institute; Mario Fritz and Yang Zhang, CISPA Helmholtz Center for Information Security
Inference attacks against Machine Learning (ML) models allow adversaries to learn sensitive information about training data, model parameters, etc. While researchers have studied, in depth, several kinds of attacks, they have done so in isolation. As a result, we lack a comprehensive picture of the risks caused by the attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationship among them, or the effectiveness of possible defenses. In this paper, we fill this gap by presenting a first-of-its-kind holistic risk assessment of different inference attacks against machine learning models. We concentrate on four attacks — namely, membership inference, model inversion, attribute inference, and model stealing — and establish a threat model taxonomy.
Our extensive experimental evaluation, run on five model architectures and four image datasets, shows that the complexity of the training dataset plays an important role with respect to the attack’s performance, while the effectiveness of model stealing and membership inference attacks are negatively correlated. We also show that defenses like DP-SGD and Knowledge Distillation can only mitigate some of the inference attacks. Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models, and equally serves as a benchmark tool for researchers and practitioners.
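The simplest of the four attacks, membership inference, is often demonstrated with a confidence threshold: overfit models tend to be unusually confident on their training points, and that gap is what the attacker exploits. The threshold and scores below are toy values for illustration, not ML-Doctor's actual procedure.

```python
# Toy membership inference by confidence thresholding: a model is
# typically more confident on data it was trained on, so unusually high
# top-class confidence suggests membership. Threshold and scores here
# are toy values, not ML-Doctor's actual procedure.

def infer_membership(confidences, threshold=0.9):
    """Guess 'member' (True) for each sample whose top-class
    confidence exceeds the threshold."""
    return [conf > threshold for conf in confidences]

# Overfit models often give near-1.0 confidence on training points and
# noticeably lower confidence on unseen ones.
train_scores = [0.99, 0.97, 0.95]   # confidences on training data (toy)
test_scores  = [0.70, 0.85, 0.60]   # confidences on unseen data (toy)
```

Defenses such as DP-SGD work precisely by shrinking this confidence gap, which is why the paper finds they mitigate membership inference but not every attack in the taxonomy.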