HACKER SUMMER CAMP 2024 GUIDES — Part Sixteen: USENIX Security Trifecta 2024

DCG 201
67 min read · Aug 13, 2024


Welcome to the DCG 201 Guides for Hacker Summer Camp 2024! This is part of a series where we are going to cover all the various hacker conventions and shenanigans both In-Person & Digital! This year in 2024 we have completely lost our minds and thus we will have a total of 18 guides spanning 3 months of Hacker Insanity!

As more blog posts are uploaded, you will be able to jump through the guide via these links:

HACKER SUMMER CAMP 2024 — Part One: Surviving Las Vegas & Virtually Anywhere 2024

HACKER SUMMER CAMP 2024 — Part Two: Capture The Flags & Hackathons

HACKER SUMMER CAMP 2024 — Part Three: Design Automation Conference #61

HACKER SUMMER CAMP 2024 — Part Four: ToorCamp 2024

HACKER SUMMER CAMP 2024 — Part Five: LeHack 20th

HACKER SUMMER CAMP 2024 — Part Six: HOPE XV

HACKER SUMMER CAMP 2024 — Part Seven: SummerCon 2024

HACKER SUMMER CAMP 2024 — Part Eight: DOUBLEDOWN24 by RingZer0

HACKER SUMMER CAMP 2024 — Part Nine: TRICON & REcon 2024

HACKER SUMMER CAMP 2024 — Part Ten: The Diana Initiative 2024

HACKER SUMMER CAMP 2024 — Part Eleven: Wikimania Katowice

HACKER SUMMER CAMP 2024 — Part Twelve: SquadCon 2024

HACKER SUMMER CAMP 2024 — Part Thirteen: BSides Las Vegas 2024

HACKER SUMMER CAMP 2024 — Part Fourteen: Black Hat USA 2024

HACKER SUMMER CAMP 2024 — Part Fifteen: DEFCON 32

HACKER SUMMER CAMP 2024 — Part Sixteen: USENIX Security Trifecta 2024

HACKER SUMMER CAMP 2024 — Part Seventeen: HackCon 2024

HACKER SUMMER CAMP 2024 — Part Eighteen: SIGS, EVENTS & PARTIES

USENIX 33RD SECURITY SYMPOSIUM + Twentieth Symposium on Usable Privacy and Security

Date: Sunday, August 11th (6:00 PM EDT) — Friday, August 16th (5:00 PM EDT)

Location: Philadelphia Marriott Downtown (1200 Filbert St, Philadelphia, PA 19107)

— Websites —

SOUPS: https://www.usenix.org/conference/soups2024

w00t: https://www.usenix.org/conference/woot24

USENIX: https://www.usenix.org/conference/usenixsecurity24

Tickets:

Platform(s): TBA

Schedule:

Live Streams:

TBA

Chat: TBA

Accessibility:

Code Of Conduct: https://www.usenix.org/conferences/coc

The USENIX Association is a 501(c)(3) nonprofit organization, dedicated to supporting the advanced computing systems communities and furthering the reach of innovative research. It was founded in 1975 under the name “Unix Users Group,” focusing primarily on the study and development of Unix and similar systems. It has since grown into a respected organization among practitioners, developers, and researchers of computer operating systems more generally. Since its founding, it has published a technical journal entitled ;login:.

USENIX’S MISSION:

  • Foster technical excellence and innovation
  • Support and disseminate research with a practical bias
  • Provide a neutral forum for discussion of technical issues
  • Encourage computing outreach into the community at large

The 33rd USENIX Security Symposium will take place on August 14–16, 2024, at the Philadelphia Marriott Downtown in Philadelphia, PA, USA. The USENIX Security Symposium brings together researchers, practitioners, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks.

The Twentieth Symposium on Usable Privacy and Security (SOUPS 2024) will take place at the Philadelphia Downtown Marriott in Philadelphia, PA, USA, on August 11–13, 2024. SOUPS brings together an interdisciplinary group of researchers and practitioners in human-computer interaction, security, and privacy.

The 18th USENIX WOOT Conference on Offensive Technologies (WOOT ’24) will take place at the Philadelphia Downtown Marriott in Philadelphia, PA, USA, on August 12–13, 2024.

A long-standing institution, this convention is focused on the security & privacy side of hacking viewed through an academic, research-focused lens. If you like to read white papers on security research, these three back-to-back conventions are for you!

PHILADELPHIA: A WRETCHED HIVE OF SCUM AND VILLAINY (FROM A NEW JERSEYAN PERSPECTIVE)

(MORE PHILLY INFO COMING SOON)

THINGS THAT ARE NOT PHILLY CHEESESTEAKS

Philadelphia Marriott Downtown

1201 Market Street
Philadelphia, PA 19107
USA
+1 215.625.2900

Hotel Reservation Deadline: Monday, July 22, 2024

USENIX has negotiated a special conference attendee room rate of US$219 plus tax for single/double occupancy for conference attendees, including in-room wireless internet. To receive this rate, book your room online or call the hotel and mention USENIX or USENIX Security ’24 when making your reservation.

The group rate is available through July 22, 2024, or until the block sells out, whichever occurs first. After this date, contact the hotel directly to inquire about room availability.

Room and Ride Sharing

USENIX maintains a Google Group to facilitate communication among attendees seeking roommates and ride sharing. You can sign up for free to find attendees with whom you can share a hotel room, taxi, shuttle, or other ride-share service. Please include “USENIX Security ‘24” in the subject line when posting a new request.

Parking

See the hotel’s website for up-to-date parking information and rates.

USENIX Conference Policies

We encourage you to learn more about USENIX’s values and how they put them into practice at their conferences.

Refunds and Cancellations

USENIX is unable to offer refunds, cancellations, or substitutions for any registrations for this event. Please contact the Conference Department at conference@usenix.org with any questions.

Questions?

Send direct queries via email:

Registration: conference@usenix.org
Membership: membership@usenix.org
Sponsorship: sponsorship@usenix.org
Student Grants: students@usenix.org
Proceedings Papers: production@usenix.org

PUBLISHED PAPERS

USENIX Papers and Proceedings

The full Proceedings published by USENIX for the symposium are available for download below. Individual papers can also be downloaded from their respective presentation pages. Copyright to the individual works is retained by the author(s).

USENIX Security ’24 Activities

To enhance your symposium experience, several attendee events are planned throughout the week. They are open to all USENIX Security ’24 attendees.

Symposium Luncheon and Test of Time Award Presentation

Thursday, 12:00 pm–1:30 pm
Franklin Hall

Student Mentoring

SOUPS

Monday & Tuesday, 12:30 pm–2:00 pm
Salon F

See the Mentoring Program page for more information.

USENIX

Tuesday, 7:00 pm–8:00 pm
Conference Rooms 407–409

USENIX Security is hosting a mentoring event for students/junior folks in computer security and privacy on Tuesday, August 13, from 7:00 pm–8:00 pm. The event will be structured as speed mentoring, where participants meet with pre-assigned mentors in 15–20 minute blocks. To take part, please sign up via this form by July 26 or until space is filled.

w00t Demo Session and Happy Hour

Tuesday, 4:30 pm–6:00 pm
Salon ABF

A cornerstone of the USENIX WOOT Conference is to bring together academics and practitioners — hackers of all sorts — to discuss and share offensive security research. To help those conversations get started, WOOT ’24 is hosting a Demo/Poster Session and Happy Hour featuring both new work as well as demos and posters from authors of accepted WOOT ’24 papers.

USENIX Symposium Reception

Wednesday, 6:00 pm–7:30 pm
Franklin Hall

Mingle with fellow attendees at the USENIX Security ’24 Reception, featuring dinner, drinks, and the chance to connect with other attendees, speakers, and symposium organizers.

USENIX Student Meet-Up

Wednesday, 7:30 pm–8:30 pm
Conference Rooms 407–409
Refreshments provided by Futurewei

All student attendees and Program Committee members are invited to join an informal mixer. Snacks and drinks will be provided.

USENIX LGBTQ+ and Allies Happy Hour

Thursday, 7:30 pm–8:30 pm
Conference Rooms 407–409
Refreshments provided by Google

We welcome LGBTQ+ attendees and allies in the security community to mingle and discuss topics relevant to the LGBTQ+ community.

USENIX TikTok Sponsor Meetup and Happy Hour

Thursday, 7:30 pm–8:30 pm
Conference Rooms 411–412

Join TikTok for a special one-hour event! This is a fantastic opportunity to network, socialize, and engage in casual discussions about Privacy Enhancing Technologies (PETs). Connect with research scientists from TikTok’s Privacy Innovation team to learn about our latest projects and ask any questions you have about our research.

Enjoy a variety of refreshments as you share insights with peers. We look forward to seeing you there and fostering meaningful conversations about the future of privacy and technology.

Birds-of-a-Feather Sessions (BoFs)

SOUPS

Registered attendees may schedule Birds-of-a-Feather sessions (BoFs) and reserve meeting rooms for them in one-hour increments via the BoFs schedule grid posted outside the badge pickup area. The Attendee Guide, which will be sent to registered attendees shortly before the event, contains more details for scheduling a BoF.

Monday, August 12

  • Conference Rooms 407–409, 6:30 pm–9:30 pm
  • Conference Rooms 411–412, 6:30 pm–9:30 pm

w00t

Monday, 6:30 pm–9:30 pm
Tuesday, 7:00 pm–9:00 pm

Registered attendees may schedule Birds-of-a-Feather sessions (BoFs) and reserve meeting rooms for them in one-hour increments via the BoFs schedule grid posted outside the badge pickup area. The Attendee Guide, which will be sent to registered attendees shortly before the event, contains more details for scheduling a BoF.

USENIX

Tuesday, 7:00 pm–9:00 pm
Wednesday, 7:30 pm–10:30 pm
Thursday, 7:30 pm–10:30 pm

Registered attendees may schedule Birds-of-a-Feather sessions (BoFs) and reserve meeting rooms for them in one-hour increments via the BoFs schedule grid posted outside the badge pickup area. The Attendee Guide, which will be sent to registered attendees shortly before the event, contains more details for scheduling a BoF.

CSET’24

CSET’24 is being sponsored by USC Information Sciences Institute (USC-ISI) in cooperation with USENIX. The workshop will be held in hybrid format on Tuesday, August 13, preceding the USENIX Security Symposium. In-person attendance will be at the Philadelphia Marriott Downtown. See the Participate page for more information.

For 16 years, the Workshop on Cyber Security Experimentation and Test (CSET) has been an important and lively space for presenting research on and discussing “meta” cybersecurity topics related to reliability, validity, reproducibility, transferability, ethics, and scalability — in practice, in research, and in education. Submissions are particularly encouraged to employ a scientific approach to cybersecurity and/or demonstrably grow community resources.

CSET was traditionally sponsored by USENIX. In 2020, USENIX Association decided to discontinue their support of all workshops (including CSET) due to pandemic effects on USENIX financial revenue. We are committed to continuing the CSET Workshop independently, and hope that we may rejoin USENIX in the future.

ATTENDING THE WORKSHOPS

  • The workshop will be held on Tuesday, August 13, preceding the USENIX Security Symposium.
  • The workshop will be a hybrid event supporting both in-person and remote participation. The registration fee for both in-person and remote participation is $295. Please register with Eventbrite to attend CSET’24.
  • In-person participation will be at Philadelphia Marriott Downtown in Philadelphia, PA, USA (the site of USENIX Security 2024).
  • CSET’24 in-person attendees can take advantage of the USENIX hotel rate. The hotel block is open until July 22 or until it sells out. There are several additional hotels in the area.
  • Remote participants will receive a Zoom link prior to the event.
  • In-person participants will be provided with morning and afternoon coffee breaks. Breakfast and lunch will be on your own. There will be a no-host dinner at a nearby restaurant on Tuesday evening (approximately 6:00 pm).

Workshop Program (August 13, 2024 — all times EDT / UTC-4)

Registration

8:00 AM — 8:25 AM

Opening Remarks

8:25 AM — 8:30 AM
Terry Benzel, USC-ISI and Deniz Gurkan, Kent State University

Keynote: “Well, It Worked on My Computer”: On Reproducibility in Security Research

8:30 AM — 9:30 AM

Computer Security is a critical area of research. As such, the artifacts within Computer Security should accelerate the pace of progress in the community. However, nearly every researcher has expressed difficulty in comparing their work against prior work. In this talk, I will discuss our lab’s work in assessing the state of reproducibility and the avenues for better reproducibility within the Computer Security community.

Speaker bio: Daniel Olszewski (Ozzy) grew up in Kalispell, MT and graduated from Carroll College in Helena, MT with a bachelor’s in Computer Science and Mathematics. He then joined the University of Florida in 2019 (to avoid the cold), where he works with Dr. Patrick Traynor. His research focuses on reproducible computer security, with extensive experience in deepfakes, machine learning, and network security. He will be on the job market in the Fall.

Coffee Break

9:30 AM — 9:45 AM

Paper Session 1 — Security Tools and Approaches (Session Chair: Deniz Gurkan, Kent State University)

9:45 AM — 11:45 AM

Hardening the Internet of Things: Toward Designing Access Control For Resource Constrained IoT Devices
Manuel Bessler (Xylem), Paul Sangster (Xylem), Radhika Upadrashta (Xylem), TJ OConnor (Florida Tech)

Accelerating Ransomware Defenses with Computational Storage Drive-Based API Call Sequence Classification
Kurt Friday (Louisiana State University), Elias Bou-Harb (Louisiana State University)

Measuring Cyber Essentials Security Policies
Sándor Bartha (University of Edinburgh), Russell Ballantine (TEK Systems Global Services), David Aspinall (University of Edinburgh)

Design and Implementation of a Coverage-Guided Ruby Fuzzer
Matt Schwager (Trail of Bits), Dominik Klemba (Trail of Bits), Josiah Dykstra (Trail of Bits)

Lunch Break

11:45 AM — 1:15 PM

Paper Session 2 — Dataset (Session Chair: Josiah Dykstra, Trail of Bits)

1:15 PM — 3:15 PM

Introducing a Comprehensive, Continuous, and Collaborative Survey of Intrusion Detection Datasets
Philipp Bönninghausen (Fraunhofer FKIE), Rafael Uetz (Fraunhofer FKIE), Martin Henze (RWTH Aachen University & Fraunhofer FKIE)

Introducing a New Alert Data Set for Multi-Step Attack Analysis
Max Landauer (AIT Austrian Institute of Technology), Florian Skopik (AIT Austrian Institute of Technology), Markus Wurzenberger (AIT Austrian Institute of Technology)

The attacks aren’t alright: Large-Scale Simulation of Fake Base Station Attacks and Detections
Thijs Heijligenberg (Radboud University), David Rupprecht (Radix Security), Katharina Kohls (Ruhr University Bochum)

GothX: a generator of customizable, legitimate and malicious IoT network traffic
Manuel Poisson (CentraleSupélec, Amossys, Univ. Rennes, Inria), Rodrigo Carnier (National Institute of Informatics), Kensuke Fukuda (National Institute of Informatics / Sokendai)

Coffee Break

3:15 PM — 3:30 PM

Paper Session 3 — Testbeds and Experimental Environments (Session Chair: Mohit Singhal, Northeastern University)

3:30 PM — 5:30 PM

NERDS: A Non-invasive Environment for Remote Developer Studies
Joseph Lewis (University of Maryland), Kelsey R. Fulton (Colorado School of Mines)

A Testbed for Operations in the Information Environment
Mary Ellen Zurko (MIT Lincoln Laboratory), Adam Tse (MIT Lincoln Laboratory), Swaroop Vattam (MIT Lincoln Laboratory), Vince Ercolani (MIT Lincoln Laboratory), Doug Stetson (MIT Lincoln Laboratory)

Towards a High Fidelity Training Environment for Autonomous Cyber Defense Agents
Sean Oesch (Oak Ridge National Laboratory), Amul Chaulagain (Oak Ridge National Laboratory), Brian Weber (Oak Ridge National Laboratory), Matthew Dixson (Oak Ridge National Laboratory), Amir Sadovnik (Oak Ridge National Laboratory), Benjamin Roberson (Oak Ridge National Laboratory), Cory Watson (Oak Ridge National Laboratory), Phillipe Austria (Oak Ridge National Laboratory)

COMEX: Deeply Observing Application Behavior on Real Android Devices
Zeya Umayya (IIITD), Dhruv Malik (IIITD), Arpit Nandi (IIITD), Akshat Kumar (IIITD), Sareena Karapoola (IIT Madras, Chennai, India), Sambuddho (IIITD)

No-Host Dinner @ Restaurant TBA

6:00 PM EDT

SOUPS WORKSHOPS

Workshop on Usable Cybersecurity and Privacy for Immersive Technologies

Wednesday, August 7 — Virtual Workshop

1:00 pm – 4:30 pm

WPTM 2024: 3rd Annual Workshop on Privacy Threat Modeling

Thursday, August 8 — Virtual Workshop

11:00 am — 2:00 pm

WIPS 2024: 9th Workshop on Inclusive Privacy and Security

Friday, August 9 — Virtual Workshop

9:00 am — 5:00 pm

Sunday, August 11

SUPA 2024: Societal and User-Centered Privacy in AI

9:00 am – 12:10 pm

Rooms 305–306

Designing Effective and Accessible Approaches for Digital Product Cybersecurity Education and Awareness

9:00 am — 1:00 pm

Rooms 303–304

GOSS: Gender, Online Safety, and Sexuality Workshop (GOSS)

2:00 pm – 6:00 pm

Rooms 303–304/Virtual

WSIW 2024: 10th Workshop on Security Information Workers

2:30 pm — 6:00 pm

Rooms 305–306

S&PEI: Workshop on Creating Engaging Security and Privacy Educational Interfaces for Educators and Families

9:00 am – 5:00 pm

Drexel University

DCG 201 TALK HIGHLIGHTS FOR THE USENIX SECURITY TRIFECTA 2024 (EDT)

This is the section where we comb through the entire list of talks across all days and list our highlights for the talks that stand out to us. Note that this does not invalidate any talks we didn’t list; in fact, we highly recommend you take a look at the full SOUPS, w00t & USENIX convention schedules beforehand and make up your own talk highlight lists. These are just the talks that, for us, had something stand out, either by being informative, unique or bizarre. (Sometimes, all three!)

Monday, August 12th

Evaluating the Usability of Differential Privacy Tools with Data Practitioners

SOUPS

Time: 9:30 AM — 11:30 AM

Authors:

Ivoline C. Ngong, Brad Stenger, Joseph P. Near, and Yuanyuan Feng, University of Vermont

Abstract:

Differential privacy (DP) has become the gold standard in privacy-preserving data analytics, but implementing it in real-world datasets and systems remains challenging. Recently developed DP tools aim to make DP implementation easier, but limited research has investigated these DP tools’ usability. Through a usability study with 24 US data practitioners with varying prior DP knowledge, we evaluated the usability of four open-source Python-based DP tools: DiffPrivLib, Tumult Analytics, PipelineDP, and OpenDP. Our study results suggest that these DP tools moderately support data practitioners’ DP understanding and implementation, and that Application Programming Interface (API) design and documentation are vital for successful DP implementation and user satisfaction. We provide evidence-based recommendations to improve DP tools’ usability to broaden DP adoption.
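The usability study will make more sense with the core mechanism in hand: all four tools in the paper wrap variations of calibrated noise addition. Here is a minimal, hand-rolled sketch of the Laplace mechanism for a counting query (this illustrates only the underlying math, not any of the four tools' actual APIs):

```python
import math
import random

def dp_count(values, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query changes by at most 1 when one record is added or
    removed, so its sensitivity is 1 and the noise scale is 1/epsilon.
    """
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise by inverse-CDF from a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return len(values) + noise

random.seed(0)
ages = [34, 29, 41, 52, 38, 45]
print(dp_count(ages, epsilon=1.0))  # noisy count near the true value 6
```

Smaller epsilon means more noise and stronger privacy; the tools in the study differ mainly in how they surface choices like this to practitioners, which is exactly what the usability findings are about.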

AI and White Hat Hacking: A Symbiotic Relationship?

w00t

Time: 9:15 AM — 10:15 AM

Perri Adams, DARPA

Abstract:

Several years after the introduction of widely-accessible, powerful generative AI, security researchers are still working to understand how it will, and will not, alter their domain. Will AI be the panacea that revolutionizes static analysis and arbitrarily generates proof-of-concept exploits, or will its applicability be limited to well-constrained subproblems, leading to incremental, but not unsubstantial, gains in the field? While too soon to offer answers, we’ll look to recent work to explore how white hat security research will be shaped by AI.

Then, having asked what AI can do for us, we’ll ask what we can do for AI. Over the last several decades, offensive security research has gone from ad hoc efforts, often unappreciated, to an essential part of the cybersecurity ecosystem, responsible for some of the most effective defensive measures we have today. As the AI industry rapidly comes into its own, what lessons can be learned from the paths forged by white hat hackers?

Ms. Perri Adams is a special assistant to the director at DARPA, where she advises stakeholders at the agency and across the U.S. government on the next generation of AI and cybersecurity technology.

Prior to this role, Adams was a program manager within DARPA’s Information Innovation Office (I2O), where, among other programs, she created the AI Cyber Challenge (AIxCC). Previously, she was also a technical advisor for research and development programs at DARPA.

Before joining the agency, she supported various U.S. government customers, including other parts of the Department of Defense, while at Boeing and Two Six Technologies.

A frequent speaker on both technical and cyber policy issues, her written work has been published by Lawfare and the Council on Foreign Relations. She has also advised and collaborated with think tanks such as the Carnegie Endowment for International Peace and Georgetown’s Center for Security and Emerging Technology.

For years, Adams has been an avid participant in cybersecurity Capture the Flag (CTF) competitions and was one of the organizers of the DEF CON CTF, the world’s premier hacking competition. Adams holds a Bachelor of Science degree in computer science from Rensselaer Polytechnic Institute and is a proud alumna of the computer security club, RPISEC.

Privacy Communication Patterns for Domestic Robots

SOUPS

Time: 11:30 AM — 12:30 PM

Authors:

Maximiliane Windl, LMU Munich and Munich Center for Machine Learning (MCML); Jan Leusmann, LMU Munich; Albrecht Schmidt, LMU Munich and Munich Center for Machine Learning (MCML); Sebastian S. Feger, LMU Munich and Rosenheim Technical University of Applied Sciences; Sven Mayer, LMU Munich and Munich Center for Machine Learning (MCML)

IAPP SOUPS Privacy Award

Abstract:

Future domestic robots will become integral parts of our homes. They will have various sensors that continuously collect data and varying locomotion and interaction capabilities, enabling them to access all rooms and physically manipulate the environment. This raises many privacy concerns. We investigate how such concerns can be mitigated, using all possibilities enabled by the robot’s novel locomotion and interaction abilities. First, we found that privacy concerns increase with advanced locomotion and interaction capabilities through an online survey (N = 90). Second, we conducted three focus groups (N = 22) to construct 86 patterns to communicate the states of microphones, cameras, and the internet connectivity of domestic robots. Lastly, we conducted a large-scale online survey (N = 1720) to understand which patterns perform best regarding trust, privacy, understandability, notification qualities, and user preference. Our final set of communication patterns will guide developers and researchers to ensure a privacy-preserving future with domestic robots.

WhatsApp with privacy? Privacy issues with IM E2EE in the Multi-device setting

w00t

Time: 10:45 AM — 12:00 NOON

Authors:

Tal A. Be’ery, Zengo

Abstract:

We recently discovered a privacy issue with Meta’s WhatsApp, the world’s most popular Instant Messaging (IM) application. Meta’s WhatsApp suffers from a privacy issue that leaks the victims’ device setup information (mobile device + up to 4 linked devices) to any user, even if blocked and not in contacts. Monitoring this information over time allows potential attackers to gather actionable intelligence about victims and their device changes (device replaced/added/removed). Additionally, message recipients can associate the message with the specific sender device that sent it. The root cause of these issues stems from the Sesame protocol, Signal’s multi-device protocol architecture, and as a result these issues are not limited to Meta’s WhatsApp but are probably relevant to most IM solutions, including the privacy-oriented Signal Messenger.
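The mechanics are easy to picture: in a Sesame-style multi-device scheme, a sender encrypts one ciphertext per recipient device, so any sender must first be able to fetch the recipient's current device list. A toy Python model of the monitoring side (all names hypothetical; this sketches the protocol-level leak, not WhatsApp's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Toy model of a multi-device account: one E2EE session per device."""
    name: str
    devices: set = field(default_factory=set)

def fetch_device_list(account: Account) -> set:
    # In Sesame-style protocols a sender must learn the recipient's current
    # device list to encrypt one copy per device -- this is the metadata the
    # talk shows any user (even a blocked one) can monitor over time.
    return set(account.devices)

def diff_devices(old: set, new: set) -> tuple:
    """What an observer learns between two snapshots: (added, removed)."""
    return new - old, old - new

victim = Account("victim", {"phone"})
snapshot = fetch_device_list(victim)
victim.devices.add("desktop-linked")  # victim links a companion device
added, removed = diff_devices(snapshot, fetch_device_list(victim))
print(added, removed)  # {'desktop-linked'} set()
```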

Achilles Heel in Secure Boot: Breaking RSA Authentication and Bitstream Recovery from Zynq-7000 SoC

w00t

Time: 10:45 AM — 12:00 NOON

Authors:

Prasanna Ravi and Arpan Jati, Temasek Laboratories, Nanyang Technological University, Singapore; Shivam Bhasin, National Integrated Centre for Evaluation (NiCE), Nanyang Technological University, Singapore

Abstract:

Secure boot forms the backbone of trusted computing by ensuring that only authenticated software is executed on the designated platform. However, implementations of secure boot can have flaws leading to critical exploits. In this paper, we highlight a critical vulnerability in the open-source First Stage Boot Loader (FSBL) of AMD-Xilinx’s flagship Zynq-7000 System on Chip (SoC) solution for embedded devices. The discovered vulnerability acts as a ‘single point of failure’ allowing complete bypass of the underlying RSA authentication during secure boot. As a result, a malicious actor can take complete control of the device and run unauthenticated/malicious applications. We demonstrate an exploit using the discovered vulnerability in the form of the first practical ‘Starbleed’ attacks on Zynq-7000 devices to recover the decrypted bitstream from an encrypted (using AES-256) boot image. The identified flaw has existed in the secure-boot software for more than 10 years. The vulnerability was responsibly disclosed to the vendor as CVE-2022-23822. The vendor thereafter patched the FSBL software and issued a design advisory. Our work therefore motivates the need for rigorous security evaluation tools to test for such trivial security vulnerabilities in software.
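The paper's actual FSBL exploit isn't reproduced in this post, but the "single point of failure" bug class is worth internalizing. A hedged Python sketch of the general pattern, with an HMAC standing in for the RSA signature check and every name hypothetical:

```python
import hashlib
import hmac

KEY = b"device-root-key"  # stand-in for the device's trusted key material

def authentic(image: bytes, tag: bytes) -> bool:
    """Image authentication check (HMAC here; RSA verification in real secure boot)."""
    return hmac.compare_digest(hmac.new(KEY, image, hashlib.sha256).digest(), tag)

def boot_broken(header: dict, image: bytes, tag: bytes) -> str:
    # Anti-pattern: a field in the attacker-supplied boot image decides
    # whether authentication runs at all, so the single check can simply
    # be switched off by malicious input.
    if header.get("auth_enabled", True) and not authentic(image, tag):
        return "halt"
    return "boot"

def boot_fixed(image: bytes, tag: bytes) -> str:
    # Hardened flow: whether to authenticate is decided by device fuses or
    # other immutable config, never by the untrusted image itself.
    return "boot" if authentic(image, tag) else "halt"

evil = b"malicious payload"
print(boot_broken({"auth_enabled": False}, evil, b"bogus"))  # boot (bypassed!)
print(boot_fixed(evil, b"bogus"))                            # halt
```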

Amplifying Threats: The Role of Multi-Sender Coordination in SMS-Timing-Based Location Inference Attacks

w00t

Time: 1:30 PM

Authors:

Evangelos Bitsikas, Northeastern University; Theodor Schnitzler, Research Center Trustworthy Data Science and Security and Maastricht University; Christina Pöpper, New York University Abu Dhabi; Aanjhan Ranganathan, Northeastern University

Abstract:

SMS-timing-based location inference attacks leverage timing side channels to ascertain a target’s location. Prior work has primarily relied on a single-sender approach, employing only one SMS attacker from a specific location to infer the victim’s whereabouts. However, this method exhibits several drawbacks. In this research, we systematically enumerate the limitations of the single-sender approach, which prompted us to explore a multi-sender strategy. Our investigation delves into the feasibility of an attacker employing multiple SMS senders towards a victim to address these limitations and introduces novel features to bolster prediction accuracy. Through exhaustive experimentation, we demonstrate that strategically positioned multiple SMS senders significantly enhance the location-inference accuracy, achieving a 142% improvement for four distinct classes of potential victim locations. This work further highlights the need to develop mitigations against SMS-timing-based location inference attacks.
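To get an intuition for why more senders help: each sender position adds another timing dimension, making the candidate location classes easier to separate. A toy nearest-centroid sketch with entirely made-up timing fingerprints (the paper's real features and models are far more sophisticated):

```python
import math

# Hypothetical training data: mean SMS delivery-report round-trip times in
# seconds, one component per attacker sender position. Each extra sender
# adds a dimension, which is what makes the classes easier to separate.
FINGERPRINTS = {
    "home":   (2.1, 3.4, 5.0),
    "office": (2.8, 2.2, 4.1),
    "gym":    (3.9, 3.1, 2.6),
    "abroad": (6.5, 6.8, 7.2),
}

def infer_location(timings) -> str:
    """Guess the victim's location class by nearest centroid."""
    return min(FINGERPRINTS, key=lambda loc: math.dist(FINGERPRINTS[loc], timings))

print(infer_location((2.7, 2.3, 4.0)))  # office
```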

Batman Hacked My Password: A Subtitle-Based Analysis of Password Depiction in Movies

SOUPS

Time: 2:00 PM — 3:00 PM

Authors:

Maike M. Raphael, Leibniz University Hannover; Aikaterini Kanta, University of Portsmouth; Rico Seebonn and Markus Dürmuth, Leibniz University Hannover; Camille Cobb, University of Illinois Urbana-Champaign

Abstract:

Password security is and will likely remain an issue that non-experts have to deal with. It is therefore important that they understand the criteria of secure passwords and the characteristics of good password behavior. Related literature indicates that people often acquire knowledge from media such as movies, which influences their perceptions about cybersecurity including their mindset about passwords. We contribute a novel approach based on subtitles and an analysis of the depiction of passwords and password behavior in movies. We scanned subtitles of 97,709 movies from 1960 to 2022 for password appearance and analyzed resulting scenes from 2,851 movies using mixed methods to show what people could learn from watching movies. Selected films were viewed for an in-depth analysis.

Among other things, we find that passwords are often portrayed as weak and easy to guess, but there are different contexts of use with very strong passwords. Password hacking is frequently depicted as unrealistically powerful, potentially leading to a sense of helplessness and futility of security efforts. In contrast, password guessing is shown as quite realistic and with a lower (but still overestimated) success rate. There appears to be a lack of best practices as password managers and multi-factor authentication are practically non-existent.
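The corpus-scanning idea itself is simple enough to sketch. A minimal, hypothetical version of the subtitle filter (the paper's real pipeline over 97,709 movies is not published in this post, and these subtitle lines are invented):

```python
import re

# Toy subtitle lines; the paper scanned subtitle files for 97,709 movies.
subtitles = [
    "00:12:01 What's the password?",
    "00:12:04 The passcode is 'swordfish', obviously.",
    "00:45:10 I cracked his password in ten seconds.",
    "01:02:33 Nothing to see here.",
]

# Match "password", "passcode", "passphrase" (plus plurals, optional space).
PASSWORD_RE = re.compile(r"\bpass\s?(word|code|phrase)s?\b", re.IGNORECASE)

hits = [line for line in subtitles if PASSWORD_RE.search(line)]
print(len(hits))  # 3
```

Scenes flagged this way would then be watched and coded by hand, which is the mixed-methods part of the study.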

Can Johnny be a whistleblower? A qualitative user study of a social authentication Signal extension in an adversarial scenario

SOUPS

Time: 2:00 PM — 3:00 PM

Authors:

Maximilian Häring and Julia Angelika Grohs, University of Bonn; Eva Tiefenau, Fraunhofer FKIE; Matthew Smith, University of Bonn and Fraunhofer FKIE; Christian Tiefenau, University of Bonn

Abstract:

To achieve a higher level of protection against person-in-the-middle attacks when using common chat apps with end-to-end encryption, each chat partner can verify the other party’s key material via an out-of-band channel. This procedure of verifying the key material is called an authentication ceremony (AC) and can consist of, e.g., comparing textual representations, scanning QR codes, or using third party social accounts. In the latter, a user can establish trust by proving that they have access to a particular social media account. A study has shown that such social authentication’s usability can be very good; however, the study focused exclusively on secure cases, i.e., the authentication ceremonies were never attacked. To evaluate whether social authentication remains usable and secure when attacked, we implemented an interface for a recently published social authentication protocol called SOAP. We developed a study design to compare authentication ceremonies, conducted a qualitative user study with an attack scenario, and compared social authentication to textual and QR code authentication ceremonies. The participants took on the role of whistleblowers and were tasked with verifying the identities of journalists. In a pilot study, three out of nine participants were caught by the government due to SOAP, but with an improved interface, this number was reduced to one out of 18 participants. Our results indicate that social authentication can lead to more secure behavior compared to more traditional authentication ceremonies and that the scenario motivated participants to reason about their decisions.

SOUPS Lightning Talks Monday

3:00 PM — 3:30 PM

Evolving Landscape of Disinformation — Lessons Learned So Far!

Saqib Hakak, University of New Brunswick

From Laughter to Concern: Exploring Conversations about Deepfakes on Reddit — Trends and Sentiments

Harshitha B. Nagaraj and Rahul G. K. Kiran, Rochester Institute of Technology

Media Portrayals of Student Privacy in Higher Education: A 2013–2023 Review

Min Cheong Kim, University of Maryland College Park

Authentication UX Design Opportunities for Future Smart Glasses

Jocelyn Rosenberg, Meta Reality Labs

Cybersecurity Analysts’ Perception of AI Security Tools and Practical Implications

Siddharth Hirwani and John Robertson, Secureworks

Posthoc Privacy Guarantees for Collaborative Inference

Praneeth Vepakomma, Massachusetts Institute of Technology

MakeShift: Security Analysis of Shimano Di2 Wireless Gear Shifting in Bicycles

w00t

Time: 3:15 PM — 4:30 PM

Authors:

Maryam Motallebighomi, Northeastern University; Earlence Fernandes, UC San Diego; Aanjhan Ranganathan, Northeastern University

Abstract:

The bicycle industry is increasingly adopting wireless gear-shifting technology for its advantages in performance and design. In this paper, we explore the security of these systems, focusing on Shimano’s Di2 technology, a market leader in the space. Through a blackbox analysis of Shimano’s proprietary wireless protocol, we uncovered the following critical vulnerabilities: (1) A lack of mechanisms to prevent replay attacks that allows an attacker to capture and retransmit gear shifting commands; (2) Susceptibility to targeted jamming, that allows an attacker to disable shifting on a specific target bike; and (3) Information leakage resulting from the use of ANT+ communication, that allows an attacker to inspect telemetry from a target bike. Exploiting these, we conduct successful record and replay attacks that lead to unintended gear shifting that can be completely controlled by an attacker without the need for any cryptographic keys. Our experimental results show that we can perform replay attacks from up to 10 meters using software-defined radios without any amplifiers. The recorded packets can be used at any future time as long as the bike components remain paired. We also demonstrate the feasibility of targeted jamming attacks that disable gear shifting for a specific bike, meaning they are finely tuned to not affect neighboring systems. Finally, we propose countermeasures and discuss their broader implications with the goal of improving wireless communication security in cycling equipment.
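Shimano's protocol is proprietary, so here is only a hedged sketch of the standard countermeasure the paper points toward: pair each command with a strictly increasing counter and a MAC, so a captured packet cannot be re-sent later. The key, names, and packet format are all hypothetical:

```python
import hashlib
import hmac

KEY = b"shared-pairing-key"  # hypothetical per-pairing secret

def make_shift_cmd(counter: int, gear: int) -> bytes:
    """Shift command: 4-byte counter + 1-byte gear, tagged with a truncated HMAC."""
    msg = counter.to_bytes(4, "big") + bytes([gear])
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

class Derailleur:
    def __init__(self):
        self.last_counter = -1
        self.gear = 1

    def receive(self, packet: bytes) -> bool:
        msg, tag = packet[:5], packet[5:]
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            return False  # forged or corrupted packet
        counter = int.from_bytes(msg[:4], "big")
        if counter <= self.last_counter:
            return False  # counter must strictly increase: replays are dropped
        self.last_counter = counter
        self.gear = msg[4]
        return True

d = Derailleur()
pkt = make_shift_cmd(counter=1, gear=5)
print(d.receive(pkt))  # True: fresh command, shifts to gear 5
print(d.receive(pkt))  # False: an attacker replaying the capture is rejected
```

Note this addresses the replay finding but not jamming, which needs different (physical-layer) defenses.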

Engineering A Backdoored Bitcoin Wallet

w00t

Time: 3:15 PM — 4:30 PM

Authors:

Adam Scott and Sean Andersen, Block, Inc.

Abstract:

Here we describe a backdoored bitcoin hardware wallet. This wallet is a fully-functional hardware wallet, yet it implements an extra, evil functionality: the wallet owner unknowingly leaks the private seed to the attacker through a few valid bitcoin transactions. The seed is leaked exclusively through the ECDSA signatures. To steal funds, the attacker just needs to tap into the public blockchain. The attacker does not need to know (or control) any detail about the wallet deployment (such as where in the world the wallet is, or who is using it). The backdoored wallet behavior is indistinguishable from the input-output behavior of a non-backdoored hardware wallet (meaning that it is impossible to discern non-backdoored signatures from backdoored ones, and backdoored signatures are as valid and just “work” as well as regular, non-backdoored ones). The backdoor does not need to be present at wallet initialization time; it can be implanted before or after key generation (this means the backdoor can be distributed as a firmware update, and is compatible with existing bitcoin wallets). We showcase the feasibility of the backdoored wallet by providing an end-to-end implementation on the bitcoin testnet network. We leak an entire 256-bit seed in 10 signatures, and only need modest computational resources to recover the seed.
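The abstract does not reveal the exact leak construction, but a well-known way to build such a covert channel is nonce grinding: the wallet repeatedly re-signs with fresh nonces until the public r value of the ECDSA signature encodes the next chunk of the seed. The toy below is an illustrative sketch of that technique, not the authors’ implementation: a from-scratch secp256k1 ECDSA that leaks a 16-bit seed in 4-bit chunks across four perfectly valid signatures (the paper leaks 256 bits across ten).

```python
import hashlib

# secp256k1 domain parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P); k >>= 1
    return R

def sign(d, h, k):
    r = ec_mul(k, G)[0] % n
    return r, pow(k, -1, n) * (h + r * d) % n

def verify(Q, h, r, s):
    w = pow(s, -1, n)
    R = ec_add(ec_mul(h * w % n, G), ec_mul(r * w % n, Q))
    return R is not None and R[0] % n == r

seed = 0xBEEF  # toy 16-bit "wallet seed" to exfiltrate
d = int.from_bytes(hashlib.sha256(b"wallet-key").digest(), "big") % n
Q = ec_mul(d, G)

# Wallet side: grind the nonce until r's low 4 bits equal the next seed chunk.
sigs = []
for i in range(4):
    chunk = (seed >> (4 * i)) & 0xF
    h = int.from_bytes(hashlib.sha256(b"tx-%d" % i).digest(), "big") % n
    for j in range(10_000):
        k = int.from_bytes(hashlib.sha256(b"%d-%d" % (i, j)).digest(), "big") % n
        r, s = sign(d, h, k)
        if r % 16 == chunk and s != 0:
            sigs.append((h, r, s)); break

# Attacker side: read the chunks straight off the public blockchain.
recovered = 0
for i, (h, r, s) in enumerate(sigs):
    recovered |= (r % 16) << (4 * i)

assert recovered == seed                            # full seed leaked
assert all(verify(Q, h, r, s) for h, r, s in sigs)  # signatures are valid
```

Because every signature verifies normally, an observer cannot distinguish the backdoored outputs from honest ones, which is exactly the indistinguishability property the abstract claims.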

KEYNOTE: Reflecting on Twenty Years of Usable Privacy and Security

SOUPS

Time: 4:00 PM — 5:00 PM

Moderator: Patrick Gage Kelley, Google
Panelists: Lorrie Faith Cranor, Carnegie Mellon University; Simson Garfinkel, BasisTech, LLC and Harvard University; Robert Biddle, Carleton University; Mary Ellen Zurko, MIT Lincoln Laboratory; Katharina Krombholz, CISPA Helmholtz Center for Information Security

Panelist Bios:

Lorrie Faith Cranor (lorrie.cranor.org) is the Director and Bosch Distinguished Professor in Security and Privacy Technologies of CyLab and the FORE Systems University Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University. She directs the CyLab Usable Privacy and Security Laboratory (CUPS) and co-directs the Privacy Engineering masters program. In 2016 she served as Chief Technologist at the US Federal Trade Commission. She is also a co-founder of Wombat Security Technologies, Inc., a security awareness training company that was acquired by Proofpoint. She founded the Symposium On Usable Privacy and Security (SOUPS) and co-founded the Conference on Privacy Engineering Practice and Respect (PEPR). She has served on a number of boards, including the Electronic Frontier Foundation Board of Directors, the Electronic Privacy Information Center Advisory Board, the Computing Research Association Board of Directors, and the Aspen Institute Cybersecurity Group. She was elected to the ACM CHI Academy and named a Fellow of IEEE, ACM, and AAAS. She was previously a researcher at AT&T-Labs Research. She holds a doctorate in Engineering and Policy from Washington University in St. Louis. In 2012–2013 she spent her sabbatical as a fellow in the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University, where she worked on fiber arts projects, including a quilted visualization of bad passwords that was featured in Science Magazine as well as a bad passwords dress that she frequently wears when talking about her research. She plays soccer, walks to work, sews her own clothing with pockets, and tries not to embarrass her three young adult children.

Dr. Simson Garfinkel researches and writes at the intersection of AI, privacy, and digital forensics. He is a fellow of the AAAS, the ACM and the IEEE. He earned his PhD in Computer Science at MIT and a Master of Science in Journalism at Columbia University.

Robert Biddle is Professor of Computer Science and Cognitive Science at Carleton University in Ottawa, Canada. His research has always concerned human factors in computer science, drawing on principles and methods from the cognitive and social sciences. The topics addressed have ranged from programming language design to software development, and especially cybersecurity. His undergraduate studies were in Mathematics, Computer Science, and Education, and his Masters and Doctoral studies were in Computer Science. He is a dual citizen of Canada and New Zealand, and his education and academic career have spanned both countries. He has awards for research, teaching, and graduate mentorship. Robert is a Fellow of the New Zealand Computer Society and a British Commonwealth Scholar.

Mary Ellen Zurko is a technical staff member at the Massachusetts Institute of Technology (MIT) Lincoln Laboratory. She has worked in research, product prototyping and development, and has more than 20 patents. She defined the field of user-centered security in 1996, and has worked in cybersecurity for over 35 years. She was the security architect of one of IBM’s earliest clouds, and a founding member of NASEM’s Forum on Cyber Resilience. She serves as a Distinguished Expert for NSA’s Best Scientific Cybersecurity Research Paper competition, and is on the NASEM committee identifying the key Cyber Hard Problems for our nation. Her research interests include unusable security for attackers, Zero Trust architectures for government systems, security development and code security, authorization policies, high-assurance virtual machine monitors, the web, and PKI. Zurko received an S.B. and S.M. in computer science from MIT. She has been the only “Mary Ellen Zurko” on the web for over 25 years.

TUESDAY, AUGUST 13th

“It was honestly just gambling”: Investigating the Experiences of Teenage Cryptocurrency Users on Reddit

SOUPS

Time: 9:00 AM — 10:30 AM

Authors:

Elijah Bouma-Sims, Hiba Hassan, Alexandra Nisenoff, Lorrie Faith Cranor, and Nicolas Christin, Carnegie Mellon University

Abstract:

Despite fears that minors may use unregulated cryptocurrency exchanges to gain access to risky investments, little is known about the experience of underage cryptocurrency users. To learn how teenagers access digital assets and the risks they encounter while using them, we conducted a multi-stage, inductive content analysis of 1,676 posts made to teenage communities on Reddit containing keywords related to cryptocurrency. We identified 1,409 (84.0%) posts that meaningfully discussed cryptocurrency, finding that teenagers most often use accounts in their parents’ names to purchase cryptocurrencies, presumably to avoid age restrictions. Teenagers appear motivated to invest by the potential for relatively large, short-term profits, but some discussed a sense of entertainment, ideological motivation, or an interest in technology. We identified many of the same harms adult users of digital assets encountered, including investment loss, victimization by fraud, and loss of keys. We discuss the implications of our results in the context of the ongoing debates over cryptocurrency regulation.

“Violation of my body:” Perceptions of AI-generated non-consensual (intimate) imagery

SOUPS

Time: 9:00 AM — 10:30 AM

Authors:

Natalie Grace Brigham, Miranda Wei, and Tadayoshi Kohno, University of Washington; Elissa M. Redmiles, Georgetown University

Abstract:

AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media. We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them, including deepfakes portraying sexual acts. Respondents indicated strong opposition to creating and, even more so, sharing non-consensually created synthetic content, especially if that content depicts a sexual act. However, seeking out such content appeared more acceptable to some respondents. Attitudes around acceptability varied further based on the hypothetical creator’s relationship to the participant, the respondent’s gender and their attitudes towards sexual consent. This study provides initial insight into public perspectives of a growing threat and highlights the need for further research to inform social norms as well as ongoing policy conversations and technical developments in generative AI.

SOUPS Lightning Talks Tuesday

Time: 10:30 AM — 11:00 AM

Human-in-the-Loop for Secure Digital Wallets Transactions

Raja Hasnain Anwar, University of Massachusetts Amherst

Best Practices for Engaging Industry Researchers

Eun-Jeong Shin and Janice Tsai, Google

Beyond the West: Exploring the Impact and Regulation of Non-Consensual Image-Disclosure Abuse (NCIDA) in Non-Western Contexts

Amna Batool, University of Michigan

The Golden xCOMPASS: The Compass You Need to Navigate through the App-Privacy Universe!

Rahmadi Trimananda, Comcast Cybersecurity & Privacy Research

SecureCheck: Access Contracts, Negotiations, and Recommendations for Data-Sharing Minimization

Jacob Hopkins, Texas A&M University — Corpus Christi

“I would not install an app with this label”: Privacy Label Impact on Risk Perception and Willingness to Install iOS Apps

SOUPS

Time: 11:30 AM — 12:30 PM

Authors:

David G. Balash, University of Richmond; Mir Masood Ali and Chris Kanich, University of Illinois Chicago; Adam J. Aviv, The George Washington University

Abstract:

Starting December 2020, all new and updated iOS apps must display app-based privacy labels. As the first large-scale implementation of privacy nutrition labels in a real-world setting, we aim to understand how these labels affect perceptions of app behavior. Replicating the methodology of Emami-Naeini et al. [IEEE S&P ’21] in the space of IoT privacy nutrition labels, we conducted an online study in January 2023 on Prolific with n=1,505 participants to investigate the impact of privacy labels on users’ risk perception and willingness to install apps. We found that many privacy label attributes raise participants’ risk perception and lower their willingness to install an app. For example, when the app privacy label indicates that financial info will be collected and linked to their identities, participants were 15 times more likely to report increased privacy and security risks associated with the app. Likewise, when a label shows that sensitive info will be collected and used for cross-app/website tracking, participants were 304 times more likely to report a decrease in their willingness to install. However, participants had difficulty understanding privacy label jargon such as “diagnostics,” “identifiers,” “track,” and “linked.” We provide recommendations for enhancing privacy label transparency, the importance of label clarity and accuracy, and how labels can impact consumer choice when suitable alternative apps are available.

Exploiting Android’s Hardened Memory Allocator

w00t

Time: 10:40 AM — 12:00 NOON

Authors:

Philipp Mao, Elias Valentin Boschung, Marcel Busch, and Mathias Payer, EPFL

Awarded Best Paper!

Abstract:

Most memory corruptions occur on the heap. To harden userspace applications and prevent heap-based exploitation, Google has developed Scudo. Since Android 11, Scudo has replaced jemalloc as the default heap implementation for all native code on Android. Scudo mitigates exploitation attempts of common heap vulnerabilities.

We present an in-depth study of the security of Scudo on Android by analyzing Scudo’s internals and systematizing Scudo’s security measures. Based on these insights we construct two new exploitation techniques that ultimately trick Scudo into allocating a chunk at an attacker’s chosen address. These techniques demonstrate — given adequate memory corruption primitives — that an attacker can leverage Scudo to gain arbitrary memory write. To showcase the practicality of our findings, we backport an n-day vulnerability to Android 14 and use it to exploit the Android system server.

Our exploitation techniques can be used to target any application using the Scudo allocator. While one of our techniques is fixed in newer Scudo versions, the second technique will stay applicable as it is based on how Scudo handles larger chunks.

SoK: 3D Printer Firmware Attacks on Fused Filament Fabrication

w00t

Time: 1:30 PM — 2:45 PM

Authors:

Muhammad Haris Rais, Virginia State University; Muhammad Ahsan and Irfan Ahmed, Virginia Commonwealth University

Abstract:

The globalized nature of modern supply chains makes it easier for hostile actors to install malicious firmware in 3D printers. A worm similar to Stuxnet could stealthily infiltrate a printer farm used for military drones, resulting in the production of batches with a variety of defects. While cybersecurity researchers have extensively delved into the design and slicing stages of the printing process and explored physical side channels for offensive and defensive research, the domain of firmware attacks remains significantly underexplored. This study proposes a classification tree for firmware attacks, focusing on the attack goals. We further propose nine distinct firmware attacks within these categories to demonstrate and understand the impact of compromised firmware on a standard fused filament fabrication printer. The study evaluates these attacks through relevant destructive and non-destructive tests, including assessing the tensile strength of the printed parts and conducting air quality tests at the printing premises. The study further investigates the viability of forty-eight attacks, including the nine that we propose, across the 3D printing stages: the design stage (involving CAD file manipulation), the slicing stage (involving G-code file manipulation), and the printing stage (involving firmware manipulation). Drawing on our understanding of the 3D printing attack surface, we introduce an Attack Feasibility Index (AFI) to assess the feasibility of attacks at different printing stages. This systematization and examination advances the comprehension of potential 3D printing attacks and urges researchers to delve into cybersecurity strategies focused on counteracting feasible attacks at specific printing stages.

Beyond the Office Walls: Understanding Security and Shadow Security Behaviours in a Remote Work Context

SOUPS

Time: 2:00 PM — 3:00 PM

Authors:

Sarah Alromaih, University of Oxford and King Abdulaziz City for Science and Technology; Ivan Flechais, University of Oxford; George Chalhoub, University of Oxford and University College London

Abstract:

Organisational security research has primarily focused on user security behaviour within workplace boundaries, examining behaviour that complies with security policies and behaviour that does not. Here, researchers identified shadow security behaviour: where security-conscious users apply their own security practices which are not in compliance with official security policy. Driven by the growth in remote work and the increasing diversity of remote working arrangements, our qualitative research study aims to investigate the nature of security behaviours within remote work settings. Using Grounded Theory, we interviewed 20 remote workers to explore security-related practices within remote work. Our findings describe a model of personal security and how this interacts with an organisational security model in remote settings. We model how remote workers use an appraisal process to relate the personal and organisational security models, driving their security-related behaviours. Our model explains how different levels of alignment between the personal and organisational models can drive compliance, non-compliance, and shadow security behaviour in remote work settings. We discuss the implications of our findings for remote work security and highlight the importance of maintaining informal security communications for remote workers, homogenising security interactions, and adopting user experience design for remote work solutions.

w00t Lightning Talks and Closing Remarks

Time: 3:15 PM — 4:15 PM

Program Co-Chairs: Adam Doupé, Arizona State University; Alyssa Milburn, Intel

Negative Effects of Social Triggers on User Security and Privacy Behaviors

SOUPS

Time: 3:30 PM — 4:45 PM

Authors:

Lachlan Moore, Waseda University and NICT; Tatsuya Mori, Waseda University, NICT, and RIKEN AIP; Ayako A. Hasegawa, NICT

Abstract:

People make decisions while being influenced by those around them. Previous studies have shown that users often adopt security practices on the basis of advice from others and have proposed collaborative and community-based approaches to enhance user security behaviors. In this paper, we focused on the negative effects of social triggers and investigated whether risky user behaviors are socially triggered. We conducted an online survey to understand the triggers for risky user behaviors and the practices of sharing the behaviors. We found that a non-negligible percentage of participants experienced social triggers before engaging in risky behaviors. We also show that socially triggered risky behaviors are more likely to be socially shared, i.e., there are negative chains of risky behaviors. Our findings suggest that more efforts are needed to reduce negative social effects, and we propose specific approaches to accomplish this.

WEDNESDAY, AUGUST 14th (USENIX)

“Did They F***ing Consent to That?”: Safer Digital Intimacy via Proactive Protection Against Image-Based Sexual Abuse

Track 1

Time: 11:15 am–12:15 pm

Authors:

Lucy Qin, Georgetown University; Vaughn Hamilton, Max Planck Institute for Software Systems; Sharon Wang, University of Washington; Yigit Aydinalp and Marin Scarlett, European Sex Workers Rights Alliance; Elissa M. Redmiles, Georgetown University

Abstract:

As many as 8 in 10 adults share intimate content such as nude or lewd images. Sharing such content has significant benefits for relationship intimacy and body image, and can offer employment. However, stigmatizing attitudes and a lack of technological mitigations put those sharing such content at risk of sexual violence. An estimated 1 in 3 people have been subjected to image-based sexual abuse (IBSA), a spectrum of violence that includes the nonconsensual distribution or threat of distribution of consensually-created intimate content (also called NDII). In this work, we conducted a rigorous empirical interview study of 52 European creators of intimate content to examine the threats they face and how they defend against them, situated in the context of their different use cases for intimate content sharing and their choice of technologies for storing and sharing such content. Synthesizing our results with the limited body of prior work on technological prevention of NDII, we offer concrete next steps for both platforms and security & privacy researchers to work toward safer intimate content sharing through proactive protection.

Content Warning: This work discusses sexual violence, specifically, the harms of image-based sexual abuse (particularly in Sections 2 and 6).

Eye of Sauron: Long-Range Hidden Spy Camera Detection and Positioning with Inbuilt Memory EM Radiation

Track 2

Time: 11:15 am–12:15 pm

Authors:

Qibo Zhang and Daibo Liu, Hunan University; Xinyu Zhang, University of California San Diego; Zhichao Cao, Michigan State University; Fanzi Zeng, Hongbo Jiang, and Wenqiang Jin, Hunan University

Abstract:

In this paper, we present ESauron — the first proof-of-concept system that can detect diverse forms of spy cameras (i.e., wireless, wired and offline devices) and quickly pinpoint their locations. The key observation is that, for all spy cameras, the captured raw images must first be digested (e.g., encoding and compression) in the video-capture device before being transferred to the target receiver or storage medium. This digestion process takes place in an inbuilt read-write memory whose operations cause electromagnetic radiation (EMR). Specifically, the memory clock drives a variable number of switching voltage regulator activities depending on the workload, causing fluctuating currents injected into memory units, thus emitting EMR signals at the clock frequency. Whenever the visual scene changes, bursts of video data processing (e.g., video encoding) suddenly aggravate the memory workload, producing responsive EMR patterns. ESauron can detect spy cameras by intentionally stimulating scene changes and then sensing the surge of EMRs even from a considerable distance. We implemented a proof-of-concept prototype of ESauron by carefully designing techniques to sense and differentiate memory EMRs, assert the existence of spy cameras, and pinpoint their locations. Experiments with 50 camera products show that ESauron can detect all spy cameras with an accuracy of 100% after only 4 stimuli, the detection range can exceed 20 meters even in the presence of blockages, and all spy cameras can be accurately located.
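A toy version of the stimulus-response idea, on purely synthetic numbers: flash a light at known times, then compare EMR power just after each stimulus against power just before it. All signal values, window sizes, and thresholds below are invented for illustration; the real system senses RF emissions at the memory clock frequency.

```python
import random

random.seed(0)

def emr_trace(stimuli, length=200, camera_present=True):
    """Simulated EMR power trace: baseline noise, plus an encoding burst
    shortly after each induced scene change when a camera is present."""
    trace = [random.gauss(1.0, 0.1) for _ in range(length)]
    if camera_present:
        for t in stimuli:
            for dt in range(1, 6):   # video-encoding burst after the stimulus
                trace[t + dt] += 3.0
    return trace

def camera_detected(trace, stimuli, window=5, factor=2.0):
    """Declare a camera if post-stimulus EMR power clearly exceeds
    pre-stimulus power, averaged over all stimuli."""
    post = [trace[t + dt] for t in stimuli for dt in range(1, window + 1)]
    pre  = [trace[t - dt] for t in stimuli for dt in range(1, window + 1)]
    return sum(post) / len(post) > factor * (sum(pre) / len(pre))

stimuli = [50, 100, 150]   # times at which we flash a light / change the scene
assert camera_detected(emr_trace(stimuli, camera_present=True), stimuli)
assert not camera_detected(emr_trace(stimuli, camera_present=False), stimuli)
```

Correlating the surge with attacker-chosen stimulus times is what lets the approach separate a hidden camera from unrelated EMR sources in the room.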

In Wallet We Trust: Bypassing the Digital Wallets Payment Security for Free Shopping

Track 1

Time: 1:45 pm–2:45 pm

Authors:

Raja Hasnain Anwar, University of Massachusetts Amherst; Syed Rafiul Hussain, Pennsylvania State University; Muhammad Taqi Raza, University of Massachusetts Amherst

This paper is currently under embargo, but the paper abstract is available now. The final paper PDF will be available on the first day of the conference.

Abstract:

Digital wallets are a new form of payment technology that provides a secure and convenient way of making contactless payments through smart devices. In this paper, we study the security of financial transactions made through digital wallets, focusing on the authentication, authorization, and access control security functions. We find that the digital payment ecosystem supports decentralized authority delegation, which is susceptible to a number of attacks. First, an attacker adds the victim’s bank card into their (attacker’s) wallet by exploiting the authentication method agreement procedure between the wallet and the bank. Second, they exploit the unconditional trust between the wallet and the bank, and bypass the payment authorization. Third, they create a trap door through different payment types and violate the access control policy for the payments. The implications of these attacks are serious: the attacker can make purchases of arbitrary amounts using the victim’s bank card, despite these cards being locked and reported to the bank as stolen by the victim. We validate these findings in practice over major US banks (notably Chase, AMEX, Bank of America, and others) and three digital wallet apps (ApplePay, GPay, and PayPal). We have disclosed our findings to all the concerned parties. Finally, we propose remedies for fixing the design flaws to avoid these and other similar attacks.
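The “unconditional trust” flaw can be modeled in a few lines: if the bank authenticates only the long-lived wallet token issued at card-add time, then locking the underlying card never affects wallet-initiated payments. The `Bank` class below is entirely hypothetical (no real bank’s API), included only to make the broken trust chain, and the obvious fix, concrete.

```python
class Bank:
    def __init__(self):
        self.locked = set()
        self.tokens = {}   # card -> long-lived wallet token

    def add_card(self, card, wallet_id):
        # authentication-method "agreement": the weakest accepted factor wins,
        # e.g. mere knowledge of the card number suffices to enroll it
        token = "tok:" + card + ":" + wallet_id
        self.tokens[card] = token
        return token

    def lock(self, card):
        self.locked.add(card)

    def authorize(self, card, token, amount, check_lock=False):
        if self.tokens.get(card) != token:
            return False
        # flaw: by default only the wallet token is checked; the lock on the
        # underlying card is never consulted for wallet-initiated payments
        if check_lock and card in self.locked:
            return False   # the obvious fix: re-check card state per payment
        return True

bank = Bank()
tok = bank.add_card("victim-card", "attacker-phone")  # attacker enrolls victim's card
bank.lock("victim-card")                              # victim reports the card stolen
assert bank.authorize("victim-card", tok, 500)        # payment still authorized
assert not bank.authorize("victim-card", tok, 500, check_lock=True)
```

The sketch mirrors the abstract’s claim that purchases succeed even on cards reported stolen: the authorization path and the card-state path simply never meet.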

Towards Privacy-Preserving Social-Media SDKs on Android

Track 3

Time: 1:45 pm–2:45 pm

Authors:

Haoran Lu, Yichen Liu, Xiaojing Liao, and Luyi Xing, Indiana University Bloomington

Abstract:

Integration of third-party SDKs are essential in the development of mobile apps. However, the rise of in-app privacy threat against mobile SDKs — called cross-library data harvesting (XLDH), targets social media/platform SDKs (called social SDKs) that handles rich user data. Given the widespread integration of social SDKs in mobile apps, XLDH presents a significant privacy risk, as well as raising pressing concerns regarding legal compliance for app developers, social media/platform stakeholders, and policymakers. The emerging XLDH threat, coupled with the increasing demand for privacy and compliance in line with societal expectations, introduces unique challenges that cannot be addressed by existing protection methods against privacy threats or malicious code on mobile platforms. In response to the XLDH threats, in our study, we generalize and define the concept of privacy-preserving social SDKs and their in-app usage, characterize fundamental challenges for combating the XLDH threat and ensuring privacy in design and utilizaiton of social SDKs. We introduce a practical, clean-slate design and end-to-end systems, called PESP, to facilitate privacy-preserving social SDKs. Our thorough evaluation demonstrates its satisfactory effectiveness, performance overhead and practicability for widespread adoption.

Can I Hear Your Face? Pervasive Attack on Voice Authentication Systems with a Single Face Image

Track 1

Time: 3:15 pm–4:15 pm

Authors:

Nan Jiang, Bangjie Sun, and Terence Sim, National University of Singapore; Jun Han, KAIST

Abstract:

We present Foice, a novel deepfake attack against voice authentication systems. Foice generates a synthetic voice of the victim from just a single image of the victim’s face, without requiring any voice sample. This synthetic voice is realistic enough to fool commercial authentication systems. Since face images are generally easier to obtain than voice samples, Foice effectively makes it easier for an attacker to mount large-scale attacks. The key idea lies in learning the partial correlation between face and voice features and adding to that a face-independent voice feature sampled from a Gaussian distribution. We demonstrate the effectiveness of Foice with a comprehensive set of real-world experiments involving ten offline participants and an online dataset of 1029 unique individuals. By evaluating eight state-of-the-art systems, including WeChat’s Voiceprint and Microsoft Azure, we show that all these systems are vulnerable to the Foice attack.

Rethinking the Security Threats of Stale DNS Glue Records

Track 4

Time: 3:15 pm–4:15 pm

Authors:

Yunyi Zhang, National University of Defense Technology and Tsinghua University; Baojun Liu, Tsinghua University; Haixin Duan, Tsinghua University, Zhongguancun Laboratory, and Quan Cheng Laboratory; Min Zhang, National University of Defense Technology; Xiang Li, Tsinghua University; Fan Shi and Chengxi Xu, National University of Defense Technology; Eihal Alowaisheq, King Saud University

This paper is currently under embargo. The final paper PDF and abstract will be available on the first day of the conference.

Guardians of the Galaxy: Content Moderation in the InterPlanetary File System

Track 1

Time: 4:30 pm–5:30 pm

Authors:

Saidu Sokoto, City, University of London; Leonhard Balduf, TU Darmstadt; Dennis Trautwein, University of Göttingen; Yiluo Wei and Gareth Tyson, Hong Kong Univ. of Science & Technology (GZ); Ignacio Castro, Queen Mary, University of London; Onur Ascigil, Lancaster University; George Pavlou, University College London; Maciej Korczyński, Univ. Grenoble Alpes; Björn Scheuermann, TU Darmstadt; Michał Król, City, University of London

Abstract:

The InterPlanetary File System (IPFS) is one of the largest platforms in the growing “Decentralized Web”. The increasing popularity of IPFS has attracted large volumes of users and content. Unfortunately, some of this content could be considered “problematic”. Content moderation is always hard. With a completely decentralized infrastructure and administration, content moderation in IPFS is even more difficult. In this paper, we examine this challenge. We identify, characterize, and measure the presence of problematic content in IPFS (e.g. content subject to takedown notices). Our analysis covers 368,762 files. We analyze the complete content moderation process, including how these files are flagged and who hosts and retrieves them. We also measure the efficacy of the process. We analyze content submitted to denylists, showing that notable volumes of problematic content are served, and that the lack of a centralized approach facilitates its spread. While we identify fast reactions to takedown requests, we also test the resilience of multiple gateways and show that existing means to filter problematic content can be circumvented. We end by proposing improvements to content moderation that result in a 227% increase in the detection of phishing content and reduce the average time to filter such content by 43%.

A Binary-level Thread Sanitizer or Why Sanitizing on the Binary Level is Hard

Track 6

Time: 4:30 pm–5:30 pm

Authors:

Joschua Schilling, CISPA Helmholtz Center for Information Security; Andreas Wendler, Friedrich-Alexander-Universität Erlangen-Nürnberg; Philipp Görz, Nils Bars, Moritz Schloegel, and Thorsten Holz, CISPA Helmholtz Center for Information Security

Abstract:

Dynamic software testing methods, such as fuzzing, have become a popular and effective method for detecting many types of faults in programs. While most research focuses on targets for which source code is available, much of the software used in practice is only available as closed source. Testing software without having access to source code forces a user to resort to binary-only testing methods, which are typically slower and lack support for crucial features, such as advanced bug oracles in the form of sanitizers, i.e., dynamic methods to detect faults based on undefined or suspicious behavior. Almost all existing sanitizers work by injecting instrumentation at compile time, requiring access to the target’s source code. In this paper, we systematically identify the key challenges of applying sanitizers to binary-only targets. As a result of our analysis, we present the design and implementation of BINTSAN, an approach to realize the data race detector TSAN targeting binary-only Linux x86-64 targets. We systematically evaluate BINTSAN for correctness, effectiveness, and performance. We find that our approach has a runtime overhead of only 15% compared to source-based TSAN. Compared to existing binary solutions, our approach has better performance (up to 5.0× performance improvement) and precision, while preserving compatibility with the compiler-based TSAN.
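For readers unfamiliar with what TSAN actually computes, the happens-before analysis it (and hence BINTSAN) implements can be sketched with vector clocks: two conflicting accesses race unless one’s clock is ordered before the other’s via synchronization. The detector below is a drastically simplified model (last-write-only, no read tracking, no shadow memory), not BINTSAN’s binary-level machinery.

```python
class RaceDetector:
    """Toy happens-before data-race detector with per-thread vector clocks."""
    def __init__(self):
        self.clock = {}        # thread -> vector clock
        self.locks = {}        # lock -> clock published at last release
        self.last_write = {}   # var -> (thread, clock snapshot at write)
        self.races = []

    def _c(self, t):
        return self.clock.setdefault(t, {t: 1})

    @staticmethod
    def _happens_before(a, b):
        return all(v <= b.get(k, 0) for k, v in a.items())

    def write(self, t, var):
        me = self._c(t)
        prev = self.last_write.get(var)
        if prev and prev[0] != t and not self._happens_before(prev[1], me):
            self.races.append((var, prev[0], t))   # concurrent conflicting writes
        self.last_write[var] = (t, dict(me))
        me[t] += 1

    def release(self, t, lock):
        me = self._c(t)
        self.locks[lock] = dict(me)                # publish my clock
        me[t] += 1

    def acquire(self, t, lock):
        me = self._c(t)
        for k, v in self.locks.get(lock, {}).items():
            me[k] = max(me.get(k, 0), v)           # join with releaser's clock

d = RaceDetector()
d.write("A", "x"); d.release("A", "m")   # A writes x, then releases lock m
d.acquire("B", "m"); d.write("B", "x")   # B's write is ordered after A's: no race
d.write("A", "y"); d.write("B", "y")     # unsynchronized writes to y: race
assert d.races == [("y", "A", "B")]
```

A compile-time TSAN inserts these clock updates via instrumentation; the paper’s core difficulty is recovering enough structure to do the same on a stripped binary.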

THURSDAY, AUGUST 15th

“But they have overlooked a few things in Afghanistan:” An Analysis of the Integration of Biometric Voter Verification in the 2019 Afghan Presidential Elections

Track 1

Time: 9:00 am–10:15 am

Authors:

Kabir Panahi and Shawn Robertson, University of Kansas; Yasemin Acar, Paderborn University; Alexandru G. Bardas, University of Kansas; Tadayoshi Kohno, University of Washington; Lucy Simko, The George Washington University

This paper is currently under embargo. The final paper PDF and abstract will be available on the first day of the conference.

Understanding How to Inform Blind and Low-Vision Users about Data Privacy through Privacy Question Answering Assistants

Track 1

Time: 9:00 am–10:15 am

Authors:

Yuanyuan Feng, University of Vermont; Abhilasha Ravichander, Allen Institute for Artificial Intelligence; Yaxing Yao, Virginia Tech; Shikun Zhang and Rex Chen, Carnegie Mellon University; Shomir Wilson, Pennsylvania State University; Norman Sadeh, Carnegie Mellon University

Abstract:

Understanding and managing data privacy in the digital world can be challenging for sighted users, let alone blind and low-vision (BLV) users. There is limited research on how BLV users, who have special accessibility needs, navigate data privacy, and how potential privacy tools could assist them. We conducted an in-depth qualitative study with 21 US BLV participants to understand their data privacy risk perception and mitigation, as well as their information behaviors related to data privacy. We also explored BLV users’ attitudes towards potential privacy question answering (Q&A) assistants that enable them to better navigate data privacy information. We found that BLV users face heightened security and privacy risks, but their risk mitigation is often insufficient. They do not necessarily seek data privacy information but clearly recognize the benefits of a potential privacy Q&A assistant. They also expect privacy Q&A assistants to possess cross-platform compatibility, support multi-modality, and demonstrate robust functionality. Our study sheds light on BLV users’ expectations when it comes to usability, accessibility, trust and equity issues regarding digital data privacy.

Intellectual Property Exposure: Subverting and Securing Intellectual Property Encapsulation in Texas Instruments Microcontrollers

Track 2

Time: 9:00 am–10:15 am

Authors:

Marton Bognar, Cas Magnus, Frank Piessens, and Jo Van Bulck, DistriNet, KU Leuven

Abstract:

In contrast to high-end computing platforms, specialized memory protection features in low-end embedded devices remain relatively unexplored despite the ubiquity of these devices. Hence, we perform an in-depth security evaluation of the state-of-the-art Intellectual Property Encapsulation (IPE) technology found in widely used off-the-shelf, Texas Instruments MSP430 microcontrollers. While we find IPE to be promising, bearing remarkable similarities with trusted execution environments (TEEs) from research and industry, we reveal several fundamental protection shortcomings in current IPE hardware. We show that many software-level attack techniques from the academic TEE literature apply to this platform, and we discover a novel attack primitive, dubbed controlled call corruption, exploiting a vulnerability in the IPE access control mechanism. Our practical, end-to-end attack scenarios demonstrate a complete bypass of confidentiality and integrity guarantees of IPE-protected programs.

Informed by our systematic attack study on IPE and root-cause analysis, also considering related research prototypes, we propose lightweight hardware changes to secure IPE. Furthermore, we develop a prototype framework that transparently implements software responsibilities to reduce information leakage and repurposes the onboard memory protection unit to reinstate IPE security guarantees on currently vulnerable devices with low performance overheads.

Snowflake, a censorship circumvention system using temporary WebRTC proxies

Track 1

Time: 10:45 am–12:00 pm

Authors:

Cecylia Bocovich, Tor Project; Arlo Breault, Wikimedia Foundation; David Fifield and Serene, unaffiliated; Xiaokang Wang, Tor Project

Abstract:

Snowflake is a system for circumventing Internet censorship. Its blocking resistance comes from the use of numerous, ultra-light, temporary proxies (“snowflakes”), which accept traffic from censored clients using peer-to-peer WebRTC protocols and forward it to a centralized bridge. The temporary proxies are simple enough to be implemented in JavaScript, in a web page or browser extension, making them much cheaper to run than a traditional proxy or VPN server. The large and changing pool of proxy addresses resists enumeration and blocking by a censor. The system is designed with the assumption that proxies may appear or disappear at any time. Clients discover proxies dynamically using a secure rendezvous protocol. When an in-use proxy goes offline, its client switches to another on the fly, invisibly to upper network layers.

Snowflake has been deployed with success in Tor Browser and Orbot for several years. It has been a significant circumvention tool during high-profile network disruptions, including in Russia in 2021 and Iran in 2022. In this paper, we explain the composition of Snowflake’s many parts, give a history of deployment and blocking attempts, and reflect on implications for circumvention generally.
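The on-the-fly proxy switching described above can be sketched in a few lines. This is a toy model under stated assumptions; the class and method names are illustrative, not the real Snowflake client code:

```python
class SnowflakeClientSketch:
    """Toy model of Snowflake-style failover: when the in-use temporary
    proxy disappears, rendezvous for a fresh one and retry, invisibly
    to the upper network layers."""

    def __init__(self, rendezvous):
        self.rendezvous = rendezvous   # callable returning a new proxy
        self.proxy = rendezvous()

    def send(self, chunk, max_retries=10):
        for _ in range(max_retries):
            try:
                return self.proxy.forward(chunk)
            except ConnectionError:
                # The proxy went offline mid-session: pick up another
                # "snowflake" and resend the same chunk.
                self.proxy = self.rendezvous()
        raise RuntimeError("no working proxy found")
```

The caller never sees the interruption, which mirrors how the real client keeps the Tor session alive across proxy churn.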

Remote Keylogging Attacks in Multi-user VR Applications

Track 2

Time: 10:45 am–12:00 pm

Authors:

Zihao Su, University of California, Santa Barbara; Kunlin Cai, University of California, Los Angeles; Reuben Beeler, Lukas Dresel, Allan Garcia, and Ilya Grishchenko, University of California, Santa Barbara; Yuan Tian, University of California, Los Angeles; Christopher Kruegel and Giovanni Vigna, University of California, Santa Barbara

Abstract:

As Virtual Reality (VR) applications grow in popularity, they have bridged distances and brought users closer together. However, with this growth, there have been increasing concerns about security and privacy, especially related to the motion data used to create immersive experiences. In this study, we highlight a significant security threat in multi-user VR applications, which are applications that allow multiple users to interact with each other in the same virtual space. Specifically, we propose a remote attack that utilizes the avatar rendering information collected from an adversary’s game clients to extract user-typed secrets like credit card information, passwords, or private conversations. We do this by (1) extracting motion data from network packets, and (2) mapping motion data to keystroke entries. We conducted a user study to verify the attack’s effectiveness, in which our attack successfully inferred 97.62% of the keystrokes. In addition, we performed an experiment to underline that our attack is practical, confirming its effectiveness even when (1) there are multiple users in a room, and (2) the attacker cannot see the victims. Moreover, we replicated our proposed attack on four applications to demonstrate the generalizability of the attack. These results underscore the severity of the vulnerability and its potential impact on millions of VR social platform users.
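The second step of the pipeline, mapping recovered motion data to keystrokes, can be illustrated with a toy nearest-key lookup. The virtual keyboard coordinates below are invented for the example and are not the paper's model:

```python
import math

# Hypothetical virtual keyboard layout: key -> (x, y) centre in metres.
KEY_CENTERS = {
    "q": (0.00, 0.0), "w": (0.02, 0.0), "e": (0.04, 0.0),
    "a": (0.01, -0.02), "s": (0.03, -0.02), "d": (0.05, -0.02),
}

def infer_key(tip_x, tip_y):
    """Map an avatar fingertip position (recovered from rendering
    packets) to the nearest virtual key centre."""
    return min(KEY_CENTERS,
               key=lambda k: math.dist((tip_x, tip_y), KEY_CENTERS[k]))

def infer_keystrokes(trajectory):
    """Toy decoder: treat every sampled fingertip position as a press.
    The real attack must also segment presses from hover motion."""
    return "".join(infer_key(x, y) for x, y in trajectory)
```

Even this crude decoder shows why broadcasting full-skeleton rendering data to every client in a room is a leak: noisy positions still land on the right key most of the time.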

It Doesn’t Look Like Anything to Me: Using Diffusion Model to Subvert Visual Phishing Detectors

Track 5

Time: 10:45 am–12:00 pm

Authors:

Qingying Hao and Nirav Diwan, University of Illinois at Urbana-Champaign; Ying Yuan, University of Padua; Giovanni Apruzzese, University of Liechtenstein; Mauro Conti, University of Padua; Gang Wang, University of Illinois at Urbana-Champaign

Abstract:

Visual phishing detectors rely on website logos as the invariant identity indicator to detect phishing websites that mimic a target brand’s website. Despite their promising performance, the robustness of these detectors is not yet well understood. In this paper, we challenge the invariant assumption of these detectors and propose new attack tactics, LogoMorph, with the ultimate purpose of enhancing these systems. LogoMorph is rooted in a key insight: users can neglect large visual perturbations on the logo as long as the perturbation preserves the original logo’s semantics. We devise a range of attack methods to create semantic-preserving adversarial logos, yielding phishing webpages that bypass state-of-the-art detectors. For text-based logos, we find that using alternative fonts can help to achieve the attack goal. For image-based logos, we find that an adversarial diffusion model can effectively capture the style of the logo while generating new variants with large visual differences. Practically, we evaluate LogoMorph with white-box and black-box experiments and test the resulting adversarial webpages against various visual phishing detectors end-to-end. User studies (n = 150) confirm the effectiveness of our adversarial phishing webpages on end users (with a detection rate of 0.59, barely better than a coin toss). We also propose and evaluate countermeasures, and share our code.

Pixel Thief: Exploiting SVG Filter Leakage in Firefox and Chrome

Track 2

Time: 1:30 pm–2:45 pm

Authors:

Sioli O’Connell, The University of Adelaide; Lishay Aben Sour and Ron Magen, Ben Gurion University of the Negev; Daniel Genkin, Georgia Institute of Technology; Yossi Oren, Ben-Gurion University of the Negev and Intel Corporation; Hovav Shacham, UT Austin; Yuval Yarom, Ruhr University Bochum

Abstract:

Web privacy is challenged by pixel-stealing attacks, which allow attackers to extract content from embedded iframes and to detect visited links. To protect against multiple pixel-stealing attacks that exploited timing variations in SVG filters, browser vendors repeatedly adapted their implementations to eliminate those variations. In this work, we demonstrate that past efforts are still not sufficient.

We show how web-based attackers can mount cache-based side-channel attacks to monitor data-dependent memory accesses in filter rendering functions. We identify conditions under which browsers elect the non-default CPU implementation of SVG filters, and develop techniques for achieving access to the high-resolution timers required for cache attacks. We then develop efficient techniques to use the pixel-stealing attack for text recovery from embedded pages and to achieve high-speed history sniffing. To the best of our knowledge, our attack is the first to leak multiple bits per screen refresh, achieving an overall rate of 267 bits per second.
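A quick sanity check on the reported rate: assuming a typical 60 Hz display refresh (my assumption, not stated in the abstract), 267 bits per second implies roughly 4 to 5 bits leaked per screen refresh, which is what makes the "multiple bits per refresh" claim concrete:

```python
refresh_hz = 60            # assumed typical display refresh rate
reported_rate = 267        # bits per second, from the abstract
bits_per_refresh = reported_rate / refresh_hz   # roughly 4.45
```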

Mempool Privacy via Batched Threshold Encryption: Attacks and Defenses

Track 4

Time: 1:30 pm–2:45 pm

Authors:

Arka Rai Choudhuri, NTT Research; Sanjam Garg, Julien Piet, and Guru-Vamsi Policharla, University of California, Berkeley

Abstract:

With the rising popularity of DeFi applications, it is important to protect regular users of these platforms against large, well-resourced parties that can engage in market-manipulation strategies such as frontrunning and backrunning. Moreover, there are many situations (such as recovery of funds from vulnerable smart contracts) where a user may not want to reveal their transaction until it has been executed. As such, it is clear that preserving the privacy of transactions in the mempool is an important goal.

In this work, we focus on achieving mempool transaction privacy through a new primitive that we term batched-threshold encryption, which is a variant of threshold encryption with strict efficiency requirements to better model the needs of resource-constrained environments such as blockchains. Unlike the naive use of threshold encryption, which requires communication proportional to O(nB) to decrypt B transactions with a committee of n parties, our batched-threshold encryption scheme only needs O(n) communication. We additionally discuss pitfalls in prior approaches that use (vanilla) threshold encryption for mempool privacy.

To show that our scheme is concretely efficient, we implement it and find that transactions can be encrypted in under 6 ms, independent of committee size, and the communication required to decrypt an entire batch of B transactions is 80 bytes per party, independent of the number of transactions B, making it an attractive choice when communication is very expensive. If deployed on Ethereum, which processes close to 500 transactions per block, it takes close to 2.8 s for each committee member to compute a partial decryption and under 3.5 s to decrypt all transactions for a block in single-threaded mode.
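The communication savings are easy to quantify. A back-of-the-envelope comparison, hypothetically charging the naive scheme the same 80-byte share per party per transaction:

```python
def naive_bytes(n, B, share=80):
    """Naive threshold decryption: every party sends a share per
    transaction, so communication grows as O(n * B)."""
    return n * B * share

def batched_bytes(n, share=80):
    """Batched-threshold decryption: 80 bytes per party per *batch*,
    independent of the number of transactions B, i.e. O(n)."""
    return n * share

# Example: a 128-party committee decrypting a 500-transaction block
# (roughly an Ethereum block, per the abstract).
n, B = 128, 500
```

With these illustrative numbers the naive approach costs 5,120,000 bytes against 10,240 for the batched scheme, a 500x reduction, matching the B-fold gap the asymptotics predict.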

RustSan: Retrofitting AddressSanitizer for Efficient Sanitization of Rust

Track 6

Time: 1:30 pm–2:45 pm

Authors:

Kyuwon Cho, Jongyoon Kim, Kha Dinh Duy, Hajeong Lim, and Hojoon Lee, Sungkyunkwan University

This paper is currently under embargo. The final paper PDF and abstract will be available on the first day of the conference.

Election Eligibility with OpenID: Turning Authentication into Transferable Proof of Eligibility

Track 7

Time: 1:30 pm–2:45 pm

Authors:

Véronique Cortier, Alexandre Debant, Anselme Goetschmann, and Lucca Hirschi, Université de Lorraine, CNRS, Inria, LORIA, France

Abstract:

Eligibility checks are often abstracted away or omitted in voting protocols, leading to situations where the voting server can easily stuff the ballot box. One reason for this is the difficulty of bootstrapping the authentication material for voters without trusting the voting server.

In this paper, we propose a new protocol that solves this problem by building on OpenID, a widely deployed authentication protocol. Instead of using it as a standard authentication means, we turn it into a mechanism that delivers transferable proofs of eligibility. Using zk-SNARK proofs, we show that this can be done without revealing any compromising information, in particular, protecting everlasting privacy. Our approach remains efficient and can easily be integrated into existing protocols, as we have done for the Belenios voting protocol. We provide a full-fledged proof of concept along with benchmarks showing our protocol could be realistically used in large-scale elections.

LaserAdv: Laser Adversarial Attacks on Speech Recognition Systems

Track 2

Time: 3:15 pm–4:15 pm

Authors:

Guoming Zhang, Xiaohui Ma, Huiting Zhang, and Zhijie Xiang, Shandong University; Xiaoyu Ji, Zhejiang University; Yanni Yang, Xiuzhen Cheng, and Pengfei Hu, Shandong University

Abstract:

Audio adversarial perturbations are imperceptible to humans but can mislead machine learning models, posing a security threat to automatic speech recognition (ASR) systems. Existing methods aim to minimize perturbation values, use acoustic masking, or mimic environmental sounds to render them undetectable. However, these perturbations, being sounds in the audible frequency range, are still audibly detectable. The slow propagation and rapid attenuation of sound limit their temporal sensitivity and attack range. In this study, we propose LaserAdv, a method that employs lasers to launch adversarial attacks, thereby overcoming the aforementioned challenges due to the superior properties of lasers. In the presence of victim speech, laser adversarial perturbations are superimposed on the speech rather than simply drowning it out, so LaserAdv has higher attack efficiency and longer attack range than LightCommands. LaserAdv introduces a selective amplitude enhancement method based on time-frequency interconversion (SAE-TFI) to deal with distortion. Meanwhile, to simultaneously achieve inaudible, targeted, universal, synchronization-free (over 0.5 s), long-range, and black-box attacks in the physical world, we introduced a series of strategies into the objective function. Our experimental results show that a single perturbation can cause DeepSpeech, Whisper, and iFlytek to misinterpret any of the 12,260 voice commands as the target command with accuracies of up to 100%, 92%, and 88%, respectively. The attack distance can be up to 120 m.

VoltSchemer: Use Voltage Noise to Manipulate Your Wireless Charger

Track 2

Time: 3:15 pm–4:15 pm

Authors:

Zihao Zhan and Yirui Yang, University of Florida; Haoqi Shan, University of Florida, CertiK; Hanqiu Wang, Yier Jin, and Shuo Wang, University of Florida

Abstract:

Wireless charging is becoming an increasingly popular charging solution in portable electronic products for a more convenient and safer charging experience than conventional wired charging. However, our research identified new vulnerabilities in wireless charging systems, making them susceptible to intentional electromagnetic interference. These vulnerabilities facilitate a set of novel attack vectors, enabling adversaries to manipulate the charger and perform a series of attacks.

In this paper, we propose VoltSchemer, a set of innovative attacks that grant attackers control over commercial-off-the-shelf wireless chargers merely by modulating the voltage from the power supply. These attacks represent the first of their kind, exploiting voltage noises from the power supply to manipulate wireless chargers without necessitating any malicious modifications to the chargers themselves. The significant threats posed by VoltSchemer are substantiated by three practical attacks, where a charger can be manipulated to: control voice assistants via inaudible voice commands, damage devices being charged through overcharging or overheating, and bypass the Qi-standard-specified foreign-object-detection mechanism to damage valuable items exposed to intense magnetic fields.

We demonstrate the effectiveness and practicality of the VoltSchemer attacks with successful attacks on 9 top-selling COTS wireless chargers. Furthermore, we discuss the security implications of our findings and suggest possible countermeasures to mitigate potential threats.

SLUBStick: Arbitrary Memory Writes through Practical Software Cross-Cache Attacks within the Linux Kernel

Track 3

Time: 3:15 pm–4:15 pm

Authors:

Lukas Maar, Stefan Gast, Martin Unterguggenberger, Mathias Oberhuber, and Stefan Mangard, Graz University of Technology

Abstract:

While the number of vulnerabilities in the Linux kernel has increased significantly in recent years, most have limited capabilities, such as corrupting a few bytes in restricted allocator caches. To elevate their capabilities, security researchers have proposed software cross-cache attacks, exploiting the memory reuse of the kernel allocator. However, such cross-cache attacks are impractical due to their low success rate of only 40%, with failure scenarios often resulting in a system crash.

In this paper, we present SLUBStick, a novel kernel exploitation technique elevating a limited heap vulnerability to an arbitrary memory read-and-write primitive. SLUBStick operates in multiple stages: Initially, it exploits a timing side channel of the allocator to perform a cross-cache attack reliably. Concretely, exploiting the side-channel leakage pushes the success rate to above 99 % for frequently used generic caches. SLUBStick then exploits code patterns prevalent in the Linux kernel to convert a limited heap vulnerability into a page table manipulation, thereby granting the capability to read and write memory arbitrarily. We demonstrate the applicability of SLUBStick by systematically analyzing two Linux kernel versions, v5.19 and v6.2. Lastly, we evaluate SLUBStick with a synthetic vulnerability and 9 real-world CVEs, showcasing privilege escalation and container escape in the Linux kernel with state-of-the-art kernel defenses enabled.

Understanding the Security and Privacy Implications of Online Toxic Content on Refugees

Track 1

Time: 4:30 pm–5:30 pm

Authors:

Arjun Arunasalam, Purdue University; Habiba Farrukh, University of California, Irvine; Eliz Tekcan and Z. Berkay Celik, Purdue University

Abstract:

Deteriorating conditions in regions facing social and political turmoil have resulted in the displacement of huge populations known as refugees. Technologies such as social media have helped refugees adapt to challenges in their new homes. While prior works have investigated refugees’ computer security and privacy (S&P) concerns, refugees’ increasing exposure to toxic content and its implications have remained largely unexplored. In this paper, we answer how toxic content can influence refugees’ S&P actions, goals, and barriers, and how their experiences shape these factors. Through semi-structured interviews with refugee liaisons (n=12), focus groups (n=9, 27 participants), and an online survey (n=29) with refugees, we discover unique attack contexts (e.g., participants are targeted after responding to posts directed against refugees) and how intersecting identities (e.g., LGBTQ+, women) exacerbate attacks. In response to attacks, refugees take immediate actions (e.g., selective blocking) or adopt long-term behavioral shifts (e.g., ensuring uploaded photos are void of landmarks). These measures minimize vulnerability and discourage attacks, among other goals, while participants acknowledge barriers to measures (e.g., anonymity impedes family reunification). Our findings highlight lessons in better equipping refugees to manage toxic content attacks.

The Imitation Game: Exploring Brand Impersonation Attacks on Social Media Platforms

Track 1

Time: 4:30 pm–5:30 pm

Authors:

Bhupendra Acharya, CISPA Helmholtz Center for Information Security; Dario Lazzaro, University of Genoa; Efrén López-Morales, Texas A&M University-Corpus Christi; Adam Oest and Muhammad Saad, PayPal Inc.; Antonio Emanuele Cinà, University of Genoa; Lea Schönherr and Thorsten Holz, CISPA Helmholtz Center for Information Security

Abstract:

The rise of social media users has led to an increase in customer support services offered by brands on various platforms. Unfortunately, attackers also use this as an opportunity to trick victims through fake profiles that imitate official brand accounts. In this work, we provide a comprehensive overview of such brand impersonation attacks on social media.

We analyze the fake profile creation and user engagement processes on X, Instagram, Telegram, and YouTube and quantify their impact. Between May and October 2023, we collected 1.3 million user profiles, 33 million posts, and publicly available profile metadata, wherein we found 349,411 squatted accounts targeting 2,625 of 2,847 major international brands. Analyzing profile engagement and user creation techniques, we show that squatting profiles persistently perform various novel attacks in addition to classic abuse such as social engineering, phishing, and copyright infringement. By sharing our findings with the top 100 brands and collaborating with one of them, we further validate the real-world implications of such abuse. Our research highlights a weakness in the ability of social media platforms to protect brands and users from attacks based on username squatting. Alongside strategies such as customer education and clear indicators of trust, our detection model can be used by platforms as a countermeasure to proactively detect abusive accounts.
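A countermeasure of the kind the authors mention can be approximated with a simple handle-similarity check. This is a generic sketch, not the paper's detection model; the brand list and threshold are invented for the example:

```python
from difflib import SequenceMatcher

BRANDS = ["paypal", "netflix", "amazon"]   # illustrative target handles

def is_squatting(handle, threshold=0.8):
    """Flag a handle as a likely username-squatting candidate when it
    embeds, or is highly similar to (but not identical with), a known
    brand handle."""
    h = handle.lower().strip("_.")
    for brand in BRANDS:
        if h == brand:
            return False   # the official account itself
        if brand in h or SequenceMatcher(None, h, brand).ratio() >= threshold:
            return True
    return False
```

Real platforms would combine a check like this with profile metadata and engagement signals, since lookalike handles alone produce false positives.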

SIMurai: Slicing Through the Complexity of SIM Card Security Research

Track 3

Time: 4:30 pm–5:30 pm

Authors:

Tomasz Piotr Lisowski, University of Birmingham; Merlin Chlosta, CISPA Helmholtz Center for Information Security; Jinjin Wang and Marius Muench, University of Birmingham

This paper is currently under embargo, but the paper abstract is available now. The final paper PDF will be available on the first day of the conference.

Abstract:

SIM cards are widely regarded as trusted entities within mobile networks. But what if they were not trustworthy? In this paper, we argue that malicious SIM cards are a realistic threat, and demonstrate that they can launch impactful attacks against mobile devices and their basebands.

We design and implement SIMURAI, a software platform for security-focused SIM exploration and experimentation. At its core, SIMURAI features a flexible software implementation of a SIM. In contrast to existing SIM research tooling that typically involves physical SIM cards, SIMURAI adds flexibility by enabling deliberate violation of application-level and transmission-level behavior — a valuable asset for further exploration of SIM features and attack capabilities.

We integrate the platform into common cellular security test beds, demonstrating that smartphones can successfully connect to mobile networks using our software SIM. Additionally, we integrate SIMURAI with emulated baseband firmware and carry out a fuzzing campaign that leads to the discovery of two high-severity vulnerabilities on recent flagship smartphones. We also demonstrate how rogue carriers and attackers with physical access can trigger these vulnerabilities with ease, emphasizing the need to recognize hostile SIMs in cellular security threat models.
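The transmission-level violations that make such fuzzing possible boil down to emitting mutated, otherwise well-formed SIM traffic, something a software SIM can do but a physical card cannot. A generic byte-flipping sketch (not SIMURAI's actual API):

```python
import random

def mutate_apdu(apdu: bytes, n: int = 1) -> bytes:
    """Flip n random bytes of a SIM APDU with a nonzero XOR mask,
    producing the kind of deliberately malformed traffic a software
    SIM can inject toward the baseband."""
    b = bytearray(apdu)
    for _ in range(n):
        i = random.randrange(len(b))
        b[i] ^= random.randrange(1, 256)   # nonzero mask: always changes
    return bytes(b)
```

Feeding such mutants to an emulated baseband and watching for crashes is the standard fuzzing loop the campaign described above builds on.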

FRIDAY, AUGUST 16th

Swipe Left for Identity Theft: An Analysis of User Data Privacy Risks on Location-based Dating Apps

Track 3

Time: 9:00 am–10:15 am

Authors:

Karel Dhondt, Victor Le Pochat, Yana Dimova, Wouter Joosen, and Stijn Volckaert, DistriNet, KU Leuven

This paper is currently under embargo, but the paper abstract is available now. The final paper PDF will be available on the first day of the conference.

Abstract:

Location-based dating (LBD) apps enable users to meet new people nearby and online by browsing others’ profiles, which often contain very personal and sensitive data. We systematically analyze 15 LBD apps on the prevalence of privacy risks that can result in abuse by adversarial users who want to stalk, harass, or harm others. Through a systematic manual analysis of these apps, we assess which personal and sensitive data is shared with other users, both as (intended) data exposure and as inadvertent yet powerful leaks in API traffic that is otherwise hidden from a user, violating their mental model of what they share on LBD apps. We also show that 6 apps allow for pinpointing a victim’s exact location, enabling physical threats to users’ personal safety. All these data exposures and leaks — supported by easy account creation — enable targeted or large-scale, long-term, and stealthy profiling and tracking of LBD app users. While privacy policies acknowledge personal data processing, and a tension exists between app functionality and user privacy, significant data privacy risks remain. We recommend user control, data minimization, and API hardening as countermeasures to protect users’ privacy.

Security and Privacy Analysis of Samsung’s Crowd-Sourced Bluetooth Location Tracking System

Track 7

Time: 9:00 am–10:15 am

Authors:

Tingfeng Yu, James Henderson, Alwen Tiu, and Thomas Haines, School of Computing, The Australian National University

Abstract:

We present a detailed analysis of Samsung’s Offline Finding (OF) protocol, which is part of Samsung’s Find My Mobile system for locating Samsung mobile devices and Galaxy SmartTags. The OF protocol uses Bluetooth Low Energy (BLE) to broadcast a unique beacon for a lost device. This beacon is then picked up by nearby Samsung phones or tablets (the helper devices), which then forward the beacon and the location it was detected at, to a vendor server. The owner of a lost device can then query the server to locate their device. We examine several security and privacy related properties of the OF protocol and its implementation. These include: the feasibility of tracking an OF device through its BLE data, the feasibility of unwanted tracking of a person by exploiting the OF network, the feasibility for the vendor to de-anonymise location reports to determine the locations of the owner or the helper devices, and the feasibility for an attacker to compromise the integrity of the location reports. Our findings suggest that there are privacy risks on all accounts, arising from issues in the design and the implementation of the OF protocol.
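The crowd-sourced flow under analysis can be modelled in a few lines. Names are illustrative; the real OF protocol adds report encryption and (ideally) beacon rotation, and it is precisely the gaps in those mechanisms that the tracking analyses probe:

```python
# Toy model: lost device broadcasts beacon_id over BLE; nearby helper
# devices forward (beacon_id, location) to the vendor server; the owner
# queries the server by beacon_id.
server_reports = {}   # beacon_id -> list of reported sightings

def helper_forward(beacon_id, location):
    """A nearby helper device relays a sighted beacon to the server."""
    server_reports.setdefault(beacon_id, []).append(location)

def owner_query(beacon_id):
    """The owner retrieves every location at which the beacon was seen."""
    return server_reports.get(beacon_id, [])

# The privacy risk in a nutshell: if beacon_id never (or predictably)
# rotates, anyone observing the broadcasts, or the server itself, can
# link all sightings into a movement trace.
```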

ElectionGuard: a Cryptographic Toolkit to Enable Verifiable Elections

Track 7

Time: 9:00 am–10:15 am

Authors:

Josh Benaloh and Michael Naehrig, Microsoft Research; Olivier Pereira, Microsoft Research and UCLouvain; Dan S. Wallach, Rice University

Abstract:

ElectionGuard is a flexible set of open-source tools that — when used with traditional election systems — can produce end-to-end verifiable elections whose integrity can be verified by observers, candidates, media, and even voters themselves. ElectionGuard has been integrated into a variety of systems and used in actual public U.S. elections in Wisconsin, California, Idaho, Utah, and Maryland as well as in caucus elections in the U.S. Congress. It has also been used for civic voting in the Paris suburb of Neuilly-sur-Seine and for an online election by a Switzerland/Denmark-based organization.

The principal innovation of ElectionGuard is the separation of the cryptographic tools from the core mechanics and user interfaces of voting systems. This separation allows the cryptography to be designed and built by security experts without having to re-invent and replace the existing infrastructure. Indeed, in its preferred deployment, ElectionGuard does not replace the existing vote counting infrastructure but instead runs alongside and produces its own independently-verifiable tallies. Although much of the cryptography in ElectionGuard is, by design, not novel, some significant innovations are introduced which greatly simplify the process of verification.

This paper describes the design of ElectionGuard, its innovations, and many of the learnings from its implementation and growing number of real-world deployments.
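The homomorphic-tally idea at the heart of such end-to-end verifiable systems can be sketched with toy exponential ElGamal: ballots are encrypted, ciphertexts are multiplied so the encrypted vote counts add, and only the aggregate is decrypted. This is a minimal sketch; real ElectionGuard uses large standardized parameters, threshold-shared keys, and zero-knowledge proofs of ballot well-formedness:

```python
import random

p = 2**31 - 1                    # toy Mersenne prime modulus
g = 7                            # primitive root mod p
x = random.randrange(2, p - 1)   # election secret key (threshold-shared in practice)
h = pow(g, x, p)                 # election public key

def encrypt(vote):
    """Exponential ElGamal: encrypt a 0/1 vote as (g^r, g^vote * h^r) mod p."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def tally(ciphertexts):
    """Multiply ciphertexts componentwise: the exponents (votes) add up."""
    a, b = 1, 1
    for c1, c2 in ciphertexts:
        a, b = a * c1 % p, b * c2 % p
    return (a, b)

def decrypt_tally(ct, max_votes=1000):
    """Recover the vote total by solving a tiny discrete log (Python 3.8+
    supports the negative modular exponent)."""
    gm = ct[1] * pow(ct[0], -x, p) % p   # g^total
    for t in range(max_votes + 1):
        if pow(g, t, p) == gm:
            return t
```

Because only the product of ciphertexts is ever decrypted, observers can verify the tally without any individual ballot being opened.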

iHunter: Hunting Privacy Violations at Scale in the Software Supply Chain on iOS

Track 3

Time: 10:45 am–11:45 am

Authors:

Dexin Liu, Peking University and Alibaba Group; Yue Xiao and Chaoqi Zhang, Indiana University Bloomington; Kaitao Xie and Xiaolong Bai, Alibaba Group; Shikun Zhang, Peking University; Luyi Xing, Indiana University Bloomington

Abstract:

Privacy violations and compliance issues in mobile apps are serious concerns for users, developers, and regulators. With many off-the-shelf tools on Android, prior works extensively studied various privacy issues for Android apps. Privacy risks and compliance issues can be equally expected in iOS apps, but have been little studied. In particular, a prominent recent privacy concern was due to diverse third-party libraries widely integrated into mobile apps whose privacy practices are non-transparent. Such a critical supply chain problem, however, was never systematically studied for iOS apps, at least partially due to the lack of the necessary tools.

This paper presents the first large-scale study, based on our new taint analysis system named iHunter, to analyze privacy violations in the iOS software supply chain. iHunter performs static taint analysis on iOS SDKs to extract taint traces representing privacy data collection and leakage practices. It is characterized by an innovative iOS-oriented symbolic execution that tackles dynamic features of Objective-C and Swift and an NLP-powered generator for taint sources and taint rules. iHunter identified non-compliance in 2,585 SDKs (accounting for 40.4%) out of 6,401 iOS SDKs, signifying a substantial presence of SDKs that fail to adhere to compliance standards. We further found a high proportion (47.2% of 32,478) of popular iOS apps using these SDKs, with practical non-compliance risks violating Apple policies and major privacy laws. These results shed light on the pervasiveness and severity of privacy violations in iOS apps’ supply chain. iHunter is thoroughly evaluated for its high effectiveness and efficiency. We are responsibly reporting the results to relevant stakeholders.

Quantifying Privacy Risks of Prompts in Visual Prompt Learning

Track 5

Time: 10:45 am–11:45 am

Authors:

Yixin Wu, Rui Wen, and Michael Backes, CISPA Helmholtz Center for Information Security; Pascal Berrang, University of Birmingham; Mathias Humbert, University of Lausanne; Yun Shen, NetApp; Yang Zhang, CISPA Helmholtz Center for Information Security

Abstract:

Large-scale pre-trained models are increasingly adapted to downstream tasks through a new paradigm called prompt learning. In contrast to fine-tuning, prompt learning does not update the pre-trained model’s parameters. Instead, it only learns an input perturbation, namely prompt, to be added to the downstream task data for predictions. Given the fast development of prompt learning, a well-generalized prompt inevitably becomes a valuable asset as significant effort and proprietary data are used to create it. This naturally raises the question of whether a prompt may leak the proprietary information of its training data. In this paper, we perform the first comprehensive privacy assessment of prompts learned by visual prompt learning through the lens of property inference and membership inference attacks. Our empirical evaluation shows that the prompts are vulnerable to both attacks. We also demonstrate that the adversary can mount a successful property inference attack with limited cost. Moreover, we show that membership inference attacks against prompts can be successful with relaxed adversarial assumptions. We further make some initial investigations on the defenses and observe that our method can mitigate the membership inference attacks with a decent utility-defense trade-off but fails to defend against property inference attacks. We hope our results can shed light on the privacy risks of the popular prompt learning paradigm. To facilitate the research in this direction, we will share our code and models with the community.
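For readers unfamiliar with membership inference, the classic loss-threshold baseline gives the flavor of the attacks evaluated here (this is the textbook technique, not the paper's exact method; all numbers below are invented):

```python
def calibrate_threshold(member_losses, nonmember_losses):
    """Place the decision threshold midway between the mean loss on
    known members and known non-members (e.g., from a shadow model)."""
    m = sum(member_losses) / len(member_losses)
    n = sum(nonmember_losses) / len(nonmember_losses)
    return (m + n) / 2

def is_member(loss, tau):
    """Guess 'training member' when the model's loss on a sample is
    below the calibrated threshold: models fit their training data
    more tightly than unseen data."""
    return loss < tau
```

Against a learned prompt, the adversary would compute the downstream model's loss with the prompt applied and threshold it the same way.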

Orbital Trust and Privacy: SoK on PKI and Location Privacy Challenges in Space Networks

Track 2

Time: 1:15 pm–2:15 pm

Authors:

David Koisser, Sanctuary; Richard Mitev, Technische Universität Darmstadt; Nikita Yadav, Indian Institute of Science, Bangalore; Franziska Vollmer and Ahmad-Reza Sadeghi, Technische Universität Darmstadt

Abstract:

The dynamic evolution of the space sector, often referred to as “New Space,” has led to increased commercialization and innovation. This transformation is characterized by a surge in satellite numbers, the emergence of small, cost-effective satellites like CubeSats, and the development of space networks. As satellite networks play an increasingly vital role in providing essential services and supporting various activities, ensuring their security is crucial, especially concerning trust relationships among satellites and the protection of satellite service users.

Satellite networks possess unique characteristics, such as orbital dynamics, delays, and limited bandwidth, posing challenges to trust and privacy. While prior research has explored various aspects of space network security, this paper systematically investigates two crucial yet unexplored dimensions: (i) The integrity of PKI components directly impacts the security and privacy of satellite communications and data transmission, with orbital delays and disruptions potentially hindering timely certificate revocation checks. (ii) Conversely, transmitting user signals to satellites requires careful consideration to prevent location tracking and unauthorized surveillance. By drawing on insights from terrestrial studies, we aim to provide a comprehensive understanding of these intertwined security aspects, identify research gaps, and stimulate further exploration to tackle these research challenges in the evolving domain of space network security.

On a Collision Course: Unveiling Wireless Attacks to the Aircraft Traffic Collision Avoidance System (TCAS)

Track 2

Time: 1:15 pm–2:15 pm

Authors:

Giacomo Longo, DIBRIS, University of Genova; Martin Strohmeier, Cyber-Defence Campus, armasuisse S + T; Enrico Russo, DIBRIS, University of Genova; Alessio Merlo, CASD, School of Advanced Defense Studies; Vincent Lenders, Cyber-Defence Campus, armasuisse S + T

Abstract:

Collision avoidance systems have been a safety net of last resort in aviation since their introduction in the 1980s. Through constantly refined safety procedures and hard lessons learned from mid-air collisions, the TCAS II Version 7.1 has become the global standard, significantly improving safety in a fast-growing field.

Despite this safety record, TCAS was not designed with security in mind, even in its newest versions. With the rise of software-defined radios, security researchers have shown many wireless technologies in aviation and critical infrastructures to be insecure against radio frequency (RF) attacks. However, while similar attacks have been postulated for TCAS with its built-in distance measurement, all attempts to execute them have failed so far.

In this paper, we present the first working RF attacks on TCAS. We demonstrate how to take full control over the collision avoidance displays and create so-called Resolution Advisories (RAs) for arbitrary aircraft on a collision course. We build the necessary tooling using commercial off-the-shelf hardware, creating sufficient conditions for the attacker to spoof colliding aircraft from a distance of up to 4.2 km.

We evaluate this and further attacks extensively on a live, real-world, certified aircraft test system and discuss potential countermeasures and mitigations that should be considered by aircraft and system manufacturers in the future.

Cryptographic Analysis of Delta Chat

Track 6

Time: 1:15 pm–2:15 pm

Authors:

Yuanming Song, Lenka Mareková, and Kenneth G. Paterson, ETH Zurich

Abstract:

We analyse the cryptographic protocols underlying Delta Chat, a decentralised messaging application which uses e-mail infrastructure for message delivery. It provides end-to-end encryption by implementing the Autocrypt standard and the SecureJoin protocols, both making use of the OpenPGP standard. Delta Chat’s adoption by categories of high-risk users such as journalists and activists, but also more generally users in regions affected by Internet censorship, makes it a target for powerful adversaries. Yet, the security of its protocols has not been studied to date. We describe five new attacks on Delta Chat in its own threat model, exploiting cross-protocol interactions between its implementation of SecureJoin and Autocrypt, as well as bugs in rPGP, its OpenPGP library. The findings have been disclosed to the Delta Chat team, who implemented fixes.

DVSorder: Ballot Randomization Flaws Threaten Voter Privacy

Track 1

Time: 2:45 pm–3:45 pm

Authors:

Braden L. Crimmins and Dhanya Y. Narayanan, University of Michigan; Drew Springall, Auburn University; J. Alex Halderman, University of Michigan

This paper is currently under embargo. The final paper PDF and abstract will be available on the first day of the conference.

Demystifying the Security Implications in IoT Device Rental Services

Track 2

Time: 2:45 pm–3:45 pm

Authors:

Yi He and Yunchao Guan, Tsinghua University; Ruoyu Lun, China National Digital Switching System Engineering and Technological Research Center; Shangru Song and Zhihao Guo, Tsinghua University; Jianwei Zhuge and Jianjun Chen, Tsinghua University and Zhongguancun Laboratory; Qiang Wei and Zehui Wu, China National Digital Switching System Engineering and Technological Research Center; Miao Yu and Hetian Shi, Tsinghua University; Qi Li, Tsinghua University and Zhongguancun Laboratory

Abstract:

Nowadays, unattended device rental services with cellular IoT controllers, such as e-scooters and EV chargers, are widely deployed in public areas around the world, offering convenient access to users via mobile apps. Although these devices differ from traditional smart homes in functionality and implementation, their security remains largely unexplored. In this work, we conduct a systematic study to uncover security implications in IoT device rental services. By investigating 17 physical devices and 92 IoT apps, we identify multiple design and implementation flaws across a wide range of products, which can lead to severe security consequences, such as forcing all devices offline, remotely controlling all devices, or hijacking all users’ accounts of the vendors. The root cause is that rentable IoT devices adopt weak resource identifiers (IDs), and attackers can infer these IDs at scale and exploit access control flaws to manipulate these resources. For instance, rentable IoT products allow authenticated users to find and use any device from the rentable IoT apps via a device serial number, which can be easily inferred by attackers and combined with other vulnerabilities to exploit remote devices on a large scale. To identify these risks, we propose a tool, called IDScope, to automatically detect the weak IDs in apps and assess if these IDs can be abused to scale the exploitation scope of existing access control vulnerabilities. Finally, we identify 57 vulnerabilities in 28 products which can lead to various large-scale exploitation in 24 products and affect millions of users and devices by exploiting three types of weak IDs. The vendors have confirmed our findings and most issues have been mitigated with our assistance.
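The weak-ID problem the abstract describes can be sketched in a few lines. This is a hypothetical illustration (the `SN-` serial format and numbers are invented, not from the paper): if serial numbers are assigned sequentially, observing a handful of valid IDs lets an attacker enumerate the surrounding ID space.

```python
# Hypothetical sketch: enumerating a sequential device-ID space from a few
# observed serial numbers. The "SN-" format is invented for illustration.

def infer_candidate_ids(observed_ids, probe_width=100):
    """Given a few observed serial numbers, guess the surrounding ID space."""
    nums = sorted(int(i.removeprefix("SN-")) for i in observed_ids)
    lo, hi = nums[0] - probe_width, nums[-1] + probe_width
    return [f"SN-{n:08d}" for n in range(lo, hi + 1)]

# Three toy observations are enough to bracket a contiguous range.
observed = ["SN-00104211", "SN-00104215", "SN-00104230"]
candidates = infer_candidate_ids(observed, probe_width=50)
print(len(candidates))  # 120 candidate IDs from just three observations
```

Random, high-entropy identifiers (e.g., UUIDs) defeat this kind of enumeration, which is why weak IDs are the root cause the paper highlights.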

Argus: All your (PHP) Injection-sinks are belong to us.

Track 4

Time: 2:45 pm–3:45 pm

Authors:

Rasoul Jahanshahi and Manuel Egele, Boston University

Abstract:

Injection-based vulnerabilities in web applications such as cross-site scripting (XSS), insecure deserialization, and command injection have proliferated in recent years, exposing both clients and web applications to security breaches. Current studies in this area focus on detecting injection vulnerabilities in applications. Crucially, existing systems rely on manually curated lists of functions, so-called sinks, to detect such vulnerabilities. However, these systems are oblivious to the internal mechanics of the underlying programming language; as a result, they rely on an incomplete set of sinks and overlook security vulnerabilities. Despite numerous studies on injection vulnerabilities, no study has comprehensively identified the set of functions that an attacker can exploit for injection attacks.

This paper addresses the drawbacks of relying on manually curated lists of sinks to identify such vulnerabilities. We devise a novel generic approach to automatically identify the set of sinks that can lead to injection-style security vulnerabilities. To demonstrate the generality, we focused on three types of injection vulnerabilities: XSS, command injection, and insecure deserialization. We implemented a prototype of our approach in a tool called Argus to identify the set of PHP functions that deserialize user input, execute operating system (OS) commands, or write user input to the output buffer. We evaluated our prototype on the three most popular major versions of the PHP interpreter. Argus detected 284 deserialization functions that allow adversaries to perform deserialization attacks, an order of magnitude more than the most exhaustive manually curated list used in related work. Furthermore, we detected 22 functions that can lead to XSS attacks, which is twice the number of functions used in prior work. To demonstrate that Argus produces security-relevant findings, we integrated its results with three existing analysis systems: Psalm and RIPS, two static taint analyses, and FUGIO, an exploit generation tool. The modified tools detected 13 previously unknown deserialization and XSS vulnerabilities in WordPress and its plugins, of which 11 have been assigned CVE IDs and designated as high-severity vulnerabilities.
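For context, the manually curated sink-list approach that Argus improves upon can be sketched as a simple pattern scan. This is an illustrative toy (the sink list below is a tiny, deliberately incomplete sample), which is exactly the paper's point: hand-made lists miss many of the functions an automated analysis finds.

```python
import re

# Toy version of the baseline "curated sink list" scan: flag calls to known
# dangerous PHP functions in source text. The list is intentionally tiny and
# incomplete -- Argus's contribution is finding sinks such lists omit.

KNOWN_SINKS = {"unserialize", "system", "exec", "passthru", "shell_exec"}

def find_sink_calls(php_source: str):
    """Return the sorted set of known-sink function names called in the source."""
    calls = re.findall(r"\b([a-zA-Z_][a-zA-Z0-9_]*)\s*\(", php_source)
    return sorted(set(c for c in calls if c in KNOWN_SINKS))

sample = '<?php $o = unserialize($_GET["data"]); system($o->cmd); ?>'
print(find_sink_calls(sample))  # ['system', 'unserialize']
```

A real taint analysis additionally tracks whether attacker-controlled data actually reaches these calls; the sketch only shows why the sink list itself is the weak link.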

FaceObfuscator: Defending Deep Learning-based Privacy Attacks with Gradient Descent-resistant Features in Face Recognition

Track 5

Time: 2:45 pm–3:45 pm

Authors:

Shuaifan Jin, He Wang, and Zhibo Wang, Zhejiang University; Feng Xiao, Palo Alto Networks; Jiahui Hu, Zhejiang University; Yuan He and Wenwen Zhang, Alibaba Group; Zhongjie Ba, Weijie Fang, Shuhong Yuan, and Kui Ren, Zhejiang University

Abstract:

As face recognition is widely used in various security-sensitive scenarios, face privacy issues are receiving increasing attention. Recently, many face recognition works have focused on privacy preservation and converted the original images into protected facial features. However, our study reveals that emerging Deep Learning-based (DL-based) reconstruction attacks exhibit notable ability in learning and removing the protection patterns introduced by existing schemes and recovering the original facial images, thus posing a significant threat to face privacy. To address this threat, we introduce FaceObfuscator, a lightweight privacy-preserving face recognition system that first removes visual information that is non-crucial for face recognition from facial images via frequency domain and then generates obfuscated features interleaved in the feature space to resist gradient descent in DL-based reconstruction attacks. To minimize the loss in face recognition accuracy, obfuscated features with different identities are well-designed to be interleaved but non-duplicated in the feature space. This non-duplication ensures that FaceObfuscator can extract identity information from the obfuscated features for accurate face recognition. Extensive experimental results demonstrate that FaceObfuscator’s privacy protection capability improves around 90% compared to existing privacy-preserving methods in two major leakage scenarios including channel leakage and database leakage, with a negligible 0.3% loss in face recognition accuracy. Our approach has also been evaluated in a real-world environment and protected more than 100K people’s face data of a major university.

SCAVY: Automated Discovery of Memory Corruption Targets in Linux Kernel for Privilege Escalation

Track 3

Time: 4:00 pm–5:00 pm

Authors:

Erin Avllazagaj, Yonghwi Kwon, and Tudor Dumitraș, University of Maryland

Abstract:

Kernel privilege-escalation exploits typically leverage memory-corruption vulnerabilities to overwrite particular target locations. These memory corruption targets play a critical role in the exploits, as they determine which privileged resources (e.g., files, memory, and operations) the adversary may access and what privileges (e.g., read, write, and unrestricted) they may gain. While prior research has made important advances in discovering vulnerabilities and achieving privilege escalation, in practice, the exploits rely on the few memory corruption targets that have been discovered manually so far.

We propose SCAVY, a framework that automatically discovers memory corruption targets for privilege escalation in the Linux kernel. SCAVY’s key insight lies in broadening the search scope beyond the kernel data structures explored in prior work, which focused on function pointers or pointers to structures that include them, to encompass the remaining 90% of Linux kernel structures. Additionally, the search is bug-type agnostic, as it considers any memory corruption capability. To this end, we develop novel and scalable techniques that combine fuzzing and differential analysis to automatically explore and detect privilege escalation by comparing the accessibility of resources between executions with and without corruption. This allows SCAVY to determine that corrupting a certain field puts the system in an exploitable state, independently of the vulnerability exploited. SCAVY found 955 PoCs, from which we identify 17 new fields in 12 structures that can enable privilege escalation. We utilize these targets to develop 6 exploits for 5 CVE vulnerabilities. Our findings show that new memory corruption targets can change the security implications of vulnerabilities, urging researchers to proactively discover memory corruption targets.
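The differential-analysis idea above can be sketched at a very high level. This is a heavily simplified toy model (the dictionary "state" and field name are invented, not SCAVY's implementation): run the same resource-accessibility check in a baseline execution and in one where a single field was corrupted; if access differs, that field is a candidate corruption target.

```python
# Toy sketch of differential analysis for corruption-target discovery:
# compare resource accessibility with and without a corrupted field.
# The "state" dict and field name are invented for illustration.

def check_access(state) -> bool:
    """Pretend permission check: can an unprivileged user write this resource?
    Tests the world-writable bit of a toy mode field."""
    return state.get("file_mode_field", 0o600) & 0o002 != 0

baseline = {"file_mode_field": 0o600}    # normal execution
corrupted = {"file_mode_field": 0o666}   # one field flipped by memory corruption

# Access differs between the two executions -> flag the field as a target.
exploitable = check_access(corrupted) and not check_access(baseline)
print(exploitable)  # True: this field would be reported as a candidate target
```

The real framework performs this comparison across kernel executions driven by fuzzing, but the exploitability signal is the same: corruption changed what the unprivileged side can reach.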

Devil in the Room: Triggering Audio Backdoors in the Physical World

Track 5

Time: 4:00 pm–5:00 pm

Authors:

Meng Chen, Zhejiang University; Xiangyu Xu, Southeast University; Li Lu, Zhongjie Ba, Feng Lin, and Kui Ren, Zhejiang University

Abstract:

Recent years have witnessed deep learning techniques endowing modern audio systems with powerful capabilities. However, the latest studies have revealed their strong reliance on training data, raising serious threats from backdoor attacks. Different from most existing works that study audio backdoors in the digital world, we investigate the mismatch between the trigger and backdoor in the physical space by examining sound channel distortion. Inspired by this observation, this paper proposes TrojanRoom to bridge the gap between digital and physical audio backdoor attacks. TrojanRoom utilizes the room impulse response (RIR) as a physical trigger to enable injection-free backdoor activation. By synthesizing dynamic RIRs and poisoning a source class of samples during data augmentation, TrojanRoom enables any adversary to launch an effective and stealthy attack using the specific impulse response of a room. The evaluation shows over 92% and 97% attack success rates on state-of-the-art speech command recognition and speaker recognition systems respectively, with a negligible impact on benign accuracy (below 3%) at a distance of over 5 m. The experiments also demonstrate that TrojanRoom can bypass human inspection and voice liveness detection, as well as resist trigger disruption and backdoor defenses.
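The physical-trigger idea rests on a standard piece of acoustics: a room acts as a linear filter, so audio "played" in a room is modeled as the convolution of the clean waveform with the room impulse response. A minimal sketch of that modeling step (the RIR values below are invented toy numbers, not measured data):

```python
# Simplified model of room acoustics: the room-distorted signal is the
# discrete convolution of the clean waveform with the room impulse response.
# RIR values here are invented toy numbers for illustration.

def convolve(signal, rir):
    """Discrete convolution: the room-distorted version of `signal`."""
    out = [0.0] * (len(signal) + len(rir) - 1)
    for i, s in enumerate(signal):
        for j, r in enumerate(rir):
            out[i + j] += s * r
    return out

clean = [1.0, 0.0, -1.0, 0.5]   # toy audio samples
rir = [0.8, 0.3, 0.1]           # direct path plus two toy reflections
distorted = convolve(clean, rir)
print(len(distorted))  # 6 samples: the room "smears" the signal in time
```

Poisoning training data with such RIR-filtered samples is what lets the backdoor fire later from the room's own acoustics, with no injected sound needed at attack time.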

Terrapin Attack: Breaking SSH Channel Integrity By Sequence Number Manipulation

Track 7

Time: 4:00 pm–5:00 pm

Authors:

Fabian Bäumer, Marcus Brinkmann, and Jörg Schwenk, Ruhr University Bochum

Abstract:

The SSH protocol provides secure access to network services, particularly remote terminal login and file transfer within organizational networks and to over 15 million servers on the open internet. SSH uses an authenticated key exchange to establish a secure channel between a client and a server, which protects the confidentiality and integrity of messages sent in either direction. The secure channel prevents message manipulation, replay, insertion, deletion, and reordering. At the network level, SSH uses the Binary Packet Protocol over TCP.

In this paper, we show that as new encryption algorithms and mitigations were added to SSH, the SSH Binary Packet Protocol is no longer a secure channel: SSH channel integrity (INT-PST, aINT-PTXT, and INT-sfCTF) is broken for three widely used encryption modes. This allows prefix truncation attacks where encrypted packets at the beginning of the SSH channel can be deleted without the client or server noticing it. We demonstrate several real-world applications of this attack. We show that we can fully break SSH extension negotiation (RFC 8308), such that an attacker can downgrade the public key algorithms for user authentication or turn off a new countermeasure against keystroke timing attacks introduced in OpenSSH 9.5. Further, we identify an implementation flaw in AsyncSSH that, together with prefix truncation, allows an attacker to redirect the victim’s login into a shell controlled by the attacker.

We also performed an internet-wide scan for affected encryption modes and support for extension negotiation. We find that 71.6% of SSH servers support a vulnerable encryption mode, while 63.2% even list it as their preferred choice.

We identify two root causes that enable these attacks: First, the SSH handshake supports optional messages that are not authenticated. Second, SSH does not reset message sequence numbers when activating encryption keys. Based on this analysis, we propose effective and backward-compatible changes to SSH that mitigate our attacks.
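The two root causes combine in a way that a toy model makes concrete. The sketch below is a conceptual simplification, not SSH itself: sequence numbers start at 0, an unauthenticated handshake message bumps the receiver's counter, and because counters are never reset when encryption begins, deleting the first encrypted packet leaves both sides in agreement.

```python
# Conceptual toy model of the Terrapin sequence-number trick (not real SSH):
# inject one unauthenticated handshake message, then drop the first encrypted
# packet -- the counters end up equal, so the truncation goes unnoticed.

class Peer:
    def __init__(self):
        self.recv_seq = 0  # BPP-style receive sequence number, starts at 0
    def receive(self, packet):
        self.recv_seq += 1
        return packet

client = Peer()
sender_seq = 0

# Handshake phase: attacker injects an ignored, unauthenticated message
# (in SSH terms, something like SSH_MSG_IGNORE).
client.receive("IGNORE (injected)")   # client counter: 1, sender: 0

# Encryption starts; counters are NOT reset. The sender sends two packets
# and the attacker drops the first (prefix truncation).
sender_seq += 1                       # packet 1 sent... and silently dropped
sender_seq += 1
client.receive("encrypted packet 2")  # client counter: 2 == sender: 2

print(client.recv_seq == sender_seq)  # True: truncation is invisible
```

This is why the paper's proposed mitigations include authenticating the full handshake transcript and resetting sequence numbers when encryption keys are activated: either change breaks the counter realignment shown above.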

CONTINUE TO: HACKER SUMMER CAMP 2024 — Part Seventeen: HackCon 2024

::END OF LINE::


DCG 201

North East New Jersey DEFCON Group Chapter. Dirty Jersey Represent! We meet at Sub Culture once a month to hack on technology projects! www.defcon201.org