Introduction
Cybercrime has been on the rise in recent years, scamming people from all sections of society. The number of such crimes, ranging from financial fraud to identity theft, that exploit online platforms such as social media is increasing annually. For instance, throughout the COVID-19 pandemic, the U.S. National Center for Disaster Fraud continually issued warnings about fraud incidents. Behavioral science research shows that individuals with developmental disabilities and older adults are especially vulnerable to such scams. Developmental disabilities include autism spectrum disorder (ASD), attention-deficit/hyperactivity disorder (ADHD), intellectual disabilities, and beyond. These users require a variety of assistance and are susceptible to falling victim to online scams due to factors such as lower reading comprehension levels and difficulty recognizing certain social nuances.

This full-day workshop aims to bring together multidisciplinary researchers and practitioners working on different aspects of cybersecurity, with a focus on inclusion, accessible system design, and bias mitigation in AI methods for cybersecurity. The workshop will provide a forum to share discoveries and experiences and to exchange new ideas across application domains of online cybersecurity, including social media scams and phishing, as well as to identify key emerging topics for future directions of inclusive AI systems for cybersecurity.
Program
| Time | Session |
|---|---|
| 08:30 AM - 08:45 AM | Conference Welcome and Opening Remarks |
| 08:45 AM - 09:45 AM | Conference Keynote 1: Jeannette M. Wing, Executive Vice President for Research & Professor of Computer Science, Columbia University. Title: Trustworthy AI |
| 09:45 AM - 10:45 AM | Conference Keynote 2: Deirdre K. Mulligan, Director of the Berkeley Center for Law and Technology & Professor, School of Information, UC Berkeley |
| 10:45 AM - 11:00 AM | BREAK |
| 11:00 AM - 12:20 PM | WORKSHOP SESSION 1: Inclusive AI for Cybersecurity |
| 11:00 AM - 11:35 AM | Workshop Keynote Talk: Dr. Jeremiah Still. Title: AI-Driven Cybersecurity: The Role of Human Factors and Inclusive Design in Cyber Awareness (abstract and bio below) |
| 11:35 AM - 12:05 PM | Breakout Groups: Human factors behind vulnerabilities of AI tools for cybersecurity |
| 12:05 PM - 12:20 PM | Discussion: Breakout group summaries and joint-paper planning |
| 12:20 PM - 01:30 PM | LUNCH BREAK |
| 01:30 PM - 02:30 PM | Conference Keynote 3: Ed H. Chi, Distinguished Scientist & Research Lead (LLM/LaMDA), Google DeepMind |
| 02:30 PM - 04:30 PM | Conference Panel: How Will Artificial Intelligence Reshape Scientific Research? |
| 04:30 PM - 04:45 PM | BREAK |
| 04:45 PM - 06:25 PM | WORKSHOP SESSION 2: Inclusive AI for Cybersecurity |
| 04:45 PM - 06:05 PM | Paper Session |
| 04:45 PM - 05:05 PM | Design Challenges for Scam Prevention Tools to Protect Neurodiverse and Older Adult Populations. Pragathi Tummala, Hannah Choi, Anuridhi Gupta, Tomas A. Lapnas, Yoo Sun Chung, Matthew Peterson, Geraldine G. Walther, Hemant Purohit (George Mason University) |
| 05:05 PM - 05:25 PM | Towards Inclusive Cybersecurity: Protecting the Vulnerable with Social Cyber Vulnerability Metrics. Shutonu Mitra (Virginia Tech), Qi Zhang (Virginia Tech), Chen-Wei Chang (Virginia Tech), Hossein Salemi (George Mason University), Hemant Purohit (George Mason University), Fengxiu Zhang (George Mason University), Michin Hong (Indiana University), Chang-Tien Lu (Virginia Tech), Jin-Hee Cho (Virginia Tech) |
| 05:25 PM - 05:45 PM | A Blockchain-Enabled Approach to Cross-Border Compliance and Trust. Vikram Kulothungan (Capitol Technology University) |
| 05:45 PM - 06:05 PM | Mind the Inclusion Gap: A Critical Review of Accessibility in Anti-Counterfeiting Technologies. Salem Abdul-Baki, Krishna Purohit, Hemant Purohit (George Mason University) |
| 06:05 PM - 06:25 PM | Workshop conclusion and follow-on event planning |
| 06:30 PM - 08:00 PM | NETWORKING RECEPTION |

Workshop Keynote Talk (11:00 AM - 11:35 AM)
Speaker: Dr. Jeremiah Still
Title: AI-Driven Cybersecurity: The Role of Human Factors and Inclusive Design in Cyber Awareness
Abstract: The growing field of human-centered AI research is exploring critical areas like system interpretability, human-in-the-loop models, inclusive design, and ethical implications, particularly in cybersecurity, where transparency and personalization are crucial for user compliance. AI offers the potential to provide timely, personalized feedback, as demonstrated by an AI-driven intervention in a simulated SMiShing attack, highlighting the importance of inclusive design, user training, and safe decision-making spaces. Ultimately, AI-powered security solutions can be both effective and inclusive, protecting a diverse user base and enhancing the overall user experience.
Brief Bio: Jeremiah Still, Ph.D., is an Associate Professor at Old Dominion University in the Department of Psychology with an appointment in the School of Cybersecurity. He earned a Ph.D. in Human-Computer Interaction from Iowa State University in 2009. Dr. Still is a Fellow of the Psychonomic Society and received the Earl Alluisi Award from APA Division 21, which recognizes his outstanding achievements as an applied experimental/engineering psychologist. His Psychology of Design laboratory explores the relationship between human cognition and technology. Specifically, he focuses on human-centered cybersecurity related to AI, next-gen authentication, SMiShing, and cyber awareness.
Important Dates
- Paper Submission Deadline:
- Notification of Decision:
- Final Version due: September 22 (firm)
Scope
State-of-the-art research in online cybersecurity increasingly uses Artificial Intelligence (AI) methods, including Machine Learning (ML), Natural Language Processing (NLP), and Computer Vision, to assess technical vulnerabilities in the information systems designed to protect us while browsing online. Yet critical evaluations of such methods have not accounted for the multifaceted behavioral vulnerabilities of these systems across all demographics exposed to scams in online spaces such as social media, leaving weaker protection for vulnerable populations such as individuals with developmental disabilities and older adults. An inclusive AI model for cybersecurity tasks such as scam detection should incorporate the diversity of social factors and attitudes when representing the language and multimedia of input data used to learn to classify potential online scams. A prevalent practice in NLP and ML today is to rely extensively on automated feature engineering with deep learning algorithms and pre-trained language models for feature representation to achieve higher classification performance. However, the feature representations encoded in such models can inadvertently perpetuate undesirable biases present in the labeled data or the pre-trained language models on which they are trained. This highlights the gap, and the need, to create inclusive AI systems for next-generation cybersecurity that protect us from online scams while providing a safer browsing experience to all populations.

This workshop will focus on emerging research thrusts across the topics of inclusion in cybersecurity and trust in online forums, scam detection and prevention in social networks, inclusion in the design of cybersecurity tools that employ AI technologies, robustness of AI systems, and human factors for characterizing biases of AI techniques, among others.
We invite papers on various aspects of the design requirements and applications of inclusive AI systems for cybersecurity, with a focus on protecting vulnerable populations online.
Topics of Interest
We encourage submissions on the following topics (but not limited to):
- Characterizing data biases in NLP methods and ML models for cybersecurity
- Inclusion challenges of current cybersecurity tools for vulnerable populations, especially individuals with developmental disabilities and older adults
- Accessible and inclusive cybersecurity system design
- Behavioral science approaches (e.g., eye-tracking, brain imaging) to detecting AI biases
- Human factors-driven evaluation metrics of detection models for online scams
- Robustness of scam detection models under data drift
- Bias mitigation in AI-based scam detection tools for social media, emails, and the web
Submission
Workshop papers should not exceed 10 pages, including references. Short papers should not exceed 3 pages and must include the tag [Short paper] in the title. Papers should be submitted via EasyChair to this workshop track: https://easychair.org/conferences/?conf=ieeetps2024
Papers should use the standard two-column IEEE conference format; the template can be downloaded here:
Download IEEE Templates
Organizers and Contact
- Hemant Purohit, Humanitarian Informatics Lab, School of Computing, George Mason University, USA, hpurohit [AT] gmu [DOT] edu
- Jin-Hee Cho, Trustworthy Cyberspace Lab, Computer Science Department, Virginia Tech, USA
- Yoosun Chung, Division of Special Education and Human disAbilities, George Mason University, USA