Integrated sensing and communication (ISAC) is considered an emerging technology for sixth-generation (6G) wireless and mobile networks. It is expected to enable a wide variety of vertical applications, ranging from unmanned aerial vehicle (UAV) detection for critical infrastructure protection to physiological sensing for mobile healthcare. Despite its significant socioeconomic benefits, ISAC technology also raises unique challenges in system security and user privacy. Being aware of these security and privacy challenges, understanding the trade-off between security and communication performance, and exploring potential countermeasures in practical systems are critical to the wide adoption of this technology across application scenarios.
This talk will discuss various security and privacy threats in emerging ISAC systems with a focus on communication-centric ISAC systems, that is, using the cellular or WiFi infrastructure for sensing. We will then examine potential mechanisms to secure ISAC systems and protect user privacy at the physical and data layers under different sensing modes.
At the wireless physical (PHY) layer, an ISAC system is subject to both passive and active attacks, such as unauthorized passive sensing, unauthorized active sensing, signal spoofing, and jamming. Potential countermeasures include wireless channel/radio frequency (RF) environment obfuscation, waveform randomization, anti-jamming communication, and spectrum/RF monitoring.
At the data layer, user privacy could be compromised during data collection, sharing, storage, and usage. For sensing systems powered by artificial intelligence (AI), user privacy could also be compromised during the model training and inference stages, and an attacker could falsify the sensing data to achieve a malicious goal. Potential countermeasures include privacy-enhancing technologies (PETs) such as data anonymization, differential privacy, homomorphic encryption, trusted execution, and data synthesis.
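As a minimal sketch of one PET from the list above, the snippet below applies the Laplace mechanism of differential privacy to an aggregate sensing statistic before release; the sensitivity, epsilon value, and the example occupancy count are illustrative assumptions, not part of the talk.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon                      # Laplace noise scale
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# e.g., an occupancy count aggregated from WiFi sensing, released with epsilon = 1
private_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=1.0)
print(private_count)
```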
The Fifth Generation (5G) New Radio (NR) introduces advanced features at the Physical (PHY) and Medium Access Control (MAC) layers to enhance communication efficiency and reliability. However, these features also present new security challenges, potentially exposing vulnerabilities exploitable by adversaries. This study identifies specific vulnerabilities within the Physical Downlink Control Channel (PDCCH) of 5G NR at both the PHY and MAC layers. Leveraging these vulnerabilities, we propose a novel, energy-efficient jamming technique that selectively targets specific User Equipment (UE) and disrupts the target's downlink communication with the base station. Using MATLAB, we evaluate the impact of the proposed attack and demonstrate its ability to induce a Denial of Service (DoS) condition. We also demonstrate the attack on a private 5G testbed built with the srsRAN project. Notably, the results show that the proposed strategy achieves successful jamming at transmit powers as low as 5 dBm, whereas conventional jamming requires at least 10 dBm, making sustained attacks far more feasible. These findings highlight critical security vulnerabilities in 5G NR and emphasize the need for robust countermeasures to safeguard next-generation wireless networks.
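As a rough worked example of the energy argument, the sketch below converts the reported 5 dBm and 10 dBm thresholds to milliwatts; the 10% duty cycle assumed for selective PDCCH-occasion jamming is purely illustrative and not a figure from the study.

```python
def dbm_to_mw(p_dbm: float) -> float:
    """Convert transmit power from dBm to milliwatts."""
    return 10 ** (p_dbm / 10.0)

selective_mw = dbm_to_mw(5.0)       # ~3.16 mW for the selective attack
conventional_mw = dbm_to_mw(10.0)   # 10 mW for conventional jamming

# Assume (illustratively) the selective jammer is active 10% of the time,
# versus a continuously transmitting conventional jammer.
selective_energy = selective_mw * 0.10
conventional_energy = conventional_mw * 1.0

print(f"power ratio:  {conventional_mw / selective_mw:.1f}x")       # ~3.2x
print(f"energy ratio: {conventional_energy / selective_energy:.1f}x")
```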
Vehicle platooning, in which vehicles travel in close formation coordinated through Vehicle-to-Everything (V2X) communications, offers significant benefits in fuel efficiency and road utilization. However, it is vulnerable to sophisticated falsification attacks by authenticated insiders that can destabilize the formation and potentially cause catastrophic collisions. This paper addresses the resulting challenge of misbehavior detection in vehicle platooning systems. We present AttentionGuard, a transformer-based framework for misbehavior detection that leverages the self-attention mechanism to identify anomalous patterns in mobility data. Our proposal employs a multi-head transformer encoder to process sequential kinematic information, enabling effective differentiation between normal mobility patterns and falsification attacks across diverse platooning scenarios, including steady-state (no-maneuver) operation, join maneuvers, and exit maneuvers. Our evaluation uses an extensive simulation dataset featuring various attack vectors (constant, gradual, and combined falsifications) and operational parameters (controller types, vehicle speeds, and attacker positions). Experimental results demonstrate that AttentionGuard achieves an F1-score of up to 0.95 in attack detection, with robust performance maintained during complex maneuvers. Notably, our system performs effectively with minimal latency (100 ms decision intervals), making it suitable for real-time transportation safety applications. Comparative analysis reveals superior detection capabilities and establishes the transformer encoder as a promising approach for securing Cooperative Intelligent Transport Systems (C-ITS) against sophisticated insider threats.
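The sketch below illustrates the kind of multi-head transformer encoder over kinematic windows described in the abstract, assuming PyTorch; the feature count, window length, layer sizes, and pooling choice are illustrative assumptions rather than AttentionGuard's actual configuration.

```python
import torch
import torch.nn as nn

class MisbehaviorDetector(nn.Module):
    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)             # project kinematic features
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)   # multi-head self-attention stack
        self.head = nn.Linear(d_model, n_classes)               # normal vs. falsified

    def forward(self, x):                # x: (batch, seq_len, n_features)
        h = self.encoder(self.embed(x))  # (batch, seq_len, d_model)
        return self.head(h.mean(dim=1))  # pool over the window, then classify

model = MisbehaviorDetector()
window = torch.randn(8, 20, 4)           # 8 windows of 20 kinematic samples each
logits = model(window)                   # (8, 2) class scores
```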
According to the Open Web Application Security Project (OWASP), Cross-Site Scripting (XSS) is a critical security vulnerability. Despite decades of research, XSS remains among the top 10 security vulnerabilities. Researchers have proposed various techniques to protect systems from XSS attacks, with machine learning (ML) being one of the most widely used methods. An ML model is trained on a dataset to identify potential XSS threats, making its effectiveness highly dependent on the size and diversity of the training data.
A variation of XSS is obfuscated XSS, where attackers apply obfuscation techniques to alter the code's structure, making it difficult for security systems to detect its malicious intent. In our study, a random forest model trained on traditional (non-obfuscated) XSS data achieved 99.8% accuracy; however, when tested against obfuscated XSS samples, accuracy dropped to 81.9%, underscoring the importance of training ML models on obfuscated data to improve their effectiveness in detecting XSS attacks. A significant remaining challenge is generating highly complex obfuscated code: although several public tools are available, they can only produce obfuscation up to a certain level of complexity.
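The sketch below shows the evaluation pattern this paragraph describes: train a random forest on non-obfuscated XSS samples and then test it on obfuscated ones. The loader functions are hypothetical placeholders, and the character n-gram features are an assumption; the study's actual feature pipeline may differ.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

plain_texts, plain_labels = load_plain_xss_corpus()        # hypothetical loader: payload strings, 0/1 labels
obf_texts, obf_labels = load_obfuscated_xss_corpus()       # hypothetical loader: obfuscated test set

tr_texts, te_texts, y_tr, y_te = train_test_split(
    plain_texts, plain_labels, test_size=0.2, random_state=0)

vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 3))  # character n-gram features
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(vec.fit_transform(tr_texts), y_tr)

print("plain XSS accuracy:     ", accuracy_score(y_te, clf.predict(vec.transform(te_texts))))
print("obfuscated XSS accuracy:", accuracy_score(obf_labels, clf.predict(vec.transform(obf_texts))))
```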
In our proposed system, we fine-tune a Large Language Model (LLM) to generate complex obfuscated XSS payloads automatically. By transforming original XSS samples into diverse obfuscated variants, we create challenging training data for ML model evaluation. Our approach achieved a 99.5% accuracy rate with the obfuscated dataset. We also found that the obfuscated samples generated by the LLM were 28.1% more complex than those created by other tools, significantly improving the model's ability to handle advanced XSS attacks and making it more effective for real-world application security.
Large language models (LLMs) have profoundly shaped various domains, including several types of network systems. With their powerful capabilities, LLMs have recently been proposed to enhance network security. However, the development of LLMs can introduce new risks due to their potential vulnerabilities and misuse. In this paper, we are motivated to review the dual role of LLMs in network security. Our goal is to explore how LLMs impact network security and ultimately shed light on how to evaluate LLMs from a network security perspective. We further discuss several future research directions regarding how to scientifically enable LLMs to assist with network security.
In connected and autonomous vehicles, machine learning for safety message classification has become critical for detecting malicious or anomalous behavior. However, conventional approaches that rely on centralized data collection or purely local training face limitations due to the large scale, high mobility, and heterogeneous data distributions inherent in inter-vehicle networks. To overcome these challenges, this paper explores Distributed Federated Learning (DFL), whereby vehicles collaboratively train deep learning models by exchanging model updates among one-hop neighbors and propagating models over multiple hops. Using the Vehicular Reference Misbehavior (VeReMi) Extension Dataset, we show that DFL can significantly improve classification accuracy across all vehicles compared to learning strictly with local data. Notably, vehicles with low individual accuracy see substantial accuracy gains through DFL, illustrating the benefit of knowledge sharing across the network. We further show that local training data size and time-varying network connectivity correlate strongly with the model's overall accuracy. We investigate DFL's resilience and vulnerabilities under attacks in multiple domains, namely wireless jamming and training data poisoning attacks. Our results reveal important insights into the vulnerabilities of DFL when confronted with multi-domain attacks, underlining the need for more robust strategies to secure DFL in vehicular networks.
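A minimal sketch of the DFL aggregation step described above: each vehicle averages its own parameters with those received from one-hop neighbors, so that models gradually propagate over multiple hops. The uniform weighting and dict-of-arrays model representation are assumptions for illustration.

```python
import numpy as np

def neighbor_average(local_params, neighbor_params_list):
    """Uniformly average a local model with models received from one-hop neighbors."""
    models = [local_params] + neighbor_params_list
    return {
        name: np.mean([m[name] for m in models], axis=0)
        for name in local_params
    }

# Each round a vehicle would: (1) train locally on its own safety-message data,
# (2) exchange parameters with current one-hop neighbors, (3) aggregate as above.
# Repeating this over rounds lets knowledge spread across the vehicular network.
```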
Federated Learning (FL) is increasingly adopted as a decentralized machine learning paradigm due to its ability to preserve data privacy by training models without centralizing user data. However, FL is susceptible to indirect privacy breaches via network traffic analysis, an area not explored in existing research. The primary objective of this research is to study the feasibility of fingerprinting deep learning models deployed within FL environments by analyzing their network-layer traffic. In this paper, we conduct an experimental evaluation using various deep learning architectures (i.e., CNN, RNN) within a federated learning testbed. We utilize machine learning algorithms, including Support Vector Machines (SVM), Random Forest, and Gradient Boosting, to fingerprint unique patterns within the traffic data. Our experiments show high fingerprinting accuracy, achieving 100% accuracy with Random Forest and around 95.7% accuracy with the SVM and Gradient Boosting classifiers. This analysis suggests that specific architectures can be identified from only a subset of the network traffic. Hence, if an adversary learns the underlying DL architecture, they can exploit that information to conduct targeted attacks. These findings point to a notable security vulnerability in FL systems and the necessity of strengthening them at the network level.
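The sketch below illustrates the fingerprinting experiment in its simplest form: summarize each captured FL update exchange as flow-level statistics and classify the underlying architecture with a random forest. The specific features and the loader function are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def flow_features(pkt_sizes, inter_arrivals):
    """Summarize one captured FL update exchange as a fixed-length feature vector."""
    return [
        np.sum(pkt_sizes), np.mean(pkt_sizes), np.std(pkt_sizes), len(pkt_sizes),
        np.mean(inter_arrivals), np.std(inter_arrivals),
    ]

# X: one feature vector per captured flow; y: architecture label (e.g., 0=CNN, 1=RNN).
X, y = load_traffic_dataset()   # hypothetical loader over the testbed captures
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("fingerprinting accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```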
In this paper, we explore a selective multi-user jamming attack, named MuJam-RIS, in a wireless system equipped with a unified reconfigurable intelligent surface (RIS). In particular, a deep reinforcement learning (DRL)-based framework is proposed to optimize the RIS under the assumption that spatial channel state information (CSI) is available, which can be obtained in real time from a well-developed radio radiance field (RRF) or offline from ray-tracing-based simulation tools. In the proposed DRL framework, a newly designed actor-critic network incorporates both line-of-sight (LOS) and RIS-assisted reflected channels to ensure robustness across different wireless environments. More importantly, the reward function is generalized so that multiple users can be selectively jammed simultaneously. Evaluation results demonstrate that the proposed approach outperforms a traditional optimization method in single-target jamming while also providing selective jamming capability against multiple targets.
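The sketch below gives the general shape of an actor-critic pair for RIS phase-shift control, assuming PyTorch: the actor maps spatial-CSI features to per-element phase shifts and the critic scores a state-action pair. All dimensions and the reward comment are illustrative assumptions, not the MuJam-RIS design.

```python
import torch
import torch.nn as nn

N_ELEMENTS, STATE_DIM = 64, 128    # RIS elements and CSI feature size (assumed)

actor = nn.Sequential(             # maps CSI features to per-element phase shifts
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_ELEMENTS), nn.Tanh(),            # squashed, then scaled to (-pi, pi)
)
critic = nn.Sequential(            # scores a (state, action) pair
    nn.Linear(STATE_DIM + N_ELEMENTS, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

state = torch.randn(1, STATE_DIM)
phases = torch.pi * actor(state)                        # candidate RIS phase configuration
q_value = critic(torch.cat([state, phases], dim=-1))    # critic's estimate of the jamming reward
# A generalized reward could, for instance, aggregate the SINR degradation at all
# selected target users, which is what enables selective multi-user jamming.
```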
This paper explores the concept of timeliness in covert communications in the face of eavesdropping and jamming. We consider a transmitter-receiver pair communicating over a wireless channel where the choice of a resource block (frequency, time) for transmission is determined by a reinforcement learning policy. The eavesdropper aims to detect a transmission in order to perform a steering attack. Using two multi-armed bandit systems, we investigate the problem of minimizing the Age of Information (AoI) regret at the legitimate receiver while maximizing the AoI regret at the adversary. We present an upper bound on the regret and demonstrate through simulations the validity of the bound and the vulnerabilities introduced by the use of metric-guided policies, such as age-aware policies.
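As a minimal sketch of the transmitter's side of this setup, the code below runs a standard UCB1 bandit over resource blocks and tracks the Age of Information, which resets on a successful delivery and grows by one slot otherwise. The delivery model and parameters are placeholders; the paper's AoI-regret formulation and adversarial bandit are richer than this.

```python
import math
import random

class UCB1:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms      # plays per resource block
        self.values = [0.0] * n_arms    # empirical mean reward (successful delivery)

    def select(self, t):
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm              # play every resource block once first
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a] + math.sqrt(2 * math.log(t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit, aoi = UCB1(n_arms=8), 0
for t in range(1, 1001):
    rb = bandit.select(t)                       # choose a resource block
    delivered = random.random() < 0.6           # placeholder channel/jamming outcome
    aoi = 1 if delivered else aoi + 1           # AoI dynamics at the legitimate receiver
    bandit.update(rb, 1.0 if delivered else 0.0)
```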