Cybersecurity

The Edge-Cloud continuum paradigm holds significant potential for enabling next-generation applications. However, it also introduces serious security and privacy challenges. Edge and IoT devices often provide critical services (e.g., safety-critical functions in automotive systems) and collect sensitive personal data (e.g., through eHealth wearable devices) that must be protected from a range of attackers, including reverse engineers, network attackers, and malicious insiders. Unfortunately, their limited computational resources make these devices particularly vulnerable to such attacks.

Furthermore, beyond external attackers, data security is also threatened by the very Cloud Service Providers (CSPs) that offer the services, the so-called "honest-but-curious" providers.

Addressing these security challenges demands advanced methodologies for the efficient and robust deployment of cybersecurity solutions in cloud-edge infrastructures. The goal is to secure edge devices and the services they host, safeguard the privacy of sensitive data processed and stored at cloud-edge locations, enable resource-efficient orchestration of security services, enforce distributed security policies, and manage identities effectively.

Next-generation digital identity management solutions

Development of tool-supported methodologies for the secure development, maintenance, and deployment of Identity and Access Management (IAM) solutions in cloud-edge services and applications. The aim is to ensure the security and privacy of the sensitive data allocated, processed, and consumed at cloud-edge locations, the enforcement of distributed security policies, and effective identity management.

AI-based Edge Threat and Anomaly Detection

Development of computationally efficient methods for cyber threat and anomaly detection in real-world edge-computing environments. This involves addressing the challenges outlined below.

Security challenges in distributed computing environments

Distributed digital infrastructures, such as smart industries and smart cities, are subject to anomalies (hardware, software, and communication failures) as well as to cyber attacks aimed at causing service downtime, material losses at industrial production sites, or delays in public transportation, with potentially serious repercussions on the lives of citizens.

AI-based systems for detecting anomalies and attacks have demonstrated high effectiveness and detection accuracy in various application scenarios, enabling operators to take appropriate countermeasures promptly. However, deploying such AI-based security solutions in real-world environments with heterogeneous devices and communication channels presents significant challenges:


Resource allocation

Security functions might interfere with the correct execution of a device's main tasks by continuously consuming portions of CPU and memory. The AI model at the core of a security function may need to process variable amounts of data within short time windows, leading to unpredictable resource consumption patterns and potential misbehaviour of other tasks. Addressing this issue requires coordinated orchestration of the available resources, taking into account the resource needs of both security functions and the other processes running on the devices, as well as the overall security requirements.
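As a rough illustration of this kind of coordination, the sketch below implements a simple greedy co-scheduler that admits a device's primary tasks first and then fits security functions into the remaining CPU and memory budget. All task names, priorities, and resource figures are hypothetical; a real orchestrator would be far more dynamic, but the sketch shows the core trade-off between security functions and the device's main workload.

```python
# Hypothetical greedy co-scheduling sketch: admit tasks in priority order
# (primary device tasks first, then security functions) as long as the
# device's CPU and memory budgets are not exceeded. Illustrative only.

def allocate(tasks, cpu_budget, mem_budget):
    """tasks: list of (name, cpu_share, mem_mb, priority); lower priority
    value = more important. Returns the names of admitted tasks."""
    admitted, cpu_used, mem_used = [], 0.0, 0.0
    for name, cpu, mem, _prio in sorted(tasks, key=lambda t: t[3]):
        if cpu_used + cpu <= cpu_budget and mem_used + mem <= mem_budget:
            admitted.append(name)
            cpu_used += cpu
            mem_used += mem
    return admitted

device_tasks = [
    ("control-loop",     0.40, 128, 0),  # primary task: highest priority
    ("telemetry",        0.10,  64, 1),  # primary task
    ("ids-inference",    0.30, 256, 2),  # AI-based security function
    ("deep-packet-scan", 0.35, 512, 3),  # dropped if the budget is exhausted
]
print(allocate(device_tasks, cpu_budget=1.0, mem_budget=512))
# → ['control-loop', 'telemetry', 'ids-inference']
```

A production orchestrator would re-run such a decision whenever a security function's load changes, which is exactly the unpredictability the paragraph above describes.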


Training data confidentiality

The lack of comprehensive and up-to-date datasets encompassing recent cyberattacks and a wide variety of system anomalies remains a significant challenge. This is primarily due to the potential disclosure of sensitive information when sharing datasets, such as critical industrial production details like sensor readings, actuator states, and control messages. A recent approach to address this challenge is Federated Learning (FL), a collaborative machine learning training method that enables multiple parties to develop a common AI-based intrusion and anomaly detection system without sharing their training data. 

Although FL in cybersecurity is still in its early stages, it has already shown promising results. While FL is designed to preserve confidentiality in collaborative learning, it remains vulnerable to malicious participants who may exploit the training process to compromise the AI model trained for intrusion and anomaly detection. Addressing this challenge requires advanced cryptographic techniques, such as homomorphic encryption and differential privacy, to protect the information shared among participants.
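One common building block for protecting the shared updates is to clip each party's model update and add calibrated noise before it leaves the device, in the style of the Gaussian mechanism from differential privacy. The sketch below illustrates the mechanics only; the clipping bound and noise scale are placeholder values and do not constitute a calibrated privacy guarantee.

```python
import random

# Sketch of differential-privacy-style perturbation of a model update
# before sharing it with the FL server. Parameters are illustrative and
# NOT calibrated to any specific (epsilon, delta) guarantee.

def clip(update, max_norm):
    """Scale the update so its L2 norm is at most max_norm."""
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def privatize(update, max_norm=1.0, noise_std=0.1, rng=random):
    """Clip the update, then add Gaussian noise to each coordinate."""
    return [u + rng.gauss(0.0, noise_std) for u in clip(update, max_norm)]

rng = random.Random(42)
raw_update = [0.8, -1.7, 0.3]          # a party's local weight update
shared = privatize(raw_update, max_norm=1.0, noise_std=0.1, rng=rng)
```

Clipping bounds any single party's influence on the aggregate, and the added noise masks individual contributions; homomorphic encryption would additionally let the server aggregate updates without seeing them in the clear.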