People

Ing. Maria Rigaki

All publications

A Survey of Privacy Attacks in Machine Learning

  • DOI: 10.1145/3624010
  • Link: https://doi.org/10.1145/3624010
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Abstract:
    As machine learning becomes more widely used, the need to study its implications in security and privacy becomes more urgent. Although the body of work in privacy has been steadily growing over the past few years, research on the privacy aspects of machine learning has received less focus than the security aspects. Our contribution in this research is an analysis of more than 45 papers related to privacy attacks against machine learning that have been published during the past seven years. We propose an attack taxonomy, together with a threat model that allows the categorization of different attacks based on the adversarial knowledge and the assets under attack. An initial exploration of the causes of privacy leaks is presented, as well as a detailed analysis of the different attacks. Finally, we present an overview of the most commonly proposed defenses and a discussion of the open problems and future directions identified during our analysis.

Out of the Cage: How Stochastic Parrots Win in Cyber Security Environments

  • DOI: 10.5220/0012391800003636
  • Link: https://doi.org/10.5220/0012391800003636
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Abstract:
    Large Language Models (LLMs) have gained widespread popularity across diverse domains involving text generation, summarization, and various natural language processing tasks. Despite their inherent limitations, LLM-based designs have shown promising capabilities in planning and navigating open-world scenarios. This paper introduces a novel application of pre-trained LLMs as agents within cybersecurity network environments, focusing on their utility for sequential decision-making processes. We present an approach wherein pre-trained LLMs are leveraged as attacking agents in two reinforcement learning environments. Our proposed agents demonstrate similar or better performance than state-of-the-art agents trained for thousands of episodes in most scenarios and configurations. In addition, the best LLM agents perform similarly to human testers of the environment without any additional training process. This design highlights the potential of LLMs to efficiently address complex decision-making tasks within cybersecurity. Furthermore, we introduce a new network security environment named NetSecGame. The environment is designed to eventually support complex multi-agent scenarios within the network security domain. The proposed environment mimics real network attacks and is designed to be highly modular and adaptable for various scenarios.
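The agent loop the abstract describes can be sketched roughly as follows. This is a minimal illustration with a toy environment and a rule-based stand-in for the LLM call; none of the names below come from NetSecGame's actual API:

```python
# Sketch of an LLM-as-agent loop: render the state as a prompt, ask the
# "LLM" for the next action, apply it to the environment, repeat.

def render_prompt(state, valid_actions):
    """Turn the current environment state into a textual prompt."""
    lines = ["You are a network attacker. Known hosts: "
             + ", ".join(sorted(state["known_hosts"]))]
    lines.append("Choose one action: " + ", ".join(valid_actions))
    return "\n".join(lines)

def llm_choose_action(prompt, valid_actions):
    """Stand-in for a completion call to a pretrained LLM; a trivial
    rule (scan first, then exploit) replaces the model here."""
    return "scan" if "scan" in valid_actions else valid_actions[0]

def run_episode(max_steps=10):
    """Toy environment: scanning reveals a second host, exploiting it
    reaches the goal and ends the episode."""
    state = {"known_hosts": {"10.0.0.1"}, "owned": set()}
    for _ in range(max_steps):
        valid = ["scan"] if len(state["known_hosts"]) == 1 else ["exploit"]
        action = llm_choose_action(render_prompt(state, valid), valid)
        if action == "scan":
            state["known_hosts"].add("10.0.0.2")
        elif action == "exploit":
            state["owned"].add("10.0.0.2")
            return state, True  # goal reached
    return state, False
```

The key design point the paper exploits is that, unlike an RL agent, the LLM needs no episodes of training: the prompt alone carries the task description and current observation.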

The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning

  • Authors: Ing. Maria Rigaki; Ing. Sebastián García, Ph.D.
  • Publication: 28th European Symposium on Research in Computer Security, The Hague, The Netherlands, September 25–29, 2023, Proceedings, Part I. Basel: Springer Nature Switzerland AG, 2024. p. 44-64. ISSN 0302-9743. ISBN 978-3-031-50593-5.
  • Year: 2024
  • DOI: 10.1007/978-3-031-51482-1_3
  • Link: https://doi.org/10.1007/978-3-031-51482-1_3
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Abstract:
    Due to the proliferation of malware, defenders are increasingly turning to automation and machine learning as part of the malware detection toolchain. However, machine learning models are susceptible to adversarial attacks, requiring the testing of model and product robustness. Meanwhile, attackers also seek to automate malware generation and evasion of antivirus systems, and defenders try to gain insight into their methods. This work proposes a new algorithm that combines Malware Evasion and Model Extraction (MEME) attacks. MEME uses model-based reinforcement learning to adversarially modify Windows executable binary samples while simultaneously training a surrogate model that closely agrees with the target model it aims to evade. To evaluate this method, we compare it with two state-of-the-art attacks in adversarial malware creation, using three well-known published models and one antivirus product as targets. Results show that MEME outperforms the state-of-the-art methods in terms of evasion capabilities in almost all cases, producing evasive malware with an evasion rate in the range of 32–73%. It also produces surrogate models with a prediction label agreement with the respective target models between 97% and 99%. The surrogate could be used to fine-tune and improve the evasion rate in the future.
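The core MEME loop, perturbing a sample, querying the black-box target, and logging the labels that a surrogate is then fit on, can be sketched as follows. This is a toy illustration: the numeric "samples", the score-based "detector", and the threshold surrogate are hypothetical stand-ins for the paper's PE-file modification actions and learned models:

```python
from itertools import cycle, islice

def target_detects(sample):
    """Stand-in black-box detector: flags samples whose score exceeds 5
    (a real target would be a malware classifier or antivirus)."""
    return sum(sample) > 5

def modify(sample, action):
    """Apply one evasion action; real actions edit PE sections,
    imports, overlays, etc. Here an action is just an appended value."""
    return sample + [action]

def meme_round(sample, actions, n_queries=9):
    """One MEME-style round: query the target on perturbed samples,
    keep the first evasive variant found, and return the query log."""
    dataset, evasive = [], None
    for action in islice(cycle(actions), n_queries):
        candidate = modify(sample, action)
        label = target_detects(candidate)   # one query to the black box
        dataset.append((candidate, label))  # training data for surrogate
        if not label and evasive is None:
            evasive = candidate
    return evasive, dataset

def fit_surrogate(dataset):
    """Fit a trivial surrogate: a score threshold separating the
    target's detected/undetected labels (stand-in for MEME's learned
    surrogate, which supplies cheap rewards between target queries)."""
    neg = [sum(s) for s, lbl in dataset if not lbl]
    pos = [sum(s) for s, lbl in dataset if lbl]
    if not neg or not pos:
        return None
    return (max(neg) + min(pos)) / 2
```

The point of the combination is query efficiency: every query made while searching for an evasive variant doubles as a labeled example for the surrogate, and the surrogate in turn lets the reinforcement-learning policy improve without further target queries.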

Machete: Dissecting the Operations of a Cyber Espionage Group in Latin America

  • DOI: 10.1109/EuroSPW.2019.00058
  • Link: https://doi.org/10.1109/EuroSPW.2019.00058
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Abstract:
    Reports on cyber espionage operations have been on the rise in the last decade. However, operations in Latin America are heavily under-researched and potentially underestimated. In this paper, we analyze and dissect a cyber espionage tool known as Machete. Our research shows that Machete is operated by a highly coordinated and organized group that focuses on Latin American targets. We describe the five phases of the APT operations from delivery to exfiltration of information, and we show why Machete is considered a cyber espionage tool. Furthermore, our analysis indicates that the targeted victims belong to military, political, or diplomatic sectors. The review of almost six years of Machete operations shows that it is likely operated by a single group, and their activities are possibly state-sponsored. Machete is still active and operational to this day.

Bringing a GAN to a Knife-Fight: Adapting Malware Communication to Avoid Detection

  • DOI: 10.1109/SPW.2018.00019
  • Link: https://doi.org/10.1109/SPW.2018.00019
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Abstract:
    Generative Adversarial Networks (GANs) have been successfully used in a large number of domains. This paper proposes the use of GANs for generating network traffic in order to mimic other types of traffic. In particular, our method modifies the network behavior of a real malware sample in order to mimic the traffic of a legitimate application, and therefore avoid detection. By modifying the source code of the malware to receive parameters from a GAN, it was possible to adapt the behavior of its Command and Control (C2) channel to mimic the behavior of Facebook chat network traffic. In this way, it was possible to avoid the detection of new-generation Intrusion Prevention Systems that use machine learning and behavioral characteristics. A real-life scenario was successfully implemented using the Stratosphere behavioral IPS in a router, while the malware and the GAN were deployed in the local network of our laboratory, and the C2 server was deployed in the cloud. Results show that a GAN can successfully modify the traffic of malware to make it undetectable. The modified malware also tested whether it was being blocked and used this information as feedback to the GAN. This work envisions the possibility of self-adapting malware and self-adapting IPS.
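The feedback loop the abstract describes, in which the malware adapts its C2 traffic parameters until the IPS stops blocking it, can be sketched as follows. The numbers and names here are purely illustrative: the paper uses a GAN trained to imitate Facebook chat traffic, whereas this toy version merely nudges two traffic parameters toward a fixed legitimate profile using only blocked/passed feedback:

```python
# Hypothetical chat-like traffic profile the C2 channel tries to imitate.
LEGIT_PROFILE = {"interval_s": 30.0, "bytes": 900.0}

def ips_blocks(params, tolerance=0.2):
    """Stand-in behavioral IPS: blocks flows deviating more than 20%
    from the legitimate profile on any parameter."""
    return any(abs(params[k] - LEGIT_PROFILE[k]) / LEGIT_PROFILE[k] > tolerance
               for k in LEGIT_PROFILE)

def adapt_c2(params, lr=0.5, max_rounds=50):
    """Adapt the C2 parameters round by round, using only the
    blocked/passed signal as feedback (the role the GAN's generator
    updates play in the paper). Returns the final parameters and the
    number of rounds until the flow first passed."""
    for rounds in range(1, max_rounds + 1):
        if not ips_blocks(params):
            return params, rounds  # the flow now passes the IPS
        # Nudge each parameter toward the legitimate profile.
        params = {k: v + lr * (LEGIT_PROFILE[k] - v) for k, v in params.items()}
    return params, max_rounds
```

A beaconing channel that starts out far from the profile (for example, one-second intervals and large payloads) converges to chat-like timing and sizes within a handful of rounds, which is the self-adaptation the paper's closing sentence envisions.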

Responsible for this page: Ing. Mgr. Radovan Suk