Key technologies

Robust and Secure Machine Learning for Security and Defense-Relevant Systems (RSML)

  1. Status: Project phase

Background

Neural AI methods (e.g. deep learning) are increasingly used in applications of internal and external security organizations, for example in drone-based reconnaissance, detection of disinformation and tactical situational awareness. At the same time, knowledge is growing about vulnerabilities that are specific to machine learning (ML) and come on top of classic IT vulnerabilities, e.g. “poisoned” training data, forced misclassification, indirect command execution in large language models, and targeted extraction of training data. As things stand, there are no guarantees that such AI-specific attack vectors cannot also be exploited against high-security AI applications.
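The “forced misclassification” attack vector mentioned above can be illustrated with a minimal, self-contained sketch. All weights and inputs below are hypothetical, and the attack shown is a generic FGSM-style evasion on a toy linear classifier, not a method prescribed by the program: a small, bounded perturbation of each input feature is enough to flip the prediction.

```python
import numpy as np

# Toy linear classifier with fixed (hypothetical) weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return class 1 if the decision score is positive."""
    return int(w @ x + b > 0)

# A clean input that is correctly classified as class 1.
x_clean = np.array([2.0, 0.5, 1.0])  # score = 2.0 - 1.0 + 0.5 + 0.1 = 1.6

# FGSM-style evasion: step each feature against the sign of the
# gradient of the score w.r.t. the input (for a linear model the
# gradient is simply w).
eps = 0.9
x_adv = x_clean - eps * np.sign(w)

# Each feature changes by at most eps, yet the score drops by
# eps * ||w||_1 = 0.9 * 3.5 = 3.15, flipping the prediction.
```

The same principle scales to deep networks, where the input gradient is obtained by backpropagation; robustness research of the kind the program targets aims to bound exactly this kind of sensitivity.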

Aim

The aim of the project is to research and support the development of highly resilient, robust and provably secure ML components for safety- and security-relevant environments. For the AI applications under consideration, model and system behavior in terms of desired outputs (predictions, recommendations, actions) must stay within defined limits even under adverse and potentially hostile conditions (e.g. poor visibility, loss of connectivity). Participating research consortia can develop their robustness and security approaches for a broad spectrum of ML applications relevant to the requirements of internal and external security.

Thematically, the program is divided into the following main research areas:

  • Verification over the entire life cycle
  • Automated data quality assurance
  • Hybrid models from neural and symbolic AI systems
  • Formal verification of ML models
  • Secure system embedding
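One of the research areas above, formal verification of ML models, can be illustrated with a minimal sketch: interval bound propagation computes guaranteed output ranges for every input within a small perturbation ball, which is one standard building block of robustness certification. The layer weights and the perturbation radius here are hypothetical.

```python
import numpy as np

# One hidden layer with fixed (hypothetical) weights.
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
b = np.array([0.0, -0.5])

def interval_linear(lo, hi, W, b):
    """Propagate an axis-aligned box [lo, hi] through x -> W @ x + b.

    Positive weights pick up the lower bound for the output's lower
    bound; negative weights pick up the upper bound, and vice versa.
    """
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Certify behavior for EVERY input within eps of x0 (L-infinity ball).
x0, eps = np.array([1.0, 1.0]), 0.1
lo, hi = interval_linear(x0 - eps, x0 + eps, W, b)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
# [lo, hi] now encloses the layer output for all admissible inputs.
```

Chaining such bounds through a whole network yields a sound (if sometimes loose) certificate that no input in the ball can produce an output outside the computed box.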

Disruptive Risk Research

According to the current state of research, formal proofs of behavior for ML models and systems, including desirable safety and robustness properties, are either unavailable or only insufficiently realized. To counter all threats and attack vectors in the future, including previously unknown ones, long-term fundamental contributions must be made to establishing and improving ML security and robustness as an inherent quality dimension, in addition to AI risk management measures and adversarial training. A verifiably secure AI system thus creates the basic prerequisite for use in high-security and defense contexts. As things stand, however, it is uncertain whether, and to what extent, these aims, i.e. solutions to the fundamental security problems of ML models, can be achieved at all.
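Adversarial training, mentioned above as one of today's mitigation measures, can be sketched in a few lines: each gradient step is taken on inputs that have first been perturbed against the current model. The task, model and hyperparameters below are all hypothetical; this is an illustration of the general technique, not of any method specific to the program.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary task: the label is the sign of the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

w, b, lr, eps = np.zeros(2), 0.0, 0.5, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Craft FGSM-style perturbations against the current model: for
    # logistic loss, the input gradient is (p - y) * w per sample.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign(np.outer(p - y, w))
    # Standard gradient step, but computed on the perturbed batch.
    p_adv = sigmoid(X_adv @ w + b)
    g = p_adv - y
    w -= lr * (X_adv.T @ g) / len(y)
    b -= lr * g.mean()
```

As the surrounding paragraph notes, such training empirically hardens a model against known perturbations but, unlike formal verification, provides no guarantee against unknown attack vectors.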

Questions about the program? Please write to us:

  1. Program team: Key Technologies | Security for and through Artificial Intelligence
  2. E-Mail: rsml@cyberagentur.de

Newsletter

Your update on research, contract awards and more.

Subscribe to our scientific newsletter to find out promptly which research projects we are currently awarding, when partnering events, symposia or ideas competitions are coming up, and what’s new in research.