25 million euros for robust and secure machine learning
The Agentur für Innovation in der Cybersicherheit GmbH (Cyberagentur) is launching the third phase of its “Robust and Secure Machine Learning” research program with three innovative approaches for securing AI systems. The aim is to develop tools, applications and concepts that can guarantee security even under hostile conditions. The research concentrates on applications in critical infrastructures, situation centers and open source intelligence (OSINT), and in this first step focuses on securing the data foundation of neural AI systems.
In this third phase of the “Robust and Secure Machine Learning” (RSML) research competition, three selected research teams will research and implement innovative approaches to the security and robustness of machine learning systems from December 2024 to November 2025. The Cyberagentur has allocated a total budget of 25 million euros for phases 2 to 5. A concluding phase 6, scheduled to start in around three years, will be awarded to the research team that delivers the most innovative research in phases 3 to 5 and submits the most convincing proposal for phase 6.
“With the third phase of RSML, we are launching research into groundbreaking technologies that not only lay theoretical foundations but also provide practical solutions for the security needs of tomorrow,” explains Dr. Daniel Gille, Project Manager and Head of Unit for Artificial Intelligence at the Cyberagentur. “Our research teams are now working on data-centric solutions to ensure security and resilience in highly sensitive application areas such as situation centers and critical network infrastructures.”
The three consortia are focusing on the following approaches:
- Modular toolkit with end-to-end assessment workflows: new metrics and tools to guide the development of assured ML systems.
- Hybrid AI-supported Red/Blue Team agents: These agents are designed to mutually test and secure both the systems requiring protection and the associated safeguarding AIs.
- Holistic framework for secure ML applications: A comprehensive verification system with threat modeling and an “RSML Operations Center” as a central interface.
“The development and integration of highly reliable AI systems is one of the greatest challenges of our time. Especially in safety-critical applications, these systems must not only be reliable but also verifiably tamper-proof,” emphasizes Dr. Gille. “With the approaches of the third phase, we are focusing on the data that forms the foundation of all deep learning applications, including large language models and image classification systems. The data used for training is often incomplete, unrepresentative, poorly labeled or even manipulated. This is the root of many downstream security and reliability problems for which we want to find solutions.”
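To make the data-quality point concrete, the following minimal sketch (an illustration only, not part of the RSML program or any of the funded approaches) shows how randomly flipped training labels, one simple form of manipulated training data, can degrade a standard classifier; the synthetic dataset, the logistic regression model and the 30% flipping rate are assumptions chosen purely for demonstration.

```python
# Illustrative sketch: manipulated training labels degrade a simple classifier.
# Dataset, model and poisoning rate are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels and report test accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean labels:", round(train_and_score(y_train), 3))

# Simulate label manipulation: flip 30% of the training labels at random.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("30% flipped: ", round(train_and_score(poisoned), 3))
```

The drop in test accuracy between the two runs is a toy stand-in for the downstream reliability problems the quote describes; real attacks and defenses on training data are considerably more subtle.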
The RSML research program takes a unique approach by competitively funding research and development across multiple phases. The results are intended not only to expand the scientific basis, but also to produce prototype applications that are evaluated in realistic test environments.
Further information: https://www.cyberagentur.de/rsml