- Status: Project phase
Background
Neural artificial intelligence algorithms (e.g. deep learning) are increasingly used in applications by organisations responsible for internal and external security, for example in drone-based reconnaissance, disinformation detection and tactical situational awareness. At the same time, there is growing awareness of vulnerabilities that are specific to machine learning (ML) and that arise independently of classical IT security vulnerabilities (e.g. “poisoned” training data, forced misclassification, indirect command execution in large language models, targeted extraction of training data). At present, there is no guarantee that such AI-specific attack vectors cannot also be exploited against high-security AI applications.
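To make one of the attack vectors above concrete, the following is a minimal sketch of a forced-misclassification (evasion) attack in the style of the fast gradient sign method. The model, the input batch and the perturbation budget are illustrative assumptions, not part of the programme text.

```python
# Illustrative only: a minimal FGSM-style evasion attack that "forces" a misclassification.
# `model`, `x`, `y` and the perturbation budget `epsilon` are assumed placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of the input batch x perturbed to increase the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the loss-increasing direction, bounded element-wise by epsilon,
    # then clip back to the assumed valid [0, 1] input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```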
Aim
The aim of the program is to research and support the development of highly resilient, robust and verifiably secure ML components for security-relevant environments. For the AI applications under consideration, model and system behaviour in terms of the desired outputs (predictions, recommendations, actions) must remain within defined limits even under adverse and potentially hostile conditions (e.g. poor visibility, no connectivity). Participating research consortia can develop their robustness and security approaches for a broad spectrum of potential ML applications relevant to internal and external security stakeholders.
Thematically, the program is divided into the following main research areas:
- Verification over the entire life cycle
- Automated data quality assurance
- Hybrid models combining neural and symbolic AI systems
- Formal verification of ML models (see the sketch after this list)
- Secure system embedding
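As a hedged illustration of the formal-verification research area above, the sketch below propagates an input interval through a small network using interval bound propagation (IBP), one common building block for certifying output bounds. The weights and the input box are made-up values, not program requirements.

```python
# Illustrative only: interval bound propagation (IBP) through a linear layer and a ReLU.
# The weights W1, b1 and the input box are made-up values for demonstration.
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Propagate the input box [lo, hi] through y = W x + b and return output bounds."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c_out = W @ center + b
    r_out = np.abs(W) @ radius
    return c_out - r_out, c_out + r_out

def ibp_relu(lo, hi):
    """ReLU is monotone, so the bounds can be clipped directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

W1, b1 = np.array([[1.0, -0.5], [0.3, 0.8]]), np.zeros(2)
lo_out, hi_out = ibp_relu(*ibp_linear(W1, b1, np.array([0.0, 0.0]), np.array([0.1, 0.1])))
# If [lo_out, hi_out] stays within the specification, the property holds for every input in the box.
```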
Disruptive Risk Research
According to the current state of research, formal proofs of the behaviour of ML models and systems, including desirable safety and robustness properties, are either missing or only insufficiently realised. In order to counter all threats and attack vectors in the future, including previously unknown ones, fundamental contributions to establishing and improving ML security and robustness as an inherent quality dimension must be made in the long term, in addition to AI risk management measures and adversarial training. A provably secure AI system is thus the basic prerequisite for deployment in high-security and defence-relevant contexts. At present, however, it is uncertain whether and to what extent the desired goals, i.e. approaches to solving the fundamental security problems of ML models, can be achieved at all.
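As a hedged sketch of the adversarial training mentioned above (one of today's stop-gap measures rather than a formal guarantee), the following shows a single training step with a projected-gradient-descent inner loop. The model, optimizer, batch and all hyperparameters are illustrative assumptions.

```python
# Illustrative only: one adversarial-training step with a PGD inner loop.
# `model`, `optimizer`, the batch (x, y) and all hyperparameters are assumed placeholders.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03, alpha=0.01, steps=5):
    # Inner maximisation: search for a perturbation within the epsilon-ball that raises the loss.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    # Outer minimisation: update the model on the worst-case inputs found.
    optimizer.zero_grad()
    F.cross_entropy(model(x + delta.detach()), y).backward()
    optimizer.step()
```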