Strengthening the development of Trustworthy AI
MISSION KI minimum standard
MISSION KI is developing a voluntary minimum standard for artificial intelligence that strengthens the reliability and trustworthiness of AI applications.
Since 1 August 2024, the EU AI Act has regulated the safety of AI applications throughout Europe. The Act focuses on high-risk AI systems and defines strict requirements and conformity assessment procedures for these applications.
For lower-risk AI applications, such as automated contract review or supply chain optimisation, the EU AI Act offers flexible regulation. These systems are primarily subject to transparency requirements in order to keep users and providers informed.
This is where MISSION KI comes in. The initiative aims to create a flexible framework for AI applications below the high-risk threshold with a voluntary, practicable minimum standard. This standard draws on the requirements the EU AI Act places on high-risk applications and ensures quality and trustworthiness.
MISSION KI strengthens users' trust in AI technologies and at the same time creates competitive advantages for AI providers who apply the standard. By testing the standard on real use cases, the initiative ensures that it is developed in a needs-oriented and practical manner.
1. What is the content of the MISSION KI minimum standard based on?
The MISSION KI minimum standard is based on the ‘Ethics Guidelines for Trustworthy AI’ of the High-Level Expert Group on Artificial Intelligence (AI HLEG) convened by the European Commission. The AI HLEG defined central principles for assessing the trustworthiness of AI systems. By taking these principles into account, the MISSION KI minimum standard ensures compatibility with European AI regulation and standardisation.
1.1 - Values
In this framework, six core values were identified that serve as a compass for the responsible development and use of AI systems. They form the foundation for an ethical and human-centred approach in the AI landscape:
1. Reliability
Performance & Robustness
Fallback Plans & General Safety
2. AI-specific cyber security
Resistance to AI-specific attacks and security
3. Data quality, protection and management
Data quality & integrity
Protection of personal data
Protection of proprietary data
Data access
4. Non-discrimination
Avoidance of unjustified bias
Accessibility and universal design
Stakeholder participation
5. Transparency
Traceability & documentation
Explainability & interpretability
External communication
6. Human supervision & control
Human agency
Human supervision
1.2 - Protection needs analysis
The MISSION KI minimum standard relies on a protection needs analysis (Schutzbedarfsanalyse, SBA) as its starting point to ensure efficiency.
This analysis determines the necessary protection requirements for the defined values and thus forms the basis for a targeted test. It filters out the relevant values and criteria for a use case and defines a target for the subsequent test.
Details of the protection needs analysis
The minimum standard therefore takes into account the variety of AI application scenarios - from energy distribution optimisation and product recommendation systems to medical diagnostic tools. The relevance of the individual values varies depending on the use case.
For example, the value ‘non-discrimination’ plays a subordinate role in an AI system for optimising power distribution, as its decisions are based on technical parameters. In this case, the value ‘transparency’ takes centre stage: the AI’s decisions must be traceable and understandable so that operators and regulatory authorities can check why certain distribution decisions were made.
Regardless of the use case, the ‘reliability’ value is always subject to scrutiny, as it is considered fundamental to the quality of any AI application. The other values can be categorised as not applicable, in whole or in part, under certain conditions that are clearly defined in the protection needs analysis.
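The filtering logic described above can be sketched in a few lines of code. The value names, protection levels, and the questionnaire-style input used here are illustrative assumptions, not the actual MISSION KI methodology:

```python
# Hypothetical sketch of a protection needs analysis (SBA):
# a protection level is determined for each of the six values;
# 'reliability' is always in scope, the other values may be
# excluded when their assessed protection need is 'none'.

VALUES = [
    "reliability",
    "ai_specific_cyber_security",
    "data_quality_protection_management",
    "non_discrimination",
    "transparency",
    "human_oversight_control",
]

def protection_needs_analysis(assessed_levels: dict) -> dict:
    """Return the test targets for the values relevant to a use case.

    assessed_levels maps value -> "none" | "low" | "medium" | "high",
    as determined by a (hypothetical) SBA questionnaire.
    """
    targets = {}
    for value in VALUES:
        level = assessed_levels.get(value, "none")
        # Reliability is always tested, regardless of the assessment.
        if value == "reliability" and level == "none":
            level = "low"
        if level != "none":
            targets[value] = level
    return targets

# Example: an energy-distribution optimiser where non-discrimination
# is out of scope and transparency has a high protection need.
targets = protection_needs_analysis({
    "transparency": "high",
    "reliability": "medium",
    "non_discrimination": "none",
})
print(targets)  # non_discrimination is filtered out; reliability stays
```

The SBA output then serves as the test target for the subsequent audit: only the values that remain in the result need to be examined.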
2. How does the standard become auditable?
2.1 - The test criteria catalogue translates abstract values into measurable variables
In the context of the emerging AI regulation and standardisation, a number of criteria catalogues and standards on AI trustworthiness have been published in Europe and Germany. These are largely based on the results of the AI HLEG.
Details of the test criteria catalogue
The MISSION KI test criteria catalogue is based on three sources in particular:
VDE SPEC 90012,
AI test catalogue of the Fraunhofer IAIS,
AIC4 criteria catalogue for AI cloud services from the Federal Office for Information Security (BSI).
In order to make the MISSION KI minimum standard testable, the six abstract values were translated into a structured test procedure based on the so-called ‘VCIO’ approach (Values - Criteria - Indicators - Observables). The procedure is divided into several levels: the values form the foundation on which specific criteria are built; indicators and measurable variables (observables) are used to assess these criteria. The degree of fulfilment of each value is determined systematically on the basis of this structure. This methodology ensures a precise and comprehensible assessment.
In addition, test tools are being developed to check the fulfilment of the observables and thus increase the reliability of the test result.
2.2 - The evaluation
At the end of the test process, an overall assessment is made for each of the six defined values. This assessment is compared with the previously determined protection requirements. An AI application passes the test if it achieves the defined test target for each value. This documents that the quality measures and their evidence sufficiently fulfil the identified protection requirements.
The successful test thus confirms that the AI application fulfils the necessary quality standards and has implemented the required protective measures. This process ensures a thorough evaluation and creates transparency regarding the trustworthiness and security of the tested AI systems.
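The evaluation described above can be sketched as a small example. The scoring scale, the minimum-principle aggregation, and the criterion names are assumptions made for illustration, not the actual MISSION KI test procedure:

```python
# Hypothetical VCIO-style evaluation: observable scores are rolled up
# through indicators and criteria into a fulfilment level per value,
# which is then compared against the target from the protection needs
# analysis. The scale and the aggregation rule are assumptions.

LEVELS = ["low", "medium", "high"]

def fulfilment(value_tree: dict) -> str:
    """Roll up observable scores into a per-value fulfilment level.

    value_tree: {criterion: {indicator: [observable_level, ...]}}
    The weakest observable determines the result (minimum principle).
    """
    scores = [
        LEVELS.index(obs)
        for indicators in value_tree.values()
        for observables in indicators.values()
        for obs in observables
    ]
    return LEVELS[min(scores)]

def passes(results: dict, targets: dict) -> bool:
    """An application passes if every value meets its test target."""
    return all(
        LEVELS.index(fulfilment(results[value])) >= LEVELS.index(target)
        for value, target in targets.items()
    )

# Example with two values in scope after the protection needs analysis.
targets = {"reliability": "medium", "transparency": "high"}
results = {
    "reliability": {"robustness": {"accuracy": ["high", "medium"]}},
    "transparency": {"traceability": {"documentation": ["high"]}},
}
print(passes(results, targets))  # True: both test targets are met
```

A single value falling short of its target is enough to fail the overall test, which mirrors the per-value pass criterion described above.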
3. Advantages for AI users and operators
The MISSION KI minimum standard offers clear advantages for AI providers and AI operators. AI providers benefit from an efficient proof of quality that can be used by large companies and start-ups alike.
This improves their competitiveness, as they can stand out on the market thanks to comparable quality criteria. In addition, with the MISSION KI minimum standard, AI providers lay the foundation early on for fulfilling the requirements of the EU AI Act.
This is particularly helpful if the area of use of an AI application changes in such a way that it is later categorised as a high-risk application. AI operators, in turn, benefit from greater market transparency and higher reliability of the AI applications used. This also strengthens end users' trust in the technology: a win-win situation that promotes the development of a robust and trustworthy AI ecosystem in Europe.
Our Partners
The development of our MISSION KI minimum standard is supported by a strong partnership of leading institutions: