DIN SPEC 92005 Artificial Intelligence – Quantification of Uncertainties in Machine Learning

DIN SPEC 92005 – Standardizing uncertainty in machine learning?

DIN SPEC 92005 has been publicly available since January 2024. It deals with the quantification of uncertainties in Machine Learning (ML) and was developed in collaboration with Fraunhofer IESE, which also provided the deputy chairman.

This article is also available in German: DIN SPEC 92005 – Unsicherheit im Maschinellen Lernen standardisieren?

DIN SPEC 92005: An important step towards consciously dealing with uncertainty in AI applications

"As Deputy Chairman, I am certainly somewhat biased, but I am also firmly convinced that a major step towards better uncertainty management in future AI systems has been taken here under the leadership of DIN. I can therefore recommend that anyone interested in the topic take a look at DIN SPEC 92005, which is available free of charge from Beuth Verlag."

Dr. Michael Kläs, Expert for Safe AI, Fraunhofer IESE

DIN SPEC 92005 as a solid foundation for practice

Since ML models are trained on data, residual uncertainty in their outputs is usually unavoidable. This makes it all the more important, especially for safety-critical AI applications, to reliably determine the uncertainty of these outputs.

DIN SPEC 92005 offers, among other things:

  • a consolidated terminology, including an uncertainty ontology (cf. [5])
  • a practical classification of sources of uncertainty (cf. [2])
  • a systematic overview of different approaches (cf. [1])
  • a compilation of requirements for uncertainty quantification
  • examples of how to implement the requirements with approaches such as uncertainty wrappers (see the illustrative sketch after this list)
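
To make the last point more concrete, here is a minimal, illustrative sketch of the general idea behind an uncertainty wrapper: an already trained model is left unchanged, and its situation-dependent error rate, estimated on a held-out dataset, is attached to each prediction at runtime. The class name SimpleUncertaintyWrapper, the use of a shallow decision tree as partitioner, and all parameter names are assumptions made for this example; they are not taken from DIN SPEC 92005 or the referenced papers.

```python
# Illustrative sketch only: an "outside-model" uncertainty estimate that wraps
# a trained model, partitions the situation space with a shallow decision tree,
# and reports the error rate observed per partition on held-out data as the
# uncertainty of new predictions. Names are hypothetical, not from the SPEC.

import numpy as np
from sklearn.tree import DecisionTreeClassifier


class SimpleUncertaintyWrapper:
    def __init__(self, model, max_depth=3):
        self.model = model  # already trained ML model, used as-is
        self.partitioner = DecisionTreeClassifier(max_depth=max_depth)
        self.error_rate_per_leaf = {}

    def fit(self, X_cal, y_cal, situation_features):
        """Estimate per-partition error rates on a held-out calibration set."""
        errors = (self.model.predict(X_cal) != y_cal).astype(int)
        # Partition the situation space so that leaves separate error-prone
        # situations from benign ones.
        self.partitioner.fit(situation_features, errors)
        leaves = self.partitioner.apply(situation_features)
        for leaf in np.unique(leaves):
            self.error_rate_per_leaf[leaf] = errors[leaves == leaf].mean()
        return self

    def predict_with_uncertainty(self, X, situation_features):
        """Return predictions together with a situation-aware uncertainty."""
        preds = self.model.predict(X)
        leaves = self.partitioner.apply(situation_features)
        uncertainty = np.array([self.error_rate_per_leaf[l] for l in leaves])
        return preds, uncertainty
```

In a real application, the partitioning would be based on quality impact factors of the operating context rather than arbitrary features, and the estimates would be accompanied by statistical guarantees; see [3], [4], and [11] for how uncertainty wrappers and conformal prediction address this.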

Would you like a broader overview or more details?

If you need a general overview of quality assurance for ML models, especially in the context of safety-critical AI systems, become a Certified Data Scientist Specialized in Assuring Safety. Current training dates can be found on the seminar page of the Fraunhofer Big Data and Artificial Intelligence Alliance or, with a direct booking option, at Fraunhofer IESE.

If you would like to explore uncertainty management for AI in more detail – for example, if you are wondering how to implement the requirements for uncertainty quantification from DIN SPEC 92005 in practice – you are welcome to contact us. Our Dependable AI team has been working for years on innovative approaches to reliably determine the uncertainty of AI models at runtime and use it to make better decisions.

Are you interested in the scientific background?

Here you will find a selection of recent IESE publications on uncertainty management for AI models (2018 to 2023):

  1. Kläs, M., “Towards Identifying and Managing Sources of Uncertainty in AI and Machine Learning Models – An Overview,” arxiv.org/abs/1811.11669, 2018.
  2. Kläs, M., Vollmer, A. M., “Uncertainty in Machine Learning Applications: A Practice-Driven Classification of Uncertainty,” WAISE 2018 at Computer Safety, Reliability, and Security (SAFECOMP 2018), Västerås, Sweden, 2018.
  3. Kläs, M., Sembach, L., “Uncertainty Wrappers for Data-driven Models – Increase the Transparency of AI/ML-based Models through Enrichment with Dependable Situation-aware Uncertainty Estimates,” WAISE 2019 at Computer Safety, Reliability, and Security (SAFECOMP 2019), Turku, Finland, 2019.
  4. Kläs, M., Jöckel, L., "A Framework for Building Uncertainty Wrappers for AI/ML-based Data-Driven Components," WAISE 2020 at Computer Safety, Reliability, and Security (SAFECOMP 2020), Lisbon, Portugal, 2020.
  5. Bandyszak, T., Jöckel, L., Kläs, M., Törsleff, S., Weyer, T., Wirtz, B., "Handling Uncertainty in Collaborative Embedded Systems Engineering," in: Böhm W., Broy M., Klein C., Pohl K., Rumpe B., Schröck S. (eds) Model-Based Engineering of Collaborative Embedded Systems. Springer, Cham, 2021. ISBN 978-3-030-62135-3.
  6. Kläs, M., Adler, R., Sorokos, I., Jöckel, L., Reich, J., “Handling Uncertainties of Data-Driven Models in Compliance with Safety Constraints for Autonomous Behaviour,” Proceedings of European Dependable Computing Conference (EDCC 2021), Munich, Germany, 2021.
  7. Jöckel, L., Kläs, M., “Could We Relieve AI/ML Models of the Responsibility of Providing Dependable Uncertainty Estimates? A Study on Outside-Model Uncertainty Estimates,” Proceedings of International Conference on Computer Safety, Reliability and Security (SAFECOMP 2021), York, UK, 2021.
  8. Groß, J., Adler, R., Kläs, M., Reich, J., Jöckel, L., Gansch, R., "Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception," Proceedings of International Conference on Computer Safety, Reliability and Security (SAFECOMP 2022), Munich, Germany, 2022.
  9. Gerber, P., Jöckel, L., Kläs, M., “A Study on Mitigating Hard Boundaries of Decision-Tree-based Uncertainty Estimates for AI Models,” Proceedings of Artificial Intelligence Safety (SafeAI 2022), 2022.
  10. Groß, J., Kläs, M., Jöckel, L., Gerber, P., "Timeseries-aware Uncertainty Wrappers for Uncertainty Quantification of Information-Fusion-Enhanced AI Models based on Machine Learning," Proceedings of International Conference on Dependable Systems and Networks Workshops (DSN-W), Porto, Portugal, 2023. doi: 10.1109/DSN-W58399.2023.00061.
  11. Jöckel, L., Kläs, M., Groß, J., Gerber, P., “Conformal Prediction and Uncertainty Wrapper: What Statistical Guarantees Can You Get for Uncertainty Quantification in Machine Learning?” Proceedings of WAISE 2023 at International Conference on Computer Safety, Reliability and Security (SAFECOMP 2023), York, UK, 2023.
  12. Jöckel, L., Kläs, M., Popp, G., Hilger, N., Fricke, S., “Uncertainty Wrapper in the medical domain: Establishing transparent uncertainty quantification for opaque machine learning models in practice,” Submitted to European Dependable Computing Conference (EDCC 2024), Leuven, Belgium, 2024. (arxiv.org/abs/2311.05245).