Seminar: Safe AI

Safe solutions that incorporate AI


The use of artificial intelligence (AI) in safety-critical systems poses new challenges, as conventional development and testing methods are often insufficient. The complexity of machine learning (ML) and, in particular, deep learning (DL) models, their often non-deterministic behavior, and their dependence on data quality and the learning algorithms used require a special understanding to ensure safety and reliability. This training course provides you with the necessary knowledge to understand the challenges and highlights current strategies for the safe use of AI.

In safety-critical environments, AI, especially ML and DL, offers great economic potential, but this potential is not yet being fully exploited. Despite numerous approaches to increasing the reliability of AI methods, especially ML components, the rapid pace of research poses a challenge. Manufacturers are expected to comply with the state of the art, yet traditional safety standards such as IEC 61508 hardly take AI into account and rest on assumptions made before AI, which means that AI components can only be assured indirectly.

Although many standards deal with AI, most of them are not sufficient for safety and refer back to traditional safety standards; ISO/IEC TR 24028 is one example. Exceptions with a clear focus on safety include ISO/IEC TR 5469:2024 and ISO/PAS 8800. Although these provide guidance on risk-reducing measures, their high level of abstraction makes them insufficient for selecting tailor-made measures. For more information, visit https://www.iese.fraunhofer.de/blog/ki-normen-und-standardisierung/ (in German).

The use of ML fundamentally changes the development paradigm: instead of traditional specifications, sample data serves as the basis from which a learning algorithm generates a model. The resulting model is often non-deterministic, and its quality depends heavily on the data, the algorithms, and the developers' expertise. Since many established verification methods are only applicable to a limited extent due to the lack of clear specifications and interpretability, this creates particular challenges.

Information and details about the seminar

Reference project: MInD, Fraunhofer IESE

This seminar provides an introductory overview of the state of the art in safety and artificial intelligence, including relevant standards and standardization initiatives. We will discuss the challenges that arise at the intersection of safety and AI in order to raise your awareness of the problems associated with using AI approaches in safety-critical solutions. You will also learn about possible strategies for safe AI solutions and an exemplary selection of approaches that will help you address specific challenges and derive customized safety concepts.

Event type and location

 

  • Online seminar via MS Teams

Also available as in-house training or as an online seminar at your company upon request.

Dates and duration

 

  • Further dates to follow
  • Individual appointments available on request

Completion

 

  • Certificate of attendance
  • Receipt of training materials

Language

 

  • German
  • Training material in English

Cost

 

  • 950.00 euros per person
  • Individual pricing for in-house training courses upon request

Equipment

 

  • Laptop with internet access, webcam, and microphone 

You will receive a Microsoft Teams dial-in link for the online seminar.

You may, for example, come from a safety-critical domain and be interested in applying AI components or processes. The seminar is aimed in particular at testing organizations, specialists and managers, project managers, quality managers, safety engineers, and data scientists.

Basic knowledge or initial experience in dealing with machine learning processes is an advantage. Detailed prior knowledge in the field of safety engineering is not required.

Motivation

 

  • Why Safe AI?

Current practice in the field of safety

 

  • Definitions, standards, methods

Current state of practice in the field of data-driven models and AI

 

  • Approaches, classification, development processes, testing

Challenges

 

  • Impact of the specifics of AI development

Strategies for safety despite AI

 

  • Standards
  • Assurance Cases

Best Practices

 

  • Examples of safety measures in the model lifecycle

The seminar was designed by the experts and specialists at Fraunhofer IESE and has already been successfully held several times. Seminar participants receive personal support from the Fraunhofer IESE team and gain direct access to expertise from research and practice.

Communication

Interactive lecture

 

  • Questions can be asked at any time
  • Regular feedback rounds
  • Exercises based on use cases

Expertise

Maximum practical relevance

 

  • Fraunhofer experts and specialists
  • Theory from research and project work
  • Practical expertise

Dr. Michael Kläs, Fraunhofer IESE

Since completing his studies in computer science, Dr. Michael Kläs has been working in applied research at Fraunhofer IESE, advising companies on software quality and data analysis. Over the past decade, he has been responsible for numerous industry and research projects, particularly the development of KPI systems, the evaluation of new technologies, and the development of predictive analytics. His dissertation focused on predicting software defects by incorporating expert knowledge. His current focus is on potential analyses for data-driven innovation as well as data quality and uncertainty analysis in big data and AI systems. The author of numerous specialist publications, he is also active as a university lecturer and as an expert in standardization (DIN/VDE).

Dr. Rasmus Adler, Fraunhofer IESE

Dr.-Ing. Rasmus Adler studied applied computer science and has been with Fraunhofer IESE since 2006. In his doctoral thesis, he developed fail-operational solutions for active safety systems such as ESP. As a project manager and safety expert, he then devoted himself to model-based safety engineering for autonomous systems, coordinating the development of solutions that assess, at runtime, the risk of planned or possible autonomous system behavior in the current situation and initiate risk-minimizing measures. In his current position as Program Manager for Autonomous Systems, he focuses in particular on the risk management of networked cyber-physical systems. To maximize both the benefits of individual systems and the overall benefits of system networks, he relies on cooperative risk management at runtime, based in part on artificial intelligence. Since current safety standards do not support this innovative risk management, he is involved in standardization committees and participates in the development of normative requirements for autonomous, networked cyber-physical systems.

Anna Maria Vollmer, Fraunhofer IESE

Anna Maria Vollmer holds a master's degree in computer science from the Technical University of Kaiserslautern and has been working at Fraunhofer IESE in the Data Science department since 2017. In her current roles as Senior Data Engineer and Manager for Business & Transfer, she combines technical expertise with strategic thinking. Through her responsibility for a large number of projects, she has developed a deep understanding of complex problems and effective solution strategies. In addition, she brings experience in quality assessment and works on data-driven innovations for industrial customers.

Training request

Would you like to book the one-day seminar (in-person or online) on a flexible date or take advantage of the in-house option?

Send us your request and receive suggested dates for your participation in the training. 

 

Send Email

Do you have any questions?

Do you have questions about safe solutions that incorporate AI?

 

Contact us!

We are happy to assist you and will take the time to help you!

 

Please schedule an appointment with us, either by email or phone.