Seminar: Safety Assurance for Artificial Intelligence

AI safety in safety-critical areas (image: © iStock.com/Henrik5000)

AI safety: using safety engineering to achieve safe AI (The seminar is held in German)

The seminar for becoming a Certified Data Scientist Specialized in Assuring Safety addresses the safety assurance of Artificial Intelligence (AI) in safety-critical areas.

We provide an overview of the state of the art for assuring AI safety. At the interface between safety and Artificial Intelligence, we address relevant standards and standardization initiatives. We create awareness of the challenges caused by using AI in safety-critical solutions by highlighting typical problems from this area of conflicting requirements.

In the seminar, participants learn about possible strategies for assuring the safety of Artificial Intelligence. Together, we will test a selection of approaches to address concrete challenges and derive customized safety concepts.

The training, which is conducted online, includes a large proportion of exercises and interaction in order to convey the content in a practice-oriented manner and to enable transfer to everyday professional life.

 

Book a date

Information and details about the seminar

Target Group

  • Reviewing and assessing organizations, specialists and managers, project managers, quality management staff
  • Experts such as safety engineers and data scientists

Basic knowledge of statistics and data analytics is expected. Initial experience with Machine Learning methods is an advantage. Prior knowledge in the field of safety engineering is not required.

 

Your Benefits

Safety Assurance for Artificial Intelligence at a glance:

  • You will get to know the hazard and innovation potential of AI applications in safety-critical environments
  • You will get an overview of the fundamentals of safety engineering
  • You will learn about the relevant AI fundamentals from the safety perspective
  • You will be able to assess the benefits and the binding nature of safety and AI standards
  • You will get to know a selection of possible strategies and measures for safe AI
  • You will be able to apply assurance cases, by way of example, as a possible argumentation basis for AI-related safety cases

The seminar Safety Assurance for Artificial Intelligence is held in German. Accompanying documents are available in English.

 

Day 1: Motivation and Fundamentals

  • Motivation for AI in safety-critical systems
  • Fundamentals of safety engineering
  • AI basics with a focus on safety

Day 2: Standards and Measures for Safe AI

  • Relevant standards and norms
  • Safety measures for AI
  • Measures during specification
  • Measures during construction

Day 3: Measures for Safe AI (cont.)

  • Measures during testing
  • Measures during analysis
  • Measures in the data lifecycle

Day 4: Safety Argumentation

  • Fundamentals of assurance cases
  • Assurance cases for AI components
  • Holistic consideration of the core contents, opportunity for final questions

Day 5: Certification exam in the morning

Dr. Rasmus Adler, Fraunhofer IESE

Dr.-Ing. Rasmus Adler studied Applied Computer Science and has been employed at Fraunhofer IESE since 2006. In his PhD, he developed fail-operational solutions for active safety systems such as ESP. After that, he dedicated himself to model-based engineering of autonomous systems as a project manager and safety expert. He coordinated the development of solutions to measure at runtime the risk of planned/possible autonomous system behavior with respect to the current situation and to initiate risk-minimizing actions. In his current position as Program Manager Autonomous Systems, he is particularly dedicated to the risk management of networked cyber-physical systems. To maximize the benefit of individual systems, but also the overall benefit of systems of systems, he relies on cooperative risk management at runtime that is partly based on Artificial Intelligence. Since current safety standards do not support this innovative risk management, he is involved in standardization committees and participates in the development of normative requirements for autonomous networked cyber-physical systems.

Dr. Michael Kläs, Fraunhofer IESE

Dr. Michael Kläs has been working in applied research and advising companies in the areas of software quality and data analytics ever since completing his studies in Computer Science. In the past decade, he has been responsible in numerous industry and research projects for the establishment of KPI systems, the evaluation of new technologies, and the development of predictive analyses. In his dissertation, he addressed the prediction of software faults using expert knowledge. Currently, his focus is on the area of potential analysis for data-driven innovation and on the analysis of data quality and uncertainty in Big Data and AI systems. As the author of numerous professional publications, he is also actively engaged as a university lecturer and as an expert in standardization (DIN/VDE).

Janek Groß, Fraunhofer IESE

Janek Groß studied mathematics and psychology in Eichstätt, Germany, earning a Bachelor of Science degree and the first state examination for secondary school teachers. He completed his Master’s degree in Robotics, Cognition, Intelligence at the Faculty of Computer Science at the Technical University of Munich. During his studies, he acquired extensive knowledge in mathematical statistics and empirical sciences. He also gained relevant experience in the development of large neural networks and in the use of mainframe computers. In basic research, his main interests are in the areas of time series analysis and information theory.

Since the beginning of 2021, he has been working in the “Data Science” department of Fraunhofer IESE, in close collaboration with the “Safety Engineering” department. His tasks include the empirical validation and formal assurance of data-driven AI models used in autonomous vehicles and robots. 

Dr. Adam Trendowicz, Fraunhofer IESE

Dr. Adam Trendowicz is a senior engineer in the “Data Science” department at the Fraunhofer Institute for Experimental Software Engineering IESE in Kaiserslautern, Germany. After receiving his PhD in the area of software project effort and risk estimation models from the University of Kaiserslautern (Germany), he has continued to work in data science and data-driven business innovation.

Dr. Trendowicz has more than 20 years of experience in the analysis of software projects and products in various industries. He has led various activities in the areas of software measurement, prediction, and improvement in software companies of different sizes, in different domains, and in different countries (including Germany, Japan, and India). In this context, he has developed and empirically validated prediction models for software cost and software quality.

In his current work, Dr. Trendowicz focuses on data quality and preparation in the context of Machine Learning and on lean deployment of data-driven innovations based on Machine Learning and Artificial Intelligence solutions.

Dr. Trendowicz has co-created the “Data Scientist” continuing education and certification program offered by the Fraunhofer Big Data and Artificial Intelligence Alliance. Furthermore, he has held several tutorials on business-IT alignment, data preparation and analysis, software quality measurement, and cost estimation. Finally, he has co-authored several books and numerous international journal and conference publications.

Lisa Jöckel, Fraunhofer IESE

Lisa Jöckel has been working in the area of safety assurance for AI-based systems at Fraunhofer IESE as a Senior Data Scientist since 2018. One of her focal areas is the evaluation of uncertainties in the decisions of data-driven models. In addition, she deals with the testing of such models and the quality of the test data. She studied Computer Science at the Technical University of Kaiserslautern with a focus on data visualization and computer graphics.

Pascal Gerber, Fraunhofer IESE

Pascal Gerber studied Computer Science at the Technical University of Kaiserslautern, Germany. In his theses as well as during his work as a student research assistant at Fraunhofer IESE, he focused on topics such as reinforcement learning and quality influence models for evaluating uncertainties in the decisions of data-driven models.

After graduating, he started working in the “Safety Engineering” department of Fraunhofer IESE in 2021 and acquired fundamental competencies in safety engineering. Since 2023, he has been working in the “Data Science” department and is currently focusing on quality influence models.

Registration

Please fill out the registration form below. The maximum number of participants is 15. Registrations will be considered in the order they are received.

If the seminar is already fully booked on your preferred date, you can be placed on the waiting list. The waiting list is non-binding and free of charge for you. As soon as a place becomes available, we will fill it as quickly as possible and will then contact you.

Seminar Safety Assurance for Artificial Intelligence (The seminar is held in German)

The participation fee is 4,350 euros, including meals on site. Face-to-face events take place in Kaiserslautern.


Data protection
  • We do, of course, treat your data confidentially and do not pass it on to third parties. You can object to the processing of your data at any time.

  • The participation fee is tax-free according to Sec 4 No. 22a of the German Value Added Tax Act (UStG).

    It includes accompanying documents, the examination fee, and catering during on-site events.

    Following the training, our accounting department will send an official invoice to the address provided by you. 

  • The certificate for the seminar Safety Assurance for Artificial Intelligence is issued by the Fraunhofer Personnel Certification Authority.

    Admission requires a university degree or an equivalent qualification, demonstrated by an individual attestation.