Seminar Safe AI: Safe Solutions Containing AI

Next dates

Now also as an online seminar!

  • 23 February 2021, 9 am – 5 pm
    (fee charged)

Further dates for 2021 are being planned.

Safe AI – Safe Solutions Containing AI

Artificial Intelligence (AI) methods such as Machine Learning (ML), and especially Deep Learning (DL), are widely used in non-critical applications such as voice assistants. In safety-critical environments, there are also many use cases with huge economic potential, but this potential cannot be fully exploited at present. One prominent example is the use of ML-based object recognition for automated driving. There are many approaches aimed at making ML-based components more dependable, and intensive research keeps producing new ones. Manufacturers are expected to keep up with the state of the art and the state of the practice, but how can they do so if research publishes new results almost daily, and if these results may even represent conflicting viewpoints?

Safety standards are intended to describe the assured state of the practice, but traditional standards such as the basic safety standard IEC 61508 do not take AI developments into account. They assume that safety functions are realized without AI and that AI components are assured through these traditional safety functions. Although many standards deal with AI, they are either not sufficient with regard to safety or they refer back to traditional safety standards; one example is the technical report ISO/IEC TR 24028 “Overview of trustworthiness in artificial intelligence”. The technical report ISO/IEC AWI TR 5469 “Functional safety of AI-based systems”, currently under development, may be able to answer the question of what is generally accepted when it comes to the use of AI in safety-critical contexts. However, it will not be able to provide custom-tailored safety concepts. For this, safety and AI experts must work together and mutually understand each other’s mindset and terminology.
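The traditional assumption described above — a conventional, fully specified safety function assuring an AI component — is often realized as a supervisor or monitor architecture. The following is a minimal, hypothetical sketch of this idea; all function names, numbers, and rules are illustrative assumptions, not the approach taught in the seminar:

```python
# Illustrative sketch of a safety supervisor (runtime monitor) pattern:
# a simple, verifiable rule-based safety function gates the output of an
# (untrusted) ML component. All values and rules here are hypothetical.

def ml_speed_recommendation(distance_m: float) -> float:
    """Stand-in for a complex ML component (not actually learned here)."""
    return 10.0 + 0.5 * distance_m  # recommended speed in m/s

def safety_supervisor(recommended_speed: float, distance_m: float) -> float:
    """Traditional, fully specified safety function: enforce a safe envelope."""
    max_safe_speed = max(0.0, 0.4 * distance_m)  # simplistic braking-distance rule
    return min(recommended_speed, max_safe_speed)

# The supervisor overrides the ML output whenever it would leave the envelope.
for distance in (50.0, 5.0):
    raw = ml_speed_recommendation(distance)
    safe = safety_supervisor(raw, distance)
    print(f"distance={distance} m: ML suggests {raw} m/s, supervisor allows {safe} m/s")
```

The point of the pattern is that the supervisor is simple enough to be verified with traditional safety-engineering methods, so the overall safety argument does not rest on the ML component alone.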

The use of ML, in particular, signifies a shift in the development paradigm: In the development of a classical system, the system is typically specified, refined, implemented, and tested by engineers, who take safety requirements into account. If, on the other hand, ML is used for specific, usually complex, tasks, a set of example data serves as a detailed specification; on this basis, a learning algorithm generates a model that is used in the final system (after being checked on further example data) to implement a specific function. However, the way in which these models are developed is often not deterministic. In addition, the quality of such models depends, among other things, on the data used, the process, the learning algorithms used (product), and the expertise of the developers (people). The situation is further complicated by the fact that many established verification and validation methods are only partially applicable, due to the lack of a real specification and the non-interpretability of ML-based solutions.
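A minimal sketch may help illustrate this paradigm shift: example data acts as the implicit specification, a learning algorithm derives a model, and held-out example data is used to check the model before it is deployed. All data and names below are hypothetical, and the "learning algorithm" is deliberately trivial:

```python
# Minimal, hypothetical sketch of the data-driven paradigm (stdlib only).

# Labeled example data: (sensor_value, is_obstacle) -- the implicit specification.
train_data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
test_data = [(0.2, 0), (0.35, 0), (0.75, 1), (0.85, 1)]  # held-out check data

def learn_threshold(data):
    """'Learning algorithm': place the decision threshold midway between class means."""
    neg = [x for x, y in data if y == 0]
    pos = [x for x, y in data if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def predict(model, x):
    """The learned model implements the function: classify a sensor value."""
    return 1 if x >= model else 0

model = learn_threshold(train_data)  # the "trained model"
accuracy = sum(predict(model, x) == y for x, y in test_data) / len(test_data)
print(f"threshold={model:.2f}, held-out accuracy={accuracy:.0%}")
```

Note that no engineer ever wrote down the decision rule itself; it was derived from the data — which is precisely why classical specification-based verification methods struggle with such components.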

Goal of the Seminar

In this seminar, you will get an introductory overview of the state of the practice in the areas of safety and Artificial Intelligence, including relevant standards and standardization initiatives. We will discuss the challenges arising at this intersection in order to raise your awareness of the problems involved in using AI approaches in safety-critical solutions. In addition, you will learn about possible strategies for safe AI solutions and get to know a selection of example approaches that can help you address concrete challenges and derive customized safety concepts.

Content of the Seminar

  • Motivation – Why Safe AI?
  • State of the practice in the area of Safety
    • Definitions, standards, methods, …
  • State of the practice in the area of data-driven models and AI
    • Approaches, categorization, development processes, testing
  • Challenges
    • Effects of the specifics of AI development
  • Strategies for safety despite AI
    • Standards
    • Assurance Cases
  • Best Practices
    • Overview
    • Safety Supervisor
    • Dynamic Risk Management
    • Uncertainty Wrapper

If you are interested, we can also offer individual seminars tailored to your industry or your company, held in a closed group with participants from your organization.

Registration for the fee-based online seminar “Safe AI – Safe Solutions Containing AI”

Date: 23 February 2021, 9 am – 5 pm 

Participation fee: 950.00 euros

* Required fields

Salutation*
Academic Title
First Name
Last Name*
Company*
Street*
ZIP*
City*
Email*
Country
Consent for the storage and usage of your data
By entering your data, you consent to the storage and use of your data in accordance with the German Federal Data Protection Act. This declaration of consent is voluntary and can be revoked at any time.
Privacy policy* I consent to the collection, processing, transmission, and use of my personal data by the Fraunhofer Institute for Experimental Software Engineering for the purpose of customer management.