Safe AI – Safe Solutions Containing AI
Artificial Intelligence (AI) methods such as Machine Learning (ML) and especially Deep Learning (DL) are widely used in non-critical applications such as language assistants. In safety-critical environments, there are also many use cases with huge economic potential, but this potential cannot yet be fully exploited. One prominent example is the use of ML-based object recognition for automated driving. There are many approaches aimed at making ML-based components more dependable, and intensive research keeps producing new ones. Manufacturers are expected to keep up with the state of the art and the state of the practice, but how can they do so if research publishes new results almost daily, and if these results may even represent conflicting viewpoints?

Safety standards are intended to describe the assured state of the practice, but traditional standards such as the basic safety standard IEC 61508 do not take AI developments into account. They assume that safety functions are realized without AI and that AI components are safeguarded by these traditional safety functions. Although many standards deal with AI, they either do not sufficiently address safety or merely refer to traditional safety standards, as does the technical report ISO/IEC TR 24028 “Overview of trustworthiness in artificial intelligence”. The technical report ISO/IEC AWI TR 5469 “Functional safety of AI-based systems”, currently under development, may be able to answer the question of what is generally accepted when it comes to the use of AI in safety-critical contexts. However, it will not be able to provide custom-tailored safety concepts. To achieve those, safety and AI experts must work together and mutually understand each other’s mindset and terminology.
The use of ML, in particular, marks a shift in the development paradigm: A classical system is typically specified, refined, implemented, and tested by engineers, who take safety requirements into account. If, on the other hand, ML is used for specific, usually complex, tasks, a set of example data serves as the detailed specification; from this data, a learning algorithm generates a model that, after being checked against further example data, implements a specific function in the final system. However, the way in which these models are developed is often not deterministic. In addition, the quality of such models depends, among other things, on the data used, the development process, the learning algorithms employed (the product), and the expertise of the developers (the humans). The situation is further complicated by the fact that many established verification and validation methods are only partially applicable, due to the lack of a true specification and the non-interpretability of ML-based solutions.
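The data-as-specification workflow described above can be sketched in a few lines of Python. Everything here is illustrative: the sensor readings, the safe/unsafe labels, and the one-parameter threshold “model” are assumptions made for this sketch, not part of any real safety function. The point is only to show the paradigm: example data plays the role of the specification, a learning step generates the model, a held-out portion of the data checks it, and a random split makes the outcome non-deterministic.

```python
import random

# Example data acts as the detailed specification: each pair is a
# (sensor reading, label). All values are purely illustrative.
examples = [(0.1, "safe"), (0.3, "safe"), (0.4, "safe"),
            (0.6, "unsafe"), (0.8, "unsafe"), (0.9, "unsafe")]

def train(data):
    """'Learn' a one-parameter threshold model from labeled examples."""
    safe = [x for x, y in data if y == "safe"]
    unsafe = [x for x, y in data if y == "unsafe"]
    # Place the decision boundary midway between the two classes.
    return (max(safe) + min(unsafe)) / 2

def predict(model, x):
    return "safe" if x < model else "unsafe"

# Split the example data: one part generates the model, the held-out
# part checks it before the model enters the final system. The random
# shuffle is the source of non-determinism: different runs can yield
# different models and different measured quality.
random.shuffle(examples)
train_set, test_set = examples[:4], examples[4:]
model = train(train_set)
accuracy = sum(predict(model, x) == y for x, y in test_set) / len(test_set)
print(f"threshold={model:.2f}, held-out accuracy={accuracy:.2f}")
```

Note that even this toy example exhibits the properties the text names: there is no explicit specification of the function beyond the labeled examples, and repeated runs may produce different thresholds and accuracy figures, which is exactly what complicates established verification and validation.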