SAFE AI – Use of Safe Artificial Intelligence

Towards trustworthy and reliable AI systems with holistic engineering

Safety assurance for Artificial Intelligence

Artificial Intelligence (AI) is already present in many systems today: from voice assistants to intelligent algorithms that evaluate our online shopping or social media behavior. In the future, we will encounter AI systems much more frequently, especially in critical fields of application such as autonomous driving, production automation/Industry 4.0, or medical technology. This is where SAFE AI – that is, controlling the risk of unacceptable failures and functional insufficiencies of AI systems – plays an important role.

 

AI safety in critical applications

 

We advise and support companies in engineering dependable AI systems and accompany them throughout the entire lifecycle: from AI strategy through AI development to AI assurance, including AI validation, AI auditing, and compliance with legal and normative requirements.

Definition and benefit of Safe AI systems

Dependability of a system describes its ability to avoid unacceptable failures in the provision of a service or functionality (Jean-Claude Laprie). Dependability includes, for example, the availability or reliability of a service, but also, in particular, functional safety in order to avoid catastrophic consequences for users and the environment (i.e., serious or even fatal accidents as a result of failures). Integrity, security, and maintainability are further quality properties discussed in the context of dependability.

Accordingly, the engineering of safe AI systems involves using systems and software engineering principles to systematically guarantee safety during the construction, verification/validation, and operation of the AI system and, in particular, to also take into account legal and normative requirements right from the start.

To this end, it is important to understand that AI systems are developed in a fundamentally different way than traditional software-based systems. Often, the requirements on the AI system cannot be described completely, and the system must function dependably in an almost infinite application space. This is where established methods and techniques of classical software engineering reach their limits and new, innovative approaches are required. In an AI component, the functionality is not programmed in the traditional way, but created by applying algorithms to data. This results in a model that is merely executed by the software at runtime. One such approach is, for example, Machine Learning (ML) or, more specifically, Deep Learning (DL). The resulting model (e.g., an artificial neural network) is generally not comprehensible to humans due to its structure and inherent complexity, and thus the decisions made by an AI system are often not understandable.
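As a minimal illustration of this difference – purely a sketch with synthetic data, not tied to any specific Fraunhofer tooling – the following Python example trains a small neural network with scikit-learn: the decision logic is induced from data during construction, and at runtime the software merely executes the resulting model.

```python
# Sketch: the "functionality" of an ML component is not programmed explicitly,
# but induced from (here: synthetic) data; the software later only executes it.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for recorded, labeled observations.
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# "Construction": applying a learning algorithm to data yields the model.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=42)
model.fit(X_train, y_train)

# "Operation": at runtime, the software merely executes the learned model;
# no developer ever wrote down its decision logic explicitly.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```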

Typical application areas are systems where the dependability of the AI system – i.e., managing the risk of failures or functional insufficiencies – is a concern.

Smart Mobility

Highly automated and autonomous vehicles will completely change the areas of mobility and logistics (transport of goods) and give rise to new digital services. But the risks of such a vehicle must be acceptable for users and for people interacting with the vehicle (such as passers-by). What exactly constitutes an acceptable risk is currently still the subject of debate, including by ethics committees. Basic elements that are already widely accepted are a positive risk balance and the ALARP principle (As Low As Reasonably Practicable). A positive risk balance ensures that the risks for life and limb are lower than if a human were to operate the system manually. With ALARP, it must be demonstrated that everything that can reasonably be done to minimize risks – depending on feasibility and effort – is actually being done.
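To make these two criteria concrete, here is a toy sketch of how they might be checked. All figures and the gross-disproportion factor are invented for illustration; real values would come from accident statistics and system-level risk analyses.

```python
# Toy illustration of the two acceptance criteria; all numbers are invented.
human_harm_rate = 1.2e-7   # assumed harm events per km with a human driver
system_harm_rate = 0.8e-7  # assumed harm events per km with the automated system

# Positive risk balance: the automated system must pose less risk than a human.
assert system_harm_rate < human_harm_rate, "No positive risk balance"

def measure_required_by_alarp(measure_cost: float, risk_reduction_value: float,
                              disproportion_factor: float = 10.0) -> bool:
    """ALARP-style test: a risk-reduction measure is 'reasonably practicable'
    (and thus required) unless its cost is grossly disproportionate to the
    risk reduction it achieves."""
    return measure_cost <= disproportion_factor * risk_reduction_value

print(measure_required_by_alarp(measure_cost=50_000,
                                risk_reduction_value=20_000))  # -> True
```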

Industry 4.0

AI is one of the pillars of the fourth industrial revolution to realize smart factories that allow individualized products to be created in a highly flexible manner. In addition to scenarios such as Production-as-a-Service or Predictive Maintenance, new smart machines are also being used. Driverless transport systems plan their routes independently and adapt them to demand. Cobots support factory workers and can be trained efficiently for a wide variety of tasks. Nowadays, mechanisms such as light barriers and safety cages are used that completely deactivate the system should a human enter the restricted zone. However, this does not allow the desired level of close interaction with humans to be achieved. In the future, smarter protective functions will also be needed when using such systems and interacting with them so that humans are not exposed to any increased risk.

Digital Health

Physicians are already supported today by AI systems in the prevention, diagnosis, and treatment of diseases. Here, AI is often responsible for advanced image and data analysis (e.g., in the interpretation of a CT image). It enables analyses in terms of quality and quantity that cannot be achieved by conventional means. Even in surgery, the first robotic systems are already in use today, but they are controlled by humans. However, the more medical professionals rely on such systems and the higher the degree of autonomy of an AI system in the future, the more important it becomes to control the risk of failures and functional insufficiencies of the AI.

Trustworthiness is a prerequisite for people and societies to develop, deploy, and use AI systems (European Commission’s Ethics Guidelines for Trustworthy AI). Similar to aviation or nuclear energy, it is not only the components of the AI system that need to be trustworthy, but the socio-technical system in its overall context. A holistic and systematic approach is required to ensure trustworthiness. According to the guidelines of the European Commission, trustworthy AI consists of three aspects that should be fulfilled throughout the entire lifecycle:

  1. Compliance with all applicable laws and regulations
  2. Adherence to ethical principles and values
  3. Robustness both from a technical and social perspective

Safety assurance of the AI system refers, in particular, to the aspect of robustness of the AI system from the technical perspective in compliance with applicable laws and standards.


Opportunities for companies arising from the use of Safe AI


The application fields of AI in general are diverse: AI plays a crucial role in the future scenarios of almost all industries – whether Smart Mobility, Smart Production/Industry 4.0, Digital Health, Smart Farming, Smart City, or Smart Region. Most scenarios are based on intelligent systems trained on data, whose functionality cannot be constructed by classical programming due to its complexity. Methods such as Machine Learning are used in an attempt to solve a highly complex problem by training models on a huge amount of data.

AI approaches can be used to either optimize existing processes (such as more efficient maintenance of machines or sorting of workpieces) or to create new user-oriented products and services (such as smart mobility services based on autonomous vehicles).

The potential of AI for the economy has been examined in a wide variety of studies. These studies predict strong growth for the next few years, both for Germany and worldwide. Much of the growth is expected from completely new and more user-oriented products. The potential is huge, but so is the competition in the global marketplace. Countries like the U.S. and China are investing on a large scale, and their structures and mentality make them a favorable environment for start-ups and the rapid establishment of new innovative products and services. In order to stand up to the competition, Europe and Germany must look to their own strengths: engineered quality and dependability!

For many of the future scenarios mentioned above, SAFE AI is crucial for success. This applies, for example, to everything related to highly automated or autonomous systems and cognitive systems in safety-critical application areas. Without dependability, there is no trust and trustworthiness; without trust and trustworthiness, there is no acceptance and no success. After all, hardly anyone would use autonomous vehicles or work with adaptive robots in a working environment if these were not fundamentally safe in terms of how they function in their intended environment (such as on the road or in a factory).


Challenges of Safe AI systems

Potentials and Risks

AI and Machine Learning are more than a purely technical challenge, and their adoption should not start with buying tools and infrastructure. Companies often face the challenge of finding a use case that is beneficial for them or of assessing to what extent AI can really be the solution for a given use case. Furthermore, it is a challenge for companies to assess how AI could impact their business.

Assurance of Dependability

A core element in the construction of dependable AI systems is to provide assurance that the system as a whole is dependable, i.e., that the risk of failures and functional insufficiencies is reduced to an acceptable level. One challenge is to make the argument clear and find appropriate evidence that demonstrates the dependability of the AI system.

Safety Architectures

Even if an AI component fails, the system as a whole must be able to deal with this failure and minimize any risk of damage. The design of such safety architectures, which are crucial for the system as a whole to function dependably, is often challenging in practice.

Blackbox Components

Today, large systems are rarely built by a single manufacturer, but rather integrate parts from different suppliers. In this context, software components are often delivered as a black box; i.e., there is no insight into the concrete implementation and only the behavior of the component can be observed. In the case of AI components, it is therefore important that assurances and information on the dependability of the component are specified unambiguously and are delivered together with the component.

Runtime Guarantees

To keep the risk of failures and functional insufficiencies of an AI system within an acceptable level, it may be necessary to include special mechanisms that dynamically monitor the risk at runtime and intervene in the system behavior if necessary. In practice, it is a challenge to find the balance between efficiency / effectiveness of the AI system and risk minimization at runtime.

Uncertainty Management

Every data-based model has certain uncertainties in its application. In addition to the AI model itself, these can also stem from the input data and the usage context. For example, a model may not have been trained for certain input data, or it may be used outside its intended context of use. One challenge is to quantify these uncertainties in order to manage the risk they pose. 

Data Availability

There is also often a lack of high-quality data and/or knowledge about how to collect such data. This poses a major problem for building models with Machine Learning. Even the best Machine Learning approach cannot produce a model with first-rate performance from poor-quality data.

Data Science Competencies

The demand for data scientists, i.e., persons with skills in dealing with data and, in particular, Big Data as well as data analytics and modeling, is high. For companies, it is often a challenge to recruit data scientists or train them within the company by means of continuing education and training. This is especially true for data scientists who are familiar with the construction of dependable AI systems.

AI Standards

For Safe AI, i.e., the safety assurance of AI systems, legal requirements, standards, best practices, and established methods are still missing. Some guidelines have only just been published (e.g., the EU’s Ethics Guidelines for Trustworthy AI), and existing standards must still be adapted with respect to AI (such as the new version of ISO 26262 or the extension of the ISO/IEC 25000 series). Therefore, it is unclear for many companies which AI systems are already possible today and what future specifications for dependable AI systems will look like.

Reference Process

In applied AI research, there are many parallel developments that need to be followed. Individual, local solutions exist there for a number of problems, but a comprehensive engineering process suitable for industrial use is currently not yet available. Companies therefore do not have a reference process for orientation and have to painstakingly put together their own blueprint for the development of their AI system.

Development of safety-assured AI systems

The safety assurance of a dependable AI system is carried out using various procedures during construction, verification/validation (V&V), and operation depending on the criticality of the functionality.

Figure 1: Overview of the process areas in the development of a Safe AI system (W-model). © Fraunhofer IESE

Figure 1 shows our view of the fundamental process areas in a W-model. Here, we distinguish different activities in the lifecycle of an AI system (x-axis) and different stakeholders involved in the various phases: from the management level to the operative/technical level. The presentation of the activities and levels is based on VDE-AR-E 2842-61, which defines a reference model for trustworthy AI (developed by DKE, the organization responsible for the development of standards in VDE and DIN). Basically, the process is oriented towards a holistic systems engineering approach. Depending on the application area, it is common that individual system components delivered as a black box by suppliers must be integrated into the overall system.

Competence Development

The development process is typically preceded by a ramp-up phase in which the competencies required in the company are identified and developed. For the management level, it is important in this context to understand both the concrete possibilities offered by AI systems for their own business and the stumbling blocks encountered in implementation. For the operative/technical level, on the other hand, knowledge about concrete techniques/methods/tools for the implementation plays an important role.

 

Construction and V&V

The overall process starts with the development of the business strategy and the elaboration of the concrete potentials/added values expected from an AI system. From this, the technical requirements on the system level and a concrete solution concept are derived. Depending on the criticality of the system, further analyses are carried out on the system level (e.g., regarding functional safety) in order to take the regulatory and legal requirements into account during development. Following conventional systems engineering approaches (such as ISO/IEC 15288 or ISO/IEC 12207), the system is decomposed into subsystems, which can consist of software and hardware. The software subsystems, in turn, consist of classical software components and components that contain AI and realize a nominal or safety-related function of the system.

Today, software components are developed in an agile development process (such as Scrum), which takes place in parallel to hardware development. AI components are usually also developed in a highly iterative process, but this is more oriented to common methods from the field of data science (such as CRISP-DM). This means that the model is improved incrementally/iteratively until it meets the specified quality criteria. In this context, we speak of the evaluation or hardening of the AI model. The individual components of the system are then integrated into the overall system and verified or validated as a whole before the system can be released to the outside world after satisfying defined acceptance and release criteria (e.g., according to required testing and auditing processes). In practice, this overall process for construction and verification/validation is also iterated several times, depending on the size and complexity of the system and the development project.
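To illustrate this incremental hardening, the following sketch iterates over model variants until explicitly specified quality criteria are met; the metrics, thresholds, and the single "improvement" dimension are assumptions for illustration, not prescriptions from any standard.

```python
# Sketch of an iterative hardening loop with an explicit release gate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

QUALITY_CRITERIA = {"accuracy": 0.90, "recall": 0.90}  # assumed release gate

for n_trees in (10, 50, 100, 200):  # one simple improvement dimension
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    model.fit(X_train, y_train)
    pred = model.predict(X_val)
    metrics = {"accuracy": accuracy_score(y_val, pred),
               "recall": recall_score(y_val, pred)}
    if all(metrics[name] >= bar for name, bar in QUALITY_CRITERIA.items()):
        print(f"Release gate passed with {n_trees} trees: {metrics}")
        break
else:
    print("Quality criteria not met; further hardening iterations needed.")
```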

 

Operation

The lifecycle of an AI system does not end with the deployment of the system. In the operation phase, the aim is to monitor the performance of the system and learn from the real behavior how to improve the AI system and/or the AI model. In the case of AI systems that continue to learn at runtime, i.e., that change their behavior on the basis of experience gained, further dynamic risk management procedures are also necessary.
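A minimal sketch of such operation-phase monitoring could be a statistical drift test that compares the live input distribution against the training distribution; the test, feature, and alarm threshold below are assumptions for illustration.

```python
# Sketch: flag potential input drift in operation via a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference data
live_feature = rng.normal(loc=0.4, scale=1.0, size=500)       # shifted in operation

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:  # assumed alarm threshold
    print(f"Possible data drift (KS p={p_value:.4f}): "
          "review and possibly retrain the AI model.")
```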


Implementation of Safe AI

Figure 2: Classification of solution components in the development of a Safe AI system. © Fraunhofer IESE

As a general rule, we advise and support companies in engineering dependable AI systems and accompany them throughout the entire lifecycle: from AI strategy through AI development to AI assurance, including AI validation, AI auditing, and compliance with legal and normative requirements. We are multidisciplinary in our approach, i.e., we have competencies in the area of software and systems engineering as well as in data science and innovation strategy.

In our research in the context of Safe AI, we have developed concrete solution components for key challenges, which we are transferring into practice. Figure 2 shows a classification of solution components in the depicted process areas of the W-model.

AI Potential Analysis

The added value of AI for a company is worked out within the framework of potential analyses. Using established templates (such as the Value Proposition Canvas or the Business Model Canvas), the impact of AI on the business model is elaborated. We compile relevant norms, standards, and quality guidelines for the application context and show which AI systems are already possible today and which requirements will have to be met in the future depending on the criticality. During this process, we also identify the gap between the company’s current setup and its future direction and develop a strategy. In the context of prototypes, promising application scenarios are evaluated in terms of profitability, demand, and technical feasibility (e.g., with regard to the availability of data and the quality of the prediction).

 

Multi-Aspect Safety Engineering

Especially for the development of highly automated and autonomous systems, not only functional safety (ISO 26262) plays an important role, but also the safety of the intended functionality (SOTIF, ISO/PAS 21448) and the systematic development of a safe nominal behavior. In a model-based approach, we use state-of-the-art safety engineering methods and techniques to model the system holistically, to analyze it, and to manage the identified risks. Assurance cases or safety cases are used to argue that the system is dependable; within these cases, evidence supports the argument that the system behaves in such a way that the risk acceptance threshold is not exceeded.

 

Safety Supervisor Architecture

Another central element for the dependability of a system is the system architecture. Here, architectures exist, for example, that place a kind of safety container around the AI. These Safety Supervisor or Simplex architectures override the AI’s decision if it would lead to dangerous behavior of the AI system. For this purpose, it is important to know, for example, the uncertainty of the AI recommendation under certain conditions of use and to determine the risk of an incorrect decision based on this. Such approaches are interesting for Machine Learning components in general and particularly important if we are talking about AI systems that continue to learn over their lifetime and can therefore adapt their behavior. Since it is potentially impossible to know at the design time of an AI system which data the AI will use as a basis for further learning, appropriate guardrails for the behavior must be defined and incorporated into the system.
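The following sketch shows the Simplex idea in its simplest form: if the estimated uncertainty of the AI proposal exceeds an acceptance threshold, a simple, conventionally assured fallback takes over. All names, the threshold, and the fallback behavior are assumptions for illustration, not a concrete Fraunhofer architecture.

```python
# Sketch of a Safety Supervisor (Simplex) pattern around an AI component.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    uncertainty: float  # e.g., supplied by an uncertainty wrapper, in [0, 1]

def ai_controller(sensor_input) -> Proposal:
    # Placeholder for the high-performance but hard-to-verify ML component.
    return Proposal(action="overtake", uncertainty=0.35)

def baseline_controller(sensor_input) -> str:
    # Simple, conventionally assured fallback behavior.
    return "keep_lane_reduce_speed"

RISK_THRESHOLD = 0.2  # assumed acceptance threshold for the context of use

def supervised_step(sensor_input) -> str:
    proposal = ai_controller(sensor_input)
    if proposal.uncertainty > RISK_THRESHOLD:
        # Guardrail: override the AI and steer the system to a lower-risk state.
        return baseline_controller(sensor_input)
    return proposal.action

print(supervised_step(sensor_input=None))  # -> "keep_lane_reduce_speed"
```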

 

Digital Dependability Identities (DDI)

A Digital Dependability Identity (DDI) is an instrument for centrally documenting all specifications that are relevant for system dependability in a uniform and exchangeable manner and for delivering this specification together with a system or system part. This is especially important if system parts from different manufacturers/suppliers along the supply chain have to be integrated, and/or if dynamic integration of different system parts takes place at runtime (as in collaborative driving). The backbone of a DDI is an assurance case that structures and links all specifications to argue under which conditions the system will operate dependably. The nature of the evidence in the argumentation chain varies widely. It may concern the safety architecture or the management of uncertainties of the AI component, for example the assurance that data is used that sufficiently covers the intended context of use.
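As a simplified, assumed representation (real DDIs follow a standardized metamodel, not this ad-hoc structure), the backbone of a DDI – an assurance case linking claims to evidence – could be exchanged in a form like this:

```python
# Simplified, assumed structure of an assurance case as the backbone of a DDI.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str
    artifact_uri: str  # e.g., a test report or a data-coverage analysis

@dataclass
class Claim:
    statement: str
    context: str                              # conditions under which it holds
    evidence: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)

ddi_root = Claim(
    statement="The perception component operates dependably",
    context="Urban roads, daylight, speeds below 60 km/h",
    subclaims=[
        Claim(
            statement="Training data sufficiently covers the intended context",
            context="Operational design domain as specified",
            evidence=[Evidence("Data coverage analysis", "reports/coverage.pdf")],
        )
    ],
)
print(ddi_root.subclaims[0].evidence[0].description)
```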

 

Dynamic Risk Management (DRM)

DRM is an approach for the continuous monitoring of the risk of a system and its planned behaviors as well as of the safety-related capabilities. DRM monitors the risk, optimizes the performance, and ensures safety by means of continuous management informed by runtime models. One ingredient of DRM of AI-based systems can also be the output of an Uncertainty Wrapper.
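The following toy sketch illustrates the underlying idea of trading performance against risk at runtime: prefer the most useful behavior whose estimated situational risk remains acceptable. The risk model and all parameters are invented for illustration.

```python
# Toy dynamic risk management: pick the fastest behavior with acceptable risk.
def situational_risk(distance_m: float, speed_mps: float,
                     perception_uncertainty: float) -> float:
    """Invented risk score combining a physical model with AI uncertainty."""
    time_to_contact = distance_m / max(speed_mps, 0.1)
    base_risk = 1.0 / max(time_to_contact, 0.1)
    return min(1.0, base_risk * (1.0 + perception_uncertainty))

def select_speed(distance_m: float, uncertainty: float,
                 candidate_speeds=(15.0, 10.0, 5.0, 2.0),
                 acceptable_risk=0.5) -> float:
    for speed in candidate_speeds:  # fastest (most useful) behavior first
        if situational_risk(distance_m, speed, uncertainty) <= acceptable_risk:
            return speed
    return 0.0  # no candidate behavior is acceptable: stop

print(select_speed(distance_m=20.0, uncertainty=0.3))  # -> 5.0
```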

 

Uncertainty Management

Uncertainty is an inherent problem in data-based solutions and cannot be ignored. The Uncertainty Wrapper is a holistic, model-agnostic approach for providing situationally reliable estimates of the uncertainty of predictions made by AI components. These uncertainties may stem not only from the AI model itself, but also from the input data and the context of use. The Uncertainty Wrapper can be used, on the one hand, during the development of the AI model to manage the identified uncertainties by taking appropriate measures and making the model safer. On the other hand, at runtime of the AI system, the wrapper enables a holistic view of the current uncertainty of the AI component. This assessment can in turn be used by a safety container to override the AI and bring the system to a lower-risk state.
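In the spirit of this idea – but emphatically not the actual Uncertainty Wrapper implementation, whose factors and their combination are determined systematically – a highly simplified sketch could combine the three uncertainty sources into one situational estimate:

```python
# Highly simplified sketch: combine model-, input-, and scope-related
# uncertainty into one situational estimate (weights invented).
def wrapped_uncertainty(model_confidence: float,
                        input_quality: float,
                        in_intended_context: bool) -> float:
    """Return a situational uncertainty estimate in [0, 1]."""
    uncertainty = 1.0 - model_confidence        # model-related uncertainty
    uncertainty += 0.5 * (1.0 - input_quality)  # e.g., sensor noise, blur
    if not in_intended_context:                 # scope compliance violated
        uncertainty += 0.5
    return min(1.0, uncertainty)

# The same model output can carry very different situational uncertainty:
print(round(wrapped_uncertainty(0.9, input_quality=1.0,
                                in_intended_context=True), 2))   # -> 0.1
print(round(wrapped_uncertainty(0.9, input_quality=0.4,
                                in_intended_context=False), 2))  # -> 0.9
```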

 

Training

Last but not least, we are also active in providing qualification for the corresponding competencies within the company: from Enlightening Talks for management, through continuing education and training programs on becoming a Data Scientist, to special training for dependable AI systems (e.g., our Safe AI seminar).



Why should your company collaborate with us on Safe AI?

Artificial Intelligence (AI) and Machine Learning (ML) have great potential to improve existing products and services – or even to enable completely new products and services. However, assuring and verifying essential quality properties of AI systems, and in particular properties relating to dependability, is a major challenge. Those who find and implement adequate solutions in this regard have a clear competitive edge! We are convinced: With our methods and solution components, we can actively and effectively support you on this path!

 

End-to-end: We accompany companies throughout the entire lifecycle: from AI strategy through AI development to AI assurance, including AI validation, AI auditing, and the fulfillment of legal and normative requirements.

Multidisciplinary: We are multidisciplinary in our approach, i.e., we have competencies in software and systems engineering as well as in data science and innovation strategy. Accordingly, we offer a one-stop service where all relevant competencies regarding Safe AI are available.

Innovative: As a Fraunhofer Institute, we are represented in innovative research projects for Safe AI at the German and EU level, so we know the current state of the art and state of the practice regarding processes, technologies, and tools for Safe AI.

Informed: We are involved in roadmap and standardization activities on Safe AI through associations and standardization bodies, so we have access to current discussions and are aware of future changes of the legal and normative requirements.

One-stop provider: As a member of the Fraunhofer Big Data and Artificial Intelligence (AI) Alliance, we have access to extensive resources (data scientists, software packages, and computing infrastructure). This allows us to act as a one-stop provider and take on even highly complex projects in the context of Safe AI.

We will be happy to support you in the design and implementation of your Safe AI systems!

 

Contact us!

We will be happy to support you and make time for you!

Schedule an appointment with us by email or by phone.