Artificial Intelligence – Minimizing threats through uncertainty assessments

Transparent and reliable handling of uncertainty in AI systems at runtime

Reduce potential AI threats and risks

The possibilities for using AI are diverse and constantly evolving. They open up innovative perspectives in various sectors, such as:

  • Diagnostic support in medicine
  • Autonomous vehicles and improved traffic safety
  • Efficient automated processes in industry
  • Detection of cyber attacks and security vulnerabilities

Despite the many opportunities offered by Artificial Intelligence, it is crucial not to neglect the challenges posed by uncertainty and model validation.


Uncertainty management

Uncertainty management plays a crucial role in reducing potential threats and risks associated with the use of Artificial Intelligence. It deals with the systematic identification, quantification, and reduction of uncertainties in AI models and their results in order to enable more dependable and transparent decisions. Uncertainty assessment thus allows a critical evaluation of an AI model and its applicability. Through effective uncertainty management, companies can strengthen confidence in AI systems and increase their accountability.

In practice, AI systems are often confronted with situations that lead to uncertain predictions.


In AI systems, the functional behavior of their data-driven components (DDCs) is not programmed but rather learned automatically from available data. Consequently, the functional behavior expected from a DDC is usually specified only by example, based on sample cases for which data has previously been collected. For this reason, it cannot be guaranteed that such components will behave as intended in all cases, which leads to uncertainties at runtime.
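The effect described above can be illustrated with a minimal, purely hypothetical sketch (not Fraunhofer IESE code): a data-driven component whose behavior is defined only by sample cases gives plausible outputs near those samples, but for inputs far from any sample its output is extrapolated and therefore uncertain.

```python
# Hypothetical toy DDC: its "behavior" is defined only by the sample cases below.
TRAINING_SAMPLES = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]  # (x, y) pairs

def ddc_predict(x):
    """1-nearest-neighbor prediction: the component only knows its examples."""
    nearest_x, nearest_y = min(TRAINING_SAMPLES, key=lambda s: abs(s[0] - x))
    distance = abs(nearest_x - x)
    # Flag the output as uncertain when the input is far from every sample case.
    uncertain = distance > 1.0
    return nearest_y, uncertain

print(ddc_predict(2.1))   # input close to a sample case -> (4.0, False)
print(ddc_predict(10.0))  # input far outside the sample cases -> (9.0, True)
```

The distance threshold and the nearest-neighbor rule are illustrative assumptions; the point is only that a component specified by examples cannot guarantee intended behavior for inputs it has never seen.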

Questions you should ask to minimize AI risks:

  • To what extent can we trust the outputs of a data-driven component?
  • How can we obtain solid statements about its dependability?
  • On what factors does the uncertainty of an output depend?

What are the reasons that AI can become a threat?


In practice, three types of uncertainty are particularly relevant:

1 Model fit
“Model fit uncertainty” focuses on the inherent uncertainty in data-driven models due to modeling and approximation.

2 Data quality
“Data quality uncertainty” includes the additional uncertainty that arises from applying a model to input data that has qualitative deficiencies.

3 Scope compliance
“Scope compliance uncertainty” comprises the uncertainty about whether the model is actually being applied within the scope for which it has been trained and validated.
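How the three factors could feed into one overall assessment can be sketched as follows. This is a hypothetical illustration under simple assumptions (the function, thresholds, and combination rule are invented for this example and are not the actual Uncertainty Wrapper implementation):

```python
def assess_uncertainty(prediction_confidence, input_quality, in_scope,
                       model_fit_error=0.05):
    """Combine the three uncertainty factors into one conservative estimate.

    prediction_confidence: the model's own confidence, in [0, 1]
    input_quality:         estimated quality of the input data, in [0, 1]
    in_scope:              True if the input lies within the validated scope
    model_fit_error:       residual error observed on validation data
    """
    # 1. Model fit: even a confident model carries its validation error.
    certainty = prediction_confidence * (1.0 - model_fit_error)
    # 2. Data quality: degraded inputs reduce how far the output can be trusted.
    certainty *= input_quality
    # 3. Scope compliance: outside the validated scope, no guarantee remains.
    if not in_scope:
        certainty = 0.0
    return 1.0 - certainty  # uncertainty of the final decision

print(assess_uncertainty(0.9, 1.0, True))   # good case: low uncertainty
print(assess_uncertainty(0.9, 0.5, True))   # noisy input: higher uncertainty
print(assess_uncertainty(0.9, 1.0, False))  # out of scope: maximal uncertainty
```

The design choice worth noting is that scope compliance acts as a hard gate rather than a weighted factor: a model applied outside its validated scope offers no basis for trust, regardless of how confident it appears.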

Do you already have an AI application in use? Do you fully trust the decisions of your AI? Existing models can be supplemented with uncertainty assessments. Please feel free to contact us.

Learn more about our “Uncertainty Wrapper” approach and its advantages over other uncertainty assessments, e.g., in traffic sign recognition, pedestrian detection, or cell classification using flow cytometry data.


Contact us!

Do you want to reduce potential AI threats or find out whether uncertainty management makes sense for your company?

Schedule an appointment with us by email or by phone.