Seminar: Securing Artificial Intelligence

© iStock.com/Henrik5000
AI safety in safety-critical areas.

Using safe artificial intelligence

Artificial intelligence (AI) is already present in many systems today: from voice assistants to intelligent algorithms that analyze our behavior when shopping online or using social media. In the future, we will encounter AI systems even more frequently, especially in critical areas of application such as autonomous driving, production automation/Industry 4.0, and medical technology. This is where safe AI – i.e., controlling the risk of unacceptable errors and malfunctions in AI systems – plays an important role.

Information and details about the seminar


The “Certified Data Scientist Specialized in Assuring Safety” seminar provides expert knowledge on securing artificial intelligence (AI) in safety-critical areas.

We offer an overview of the state of the art in ensuring AI safety. At the interface between functional safety and artificial intelligence, we cover relevant standards and standardization initiatives. We raise awareness of the challenges of using AI in safety-critical solutions by highlighting typical problems at this intersection.

In the seminar, participants learn about possible strategies for securing artificial intelligence. Together, we test a selection of approaches to address specific challenges and derive tailor-made safety concepts.

Event type and location

 

  • In-person seminar in Kaiserslautern
  • Online seminar via MS Teams

Dates

 

  • More dates to follow!

Completion

 

  • Certificate of attendance and certification
  • Training materials

Language

 

  • German
  • English

Cost

 

  • 4.5-day in-person seminar (including exam): EUR 4,350 per person (including drinks and lunch)
  • 4.5-day online seminar (including exam): EUR 4,350 per person

Equipment
 

  • In person: writing materials for notes
  • Online: laptop with internet access, webcam, and microphone

You will receive a Microsoft Teams dial-in link for the online seminar.


Target group

  • Testing organizations, specialists and managers, project managers, quality management
  • Experts such as safety engineers and data scientists

Prerequisites

  • Basic knowledge of statistics and data analysis is required.
  • Initial experience in working with machine learning methods is an advantage. 
  • Prior knowledge in the field of safety engineering is not required.

After the seminar, you will know...

  • ...about the potential risks and innovations of AI applications in safety-critical environments.
  • ...about the basics of safety engineering.
  • ...about relevant AI fundamentals from a safety perspective.
  • ...how to classify the benefits and binding nature of safety and AI standards.
  • ...about a selection of possible strategies and measures for safe AI.
  • ...how to apply assurance cases as a possible argumentation basis for AI-related safety evidence.

 

The seminar offers you... 

... an overview of the state of the art.

... possible strategies for securing artificial intelligence.

... a selection of approaches for addressing specific challenges and deriving customized safety concepts.

... a high proportion of exercises and interaction to convey the content in a practical manner and enable its transfer into everyday working life.

The seminar on securing artificial intelligence will be held in German. Accompanying documents are available in English.

Day 1: 09:00-16:30

Motivation and fundamentals

 

  • Motivation for AI in safety-critical systems
  • Fundamentals of safety engineering
  • Fundamentals of AI with a focus on safety

Day 2: 09:00-16:30

Standards and measures

 

  • Relevant standards and norms
  • Safety measures for AI
  • Measures during specification
  • Measures during design

Day 3: 09:00-16:30

Measures for safe AI (cont.)

 

  • Measures during testing
  • Measures during analysis
  • Measures in the data lifecycle

Day 4: 09:00-16:30

Safety argumentation

 

  • Fundamentals of assurance cases
  • Assurance cases for AI components
  • Holistic view of core content, opportunity for concluding questions

Day 5: 09:00-12:30

Certification exam

 

  • Certification exam in the morning

The seminar was designed by the experts and specialists at Fraunhofer IESE and has already been successfully held several times. Seminar participants receive personal support from the Fraunhofer IESE team and gain direct access to expertise from research and practice.

Communication

Interactive lecture

 

  • Questions can be asked at any time
  • Regular feedback rounds
  • Practice sessions for applying and consolidating specialist knowledge

Media

Tips and tools

 

  • Multimedia presentation
  • Live examples and demonstrations
  • Detailed seminar materials and checklists

Expertise

Maximum practical relevance

 

  • Fraunhofer experts and specialists
  • Theory from research and project work
  • Practical expertise

Dr. Rasmus Adler, Fraunhofer IESE
© Fraunhofer IESE

Dr.-Ing. Rasmus Adler studied applied computer science and has been employed at Fraunhofer IESE since 2006. In his doctoral thesis, he developed fail-operational solutions for active safety systems such as ESP. He then devoted himself to model-based safety engineering for autonomous systems as a project manager and safety expert. He coordinated the development of solutions to measure the risk of planned/possible autonomous system behavior in relation to the current situation at runtime and to initiate risk-minimizing measures. In his current position as Program Manager for Autonomous Systems, he is particularly dedicated to the risk management of networked cyber-physical systems. In order to maximize the benefits of individual systems as well as the overall benefits of system networks, he relies on cooperative risk management during runtime, which is based in part on artificial intelligence. Since current safety standards do not support this innovative risk management, he is involved in standardization committees and participates in the development of normative requirements for autonomous, networked cyber-physical systems. 

Dr. Michael Kläs, Fraunhofer IESE
© Fraunhofer IESE

Since completing his studies in computer science, Dr. Michael Kläs has been working in applied research and advising companies on software quality and data analysis at Fraunhofer IESE. Over the past decade, he has been responsible for numerous industry and research projects, particularly the development of KPI systems, the evaluation of new technologies, and the development of predictive analytics. His dissertation focused on predicting software errors with the help of expert knowledge. His current focus is on potential analysis for data-driven innovation and data quality and uncertainty analysis in big data and AI systems. As the author of numerous specialist publications, he is also involved as a university lecturer and as an expert in standardization (DIN/VDE).

Janek Groß, Fraunhofer IESE
© Fraunhofer IESE

Janek Groß studied mathematics and psychology in Eichstätt, where he earned a Bachelor of Science degree and passed the first state examination for secondary school teaching. He completed his master's degree in Robotics, Cognition, Intelligence at the Faculty of Computer Science at the Technical University of Munich. During his studies, he acquired extensive knowledge in mathematical statistics and the empirical sciences. He also gained relevant experience in the development of large neural networks and in working with mainframe computers. In basic research, he is particularly interested in time series analysis and information theory. Since the beginning of 2021, he has been working in the Data Science department at Fraunhofer IESE and collaborates closely with the Safety Engineering department. His responsibilities include the empirical validation and formal verification of data-driven AI models used in autonomous vehicles and robots.

Dr. Adam Trendowicz, Fraunhofer IESE
© Fraunhofer IESE

Dr. Adam Trendowicz is a senior engineer in the “Data Science” department at the Fraunhofer Institute for Experimental Software Engineering IESE in Kaiserslautern, Germany. After receiving his PhD in the area of software project effort and risk estimation models from the University of Kaiserslautern (Germany), he continues to work in data science and data-driven business innovation.

Dr. Trendowicz has more than 20 years of experience in the analysis of software projects and products in various industries. He has led various activities in the areas of software measurement, prediction, and improvement in software companies of different sizes and in different domains (including Germany, Japan, and India). In this context, he has developed and empirically validated prediction models for software cost and software quality.

In his current work, Dr. Trendowicz focuses on data quality and preparation in the context of Machine Learning and on lean deployment of data-driven innovations based on Machine Learning and Artificial Intelligence solutions.

Dr. Trendowicz has co-created the “Data Scientist” continuing education and certification program offered by the Fraunhofer Big Data and Artificial Intelligence Alliance. Furthermore, he has held several tutorials on business-IT alignment, data preparation and analysis, software quality measurement, and cost estimation. Finally, he has co-authored several books and numerous international journal and conference publications.

Pascal Gerber, Fraunhofer IESE
© Fraunhofer IESE

Pascal Gerber studied Computer Science at the Technical University of Kaiserslautern, Germany. In his theses as well as during his work as a student research assistant at Fraunhofer IESE, he focused on topics such as reinforcement learning and quality influence models for evaluating uncertainties in the decisions of data-driven models.

After graduating, he started working in the “Safety Engineering” department of Fraunhofer IESE in 2021 and acquired fundamental competencies in safety engineering. Since 2023, he has been working in the “Data Science” department and is currently focusing on quality influence models.

Marc Wellstein, Fraunhofer IESE
© Fraunhofer IESE

Marc Wellstein received his Master’s degree in computer science (M.Sc.) from TU Kaiserslautern, Germany, in July 2021. Since then, he has been a full-time researcher in the “Safety Engineering” department at the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern. His main research topics are dynamic risk management, which complements established design-time assurance methods with runtime assurance aspects, and assurable, human-like driving behaviors in simulation-based validation for autonomous driving systems. His focus is mainly on the automotive domain.

Patrick Wolf, Fraunhofer IESE
© Fraunhofer IESE

Dr.-Ing. Patrick Wolf studied applied computer science at the Technical University of Kaiserslautern with a focus on embedded systems and robotics. His master's thesis dealt with the assessment of sensor data quality and the effect of uncertainties in perception systems of autonomous vehicles. From 2016 to 2023, he was a research assistant at the Chair of Robotic Systems at RPTU Kaiserslautern and developed safe and reliable autonomy solutions for commercial vehicles in off-road environments. Dr. Wolf completed his doctorate “Cognitive Processing in Behavior-Based Perception of Autonomous Off-Road Vehicles” with distinction in 2022.

Since 2023, Dr. Wolf has been working as a “Senior Safety Engineer” in the department for “Safety Engineering” at Fraunhofer IESE and develops new solutions in the field of safe autonomous driving. In addition to his research and industry activities, he is a lecturer in the Fraunhofer certificate program “Certified Data Scientist Specialized in Assuring Safety”. Since the winter semester 2023, he has been a lecturer in the Department of Computer Science at RPTU and teaches the lecture “Offroad Robotics” at the Kaiserslautern campus.

Daniel Hillen, Fraunhofer IESE
© Fraunhofer IESE

Daniel Hillen received his Master's degree in Computer Science from the Technical University of Kaiserslautern, Germany. Since 2020, he has been working full-time as a research associate and safety engineer at the Fraunhofer Institute for Experimental Software Engineering (IESE). His research focuses on methods for safeguarding autonomous vehicles, in particular on modeling the vehicle environment and on model-based safety engineering.

Daniel Seifert, Fraunhofer IESE

Daniel Seifert has been working as a data scientist at Fraunhofer IESE since April 2023. He mainly focuses on natural language processing using large language models and the explainability of AI models. Prior to this, he completed a master's degree in computer science at the Technical University of Kaiserslautern with a focus on software engineering and worked as a research assistant in the field of safety engineering at Fraunhofer IESE.

Do you have any questions?

Contact us and benefit from Fraunhofer as a research and industry partner.

 

Contact us!

Do you have any questions about the seminar or specific requests for training?

 

Feel free to contact us!