{"id":5454,"date":"2020-04-28T15:59:23","date_gmt":"2020-04-28T13:59:23","guid":{"rendered":"https:\/\/blog.iese.fraunhofer.de\/?p=5454"},"modified":"2024-05-27T12:09:23","modified_gmt":"2024-05-27T10:09:23","slug":"which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe","status":"publish","type":"post","link":"https:\/\/www.iese.fraunhofer.de\/blog\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\/","title":{"rendered":"Which laws, standards, and research initiatives exist to make Artificial Intelligence and Autonomous Systems safe?"},"content":{"rendered":"<p class=\"lead\">This is a question that we, Dr.-Ing. Rasmus Adler as \u201cProgram Manager Autonomous Systems\u201d at Fraunhofer IESE, and Dr. Patrik Feth as member of the group \u201cAdvanced Safety Functions &amp; Standards\u201d at SICK AG, are confronted with again and again. In this post, we will therefore address the issue of standards for autonomous systems and provide an overview of initiatives aimed at regulating the use of Artificial Intelligence in safety-critical systems. We originally prepared this overview for ourselves, as we wanted to make a deliberate choice regarding which research groups and standardization committees we would get involved in with our expertise. With this post we want to strengthen exchanges within the research and standardization community in the field of safety assurance of AI and create synergies. We are looking forward to getting feedback and will also regularly update this post accordingly.<\/p>\n<p>In principle, there is broad agreement that Artificial Intelligence needs boundaries. However, the notions of what \u00bbArtificial Intelligence (AI)\u00bb is still differ greatly. Also, there is no consensus yet as to how to implement such boundaries in the form of laws and standards. 
Yet, many efforts are underway worldwide to reach a consensus, and initial results are already available.<\/p>\n<div class=\"info-box\">\n<p><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-7634 size-thumbnail\" src=\"https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-150x150.jpg\" alt=\"Patrik Feth (SICK AG)\" width=\"150\" height=\"150\" srcset=\"https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-150x150.jpg 150w, https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-32x32.jpg 32w, https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-50x50.jpg 50w, https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-64x64.jpg 64w, https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-96x96.jpg 96w, https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-128x128.jpg 128w, https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Patrik_Feth_Blog-65x65.jpg 65w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/strong><\/p>\n<p><strong>Co-Author<\/strong><\/p>\n<p>Dr. Patrik Feth<br \/>\nCorporate Unit Functional Safety<br \/>\nSICK AG<br \/>\n<a href=\"mailto:Patrik.Feth@sick.de\" target=\"_blank\" rel=\"noopener noreferrer\">Patrik.Feth@sick.de<\/a><\/p>\n<\/div>\n<h3>Are there already standards for autonomous systems?<\/h3>\n<p>The definition of AI is currently being discussed by the <a href=\"https:\/\/www.iso.org\/committee\/6794475.html\" target=\"_blank\" rel=\"noopener noreferrer\">ISO\/IEC JTC 1\/SC 42<\/a> committee, for example. 
\u00bb<a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/ethics-guidelines-trustworthy-ai\" target=\"_blank\" rel=\"noopener noreferrer\">Ethics Guidelines for Trustworthy AI<\/a>\u00ab have been drawn up at the European level by a high-level expert group. These Guidelines proceed on the assumption that all legal rights and obligations that apply to the processes and activities involved in developing, deploying and using AI systems remain mandatory and must be duly observed (<a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/ethics-guidelines-trustworthy-ai\" target=\"_blank\" rel=\"noopener noreferrer\">Seite 8<\/a>). These laws include the<br \/>\n<a href=\"https:\/\/de.wikipedia.org\/wiki\/Produktsicherheitsgesetz_(Deutschland)\" target=\"_blank\" rel=\"noopener noreferrer\">(German) Product Safety Act<\/a> with the associated machinery directive, which are of particular importance in the context of safety.<\/p>\n<p>The current laws do not, however, make any concrete provisions regarding the development of safety-critical systems. It is only required by various means to comply with the state of the art and the state of the practice. This is where standards come into play, because standards should reflect this state in the best possible way. To be able to do so, they must be regularly updated in terms of new technological developments. Traditionally, such adjustments have rather been reactive in nature. Industry representatives agree on a minimum level that can be regarded as the current standard. Regarding the use of AI in safety-critical applications, however, a proactive approach is increasingly being taken. Safety experts from research and application jointly develop recommendations for action and application rules. 
In the following, we will focus on work and working groups using this proactive approach.<\/p>\n<h3>Already published standards for autonomous systems (including technical reports, DIN SPECs, DKE application rules, etc.)<\/h3>\n<p>Here we list already published documents from standardization committees that concern AI and autonomous systems. At this time, many other documents are under preparation and will be published in the near future. Please see the list below for current initiatives. We will gladly extend the list with additional elements. Simply use the comment function below.<\/p>\n<ul>\n<li><strong>DIN SPEC 92001-1<\/strong><br \/>\nThe aim of DIN SPEC 92001 is to establish a quality-assuring and transparent lifecycle for AI modules. In the first part of the planned 92001 series, a framework is being set up for this.<br \/>\n<a href=\"https:\/\/www.din.de\/de\/wdc-beuth:din21:288723757\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.din.de\/de\/wdc-beuth:din21:288723757<\/a><\/li>\n<li><strong>UL4600<\/strong><br \/>\nUL4600 places the focus on setting up a safety case for autonomous systems and provides a framework for this. Fraunhofer IESE is on the review committee in order to provide support with its industry experience and its research expertise.<br \/>\n<a href=\"https:\/\/edge-case-research.com\/ul4600\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/edge-case-research.com\/ul4600\/<\/a><\/li>\n<li><strong>ISO\/PAS 21448<br \/>\n<\/strong> Developed for the automotive sector, this standard addresses the limits of meaningful usability of algorithms and sensor systems and considers the new error class of functional deficiencies.<br \/>\n<a href=\"https:\/\/www.iso.org\/standard\/70939.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/standard\/70939.html<\/a> <strong><br \/>\n<\/strong><\/li>\n<li><strong>ISO\/IEC 20546<\/strong><br \/>\nThis standard defines basic terminology for Big Data. 
However, the terms <em>Artificial Intelligence<\/em> or <em>Machine Learning<\/em> are not mentioned in this document.<br \/>\n<a href=\"https:\/\/www.iso.org\/standard\/68305.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/standard\/68305.html<\/a><\/li>\n<li><strong>ISO\/IEC TR 20547-2<\/strong><br \/>\nThe 20547 series is intended to establish a reference architecture for Big Data. In this second part, use cases are listed.<br \/>\n<a href=\"https:\/\/www.iso.org\/standard\/71276.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/standard\/71276.html<\/a><\/li>\n<li><strong>ISO\/IEC TR 20547-5<\/strong><br \/>\nThe fifth part of the 20547 series provides an overview of standards that are relevant for Big Data, both existing standards and standards currently in development.<br \/>\n<a href=\"https:\/\/www.iso.org\/standard\/72826.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/standard\/72826.html<\/a><\/li>\n<\/ul>\n<h3>Whitepapers, reports, and similar documents<\/h3>\n<p>The list below does not contain any standards, but we believe that the included documents reflect the generally accepted state of the art quite well. We will gladly add additional elements to this list. Please use the comment function below for this.<\/p>\n<ul>\n<li><strong>High-Level Expert Group on AI (European Commission): Ethics Guidelines for Trustworthy AI<\/strong><br \/>\nThese guidelines set out a framework for achieving trustworthy AI. Three elementary components are identified here: The AI should be lawful, ethical, and robust. Under the aspect of robustness, safety is mentioned explicitly. 
The document contains an assessment list for trustworthy AI.<br \/>\n<a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/high-level-expert-group-artificial-intelligence\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ec.europa.eu\/digital-single-market\/en\/high-level-expert-group-artificial-intelligence<\/a><\/li>\n<li><strong>Expert Report of the Data Ethics Commission<\/strong><br \/>\nThe Data Ethics Commission was tasked by the German Federal Government with developing ethical standards, guidelines, and concrete recommendations for action aimed at protecting the individual, preserving social coexistence, and safeguarding and promoting prosperity in the information age. This document summarizes the results. [only available in German]<br \/>\n<a href=\"http:\/\/s.fhg.de\/mcz\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/s.fhg.de\/mcz<\/a><\/li>\n<li><strong>IEEE: Ethically Aligned Design<\/strong><br \/>\nIn this document, the IEEE summarizes its recommendations aimed at shaping standards, certification, regulation, and legislation for the development of autonomous and intelligent systems in such a way that they holistically benefit societal well-being.<br \/>\n<a href=\"https:\/\/ethicsinaction.ieee.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ethicsinaction.ieee.org\/<\/a><\/li>\n<li><strong>SASWG: Safety Assurance Objectives for Autonomous Systems<\/strong><br \/>\nHaving emerged from a working group of the Safety-Critical Systems Club, this document lists objectives for the validation of autonomous systems at different levels of abstraction.<br \/>\n<a href=\"https:\/\/scsc.uk\/ga\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/scsc.uk\/ga<\/a><\/li>\n<li><strong>Safety First for Automated Driving<\/strong><br \/>\nIn this cross-industry whitepaper, Daimler together with Aptiv, Audi, Baidu, BMW, Continental, Fiat Chrysler Automobiles, HERE, Infineon, Intel, and 
Volkswagen examines the topic of safety for automated driving in accordance with SAE Level 3 and Level 4. It also addresses the use of AI methods (Machine Learning) required for automated driving.<br \/>\n<a href=\"https:\/\/newsroom.intel.com\/wp-content\/uploads\/sites\/11\/2019\/07\/Intel-Safety-First-for-Automated-Driving.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/newsroom.intel.com\/wp-content\/uploads\/sites\/11\/2019\/07\/Intel-Safety-First-for-Automated-Driving.pdf<\/a><\/li>\n<li><strong>Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective<\/strong><br \/>\nWe have included this publication because it offers a good overview of technical, ethical, and legal safety-related issues and their interfaces.<br \/>\n<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0004370219301109?dgcid=rss_sd_all\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0004370219301109?dgcid=rss_sd_all<\/a><\/li>\n<li><strong>Considerations in Assuring Safety of Increasingly Autonomous Systems<\/strong><br \/>\nWe have included this technical report from NASA because it summarizes what needs to be considered when technical systems take over safety-critical tasks that had previously been solved by humans with their \u201cintelligence\u201d.<br \/>\n<a href=\"https:\/\/ntrs.nasa.gov\/archive\/nasa\/casi.ntrs.nasa.gov\/20180006312.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/ntrs.nasa.gov\/archive\/nasa\/casi.ntrs.nasa.gov\/20180006312.pdf<\/a><\/li>\n<\/ul>\n<h3>Ongoing Initiatives in Research and Standardization<\/h3>\n<p>Many organizations are undertaking activities to further develop the state of the art in science and technology, or to document this state of the art in standardization projects. We will gladly add further elements to this list. 
Simply use the comment function below for this purpose.<\/p>\n<h4>Standardization Initiatives<\/h4>\n<ul>\n<li><strong>DIN.ONE &#8211; Platform and the German Standardization Roadmap on AI<\/strong><br \/>\nDIN and DKE are collaborating with the German Federal Government and representatives from industry, research, and civil society to draw up a standardization roadmap on Artificial Intelligence. This also includes standardization with regard to safety.<br \/>\n<a href=\"https:\/\/din.one\/pages\/viewpage.action?pageId=33620030\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/din.one\/pages\/viewpage.action?pageId=33620030<\/a><\/li>\n<li><strong>Standardization Council Industrie 4.0<\/strong><br \/>\nThe <a href=\"https:\/\/sci40.com\/de\/\" target=\"_blank\" rel=\"noopener noreferrer\">Standardization Council<\/a> has the task of coordinating standardization and regulation work in the field of Industrie 4.0 in Germany and beyond. It represents the interests of industry in national, European, and international standardization in the context of the digitization of industry and actively promotes international cooperation. Fraunhofer IESE and SICK are in the working group \u201csafe trustworthy AI systems\u201d. Fraunhofer IESE is additionally in the working groups \u201cHuman and AI\u201d as well as \u201cData modelling and semantics\u201d.<\/li>\n<li><strong>ISO\/IEC JTC 1\/SC42 WG1<\/strong><br \/>\nWorking group 1 of the SC42 is concerned with the fundamentals of AI standardization, such as terminology, concepts, and frameworks.<br \/>\n<a href=\"https:\/\/www.iso.org\/committee\/6794475.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/committee\/6794475.html<\/a><\/li>\n<li><strong>ISO\/IEC JTC 1\/SC42 WG2<\/strong><br \/>\nWorking group 2 of the SC42 emerged from a formerly independent SC on the topic of \u201cBig Data\u201d. 
Here, topics concerning data and data quality continue to be worked on.<br \/>\n<a href=\"https:\/\/www.iso.org\/committee\/6794475.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/committee\/6794475.html<\/a><\/li>\n<li><strong>ISO\/IEC JTC 1\/SC42 WG3<\/strong><br \/>\nThe focal topic of working group 3 of the SC42 is trustworthiness. Here, standards on risk management, on the robustness of neural networks, as well as on ethical and social topics related to AI are being prepared.<br \/>\n<a href=\"https:\/\/www.iso.org\/committee\/6794475.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/committee\/6794475.html<\/a><\/li>\n<li><strong>ISO\/IEC JTC 1\/SC42 WG4<\/strong><br \/>\nWorking group 4 of the SC42 collects use cases related to AI.<br \/>\n<a href=\"https:\/\/www.iso.org\/committee\/6794475.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/committee\/6794475.html<\/a><\/li>\n<li><strong>ISO\/IEC JTC 1\/SC42 WG5<br \/>\n<\/strong>Working group 5 of the SC42 is the youngest group in the subcommittee and has the mandate to deal with computational aspects and characteristics of AI.<br \/>\n<a href=\"https:\/\/www.iso.org\/committee\/6794475.html\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.iso.org\/committee\/6794475.html<\/a><\/li>\n<li><strong>DKE AK801.0.8<\/strong><br \/>\nIn this working group of DKE, in which both Fraunhofer IESE and SICK are active, an application rule for the development of autonomous\/cognitive systems is currently being drawn up. The focus in this application rule is on the execution of a trustworthiness analysis and the establishment of a trustworthiness assurance case. 
It is planned to publish a first version of this application rule in 2020.<br \/>\n<a href=\"https:\/\/www.dke.de\/de\/news\/2019\/referenzmodell-vertrauenswuerdige-ki-vde-anwendungsregel\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.dke.de\/de\/news\/2019\/referenzmodell-vertrauenswuerdige-ki-vde-anwendungsregel<\/a><\/li>\n<li><strong>DIN SPECs on AI<\/strong><br \/>\nUsing the shortened SPEC procedure, DIN is currently working to publish additional standards on the topic of AI. An <a href=\"https:\/\/www.din.de\/de\/forschung-und-innovation\/themen\/kuenstliche-intelligenz\/standards-fuer-ki\" target=\"_blank\" rel=\"noopener noreferrer\">overview<\/a> lists those SPECs for which freely downloadable drafts exist, such as <a href=\"https:\/\/www.beuth.de\/de\/technische-regel\/din-spec-13266\/318439445\" target=\"_blank\" rel=\"noopener noreferrer\">DIN SPEC 13266<\/a> \u201cGuideline for the development of deep learning image recognition systems\u201d [in German] and <a href=\"https:\/\/www.beuth.de\/de\/technische-regel\/din-spec-92001-1\/303650673\" target=\"_blank\" rel=\"noopener noreferrer\">DIN SPEC 92001-1<\/a> \u201cArtificial Intelligence &#8211; Life Cycle Processes and Quality Requirements &#8211; Part 1: Quality Meta Model\u201d.<\/li>\n<li><strong>IEEE 2846 WG<br \/>\n<\/strong>This <a href=\"https:\/\/sagroups.ieee.org\/2846\/\" target=\"_blank\" rel=\"noopener noreferrer\">WG<\/a> is working on \u201cA Formal Model for Safety Considerations in Automated Vehicle Decision Making\u201d. 
The purpose of this standard is to define a parameterized formal model for automated vehicle decision making that enables industry and government alike to align on a common definition of what it means for an automated vehicle to drive safely, balancing safety and practicability.<\/li>\n<li><strong>FG-AI4AD<br \/>\n<\/strong>The <a href=\"https:\/\/www.itu.int\/en\/ITU-T\/focusgroups\/ai4ad\/Pages\/default.aspx\" target=\"_blank\" rel=\"noopener noreferrer\">FG-AI4AD<\/a> supports standardization activities for services and applications enabled by AI systems in autonomous and assisted driving. The FG aims to create international harmonization on the definition of a minimal performance threshold for these AI systems (such as AI as a Driver).<\/li>\n<\/ul>\n<h4>Further Initiatives<\/h4>\n<ul>\n<li><strong>Assuring Autonomy International Program<\/strong><br \/>\nThis initiative led by the University of York is explicitly concerned with the assurance and regulation of robotics and autonomous systems. Currently, a freely accessible Body of Knowledge is being built up here, which is to become the reference source on this topic in the future. The \u201cAssuring Autonomy\u201d program is thematically very close to the <a href=\"https:\/\/www.iese.fraunhofer.de\/de\/trend\/kognitives-system.html\" target=\"_blank\" rel=\"noopener noreferrer\">Autonomous Systems<\/a> program of Fraunhofer IESE. In order to create synergies, a strategic collaboration is currently being prepared.<br \/>\n<a href=\"https:\/\/www.york.ac.uk\/assuring-autonomy\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.york.ac.uk\/assuring-autonomy\/<\/a><\/li>\n<li><strong>Safety-Critical Systems Club: Group Autonomous Systems<br \/>\n<\/strong>Fraunhofer IESE is a member of the working group Autonomous Systems of the Safety-Critical Systems Club. 
The group aims to produce clear guidance on how autonomous systems and autonomy technologies should be managed in a safety-related context, in a way that reflects emerging best practice.<br \/>\n<a href=\"https:\/\/scsc.uk\/ga\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/scsc.uk\/ga<\/a><\/li>\n<li><strong>Partnership on AI<\/strong><br \/>\nA US-led initiative that identifies the use of AI in safety-critical applications as the first of its thematic pillars. The partners comprise more than 90 organizations, including Amazon, Apple, Facebook, Google, and Microsoft. Companies from traditional safety-critical domains are not represented to date.<br \/>\n<a href=\"https:\/\/www.partnershiponai.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.partnershiponai.org\/<\/a><\/li>\n<li><strong>The Autonomous<br \/>\n<\/strong>The <a href=\"https:\/\/www.the-autonomous.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Autonomous<\/a> is an open platform that brings together the autonomous mobility ecosystem to align on relevant safety subjects. Besides an annual event in Vienna, The Autonomous hosts chapter events and workshops throughout the year to work on global reference solutions for safety from an architecture, security, AI, and regulation standpoint.<\/li>\n<\/ul>\n<div class=\"info-box\">\n<p><strong>Looking for more information and input on that topic?<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>If you want to learn more about the state of the art and the challenges of <strong>using AI safely in autonomous systems<\/strong>, please feel free to attend our 4-day seminar to become a certified <a href=\"https:\/\/www.iese.fraunhofer.de\/de\/seminare_training\/data-scientist-assuring-safety.html\">\u00bbData Scientist Specialized in Assuring Safety\u00ab<\/a>. 
The seminar also provides an up-to-date insight into the state of standardization.<\/p>\n<p>&nbsp;<\/p>\n<p>Please also read the Fraunhofer IESE Blog post about the definition of autonomous systems:<br \/>\n<a href=\"https:\/\/www.iese.fraunhofer.de\/blog\/autonomous-or-merely-highly-automated-what-is-actually-the-difference\/\">Autonomous or merely highly automated-what is actually the difference?<\/a><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>This is a question that we, Dr.-Ing. Rasmus Adler as \u201cProgram Manager Autonomous Systems\u201d at Fraunhofer IESE, and Dr. Patrik Feth as member of the group \u201cAdvanced Safety Functions &amp; Standards\u201d at SICK AG, are confronted with again and again&#8230;.<\/p>\n","protected":false},"author":22,"featured_media":5510,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[239,177,18],"tags":[],"coauthors":[37],"class_list":["post-5454","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-autonomes-fahren","category-kuenstliche-intelligenz","category-sicherheit"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Standards and laws for autonomous systems - Blog des Fraunhofer IESE<\/title>\n<meta name=\"description\" content=\"Fraunhofer IESE provides a list of collected standards for autonomous systems and gives an overview of the use of artificial intelligence (AI)\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.iese.fraunhofer.de\/blog\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta 
property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Standards and laws for autonomous systems - Blog des Fraunhofer IESE\" \/>\n<meta property=\"og:description\" content=\"Fraunhofer IESE provides a list of collected standards for autonomous systems and gives an overview of the use of artificial intelligence (AI)\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.iese.fraunhofer.de\/blog\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\/\" \/>\n<meta property=\"og:site_name\" content=\"Fraunhofer IESE\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/FraunhoferIESE\/\" \/>\n<meta property=\"article:published_time\" content=\"2020-04-28T13:59:23+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-05-27T10:09:23+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.iese.fraunhofer.de\/blog\/wp-content\/uploads\/2020\/04\/Gesetze-und-Normen-Digital.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2523\" \/>\n\t<meta property=\"og:image:height\" content=\"1585\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Dr. Rasmus Adler\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@FraunhoferIESE\" \/>\n<meta name=\"twitter:site\" content=\"@FraunhoferIESE\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Dr. Rasmus Adler\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"9\u00a0Minuten\" \/>\n\t<meta name=\"twitter:label3\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data3\" content=\"Dr. 
Rasmus Adler\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/\"},\"author\":{\"name\":\"Dr. Rasmus Adler\",\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/#\\\/schema\\\/person\\\/c04a120d16ea7bef582db16c1c8a0e96\"},\"headline\":\"Which laws, standards, and research initiatives exist to make Artificial Intelligence and Autonomous Systems safe?\",\"datePublished\":\"2020-04-28T13:59:23+00:00\",\"dateModified\":\"2024-05-27T10:09:23+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/\"},\"wordCount\":2115,\"publisher\":{\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/wp-content\\\/uploads\\\/2020\\\/04\\\/Gesetze-und-Normen-Digital.jpg\",\"articleSection\":[\"Autonomes Fahren\",\"K\u00fcnstliche 
Intelligenz\",\"Sicherheit\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/\",\"url\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/\",\"name\":\"Standards and laws for autonomous systems - Blog des Fraunhofer IESE\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/which-laws-standards-and-research-initiatives-exist-to-make-artificial-intelligence-and-autonomous-systems-safe\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.iese.fraunhofer.de\\\/blog\\\/wp-content\\\/uploads\\\/2020\\\/04\\\/Gesetze-und-Normen-Digital.jpg\",\"datePublished\":\"2020-04-28T13:59:23+00:00\",\"dateModified\":\"2024-05-27T10:09:23+00:00\",\"description\":\"Fraunhofer IESE provides a list of collected standards for autonomous systems and gives an overview of the use of artificial intelligence 
<div class="info-box">
<p><strong>Author</strong></p>
<p>Dr. Rasmus Adler<br />
Program Manager Autonomous Systems<br />
Fraunhofer IESE</p>
<p>Rasmus Adler studied applied computer science and has been working at Fraunhofer IESE since 2006. In his PhD, he developed fail-operational solutions for active safety systems such as ESP. He then devoted himself, as a project manager and safety expert, to model-based safety engineering for autonomous systems. He coordinated the development of solutions for measuring, at runtime, the risk of planned or possible autonomous system behavior with respect to the current situation, and for triggering risk-minimizing measures. In his current position as Program Manager for Autonomous Systems, he focuses in particular on the risk management of networked cyber-physical systems. To maximize the benefit of the individual systems as well as the overall benefit of system networks, he relies on cooperative runtime risk management that is partly based on Artificial Intelligence. Since current safety standards do not support this innovative form of risk management, he is active in standardization committees and contributes to the development of normative requirements for autonomous, networked cyber-physical systems.</p>
<p><a href="https://www.iese.fraunhofer.de/blog/author/rasmus-adler/" target="_blank" rel="noopener noreferrer">All posts by Dr. Rasmus Adler</a></p>
</div>