AI-supported software engineering

Fraunhofer IESE is researching the use of AI to increase efficiency in software engineering while ensuring quality.

There is currently enormous interest in generative AI, which is entirely justified given its successes to date and the continuing rapid pace of innovation. Large language models (LLMs) [in German] such as GPT-4 already offer impressive capabilities, and their development continues to advance quickly. For example, DeepSeek (a Chinese AI model) caused a stir thanks to its reportedly improved resource efficiency and its release as open source – a circumstance that is likely to further promote the spread of AI support.

It is foreseeable that the increased integration of AI will bring about lasting changes to everyday work in many areas. This also applies to software engineering, which has always been a focus of Fraunhofer IESE. First and foremost, AI promises a significant increase in efficiency and productivity in this area, which is why many companies are currently looking closely at this topic.

AI tools in software development

Numerous new AI-supported tools are currently being introduced. The tools now available already cover a wide range of tasks in the software development life cycle – from requirements gathering and architecture to implementation and validation. Accordingly, the number of users of such AI tools is growing rapidly. Current (albeit non-representative) surveys indicate that over 60% [Stack Overflow Survey] or even over 80% [The Pragmatic Engineer] of developers are already using such tools, with a strong upward trend.

ChatGPT is the most popular tool overall [The Pragmatic Engineer], but it is very generic in its application. Alongside it, there are numerous more specialized tools that focus on typical tasks in software engineering. Programming assistants such as GitHub Copilot, Amazon CodeWhisperer, Cody AI, Mutable AI, and Tabnine promise significant productivity gains. There are also solutions for software testing and writing test cases (Codium AI), for documentation creation (Mintlify Writer), for bug tracking (Bugasura), and more. Newer LLM-based multi-agent development platforms such as Devin, GPT-Pilot, and CrewAI aim to automate not just individual aspects, but large parts of the software development process.

Efficiency in software engineering without compromising quality

All of this sounds very promising, but there are significant challenges that are slowing down the widespread use of these new tools. Particularly relevant are the reliability and traceability of the tools, which in turn influence the quality of the developed system. So how can quality be ensured? Can this task be performed effectively and efficiently by humans? Are there technical solutions that can help with this? And will there actually be a gain in efficiency and productivity in the end?

Safety-critical systems, which are being used more and more frequently due to the trend toward greater automation and autonomy, are a particularly tricky case. Such systems must not be placed on the market without adequate safety certification. But how can safety be demonstrated when AI-supported tools were used in development? Safety standards require that all development tools used be examined for potential malfunctions and their impact on the safety of the end product. With LLM-based tools, this is currently not feasible, at least as far as the LLM component is concerned. However, it is not necessarily the LLM itself that must be “safe”; rather, safety must be conclusively arguable in the overall context, i.e., including all surrounding measures.

IESE has many years of experience in this field, both in safety engineering in general and in the evaluation of tools in particular.

Research and applications of AI-supported software engineering

For over 25 years, Fraunhofer IESE has been a leading global institution in the field of software and systems engineering. Systematic quality assurance and verifiable quality have been at the heart of its research from the very beginning. Currently, the focus is on the quality characteristics of “safety” and “security” as well as on research work to ensure these characteristics in complex autonomous systems, in the field of production, and in the healthcare sector. In summary, IESE today stands for methods for the efficient development of complex, yet reliable software-intensive systems. In view of developments in the field of AI, it is clear that AI support in software engineering is a key topic for the future.

Researchers from several departments at Fraunhofer IESE are currently working together on specific use cases for AI-supported software engineering. These departments have many years of experience in the systematic modeling, measurement, analysis, and prediction of software qualities. This mix of expertise – software and systems engineering, safety engineering, security engineering, and data science/AI – puts IESE in a unique position to significantly advance AI support for the design of safe and reliable systems.

This work is framed by various bilateral projects with partners as well as our own research project, in which we investigate key questions regarding the use of AI in engineering on the basis of exemplary use cases, in the spirit of preliminary research.

Standardization of artificial intelligence and functional safety

IESE is also very active in standardization related to AI [in German], particularly with regard to safety. Through DKE Working Group 914.0.11, for example, it is involved in the development of an international standard on functional safety and AI (ISO/IEC TR 5469 “Functional safety and AI systems”). Furthermore, IESE is working through the DIN/DKE Joint Committee on Artificial Intelligence (NA 043-01-42) on the development of harmonized European standards for the recently adopted European AI Act, thereby helping to translate legal requirements into technical specifications. It also provides support in the development of derived standards for various industries such as manufacturing, automotive, and agriculture. Against this backdrop, IESE led the cross-sector safety group in the second edition of the AI standardization roadmap and played a leading role in the development of DIN SPEC 92005 on uncertainty in machine learning.

We are happy to support your company in integrating AI into your processes.

We develop customized solutions, work with state-of-the-art technology, and are independent and neutral.

Benefit from our experience!


Contact us!


Make an appointment with our experts.