Evaluating the Quality of Safety-Critical Software Systems
Continuously managing software product quality is an integral part of software project management and is especially crucial for the development of safety-critical systems. Software quality models capture knowledge and experience regarding which quality characteristics are of interest (such as reliability, maintainability, or safety), which measurement data to collect (such as results from a static code analysis), and which mechanisms to use for assessing the quality of the software as a whole (such as establishing evaluation thresholds and baselines). Coming up with suitable quality models for an organization is still a challenge today: First, there is no universal model that can be applied in every environment, because quality depends heavily on the application domain, the stakeholders, the usage purpose, and the concrete project context. A variety of different quality models exist in practice and research, and finding the “right” model requires a clear picture of the goals to be achieved by using it. Second, quality models need to be tailored to company specifics and must be supported by corresponding tools. Existing standards (such as the ISO/IEC 25000 series) are often too generic and hard to implement fully in an organization. Third, in order to create sustainable quality models, their contribution to and value for organizational goals must be clarified, and the models need to be integrated into the development processes (e.g., by defining appropriate quality gates).
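The assessment mechanism mentioned above can be illustrated with a minimal sketch: metric values collected for one quality characteristic are rated against evaluation thresholds (baselines) and aggregated into an overall verdict. All metric names, values, and baselines here are invented for illustration and are not part of any specific quality model.

```python
# Illustrative sketch of threshold-based quality evaluation.
# Metric names, values, and baselines are hypothetical.

# Baselines: acceptable (min, max) range per metric, e.g. derived
# from measurements of earlier, comparable projects
BASELINES = {
    "avg_cyclomatic_complexity": (0.0, 15.0),
    "comment_ratio": (0.15, 1.0),
    "duplicated_code_pct": (0.0, 5.0),
}

def rate_characteristic(measurements, baselines):
    """Rate each metric 'ok'/'violated' against its baseline and
    report the fraction of satisfied metrics as a simple score
    for the quality characteristic."""
    ratings = {}
    for metric, (low, high) in baselines.items():
        value = measurements[metric]
        ratings[metric] = "ok" if low <= value <= high else "violated"
    score = sum(r == "ok" for r in ratings.values()) / len(ratings)
    return ratings, score

measured = {
    "avg_cyclomatic_complexity": 18.2,  # above baseline -> violated
    "comment_ratio": 0.22,
    "duplicated_code_pct": 3.1,
}
ratings, score = rate_characteristic(measured, BASELINES)
print(ratings, round(score, 2))
```

In a real model, the aggregation would typically be weighted and hierarchical (metrics feed sub-characteristics, which feed characteristics); the flat fraction used here only sketches the principle.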
As part of the ongoing strategic collaboration with the Japan Aerospace Exploration Agency (JAXA), the focus from 2012 to 2013 was on developing a model for evaluating the quality of safety-critical software of satellite systems delivered by external suppliers. The main idea was to combine the results of a classical safety analysis with those of a static code analysis in order to identify safety-critical software functions and components with poor code quality and thus with a high risk of failure. Such a model allows JAXA to systematically evaluate the source code delivered by its suppliers and to focus quality assurance activities on those parts of the code that are rated as safety-critical and as having poor software quality. This effort should further increase the quality of the supplied safety-critical software and, in turn, enable JAXA to use high-quality software in its satellites and thus achieve its main mission.
For that purpose, a quality model was developed together with JAXA experts, focusing on quality characteristics, as well as on corresponding metrics for measuring those characteristics that have proven to have a strong impact on functional safety. The initial model was created based on information from the literature and with the help of external experts in the development of safety-critical systems. Afterwards, the model was tailored to the specific needs of JAXA and enriched with information from a classical Fault Tree Analysis (FTA), based on a mapping table between the identified root causes of a system failure and the functions related to those causes.
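The core idea of combining the FTA mapping table with static-analysis results can be sketched as follows. This is a simplified illustration, not JAXA's actual model: all root causes, function names, metrics, and thresholds below are invented.

```python
# Illustrative sketch: flag functions that are both safety-critical
# (implicated in an FTA root cause) and of poor code quality
# (violating a static-analysis threshold). All data is hypothetical.

# Mapping table: FTA root causes -> software functions related to them
fta_mapping = {
    "thruster_valve_stuck": ["ctrl_valve_open", "ctrl_valve_close"],
    "sensor_timeout": ["read_gyro", "watchdog_reset"],
}

# Static-analysis results per function (e.g., cyclomatic complexity,
# comment ratio) as delivered by a code analysis tool
static_metrics = {
    "ctrl_valve_open": {"complexity": 24, "comment_ratio": 0.05},
    "ctrl_valve_close": {"complexity": 8, "comment_ratio": 0.30},
    "read_gyro": {"complexity": 31, "comment_ratio": 0.12},
    "watchdog_reset": {"complexity": 5, "comment_ratio": 0.25},
    "log_housekeeping": {"complexity": 40, "comment_ratio": 0.02},
}

# Evaluation thresholds defining "poor" code quality
THRESHOLDS = {"complexity": 20, "comment_ratio": 0.10}

def high_risk_functions(mapping, metrics, thresholds):
    """Return the safety-critical functions (reachable from an FTA
    root cause) that violate at least one quality threshold."""
    safety_critical = {f for funcs in mapping.values() for f in funcs}
    risky = []
    for func in sorted(safety_critical):
        m = metrics.get(func, {})
        if (m.get("complexity", 0) > thresholds["complexity"]
                or m.get("comment_ratio", 1.0) < thresholds["comment_ratio"]):
            risky.append(func)
    return risky

print(high_risk_functions(fta_mapping, static_metrics, THRESHOLDS))
```

Note that `log_housekeeping` is not flagged despite its poor metrics, because it is not linked to any FTA root cause; this reflects the intent of focusing quality assurance only on the safety-critical, low-quality parts of the code.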
In 2012, the quality model was implemented using the Fraunhofer M-System measurement framework, which retrieves data from static code analysis tools and provides visualizations for browsing the analysis results and interacting with them (e.g., drilling down into the data). In 2013, the model was applied to an example system provided by JAXA and was initially evaluated in terms of its practical usability.
In 2014, following the integration of the final improvement recommendations, it is planned to broaden the scope of the model's usage to software that is actually part of recent JAXA satellite systems and to enable JAXA to perform the quality evaluation of safety-critical software systems on a larger scale. Furthermore, the integration of further aerospace-relevant standards and regulations will be evaluated with regard to extending the functionality of the original quality model.