In recent years, the proliferation of cybersecurity regulations—whether issued by governments or private certification schemes—has made code quality and security critical concerns for software vendors and product manufacturers. To meet these new requirements and demonstrate compliance, static code analysis has emerged as an essential technique within the verification toolbox. An increasing number of safety and cybersecurity standards mandate its use, particularly in critical systems.
Yet despite its growing adoption, static analysis is still sometimes misunderstood or misused. It is therefore essential to clarify its principles, actual capabilities, limitations, and best practices for integration into the software development lifecycle. This article provides a technical overview of the foundations, methods, and industrial use cases of static analysis tools applied to source code defect detection.
Definition and Scope of Static Analysis
Static Analysis (SA) of software is a family of technologies that aim at understanding the behavior of a target application (a piece of software) without executing it. It can be applied to:
- perform transformations (translation, optimization, compilation, parallelization, etc.),
- detect defects at the source code or binary code level.
Compilers form a well-known class of SA tools used on a day-to-day basis. They combine several analysis objectives:
- detection of ill-formed or defective statements in the application (illustrated in the sketch after this list),
- translation of programming language instructions into machine-level instructions,
- optimization to reduce the size and execution time of the generated code.
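As a minimal illustration of the defect-detection objective, the C fragment below compiles but is reported by common compilers when warnings are enabled (for instance gcc or clang with -Wall); the exact diagnostics and their wording are compiler-specific.

```c
#include <stdio.h>

/* Compiles, but common compilers report both issues below when
 * warnings are enabled (e.g. gcc/clang with -Wall). */
void report(int count)
{
    int unused = 0;                  /* typically flagged as an unused variable */
    printf("count = %s\n", count);   /* format/argument mismatch: %s expects a string */
}
```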
SA technology ranges from textual (keyword or pattern search) to syntactic (the syntax of the programming language is known) and semantic (both the syntax and the semantics of the programming language are known). SA algorithms can be applied at the procedure level or in inter-procedural mode through the call graph. Among the well-known algorithms found in SA tools are control flow, data flow and information flow analysis, alias analysis, abstract interpretation, use-def chains, taint analysis, etc.
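As a small sketch of the gap between textual and semantic analysis: a keyword or pattern search finds nothing suspicious in the C fragment below, whereas a data flow analysis that tracks values across branches can report the possible null pointer dereference.

```c
#include <stddef.h>

/* Nothing looks suspicious at the textual level, but data flow
 * analysis shows that 'p' may still be NULL when dereferenced. */
int length_plus_one(const char *s, int use_default)
{
    const char *p = NULL;
    int n = 0;

    if (!use_default)
        p = s;                   /* p remains NULL when use_default != 0 */

    while (p[n] != '\0')         /* possible NULL dereference, found by semantic analysis */
        n++;

    return n + 1;
}
```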
An SA tool that produced only True Positives and True Negatives would be a perfect tool. No SA tool reaches this level of precision: semantic SA tools produce findings classified as True Positives and True Negatives but also False Positives (they do not miss any defect present in the application, but they emit spurious messages), whereas non-semantic SA tools produce True Positives and True Negatives but also False Negatives (they may miss defects present in the application, but they do not emit spurious messages).
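A minimal sketch of a typical False Positive, assuming a semantic analyzer that merges the states of the two branches instead of tracking the correlation between the two tests on mode: it may report a possible division by zero at the marked line even though that path can never reach a zero divisor at run time.

```c
/* A path-insensitive semantic analyzer may report a possible
 * division by zero at (*), although divisor is always positive
 * whenever that statement executes: a False Positive. */
int scale(int value, int mode)
{
    int divisor = 0;
    if (mode > 0)
        divisor = mode;
    if (mode > 0)
        return value / divisor;  /* (*) spurious finding if branch correlation is lost */
    return value;
}
```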
SA technology allows the detection of many kinds of defects, ranging from quality issues to safety and cyber-security flaws. The level of formalization varies from one kind of defect to another: for example, Run Time Errors (RTEs) are precisely formalized by the programming language standard (undefined, unspecified or implementation-defined behaviors), whereas coding standard violations or cyber-security flaws lack such a formalization. The level of formalization determines the exhaustiveness of the corresponding tool: exhaustive SA tools can (with the appropriate SA technology) detect “all” occurrences of the formalized defects they target (such as RTEs), whereas there is no way to prove that a tool detects “all” occurrences of the unformalized defects it targets.
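For instance, the two marked statements below are Run Time Errors whose status is fixed by the C standard (both are undefined behavior), which is what makes an exhaustive treatment of RTEs meaningful:

```c
#include <limits.h>

/* Both marked statements are undefined behavior according to the
 * C standard, i.e. precisely formalized Run Time Errors. */
int rte_examples(int i)
{
    int buf[4] = {0, 1, 2, 3};
    int x = INT_MAX;
    x = x + 1;           /* signed integer overflow: undefined behavior */
    return buf[i] + x;   /* out-of-bounds access if i < 0 or i > 3: undefined behavior */
}
```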
Each SA tool targets a subset of known defects called its detection perimeter. In general, two different SA tools do not target the same detection perimeter, even though their perimeters may intersect. Even when they tackle the same defect, they may detect more or less complex occurrences of it, depending on the technologies they apply. As a result, it is difficult to choose the most efficient SA tool for a given class of applications.
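As a sketch of this difference in depth, consider two hypothetical tools facing the same defect (division by zero): a purely local or syntactic checker may only flag the first occurrence below, while detecting the second requires inter-procedural data flow analysis through the call graph.

```c
/* Two occurrences of the same defect with different complexity. */
static int divide(int num, int den)
{
    return num / den;             /* (2) den is zero for the call below */
}

int examples(int n)
{
    int zero = 0;
    int a = n / zero;             /* (1) obvious division by zero, visible locally */
    int b = divide(n, n - n);     /* (2) needs inter-procedural analysis to detect */
    return a + b;
}
```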
Why and how to use SA Tools?
More and more safety and cyber-security standards identify SA as a required verification means for critical pieces of software. Applying an SA tool therefore becomes mandatory in more and more industrial development processes, even though the standards prescribe neither specific SA tools nor SA objectives. Industrial organizations thus become responsible for:
- defining the target defects that put their application at risk,
- selecting, among the 200 existing tools, an SA tool that covers the target defects,
- configuring the selected SA tool to detect the target defects,
- launching the static analysis and producing the list of findings,
- producing the verification report.
Contrary to a manual analysis, an SA tool analysis can be replicated identically without effort. It is easy to rerun the analysis from one version to the next, and it is even possible to analyze the differences between the findings of the two versions.
Note that competencies from different domains are required to cover the five steps properly.
- Step 1: knowledge of the standard verification requirements and of the safety or cyber security risks facing the critical application in its execution context.
- Steps 2 and 3: basic knowledge of SA technology and SA tools, to select the SA tool that suits the verification objective and to configure it properly.
- Step 4: knowledge of the selected SA tool's behavior, to check that the produced findings conform to the verification objective.
- Step 5: knowledge of the verification objective and of the tool configuration, to classify the findings as True/False Positives and True/False Negatives.
The first three steps form a preparatory phase that needs to be performed only once. The preparatory phase can target a single industrial application or a set of applications, and results in the configuration of the selected SA tool (including defect criticality if necessary). Most SA tools also offer the possibility to silence a check locally by adding annotations in the source code. The preparatory phase report includes the justification of all choices made at this stage as well as SA tool usage recommendations, and is concluded by a verification procedure. The last two steps form the run phase, which should be applied to each new version of each application. Thanks to the procedure resulting from the preparatory phase, each subsequent version of the applications can be analyzed in a consistent way. The verification report should contain not only the analyzed SA tool findings but also the justified residual defects and the SA tool annotations present in the source code.
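A minimal sketch of such a source code annotation, using a hypothetical tool-specific comment syntax (real tools use their own comments, pragmas or attributes, so both the marker and the check name below are assumptions):

```c
/* Hypothetical suppression annotation; the marker "sa_tool:" and the
 * check name are illustrative only, the real syntax is tool-specific.
 * The justification should remain traceable in the verification report. */
unsigned int crc_step(unsigned int crc)
{
    /* sa_tool: suppress "unsigned-wraparound" -- wrap-around is the
     * intended, well-defined behavior of this checksum computation */
    return crc * 31u + 7u;
}
```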
Note that in some industrial contexts, an SA tool is already in use on the applications to cover other verification objectives. In this case, step (2) consists in identifying the subset of the verification objectives defined in step (1) that should be covered by this SA tool. Step (3) then consists in either defining a specific configuration to cover the verification objectives from step (1) or merging this specific configuration with the one already in use. This must be studied carefully to avoid not only extra cost during the run phase (steps (4) and (5)) but also rejection by the development or verification teams.
The preparatory phase is often difficult to foresee from an industrial point of view, because tool vendors promote the use of SA tools in their default configuration. But 15 years of static analysis usage in various industrial contexts has demonstrated that this strategy often fails: critical defects are not detected before the application is put into operation, and development and verification teams come to consider SA findings useless. The preparatory phase is the only way to identify the truly critical defects and genuinely increase the security of critical applications. In general, industrial teams need external competencies to facilitate the preparatory phase: the preparatory report should justify the defect identification, the SA tool selection and configuration, and any recommendation (annotations, pragmas, etc.) necessary to achieve the verification objective.
Conclusion
When properly prepared, configured, and integrated into a documented and tool-supported process, static analysis is a powerful lever to improve the safety and cybersecurity of critical software. It is not merely about using a tool with default settings: it requires deep thinking about verification objectives, relevant detection scopes, and the context of application. Experience has shown that the preparatory phase is often the key to effective and accepted usage by project teams. In short, static analysis does not replace human expertise but rather complements it intelligently to achieve a common goal: controlling software risks from the very first lines of code.