CTFL – Syllabus v4.0 – 3. Static Testing – Part 1/2

Keywords

anomaly, dynamic testing, formal review, informal review, inspection, review, static analysis, static testing, technical review, walkthrough

Learning Objectives for Chapter 3:

3.1 Static Testing Basics

  • FL-3.1.1 (K1) Recognize types of products that can be examined by the different static test techniques
  • FL-3.1.2 (K2) Explain the value of static testing
  • FL-3.1.3 (K2) Compare and contrast static and dynamic testing

3.2 Feedback and Review Process

  • FL-3.2.1 (K1) Identify the benefits of early and frequent stakeholder feedback
  • FL-3.2.2 (K2) Summarize the activities of the review process
  • FL-3.2.3 (K1) Recall which responsibilities are assigned to the principal roles when performing reviews
  • FL-3.2.4 (K2) Compare and contrast the different review types
  • FL-3.2.5 (K1) Recall the factors that contribute to a successful review

3.1. Static Testing Basics

In contrast to dynamic testing, in static testing the software under test does not need to be executed. Code, process specification, system architecture specification or other work products are evaluated through manual examination (e.g., reviews) or with the help of a tool (e.g., static analysis). Test objectives include improving quality, detecting defects and assessing characteristics like readability, completeness, correctness, testability and consistency. Static testing can be applied for both verification and validation.

Testers, business representatives and developers work together during example mappings, collaborative user story writing and backlog refinement sessions to ensure that user stories and related work products meet defined criteria, e.g., the Definition of Ready (see section 5.1.3). Review techniques can be applied to ensure user stories are complete and understandable and include testable acceptance criteria. By asking the right questions, testers explore, challenge and help improve the proposed user stories.

Static analysis can identify problems prior to dynamic testing while often requiring less effort, since no test cases are required, and tools (see chapter 6) are typically used. Static analysis is often incorporated into CI frameworks (see section 2.1.4). While largely used to detect specific code defects, static analysis is also used to evaluate maintainability and security. Spelling checkers and readability tools are other examples of static analysis tools.
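
For illustration, the following sketch (not part of the syllabus) shows one way a CI job might gate a build on static analysis findings. It assumes Python, the flake8 linter being installed, and source code living under src/; the tool choice and path are assumptions.

```python
# Minimal sketch of a CI quality gate built on static analysis.
# Assumptions (not from the syllabus): Python, the flake8 linter
# installed, and source code living under src/.
import subprocess
import sys

def run_static_analysis() -> int:
    """Run the linter; flake8 exits non-zero when it reports findings."""
    result = subprocess.run(["flake8", "src/"], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)  # the reported anomalies, one per line
    return result.returncode

if __name__ == "__main__":
    # A CI job can fail the build on this exit code, so defects are
    # reported before any dynamic tests are executed.
    sys.exit(run_static_analysis())
```

Because no test cases need to be designed for this step, such a gate can run on every commit at low cost, before any dynamic testing begins.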

3.1.1. Work Products Examinable by Static Testing

Almost any work product can be examined using static testing. Examples include requirement specification documents, source code, test plans, test cases, product backlog items, test charters, project documentation, contracts and models.

Any work product that can be read and understood can be the subject of a review. However, for static analysis, work products need a structure against which they can be checked (e.g., models, code or text with a formal syntax).
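
To make the "formal syntax" point concrete, here is a minimal sketch using Python's standard ast module: because source code follows a defined grammar, it can be parsed and checked against a rule without ever being executed. The rule chosen (every function needs a docstring) is an illustrative assumption, not a syllabus requirement.

```python
# Toy static check using Python's standard ast module. The rule
# (every function needs a docstring) is an illustrative assumption.
import ast

SOURCE = '''
def transfer(amount, account):
    return account - amount
'''

tree = ast.parse(SOURCE)  # parsing requires a formal syntax; no code is executed
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
        print(f"line {node.lineno}: function '{node.name}' has no docstring")
```

A plain prose document has no such grammar to parse against, which is why it can be reviewed by people but not checked by this kind of tool.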

Work products that are not appropriate for static testing include those that are difficult to interpret by human beings and that should not be analyzed by tools (e.g., 3rd party executable code due to legal reasons).

3.1.2. Value of Static Testing

Static testing can detect defects in the earliest phases of the SDLC, fulfilling the principle of early testing (see section 1.3). It can also identify defects which cannot be detected by dynamic testing (e.g., unreachable code, design patterns not implemented as desired, defects in non-executable work products).

Static testing provides the ability to evaluate the quality of, and to build confidence in, work products. By verifying the documented requirements, the stakeholders can also make sure that these requirements describe their actual needs. Since static testing can be performed early in the SDLC, a shared understanding can be created among the involved stakeholders. Communication among the involved stakeholders will also be improved. For this reason, it is recommended to involve a wide variety of stakeholders in static testing.

Even though reviews can be costly to implement, the overall project costs are usually much lower than when no reviews are performed because less time and effort needs to be spent on fixing defects later in the project.

Code defects can be detected using static analysis more efficiently than in dynamic testing, usually resulting in both fewer code defects and a lower overall development effort.

3.1.3. Differences between Static Testing and Dynamic Testing

Static testing and dynamic testing practices complement each other. They have similar objectives, such as supporting the detection of defects in work products (see section 1.1.1), but there are also some differences, such as:

  • Static and dynamic testing (with analysis of failures) can both lead to the detection of defects; however, there are some defect types that can only be found by either static or dynamic testing
  • Static testing finds defects directly, while dynamic testing causes failures from which the associated defects are determined through subsequent analysis
  • Static testing may more easily detect defects that lie on paths through the code that are rarely executed or hard to reach using dynamic testing
  • Static testing can be applied to non-executable work products, while dynamic testing can only be applied to executable work products
  • Static testing can be used to measure quality characteristics that are not dependent on executing code (e.g., maintainability), while dynamic testing can be used to measure quality characteristics that are dependent on executing code (e.g., performance efficiency)
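
As an illustration of the last point above, the following sketch estimates a maintainability-related metric (a rough proxy for cyclomatic complexity) from the parsed code alone, with no execution. The metric definition and the example function are simplifying assumptions made for demonstration.

```python
# Sketch: estimating a maintainability-related metric (a rough proxy
# for cyclomatic complexity) from the parsed code alone. The metric
# definition and the example function are illustrative assumptions.
import ast

SOURCE = '''
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        if x % 2:
            return "odd"
    return "even or zero"
'''

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp)

tree = ast.parse(SOURCE)
for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
    decisions = sum(isinstance(n, DECISION_NODES) for n in ast.walk(func))
    print(f"{func.name}: estimated complexity {decisions + 1}")
```

Measuring performance efficiency, by contrast, requires running the code under load, which is why it belongs to dynamic testing.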

Typical defects that are easier and/or cheaper to find through static testing include:

  • Defects in requirements (e.g., inconsistencies, ambiguities, contradictions, omissions, inaccuracies, duplications)
  • Design defects (e.g., inefficient database structures, poor modularization)
  • Certain types of coding defects (e.g., variables with undefined values, undeclared variables, unreachable or duplicated code, excessive code complexity; several of these are shown in the sketch after this list)
  • Deviations from standards (e.g., lack of adherence to naming conventions in coding standards)
  • Incorrect interface specifications (e.g., mismatched number, type or order of parameters)
  • Specific types of security vulnerabilities (e.g., buffer overflows)
  • Gaps or inaccuracies in test basis coverage (e.g., missing tests for an acceptance criterion)
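
The deliberately flawed fragment below (purely illustrative, not from the syllabus) contains several of the coding and interface defects listed above. It parses, so static analysis tools can examine it, but it is not meant to be run; typical analyzers such as pylint would be expected to flag the commented lines, though exact messages vary by tool.

```python
# Deliberately flawed fragment: it parses, so static analysis tools
# can examine it, but it is not meant to be run. Names and scenario
# are invented for illustration.

def apply_discount(price, rate, currency):  # 'currency' is never used
    if rate > 1:
        rate = rate / 100
    return price * (1 - rate)
    print("discount applied")  # unreachable code: placed after return

def checkout(price):
    if price > 100:
        total = apply_discount(price, 10)  # mismatched number of parameters
    return total  # variable with a possibly undefined value ('total')
```

Dynamic testing would only expose these defects if a test happened to execute the failing path; static analysis reports all of them without running the code at all.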
