CTFL – Syllabus v4.0 – 2. Testing Throughout the Software Development Lifecycle – Part 1/2

Keywords

acceptance testing, black-box testing, component integration testing, component testing, confirmation testing, functional testing, integration testing, maintenance testing, non-functional testing, regression testing, shift-left, system integration testing, system testing, test level, test object, test type, white-box testing

Learning Objectives for Chapter 2:

2.1 Testing in the Context of a Software Development Lifecycle

  • FL-2.1.1 (K2) Explain the impact of the chosen software development lifecycle on testing
  • FL-2.1.2 (K1) Recall good testing practices that apply to all software development lifecycles
  • FL-2.1.3 (K1) Recall the examples of test-first approaches to development
  • FL-2.1.4 (K2) Summarize how DevOps might have an impact on testing
  • FL-2.1.5 (K2) Explain the shift-left approach
  • FL-2.1.6 (K2) Explain how retrospectives can be used as a mechanism for process improvement

2.2 Test Levels and Test Types

  • FL-2.2.1 (K2) Distinguish the different test levels
  • FL-2.2.2 (K2) Distinguish the different test types
  • FL-2.2.3 (K2) Distinguish confirmation testing from regression testing

2.3 Maintenance Testing

  • FL-2.3.1 (K2) Summarize maintenance testing and its triggers

2.1 Testing in the Context of a Software Development Lifecycle

A software development lifecycle (SDLC) model is an abstract, high-level representation of the software development process. A SDLC model defines how different development phases and types of activities performed within this process relate to each other, both logically and chronologically. Examples of SDLC models include: sequential development models (e.g., waterfall model, V-model), iterative development models (e.g., spiral model, prototyping), and incremental development models (e.g., Unified Process).

Some activities within software development processes can also be described by more detailed software development methods and Agile practices. Examples include: acceptance test-driven development (ATDD), behavior-driven development (BDD), domain-driven design (DDD), extreme programming (XP), feature-driven development (FDD), Kanban, Lean IT, Scrum, and test-driven development (TDD).

2.1.1. Impact of the Software Development Lifecycle on Testing

Testing must be adapted to the SDLC to succeed. The choice of the SDLC impacts the:

  • Scope and timing of test activities (e.g., test levels and test types)
  • Level of detail of test documentation
  • Choice of test techniques and test approach
  • Extent of test automation
  • Role and responsibilities of a tester

In sequential development models, testers typically participate in requirement reviews, test analysis, and test design during the initial phases. Executable code is usually created only in the later phases, so dynamic testing typically cannot be performed early in the SDLC.

In some iterative and incremental development models, it is assumed that each iteration delivers a working prototype or product increment. This implies that in each iteration both static and dynamic testing may be performed at all test levels. Frequent delivery of increments requires fast feedback and extensive regression testing.

Agile software development assumes that change may occur throughout the project. Therefore, lightweight work product documentation and extensive test automation to make regression testing easier are favored in agile projects. Also, most of the manual testing tends to be done using experience-based test techniques (see Section 4.4) that do not require extensive prior test analysis and design.

2.1.2. Software Development Lifecycle and Good Testing Practices

Good testing practices, independent of the chosen SDLC model, include the following:

  • For every software development activity, there is a corresponding test activity, so that all development activities are subject to quality control
  • Different test levels (see section 2.2.1) have specific and different test objectives, which allows for testing to be appropriately comprehensive while avoiding redundancy
  • Test analysis and design for a given test level begins during the corresponding development phase of the SDLC, so that testing can adhere to the principle of early testing (see section 1.3)

2.1.4. DevOps and Testing

DevOps can provide benefits for testing, which include:

  • Automation through a delivery pipeline reduces the need for repetitive manual testing
  • The risk of regression is minimized due to the scale and range of automated regression tests (a minimal sketch follows this list)
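
As an illustration of the last point, below is a minimal sketch of an automated regression test that a delivery pipeline could run on every change. The shopping_cart module and its add_item() and total() functions are hypothetical examples, not part of the syllabus; the point is only that such tests are executed automatically and stop the pipeline when a regression is detected.

    # test_regression_cart.py - hypothetical regression tests run by a delivery pipeline
    # Assumes a project-specific module "shopping_cart" exposing add_item() and total().
    import unittest

    from shopping_cart import add_item, total

    class CartRegressionTests(unittest.TestCase):
        def test_total_of_added_items(self):
            cart = []
            add_item(cart, name="book", price=12.50, quantity=2)
            add_item(cart, name="pen", price=1.20, quantity=1)
            # Previously released behavior: total is the sum of price * quantity
            self.assertAlmostEqual(total(cart), 26.20)

        def test_empty_cart_total_is_zero(self):
            self.assertEqual(total([]), 0)

    if __name__ == "__main__":
        # A CI/CD pipeline runs the suite (e.g., python -m unittest) on every commit;
        # any failing test produces a non-zero exit code and stops the pipeline.
        unittest.main()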

DevOps is not without its risks and challenges, which include:

  • The DevOps delivery pipeline must be defined and established
  • CI/CD tools must be introduced and maintained
  • Test automation requires additional resources and may be difficult to establish and maintain

Although DevOps comes with a high level of automated testing, manual testing – especially from the user’s perspective – will still be needed.

2.1.5. Shift-Left Approach

The principle of early testing (see section 1.3) is sometimes referred to as shift-left because it is an approach where testing is performed earlier in the SDLC. Shift-left normally suggests that testing should be done earlier (e.g., not waiting for code to be implemented or for components to be integrated), but it does not mean that testing later in the SDLC should be neglected.

There are some good practices that illustrate how to achieve a “shift-left” in testing, which include:

  • Reviewing the specification from the perspective of testing. These review activities on specifications often find potential defects, such as ambiguities, incompleteness, and inconsistencies
  • Writing test cases before the code is written and having the code run in a test harness during code implementation (see the test-first sketch after this list)
  • Using CI, and even better CD, since they provide fast feedback and automated component tests that accompany the source code when it is submitted to the code repository
  • Completing static analysis of source code prior to dynamic testing, or as part of an automated process
  • Performing non-functional testing starting at the component test level, where possible. This is a form of shift-left, as these non-functional test types otherwise tend to be performed later in the SDLC, when a complete system and a representative test environment are available
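
To illustrate the practice of writing test cases before the code is written, the following is a minimal test-first sketch using Python's unittest framework. The leap_year module and its is_leap_year() function are hypothetical and do not exist yet when the test is written; the test is added to the test harness first, fails, and then drives the implementation.

    # test_leap_year.py - written before leap_year.py exists (test-first)
    import unittest

    from leap_year import is_leap_year  # hypothetical unit still to be implemented

    class LeapYearTests(unittest.TestCase):
        def test_year_divisible_by_four_is_leap(self):
            self.assertTrue(is_leap_year(2024))

        def test_century_not_divisible_by_400_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_year_divisible_by_400_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    # Running the harness initially fails because leap_year does not exist, which is
    # expected. A possible implementation that then makes the tests pass:
    #
    # def is_leap_year(year: int) -> bool:
    #     return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)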

A shift-left approach might result in extra training, effort, and/or costs earlier in the process, but is expected to save effort and/or costs later in the process.

For the shift-left approach to work, it is important that stakeholders are convinced of this concept and buy into it.

2.1.6. Retrospectives and Process Improvement

Retrospectives (also known as “post-project meetings” or project retrospectives) are often held at the end of a project or an iteration, at a release milestone, or when needed. The timing and organization of the retrospectives depend on the particular SDLC model being followed. In these meetings the participants (not only testers, but also, e.g., developers, architects, product owners, and business analysts) discuss:

  • What was successful, and should be retained?
  • What was not successful and could be improved?
  • How to incorporate the improvements and retain the successes in the future?

The results should be recorded and are normally part of the test completion report (see section 5.3.2). Retrospectives are critical for the successful implementation of continuous improvement, and it is important that any recommended improvements are followed up.

Typical benefits for testing include:

  • Increased test effectiveness / efficiency (e.g., by implementing suggestions for process improvement)
  • Increased quality of testware (e.g., by jointly reviewing the test processes)
  • Team bonding and learning (e.g., as a result of the opportunity to raise issues and propose improvement points)
  • Improved quality of the test basis (e.g., as deficiencies in the extent and quality of the requirements could be addressed and solved)
  • Better cooperation between development and testing (e.g., as collaboration is reviewed and optimized regularly)
