CTFL – Syllabus v4.0 – 5. Managing the Test Activities – Part 4/4

5.3. Test Monitoring, Test Control and Test Completion

Test monitoring is concerned with gathering information about testing. This information is used to assess test progress and to measure whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

Test control uses the information from test monitoring to provide, in the form of control directives, guidance and the corrective actions needed to achieve the most effective and efficient testing.

Examples of control directives include:

  • Reprioritizing tests when an identified risk becomes an issue
  • Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
  • Adjusting the test schedule to address a delay in the delivery of the test environment
  • Adding new resources when and where needed

Test completion collects data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.

5.3.1. Metrics used in Testing

Test metrics are gathered to show progress against the planned schedule and budget, the current quality of the test object, and the effectiveness of the test activities with respect to the objectives or an iteration goal. Test monitoring gathers a variety of metrics to support the test control and test completion.

Common test metrics include:

  • Project progress metrics (e.g., task completion, resource usage, test effort)
  • Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
  • Product quality metrics (e.g., availability, response time, mean time to failure)
  • Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage)
  • Risk metrics (e.g., residual risk level)
  • Coverage metrics (e.g., requirements coverage, code coverage)
  • Cost metrics (e.g., cost of testing, organizational cost of quality)
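
As an illustration of how two of the defect metrics above are commonly derived, the following sketch computes defect density and defect detection percentage (DDP). The input figures and the use of kLOC as a size measure are assumptions made for the example, not values taken from the syllabus.

    # Illustrative calculation of two common defect metrics.
    # All input figures are hypothetical example values.

    def defect_density(defects_found: int, size_kloc: float) -> float:
        """Defects per thousand lines of code (or any agreed size measure)."""
        return defects_found / size_kloc

    def defect_detection_percentage(found_by_testing: int, found_later: int) -> float:
        """Share of all known defects that testing found before release (DDP)."""
        return 100.0 * found_by_testing / (found_by_testing + found_later)

    print(f"Defect density: {defect_density(46, 12.5):.1f} defects/kLOC")   # 3.7
    print(f"DDP: {defect_detection_percentage(46, 4):.1f}%")                # 92.0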

5.3.2. Purpose, Content and Audience for Test Reports

Test reporting summarizes and communicates test information during and after testing. Test progress reports support the ongoing control of the testing and must provide enough information to make modifications to the test schedule, resources, or test plan, when such changes are needed due to deviation from the plan or changed circumstances. Test completion reports summarize a specific stage of testing (e.g., test level, test cycle, iteration) and can give information for subsequent testing.

During test monitoring and control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (e.g., daily, weekly, etc.) and include:

  • Test period
  • Test progress (e.g., ahead or behind schedule), including any notable deviations
  • Impediments for testing, and their workarounds
  • Test metrics (see section 5.3.1 for examples)
  • New and changed risks within the testing period
  • Testing planned for the next period
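
As a minimal sketch only, the fragment below shows how the progress-report contents listed above could be captured in a simple data structure; the class and field names are assumptions for the example and are not prescribed by the syllabus or by ISO/IEC/IEEE 29119-3.

    from dataclasses import dataclass, field

    @dataclass
    class TestProgressReport:
        """Hypothetical container for the progress report contents listed above."""
        test_period: str                                # reporting period covered
        progress_summary: str                           # ahead/behind schedule, notable deviations
        impediments: list[str] = field(default_factory=list)
        metrics: dict[str, float] = field(default_factory=dict)       # see section 5.3.1
        new_and_changed_risks: list[str] = field(default_factory=list)
        planned_next_period: list[str] = field(default_factory=list)

    report = TestProgressReport(
        test_period="iteration 12, week 2",
        progress_summary="two days behind schedule (late test environment delivery)",
        impediments=["test environment delivered late; workaround: local stubs"],
        metrics={"test cases run": 120, "passed": 104, "failed": 16},
        new_and_changed_risks=["performance risk raised after failed load test"],
        planned_next_period=["re-run failed login tests", "start load testing"],
    )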

A test completion report is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met. This report uses test progress reports and other data.

Typical test completion reports include:

  • Test summary
  • Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
  • Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort)
  • Testing impediments and workarounds
  • Test metrics based on test progress reports
  • Unmitigated risks, defects not fixed
  • Lessons learned that are relevant to the testing

Different audiences require different information in the reports, and influence the degree of formality and the frequency of reporting. Reporting on test progress to others in the same team is often frequent and informal, while reporting on testing for a completed project follows a set template and occurs only once.

The ISO/IEC/IEEE 29119-3 standard includes templates and examples for test progress reports (called test status reports) and test completion reports.

5.3.3. Communicating the Status of Testing

The best means of communicating test status varies, depending on test management concerns, organizational test strategies, regulatory standards, or, in the case of self-organizing teams (see section 1.5.2), on the team itself. The options include:

  • Verbal communication with team members and other stakeholders
  • Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)
  • Electronic communication channels (e.g., email, chat)
  • Online documentation
  • Formal test reports (see section 5.3.2)

One or more of these options can be used. More formal communication may be more appropriate for distributed teams where direct face-to-face communication is not always possible due to geographical distance or time differences. Typically, different stakeholders are interested in different types of information, so communication should be tailored accordingly.

5.4. Configuration Management

In testing, configuration management (CM) provides a discipline for identifying, controlling, and tracking work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.

For a complex configuration item (e.g., a test environment), CM records the items it consists of, their relationships, and versions. If the configuration item is approved for testing, it becomes a baseline and can only be changed through a formal change control process.

Configuration management keeps a record of changed configuration items when a new baseline is created. It is possible to revert to a previous baseline to reproduce previous test results.
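
As a purely illustrative sketch, the fragment below models configuration items, approved (and therefore immutable) baselines, and reverting to an earlier baseline to reproduce previous test results. The class and identifier names are assumptions for the example; in practice this role is played by version control and CM tooling rather than hand-written code.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConfigurationItem:
        """A uniquely identified, versioned work product (test plan, test script, ...)."""
        item_id: str
        version: str

    @dataclass(frozen=True)
    class Baseline:
        """An approved, immutable set of configuration item versions."""
        name: str
        items: tuple[ConfigurationItem, ...]

    baseline_r1 = Baseline("release-1.0-test", (
        ConfigurationItem("test-plan", "1.0"),
        ConfigurationItem("login-test-script", "2.3"),
    ))

    # A new baseline records the changed items; reverting simply means selecting
    # the earlier baseline again so previous test results can be reproduced.
    baseline_r2 = Baseline("release-1.1-test", (
        ConfigurationItem("test-plan", "1.1"),
        ConfigurationItem("login-test-script", "2.4"),
    ))

    current = baseline_r2
    current = baseline_r1   # revert to reproduce the release-1.0 test results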

To properly support testing, CM ensures the following:

  • All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
  • All identified documentation and software items are referenced unambiguously in test documentation

Continuous integration, continuous delivery, continuous deployment and the associated testing are typically implemented as part of an automated DevOps pipeline (see section 2.1.4), in which automated CM is normally included.

5.5. Defect Management

Since one of the major test objectives is to find defects, an established defect management process is essential. Although we refer to “defects” here, the reported anomalies may turn out to be real defects or something else (e.g., false positive, change request) – this is resolved during the process of dealing with the defect reports. Anomalies may be reported during any phase of the SDLC and the form depends on the SDLC.

At a minimum, the defect management process includes a workflow for handling individual anomalies from their discovery to their closure, as well as rules for their classification. The workflow typically comprises activities to log the reported anomalies, analyze and classify them, decide on a suitable response (e.g., to fix the defect or to keep it as it is), and finally to close the defect report. The process must be followed by all involved stakeholders. It is advisable to handle defects from static testing (especially static analysis) in a similar way.
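
A minimal sketch of such a workflow is shown below, assuming a small set of states and transitions chosen only for illustration; real defect management tools and processes define their own state models.

    # Hypothetical defect workflow: log -> analyze and classify -> decide on a
    # response -> close. State names and transitions are illustrative only.
    ALLOWED_TRANSITIONS = {
        "open": {"in analysis"},
        "in analysis": {"to be fixed", "deferred", "rejected"},
        "to be fixed": {"awaiting confirmation testing"},
        "awaiting confirmation testing": {"closed", "re-opened"},
        "re-opened": {"in analysis"},
        "deferred": {"in analysis"},
        "rejected": {"closed"},
    }

    def transition(current_state: str, new_state: str) -> str:
        """Apply one workflow step, rejecting moves the process does not allow."""
        if new_state not in ALLOWED_TRANSITIONS.get(current_state, set()):
            raise ValueError(f"Illegal transition: {current_state} -> {new_state}")
        return new_state

    state = "open"
    for step in ("in analysis", "to be fixed", "awaiting confirmation testing", "closed"):
        state = transition(state, step)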

Typical defect reports have the following objectives:

  • Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
  • Provide a means of tracking the quality of the work product
  • Provide ideas for improvement of the development and test process

A defect report logged during dynamic testing typically includes:

  • Unique identifier
  • Title with a short summary of the anomaly being reported
  • Date when the anomaly was observed, issuing organization, and author, including their role
  • Identification of the test object and test environment
  • Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
  • Description of the failure to enable reproduction and resolution, including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
  • Expected results and actual results
  • Severity of the defect (degree of impact) on the interests of stakeholders or requirements
  • Priority to fix
  • Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
  • References (e.g., to the test case)

Some of this data may be automatically included when using defect management tools (e.g., identifier, date, author and initial status). Document templates for a defect report and example defect reports can be found in the ISO/IEC/IEEE 29119-3 standard, which refers to defect reports as incident reports.
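
To make the typical contents listed above concrete, the sketch below shows a hypothetical defect report record. The field names follow the bullets above, and the defaulted status and date illustrate data a defect management tool often supplies automatically; none of the names are mandated by the syllabus or by ISO/IEC/IEEE 29119-3.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class DefectReport:
        """Hypothetical defect (incident) report with the typical contents listed above."""
        identifier: str
        title: str
        author: str                       # including the author's role
        test_object: str
        test_environment: str
        context: str                      # test case, activity, SDLC phase, technique, test data
        failure_description: str          # steps to reproduce, logs, screenshots, recordings
        expected_result: str
        actual_result: str
        severity: str
        priority: str
        references: list[str] = field(default_factory=list)
        status: str = "open"              # often set automatically by the tool
        observed_on: date = field(default_factory=date.today)

    report = DefectReport(
        identifier="DEF-1042",
        title="Login fails with valid credentials after password reset",
        author="A. Tester (test analyst)",
        test_object="web shop build 5.2.1",
        test_environment="staging, Chrome 126",
        context="test case TC-LOGIN-07, system testing, equivalence partitioning",
        failure_description="1. Reset password 2. Log in with the new password -> HTTP 500 (log attached)",
        expected_result="User is logged in",
        actual_result="HTTP 500 error page",
        severity="high",
        priority="fix in next build",
        references=["TC-LOGIN-07"],
    )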
