CHAPTER 1 – 1.4 – Fundamental test process – Part 3/3

1.4.5 Evaluating exit criteria and reporting

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level, because for each level we need to know whether we have done enough testing. Based on our risk assessment, we’ll have set criteria against which we’ll measure “enough”.

These criteria vary for each project and are known as exit criteria. They tell us whether we can declare a given testing activity or level complete. Exit criteria should be set and evaluated for each test level, and we may have a mix of:

  • coverage or completion criteria, which tell us which test cases must be included, e.g. “the driving test must include an emergency stop” or “the software test must include a response time measurement”;
  • acceptance criteria, which tell us how we know whether the software has passed or failed overall, e.g. “only pass the driver if they have completed the emergency stop correctly” or “only pass the software for release if it meets the priority 1 requirements list”;
  • process exit criteria, which tell us whether we have completed all the tasks we need to do, e.g. “the examiner/tester has not finished until they have written and filed the end of test report”.
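
To make this concrete, here is a minimal sketch (in Python, purely for illustration) of how the three kinds of exit criteria might be captured as plain data during test planning. The field names and thresholds are assumed examples, not part of any standard or tool.

    # Hypothetical illustration: exit criteria recorded as plain data at planning time.
    from dataclasses import dataclass, field

    @dataclass
    class ExitCriteria:
        # Coverage/completion criteria: what must have been exercised.
        required_tests: list[str] = field(default_factory=list)
        min_requirement_coverage: float = 1.0   # e.g. 100% of priority 1 requirements
        # Acceptance criteria: how we judge pass/fail overall.
        max_open_priority1_defects: int = 0
        # Process exit criteria: tasks that must be complete before we stop.
        required_deliverables: list[str] = field(default_factory=list)

    release_criteria = ExitCriteria(
        required_tests=["response_time_measurement"],
        min_requirement_coverage=1.0,
        max_open_priority1_defects=0,
        required_deliverables=["test_summary_report"],
    )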

Evaluating exit criteria has the following major tasks:

  • Check test logs against the exit criteria specified in test planning: We look to see what evidence we have for which tests have been executed and checked, and what defects have been raised, fixed, confirmation tested, or are outstanding (a minimal sketch of this check appears after this list).
  • Assess whether more tests are needed or whether the specified exit criteria should be changed: We may need to run more tests if we have not run all the tests we designed, if we realize we have not reached the coverage we expected, or if the risks have increased for the project. We may need to lower the exit criteria if the business and project risks rise in importance and the product or technical risks drop in importance. Note that this is not easy to do and must be agreed with stakeholders. The test management tools and test coverage tools that we’ll discuss in Chapter 6 help us with this assessment.
  • Write a test summary report for stakeholders: It is not enough that the testers know the outcome of the test. All the stakeholders need to know what testing has been done and the outcome of the testing, in order to make informed decisions about the software.
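
As a small illustration of the first two tasks, the sketch below checks a hypothetical test log against simple exit criteria and prints the kind of one-line status a test summary report might open with. The log format, defect records and thresholds are all assumptions; in practice this data would come from your test management tool.

    # Hypothetical test log and defect records for illustration only.
    test_log = [
        {"test": "login_ok",            "status": "passed"},
        {"test": "response_time_check", "status": "passed"},
        {"test": "bulk_import",         "status": "not_run"},
    ]
    defects = [
        {"id": "D-101", "priority": 1, "state": "closed"},
        {"id": "D-102", "priority": 2, "state": "open"},
    ]

    executed = [t for t in test_log if t["status"] != "not_run"]
    passed = [t for t in executed if t["status"] == "passed"]
    open_p1 = [d for d in defects if d["priority"] == 1 and d["state"] == "open"]

    # Example criteria: all planned tests executed and no open priority 1 defects.
    criteria_met = len(executed) == len(test_log) and not open_p1

    print(
        f"Executed {len(executed)}/{len(test_log)} tests, {len(passed)} passed, "
        f"{len(open_p1)} open priority 1 defects. "
        f"Exit criteria {'met' if criteria_met else 'NOT met: more testing or a stakeholder decision needed'}."
    )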

1.4.6 Test closure activities

During test closure activities, we collect data from completed test activities to consolidate experience, including checking and filing testware, and analyzing facts and numbers. We may need to do this when software is delivered. We also might close testing for other reasons, such as when we have gathered the information needed from testing, when the project is cancelled, when a particular milestone is achieved, or when a maintenance release or update is done. Test closure activities include the following major tasks:

  • Check which planned deliverables we actually delivered and ensure all incident reports have been resolved through defect repair or deferral. For deferred defects, in other words those that remain open, we may request a change in a future release. We document the acceptance or rejection of the software system.
  • Finalize and archive testware, such as scripts, the test environment, and any other test infrastructure, for later reuse. It is important to reuse whatever testware we can; we will inevitably carry out maintenance testing, and it saves time and effort if our testware can be pulled from a library of existing tests (a minimal archiving sketch follows this list). It also allows us to compare the results of testing between software versions.
  • Hand over testware to the maintenance organization who will support the software and make any bug fixes or maintenance changes, for use in confirmation testing and regression testing. This group may be separate from the people who build and test the software; the maintenance testers are one of the customers of the development testers, and they will use the library of tests.
  • Evaluate how the testing went and analyze lessons learned for future releases and projects. This might include process improvements for the software development life cycle as a whole, as well as improvement of the test processes. If you reflect on Figure 1.3 again, we might use the test results to set targets for improving reviews and testing, with the goal of reducing the number of defects in live use. We might look at the number of incidents that were test problems, with the goal of improving the way we design, execute and check our tests, or the way we manage the test environments and data. This helps us make our testing more mature and cost-effective for the organization. This is documented in a test summary report or might be part of an overall project evaluation report.
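
As a small illustration of the archiving task, the sketch below bundles a testware directory into a versioned archive together with a manifest, so maintenance testers can later find and reuse it. The directory layout, function name and version tag are assumptions made for the example.

    # Hypothetical sketch: archive testware at test closure for later reuse.
    import json
    import tarfile
    from datetime import date
    from pathlib import Path

    def archive_testware(testware_dir: str, release: str, archive_root: str = "testware_archive") -> Path:
        """Bundle test scripts, data and environment notes into a versioned archive."""
        Path(archive_root).mkdir(exist_ok=True)
        archive_path = Path(archive_root) / f"testware_{release}.tar.gz"
        with tarfile.open(archive_path, "w:gz") as tar:
            tar.add(testware_dir, arcname=f"testware_{release}")
        # Record what was archived and when, so the maintenance organization can find it.
        manifest = {"release": release, "source": testware_dir, "archived_on": date.today().isoformat()}
        (Path(archive_root) / f"testware_{release}.json").write_text(json.dumps(manifest, indent=2))
        return archive_path

    # Example usage (assuming a ./tests directory exists):
    # archive_testware("tests", release="2.1.0")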
