Chapter 1 – 1.2 – What is testing? – Part 2/3

1.2.3 Software test and driving test compared

We can see that a software test is like a driving test in many ways, although of course it is not a perfect analogy. The driving examiner becomes the software tester; the driver being examined becomes the system or software under test. You’ll see as we go through this book that the same approach broadly holds.

  • Planning and preparation – Both the examiner and the tester need a plan of action and need to prepare for the test, which is not exhaustive, but is representative and allows risk-based decisions about the outcome.
  • Static and dynamic – Both dynamic (driving the car or executing the software) and static (questions to the driver or a review of the software) tests are useful.
  • Evaluation – The examiner and the tester must make an objective evaluation, log the test outcome and report factual observations about the test.
  • Determine that they satisfy specified requirements – The examiner and tester both check against requirements to carry out particular tasks successfully.
  • Demonstrate that they are fit for purpose – The examiner and the tester are not evaluating for perfection but for meeting enough of the attributes required to pass the test.
  • Detect defects – The examiner and tester both look for and log faults.

Let’s think a little more about planning. Because time is limited, both software testers and driving examiners decide in advance on a representative route that will provide a sufficiently good test.

It is not possible to carry out the driving test and make decisions on the spur of the moment about where to ask the driver to go next. If the examiner did that, they might run out of time and have to return to the test center without having observed all the necessary maneuvers. The driver will still want a pass/fail report.

In the same way, if we embark on testing a software system without a plan of action, we are very likely to run out of time before we know whether we have done enough testing. We’ll see that good testers always have a plan of action. In some cases, we use a lightweight outline providing the goals and general direction of the test, allowing the testers to vary the test during execution. In other cases, we use detailed scripts showing the steps in the test route and documenting exactly what the tester should expect to happen at each step. Whichever approach the tester takes, there will be some plan of action. Similarly, just as the driving examiner makes a log and report, a good tester will objectively document defects found and the outcome of the test.
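The scripted approach described above can be sketched in code. This is an illustrative sketch only, not a prescribed tool: the function name, the step structure and the two toy steps are all invented here to show the idea of pairing each action with an expected result and logging the outcome objectively.

```python
# A minimal sketch of a scripted test: each step pairs an action with the
# result the tester expects, and every step's outcome is logged factually.
# All names here (run_scripted_test, the example steps) are illustrative.

def run_scripted_test(steps):
    """Execute (action, expected) pairs and return a factual log."""
    log = []
    for number, (action, expected) in enumerate(steps, start=1):
        actual = action()                      # dynamic test: run the step
        log.append({
            "step": number,
            "expected": expected,
            "actual": actual,
            "passed": actual == expected,      # objective evaluation
        })
    return log

# Two trivial steps standing in for real interactions with the system.
steps = [
    (lambda: 2 + 2, 4),            # expect the system to add correctly
    (lambda: "ok".upper(), "OK"),  # expect the expected transformation
]
log = run_scripted_test(steps)
print(all(entry["passed"] for entry in log))  # True when every step passes
```

The point of the log is the same as the examiner’s report: a factual record of what was expected, what actually happened, and the pass/fail outcome of each step.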

So, test activities exist before and after test execution, and we explain those activities in this book. As a tester or test manager, you will be involved in planning and controlling the testing, choosing test conditions, designing test cases based on those test conditions, executing them and checking results, evaluating whether enough testing has been done by examining completion (or exit) criteria, reporting on the testing process and the system under test, and presenting test completion (or summary) reports.

1.2.4 When can we meet our test objectives?

Testing Principle – Early testing

Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.

We can use both dynamic testing and static testing as a means for achieving similar test objectives. Both provide information to improve both the system to be tested, and the development and testing processes. We mentioned above that testing can have different goals and objectives, which often include:

  • finding defects;
  • gaining confidence in and providing information about the level of quality;
  • preventing defects.

Many types of review and testing activities take place at different stages in the life cycle, as we’ll see in Chapter 2. These have different objectives. Early testing – such as early test design and review activities – finds defects early on when they are cheap to find and fix. Once the code is written, programmers and testers often run a set of tests so that they can identify and fix defects in the software. In this ‘development testing’ (which includes component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed.

Following that testing, the users of the software may carry out acceptance testing to confirm that the system works as expected and to gain confidence that it has met the requirements. Fixing the defects may not always be the test objective or the desired outcome. Sometimes we simply want to gather information and measure the software. This can take the form of attribute measures such as mean time between failures to assess reliability, or an assessment of the defect density in the software to assess and understand the risk of releasing it.

When maintaining software by enhancing it or fixing bugs, we are changing software that is already in use. In that case an objective of testing may be to ensure that we have not made errors and introduced defects when we changed the software. This is called regression testing – testing to ensure nothing has changed that should not have changed.
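A regression test in this sense re-runs checks whose expected results were recorded before the change. The sketch below assumes a hypothetical discount routine that was recently modified; the function, its cases and its expected values are all invented for illustration.

```python
# Hedged sketch of regression testing: after changing a (hypothetical)
# discount routine, we re-run checks that capture its previous behaviour
# to confirm nothing that should not change has changed.

def apply_discount(price, rate):
    """Hypothetical routine, recently changed to round to 2 decimal places."""
    return round(price * (1 - rate), 2)

# Regression cases: expected values recorded before the change was made.
regression_cases = [
    ((100.0, 0.0), 100.0),   # no discount: price must be untouched
    ((100.0, 0.10), 90.0),   # 10% off 100.00 was 90.00 before the change
    ((19.99, 0.25), 14.99),  # rounding behaviour must stay the same
]

for args, expected in regression_cases:
    assert apply_discount(*args) == expected, f"regression at {args}"
print("all regression checks passed")
```

If a change breaks any recorded expectation, the failing assertion points directly at the behaviour that changed when it should not have.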

We may continue to test the system once it is in operational use. In this case, the main objective may be to assess system characteristics such as reliability or availability.

Testing Principle – Defect clustering

A small number of modules contain most of the defects discovered during pre-release testing or show the most operational failures.
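Defect clustering can be seen by tallying defects per module. The defect log below is invented, but it shows the typical pattern in which a small fraction of the modules accounts for most of the defects.

```python
from collections import Counter

# Illustrative sketch of defect clustering: the per-module defect log is
# invented, but shows the typical pattern where a few modules dominate.
defect_log = (["billing"] * 40 + ["auth"] * 25 +
              ["reports"] * 5 + ["search"] * 3 + ["help"] * 2)

counts = Counter(defect_log)
total = sum(counts.values())            # 75 defects across 5 modules
top_two = counts.most_common(2)         # the two worst modules
share = sum(n for _, n in top_two) / total
print(f"{share:.0%}")                   # 87% of defects in 2 of 5 modules
```

A tally like this, built from real defect reports, helps focus further testing effort on the modules where defects cluster.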
