CHAPTER 1 – 1.4 – Fundamental test process – Part 2/3

1.4.3 Test analysis and design

Test analysis and design is the activity where the general testing objectives identified during planning are transformed into tangible test conditions and test designs, from which we later build test procedures (scripts). You’ll see how to do this in Chapter 4.

Test analysis and design has the following major tasks, in approximately the following order:

  • Review the test basis (such as the product risk analysis, requirements, architecture, design specifications, and interfaces), examining the specifications for the software we are testing. We use the test basis to help us build our tests. We can start designing certain kinds of tests (called black-box tests) before the code exists, because we can use the test basis documents to understand what the system should do once built. As we study the test basis, we often identify gaps and ambiguities in the specifications, because we are trying to identify precisely what happens at each point in the system; raising these early can also prevent defects from appearing in the code.
  • Identify test conditions based on analysis of test items, their specifications, and what we know about their behavior and structure. This gives us a high-level list of what we are interested in testing. If we return to our driving example, the examiner might have a list of test conditions including “behavior at road junctions”, “use of indicators”, “ability to maneuver the car” and so on. In testing, we use test techniques to help us define the test conditions. From this we can start to identify the type of generic test data we might need.
  • Design the tests (you’ll see how to do this in Chapter 4), using techniques to help select representative tests that relate to particular aspects of the software which carry risks or which are of particular interest, based on the test conditions and going into more detail. For example, the driving examiner might look at the list of test conditions and decide that junctions need to include T-junctions, crossroads and so on. In testing, we’ll define the test cases and test procedures (the first sketch after this list shows one such technique in action).
  • Evaluate testability of the requirements and system. The requirements may be written in a way that allows a tester to design tests; for example, if the performance of the software is important, that should be specified in a testable way. If the requirements just say “the software needs to respond quickly enough”, that is not testable, because “quickly enough” may mean different things to different people. A more testable requirement would be “the software needs to respond within 5 seconds with 20 people logged on” (the second sketch after this list shows one way such a requirement might be checked). The testability of the system depends on aspects such as whether it is possible to set up the system in an environment that matches the operational environment and whether all the ways the system can be configured or used can be understood and tested. For example, if we test a website, it may not be possible to identify and recreate all the configurations of hardware, operating system, browser, connection, firewall and other factors that the website might encounter.
  • Design the test environment set-up and identify any required infrastructure and tools. This includes testing tools (see Chapter 6) and support tools such as spreadsheets, word processors, project planning tools, and non-IT tools and equipment – everything we need to carry out our work.
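To make the test design step concrete, here is a minimal sketch in Python of how a test condition might be turned into specific test cases using boundary value analysis, one of the black-box techniques covered in Chapter 4. The validate_age function and its 18–65 valid range are hypothetical, invented purely for illustration.

```python
# Minimal sketch: deriving concrete test cases from a test condition
# using boundary value analysis (a black-box technique, see Chapter 4).
# The function under test and its valid range (18-65) are hypothetical.

def validate_age(age: int) -> bool:
    """Hypothetical function under test: accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

# Test condition: "the system accepts only valid ages".
# Boundary value analysis turns it into (input, expected result) pairs
# at and around each boundary of the valid partition.
test_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # on the upper boundary
    (66, False),  # just above the upper boundary
]

for age, expected in test_cases:
    actual = validate_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("All boundary value test cases passed.")
```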
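And here is a minimal sketch of how the testable performance requirement above might be checked. It assumes a hypothetical handle_request stand-in for the real system and simulates 20 concurrently logged-on users with a thread pool; a real performance test would of course exercise the actual system in a representative environment.

```python
# Minimal sketch: checking the testable requirement "the software needs
# to respond within 5 seconds with 20 people logged on".
# handle_request is a hypothetical stand-in for the real system.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> float:
    """Hypothetical request to the system under test; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.1)  # placeholder for real work / a network call
    return time.perf_counter() - start

# Simulate 20 concurrently logged-on users, each issuing one request.
with ThreadPoolExecutor(max_workers=20) as pool:
    response_times = list(pool.map(handle_request, range(20)))

# The requirement is testable because it names a threshold and a load.
slowest = max(response_times)
assert slowest < 5.0, f"slowest response {slowest:.2f}s exceeds the 5s limit"
print(f"20 concurrent users served; slowest response took {slowest:.2f}s")
```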

1.4.4 Test implementation and execution

During test implementation and execution, we take the test conditions and make them into test cases and testware, and we set up the test environment. This means that, having put together a high-level design for our tests, we now start to build them. We transform our test conditions into test cases and procedures, and into other testware such as scripts for automation. We also need to set up the environment where we will run the tests and build our test data. Setting up environments and data often involves significant time and effort, so you should plan and monitor this work carefully. Test implementation and execution has the following major tasks, in approximately the following order:

Implementation:

  • Develop and prioritize our test cases, using the techniques you’ll see in Chapter 4, and create test data for those tests. We will also write instructions for carrying out the tests (test procedures). For the driving examiner this might mean changing the test condition “junctions” to “take the route down Mayfield Road to the junction with Summer Road and ask the driver to turn left into Summer Road and then right into Green Road, expecting that the driver checks mirrors, signals and maneuvers correctly, while remaining aware of other road users.” We may need to automate some tests using test harnesses and automated test scripts. We’ll talk about automation more in Chapter 6.
  • Create test suites from the test cases for efficient test execution. A test suite is a logical collection of test cases which naturally work together; test suites often share test data and a common high-level set of objectives (a sketch follows this list). We’ll also set up a test execution schedule.
  • Implement and verify the environment. We make sure the test environment has been set up correctly, possibly even running specific tests on it.
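As an illustration of a test suite, here is a minimal pytest-style sketch in which related test cases share test data through a fixture. The login function, the credentials, and the suggested file name are all hypothetical, chosen only to show the structure.

```python
# Minimal sketch: grouping related test cases into a test suite with
# shared test data, in pytest style. The login function and the user
# data are hypothetical, invented for illustration.
import pytest

def login(username: str, password: str) -> bool:
    """Hypothetical function under test."""
    return username == "alice" and password == "s3cret"

@pytest.fixture
def valid_user():
    # Shared test data for the whole suite.
    return {"username": "alice", "password": "s3cret"}

class TestLoginSuite:
    """A logical collection of test cases which naturally work together."""

    def test_valid_credentials_are_accepted(self, valid_user):
        assert login(valid_user["username"], valid_user["password"])

    def test_wrong_password_is_rejected(self, valid_user):
        assert not login(valid_user["username"], "wrong")

    def test_unknown_user_is_rejected(self):
        assert not login("mallory", "s3cret")

# Run with: pytest -v test_login.py  (hypothetical file name)
```

Grouping the cases in one class lets them share set-up cost and keeps logically related checks together when scheduling execution.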

Execution:

  • Execute the test suites and individual test cases, following our test procedures. We might do this manually or by using test execution tools, according to the planned sequence.
  • Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware. We must know exactly which tests we ran against which version of the software; we must report defects against specific versions; and the test log provides an audit trail (a logging sketch follows this list).
  • Compare actual results (what happened when we ran the tests) with expected results (what we anticipated would happen).
  • Where there are differences between actual and expected results, report discrepancies as incidents. We analyze them to gather further details about the defect, report additional information on the problem, identify the causes of the defect, and differentiate between defects in the software or other products under test on the one hand, and defects in test data, defects in test documents, or mistakes in the way we executed the test on the other. We would want to log the latter in order to improve the testing itself.
  • Repeat test activities as a result of actions taken for each discrepancy. We re-execute tests that previously failed in order to confirm that a fix works (confirmation testing or re-testing); we execute corrected tests and suites if there were defects in our own testware; and we test the corrected software more widely to check that the programmers did not introduce defects in unchanged areas of the software and that fixing a defect did not uncover other defects (regression testing).
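To illustrate executing tests, comparing actual with expected results, and logging outcomes against the software version so the log provides an audit trail, here is a minimal Python sketch. The add function, the case identifiers, and the version string are hypothetical; a real test log would record far more (environment, testware versions, tester, and so on).

```python
# Minimal sketch: executing test cases, comparing actual with expected
# results, and logging outcomes against the software version under test.
# The function under test, case IDs, and version string are hypothetical.
import datetime

SOFTWARE_VERSION = "2.1.0"  # hypothetical version of the software under test

def add(a, b):
    """Hypothetical function under test."""
    return a + b

test_cases = [
    ("TC-001", (2, 3), 5),
    ("TC-002", (-1, 1), 0),
    ("TC-003", (0, 0), 0),
]

log = []
for case_id, args, expected in test_cases:
    actual = add(*args)
    outcome = "PASS" if actual == expected else "FAIL"  # a FAIL would be reported as an incident
    log.append({
        "timestamp": datetime.datetime.now().isoformat(),
        "version": SOFTWARE_VERSION,
        "case": case_id,
        "expected": expected,
        "actual": actual,
        "outcome": outcome,
    })

for entry in log:
    print(entry)

# After a fix, re-running the failed cases is confirmation testing;
# re-running the whole set again is a (very small) regression test.
```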
