- Differentiate between a test design specification, a test case specification and a test procedure specification. (K1)
- Compare the terms test condition, test case and test procedure. (K2)
- Write test cases: showing a clear traceability to the requirements; containing an expected result. (K3)
- Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers. (K3)
- Write a test execution schedule for a given set of test cases, considering prioritization, and technical and logical dependencies. (K3)
4.1.1 Introduction
Before we can actually execute a test, we need to know what we are trying to test, the inputs, the results that should be produced by those inputs, and how we actually get ready for and run the tests.
In this section we are looking at three things: test conditions, test cases and test procedures (or scripts) – they are described in the sections below. Each is specified in its own document, according to the Test Documentation Standard [IEEE829].
Test conditions are documented in a Test Design Specification. We will look at how to choose test conditions and prioritize them.
Test cases are documented in a Test Case Specification. We will look at how to write a good test case, showing clear traceability to the test basis (e.g., the requirement specification) as well as to test conditions.
Test procedures are documented (as you may expect) in a Test Procedure Specification (also known as a test script or a manual test script). We will look at how to translate test cases into test procedures relevant to the knowledge of the tester who will be executing the test, and we will look at how to produce a test execution schedule, using prioritization and technical and logical dependencies.
In this section, look for the definitions of the glossary terms: test case, test case specification, test condition, test data, test procedure specification, test script and traceability.
4.1.2 Formality of test documentation
Testing may be performed with varying degrees of formality. Very formal testing would have extensive documentation, which is well controlled, and would expect the documented detail of the tests to include the exact and specific input and expected outcome of the test. Very informal testing may have no documentation at all, or only notes kept by individual testers, but we’d still expect the testers to have in their minds and notes some idea of what they intended to test and what they expected the outcome to be. Most people are probably somewhere in between! The right level of formality for you depends on your context: a commercial safety-critical application has very different needs than a one-off application to be used by only a few people for a short time.
The level of formality is also influenced by your organization – its culture, the people working there, how mature the development process is, how mature the testing process is, etc. The thoroughness of your test documentation may also depend on your time constraints; under excessive deadline pressure, keeping good documentation may be compromised.
In this chapter we will describe a fairly formal documentation style. If this is not appropriate for you, you can adopt a less formal approach, but you will then know how to increase formality if you need to.
4.1.3 Test analysis: identifying test conditions
Test analysis is the process of looking at something that can be used to derive test information. This basis for the tests is called the “test basis”. It could be a system requirement, a technical specification, the code itself (for structural testing), or a business process. Sometimes tests can be based on an experienced user’s knowledge of the system, which may not be documented. The test basis includes whatever the tests are based on. This was also discussed in Chapter 1.
From a testing perspective, we look at the test basis in order to see what could be tested – these are the test conditions. A test condition is simply something that we could test. If we are looking to measure coverage of code decisions (branches), then the test basis would be the code itself, and the list of test conditions would be the decision outcomes (True and False). If we have a requirements specification, the table of contents can be our initial list of test conditions.
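To make this concrete, consider a code-based test basis. The following is a minimal sketch, assuming an invented `apply_discount` function with a 100-unit threshold; its single decision yields exactly two test conditions, the True outcome and the False outcome, so one test per outcome gives full decision coverage of this function.

```python
def apply_discount(order_total: float) -> float:
    """Hypothetical function under test: orders of 100 or more get 10 off."""
    if order_total >= 100:        # one decision, two outcomes = two test conditions
        return order_total - 10.0  # test condition 1: decision outcome True
    return order_total             # test condition 2: decision outcome False


# One test per decision outcome achieves 100% decision (branch) coverage here.
def test_discount_applied():       # exercises the True outcome
    assert apply_discount(150.0) == 140.0

def test_no_discount():            # exercises the False outcome
    assert apply_discount(50.0) == 50.0
```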
A good way to understand requirements better is to try to define tests to meet those requirements, as pointed out by [Hetzel, 1988].
For example, if we are testing a customer management and marketing system for a mobile phone company, we might have test conditions that are related to a marketing campaign, such as age of customer (pre-teen, teenager, young adult, mature), gender, postcode or zip code, and purchasing preference (pay-as-you-go or contract). A particular advertising campaign could be aimed at male teenage customers in the mid-west of the USA on pay-as-you-go, for example.
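A sketch of how such a list of conditions can be enumerated, using the campaign attributes above (the region values are assumptions added for illustration):

```python
from itertools import product

# Test conditions for the campaign example, one list per customer attribute
# (the region values are illustrative, not from the text).
age_band = ["pre-teen", "teenager", "young adult", "mature"]
gender   = ["male", "female"]
region   = ["mid-west", "north-east", "west-coast"]
plan     = ["pay-as-you-go", "contract"]

# Combining every value of every condition already gives 4 * 2 * 3 * 2 = 48
# combinations, which is why the selection techniques discussed below matter.
all_combinations = list(product(age_band, gender, region, plan))
assert len(all_combinations) == 48

# The campaign described above targets exactly one of these combinations:
assert ("teenager", "male", "mid-west", "pay-as-you-go") in all_combinations
```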
Testing experts use different names for the basic idea of "a list of things that we could test". For example, Marick refers to "test requirements" as things that should be tested; although the term is not intended to imply that everything must be tested, it is too easily interpreted in that way [Marick, 1994]. In contrast, Hutcheson talks about the "test inventory" as a list of things that could be tested [Hutcheson, 2003]; Craig talks about "test objectives" as broad categories of things to test and "test inventories" as the actual list of things that need to be tested [Craig, 2002]. These authors are all referring to what the ISTQB glossary calls a test condition.
When identifying test conditions, we want to “throw the net wide” to identify as many as we can, and then we will start being selective about which ones to take forward to develop in more detail and combine into test cases. We could call them “test possibilities”.
In Chapter 1 we explained that testing everything is known as exhaustive testing (defined as exercising every combination of inputs and preconditions) and we demonstrated that this is an impractical goal. Therefore, as we cannot test everything, we have to select a subset of all possible tests. In practice the subset we select may be a very small subset and yet it has to have a high probability of finding most of the defects in a system. We need some intelligent thought processes to guide our selection; test techniques (i.e. test design techniques) are such thought processes.
A testing technique helps us select a good set of tests from the total number of all possible tests for a given system. Different techniques offer different ways of looking at the software under test, possibly challenging assumptions made about it. Each technique provides a set of rules or guidelines for the tester to follow in identifying test conditions and test cases. Techniques are described in detail later in this chapter.
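As a foretaste of those techniques, here is a minimal sketch of one of them, equivalence partitioning, applied to the customer-age attribute from the earlier example; the partition boundaries are invented, not taken from any specification.

```python
# Equivalence partitioning sketch: rather than test every possible age,
# pick one representative value per partition (boundaries are invented).
partitions = {
    "invalid (negative)": range(-100, 0),
    "pre-teen":           range(0, 13),
    "teenager":           range(13, 20),
    "young adult":        range(20, 40),
    "mature":             range(40, 121),
    "invalid (too high)": range(121, 200),
}

# One representative per partition: six tests instead of hundreds.
representatives = {name: ages[0] for name, ages in partitions.items()}
print(representatives)
# {'invalid (negative)': -100, 'pre-teen': 0, 'teenager': 13, ...}
```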
The test conditions that are chosen will depend on the test strategy or detailed test approach. For example, they might be based on risk, models of the system, likely failures, compliance requirements, expert advice or heuristics.
The word "heuristic" comes from the same Greek root as eureka, which means "I have found (it)". A heuristic is a way of directing your attention, a common-sense rule useful in solving a problem.
Test conditions should be able to be linked back to their sources in the test basis – this is called traceability.
Traceability can be either horizontal through all the test documentation for a given test level (e.g. system testing, from test conditions through test cases to test scripts) or vertical through the layers of development documentation (e.g., from requirements to components).
Why is traceability important? Consider these examples:
- The requirements for a given function or feature have changed. Some of the fields now have different ranges that can be entered. Which tests were looking at those boundaries? They now need to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if the requirements can easily be traced to the tests.
- A set of tests that has run OK in the past has started to have serious problems. What functionality do these tests actually exercise? Traceability between the tests and the requirement being tested enables the functions or features affected to be identified more easily.
- Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed – was every requirement tested?
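All three questions become simple look-ups once traceability is recorded. A minimal sketch, assuming invented requirement and test case IDs:

```python
# Traceability recorded as requirement -> test case IDs (all IDs invented).
traceability = {
    "REQ-012": ["TC-034", "TC-035"],
    "REQ-013": ["TC-036"],
    "REQ-014": [],                    # a requirement with no tests yet
}
passed = {"TC-034", "TC-036"}

# "Which tests are affected if REQ-012 changes?" is a direct look-up:
print(traceability["REQ-012"])        # ['TC-034', 'TC-035']

# "Was every requirement exercised by at least one passing test?"
not_covered = [req for req, tests in traceability.items()
               if not any(t in passed for t in tests)]
print(not_covered)                    # ['REQ-014']
```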
Having identified a list of test conditions, it is important to prioritize them, so that the most important ones are identified before a lot of time is spent designing test cases based on them. It is a good idea to try to think of twice as many test conditions as you need – then you can throw away the less important ones, and you will have a much better set of test conditions!
Note that identifying these extra test conditions now doesn't take very long, as we are only listing things that we could test. This is a good investment of our time – we don't want to spend time implementing poor tests!
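Picking up the "twice as many, then keep the best" idea, here is a hedged sketch in which candidate conditions are given an importance score and only the top half goes forward to test case design; both the conditions and the scores are invented for illustration.

```python
# Candidate test conditions with an importance score, 1 (low) to 10 (high);
# conditions and scores alike are invented for illustration.
candidates = {
    "age at partition boundaries":      9,
    "invalid postcode format":          8,
    "campaign matching zero customers": 7,
    "switch from contract to PAYG":     6,
    "default value of gender field":    4,
    "mixed-case postcode":              3,
}

# Keep only the most important half for detailed test case design.
ranked = sorted(candidates, key=candidates.get, reverse=True)
shortlist = ranked[: len(ranked) // 2]
print(shortlist)   # the three highest-scoring conditions go forward
```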
Test conditions can be identified for test data as well as for test inputs and test outcomes, for example, different types of record, different distribution of types of record within a file or database, different sizes of records or fields in a record. The test data should be designed to represent the most important types of data, i.e., the most important data conditions.
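A brief sketch of such data-side conditions (the field names, the 50-character limit and the 95:5 mix are all assumptions, not from any specification); each constructed record or file represents one data condition the prepared test data should cover.

```python
import random

def make_customer(name_length: int) -> dict:
    """Build a customer record whose name field has a given length."""
    return {"type": "customer", "name": "x" * name_length}

# One record per size condition: empty, typical, and maximum-length name
# (the 50-character limit is an assumption).
size_conditions = [make_customer(n) for n in (0, 20, 50)]

# The distribution of record types within a file is itself a condition:
# here, an assumed 95:5 mix of current to archived customer records.
mixed_file = [make_customer(20) for _ in range(95)]
mixed_file += [{"type": "archived", "name": "x" * 20} for _ in range(5)]
random.shuffle(mixed_file)
```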
Test conditions are documented in the IEEE 829 document called a Test Design Specification, shown below. (This document could have been called a Test Condition Specification, as the contents referred to in the standard are actually test conditions.)