Test harness/unit test framework tools (D)
These two types of tool are grouped together because they are variants of the type of support needed by developers when testing individual components or units of software. A test harness provides stubs and drivers, which are small programs that interact with the software under test (e.g., for testing middleware and embedded software). See Chapter 2 for more detail on how these are used in integration testing. Some unit test framework tools support object-oriented software, others support other development paradigms. Unit test frameworks can be used in agile development to automate tests in parallel with development. Both types of tool enable the developer to test, identify and localize any defects. The framework or the stubs and drivers supply any information needed by the software being tested (e.g., an input that would have come from a user) and also receive any information sent by the software (e.g., a value to be displayed on a screen). Stubs may also be referred to as “mock objects”.
Test harnesses or drivers may be developed in-house for particular systems. Advice on designing test drivers can be found in [Hoffman and Strooper, 1995].
There are many “xUnit” tools for different programming languages, e.g. JUnit for Java, NUnit for .NET applications, etc. There are both commercial and open-source (i.e., free) tools. Unit test framework tools are very similar to test execution tools, since they include facilities such as the ability to store test cases and to monitor whether tests pass or fail. The main difference is that there is no capture/playback facility, and they tend to be used at a lower level, i.e. for component or component integration testing, rather than for system or acceptance testing.
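As a sketch of how a unit test framework and a stub fit together, the following JUnit 5 test exercises a hypothetical PriceConverter component. The ExchangeRateService interface and all names are invented for illustration; a lambda acts as the stub (“mock object”) that supplies the input the component would normally get from a real service.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical interface the component under test depends on.
interface ExchangeRateService {
    double rateFor(String currency);
}

// Hypothetical component under test.
class PriceConverter {
    private final ExchangeRateService rates;
    PriceConverter(ExchangeRateService rates) { this.rates = rates; }
    double toEuros(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

class PriceConverterTest {
    @Test
    void convertsUsingStubbedRate() {
        // Stub ("mock object") standing in for the real service:
        // it supplies a fixed exchange rate instead of a live one.
        ExchangeRateService stub = currency -> 0.5;
        PriceConverter converter = new PriceConverter(stub);
        assertEquals(50.0, converter.toEuros(100.0, "USD"), 1e-9);
    }
}
```

The framework runs the test, records the pass/fail result and lets the developer localize any defect to this one component, since the stub removes every other dependency.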
Features or characteristics of test harnesses and unit test framework tools include support for:
- supplying inputs to the software being tested;
- receiving outputs generated by the software being tested;
- executing a set of tests within the framework or using the test harness;
- recording the pass/fail results of each test (framework tools);
- storing tests (framework tools);
- debugging (framework tools);
- coverage measurement at code level (framework tools).
Test comparators
Is it really a test if you put some inputs into some software, but never look to see whether the software produces the correct result? The essence of testing is to check whether the software produces the correct result, and to do that, we must compare what the software produces to what it should produce. A test comparator helps to automate aspects of that comparison.
There are two ways in which the actual results of a test can be compared to the expected results for the test. Dynamic comparison is performed while the test is executing. Post-execution comparison is performed after the test has finished executing and the software under test is no longer running.
Test execution tools include the capability to perform dynamic comparison while the tool is executing a test. This type of comparison is good for checking, say, that the wording of an error message that pops up on a screen matches the expected wording. Dynamic comparison is also useful when an actual result does not match the expected result in the middle of a test: the tool can be programmed to take some recovery action at this point or go to a different set of tests.
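To make the idea concrete, here is a minimal sketch of a dynamic comparison with a recovery action. The Ui interface and the messages are invented stand-ins for whatever API a real test execution tool would expose.

```java
// The check happens while the test is still running, so the script can
// take a recovery action and carry on with the next test.
interface Ui {
    String errorMessage();
    void dismissDialog();
}

public class DynamicComparisonDemo {
    public static void main(String[] args) {
        Ui ui = new Ui() {  // stand-in for the application under test
            public String errorMessage() { return "Inval1d account number"; }
            public void dismissDialog() { System.out.println("Dialog dismissed"); }
        };
        String expected = "Invalid account number";
        String actual = ui.errorMessage();
        if (!actual.equals(expected)) {
            System.out.println("FAIL: expected \"" + expected
                    + "\" but saw \"" + actual + "\"");
            ui.dismissDialog();  // recovery action so later tests can run
        } else {
            System.out.println("PASS");
        }
    }
}
```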
Post-execution comparison is usually best done by a separate tool (i.e., not the test execution tool). This is the type of tool that we mean by a test comparator or test comparison tool, and it is typically a “stand-alone” tool. Operating systems normally have file comparison tools available which can be used for post-execution comparison, and often a comparison tool will be developed in-house for comparing a particular type of file or test result.
Post-execution comparison is best for comparing a large volume of data, for example comparing the contents of an entire file with the expected contents of that file or comparing a large set of records from a database with the expected content of those records. For example, comparing the result of a batch run (e.g., overnight processing of the day’s online transactions) is probably impossible to do without tool support.
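As a sketch of post-execution comparison, the following stand-alone program compares an actual output file with an expected (“golden”) file, masking volatile timestamps before comparing. The file names and masking pattern are invented for illustration; masking is discussed in the feature list below.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal post-execution comparator: reads the actual and expected output
// files, masks volatile fields (here, ISO timestamps), then compares line
// by line and reports every difference.
public class FileComparator {
    private static final String TIMESTAMP =
            "\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}";

    static String mask(String line) {
        return line.replaceAll(TIMESTAMP, "<TIMESTAMP>");
    }

    public static void main(String[] args) throws IOException {
        List<String> actual = Files.readAllLines(Path.of("actual_output.txt"));
        List<String> expected = Files.readAllLines(Path.of("expected_output.txt"));
        int mismatches = 0;
        int lines = Math.max(actual.size(), expected.size());
        for (int i = 0; i < lines; i++) {
            String a = i < actual.size() ? mask(actual.get(i)) : "<missing>";
            String e = i < expected.size() ? mask(expected.get(i)) : "<missing>";
            if (!a.equals(e)) {
                mismatches++;
                System.out.printf("Line %d:%n  expected: %s%n  actual:   %s%n",
                        i + 1, e, a);
            }
        }
        System.out.println(mismatches == 0
                ? "PASS" : "FAIL (" + mismatches + " differences)");
    }
}
```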
Whether a comparison is dynamic or post-execution, the test comparator needs to know what the correct result is. This may be stored as part of the test case itself or it may be computed using a test oracle. See Chapter 4 for information about test oracles.
Features or characteristics of test comparators include support for:
- dynamic comparison of transient events that occur during test execution;
- post-execution comparison of stored data, e.g. in files or databases;
- masking or filtering of subsets of actual and expected results.
Coverage measurement tools (D)
How thoroughly have you tested? Coverage tools can help answer this question.
A coverage tool first identifies the elements or coverage items that can be counted, and then detects when a test has exercised each of those coverage items. At component testing level, the coverage items could be lines of code, code statements or decision outcomes (e.g., the True or False exit from an IF statement). At component integration level, the coverage item may be a call to a function or module. Although coverage can be measured at system or acceptance testing levels, e.g., where the coverage item may be a requirement statement, there aren’t many (if any) commercial tools at this level; there is more tool support at component testing level or, to some extent, at component integration level.
The process of identifying the coverage items at component test level is called “instrumenting the code”, as described in Chapter 4. A suite of tests is then run through the instrumented code, either automatically using a test execution tool or manually. The coverage tool then counts the number of coverage items that have been executed by the test suite, and reports the percentage of coverage items that have been exercised, and may also identify the items that have not yet been exercised (i.e., not yet tested). Additional tests can then be run to increase coverage (the tool reports accumulated coverage of all the tests run so far).
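As a rough illustration of what instrumentation does, the sketch below hand-codes the probes that a real coverage tool would insert automatically. The discount method, probe names and reporting format are all invented for this example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Each decision outcome gets a probe that records when it is exercised.
public class CoverageDemo {
    static final Map<String, Boolean> probes = new LinkedHashMap<>();
    static {
        probes.put("discount:IF-true", false);
        probes.put("discount:IF-false", false);
    }

    static double discount(double total) {
        if (total > 100.0) {
            probes.put("discount:IF-true", true);   // True exit exercised
            return total * 0.9;
        } else {
            probes.put("discount:IF-false", true);  // False exit exercised
            return total;
        }
    }

    public static void main(String[] args) {
        discount(150.0);  // this test exercises only the True outcome
        long hit = probes.values().stream().filter(b -> b).count();
        System.out.printf("Decision coverage: %d/%d (%.0f%%)%n",
                hit, probes.size(), 100.0 * hit / probes.size());
        probes.forEach((item, exercised) -> {
            if (!exercised) System.out.println("Not yet exercised: " + item);
        });
    }
}
```

Here the single test achieves 50% decision coverage, and the report points at the unexercised False outcome, telling the tester exactly which additional test to add.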
The more sophisticated coverage tools can help identify the test inputs that will exercise the paths containing as-yet unexercised coverage items (or link to a test design tool to identify them). For example, if not all decision outcomes have been exercised, the coverage tool can identify the particular decision outcome (e.g., a False exit from an IF statement) that no test has taken so far, and may also be able to calculate the test input required to force execution to take that decision outcome.
Features or characteristics of coverage measurement tools include support for:
- identifying coverage items (instrumenting the code);
- calculating the percentage of coverage items that were exercised by a suite of tests;
- reporting coverage items that have not yet been exercised;
- identifying test inputs to exercise as yet uncovered items (test design tool functionality);
- generating stubs and drivers (if part of a unit test framework).
Note that coverage tools only measure the coverage of the items that they can identify. Just because your tests have achieved 100% statement coverage does not mean that your software is 100% tested!
Security tools
There are a number of tools that protect systems from external attack; firewalls, for example, are important for any system.
Security testing tools can be used to test security by trying to break into a system, whether or not it is protected by a security tool. The attacks may focus on the network, the support software, the application code or the underlying database.
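As one narrow illustration, the sketch below probes a host for open TCP ports, one of the checks listed below. The host, port range and timeout are placeholder values, and such probes should only ever be run against systems you are authorized to test.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal port probe: attempts a TCP connection to each port in turn and
// reports the ones that accept, i.e. the externally visible points of attack.
public class PortProbe {
    public static void main(String[] args) {
        String host = "localhost";           // assumed target
        for (int port = 1; port <= 1024; port++) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 200);
                System.out.println("Port " + port + " is open");
            } catch (IOException e) {
                // connection refused or timed out: port closed or filtered
            }
        }
    }
}
```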
Features or characteristics of security testing tools include support for:
- identifying viruses;
- detecting intrusions such as denial of service attacks;
- simulating various types of external attacks;
- probing for open ports or other externally visible points of attack;
- identifying weaknesses in password files and passwords;
- security checks during operation, e.g., for checking integrity of files, and intrusion detection, e.g., checking results of test attacks.