6.1.5 Tool support for test execution and logging
Test execution tools
When people think of a “testing tool”, it is usually a test execution tool that they have in mind: a tool that can run tests. This type of tool is also referred to as a “test running tool”. Most tools of this type offer a way to get started by capturing or recording manual tests; hence they are also known as “capture/playback”, “capture/replay” or “record/playback” tools. The analogy is with recording a television program and playing it back. However, the tests are not something which is played back just for someone to watch; the tests interact with the system, which may react slightly differently when the tests are repeated. Hence captured tests are not suitable if you want to achieve long-term success with a test execution tool, as described in Section 6.2.3.
Test execution tools use a scripting language to drive the tool. The scripting language is actually a programming language, so any tester who wishes to use a test execution tool directly will need programming skills to create and modify the scripts. The advantage of programmable scripting is that tests can repeat actions (in loops) for different data values (i.e., test inputs), they can take different routes depending on the outcome of a test (e.g., if a test fails, go to a different set of tests), and they can be called from other scripts, giving some structure to the set of tests.
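As an illustration, the sketch below shows what such a programmable script might look like in Python. The `AppDriver` class and its method are stand-ins invented for this example, not the API of any real tool; a genuine driver would interact with the application's GUI.

```python
# Minimal sketch of a programmable test script. The AppDriver class is a
# stand-in for whatever object a test execution tool exposes to drive the
# application under test; its method is an assumption, not a real API.

class AppDriver:
    def enter_order(self, customer: str, quantity: int) -> str:
        # A real driver would type into the GUI; here we fake the response.
        return "OK" if quantity > 0 else "ERROR"

def run_order_tests(app: AppDriver) -> list[tuple[str, str]]:
    """Callable from other scripts, which gives the test set some structure."""
    results = []
    # Repeat the same actions (a loop) over different data values (test inputs).
    for customer, quantity in [("Smith", 2), ("Jones", 0), ("Patel", 5)]:
        outcome = app.enter_order(customer, quantity)
        if outcome != "OK":
            # Take a different route when a step fails: record the failure
            # and carry on, rather than aborting the whole run.
            results.append((customer, "fail"))
            continue
        results.append((customer, "pass"))
    return results

if __name__ == "__main__":
    for customer, verdict in run_order_tests(AppDriver()):
        print(customer, verdict)
```

Because `run_order_tests` is an ordinary function, other scripts can call it, which is exactly what gives a scripted test suite its structure.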
When people first encounter a test execution tool, they tend to start with capture/playback, which sounds really good when you first hear about it. The theory is that while you are running your manual tests, you simply turn on the “capture”, like a video recorder for a television program. However, the theory breaks down when you try to replay the captured tests – this approach does not scale up to large numbers of tests. The main reason is that a captured script is very difficult to maintain because:
- It is closely tied to the flow and interface presented by the GUI.
- It may rely on the circumstances, state and context of the system at the time the script was recorded. For example, a script will capture a new order number assigned by the system when a test is recorded. When that test is played back, the system will assign a different order number and reject subsequent requests that contain the previously captured order number.
- The test input information is “hard-coded”, i.e., it is embedded in the individual script for each test.
All of these problems can be overcome by modifying the scripts, but then we are no longer simply recording and playing back! If it takes more time to update a captured test than it would take to run the same test again manually, the scripts tend to be abandoned and the tool becomes “shelf-ware”.
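To make the maintenance problem concrete, here is a sketch of what a captured test typically looks like once rendered in a scripting language. The `Tool` class is a minimal stand-in invented so that the sketch runs; a real capture/playback tool would generate calls like these automatically.

```python
# Sketch of a captured test. The Tool class only prints what a real
# capture/playback tool would do against the application under test.

class Tool:
    def click(self, window, button):
        print(f"click {button!r} in {window!r}")
    def type_text(self, field, text):
        print(f"type {text!r} into {field!r}")
    def verify_text(self, field, expected):
        # On replay the system assigns a new order number, so comparing
        # against the recorded value fails even though the system is correct.
        print(f"verify {field!r} == {expected!r}")

tool = Tool()
tool.click(window="Order Entry", button="New Order")    # tied to the GUI flow
tool.type_text(field="Customer", text="Smith")          # hard-coded test input
tool.type_text(field="Quantity", text="2")              # hard-coded test input
tool.click(window="Order Entry", button="Submit")
tool.verify_text(field="Order No", expected="000123")   # captured order number
```

All three weaknesses are visible at once: the script mirrors the GUI flow step by step, the test inputs are embedded in it, and it checks for an order number that will never be assigned again.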
There are better ways to use test execution tools to make them work well and actually deliver the benefits of unattended automated test running. There are at least five levels of scripting and also different comparison techniques. Data-driven scripting is an advance over captured scripts, but keyword-driven scripts give significantly more benefits [Fewster and Graham, 1999], [Buwalda et al., 2001]. [Mosley and Posey, 2002] describe “control synchronized data-driven testing”. See also Section 6.2.3.
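The following sketch illustrates the keyword-driven idea under some assumptions: the keyword names, the table format and the interpreter are all invented for this example. In practice the keyword table would live in a spreadsheet maintained by testers, while each keyword's implementation is maintained by an automation specialist.

```python
# Minimal sketch of keyword-driven scripting: a table of keywords plus
# a small interpreter that dispatches each keyword to an implementation.

keyword_table = [
    ("create_order", {"customer": "Smith", "quantity": "2"}),
    ("check_status", {"expected": "CONFIRMED"}),
    ("cancel_order", {}),
]

def create_order(state, customer, quantity):
    state["status"] = "CONFIRMED"   # a real action would drive the GUI

def check_status(state, expected):
    assert state.get("status") == expected, f"expected {expected}"

def cancel_order(state):
    state["status"] = "CANCELLED"

ACTIONS = {"create_order": create_order,
           "check_status": check_status,
           "cancel_order": cancel_order}

def run(table):
    state = {}
    for keyword, args in table:
        ACTIONS[keyword](state, **args)   # interpreter executes each row
        print(f"{keyword} ok")

run(keyword_table)
```

The benefit is separation of concerns: testers add or reorder rows in the table without touching code, and a change to the GUI is absorbed by editing one keyword implementation rather than every script.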
There are many different ways to use a test execution tool and the tools themselves are continuing to gain new useful features. For example, a test execution tool can help to identify the input fields which will form test inputs and may construct a table which is the first step towards data-driven scripting.
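A minimal sketch of the resulting data-driven scripting follows, assuming the tool has constructed a table of input fields like the CSV shown; the column names and values are made up for illustration. Unlike the first sketch above, the test inputs now live outside the script, so a new test case is a new data row rather than new code.

```python
# Sketch of data-driven scripting: one control script, many data rows.
# The embedded CSV imitates the table of input fields a tool might build.

import csv
import io

data_file = io.StringIO(
    "customer,quantity,expected\n"
    "Smith,2,OK\n"
    "Jones,0,ERROR\n"
    "Patel,5,OK\n"
)

def enter_order(customer, quantity):
    """Stand-in for driving the application under test."""
    return "OK" if int(quantity) > 0 else "ERROR"

# The same script runs once per data row.
for row in csv.DictReader(data_file):
    actual = enter_order(row["customer"], row["quantity"])
    verdict = "pass" if actual == row["expected"] else "fail"
    print(row["customer"], verdict)
```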
Although they are commonly referred to as testing tools, they are actually best used for regression testing (so they could be referred to as “regression testing tools” rather than “testing tools”). A test execution tool most often runs tests that have already been run before. One of the most significant benefits of using this type of tool is that whenever an existing system is changed (e.g., for a defect fix or an enhancement), all of the tests that were run earlier could potentially be run again, to make sure that the changes have not disturbed the existing system by introducing or revealing a defect.
Features or characteristics of test execution tools include support for:
- capturing (recording) test inputs while tests are executed manually;
- storing an expected result in the form of a screen or object to compare to the next time the test is run;
- executing tests from stored scripts and optionally data files accessed by the script (if data-driven or keyword-driven scripting is used);
- dynamic comparison (while the test is running) of screens, elements, links, controls, objects and values;
- initiating post-execution comparison;
- logging results of tests run (pass/fail, differences between expected and actual results);
- masking or filtering of subsets of actual and expected results, for example excluding the screen-displayed current date and time, which is not of interest to a particular test (see the sketch after this list);
- measuring timings for tests;
- synchronizing inputs with the application under test, e.g. wait until the application is ready to accept the next input, or insert a fixed delay to represent human interaction speed;
- sending summary results to a test management tool.
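Several of these features can be seen together in the short sketch below, which masks a volatile field (the displayed time of day) before a dynamic comparison, measures the test's elapsed time and logs a pass/fail verdict. The screen text and the mask pattern are invented for this example.

```python
# Sketch of comparison with masking, timing and result logging.

import re
import time

MASK = re.compile(r"\d{2}:\d{2}:\d{2}")   # filter out the displayed time

def compare(actual, expected):
    """Compare screens with the time-of-day field masked out."""
    return MASK.sub("<TIME>", actual) == MASK.sub("<TIME>", expected)

expected_screen = "Order 0042 accepted at 09:15:27"

start = time.monotonic()
# A real tool would obtain this from the application under test.
actual_screen = "Order 0042 accepted at 14:02:51"
elapsed = time.monotonic() - start

if compare(actual_screen, expected_screen):
    print(f"PASS ({elapsed:.3f}s)")
else:
    # Log the difference between expected and actual results.
    print(f"FAIL: expected {expected_screen!r}, got {actual_screen!r}")
```

In a real tool the masking would be configured per test rather than hard-coded, but the principle is the same: exclude what is not of interest before comparing, and record enough in the log to diagnose a failure.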