6.1 Test Tool Considerations
Test tools can be used to support one or more testing activities. Such tools include:
- Tools that are directly used in testing, such as test execution tools and test data preparation tools
- Tools that help to manage requirements, test cases, test procedures, automated test scripts, test results, test data, and defects, and that support reporting on and monitoring of test execution
- Tools that are used for analysis and evaluation
- Any tool that assists in testing (in this sense, a spreadsheet is also a test tool)
6.1.1 Test Tool Classification
Test tools can have one or more of the following purposes depending on the context:
- Improve the efficiency of test activities by automating repetitive tasks or tasks that require significant resources when done manually (e.g., test execution, regression testing)
- Improve the efficiency of test activities by supporting manual test activities throughout the test process (see section 1.4)
- Improve the quality of test activities by allowing for more consistent testing and a higher level of defect reproducibility
- Automate activities that cannot be executed manually (e.g., large scale performance testing)
- Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)
Tools can be classified based on several criteria such as purpose, pricing, licensing model (e.g., commercial or open source), and technology used. Tools are classified in this syllabus according to the test activities that they support.
Some tools clearly support only or mainly one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be provided as an integrated suite.
Some types of test tools can be intrusive, which means that they may affect the actual outcome of the test. For example, the actual response times for an application may be different due to the extra instructions that are executed by a performance testing tool, or the amount of code coverage achieved may be distorted due to the use of a coverage tool. The consequence of using intrusive tools is called the probe effect.
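As a minimal illustration of the probe effect, the sketch below (Python, with invented function names) wraps an operation under test in timing instrumentation; the measured response time inevitably includes the wrapper's own overhead, just as a performance or coverage tool adds instructions to the system it observes.

```python
import time

def timed(func):
    # Wrapper that records execution time; the extra bookkeeping it
    # performs is itself a (small) source of the probe effect.
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.6f}s")
        return result
    return wrapper

@timed
def process_order(order_id):
    # Placeholder for the operation under test
    time.sleep(0.01)
    return order_id

process_order(42)  # The reported time includes the wrapper's own overhead
```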
Some tools offer support that is typically more appropriate for developers (e.g., tools that are used during component and integration testing). Such tools are marked with “(D)” in the sections below.
Tool support for management of testing and testware
Management tools may apply to any test activities over the entire software development lifecycle.
Examples of tools that support management of testing and testware include:
- Test management tools and application lifecycle management (ALM) tools
- Requirements management tools (e.g., traceability to test objects)
- Defect management tools
- Configuration management tools
- Continuous integration tools (D)
Tool support for static testing
Static testing tools are associated with the activities and benefits described in chapter 3. Examples of such tools include:
- Static analysis tools (D)
Tool support for test design and implementation
Test design tools aid in the creation of maintainable work products in test design and implementation, including test cases, test procedures and test data. Examples of such tools include:
- Model-Based testing tools
- Test data preparation tools
In some cases, tools that support test design and implementation may also support test execution and logging, or provide their outputs directly to other tools that support test execution and logging.
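As an illustration of the kind of output a test data preparation step produces, the following sketch (Python, with invented field names and values) expands value lists into concrete input rows and writes them to a CSV file that a data-driven test script could later read.

```python
import csv
import itertools

# Illustrative only: a tiny "test data preparation" step that expands
# value lists into concrete input rows, of the kind a dedicated tool
# (or a spreadsheet) would otherwise produce. Field names are invented.
browsers = ["Chrome", "Firefox"]
locales = ["en-GB", "de-DE"]
payment_methods = ["card", "invoice"]

with open("test_data.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(["browser", "locale", "payment_method"])
    for row in itertools.product(browsers, locales, payment_methods):
        writer.writerow(row)
```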
Tool support for test execution and logging
Many tools exist to support and enhance test execution and logging activities. Examples of these tools include:
- Test execution tools (e.g., to run regression tests)
- Coverage tools (e.g., requirements coverage, code coverage (D))
- Test harnesses (D)
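To illustrate the idea of a test harness, the following sketch (using Python's unittest and unittest.mock, with invented names) exercises a component whose real dependency is unavailable in the test environment by supplying a stub in its place.

```python
import unittest
from unittest.mock import Mock

# Component under test: depends on a payment gateway that is not
# available in the test environment, so the harness supplies a stub.
def charge_customer(gateway, amount):
    response = gateway.charge(amount)
    return response["status"] == "ok"

class ChargeCustomerTest(unittest.TestCase):
    def test_successful_charge(self):
        gateway = Mock()
        gateway.charge.return_value = {"status": "ok"}
        self.assertTrue(charge_customer(gateway, 100))

    def test_failed_charge(self):
        gateway = Mock()
        gateway.charge.return_value = {"status": "declined"}
        self.assertFalse(charge_customer(gateway, 100))

if __name__ == "__main__":
    unittest.main()
```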
Tool support for performance measurement and dynamic analysis
Performance measurement and dynamic analysis tools are essential in supporting performance and load testing activities, as these activities cannot effectively be done manually. Examples of these tools include:
- Performance testing tools
- Dynamic analysis tools (D)
Tool support for specialized testing needs
In addition to tools that support the general test process, there are many other tools that support more specific testing for non-functional characteristics.
6.1.2 Benefits and Risks of Test Automation
Simply acquiring a tool does not guarantee success. Each new tool introduced into an organization will require effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks. This is particularly true of test execution tools (the use of which is often referred to as test automation).
Potential benefits of using tools to support test execution include:
- Reduction in repetitive manual work (e.g., running regression tests, environment set up/tear down tasks, re-entering the same test data, and checking against coding standards), thus saving time
- Greater consistency and repeatability (e.g., test data is created in a coherent manner, tests are executed by a tool in the same order with the same frequency, and tests are consistently derived from requirements)
- More objective assessment (e.g., static measures, coverage)
- Easier access to information about testing (e.g., statistics and graphs about test progress, defect rates and performance)
Potential risks of using tools to support testing include:
- Expectations for the tool may be unrealistic (including functionality and ease of use)
- The time, cost and effort for the initial introduction of a tool may be underestimated (including training and external expertise)
- The time and effort needed to achieve significant and continuing benefits from the tool may be underestimated (including the need for changes in the test process and continuous improvement in the way the tool is used)
- The effort required to maintain the test work products generated by the tool may be underestimated
- The tool may be relied on too much (seen as a replacement for test design or execution, or the use of automated testing where manual testing would be better)
- Version control of test work products may be neglected
- Relationships and interoperability issues between critical tools may be neglected (e.g., requirements management tools, configuration management tools, defect management tools, and tools from multiple vendors)
- The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor
- The vendor may provide a poor response for support, upgrades, and defect fixes
- An open source project may be suspended
- A new platform or technology may not be supported by the tool
- There may be no clear ownership of the tool (e.g., for mentoring, updates, etc.)
6.1.3 Special Considerations for Test Execution and Test Management Tools
To achieve a smooth and successful implementation, a number of factors should be considered when selecting and integrating test execution and test management tools into an organization.
Test execution tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.
- Capturing test approach: Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur, and may require ongoing maintenance as the system’s user interface evolves over time.
- Data-driven test approach: This test approach separates out the test inputs and expected results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test with different data.
- Keyword-driven test approach: In this test approach, a generic script processes keywords describing the actions to be taken (also called action words); the generic script then calls keyword scripts to process the associated test data.
The above approaches require someone to have expertise in the scripting language (testers, developers or specialists in test automation). When using data-driven or keyword-driven test approaches, testers who are not familiar with the scripting language can also contribute by creating test data and/or keywords for these predefined scripts. Regardless of the scripting technique used, the expected results for each test need to be compared to the actual results from the test, either dynamically (while the test is running) or stored for later (post-execution) comparison.
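The following sketch (illustrative Python, with invented keywords and data) combines the two ideas: a generic runner maps keywords to functions, and the test data lives in a separate table that testers could maintain without scripting knowledge.

```python
# A minimal keyword-driven sketch (illustrative only): a generic runner
# maps keywords (action words) to small Python functions, while the test
# data is kept in a separate table, as in a data-driven approach.
def open_app(name):
    print(f"Opening {name}")

def enter_text(field, value):
    print(f"Entering '{value}' into {field}")

def check_title(expected):
    print(f"Checking that the title is '{expected}'")

KEYWORDS = {
    "open_app": open_app,
    "enter_text": enter_text,
    "check_title": check_title,
}

# Test data: each row is a keyword plus its arguments, and could just as
# well be read from a spreadsheet or CSV file maintained by testers.
TEST_TABLE = [
    ("open_app", ["WebShop"]),
    ("enter_text", ["search_box", "red shoes"]),
    ("check_title", ["Search results"]),
]

for keyword, args in TEST_TABLE:
    KEYWORDS[keyword](*args)
```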
Further details and examples of data-driven and keyword-driven testing approaches are given in ISTQB-CTAL-TAE, Fewster 1999, and Buwalda 2001.
Model-Based testing (MBT) tools enable a functional specification to be captured in the form of a model, such as an activity diagram. This task is generally performed by a system designer. The MBT tool interprets the model in order to create test case specifications, which can then be saved in a test management tool and/or executed by a test execution tool (see ISTQB-CTFL-MBT).
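As a simplified illustration only (not how any particular MBT tool works), the sketch below derives candidate test sequences by walking a small, invented state model of a login dialog; a real MBT tool would interpret a richer model and apply defined coverage criteria.

```python
# A toy illustration of model-based test derivation: test sequences are
# generated by walking a simple state model. The states, actions, and
# coverage rule (all paths up to a fixed depth) are invented for the example.
TRANSITIONS = {
    "LoggedOut": {"enter_valid_credentials": "LoggedIn",
                  "enter_invalid_credentials": "Error"},
    "Error": {"retry": "LoggedOut"},
    "LoggedIn": {"log_out": "LoggedOut"},
}

def derive_test_sequences(start, max_depth):
    """Enumerate action sequences up to max_depth as candidate test cases."""
    sequences = []
    frontier = [(start, [])]
    for _ in range(max_depth):
        next_frontier = []
        for state, actions in frontier:
            for action, target in TRANSITIONS.get(state, {}).items():
                sequence = actions + [action]
                sequences.append(sequence)
                next_frontier.append((target, sequence))
        frontier = next_frontier
    return sequences

for test_case in derive_test_sequences("LoggedOut", max_depth=2):
    print(" -> ".join(test_case))
```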
Test management tools
Test management tools often need to interface with other tools or spreadsheets for various reasons, including:
- To produce useful information in a format that fits the needs of the organization
- To maintain consistent traceability to requirements in a requirements management tool
- To link with test object version information in the configuration management tool
This is particularly important to consider when using an integrated tool (e.g., Application Lifecycle Management), which includes a test management module, as well as other modules (e.g., project schedule and budget information) that are used by different groups within an organization.
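As a small illustration of the traceability such integration supports, the sketch below (Python, with invented requirement and test case identifiers) checks which requirements are covered by which test cases, the kind of report an exchange between a test management tool and a requirements management tool typically enables.

```python
# Illustrative only: a tiny traceability check of the kind that linking a
# test management tool to a requirements management tool makes possible.
# Requirement IDs, test case names, and results are invented for the example.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

test_cases = {
    "TC-login-valid": {"covers": {"REQ-001"}, "result": "pass"},
    "TC-login-invalid": {"covers": {"REQ-001", "REQ-002"}, "result": "fail"},
}

covered = set().union(*(tc["covers"] for tc in test_cases.values()))
uncovered = requirements - covered

print("Requirements with no linked test case:", sorted(uncovered))
for req in sorted(covered):
    linked = [name for name, tc in test_cases.items() if req in tc["covers"]]
    print(req, "->", ", ".join(linked))
```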