2.3 Test Types
A test type is a group of test activities aimed at testing specific characteristics of a software system, or a part of a system, based on specific test objectives. Such objectives may include:
- Evaluating functional quality characteristics, such as completeness, correctness, and appropriateness
- Evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability
- Evaluating whether the structure or architecture of the component or system is correct, complete, and as specified
- Evaluating the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for unintended changes in behavior resulting from software or environment changes (regression testing)
2.3.1 Functional Testing
Functional testing of a system involves tests that evaluate functions that the system should perform. Functional requirements may be described in work products such as business requirements specifications, epics, user stories, use cases, or functional specifications, or they may be undocumented.
The functions are “what” the system should do. Functional tests should be performed at all test levels (e.g., tests for components may be based on a component specification), though the focus is different at each level (see section 2.2).
Functional testing considers the behavior of the software, so black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system (see section 4.2).
The thoroughness of functional testing can be measured through functional coverage. Functional coverage is the extent to which some functionality has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and functional requirements, the percentage of these requirements which are addressed by testing can be calculated, potentially identifying coverage gaps.
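As a simple illustration of such a calculation, the following sketch (Python; the requirement identifiers, test names, and traceability data are invented) computes functional coverage as the percentage of requirements that at least one test traces to:

```python
# Minimal sketch: functional coverage derived from a hypothetical traceability
# matrix. Requirement IDs and test names are invented for illustration.
traceability = {
    "REQ-001": ["test_login_valid", "test_login_invalid"],
    "REQ-002": ["test_transfer_within_limit"],
    "REQ-003": [],  # no tests trace to this requirement -> coverage gap
}

covered = [req for req, tests in traceability.items() if tests]
functional_coverage = 100.0 * len(covered) / len(traceability)

print(f"Functional coverage: {functional_coverage:.0f}%")  # Functional coverage: 67%
print("Coverage gaps:", [req for req, tests in traceability.items() if not tests])
```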
Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves (e.g., geological modelling software for the oil and gas industries).
2.3.2 Non-functional Testing
Non-functional testing of a system evaluates characteristics of systems and software such as usability, performance efficiency, or security. Refer to the ISO/IEC 25010 standard for a classification of software product quality characteristics. Non-functional testing is the testing of “how well” the system behaves.
Contrary to common misperceptions, non-functional testing can and often should be performed at all test levels, and done as early as possible. The late discovery of non-functional defects can be extremely dangerous to the success of a project.
Black-box techniques (see section 4.2) may be used to derive test conditions and test cases for non-functional testing. For example, boundary value analysis can be used to define the stress conditions for performance tests.
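For instance, if the specification states a maximum number of concurrent users, boundary value analysis suggests exercising the system just below, at, and just above that limit. The following sketch illustrates the idea in Python; the capacity figure and the run_load_test helper are hypothetical placeholders, not part of any real tool:

```python
# Minimal sketch: deriving load levels for a performance test from a specified
# capacity limit using boundary value analysis. The capacity figure and the
# run_load_test helper are hypothetical.
MAX_CONCURRENT_USERS = 500  # assumed specified capacity of the system

def boundary_load_levels(limit: int) -> list[int]:
    """Return load levels just below, at, and just above the specified limit."""
    return [limit - 1, limit, limit + 1]

def run_load_test(concurrent_users: int) -> None:
    """Placeholder for driving the system under the given load."""
    print(f"Running load test with {concurrent_users} concurrent users")

for users in boundary_load_levels(MAX_CONCURRENT_USERS):
    run_load_test(users)
```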
The thoroughness of non-functional testing can be measured through non-functional coverage. Non-functional coverage is the extent to which some type of non-functional element has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and supported devices for a mobile application, the percentage of devices which are addressed by compatibility testing can be calculated, potentially identifying coverage gaps.
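One way to make such traceability explicit is to parameterize compatibility tests over the list of supported devices, so that the executed tests can be compared directly against that list. The sketch below assumes pytest is used; the device names and the launch_app_on helper are invented:

```python
# Minimal sketch: compatibility tests parameterized over the supported devices.
# The device names and the launch_app_on helper are invented.
import pytest

SUPPORTED_DEVICES = ["Pixel 8", "Galaxy S23", "iPhone 15"]

def launch_app_on(device: str) -> bool:
    """Placeholder for starting the application on the given device or emulator."""
    return True

@pytest.mark.parametrize("device", SUPPORTED_DEVICES)
def test_app_launches(device):
    # One test per supported device, so compatibility coverage can be read off
    # the executed parameterizations.
    assert launch_app_on(device)
```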
Non-functional test design and execution may involve special skills or knowledge, such as knowledge of the inherent weaknesses of a design or technology (e.g., security vulnerabilities associated with particular programming languages) or the particular user base (e.g., the personas of users of healthcare facility management systems).
Refer to ISTQB-CTAL-TA, ISTQB-CTAL-TTA, ISTQB-CTAL-SEC, and other ISTQB® specialist modules for more details regarding the testing of non-functional quality characteristics.
2.3.3 White-box Testing
White-box testing derives tests based on the system’s internal structure or implementation. Internal structure may include code, architecture, workflows, and/or data flows within the system (see section 4.3).
The thoroughness of white-box testing can be measured through structural coverage. Structural coverage is the extent to which some type of structural element has been exercised by tests, and is expressed as a percentage of the type of element being covered.
At the component testing level, structural coverage is typically based on the percentage of component code that has been exercised by tests, and may be measured in terms of different aspects of code (coverage items), such as the percentage of executable statements tested in the component or the percentage of decision outcomes tested; these types of coverage are collectively called code coverage. At the component integration testing level, white-box testing may be based on the architecture of the system, such as interfaces between components, and structural coverage may be measured in terms of the percentage of interfaces exercised by tests.
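The difference between these coverage items can be seen on a very small, invented example. In the following sketch (Python), the first test alone executes every statement but only one of the two outcomes of the decision, giving 100% statement coverage but only 50% decision coverage; the second test is needed to cover the remaining outcome:

```python
# Minimal sketch: statement coverage vs. decision coverage on an invented function.
def transfer_fee_cents(is_premium: bool) -> int:
    fee_cents = 200          # standard flat fee
    if is_premium:           # this decision has two outcomes: True and False
        fee_cents = 0
    return fee_cents

def test_premium_transfer_is_free():
    # Executes every statement (100% statement coverage) but only the True
    # outcome of the decision, so decision coverage is 50%.
    assert transfer_fee_cents(is_premium=True) == 0

def test_standard_transfer_fee():
    # Exercises the False outcome as well, bringing decision coverage to 100%.
    assert transfer_fee_cents(is_premium=False) == 200
```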
White-box test design and execution may involve special skills or knowledge, such as the way the code is built, how data is stored (e.g., to evaluate possible database queries), and how to use coverage tools and correctly interpret their results.
2.3.4 Change-related Testing
When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.
- Confirmation testing: After a defect is fixed, all test cases that failed due to the defect may be re-executed on the new software version. The software may also be tested with new tests to cover changes needed to fix the defect. At the very least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new software version. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed (a minimal example is sketched below, after the description of regression testing).
- Regression testing: It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behavior of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Such unintended side-effects are called regressions.
Regression testing involves running tests to detect such unintended side-effects. Confirmation testing and regression testing are performed at all test levels.
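As a minimal illustration (Python; the function, defect number, and scenario are invented), a test derived from a defect report fails on the defective version, passes once the fix is in place (confirmation testing), and then remains in the suite to help detect regressions:

```python
# Minimal sketch: a confirmation test for an invented defect report.
# Defect 1234 (hypothetical): the monthly fee was charged to student accounts,
# which should be exempt.
def monthly_fee_cents(balance_cents: int, is_student: bool) -> int:
    if is_student:           # fixed behavior: student accounts are exempt
        return 0
    return 500 if balance_cents < 100_000 else 0

def test_defect_1234_student_fee_exemption():
    # Re-executes the failing scenario from the defect report: fails on the
    # defective version, passes once the fix is in place, and then remains in
    # the suite as a regression test.
    assert monthly_fee_cents(balance_cents=50_000, is_student=True) == 0
```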
Especially in iterative and incremental development lifecycles (e.g., Agile), new features, changes to existing features, and code refactoring result in frequent changes to the code, which also requires change-related testing. Due to the evolving nature of the system, confirmation and regression testing are very important. This is particularly relevant for Internet of Things systems where individual objects (e.g., devices) are frequently updated or replaced.
Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation. Automation of these tests should start early in the project (see chapter 6).
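One common way to support such automation, sketched below under the assumption that pytest is used, is to tag regression tests with a project-specific marker so that the continuous integration pipeline can select and run them on every change (e.g., with pytest -m regression):

```python
# Minimal sketch: tagging regression tests for automated selection with pytest.
# The "regression" marker is a project-specific convention that would be
# registered in the pytest configuration; it is not a built-in pytest feature.
import pytest

def calculate_interest(balance: float, annual_rate: float) -> float:
    """Invented component under test: simple yearly interest."""
    return balance * annual_rate

@pytest.mark.regression
def test_interest_is_never_negative_for_zero_balance():
    # Selected and run by the CI pipeline on every change: pytest -m regression
    assert calculate_interest(balance=0.0, annual_rate=0.05) >= 0.0
```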
2.3.5 Test Types and Test Levels
It is possible to perform any of the test types mentioned above at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests will be given across all test levels, for a banking application, starting with functional tests:
- For component testing, tests are designed based on how a component should calculate compound interest (a sketch of such a test follows this list).
- For component integration testing, tests are designed based on how account information captured at the user interface is passed to the business logic.
- For system testing, tests are designed based on how account holders can apply for a line of credit on their checking accounts.
- For system integration testing, tests are designed based on how the system uses an external microservice to check an account holder’s credit score.
- For acceptance testing, tests are designed based on how the banker handles approving or declining a credit application.
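The first of these functional tests might, for example, look like the following sketch; the compound interest function and the expected values are invented for illustration:

```python
# Minimal sketch of a functional component test for compound interest.
# The function and the expected values are invented for illustration.
def compound_interest(principal: float, annual_rate: float, years: int) -> float:
    """Return the interest earned with yearly compounding."""
    return principal * (1 + annual_rate) ** years - principal

def test_compound_interest_two_years():
    # 1000 at 10% compounded yearly for 2 years: 1000 * 1.1**2 - 1000 = 210
    assert abs(compound_interest(1000.0, 0.10, 2) - 210.0) < 0.01
```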
The following are examples of non-functional tests:
- For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation (an approximate sketch follows this list).
- For component integration testing, security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic.
- For system testing, portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices.
- For system integration testing, reliability tests are designed to evaluate system robustness if the credit score microservice fails to respond.
- For acceptance testing, usability tests are designed to evaluate the accessibility of the banker’s credit processing interface for people with disabilities.
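The first of these could be approximated as follows; the sketch uses wall-clock time as a stand-in for CPU cycles, and both the calculation and the 50-millisecond budget are invented:

```python
# Minimal sketch of a performance test at the component level. Wall-clock time
# is used as a stand-in for CPU cycles; the calculation and the 50 ms budget
# are invented for illustration.
import time

def total_interest(principal: float, annual_rate: float, months: int) -> float:
    """Invented calculation: accumulate monthly compound interest."""
    balance = principal
    for _ in range(months):
        balance *= 1 + annual_rate / 12
    return balance - principal

def test_total_interest_performance():
    start = time.perf_counter()
    total_interest(10_000.0, 0.05, 360)  # a 30-year monthly schedule
    elapsed = time.perf_counter() - start
    assert elapsed < 0.05  # assumed budget: 50 milliseconds
```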
The following are examples of white-box tests:
- For component testing, tests are designed to achieve complete statement and decision coverage (see section 4.3) for all components that perform financial calculations.
- For component integration testing, tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic.
- For system testing, tests are designed to cover sequences of web pages that can occur during a credit line application.
- For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score microservice.
- For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.
Finally, the following are examples for change-related tests:
- For component testing, automated regression tests are built for each component and included within the continuous integration framework.
- For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.
- For system testing, all tests for a given workflow are re-executed if any screen on that workflow changes.
- For system integration testing, tests of the application interacting with the credit scoring microservice are re-executed daily as part of continuous deployment of that microservice.
- For acceptance testing, all previously-failed tests are re-executed after a defect found in acceptance testing is fixed.
While this section provides examples of every test type across every level, it is not necessary for all software to have every test type represented across every level. However, it is important to run applicable test types at each level, especially the earliest level where the test type occurs.