ISTQB Glossary
105 key software testing terms explained in plain language, mapped to the ISTQB CTFL v4.0 syllabus chapters.
A
Acceptance Criteria
The conditions that a system or feature must satisfy to be accepted by a user, customer, or stakeholder. They provide clear, testable requirements that define when a user story or requirement is considered complete.

Acceptance Testing
A level of testing performed to determine whether a system satisfies the business requirements and whether it is ready for delivery. It is typically carried out by end users or stakeholders to validate the solution against real-world needs.

Ad Hoc Testing
An informal testing approach performed without predefined test cases or a structured plan. Testers rely on their experience and intuition to explore the software and uncover defects that formal methods might miss.

Agile Testing
A testing practice that follows agile development principles, emphasizing continuous testing throughout short iterations. Testers collaborate closely with developers and business stakeholders, and testing is integrated into every sprint.

Alpha Testing
A form of acceptance testing conducted at the development site by internal staff or a selected group of users before the product is released to external testers or the general public.
B
Beta Testing
A form of acceptance testing performed by external users in their own environments after alpha testing. It provides feedback on real-world usage before the final product release.

Black-Box Testing
A testing technique where the internal structure of the system is not considered. Tests are derived from specifications, requirements, or other documentation, focusing solely on inputs and expected outputs.

Boundary Value Analysis
A black-box test design technique where test cases are created using values at the edges of equivalence partitions. Defects often cluster at boundaries, making this an efficient way to find errors with minimal test cases.
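A minimal sketch of boundary value analysis (the age field and its 18-65 range are illustrative assumptions, not from the syllabus): with two-value boundary analysis, each boundary is tested together with its closest invalid neighbour, so four test values stand in for the whole range.

```python
def is_valid_age(age: int) -> bool:
    """Accept ages in the inclusive range 18..65 (illustrative rule)."""
    return 18 <= age <= 65

# Two-value boundary value analysis: each boundary plus its invalid
# neighbour, instead of testing all 48 valid ages one by one.
boundary_cases = {17: False, 18: True, 65: True, 66: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```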
Branch Coverage
A white-box test coverage metric that measures the percentage of branches (decision outcomes) exercised by a test suite. Achieving 100% branch coverage means every outcome of every decision point has been exercised at least once.
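To illustrate the difference between branch coverage and statement coverage (the discount function and its threshold are illustrative assumptions): a single test taking the True branch executes every statement, but full branch coverage also requires a test that takes the False branch.

```python
def apply_discount(total: float) -> float:
    """Apply a 10% discount on orders of 100 or more (illustrative rule)."""
    if total >= 100:
        total *= 0.9
    return total

# apply_discount(150) alone achieves 100% statement coverage.
# Branch coverage additionally requires the False outcome of the
# decision, so a second test skips the discount.
assert apply_discount(150) == 135.0  # True branch
assert apply_discount(50) == 50      # False branch
```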
C
Checklist-Based Testing
An experience-based testing technique where testers use a list of items to be verified or conditions to be checked. Checklists are built from past experience, known risk areas, or quality standards.

Component Integration Testing
Testing that focuses on the interactions and interfaces between integrated components. It verifies that components work together correctly after being individually tested.

Component Testing
The testing of individual software components in isolation from the rest of the system. Also known as unit testing, it verifies that each module performs its intended function correctly.

Confirmation Testing
Re-testing a specific function or area after a defect has been fixed to confirm the fix resolved the issue. It verifies that the original failure no longer occurs under the same conditions.

Coverage
The degree to which specified items (such as code statements, branches, or requirements) have been exercised by a test suite. It is expressed as a percentage and helps assess the thoroughness of testing.
D
Data-Driven Testing
A test automation approach where test input data and expected results are stored externally, such as in spreadsheets or databases. The same test script is executed multiple times with different data sets to increase coverage efficiently.
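A minimal sketch of data-driven testing (the login function and the CSV columns are illustrative assumptions): one script iterates over rows of external data, here inlined as a CSV string to keep the example self-contained, though in practice the rows would come from a spreadsheet or database.

```python
import csv
import io

# Test data that would normally live in an external file: one row
# per test, with inputs and the expected result side by side.
csv_data = """username,password,expected
alice,correct-pw,ok
alice,wrong-pw,denied
,correct-pw,denied
"""

def login(username: str, password: str) -> str:
    """Toy system under test (illustrative)."""
    return "ok" if username == "alice" and password == "correct-pw" else "denied"

# One test script, executed once per data row.
for row in csv.DictReader(io.StringIO(csv_data)):
    assert login(row["username"], row["password"]) == row["expected"]
```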
Decision Coverage
A coverage metric that measures whether each possible outcome of every decision point in the code has been tested. It is equivalent to branch coverage and subsumes statement coverage.

Decision Table Testing
A black-box technique that uses a table to represent combinations of inputs and their corresponding expected outputs or actions. It is especially useful for testing business rules with multiple conditions.
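A minimal sketch of decision table testing (the shipping rule and its three conditions are illustrative assumptions): each row of the table is one combination of conditions together with the expected action, and the loop turns every row into a test.

```python
# Business rule (illustrative): free shipping if the customer is a
# member OR the order is 50 or more, except for oversize items.
# Columns: (member, order >= 50, oversize) -> free shipping?
decision_table = {
    (True,  True,  False): True,
    (True,  False, False): True,
    (False, True,  False): True,
    (False, False, False): False,
    (True,  True,  True):  False,
    (False, True,  True):  False,
}

def free_shipping(member: bool, big_order: bool, oversize: bool) -> bool:
    """Toy system under test (illustrative)."""
    return (member or big_order) and not oversize

# Each table row becomes one test case.
for conditions, expected in decision_table.items():
    assert free_shipping(*conditions) == expected
```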
Defect
An imperfection or flaw in a work product where it does not meet its requirements or specifications. A defect in code can lead to a failure when the software is executed.

Defect Density
A metric that measures the number of confirmed defects found in a component or system divided by its size, such as lines of code or function points. It helps compare quality across modules.

Defect Report
A document that describes a defect found during testing, including steps to reproduce, expected versus actual results, severity, and priority. It serves as the primary communication tool between testers and developers.

Definition of Done
A shared understanding within an agile team of the criteria that must be met before a work item is considered complete. It typically includes coding, testing, documentation, and review requirements.

Dynamic Testing
Testing that involves executing the software and observing its behavior. Unlike static testing, dynamic testing requires the code to be run, and it includes all test levels from unit testing through acceptance testing.
E
Entry Criteria
The set of conditions that must be met before a testing activity can begin. Examples include completed test plans, available test environments, and approved requirements documents.

Equivalence Partitioning
A black-box test design technique that divides input data into groups (partitions) where all values in a partition are expected to be treated the same way by the system. One representative value from each partition is selected for testing.
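A minimal sketch of equivalence partitioning (the temperature classifier and its cut-offs are illustrative assumptions): the input domain falls into three partitions, and one representative value per partition stands in for the whole group, since every value in a partition should be handled alike.

```python
def classify_temperature(celsius: float) -> str:
    """Illustrative rule: below 0 is 'freezing', 0 to 29 is 'normal',
    30 and above is 'hot'."""
    if celsius < 0:
        return "freezing"
    if celsius < 30:
        return "normal"
    return "hot"

# One representative per equivalence partition instead of testing
# every possible temperature.
representatives = {-10: "freezing", 15: "normal", 40: "hot"}

for value, expected in representatives.items():
    assert classify_temperature(value) == expected
```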
Error
A human mistake made during software development that introduces a defect into a work product. Errors can occur in requirements, design, code, or documentation and may lead to defects and ultimately failures.

Error Guessing
An experience-based testing technique where the tester uses knowledge of common mistakes, past defects, and typical failure patterns to anticipate where errors are likely to exist in the software.

Exhaustive Testing
A testing approach that attempts to cover every possible combination of inputs and preconditions. It is generally impractical for all but the simplest systems, which is why risk-based and sampling strategies are used instead.

Exit Criteria
The conditions that must be satisfied before a testing activity can be considered complete. They may include coverage targets, defect thresholds, or schedule constraints.

Experience-Based Test Techniques
A category of test techniques that leverage the tester's skill, intuition, and knowledge of similar applications to design and execute tests. They complement formal techniques by catching defects that structured approaches might overlook.

Exploratory Testing
A hands-on approach where test design, execution, and learning happen simultaneously. The tester actively explores the application, adapting their strategy based on what they discover during the session.
F
Failure
An observable deviation of the software from its expected behavior during execution. Failures are caused by defects in the code, but not every defect necessarily leads to a failure.

False Negative
A test result that incorrectly indicates no defect is present when one actually exists. This can happen due to poorly designed test cases or incorrect expected results.

False Positive
A test result that incorrectly indicates a defect is present when the software is actually functioning correctly. False positives waste investigation time and can erode confidence in the test suite.

Functional Testing
Testing that evaluates what the system does by verifying its functions against the functional requirements and specifications. It focuses on the behavior of the software from the user perspective.
I
Impact Analysis
The process of identifying the areas of the system that may be affected by a proposed change. It helps determine the scope of regression testing needed after modifications are made.

Incident Report
A document that records any event during testing that requires further investigation. It may describe a potential defect, an unexpected result, or an environmental issue encountered during test execution.

Independent Testing
Testing performed by individuals who are not the developers of the software. Independence can range from tests by a separate team within the same organization to tests by an external company, and it helps reduce author bias.

Inspection
The most formal type of review, led by a trained moderator and following a defined process. Inspections use roles, checklists, and metrics to systematically examine work products and are highly effective at finding defects early.

Integration Testing
A test level that focuses on verifying the interactions between components or systems. It aims to find defects in interfaces and the way integrated parts communicate with each other.
K
Keyword-Driven Testing
A scripting technique for automated testing where test cases are defined using a set of predefined keywords representing actions. Each keyword maps to one or more test steps, making tests readable by non-technical stakeholders.
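A minimal sketch of keyword-driven testing (the calculator app and the keyword names are illustrative assumptions): a keyword table maps human-readable action names to implementation steps, so the test case itself is just a list of keyword/argument pairs that a non-programmer can read and write.

```python
class CalculatorApp:
    """Toy system under test (illustrative)."""
    def __init__(self):
        self.value = 0

    def enter_number(self, n):
        self.value = n

    def add_number(self, n):
        self.value += n

app = CalculatorApp()

# Keyword table: each keyword maps to one implementation step.
keywords = {
    "Enter Number": app.enter_number,
    "Add Number": app.add_number,
}

# A test case written purely in keywords, readable without code.
test_case = [("Enter Number", 2), ("Add Number", 3)]
for keyword, argument in test_case:
    keywords[keyword](argument)

assert app.value == 5
```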
L
Load Testing
A type of performance testing that evaluates system behavior under expected and peak load conditions. It helps determine whether the system can handle the anticipated number of concurrent users and transactions.
M
Maintenance Testing
Testing performed on an existing operational system after it has been modified, migrated, or retired. It ensures that changes have not introduced new defects and that the system continues to meet its requirements.
N
Non-Functional Testing
Testing that evaluates how well the system performs rather than what it does. It covers quality characteristics such as performance, usability, reliability, security, and portability.
P
Pair Testing
A collaborative testing approach where two team members work together at one workstation to test the software. One person may drive the testing while the other observes, asks questions, and takes notes.

Performance Testing
A type of non-functional testing that evaluates the speed, responsiveness, and stability of a system under a particular workload. It helps identify bottlenecks and ensures the system meets performance requirements.

Priority
The level of business importance assigned to a defect or test case. Priority indicates how urgently a defect should be fixed, which may differ from its severity based on business context and deadlines.

Product Risk
A risk that is directly related to the quality of the product being developed. Examples include the possibility of incorrect calculations, data corruption, or poor performance under load.

Project Risk
A risk related to the management and control of the test project itself. Examples include staffing shortages, tool issues, unrealistic deadlines, and insufficient test environments.
Q
Quality
The degree to which a work product satisfies stated and implied needs of its stakeholders. In software, quality encompasses both functional correctness and non-functional attributes like usability and performance.

Quality Assurance
A set of activities focused on ensuring that appropriate processes are defined and followed during software development. QA is process-oriented and aims to prevent defects rather than detect them.

Quality Control
Activities focused on examining and measuring the actual quality of work products, including testing. QC is product-oriented and aims to identify defects in deliverables before they reach the end user.
R
Regression Testing
Testing conducted after a change to the software to ensure that existing functionality has not been broken. It is a key activity during maintenance and iterative development to catch unintended side effects.

Review
A form of static testing where a work product is examined by one or more people to find defects, improve quality, or share understanding. Reviews can range from informal buddy checks to formal inspections.

Risk Analysis
The process of assessing identified risks by estimating their likelihood of occurrence and the potential impact if they materialize. The results guide decisions about test prioritization and mitigation strategies.

Risk-Based Testing
A testing approach where the priority, scope, and depth of testing activities are driven by the results of risk analysis. Higher-risk areas receive more thorough testing to optimize the use of limited resources.

Root Cause
The fundamental reason behind a defect or failure. Identifying root causes through analysis helps prevent similar defects from recurring in the future, improving overall process quality.

Root Cause Analysis
A systematic investigation technique used to identify the underlying reasons for defects or failures. By addressing root causes rather than symptoms, organizations can implement lasting process improvements.
S
Session-Based Testing
A structured form of exploratory testing where testing is organized into time-boxed sessions with defined charters. Each session has a clear mission, and results are documented in session reports.

Severity
The degree of impact a defect has on the development or operation of the system. Severity reflects the technical seriousness of the defect, such as whether it causes a crash versus a minor cosmetic issue.

Specification-Based Test Techniques
A category of test techniques where tests are derived from the specifications or requirements of the system without reference to its internal structure. The category is synonymous with black-box testing.

State Transition Testing
A black-box technique that models the system as a finite state machine. Test cases are designed to exercise valid and invalid transitions between states, ensuring the system responds correctly to sequences of events.
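A minimal sketch of state transition testing (the door model, its states, and its events are illustrative assumptions): the state machine is captured as a transition table, a valid event sequence exercises several transitions, and an invalid transition is checked to be rejected.

```python
# Transition table for a simple door (illustrative): (state, event)
# maps to the next state; anything not listed is invalid.
transitions = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def next_state(state: str, event: str) -> str:
    try:
        return transitions[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

# Valid sequence: a chain of events exercises several transitions.
state = "closed"
for event in ["open", "close", "lock", "unlock"]:
    state = next_state(state, event)
assert state == "closed"

# Invalid transition: locking an open door must be rejected.
try:
    next_state("opened", "lock")
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```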
Statement Coverage
A white-box coverage metric that measures the percentage of executable code statements that have been exercised by a test suite. It is the weakest form of structural coverage but provides a useful baseline.

Static Analysis
The examination of code or other work products without executing the software. Automated tools can detect potential defects such as coding standard violations, unreachable code, and security vulnerabilities.

Static Testing
Testing that involves examining work products such as requirements, design documents, and code without executing the software. It includes reviews and static analysis, and can find defects early in the lifecycle.

Structure-Based Test Techniques
A category of test techniques where tests are derived from the internal structure of the system, such as code, architecture, or data flows. The category is synonymous with white-box testing.

System Integration Testing
Testing that focuses on the interactions and interfaces between systems, packages, or external services. It verifies that independently developed systems work together as expected in the integrated environment.

System Testing
A test level that evaluates the complete, integrated system against its specified requirements. It is performed in an environment that closely resembles the production environment and covers both functional and non-functional aspects.
T
Test Analysis
The activity of examining the test basis to identify testable conditions. During test analysis, testers determine what to test by breaking down requirements and other documentation into specific, verifiable test conditions.

Test Approach
The overall strategy and methodology chosen for testing a particular project. It defines the test levels, techniques, tools, and resources to be used, and is tailored to the project context and risks.

Test Automation
The use of software tools to execute tests, compare actual outcomes with expected outcomes, and report results. Automation is especially valuable for regression testing and repetitive tasks that would be time-consuming to perform manually.

Test Basis
The body of knowledge used as the foundation for designing test cases. It includes requirements specifications, design documents, user stories, code, and any other documentation that defines expected system behavior.

Test Case
A set of preconditions, inputs, actions, expected results, and postconditions developed to verify a particular test condition or requirement. Well-designed test cases are specific, repeatable, and traceable to requirements.

Test Charter
A brief statement that defines the scope and objectives of an exploratory testing session. It provides enough direction to guide the tester while leaving room for creative investigation.

Test Completion
The activities performed at the end of a test phase, including archiving test artifacts, analyzing lessons learned, and creating a test summary report. It ensures that valuable information is preserved for future projects.

Test Condition
A testable aspect of a component or system identified from the test basis. Test conditions are derived during test analysis and serve as the foundation for creating specific test cases.

Test Control
The ongoing activity of comparing actual test progress against the test plan and taking corrective actions when deviations occur. It involves adjusting priorities, reallocating resources, or revising the test schedule.

Test Data
The input values and environmental settings needed to execute a test case. Test data can be created manually, generated by tools, or extracted from production systems with appropriate anonymization.

Test Design
The activity of transforming test conditions into concrete test cases and other test artifacts. It involves selecting test techniques, defining input values, and specifying expected results.

Test Environment
The hardware, software, network configuration, tools, and other infrastructure needed to execute tests. A well-configured test environment should closely mirror the production environment to produce reliable results.

Test Estimation
The process of predicting the effort, time, and resources required for testing activities. Common approaches include metrics-based estimation using historical data and expert-based estimation using team experience.

Test Execution
The process of running test cases on the system under test and recording the results. It involves comparing actual outcomes with expected outcomes and logging any discrepancies as potential defects.

Test Implementation
The activity of preparing everything needed for test execution, including finalizing test cases, setting up test data, configuring the test environment, and creating test procedures and automated scripts.

Test Level
A group of testing activities that are organized and managed together. The four main test levels are component testing, integration testing, system testing, and acceptance testing, each with distinct objectives.

Test Management
The planning, estimation, monitoring, and control of testing activities and resources. Effective test management ensures that testing is aligned with project goals and delivers maximum value within constraints.

Test Metric
A quantitative measure used to assess the progress, quality, and effectiveness of testing activities. Common metrics include defect detection rate, test execution progress, and code coverage percentages.

Test Monitoring
The continuous gathering of information about testing activities to provide visibility into progress and status. Metrics such as test execution rates, defect counts, and coverage levels are tracked and reported to stakeholders.

Test Object
The work product or component that is the subject of testing. It can be a piece of code, a module, a system, a requirements document, or any other artifact that needs to be verified.

Test Oracle
A source of information used to determine the expected results of a test. Oracles can include requirements documents, existing systems, user knowledge, or calculated values that define correct behavior.

Test Plan
A document that describes the scope, approach, resources, schedule, and activities for a testing effort. It defines what will be tested, how it will be tested, and the criteria for starting and stopping testing.

Test Procedure
A detailed sequence of actions for executing one or more test cases. It specifies the order of operations, including setup steps, execution steps, and cleanup activities needed to run the tests.

Test Process
The set of interrelated activities that constitute testing, including planning, analysis, design, implementation, execution, completion, and monitoring. A well-defined test process improves consistency and effectiveness.

Test Strategy
A high-level description of the test levels and testing approaches to be applied across an organization or program. Unlike a test plan, which is project-specific, a test strategy provides general guidelines applicable to multiple projects.

Test Suite
A collection of test cases grouped together for a specific testing purpose, such as testing a particular feature or executing a regression cycle. Test suites help organize and manage large numbers of test cases.

Test Summary Report
A document produced at the end of a testing phase that summarizes the testing activities, results, and any deviations from the plan. It provides stakeholders with an assessment of the quality of the tested product.

Test Technique
A systematic method for deriving or selecting test cases. Techniques are categorized as black-box, white-box, or experience-based, each offering different approaches to achieving thorough test coverage.

Test Tool
A software product that supports one or more testing activities, such as test management, test execution, static analysis, or performance measurement. Tools can improve efficiency but require investment in setup and maintenance.

Traceability
The ability to link test conditions and test cases back to their source in the test basis, such as requirements or user stories. Traceability helps ensure complete coverage and supports impact analysis when requirements change.
U
Unit Testing
The testing of individual units or components of the software in isolation. Developers typically write and execute unit tests to verify that each small piece of code behaves as intended before integration.
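A minimal sketch of a unit test (the `slugify` function and its behavior are illustrative assumptions): a small unit is exercised in isolation, with no other components or external resources involved, using Python's standard `unittest` framework.

```python
import unittest

def slugify(title: str) -> str:
    """Illustrative unit under test: build a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    # Each test checks one behavior of the unit in isolation.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace_is_collapsed(self):
        self.assertEqual(slugify("  Unit   Testing "), "unit-testing")

# Run the tests in-process (normally: python -m unittest <module>).
unittest.main(argv=["slugify-tests"], exit=False, verbosity=0)
```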
Usability Testing
A type of non-functional testing that evaluates how easy and intuitive the software is for end users. It assesses factors such as learnability, efficiency, user satisfaction, and accessibility.

User Acceptance Testing
A form of acceptance testing where actual or representative end users verify that the system meets their needs and business processes. It is often the final testing phase before the system goes live.

User Story
A short, informal description of a feature from the perspective of an end user. In agile development, user stories serve as a key part of the test basis and often include acceptance criteria that guide test design.
V
Validation
The process of evaluating a system or component to determine whether it satisfies the intended use and user needs. Validation answers the question: are we building the right product?

Verification
The process of evaluating a system or component to determine whether it meets the specified requirements and design documents. Verification answers the question: are we building the product right?
W
Walkthrough
A type of review where the author of a work product guides participants through the document, explaining the content and gathering feedback. It is less formal than an inspection but more structured than an informal review.

White-Box Testing
A testing technique where the internal structure, code, and logic of the system are used to design test cases. It aims to exercise specific code paths, branches, and conditions to verify internal operations.
Ready to Test Your Knowledge?
Practice ISTQB Foundation Level questions covering all these terms.
TestPrepPro is an independent exam preparation resource. We are not affiliated with, endorsed by, or sponsored by the International Software Testing Qualifications Board (ISTQB). All trademarks are the property of their respective owners. Practice questions are original content created for educational purposes.