The field of software testing uses an international jargon, and the International Software Testing Qualifications Board (ISTQB) plays a role in keeping the explanation of its terms and concepts consistent. We have built a searchable tool with which you can find not only the terms themselves but also search their definitions.
If you find a term or definition missing, please let us know.
Standard Glossary of Terms used in Software Testing
There are 51 terms in this list that begin with the letter D.
daily build
A development activity in which a complete system is compiled and linked every day (usually overnight), so that a consistent system including all the latest changes is available at any time.
dashboard
A representation of dynamic measurements of operational performance for some organization or activity, using metrics represented via metaphors such as visual “dials”, “counters”, and other devices resembling those on the dashboard of an automobile, so that the effects of events or activities can be easily understood and related to operational goals. See also corporate dashboard, scorecard.
data definition
An executable statement where a variable is assigned a value.
data driven testing
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.
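As an illustration, a minimal data-driven sketch in Python (the `add` function and the table contents are invented for this example): a single control loop executes every row of a table of test inputs and expected results.

```python
# Data-driven testing sketch: one control script executes every row
# of a table of inputs and expected results. In practice the table
# would typically be read from a spreadsheet or CSV file.

def add(a, b):
    # Hypothetical function under test.
    return a + b

# Each row: (input_a, input_b, expected_result)
test_table = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]

def run_data_driven_tests(table):
    # The single control loop: run every row, record pass/fail.
    results = []
    for a, b, expected in table:
        actual = add(a, b)
        results.append(actual == expected)
    return results

print(run_data_driven_tests(test_table))  # -> [True, True, True]
```

Adding a new test case is then a matter of adding a row to the table, not writing new script code.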
data flow
An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]
data flow analysis
A form of static analysis based on the definition and usage of variables.
data flow coverage
The percentage of definition-use pairs that have been exercised by a test suite.
data flow testing
A white box test design technique in which test cases are designed to execute definition and use pairs of variables.
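The definition-use idea can be illustrated with a small Python sketch (the `classify` function is invented for this example): the variable `x` is defined once and used on two alternative paths, so two test cases are needed to exercise both definition-use pairs.

```python
# Data flow testing sketch: "x" has one definition and two uses on
# alternative paths, giving two definition-use pairs to exercise.

def classify(n):
    x = abs(n)          # definition of x
    if n >= 0:
        return x * 2    # use of x (computational use, path 1)
    else:
        return x + 1    # use of x (computational use, path 2)

# One test case per definition-use pair:
assert classify(3) == 6    # exercises def of x with its use on path 1
assert classify(-3) == 4   # exercises def of x with its use on path 2
```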
data integrity testing
See database integrity testing.
database integrity testing
Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.
dd-path
A path of execution (usually through a graph representing a program, such as a flow-chart) that does not include any conditional nodes, such as the path of execution between two decisions.
dead code
See unreachable code.
debugger
See debugging tool.
debugging
The process of finding, analyzing and removing the causes of failures in software.
debugging tool
A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
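For example, Python's standard-library debugger `pdb` provides exactly these facilities; the `buggy_average` function below is an invented example, with the breakpoint left commented out so the sketch runs non-interactively.

```python
# Debugging-tool sketch using Python's standard debugger, pdb: it
# can halt a program at a statement, step through it, and let the
# programmer examine variables.

import pdb

def buggy_average(values):
    total = sum(values)
    # Uncommenting the next line halts execution here and opens an
    # interactive prompt where "total" and "values" can be examined:
    # pdb.set_trace()
    return total / len(values)

print(buggy_average([2, 4, 6]))  # -> 4.0
```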
decision
A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.
decision condition coverage
The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
decision condition testing
A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.
decision coverage
The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
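A toy computation of this percentage in Python (the decision names and exercised outcomes are invented for the example):

```python
# Decision coverage as defined above: exercised decision outcomes
# divided by all decision outcomes in the code under test.

all_outcomes = {
    ("if_discount", True), ("if_discount", False),
    ("if_member", True), ("if_member", False),
}
exercised_outcomes = {
    ("if_discount", True), ("if_discount", False),
    ("if_member", True),   # False outcome of if_member never taken
}

decision_coverage = 100 * len(exercised_outcomes) / len(all_outcomes)
print(decision_coverage)  # -> 75.0
```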
decision outcome
The result of a decision (which therefore determines the branches to be taken).
decision table
A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
decision table testing
A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal04] See also decision table.
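A sketch in Python (the discount rules and the `discount` function are invented for this example): the decision table maps combinations of causes to an effect, and one test case is derived per rule.

```python
# Decision table testing sketch: causes (is_member, order_over_100)
# mapped to an effect (discount percent), one test case per rule.

decision_table = {
    (True,  True):  15,
    (True,  False): 10,
    (False, True):   5,
    (False, False):  0,
}

def discount(is_member, order_over_100):
    # Component under test (invented example).
    if is_member and order_over_100:
        return 15
    if is_member:
        return 10
    if order_over_100:
        return 5
    return 0

# Execute the combinations of causes shown in the decision table:
for (member, over_100), expected in decision_table.items():
    assert discount(member, over_100) == expected
print("all rules pass")  # -> all rules pass
```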
decision testing
A white box test design technique in which test cases are designed to execute decision outcomes.
defect
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
defect based technique
See defect based test design technique.
defect based test design technique
A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy.
defect density
The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
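A worked example in Python (the numbers are illustrative), expressing size in thousands of lines of code (KLOC):

```python
# Defect density as defined above: defects identified divided by
# size, here per thousand lines of code (KLOC).

defects_found = 12
lines_of_code = 8000

defect_density_per_kloc = defects_found / (lines_of_code / 1000)
print(defect_density_per_kloc)  # -> 1.5
```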
Defect Detection Percentage (DDP)
The number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.
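A worked example in Python (the numbers are illustrative):

```python
# Defect Detection Percentage (DDP) of a test phase, as defined
# above: defects found by the phase divided by defects found by the
# phase plus those found by any other means afterwards.

found_in_system_test = 90
found_afterwards = 10   # e.g. in acceptance testing or production

ddp = 100 * found_in_system_test / (found_in_system_test + found_afterwards)
print(ddp)  # -> 90.0
```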
defect management
The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]
defect management tool
A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.
defect masking
An occurrence in which one defect prevents the detection of another. [After IEEE 610]
defect report
A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]
defect taxonomy
A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.
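A tiny sketch of such a hierarchy in Python (the categories and subcategories are invented for this example):

```python
# A minimal hierarchical defect taxonomy: top-level categories with
# subcategories, usable for reproducibly classifying defects.

defect_taxonomy = {
    "functional": ["incorrect computation", "missing feature"],
    "interface":  ["wrong parameter order", "protocol violation"],
    "data":       ["incorrect initialization", "wrong type"],
}

def is_valid_classification(category, subcategory):
    # A defect classification is valid only if it names an existing
    # category/subcategory pair in the taxonomy.
    return subcategory in defect_taxonomy.get(category, [])

print(is_valid_classification("data", "wrong type"))  # -> True
```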
defect tracking tool
See defect management tool.
definition-use pair
The association of a definition of a variable with a use of that variable. Uses of a variable are either computational (e.g. multiplication) or predicate, i.e. directing the execution of a path.
deliverable
Any (work) product that must be delivered to someone other than the (work) product’s author.
Deming cycle
An iterative four-step problem-solving process (plan-do-check-act), typically used in process improvement. [After Deming]
design-based testing
An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).
desk checking
Testing of software or a specification by manual simulation of its execution. See also static testing.
development testing
Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]
deviation report
See incident report.
diagnosing (IDEAL)
The phase within the IDEAL model where it is determined where one is, relative to where one wants to be. The diagnosing phase consists of the activities: characterize current and desired states and develop recommendations. See also IDEAL.
dirty testing
See negative testing.
documentation testing
Testing the quality of the documentation, e.g. user guide or installation guide.
domain
The set from which valid input and/or output values can be selected.
driver
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
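A minimal driver sketch in Python (the `parse_price` component and its inputs are invented for this example): the driver takes care of calling the component under test and supplying its inputs.

```python
# Test driver sketch: the driver controls the calls to the component
# under test and feeds it inputs, standing in for the real caller.

def parse_price(text):
    # Component under test (invented example).
    return float(text.strip().lstrip("$"))

def driver():
    # The driver supplies the inputs and performs the calls that the
    # real calling component would otherwise make.
    inputs = ["$10.50", " 3.00 ", "$0.99"]
    return [parse_price(s) for s in inputs]

print(driver())  # -> [10.5, 3.0, 0.99]
```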
dynamic analysis
The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]
dynamic analysis tool
A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.
dynamic comparison
Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.
dynamic testing
Testing that involves the execution of the software of a component or system.