Term
Role keywords: F, ATM, ATA, ATT, EITP, ETM |
|
Definition
F: keyword from the ISTQB Foundation syllabus
ATM: keyword from the ISTQB Advanced – Test Management syllabus
ATA: keyword from the ISTQB Advanced – Test Analyst syllabus
ATT: keyword from the ISTQB Advanced – Technical Test Analyst syllabus
EITP: keyword from the ISTQB Expert – Improving the Testing Process syllabus
ETM: keyword from the ISTQB Expert – Test Management syllabus. |
|
|
Term
acceptance criteria |
|
Definition
The exit criteria that a component or system must satisfy in order to be
accepted by a user, customer, or other authorized entity. [IEEE 610] |
|
|
Term
acceptance testing |
|
Definition
Formal testing with respect to user needs, requirements, and business
processes conducted to determine whether or not a system satisfies the acceptance criteria
and to enable the user, customers or other authorized entity to determine whether or not to
accept the system. [After IEEE 610] |
|
|
Term
ATA accessibility testing: |
|
Definition
Testing to determine the ease by which users with disabilities can use a
component or system. [Gerrard] |
|
|
Term
accuracy |
|
Definition
The capability of the software product to provide the right or agreed results or effects
with the needed degree of precision. [ISO 9126] See also functionality. |
|
|
Term
accuracy testing |
|
Definition
The process of testing to determine the accuracy of a software product. |
|
|
Term
acting (IDEAL) |
|
Definition
The phase within the IDEAL model where the improvements are
developed, put into practice, and deployed across the organization. The acting phase
consists of the activities: create solution, pilot/test solution, refine solution and implement
solution. See also IDEAL...
IDEAL: An organizational improvement model that serves as a roadmap for initiating,
planning, and implementing improvement actions. The IDEAL model is named for the five
phases it describes: initiating, diagnosing, establishing, acting, and learning. |
|
|
Term
action word driven testing |
|
Definition
A scripting technique that uses data files to contain not only test
data and expected results, but also keywords related to the application being tested. The
keywords are interpreted by special supporting scripts that are called by the control script
for the test. See also data-driven testing...
data-driven testing: A scripting technique that stores test input and expected results in a
table or spreadsheet, so that a single control script can execute all of the tests in the table.
Data-driven testing is often used to support the application of test execution tools such as
capture/playback tools. [Fewster and Graham] See also keyword-driven testing. |
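Note: a minimal Python sketch of the keyword-driven idea, added for illustration; the
keywords, functions and test rows below are invented, not part of the glossary.

    # Each test row pairs a keyword with its test data; the control
    # script looks the keyword up and calls the supporting function.
    def do_login(user, password):
        print("logging in as", user)

    def do_search(query):
        print("searching for", query)

    ACTIONS = {"login": do_login, "search": do_search}

    test_rows = [
        ("login", ["alice", "secret"]),
        ("search", ["test automation"]),
    ]

    for keyword, data in test_rows:
        ACTIONS[keyword](*data)  # the keyword is interpreted, not hard-coded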
|
|
Term
actor |
|
Definition
User or any other person or system that interacts with the system under test in a
specific way. |
|
|
Term
actual outcome |
|
Definition
The behavior produced/observed when a component or system is tested. |
|
|
Term
actual result |
|
Definition
The behavior produced/observed when a component or system is tested. |
|
|
Term
ad hoc review |
|
Definition
A review not based on a formal (documented) procedure. |
|
|
Term
ad hoc testing |
|
Definition
Testing carried out informally; no formal test preparation takes place, no
recognized test design technique is used, there are no expectations for results and
arbitrariness guides the test execution activity. |
|
|
Term
adaptability |
|
Definition
The capability of the software product to be adapted for different specified
environments without applying actions or means other than those provided for this purpose
for the software considered. [ISO 9126] See also portability.
...
portability: The ease with which the software product can be transferred from one hardware
or software environment to another. [ISO 9126] |
|
|
Term
agile manifesto |
|
Definition
A statement on the values that underpin agile software development. The
values are:
- individuals and interactions over processes and tools
- working software over comprehensive documentation
- customer collaboration over contract negotiation
- responding to change over following a plan. |
|
|
Term
agile software development: |
|
Definition
A group of software development methodologies based on
iterative incremental development, where requirements and solutions evolve through
collaboration between self-organizing cross-functional teams. |
|
|
Term
agile testing |
|
Definition
Testing practice for a project using agile software development methodologies,
incorporating techniques and methods, such as extreme programming (XP), treating
development as the customer of testing and emphasizing the test-first design paradigm. See
also test driven development.
...
test driven development: A way of developing software where the test cases are developed,
and often automated, before the software is developed to run those test cases. |
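Note: a minimal Python illustration of the test-first idea; the add function and its
test are invented for the example.

    # Step 1: write the test before the implementation exists (it fails at first).
    def test_add():
        assert add(2, 3) == 5

    # Step 2: write the simplest implementation that makes the test pass.
    def add(a, b):
        return a + b

    test_add()  # now passes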
|
|
Term
algorithm test |
|
Definition
See branch testing.
branch testing: A white box test design technique in which test cases are designed to
execute branches. |
|
|
Term
alpha testing |
|
Definition
Simulated or actual operational testing by potential users/customers or an
independent test team at the developers’ site, but outside the development organization.
Alpha testing is often employed for off-the-shelf software as a form of internal acceptance
testing. |
|
|
Term
analytical testing |
|
Definition
Testing based on a systematic analysis of e.g., product risks or
requirements. |
|
|
Term
analyzability |
|
Definition
The capability of the software product to be diagnosed for deficiencies or causes
of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See
also maintainability...
maintainability: The ease with which a software product can be modified to correct defects,
modified to meet new requirements, modified to make future maintenance easier, or
adapted to a changed environment. [ISO 9126]
|
|
|
Term
analyzer |
|
Definition
static analyzer: A tool that carries out static analysis. |
|
|
Term
anomaly |
|
Definition
Any condition that deviates from expectation based on requirements specifications,
design documents, user documents, standards, etc. or from someone’s perception or
experience. Anomalies may be found during, but not limited to, reviewing, testing,
analysis, compilation, or use of software products or applicable documentation.
[IEEE
1044] See also bug, defect, deviation, error, fault, failure, incident, problem. |
|
|
Term
anti-pattern |
|
Definition
Repeated action, process, structure or reusable solution that initially appears to
be beneficial and is commonly used but is ineffective and/or counterproductive in practice. |
|
|
Term
API (Application Programming Interface) testing |
|
Definition
Testing the code which enables
communication between different processes, programs and/or systems. API testing often
involves negative testing, e.g., to validate the robustness of error handling. See also
interface testing...
interface testing: An integration test type that is concerned with testing the interfaces
between components or systems. |
|
|
Term
arc testing |
|
Definition
branch testing: A white box test design technique in which test cases are designed to execute
branches. |
|
|
Term
assessment report |
|
Definition
A document summarizing the assessment results, e.g. conclusions,
recommendations and findings. See also process assessment...
process assessment: A disciplined evaluation of an organization’s software processes against
a reference model. [after ISO 15504] |
|
|
Term
assessor |
|
Definition
A person who conducts an assessment; any member of an assessment team. |
|
|
Term
atomic condition |
|
Definition
A condition that cannot be decomposed, i.e., a condition that does not
contain two or more single conditions joined by a logical operator (AND, OR, XOR). |
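Note: an illustrative Python example (the variables are invented):

    a, b, c, d = 4, 2, 1, 3
    atomic = a > b              # atomic condition: no logical operator
    compound = a > b and c < d  # not atomic: two single conditions joined by AND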
|
|
Term
attack |
|
Definition
Directed and focused attempt to evaluate the quality, especially reliability, of a test
object by attempting to force specific failures to occur. See also negative testing...
negative testing: Tests aimed at showing that a component or system does not work.
Negative testing is related to the testers’ attitude rather than a specific test approach or test
design technique, e.g. testing with invalid input values or exceptions. [After Beizer]. |
|
|
Term
attack-based testing |
|
Definition
An experience-based testing technique that uses software attacks to
induce failures, particularly security related failures. See also attack. |
|
|
Term
attractiveness |
|
Definition
The capability of the software product to be attractive to the user. [ISO 9126]
See also usability...
usability: The capability of the software to be understood, learned, used and attractive to the
user when used under specified conditions. [ISO 9126]
|
|
|
Term
audit |
|
Definition
An independent evaluation of software products or processes to ascertain compliance
to standards, guidelines, specifications, and/or procedures based on objective criteria,
including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured. [IEEE 1028] |
|
|
Term
audit trail |
|
Definition
A path by which the original input to a process (e.g. data) can be traced back
through the process, taking the process output as a starting point. This facilitates defect
analysis and allows a process audit to be carried out. [After TMap] |
|
|
Term
automated testware |
|
Definition
Testware used in automated testing, such as tool scripts. |
|
|
Term
availability |
|
Definition
The degree to which a component or system is operational and accessible when
required for use. Often expressed as a percentage. [IEEE 610] |
|
|
Term
back-to-back testing |
|
Definition
Testing in which two or more variants of a component or system are
executed with the same inputs, the outputs compared, and analyzed in cases of
discrepancies. [IEEE 610] |
|
|
Term
balanced scorecard |
|
Definition
A strategic tool for measuring whether the operational activities of a
company are aligned with its objectives in terms of business vision and strategy. See also
corporate dashboard, scorecard.
corporate dashboard: A dashboard-style representation of the status of corporate
performance data. See also balanced scorecard, dashboard. |
|
|
Term
baseline |
|
Definition
A specification or software product that has been formally reviewed or agreed upon,
that thereafter serves as the basis for further development, and that can be changed only
through a formal change control process. [After IEEE 610] |
|
|
Term
basic block |
|
Definition
A sequence of one or more consecutive executable statements containing no
branches. A node in a control flow graph typically represents a basic block. |
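Note: an illustrative Python fragment (the variables are invented). The consecutive
statements below contain no branch, so they form a single basic block, i.e. one node
in the control flow graph:

    price, quantity, shipping = 10, 3, 5
    total = price * quantity   # block starts here
    total = total + shipping   # same block: no branch so far
    print(total)               # block ends with the last statement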
|
|
Term
basis test set |
|
Definition
A set of test cases derived from the internal structure of a component or
specification to ensure that 100% of a specified coverage criterion will be achieved. |
|
|
Term
bebugging |
|
Definition
fault seeding: The process of intentionally adding defects to those already in the component
or system for the purpose of monitoring the rate of detection and removal, and estimating
the number of remaining defects. Fault seeding is typically part of development (prerelease)
testing and can be performed at any test level (component, integration, or system).
[After IEEE 610] |
|
|
Term
behavior |
|
Definition
The response of a component or system to a set of input values and preconditions. |
|
|
Term
benchmark test |
|
Definition
(1) A standard against which measurements for comparisons can be made.
(2) A test that is used to compare components or systems to each other or to a standard
as in (1). [After IEEE 610] |
|
|
Term
bespoke software |
|
Definition
Software developed specifically for a set of users or customers. The
opposite is off-the-shelf software. |
|
|
Term
best practice |
|
Definition
A superior method or innovative practice that contributes to the improved
performance of an organization under given context, usually recognized as ‘best’ by other
peer organizations. |
|
|
Term
beta testing |
|
Definition
Operational testing by potential and/or existing users/customers at an external
site not otherwise involved with the developers, to determine whether or not a component
or system satisfies the user/customer needs and fits within the business processes. Beta
testing is often employed as a form of external acceptance testing for off-the-shelf software
in order to acquire feedback from the market. |
|
|
Term
big-bang testing |
|
Definition
An integration testing approach in which software elements, hardware
elements, or both are combined all at once into a component or an overall system, rather
than in stages. [After IEEE 610] See also integration testing.
integration testing: Testing performed to expose defects in the interfaces and in the
interactions between integrated components or systems. See also component integration
testing, system integration testing. |
|
|
Term
black box technique |
|
Definition
black box test design technique: Procedure to derive and/or select test cases based on an
analysis of the specification, either functional or non-functional, of a component or system
without reference to its internal structure. |
|
|
Term
black box testing |
|
Definition
Testing, either functional or non-functional, without reference to the
internal structure of the component or system. |
|
|
Term
blocked test case |
|
Definition
A test case that cannot be executed because the preconditions for its
execution are not fulfilled. |
|
|
Term
bottom-up testing |
|
Definition
An incremental approach to integration testing where the lowest level
components are tested first, and then used to facilitate the testing of higher level
components. This process is repeated until the component at the top of the hierarchy is
tested. See also integration testing. |
|
|
Term
boundary value |
|
Definition
An input value or output value which is on the edge of an equivalence
partition or at the smallest incremental distance on either side of an edge, for example the
minimum or maximum value of a range. |
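Note: a worked example with an invented requirement. For a field that accepts integers
from 1 to 100, the edges of the valid partition give the boundary values:

    boundary_values = [0, 1, 100, 101]  # just below/at the minimum, at/just above the maximum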
|
|
Term
boundary value analysis |
|
Definition
A black box test design technique in which test cases are designed
based on boundary values. See also boundary value. |
|
|
Term
boundary value coverage |
|
Definition
The percentage of boundary values that have been exercised by a
test suite. |
|
|
Term
boundary value testing |
|
Definition
boundary value analysis: A black box test design technique in which test cases are designed
based on boundary values. |
|
|
Term
branch |
|
Definition
A basic block that can be selected for execution based on a program construct in
which one of two or more alternative program paths is available, e.g. case, jump, go to,
if-then-else. |
|
|
Term
branch condition |
|
Definition
condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also
condition testing
condition testing: A white box test design technique in which test cases are designed to
execute condition outcomes. |
|
|
Term
branch condition combination coverage |
|
Definition
multiple condition coverage: The percentage of combinations of all single condition
outcomes within one statement that have been exercised by a test suite. 100% multiple
condition coverage implies 100% modified condition decision coverage. |
|
|
Term
branch coverage |
|
Definition
The percentage of branches that have been exercised by a test suite. 100%
branch coverage implies both 100% decision coverage and 100% statement coverage. |
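Note: an illustrative Python example (function and tests invented). The function has one
decision and therefore two branches; the two assertions together exercise both of them,
giving 100% branch coverage:

    def absolute(x):
        if x < 0:       # decision with a True branch and a False branch
            x = -x
        return x

    assert absolute(-5) == 5  # exercises the True branch
    assert absolute(3) == 3   # exercises the False branch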
|
|
Term
branch testing |
|
Definition
A white box test design technique in which test cases are designed to execute
branches. |
|
|
Term
buffer |
|
Definition
A device or storage area used to store data temporarily for differences in rates of data
flow, time or occurrence of events, or amounts of data that can be handled by the devices
or processes involved in the transfer or use of the data. [IEEE 610] |
|
|
Term
buffer overflow |
|
Definition
A memory access failure due to the attempt by a process to store data
beyond the boundaries of a fixed length buffer, resulting in overwriting of adjacent
memory areas or the raising of an overflow exception. See also buffer. |
|
|
Term
bug |
|
Definition
defect:
A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect, if
encountered during execution, may cause a failure of the component or system. |
|
|
Term
bug report |
|
Definition
A document reporting on any flaw in a component or system that can cause the
component or system to fail to perform its required function. [After IEEE 829] |
|
|
Term
bug taxonomy |
|
Definition
defect taxonomy:
A system of (hierarchical) categories designed to be a useful aid for
reproducibly classifying defects. |
|
|
Term
|
Definition
defect management tool:
A tool that facilitates the recording and status tracking of defects
and changes. They often have workflow-oriented facilities to track and control the
allocation, correction and re-testing of defects and provide reporting facilities.
See also
incident management tool. |
|
|
Term
business process-based testing |
|
Definition
An approach to testing in which test cases are designed
based on descriptions and/or knowledge of business processes. |
|
|
Term
call graph |
|
Definition
An abstract representation of calling relationships between subroutines in a
program. |
|
|
Term
Capability Maturity Model Integration |
|
Definition
A framework that describes the key elements of an effective product development and maintenance process. The Capability
Maturity Model Integration covers best practices for planning, engineering and managing
product development and maintenance. [CMMI] |
|
|
Term
CASE |
|
Definition
Acronym for Computer Aided Software Engineering. |
|
|
Term
CAST |
|
Definition
Acronym for Computer Aided Software Testing. See also test automation.
test automation:
The use of software to perform or support test activities, e.g. test
management, test design, test execution and results checking. |
|
|
Term
test basis |
|
Definition
All documents from which the requirements of a component or system can be
inferred. The documentation on which the test cases are based.
If a document can be
amended only by way of formal amendment procedure, then the test basis is called a frozen
test basis. [After TMap] |
|
|
Term
test bed |
|
Definition
See test environment.
test environment:
An environment containing hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test. [After IEEE 610] |
|
|
Term
causal analysis |
|
Definition
The analysis of defects to determine their root cause. [CMMI] |
|
|
Term
cause-effect analysis |
|
Definition
See cause-effect graphing.
cause-effect graphing:
A black box test design technique in which test cases are designed
from cause-effect graphs. [BS 7925/2] |
|
|
Term
cause-effect decision table |
|
Definition
See decision table.
decision table:
A table showing combinations of inputs and/or stimuli (causes) with their
associated outputs and/or actions (effects), which can be used to design test cases. |
|
|
Term
cause-effect diagram |
|
Definition
A graphical representation used to organize and display the
interrelationships of various possible root causes of a problem. Possible causes of a real or
potential defect or failure are organized in categories and subcategories in a horizontal
tree-structure, with the (potential) defect or failure as the root node. [After Juran] |
|
|
Term
cause-effect graph |
|
Definition
A graphical representation of inputs and/or stimuli (causes) with their
associated outputs (effects), which can be used to design test cases. |
|
|
Term
cause-effect graphing |
|
Definition
A black box test design technique in which test cases are designed
from cause-effect graphs. [BS 7925/2] |
|
|
Term
certification |
|
Definition
The process of confirming that a component, system or person complies with
its specified requirements, e.g. by passing an exam. |
|
|
Term
change control |
|
Definition
See configuration control.
configuration control:
An element of configuration management, consisting of the
evaluation, co-ordination, approval or disapproval, and implementation of changes to
configuration items after formal establishment of their configuration identification. [IEEE
610] |
|
|
Term
configuration control board (CCB): |
|
Definition
A group of people responsible for evaluating and
approving or disapproving proposed changes to configuration items, and for ensuring
implementation of approved changes. [IEEE 610] |
|
|
Term
configuration identification |
|
Definition
An element of configuration management, consisting of
selecting the configuration items for a system and recording their functional and physical
characteristics in technical documentation. [IEEE 610] |
|
|
Term
configuration item |
|
Definition
An aggregation of hardware, software or both, that is designated for
configuration management and treated as a single entity in the configuration management
process. [IEEE 610] |
|
|
Term
|
Definition
A discipline applying technical and administrative direction and
surveillance to: identify and document the functional and physical characteristics of a
configuration item, control changes to those characteristics, record and report change
processing and implementation status, and verify compliance with specified requirements.
[IEEE 610] |
|
|
Term
configuration management tool |
|
Definition
A tool that provides support for the identification and
control of configuration items, their status over changes and versions, and the release of
baselines consisting of configuration items. |
|
|
Term
configuration testing |
|
Definition
See portability testing.
portability testing:
The process of testing to determine the portability of a software product. |
|
|
Term
confirmation testing |
|
Definition
See re-testing.
re-testing: Testing that runs test cases that failed the last time they were run, in order to
verify the success of corrective actions. |
|
|
Term
conformance testing |
|
Definition
See compliance testing.
compliance testing: The process of testing to determine the compliance of the component or
system. |
|
|
Term
consistency |
|
Definition
The degree of uniformity, standardization, and freedom from contradiction
among the documents or parts of a component or system. [IEEE 610] |
|
|
Term
consultative testing |
|
Definition
Testing driven by the advice and guidance of appropriate experts from
outside the test team (e.g., technology experts and/or business domain experts). |
|
|
Term
content-based model |
|
Definition
A process model providing a detailed description of good engineering
practices, e.g. test practices. |
|
|
Term
continuous representation |
|
Definition
A capability maturity model structure wherein capability levels
provide a recommended order for approaching process improvement within specified
process areas. [CMMI] |
|
|
Term
control chart |
|
Definition
A statistical process control tool used to monitor a process and determine
whether it is statistically controlled. It graphically depicts the average value and the upper
and lower control limits (the highest and lowest values) of a process. |
|
|
Term
control flow |
|
Definition
A sequence of events (paths) in the execution through a component or system. |
|
|
Term
control flow analysis |
|
Definition
A form of static analysis based on a representation of unique paths
(sequences of events) in the execution through a component or system. Control flow
analysis evaluates the integrity of control flow structures, looking for possible control flow
anomalies such as closed loops or logically unreachable process steps. |
|
|
Term
control flow graph |
|
Definition
An abstract representation of all possible sequences of events (paths) in
the execution through a component or system. |
|
|
Term
control flow path |
|
Definition
See path.
path: A sequence of events, e.g. executable statements, of a component or system from an
entry point to an exit point. |
|
|
Term
control flow testing |
|
Definition
An approach to structure-based testing in which test cases are designed
to execute specific sequences of events. Various techniques exist for control flow testing,
e.g., decision testing, condition testing, and path testing, that each have their specific
approach and level of control flow coverage.
See also decision testing, condition testing,
path testing. |
|
|
Term
convergence metric |
|
Definition
A metric that shows progress toward a defined criterion, e.g.,
convergence of the total number of tests executed to the total number of tests planned for
execution. |
|
|
Term
conversion testing |
|
Definition
Testing of software used to convert data from existing systems for use in
replacement systems. |
|
|
Term
corporate dashboard |
|
Definition
A dashboard-style representation of the status of corporate
performance data.
See also balanced scorecard, dashboard. |
|
|
Term
cost of quality |
|
Definition
The total costs incurred on quality activities and issues are often split into
prevention costs, appraisal costs, internal failure costs and external failure costs. |
|
|
Term
COTS |
|
Definition
Acronym for Commercial Off-The-Shelf software. See off-the-shelf software. |
|
|
Term
coverage |
|
Definition
The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite. |
|
|
Term
coverage analysis |
|
Definition
Measurement of achieved coverage to a specified coverage item during
test execution referring to predetermined criteria to determine whether additional testing is
required and if so, which test cases are needed. |
|
|
Term
coverage item |
|
Definition
An entity or property used as a basis for test coverage, e.g. equivalence
partitions or code statements. |
|
|
Term
coverage measurement tool |
|
Definition
See coverage tool.
coverage tool:
A tool that provides objective measures of what structural elements, e.g.
statements, branches have been exercised by a test suite. |
|
|
Term
critical success factor |
|
Definition
An element necessary for an organization or project to achieve its
mission. Critical success factors are the critical factors or activities required for ensuring
the success. |
|
|
Term
Critical Testing Processes |
|
Definition
A content-based model for test process improvement built
around twelve critical processes. These include highly visible processes, by which peers
and management judge competence and mission-critical processes in which performance
affects the company's profits and reputation. See also content-based model. |
|
|
Term
CTP |
|
Definition
See Critical Testing Processes.
Critical Testing Processes:
A content-based model for test process improvement built
around twelve critical processes. These include highly visible processes, by which peers
and management judge competence and mission-critical processes in which performance
affects the company's profits and reputation. See also content-based model. |
|
|
Term
custom software |
|
Definition
See bespoke software.
bespoke software:
Software developed specifically for a set of users or customers. The
opposite is off-the-shelf software. |
|
|
Term
custom tool |
|
Definition
A software tool developed specifically for a set of users or customers. |
|
|
Term
cyclomatic complexity |
|
Definition
The maximum number of linear, independent paths through a
program.
Cyclomatic complexity may be computed as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph or subroutine)
[After McCabe] |
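Note: a worked example of the formula above, using an invented function. Its control flow
graph has N = 3 nodes (the decision, the assignment, the return), L = 3 edges and P = 1
connected part, so cyclomatic complexity = L - N + 2P = 3 - 3 + 2 = 2, matching the two
independent paths (assignment executed or skipped):

    def absolute(x):
        if x < 0:
            x = -x
        return x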
|
|
Term
cyclomatic number |
|
Definition
See cyclomatic complexity.
The maximum number of linear, independent paths through a
program.
Cyclomatic complexity may be computed as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph or subroutine)
[After McCabe] |
|
|
Term
daily build |
|
Definition
A development activity whereby a complete system is compiled and linked every
day (often overnight), so that a consistent system is available at any time including all
latest changes. |
|
|
Term
dashboard |
|
Definition
A representation of dynamic measurements of operational performance for some
organization or activity, using metrics represented via metaphors such as visual ‘dials’,
‘counters’, and other devices resembling those on the dashboard of an automobile, so that
the effects of events or activities can be easily understood and related to operational goals.
See also corporate dashboard, scorecard. |
|
|
Term
data definition |
|
Definition
An executable statement where a variable is assigned a value. |
|
|
Term
data-driven testing |
|
Definition
A scripting technique that stores test input and expected results in a
table or spreadsheet, so that a single control script can execute all of the tests in the table.
Data-driven testing is often used to support the application of test execution tools such as
capture/playback tools. [Fewster and Graham] See also keyword-driven testing. |
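Note: a minimal Python sketch of the data-driven idea; the multiply function and the
table rows are invented for the example.

    def multiply(a, b):
        return a * b

    # Test inputs and expected results live in a table; one control
    # script executes every row.
    table = [
        (2, 3, 6),
        (0, 9, 0),
        (-4, 5, -20),
    ]

    for a, b, expected in table:
        assert multiply(a, b) == expected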
|
|
Term
data flow |
|
Definition
An abstract representation of the sequence and possible changes of the state of
data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer] |
|
|
Term
data flow analysis |
|
Definition
A form of static analysis based on the definition and usage of variables. |
|
|
Term
data flow coverage |
|
Definition
The percentage of definition-use pairs that have been exercised by a test
suite. |
|
|
Term
data flow testing |
|
Definition
A white box test design technique in which test cases are designed to
execute definition-use pairs of variables. |
|
|
Term
data integrity testing |
|
Definition
See database integrity testing.
database integrity testing:
Testing the methods and processes used to access and manage the
data(base), to ensure access methods, processes and data rules function as expected and
that during access to the database, data is not corrupted or unexpectedly deleted, updated or
created. |
|
|
Term
data quality |
|
Definition
An attribute of data that indicates correctness with respect to some pre-defined
criteria, e.g., business expectations, requirements on data integrity, data consistency. |
|
|
Term
database integrity testing |
|
Definition
Testing the methods and processes used to access and manage the
data(base), to ensure access methods, processes and data rules function as expected and
that during access to the database, data is not corrupted or unexpectedly deleted, updated or
created. |
|
|
Term
dd-path |
|
Definition
A path between two decisions of an algorithm, or two decision nodes of a
corresponding graph, that includes no other decisions. See also path. |
|
|
Term
dead code |
|
Definition
See unreachable code.
unreachable code: Code that cannot be reached and therefore is impossible to execute. |
|
|
Term
debugger |
|
Definition
See debugging tool.
debugging tool: A tool used by programmers to reproduce failures, investigate the state of
programs and find the corresponding defect. Debuggers enable programmers to execute
programs step by step, to halt a program at any program statement and to set and examine
program variables. |
|
|
Term
decision |
|
Definition
A program point at which the control flow has two or more alternative routes. A
node with two or more links to separate branches. |
|
|
Term
decision condition coverage |
|
Definition
The percentage of all condition outcomes and decision
outcomes that have been exercised by a test suite. 100% decision condition coverage
implies both 100% condition coverage and 100% decision coverage. |
|
|
Term
decision condition testing |
|
Definition
A white box test design technique in which test cases are
designed to execute condition outcomes and decision outcomes. |
|
|
Term
decision coverage |
|
Definition
The percentage of decision outcomes that have been exercised by a test
suite. 100% decision coverage implies both 100% branch coverage and 100% statement
coverage. |
|
|
Term
decision outcome |
|
Definition
The result of a decision (which therefore determines the branches to be
taken). |
|
|
Term
decision table |
|
Definition
A table showing combinations of inputs and/or stimuli (causes) with their
associated outputs and/or actions (effects), which can be used to design test cases. |
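Note: a small invented example of a decision table for a login rule; each column is a
combination of conditions that can become a test case:

    Conditions        Rule 1   Rule 2   Rule 3
    valid username      T        T        F
    valid password      T        F        -
    Actions
    grant access        X
    show error                   X        X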
|
|
Term
decision table testing |
|
Definition
A black box test design technique in which test cases are designed to
execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
[Veenendaal04] See also decision table. |
|
|
Term
decision testing |
|
Definition
A white box test design technique in which test cases are designed to
execute decision outcomes. |
|
|
Term
defect |
|
Definition
A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect, if
encountered during execution, may cause a failure of the component or system. |
|
|
Term
defect-based technique |
|
Definition
See defect-based test design technique.
defect-based test design technique:
A procedure to derive and/or select test cases targeted at
one or more defect categories, with tests being developed from what is known about the
specific defect category. See also defect taxonomy. |
|
|
Term
defect category |
|
Definition
See defect type.
defect type: An element in a taxonomy of defects. Defect taxonomies can be identified with
respect to a variety of considerations, including, but not limited to:
· Phase or development activity in which the defect is created, e.g., a specification error
or a coding error
· Characterization of defects, e.g., an “off-by-one” defect
· Incorrectness, e.g., an incorrect relational operator, a programming language syntax
error, or an invalid assumption
· Performance issues, e.g., excessive execution time, insufficient availability. |
|
|
Term
definition-use pair |
|
Definition
The association of a definition of a variable with the subsequent use of
that variable. Variable uses include computational (e.g. multiplication) or to direct the
execution of a path (“predicate” use). |
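Note: an illustrative Python fragment (variables invented) marking one definition and
its two subsequent uses:

    x = 5          # definition of x
    if x > 0:      # predicate use of x (directs the path taken)
        y = x * 2  # computational use of x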
|
|
Term
deliverable |
|
Definition
Any product that must be delivered to someone other than the product’s author. |
|
|
Term
Deming cycle |
|
Definition
An iterative four-step problem-solving process, (plan-do-check-act), typically
used in process improvement. [After Deming] |
|
|
Term
design-based testing |
|
Definition
An approach to testing in which test cases are designed based on the
architecture and/or detailed design of a component or system (e.g. tests of interfaces
between components or systems). |
|
|
Term
desk checking |
|
Definition
Testing of software or a specification by manual simulation of its execution.
See also static testing.
static testing: Testing of a software development artifact, e.g., requirements, design or code,
without execution of these artifacts, e.g., reviews or static analysis. |
|
|
Term
development testing |
|
Definition
Formal or informal testing conducted during the implementation of a
component or system, usually in the development environment by developers. [After IEEE
610] |
|
|
Term
deviation |
|
Definition
See incident.
incident: Any event occurring that requires investigation. [After IEEE 1008] |
|
|
Term
deviation report |
|
Definition
See incident report.
incident report: A document reporting on any event that occurred, e.g. during the testing,
which requires investigation. [After IEEE 829] |
|
|
Term
diagnosing (IDEAL) |
|
Definition
The phase within the IDEAL model where it is determined where one
is, relative to where one wants to be. The diagnosing phase consists of the activities:
characterize current and desired states and develop recommendations. See also IDEAL.
IDEAL: An organizational improvement model that serves as a roadmap for initiating,
planning, and implementing improvement actions. The IDEAL model is named for the five
phases it describes: initiating, diagnosing, establishing, acting, and learning. |
|
|
Term
dirty testing |
|
Definition
See negative testing.
negative testing:
Tests aimed at showing that a component or system does not work.
Negative testing is related to the testers’ attitude rather than a specific test approach or test
design technique, e.g. testing with invalid input values or exceptions. [After Beizer]. |
|
|
Term
documentation testing |
|
Definition
Testing the quality of the documentation, e.g. user guide or
installation guide. |
|
|
Term
domain |
|
Definition
The set from which valid input and/or output values can be selected. |
|
|
Term
domain analysis |
|
Definition
A black box test design technique that is used to identify efficient and
effective test cases when multiple variables can or should be tested together. It builds on
and generalizes equivalence partitioning and boundary value analysis. See also boundary
value analysis, equivalence partitioning. |
|
|
Term
driver |
|
Definition
A software component or test tool that replaces a component that takes care of the
control and/or the calling of a component or system. [After TMap] |
|
|
Term
dynamic analysis |
|
Definition
The process of evaluating behavior, e.g. memory performance, CPU
usage, of a system or component during execution. [After IEEE 610] |
|
|
Term
dynamic analysis tool |
|
Definition
A tool that provides run-time information on the state of the software
code. These tools are most commonly used to identify unassigned pointers, check pointer
arithmetic and to monitor the allocation, use and de-allocation of memory and to flag
memory leaks. |
|
|
Term
dynamic comparison |
|
Definition
Comparison of actual and expected results, performed while the
software is being executed, for example by a test execution tool. |
|
|
Term
effectiveness |
|
Definition
The capability of producing an intended result. See also efficiency.
efficiency:
(1) The capability of the software product to provide appropriate performance,
relative to the amount of resources used under stated conditions. [ISO 9126]
(2) The capability of a process to produce the intended outcome, relative to the amount of
resources used |
|
|
Term
efficiency |
|
Definition
efficiency:
(1) The capability of the software product to provide appropriate performance,
relative to the amount of resources used under stated conditions. [ISO 9126]
(2) The capability of a process to produce the intended outcome, relative to the amount of
resources used |
|
|
Term
efficiency testing |
|
Definition
The process of testing to determine the efficiency of a software product. |
|
|
Term
EFQM (European Foundation for Quality Management) excellence model |
|
Definition
A non-prescriptive
framework for an organisation's quality management system, defined and
owned by the European Foundation for Quality Management, based on five 'Enabling'
criteria (covering what an organisation does), and four 'Results' criteria (covering what an
organisation achieves). |
|
|
Term
elementary comparison testing |
|
Definition
A black box test design technique in which test cases are
designed to execute combinations of inputs using the concept of modified condition
decision coverage. [TMap] |
|
|
Term
embedded iterative development model |
|
Definition
A development lifecycle sub-model that applies an
iterative approach to detailed design, coding and testing within an overall sequential
model. In this case, the high level design documents are prepared and approved for the
entire project but the actual detailed design, code development and testing are conducted in
iterations. |
|
|
Term
emotional intelligence |
|
Definition
The ability, capacity, and skill to identify, assess, and manage the
emotions of one's self, of others, and of groups. |
|
|
Term
emulator |
|
Definition
A device, computer program, or system that accepts the same inputs and produces
the same outputs as a given system. [IEEE 610] See also simulator.
simulator: A device, computer program or system used during testing, which behaves or
operates like a given system when provided with a set of controlled inputs. [After IEEE
610, DO178b] See also emulator. |
|
|
Term
|
Definition
The set of generic and specific conditions for permitting a process to go
forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a
task from starting which would entail more (wasted) effort compared to the effort needed
to remove the failed entry criteria. [Gilb and Graham] |
|
|
Term
|
Definition
An executable statement or process step which defines a point at which a given
process is intended to begin. |
|
|
Term
|
Definition
See equivalence partition.
equivalence partition:
A portion of an input or output domain for which the behavior of a
component or system is assumed to be the same, based on the specification. |
|
|
Term
equivalence partition coverage |
|
Definition
The percentage of equivalence partitions that have been
exercised by a test suite. |
|
|
Term
equivalence partitioning |
|
Definition
A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed
to cover each partition at least once. |
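Note: a worked example with an invented requirement. For a field that accepts integers
from 1 to 100, the specification yields three partitions, each represented by one value:

    representatives = {
        "invalid_low": -7,    # any value below 1 is assumed to behave the same
        "valid": 50,          # any value in 1..100 is assumed to behave the same
        "invalid_high": 312,  # any value above 100 is assumed to behave the same
    }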
|
|
Term
error |
|
Definition
A human action that produces an incorrect result. |
|
|
Term
error guessing |
|
Definition
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
|
|
|
Term
error seeding |
|
Definition
See fault seeding.
fault seeding: The process of intentionally adding defects to those already in the component
or system for the purpose of monitoring the rate of detection and removal, and estimating
the number of remaining defects. Fault seeding is typically part of development (prerelease)
testing and can be performed at any test level (component, integration, or system).
[After IEEE 610] |
|
|
Term
error seeding tool |
|
Definition
See fault seeding tool.
fault seeding tool: A tool for seeding (i.e. intentionally inserting) faults in a component or
system. |
|
|
Term
error tolerance |
|
Definition
The ability of a system or component to continue normal operation despite
the presence of erroneous inputs. [After IEEE 610]. |
|
|
Term
establishing (IDEAL) |
|
Definition
The phase within the IDEAL model where the specifics of how an organization will reach its destination are planned. The establishing phase consists of the
activities: set priorities, develop approach and plan actions. See also IDEAL.
IDEAL: An organizational improvement model that serves as a roadmap for initiating,
planning, and implementing improvement actions. The IDEAL model is named for the five
phases it describes: initiating, diagnosing, establishing, acting, and learning.
|
|
|
Term
evaluation |
|
Definition
See testing.
testing: The process consisting of all lifecycle activities, both static and dynamic, concerned
with planning, preparation and evaluation of software products and related work products
to determine that they satisfy specified requirements, to demonstrate that they are fit for
purpose and to detect defects. |
|
|
Term
exception handling |
|
Definition
Behavior of a component or system in response to erroneous input, from
either a human user or from another component or system, or to an internal failure. |
|
|
Term
executable statement |
|
Definition
A statement which, when compiled, is translated into object code, and
which will be executed procedurally when the program is running and may perform an
action on data. |
|
|
Term
exercised |
|
Definition
A program element is said to be exercised by a test case when the input value
causes the execution of that element, such as a statement, decision, or other structural
element. |
|
|
Term
exhaustive testing |
|
Definition
A test approach in which the test suite comprises all combinations of
input values and preconditions. |
|
|
Term
exit criteria |
|
Definition
The set of generic and specific conditions, agreed upon with the stakeholders
for permitting a process to be officially completed. The purpose of exit criteria is to
prevent a task from being considered completed when there are still outstanding parts of
the task which have not been finished. Exit criteria are used to report against and to plan
when to stop testing. [After Gilb and Graham] |
|
|
Term
exit point |
|
Definition
An executable statement or process step which defines a point at which a given
process is intended to cease. |
|
|
Term
expected outcome |
|
Definition
See expected result.
expected result: The behavior predicted by the specification, or another source, of the
component or system under specified conditions. |
|
|
Term
experience-based technique |
|
Definition
See experience-based test design technique.
experience-based test design technique: Procedure to derive and/or select test cases based
on the tester’s experience, knowledge and intuition. |
|
|
Term
experience-based testing |
|
Definition
Testing based on the tester’s experience, knowledge and intuition. |
|
|
Term
exploratory testing |
|
Definition
An informal test design technique where the tester actively controls the
design of the tests as those tests are performed and uses information gained while testing to
design new and better tests. [After Bach] |
|
|
Term
|
Definition
A software engineering methodology used within agile
software development whereby core practices are programming in pairs, doing extensive
code review, unit testing of all code, and simplicity and clarity in code. See also agile
software development. |
|
|