Term
|
Definition
A flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system. |
|
|
Term
|
Definition
The process of finding, analyzing, and removing the causes of failures in software. |
|
|
Term
|
Definition
A human action that produces an incorrect result |
|
|
Term
|
Definition
A test approach in which the test suite comprises all combinations of input values and preconditions. |
|
|
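A quick count shows why this approach is impractical in all but trivial cases; a minimal sketch, with field sizes invented purely for illustration:

    # Hypothetical input form: one 8-character printable-ASCII field plus one 32-bit integer.
    ascii_printable = 95                       # printable ASCII characters
    text_field = ascii_printable ** 8          # ~6.6e15 possible values
    int_field = 2 ** 32                        # ~4.3e9 possible values

    total = text_field * int_field             # every combination of input values
    print(f"{total:.2e} test cases needed")    # ~2.8e25 -- far beyond any test budget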
Term
|
Definition
A set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. |
|
|
Term
|
Definition
Deviation of the component or system from its expected delivery, service, or result. |
|
|
Term
|
Definition
Separation of responsibilities, which encourages the accomplishment of objective testing. |
|
|
Term
|
Definition
A risk directly related to the test object (the software). |
|
|
Term
|
Definition
A risk related to management and control of the (test) project. Examples include lack of staffing, strict deadlines, and changing requirements. |
|
|
Term
|
Definition
The degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations. |
|
|
Term
|
Definition
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. |
|
|
Term
|
Definition
A factor that could result in future negative consequences; usually expressed as impact and likelihood. |
|
|
Term
|
Definition
An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process. |
|
|
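One common way to derive the risk levels this definition mentions is to score each product risk and prioritize by exposure; a minimal sketch, assuming a simple 1-5 scoring scale, with the risk items invented:

    # Risk level = likelihood x impact; the highest-exposure risks are tested first.
    product_risks = [
        # (risk item,                  likelihood 1-5, impact 1-5)
        ("payment calculated wrongly", 3,              5),
        ("report generation is slow",  4,              2),
        ("tooltip contains a typo",    2,              1),
    ]
    for name, likelihood, impact in sorted(
            product_risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"risk level {likelihood * impact:2d}: {name}")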
Term
|
Definition
All documents from which the requirements of a component can be inferred. |
|
|
Term
|
Definition
A chronological record of relevant details about the execution of tests. |
|
|
Term
|
Definition
A reason or purpose for designing and executing a test. |
|
|
Term
|
Definition
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation, and evaluation of software products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects. |
|
|
Term
|
Definition
Testing only shows the presence of defects.
Complete, exhaustive testing is impossible. |
|
|
Term
|
Definition
Effective testing begins with requirements |
|
|
Term
|
Definition
A few modules will have the most defects, and testing should be focused appropriately. |
|
|
Term
Principle
Pesticide Paradox |
|
Definition
Running the same tests over and over will eventually find no new defects; the defects they could detect have already been removed. |
|
|
Term
Testing is context dependent |
|
Definition
What you test and how thoroughly you test it varies depending on the risk of failure |
|
|
Term
Principle
Absence of errors fallacy |
|
Definition
Fixing all the located defects does not guarantee the software will work. |
|
|
Term
|
Definition
Test planning and control
Test analysis and design
Test implementation and execution
Evaluating exit criteria and reporting
Test closure activities |
|
|
Term
|
Definition
Public
Client and Employer
Product
Judgment
Management
Profession
Colleagues
Self |
|
|
Term
|
Definition
A skeletal or special-purpose implementation of a software component used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. |
|
|
Term
|
Definition
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. |
|
|
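The two harness components defined above are easiest to see together. A minimal sketch in which every name (apply_discount, the price lookup) is invented for illustration: the stub replaces a component that the code under test calls, while the driver stands in for the caller that does not exist yet.

    # Component under test: depends on a price lookup that it calls.
    def apply_discount(order_id, price_lookup):
        price = price_lookup(order_id)
        return price - price / 10 if price > 100 else price   # 10% off large orders

    # Stub: skeletal replacement for the called component -- canned answers, no real lookup.
    def price_service_stub(order_id):
        return {"A1": 200.0, "B2": 50.0}[order_id]

    # Driver: replaces the missing caller, controlling the calls and checking the results.
    def driver():
        assert apply_discount("A1", price_service_stub) == 180.0
        assert apply_discount("B2", price_service_stub) == 50.0
        print("component behaves as expected")

    driver()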
Term
|
Definition
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one. |
|
|
Term
|
Definition
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. |
|
|
Term
|
Definition
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. |
|
|
Term
|
Definition
Component testing
Integration testing
System testing
Acceptance testing |
|
|
Term
|
Definition
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases. |
|
|
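A minimal sketch of the cycle with an invented leap_year example: the test was written first and failed, then just enough code was written to make it pass.

    import unittest

    # Written second: just enough implementation to satisfy the test below.
    def leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Written first: running this before leap_year existed produced the expected failure.
    class LeapYearTest(unittest.TestCase):
        def test_leap_year_rules(self):
            self.assertTrue(leap_year(2000))     # divisible by 400 -> leap
            self.assertFalse(leap_year(1900))    # divisible by 100 only -> not leap
            self.assertTrue(leap_year(2024))
            self.assertFalse(leap_year(2023))

    if __name__ == "__main__":
        unittest.main()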
Term
|
Definition
The process of testing to determine the performance of a software product. |
|
|
Term
|
Definition
Testing to determine the maximum usable capacity of a system. |
|
|
Term
|
Definition
Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customer, or other authorized entity to determine whether or not to accept a system. |
|
|
Term
|
Definition
Testing based on an analysis of the specification of the functionality of a system. |
|
|
Term
|
Definition
A requirement that specifies a function that a component or system must perform. |
|
|
Term
|
Definition
Functional testing
Non-functional testing
Structural testing
Change-related testing |
|
|
Term
|
Definition
Testing the attributes of a component or system that do not relate to functionality |
|
|
Term
Structural Testing (white box) |
|
Definition
Testing based on an analysis of the internal structure of the component or system |
|
|
Term
|
Definition
The degree, expressed as a percentage, to which a specified coverage item has been exercised by testing. |
|
|
Term
|
Definition
Testing that reruns test cases that failed the last time they were run, in order to verify the success of corrective actions. |
|
|
Term
|
Definition
Testing the changes to an operational system, or the impact of a changed environment on an operational system. |
|
|
Term
|
Definition
Process of testing to determine the maintainability of a product |
|
|
Term
|
Definition
Assessment of change to the layers of development documentation, test documentation, and components, in order to implement a given change to specified requirements. |
|
|
Term
|
Definition
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made |
|
|
Term
|
Definition
Capability of the software product to interact with one or more specified components or systems. |
|
|
Term
|
Definition
The process of testing an integrated system to verify that it meets specified requirements. |
|
|
Term
|
Definition
Process of combining components or systems into larger assemblies |
|
|
Term
|
Definition
Testing performed to expose defects in the interactions between integrated components and systems |
|
|
Term
|
Definition
Testing that involves the execution of the software of a component or system. |
|
|
Term
|
Definition
The set of generic and specific conditions for permitting a process to go forward with a defined task. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria |
|
|
Term
|
Definition
A review not based on a formal (documented) procedure |
|
|
Term
|
Definition
A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher-level documentation. It is the most formal review technique and is therefore always based on a documented procedure. |
|
|
Term
|
Definition
The leader and main person responsible for an inspection or other review process. |
|
|
Term
|
Definition
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review, and walkthrough. |
|
|
Term
|
Definition
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. |
|
|
Term
|
Definition
The person involved in the review who identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process. |
|
|
Term
|
Definition
The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe should ensure that the logging form is readable and understandable. |
|
|
Term
|
Definition
Analysis of software development artifacts, e.g. requirements or code, carried out without execution of these software development artifacts. Static analysis is usually carried out by means of a supporting tool. |
|
|
Term
|
Definition
Testing of software development artifacts without execution of these artifacts. |
|
|
Term
|
Definition
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken |
|
|
Term
|
Definition
A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. |
|
|
Term
Black-box test design technique |
|
Definition
Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component without reference to its internal structure. |
|
|
Term
|
Definition
A black box test design technique in which test cases are designed based on boundary values. |
|
|
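For instance, given a hypothetical rule that valid ages run from 1 to 100 inclusive (the rule is invented for illustration), the technique selects values at each boundary and just outside it, rather than from the middle of the range:

    # Assumed requirement (invented): valid ages are 1..100 inclusive.
    def is_valid_age(age):
        return 1 <= age <= 100

    # Each valid boundary plus its invalid neighbour.
    boundary_cases = [(0, False), (1, True), (100, True), (101, False)]
    for value, expected in boundary_cases:
        assert is_valid_age(value) == expected, f"boundary failure at {value}"
    print("all boundary values pass")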
Term
|
Definition
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed. |
|
|
Term
|
Definition
A software tool that translates programs expressed in a high-order language into their machine language equivalents. |
|
|
Term
|
Definition
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain, and verify. |
|
|
Term
|
Definition
A sequence of events in the execution through a component or system |
|
|
Term
|
Definition
An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. |
|
|
Term
|
Definition
The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies 100% branch coverage and 100% statement coverage |
|
|
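The gap between the two measures shows up even in a tiny function; a sketch, with the function invented for illustration: one test can execute every statement while leaving one decision outcome untested.

    def grade(score):
        result = "pass"
        if score < 40:            # one decision, two outcomes
            result = "fail"
        return result

    assert grade(30) == "fail"    # executes all statements: 100% statement coverage,
                                  # but the False outcome never ran: 50% decision coverage
    assert grade(75) == "pass"    # second test exercises the False outcome -> 100%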
Term
|
Definition
A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. |
|
|
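A minimal sketch with an invented discount rule: each rule of the decision table combines the condition values (causes) and becomes one test case.

    # Invented rule: members get 10% off; orders over 100 get another 5% off.
    def discount_percent(is_member, total):
        percent = 0
        if is_member:
            percent += 10
        if total > 100:
            percent += 5
        return percent

    # Decision table: every combination of conditions -> expected effect.
    table = [
        # is_member, total, expected percent
        (True,  150, 15),
        (True,   50, 10),
        (False, 150,  5),
        (False,  50,  0),
    ]
    for is_member, total, expected in table:
        assert discount_percent(is_member, total) == expected
    print("all decision table rules verified")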
Term
|
Definition
A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once. |
|
|
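Reusing the hypothetical 1-100 age rule from the boundary value sketch above: the input domain splits into three partitions, and one representative from each suffices in principle.

    def is_valid_age(age):
        return 1 <= age <= 100

    # Three equivalence partitions, one representative each (chosen away from the edges).
    representatives = [(-5, False),   # partition: below the valid range
                       (50, True),    # partition: inside the valid range
                       (250, False)]  # partition: above the valid range
    for value, expected in representatives:
        assert is_valid_age(value) == expected
    print("one representative per partition verified")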
Term
|
Definition
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them. |
|
|
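A sketch of what such tests look like in practice, against an invented parse_quantity function: each probe targets a mistake experience says is common (empty input, stray whitespace, negative numbers).

    def parse_quantity(text):
        # Invented component under test: converts user input to a non-negative int.
        value = int(text.strip())
        if value < 0:
            raise ValueError("quantity cannot be negative")
        return value

    # Error guessing: probe the inputs that are mishandled most often.
    assert parse_quantity(" 7 ") == 7       # stray whitespace
    for bad in ["", "-3"]:                  # empty input, negative number
        try:
            parse_quantity(bad)
            raise AssertionError(f"{bad!r} was accepted")
        except ValueError:
            pass                            # rejected as expected
    print("guessed error cases are handled")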
Term
Experience-based test design technique |
|
Definition
Procedure to derive and/or select test cases based on the tester's experience, knowledge, and intuition |
|
|
Term
|
Definition
An informal test design technique where the tester actively controls the design of the tests as those tests are performed, and uses information gained while testing to design new and better tests. |
|
|
Term
|
Definition
A directed and focused attempt to evaluate the quality, especially the reliability, of a test object by attempting to force specific failures to occur. |
|
|
Term
|
Definition
A black box test design technique in which test cases are designed to execute valid and invalid state transitions. |
|
|
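A minimal sketch using an invented two-state door model: the tests drive each valid transition and one invalid one.

    class Door:
        # Invented state machine: 'closed' <-> 'open' via 'open'/'close' events.
        TRANSITIONS = {("closed", "open"): "open", ("open", "close"): "closed"}

        def __init__(self):
            self.state = "closed"

        def handle(self, event):
            key = (self.state, event)
            if key not in self.TRANSITIONS:
                raise RuntimeError(f"invalid transition: {event!r} while {self.state}")
            self.state = self.TRANSITIONS[key]

    door = Door()
    door.handle("open")              # valid: closed -> open
    assert door.state == "open"
    door.handle("close")             # valid: open -> closed
    assert door.state == "closed"
    try:
        door.handle("close")         # invalid: 'close' while already closed
    except RuntimeError:
        print("invalid transition correctly rejected")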
Term
|
Definition
The percentage of executable statements that have been exercised by a test suite. |
|
|
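As a worked example of the percentage (numbers invented; tools such as coverage.py count this automatically):

    # A suite executes 34 of a module's 40 executable statements:
    executed, total = 34, 40
    print(f"statement coverage = {executed / total:.0%}")   # 85%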
Term
White-box test design technique |
|
Definition
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system. |
|
|
Term
|
Definition
A black box test design technique in which test cases are designed to execute scenarios of use cases |
|
|
Term
|
Definition
The ability to identify related items in documentation and software, such as requirements with associated tests. |
|
|
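One lightweight way to make that linkage concrete (the requirement IDs and test names are invented for illustration): keep an explicit map between requirements and the tests that exercise them, so it can be traced in either direction.

    # Forward: requirement -> tests; invert it for backward tracing.
    REQUIREMENT_TRACE = {
        "REQ-017": ["test_valid_login", "test_locked_account"],
        "REQ-018": ["test_password_reset_email"],
    }

    print(REQUIREMENT_TRACE["REQ-017"])          # which tests cover REQ-017?

    failing_test = "test_password_reset_email"   # which requirement is at risk?
    print([req for req, tests in REQUIREMENT_TRACE.items() if failing_test in tests])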
Term
|
Definition
Artifacts produced during the test process that are required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. |
|
|
Term
|
Definition
Commonly used to refer to a test procedure specification, especially an automated one |
|
|
Term
|
Definition
A set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition. |
|
|
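Written out for a hypothetical transfer() function (all names and values invented), the four parts of the definition look like this:

    def transfer(accounts, source, target, amount):
        # Minimal stand-in implementation so the sketch runs end to end.
        accounts[source] -= amount
        accounts[target] += amount

    def test_transfer_between_accounts():
        # Execution precondition: both accounts exist with known balances.
        accounts = {"A": 100, "B": 20}
        # Input values: move 30 from account A to account B.
        transfer(accounts, source="A", target="B", amount=30)
        # Expected results: the money moved, nothing created or destroyed.
        assert accounts == {"A": 70, "B": 50}
        # Execution postcondition: the total balance is unchanged.
        assert sum(accounts.values()) == 120

    test_transfer_between_accounts()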
Term
|
Definition
(See test design specification: a document specifying test conditions for a test item.) The process of transforming general testing objectives into tangible test conditions and test cases. |
|
|
Term
|
Definition
Procedure used to derive and/or select test cases. |
|
|