Term
Objectives of Testing (K1) Identify typical objectives of testing |
|
Definition
To evaluate work products such as requirements, user stories, design, and code
To verify whether all specified requirements have been fulfilled
To validate whether the test object is complete and works as the users and other stakeholders expect
To build confidence in the level of quality of the test object
To prevent defects
To find failures and defects
To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
To reduce the level of risk of inadequate software quality (e.g., previously undetected failures occurring in operation)
To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards |
|
|
Term
Testing and Debugging (K2) Differentiate testing from debugging |
|
Definition
Testing and debugging are different.
Executing tests can show failures that are caused by defects in the software. Debugging is the development activity that finds, analyzes, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects.
In some cases, testers are responsible for the initial test and the final confirmation test, while developers do the debugging and associated component testing. |
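The test → debug → confirmation-test cycle above can be sketched in a few lines. This is an illustrative example only; the function names and the defect are invented.

```python
# Hypothetical example of the cycle: a defect, the failure a test exposes,
# and the confirmation test run after debugging. All names are invented.

def discount_buggy(price, percent):
    return price - price * percent / 10   # defect: should divide by 100

def discount_fixed(price, percent):
    return price - price * percent / 100  # debugging produced this fix

# Test execution exposes the failure caused by the defect:
assert discount_buggy(200.0, 10) != 180.0   # expectation not met -> failure

# Confirmation testing checks whether the fix resolved the defect:
assert discount_fixed(200.0, 10) == 180.0
```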
|
|
Term
Why is Testing Necessary? (K2) Give examples of why testing is necessary |
|
Definition
Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation.
When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems.
In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.
Examples:
1. Having testers involved in requirements reviews or user story refinement could detect defects in these work products.
2. Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it.
3. Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it.
4. Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed. |
|
|
Term
Quality Assurance and Testing (K2) Describe the relationship between testing and quality assurance and give examples of how testing contributes to higher quality |
|
Definition
Quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together.
Quality management includes all activities that direct and control an organization with regard to quality. Among other activities, quality management includes both quality assurance and quality control.
Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. |
|
|
Term
Errors, Defects, and Failures (K2) Distinguish between error, defect, and failure |
|
Definition
A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code. If a defect in the code is executed, this may cause a failure.
In addition to failures caused by defects in the code, failures can also be caused by environmental conditions.
Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other testware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False negatives are tests that do not detect defects that they should have detected; false positives are results reported as defects that are not actually defects. |
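The error → defect → failure chain can be made concrete with a small sketch. The function and the programmer's mistake are invented for illustration.

```python
# Sketch of the error -> defect -> failure chain (names are illustrative).
# The programmer's error (a wrong assumption about indexing) introduced a
# defect in the code; the defect causes a failure only when it is executed.

def last_item(items):
    return items[len(items)]  # defect: off-by-one, should be len(items) - 1

# The defect stays latent until execution reaches the faulty statement:
try:
    last_item([1, 2, 3])
    failure_observed = False
except IndexError:            # the failure observed during test execution
    failure_observed = True

assert failure_observed       # executing the defect produced a failure
```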
|
|
Term
Defects, Root Causes and Effects (K2) Distinguish between the root cause of a defect and its effects |
|
Definition
The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. |
|
|
Term
Seven Testing Principles (K2) Explain the seven testing principles |
|
Definition
1. Testing shows the presence of defects, not their absence
2. Exhaustive testing is impossible
3. Early testing saves time and money
4. Defects cluster together
5. Beware of the pesticide paradox
6. Testing is context dependent
7. Absence-of-errors is a fallacy |
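Principle 2 can be backed with quick arithmetic: even a function taking just two 32-bit integers has far too many input combinations to test them all, which is why risk analysis and test techniques are used to focus effort instead.

```python
# Why exhaustive testing is impossible: input-space size for a function
# of two 32-bit integer arguments.

inputs_per_argument = 2 ** 32
total_cases = inputs_per_argument ** 2     # 2**64 input combinations

# Even at an (optimistic) one billion test executions per second:
seconds = total_cases / 1_000_000_000
years = seconds / (60 * 60 * 24 * 365)
assert years > 500                         # centuries of pure runtime
```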
|
|
Term
Test Process in Context (K2) Explain the impact of context on the test process |
|
Definition
The specific software test process in any given situation depends on many factors:
Software development lifecycle model and project methodologies being used
Test levels and test types being considered
Product and project risks
Business domain
Operational constraints, including but not limited to:
o Budgets and resources
o Timescales
o Complexity
o Contractual and regulatory requirements
Organizational policies and practices
Required internal and external standards |
|
|
Term
Test Activities and Tasks (K2) Describe the test activities and respective tasks within the test process |
|
Definition
A test process consists of the following groups of activities:
Test planning
Test monitoring and control
Test analysis
Test design
Test implementation
Test execution
Test completion
Each group of activities is composed of constituent activities. Each activity within each group of activities in turn may consist of multiple individual tasks, which would vary from one project or release to another. |
|
|
Term
Test Activities and Tasks (Test planning) (K2) Describe the test activities and respective tasks within the test process |
|
Definition
Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline).
Test plans may be revisited based on feedback from monitoring and control activities. |
|
|
Term
Test Activities and Tasks (Test monitoring & control) (K2) Describe the test activities and respective tasks within the test process |
|
Definition
Test monitoring involves the on-going comparison of actual progress against the test plan using any test monitoring metrics defined in the test plan.
Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time).
Test monitoring and control are supported by the evaluation of exit criteria (definition of done). |
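The evaluation of exit criteria mentioned above can be sketched as a simple check. The metric names and thresholds here are invented for illustration; real exit criteria are defined in the test plan.

```python
# Hedged sketch: evaluating exit criteria during test monitoring and
# control. Metric names and thresholds are hypothetical examples.

exit_criteria = {"min_pass_rate": 0.95, "max_open_critical_defects": 0}

def exit_criteria_met(pass_rate, open_critical_defects):
    """Return True when all exit criteria (definition of done) are met."""
    return (pass_rate >= exit_criteria["min_pass_rate"]
            and open_critical_defects <= exit_criteria["max_open_critical_defects"])

assert not exit_criteria_met(pass_rate=0.90, open_critical_defects=0)  # keep testing
assert exit_criteria_met(pass_rate=0.97, open_critical_defects=0)      # done
```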
|
|
Term
Test Activities and Tasks (Test analysis) (K2) Describe the test activities and respective tasks within the test process |
|
Definition
During test analysis, the test basis is analyzed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.
Test analysis includes the following major activities:
Analyzing the test basis appropriate to the test level being considered, for example:
o Requirement specifications, such as business requirements, functional requirements, system requirements, user stories, epics, use cases, or similar work products that specify desired functional and non-functional component or system behavior
o Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flows, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications, or similar work products that specify component or system structure
o The implementation of the component or system itself, including code, database metadata and queries, and interfaces
o Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system
Identifying features and sets of features to be tested
Defining and prioritizing test conditions for each feature based on analysis of the test basis, and considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risks
Capturing bi-directional traceability between each element of the test basis and the associated test conditions |
|
|
Term
Test Activities and Tasks (Test design) (K2) Describe the test activities and respective tasks within the test process |
|
Definition
During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”
Test design includes the following major activities:
Designing and prioritizing test cases and sets of test cases
Identifying necessary test data to support test conditions and test cases
Designing the test environment and identifying any required infrastructure and tools
Capturing bi-directional traceability between the test basis, test conditions, test cases, and test procedures |
|
|
Term
Test Activities and Tasks (Test implementation) (K2) Describe the test activities and respective tasks within the test process |
|
Definition
During test implementation, the testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures.
So, test design answers the question “how to test?” while test implementation answers the question “do we now have everything in place to run the tests?”
Test implementation includes the following major activities:
Developing and prioritizing test procedures, and, potentially, creating automated test scripts
Creating test suites from the test procedures and (if any) automated test scripts
Arranging the test suites within a test execution schedule in a way that results in efficient test execution
Building the test environment (including, potentially, test harnesses, service virtualization, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
Preparing test data and ensuring it is properly loaded in the test environment
Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites
Test design and test implementation tasks are often combined. |
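The activity of creating test suites from test procedures can be illustrated with Python's standard unittest library; the procedures and their checks below are placeholders.

```python
# Sketch of "creating test suites from test procedures" using Python's
# standard unittest library. The two procedures are hypothetical.
import unittest

class LoginProcedure(unittest.TestCase):      # hypothetical test procedure
    def test_valid_credentials(self):
        self.assertTrue(True)                 # placeholder check

class CheckoutProcedure(unittest.TestCase):   # hypothetical test procedure
    def test_empty_cart_rejected(self):
        self.assertTrue(True)                 # placeholder check

# Arrange the procedures into a suite in the intended execution order:
suite = unittest.TestSuite()
loader = unittest.defaultTestLoader
suite.addTests(loader.loadTestsFromTestCase(LoginProcedure))
suite.addTests(loader.loadTestsFromTestCase(CheckoutProcedure))

result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```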
|
|
Term
Test Activities and Tasks (Test execution) (K2) Describe the test activities and respective tasks within the test process |
|
Definition
During test execution, test suites are run in accordance with the test execution schedule.
Test execution includes the following major activities:
Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
Executing tests either manually or by using test execution tools
Comparing actual results with expected results
Analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives also may occur)
Reporting defects based on the failures observed
Logging the outcome of test execution (e.g., pass, fail, blocked)
Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)
Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results. |
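The bookkeeping side of the activities above (comparing actual with expected results and logging an outcome per test case) can be sketched as follows; all identifiers are invented.

```python
# Minimal sketch of test execution bookkeeping: comparing actual results
# with expected results and logging an outcome per test case.

def run_case(case_id, test_fn, expected):
    try:
        actual = test_fn()
    except Exception:
        return (case_id, "blocked")    # e.g. test could not complete
    return (case_id, "pass" if actual == expected else "fail")

log = [
    run_case("TC-01", lambda: 2 + 2, 4),   # matches expected -> pass
    run_case("TC-02", lambda: 2 + 2, 5),   # mismatch -> fail
    run_case("TC-03", lambda: 1 / 0, 1),   # raises -> blocked
]
assert [outcome for _, outcome in log] == ["pass", "fail", "blocked"]
```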
|
|
Term
Test Activities and Tasks (Test completion) (K2) Describe the test activities and respective tasks within the test process |
|
Definition
Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information.
Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished (e.g., as part of a retrospective meeting), a test level is completed, or a maintenance release has been completed.
Test completion includes the following major activities:
Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
Creating a test summary report to be communicated to stakeholders
Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for later reuse
Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
Using the information gathered to improve test process maturity |
|
|
Term
Test Work Products (Test planning work products) (K2) Differentiate the work products that support the test process |
|
Definition
Test planning work products typically include one or more test plans. The test plan includes information about the test basis, to which the other test work products will be related via traceability information, as well as exit criteria (or definition of done) which will be used during test monitoring and control. |
|
|
Term
Test Work Products (Test monitoring and control work products) (K2) Differentiate the work products that support the test process |
|
Definition
Test monitoring and control work products typically include various types of test reports, including test progress reports and test summary reports (produced at various completion milestones).
All test reports should provide audience-relevant details about the test progress as of the date of the report, including summarizing the test execution results once those become available.
Test monitoring and control work products should also address project management concerns, such as task completion, resource allocation and usage, and effort. |
|
|
Term
Test Work Products (Test analysis work products) (K2) Differentiate the work products that support the test process |
|
Definition
Test analysis work products include defined and prioritized test conditions, each of which is ideally bidirectionally traceable to the specific element(s) of the test basis it covers.
For exploratory testing, test analysis may involve the creation of test charters.
Test analysis may also result in the discovery and reporting of defects in the test basis. |
|
|
Term
Test Work Products (Test design work products) (K2) Differentiate the work products that support the test process |
|
Definition
Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. It is often a good practice to design high-level test cases, without concrete values for input data and expected results.
Test design also results in the design and/or identification of the necessary test data, the design of the test environment, and the identification of infrastructure and tools, though the extent to which these results are documented varies significantly.
Test conditions defined in test analysis may be further refined in test design. |
|
|
Term
Test Work Products (Test implementation work products) (K2) Differentiate the work products that support the test process |
|
Definition
Test implementation work products include:
Test procedures and the sequencing of those test procedures
Test suites
A test execution schedule
Test implementation also may result in the creation and verification of test data and the test environment. |
|
|
Term
Test Work Products (Test execution work products) (K2) Differentiate the work products that support the test process |
|
Definition
Test execution work products include:
Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
Defect reports
Documentation about which test item(s), test object(s), test tools, and testware were involved in the testing |
|
|
Term
Test Work Products (Test completion work products) (K2) Differentiate the work products that support the test process |
|
Definition
Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations (e.g., following a project Agile retrospective), change requests or product backlog items, and finalized testware. |
|
|
Term
Traceability between the Test Basis and Test Work Products (K2) Explain the value of maintaining traceability between the test basis and test work products |
|
Definition
In order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between each element of the test basis and the various test work products associated with that element.
Traceability supports:
Analyzing the impact of changes
Making testing auditable
Meeting IT governance criteria
Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests)
Relating the technical aspects of testing to stakeholders in terms that they can understand
Providing information to assess product quality, process capability, and project progress against business goals |
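Bidirectional traceability can be represented as two consistent mappings, which directly supports the uses listed above (impact analysis, audits, progress reports). The requirement and test-case IDs are invented.

```python
# Sketch of bidirectional traceability as two mappings (IDs are invented):
# from test-basis elements (requirements) to test cases, and the reverse.

req_to_tests = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],                    # requirement with pending tests
}

# Derive the reverse direction so both lookups stay consistent:
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

# Impact analysis: which test cases must be re-run if REQ-1 changes?
assert req_to_tests["REQ-1"] == ["TC-01", "TC-02"]
# Audit: which requirement does TC-03 cover?
assert test_to_reqs["TC-03"] == ["REQ-2"]
# Progress reporting: requirements with no associated tests yet
untested = [r for r, tcs in req_to_tests.items() if not tcs]
assert untested == ["REQ-3"]
```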
|
|
Term
The Psychology of Testing (K1) Identify the psychological factors that influence the success of testing |
|
Definition
Identifying defects or failures may be perceived as criticism of the product and of its author. An element of human psychology called confirmation bias can make it difficult to accept information that disagrees with currently held beliefs.
For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect.
In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news.
As a result of these psychological factors, some people may perceive testing as a destructive activity, even though it contributes greatly to project progress and product quality.
To try to reduce these perceptions, information about defects and failures should be communicated in a constructive way. This way, tensions between the testers and the analysts, product owners, designers, and developers can be reduced. |
|
|
Term
The Psychology of Testing (K2) Explain the difference between the mindset required for test activities and the mindset required for development activities |
|
Definition
A mindset reflects an individual’s assumptions and preferred methods for decision making and problem-solving.
A tester’s mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester’s mindset tends to grow and mature as the tester gains experience.
A developer’s mindset may include some of the elements of a tester’s mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult to find mistakes in their own work. |
|
|