Term
| Testing Methods: Black Box Testing |
|
Definition
| Testing without reference to the internal workings of the component. The goal is to test how well the component conforms to its requirements. |
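A minimal sketch of the idea in Python, assuming a hypothetical `leap_year` function and requirement; the test cases are derived from the stated requirement alone, never from the implementation:

```python
# Hypothetical requirement: leap_year(y) is True for years divisible by 4,
# except century years, which must also be divisible by 400.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box tests: written from the requirement, with no reference to
# how leap_year works internally.
assert leap_year(2024) is True    # divisible by 4
assert leap_year(1900) is False   # century year not divisible by 400
assert leap_year(2000) is True    # century year divisible by 400
assert leap_year(2023) is False   # not divisible by 4
print("black-box checks passed")
```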
|
|
Term
| Testing Methods: White Box Testing |
|
Definition
| Also called Glass box testing. This is testing done on the internal workings of a program. While it may uncover many errors, it might not detect missing requirements. |
|
|
Term
| Testing Methods: Gray Box Testing |
|
Definition
| This involves having knowledge of internal data structures for the purpose of designing tests, while executing those tests at the user, or black-box, level. |
|
|
Term
| Testing Methods: Static vs Dynamic Testing |
|
Definition
| Static testing examines the code without executing it, essentially proofreading; its purpose is to verify. Dynamic testing is performed while the program is running; its purpose is to validate. |
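As a rough illustration, assuming Python as the language under test: the static check below inspects source code without running it, while the dynamic check executes it and asserts on its behavior.

```python
import ast

source = "def double(x):\n    return x * 2\n"

# Static testing (verification): parse the source and inspect it
# without executing anything.
tree = ast.parse(source)
assert any(isinstance(node, ast.FunctionDef) and node.name == "double"
           for node in ast.walk(tree))

# Dynamic testing (validation): actually run the code and check
# that it behaves as intended.
namespace = {}
exec(source, namespace)
assert namespace["double"](21) == 42
print("static and dynamic checks passed")
```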
|
|
Term
| Testing Levels: Unit Testing |
|
Definition
| Also known as component testing. It verifies the functionality of a specific section of code. |
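A minimal sketch using Python's standard `unittest` module; the `word_count` function is a hypothetical unit under test:

```python
import unittest

def word_count(text: str) -> int:
    """The unit under test: one small, isolated piece of functionality."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```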
|
|
Term
| Testing Levels: Integration Testing |
|
Definition
| This type seeks to verify the interfaces and interactions between components against a software design. Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system. |
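A small sketch of the idea, with two hypothetical components that have each passed unit testing in isolation; the integration test checks that the output of one actually fits the input expectations of the other:

```python
import unittest

def parse_csv_line(line: str) -> list[str]:
    return [field.strip() for field in line.split(",")]

def total_quantity(rows: list[list[str]]) -> int:
    return sum(int(row[1]) for row in rows)

class CsvPipelineIntegrationTest(unittest.TestCase):
    def test_parser_output_feeds_aggregator(self):
        # The interaction under test: the parser's output format must
        # match what the aggregator expects.
        lines = ["apples, 3", "pears, 5"]
        rows = [parse_csv_line(line) for line in lines]
        self.assertEqual(total_quantity(rows), 8)

if __name__ == "__main__":
    unittest.main()
```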
|
|
Term
| Testing Levels: Component Interface Testing |
|
Definition
| This checks the handling of data passed between various units beyond full integration testing between those units. This is a type of black box testing. |
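One way to picture it, using a hypothetical producer and consumer that exchange a record: the test focuses on the shape of the data crossing the boundary rather than on either unit's internals.

```python
# Hypothetical stand-ins for two units that exchange data.
def produce_order() -> dict:
    return {"id": 17, "total": 9.99}

def bill_order(order: dict) -> str:
    return f"Order {order['id']}: ${order['total']:.2f}"

# Interface test: verify the keys and types of the data passed between
# the units, then confirm the consumer accepts it.
record = produce_order()
assert set(record) >= {"id", "total"}
assert isinstance(record["id"], int)
assert isinstance(record["total"], float)
assert bill_order(record) == "Order 17: $9.99"
print("interface checks passed")
```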
|
|
Term
| Testing Levels: System Testing |
|
Definition
| Also called end-to-end testing, this tests a completely integrated system to verify that it meets its requirements. |
|
|
Term
| Testing Levels: Operational Acceptance Testing |
|
Definition
| Determines the operational readiness of a product (whether it can be released). It is a common type of non-functional software testing. |
|
|
Term
| Testing Types: Installation Testing |
|
Definition
| Assures that the system is installed correctly and works on the actual customer's hardware. |
|
|
Term
| Testing Types: Compatibility Testing |
|
Definition
| Tests compatibility with other application software, operating systems (old and new), etc. |
|
|
Term
| Testing Types: Smoke and Sanity Testing |
|
Definition
| Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test. |
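A tiny smoke-test sketch; here Python's bundled `sqlite3` module stands in for the system under test, where a real build verification test would target your own application's entry point:

```python
import sqlite3

def test_smoke():
    # Minimal attempt to operate the software at all: can it start,
    # and can it perform one trivial operation?
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (x)")
    conn.execute("INSERT INTO t VALUES (1)")
    assert conn.execute("SELECT x FROM t").fetchone() == (1,)
    conn.close()

test_smoke()
print("smoke test passed")
```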
|
|
Term
| Testing Types: Regression Testing |
|
Definition
| This is a type of negative testing. It tries to break the software by checking whether old bugs that have been fixed have come back, and consists of re-running previous sets of test cases. |
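A sketch of the pattern, with a hypothetical bug number and function: the test pins the old defect so that re-running the suite catches any reappearance.

```python
import unittest

def normalize_phone(raw: str) -> str:
    # Hypothetical bug #412: the leading "+" used to be stripped along
    # with the other punctuation. The fix below preserves it.
    digits = "".join(ch for ch in raw if ch.isdigit())
    return ("+" + digits) if raw.strip().startswith("+") else digits

class RegressionTests(unittest.TestCase):
    def test_bug_412_plus_prefix_preserved(self):
        # Re-run on every change so the old defect cannot quietly return.
        self.assertEqual(normalize_phone("+1 (555) 010-2000"), "+15550102000")

if __name__ == "__main__":
    unittest.main()
```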
|
|
Term
| Testing Types: Acceptance Testing |
|
Definition
| Testing conducted to enable a client to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria. |
|
|
Term
| Testing Types: Alpha Testing |
|
Definition
| Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically performed by end-users or others, not by programmers or testers. |
|
|
Term
| Testing Types: Beta Testing |
|
Definition
| Testing performed when development and testing are essentially complete and final bugs and problems need to be found before final release. Typically performed by end-users or others, not by programmers or testers. |
|
|
Term
| Testing Types: Functional vs Non Functional Testing |
|
Definition
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance characteristics, behavior under certain constraints, or security. Testing will determine the breaking point: the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. |
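A compact illustration of the contrast, with an arbitrary 0.5-second budget standing in for a real performance requirement:

```python
import time

def search(items: list[int], target: int) -> bool:
    return target in items

data = list(range(100_000))

# Functional test: does this particular feature work?
assert search(data, 99_999) is True
assert search(data, -1) is False

# Non-functional test: a quality attribute rather than a feature.
# The 0.5 s threshold here is illustrative, not a real requirement.
start = time.perf_counter()
search(data, -1)
assert time.perf_counter() - start < 0.5
print("functional and non-functional checks passed")
```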
|
|
Term
| Testing Types: Destructive Testing |
|
Definition
| This type of testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. |
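A short sketch: the hypothetical `parse_age` function is fed deliberately invalid inputs, and the test passes only if every one of them is rejected cleanly.

```python
import unittest

def parse_age(value: str) -> int:
    """Robust input handling: reject anything that is not a sane age."""
    age = int(value)  # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

class DestructiveTests(unittest.TestCase):
    def test_rejects_invalid_inputs(self):
        for bad in ["", "abc", "-5", "999", "12.5"]:
            with self.assertRaises(ValueError):
                parse_age(bad)

if __name__ == "__main__":
    unittest.main()
```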
|
|
Term
| Testing Types: Software Performance Testing |
|
Definition
| Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as load testing. |
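A toy load-testing sketch using a thread pool to simulate concurrent users; `handle_request` is a stand-in for the operation under load, which in practice would be driven by a dedicated tool against the deployed system:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> str:
    time.sleep(0.01)  # stand-in for real work
    return f"ok:{user_id}"

SIMULATED_USERS = 50
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
    results = list(pool.map(handle_request, range(SIMULATED_USERS)))
elapsed = time.perf_counter() - start

assert all(r.startswith("ok") for r in results)
print(f"{SIMULATED_USERS} concurrent requests served in {elapsed:.2f}s")
```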
|
|
Term
| Testing Types: Usability Testing |
|
Definition
| This is used to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application. |
|
|
Term
| Testing Types: Accessibility Testing |
|
Definition
| This testing is done to see whether the software can be used by people with disabilities, such as those who are blind or deaf. |
|
|
Term
| Testing Types: Security Testing |
|
Definition
| Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level. |
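A minimal sketch with an entirely hypothetical role model: positive checks confirm authorized access, negative checks confirm that access above a user's level (or from an unknown role) is refused.

```python
# Hypothetical role model: a role may access its own level and below.
LEVELS = {"guest": 0, "user": 1, "admin": 2}

def can_access(role: str, required_level: int) -> bool:
    return LEVELS.get(role, -1) >= required_level

# Positive tests: authorized personnel reach their level's functions.
assert can_access("admin", 2)
assert can_access("user", 1)

# Negative tests: anything beyond the role's level must be refused,
# including requests from unknown roles.
assert not can_access("user", 2)
assert not can_access("guest", 1)
assert not can_access("intruder", 0)
print("security checks passed")
```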
|
|
Term
| Testing Workflow: Traditional Waterfall development model |
|
Definition
| Testing is a separate phase that begins only after the implementation phase is complete; each phase must finish before the next begins. |
|
Term
| Testing Workflow: Agile or Extreme Development Model |
|
Definition
Also called Test-Driven Development: the software engineers write the test cases first, knowing that they will initially fail.
[image] |
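A sketch of the red-green rhythm, with a hypothetical `slugify` function: the test is written first and fails until the minimal implementation is added.

```python
import unittest

# Step 1 (red): write the test first. With slugify not yet written,
# this suite fails, which is the expected starting point.
# Step 2 (green): add the minimal implementation that makes it pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```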
|
|
Term
| Testing Workflow: Top down and bottom up |
|
Definition
| Bottom-up testing starts with the lowest-level components, using test drivers in place of the modules above them; top-down testing starts with the highest-level modules, using stubs in place of the modules below them. |
|
Term
| Testing Workflow: A sample Testing cycle |
|
Definition
- Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
- Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
- Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
- Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
- Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
- Test result analysis: Also called defect analysis; done by the development team, usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e. the software is found to be working properly) or deferred to be dealt with later.
- Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. Also known as resolution testing.
- Regression testing: It is common to have a small test program built from a subset of tests, run for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole still works correctly.
- Test Closure: Once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects.
|
|
|
Term
| Defect |
Definition
| A defect is an error in design. |
|
|
Term
| End to End Testing |
Definition
| This type of testing is a methodology used to test whether the flow of an application is performing as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between various system components and systems. |
|
|
Term
| Test Case |
Definition
| This is a commonly used term for a specific test, usually the smallest unit of testing. A test case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc. It is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. |
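One possible way to record those elements in code, with purely illustrative field names rather than any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseRecord:
    case_id: str
    requirement: str
    preconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_outcome: str = ""

tc = TestCaseRecord(
    case_id="TC-101",
    requirement="REQ-7: expired passwords are rejected",
    preconditions=["user account exists", "password older than 90 days"],
    steps=["open login page", "submit credentials"],
    inputs={"username": "alice", "password": "old-secret"},
    expected_outcome="login refused with a 'password expired' message",
)
print(tc.case_id, "->", tc.expected_outcome)
```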
|
|
Term
| Testing Types: Ad Hoc Testing |
|
Definition
| A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Also called monkey testing. This can be a type of negative testing. |
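A crude monkey-testing sketch against a hypothetical query-string parser: random junk is thrown at it, and any uncaught exception counts as a defect.

```python
import random
import string

def parse_query(q: str) -> dict:
    """The unit being monkey-tested: a tiny key=value&key=value parser."""
    out = {}
    for part in q.split("&"):
        if "=" in part:
            key, _, value = part.partition("=")
            out[key] = value
    return out

random.seed(7)  # fixed seed so a failure is reproducible
alphabet = string.ascii_letters + string.digits + "&=%# "
for _ in range(10_000):
    junk = "".join(random.choices(alphabet, k=random.randint(0, 40)))
    parse_query(junk)  # any crash here is a defect
print("survived 10,000 random inputs")
```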
|
|
Term
| ABI |
Definition
Application Binary Interface.
A specification defining requirements for portability of applications in binary form across different system platforms and environments. |
|
|
Term
| API |
Definition
Application Programming Interface.
A formalized set of software calls and routines that can be referenced by an application program in order to access supported system or network services. |
|
|
Term
| ASQ |
Definition
Automated Software Quality.
The use of software tools, such as automated testing tools, to improve software quality. |
|
|
Term
| Basic Block |
Definition
| A sequence of one or more consecutive executable statements containing no branches. |
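For illustration, a small Python function with its basic blocks marked; the boundaries fall wherever control flow can branch or join:

```python
def classify(n: int) -> str:
    # Basic block 1: straight-line statements with no branch inside.
    doubled = n * 2
    shifted = doubled + 1
    if shifted > 10:    # the branch ends the block
        return "large"  # basic block 2
    return "small"      # basic block 3

assert classify(10) == "large"
assert classify(1) == "small"
```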
|
|
Term
| Testing Types: Basis Path Testing |
|
Definition
| A white-box test case design technique that uses the algorithmic flow of the program to design tests. |
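A sketch of the idea on a toy function: two decisions give a cyclomatic complexity of 3, so three test cases suffice to cover a basis set of independent paths.

```python
def grade(score: int) -> str:
    if score >= 90:
        return "A"
    if score >= 60:
        return "pass"
    return "fail"

# One test per independent path through the control-flow graph.
assert grade(95) == "A"      # first branch taken
assert grade(75) == "pass"   # first skipped, second taken
assert grade(40) == "fail"   # both branches skipped
print("all basis paths exercised")
```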
|
|
Term
| SDLC |
Definition
Software Development Life Cycle
[image] |
|
|
Term
| Software Defect Life Cycle |
|
Definition
| The states a defect passes through from discovery to closure: typically new, assigned, fixed, retested, and closed (or reopened if the fix fails). |
|