Software Testing - Chapter 1
What are some potential consequences of a software defect or bug for people, companies, and the environment?
-Financial loss. -Damage to personal or business reputations. -Injury or death. -System failures. A failure can also be caused by environmental conditions, such as radiation, pollution, electromagnetic fields, and magnetism.
What are the objectives for testing?
-Finding defects. -Gaining confidence. -Providing information for decision-making. -Preventing defects.
7 Testing Principles
1. Testing shows presence of defects 2. Exhaustive testing is impossible 3. Early testing 4. Defect clustering 5. Pesticide paradox 6. Testing is context dependent 7. Absence-of-errors fallacy.
test log
A chronological record of relevant details about the execution of tests.
requirement
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
test plan
A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
test procedure
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
test design specification
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high-level test cases.
test summary report
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
risk
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
defect (fault, bug)
A flaw in the component or system that can cause the component or system to fail to perform its required function.
test policy
A high-level document describing the principles, approach and major objectives of the organization regarding testing.
test strategy
A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).
error (mistake)
A human action that produces an incorrect result.
test objective
A reason or purpose for designing and executing a test.
test case
A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
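The parts of this definition map directly onto an automated unit test. A minimal sketch in Python, where the function `divide` and its behavior are hypothetical examples, not anything from the syllabus:

```python
def divide(a, b):
    # Hypothetical component under test: integer division.
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a // b

def test_divide_by_nonzero():
    # Execution precondition: none beyond the input values themselves.
    # Input values, chosen for the test objective "verify normal division":
    actual = divide(10, 2)
    # Expected result:
    assert actual == 5
    # Execution postcondition: no state to clean up in this simple case.

test_divide_by_nonzero()
```

Note how the objective drives the choice of inputs: a different objective (e.g. "verify division-by-zero handling") would lead to a different test case against the same component.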
test suite
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
exhaustive testing (complete testing)
A test approach in which the test suite comprises all combinations of input values and preconditions.
error guessing
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
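Error guessing is driven by experience rather than a formal model; classic guesses include empty, whitespace-padded, zero, and negative inputs. A sketch, where `parse_age` is a hypothetical function under test invented for illustration:

```python
def parse_age(text):
    # Hypothetical component under test: parse a non-negative age.
    value = int(text)
    if value < 0:
        raise ValueError("age must be non-negative")
    return value

# Guessed defect: surrounding whitespace often breaks naive parsers.
assert parse_age(" 42 ") == 42

# Guessed defect: negative numbers are often accepted by mistake.
try:
    parse_age("-1")
    raise AssertionError("expected ValueError for negative age")
except ValueError:
    pass

# Guessed defect: empty input often fails with an unhelpful crash.
try:
    parse_age("")
    raise AssertionError("expected ValueError for empty input")
except ValueError:
    pass
```

Each check here targets an error a programmer might plausibly have made, rather than a condition derived from the specification.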
test control
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
test monitoring
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned.
false-fail result (false-positive result)
A test result in which a defect is reported although no such defect actually exists in the test object.
false-pass result (false-negative result)
A test result which fails to identify the presence of a defect that is actually present in the test object.
test basis
All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
review
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
incident
An event occurring that requires investigation.
test condition
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
testware
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
test data
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
failure
Deviation of the component or system from its expected delivery, service or result.
How can testing lead to higher quality?
Lessons are learned from each project, and the team can learn from past mistakes, improving the quality of their processes and their software.
independence of testing
Separation of responsibilities, which encourages the accomplishment of objective testing.
regression testing
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
confirmation testing (or re-testing)
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
quality
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
test coverage
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
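The percentage is simply exercised coverage items over total coverage items. A sketch, assuming the coverage items are statements:

```python
def coverage_percent(exercised, total):
    # Coverage = (exercised coverage items / total coverage items) * 100.
    if total == 0:
        raise ValueError("no coverage items defined")
    return 100.0 * exercised / total

# e.g. a test suite that exercises 45 of 60 statements:
print(coverage_percent(45, 60))  # 75.0
```

The same formula applies whatever the coverage item is (statements, branches, requirements); only the counting changes.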
test approach
The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria, and the test types to be performed.
testing
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
debugging
The process of finding, analyzing and removing the causes of failures in software.
test execution
The process of running a test on the component or system under test, producing actual result(s).
exit criteria
The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
Why is testing necessary?
To find faults, to measure the quality of the software system, and to increase that quality.
A test team consistently finds between 90% and 95% of the defects present in the system under test. While the test manager understands that this is a good defect-detection percentage for her test team and industry, senior management and executives remain disappointed in the test group, saying that the test team misses too many bugs. Given that the users are generally happy with the system and that the failures which have occurred have generally been low impact, which of the following testing principles is most likely to help the test manager explain to these managers and executives why some defects are likely to be missed? a. Exhaustive testing is impossible. b. Defect clustering. c. Pesticide paradox. d. Absence-of-errors fallacy.
a. Exhaustive testing is impossible.
Ensuring that test design starts during the requirements definition phase is important to enable which of the following test objectives? a. Preventing defects in the system. b. Finding defects through dynamic testing. c. Gaining confidence in the system. d. Finishing the project on time.
a. Preventing defects in the system.
A company recently purchased a commercial off-the-shelf application to automate their bill-paying process. They now plan to run an acceptance test against the package prior to putting it into production. Which of the following is their most likely reason for testing? a. To build confidence in the application. b. To detect bugs in the application. c. To gather evidence for a lawsuit. d. To train the users.
a. To build confidence in the application.
According to the ISTQB Glossary, the word 'bug' is synonymous with which of the following words? a. Incident. b. Defect. c. Mistake. d. Error.
b. Defect.
Which of the following is most important to promote and maintain a good relationship between testers and developers? a. Understanding what managers value about testing. b. Explaining test results in a neutral fashion. c. Identifying potential customer work-arounds for bugs. d. Promoting better quality software whenever possible.
b. Explaining test results in a neutral fashion.
According to the ISTQB Glossary, a risk relates to which of the following? a. Negative feedback to the tester. b. Negative consequences that will occur. c. Negative consequences that could occur. d. Negative consequences for the test project.
c. Negative consequences that could occur.
According to the ISTQB Glossary, regression testing is required for what purpose? a. To verify the success of corrective actions. b. To prevent a task from being incorrectly considered completed. c. To ensure that defects have not been introduced by a modification. d. To motivate better unit testing by the programmers.
c. To ensure that defects have not been introduced by a modification.
Which of the statements below is the best assessment of how the test principles apply across the test life cycle? a. Test principles only affect the preparation for testing. b. Test principles only affect test execution activities. c. Test principles affect the early test activities such as review. d. Test principles affect activities throughout the test lifecycle.
d. Test principles affect activities throughout the test lifecycle.