Fundamentals of Testing - Chapter 4
Explain decision coverage
Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes. To do this, the test cases follow the control flows that occur from a decision point (e.g., for an IF statement, one for the true outcome and one for the false outcome; for a CASE statement, test cases would be required for all the possible outcomes, including the default outcome). Coverage is measured as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object, normally expressed as a percentage.
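As a minimal sketch of decision testing (the function, values, and test names below are hypothetical, not taken from the syllabus), consider a single IF decision: one test case drives the true outcome and one the false outcome, so 2 of 2 decision outcomes are exercised, i.e., 100% decision coverage.

```python
# Hypothetical function with a single IF decision (two decision outcomes).
def apply_discount(total):
    if total >= 100:          # decision point
        return total * 0.9    # true outcome
    return total              # false outcome

# One test per decision outcome -> 2 of 2 outcomes exercised = 100% decision coverage.
def test_true_outcome():
    assert apply_discount(150) == 135.0   # takes the true branch

def test_false_outcome():
    assert apply_discount(50) == 50       # takes the false branch

if __name__ == "__main__":
    test_true_outcome()
    test_false_outcome()
    executed_outcomes, total_outcomes = 2, 2
    print(f"Decision coverage: {executed_outcomes / total_outcomes:.0%}")
```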
Explain error guessing
Error guessing is a technique used to anticipate the occurrence of mistakes, defects, and failures, based on the tester's knowledge, including:
- How the application has worked in the past
- What types of mistakes the developers tend to make
- Failures that have occurred in other applications
Explain exploratory testing
In exploratory testing, informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution. The test results are used to learn more about the component or system, and to create tests for the areas that may need more testing. Exploratory testing is sometimes conducted using session-based testing to structure the activity. In session-based testing, exploratory testing is conducted within a defined time-box, and the tester uses a test charter containing test objectives to guide the testing. The tester may use test session sheets to document the steps followed and the discoveries made. Exploratory testing is most useful when there are few or inadequate specifications or significant time pressure on testing. Exploratory testing is also useful to complement other more formal testing techniques.
Apply equivalence partitioning to derive test cases from given requirements
See page 58
- Valid values are values that should be accepted by the component or system. An equivalence partition containing valid values is called a "valid equivalence partition."
- Invalid values are values that should be rejected by the component or system. An equivalence partition containing invalid values is called an "invalid equivalence partition."
- Partitions can be identified for any data element related to the test object, including inputs, outputs, internal values, time-related values (e.g., before or after an event), and interface parameters (e.g., integrated components being tested during integration testing).
- Any partition may be divided into sub-partitions if required.
- Each value must belong to one and only one equivalence partition.
- When invalid equivalence partitions are used in test cases, they should be tested individually, i.e., not combined with other invalid equivalence partitions, to ensure that failures are not masked. Failures can be masked when several failures occur at the same time but only one is visible, causing the other failures to go undetected.
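A worked sketch, assuming a hypothetical requirement that an age field accepts whole numbers from 18 to 65: this gives one valid partition (18 to 65) and two invalid partitions (below 18 and above 65). One representative value is tested per partition, and invalid values are tested one at a time so failures are not masked.

```python
# Hypothetical requirement: an age field accepts integers from 18 to 65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# One representative value per equivalence partition.
def test_valid_partition():            # valid partition: 18..65
    assert is_valid_age(30) is True

def test_invalid_partition_too_low():  # invalid partition: below 18
    assert is_valid_age(10) is False

def test_invalid_partition_too_high(): # invalid partition: above 65
    assert is_valid_age(80) is False

if __name__ == "__main__":
    test_valid_partition()
    test_invalid_partition_too_low()
    test_invalid_partition_too_high()
    print("Each partition covered by one test; invalid partitions tested one at a time.")
```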
Explain checklist-based testing
In checklist-based testing, testers design, implement, and execute tests to cover test conditions found in a checklist. As part of analysis, testers create a new checklist or expand an existing checklist, but testers may also use an existing checklist without modification. Such checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails. Checklists can be created to support various test types, including functional and non-functional testing. In the absence of detailed test cases, checklist-based testing can provide guidelines and a degree of consistency. As these are high-level lists, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability.
Explain the characteristics, commonalities, and differences between black-box test techniques, white-box test techniques, and experience-based test techniques
See page 57 for more information

Common characteristics of black-box test techniques include the following:
a. Test conditions, test cases, and test data are derived from a test basis that may include software requirements, specifications, use cases, and user stories
b. Test cases may be used to detect gaps between the requirements and the implementation of the requirements, as well as deviations from the requirements
c. Coverage is measured based on the items tested in the test basis and the technique applied to the test basis

Common characteristics of white-box test techniques include the following:
a. Test conditions, test cases, and test data are derived from a test basis that may include code, software architecture, detailed design, or any other source of information regarding the structure of the software
b. Coverage is measured based on the items tested within a selected structure (e.g., the code or interfaces)
c. Specifications are often used as an additional source of information to determine the expected outcome of test cases

Common characteristics of experience-based test techniques include the following:
a. Test conditions, test cases, and test data are derived from a test basis that may include knowledge and experience of testers, developers, users, and other stakeholders
Apply state transition testing to derive test cases from given requirements
See page 60
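The detail is on the referenced page; as a hedged illustration only, state transition testing models the test object as states and transitions triggered by events, and derives test cases that exercise, at minimum, every valid transition. The document workflow, state names, and events below are hypothetical assumptions for illustration, not from the syllabus.

```python
# Hypothetical state model for a document workflow:
# states DRAFT, SUBMITTED, APPROVED; events "submit", "approve", "reject".
VALID_TRANSITIONS = {
    ("DRAFT", "submit"): "SUBMITTED",
    ("SUBMITTED", "approve"): "APPROVED",
    ("SUBMITTED", "reject"): "DRAFT",
}

def next_state(state, event):
    # Invalid (state, event) pairs leave the state unchanged in this sketch.
    return VALID_TRANSITIONS.get((state, event), state)

# One test case per valid transition covers all valid transitions.
def test_all_valid_transitions():
    for (state, event), expected in VALID_TRANSITIONS.items():
        assert next_state(state, event) == expected

# An additional test for an invalid transition (event not allowed in the current state).
def test_invalid_transition_ignored():
    assert next_state("DRAFT", "approve") == "DRAFT"

if __name__ == "__main__":
    test_all_valid_transitions()
    test_invalid_transition_ignored()
    print("All 3 valid transitions exercised; one invalid transition checked.")
```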
Apply boundary value analysis to derive test cases from given requirements
See pages 58 and 59
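A worked sketch building on the same hypothetical age field (valid values 18 to 65): with 2-value boundary value analysis, each boundary and its closest neighbour in the adjacent partition are tested, i.e., 17, 18, 65, and 66.

```python
# Hypothetical requirement: an age field accepts integers from 18 to 65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# 2-value boundary value analysis: each boundary plus its nearest neighbour
# in the adjacent partition.
BOUNDARY_CASES = {
    17: False,  # just below the lower boundary (invalid)
    18: True,   # lower boundary (valid)
    65: True,   # upper boundary (valid)
    66: False,  # just above the upper boundary (invalid)
}

def test_boundary_values():
    for value, expected in BOUNDARY_CASES.items():
        assert is_valid_age(value) is expected

if __name__ == "__main__":
    test_boundary_values()
    print("All four boundary values (17, 18, 65, 66) behave as expected.")
```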
Apply decision table testing to derive test cases from given requirements
See page 59
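Another hedged sketch, assuming a hypothetical business rule with two conditions (is the customer a member? is the order over 100?) and one action (discount granted). The full decision table has 2^2 = 4 rules, and each rule (column) becomes one test case.

```python
# Hypothetical rule: a discount is granted only to members whose order exceeds 100.
def discount_granted(is_member, order_total):
    return is_member and order_total > 100

# Full decision table: one rule (column) per combination of condition values.
#   Rule:            R1     R2     R3     R4
#   is_member        T      T      F      F
#   order > 100      T      F      T      F
#   discount         Y      N      N      N
DECISION_TABLE = [
    (True, 150, True),    # R1
    (True, 50, False),    # R2
    (False, 150, False),  # R3
    (False, 50, False),   # R4
]

def test_each_rule():
    for is_member, order_total, expected in DECISION_TABLE:
        assert discount_granted(is_member, order_total) is expected

if __name__ == "__main__":
    test_each_rule()
    print("One test case per decision table rule (4 rules in total).")
```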
Explain statement coverage
Statement testing exercises the executable statements in the code. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object, normally expressed as a percentage.
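A minimal, hypothetical sketch of the coverage calculation: the function below has four executable statements; a single test with a positive value executes three of them (75% statement coverage), and a second test with a negative value is needed to reach 100%.

```python
# Hypothetical function with four executable statements.
def describe(value):
    result = "number"                    # statement 1
    if value < 0:                        # statement 2
        result = "negative " + result    # statement 3
    return result                        # statement 4

def test_positive():
    # Executes statements 1, 2, and 4 -> 3 of 4 statements = 75% statement coverage.
    assert describe(5) == "number"

def test_negative():
    # Adds statement 3 -> all 4 statements executed = 100% statement coverage.
    assert describe(-5) == "negative number"

if __name__ == "__main__":
    test_positive()
    test_negative()
    print("Statement coverage: 3/4 = 75% after the first test, 4/4 = 100% after both.")
```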
Explain the value of statement and decision coverage
When 100% statement coverage is achieved, it ensures that all executable statements in the code have been tested at least once, but it does not ensure that all decision logic has been tested. Of the two white-box techniques discussed in this syllabus, statement testing may provide less coverage than decision testing. When 100% decision coverage is achieved, all decision outcomes have been executed, which includes testing the true outcome and also the false outcome, even when there is no explicit false branch (e.g., in the case of an IF statement without an else in the code). Statement coverage helps to find defects in code that was not exercised by other tests. Decision coverage helps to find defects in code where other tests have not taken both true and false outcomes. Achieving 100% decision coverage guarantees 100% statement coverage (but not vice versa).
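To make the IF-without-else case concrete, here is a hypothetical sketch: a single test that takes the true branch executes every statement (100% statement coverage) but only one of the two decision outcomes (50% decision coverage); a second test exercising the false outcome is needed for 100% decision coverage.

```python
# Hypothetical IF statement without an else branch.
def cap_at_limit(value, limit=100):
    if value > limit:      # one decision, two outcomes, only one explicit branch
        value = limit      # executed only for the true outcome
    return value

def test_true_outcome_only():
    # Executes every statement: 100% statement coverage,
    # but only the true outcome: 1 of 2 outcomes = 50% decision coverage.
    assert cap_at_limit(150) == 100

def test_false_outcome():
    # Adds the false outcome (value not capped): now 100% decision coverage.
    assert cap_at_limit(50) == 50

if __name__ == "__main__":
    test_true_outcome_only()
    test_false_outcome()
    print("100% statement coverage alone can miss the untested false outcome.")
```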