ITSE 1391 - Chapter 5


Test manager (Test lead)

The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object. *Devise test objectives/policies *Estimate testing effort *Lead, guide and monitor analysis, design, and implementation of test cases *Validate test environment *Schedule test activities

Test Management

The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Risk

The probability of a negative or undesirable outcome. Two types: project risk & product risk

Incident Management

The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. *One objective of testing is to find defects *Defects are discrepancies between actual and expected outcomes *Initially an observed discrepancy is logged as an incident *Incidents are logged, tracked and investigated *Not all incidents become "defects" *There will be a prescribed process to follow starting with identifying discrepancies that ultimately will result in closure of the incident

Reporting test status

Is about effectively communicating findings to other project stakeholders

Test Control

Is about guiding and taking corrective actions to try to achieve the best possible outcome for the project

Product Risk Analysis

Risk level combines likelihood (of a problem occurring) and impact (of the problem if it does occur). The analysis assigns a numeric value to each risk.
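The numeric evaluation described above is often a likelihood-times-impact product. A minimal sketch in Python, assuming 1-5 scales for both factors (the scales and the example risk names are hypothetical, not from the syllabus):

```python
# Risk level as likelihood x impact, both on assumed 1-5 scales.
def risk_level(likelihood, impact):
    return likelihood * impact

# Hypothetical product risks: (likelihood, impact)
risks = {
    "data conversion fails": (4, 5),
    "slow report generation": (3, 2),
}

# Rank risks so testing effort can focus on the highest levels first.
for name, (lk, im) in sorted(risks.items(),
                             key=lambda kv: -risk_level(*kv[1])):
    print(f"{risk_level(lk, im):2d}  {name}")
```

Higher products then receive earlier and more thorough testing.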

Entry Criteria will include:

*Acquisition and supply -- availability of the staff, tools, environments and other resources required *Test items -- delivery of the items required for testing

Incident vs Defect

*An incident is any situation where the system exhibits questionable behavior *They become a defect only when the root cause is some problem in the item under test *Other causes can be misconfiguration or failure of the test environment, corrupted test data, bad tests, and user mistakes

Types of Test Approaches & Strategies

Approaches: *Analytical *Model-based *Methodical *Process- or standard-compliant *Dynamic *Consultative or directed *Regression-averse. Factors influencing the choice of approach: *Risks *Skills *Objectives *Regulations *Product *Business

Skills test staff need

*Application or business domain -- understand the intended behavior of the system *Technology -- be aware of issues, limitations, and capabilities of the technology *Testing -- know the testing topics

Risk Management Summary

*Assess / analyze risks early in the project *These are educated guesses *Re-assess and adjust risks at regular intervals *Do NOT confuse impact and likelihood *Manage risks appropriately based on impact and likelihood

Estimating Costs

*Break the test down into phases *Identify activities within each phase *Identify the test environment *Cost of acquiring and configuring *Create a work-breakdown structure *Include costs and resources

Cost Estimation Techniques

*Consult the individuals who will do the work *Analyze metrics from past projects *Average effort per test case *Create a work-breakdown structure and perform a detailed estimate *Negotiate the estimate with management
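The "average effort per test case" technique above can be sketched in a few lines; the figures and the fixed setup cost are made up for illustration:

```python
# Effort = test cases x average effort per case (taken from past-project
# metrics) + a fixed environment-setup cost. All figures are hypothetical.
def estimate_effort_hours(num_test_cases, avg_hours_per_case, setup_hours=0):
    return num_test_cases * avg_hours_per_case + setup_hours

print(estimate_effort_hours(200, 1.5, setup_hours=40))  # 340.0
```

The resulting number is a starting point to negotiate with management, not a commitment.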

Exit Criteria will include:

*Defects (status, discovery rate) *Test statistics (passed, failed) *Coverage measurements *Quality measurements *Cost of finding the next defect *Schedule implications *Ship risk
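Several of these exit criteria can be expressed as a mechanical check. A sketch with hypothetical thresholds (a real test plan defines its own criteria and values):

```python
# Hypothetical exit-criteria thresholds; real ones come from the test plan.
def exit_criteria_met(status):
    return (status["critical_defects_open"] == 0
            and status["tests_passed_pct"] >= 95
            and status["coverage_pct"] == 100)

print(exit_criteria_met({"critical_defects_open": 0,
                         "tests_passed_pct": 97,
                         "coverage_pct": 100}))  # True
```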

Incident Report Contents

*Description of the observed misbehavior *Report should be clear, concise, unambiguous, neutral, impartial and fact-focused *Classification of the observed misbehavior *Steps to reproduce the observed misbehavior *Inputs given and outputs observed, or variance from expectations *Include a work-around if one was found as well as steps used to isolate the problem *Intermittent incidents should be attempted three times to observe any differences in behavior

Typical Test Summary Metrics Reported:

*Extent of completion of test environment preparation *Extent of test coverage achieved, measured against requirements, risks, code, configurations or other areas of interest *Status of testing (including analysis, design and implementation) compared to test milestones *Economics of testing, including costs and benefits of continuing testing or progressing to next phase

Purpose of Test Plans

*Guides our thinking *Communication vehicle with other team members *Helps manage change

Data for Reporting should address:

*How to assess the adequacy of test objectives for a given test level *How to assess the adequacy of the test approaches and if they support the project's testing goals *How to assess the effectiveness of the testing with respect to these objectives and approaches

Types of Product Risk

*Key function omitted *Unreliable (frequent fails) *Cause financial damage *Data integrity problems *Data conversion problems *Data backup problems *Security problems *Maintainability problems *Performance problems

Potential Project Risks

*Late code delivery *Availability of hardware *Excessive delays in bug fixes

Potential Project Options

*Mitigate - take steps in advance to reduce the likelihood *Contingency - plan to reduce impact *Transfer - convince some other party to accept the consequences of the risk *Ignore - do nothing - use only if likelihood and impact are low

Risk Based Testing Involves:

*Mitigation - testing to provide the opportunity to reduce the likelihood of defects *Contingency - testing to identify work around in the event of a failure *Measuring - finding defects in critical areas *Risk-analysis - proactively identify opportunities to remove defects

Incident management

*One of the objectives of testing is to find defects *A defect is a discrepancy between actual and expected outcomes *Incident management is often referred to as "bug reporting", but an incident report may not necessarily describe a "bug"; it is, however, definitely an "anomaly" *There is a process from classification to resolution and ultimate disposition using a clearly established set of rules and processes

Test Independence

*Programmers testing their own code vs. testing by a test team *Independent testers allow "skeptical attitude of professional pessimism" *Report results honestly without concern about reprisals *Separate test budget *Provides a career for testers *Can create communication problems

Purpose of Test Monitoring

*Provide feedback on how the testing work is going, allowing for guiding and improving the testing and its progress *Provide the project team with visibility about test results *Measure the status of testing, test coverage and test items against exit criteria *Gather data to help estimate future test efforts

Test management can be broken down into six activities

*Test organization *Test planning and estimation *Test process monitoring and control *Configuration management *Product and project risks *Incident management

Factors affecting Test Effort

*Test strategy *Product complexity *Complexity of the problem domain (e.g., avionics, oil exploration) *Use of innovative technologies *Need for intricate and multiple test configurations *Stringent security rules *Geographical distribution of the system *Size of the product *Availability of test tools *Skill availability *Time pressures

Configuration Management

A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item. Determines exactly what items make up the software or system *Source code *Third-party software *Test scripts *Hardware *Data *Documentation (both development & test)

Incident Report

A document reporting on any event that occurred, e.g. during the testing, which requires investigation.

Defect report (bug report, problem report)

A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

Product Risk

A risk directly related to the test object. (risk associated with the actual product that is delivered to a customer)

Project Risk

A risk related to the management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. (related to how the project is managed)

Tester

A skilled professional who is involved in the testing of a component or system.

Root Cause

A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.

You and the project stakeholders develop a list of product risks and project risks during the planning stage of a project. What else should you do with those lists of risks during test planning? A. Determine the extent of testing required for the product risks and the mitigation and contingency actions required for the project risks. B. Obtain the resources needed to completely cover each product risk with tests and transfer responsibility for the project risks to the project manager. C. Execute sufficient tests for the product risks, based on the likelihood and impact of each product risk and execute mitigation actions for all project risks. D. No further risk management action is required at the test planning stage.

A. Determine the extent of testing required for the product risks and the mitigation and contingency actions required for the project risks.

In a test summary report, the project's test leader makes the following statement, The payment processing subsystem fails to accept payments from American Express cardholders, which is considered a must-work feature for this release.' This statement is likely to be found in which of the following sections? A. Evaluation B. Summary of activities C. Variances D. Incident description

A. Evaluation

A product risk analysis is performed during the planning stage of the test process. During the execution stage of the test process, the test manager directs the testers to classify each defect report by the known product risk it relates to (or to 'other'). Once a week, the test manager runs a report that shows the percentage of defects related to each known product risk and to unknown risks. What is one possible use of such a report? A. To identify new risks to system quality. B. To locate defect clusters in product subsystems. C. To check risk coverage by tests. D. To measure exploratory testing.

A. To identify new risks to system quality.

You are writing a test plan using the IEEE 829 template and are currently completing the Risks and Contingencies section. Which of the following is most likely to be listed as a project risk? A. Unexpected illness of a key team member B. Excessively slow transaction-processing time C. Data corruption under network congestion D. Failure to handle a key use case

A. Unexpected illness of a key team member

Risk-based Testing

An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.

Configuration Control (version control)

An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.

Incident Logging

Recording the details of any incident that occurred, e.g. during testing.

Characteristics of a Good Test Plan

Short and focused and will include: *Scope of testing *Test objectives *Project and product risks *Test constraints *Critical success factors *Testability factors *Overall test execution schedule

Test Summary Report

A document summarizing testing activities and results, produced at the end of a test effort, which also contains an evaluation of the corresponding test items against exit criteria.

A product risk analysis meeting is held during the project planning period. Which of the following determines the level of risk? A. Difficulty of fixing related problems in code B. The harm that might result to the user C. The price for which the software is sold D. The technical staff in the meeting

B. The harm that might result to the user

During test execution, the test manager describes the following situation to the project team: '90% of the test cases have been run. 20% of the test cases have identified defects. 127 defects have been found. 112 defects have been fixed and have passed confirmation testing. Of the remaining 15 defects, project management has decided that they do not need to be fixed prior to release.' Which of the following is the most reasonable interpretation of this test status report? A. The remaining 15 defects should be confirmation tested prior to release. B. The remaining 10% of test cases should be run prior to release. C. The system is now ready for release with no further testing or development effort. D. The programmers should focus their attention on fixing the remaining known defects prior to release.

B. The remaining 10% of test cases should be run prior to release.

According to the ISTQB Glossary, a product risk is related to which of the following? A. Control of the test project B. The test object C. A single test item D. A potential negative outcome

B. The test object

According to the ISTQB Glossary, what do we call a document that describes any event that occurred during testing which requires further investigation? A. A bug report B. A defect report C. An incident report D. A test summary report

C. An incident report

During an early period of test execution, a defect is located, resolved and confirmed as resolved by re-testing, but is seen again later during subsequent test execution. Which of the following is a testing-related aspect of configuration management that is most likely to have broken down? A. Traceability B. Confirmation testing C. Configuration control D. Test documentation management

C. Configuration control

Severity

The degree of impact that a defect has on the development or operation of a component or system.

Priority

The level of (business) importance assigned to an item, e.g. defect

You are working as a tester on a project to develop a point-of-sales system for grocery stores and other similar retail outlets. Which of the following is a product risk for such a project? A. The arrival of a more-reliable competing product on the market. B. Delivery of an incomplete test release to the first cycle of system test. C. An excessively high number of defect fixes fail during re-testing. D. Failure to accept allowed credit cards.

D. Failure to accept allowed credit cards.

In an incident report, the tester makes the following statement, 'At this point, I expect to receive an error message explaining the rejection of this invalid input and asking me to enter a valid input. Instead the system accepts the input, displays an hourglass for between one and five seconds and finally terminates abnormally, giving the message, "Unexpected data type: 15. Click to continue." ' This statement is likely to be found in which of the following sections of an IEEE 829 standard incident report? A. Summary B. Impact C. Item pass/fail criteria D. Incident description

D. Incident description

Defect Detection Percentage (DDP)

The number of defects found by a test phase, divided by the number found by that test phase and by any other means afterwards. This metric is an indicator of the effectiveness of the test process in removing defects from the product before it is delivered to customers. [(Defects found by testers)/(Defects found by testers + Defects found in the field)] x 100%
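The DDP formula above translates directly into code; the 90/10 figures below are illustrative only:

```python
# DDP = defects found by testers /
#       (defects found by testers + defects found in the field) x 100%
def defect_detection_percentage(found_by_testers, found_in_field):
    return found_by_testers / (found_by_testers + found_in_field) * 100

print(defect_detection_percentage(90, 10))  # 90.0
```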

Defect Density

The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of-code, number of classes or function points.)
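As a sketch of this definition, using thousands of lines of code (KLOC) as the size measure (the figures are hypothetical):

```python
# Defects per thousand lines of code: 45 defects found in 30 KLOC.
def defect_density(defects, size_kloc):
    return defects / size_kloc

print(defect_density(45, 30))  # 1.5 defects per KLOC
```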

Failure Rate

The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.
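The same ratio works for any unit of measure named in the definition; a sketch using transactions as the unit (figures hypothetical):

```python
# Failures per unit of measure, here per transaction.
def failure_rate(failures, units):
    return failures / units

print(failure_rate(12, 4000))  # 0.003 failures per transaction
```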

What is the primary difference between the test plan, the test design specification, and the test procedure specification? a. The test plan describes one or more levels of testing, the test design specification identifies the associated high-level test cases and a test procedure specification describes the actions for executing a test. b. The test plan is for managers, the test design specification is for programmers and the test procedure specification is for testers who are automating tests. c. The test plan is the least thorough, the test procedure specification is the most thorough and the test design specification is midway between the two. d. The test plan is finished in the first third of the project, the test design specification is finished in the middle third of the project and the test procedure specification is finished in the last third of the project

a. The test plan describes one or more levels of testing, the test design specification identifies the associated high-level test cases and a test procedure specification describes the actions for executing a test.

According to the ISTQB Glossary, what do we mean when we call someone a test manager? a. A test manager manages a collection of test leaders. b. A test manager is the leader of a test team or teams. c. A test manager gets paid more than a test leader. d. A test manager reports to a test leader.

b. A test manager is the leader of a test team or teams.

Why is independent testing important? a. Independent testing is usually cheaper than testing your own work. b. Independent testing is more effective at finding defects. c. Independent testers should determine the processes and methodologies used. d. Independent testers are dispassionate about whether the project succeeds or fails.

b. Independent testing is more effective at finding defects.

Which of the following metrics would be most useful to monitor during test execution? a. Percentage of test cases written. b. Number of test environments remaining to be configured. c. Number of defects found and fixed. d. Percentage of requirements for which a test has been written

c. Number of defects found and fixed.

According to the ISTQB Glossary, what is a test level? a. A group of test activities that are organized together. b. One or more test design specification documents. c. A test type. d. An ISTQB certification

a. A group of test activities that are organized together.

The ISTQB Foundation Syllabus establishes a fundamental test process where test planning occurs early in the project, while test execution occurs at the end. Which of the following elements of the test plan, while specified during test planning, is assessed during test execution? a. Test tasks b. Environmental needs c. Exit criteria d. Test team training

c. Exit criteria

Which of the following factors is an influence on the test effort involved in most projects? a. Geographical separation of tester and programmers. b. The departure of the test manager during the project. c. The quality of the information used to develop the tests. d. Unexpected long-term illness by a member of the project team

c. The quality of the information used to develop the tests.

Which of the following is among the typical tasks of a test leader? a. Develop system requirements, design specifications and usage models. b. Handle all test automation duties. c. Keep tests and test coverage hidden from programmers. d. Gather and report test progress metrics.

d. Gather and report test progress metrics.

Consider the following exit criteria which might be found in a test plan: I. No known customer-critical defects. II. All interfaces between components tested. III. 100% code coverage of all units. IV. All specified requirements satisfied. V. System functionality matches legacy system for all business rules. Which of the following statements is true about whether these exit criteria belong in an acceptance test plan? a. All statements belong in an acceptance test plan. b. Only statement I belongs in an acceptance test plan. c. Only statements I, II, and V belong in an acceptance test plan. d. Only statements I, IV, and V belong in an acceptance test plan.

d. Only statements I, IV, and V belong in an acceptance test plan.

