ISTQB CTFL - Section 5


Risk-Based Approach

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach, the risks identified may be used to: 1. Determine the test techniques to be employed, 2. Determine the extent of testing to be carried out, 3. Prioritize testing in an attempt to find the critical defects as early as possible, 4. Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)

Test Metrics

Common test metrics include: 1. Percentage of work done in test case preparation (or percentage of planned test cases prepared) 2. Percentage of work done in test environment preparation 3. Test case execution (e.g., number of test cases run/not run, and test cases passed/failed) 4. Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results), 5. Test coverage of requirements, risks or code 6. Subjective confidence of testers in the product, 7. Dates of test milestones, 8. Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
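Purely as an illustration (the syllabus does not prescribe any particular calculation), several of these metrics reduce to simple ratios. The sketch below uses invented test-cycle numbers to show how preparation progress, execution progress and pass rate might be derived:

```python
# Illustrative only: simple ratio metrics from hypothetical test-cycle numbers.
planned_cases = 200    # test cases planned for this cycle (made-up figure)
prepared_cases = 150   # test cases already specified
executed_cases = 120   # test cases run so far
passed_cases = 102     # executed cases that passed

preparation_progress = prepared_cases / planned_cases  # 0.75 -> 75% prepared
execution_progress = executed_cases / planned_cases    # 0.60 -> 60% executed
pass_rate = passed_cases / executed_cases              # 0.85 -> 85% of run cases passed

print(f"Prepared:  {preparation_progress:.0%}")
print(f"Executed:  {execution_progress:.0%}")
print(f"Pass rate: {pass_rate:.0%}")
```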

Configuration Management

Configuration Management is a discipline applying technical and administrative direction and surveillance to 1. identify and document the functional and physical characteristics of a configuration item, 2. control changes to those characteristics, 3. record and report change processing and implementation status, and 4. verify compliance with specified requirements. The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

Defect Density

Defect Density is the number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes, or function points).
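For example, with figures invented for illustration, a component containing 30 known defects in 12,000 lines of code has a defect density of 30 / 12 = 2.5 defects per KLOC:

```python
# Hypothetical figures: 30 defects found in a 12,000-line component.
defects_found = 30
lines_of_code = 12_000

defect_density_per_kloc = defects_found / (lines_of_code / 1000)
print(f"{defect_density_per_kloc:.1f} defects per KLOC")  # 2.5
```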

Exit Criteria

Exit Criteria is the set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of Exit Criteria is to prevent a task from being considered completed when there are outstanding parts of the task which have not been completed. Exit Criteria are used to report against and to plan when to stop testing. Typically exit criteria may cover the following: 1. Thoroughness measures, such as coverage of code, functionality or risk, 2. Estimates of defect density or reliability measures, 3. Cost, 4. Residual risks such as defects not fixed or lack of test coverage in certain areas, 5. Schedules such as those based on time to market
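A minimal sketch of how agreed exit criteria might be checked mechanically; the thresholds, field names and function name below are invented project choices, not anything mandated by the syllabus:

```python
# Illustrative exit-criteria check; thresholds are hypothetical project decisions.
def exit_criteria_met(code_coverage: float,
                      open_critical_defects: int,
                      residual_risks_accepted: bool) -> bool:
    """Return True only when every agreed criterion is satisfied."""
    return (
        code_coverage >= 0.80            # thoroughness measure (e.g., 80% statement coverage)
        and open_critical_defects == 0   # no unresolved critical defects
        and residual_risks_accepted      # remaining risks signed off by stakeholders
    )

print(exit_criteria_met(code_coverage=0.85,
                        open_critical_defects=0,
                        residual_risks_accepted=True))  # True
```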

Entry Criteria for Testing

Entry criteria define when to start testing such as at the beginning of a test level or when a set of tests is ready for execution. Typically entry criteria may cover the following: 1. Test environment availability and readiness, 2. Test tool readiness in the test environment, 3. Testable code availability, 4. Test data availability

Test Control Actions

Examples of test control actions include: 1. Making decisions based on information from test monitoring, 2. Re-prioritizing tests when an identified risk occurs (e.g., software delivered late), 3. Changing the test schedule due to availability or unavailability of a test environment, 4. Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build

Failure Rate

Failure Rate is the ratio of the number of failures of a given category to a given unit of measure, e.g., failures per unit of time, failures per number of transactions, or failures per number of computer runs.
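For instance, with illustrative numbers only, 4 failures observed across 1,000 transactions (or 50 hours of operation) give the following rates:

```python
# Hypothetical example: failure rate per transaction and per hour of operation.
failures = 4
transactions = 1_000
operating_hours = 50

print(failures / transactions)     # 0.004 failures per transaction
print(failures / operating_hours)  # 0.08 failures per hour
```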

Multiple Levels of Testing

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.

Risk Management Activities

To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to: 1. Assess (and reassess on a regular basis) what can go wrong (risks) 2. Determine what risks are important to deal with, and 3. Implement actions to deal with those risks

Incident Logging

Incident Logging is recording the details of any incident that occurred, e.g. during testing.

Incident Management

Incident Management is the process of recognizing, investigating, taking action, and disposing of incidents. It involves logging incidents, classifying them, and identifying the impact. Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as "Help" or installation guides.

Incident Report

Incident Report logs the discrepancies between actual and expected outcomes. Details of the incident report may include: 1. Date of issue, issuing organization, and author, 2. Expected and actual results, 3. Identification of the test item (configuration item) and environment, 4. Software or system life cycle process in which the incident was observed, 5. Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots, 6. Scope or degree of impact on stakeholder(s) interests, 7. Severity of the impact on the system, 8. Urgency/priority to fix, 9. Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed), 10. Conclusions, recommendations and approvals, 11. Global issues, such as other areas that may be affected by a change resulting from the incident, 12. Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed, 13. References, including the identity of the test case specification that revealed the problem
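One way to picture these fields, purely as an illustration (the field names below are invented for this sketch, not an IEEE 829 or syllabus-defined schema), is a simple record type:

```python
# Illustrative incident-report record; field names are invented for this sketch.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    issue_date: date
    author: str
    test_item: str        # configuration item under test
    environment: str
    expected_result: str
    actual_result: str
    description: str      # steps, logs, screenshots needed to reproduce
    severity: str         # impact on the system
    priority: str         # urgency to fix
    status: str = "open"  # open, deferred, duplicate, fixed awaiting re-test, closed
    references: list[str] = field(default_factory=list)  # e.g., test case IDs

report = IncidentReport(
    issue_date=date(2024, 5, 3), author="A. Tester",
    test_item="Login service v1.2", environment="Staging",
    expected_result="User is logged in", actual_result="HTTP 500 returned",
    description="Submit valid credentials; see attached server log.",
    severity="High", priority="Urgent", references=["TC-LOGIN-017"],
)
```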

Incident Report Objectives

Incident reports have the following objectives: 1. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary, 2. Provide test leaders a means of tracking the quality of the system under test and the progress of the testing, 3. Provide ideas for test process improvement

Test Manager

Test Manager is the same as Test Leader or Test Coordinator. This is the person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object. The role of the Test Manager may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group.

Test Monitoring

Test Monitoring is a test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to what was planned.

Product Risk

Product Risk is a risk directly related to the test object. Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include: 1. Failure-prone software delivered, 2. The potential that the software/hardware could cause harm to an individual or company, 3. Poor software characteristics (e.g., functionality, reliability, usability and performance), 4. Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards), 5. Software that does not perform its intended functions. Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

Project Risk

Project Risk is a risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. Project risks are the risks that surround the project's capability to deliver its objectives, such as: 1. Organizational factors: • Skill, training and staff shortages • Personnel issues • Political issues • Improper attitude toward or expectations of testing, 2. Technical issues: • Problems in defining the right requirements • The extent to which requirements cannot be met given existing constraints • Test environment not ready on time • Late data conversion, migration planning and development and testing data conversion/migration tools • Low quality of the design, code, configuration data, test data and tests, 3. Supplier issues: • Failure of a third party • Contractual issues

Risk

Risk is a factor that could result in future negative consequences; usually expressed as impact and likelihood. Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
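The syllabus does not mandate a formula, but a common convention (assumed here purely for illustration) scores likelihood and impact on small ordinal scales and combines them to rank risks:

```python
# Illustrative risk-level calculation; the 1-5 scales and the multiplication are a
# common convention, not something the syllabus prescribes.
def risk_level(likelihood: int, impact: int) -> int:
    """Likelihood and impact each scored from 1 (low) to 5 (high)."""
    return likelihood * impact

print(risk_level(likelihood=4, impact=5))  # 20 -> high-priority risk
print(risk_level(likelihood=2, impact=2))  # 4  -> low-priority risk
```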

Risk-Based Testing

Risk-Based Testing is an approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process. Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.

Test Strategy

Test Strategy is a high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).

Test Summary Report

Test Summary Report is a document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.

Test Approach

Test Approach is the implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, the starting points regarding the test process, the test design techniques to be applied, exit criteria, and the test types to be performed.

Test Leader

Test Leader is the same as Test Manager or Test Coordinator. This is the person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group.

Test Control

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. Test Control is a test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Test Planning Activities

Test planning activities for an entire system or part of a system may include: 1. Determining the scope and risks and identifying the objectives of testing 2. Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria 3. Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance) 4. Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated 5. Scheduling test analysis and design activities 6. Scheduling test implementation, execution and evaluation 7. Assigning resources for the different activities defined 8. Defining the amount, level of detail, structure and templates for the test documentation 9. Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues 10. Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

Test Planning

Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test Reporting

Test reporting is concerned with summarizing information about the testing endeavor, including: 1. What happened during a period of testing, such as dates when exit criteria were met, 2. Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

Tester

Tester is a skilled professional who is involved in the testing of a component or system.

Benefits of Independent Testing

The benefits of Independent Testing include: 1. Independent testers see other and different defects and are unbiased, 2. An independent tester can verify assumptions people made during specification and implementation of the system

Drawbacks of Independent Testing

The drawbacks of independent testing include: 1. Isolation from the development team (if treated as totally independent), 2. Developers may lose a sense of responsibility for quality, 3. Independent testers may be seen as a bottleneck or blamed for delays in release

Test Organization and Independence

The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following: 1. No independent testers; developers test their own code, 2. Independent testers within the development teams, 3. Independent test team or group within the organization, reporting to project management or executive management, 4. Independent testers from the business organization or user community, 5. Independent test specialists for specific test types such as usability testers, security testers or certification testers (who certify a software product against standards and regulations), 6. Independent testers outsourced or external to the organization

Test Progress Monitoring

The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget.

Testing Effort

The testing effort may depend on a number of factors, including: 1. Characteristics of the product: the quality of the specification and other information used for test models (i.e., the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation, 2. Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure, 3. The outcome of testing: the number of defects and the amount of rework required

Test Estimation

Two approaches for the estimation of test effort are: 1. the metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values, 2. the expert-based approach: estimating the tasks based on estimates made by the owner of the tasks or by experts
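A minimal sketch of the metrics-based idea, using historical figures invented for illustration: derive an average effort per test case from a comparable past project and scale it by the number of planned test cases.

```python
# Illustrative metrics-based estimate; all figures are made up.
previous_effort_days = 40   # testing effort of a similar past project
previous_test_cases = 400
planned_test_cases = 550

avg_days_per_case = previous_effort_days / previous_test_cases   # 0.1 day per case
estimated_effort_days = planned_test_cases * avg_days_per_case
print(f"Estimated effort: {estimated_effort_days:.0f} person-days")  # ~55
```

In the expert-based approach, by contrast, the same figure would come from the task owners' or experts' judgment rather than from historical metrics.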

Typical Test Approaches

Typical Test Approaches include: 1. Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk, 2. Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles), 3. Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based, 4. Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies, 5. Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks, 6. Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team, 7. Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites

Test Leader/Manager Tasks

Typical test leader tasks may include: 1. Coordinate the test strategy and plan with project managers and others 2. Write or review a test strategy for the project, and test policy for the organization 3. Contribute the testing perspective to other project activities, such as integration planning 4. Plan the tests - considering the context and understanding the test objectives and risks - including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management 5. Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria 6. Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems 7. Set up adequate configuration management of testware for traceability 8. Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product 9. Decide what should be automated, to what degree, and how 10. Select tools to support testing and organize any training in tool use for testers 11. Decide about the implementation of the test environment 12. Write test summary reports based on the information gathered during testing

Tester Tasks

Typical tester tasks may include: 1. Review and contribute to test plans 2. Analyze, review and assess user requirements, specifications and models for testability 3. Create test specifications 4. Set up the test environment (often coordinating with system administration and network management) 5. Prepare and acquire test data 6. Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results 7. Use test administration or management tools and test monitoring tools as required 8. Automate tests (may be supported by a developer or a test automation expert) 9. Measure performance of components and systems (if applicable) 10. Review tests developed by others

Version Control

Version Control is the same as Configuration Control. An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.

