ISTQB Foundation Extension Agile Tester Chapter 3: Agile Testing Methods, Techniques, and Tools

Definition of done - Unit testing

1. 100% decision coverage where possible, with careful reviews of any infeasible paths
2. Static analysis performed on all code
3. No unresolved major defects (ranked based on priority and severity)
4. No known unacceptable technical debt remaining in the design and the code [Jones11]
5. All code, unit tests, and unit test results reviewed
6. All unit tests automated
7. Important characteristics are within agreed limits (e.g., performance)

Definition of done - Integration testing

1. All functional requirements tested, including both positive and negative tests, with the number of tests based on size, complexity, and risks
2. All interfaces between units tested
3. All quality risks covered according to the agreed extent of testing
4. No unresolved major defects (prioritized according to risk and importance)
5. All defects found are reported
6. All regression tests automated, where possible, with all automated tests stored in a common repository

Organizational and behavioral best practices in Scrum teams

1. Cross-functional 2. Self-organizing 3. Co-located 4. Collaborative 5. Empowered 6. Committed 7. Transparent 8. Credible 9. Open to feedback 10. Resilient (i.e., flexible)

Definition of done - System testing

1. End-to-end tests of user stories, features, and functions
2. All user personas covered
3. The most important quality characteristics of the system covered (e.g., performance, robustness, reliability)
4. Testing done in a production-like environment(s), including all hardware and software for all supported configurations, to the extent possible
5. All quality risks covered according to the agreed extent of testing
6. All regression tests automated, where possible, with all automated tests stored in a common repository
7. All defects found are reported and possibly fixed
8. No unresolved major defects (prioritized according to risk and importance)

To be testable, acceptance criteria should address the following topics where relevant

1. Functional behavior: The externally observable behavior with user actions as input, operating under certain configurations.
2. Quality characteristics: How the system performs the specified behavior. The characteristics may also be referred to as quality attributes or non-functional requirements. Common quality characteristics are performance, reliability, usability, etc.
3. Scenarios (use cases): A sequence of actions between an external actor (often a user) and the system, in order to accomplish a specific goal or business task.
4. Business rules: Activities that can only be performed in the system under certain conditions defined by outside procedures and constraints (e.g., the procedures used by an insurance company to handle insurance claims).
5. External interfaces: Descriptions of the connections between the system to be developed and the outside world. External interfaces can be divided into different types (user interface, interface to other systems, etc.).
6. Constraints: Any design and implementation constraint that will restrict the options for the developer. Devices with embedded software must often respect physical constraints such as size, weight, and interface connections.
7. Data definitions: The customer may describe the format, data type, allowed values, and default values for a data item in the composition of a complex business data structure (e.g., the ZIP code in a U.S. mail address); see the sketch after this list.
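
Item 7 above mentions data definitions such as the ZIP code in a U.S. mail address. The sketch below shows what an executable check for such a data definition could look like; the format rule, default value, and function name are assumptions for illustration, not part of the syllabus.

```python
import re

# Hypothetical data definition for a U.S. ZIP code field (assumed for
# illustration): 5 digits, optionally followed by "-" and 4 digits,
# with "00000" used as the default when the field is left empty.
ZIP_PATTERN = re.compile(r"^\d{5}(-\d{4})?$")
DEFAULT_ZIP = "00000"

def normalize_zip(raw: str) -> str:
    """Apply the default value and validate the format of a ZIP code."""
    value = raw.strip() or DEFAULT_ZIP
    if not ZIP_PATTERN.fullmatch(value):
        raise ValueError(f"invalid ZIP code: {raw!r}")
    return value

if __name__ == "__main__":
    # Positive and negative checks derived from the (assumed) data definition.
    assert normalize_zip("12345") == "12345"
    assert normalize_zip("12345-6789") == "12345-6789"
    assert normalize_zip("   ") == DEFAULT_ZIP
    try:
        normalize_zip("1234")  # too short: must be rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError for '1234'")
    print("all data-definition checks passed")
```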

In addition to the user stories and their associated acceptance criteria, other information is relevant for the tester, including

1. How the system is supposed to work and be used
2. The system interfaces that can be used/accessed to test the system
3. Whether current tool support is sufficient
4. Whether the tester has enough knowledge and skill to perform the necessary tests

The tester collaborates with the team on the following activities during Sprint zero

1. Identify the scope of the project (i.e., the product backlog)
2. Create an initial system architecture and high-level prototypes
3. Plan, acquire, and install needed tools (e.g., for test management, defect management, test automation, and continuous integration)
4. Create an initial test strategy for all test levels, addressing (among other topics) test scope, technical risks, test types (see Section 3.1.3), and coverage goals
5. Perform an initial quality risk analysis (see Section 3.2.1)
6. Define test metrics to measure the test process, the progress of testing in the project, and product quality
7. Specify the definition of "done"
8. Create the task board (see Section 2.2.1)
9. Define when to continue or stop testing before delivering the system to the customer

Sprint zero sets the direction for what testing needs to achieve and how testing needs to achieve it throughout the sprints.

Examples of quality risks for a system

1. Incorrect calculations in reports (a functional risk related to accuracy)
2. Slow response to user input (a non-functional risk related to efficiency and response time)
3. Difficulty in understanding screens and fields (a non-functional risk related to usability and understandability)

In Agile projects, quality risk analysis takes place at two points

1. Release planning 2. Iteration planning

Techniques

Test-Driven Development (TDD), Acceptance Test-Driven Development (ATDD), and Behavior-Driven Development (BDD) are three complementary techniques in use among Agile teams to carry out testing across the various test levels. Each technique is an example of a fundamental principle of testing, the benefit of early testing and QA activities, since the tests are defined before the code is written.

Test bases (test basis)

1. User stories
2. Experience from previous projects
3. Existing functions, features, and quality characteristics of the system
4. Code, architecture, and design
5. User profiles (context, system configurations, and user behavior)
6. Information on defects from existing and previous projects
7. A categorization of defects in a defect taxonomy
8. Applicable standards (e.g., [DO-178B] for avionics software)
9. Quality risks

Steps in Acceptance Test-Driven Development

Step 1: A specification workshop where the user story is analyzed, discussed, and written by developers, testers, and business representatives. Any incompleteness, ambiguities, or errors in the user story are fixed during this process.

Step 2: Create the tests. This can be done by the team together or by the tester individually. In either case, an independent person such as a business representative validates the tests. The tests are examples that describe the specific characteristics of the user story, and these examples help the team implement the user story correctly. Since examples and tests are the same, the terms are often used interchangeably. The work starts with basic examples and open questions, and covers both positive and negative tests. Tests are expressed in a way that every stakeholder can understand, using natural-language sentences that state the necessary preconditions (if any), the inputs, and the related outputs. The examples must cover all the characteristics of the user story and should not add to the story: no example should describe an aspect of the user story not documented in the story itself, and no two examples should describe the same characteristics of the user story.
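
A minimal sketch of what such examples might look like in executable form follows. The user story (a 10% discount on orders of 100.00 or more), the example values, and the function name are hypothetical assumptions; the point is that each row of the table is one readable example/test covering a positive or negative case.

```python
# Hypothetical user story (assumption): "As a shopper, I get a 10% discount
# on orders of 100.00 or more." Each tuple is one example: (order total,
# expected discount). Positive and negative cases are both represented.
EXAMPLES = [
    (99.99, 0.00),    # just below the threshold: no discount
    (100.00, 10.00),  # exactly on the threshold: discount applies
    (250.00, 25.00),  # well above the threshold
    (0.00, 0.00),     # empty basket
]

def discount(order_total: float) -> float:
    """Sketch of the production code that the examples drive out."""
    return round(order_total * 0.10, 2) if order_total >= 100.00 else 0.00

if __name__ == "__main__":
    for total, expected in EXAMPLES:
        actual = discount(total)
        assert actual == expected, f"total={total}: expected {expected}, got {actual}"
    print(f"{len(EXAMPLES)} acceptance examples passed")
```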

Assessing Quality Risks and Estimating Test Effort

A typical objective of testing in all projects, Agile or traditional, is to reduce the risk of product quality problems to an acceptable level prior to release. Testers in Agile projects can use the same types of techniques used in traditional projects to identify quality risks (or product risks), assess the associated level of risk, estimate the effort required to reduce those risks sufficiently, and then mitigate those risks through test design, implementation, and execution. However, given the short iterations and rate of change in Agile projects, some adaptations of those techniques are required.

Acceptance Test-Driven Development's advantages

Acceptance test-driven development allows quick resolution of defects and validation of feature behavior. It helps determine if the acceptance criteria are met for the feature.

Acceptance Test-Driven Development and tools

Acceptance test-driven development creates reusable tests for regression testing. Specific tools support creation and execution of such tests, often within the continuous integration process. These tools can connect to data and service layers of the application, which allows tests to be executed at the system or acceptance level.

Acceptance Test-Driven Development

Acceptance test-driven development defines acceptance criteria and tests during the creation of user stories. Acceptance test-driven development is a collaborative approach that allows every stakeholder to understand how the software component has to behave and what the developers, testers, and business representatives need to ensure this behavior.

Test data load tools

After data has been generated for testing, it needs to be loaded into the application. Manual data entry is often time consuming and error prone, but data load tools are available to make the process reliable and efficient. In fact, many of the data generator tools include an integrated data load component. In other cases, bulk loading using the database management system is also possible.
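
A minimal sketch of the bulk-loading idea, using SQLite from the Python standard library; the table name and columns are assumptions for illustration.

```python
import sqlite3

# Generated test data (in practice this would come from a data generator tool).
rows = [(i, f"user{i:04d}", f"user{i:04d}@example.com") for i in range(1, 1001)]

conn = sqlite3.connect(":memory:")  # a real project would point at the test database
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# Bulk load in a single statement instead of error-prone manual entry.
conn.executemany("INSERT INTO customer (id, name, email) VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
print(f"loaded {count} rows")  # -> loaded 1000 rows
conn.close()
```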

Prioritizing tasks

As mentioned earlier, an iteration starts with iteration planning, which culminates in estimated tasks on a task board. These tasks can be prioritized in part based on the level of quality risk associated with them. Tasks associated with higher risks should start earlier and involve more testing effort. Tasks associated with lower risks should start later and involve less testing effort.

Behavior-Driven Development

Behavior-driven development allows a developer to focus on testing the code based on the expected behavior of the software. Because the tests are based on the exhibited behavior of the software, the tests are generally easier for other team members and stakeholders to understand.

Risk analysis in Release planning

Business representatives who know the features in the release provide a high-level overview of the risks, and the whole team, including the tester(s), may assist in the risk identification and assessment.

Software Build and Distribution Tools

Daily build and deployment of software is a key practice in Agile teams. This requires the use of continuous integration tools and build distribution tools.

Transparent

Development and testing progress is visible on the Agile task board (see Section 2.2.1). (Organizational and behavioral best practices in Scrum teams)

Estimating testing effort during release planning

During release planning, the Agile team estimates the effort required to complete the release. The estimate addresses the testing effort as well. A common estimation technique used in Agile projects is planning poker, a consensus-based technique.

The tester uses this during Exploratory Testing

During test execution, the tester uses creativity, intuition, cognition, and skill to find possible problems with the product. The tester also needs to have good knowledge and understanding of the software under test, the business domain, how the software is used, and how to determine when the system fails.

Cross-functional

Each team member brings a different set of skills to the team. The team works together on test strategy, test planning, test specification, test execution, test evaluation, and test results reporting. (Organizational and behavioral best practices in Scrum teams)

Test level and definition of done

Each test level has its own definition of done.

Exploratory Testing and Agile Testing

Exploratory testing is important in Agile projects due to the limited time available for test analysis and the limited details of the user stories. In order to achieve the best results, exploratory testing should be combined with other experience-based techniques as part of a reactive testing strategy, blended with other testing strategies such as analytical risk-based testing, analytical requirements-based testing, model-based testing, and regression-averse testing. In exploratory testing, test design and test execution occur at the same time, guided by a prepared test charter. During exploratory testing, the results of the most recent tests guide the next test. The same white box and black box techniques can be used to design the tests as when performing pre-designed testing.

Open to feedback

Feedback is an important aspect of being successful in any project, especially in Agile projects. Retrospectives allow teams to learn from successes and from failures. (Organizational and behavioral best practices in Scrum teams)

Behavior-driven development's advantages

From these requirements (given/when/then), a behavior-driven development framework generates code that can be used by developers to create test cases. Behavior-driven development helps the developer collaborate with other stakeholders, including testers, to define accurate unit tests focused on business needs.

Integration

In Agile projects, the objective is to deliver customer value on a continuous basis (preferably in every sprint). To enable this, the integration strategy should consider both design and testing. To enable a continuous testing strategy for the delivered functionality and characteristics, it is important to identify all dependencies between underlying functions and features.

Test case creation

In Agile testing, many tests are created by testers concurrently with the developers' programming activities. Just as the developers are programming based on the user stories and acceptance criteria, so are the testers creating tests based on user stories and their acceptance criteria. (Some tests, such as exploratory tests and some other experience-based tests, are created later, during test execution.)

Communication and Information Sharing Tools

In addition to e-mail, documents, and verbal communication, Agile teams often use three additional types of tools to support communication and information sharing: wikis, instant messaging, and desktop sharing. These tools should be used to complement and extend, not replace, face-to-face communication in Agile teams.

Task Management and Tracking Tools

In some cases, Agile teams use physical story/task boards (e.g., whiteboard, corkboard) to manage and track user stories, tests, and other tasks throughout each sprint. Other teams will use application lifecycle management and task management software, including electronic task boards. These tools serve the following purposes:
1. Record stories and their relevant development and test tasks, to ensure that nothing gets lost during a sprint
2. Capture team members' estimates on their tasks and automatically calculate the effort required to implement a story, to support efficient iteration planning sessions
3. Associate development tasks and test tasks with the same story, to provide a complete picture of the team's effort required to implement the story
4. Aggregate developer and tester updates to the task status as they complete their work, automatically providing a current calculated snapshot of the status of each story, the iteration, and the overall release
5. Provide a visual representation (via metrics, charts, and dashboards) of the current state of each user story, the iteration, and the release, allowing all stakeholders, including people on geographically distributed teams, to quickly check status
6. Integrate with configuration management tools, which can allow automated recording of code check-ins and builds against tasks, and, in some cases, automated status updates for tasks

Acceptance Test-Driven Development

Is a test-first approach. Test cases are created prior to implementing the user story. The test cases are created by the Agile team, including the developer, the tester, and the business representatives and may be manual or automated.

Exploratory Testing and documentation

It is important for the tester to document the process as much as possible. Otherwise, it would be difficult to go back and see how a problem in the system was discovered. The following list provides examples of information that may be useful to document:
1. Test coverage: what input data have been used, how much has been covered, and how much remains to be tested
2. Evaluation notes: observations during testing, whether the system and feature under test seem to be stable, whether any defects were found, what is planned as the next step according to the current observations, and any other list of ideas
3. Risk/strategy list: which risks have been covered and which ones remain among the most important, whether the initial strategy will be followed, and whether it needs any changes
4. Issues, questions, and anomalies: any unexpected behavior, any questions regarding the efficiency of the approach, any concerns about the ideas/test attempts, the test environment, the test data, misunderstanding of the function, the test script, or the system under test
5. Actual behavior: recording of actual behavior of the system that needs to be saved (e.g., video, screen captures, output data files)

The information logged should be captured and/or summarized into some form of status management tool (e.g., test management tools, task management tools, the task board), in a way that makes it easy for stakeholders to understand the current status of all testing that was performed.

Test techniques and testing levels in Agile projects

Many of the test techniques and testing levels that apply to traditional projects can also be applied to Agile projects.

Agile Testing Practices

Many practices may be useful for testers in a Scrum team, some of which include:
1. Pairing: Two team members (e.g., a tester and a developer, two testers, or a tester and a product owner) sit together at one workstation to perform a testing or other sprint task.
2. Incremental test design: Test cases and charters are gradually built from user stories and other test bases, starting with simple tests and moving toward more complex ones.
3. Mind mapping: Mind mapping is a useful tool when testing. For example, testers can use mind mapping to identify which test sessions to perform, to show test strategies, and to describe test data.

These practices are in addition to other practices discussed in this syllabus and in Chapter 4 of the Foundation Level syllabus.

Definition of done - User Story

May be determined by the following criteria:
1. The user stories selected for the iteration are complete, understood by the team, and have detailed, testable acceptance criteria
2. All the elements of the user story, including the user story acceptance tests, have been specified, reviewed, and completed
3. Tasks necessary to implement and test the selected user stories have been identified and estimated by the team

Definition of done - Release

May include the following areas:
1. Coverage: All relevant test basis elements for all contents of the release have been covered by testing. The adequacy of the coverage is determined by what is new or changed, its complexity and size, and the associated risks of failure.
2. Quality: The defect intensity (e.g., how many defects are found per day or per transaction), the defect density (e.g., the number of defects found compared to the number of user stories, effort, and/or quality attributes), and the estimated number of remaining defects are within acceptable limits; the consequences of unresolved and remaining defects (e.g., the severity and priority) are understood and acceptable; and the residual level of risk associated with each identified quality risk is understood and acceptable.
3. Time: If the pre-determined delivery date has been reached, the business considerations associated with releasing and not releasing need to be considered.
4. Cost: The estimated lifecycle cost should be used to calculate the return on investment for the delivered system (i.e., the calculated development and maintenance cost should be considerably lower than the expected total sales of the product). The main part of the lifecycle cost often comes from maintenance after the product has been released, due to the number of defects escaping to production.

Definition of done - Feature

May include:
1. All constituent user stories, with acceptance criteria, are defined and approved by the customer
2. The design is complete, with no known technical debt
3. The code is complete, with no known technical debt or unfinished refactoring
4. Unit tests have been performed and have achieved the defined level of coverage
5. Integration tests and system tests for the feature have been performed according to the defined coverage criteria
6. No major defects remain to be corrected
7. Feature documentation is complete, which may include release notes, user manuals, and online help functions

Definition of done - Iteration

May include:
1. All features for the iteration are ready and individually tested according to the feature-level criteria
2. Any non-critical defects that cannot be fixed within the constraints of the iteration have been added to the product backlog and prioritized
3. Integration of all features for the iteration is completed and tested
4. Documentation has been written, reviewed, and approved

At this point, the software is potentially releasable because the iteration has been successfully completed, but not all iterations result in a release.

Quality risk mitigation before test execution

Quality risks can also be mitigated before test execution starts. For example, if problems with the user stories are found during risk identification, the project team can thoroughly review user stories as a mitigating strategy.

Quality risks or product risks vs. project risks or planning risks

Quality risks or product risks: When the primary effect of the potential problem is on product quality. Project risks or planning risks: When the primary effect of the potential problem is on project success.

Assessing Quality Risks in Agile Projects

Risk identification, analysis, and risk mitigation strategies can be used by the testers in Agile teams to help determine an acceptable number of test cases to execute, although many interacting constraints and variables may require compromises.

Risk

Risk is the possibility of a negative or undesirable outcome or event. The level of risk is found by assessing the likelihood of occurrence of the risk and the impact of the risk.
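
One common convention, assumed here rather than mandated by the syllabus, is to score likelihood and impact on small ordinal scales and take their product as the level of risk. A minimal sketch, reusing the example quality risks listed earlier:

```python
# Minimal sketch: likelihood and impact scored 1 (low) to 5 (high);
# the level of risk is their product. The scores themselves are illustrative.
quality_risks = [
    {"risk": "Incorrect calculations in reports", "likelihood": 3, "impact": 5},
    {"risk": "Slow response to user input",       "likelihood": 4, "impact": 3},
    {"risk": "Hard-to-understand screens",        "likelihood": 2, "impact": 2},
]

for r in quality_risks:
    r["level"] = r["likelihood"] * r["impact"]

# Higher risk levels get earlier and more thorough testing (see "Prioritizing tasks").
for r in sorted(quality_risks, key=lambda r: r["level"], reverse=True):
    print(f'{r["level"]:>2}  {r["risk"]}')
```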

Test Planning

Since testing is fully integrated into the Agile team, test planning should start during the release planning session and be updated during each sprint. Sprint planning results in a set of tasks to put on the task board, where each task should have a length of one or two days of work. In addition, any testing issues should be tracked to keep a steady flow of testing.

Test Design, Implementation, and Execution Tools

Some tools are useful to Agile testers at specific points in the software testing process. While most of these tools are not new or specific to Agile, they provide important capabilities given the rapid change of Agile projects.

Behavior-driven Development - definition of acceptance criteria

Specific behavior-driven development frameworks can be used to define acceptance criteria based on the given/when/then format: Given some initial context, When an event occurs, Then ensure some outcomes.
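
A minimal sketch of how the given/when/then structure maps onto an executable test, written in plain Python rather than a specific BDD framework; the Account class and the withdrawal scenario are assumptions for illustration.

```python
# Hypothetical system under test (assumption for illustration).
class Account:
    def __init__(self, balance: float) -> None:
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_within_balance():
    # Given some initial context
    account = Account(balance=100.0)
    # When an event occurs
    account.withdraw(30.0)
    # Then ensure some outcomes
    assert account.balance == 70.0

if __name__ == "__main__":
    test_withdrawal_within_balance()
    print("scenario passed")
```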

Empowered

Technical decisions regarding design and testing are made by the team as a whole (developers, testers, and Scrum Master), in collaboration with the product owner and other teams if needed. (Organizational and behavioral best practices in Scrum teams)

TDD

Test-Driven Development

Test-driven development's popularity

Test-driven development gained its popularity through Extreme Programming, but is also used in other Agile methodologies and sometimes in sequential lifecycles. It helps developers focus on clearly-defined expected results. The tests are automated and are used in continuous integration.

Collaborative

Testers collaborate with their team members, other teams, the stakeholders, the product owner, and the Scrum Master. (Organizational and behavioral best practices in Scrum teams)

Testers' role in the use of testing practices

Testers in Agile projects play a key role in guiding the use of testing practices throughout the lifecycle.

Co-located

Testers sit together with the developers and the product owner. (Organizational and behavioral best practices in Scrum teams)

Resilient

Testing must be able to respond to change, like all other activities in Agile projects. (Organizational and behavioral best practices in Scrum teams)

Testing quadrants

Testing quadrants align the test levels with the appropriate test types in the Agile methodology. During any given iteration, tests from any or all quadrants may be required. The testing quadrants apply to dynamic testing rather than static testing.

Planning poker

The product owner or customer reads a user story to the estimators. Each estimator has a deck of cards with values similar to the Fibonacci sequence (i.e., 0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...) or any other progression of choice (e.g., shirt sizes ranging from extra-small to extra-extra-large). The values represent the number of story points, effort days, or other units in which the team estimates. The Fibonacci sequence is recommended because the numbers in the sequence reflect that uncertainty grows proportionally with the size of the story. A high estimate usually means that the story is not well understood or should be broken down into multiple smaller stories.

The estimators discuss the feature and ask questions of the product owner as needed. Aspects such as development and testing effort, complexity of the story, and scope of testing play a role in the estimation. Therefore, it is advisable to include the risk level of a backlog item, in addition to the priority specified by the product owner, before the planning poker session is initiated.

When the feature has been fully discussed, each estimator privately selects one card to represent his or her estimate. All cards are then revealed at the same time. If all estimators selected the same value, that becomes the estimate. If not, the estimators discuss the differences in estimates, after which the poker round is repeated until agreement is reached, either by consensus or by applying rules (e.g., use the median, use the highest score) to limit the number of poker rounds. These discussions ensure a reliable estimate of the effort needed to complete product backlog items requested by the product owner and help improve collective knowledge of what has to be done.
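
A minimal sketch of the card-reveal and agreement rules described above; the revealed values and the fall-back-to-median rule are illustrative assumptions (real planning poker relies on the discussion, not on the arithmetic).

```python
import statistics

FIBONACCI_DECK = [0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

def poker_round(revealed, max_rounds_reached=False):
    """Return the agreed estimate, or None if another discussion round is needed."""
    if len(set(revealed)) == 1:
        return revealed[0]  # consensus: everyone chose the same card
    if max_rounds_reached:
        # Example rule from the text: fall back to the median to limit poker
        # rounds, snapped to the nearest card in the deck.
        median = statistics.median(revealed)
        return min(FIBONACCI_DECK, key=lambda card: abs(card - median))
    return None  # estimates differ: discuss and replay the round

print(poker_round([5, 5, 5, 5]))                             # -> 5 (consensus)
print(poker_round([3, 5, 8, 13]))                            # -> None (discuss again)
print(poker_round([3, 5, 8, 13], max_rounds_reached=True))   # -> 5 (card nearest the median 6.5)
```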

Self-organizing

The team may consist only of developers, but, as noted in Section 2.1.5, ideally there would be one or more testers. (Organizational and behavioral best practices in Scrum teams)

The Test Pyramid

The test pyramid concept is based on the testing principle of early QA and testing (i.e., eliminating defects as early as possible in the lifecycle). 1. Acceptance (top) 2. System 3. Integration 4. Unit (bottom)

Test Pyramid - number of tests

The test pyramid emphasizes having a large number of tests at the lower levels (bottom of the pyramid) and, as development moves to the upper levels, the number of tests decreases (top of the pyramid). 1. Acceptance (top) 2. System 3. Integration 4. Unit (bottom) Usually unit and integration level tests are automated and are created using API-based tools. At the system and acceptance levels, the automated tests are created using GUI-based tools.

Committed

The tester is committed to question and evaluate the product's behavior and characteristics with respect to the expectations and needs of the customers and users. (Organizational and behavioral best practices in Scrum teams)

Credible

The tester must ensure the credibility of the strategy for testing, its implementation, and execution, otherwise the stakeholders will not trust the test results. This is often done by providing information to the stakeholders about the testing process. (Organizational and behavioral best practices in Scrum teams)

The testing quadrants model advantages

The testing quadrants model, and its variants, helps to ensure that all important test types and test levels are included in the development lifecycle. This model also provides a way to differentiate and describe the types of tests to all stakeholders, including developers, testers, and business representatives.

Test case management tools

The type of test case management tools used in Agile may be part of the whole team's application lifecycle management or task management tool.

Risk analysis in Iteration planning

The whole team identifies and assesses the quality risks.

Automated test execution tools

There are test execution tools that are more aligned to Agile testing. Specific tools are available via both commercial and open source avenues to support test-first approaches, such as behavior-driven development, test-driven development, and acceptance test-driven development. These tools allow testers and business staff to express the expected system behavior in tables or natural language using keywords.
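
A minimal sketch of the keyword idea: a test expressed as a table of keywords and arguments, plus a tiny interpreter that executes it. The keywords, their implementations, and the banking example are assumptions; real tools provide much richer vocabularies and reporting.

```python
# A test expressed as a table of (keyword, argument) rows that a business
# reader can follow, and a tiny interpreter that executes it.
test_table = [
    ("open_account", 100.0),
    ("deposit", 50.0),
    ("check_balance", 150.0),
]

state = {"balance": 0.0}

def open_account(opening_balance):
    state["balance"] = opening_balance

def deposit(amount):
    state["balance"] += amount

def check_balance(expected):
    assert state["balance"] == expected, f"expected {expected}, got {state['balance']}"

KEYWORDS = {"open_account": open_account, "deposit": deposit, "check_balance": check_balance}

for keyword, argument in test_table:
    KEYWORDS[keyword](argument)
print("keyword-driven test passed")
```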

The definition of done

This concept of the definition of done is critical in Agile projects and applies in a number of different ways as discussed in the following sub-subsections.

Risk and changes

Throughout the project, the team should remain aware of additional information that may change the set of risks and/or the level of risk associated with known quality risks. Periodic adjustment of the quality risk analysis, which results in adjustments to the tests, should occur. Adjustments include identifying new risks, re-assessing the level of existing risks, and evaluating the effectiveness of risk mitigation activities.

Session-based test management

To manage exploratory testing, a method called session-based test management can be used. A session is defined as an uninterrupted period of testing, which could last from 60 to 120 minutes. Test sessions include the following:
1. Survey session (to learn how it works)
2. Analysis session (evaluation of the functionality or characteristics)

The quality of the tests depends on the tester's ability to ask relevant questions about what to test. Examples include the following:
1. What is most important to find out about the system?
2. In what way may the system fail?
3. What happens if...?
4. What should happen when...?
5. Are customer needs, requirements, and expectations fulfilled?
6. Can the system be installed (and removed if necessary) across all supported upgrade paths?

Tools in Agile Projects

Tools described in the Foundation Level syllabus are relevant and used by testers on Agile teams. Not all tools are used the same way and some tools have more relevance for Agile projects than they have in traditional projects. In addition to the tools described in the Foundation Level syllabus, testers on Agile projects may also utilize these tools:
1. Task Management and Tracking Tools
2. Communication and Information Sharing Tools
3. Software Build and Distribution Tools
4. Configuration Management Tools
5. Test Design, Implementation, and Execution Tools
6. Cloud Computing and Virtualization Tools

These tools are used by the whole team to ensure team collaboration and information sharing, which are key to Agile practices.

Exploratory test tools

Tools that capture and log activities performed on an application during an exploratory test session are beneficial to the tester and developer, as they record the actions taken. This is useful when a defect is found, as the actions taken before the failure occurred have been captured and can be used to report the defect to the developers. Logging steps performed in an exploratory test session may prove to be beneficial if the test is ultimately included in the automated regression test suite.

Test data preparation and generation tools

Tools that generate data to populate an application's database are very beneficial when a lot of data and combinations of data are necessary to test the application. These tools can also help re-define the database structure as the product undergoes changes during an Agile project and refactor the scripts to generate the data. This allows quick updating of test data as changes occur. Some test data preparation tools use production data sources as a raw material and use scripts to remove or anonymize sensitive data. Other test data preparation tools can help with validating large data inputs or outputs.
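
A minimal sketch of the two ideas mentioned above, generation and anonymization, using only the Python standard library; the field names and the hashing-based masking rule are assumptions.

```python
import csv
import hashlib
import io
import random

random.seed(7)  # deterministic output so the generated test data is reproducible

# 1. Generate synthetic rows to populate a (hypothetical) customer table.
def generate_customers(n):
    for i in range(1, n + 1):
        yield {"id": i,
               "name": f"Customer {i:03d}",
               "zip": f"{random.randint(0, 99999):05d}"}

# 2. Anonymize a sensitive column taken from a production-like source:
#    replace the real name with a one-way hash so records stay distinct.
def anonymize(row):
    row = dict(row)
    row["name"] = hashlib.sha256(row["name"].encode()).hexdigest()[:12]
    return row

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "name", "zip"])
writer.writeheader()
for row in generate_customers(5):
    writer.writerow(anonymize(row))
print(buffer.getvalue())
```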

Test design tools

The use of tools such as mind maps has become more popular for quickly designing and defining tests for a new feature.

Cloud Computing and Virtualization Tools

Virtualization allows a single physical resource (server) to operate as many separate, smaller resources. When virtual machines or cloud instances are used, teams have a greater number of servers available to them for development and testing. This can help to avoid delays associated with waiting for physical servers. Provisioning a new server or restoring a server is more efficient with snapshot capabilities built into most virtualization tools. Some test management tools now utilize virtualization technologies to snapshot servers at the point when a fault is detected, allowing testers to share the snapshot with the developers investigating the fault.

Configuration management tools

are important to testers in Agile teams due to the high number of automated tests at all levels and the need to store and manage the associated automated test artifacts.

Testing quadrants Q2

is system level, business facing, and confirms product behavior. This quadrant contains functional tests, examples, story tests, user experience prototypes, and simulations. These tests check the acceptance criteria and can be manual or automated. They are often created during the user story development and thus improve the quality of the stories. They are useful when creating automated regression test suites.

Testing quadrants Q4

is system or operational acceptance level, technology facing, and contains tests that critique the product. This quadrant contains performance, load, stress, and scalability tests, security tests, maintainability, memory management, compatibility and interoperability, data migration, infrastructure, and recovery testing. These tests are often automated.

Testing quadrants Q3

is system or user acceptance level, business facing, and contains tests that critique the product, using realistic scenarios and data. This quadrant contains exploratory testing, scenarios, process flows, usability testing, user acceptance testing, alpha testing, and beta testing. These tests are often manual and are user-oriented.

Sprint zero

is the first iteration of the project where many preparation activities take place (see Section 1.2.5).

Testing quadrants Q1

is unit level, technology facing, and supports the developers. This quadrant contains unit tests. These tests should be automated and included in the continuous integration process

Test-Driven Development

is used to develop code guided by automated test cases. The process for test-driven development is:
1. Add a test that captures the programmer's concept of the desired functioning of a small piece of code
2. Run the test, which should fail since the code doesn't exist
3. Write the code and run the test in a tight loop until the test passes
4. Refactor the code after the test is passed, re-running the test to ensure it continues to pass against the refactored code
5. Repeat this process for the next small piece of code, running the previous tests as well as the added tests

The tests written are primarily unit level and are code-focused, though tests may also be written at the integration or system levels.
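
A minimal sketch of one pass through this cycle, using the standard unittest module; the leap-year rule chosen as the "small piece of code" is an assumption for illustration.

```python
import unittest

# Step 1: the test captures the desired behaviour before the code exists.
class TestLeapYear(unittest.TestCase):
    def test_leap_year_rules(self):
        self.assertTrue(is_leap_year(2000))   # divisible by 400
        self.assertFalse(is_leap_year(1900))  # century year, not divisible by 400
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

# Step 2: running the test first fails (is_leap_year does not exist yet).
# Step 3: write just enough code to make the test pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 4: refactor if needed and re-run; step 5: repeat for the next piece.
if __name__ == "__main__":
    unittest.main()
```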

Configuration Management Tools

may be used not only to store source code and automated tests, but manual tests and other test work products are often stored in the same repository as the product source code. This provides traceability between which versions of the software were tested with which particular versions of the tests, and allows for rapid change without losing historical information. The main types of version control systems include centralized source control systems and distributed version control systems. The team size, structure, location, and requirements to integrate with other tools will determine which version control system is right for a particular Agile project.

Test charter

provides the test conditions to cover during a time-boxed testing session. May include the following information:
1. Actor: intended user of the system
2. Purpose: the theme of the charter, including what particular objective the actor wants to achieve, i.e., the test conditions
3. Setup: what needs to be in place in order to start the test execution
4. Priority: relative importance of this charter, based on the priority of the associated user story or the risk level
5. Reference: specifications (e.g., user story), risks, or other information sources
6. Data: whatever data is needed to carry out the charter
7. Activities: a list of ideas of what the actor may want to do with the system (e.g., "Log on to the system as a super user") and what would be interesting to test (both positive and negative tests)
8. Oracle notes: how to evaluate the product to determine correct results (e.g., to capture what happens on the screen and compare to what is written in the user's manual)
9. Variations: alternative actions and evaluations to complement the ideas described under activities
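
A minimal sketch of a charter captured as a structured record; the example values are hypothetical, and a card or wiki page serves the same purpose.

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    actor: str
    purpose: str
    setup: str
    priority: str
    reference: str
    data: str
    activities: list = field(default_factory=list)
    oracle_notes: str = ""
    variations: list = field(default_factory=list)

charter = TestCharter(
    actor="Super user",
    purpose="Explore the report export for rounding problems",
    setup="Test environment with the sample ledger loaded",
    priority="High (derived from the risk level of the related user story)",
    reference="User story US-123 (hypothetical)",
    data="Sample ledger with boundary amounts",
    activities=["Log on to the system as a super user",
                "Export a report containing very large and very small amounts"],
    oracle_notes="Compare on-screen totals with the figures in the exported file",
)
print(charter.purpose)
```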

