Fundamentals of Testing - Chapter 2


Beta Testing

- A form of acceptance testing.
- Used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential or existing users, customers, and/or operators before the software product is put on the market.
- Performed by potential or existing customers, and/or operators at their own locations.
- May come after alpha testing, or may occur without any preceding alpha testing having occurred.

regulatory acceptance testing

- A form of acceptance testing.
- Performed against any regulations that must be adhered to, such as government, legal, or safety regulations.
- Often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies.

contractual acceptance testing

- A form of acceptance testing.
- Performed against a contract's acceptance criteria for producing custom-developed software; the acceptance criteria should be defined when the parties agree to the contract.
- Often performed by users or by independent testers.

operational acceptance testing

- A form of acceptance testing.
- The acceptance testing of the system by operations or systems administration staff, usually performed in a (simulated) production environment.
- The tests focus on operational aspects, and may include:
  - Testing of backup and restore
  - Installing, uninstalling and upgrading
  - Disaster recovery
  - User management
  - Maintenance tasks
  - Data load and migration tasks
  - Checks for security vulnerabilities
  - Performance testing

maintenance testing

- Once deployed to production environments, software and systems need to be maintained. Changes of various sorts are almost inevitable in delivered software and systems, either to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality.
- Maintenance testing evaluates the success with which the changes were made and checks for possible side-effects.
- It focuses on testing the changes to the system, as well as testing unchanged parts that might have been affected by the changes.
- Triggers: modification, migration, retirement (page 43).

Alpha Testing

- A form of acceptance testing.
- Used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential or existing users, customers, and/or operators before the software product is put on the market.
- Performed at the developing organization's site, not by the development team, but by potential or existing customers, and/or operators, or an independent test team.

sequential development model

A sequential development model describes the software development process as a linear, sequential flow of activities. This means that any phase in the development process should begin when the previous phase is complete.

Identify reasons why software development lifecycle models must be adapted to the context of project and product characteristics

An appropriate software development lifecycle model should be selected and adapted based on the project goal, the type of product being developed, business priorities (e.g., time-to market), and identified product and project risks. For example, the development and testing of a minor internal administrative system should differ from the development and testing of a safety-critical system such as an automobile's brake control system.

confirmation testing

After a defect is fixed, the software may be tested with all test cases that failed due to the defect, re-executed on the new software version. The software may also be tested with new tests if, for instance, the defect was missing functionality. At the very least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new software version. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.
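
A minimal sketch of what a confirmation test can look like in practice, assuming a hypothetical pytest project in which a reported defect (a 10% discount being applied twice) has just been fixed; the function and test names are illustrative, not from the syllabus.

```python
import pytest

# Hypothetical pricing module after the defect fix: the discount is now
# applied exactly once (the defective version subtracted it twice).
def apply_discount(price, rate):
    return price * (1 - rate)

# The test case that originally failed and exposed the defect.
# Re-executing it on the new software version confirms the fix.
def test_discount_applied_once():
    assert apply_discount(100.00, 0.10) == pytest.approx(90.00)
```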

Explain the relationships between software development activities and test activities in the software development lifecycle

In any software development lifecycle model, there are several characteristics of good testing:
- For every development activity, there is a corresponding test activity
- Each test level has test objectives specific to that level
- Test analysis and design for a given test level begin during the corresponding development activity
- Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available

regression testing

It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behavior of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Such unintended side-effects are called regressions. Regression testing involves running tests to detect such unintended side-effects.
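
A minimal sketch of how a regression suite guards against such side-effects, assuming a hypothetical pytest suite; the module and test names are illustrative. The whole suite is re-executed after every change, and a failure in a test unrelated to the change signals a regression.

```python
import pytest

# Hypothetical pricing module; suppose apply_discount was just changed.
def apply_discount(price, rate):
    return price * (1 - rate)

def add_tax(price, tax_rate=0.20):
    return price * (1 + tax_rate)

# Covers the behavior that was intentionally changed.
def test_discount():
    assert apply_discount(100.00, 0.10) == pytest.approx(90.00)

# Guards behavior that was NOT meant to change; if this starts failing
# after the discount change, a regression has been introduced.
def test_tax_unchanged():
    assert add_tax(100.00) == pytest.approx(120.00)
```

Because regression suites like this are run many times after every modification, they are strong candidates for automation.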

Compare the purposes of confirmation testing and regression testing

See page 41. Confirmation testing re-executes the tests that originally failed in order to confirm that the changes corrected the defect and that the functionality now works as intended; regression testing runs tests to detect unintended side-effects of those changes in parts of the system that were not meant to change.

Describe the role of impact analysis in maintenance testing

See pages 43 and 44. Impact analysis evaluates the changes that were made for a maintenance release to identify the intended consequences as well as the expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change.

Summarize triggers for maintenance testing

See page 43.
- Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes, changes of the operational environment (such as planned operating system or database upgrades), upgrades of COTS software, and patches for defects and vulnerabilities
- Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained
- Retirement, such as when an application reaches the end of its life

integration testing

focuses on interactions between components or systems. Objectives of integration testing include: -Reducing risk -Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified -Building confidence in the quality of the interfaces -Finding defects (which may be in the interfaces themselves or within the components or systems) -Preventing defects from escaping to higher test levels

system testing

focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks. Objectives of system testing include: -Reducing risk -Verifying whether the functional and non-functional behaviors of the system are as designed and specified -Validating that the system is complete and will work as expected -Building confidence in the quality of the system as a whole -Finding defects -Preventing defects from escaping to higher test levels or production

component integration testing

focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process.
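
A minimal sketch of a component integration test, assuming two hypothetical components (an order-line parser and a price calculator); the focus is on the interface between them rather than on either component's internal logic.

```python
# Component A: parses an order line "item,qty,unit_price" into a dict.
def parse_order(line):
    item, qty, unit_price = line.split(",")
    return {"item": item, "qty": int(qty), "unit_price": float(unit_price)}

# Component B: consumes the dict produced by component A.
def order_total(order):
    return order["qty"] * order["unit_price"]

# Integration focus: the structure produced by parse_order is accepted,
# field for field, by order_total.
def test_parser_output_feeds_calculator():
    order = parse_order("widget,3,2.50")
    assert order_total(order) == 7.5
```

In a continuous integration pipeline, tests like this would run automatically on every commit, which is why they are usually automated.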

non-functional testing

-evaluates characteristics of systems and software such as usability, performance efficiency or security. -testing of "how well" the system behaves -Black-box techniques (see section 4.2) may be used to derive test conditions and test cases
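
A minimal sketch of a non-functional ("how well") check, assuming a hypothetical in-memory search function and an illustrative 50 ms response-time budget; real performance testing would use dedicated tooling and a controlled environment.

```python
import time

# Hypothetical functional code under test.
def search(catalogue, term):
    return [item for item in catalogue if term in item]

# Non-functional test: the correctness of the result is not the point here;
# the elapsed time (performance efficiency) is what is being evaluated.
def test_search_meets_illustrative_time_budget():
    catalogue = [f"item-{i}" for i in range(100_000)]
    start = time.perf_counter()
    search(catalogue, "item-99999")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.05  # 50 ms budget, purely illustrative
```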

impact analysis

-evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change. -may be done before a change is made, to help decide if the change should be made, based on the potential consequences in other areas of the system.

System Integration Testing

-focuses on the interactions and interfaces between systems, packages, and microservices. -can also cover interactions with, and interfaces provided by, external organizations (e.g., web services). In this case, the developing organization does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organization's code are resolved, arranging for test environments, etc.). -may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development).

functional testing

-involves tests that evaluate functions that the system should perform. -may be described in work products such as business requirements specifications, epics, user stories, use cases, or functional specifications, or they may be undocumented. -The functions are "what" the system should do. -considers the behavior of the software -black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system

Compare the different test levels from the perspective of objectives, test basis, test objects, typical defects and failures, and approaches and responsibilities

COMPONENT TESTING

1. Objectives (pg 31): Component testing (also known as unit or module testing) focuses on components that are separately testable. Objectives of component testing include:
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the component are as designed and specified
- Building confidence in the component's quality
- Finding defects in the component
- Preventing defects from escaping to higher test levels

2. Test basis: Examples of work products that can be used as a test basis for component testing include:
- Detailed design
- Code
- Data model
- Component specifications

3. Test objects: Typical test objects for component testing include:
- Components, units or modules
- Code and data structures
- Classes
- Database modules

4. Typical defects and failures: Examples for component testing include:
- Incorrect functionality (e.g., not as described in design specifications)
- Data flow problems
- Incorrect code and logic

5. Approaches and responsibilities: Component testing is usually performed by the developer who wrote the code, and it at least requires access to the code being tested.

INTEGRATION TESTING

1. Objectives (see pg 32 for more info): Integration testing focuses on interactions between components or systems. Objectives of integration testing include:
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified
- Building confidence in the quality of the interfaces
- Finding defects (which may be in the interfaces themselves or within the components or systems)
- Preventing defects from escaping to higher test levels
See pg 32 for more info on component integration testing and system integration testing.

2. Test basis: Examples of work products that can be used as a test basis for integration testing include:
- Software and system design
- Sequence diagrams
- Interface and communication protocol specifications
- Use cases
- Architecture at component or system level
- Workflows
- External interface definitions

3. Test objects: Typical test objects for integration testing include:
- Subsystems
- Databases
- Infrastructure
- Interfaces
- APIs
- Microservices

4. Typical defects and failures: See page 33 for typical defects and failures for component integration testing and system integration testing.

5. Approaches and responsibilities: Component integration tests and system integration tests should concentrate on the integration itself. For example:
- If integrating module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules, as that should have been covered during component testing.
- If integrating system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems, as that should have been covered during system testing.
Component integration testing is often the responsibility of developers; system integration testing is generally the responsibility of testers. See page 34 for more information.

SYSTEM TESTING

System testing focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks.

1. Objectives: Objectives of system testing include:
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the system are as designed and specified
- Validating that the system is complete and will work as expected
- Building confidence in the quality of the system as a whole
- Finding defects
- Preventing defects from escaping to higher test levels or production

2. Test basis: Examples of work products that can be used as a test basis for system testing include:
- System and software requirement specifications (functional and non-functional)
- Risk analysis reports
- Use cases
- Epics and user stories
- Models of system behavior
- State diagrams
- System and user manuals

3. Test objects: Typical test objects for system testing include:
- Applications
- Hardware/software systems
- Operating systems
- System under test (SUT)
- System configuration and configuration data

4. Typical defects and failures: Examples for system testing include:
- Incorrect calculations
- Incorrect or unexpected system functional or non-functional behavior
- Incorrect control and/or data flows within the system
- Failure to properly and completely carry out end-to-end functional tasks
- Failure of the system to work properly in the production environment(s)
- Failure of the system to work as described in system and user manuals

5. Approaches and responsibilities: System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional and non-functional, and should use the most appropriate techniques (see chapter 4) for the aspect(s) of the system to be tested. For example, a decision table may be created to verify whether functional behavior is as described in business rules (a sketch follows at the end of this card). See pages 35 and 36 for more info.

ACCEPTANCE TESTING

1. Objectives: Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product. Objectives of acceptance testing include:
- Establishing confidence in the quality of the system as a whole
- Validating that the system is complete and will work as expected
- Verifying that functional and non-functional behaviors of the system are as specified
See pages 36-37 for common forms of acceptance testing.

2. Test basis: Examples of work products that can be used as a test basis for any form of acceptance testing include:
- Business processes
- User or business requirements
- Regulations, legal contracts and standards
- Use cases
- System requirements
- System or user documentation
- Installation procedures
- Risk analysis reports
See page 38 for more work products that can be used.

3. Test objects: Typical test objects for any form of acceptance testing include:
- System under test
- System configuration and configuration data
- Business processes for a fully integrated system
- Recovery systems and hot sites (for business continuity and disaster recovery testing)
- Operational and maintenance processes
- Forms
- Reports
- Existing and converted production data

4. Typical defects and failures: Examples for any form of acceptance testing include:
- System workflows do not meet business or user requirements
- Business rules are not implemented correctly
- System does not satisfy contractual or regulatory requirements
- Non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform

5. Approaches and responsibilities: Acceptance testing is often the responsibility of the customers, business users, product owners, or operators of a system, and other stakeholders may be involved as well. Acceptance testing is often thought of as the last test level in a sequential development lifecycle, but it may also occur at other times, for example:
- Acceptance testing of a COTS software product may occur when it is installed or integrated
- Acceptance testing of a new functional enhancement may occur before system testing
For more information, see page 39.
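
As a small illustration of the decision-table idea mentioned under system testing above, here is a hedged sketch assuming a hypothetical business rule (free shipping for members, or for orders of 50 or more); each parametrized row corresponds to one decision-table rule. A real system test would exercise the rule end-to-end through the deployed system rather than calling a function directly.

```python
import pytest

# Hypothetical implementation of the business rule under test.
def shipping_fee(order_total, is_member):
    return 0.0 if is_member or order_total >= 50 else 4.99

# Each row is one decision-table rule: (order_total, is_member, expected fee).
RULES = [
    (60.0, False, 0.0),   # large order, non-member -> free shipping
    (30.0, True,  0.0),   # small order, member     -> free shipping
    (60.0, True,  0.0),   # large order, member     -> free shipping
    (30.0, False, 4.99),  # small order, non-member -> standard fee
]

@pytest.mark.parametrize("order_total,is_member,expected_fee", RULES)
def test_shipping_business_rules(order_total, is_member, expected_fee):
    assert shipping_fee(order_total, is_member) == expected_fee
```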

white-box testing

Derives tests based on the system's internal structure or implementation. Internal structure may include code, architecture, work flows, and/or data flows within the system.

White-box test coverage can be measured through structural coverage: the extent to which some type of structural element has been exercised by tests, expressed as a percentage of the type of element being covered.

At the component testing level, coverage is based on the percentage of component code that has been tested, and may be measured in terms of different aspects of code (coverage items) such as the percentage of executable statements tested in the component, or the percentage of decision outcomes tested. These types of coverage are collectively called code coverage.

At the component integration testing level, white-box testing may be based on the architecture of the system, such as interfaces between components, and structural coverage may be measured in terms of the percentage of interfaces exercised by tests.
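
A minimal sketch of the statement-versus-decision distinction, using a hypothetical helper function; the first test alone already executes every statement, but only the second adds the untaken decision outcome. Coverage like this can be measured with a tool such as coverage.py.

```python
# Hypothetical helper with a single decision ("x < 0") and no else-branch.
def absolute(x):
    if x < 0:
        x = -x
    return x

# Executes every statement (100% statement coverage) but exercises only the
# True outcome of the decision, so decision coverage is 50%.
def test_negative_input():
    assert absolute(-3) == 3

# Adds the False outcome of the decision, bringing decision coverage to 100%.
def test_non_negative_input():
    assert absolute(3) == 3
```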

Compare functional, non-functional, and white-box testing

Functional testing involves tests that evaluate functions that the system should perform; the functions are "what" the system should do, so functional testing considers the behavior of the software. See page 39.

Non-functional testing evaluates characteristics of systems and software such as usability, performance efficiency or security; it is testing of "how well" the system behaves. See page 40.

White-box testing derives tests based on the system's internal structure or implementation; internal structure may include code, architecture, work flows, and/or data flows within the system. See page 40.

Acceptance Testing

Like system testing, typically focuses on the behavior and capabilities of a whole system or product. Objectives of acceptance testing include: -Establishing confidence in the quality of the system as a whole -Validating that the system is complete and will work as expected -Verifying that functional and non-functional behaviors of the system are as specified -May produce information to assess the system's readiness for deployment and use by the customer (end-user).

