Chapter 8: Software Testing (Sommerville, Software Engineering, 10th Edition, Global Edition, 2019)


Automated tests have three parts. What are they?

1. A setup part, where you initialize the system with the test case, namely, the inputs and expected outputs.
2. A call part, where you call the object or method to be tested.
3. An assertion part, where you compare the result of the call with the expected result. If the assertion evaluates to true, the test has been successful; if false, it has failed.
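A minimal sketch of this three-part structure in Python, using the standard unittest module; the apply_discount function is a hypothetical example, not something from the chapter:

```python
# A minimal, self-contained sketch of a three-part automated test
# (setup / call / assertion) using Python's built-in unittest module.
import unittest


def apply_discount(price, percent):
    """Hypothetical function under test: returns the discounted price."""
    return round(price * (1 - percent / 100), 2)


class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # 1. Setup: define the inputs and the expected output.
        price, percent = 200.0, 10
        expected = 180.0
        # 2. Call: invoke the function or method being tested.
        actual = apply_discount(price, percent)
        # 3. Assertion: compare the actual result with the expected result.
        self.assertEqual(expected, actual)


if __name__ == "__main__":
    unittest.main()
```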

There are three types of user testing. What are they?

1. Alpha testing, where a selected group of software users work closely with the development team to test early releases of the software.
2. Beta testing, where a release of the software is made available to a larger group of users to allow them to experiment and to raise problems that they discover with the system developers.
3. Acceptance testing, where customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment.

What test inputs and outputs can be identified from the weather station sequence diagram?

1. An input of a request for a report should have an associated acknowledgment. A report should ultimately be returned from the request. During testing, you should create summarized data that can be used to check that the report is correctly organized.
2. An input request for a report to WeatherStation results in a summarized report being generated. You can test this in isolation by creating raw data corresponding to the summary that you have prepared for the test of SatComms and checking that the WeatherStation object correctly produces this summary. This raw data is also used to test the WeatherData object.
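The following is a minimal, hypothetical Python sketch (the WeatherStation and WeatherData classes here are stand-ins, not the book's actual design) of testing the WeatherStation summary in isolation against prepared raw data:

```python
# A minimal sketch (not the book's actual code) of testing WeatherStation in
# isolation: feed it prepared raw data and check the summary it produces.
import unittest


class WeatherData:
    """Hypothetical raw-data holder used by the station."""
    def __init__(self, temperatures):
        self.temperatures = temperatures


class WeatherStation:
    """Hypothetical station that summarises raw weather data."""
    def summarise(self, data):
        temps = data.temperatures
        return {"max": max(temps), "min": min(temps),
                "avg": round(sum(temps) / len(temps), 1)}


class WeatherStationTest(unittest.TestCase):
    def test_summary_matches_prepared_data(self):
        # Raw data corresponding to the summary prepared for the SatComms test.
        raw = WeatherData([2.0, 4.0, 6.0])
        expected = {"max": 6.0, "min": 2.0, "avg": 4.0}
        self.assertEqual(expected, WeatherStation().summarise(raw))


if __name__ == "__main__":
    unittest.main()
```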

What are the benefits of test-driven development?

1. Code coverage. In principle, every code segment that you write should have at least one associated test. Therefore, you can be confident that all of the code in the system has actually been executed. Code is tested as it is written, so defects are discovered early in the development process.
2. Regression testing. A test suite is developed incrementally as a program is developed. You can always run regression tests to check that changes to the program have not introduced new bugs.
3. Simplified debugging. When a test fails, it should be obvious where the problem lies. The newly written code needs to be checked and modified. You do not need to use debugging tools to locate the problem. Reports of the use of TDD suggest that it is hardly ever necessary to use an automated debugger in test-driven development (Martin 2007).
4. System documentation. The tests themselves act as a form of documentation that describe what the code should be doing. Reading the tests can make it easier to understand the code.

What are the six stages of acceptance testing?

1. Define acceptance criteria. This stage should ideally take place early in the process before the contract for the system is signed. The acceptance criteria should be part of the system contract and be approved by the customer and the developer. In practice, however, it can be difficult to define criteria so early in the process. Detailed requirements may not be available, and the requirements will almost certainly change during the development process.
2. Plan acceptance testing. This stage involves deciding on the resources, time, and budget for acceptance testing and establishing a testing schedule. The acceptance test plan should also discuss the required coverage of the requirements and the order in which system features are tested. It should define risks to the testing process, such as system crashes and inadequate performance, and discuss how these risks can be mitigated.
3. Derive acceptance tests. Once acceptance criteria have been established, tests have to be designed to check whether or not a system is acceptable. Acceptance tests should aim to test both the functional and non-functional characteristics (e.g., performance) of the system. They should ideally provide complete coverage of the system requirements. In practice, it is difficult to establish completely objective acceptance criteria. There is often scope for argument about whether or not a test shows that a criterion has definitely been met.
4. Run acceptance tests. The agreed acceptance tests are executed on the system. Ideally, this step should take place in the actual environment where the system will be used, but this may be disruptive and impractical. Therefore, a user testing environment may have to be set up to run these tests. It is difficult to automate this process, as part of the acceptance tests may involve testing the interactions between end-users and the system. Some training of end-users may be required.
5. Negotiate test results. It is very unlikely that all of the defined acceptance tests will pass and that there will be no problems with the system. If this is the case, then acceptance testing is complete and the system can be handed over. More commonly, some problems will be discovered. In such cases, the developer and the customer have to negotiate to decide if the system is good enough to be used. They must also agree on how the developer will fix the identified problems.
6. Reject/accept system. This stage involves a meeting between the developers and the customer to decide on whether or not the system should be accepted. If the system is not good enough for use, then further development is required to fix the identified problems. Once complete, the acceptance testing phase is repeated.

When testing software, you are doing two things. What are they?

1. Demonstrate to the developer and the customer that the software meets its requirements. For custom software, this means that there should be at least one test for every requirement in the requirements document. For generic software products, it means that there should be tests for all of the system features that will be included in the product release. You may also test combinations of features to check for unwanted interactions between them.
2. Find inputs or input sequences where the behavior of the software is incorrect, undesirable, or does not conform to its specification. These are caused by defects (bugs) in the software. When you test software to find defects, you are trying to root out undesirable system behavior such as system crashes, unwanted interactions with other systems, incorrect computations, and data corruption.

A software system has to go through three stages of testing. What are they?

1. Development testing, where the system is tested during development to discover bugs and defects. System designers and programmers are likely to be involved in the testing process.
2. Release testing, where a separate testing team tests a complete version of the system before it is released to users. The aim of release testing is to check that the system meets the requirements of the system stakeholders.
3. User testing, where users or potential users of a system test the system in their own environment. For software products, the "user" may be an internal marketing group that decides if the software can be marketed, released, and sold. Acceptance testing is one type of user testing where the customer formally tests a system to decide if it should be accepted from the system supplier or if further development is required.

System testing can overlap with component testing. What are the two important differences between them?

1. During system testing, reusable components that have been separately developed and off-the-shelf systems may be integrated with newly developed components. The complete system is then tested.
2. Components developed by different team members or subteams may be integrated at this stage. System testing is a collective rather than an individual process. In some companies, system testing may involve a separate testing team with no involvement from designers and programmers.

Software inspection has three advantages over testing. What are they?

1. During testing, errors can mask (hide) other errors. When an error leads to unexpected outputs, you can never be sure if later output anomalies are due to a new error or are side effects of the original error. Because inspection doesn't involve executing the system, you don't have to worry about interactions between errors. Consequently, a single inspection session can discover many errors in a system.
2. Incomplete versions of a system can be inspected without additional costs. If a program is incomplete, then you need to develop specialized test harnesses to test the parts that are available. This obviously adds to the system development costs.
3. As well as searching for program defects, an inspection can also consider broader quality attributes of a program, such as compliance with standards, portability, and maintainability. You can look for inefficiencies, inappropriate algorithms, and poor programming style that could make the system difficult to maintain and update.

What are the general guidelines for interface testing?

1. Examine the code to be tested and identify each call to an external component. Design a set of tests in which the values of the parameters to the external components are at the extreme ends of their ranges. These extreme values are most likely to reveal interface inconsistencies.
2. Where pointers are passed across an interface, always test the interface with null pointer parameters.
3. Where a component is called through a procedural interface, design tests that deliberately cause the component to fail. Differing failure assumptions are one of the most common specification misunderstandings.
4. Use stress testing in message passing systems. This means that you should design tests that generate many more messages than are likely to occur in practice. This is an effective way of revealing timing problems.
5. Where several components interact through shared memory, design tests that vary the order in which these components are activated. These tests may reveal implicit assumptions made by the programmer about the order in which the shared data is produced and consumed.
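A minimal sketch of guidelines 1 and 2 in Python; the search function is a hypothetical component, and None plays the role of a null pointer parameter:

```python
# A minimal sketch (hypothetical component, not from the book) of interface
# tests following the guidelines: extreme parameter values and null references.
import unittest


def search(items, key):
    """Hypothetical component under test: returns the index of key or -1."""
    if items is None:
        raise ValueError("items must not be None")
    for i, item in enumerate(items):
        if item == key:
            return i
    return -1


class InterfaceTest(unittest.TestCase):
    def test_extreme_values(self):
        # Guideline 1: parameters at the extreme ends of their ranges,
        # here an empty list and a very large list.
        self.assertEqual(-1, search([], 7))
        self.assertEqual(99_999, search(list(range(100_000)), 99_999))

    def test_null_parameter(self):
        # Guideline 2: always test the interface with null (None) parameters.
        with self.assertRaises(ValueError):
            search(None, 7)


if __name__ == "__main__":
    unittest.main()
```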

What are the different types of interfaces between program components?

1. Parameter interfaces. These are interfaces in which data or sometimes function references are passed from one component to another. Methods in an object have a parameter interface.
2. Shared memory interfaces. These are interfaces in which a block of memory is shared between components. Data is placed in the memory by one subsystem and retrieved from there by other subsystems. This type of interface is used in embedded systems, where sensors create data that is retrieved and processed by other system components.
3. Procedural interfaces. These are interfaces in which one component encapsulates a set of procedures that can be called by other components. Objects and reusable components have this form of interface.
4. Message passing interfaces. These are interfaces in which one component requests a service from another component by passing a message to it. A return message includes the results of executing the service. Some object-oriented systems have this form of interface, as do client-server systems.

What two strategies can be effective in helping to choose test cases?

1. Partition testing, where you identify groups of inputs that have common characteristics and should be processed in the same way. You should choose tests from within each of these groups.
2. Guideline-based testing, where you use testing guidelines to choose test cases. These guidelines reflect previous experience of the kinds of errors that programmers often make when developing components.
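A minimal sketch of partition testing in Python, assuming a hypothetical grade function whose input partitions are invalid marks (below 0 or above 100), failing marks (0-39), and passing marks (40-100):

```python
# A minimal sketch of partition testing for a hypothetical grade() function.
# Assumed partitions: invalid (<0 or >100), fail (0-39), pass (40-100).
import unittest


def grade(mark):
    """Hypothetical function under test."""
    if mark < 0 or mark > 100:
        raise ValueError("mark out of range")
    return "pass" if mark >= 40 else "fail"


class PartitionTest(unittest.TestCase):
    def test_one_case_from_each_valid_partition(self):
        # Choose mid-partition values and boundary values from each group.
        self.assertEqual("fail", grade(20))   # inside the fail partition
        self.assertEqual("fail", grade(39))   # boundary between fail and pass
        self.assertEqual("pass", grade(40))   # lower boundary of pass
        self.assertEqual("pass", grade(100))  # upper boundary of pass

    def test_invalid_partitions(self):
        for mark in (-1, 101):                # values just outside valid range
            with self.assertRaises(ValueError):
                grade(mark)


if __name__ == "__main__":
    unittest.main()
```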

The level of required confidence depends on the system's purpose, the expectations of the system users, and the current marketing environment for the system. How does each factor affect it?

1. Software purpose. The more critical the software, the more important it is that it is reliable. For example, the level of confidence required for software used to control a safety-critical system is much higher than that required for a demonstrator system that prototypes new product ideas.
2. User expectations. Because of their previous experiences with buggy, unreliable software, users sometimes have low expectations of software quality. They are not surprised when their software fails. When a new system is installed, users may tolerate failures because the benefits of use outweigh the costs of failure recovery. However, as a software product becomes more established, users expect it to become more reliable. Consequently, more thorough testing of later versions of the system may be required.
3. Marketing environment. When a software company brings a system to market, it must take into account competing products, the price that customers are willing to pay for a system, and the required schedule for delivering that system. In a competitive environment, the company may decide to release a program before it has been fully tested and debugged because it wants to be the first into the market. If a software product or app is very cheap, users may be willing to tolerate a lower level of reliability.

When testing programs that process sequences, what guidelines can help to reveal defects?

1. Test software with sequences that have only a single value. Programmers naturally think of sequences as made up of several values, and sometimes they embed this assumption in their programs. Consequently, if presented with a single-value sequence, a program may not work properly.
2. Use different sequences of different sizes in different tests. This decreases the chances that a program with defects will accidentally produce a correct output because of some accidental characteristics of the input.
3. Derive tests so that the first, middle, and last elements of the sequence are accessed. This approach reveals problems at partition boundaries.
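A minimal sketch applying these three guidelines in Python to a hypothetical largest function:

```python
# A minimal sketch applying the sequence-testing guidelines to a hypothetical
# largest() function: single-value sequences, varied lengths, and tests that
# exercise the first, middle, and last elements.
import unittest


def largest(seq):
    """Hypothetical function under test: returns the largest element."""
    result = seq[0]
    for value in seq[1:]:
        if value > result:
            result = value
    return result


class SequenceGuidelineTest(unittest.TestCase):
    def test_single_value_sequence(self):
        # Guideline 1: a sequence with only one value.
        self.assertEqual(5, largest([5]))

    def test_different_lengths(self):
        # Guideline 2: different sequences of different sizes.
        self.assertEqual(3, largest([1, 3]))
        self.assertEqual(9, largest([4, 9, 2, 7, 1]))

    def test_first_middle_last_positions(self):
        # Guideline 3: the largest value at the first, middle, and last position.
        self.assertEqual(8, largest([8, 1, 2]))
        self.assertEqual(8, largest([1, 8, 2]))
        self.assertEqual(8, largest([1, 2, 8]))


if __name__ == "__main__":
    unittest.main()
```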

Stress testing helps you to do two things. What are they?

1. Test the failure behavior of the system. Circumstances may arise through an unexpected combination of events where the load placed on the system exceeds the maximum anticipated load. In these circumstances, system failure should not cause data corruption or unexpected loss of user services. Stress testing checks that overloading the system causes it to "fail-soft" rather than collapse under its load.
2. Reveal defects that only show up when the system is fully loaded. Although it can be argued that these defects are unlikely to cause system failures in normal use, there may be unusual combinations of circumstances that the stress testing replicates.
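A minimal sketch of a stress test in Python; the bounded MessageQueue is a hypothetical component used only to illustrate checking "fail-soft" behavior under overload:

```python
# A minimal sketch (hypothetical bounded queue, not from the book) of a stress
# test: flood the component with far more messages than expected and check
# that it fails soft by shedding load instead of corrupting data or crashing.
import unittest
from collections import deque


class MessageQueue:
    """Hypothetical component: rejects new messages once capacity is reached."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.messages = deque()

    def send(self, msg):
        if len(self.messages) >= self.capacity:
            return False          # fail-soft: reject rather than crash
        self.messages.append(msg)
        return True


class StressTest(unittest.TestCase):
    def test_overload_fails_soft(self):
        queue = MessageQueue(capacity=1_000)
        # Generate ten times the anticipated maximum load.
        accepted = sum(queue.send(i) for i in range(10_000))
        self.assertEqual(1_000, accepted)             # excess load was shed
        self.assertEqual(1_000, len(queue.messages))  # no overflow or corruption


if __name__ == "__main__":
    unittest.main()
```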

There are two important distinctions between release testing and system testing during the development process. What are they?

1. The system development team should not be responsible for release testing.
2. Release testing is a process of validation checking to ensure that a system meets its requirements and is good enough for use by system customers. System testing by the development team should focus on discovering bugs in the system (defect testing).

Effectiveness in testing means two things. What are they?

1. The test cases should show that, when used as expected, the component that you are testing does what it is supposed to do.
2. If there are defects in the component, these should be revealed by test cases.

There are three stages of development testing. What are they?

1. Unit testing, where individual program units or object classes are tested. Unit testing should focus on testing the functionality of objects or methods.
2. Component testing, where several individual units are integrated to create composite components. Component testing should focus on testing the component interfaces that provide access to the component functions.
3. System testing, where some or all of the components in a system are integrated and the system is tested as a whole. System testing should focus on testing component interactions.

What are the fundamental steps in the TDD (Test-Driven Development) process?

1. You start by identifying the increment of functionality that is required. This should normally be small and implementable in a few lines of code.
2. You write a test for this functionality and implement it as an automated test. This means that the test can be executed and will report whether or not it has passed or failed.
3. You then run the test, along with all other tests that have been implemented. Initially, you have not implemented the functionality, so the new test will fail. This is deliberate, as it shows that the test adds something to the test set.
4. You then implement the functionality and re-run the test. This may involve refactoring existing code to improve it and adding new code to what's already there.
5. Once all tests run successfully, you move on to implementing the next chunk of functionality.
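A minimal sketch of one such cycle in Python, assuming a hypothetical word_count function as the small increment of functionality:

```python
# A minimal sketch of one TDD cycle for a hypothetical word_count() function.
# Steps 1-3: the tests below are written and run first; with word_count() still
# unimplemented they fail, which shows they add something to the test set.
# Step 4: the minimal implementation below then makes them pass.
import unittest


def word_count(text):
    """Minimal implementation added in step 4 to make the tests pass."""
    return len(text.split())


class WordCountTest(unittest.TestCase):
    # Step 2: automated tests for the small increment of functionality.
    def test_counts_words_separated_by_spaces(self):
        self.assertEqual(3, word_count("to be tested"))

    def test_empty_string_has_no_words(self):
        self.assertEqual(0, word_count(""))


if __name__ == "__main__":
    unittest.main()  # Step 5: when all tests pass, move to the next increment.
```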

What is the verification question?

Are we building the product right?

What is the validation question?

Are we building the right product?

What is SapFix?

When bugs are found via automated testing, SapFix tries to fix them automatically, using AI to search for similar fixes.

Interface errors fall into three categories. What are they?

■ Interface misuse. A calling component calls some other component and makes an error in the use of its interface. This type of error is common in parameter interfaces, where parameters may be of the wrong type or be passed in the wrong order, or the wrong number of parameters may be passed.
■ Interface misunderstanding. A calling component misunderstands the specification of the interface of the called component and makes assumptions about its behavior. The called component does not behave as expected, which then causes unexpected behavior in the calling component. For example, a binary search method may be called with a parameter that is an unordered array. The search would then fail.
■ Timing errors. These occur in real-time systems that use a shared memory or a message-passing interface. The producer of data and the consumer of data may operate at different speeds. Unless particular care is taken in the interface design, the consumer can access out-of-date information because the producer of the information has not updated the shared interface information.
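A minimal Python sketch (hypothetical code, not from the book) of the interface misunderstanding example, where a binary search is called with an unordered array:

```python
# A minimal sketch of an interface misunderstanding: binary_search() specifies
# that its input must be sorted, but the caller passes an unordered list, so
# the search fails even though the key is present.
import bisect


def binary_search(sorted_items, key):
    """Returns the index of key, assuming sorted_items is in ascending order."""
    i = bisect.bisect_left(sorted_items, key)
    return i if i < len(sorted_items) and sorted_items[i] == key else -1


if __name__ == "__main__":
    unordered = [7, 9, 2, 4]                    # caller violates the precondition
    print(binary_search(unordered, 7))          # prints -1: key "not found"
    print(binary_search(sorted(unordered), 7))  # correct use: prints 2
```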

Chapter 8 Key points

■ Testing can only show the presence of errors in a program. It cannot show that there are no remaining faults.
■ Development testing is the responsibility of the software development team. A separate team should be responsible for testing a system before it is released to customers. In the user testing process, customers or system users provide test data and check that tests are successful.
■ Development testing includes unit testing, in which you test individual objects and methods; component testing, in which you test related groups of objects; and system testing, in which you test partial or complete systems.
■ When testing software, you should try to "break" the software by using experience and guidelines to choose types of test cases that have been effective in discovering defects in other systems.
■ Wherever possible, you should write automated tests. The tests are embedded in a program that can be run every time a change is made to a system.
■ Test-first development is an approach to development whereby tests are written before the code to be tested. Small code changes are made, and the code is refactored until all tests execute successfully.
■ Scenario testing is useful because it replicates the practical use of the system. It involves inventing a typical usage scenario and using this to derive test cases.
■ Acceptance testing is a user testing process in which the aim is to decide if the software is good enough to be deployed and used in its planned operational environment.

