SWE Ch 17 and 19


Dunn and Ullman Summary

"Software quality assurance is the mapping of the managerial precepts and design disciplines of quality assurance onto the applicable managerial and technological space of software engineering." The ability to ensure quality is the measure of a mature engineering discipline. When the mapping is successfully accomplished, mature software engineering is the result.

Testing should not be viewed as a safety net.

"You can't test in quality. If it's not there before you begin testing, it won't be there when you're finished testing." Quality is incorporated into software throughout the process of software engineering, and testing cannot be applied as a fix at the end of the process. Proper application of methods and tools, effective technical reviews, and solid management and measurement all lead to quality that is confirmed during testing.

When testing commences, there is a subtle, yet definite, attempt to

"break" the thing that the software engineer has built. From the point of view of the builder, testing can be considered to be (psychologically) destructive. So the builder treads lightly, designing and executing tests that will demonstrate that the program works, rather than tests that uncover errors. Unfortunately, errors will nevertheless be present. And, if the software engineer doesn't find them, the customer will!

SQA encompasses

(1) an SQA process, (2) specific quality assurance and quality control tasks (including technical reviews and a multitiered testing strategy), (3) effective software engineering practice (methods and tools), (4) control of all software work products and the changes made to them (Chapter 22), (5) a procedure to ensure compliance with software development standards (when applicable), and (6) measurement and reporting mechanisms.

Potential errors that should be tested when error handling is evaluated

(1) error description is unintelligible, (2) error noted does not correspond to error encountered, (3) error condition causes system intervention prior to error handling, (4) exception-condition processing is incorrect, or (5) error description does not provide enough information to assist in the location of the cause of the error

White-Box Testing Cases

(1) guarantee that all independent paths within a module have been exercised at least once, (2) exercise all logical decisions on their true and false sides, (3) execute all loops at their boundaries and within their operational bounds, and (4) exercise internal data structures to ensure their validity

Black-box testing finds errors in the following categories:

(1) incorrect or missing functions, (2) interface errors, (3) errors in data structures or external database access, (4) behavior or performance errors, and (5) initialization and termination errors.

Problems with MTBF

(1) it projects a time span between failures but does not provide us with a projected failure rate, and (2) MTBF can be misinterpreted to mean average life span even though this is not what it implies.

A software testing strategy will succeed only when software testers:

(1) specify product requirements in a quantifiable manner long before testing commences, (2) state testing objectives explicitly, (3) understand the users of the software and develop a profile for each user category, (4) develop a testing plan that emphasizes "rapid cycle testing," (5) build "robust" software that is designed to test itself (the concept of antibugging is discussed briefly in Section 9.3), (6) use effective technical reviews as a filter prior to testing, (7) conduct technical reviews to assess the test strategy and test cases themselves, and (8) develop a continuous improvement approach (Chapter 28) for the testing process

Recommended SQA plan structure identifies:

(1) the purpose and scope of the plan, (2) a description of all software engineering work products (e.g., models, documents, source code) that fall within the purview of SQA, (3) all applicable standards and practices that are applied during the software process, (4) SQA actions and tasks (including reviews and audits) and their placement throughout the software process, (5) the tools and methods that support SQA actions and tasks, (6) software configuration management (Chapter 22) procedures, (7) methods for assembling, safeguarding, and maintaining all SQA-related records, and (8) organizational roles and responsibilities relative to product quality

Consider two safety-critical systems that are controlled by computer. List at least three hazards for each that can be directly linked to software failures.

1. Air Traffic Control Systems
Collision hazards: Software errors in flight tracking or collision avoidance algorithms could lead to incorrect positioning data, causing near misses or actual collisions between aircraft.
Landing and takeoff accidents: Failures in software managing landing and takeoff could result in incorrect timing, sequencing, or clearance communications, leading to runway incursions or accidents.
Altitude and route deviation: Software issues could lead to incorrect altitude or route instructions being given to pilots, potentially causing aircraft to stray into unauthorized airspace or hazardous conditions, risking collisions or environmental dangers (like flying into a storm).
2. Medical Device Systems (e.g., Infusion Pumps)
Incorrect dosage delivery: A software malfunction in an infusion pump could lead to the incorrect amount of medication being administered, either overdosing or underdosing the patient, which can be life-threatening depending on the medication.
Timing errors: Software failures that affect the timing of medication delivery can lead to drugs being administered too quickly or too slowly, disrupting the intended treatment regimen and potentially harming the patient.
Data communication errors: If software inaccurately communicates or records patient data, this could lead to inappropriate treatment decisions. For example, an incorrect readout on a patient's condition could lead to a misdiagnosis or incorrect medication being administered.

BVA Guidelines

1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b. 2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below minimum and maximum are also tested. 3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature versus pressure table is required as output from an engineering analysis program. Test cases should be designed to create an output report that produces the maximum (and minimum) allowable number of table entries. 4. If internal program data structures have prescribed boundaries (e.g., a table has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
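Guidelines 1 and 2 can be sketched as a small Python helper. This is a hypothetical function, and the step of 1 assumes integer-valued inputs:

```python
def bva_values(a, b):
    """Boundary value analysis for a range bounded by a and b:
    values just below, at, and just above each boundary."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# Example: an input field that accepts integers from 1 to 100
# yields six boundary-focused test inputs.
range_cases = bva_values(1, 100)
```

For non-integer inputs, the "just above/just below" step would instead be the smallest meaningful increment for the data type.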

Equivalence Class Definition Guidelines

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined. 2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined. 3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined. 4. If an input condition is Boolean, one valid and one invalid class are defined. By applying the guidelines for the derivation of equivalence classes, test cases for each input domain data item can be developed and executed. Test cases are selected so that the largest number of attributes of an equivalence class are exercised at once.
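Guideline 1 (a range input) can be illustrated with a hypothetical month-number field: one valid class inside the range and two invalid classes, one on each side, each represented by a single test value.

```python
def is_valid_month(m):
    """Hypothetical unit under test: accepts integers in the range 1..12."""
    return 1 <= m <= 12

# One valid and two invalid equivalence classes for the range 1..12;
# a single representative is drawn from each class.
equivalence_classes = {
    "valid": 6,           # inside the range
    "invalid_below": 0,   # below the range
    "invalid_above": 13,  # above the range
}
```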

Statistical SQA

1. Information about software errors and defects is collected and categorized. 2. An attempt is made to trace each error and defect to its underlying cause (e.g., nonconformance to specifications, design error, violation of standards, poor communication with the customer). 3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the vital few). 4. Once the vital few causes have been identified, move to correct the problems that have caused the errors and defects. The application of the statistical SQA and the Pareto principle can be summarized in a single sentence: Spend your time focusing on things that really matter, but first be sure that you understand what really matters!
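Step 3 (isolating the vital few) can be sketched in Python; the cause labels and defect counts below are invented for illustration.

```python
from collections import Counter

def vital_few(defect_causes, threshold=0.8):
    """Return the smallest set of causes that accounts for at least
    `threshold` of all recorded defects (Pareto analysis)."""
    counts = Counter(defect_causes)
    total = sum(counts.values())
    selected, covered = [], 0
    for cause, n in counts.most_common():  # causes in descending order
        selected.append(cause)
        covered += n
        if covered / total >= threshold:
            break
    return selected
```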

Using the same flow graph, the complexity can be computed by each method as follows:

1. The flow graph has four regions. 2. V(G) = 11 edges − 9 nodes + 2 = 4. 3. V(G) = 3 predicate nodes + 1 = 4. More important, the value for V(G) provides you with an upper bound for the number of independent paths that form the basis set and, by implication, an upper bound on the number of tests that must be designed and executed to guarantee coverage of all program statements. So in this case we would need to define at most four test cases to exercise each independent logic path.

3 Ways Complexity is Computed

1. The number of regions of the flow graph corresponds to the cyclomatic complexity. 2. Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E − N + 2 where E is the number of flow graph edges and N is the number of flow graph nodes. 3. Cyclomatic complexity V(G) for a flow graph G is also defined as V(G) = P + 1 where P is the number of predicate nodes contained in the flow graph G.
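The three computations can be cross-checked in a few lines of Python, using the counts from the worked example above (4 regions, 11 edges, 9 nodes, 3 predicate nodes):

```python
def v_regions(regions):
    """Method 1: V(G) equals the number of flow graph regions."""
    return regions

def v_edges_nodes(e, n):
    """Method 2: V(G) = E - N + 2."""
    return e - n + 2

def v_predicates(p):
    """Method 3: V(G) = P + 1, where P counts predicate nodes."""
    return p + 1
```

All three methods must agree for a well-formed flow graph, which makes them a useful sanity check on a hand-drawn graph.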

Antibugging

A good design anticipates error conditions and establishes error-handling paths to reroute or cleanly terminate processing when an error does occur. Yourdon [You75] calls this approach antibugging. Unfortunately, there is a tendency to incorporate error handling into software and then never test the error handling. Be sure that you design tests to execute every error-handling path. If you don't, the path may fail when it is invoked, exacerbating an already dicey situation.
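A minimal sketch of an error-handling path and a test input that deliberately drives execution down it; the parser and its line format are hypothetical.

```python
def read_config(text):
    """Parse 'key=value' lines. Malformed lines are rerouted to an
    error-handling path (antibugging) instead of crashing the parser."""
    settings, errors = {}, []
    for line in text.splitlines():
        if "=" not in line:
            errors.append(line)   # error-handling path: record and continue
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings, errors

# A test input that forces the error-handling path, not just the happy path.
settings, errors = read_config("host=localhost\noops no equals\nport=8080")
```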

Independent Path

An independent path is any path through the program that introduces at least one new set of processing statements or a new condition. When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined.

The MTBF concept for reliability is open to criticism. Explain why.

Applicability mostly to repairable systems: MTBF is most relevant for systems that can be repaired and put back into service after a failure. It's less applicable to non-repairable systems, where mean time to failure (MTTF) might be a more suitable measure.
Assumption of exponential failure distribution: MTBF calculations often assume an exponential distribution of failures, which implies a constant failure rate over time. Many systems, especially complex ones like software, do not exhibit a constant failure rate; they might follow a "bathtub curve," with high initial failures, a period of low failure rate, and then an increase as the system ages.
Doesn't account for severity of failures: MTBF only measures the average time between failures, not the severity or impact of those failures. A system with frequent but minor failures could have the same MTBF as one with infrequent but catastrophic failures.
Misinterpretation of meaning: There's often a misunderstanding that MTBF represents the expected time before a system will fail, which is not accurate. MTBF is an average measure, meaning that individual units could fail much sooner or much later than the MTBF.
Ignores downtime and repair time: MTBF does not consider the duration of downtime and repair, which are crucial for understanding the overall reliability and availability of a system. Metrics like mean time to repair (MTTR) and mean time to recover are also important in this context.

Give at least three examples in which black-box testing might give the impression that "everything's OK," while white-box tests might uncover an error. Give at least three examples in which white-box testing might give the impression that "everything's OK," while black-box tests might uncover an error

Caught by black-box testing, but likely missed by white-box testing:
• Division by zero: white-box testing is unlikely to hit the exact value, but black-box testing may find it as a boundary value.
• Off-by-one errors: a likely boundary case in black-box testing.
• Performance issues: white-box testing is not concerned with these at all.
• Memory errors (very large input): likely a black-box boundary value, but not needed for white-box coverage.
• Rounding and overflow errors: again, likely black-box boundary cases, but not needed to achieve any white-box coverage criterion.
• Requirements not implemented (the missing-path problem): black-box testing would easily catch a missing function; white-box testing would not, since we would not know the code is missing.
Caught by white-box testing, but likely missed by black-box testing:
• Unexpected and erroneous branches added to optimize the code: if some strange (and wrong) optimization is used for certain cases, black-box testing is unlikely to catch it, but white-box testing would force coverage of this code, where we might find the problem.
• Code with no corresponding requirement: the code has been implemented, but the requirement is not there.
• Code and requirements that are out of date.

Besides counting errors and defects, are there other countable characteristics of software that imply quality? What are they and can they be measured directly?

Code complexity metrics: These evaluate the complexity of the code, which can impact maintainability, understandability, and the likelihood of errors. Common measures include cyclomatic complexity, which counts the number of linearly independent paths through a program's source code, and the Halstead complexity measures, which are based on the number of operators and operands in the code.
Test coverage: This metric measures the percentage of the codebase that is covered by automated tests. High test coverage typically implies a lower likelihood of undetected bugs and better testability of the code. However, it's important to note that high test coverage doesn't always equate to high code quality.
Compliance with standards: The degree to which software complies with relevant coding or industry standards can be a measure of its quality.

Test Case Design (Cont)

Data flow across a component interface is tested before any other testing is initiated. If data do not enter and exit properly, all other tests are moot. In addition, local data structures should be exercised and the local impact on global data should be ascertained (if possible) during unit testing. Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. Boundary testing is one of the most important unit-testing tasks. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.

Six Sigma Steps

Define customer requirements, deliverables, and project goals via well-defined methods of customer communication.
Measure the existing process and its output to determine current quality performance (collect defect metrics).
Analyze defect metrics and determine the vital few causes.
If an existing software process is in place but improvement is required, Six Sigma suggests two additional steps:
Improve the process by eliminating the root causes of defects.
Control the process to ensure that future work does not reintroduce the causes of defects.
These core and additional steps are sometimes referred to as the DMAIC (define, measure, analyze, improve, and control) method.
If an organization is developing a software process (rather than improving an existing process), the core steps are augmented as follows:
Design the process to (1) avoid the root causes of defects and (2) meet customer requirements.
Verify that the process model will, in fact, avoid defects and meet customer requirements.
This variation is sometimes called the DMADV (define, measure, analyze, design, and verify) method.

Why is a highly coupled module difficult to unit test?

Dependency on External Modules: When a module is highly coupled, it relies heavily on other modules or external systems. This makes it difficult to isolate the module for testing, as the behavior of the module can be influenced by external factors. Testing such a module requires either the presence of all the connected systems or elaborate mock-ups of these systems. Brittleness in Testing: With high coupling, changes in other modules or systems can easily break the tests for the module in question. This leads to a fragile test suite that requires constant maintenance as the overall system evolves.
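The isolation problem can be illustrated with Python's `unittest.mock`: the hypothetical `charge_customer` function below depends on an external payment gateway, so a unit test must substitute a mock for that dependency.

```python
from unittest import mock

def charge_customer(gateway, amount):
    """Hypothetical coupled unit: depends on an external payment gateway."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(amount)   # call into the coupled dependency

# Isolate the unit by replacing the dependency with a mock object.
fake_gateway = mock.Mock()
fake_gateway.charge.return_value = "approved"
result = charge_customer(fake_gateway, 25)
```

Without the mock, this test would need a live (or elaborately faked) gateway, which is exactly the overhead that high coupling imposes.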

Why is there often tension between a software engineering group and an independent software quality assurance group? Is this healthy?

Differing Objectives: The primary goal of software engineers is to develop functional software that meets user needs and requirements. In contrast, QA's goal is to ensure the software is of high quality, which involves identifying defects and issues. This fundamental difference in objectives can lead to tension, as what is a success for one group might be a point of criticism for the other. Time and Resource Constraints: Software development projects often operate under tight deadlines and limited resources. Engineers may feel pressured to deliver features quickly, while QA teams might be concerned that the rush compromises the thoroughness of testing, potentially leading to quality issues.

In your own words, describe why the class is the smallest reasonable unit for testing within an OO system

Encapsulation of Behavior and Data: Classes in OO systems encapsulate behavior (methods) and data (attributes). This encapsulation makes a class a self-contained unit that can be tested in isolation. Testing at the class level allows for the verification of both the behavior and the state of objects in a controlled environment.

Cost-Effective Testing

Exhaustive testing requires that every possible combination of input values and test-case orderings be processed by the component being tested (e.g., consider the move generator in a computer chess game). In some cases, this would require the creation of a near-infinite number of data sets. The return on exhaustive testing is often not worth the effort, since testing alone cannot be used to prove a component is correctly implemented. There are some situations in which you will not have the resources to do comprehensive unit testing. In these cases, testers should select modules crucial to the success of the project, along with those suspected to be error-prone because they have high complexity metrics, as the focus for unit testing.

Flowchart and Graph

Here, a flowchart is used to depict program control structure. Figure 19.5b maps the flowchart into a corresponding flow graph (assuming that no compound conditions are contained in the decision diamonds of the flowchart). Referring to Figure 19.5b, each circle, called a flow graph node, represents one or more procedural statements. A sequence of process boxes and a decision diamond can map into a single node. The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows. An edge must terminate at a node, even if the node does not represent any procedural statements (e.g., see the flow graph symbol for the if-then-else construct). Areas bounded by edges and nodes are called regions. When counting regions, we include the area outside the graph as a region.

Nested Loop Test

If we were to extend the test approach for simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases. This would result in an impractical number of tests. Beizer [Bei90] suggests an approach that will help to reduce the number of tests: 1. Start at the innermost loop. Set all other loops to minimum values. 2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or excluded values. 3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to "typical" values. 4. Continue until all loops have been tested.
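Beizer's steps can be sketched as a generator of per-loop test configurations. The helper, its argument layout, and the "typical" value of 2 are all assumptions made for illustration:

```python
def nested_loop_cases(loop_bounds, typical=2):
    """Sketch of Beizer's nested-loop strategy. `loop_bounds` maps loop
    name -> (min, max) iteration counts, ordered outermost first. Each
    loop is tested in turn, innermost first, at its min, typical, and
    max counts, while outer loops are held at their minimum values and
    inner loops at "typical" values."""
    names = list(loop_bounds)
    cases = []
    for idx in reversed(range(len(names))):   # innermost loop first
        lo, hi = loop_bounds[names[idx]]
        for count in (lo, typical, hi):
            case = {}
            for j, other in enumerate(names):
                if j < idx:
                    case[other] = loop_bounds[other][0]  # outer: minimum
                elif j > idx:
                    case[other] = typical                # inner: typical
                else:
                    case[other] = count                  # loop under test
            cases.append(case)
    return cases
```

This keeps the test count linear in the number of loops rather than geometric in the nesting depth.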

Software Failures

In fact, all software failures can be traced to design or implementation problems

Can a program be correct and have poor quality?

In summary, while correctness is a fundamental aspect of software quality, it is only one component. A program that is functionally correct but falls short in areas like usability, performance, reliability, maintainability, or security can be deemed to have poor overall quality. Quality is a holistic measure of a software product's overall excellence and fitness for its intended purpose.

Can a program be correct and still not be reliable? Explain

In summary, while correctness is about meeting specified functional requirements, reliability encompasses the broader and more practical aspect of consistent performance over time and across various conditions. A program needs both to be truly effective in real-world applications.

Testing Strategy Spiral

Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established. Moving inward along the spiral, you come to design and finally to coding. To develop computer software, you spiral inward along streamlines that decrease the level of abstraction on each turn.

Software testing steps

Initially, tests focus on each component individually, ensuring that it functions properly as a unit. Hence, the name unit testing. Unit testing makes heavy use of testing techniques that exercise specific paths in a component's control structure to ensure complete coverage and maximum error detection. Next, components must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with the dual problems of verification and program construction. Test-case design techniques that focus on inputs and outputs are more prevalent during integration, although techniques that exercise specific program paths may be used to ensure coverage of major control paths. After the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria (established during requirements analysis) must be evaluated. Validation testing provides final assurance that software meets all functional, behavioral, and performance requirements.

Test-Case Design

It is a good idea to design unit test cases before you develop code for a component. This ensures that you'll develop code that will pass the tests, or at least the tests you already thought of.

Recommended testing strategy

It takes an incremental view of testing, beginning with the testing of individual program units, moving to tests designed to facilitate the integration of the units (sometimes on a daily basis), and culminating with tests that exercise the constructed system as it evolves

Mean Time Between Failures (MTBF)

MTBF = MTTF (mean time to failure) + MTTR (mean time to repair)
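As a quick worked example (the hour figures are invented), MTBF and the related availability measure both follow from the same two terms:

```python
def mtbf(mttf, mttr):
    """MTBF = MTTF + MTTR (all values in the same time unit, e.g., hours)."""
    return mttf + mttr

def availability(mttf, mttr):
    """Availability = MTTF / (MTTF + MTTR): the fraction of time up."""
    return mttf / (mttf + mttr)
```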

Criteria for Done

One response to the question is: "You're never done testing; the burden simply shifts from you (the software engineer) to the end user." Every time the user executes a computer program, the program is being tested. This sobering fact underlines the importance of other software quality assurance activities. Another response (somewhat cynical but nonetheless accurate) is: "You're done testing when you run out of time or you run out of money." However, by collecting metrics during software testing and making use of existing statistical models (statistical quality assurance), it is possible to develop meaningful guidelines for answering the question: "When are we done testing?"

Flow Path Example

Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge. The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered an independent path, because it is simply a combination of already specified paths and does not traverse any new edges. Paths 1 through 4 constitute a basis set for the flow graph in Figure 19.5b. That is, if you can design tests to force execution of these paths (a basis set), every statement in the program will be guaranteed to be executed at least one time, and every condition will have been executed on its true and false sides. It should be noted that the basis set is not unique. In fact, a number of different basis sets can be derived for a given procedural design.

Independent SQA group activities

Prepares an SQA plan for a project.
Participates in the development of the project's software process description.
Reviews software engineering activities to verify compliance with the defined software process.
Audits designated software work products to verify compliance with those defined as part of the software process.
Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
Records any noncompliance and reports it to senior management.
In addition to these activities, the SQA group coordinates the control and management of change (Chapter 22) and helps to collect and analyze software metrics.

Boundary value analysis

Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.

SQA Goal from Activities

Requirements quality
Design quality
Code quality
Quality control effectiveness

List some problems that might be associated with the creation of an independent test group. Are an ITG and an SQA group made up of the same people?

Resource allocation: Establishing an ITG requires additional resources, both human and financial. Finding skilled testers and allocating budget for this separate group can be challenging, especially for smaller organizations.
Delayed feedback: When testing is conducted by a separate group, feedback on issues may not be as immediate as when developers test their own code. This can slow down the development process and make it more difficult to implement quick fixes.
"Us vs. them" mentality: Separating testers from developers can inadvertently create an adversarial relationship. Developers may feel criticized by testers, while testers may feel their concerns are not adequately addressed by developers.
The two groups are not made up of the same people; their roles differ:
Independent test group (ITG): This group is primarily focused on testing the software to identify defects and ensure that it works as expected. The ITG is involved in hands-on testing activities, including writing and executing test cases, reporting bugs, and verifying bug fixes.
Software quality assurance (SQA) group: SQA is more about overseeing and managing the overall quality of the software development process. This includes defining and enforcing quality standards, methodologies, and practices; conducting audits and reviews; and ensuring that the development process is followed correctly. SQA is less about hands-on testing and more about process management and improvement.

Quality vs Reliability

Scope: Quality is a comprehensive concept that includes all characteristics of a product or service. It is not limited to functional aspects but also includes non-functional aspects like usability, aesthetics, and customer service. Reliability is a subset of quality, focusing specifically on consistent performance and dependability over time.
Measurement: Measuring quality can be subjective and varies depending on the criteria set by stakeholders (customers, manufacturers, etc.). It often involves both quantitative and qualitative measures, including customer satisfaction surveys, defect rates, and compliance with standards. Reliability is usually measured more quantitatively, often in terms of mean time between failures (MTBF), failure rate, or other statistical measures.

Software reliability vs safety

Software reliability uses statistical analysis to determine the likelihood that a software failure will occur. However, the occurrence of a failure does not necessarily result in a hazard or mishap. Software safety examines the ways in which failures result in conditions that can lead to a mishap. That is, failures are not considered in a vacuum but are evaluated in the context of an entire computer-based system and its environment

Elements of SQA

Standards (IEEE, ISO).
Reviews and audits (SQA conducts audits with the intent of ensuring quality practices are being followed).
Testing (testing is planned and conducted efficiently).
Error/defect collection and analysis (measures error and defect data to understand how errors are introduced and what is best for eliminating them).
Change management (SQA ensures that adequate change management practices are instituted).
Education (SQA takes the lead in software process improvement and is a key proponent and sponsor of educational programs).
Vendor management (for external software vendors: shrink-wrapped packages, a tailored shell (a skeleton structure tailored to the purchaser), and contracted software (custom software designed from a customer's specification)). The job of the SQA organization is to ensure that high-quality software results by suggesting specific quality practices that the vendor should follow (when possible) and incorporating quality mandates as part of any contract with an external vendor.
Security management (SQA ensures that appropriate process and technology are used to achieve security).
Safety (assessing the impact of software failure and reducing risk).
Risk management (ensures that risk management activities are properly conducted and that risk-related contingency plans are in place).

The problem with quality management

The problem of quality management is not what people don't know about it. The problem is what they think they do know . . . In this regard, quality has much in common with sex. Everybody is for it. (Under certain conditions, of course.) Everyone feels they understand it. (Even though they wouldn't want to explain it.) Everyone thinks execution is only a matter of following natural inclinations. (After all, we do get along somehow.) And, of course, most people feel that problems in these areas are caused by other people. (If only they would take the time to do things right.)

SQA approaches that work in one environment may not work in another

The solution to this dilemma is to understand the specific quality requirements for a software product and then select the process and the specific SQA actions and tasks that will be used to meet those requirements. The Software Engineering Institute's CMMI and the ISO 9000 standards are the most commonly used process frameworks. Each proposes "a syntax and semantics" [Par11] that will lead to the implementation of software engineering practices that improve product quality; the two are often used in combination.

Behavioral Testing State Diagram

The state diagram for a class can be used to help derive a sequence of tests that will exercise the dynamic behavior of the class (and those classes that collaborate with it). Still more test cases could be derived to ensure that all behaviors for the class have been adequately exercised. In situations in which the class behavior results in a collaboration with one or more classes, multiple state diagrams are used to track the behavioral flow of the system. The state model can be traversed in a "breadth-first" [McG94] manner. In this context, breadth-first implies that a test case exercises a single transition and that when a new transition is to be tested, only previously tested transitions are used

Test recordkeeping

Test cases can be recorded in a Google Docs spreadsheet that briefly describes each test case, contains a pointer to the requirement being tested, states the expected output or the criteria for success, lets testers record whether the test passed or failed and the dates it was run, and provides room for comments on why a test may have failed, to aid in debugging.

Traceability

To ensure that the testing process is auditable, each test case needs to be traceable back to specific functional or nonfunctional requirements or anti-requirements. Often nonfunctional requirements need to be traceable to specific business or architectural requirements.

Spiral can also be used for software testing strategy

Unit testing begins at the vortex of the spiral and concentrates on each unit (e.g., component, class, or WebApp content object) of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral, you encounter validation testing, where requirements established as part of requirements modeling are validated against the software that has been constructed. Finally, you arrive at system testing, where the software and other system elements are tested as a whole. To test computer software, you spiral out along streamlines that broaden the scope of testing with each turn.

When each color testing takes place

Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during later stages of testing.

Scaffolding

Driver main programs and stubs are used to unit-test components that are not stand-alone programs. Drivers and stubs represent testing "overhead." That is, both are software that must be written (formal design is not commonly applied) but that is not delivered with the final software product. If drivers and stubs are kept simple, the actual overhead is relatively low.
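As a sketch of this scaffolding (the shipping-fee component, its rate-lookup dependency, and all rate values are hypothetical): the driver is a throwaway main program that feeds inputs to the unit and checks outputs, and the stub stands in for a collaborating component that is not yet available.

```python
# Hypothetical component under test: computes a shipping fee using a
# rate-lookup service that is not yet implemented.
def shipping_fee(weight_kg, rate_lookup):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return round(weight_kg * rate_lookup(weight_kg), 2)

# Stub: stands in for the unfinished rate-lookup component.
def rate_lookup_stub(weight_kg):
    return 4.50 if weight_kg < 10 else 3.75  # canned answers, no real logic

# Driver: a throwaway "main" that exercises the unit and checks results.
def driver():
    assert shipping_fee(2, rate_lookup_stub) == 9.00
    assert shipping_fee(12, rate_lookup_stub) == 45.00
    try:
        shipping_fee(0, rate_lookup_stub)
        raise AssertionError("expected ValueError for zero weight")
    except ValueError:
        pass
    return "all unit tests passed"

print(driver())
```

Neither the driver nor the stub ships with the product; both are discarded (or archived for regression use) once integration replaces the stub with the real component.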

Verification vs Validation

Verification refers to the set of tasks that ensure that software correctly implements a specific function. Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements. Verification: "Are we building the product right?" Validation: "Are we building the right product?"

Object-Oriented Testing

When object-oriented software is considered, the concept of the unit changes. Encapsulation drives the definition of classes and objects. This means that each class and each instance of a class packages attributes (data) and the operations that manipulate these data. An encapsulated class is usually the focus of unit testing. However, operations (methods) within the class are the smallest testable units. Because a class can contain a number of different operations, and a particular operation may exist as part of a number of different classes, the tactics applied to unit testing must change.

OO Testing Cont

You can no longer test a single operation in isolation (the conventional view of unit testing) but rather as part of a class. To illustrate, consider a class hierarchy in which an operation X is defined for the superclass and is inherited by a number of subclasses. Each subclass uses operation X, but it is applied within the context of the private attributes and operations that have been defined for the subclass. Because the context in which operation X is used varies in subtle ways, it is necessary to test operation X in the context of each of the subclasses. This means that testing operation X in a stand-alone fashion (the conventional unit-testing approach) is usually ineffective in the object-oriented context.
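A minimal Python illustration of why (the `Account` hierarchy and the overdraft limit are invented for this example): the inherited operation dispatches to subclass context, so it behaves differently in each subclass and must be re-tested there.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    # Operation X: defined once in the superclass...
    def withdraw(self, amount):
        if amount > self.available():
            raise ValueError("insufficient funds")
        self.balance -= amount

    def available(self):
        return self.balance


class OverdraftAccount(Account):
    # ...but applied within the context of the subclass's own operations.
    def available(self):
        return self.balance + 100  # hypothetical overdraft limit


# withdraw() must be tested in the context of each class:
a = Account(50)
try:
    a.withdraw(60)          # base context: 60 > 50 is rejected
    raise AssertionError("expected ValueError")
except ValueError:
    pass

b = OverdraftAccount(50)
b.withdraw(60)              # subclass context: allowed within overdraft
assert b.balance == -10
print("withdraw() verified in both class contexts")
```

A stand-alone test of `withdraw` against only the superclass would have missed the overdraft path entirely, which is the point the text makes.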

Equivalence Partitioning

a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many test cases to be executed before the general error is observed Test-case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. Using concepts introduced in the preceding section, if a set of objects can be linked by relationships that are symmetric, transitive, and reflexive, an equivalence class is present [Bei95]. An equivalence class represents a set of valid or invalid states for input conditions.
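A sketch in Python (the age-validation function and its 18..65 range are hypothetical): one representative value per equivalence class stands in for the whole class, so four cases cover what exhaustive input enumeration would need thousands of cases to cover.

```python
# Hypothetical function under test: validates an age field that the
# spec says must be an integer in the range 18..65 inclusive.
def accept_age(value):
    return isinstance(value, int) and 18 <= value <= 65

# One representative per equivalence class:
cases = [
    (40, True),      # valid class: 18 <= n <= 65
    (10, False),     # invalid class: below the range
    (70, False),     # invalid class: above the range
    ("40", False),   # invalid class: wrong data type
]
for value, expected in cases:
    assert accept_age(value) == expected
print("one representative per equivalence class exercised")
```

If `accept_age` mishandled, say, all non-integer inputs, the single `"40"` case would expose that entire class of errors, which is the ideal the definition describes.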

Failures in Time

a statistical measure of how many failures a component will have over 1 billion hours of operation. Therefore, 1 FIT is equivalent to one failure in every billion hours of operation
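Because a FIT rate scales linearly with accumulated device-hours, expected failure counts follow directly. A small sketch (the 500 FIT rate, fleet size, and operating period are made-up numbers):

```python
BILLION_HOURS = 1e9  # 1 FIT = one failure per 10**9 device-hours

def expected_failures(fit_rate, hours, units=1):
    # Failures scale linearly with total device-hours accumulated.
    return fit_rate * units * hours / BILLION_HOURS

# A 500 FIT component deployed in 10,000 units for one year (8,760 h):
print(expected_failures(500, 8760, units=10_000))  # about 43.8 failures
```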

White-box testing

a test-case design philosophy that uses the control structure described as part of component-level design to derive test cases.

SQA is composed of

a variety of tasks associated with two different constituencies—the software engineers who do technical work and an SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.

Loop testing

a white-box testing technique that focuses exclusively on the validity of loop constructs. Two different classes of loops [Bei90] can be defined: simple loops and nested loops. (Figure 19.6)

Cyclomatic Complexity

answers the question of how many paths to look for. Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides you with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
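The two standard ways of computing V(G) can be sketched as follows (the 9-edge, 8-node flow graph is an illustrative example consistent with two decision nodes):

```python
# V(G) = E - N + 2 for a connected flow graph with E edges and N nodes.
def vg_from_graph(edges, nodes):
    return edges - nodes + 2

# Equivalently, V(G) = P + 1, where P is the number of predicate
# (decision) nodes in the flow graph.
def vg_from_predicates(predicate_nodes):
    return predicate_nodes + 1

# An illustrative flow graph with 9 edges, 8 nodes, and 2 decisions:
v = vg_from_graph(9, 8)
assert v == vg_from_predicates(2) == 3
print(v)  # 3 -> at most 3 basis-path test cases are needed
```

Both computations must agree for a structured program, which makes them a useful cross-check when counting edges and nodes by hand.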

Software reliability from AI

defined as the probability of failure-free software operation for a specified time period in a specified environment. This means that we can never know the exact moment when a software product will fail because we will never have the complete data needed to calculate the probability. Uses Bayes theorem to get Bayesian inference that uses the theorem to update probability for a hypothesis as more evidence or info becomes available.

Basis Path Testing

enables the test-case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing. Before the basis path method can be introduced, a simple notation for the representation of control flow, called a flow graph (or program graph), must be introduced. A flow graph should be drawn only when the logical structure of a component is complex. The flow graph allows you to trace program paths more readily.

Class Testing

equiv of unit testing for conventional software, driven by the operations encapsulated by the class and the state behavior of the class.

Black-box testing (behavioral/functional)

focuses on the functional requirements of the software. That is, black-box testing techniques enable you to derive sets of input conditions that will fully exercise all functional requirements for a program

Anti-reqs and negative test cases

if we are to uncover new defects, it is also important to write test cases to test that a component does not do things it is not supposed to do (e.g., accessing privileged data sources without proper permissions). These may be stated formally as anti-requirements and may require specialized security testing techniques (Section 21.7) [Ale17]. These so-called negative test cases should be included to make sure the component behaves according to the customer's expectations.
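A minimal sketch of a negative test case derived from an anti-requirement (the record store, roles, and `get_record` function are all hypothetical):

```python
# Hypothetical privileged data store.
DATA = {"alice": {"ssn": "xxx-xx-1234"}}

def get_record(user, requester_role):
    # Anti-requirement: privileged data must never reach non-admin callers.
    if requester_role != "admin":
        raise PermissionError("privileged data requires admin role")
    return DATA[user]

# Positive test: the component does what it should.
assert get_record("alice", "admin")["ssn"] == "xxx-xx-1234"

# Negative test: the component must NOT do what it is not supposed to do.
try:
    get_record("alice", "guest")
    raise AssertionError("anti-requirement violated: guest read privileged data")
except PermissionError:
    pass
print("negative test passed")
```

Note that the negative test only passes when the forbidden behavior is correctly refused; a component that "helpfully" returned the record would fail it.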

Six Sigma (Strategy for Statistical QA)

is a business-management strategy designed to improve the quality of process outputs by minimizing variation and causes of defects in processes. It is a subset of the Total Quality Management (TQM) methodology with a heavy focus on statistical applications used to reduce costs and improve quality.

Software safety

is a software quality assurance activity that focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail. If hazards can be identified early in the software process, software design features can be specified that will either eliminate or control potential hazards. To be effective, software must be analyzed in the context of the entire system. For example, a subtle user input error (people are system components) may be magnified by a software fault to produce control data that improperly positions a mechanical device. If and only if a set of external environmental conditions is met, the improper position of the mechanical device will cause a disastrous failure. Analysis techniques [Eri15] such as fault tree analysis, real-time logic, and Petri net models can be used to predict the chain of events that can cause hazards and the probability that each of the events will occur to create the chain.

Condition testing

is a test-case design method that exercises the logical conditions contained in a program module.

Software reliability

is defined in statistical terms as "the probability of failure-free operation of a computer program in a specified environment for a specified time" In the context of any discussion of software quality and reliability, failure is nonconformance to software requirements. To illustrate, program X is estimated to have a reliability of 0.999 over eight elapsed processing hours. In other words, if program X were to be executed 1000 times and require a total of 8 hours of elapsed processing time (execution time), it is likely to operate correctly (without failure) 999 times.

ISO quality assurance system

may be defined as the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management [ANS87]. Quality assurance systems are created to help organizations ensure their products and services satisfy customer expectations by meeting their specifications.

Is unit testing possible or even desirable in all circumstances? Provide examples to justify your answer.

Not always possible or desirable. For example, real-time embedded systems and highly coupled systems are difficult to test in isolation because their units depend on hardware timing or on one another, and schedule constraints may make exhaustive unit testing impractical.

SQA plan

provides a road map for instituting software quality assurance. Developed by the SQA group (or by the software team if an SQA group does not exist), the plan serves as a template for SQA activities that are instituted for each software project.

Independent test group

remove the inherent problems associated with letting the builder test the thing that has been built. Independent testing removes the conflict of interest that may otherwise be present. After all, ITG personnel are paid to find errors. They work closely with the developer to correct the errors that are uncovered.

Data flow testing

selects test paths of a program according to the locations of definitions and uses of variables in the program.

SQA group

serves as the customer's in-house representative. That is, the people who perform SQA must look at the software from the customer's point of view. Does the software adequately meet the quality factors noted in Chapter 15? Have software engineering practices been conducted according to preestablished standards? Have technical disciplines properly performed their roles as part of the SQA activity? The SQA group attempts to answer these and other questions to ensure that software quality is maintained.

V&V include many SQA activities

technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing, acceptance testing, and installation testing. Although testing plays an extremely important role in V&V, many other activities are also necessary.

By applying black-box techniques, you derive a set of test cases that satisfy the following criteria

test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing, and test cases that tell you something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Software dev is still responsible for

testing the individual units (components) of the program, ensuring that each performs the function or exhibits the behavior for which it was designed. In many cases, the developer also conducts integration testing—a testing step that leads to the construction (and test) of the complete software architecture. Only after the software architecture is complete does an independent test group become involved.

Software Availability

the probability that a program is operating according to requirements at a given point in time and is defined as: Availability = [MTTF/(MTTF + MTTR)] × 100%
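Rewriting the formula as code (MTTF = mean time to failure, MTTR = mean time to repair; the 950-hour and 50-hour figures are illustrative):

```python
def availability(mttf_hours, mttr_hours):
    # Availability = MTTF / (MTTF + MTTR) x 100%
    return mttf_hours / (mttf_hours + mttr_hours) * 100

# A system that runs 950 hours between failures and takes 50 hours
# to repair is up 950 of every 1000 hours:
print(availability(950, 50))  # 95.0 (percent)
```

The parenthesization matters: MTTR belongs inside the denominator, so shortening MTTR raises availability even when MTTF is unchanged.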

Interface Testing

used to check that the program component accepts information passed to it in the proper order and data types and returns information in the proper order and data format [Jan16]. Interface testing is often considered part of integration testing. Because most components are not stand-alone programs, it is important to make sure that when the component is integrated into the evolving program it will not break the build. This is where the use of stubs and drivers (Section 19.2.1) becomes important to component testers.

Simple Loop Test

Where n is the maximum number of allowable passes through the loop:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n − 1, n, and n + 1 passes through the loop.
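The pass counts above can be exercised directly; a sketch in which the unit under test and the data are invented for the example (here n is the list length, so the n + 1 case checks that an over-large request is handled safely):

```python
# Hypothetical unit under test: sums the first k items of a list,
# tolerating k values beyond the end of the list.
def sum_first(items, k):
    total = 0
    for i in range(min(k, len(items))):
        total += items[i]
    return total

data = [1, 2, 3, 4, 5]
n = len(data)   # maximum number of passes through the loop
m = 3           # some m with m < n

# The classic simple-loop cases: 0, 1, 2, m, n-1, n, n+1 passes.
for passes in (0, 1, 2, m, n - 1, n, n + 1):
    expected = sum(data[:min(passes, n)])
    assert sum_first(data, passes) == expected
print("loop boundary cases passed")
```

Off-by-one defects in loop bounds tend to surface at exactly the n − 1, n, and n + 1 cases, which is why the technique singles them out.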

Black Box Test Questions

∙ How is functional validity tested? ∙ How are system behavior and performance tested? ∙ What classes of input will make good test cases? ∙ Is the system particularly sensitive to certain input values? ∙ How are the boundaries of a data class isolated? ∙ What data rates and data volume can the system tolerate? ∙ What effect will specific combinations of data have on system operation?

Software Testing Process

∙ To perform effective testing, you should conduct technical reviews (Chapter 16). By doing this, many errors will be eliminated before testing commences. ∙ Testing begins at the component level and works "outward" toward the integration of the entire computer-based system. ∙ Different testing techniques are appropriate for different software engineering approaches and at different points in time. ∙ Testing is conducted by the developer of the software and (for large projects) an independent test group. ∙ Testing and debugging are different activities, but debugging must be accommodated in any testing strategy. A strategy should provide guidance for the practitioner and a set of milestones for the manager.

