ISTQB TTA: Chapters 3 and 4


McCabe's design predicate approach to integration testing consists of three steps:

1. Draw a call graph between the modules of a system showing how each unit calls and is called by others. This graph has four separate kinds of interactions.
2. Calculate the integration complexity.
3. Select tests to exercise each type of interaction, not every combination of all interactions.

Control risk using various techniques:

■ Choosing an appropriate test design technique
■ Reviews and inspections
■ Reviews of test design
■ An appropriate level of independence for the various levels of testing
■ The use of the most experienced person on test tasks
■ The strategies chosen for confirmation testing (retesting) and regression testing

Path testing by itself has limited effectiveness for the following reasons:

■ Planning to cover does not mean you will actually cover, especially when bugs are present.
■ It does not show totally wrong or missing functionality.
■ Interface errors between modules will not show up in unit testing.
■ Database and data-flow errors may not be caught.
■ Incorrect interaction with other modules will not be caught in unit testing.
■ Not all initialization errors can be caught by control-flow testing.
■ Requirements and specification errors will not be caught in unit testing.

Risk management includes three primary activities:

■ Risk identification, figuring out what the different project and quality risks are for the project
■ Risk analysis, assessing the level of risk—typically based on likelihood and impact—for each identified risk item
■ Risk mitigation, which is really more properly called "risk control" because it consists of mitigation, contingency, transference, and acceptance actions for various risks

Test-affecting project risks include the following:

■ Test environment and tools readiness
■ Test staff availability and qualification
■ Low quality of inputs to testing
■ Overly high rates of change for work products delivered to testing
■ Lack of standards, rules, and techniques for the testing effort

What business factors should we consider when assessing impact?

■ The frequency of use of the affected feature
■ Potential damage to image
■ Loss of customers and business
■ Potential financial, ecological, or social losses or liability
■ Civil or criminal legal sanctions
■ Loss of licenses, permits, and the like
■ The lack of reasonable workarounds

Ways to classify the level of risk

■ The likelihood of the problem occurring; i.e., being present in the product when it is delivered for testing
■ The impact of the problem should it occur; i.e., being present in the product when it is delivered to customers or users after testing
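
One common way to turn these two factors into a risk level (an assumption here; organizations use many different scales) is to rate each on a small ordinal scale and multiply them into a risk priority number. A minimal C sketch:

    #include <stdio.h>

    /* Illustrative only: likelihood and impact rated 1 (low) to 5 (high);
       their product gives a simple risk priority number used to rank
       risk items. The scale and the example items are assumptions. */
    typedef struct {
        const char *risk_item;
        int likelihood;   /* chance the problem is present when delivered for testing */
        int impact;       /* damage if it reaches customers or users after testing    */
    } RiskItem;

    int main(void) {
        RiskItem items[] = {
            { "Interest calculation rounds incorrectly", 4, 5 },
            { "Report footer slightly misaligned",       3, 1 },
        };
        for (int i = 0; i < 2; i++)
            printf("%-45s risk priority = %2d\n",
                   items[i].risk_item,
                   items[i].likelihood * items[i].impact);
        return 0;
    }

The higher-scoring item would be tested earlier and more rigorously.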

The algorithm for designing an LCSAJ test:

1. A path starts at either the start of the module or a line that was jumped to from somewhere.
2. Execution follows a linear, sequential path until it gets to a place where it must jump.
3. Find data that force execution through that path and use it in a test case.
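
To make the start/end/jump-target idea concrete, here is a small hypothetical C fragment; the line numbers in the comments and the LCSAJ triples below exist only for this sketch:

    /* 1 */ int count_positives(const int *a, int n) {
    /* 2 */     int i = 0, count = 0;
    /* 3 */     while (i < n) {        /* when false, control jumps to line 8 */
    /* 4 */         if (a[i] > 0)      /* when false, control jumps to line 6 */
    /* 5 */             count++;
    /* 6 */         i++;
    /* 7 */     }                      /* end of body: jump back to line 3    */
    /* 8 */     return count;
    /* 9 */ }

Some of the LCSAJs (start, end, jump target) are (1, 3, 8), forced by calling with n equal to 0; (1, 4, 6), forced by a first element that is not positive; (1, 7, 3), forced by a positive first element; and (3, 3, 8), the loop exit after an iteration. Step 3 of the algorithm then finds input data that drive execution through each triple.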

Write cases for MC/DC coverage if ((A OR B) AND C) then...

1. A set to TRUE, B set to FALSE, C set to TRUE, which evaluates to TRUE
2. A set to FALSE, B set to TRUE, C set to TRUE, which evaluates to TRUE
3. A set to TRUE, B set to TRUE, C set to FALSE, which evaluates to FALSE

Write cases for decision/condition coverage if ((A OR B) AND C) then...

1. A set to TRUE, B set to TRUE, C set to TRUE, where the expression evaluates to TRUE
2. A set to FALSE, B set to FALSE, C set to FALSE, where the expression evaluates to FALSE
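
A minimal C harness (the function name and the harness itself are assumptions, not part of the syllabus) that runs both of the vector sets above against the decision ((A OR B) AND C):

    #include <assert.h>
    #include <stdbool.h>

    /* The decision under test: ((A OR B) AND C) */
    static bool decide(bool a, bool b, bool c) {
        return (a || b) && c;
    }

    int main(void) {
        /* MC/DC vectors: each atomic condition drives the outcome while
           the other conditions are held at neutral values. */
        assert(decide(true,  false, true ) == true );
        assert(decide(false, true,  true ) == true );
        assert(decide(true,  true,  false) == false);

        /* Decision/condition vectors: every condition and the decision
           take both TRUE and FALSE, but independent effect is not shown. */
        assert(decide(true,  true,  true ) == true );
        assert(decide(false, false, false) == false);
        return 0;
    }

If all assertions pass, the expression behaves as the truth table predicts for those vectors.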

Procedure that will work to derive tests that achieve state/transition coverage

1. Adopt a rule for where a test procedure or test step must start and where it may or must end. An example is to say that a test step must start in an initial state and may only end in a final state. The reason for the "may" or "must" wording on the ending part is because, in situations where the initial and final states are the same, you might want to allow sequences of states and transitions that pass through the initial state more than once.
2. From an allowed test starting state, define a sequence of event/condition combinations that leads to an allowed test ending state. For each transition that will occur, capture the expected action that the system should take. This is the expected result.
3. As you visit each state and traverse each transition, mark it as covered. The easiest way to do this is to print the state transition diagram and then use a marker to highlight each node and arrow as you cover it.
4. Repeat steps 2 and 3 until all states have been visited and all transitions traversed. In other words, every node and arrow has been marked with the marker.
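
A tiny, hypothetical three-state machine showing one test step that satisfies the rule in step 1 (start in the initial state, end in it) while visiting every state and traversing every defined transition:

    #include <assert.h>

    /* Hypothetical states and events, used only to illustrate the procedure. */
    typedef enum { IDLE, RUNNING, DONE } State;
    typedef enum { START, FINISH, RESET } Event;

    /* Transition function; the expected actions are omitted for brevity,
       though a real test would also check them as expected results. */
    static State next(State s, Event e) {
        if (s == IDLE    && e == START)  return RUNNING;
        if (s == RUNNING && e == FINISH) return DONE;
        if (s == DONE    && e == RESET)  return IDLE;
        return s;   /* undefined event/condition combinations: remain in place */
    }

    int main(void) {
        State s = IDLE;                             /* allowed starting state */
        s = next(s, START);  assert(s == RUNNING);  /* IDLE    -> RUNNING     */
        s = next(s, FINISH); assert(s == DONE);     /* RUNNING -> DONE        */
        s = next(s, RESET);  assert(s == IDLE);     /* DONE    -> IDLE        */
        return 0;   /* every state visited, every defined transition traversed */
    }

Each assertion corresponds to marking one node and one arrow on the printed diagram.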

How to test loops extensively. Note that Beizer is essentially covering the three-point boundary values of the loop variable, with a few extra tests thrown in.

1. If possible, test a value that is one less than the expected minimum value the loop can take. For example, if we expect to loop with the control variable going from 0 to 100, try -1 and see what happens.
2. Try the minimum number of iterations—usually zero iterations. Occasionally, there will be a positive number as the minimum.
3. Try one more than the minimum number.
4. Try to loop once (this test case may be redundant; omit it if so).
5. Try to loop twice (this might also be redundant).
6. Try to test a typical value. Beizer always believes in testing what he often calls a nominal value. Note that the number of loops, from one to max, is actually an equivalence set. The nominal value is often an extra test that we tend to leave out.
7. Try to test one less than the maximum value.
8. Try to test the maximum number of loops.
9. Try to test one more than the maximum value.
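
A sketch of the values those steps would select for the card's example loop, whose control variable is intended to run from 0 to 100 (the loop body here is a stand-in):

    #include <stdio.h>

    /* Loop under test: intended to execute its body n times, 0 <= n <= 100. */
    static int run_loop(int n) {
        int body_runs = 0;
        for (int i = 0; i < n; i++)
            body_runs++;              /* stand-in for the real loop body */
        return body_runs;
    }

    int main(void) {
        /* One below minimum, minimum, min+1 (= once), twice, a nominal value,
           max-1, max, and one above maximum. */
        int n_values[] = { -1, 0, 1, 2, 50, 99, 100, 101 };
        for (int i = 0; i < 8; i++)
            printf("n = %4d -> body executed %d times\n",
                   n_values[i], run_loop(n_values[i]));
        return 0;
    }

Watching what the code does with -1 and 101 is the interesting part; the in-range values mostly confirm expected behavior.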

Statement coverage hints:

1. Not testing a piece of code leaves a residue of bugs in the program in proportion to the size of the untested code and the probability of bugs.
2. The high-probability paths are always thoroughly tested if only to demonstrate that the system works properly. If you have to leave some code untested at the unit level, it is more rational to leave the normal, high-probability paths untested, because someone else is sure to exercise them during integration testing or system testing.
3. Logic errors and fuzzy thinking are inversely proportional to the probability of the path's execution.
4. The subjective probability of executing a path as seen by the routine's designer and its objective execution probability are far apart. Only analysis can reveal the probability of a path, and most programmers' intuition with regard to path probabilities is miserable.
5. The subjective evaluation of the importance of a code segment as judged by its programmer is biased by aesthetic sense, ego, and familiarity. Elegant code might be heavily tested to demonstrate its elegance or to defend the concept, whereas straightforward code might be given cursory testing because "How could anything go wrong with that?"

Some of the consequences of pointer failures:

1. Sometimes we get lucky and nothing happens. A wild pointer corrupts something that is not used throughout the rest of the test session. Unfortunately, in production we are not this lucky; if you damage something with a pointer, it usually shows a symptom eventually.
2. The system might crash. This might occur when the pointer trashes an instruction or a return address on the stack.
3. Functionality might be degraded slightly—sometimes with error messages, sometimes not. This might occur with a gradual loss of memory due to poor pointer arithmetic or other sloppy usage.
4. Data might be corrupted. Best case is when this happens in such a gross way that we see it immediately. Worst case is when the data are stored in a permanent location where they will reside for a period before causing a failure that affects the user.
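
A deliberately defective C sketch that produces the kind of wild (dangling) pointer write the list above describes; which consequence you see depends on what the freed block is later reused for:

    #include <stdlib.h>
    #include <string.h>

    /* Defective on purpose: writes through a pointer after freeing it.
       The result is undefined behavior—possibly nothing visible, a crash,
       degraded behavior, or silently corrupted data. */
    int main(void) {
        char *buffer = malloc(32);
        if (buffer == NULL) return 1;
        strcpy(buffer, "balance: 100.00");
        free(buffer);                   /* buffer is now dangling        */
        strcpy(buffer, "corrupted!");   /* wild write into freed memory  */
        return 0;
    }

Dynamic analysis tools typically catch this class of defect far more reliably than black-box testing does.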

To derive a set of tests that covers the state transition table, we can use the following procedure.

1. Start with a set of tests (including the starting and stopping state rule), derived from a state transition diagram, that achieves state/transition coverage.
2. Construct the state transition table and confirm that the tests cover all the defined rows. If they do not, then either you didn't generate the existing set of tests properly, or you didn't generate the table properly, or the state transition diagram is screwed up. Do not proceed until you have identified and resolved the problem, including re-creating the state transition table or the set of tests, if necessary.
3. Select a test that visits a state for which one or more undefined rows exist in the table. Modify that test to attempt to introduce the undefined event/condition combination for that state. Notice that the action in this case is undefined.
4. As you modify the tests, mark the row as covered. The easiest way to do this is to take a printed version of the table and use a marker to highlight each row as covered.
5. Repeat steps 3 and 4 until all rows have been covered.

How do we derive test cases to cover those sequences and achieve the desired level of coverage

1. Start with a set of tests (including the starting and stopping state rule), derived from a state transition diagram, that achieves state/transition coverage.
2. Construct the switch table using the technique shown previously. Once you have, confirm that the tests cover all of the cells in the 0-switch columns. If they do not, then either you didn't generate the existing set of tests properly, or you didn't generate the switch table properly, or the state transition diagram is wrong. Do not proceed until you have identified and resolved the problem, including re-creating the switch table or the set of tests, if necessary. Once that is done, check for higher-order switches already covered by the tests.
3. Now, using 0-switch sequences as needed, construct a test that reaches a state from which an uncovered higher-order switch sequence originates. Include that switch sequence in the test. Check to see what state this leaves you in. Ideally, another uncovered higher-order switch sequence originates from this state, but if not, see if you can use 0-switch sequences to reach such a state. You're crawling around in the state transition diagram looking for ways to cover higher-order sequences. Repeat this for the current test until the test must terminate.
4. As you construct tests, mark the switch sequences as covered once you include them in a test. The easiest way to do this is to take a printed version of the switch table and use a marker to highlight each cell as covered.
5. Repeat steps 3 and 4 until all switch sequences have been covered.

Advice for reducing the number of tests when dealing with nested loops.

1. Starting at the innermost loop, set all outer loops to their minimum iteration setting.
2. Test the boundary values for the innermost loop as shown previously.
3. If you have done the outermost loop already, go to step 5.
4. Continue outward, one loop at a time, until you have tested all loops.
5. Test the boundaries of all loops simultaneously. That is, set all loops to 0 iterations, then 1 iteration, then the maximum, then 1 more than the maximum.
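
Applying that advice to a pair of hypothetical nested loops, each intended to iterate 1 to 10 times (the ranges and the function are assumptions for this sketch):

    #include <stdio.h>

    /* Inner loop body runs outer * inner times when both counts are valid. */
    static int nested(int outer, int inner) {
        int inner_body_runs = 0;
        for (int i = 0; i < outer; i++)
            for (int j = 0; j < inner; j++)
                inner_body_runs++;
        return inner_body_runs;
    }

    int main(void) {
        /* Steps 1-2: hold the outer loop at its minimum (1 iteration) and
           push the inner loop through its boundary values. */
        int inner_values[] = { 0, 1, 2, 5, 9, 10, 11 };
        for (int k = 0; k < 7; k++)
            printf("outer=1 inner=%2d -> %d\n",
                   inner_values[k], nested(1, inner_values[k]));

        /* Step 4 would repeat the idea for the outer loop; step 5 then
           pushes both boundaries at once, e.g., one past the maximum. */
        printf("outer=11 inner=11 -> %d\n", nested(11, 11));
        return 0;
    }

This keeps the test count roughly linear in the number of loops rather than combinatorial.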

We have covered three main different ways of approaching test design so far:

1. Statement testing, where the statements themselves drove the coverage.
2. Decision testing, where the branches drove the coverage.
3. Condition, decision/condition, modified condition/decision, and multiple condition coverage, all of which looked at sub-expressions and atomic conditions of a particular decision.

In testing, we're concerned with two main types of risks.

1. Product or quality risks
2. Project or planning risks

(A || B) && (C == D)

A and B are both atomic conditions that are combined together by the OR operator to calculate a value for the subexpression (A || B). Because A and B are both atomic conditions, the subexpression (A || B) cannot be an atomic condition. However, (C == D) is an atomic condition as it cannot be broken down any further. That makes a total of three atomic conditions.

boundary value analysis:

A black-box test design technique in which test cases are designed based on boundary values.

equivalence partitioning:

A black-box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.

state transition testing:

A black-box test design technique in which test cases are designed to execute valid and invalid state transitions.

decision table testing:

A black-box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.

state diagram:

A diagram that depicts the states that a component or system can assume and shows the events or circumstances that cause and/or result from a change from one state to another.

test plan:

A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used (and the rationale for their choice), and any risks requiring contingency planning. It is a record of the test planning process.

specification:

A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system and, often, the procedures for determining whether these provisions have been satisfied.

risk:

A factor that could result in future negative consequences; usually expressed as impact and likelihood.

N-switch testing:

A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions.

control-flow analysis:

A form of static analysis based on a representation of unique paths (sequences of events) in the execution through a component or system. Control-flow analysis evaluates the integrity of control-flow structures, looking for possible control-flow anomalies such as closed loops or logically unreachable process steps.

data-flow analysis:

A form of static analysis based on the definition and usage of variables.

test point analysis (TPA):

A formula-based test estimation method based on function point analysis.

state table:

A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.

test level:

A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test, and acceptance test.

test strategy:

A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).

test policy:

A high-level document describing the principles, approach, and major objectives of the organization regarding testing.

test schedule:

A list of activities, tasks, or events of the test process, identifying their intended start and finish dates and/or times and interdependencies.

memory leak:

A memory access failure due to a defect in a program's dynamic store allocation logic that causes it to fail to release memory after it has finished using it, eventually causing the program and/or other concurrent processes to fail due to lack of memory.
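
A minimal, intentionally leaky C example of the pattern the definition describes; a dynamic analysis tool (a leak detector) would report the lost blocks:

    #include <stdlib.h>
    #include <string.h>

    /* Defective on purpose: every call allocates 1 KB that is never freed,
       so memory consumption grows until the program, or a neighboring
       process, fails for lack of memory. */
    static void handle_request(const char *payload) {
        char *copy = malloc(1024);
        if (copy == NULL) return;
        strncpy(copy, payload, 1023);
        copy[1023] = '\0';
        /* ... use the copy ... */
        /* missing: free(copy); */
    }

    int main(void) {
        for (int i = 0; i < 100000; i++)
            handle_request("example request");
        return 0;
    }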

session-based test management:

A method for measuring and managing session-based testing, e.g., exploratory testing.

dd-path:

A path of execution (usually through a graph representing a program, such as a flowchart) that does not include any conditional nodes, such as the path of execution between two decisions.

wild pointer:

A pointer that references a location that is out of scope for that pointer or that does not exist.

equivalence partition:

A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

defect-based technique:

A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category.

product risk:

A risk directly related to the test object. See also risk.

project risk:

A risk related to management and control of the (test) project, e.g., lack of staffing, strict deadlines, changing requirements, etc.

risk type:

A set of risks grouped by one or more common factors such as a quality attribute, cause, location, or potential effect of risk. A specific set of product risk types is related to the type of testing that can mitigate (control) that risk type. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.

basis test set:

A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

test oracle:

A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), other software, a user manual, or an individual's specialized knowledge, but it should not be the code.

test charter:

A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing.

defect taxonomy:

A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.

Failure Mode and Effect Analysis (FMEA):

A systematic approach to risk identification and analysis of possible modes of failure and attempting to prevent their occurrence.

decision table:

A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

error guessing:

A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made and to design tests specifically to expose them.

test control:

A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.

test monitoring:

A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned.

master test plan:

A test plan that typically addresses multiple test levels.

level test plan:

A test plan that typically addresses one test level. See also test plan.

state transition:

A transition between two states of a component or system.

branch testing:

A white-box test design technique in which test cases are designed to execute branches.

condition testing:

A white-box test design technique in which test cases are designed to execute condition outcomes.

decision testing:

A white-box test design technique in which test cases are designed to execute decision outcomes.

data-flow testing:

A white-box test design technique in which test cases are designed to execute definition and use pairs of variables.

path testing:

A white-box test design technique in which test cases are designed to execute paths.

statement testing:

A white-box test design technique in which test cases are designed to execute statements.

multiple condition testing:

A white-box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).

condition determination testing:

A white-box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

test basis:

All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

requirements-based testing:

An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g., tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

risk-based testing:

An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.

Wideband Delphi:

An expert-based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.

Failure Mode, Effect and Criticality Analysis (FMECA):

An extension of FMEA. In addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value.

exploratory testing:

An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

boundary value:

An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, such as, for example, the minimum or maximum value of a range.

static analysis:

Analysis of software artifacts, e.g., requirements or code, carried out without execution of these software development artifacts. Static analysis is usually carried out by means of a supporting tool.

software attacks:

Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.

LCSAJ:

Linear Code Sequence and Jump, consists of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control-flow is transferred at the end of the linear sequence.

structure-based design technique (white-box test design technique):

Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

specification-based technique (or black-box test design technique):

Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

experience-based technique:

Procedure to derive and/or select test cases based on the tester's experience, knowledge, and intuition.

risk management:

Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

black-box testing:

Testing, either functional or non-functional, without reference to the internal structure of the component or system.

test estimation:

The calculated approximation of a result related to various aspects of testing (e.g., effort spent, completion date, costs involved, number of test cases, etc.) which is usable even if input data may be incomplete, uncertain, or noisy.

x

This expression, shown above, has a single Boolean variable, x, which is itself an atomic condition and might evaluate to TRUE or FALSE.

risk level:

The importance of a risk as defined by its characteristics, impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g., high, medium, low) or quantitatively.

test management:

The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

risk analysis:

The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

dynamic analysis:

The process of evaluating behavior, e.g., memory performance, CPU usage, of a system or component during execution.

risk identification:

The process of identifying risks using techniques such as brainstorming, checklists, and failure history.

risk mitigation or risk control:

The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

Decision/condition coverage guarantees

condition coverage, decision coverage, and statement coverage.

Control-flow testing is done through

control-flow graphs, a way of abstracting a code module in order to better understand what it does. Control-flow graphs give us a visual representation of the structure of the code. The algorithm for all control-flow testing consists of converting a section of the code into a control-flow graph and then analyzing the possible paths through the graph. There are a variety of techniques that we can apply to decide just how thoroughly we want to test the code. Then we can create test cases to test to that chosen level.
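
A sketch of the idea: the hypothetical function below reduces to a control-flow graph with one decision node and two paths, so two tests cover every branch:

    #include <stdio.h>

    /* Hypothetical module whose control-flow graph has a single decision. */
    static int classify(int x) {
        int result;
        if (x < 0)            /* decision node                        */
            result = -1;      /* path 1: the TRUE (then) branch       */
        else
            result = 1;       /* path 2: the FALSE (else) branch      */
        return result;        /* junction node where the paths rejoin */
    }

    int main(void) {
        /* One test through each path: 100% decision coverage for this code. */
        printf("%d %d\n", classify(-5), classify(5));
        return 0;
    }

How many of the possible paths we choose to exercise is exactly the "how thoroughly" decision described above.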

We will split the lifecycle of a data variable into three separate patterns:

d: This stands for the time when the variable is created, defined, or initialized.
u: This stands for used. The variable may be used in a computation or in a decision predicate.
k: This stands for killed, destroyed, or out of scope.
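
A short C sketch annotating the d / u / k pattern, plus one anomaly that data-flow analysis would flag (both functions are hypothetical):

    #include <stdio.h>

    /* Normal lifecycle of 'total': defined, used and redefined, used, killed. */
    int sum_to(int n) {
        int total = 0;                     /* d: total defined               */
        for (int i = 1; i <= n; i++)       /* d, then u, of i                */
            total = total + i;             /* u of total and i, then d again */
        return total;                      /* u: total used                  */
    }                                      /* k: total and i go out of scope */

    /* dd anomaly: 'rate' is defined and then redefined before any use,
       a pattern data-flow analysis reports as suspicious. */
    double lookup_rate(int premium_customer) {
        double rate = 0.05;                          /* d: never used        */
        rate = premium_customer ? 0.02 : 0.07;       /* d: redefinition      */
        return rate;                                 /* u                    */
    }

    int main(void) {
        printf("%d %.2f\n", sum_to(10), lookup_rate(1));
        return 0;
    }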

D && F

has two atomic conditions, D and F, which are combined together by the AND operator to determine the value of the whole expression.

boundary value analysis is

predominantly about testing the edges of equivalence classes. In other words, instead of selecting one member of the class, we select the largest and smallest members of the class and test them. We will discuss some other options for boundary analysis later.
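
A small sketch combining equivalence partitioning and boundary value analysis for a hypothetical rule that accepts integers from 1 to 100 (the rule and the chosen values are assumptions):

    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical acceptance rule: the valid partition is 1..100; below and
       above the range are the two invalid partitions. */
    static bool accept(int value) {
        return value >= 1 && value <= 100;
    }

    int main(void) {
        /* Equivalence partitioning: one representative per partition. */
        assert(accept(-50) == false);
        assert(accept(37)  == true );
        assert(accept(250) == false);

        /* Boundary value analysis: the smallest and largest members of the
           valid partition, plus their invalid neighbors. */
        assert(accept(0)   == false);
        assert(accept(1)   == true );
        assert(accept(100) == true );
        assert(accept(101) == false);
        return 0;
    }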

"transactional situations"

situations where the conditions—inputs, preconditions, etc.—that exist at a given moment in time for a single transaction are sufficient by themselves to determine the actions the system should take. If the conditions on their own are not sufficient, but we must also refer to what conditions have existed in the past, then we'll want to use state-based testing.

(a>b)||(x+y==-1)&&((d)!=TRUE)

Three atomic conditions. The first, (a>b), is an atomic condition that cannot be broken down further. The second, (x+y == -1), is an atomic condition following the same rule. In the last sub-expression, (d != TRUE) is the atomic condition.

How do we distinguish a state, an event, or an action?

■ A state persists until something happens—something external to the thing itself, usually—to trigger a transition. A state can persist for an indefinite period.
■ An event occurs, either instantly or in a limited, finite period. It is the something that happened—the external occurrence—that triggers the transition. Events can be triggered in a variety of ways, such as, for example, by a user with a keyboard or mouse, an external device, or even the operating system.
■ An action is the response the system has during the transition. An action, like an event, is either instantaneous or requires a limited, finite period. Often, an action can be thought of as a side effect of an event.

Some examples of data-flow errors:

■ Assigning an incorrect or invalid value to a variable. These kinds of errors include data-type conversion issues where the compiler allows a conversion but there are side effects that are undesirable.
■ Incorrect input results in the assignment of invalid values.
■ Failure to define a variable before using its value elsewhere.
■ Incorrect path taken due to the incorrect or unexpected value used in a control predicate.
■ Trying to use a variable after it is destroyed or out of scope.
■ Redefining a variable before it is used.
■ Side effects of changing a value when the scope is not fully understood. For example, a global or static variable's change may cause ripples to other processes or modules in the system.

Dynamic tests are broken down into five main types in the Advanced level:

■ Black-box (also referred to as specification-based or behavioral), where we test based on the way the system is supposed to work. Black-box tests can further be broken down into two main subtypes, functional and non-functional, following the ISO 9126 standard. The easy way to distinguish between functional and non-functional is that functional is testing what the system does and non-functional is testing how or how well the system does what it does.
■ White-box (also referred to as structural), where we test based on the way the system is built.
■ Experience-based, where we test based on our skills and intuition, along with our experience with similar applications or technologies.
■ Defect-based, where we use our understanding of the type of defect targeted by a test as the basis for test design, with tests derived systematically from what is known about the defect.
■ Dynamic analysis, where we analyze an application while it is running, usually via some kind of instrumentation in the code.

What technical factors should we consider when assessing likelihood?

■ Complexity of technology and teams
■ Personnel and training issues
■ Intrateam and interteam conflict
■ Supplier and vendor contractual problems
■ Geographical distribution of the development organization, as with outsourcing
■ Legacy or established designs and technologies versus new technologies and designs
■ The quality—or lack of quality—in the tools and technology used
■ Bad managerial or technical leadership
■ Time, resource, and management pressure, especially when financial penalties apply
■ Lack of earlier testing and quality assurance tasks in the lifecycle
■ High rates of requirements, design, and code changes in the project
■ High defect rates
■ Complex interfacing and integration issues

Fields of the rows of a state transition table

■ Current state ■ Event/condition ■ Action ■ New state

List of basic security attacks:

■ Denial of service, which involves trying to tie up a server with so much traffic that it becomes unavailable.
■ Distributed denial of service, which is similar, is the use of a network of attacking systems to accomplish the denial of service.
■ Trying to find a back door—an unsecured port or access point into the system—or trying to install a back door that provides us with access.
■ Sniffing network traffic—watching it as it flows past a port on the system acting as a sniffer—to capture sensitive information like passwords.
■ Spoofing traffic, sending out IP packets that appear to come from somewhere else, impersonating another server, and the like.
■ Spoofing e-mail, sending out e-mail messages that appear to come from somewhere else.
■ Replay attacks, where interactions between some user and a system, or between two systems, are captured and played back later.
■ TCP/IP hijacking, where an existing session between a client and a server is taken over, generally after the client has authenticated.
■ Weak key detection in encryption systems.
■ Password guessing, either by logic or by brute-force password attack using a dictionary of common or known passwords.
■ Virus, an infected payload that is attached to or embedded in some file and, when run, causes replication of the virus and perhaps damage to the system that ran the payload.
■ A worm, similar to a virus, can penetrate the system under attack itself—generally through some security lapse—and then cause its replication and perhaps damage.
■ War-dialing is finding an insecure modem connected to a system, which is rare now, but war-driving is finding unsecured wireless access points, which is amazingly common.
■ Finally, social engineering is not an attack on the system but on the person using the system. It is an attempt to get the user to, say, change her password to something else, to e-mail a file, etc.

We need to identify both product and project risks. We can identify both kinds of risks using techniques like these:

■ Expert interviews
■ Independent assessments
■ Use of risk templates
■ Project retrospectives
■ Risk workshops and brainstorming
■ Checklists
■ Calling on past experience

Risk can guide testing in various ways:

■ First, during all test activities, test managers, technical test analysts, and test analysts allocate effort for each quality risk item proportionally to the level of risk. Technical test analysts and test analysts select test techniques in a way that matches the rigor and extensiveness of the technique with the level of risk. Test managers, technical test analysts, and test analysts carry out test activities in risk order, addressing the most important quality risks first and only at the very end spending any time at all on less-important ones. Finally, test managers, technical test analysts, and test analysts work with the project team to ensure that the repair of defects is appropriate to the level of risk.
■ Second, during test planning and test control, test managers provide both mitigation and contingency responses for all significant, identified project risks. The higher the level of risk, the more thoroughly that project risk is managed.
■ Third, test managers, technical test analysts, and test analysts report test results and project status in terms of residual risks. For example, which tests have not yet been run or have been skipped? Which tests have been run? Which have passed? Which have failed? Which defects have not yet been fixed or retested? How do the tests and defects relate back to the risks?

So, what is true about applying these defect- and experience-based techniques?

■ There are no specifications available.
■ There is poor documentation of the system under test.
■ You are forced to cope with a situation where insufficient time was allowed for the test process earlier in the lifecycle; specifically, insufficient time to plan, analyze, design, and implement tests.
■ Testers have experience with the application domain, with the underlying technology, and, perhaps most important, with testing.
■ We can analyze operational failures and incorporate that information into our taxonomies, error guesses, checklists, explorations, and attacks.

What should we look for to decide whether to adjust our risk analysis?

■ Totally new or very much changed product risks
■ Unstable or defect-prone areas discovered during the testing
■ Risks, especially regression risk, associated with fixed defects
■ Discovery of unexpected bug clusters
■ Discovery of business-critical areas that were missed

Considering the user interface for the Telephone Banker: 1. Outline a set of attacks that could be made. 2. For each attack, depict how you might make that attack.

■ Trigger all error messages. We would get a list of error messages that are defined by the developers, analyze each message, and try to trigger each one. The defect we are trying to isolate is where the developer tried to handle an error condition but failed to do it effectively.
■ Attack all default values. We would bring up each screen. For any input field that has a default value, we would remove it. At that point, we would try to process the screen. If an error was correctly thrown due to the field being NULL, we would try a different illegal value there. Often failure will occur because the developer, having put a default value in the field, assumes it does not need other error handling.
■ Attack input fields. We would try putting in illegal values (including NULL). These would include the standards (chars, symbols, non-digit values in integer fields) but would also include trying to put complex expressions in numeric fields and using special characters that may affect the programming language used (escape characters, pipes, redirection, etc.). If the system does not parse inputs correctly, it may fail.
■ Overflow buffers. Our standard attack is to paste 4,000 characters into every input field. If the developer does not check length before using a buffer, bad things are going to happen.
■ Attack dates. Try different formats, illegal values, leap year anomalies, etc. We don't find that this works often, since developers tend to use well-tested libraries for dates. But every now and then...
■ Change calculated outputs. Sometimes, after inputting some values, a system will calculate some values and then ask for more input. If we can change that intermediate output, we will, and then see what happens when the other input is accepted. Sometimes the system re-inputs the values that were output without checking whether their values changed.
■ Look for recursive input fields. In HELLOCARMS, there may be recursive fields for entering current debts or some such. We would try entering the same debt over and over and find out how the system handles it. The developer may have a limit on how many entries are allowed but did not document it. She might also not check for redundancies.

Consensus should exist between test analyst and test manager about the following:

■ What the test session is to achieve
■ Where the test session will focus
■ What is in and out of scope for the session
■ What resources should be used, including how long it should last (often called a timebox)

