Midterm Review

Strong equivalence class testing

(Multi-dimensional.) Based on the Cartesian product of the partition subsets (A x B x C), i.e. testing all interactions of all equivalence classes. # cases = |A| x |B| x |C|

White box testing

A procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system. Aka structural, open box testing - explicit knowledge of internal working of SUT is used to select, execute, and collect test data - using specific knowledge of source code to define tests

Path

A sequence of nodes in which each pair of adjacent nodes is an edge

Test suites

A set of tests that all share the same fixture. The order of the tests shouldn't matter

NIST's covering array tool

A solution to the issue of finding the minimum covering array. Define the variables in the SUT and add constraints among them. The proposed covering array can be output as a spreadsheet

Doubles

A test double is a replacement for a DOC (depended-on component), e.g. a component that sends an email when a request fails

Test result formatter

A test runner produces results in 1 or more output formats

Maintainability

Ability to undergo repairs and modifications. Measured as the time to service restoration since the last failure occurrence, i.e. as the duration of continuous delivery of incorrect service

Non-regression verification

After correction, repeat the verification process in order to check that the correction has had no adverse consequences (no regression)

CFG

Control flow graph (flowchart). A directed graph in which each node corresponds to a program statement and each directed edge indicates the flow of control from one statement of the program to another

3 attributes of software structures

1. Control flow structure - the sequence of execution of the instructions of the program 2. Data flow - keeping track of data as it is created and handled by the program 3. Data structure - the organization of the data itself, independent of the program

DFG

Data flow graph

Test automation

Embed test values into executable scripts

test coverage tools

Emma, CodeCover, JaCoCo

Scope

Ensure that each unit has been implemented correctly (looking for faults in isolated areas- class/obj)

Completeness

Entire input set is covered

Node (line or statement) coverage

Execute every program statement

What is observable? (fault, failure, error?)

Failure

Failure Density

Failures per kilo-line of developed code (KLOC) or per function point (FP), e.g. 1 failure per KLOC, 0.2 failures per FP, etc.

Mock

Fake Java class that replaces the depended-on class and can be examined after the test has finished for its interactions with the class under test, e.g. ask it whether a method was called or how many times it was called. Typical mocks are classes with side effects that need to be examined

Test drivers

Modules that act as temporary replacements for a calling module and give the same output as that of the actual product

Reachability

Not all statements are always reachable, and we can't always decide whether a statement is reachable or how many are. When coverage doesn't reach 100%, it is therefore hard to tell whether the cause is unreachable code or missing tests

Requirement analysis

Studying requirements from a testing perspective to identify the testable requirements. Results in a Requirement Traceability Matrix (RTM)

SUT

System Under Test

Software Testing

Techniques to execute programs with the intent of finding as many defects as possible and/or gaining sufficient confidence in the software system under test

Functional testing

Testing the functional requirements. Checking the correct functionality of a system

Fault prevention

To avoid fault occurrences by construction. Attained by quality control techniques employed during the design and manufacturing of software. It intends to prevent operational physical faults. Example techniques: design review, modularization, consistency checking, structured programming, etc. Activities: requirement review, design review, clear code, establishing standards, using CASE tools with built-in check mechanisms

Disjoint classes

The equivalence classes do not overlap; this avoids redundancy, so only 1 element of each equivalence class needs to be tested

Category-partition testing

combines BVA and EC with domain expertise. Constraints can also be defined

Def-pair set

du(ni, nj, v): the set of du-paths with respect to a given variable v from a given definition (at ni) to a given use (at nj)

Def-path set

du(ni, v): the set of du-paths with respect to a given variable v that start at the node ni where v is defined

Prime flow graph

flow graphs that can't be decomposed non-trivially by sequencing and nesting

Steps for unit testing

1. get the code to test (SUT) 2. write a program that calls the SUT and executes it 3. check whether the results meet the expected results

Loops

looping structures (for, while, etc.)

Weak equivalence class testing

(1-dimensional), choosing 1 value from each equivalence class so that all classes are covered

Fault

(bug) A cause for either a failure of the program or an internal error (e.g. an incorrect state, incorrect timing). It must be detected and removed. - They can be removed without execution (code inspection, design reviews). - Removal of these due to execution depends on the occurrence of the associated failure. - Occurrence depends on the length of execution time and operational profile

Exploratory test case design

(human-based), designing test values based on domain knowledge of the program and human knowledge of testing, exploratory testing

Characteristics of unit testing

- unit test must specify only 1 unit of functionality - must be fast to build - doesn't access DB or file system - doesn't communicate via network - doesn't require any special environment setup - leaves environment in the same state as before - focus on developed components only

Error

1. A human action that results in software containing a fault 2. A discrepancy between a computed, observed, or measured value/condition and the true, specified, or theoretically correct value/condition

Trivial partitions

1. A set containing all expected or legal inputs 2. a set containing unexpected or illegal inputs These can be divided into subsets on which the application is required to behave differently

Means of Dependability

1. Fault prevention- how to prevent occurrence/introduction of faults 2. Fault tolerance- how to deliver correct service in the presence of faults 3. Fault removal- how to reduce the number or severity of faults 4. Fault forecasting- how to estimate the present number, the future incidence, and the likely consequences of faults

3 key points of reliability

1. Reliability depends on how the software is used, thus a model of usage is required 2. Reliability can be improved over time if certain bugs are fixed (reliability growth), thus a trend model (aggregation or regression) is needed 3. Failures may happen at random times, thus a probabilistic model of failure is needed

How to build equivalence classes

1. don't forget equivalence classes for invalid inputs 2. look for extreme range of values 3. look for max size of memory (stackable) variables 4. if an input must belong to a group, 1 equivalence class must include all members of the group 5. analyze levels for boolean variables 6. look for dependent variables and replace them with their equivalents

Control flow coverage

1. from source code, create a CFG 2. design test cases to cover certain elements of CFG 3. decide upon appropriate coverage metrics to report test results (statement, decision, condition, path coverage metrics) 4. execute tests, collect and report coverage data

Stub

A fake class that comes with preprogrammed return values. Injected into the class under test to give control over what's being tested as input, e.g. a database connection that allows you to mimic a connection scenario without a real database. A simple test double that supplies responses to requests from the SUT. Contains pre-defined responses to specific requests. Often hand-coded
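
A minimal hand-coded stub sketch in Java; the names TemperatureSensor and FixedTemperatureStub are hypothetical, invented only for this illustration:

interface TemperatureSensor {
    double read();
}

class FixedTemperatureStub implements TemperatureSensor {
    private final double cannedValue;   // the pre-defined response

    FixedTemperatureStub(double cannedValue) {
        this.cannedValue = cannedValue;
    }

    @Override
    public double read() {
        return cannedValue;   // no real device or connection involved
    }
}

A test would construct the SUT with new FixedTemperatureStub(42.0) to control exactly what the SUT sees as input.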

Mock object

A fake object that decides whether a unit test has passed or failed by watching interactions between objects. Needed when a unit of code depends on an external object. A dummy implementation for a class in Mock framework

Fault tolerance

A fault-tolerant system is capable of providing specified services in the presence of a bounded number of failures. The use of techniques to enable continuous delivery of services during system operation. It's generally implemented by error detection and subsequent system recovery. Based on the principle: "act during operation defined during specification and design" Process: 1. detection of faults and causes, 2. assessment to which the system state has been damaged or corrupted, 3. recovery (remaining operational), 4. fault treatment and continued service (locate and repair fault to prevent it from happening again)

Reliability

A measure of the continuous delivery of correct service. The probability that a system or a capability of a system functions without failure for a specified time or number of natural units in a specified environment, given that the system was functioning properly at the start of the time period. The probability of a failure free operation for a specified time in a specified environment for a given purpose The most important attribute of software

Availability

A measure of the delivery of correct service with respect to the alternation of correct and incorrect service. Avail = uptime/(uptime + downtime) Avail = MTTF/(MTTF + MTTR) = MTTF/MTBF
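
A small worked example with assumed numbers: MTTF = 990 hours and MTTR = 10 hours give Avail = 990/(990 + 10) = 0.99, i.e. 99% availability.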

Test path

A path that starts at an initial node and ends at a final node. Represents execution of test cases

Test automation framework

A set of assumptions and tools that support test automation

Profile based testing

A system may have several functionalities but only a subset of them need to work together at any time. The usage of various operations may vary. Write tests for operations based on their frequency of usage

Service

A system's behaviour as it is perceived by users

Black box testing

A testing approach that focuses on the functionality of the application or product and does not require knowledge of the code. Applies at all granularity levels. Aka specification-based testing.

Service Restoration

A transition from incorrect to correct service

All definitions coverage

ADC. Uses every definition: for each def-path set du(ni, v), the test requirement contains at least 1 path from the set. For each definition, at least 1 use must be reached

All du-path coverage

ADUPC. Follows all du-paths: for each def-pair set du(ni, nj, v), the test requirement contains every path in the set. For each def-use pair, all du-paths between the def and the use must be covered

All uses coverage

AUC. Gets to every use: for each def-pair set du(ni, nj, v), the test requirement contains at least 1 path from the set. For each definition, every use must be reached

Safety

Absence of catastrophic consequences on users and the environment. An extension of reliability: safety is reliability with respect to catastrophic failures. When the states of correct and incorrect service due to non-catastrophic failures are grouped into a safe state (free from catastrophic damage, not from danger), safety is a measure of continuous safeness, or of the time to catastrophic failure

Integrity

Absence of improper system state alterations

Confidentiality

Absence of unauthorized disclosure of information

Quantitative Evaluation (probabilistic)

Aims to evaluate, in terms of probabilities, the extent to which some of the attributes of dependability are satisfied; those attributes are then viewed as measures of dependability

Qualitative Evaluation (ordinal)

Aims to identify, classify, rank the failure modes or event combinations that lead to system failures

Testing

Allows for defect detection and starts later in the life cycle

Crowd testing

An approach to testing in which testing is distributed to a large group of testers.

Failure

An event that occurs when the delivered service deviates from the correct service. It is thus a transition from correct to incorrect service, i.e. to not implementing the system function. Any departure of system behaviour in execution from user needs. Caused by a fault, which is usually caused by human error

Test runner

An executable program that runs tests implemented using JUnit framework and reports test results

User

Another system (physical, human) that interacts with the system at the service interface

Data flow coverage

Augment the CFG with extra information - branch predicates - defs: statements that assign values to variables - uses: statements that use variables

Dependability attributes

Availability - readiness for correct service; Reliability - continuity of correct service; Safety - absence of catastrophic consequences on users and environment; Confidentiality - absence of unauthorized disclosure of info; Integrity - absence of improper system state alterations; Maintainability - ability to undergo repairs and modifications. Availability is always required; the others may or may not be.

BVT

Boundary Value Testing. Mistakes are made in processing values at/near the boundaries of equivalence classes

Validation

Checking whether the system meets customer's actual needs (building the right product)

Subsumption

Coverage criterion C1 subsumes C2 iff every test set that satisfies C1 also satisfies C2, e.g. branch coverage subsumes statement coverage - 100% branch coverage implies 100% statement coverage

Data flow graph

DFG. A variation of CFG where nodes are annotated by "def" and "use" which is broken into p-use and c-use. arcs are annotated by predicate conditions as true or false. A def-use table can help identify the test paths

Structural coverage criteria

Defined on a graph just in terms of nodes and edges

Correct Service

Delivered when the service implements the system function

System Outage

Delivery of incorrect service

Condition coverage

Design a test set so that each individual (atomic) condition in the program evaluates to both true and false. Satisfying condition coverage does not guarantee branch coverage, so branch and condition coverage should usually be considered together when executing the program for each element in the test set. In short: make each individual condition true and false

Modified Condition Decision Coverage (MCDC)

Effectively tests important combinations of conditions: only those combinations of values in which every atomic condition independently affects the overall decision's truth value, i.e. the outcome of the decision changes as a result of changing a single condition
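
A tiny Java sketch (hypothetical grantAccess method) contrasting branch, condition, and MC/DC coverage on the decision (a && b):

public class AccessCheck {
    static boolean grantAccess(boolean a, boolean b) {
        if (a && b) {        // one decision with two atomic conditions
            return true;
        }
        return false;
    }
    // Branch coverage: the decision must be both true and false,
    //   e.g. (true, true) and (false, true).
    // Condition coverage: each of a and b must be both true and false,
    //   e.g. (true, false) and (false, true) -- the decision is false in both
    //   tests, so branch coverage is not guaranteed.
    // MC/DC: (true, true), (false, true), (true, false) -- each of the last two
    //   tests differs from (true, true) in a single condition and flips the
    //   decision, so both conditions independently affect the outcome.
}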

ECT

Equivalence Class Testing 1. decide how many independent variables you have 2. for each variable, decide how many distinct partitions it has 3. select the number of test cases based on the weak, strong, or robustness criteria 4. create test cases using acceptable values from each distinct partition of each variable involved 5. review the test cases: remove redundant ones, add tests for perceived problems, etc. 6. repeat the steps until satisfied with the test cases

Test evaluation

Evaluate results of testing, report to developers

Statement coverage

Faults can't be discovered if the code containing them is not executed. Equivalent to covering all nodes in the CFG. Several inputs may execute the same statements. Using this type of coverage alone may lead to incompleteness

Manual scripted testing

Follows a path written by the tester that includes test cases and documented steps. No deviation from what is laid out in the script

Method

For a given class, create another class of the same name with "Test" appended; it contains the various test methods to run. Each test method looks for particular results and either passes or fails

Testing through SDLC

Identifying test cases early to help shorten the development time.

Testing in iterative model

Iterative process starts with implementing a subset of the requirements. At each iteration, design modifications are made and new functional capabilities are added. Idea is to develop a system through repeated cycles and in incremental portions

JUnit

Java testing framework used to write and run tests. Open sourced
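
A minimal JUnit 4 sketch; the Calculator class is a trivial SUT defined inline so the example is self-contained:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Trivial SUT, included only to make the sketch runnable.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    public void addReturnsSumOfTwoIntegers() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));   // expected value (oracle) vs. actual
    }
}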

Combinatorial testing

It may be impractical to test all possible combinations of values for all variables. In this type of testing, we combine values systematically but not exhaustively. We assume that unplanned interactions will happen among only a few parameters. If all faults are triggered by interactions of t or fewer variables, then testing all t-way combinations can provide strong assurance

Procedure nodes

Nodes with out degree = 1

Predicate nodes

Nodes with out degree other than 1 or 0 (branches)

How to build a CFG

Nodes: - statement nodes represent single entry single exit sequence statements - predicate nodes represent conditions for branching - auxiliary nodes for completing the graph

Equivalence class

Partitions of the input space in such a way that input data have the same effect on the SUT

Fault removal

Performed in both the development phase and operational life of a system. During the development phase, there are 3 steps: verification, diagnosis, and correction

Defect

Refers to either fault (cause) or failure (effect)

Decision/branch coverage

Relates to decisions in the program. Given a CFG, select a test set such that, by executing the program for each test case in the set, each edge out of the CFG's decision nodes is traversed at least once. Need to exercise all decisions that steer the control flow of the program with both true and false values. In short: make each compound predicate true and false

RTM

Requirements traceability matrix. Lists the test cases as rows and the requirements as columns. An "X" is placed in each cell where a test case covers that requirement. Helps engineers see whether all requirements are tested, and helps project managers see whether they are on schedule with development and testing

Test execution

Run tests on the software and record the results

Decision table

Sections: - conditions: lists the conditions and their combinations, expressing the relationships among decision variables - actions: lists the responses to be produced when the corresponding combinations of conditions are true. 1 test case for each rule of the table; # cases = 2^(# conditions)
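
A minimal example with hypothetical login rules: 2 conditions give 2^2 = 4 rules. R1: valid user & valid password -> grant access; R2: valid user & invalid password -> reject; R3: invalid user & valid password -> reject; R4: invalid user & invalid password -> reject. One test case is derived from each rule.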

Path coverage

Select a test set such that, by executing the program for each test case, all paths leading from the initial to the final node of the program's CFG are traversed. Counts the number of full paths from input to output through the program that get executed. Full path coverage implies full branch coverage

SDLC

Software Development Life Cycle

STLC

Software Testing Life Cycle

Inspection

Strict, close examinations conducted on specifications, design, code, tests, etc. It allows for defect detection, prevention, isolation. Starts early in the life cycle. Up to 20x more efficient than testing. Code reading detects 2x as many defects/hour as testing does. 80% of coding errors are found by this method.

TDD

Test Driven Development. Test cases are the requirements of the system. Shortens the programming feedback loop and helps ensure the development of high-quality software, although test writing is very time consuming

Exploratory testing

Test cases are designed and executed at the same time. About enabling choice, not constraining it. A human tester uses their brain to create realistic scenarios that will cause the software to either fail or succeed. No formal plan; all about varying things

Coverage metrics

Test coverage measures the amount of testing performed by some tests. Coverage = (# coverage items exercised / total # of coverage items) x 100%
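
A quick example with assumed numbers: if the tests exercise 45 of the 60 statements in the SUT, statement coverage = 45/60 x 100% = 75%.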

Exhaustive testing

Testing a software system using all possible inputs. Usually impossible

Criteria-based testing

Testing based on a set of criteria

Manual regression testing

Testing done to check that a system update doesn't re-introduce faults that have been corrected earlier. Usually performed after a bug fix code is checked into the code repo (smoke testing)

Non-functional testing

Testing the non-functional requirements: performance, localization/internationalization, recovery, security, portability, compatibility, usability, scalability, reliability, etc.

Scripted testing

Tests are first designed and recorded, then may be executed at some later time or by the same or a different tester. Misses the same things every time. In the small (functions): focuses on a single piece of functionality and excludes everything else; easy to localize any faults. In the large (operations): focuses on a set or sequence of functionalities; difficult to localize faults

Failure Effect

The consequences of a failure mode on an operation, function, status of a system, process, activity, or environment. The undesirable outcome of a fault of a system element in a particular mode.

Test execution

The execution of an individual unit test: @Before, @Test, @After
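
A small JUnit 4 sketch of that lifecycle, using a java.util.ArrayDeque as a stand-in fixture:

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class StackLifecycleTest {

    private Deque<Integer> stack;   // fixture object shared by the tests

    @Before
    public void setUp() {
        // runs before each @Test: build a fresh fixture
        stack = new ArrayDeque<>();
        stack.push(1);
    }

    @After
    public void tearDown() {
        // runs after each @Test: leave the environment as it was found
        stack.clear();
    }

    @Test
    public void popLeavesStackEmpty() {
        stack.pop();
        assertTrue(stack.isEmpty());
    }
}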

Failure Mode

The manner in which a fault occurs, i.e. the way in which the element faults

Cyclomatic complexity

The number of regions in a flow graph. A program's complexity can be measured by the cyclomatic number of the program flow graph. The cyclomatic number can be calculated from: - the flow graph - the code - the number of regions in the flow graph
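
As a reminder, the standard McCabe formula (written in the same plain style as the other formulas here): V(G) = E - N + 2P, where E = # edges, N = # nodes, and P = # connected components (P = 1 for a single flow graph). For a planar flow graph this equals the number of regions, and for graphs with only binary decisions it also equals (# predicate nodes) + 1. E.g. (assumed sizes) a flow graph with 9 edges and 8 nodes has V(G) = 9 - 8 + 2 = 3.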

Failure Intensity (Failure rate)

The rate at which failures occur, i.e. the number of failures per natural or time unit. A way of expressing system reliability, e.g. 5 failures per hour

Test fixtures

The set of preconditions or state needed to run a test (context)

Test fixture

The state of a test - objects and variables that are used by more than 1 test - initializations - reset values

Factors affecting software quality

Time, cost, quality/reliability

TBF

Time between failures

Exceptions

Use @Test(expected = Exception.class) to make sure the function throws that exception
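
A minimal JUnit 4 sketch of the expected-exception style (the method names are illustrative):

import org.junit.Test;

public class ExpectedExceptionTest {

    // Passes only if the body actually throws an ArithmeticException.
    @Test(expected = ArithmeticException.class)
    public void divisionByZeroThrows() {
        int zero = 0;
        int ignored = 10 / zero;   // throws ArithmeticException at runtime
    }
}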

Strong-normal ECT

What is this an example of, weak or strong ECT? A has 3 partitions, B has 4, C has 2. 3x4x2 = 24 test cases

Weak-normal ECT

What is this an example of, weak or strong ECT? A has 3 partitions, B has 4, C has 2. max(3,4,2) = 4 test cases

Function

What the system is intended to do, described by the functional specification

Test coverage

Whenever we can count things and tell whether each of those things has been tested by some test

Boundary value analysis

While partitioning selects tests from inside the equivalence classes, BVA focuses on tests at/near the boundaries of an equivalence class - at min, just below, just above - a nominal value - at max, just below, just above
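
A small example with an assumed valid range of [1, 100]: boundary value analysis suggests the test inputs 0, 1, 2 (around the minimum), 50 (nominal), and 99, 100, 101 (around the maximum).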

WCT

Worst case testing. When more than 1 variable uses an extreme value

Assertion

a line of code that compares 2 given values (one from the test case, one from the oracle). Verifies the behaviour/state of the unit under test, e.g. true, null, false

Def

a location where a value for a variable is stored

use

a location where a variable's value is accessed (read, evaluated, computed)

Du-pair

a pair of definition and use for a variable

Covering array

a set of concrete test cases that covers all t-way interactions among the k factors, each with l levels of values, so that every combination of values of any t factors appears in at least 1 test
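
A minimal example with 3 assumed boolean factors A, B, C: a pairwise (t = 2) covering array needs only 4 tests instead of the 2^3 = 8 exhaustive ones, e.g. (0,0,0), (0,1,1), (1,0,1), (1,1,0) - every pair of factors takes all 4 of its value combinations.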

Profile

a set of disjoint alternatives called elements that represent that phenomenon together with their occurrence probabilities

Test case

a set of inputs and the expected outputs (oracle) for a unit/module/system under test. Specifying a function's direct input variables and the expected outcome. Specifying the indirect input variables gives a test case the necessary context.

Test suite

a set of test cases

Test Suite

a set of test cases that needs to be run together with the SUT

Du-path

a simple path where the initial node of the path is the only defining node of variable in the path - du(ni, v): set of du-paths that start at ni for variable v - du(ni, nj, v): the set of du-paths from ni to nj for variable v

Subpath

a subsequence of nodes in p is a subpath of p

Definition occurrence

a value is written (bound) to a variable

Direct input variable

a variable that controls the operation directly. Ex. arguments, selection menu, entered data field. Important during exploratory/unit test

Indirect input variable

a variable that only influences the operations or its effects are propagated to the operation. Ex. traffic load, environment variables. Important during integration testing.

Parameterized tests

allow running the same test repeatedly using different input test data. 5 steps: 1. annotate test class with @RunWith 2. create a public static method annotated with @Parameters that returns a collection of objects (Array) as test data set 3. create a public constructor that takes in 1 row of test data 4. create an instance variable for each column of test data 5. create your test cases using the instance variables as the source of the test data
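
A JUnit 4 sketch following those 5 steps, with an assumed squaring computation as the behaviour under test:

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)                       // step 1
public class SquareTest {

    @Parameters                                     // step 2: the test data set
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 0, 0 }, { 1, 1 }, { 3, 9 }, { -4, 16 }
        });
    }

    private final int input;                        // step 4: one variable per column
    private final int expected;

    public SquareTest(int input, int expected) {    // step 3: constructor takes one row
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void squareOfInput() {                   // step 5: test uses the instance variables
        assertEquals(expected, input * input);
    }
}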

Robustness testing

Values below the lower bound and above the upper bound are also included in the test cases

Computational use

c-use: compute a value for defining other variables or output values

Tabular representation

composed of a list of operation names and their probability of occurrence

def(n)

contains variables that are defined at node n (written)

use(n)

contains variables that are used at node n (read)

c-use pairs

dcu(x, i): the set of all nodes with a c-use of variable x that can be reached from the definition of x at node i by a def-clear path (i.e. the definition is still live)

Boundary conditions

defining the edge cases of the equivalence classes

Criteria based test case design

design test values to satisfy coverage criteria or other engineering goals

p-use pairs

dpu(x, i): the set of all edges (j, k) such that x has a p-use on edge (j, k) and there is a def-clear path with respect to x from node i to that edge

Edge (decision or branch) coverage

execute every branch

Testing approaches

exhaustive testing, random testing (ad-hoc, exploratory), partitioning

Oracle

getting a value from someone/something that we know is correct

Equivalent classes

When a variable changes, ask whether the resulting test cases are equivalent with respect to the functionality being tested. A group of test cases is equivalent if: - they all test the same unit - if 1 test case can catch a bug, the others probably will too - if 1 test case doesn't catch a bug, the others probably won't either - they all involve the same input variables - they all affect the same output variables

reach

if there's a def-clear path from node m to node p with respect to v, then the definition of v at m reaches the use at p

Mock frameworks

jMock, Mockito, EasyMock Provide auto generation of mock objects that implement a given interface, methods for declaring and asserting your expectations, and logging of what calls are performed on the mock objects
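
A short Mockito sketch; the MailService interface is hypothetical and the collaborator call is made directly to keep the example self-contained:

import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;

public class MailServiceMockTest {

    public interface MailService {
        void send(String to, String body);
    }

    @Test
    public void sendsExactlyOneMail() {
        MailService mail = mock(MailService.class);    // auto-generated mock object

        // A real test would exercise the SUT here, which would in turn
        // call the collaborator; the call is simulated directly instead.
        mail.send("user@example.com", "hello");

        // Declare and assert the expected interactions on the mock.
        verify(mail, times(1)).send("user@example.com", "hello");
        verifyNoMoreInteractions(mail);
    }
}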

parameters that make a good test

long, descriptive names - a timeout - one test per method - avoid logic

Start node

nodes with in degree 0

Terminal (end) nodes

nodes with out degree 0

Def-clear path

A path p is def-clear with respect to v if v is not (re)defined at any of the intermediate nodes of p. In other words, any path starting from a node at which a variable is defined and ending at a node where that variable is used, without the variable being redefined anywhere else on the path

Predicate use

p-use: a variable is used to decide whether a predicate evaluates to true/false

Fault forecasting

performing an evaluation of the system behaviour with respect to fault occurrence or activation. 2 aspects: qualitative/ordinal evaluation and quantitative/probabilistic evaluation

Test planning

preparing a test plan document for various testing, selecting tools for testing, test cost/effort estimation, resource planning and role determining. Deliverables are a test plan document and effort estimation document

Data flow coverage criteria

requires a graph to be annotated with references to variables

limitations of human based testing

scalability, bias, scaling for complex systems, repeatability

Operational profile

set of operations (names and frequency) and their probability of occurrences

Data flow based testing

test the connections between variable definitions (write) and variable uses (read). The starting point is a variation of a CFG annotated with the location of all variables
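
A tiny Java illustration with hypothetical CFG node labels (n1..n5) marking the defs and uses of x:

public class DataFlowExample {
    static int example(int input) {
        int x = input;        // n1: def(x)
        int y = 0;            // n2: def(y)
        if (x > 0) {          // n3: p-use(x) -- x decides the predicate
            y = x * 2;        // n4: c-use(x) and def(y)
        }
        return y;             // n5: c-use(y)
    }
    // du-pairs for x: (n1, n3) and (n1, n4); the path n1-n2-n3-n4 is def-clear for x.
}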

Acceptance testing

testing by users to check whether the system satisfies requirements (aka alpha testing)

system testing

testing the complete system before delivery

Integration testing

testing to expose problems arising from the combination of components

Infeasibility

Not every du-pair can be covered: sometimes no feasible path exercises the du-pair (the du-pair is infeasible)

Length

the number of edges. A single node has a path of length 0

Verification

the process of checking whether a system adheres to given properties/verification conditions. If it doesn't, the other steps follow: diagnosing faults that prevented verification, and then performing necessary corrections (building the product right)

Multiple condition

using condition coverage on some compound condition C implies that each simple condition within C has been evaluated to true and false. However, it doesn't imply that all combinations of the values of the individual simple conditions in C have been exercised

Use occurrence

value of a variable is read (referred to)

