ISTQB Foundation Level

Maturity

(1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (2) The degree to which a component or system meets needs for reliability under normal operation.

Test frameworks*

1) Reusable and extensible libraries that can be used to build tools (which are also called test harnesses); 2) A type of automation design (e.g. data-driven and keyword-driven); and 3) An overall process of execution of testing.

Boundary value analysis

A black-box test design technique in which test cases are designed based on values at the edge of an equivalence partition.
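
For illustration, a minimal sketch using Python's built-in unittest (the `accept_age` function and its 18-65 rule are hypothetical, invented for the example): for the valid partition 18-65, the boundary values to test are 17, 18, 65, and 66.

```python
import unittest

def accept_age(age: int) -> bool:
    """Hypothetical function under test: accepts ages 18..65 inclusive."""
    return 18 <= age <= 65

class BoundaryValueTests(unittest.TestCase):
    def test_boundary_values(self):
        # Values at and just outside the edges of the valid partition [18, 65].
        for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
            self.assertEqual(accept_age(age), expected)

if __name__ == "__main__":
    unittest.main()
```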

Decision table testing

A black-box test technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.

Use case testing (Scenario testing, user scenario testing)

A black-box test design technique in which test cases are designed to execute scenarios involving interaction with users.

Equivalence partitioning (Partition testing)

A black-box test technique in which test cases are designed to exercise sets or groups of data by using one representative member of each.
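
A minimal sketch, again with hypothetical example code: a `shipping_cost` function with one invalid partition (weight <= 0) and two valid partitions, tested with one representative value from each.

```python
import unittest

def shipping_cost(weight_kg: float) -> float:
    """Hypothetical function under test with three weight partitions."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 5:
        return 4.99
    return 9.99

class EquivalencePartitionTests(unittest.TestCase):
    def test_one_representative_per_partition(self):
        with self.assertRaises(ValueError):
            shipping_cost(-2.0)                      # invalid partition (<= 0)
        self.assertEqual(shipping_cost(3.0), 4.99)   # representative of (0, 5]
        self.assertEqual(shipping_cost(12.0), 9.99)  # representative of (5, inf)

if __name__ == "__main__":
    unittest.main()
```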

State transition testing (Finite state testing)

A black-box test technique using a diagram or table to derive test cases to evaluate whether the test item successfully executes valid situational changes and blocks invalid changes.
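
A minimal sketch under assumed requirements: a hypothetical document workflow whose valid transitions are held in a table, with one test for a valid transition sequence and one confirming that an invalid transition is blocked.

```python
import unittest

# Hypothetical state model of a simple document workflow.
VALID_TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

class Document:
    def __init__(self):
        self.state = "draft"

    def apply(self, event: str) -> None:
        try:
            self.state = VALID_TRANSITIONS[(self.state, event)]
        except KeyError:
            raise ValueError(f"invalid transition: {event} in state {self.state}")

class StateTransitionTests(unittest.TestCase):
    def test_valid_transition_sequence(self):
        doc = Document()
        doc.apply("submit")
        doc.apply("approve")
        self.assertEqual(doc.state, "published")

    def test_invalid_transition_is_blocked(self):
        doc = Document()
        with self.assertRaises(ValueError):
            doc.apply("approve")  # cannot approve a draft directly

if __name__ == "__main__":
    unittest.main()
```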

Test execution schedule

A schedule for the execution of test suites within a test cycle.

Quality characteristic (software product characteristic, software quality characteristic, quality attribute)

A category of product attributes that bears on how well a work product meets requirements.

Decision table (Cause-effect decision table)

A chart showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
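
A minimal sketch showing how each column (rule) of a decision table can become one test case; the `discount` function and its rules are hypothetical.

```python
import unittest

def discount(is_member: bool, order_over_100: bool) -> int:
    """Hypothetical function under test: percentage discount."""
    if is_member and order_over_100:
        return 20
    if is_member or order_over_100:
        return 10
    return 0

# Each rule of the decision table becomes one test case:
# causes (is_member, order_over_100) -> effect (discount %)
DECISION_TABLE = [
    (True,  True,  20),
    (True,  False, 10),
    (False, True,  10),
    (False, False, 0),
]

class DecisionTableTests(unittest.TestCase):
    def test_all_rules(self):
        for is_member, over_100, expected in DECISION_TABLE:
            self.assertEqual(discount(is_member, over_100), expected)

if __name__ == "__main__":
    unittest.main()
```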

Peer review

A check of work products performed by others qualified to do the same work.

Test log*

A chronological record of relevant details about the execution of tests.

Incremental development model

A development lifecycle model in which the component or system is developed and delivered in a series of increments.

System

A collection of interacting elements organized to accomplish a specific function or set of functions.

Planning poker

A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.

Product risk

A risk impacting the quality of a product.

Quality risk

A risk related to a quality characteristic.

Project risk

A risk related to the management and control of the (test) project that could impact its success, e.g. lack of staffing, strict deadlines, changing requirements, etc.

Regression

A degradation in the quality of a component or system due to a change.

Use Case

A depiction of the sequence of transactions in a dialogue between an actor and a component or system with a tangible result, where an actor can be a user or anything that can exchange information with the system.

Component specification*

A description of a unit's or module's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).

Coding standard

A standard that describes the characteristics of source code, e.g. naming conventions, layout, and permitted language constructs.

Lifecycle model

A description of the processes, workflows, and activities used in the development, delivery, maintenance, and retirement of a system.

Iterative development model

A development lifecycle model in which the component or system is developed through a series of repeated cycles.

Functional integration

A development approach that combines the components or systems for the purpose of getting basic functionality working early.

Static analysis tool* (static analyzer)

A tool that checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies, without executing any code.

Defect management tool

A tool that facilitates the recording and status tracking of defects and changes. Such tools often have workflow-oriented facilities to track and control the allocation, correction, and re-testing of defects, and provide reporting facilities.

Incident management tool*

A tool that facilitates the recording and status tracking of incidents.

Unit test framework

A tool that provides an environment for unit or component testing in which a component can be tested in isolation with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities.
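
A minimal sketch of the idea using Python's built-in unittest framework together with unittest.mock; the `greeting` function and its clock dependency are hypothetical.

```python
import unittest
from unittest import mock

# Hypothetical component under test that depends on an external clock.
def greeting(now_fn) -> str:
    return "good morning" if now_fn().hour < 12 else "good afternoon"

class GreetingTests(unittest.TestCase):
    def test_morning(self):
        # The framework lets us test the component in isolation by
        # stubbing its dependency with a controlled fake.
        fake_now = mock.Mock(return_value=mock.Mock(hour=9))
        self.assertEqual(greeting(fake_now), "good morning")

if __name__ == "__main__":
    unittest.main()
```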

Dynamic analysis tool*

A tool that provides run-time information on the state of the software code. It is most commonly used to identify unassigned pointers, check pointer arithmetic, and to monitor the allocation, use, or de-allocation of memory and to flag memory leaks.

Security testing tool*

A tool that provides support for evaluating how well a software product is able to prevent unauthorized access.

Test tool

A tool that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis.

Requirements management tool

A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some also provide facilities for static analysis, such as consistency checking and checking for violations of pre-defined rules.

Stress testing tool*

A tool that supports testing a system or component at or beyond the limits of its anticipated or specified workloads.

Load testing tool*

A tool that supports load testing, whereby it can simulate an increasing number of concurrent users and/or transactions within a specified time period.

Performance-testing tool (load-testing tool)

A tool that supports performance testing and that usually can simulate either multiple users or high volumes of input data during test execution.

Debugging tool* (debugger)

A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. It enables programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.

Simulator

A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.

Configuration management

A discipline applying technical and administrative directions and surveillance to identify and document the functional and physical characteristics of the parts of a component or system, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

Feature (Software feature)

A distinguishing characteristic of a component or system.

Test case specification

A document specifying a set of objectives, inputs, actions, expected results, and execution preconditions for a test item.

Risk

A factor that could result in future negative consequences; usually expressed as impact and likelihood.

Control flow analysis*

A form of static analysis based on a representation of unique paths (sequences of events) in the execution of a component or system. Control flow analysis evaluates the integrity of control flow structures, looking for possible anomalies such as closed loops or logically unreachable process steps.

Technical review

A formal check by a team of technically-qualified personnel that examines the suitability of a work product for its intended use and identifies discrepancies from specifications and standards.

Informal review (Ad hoc review)

A review not based on a formal (documented) procedure.

Test Type

A group of test activities based on specific test objectives aimed at specific characteristics of a component or system.

Test policy (Organizational test policy)

A high-level document describing an organization's principles, approach, and major objectives regarding testing.

User story

A high-level requirement commonly used in Agile software development, typically consisting of one sentence in the everyday or business language capturing what functionality a user needs, the reason behind this, any non-functional criteria, and acceptance criteria.

Error (Mistake)

A human action that produces an incorrect result.

Executable statement

A line or related lines of code that will be executed procedurally when the program is running and may perform an action on data.

Test schedule

A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.

Test monitoring

A management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status, and reporting status to stakeholders.

Metric

A measurement scale and the method used for measurement.

Test data preparation tool

A tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

Test execution tool

A tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.

Coverage tool (Coverage measurement tool)

A tool that provides objective measures of what structural elements, e.g. statements or branches, have been exercised by a test suite.

Configuration management tool

A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines.

Test management tool

A tool that provides support to the planning, estimating, monitoring, and control of test activities. It often has several capabilities, such as scheduling of tests, the logging of results, progress tracking, incident management, and reporting.

Review tool*

A tool that provides support to the review process, usually performed by a group of colleagues. Typical features include planning and tracking support, communication support, collaboration, and a repository for collection and reporting of metrics.

Test design tool

A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. requirements management tool, from specified test conditions held in the tool itself, or from code.

Retrospective meeting (Post-project meeting)

A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.

Memory leak

A memory access failure due to a defect in a program's dynamic store allocation logic that causes it to fail to release memory after it has finished using it, eventually causing the program and/or other concurrent processes to fail due to lack of memory.
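
A minimal Python sketch of one common leak pattern (the names are hypothetical): a module-level structure that is only ever appended to, so references to finished work are never released and memory grows without bound.

```python
# Hypothetical leaky request handler.
_request_log = []

def handle_request(payload: bytes) -> int:
    _request_log.append(payload)  # reference kept forever -> memory grows
    return len(payload)

# One possible fix sketch: bound the structure so old entries are released.
from collections import deque
_bounded_log = deque(maxlen=1000)  # oldest entries are dropped automatically
```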

Performance indicator (key performance indicator)

A metric that supports the judgment of process efficiency.

Component

A minimal software part that can be tested in isolation.

Boundary value

A minimum or maximum value of an ordered equivalence partition.

Reliability growth model

A model that shows the growth in the ability of a component or system to perform specified functions over time during continuous testing as a result of the removal of defects that result in failures.

Moderator (inspection leader)

A neutral person who conducts a review session.

Test item

A part of a test object used in the testing process.

Reviewer (Checker, inspector)

A participant in a review who identifies issues in the work product.

Scribe (Recorder)

A person who records information during the review meetings.

User experience

A person's perceptions and responses that result from the use or anticipated use of a product, system or service. These can change over time due to changing usage circumstances.

Milestone

A point in time in a project at which defined (intermediate) deliverables and results should be ready.

Equivalence partition (Equivalence class)

A portion of the value domain of a data element related to the test object for which all values are expected to be treated the same based on the specification.

Experience-based test technique

A procedure to derive and/or select test cases based on the tester's background, knowledge, and intuition.

Test technique (test case design technique, test specification technique, test design technique)

A procedure used to derive and/or select test cases.

Test process improvement

A program of activities designed to improve the performance and maturity of the organization's test processes, and the results of such a program.

Process improvement

A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program.

Project

A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.

Rational Unified Process (RUP)

A proprietary adaptable iterative software development process framework consisting of four project lifecycle phases: inception, elaboration, construction and transition.

Requirement

A provision that contains criteria to be fulfilled.

Burndown chart

A publicly displayed chart that depicts the outstanding effort versus time in an iteration. It shows the status and trend of completing the tasks of the iteration. The X-axis typically represents days in the sprint, while the Y-axis is the remaining effort (usually either in ideal engineering hours or story points).

Test objective

A reason or purpose for designing and executing a test.

Test progress report

A report produced at regular intervals about the status of test activities against a baseline, risks, and alternatives requiring a decision.

Test Summary Report

A report that contains an evaluation of the corresponding test items against exit criteria.

State diagram

A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another.

Non-functional requirement

A requirement that describes how [or how well] the component or system will do what it is intended to do.

Finding

A result of an evaluation that identifies some important issue, problem, or opportunity.

Ad hoc reviewing

A review technique carried out by independent reviewers informally, without a structured process.

Checklist-based reviewing

A review technique guided by a list of questions or required attributes.

Perspective-based reading (perspective-based reviewing)

A review technique whereby reviewers evaluate the work product from different viewpoints.

Data-driven testing

A scripting technique that stores test input and expected results in a table or spreadsheet so that a single control script can execute all of the tests in the table. It is often used to support the application of test execution tools such as capture/playback tools.
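
A minimal sketch of the idea (the `add` function and the inline CSV table are hypothetical; in practice the table is usually an external spreadsheet or file):

```python
import csv
import io
import unittest

def add(a: int, b: int) -> int:
    """Hypothetical function under test."""
    return a + b

# Test inputs and expected results held as a table.
TABLE = """a,b,expected
1,2,3
-1,1,0
10,5,15
"""

class DataDrivenTests(unittest.TestCase):
    def test_rows(self):
        # One control script executes every row of the table.
        for row in csv.DictReader(io.StringIO(TABLE)):
            self.assertEqual(add(int(row["a"]), int(row["b"])),
                             int(row["expected"]))

if __name__ == "__main__":
    unittest.main()
```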

Keyword-driven testing (action word-driven testing)

A scripting technique that uses data files to contain not only test data and expected results but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
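
A minimal sketch of a keyword interpreter (all names are hypothetical): the "data file" lists keywords with their arguments, and supporting functions implement each keyword.

```python
# Supporting scripts: one function per keyword.
def open_account(state, name):
    state[name] = 0

def deposit(state, name, amount):
    state[name] += int(amount)

def check_balance(state, name, expected):
    assert state[name] == int(expected), (name, state[name], expected)

KEYWORDS = {"open_account": open_account, "deposit": deposit,
            "check_balance": check_balance}

# The "data file": each line is a keyword followed by its arguments.
SCRIPT = [
    ("open_account", "alice"),
    ("deposit", "alice", "100"),
    ("check_balance", "alice", "100"),
]

def run(script):
    state = {}
    for keyword, *args in script:
        KEYWORDS[keyword](state, *args)  # control script dispatches keywords

run(SCRIPT)
```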

Path (Control flow path)

A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.

Test script

A sequence of instructions for the execution of a test, especially an automated one.

Test procedure

A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution.

V-model

A sequential development lifecycle model describing a one-for-one relationship between major phases of software development from business requirements specification to delivery, and corresponding test levels from acceptance testing to component testing.

Quality control (QC)

A set of activities designed to evaluate the degree to which a component or system meets requirements.

Process

A set of interrelated activities, which transform inputs into outputs.

Test

A set of one or more test cases.

Risk type

A set of risks grouped by one or more common factors.

Test case

A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.

Unreachable code (dead code)

A set of statements that cannot be reached and therefore are impossible to execute.
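
A minimal hypothetical example; the final line can never execute because both branches return first, which a static analysis tool would typically flag:

```python
def absolute(x: int) -> int:
    if x >= 0:
        return x
    else:
        return -x
    print("done")  # unreachable: both branches return before this line
```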

Test Suite (Test case suite, test set)

A set of test cases or test procedures to be executed in a specific test cycle.

Test environment (test bed, test rig)

An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Stub

A skeletal or special-purpose implementation of a software component used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.

Tester

A skilled professional who is involved in the testing of a component or system.

Driver (test driver)

A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
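
A minimal sketch combining both terms (names hypothetical): the stub replaces the called tariff service, while the test method acts as the driver that calls the component under test and controls its inputs.

```python
import unittest
from unittest import mock

# Component under test: depends on a tariff service that may not exist yet.
def invoice_total(net: float, tariff_service) -> float:
    return net * (1 + tariff_service.vat_rate("DE"))

class InvoiceTests(unittest.TestCase):
    def test_total_with_stubbed_tariff_service(self):
        # Stub: replaces the called component with a canned answer.
        stub = mock.Mock()
        stub.vat_rate.return_value = 0.19
        # The test method itself plays the driver role: it calls the
        # component under test and checks the result.
        self.assertAlmostEqual(invoice_total(100.0, stub), 119.0)

if __name__ == "__main__":
    unittest.main()
```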

Extreme Programming (XP)

A software engineering methodology used within Agile software development whereby core practices are programming in pairs, doing extensive code review, unit testing of all code, and simplicity and clarity in code.

Monitoring tool

A software tool or hardware device that runs concurrently with the component or system under test and supervises, records, and/or analyzes the behavior of the component or system.

Commercial off-the-shelf software/COTS (off-the-shelf software)

A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

Compiler*

A software tool that translates programs expressed in a high order language into their machine language equivalents.

Test oracle

A source to determine expected results to compare with the actual result of the system under test. It may be the existing system (for a benchmark), other software, a user manual, or an individual's specialized knowledge, but should not be the code.

Functional requirement

A requirement that specifies a function that a component or system must perform.

Smoke test* (confidence test, intake test, verification test)

A subset of all defined/planned test cases that cover the main functionality of a component or system to ascertain that the most crucial functions of a program work, but not bothering with finer details.

Root cause analysis (Causal analysis)

A technique aimed at identifying the sources of defects. By directing corrective measures at these, it is hoped that the likelihood of defect recurrence will be minimized.

Service virtualization

A technique to enable simulated delivery of services which are deployed, accessed and managed remotely.

Role-based reviewing

A technique where reviewers evaluate a work product from the perspective of different stakeholders.

Exhaustive testing (complete testing)

A test approach in which the test suite comprises all combinations of input values and preconditions.

Low-level test case (concrete test case)

A test case with definite values for input data and expected results.

High-level test case (abstract test case, logical test case)

A test case without concrete values for input data and expected results.

Fail

A test is deemed to do this if its actual result does not match its expected result.

Pass

A test is deemed to do this if its actual result matches its expected result.

Test control

A test management task that deals with developing and applying a set of corrective actions to get a project on track when monitoring shows a deviation from what was planned.

Master test plan

A test plan that is used to coordinate multiple test levels or test types.

Error guessing

A test technique in which tests are derived on the basis of the tester's knowledge of past failures, or general knowledge of failure modes.

Test comparator* (comparator)

A test tool to perform automated comparison of actual results with expected results.

Test Harness

A testing environment comprised of stubs and drivers needed to execute a test.

State transition

A transition between two states of a component or system.

Formal review

A type of review that follows a defined process with a formally documented output.

Scripting language*

A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).

Inspection

A type of formal review to identify issues in a work product, which provides measurement to improve the review process and the software development process.

Big-bang testing*

A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.

Sequential development model

A type of lifecycle model in which a complete system is built in a linear way of several discrete and successive phases with no overlap between them.

Stress testing*

A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources.

Load testing

A type of performance testing conducted to evaluate the behavior of a component or system under increasing load, e.g. increasing numbers of parallel users and/or transactions, to determine what load can be handled by the component or system.

Walkthrough

A type of review in which an author leads members through a work product and the members ask questions and make comments about possible issues.

Decision

A type of statement in which a choice between two or more possible outcomes controls which set of actions will result.

Review

A type of static testing during which a work product or process is evaluated by one or more individuals to detect issues and to provide improvements.

System under test (SUT)

A type of test object that is a system.

Test-driven development (TDD)

A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
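
A minimal sketch of the cycle (the fizzbuzz example is hypothetical): in TDD the test class below would be written first and run red, then the function is implemented just far enough to make it pass, then the code is refactored while the test stays green.

```python
import unittest

# Step 1 (red): write the failing test first.
# Step 2 (green): implement just enough code to make it pass.
# Step 3 (refactor): clean up while keeping the test green.

def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_multiples(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()
```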

Statement Testing

A white-box test technique in which test cases are designed to execute statements.

Decision testing

A white-box test technique in which test cases are designed to execute decision outcomes.

Call graph*

An abstract representation of calling relationships between subroutines in a program.

Data flow

An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction.

Configuration item

An aggregation of hardware, software, or both, that is designated for management and treated as a single entity in the management process.

Pair Programming*

An agile software development practice in which two programmers work together on one workstation.

Code coverage

An analysis method that determines which parts of the software have been executed by the test suite and which parts have not been executed.

Session-based testing

An approach to testing in which exploratory testing activities are conducted within a defined time-box with a specified test charter.

Exploratory testing

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item, and the results of previous tests.

Test condition (test requirement, test situation)

An aspect of the test basis that is relevant in order to achieve specific test objectives.

Functional testing

Testing conducted to evaluate the compliance of a component or system with functional requirements.

Usability testing

Testing to determine the extent to which a software product can be used by specified users to achieve specified goals in a specified context of use.

Feature

An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints).

Data quality

An attribute of inputs or outputs that indicates correctness with respect to some pre-defined criteria, e.g., business expectations, requirements on data integrity, data consistency.

Coverage item

An attribute or combination of attributes that is derived from one or more test conditions by using a test technique that enables the measurement of the thoroughness of the test execution.

Defect type

An element in a taxonomy of defects. Defect taxonomies can be identified with respect to a variety of considerations, including, but not limited to:
- Phase or development activity in which the defect is created, e.g. a specification error or a coding error
- Characterization of defects, e.g. an "off-by-one" defect
- Incorrectness, e.g. an incorrect relational operator, a programming language syntax error, or an invalid assumption
- Performance issues, e.g. excessive execution time, insufficient availability

Configuration control* (version control)

An element of management consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to the constituent parts of a component or system.

Variable

An element of storage in a computer that is accessible by a software program by referring to it by a name.

Statement

An entity in a programming language, which is typically the smallest indivisible unit of execution.

Scenario-based reviewing

An evaluation technique that is guided by determining the ability of the work product to address a specific sequence of events.

Failure

An event in which a component or system does not perform a required function within specified limits.

Checklist-based testing

An experience-based test design technique in which the experienced tester uses a high-level list of items or a set of rules or criteria against which a product has to be verified.

Wideband Delphi

An expert-based test estimation technique that uses the collective wisdom of the team members.

Defect (Bug, fault)

An imperfection or deficiency in a work product where it does not meet its requirements or specifications.

Audit

An independent examination of a work product, process, or set of processes that is performed by a third party to assess compliance with specifications, standards, contractual agreements, or other criteria.

Scrum

An iterative incremental framework for managing projects commonly used with Agile software development.

IDEAL

An organizational improvement model that serves as a roadmap for initiating, planning, and implementing improvement actions. The model is named for the five phases it describes: initiating, diagnosing, establishing, acting, and learning.

Test session

An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress.

Problem

An unknown underlying cause of one or more incidents.

Deliverable*

Any (work) product that must be delivered to someone other than the (work) product's author.

Anomaly

Any condition that deviates from expectation based on requirement specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but are not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.

Incident* (deviation)

Any event occurring that requires investigation.

Experience-based testing

Testing based on the tester's background, knowledge and intuition.

Site acceptance testing

Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

Regression testing

Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made.

System Integration Testing

Testing performed to expose defects in the interfaces and interactions between integrated systems.

Test reporting

Collecting and analyzing data from testing activities and subsequently summarizing the data to inform stakeholders.

Dynamic comparison*

Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

Code*

Computer instruction and data definitions expressed in a programming language or in a form of output by an assembler, compiler, or other translator.

Software

Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.

Validation

Confirmation by examination and through the provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Verification

Confirmation by examination of software specifications and through the provision of objective evidence that specified requirements have been fulfilled.

Quality management

Coordinated activities to direct and control an organization with regard to the degree to which work products meet requirements that include establishing a policy and objectives, planning, control, assurance, and improvement.

Input

Data received by a component or system from an external source.

Output

Data transmitted by a component or system to an external destination.

Attack (fault attack)*

Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.

Fault attack* (attack)

Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.

Test charter

Documentation of test activities in session-based exploratory testing.

Defect report (bug report)

Documentation of the occurrence, nature, and status of a defect.

Incident report (deviation report, software test incident report, test incident report)

Documentation of the occurrence, nature, and status of an event that occurred which requires investigation.

Test Plan

Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.

Test report

Documentation summarizing test activities and results.

Specification

Documentation that provides a detailed description of a component or system for the purpose of developing and testing it.

Confirmation testing (re-testing)

Dynamic testing conducted after fixing defects with the objective to confirm that failures caused by those defects do not occur anymore.

Non-functional testing

Evaluating how [or how well] the component or system does what it is intended to do.

White-box testing (Clear-box testing, code-based testing, structural testing, glass-box testing, logic-coverage testing, logic-driven testing, structure-based testing)

Evaluation based on the internal structure of the component or system.

Test cycle

Execution of the test process against a single identifiable release of the test object.

Effectiveness

Extent to which correct and complete goals are achieved.

Development testing*

Formal or informal testing conducted during the design or implementation of a component or system.

Acceptance testing (acceptance)

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the user's criteria.

Standard

Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).

Test level

Groups of test activities that are organized and managed together, performed in relation to software at a given stage of development. Each one is an instance of the test process.

Operational environment

Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.

Test Strategy (Organizational test strategy)

High-level documentation that expresses the generic requirements for testing one or more projects run within an organization, providing detail on how testing is to be performed, and that is aligned with the test policy.

Test Data

Inputs created or selected to satisfy the execution preconditions and to execute one or more test cases.

Coverage analysis*

Measurement of the amount a specified item has been exercised during test execution using prearranged criteria to determine whether additional testing is required and if so, which test cases are needed.

Structural coverage

Coverage measures based on the internal structure of a component or system.

Maintenance

Modification of a software product after delivery to correct defects, to improve performance or other attributes or to adapt the product to a changed environment.

Test leader (Lead tester)

On large projects, the person who reports to the test manager and is responsible for project management of a particular test level or a particular set of testing activities.

Quality assurance

Part of management focused on providing confidence that requirements will be fulfilled.

White-box test technique (Structural test technique , structure-based test technique, structure-based technique, white-box technique)

Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

Black box test technique (non-functional test design technique, specification-based test design technique)

Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

Incident logging*

Recording the details of any event occurring that requires investigation, e.g. during testing.

Efficiency

Resources expended in relation to the extent with which users achieve specified goals.

Independence of testing

Separation of responsibilities, which encourages the accomplishment of objective testing.

Alpha testing

Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization.

Beta testing (field testing)

Simulated or actual operational testing conducted at an external site, by roles outside the development organization.

Installation guide

Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

Basis test set

Test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

Static testing (desk checking)

Testing a work product without code being executed.

Model-based testing (MBT)

Testing based on or involving a description or representation of a system's behavior.

User acceptance testing (UAT)

Testing conducted in a real or simulated operational environment by intended users focusing on their needs, requirements and business processes.

Contractual acceptance testing

Testing conducted to verify whether a system satisfies its contractual requirements.

Operational acceptance testing (production acceptance testing)

Testing in the acceptance phase, typically performed in a (simulated) environment by operations and/or systems administration staff focusing on aspects such as recoverability, resource-behavior, installability, and technical compliance.

Risk-based testing

Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.

Integration testing

Testing performed to expose defects in the interfaces and in the interactions between combined components or systems.

Component integration testing

Testing performed to expose defects in the interfaces and interaction between combined components.

Dynamic testing

Testing that involves the execution of the software of a component or system.

Maintenance testing

Testing the changes to an operational system or the impact of a changed environment to an operational system.

Documentation testing*

Testing the quality of publications, e.g. user guides or installation guides.

Portability testing (Configuration testing)

Testing to determine how easily a software product can be transferred to different environments.

Robustness testing*

Testing to determine how well a product can function under stressful conditions.

Security testing

Testing to determine how well a software product is able to prevent unauthorized access.

Accessibility testing

Testing to determine the ease by which users with disabilities can use a component or system.

Volume testing

Testing where the system is subjected to a large amount of data.

Software development lifecycle (SDLC)

The activities performed at each stage in software development, and how they relate to one another logically and chronologically.

Test design

The activity of creating and specifying test cases from test conditions.

Test Planning

The activity of establishing or updating the documentation containing the objectives, means, and schedule for testing.

Test analysis

The activity that identifies test conditions by evaluating the test basis.

Test completion

The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.

Test implementation

The activity that prepares the testware needed for test execution based on test analysis and design.

Regulatory acceptance testing

Testing conducted to verify whether a system conforms to relevant laws, policies, and regulations.

Expected result

The behavior predicted by the specification, or another source, of the component or system under specified conditions.

Actual result (Actual outcome)

The behavior produced/observed when a component or system is tested.

Test basis

The body of knowledge on which test analysis and design are based.

Test estimation

The calculated approximation of a result related to various aspects of testing (e.g. effort spent, completion date, costs involved, number of test cases, etc.)

Compliance

The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.

Understandability

The capability of the software product to enable the user to determine whether the software is suitable, and how it can be used for particular tasks and conditions of use.

Safety

The capability that a system will not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.

Test object

The component or system to be tested.

Configuration

The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.

Result (outcome, test outcome, test result)

The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.

Risk management

The coordinated activities to direct and control an organization with regard to possible future failure.

Test input

The data received from an external source by the test object during execution. The external source can be hardware, software or human.

Testability

The degree of effectiveness and efficiency with which tests can be designed and executed for a component or system.

Severity

The degree of impact that a defect has on the development or operation of a component or system.

Usability

The degree to which a component or system can be employed by specified users to achieve specified goals in a specified context of use.

Accessibility

The degree to which a component or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.

Compatibility

The degree to which a component or system can exchange information with other components or systems, and/or perform its required functions while sharing the same hardware or software environment.

Robustness

The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.

Complexity (cyclomatic complexity)

The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain, and verify.

Availability

The degree to which a component or system is operational and accessible when required for use.

Reliability

The degree to which a component or system performs specified functions under stated conditions for a specified period of time.

Security

The degree to which a component or system protects information and data so that persons or other components or systems have the degree of access appropriate to their types and levels of authorization.

Functional suitability (Functionality)

The degree to which a component or system provides behavior that meets stated and implied needs when used under specified conditions.

Performance efficiency (time behavior, performance)

The degree to which a component or system uses time, resources and capacity when accomplishing its designated functions.

Quality

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Traceability (horizontal and vertical)

The degree to which a relationship can be established between two or more work products.

Maintainability

The degree to which a software product can be modified to correct defects, to meet new requirements, to make future maintenance easier, or to adapt to a changed environment.

Performance*

The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Coverage (test coverage)

The degree, expressed as a percentage, to which specified items have been exercised by a test suite.

Portability

The ease with which the software product can be transferred from one hardware or software environment to another.

Probe effect

The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.

Postcondition

The expected state of a test item and its environment at the end of test case execution.

Impact Analysis

The identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change.

Test approach

The implementation of the test strategy for a specific project.

Test manager

The individual who directs, controls, administers, plans, and regulates the evaluation of a test object.

Facilitator

The leader and main person responsible for an inspection or review process.

Priority

The level of (business) importance assigned to an item, e.g., defect.

Risk level (Risk exposure)

The likelihood and possible impact that an event with negative consequences could occur.

Modeling tool*

A tool that supports the creation, amendment, and verification of models of the software or system.

Defect density

The number of defects per unit size of a work product.

Measure

The number or category assigned to an attribute of an entity by making a measurement.

Test infrastructure

The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment, and procedures.

Risk analysis

The overall process of risk identification and risk assessment.

Boundary value coverage*

The percentage of boundary values that have been exercised by a test suite.

Statement coverage

The percentage of executable statements that have been exercised by a test suite.

Decision coverage

The percentage of decision outcomes that have been exercised by a test suite.
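
A minimal worked example (hypothetical function) showing why the two measures differ: one test can reach 100% statement coverage while leaving a decision outcome unexercised.

```python
def apply_bonus(score: int) -> int:
    bonus = 0
    if score > 90:  # decision with two outcomes
        bonus = 10
    return score + bonus

# A single test with score=95 executes every statement (100% statement
# coverage) but exercises only the True outcome of the decision, so
# decision coverage is 50%. A second test with score=50 covers the
# False outcome as well.
assert apply_bonus(95) == 105
assert apply_bonus(50) == 50   # needed for 100% decision coverage
```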

Software lifecycle

The period of time that begins when a software product is conceived and ends when the software is no longer available for use. It typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.

Test management

The planning, estimating, monitoring, and control of test activities, typically carried out by a test leader or coordinator.

Expected result (expected outcome, predicted outcome)

The predicted observable behavior of a component or system executing under specified conditions, based on its specification or another source.

Testing (evaluation)

The process consisting of all life-cycle activities both static and dynamic, concerned with planning, preparation, and evaluation of software products and related work products to determine that they are fit for purpose, and to detect defects.

System testing

The process of testing an integrated system to verify that it meets specified requirements.

Reliability testing*

The process of testing to determine the ability of a software product to perform its required functions under stated conditions.

Measurement

The process of assigning a number or category to an entity to describe an attribute of that entity.

Integration

The process of combining components or systems into larger assemblies.

Certification*

The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

Static analysis

The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation.

Dynamic analysis

The process of evaluating behavior, e.g., memory performance or CPU usage, of a system or component during execution.

Test Execution

The process of executing code in the component or system under test, producing actual results.

Debugging

The process of finding, analyzing, and removing the causes of failures in software.

Test comparison*

The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. It can be performed during test execution or after.

Defect management

The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact.

Incident management*

The process of recognizing, recording, classifying, resolving and disposing of events needing investigation.

Interoperability testing (compatibility testing)

The process of testing to determine the ability of a software product to exchange information with other components or systems.

Performance testing

The process of testing to determine the degree to which a system or component accomplishes its designated functions regarding processing time and throughput rate.

Compliance testing* (regulation acceptance testing)

The process of testing to determine the compliance of the component or system with standards, conventions or regulations in laws and similar prescriptions.

Maintainability testing

The process of testing to determine the ease of a software product to be modified or supported.

Risk mitigation

The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

Failure rate

The ratio of the number of failures of a given category to a given unit of measure, e.g. per unit of time, per number of transactions, per number of computer runs, etc.

Simulation

The representation of selected behavioral characteristics of one physical or abstract system by another system.

Precondition

The required state of a test item and its environment prior to test case execution.

Behavior

The response of a component or system to a set of input values and preconditions.

Decision outcome

The result of a decision that determines the next statement to be executed.

Control flow

The sequence in which operations are performed during the execution of a test item.

Domain*

The set from which valid input and/or output values can be selected.

Exit criteria (completion criteria, test completion criteria, definition of done)

The set of conditions for officially completing a defined task.

Entry criteria

The set of conditions for officially starting a defined task.

Test process

The set of interrelated activities comprised of planning, monitoring and control, analysis, design, implementation, execution, and completion.

Root cause

The source of a defect such that, if it is removed, the occurrence of the defect type is decreased or removed.

Test design technique*

The specific procedure used to create and/or select test cases.

Component testing (unit testing, module testing)

The testing of individual hardware or software units or modules.

Cost of quality

The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs, and external failure costs.

Software quality

The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

Test Automation

The use of software to perform or support test activities, e.g. management, design, execution and results checking.

User interface

The visual elements of a program through which a user controls or communicates with an application. Often abbreviated UI.

Actor

User or any other person or system that interacts with the test object in a specific way.

Testware

Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.

