Basics of Software Testing
Testing is a set of techniques
- Always apply general principles of testing
- We should use the best selection from different well-proven test design techniques
Why Is Testing Hard? (6)
- Budget, time, and personnel constraints
- Great diversity of SW environments
- Applications are becoming increasingly large and complex
- SW developers are neither trained nor motivated to test
- Testers are willing but incapable
- Lack of a testing culture
Testing is much more than just debugging
- Debugging is the process of identifying the causes of failures in code and making corrections (removing the faults)
- Testing is a systematic exploration of a component or system to find and report defects
- Both are needed to achieve good quality
Requirements coverage
- Has the software been tested against all requirements for the normal range of use?
- Has the software been tested against all requirements for abnormal or unexpected usage?
Where do quality problems originate?
- Incomplete requirements
- Lack of user involvement
- Lack of resources
- Unrealistic expectations
- Lack of executive support
- Changing requirements & specs
- Lack of planning
- No longer needed
- ...
Why Do Errors / Defects Occur?
- No one is perfect! We all make mistakes or omissions
- The more pressure we are under, the more likely we are to make mistakes
- Inexperienced, insufficiently skilled project stakeholders
- Complexity, misunderstandings
- In IT development we have time and budgetary deadlines to meet
  - FAST & CHEAP, then GOOD enough?
  - GOOD & CHEAP can be made FAST?
  - FAST & GOOD can be CHEAP?
- Poor communication
- Requirements not clearly defined
- Requirements change & are not properly documented
- Data specifications not complete
- ASSUMPTIONS!
XP Principles
- Planning game: requirements and their priorities defined by the client
- Small releases: a very simple, working system that is developed continually
- Simple design: the simplest solution to a requirement
- Metaphor: common system naming conventions in the project
- Test before coding (TDD)
- Refactoring
- Pair programming
- Common ownership of code
- Continuous integration, build
- 40-hour work week
- Internal client
- Coding regulations
Is Testing Hard?
- Software systems are not linear or continuous, unlike building a bridge
- Exhaustive testing is a test approach in which all possible data combinations are used
- Exhaustively testing all possible input/output combinations is too expensive: the number of test cases increases exponentially with the number of input/output variables (see the sketch below)
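An illustrative back-of-the-envelope sketch of that explosion; the function and numbers are invented for illustration only:

```python
# Illustrative only: test-case count under exhaustive testing grows
# exponentially with the number of inputs.
def exhaustive_test_count(num_inputs: int, values_per_input: int) -> int:
    """Test cases needed to cover every input combination."""
    return values_per_input ** num_inputs

for n in (2, 8, 16, 32):
    print(f"{n} boolean inputs -> {exhaustive_test_count(n, 2):,} combinations")
# 32 boolean inputs already require 4,294,967,296 test cases.
```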
SCRUM Principles and Processing (5)
- Split your organization into small, cross-functional, self-organizing teams
- Split your work into a list of small, concrete deliverables. Sort the list by priority and estimate the relative effort of each item
- Split time into short fixed-length iterations (usually 1-4 weeks), with potentially shippable code demonstrated after each iteration
- Optimize the release plan and update priorities in collaboration with the customer, based on insights gained by inspecting the release after each iteration
- Optimize the process by having a retrospective after each iteration
Testing has types: static and dynamic
- Static testing: the code is not executed (e.g. reviews)
- During dynamic testing, the program under test is executed with some test data
Software Quality Factors
- Testability
- Maintainability
- Modularity
- Reliability
- Efficiency
- Usability
- Reusability
- Legal requirements/standards
- etc.
Does testing improve Quality?
- Testing does not build quality into the software
- Testing is a means of determining the quality of the software under test
Testing Dimensions
- Testing is much more than just debugging
- Testing has types: static and dynamic
- Testing is a process
- Testing is a set of techniques
Testing is a process
- Testing means not just test execution but also test design, recording, and checking for completion
- We design the test process to ensure we do not miss critical steps and do things in the right order
Software Testing - definitions
- The process of executing a program with the intent to certify its quality (Mills)
- The process of executing a program with the intent of finding faults / failures (Myers)
- The process of exercising software to verify that it satisfies specified functional & non-functional requirements
- Examination of the behavior of a program by executing it on sample data sets
Root Causes
- Many factors can provoke defects
- Root cause analysis (RCA) is a mechanism for analyzing defects to identify their causes
- By focusing on the most significant root causes, RCA can lead to process improvements that prevent a significant number of future defects from being introduced
Typical Objectives of Testing (9)
- To evaluate work products such as requirements, user stories, design, and code
- To verify whether all specified requirements have been fulfilled
- To validate whether the test object works as the users and other stakeholders expect
- To build confidence in the level of quality of the test object
- To prevent further defects
- To find defects and situations in which the system fails
- To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
- To reduce the level of risk of inadequate software quality
- To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object's compliance with such requirements or standards
Why Do We Test? (Right answers)
- To provide confidence in the system
- To provide an understanding of the overall system
- To provide sufficient information to allow an objective decision on applicability to deploy
- Establish the extent to which the requirements have been met
- Establish the degree of quality
Why Do We Test? (Wrong answers)
- To use up spare budget
- To provide jobs for people who can't code
- To provide a good excuse why the project is late
- To prove that the software works as intended
Traceability
- Traceability refers to the ability to link requirements back to stakeholders' rationales and forward to corresponding design artifacts, code, and test cases
- Traceability supports numerous software engineering activities such as change impact analysis, verification of code, regression test selection, and requirements validation
- In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many
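A minimal sketch of how such links might be recorded; all IDs are hypothetical, and real projects keep this matrix in a tool or spreadsheet:

```python
# Minimal sketch of a requirements traceability matrix (RTM).
rtm = {
    "REQ-1": {"design": ["DES-3"], "code": ["login.py"], "tests": ["TC-10", "TC-11"]},
    "REQ-2": {"design": ["DES-4"], "code": ["report.py"], "tests": ["TC-20"]},
}

def tests_for_requirement(req_id: str) -> list[str]:
    """Forward trace: which test cases verify a given requirement?"""
    return rtm[req_id]["tests"]

def untested_requirements() -> list[str]:
    """Requirements with no linked test case (a coverage gap)."""
    return [r for r, links in rtm.items() if not links["tests"]]

print(tests_for_requirement("REQ-1"))   # ['TC-10', 'TC-11']
print(untested_requirements())          # [] -- every requirement has a test
```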
Why Is Testing Necessary? (correct answers)
- Because software is likely to have faults
- To learn about the reliability of the software
- Because failures can be very expensive
- To avoid being sued by customers
- To stay in business
- To provide a measure of quality
Why Is Testing Necessary? (incorrect answers)
- To fill the time between delivery of the software and the release date
- To prove that the software has no faults
- Only because testing is included in the project plan
Testing Principles (10)
1. Exhaustive testing is impossible
2. Early testing: tests should be planned long before testing begins; testing should start as early as possible; it saves time and money
3. Testing is context dependent
4. Beware of the pesticide paradox: running the same set of tests continually will not find new defects
5. Pareto principle / defect clustering: approx. 80% of faults occur in 20% of the code
6. Absence of errors is a fallacy: just because testing didn't find any defects, it doesn't mean the software is perfect or ready to be shipped
7. Presence of defects: "Testing can only show the presence of bugs, not their absence" (Dijkstra)
8. Testing is not just a process for measuring the quality of the product; it needs to add to the value of the product
9. All tests should be traceable (at least) to customer requirements
10. Tests must be prioritized so that, whenever you stop testing, you have done the best testing in the time available
Five Faulty Assumptions
1. Quality requirements dictate the schedule
2. Quality = reliability
3. Users know what they want
4. The requirements will be correct
5. Product maturity is required
Test Scenario
A test scenario is a special test object describing an end-to-end business requirement to be tested; it is a high-level classification of test requirements, grouped by functionality. A good test scenario reduces the complexity of test design.
The Cost of Errors / Defects
A single failure may incur little cost, or millions
- Report layouts can be wrong: little cost
- A significant error may cost millions:
  - Ariane V, Venus Probe, Mars Explorer and Polar Lander
  - UK government online filing of tax refunds (security)
  - Denver Airport 1995, Pentium chip 1994
- In extreme cases a software or systems error may cost LIVES
  - Therac-25 radiation machine, 1985-1987
- Safety-critical systems are therefore usually tested rigorously
  - Aeroplane, railway, nuclear power systems, etc.
Test Selection
A test selection criterion is a means of selecting test cases or determining that a set of test cases is sufficient for a specified purpose
- How is the test set composed?
- Which test cases should be selected?
- A function-based criterion may suggest test cases covering representative combinations of values
- A structure-based criterion may require each statement to be executed at least once
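A small illustrative sketch of both kinds of criterion applied to one invented toy function:

```python
# Invented toy function to contrast the two kinds of criterion.
def classify(age: int) -> str:
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

# Function-based criterion: one representative value per specified
# input class (negative, under-age, adult).
selected_tests = [(-5, "invalid"), (10, "minor"), (40, "adult")]

# Structure-based criterion: every statement executed at least once.
# Here the same three inputs happen to satisfy it as well.
for value, expected in selected_tests:
    assert classify(value) == expected
print("selected test cases pass and cover every statement")
```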
Basic Questions about Testing
All IT project managers know that they must do some testing. The basic questions are:
- Is testing easy?
- How much do we need?
- What sort?
- By whom?
- When?
- How?
These questions are difficult to answer.
Sequential Models - Waterfall
Also called: plan-driven model
- Shows the steps in sequence
- Each work product or activity is completed before moving on to the next
- Testing serves as a quality check
- Testing is carried out once the code has been fully produced
- The product can be released into the live environment after test completion
AUT
Application Under Test
Testware
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing
Incident
The behavior of a fault. An incident is the symptom(s) associated with a failure that alerts the user to the occurrence of that failure
BVA
Boundary Value Analysis
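A minimal sketch of the technique, assuming a hypothetical input field whose valid range is 1..100 inclusive:

```python
# Minimal BVA sketch for a hypothetical field with valid range 1..100.
def accepts_quantity(qty: int) -> bool:
    return 1 <= qty <= 100

# BVA selects values on and immediately around each boundary,
# where off-by-one defects tend to cluster.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert accepts_quantity(value) == expected, value
print("all boundary cases behave as specified")
```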
BS
British Standard
CR
Change Request
COTS
Commercial Off-The-Shelf
CAST
Computer Aided Software Testing
CM
Configuration Management
Verification
Demonstration of the consistency, completeness, and correctness of the software artifacts at, and between, each stage of the software life-cycle
- Different types of verification: manual inspection, testing, formal methods
- Verification answers the question: Am I building the product right?
Fault of Omission
A designer can make an error of omission; the resulting fault is that something that should have been present in the representation is missing
Sequential Models - V-model
An extension of the waterfall model
- Defects can be identified as early as possible in the life-cycle
- In practice, a V-model may have fewer, more, or different levels of development and testing, depending on the project and the software product
Fallacy 4: The requirements will be correct
Fact: Engineers are people; they evolve good requirements through trial-and-error experimentation
Fallacy 5: Product maturity is required
Fact: Price and availability are far more important considerations in most business applications
Fallacy 2: Quality = reliability
Fact: Reliability is only one component of quality
Fallacy 3: Users know what they want
Fact: User expectations are vague and general, not detailed and feature-specific. This is especially true for business software products. This phenomenon has led to feature bloat
Fallacy 1: Quality requirements dictate the schedule
Fact: For most software systems, market forces and competition dictate the schedule
FMEA
Failure Mode and Effect Analysis
GUI
Graphical User Interface
Structural coverage
Has each element of the software been exercised during testing?
- Code statements
- Decisions
- Conditions, etc.
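A toy sketch (with an invented function) of why covering every statement is weaker than covering every decision outcome:

```python
# One test can reach 100% statement coverage while decision
# coverage still demands the untaken branch.
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price * 0.9    # 10% member discount
    return price

# This single test executes every statement:
assert apply_discount(100.0, True) == 90.0
# Decision coverage additionally requires the False outcome of the 'if':
assert apply_discount(100.0, False) == 100.0
```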
Architectural coverage
Have the actual control and data links been utilised during testing?
- Data paths
- Control links
Testability
How easy it is to test the application; requires clear, unambiguous requirements
Maintainability
How easy it is for developers to maintain the application and how quickly maintenance changes can be made
Reusability
How easy it is to re-use elements of the solution in other identical situations
Modularity
The degree to which a system or computer program is composed of discrete components, such that a change to one component has minimal impact on the others
Efficiency
How well a component performs its designated functions using minimal resources
Change Impact Analysis
Impact analysis tries to assess the impact of changes on the rest of the system: when a certain component changes, which system parts will be affected, and how will they be affected? Change impact analysis is defined as "identifying the potential consequences of a change, or estimating what needs to be modified to accomplish a change".
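A minimal sketch of one way impact analysis can be approximated, assuming a hypothetical component dependency graph:

```python
# Given "who depends on whom", collect every component potentially
# affected by a change. Component names are hypothetical.
from collections import deque

dependents = {                   # edges: component -> components that use it
    "db_schema": ["orders_service"],
    "orders_service": ["billing", "web_ui"],
    "billing": [],
    "web_ui": [],
}

def impacted_by(changed: str) -> set[str]:
    """All transitive dependents of the changed component."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        for d in dependents.get(queue.popleft(), []):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

print(impacted_by("db_schema"))  # all three dependents are impacted
```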
Expectations on Software Testing
In general, it is not possible to prove by testing that there are no faults in the software
- Testing should help locate faults, not just detect their presence: a "yes/no" answer to the question "is the program correct?" is not very helpful
- Testing should be repeatable, which can be difficult for distributed or concurrent software
Incremental development
Incremental development involves breaking a large chunk of work into smaller portions. This is typically preferable to a monolithic approach where all development work happens in one huge chunk.
IEEE
Institute of Electrical and Electronics Engineers
Is random testing enough?
It is not a good idea to pick the test data randomly (with uniform distribution): uniform samples rarely hit the boundary values and narrow input regions where faults tend to cluster
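An illustrative sketch of why: a fault confined to a single input value out of ~4.3 billion is almost never hit by uniform sampling (the buggy function below is contrived):

```python
import random

def buggy_abs(x: int) -> int:
    if x == -2**31:        # the single failing input
        return x           # wrong: result stays negative
    return abs(x)

random.seed(0)
trials = 100_000
hits = sum(1 for _ in range(trials)
           if buggy_abs(random.randint(-2**31, 2**31 - 1)) < 0)
print(f"uniform random testing exposed the fault {hits} times in {trials} trials")

# A boundary-value test on the domain edge finds it immediately:
assert buggy_abs(-2**31) < 0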
Iterative development
Iterative development involves a cyclical process. Learning from one iteration informs the next iteration. An iterative process embraces the fact that it is very difficult to know upfront exactly what the final product should look like
Definitions: Test data set
Let P be a program and let D denote its input domain. A test data set T is a finite set of test data, i.e., a subset of D
- Exhaustive testing corresponds to setting T = D
Definitions: Test data
Let P be a program and let D denote its input domain. A test datum t is an element of the input domain, t ∈ D
- A test datum provides a valuation for the input variables of the program
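The two definitions above, made concrete in a toy sketch (the program and domain are invented and deliberately tiny):

```python
# P is a program, D its input domain, t in D a test datum,
# and T a finite subset of D (a test data set).
def P(t: int) -> bool:            # the (invented) program under test
    return t % 2 == 0

def expected(t: int) -> bool:     # the oracle: specified behaviour
    return t % 2 == 0

D = range(0, 1000)                # input domain, deliberately tiny
T = {0, 1, 2, 999}                # a test data set: T is a subset of D
assert all(t in D for t in T)

for t in T:                       # each test datum fixes the program's inputs
    assert P(t) == expected(t)

# Exhaustive testing corresponds to T = D, feasible only for toy domains:
assert all(P(t) == expected(t) for t in D)
```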
LOC
Lines of code
MTBF
Mean Time Between Failure
Do more tests result in better testing?
No. The power of a test set is not determined by the number of its elements
W-model
No longer sequential: each development activity has a corresponding testing activity running in parallel
Failure
Occurs when a fault executes
How to Prioritize?
Possible ranking criteria (see the sketch below):
- Test where failures would be most visible
- Test where failures are most likely
- Ask the customer to prioritize the requirements
- What is most critical to the customer's business
- Areas changed most often
- Areas with most problems in the past
- Most complex areas, or technically critical ones
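One possible way to combine such criteria is a simple risk score; the areas and weights below are hypothetical:

```python
# Risk-based prioritization: rank test areas by
# (likelihood of failure) x (impact/visibility of failure).
areas = {
    "payment":   {"likelihood": 4, "impact": 5},   # changed often, critical
    "login":     {"likelihood": 3, "impact": 5},   # most visible to users
    "reporting": {"likelihood": 2, "impact": 2},
    "archiving": {"likelihood": 1, "impact": 1},
}

ranked = sorted(areas,
                key=lambda a: areas[a]["likelihood"] * areas[a]["impact"],
                reverse=True)
print(ranked)   # ['payment', 'login', 'reporting', 'archiving']
# Test from the top down, so that whenever testing stops, the
# highest-risk areas have already been covered.
```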
Forms of iterative development include
- Prototyping (or spiral)
- Rapid application development (RAD), "tool-supported development"
- Unified Process (formerly RUP)
- Agile software development
  - XP
  - Scrum
  - Kanban
QA
Quality Assurance
Quality
Quality is the totality of the characteristics of an entity that bear on its ability to satisfy stated or implied needs. Or in other words, quality is the degree to which a component, system or process meets specified requirements and/or user needs and expectations.
RAD
Rapid Application Development
Error / Mistake
Represents mistakes made by people
Typical life-cycle phases include
- Requirement Specification
- Conceptual Plan
- Architectural Design
- Detailed Design
- Component Development
- Integration
- System Qualification
- Release
- System Operation & Maintenance
- Retirement / Disposal
Test Coverage Types
- Requirements coverage
- Structural coverage
- Architectural coverage
RTM
Requirements traceability matrix
Fault / Defect / Bug
The result of an error. May be categorized as:
- Fault of Commission
- Fault of Omission
SDLC
Software Development Life-cycle
SEI
Software Engineering Institute
SUT
Software Under Test
Test Basis
The test basis is the source of information, or the documents, needed to write test cases and to perform test analysis
- The test basis should be well defined and adequately structured so that the test conditions from which test cases can be derived are easy to identify
- Typical test basis: business documents, requirements documents, legacy systems, code repositories
- Be aware of the traceability between the test basis and the various testing work products
Test Condition
A test condition is a statement about the test object; it can refer to any part of the test object (any item, event, transaction, function, structural element of a system, etc.) that can be verified by one or more test cases. A test condition can be anything you want to verify. Example of a test condition:
- Given a login form, when the user name and password are valid, then the application will move forward
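The example condition could be expressed as an executable, pytest-style test case; `login` here is a hypothetical stand-in for the application under test:

```python
def login(username: str, password: str) -> bool:
    return (username, password) == ("alice", "s3cret")   # invented logic

def test_valid_credentials_move_application_forward():
    # Given a login form with a valid user name and password
    username, password = "alice", "s3cret"
    # When the user submits the credentials
    result = login(username, password)
    # Then the application moves forward
    assert result is True
```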
TMM
Test Maturity Model
Test Object
Test object describes the target of testing (system under test - SUT, method under test - MUT, object under test - OUT, class under test - CUT, implementation under test - IUT, etc.). A test object may contain many test items.
Test Adequacy
Test adequacy criteria can be used to decide when sufficient testing will be, or has been, accomplished
- How much testing is enough?
- How many test cases should be selected?
Test Coverage
Test coverage measures the amount of testing performed by a set of tests, according to some adequacy criterion. The basic coverage measure is: coverage = (number of coverage items exercised / total number of coverage items) x 100%, where a coverage item is anything we can count and check whether a test has exercised it. 100% coverage does not mean 100% tested.
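The basic measure, computed for hypothetical counts:

```python
# coverage % = (coverage items exercised / total coverage items) * 100
def coverage_percent(exercised: int, total: int) -> float:
    return 100.0 * exercised / total

print(coverage_percent(45, 60))   # 75.0, e.g. 45 of 60 statements executed
# Even 100% of a weak coverage item does not mean 100% tested.
```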
TDD
Test-driven development
The basic difficulty in testing
The basic difficulty in testing is finding a test set that will uncover the faults in the program
Usability
The ease of use of the software by the intended users
Validation
The process of evaluating software at the end of development to ensure compliance with the customer's needs and requirements
- Validation can be accomplished by verifying the artifacts produced at each stage of the software development life-cycle
- Validation answers the question: Am I building the right product?
Vertical Traceability
The relationship among the parts of a single work product (discipline), e.g. requirements
Horizontal Traceability
The relationship among components across collections of work products, e.g. a design component is traced to the code components that implement that part of the design
Reliability
The time or number of transactions processed between failures in the software (MTBF)
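A small sketch of computing MTBF from hypothetical failure timestamps:

```python
# Failure timestamps, in hours since the start of observation (invented).
failure_times = [120.0, 310.0, 650.0, 700.0]

gaps = [later - earlier
        for earlier, later in zip(failure_times, failure_times[1:])]
mtbf = sum(gaps) / len(gaps)
print(f"MTBF = {mtbf:.1f} hours")   # (700 - 120) / 3 = 193.3 hours
```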
UML
Unified Modelling Language
V&V
Verification and validation
Fault of Commission
We enter something into the representation that is incorrect
Exhaustive Testing
What is exhaustive testing?
- When all the testers are exhausted? NO
- When all the planned tests have been executed? NO
- Exercising all combinations of inputs and preconditions? YES
How much time would exhaustive testing take?
- Infinite time? NO
- Not much time? NO
- An impractical amount of time? YES
XP
eXtreme Programming