Software Testing Terms

Error

An incorrect system state that can lead to a failure. Errors can be latent (dormant) or active.

Hot Fix

A critical bug fix that needs to go live before the next scheduled update.

Failure

A deviation between actual and expected outcome/output.

Grey Box Testing

A mix of white and black box testing. Test cases are designed with knowledge of the internal structure/code but from the perspective of a user.

Acceptance Criteria

A set of conditions that must be met in order for a feature to be considered ready for release.

Test Suite

A set of test cases grouped by category, intended to be run together.

Flaky Tests

A test with a non-deterministic outcome: it sometimes passes and sometimes fails without any change to the source code. Concurrency, timing issues, and changes in environment are common causes of flaky tests.
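As a minimal sketch, the following Python test is flaky because its outcome depends on timing rather than on the code under test (fetch_result is a hypothetical operation):

    import random
    import time

    def fetch_result():
        # Hypothetical operation whose latency varies from run to run.
        time.sleep(random.uniform(0.0, 0.2))
        return "done"

    def test_fetch_result_is_fast():
        start = time.monotonic()
        result = fetch_result()
        elapsed = time.monotonic() - start
        assert result == "done"
        # Flaky assertion: passes or fails depending on scheduling and load,
        # with no change to the code under test.
        assert elapsed < 0.1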

Black Box Testing

A testing approach that focuses on the overall functionality of the application or product and does not involve internal code. Test cases are designed, selected, and run based on specifications.

Minimum Viable Product (MVP)

A version of the application that meets the minimum required criteria to be accepted for launch.

Release Candidate

A version of the software that is ready for release assuming no major bugs are found during testing.

Requirements

AKA specifications ("Specs"). Documentation that lists all information about a feature.

Testware

All test artifacts in a project, including test data, test plans, and test cases used to design and perform a particular test.

Mutation Testing

Deliberately altering the source code in small ways (creating "mutants") to check whether the test suite detects the change. Mutation testing verifies unit tests and finds weak or redundant ones: if a faulty mutation survives without any test failing, the tests need strengthening.
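A hand-rolled sketch of the idea in Python (tools such as mutmut automate mutant generation; is_adult is hypothetical):

    def is_adult(age):
        return age >= 18           # original code

    def is_adult_mutant(age):
        return age > 18            # mutant: ">=" replaced with ">"

    def test_is_adult_boundary():
        # This test "kills" the mutant: is_adult_mutant(18) would return False.
        assert is_adult(18) is True
        assert is_adult(17) is False

    # A suite that only checked is_adult(30) would let the mutant survive,
    # revealing a weak spot in the tests.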

Integration Testing

Assembling small units of code into larger groups and testing them together. The goal is to make sure the individual units work together properly.
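A minimal pytest-style sketch, assuming two small hypothetical units, parse_price and apply_discount:

    def parse_price(text):
        # Unit 1: parse a price string such as "$10.00" into integer cents.
        return int(round(float(text.lstrip("$")) * 100))

    def apply_discount(cents, percent):
        # Unit 2: apply a whole-number percentage discount.
        return cents - cents * percent // 100

    def test_discounted_price_integration():
        # Exercises both units working together, as an integration test does.
        assert apply_discount(parse_price("$10.00"), 25) == 750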

Behavior Driven Development (BDD)

BDD is a methodology rather than a specific tool. It describes a way of talking about how your software should behave and of checking that it behaves that way. BDD grew out of a need to direct testing and development toward the most important behaviors the software should have, rather than writing tests just for the sake of coverage. It also gives the business side and the tech side a shared framework for talking about tests.
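One common BDD convention phrases each scenario as Given/When/Then steps; a minimal Python sketch (the Cart domain is hypothetical):

    class Cart:
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        def total(self):
            return sum(price for _, price in self.items)

    def test_adding_an_item_updates_the_total():
        # Given an empty cart
        cart = Cart()
        # When the customer adds a book costing 15
        cart.add("book", 15)
        # Then the cart total is 15
        assert cart.total() == 15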

Sanity Testing

Checks whether the planned functionality is working as expected, typically after a minor change or bug fix.

Smoke Testing

Checks if the software is ready to be tested thoroughly. Smoke tests are basic tests that check fundamental functionality of the application.

Static Testing

A cost-effective method of testing done without executing code, performed early in the development life cycle. It is done either by manual examination, where the code is analyzed by a QA analyst, or by automated analysis, where a testing tool checks the program for errors.
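For example, a static analysis tool such as pyflakes can flag the defects in the following (deliberately broken, hypothetical) snippet without ever executing it:

    def greet(name):
        mesage = "Hello, " + name    # typo: "mesage" is assigned but never used
        return message               # "message" is undefined; the code never has
                                     # to run for a linter to flag both problems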

Fault Injection

Creating intentional faults in the source code to test the robustness of the software.
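A minimal sketch of fault injection at the test level, using Python's unittest.mock to force a failure path (save_record and the database object are hypothetical):

    from unittest import mock

    def save_record(db, record):
        # Code under test: must degrade gracefully when the database is down.
        try:
            db.insert(record)
            return "ok"
        except ConnectionError:
            return "retry-later"

    def test_save_record_survives_db_outage():
        db = mock.Mock()
        # Inject the fault: every insert call raises ConnectionError.
        db.insert.side_effect = ConnectionError("db down")
        assert save_record(db, {"id": 1}) == "retry-later"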

Interface Testing

Determines if two different components of the software can communicate with each other.

Unit Testing

Dividing code into the smallest possible components, then testing them individually (a form of white box testing). Usually done by developers, though QA engineers can write unit tests as well.
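A minimal unit test in Python's pytest style (the function under test is hypothetical):

    def fahrenheit_to_celsius(f):
        return (f - 32) * 5 / 9

    def test_fahrenheit_to_celsius():
        # Each assertion exercises the unit in isolation from the
        # rest of the system.
        assert fahrenheit_to_celsius(32) == 0
        assert fahrenheit_to_celsius(212) == 100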

Exploratory (Ad-hoc) Testing

A flexible and creative form of testing. No test planning is required, and tests are guided by the tester's own intuition. Exploratory testing also allows for collaboration and theory-crafting on the fly. These tests should be done manually and never automated.

Software Defect Reports (SDR)

Formal documentation of a defect containing all relevant information about the defect such as identifying info, description of the problem, various status indicators, severity, resolution status, any useful notes, names, steps to reproduce, error printouts, screenshots, relevant test cases, etc.

Code Complete

Indicates that developers have finished implementing the bug fix or new feature. This means it is ready for QA, or will be soon once the code is deployed. It does not mean the new version is bug-free.

Load Testing

Load testing is a subset of performance testing that specializes in simulating real-world workload for software or a site. This testing method checks whether the software functions as it should under both normal and high usage loads.
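A bare-bones load-generation sketch using only the Python standard library; the URL is a placeholder, and real load tests typically use dedicated tools such as Locust or JMeter:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8000/"   # placeholder endpoint

    def timed_request(_):
        start = time.monotonic()
        with urlopen(URL) as resp:
            resp.read()
        return time.monotonic() - start

    def run_load(concurrency=20, total_requests=200):
        # Fire requests from `concurrency` workers and report the worst latency.
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = list(pool.map(timed_request, range(total_requests)))
        print(f"max latency: {max(latencies):.3f}s")

    if __name__ == "__main__":
        run_load()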

Localization Testing

A software testing technique in which the behavior of the software is tested for a specific region, locale, or culture. The purpose is to verify the appropriate linguistic and cultural aspects for a particular locale; it is the process of customizing the software for the targeted language and country.

Manual Testing

Tests executed manually by test engineers to verify various parts of the software according to test cases/plans. Relatively low-cost compared to automated testing, but labor-intensive and prone to human error and bias.

Performance Testing

Performance testing is often used to establish the benchmark behavior of a given system. It helps testers understand how the components of a system perform in a given situation. You can validate different aspects of the system, such as scalability, resource usage, and reliability.

Acceptance Testing

Provides the final certification that the system is ready to be used in a production setting. Users/customers are involved in this stage to provide feedback on whether or not the software fits their needs.

Test Cases

Requirements with steps for testing a specific part of the software.

Regression Testing

Rerunning old tests on new builds to make sure the software still meets previous (but still valid) requirements.

Stress Testing

Stress testing involves validating a system's behavior when it has to execute commands under stress. A system under stress is a system dealing with a lack of resources or functional impairments and failures. This helps us understand the system's total limit by reducing resources and evaluating the system's behavior.

What is the proper format for writing a good test case? What are the steps involved?

A good test case includes a test case ID, a description, severity, priority, environment, build version, steps to execute, expected results, and actual results.

Risk-Based Test Planning

Test plans centered around risks. Testing frequency and rigor are customized to each risk based on its impact and likelihood.

System Testing

Test the complete/fully integrated version of the software. The goal is to ensure the system as a whole meets requirements.

White Box Testing

Testing based on an analysis of the internal code of the software. Source code is used to derive test cases, and code coverage is important. Tests can be run at earlier stages of development and are more thorough than black box tests. White box testing requires an understanding of the code. Involves:
- API testing
- Code coverage
- Fault injection
- Mutation testing
- Dynamic & static testing

Compatibility Testing

Testing software under diverse system configurations. An example would be testing if a website works properly on different web browsers on mobile or desktop devices. AKA cross-browser/device testing.

API Testing

Testing that checks the reliability, accuracy, and consistency of the software's APIs. Can be automated to ensure APIs are sending/receiving data correctly at all times.
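A minimal API test sketch using only the Python standard library (the endpoint and response shape are hypothetical; real suites often pair requests with pytest):

    import json
    from urllib.request import urlopen

    def test_get_user_has_expected_shape():
        # Hypothetical endpoint on a locally running service.
        with urlopen("http://localhost:8000/api/users/1") as resp:
            assert resp.status == 200
            body = json.loads(resp.read())
        # Consistency checks: required fields exist and have the right types.
        assert isinstance(body["id"], int)
        assert isinstance(body["name"], str)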

Recovery Testing

Testing that pushes the system to crash and tests its capability to recover.

Dynamic Testing

Testing that requires execution of the code, performed further into the development cycle. Dynamic testing has four stages:
1. Unit Testing
2. Integration Testing
3. System Testing
4. (Customer) Acceptance Testing

Security Testing

Testing that probes the software and its data for vulnerabilities to intrusions and malware attacks, exposing any security loopholes.

Usability Testing

Testing that design choices are functional and intuitive. This testing is done from a user's perspective with the goal of maintaining or improving the user experience.

Install Testing

Testing the installation/uninstallation process of software.

Code Coverage

Testing the software to ensure all statements, functions, lines, etc. are run at least once.
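A sketch of why coverage matters, with a hypothetical function: the test below never exercises the negative branch, and a coverage tool would report it as unexecuted:

    def classify(n):
        if n >= 0:
            return "non-negative"
        return "negative"            # never executed by the test below

    def test_classify_non_negative_only():
        assert classify(5) == "non-negative"
        # A coverage tool (e.g., coverage.py) would report the "negative"
        # branch as uncovered, showing the suite is incomplete.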

Reliability Testing

Testing the software's capability of performing failure-free operation for a specified period in a specified environment.

Compliance Testing

Testing the software's compliance to set norms/standards. These can be contractual, legal, or industry-specific.

Functional Testing

Testing to make sure the system supports the requirements that were defined before development began. This includes checking that the actual output for a given input matches the expected output. Involves:
- Unit testing
- Integration testing
- System testing
- Sanity testing
- Smoke testing
- Interface testing
- Regression testing
- Beta/Acceptance testing

Non-functional Testing

Tests directed at the non-functional requirements of the software, such as performance, usability, and reliability. Involves:
- Performance testing
- Load testing
- Stress testing
- Volume testing
- Security testing
- Compatibility testing
- Install testing
- Recovery testing
- Reliability testing
- Usability testing
- Compliance testing
- Localization testing

Automated Testing

Tests run automatically by a tool, without manual instruction from a tester. Automated test scripts are used for repetitive or labor-intensive tasks like data input validation and unit tests. These are typically run any time a change is made to the source code.

Volume Testing

Tests the performance and processing speed of the system with an unusually large data set or high volume.

Software Under Test (SUT)

The current software that is undergoing testing.

Describe the main artifacts a quality assurance engineer would refer to when writing different test cases.

The major artifacts used by quality assurance engineers include functional requirement specification, requirement understanding document, use cases, wireframes, user stories, acceptance criteria, and user acceptance test (UAT) cases.

Dependability

The quality of a service such that it can be relied upon. Measures of dependability:
- Availability: readiness for correct service
- Reliability: continuity of correct service
- Safety: absence of catastrophic consequences on the users and environment
- Integrity: absence of improper system alteration
- Maintainability: ability for a process to undergo modifications and repairs
- Robustness: continuity of service (which may not be correct service)

Fault

The root cause of an error. A fault can be human-made (e.g., a coding mistake) or physical (e.g., a short circuit).

Stages of Software Testing Life Cycle

The software testing life cycle phases include the requirements phase, the planning phase, the analysis phase, the design phase, the implementation phase, the execution phase, the conclusion phase, and the closure phase.

Cyclomatic Complexity

A measure of the number of linearly independent paths through a program's code. This technique is used to answer three major questions about a program's features: Is it testable? Is it understood by everyone? Is it reliable enough? Quality assurance engineers use it to determine what level of testing a particular feature requires and whether it is considered high priority. Formula: M = E - N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components of the control-flow graph.
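A short worked example: for a single function (so P = 1) whose control-flow graph has 8 edges and 7 nodes, M = 8 - 7 + 2 × 1 = 3. There are three linearly independent paths, so at least three test cases are needed to cover them all.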

Verification & Validation

Verification: verifying that the product is in line with specifications ("the product is built correctly"). A process of checking documents, design, code, and the program to confirm the software has been built according to the requirements.

Validation: validating that the product is what users want ("the right product is built"). A dynamic mechanism of testing whether the software product actually meets the customer's needs.

