Test Analysis


security policy

A high-level document describing the principles, approach, and major objectives of an organization regarding security

Performance indicator

A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g., the lead-time slip for software development

indicator

A measure that can be used to estimate or predict another measure

Bug

A mismatch between the actual behavior of a software application and its intended (expected) behavior. We learn about expected behavior from requirements, specifications, and other technical documentation.

Project

A project is a unique set of coordinated and controlled activities, with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost, and resources

process

A set of interrelated activities which transform inputs into outputs

Security procedure

A set of steps required to implement the security policy and the steps to be taken in response to a security incident

Software Test Life Cycle (STLC)

Black box testing has its own life cycle, called the Software Test Life Cycle (STLC), which maps onto every stage of the Software Development Life Cycle (SDLC). Requirement: the initial stage of the SDLC, in which requirements are gathered; software testers also take part in this stage. Test Planning & Analysis: the testing types applicable to the project are determined, and a test plan is created that identifies possible project risks and their mitigation. Design: test cases/scripts are created on the basis of the software requirement documents. Test Execution: the prepared test cases are executed; bugs, if any, are fixed and re-tested.

Which documents would you refer to when creating Test Cases?

All business and technical documentation available: PRD (Product Requirements Document), BRD (Business Requirements Document), functional specifications, manuals and help, use cases, test design, and third-party publications (books published by independent authors)

What is the most important contribution to the product development process that QA could bring?

Clarifying requirements, and bringing down the percentage of code that must be re-written when requirements change.

HTTP cookies

An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers.
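The mechanics of setting and reading cookies can be sketched with Python's standard `http.cookies` module; the cookie names and values below are hypothetical.

```python
from http.cookies import SimpleCookie

# Parse a Cookie header as a server might receive it from a browser.
cookie = SimpleCookie()
cookie.load("session_id=abc123; theme=dark")

# Build a Set-Cookie header as a server might send it back.
response = SimpleCookie()
response["cart"] = "item42"
response["cart"]["max-age"] = 3600   # expire after one hour
response["cart"]["httponly"] = True  # hide the cookie from client-side JavaScript

print(cookie["session_id"].value)    # abc123
print(response["cart"].OutputString())
```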

social engineering

An attempt to trick someone into revealing information (e.g., a password) that can be used to attack systems or networks

test condition

An item or event of a component or system that can be verified by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element

Static Testing

Static testing is the checking of code, requirement documents, and design documents to find errors without code execution; hence the name "static". The main objective of this testing is to improve the quality of software products by finding errors in the early stages of the development cycle. Unfortunately, in practice it is often skipped.

Test Cases Important Components

1. Test case ID
2. Title / Purpose: test description, intent, objective, etc.
3. Instructions: how to get the application from the base state to the expected result
4. Expected result: expected application behavior based on the requirements
5. Actual result: actual application behavior
6. Pass/Fail: verification of the actual result (application behavior) against the expected result (specified in the test case)
7. Precondition
8. Test data
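
The components above can be sketched as a simple data structure. This is an illustrative Python sketch, not a real test-management tool, and all field values are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One test case with the components listed above."""
    case_id: str
    title: str
    precondition: str
    instructions: list
    test_data: dict
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"  # Pass / Fail / Not Run

    def verify(self, actual: str) -> str:
        """Record the actual result and compare it against the expected one."""
        self.actual_result = actual
        self.status = "Pass" if actual == self.expected_result else "Fail"
        return self.status


tc = TestCase(
    case_id="TC-001",
    title="Login with valid credentials",
    precondition="User account exists; application is at the login page",
    instructions=["Enter username", "Enter password", "Click Sign In"],
    test_data={"username": "demo_user", "password": "s3cret"},
    expected_result="Dashboard page is displayed",
)
print(tc.verify("Dashboard page is displayed"))  # Pass
```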

What is the difference between a test case and a test plan?

A test plan is the most comprehensive software testing document; it describes the objectives, scope, approach, and focus of a software testing effort. A test case is the smallest software testing document; it describes a typical or atypical situation that may occur in the use of an application.

Automation Testing

Testers use appropriate automation tools to develop test scripts and validate the software. The goal is to complete test execution in less time. Automated testing relies entirely on pre-scripted tests that run automatically to compare actual results with expected results. This helps the tester determine whether or not an application performs as expected. When to automate? ● Regression tests not expected to change; ● Repetitive tests; ● Structured data; ● Complex but static scenarios; ● Performance (load, volume, capacity)
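
A minimal sketch of the pre-scripted idea: the inputs and expected results are fixed in advance, and the run mechanically compares actual against expected. The `apply_discount` function is a hypothetical system under test.

```python
def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)


# Pre-scripted test data: (input, expected) pairs, ideal for repetitive runs.
cases = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20.0, 50), 10.0),
]

results = []
for (price, percent), expected in cases:
    actual = apply_discount(price, percent)
    # The comparison of actual vs. expected is the core of automated testing.
    results.append("PASS" if actual == expected else "FAIL")

print(results)  # ['PASS', 'PASS', 'PASS']
```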

Non-Functional testing

Testing conducted to evaluate the compliance of a component or system with non-functional requirements. Reliability, usability, maintainability, compatibility, response times, capacity for performance testing, etc.

What is the difference between Software Testing and Software QA?

Testing is mainly focused on the source code (black, gray, white box); it is an 'error detection' process. It usually starts at the "Testing" stage of the Software Development Life Cycle. Software QA is a 'preventative' process; its goal is to ensure quality in the methods and processes. It occurs during the whole SDLC (Software Development Life Cycle).

Dynamic Testing

Testing that involves the execution of the software of a component or system: actually executing program code using a set of test cases or in some other way.

Usability Testing

Usability testing is performed based on users' actual behavior. It evaluates: Is it easy to use? How much time does it take a user to complete basic tasks? How many mistakes did people make? How much does a person remember after periods of non-use? Emotional response: how does the person feel about the app?

User Acceptance Testing (UAT)

User Acceptance Testing (UAT) is one of the final stages of a project. Before accepting a new system, the client/customer tests the application to make sure that it meets the requirements. Preconditions: the application is fully developed; unit, integration, and system testing are completed; high-priority bugs have already been fixed before UAT.

Is it possible to find/fix all the bugs in a software product before it goes to the customers? Why test?

No, it is impossible; the cause is the human factor: every bug fix can introduce new bugs. Why test? 1) We owe it to our users and ourselves to deliver the best application we can. 2) To do quality assurance: to establish and to enforce the business systems of the QA organization ● Test Planning ● Bug Reporting ● Bug Tracking ● Test Automation ● Release Certification

Smoke testing

A smoke test must be performed on each build. The goal is to check the main business flow and the consistency of the application. It is performed by testers and usually doesn't take much time, approximately 20-30 minutes.
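
A minimal sketch of a smoke run, assuming three hypothetical critical checks; a real smoke suite would drive the actual build rather than stub functions.

```python
# Stand-ins for launching the build and exercising its critical paths.
def check_app_starts():
    return True


def check_login_page_loads():
    return True


def check_main_business_flow():
    return True


SMOKE_CHECKS = [check_app_starts, check_login_page_loads, check_main_business_flow]


def smoke_test():
    """Run every critical check; reject the build at the first failure."""
    for check in SMOKE_CHECKS:
        if not check():
            return "build rejected: " + check.__name__
    return "build accepted"


print(smoke_test())  # build accepted
```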

How should QA engineer communicate with a developer?

In my experience, there are several key best practices QA teams should adopt to improve collaboration with developers: 1. Focus on quality, not on testing. 2. Share responsibility. 3. Be constructive about defects. 4. Take initiative.

Project Plan

A formal document that summarizes the business, management, and financial aspects of a project; a "contract" between the project manager and the customers. It includes scope, objectives, benefits, costs, risks, plans, etc.

TEST SUITE

is a group of test cases;

Requirements

1) MRD: Marketing Requirements Document; describes the market opportunity or the market need. 2) PRD: Product Requirements Document; its purpose is to clearly describe the product's purpose, features, functionality, and behavior. 3) Use Cases

What is the purpose of testing?

1) Verification is checking whether the actual result meets the requirements. (Verification: Are we building the system right?) 2) Validation is the process of checking that what has been required is what the user actually wanted. (Validation: Are we building the right system?) 3) Error detection: finding whether things happen when they shouldn't, or things don't happen when they should.

Black box testing strategy

1. Equivalence Class Testing: used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage. 2. Boundary Value Testing: focused on the values at boundaries. This technique determines whether a certain range of values is acceptable to the system or not. It is very useful in reducing the number of test cases, and is most suitable for systems where input falls within certain ranges. 3. Decision Table Testing: a decision table puts causes and their effects in a matrix; each column contains a unique combination.
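
The decision-table technique can be sketched in a few lines of Python. The login rule below is a hypothetical example; each column of the table becomes one (conditions, action) entry, and one test per column covers every rule.

```python
# Hypothetical decision table for a login rule.
# Conditions: (valid_user, valid_password) -> expected action.
decision_table = {
    (True, True): "grant access",
    (True, False): "show password error",
    (False, True): "show user error",
    (False, False): "show user error",
}


def expected_action(valid_user, valid_password):
    """Look up the action for one unique combination of conditions."""
    return decision_table[(valid_user, valid_password)]


# One test case per column guarantees every rule is exercised once.
for conditions, action in decision_table.items():
    assert expected_action(*conditions) == action

print(len(decision_table), "rules covered")
```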

Bug Report components

1. Report number
2. Application / module being tested
3. Version & release number
4. Short description (problem summary)
5. Synopsis
6. Severity (assigned by tester): how the error affects the application (Critical, Serious, Minor, Suggestion)
7. Environment (software and/or hardware configuration)
8. Detailed description
9. Steps to reproduce
10. Reported by
11. Assigned to (developer)
12. Status (Open, Pending, Fixed, Closed, Cannot Reproduce, etc.)
13. Priority (assigned by manager): High, Medium, Low
14. Resolution / notes
15. Keywords

Network zone

A sub-network with a defined level of trust. For example, the Internet or a public zone would be considered untrusted.

Test Strategy

A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers, and developers about key issues of the testing process. What does a test strategy include? The testing objectives, methods of testing new functions, the total time and resources required for the project, and the testing environment. Or: documentation that expresses the generic requirements for testing one or more projects run within an organization, providing detail on how testing is to be performed, aligned with the test policy.

state transition

A transition between two states of a component or system

output

A variable (whether stored within a component or outside) that is written by a component

What are Advantages & Disadvantages of black box testing?

Advantages: test cases can be designed as soon as the specifications are complete; developer and tester are independent of each other; the tester does not need to know any specific programming language; the software is tested as a user, not as a developer (another point of view). Disadvantages: test cases are harder to design; it is impossible to test every possible input stream; many program paths will go untested.

Acceptance Testing

After system testing has corrected all or most defects, the system is delivered to the user or customer for Acceptance Testing or User Acceptance Testing (UAT). It is black-box testing performed on software prior to its delivery. The goal of acceptance testing is to establish confidence in the system; it is most often focused on validation-type testing. It is facilitated by the QA team but performed by Subject Matter Experts (SMEs) on behalf of users, and is usually performed by the customer (User Acceptance Testing, UAT).

Agile Model

The Agile SDLC model is a combination of iterative and incremental process models, with a focus on process adaptability and customer satisfaction through rapid delivery of working software. Agile methods break the product into small incremental builds, provided in iterations. Each iteration typically lasts from about one to three weeks. Every iteration involves cross-functional teams working simultaneously on various areas: planning, requirements analysis, design, coding, unit testing, and acceptance testing. Advantages of the Agile model: 1. Customer satisfaction through rapid, continuous delivery of useful software. 2. Customers, developers, and testers constantly interact with each other. 3. Working software is delivered frequently (weeks rather than months). 4. Regular adaptation to changing circumstances. Disadvantages of the Agile model: 1. There is a lack of emphasis on necessary design and documentation. 2. Only senior programmers are capable of making the kinds of decisions required during the development process. When to use the Agile model: when new changes need to be implemented. Unlike the waterfall model, in the Agile model very limited planning is required to get started with the project. Agile assumes that end users' needs are ever-changing in a dynamic business and IT world. Changes can be discussed, and features can be added or removed based on feedback. This effectively gives the customer the finished system they want or need.

Business Requirements Document (BRD)

The BRD is written by business analysts. It details the business solution for a project, including the documentation of customer needs and expectations. The most common objectives of the BRD are: to gain agreement with stakeholders; to provide a foundation for communicating to a technology service provider what the solution needs to do to satisfy the customer's and business's needs; to provide input into the next phase of the project; and to describe what, not how, the customer/business needs will be met by the solution.

Testing boundary conditions? Why? How?

Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. It is a method that refines equivalence partitioning, and it generates test cases that highlight errors better than equivalence partitioning does. The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes: at the points where input values change from valid to invalid, errors are most likely to occur. Boundary value analysis also broadens the portions of the business requirements document used to generate tests. For example, if the valid range of quantity on hand is -9,999 through 9,999, write test cases that include: 1. the valid test case quantity on hand is -9,999; 2. the valid test case quantity on hand is 9,999; 3. the invalid test case quantity on hand is -10,000; and 4. the invalid test case quantity on hand is 10,000.
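
The four boundary cases from the example map directly to code; `quantity_is_valid` is a stand-in for the real system under test.

```python
def quantity_is_valid(q):
    """Hypothetical system under test: accept quantities in -9,999..9,999."""
    return -9999 <= q <= 9999


# Boundary value test cases for the valid range -9,999 through 9,999.
boundary_cases = [
    (-9999, True),    # lowest valid value
    (9999, True),     # highest valid value
    (-10000, False),  # just below the valid range
    (10000, False),   # just above the valid range
]

for value, expected in boundary_cases:
    assert quantity_is_valid(value) == expected

print("all boundary cases pass")
```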

Beside test case & test plan, what documents are required to write?

Check Lists Test matrices Test design specs End-to-end tests Test summary reports Bug reports

Why are testers needed if software goes to customers with lots of unfixed bugs?

Creating, maintaining, and improving the QA group/team/department business systems: * bug reporting * bug tracking * test planning * release certification * test automation

Quality

Customer satisfaction. Quality is a subjective term; it depends on who the 'customer' is, and each type of customer will have their own view of 'quality'. Broadly, from the consumer's perspective, it is customer satisfaction. Or: the degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations.

TEST PLAN

DEFINITION: a document that describes the objectives, scope, approach, and focus of a software testing effort. • The highest-level software testing document possible; • Does not include test cases; • Developed mostly by the QA team, with other teams possibly contributing (everybody is involved at some level). Or: the process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product testing. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it.

TEST CASE

DEFINITION: a set of conditions and/or variables under which a tester will determine whether the application meets requirements. Test Case • A test case comes 100% from a requirement; • The smallest possible action in software testing; • The lowest-level document in software testing; • Might fail or pass in just one place; • Test cases should be created so that all of the paths are executed at least once. A group of test cases is called a "test suite". Guidelines for writing test cases: 1. There is a difference between two questions: "Write test cases for..." and "How would you test...". 2. The first test case should be the single most important test (the happy path). 3. Prioritize test cases. 4. Level of detail: the same test case should be executed in the exact same way by each and every tester; use test data. 5. Start with requirements: test cases MUST match the requirements, with the minimal possible change. Writing test cases would normally involve the following: requirements (or assumptions about requirements), preconditions (prerequisites), and test data. NOTICE: the application is supposed to be in a testable condition; avoid test cases such as "button is clickable", "text field accepts input", "check box can be checked".

Test matrix

Data collection mechanism. It provides a structure for testing the effect of combining two or more variables, circumstances, types of hardware, or events. Row and column headings identify the test conditions. Cells keep the results of test execution.

Test closure

During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts, and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including the preparation of a test evaluation report.

Effectiveness

Extent to which correct and complete goals are achieved. The capability of producing an intended result

Operational Environment

Hardware and software products installed at the user's or customer's site, where the component or system under test will be used. The software may include the operating system and database.

Build

In a programming context, a build is a version of a program. As a rule, a build is a pre-release version and as such is identified by a build number, rather than by a release number. Reiterative (repeated) builds are an important part of the development process. Throughout development, application components are collected and repeatedly compiled for testing purposes. or It is an executable file that has been made by compiling code.

use case testing

A "use case" is used to identify and exercise the functional requirements of an application from start to finish, and the technique used to do this is known as "use case testing".

What levels of testing do you know?

In the software development life cycle there are defined phases: requirement gathering and analysis, design, coding or implementation, testing, and deployment. Each phase goes through testing. The levels of testing are: 1. Unit testing: basically done by the developers to make sure that their code works and meets the user specifications. 2. Component testing: also called module testing. The basic difference between unit testing and component testing is that in unit testing developers test their piece of code, while in component testing the whole component is tested. For example, in a student record application there are two modules: one saves the records of the students, and the other uploads the results of the students. Both modules are developed separately, and when they are tested one by one we call this component (or module) testing. 3. Integration testing: done when two modules are integrated, in order to test the behavior and functionality of both modules after integration. There can be several levels of integration testing: component integration testing, done to ensure that the code does not break after integrating the two modules; and system integration testing (SIT), where testers verify that in the same environment all the related systems maintain data integrity and can operate in coordination with other systems. 4. System testing: testers test the compatibility of the application with the system. System integration testing may be performed after system testing or in parallel with it. 5. Acceptance testing: done to ensure that the requirements of the specification are met. Alpha testing is done at the developer's site, at the end of the development process; beta testing is done at the customer's site, just before the launch of the product.

Installation testing

Installation testing verifies that the system is installed and set up correctly. Typically done by QA Engineer working closely with Configuration Manager;

load testing

Load testing is a kind of performance testing that determines a system's performance under real-life load conditions. It helps determine how the application behaves when multiple users access it simultaneously. This testing usually identifies: 1. the maximum operating capacity of an application; 2. whether the current infrastructure is sufficient to run the application; 3. the sustainability of the application with respect to peak user load; 4. the number of concurrent users the application can support, and the scalability to allow more users to access it. It is a type of non-functional testing, commonly used for client/server and web-based applications, both intranet and Internet. It is a type of performance testing conducted to evaluate the behavior of a component or system under increasing load, e.g., numbers of parallel users and/or numbers of transactions, to determine what load the component or system can handle. In load testing we validate system behavior under the expected load; the load can be concurrent users or resources accessing the system at the same time.
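
A toy illustration of the concurrent-users idea, using threads against a hypothetical request handler; real load tests use dedicated tools (e.g., JMeter) against the deployed system.

```python
import threading
import time

def handle_request():
    """Stand-in for real request processing in the system under test."""
    time.sleep(0.01)
    return "ok"


results = []
lock = threading.Lock()


def simulated_user():
    """One simulated user: time a single request and record the outcome."""
    start = time.perf_counter()
    status = handle_request()
    elapsed = time.perf_counter() - start
    with lock:
        results.append((status, elapsed))


# Expected load: 20 users hitting the system at the same time.
CONCURRENT_USERS = 20
threads = [threading.Thread(target=simulated_user) for _ in range(CONCURRENT_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

ok = sum(1 for status, _ in results if status == "ok")
print(f"{ok}/{CONCURRENT_USERS} requests succeeded")
```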

Performance Testing

Performance testing is run to determine whether a system is stable under a specific workload and how fast aspects of the system perform under it. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability, and resource usage. ● Testing whether the application meets the performance criteria; ● Comparing two systems to find which performs better; ● Measuring which parts of the system cause poor performance; ● Requires a stable build and an environment closest to production. REMEMBER: load, stress, spike, endurance, and configuration testing are instances of performance testing.

authorization

Permission given to a user or process to access resources

Positive testing

Positive testing simulates the user's expected behavior. The purpose of positive testing is to show that the software works as designed when the user acts as expected. It should be done BEFORE negative testing. Or: positive testing is aimed at showing that the software works as intended when the user performs correct actions.

types of testing

Positive/negative; black box/white box; unit, integration, system testing; functional, regression, acceptance; validation/verification testing; ad hoc, exploratory, structured

Compatibility Testing

The purpose of compatibility testing is to evaluate the application's compatibility with different environments. Browser compatibility testing checks how the application looks, behaves, and responds in different browsers. What to test? ● Cross-browser/platform compatibility issues; ● Screen resolutions; ● Frames; ● Font display and availability; ● Color limitations; ● JavaScript availability (mobile devices);

QA Process:

QA is the process of verifying or determining whether products or services meet or exceed customer expectations. QA is a process-driven approach with specific steps to help define and attain goals. This process considers design, development, production, and service. The four quality assurance steps within the PDCA model stand for: Plan: establish the objectives and processes required to deliver the desired results. Do: implement the process developed. Check: monitor and evaluate the implemented process by testing the results against the predetermined objectives. Act: apply the actions necessary for improvement if the results require changes. ● Test Planning ● Test Development ● Test Execution ● Bug Reporting ● Defect Management

Regression test

Regression testing is testing existing software applications to make sure that a change or addition hasn't broken any existing functionality. Its purpose is to catch bugs that may have been accidentally introduced into a new build or release candidate, and to ensure that previously eradicated bugs continue to stay dead. By re-running testing scenarios that were originally scripted when known problems were first fixed, you can make sure that any new changes to an application haven't resulted in a regression, or caused components that formerly worked to fail. Such tests can be performed manually on small projects, but in most cases repeating a suite of tests each time an update is made is too time-consuming and complicated to consider, so an automated testing tool is typically required.
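
The "keep the bug's test forever" idea can be sketched as follows; `parse_price` and its past bug are hypothetical.

```python
def parse_price(text):
    """Hypothetical function under test; an earlier bug mishandled a leading '$'."""
    return float(text.lstrip("$"))


# The regression suite: the original cases plus the case that once failed.
regression_suite = [
    ("19.99", 19.99),
    ("0", 0.0),
    ("$5.50", 5.50),  # the previously-reported bug, kept in the suite forever
]

# Re-run the whole suite after every change to confirm the bug stays dead.
failures = [case for case, expected in regression_suite
            if parse_price(case) != expected]
print("regression suite:", "PASS" if not failures else f"FAIL {failures}")
```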

Release/build acceptance testing

Release/build acceptance testing is a quick test to verify that the application is stable enough to execute more complete testing. Acceptance testing by QA is different from acceptance testing by the customer (UAT, user acceptance testing).

Risk analysis

Risk analysis means the actions taken to avoid things going wrong on a software development project, things that might negatively impact the scope, quality, timeliness, or cost of a project. This is, of course, a shared responsibility among everyone involved in a project. However, there needs to be a 'buck stops here' person who can consider the relevant trade-offs when decisions are required, and who can ensure that everyone is handling their risk management responsibilities.

SCRUM

Scrum is one of the methodologies of the Agile model for software development. There are one or more cross-functional, self-organizing teams of about seven people each in Scrum. In these teams there are only three roles: Product Owner, Team, and Scrum Master. Scrum uses fixed-length iterations, called sprints, which are typically 1-2 weeks long (never more than 30 days). Scrum teams try to build a potentially shippable (properly tested) product increment every iteration.

Security testing

Security testing is an activity whose purpose is to make the application secure. It means: detecting errors that may cause a big loss (loss of personal, financial, or other important information); finding all possible loopholes and weaknesses of the system; and measuring all potential security risks in the system.

Which tools are used to write Test Cases?

Test management tools such as HP Quality Center, TestLink, Zephyr, and Rational TestManager. Many companies use spreadsheets (Excel) or word processors (Word).

Stress Testing

A stress test puts an emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. The goal may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial-of-service attacks. Or: stress testing is performed to determine system behavior under an extreme load (a kind of performance testing). Will the system perform properly "under stress"?

Software Testing Documentation

Test Plan; Test Cases; Test Suite; Test Strategy; Traceability Matrix; Test Script; etc.

System testing

System testing is executed on a fully integrated, complete application to determine whether the application meets the requirements. System testing is black box testing: you need to know the requirements, not the internal design or logic of the system. Preconditions for system testing: 1. All components should have successfully passed unit testing and integration testing. 2. System testing should be executed in an environment closest to the production setting, or, if necessary, in several environments. Or: testing an integrated system to verify that it meets specified requirements.

risk management

Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk

Black box

Testing, either functional or non-functional, without reference to the internal structure of the component or system. Testing done from a user perspective (no access to the source code used). In black box testing we focus only on the inputs and outputs of the software system. It can be manual or automated (still black box), and this method can be applied to every level of software testing: unit, integration, system, and acceptance. The generic steps followed to carry out any type of black box testing: 1. Initially, the requirements and specifications of the system are examined. 2. The tester chooses valid inputs (positive test scenarios) to check whether the SUT processes them correctly; some invalid inputs (negative test scenarios) are also chosen to verify that the SUT is able to detect them. 3. The tester determines the expected outputs for all those inputs. 4. The tester constructs test cases with the selected inputs, and the test cases are executed. 5. The tester compares the actual outputs with the expected outputs. 6. Defects, if any, are fixed and re-tested. Types of black box testing: functional testing, related to the functional requirements of a system and done by software testers; non-functional testing, not related to a specific functionality but to non-functional requirements such as performance, scalability, and usability; and regression testing, done after code fixes, upgrades, or any other system maintenance to check that the new code has not affected the existing code.

SDLC - Environments

Tests may occur in the following hardware and software environment types: ● Development; ● QA environment; ● Staging; ● Production; etc.

Testware

Testware is test artifacts like test cases, test data, test plans needed to design and execute a test.

Test input

The data received from an external source by the test object during test execution. The external source can be hardware, software, or a human.

DHCP/DNS

The Dynamic Host Configuration Protocol (DHCP) is a network management protocol used on UDP/IP networks whereby a DHCP server dynamically assigns an IP address and other network configuration parameters to each device on a network so they can communicate with other IP networks. The Domain Name System (DNS) is a hierarchical decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities.
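
The hierarchical nature of DNS names can be illustrated without any network access; the sketch below simply splits a name into its labels, read from the root downward.

```python
def dns_hierarchy(name):
    """Return a DNS name's labels ordered from the root toward the host.

    DNS is hierarchical: the right-most label ("com") is closest to the
    root, and the left-most label ("www") names the host.
    """
    labels = name.rstrip(".").split(".")
    return list(reversed(labels))


print(dns_hierarchy("www.example.com"))  # ['com', 'example', 'www']
```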

Compliance

The capability of the software product to adhere to standards, conventions, or regulations in law and similar prescriptions

Stability

The capability of the software product to avoid unexpected effects from modifications in the software

Accuracy

The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.

Test object

The component or system to be tested

Risk impact

The damage that will be caused if the risk becomes the actual outcome or event

complexity

The degree to which a component has a design and/or internal structure that is difficult to understand, maintain, and verify.

quality control

The operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements.

Test Plan include

The following are some of the items that might be included in a test plan, depending on the particular project:
* Title
* Identification of software including version/release numbers
* Revision history of document including authors, dates, approvals
* Table of Contents
* Purpose of document, intended audience
* Objective of testing effort
* Software product overview
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Relevant standards or legal requirements
* Traceability requirements
* Relevant naming conventions and identifier conventions
* Overall software project organization and personnel/contact-info/responsibilities
* Test organization and personnel/contact-info/responsibilities
* Assumptions and dependencies
* Project risk analysis
* Testing priorities and focus
* Scope and limitations of testing
* Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
* Test environment validity analysis - differences between the test and production systems and their impact on test validity
* Test environment setup and configuration issues
* Software migration processes
* Software CM (configuration management) processes
* Test data setup requirements
* Database setup requirements
* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
* Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
* Test automation - justification and overview
* Test tools to be used, including versions, patches, etc.
* Test script/test code maintenance processes and version control
* Problem tracking and resolution - tools and processes
* Project test metrics to be used
* Reporting requirements and testing deliverables
* Software entrance and exit criteria
* Initial sanity testing period and criteria
* Test suspension and restart criteria
* Personnel allocation
* Personnel pre-training needs
* Test site/location
* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
* Relevant proprietary, classified, security, and licensing issues
* Open issues
* Appendix - glossary, acronyms, etc.

Beta testing

The goal of beta testing is to place your application in the hands of real users outside of your own engineering team to discover any flaws or issues from the user's perspective that you would not want to have in your final, released version of the application. Example: Microsoft and many other organizations release beta versions of their products to be tested by users.

Unit Testing

The goal of unit testing is to isolate each part of the program and show that the individual parts (units) are correct. A unit is the smallest testable part of an application; it may be an individual function or procedure. Unit testing is performed by developers, not testers. In other words, unit testing is the phase of testing where the object of testing is a unit (the smallest testable part of an application), verifying that units are usable and produce correct data. Unit tests are usually created and run by developers (or white box testers) during the development process to ensure that the code meets its specification and behaves as expected.
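A minimal unit-test sketch using Python's standard unittest module; the `discount` function is a hypothetical unit under test, invented for this example:

```python
import unittest

def discount(price, percent):
    """Unit under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    """Each test isolates one behavior of the unit."""

    def test_typical_discount(self):
        self.assertEqual(discount(100.0, 20), 80.0)

    def test_zero_discount(self):
        self.assertEqual(discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(50.0, 150)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run
    unittest.main(argv=["unit"], exit=False, verbosity=2)
```

Note the last test: a good unit test checks error handling, not just the happy path.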

Risk level

The importance of a risk as defined by its characteristics: impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively or quantitatively.

Software Testing

The purpose of testing is verification, validation, and error detection (in order to find and fix problems). - Verification is checking for conformance and consistency by evaluating the results against pre-specified requirements. (Verification: are we building the system right?) - Validation is the process of checking that what has been specified is what the user actually wanted. (Validation: are we building the right system?) - Error detection: finding whether things happen when they shouldn't, or things don't happen when they should. Alternatively: the process of analyzing software with the purpose of detecting differences between actual results and expected results (requirements), and evaluating the features of the software.

Software development life cycle

The software development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. Stages: 1) Planning (Planning & Concept development) 2) Analysis (Analysis & Requirements gathering) 3) Design (Architecture & Specifications) 4) Development (Development & Test) 5) Implementation/Release 6) Maintenance

How do you determine which piece of software requires how much testing?

Cyclomatic complexity analysis helps answer three questions about a program or feature: Is the feature/program testable? Is the feature/program understood by everyone? Is the feature/program reliable enough? As QA we can use this technique to identify the "level" of our testing. In practice, if the cyclomatic complexity is a larger number, we consider that piece of functionality to be complex in nature, and we conclude as testers that the piece of code/functionality requires in-depth testing. On the other hand, if the cyclomatic complexity is a smaller number, we conclude that the functionality is of lower complexity and decide the scope accordingly. As QA it is very important that we understand the entire testing life cycle and are able to suggest changes in our process if required. The goal is to deliver high-quality software, and QA should take all necessary measures to improve the process and the way the testing team executes tests.
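A rough illustration of the idea in Python. The AST-based counter below is a simplified approximation of cyclomatic complexity (decision points + 1), not a production metric tool, and the sample functions are invented:

```python
import ast

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity of a snippet: decisions + 1.
    Counts if/for/while statements and boolean operators as decision points."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp))
        for node in ast.walk(tree)
    )
    return decisions + 1

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def grade(score):\n"
    "    if score >= 90:\n"
    "        return 'A'\n"
    "    if score >= 80:\n"
    "        return 'B'\n"
    "    return 'C'\n"
)

# A straight-line function needs minimal testing; a branchy one needs more.
assert cyclomatic_complexity(simple) == 1
assert cyclomatic_complexity(branchy) == 3
```

A complexity of 3 suggests at least three test cases, one per independent path.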

Software Quality

The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. From a consumer perspective, it is about customer satisfaction (a subjective matter). From a QA perspective, it is a measurement of how close the actual software product is to the expected product; in essence, it is the ability of a product to meet requirements. Quality software can be described as: - reasonably bug-free, - delivered on time and within budget, - meeting requirements and/or expectations, - maintainable.

Releases

A release is the process of delivering and providing the product to clients.

Ad Hoc Testing

This type of testing can be done at any time, anywhere in the software development life cycle (SDLC), without following any formal process such as requirement documents, test plans, test cases, etc. - Ad hoc testing is usually done to discover issues or defects which cannot be found by following the formal process. - Ad hoc testing is done after the completion of formal testing on the application or product. - This testing is performed with the aim of breaking the application without following any process. - Testers executing ad hoc testing should have thorough knowledge of the product. - Ad hoc testing can be executed only once, unless a defect is found which requires retesting. - The test scenarios executed during ad hoc testing are not documented, so the tester has to keep all the scenarios in mind, which he/she might not be able to recollect in the future.

Effective Testing

To test effectively you need: 1. Test Coverage; 2. Strong planning; 3. Execution.

scripted testing

Under scripted testing, the tester designs test cases first and then executes them.

Use case

A use case is a format used by business analysts for specifying system requirements. Each use case normally represents a completed business operation performed by the user. From the QA perspective, we execute the corresponding end-to-end test to make sure the requirement is implemented.

Volume testing

Volume testing (a part of stress testing) is performed to determine system behavior with a defined amount of data (database size, etc.)
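A small volume-testing sketch using an in-memory SQLite database; the row count and the timing threshold are arbitrary example values, not recommendations:

```python
import sqlite3
import time

# Volume-testing sketch (assumed scenario): load a defined amount of data
# into an in-memory SQLite database and check the system still behaves.
ROWS = 100_000  # the defined data volume

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (name) VALUES (?)",
    ((f"user{i}",) for i in range(ROWS)),
)

start = time.perf_counter()
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
elapsed = time.perf_counter() - start

assert count == ROWS   # the defined data volume was handled without loss
assert elapsed < 5.0   # and queries stay responsive (example threshold)
```

In a real volume test the thresholds and data sizes come from the non-functional requirements, not from the tester's guess.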

What is walk-through meeting?

Walk-through meeting is a form of software peer review in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems.

Development Models

Waterfall (traditional, sequential); Agile; Extreme Programming (XP).

What types of testing do you know?

Static vs. dynamic; manual vs. automated; black/white/gray box; functional/non-functional; positive/negative; exploratory (ad hoc); unit/integration/system/end-to-end; acceptance; regression; security; alpha/beta testing; smoke testing; performance (load, stress) testing; compatibility testing; usability testing; installation testing; internationalization/localization testing; accessibility testing; boundary testing.

White box

White box testing is done with access to the source code. Bugs are reported at the source code level, not at the behavioral level.
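A tiny white-box illustration: knowing the internal structure of the (hypothetical) `classify` function, we write one test per branch to reach full branch coverage:

```python
# White-box sketch: tests are derived from the code's internal structure.
# classify is invented for this example; it has exactly two branches, so
# two structure-driven tests cover them both.

def classify(n):
    if n % 2 == 0:      # branch 1
        return "even"
    return "odd"        # branch 2

assert classify(4) == "even"   # exercises the if-branch
assert classify(7) == "odd"    # exercises the fall-through branch
```

A black-box tester would pick inputs from the spec; here the inputs are picked specifically to light up every branch in the source.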

Exploratory testing

is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. In this way, exploratory testing can be used as a check on the formal test process by helping to ensure that the most serious defects have been found. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. Good if no requirements or if they are incomplete.

Integration testing

is a level of software testing where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units. Test drivers and test stubs are used to assist in integration testing. An integration test case differs from other test cases in that it focuses mainly on the interfaces and the flow of data/information between the modules; priority is given to the integrating links rather than the unit functions, which are already tested. Sample integration test cases for the following scenario: an application has three modules, say 'Login Page', 'Mailbox' and 'Delete Mails', and each of them is integrated logically. Here, do not concentrate much on Login Page testing, as it has already been done in unit testing; instead check how it links to the Mailbox page. Similarly for the Mailbox: check its integration with the Delete Mails module.
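The login/mailbox scenario above can be sketched in Python; the two units and the mail-server stub are invented for illustration:

```python
# Integration-test sketch: two units (login and mailbox) are combined, and
# the test focuses on the interface between them. The real mail backend is
# replaced with a stub, as described above.

class MailServerStub:
    """Stub standing in for the real mail backend."""
    def fetch(self, user):
        return [f"mail-1 for {user}", f"mail-2 for {user}"]

def login(username, password):
    # Already unit-tested elsewhere; here we only care that it hands a
    # session to the mailbox module.
    if password != "secret":
        raise PermissionError("bad credentials")
    return {"user": username}

def open_mailbox(session, server):
    return server.fetch(session["user"])

# The integration check: data flows from login -> mailbox via the interface.
session = login("alice", "secret")
mails = open_mailbox(session, MailServerStub())
assert mails == ["mail-1 for alice", "mail-2 for alice"]
```

Neither unit is re-tested in depth; the assertion checks only that the session produced by one module is consumed correctly by the other.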

TEST SCRIPT

is a procedure or programming code that replicates/imitates user actions. A test case is the basis for creating test scripts using a tool or a program.
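A minimal sketch of turning a written test case into a script; `FakeLoginPage` is a stand-in for a real application driven through an automation tool:

```python
# Test-script sketch: a test case's manual steps ("enter username, enter
# password, press login, expect welcome message") replayed in code against
# a hypothetical application object.

class FakeLoginPage:
    def __init__(self):
        self.fields = {}

    def type_into(self, field, text):
        self.fields[field] = text

    def click_login(self):
        if self.fields.get("password") == "secret":
            return "Welcome, " + self.fields["username"]
        return "Invalid credentials"

# The scripted steps mirror the written test case one-to-one.
page = FakeLoginPage()
page.type_into("username", "alice")   # step 1
page.type_into("password", "secret")  # step 2
result = page.click_login()           # step 3
assert result == "Welcome, alice"     # expected result from the test case
```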

Waterfall model

is a sequential software development methodology in which progress flows constantly in one direction only (like a waterfall) through the phases of 1) Planning (planning & concept development) 2) Analysis (analysis & requirements gathering) 3) Design (architecture & specifications) 4) Development (development & test) 5) Implementation/Release 6) Maintenance. Waterfall does not allow review of, or return to, any prior phase once it is complete. In the waterfall model, phases are executed strictly sequentially.

Bug Tracking system

is a software application designed to manage the software testing process, including reporting bugs, tracking bugs, and closing bugs. It is also designed to run statistical analysis and to monitor and summarize results. Popular bug tracking tools include: ● Jira; ● Mantis; ● Bugzilla; ● Elementool; ● Trac; ● Redmine; ● OTRS.

TRACEABILITY MATRIX

is a table that correlates requirements (design documents) to test documents (test cases).
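A traceability matrix can be sketched as a simple mapping; the requirement and test-case IDs below are hypothetical:

```python
# Traceability-matrix sketch: each requirement is mapped to the test cases
# that cover it. A requirement with no test cases is a coverage gap.

matrix = {
    "REQ-001 user can log in":         ["TC-101", "TC-102"],
    "REQ-002 user can reset password": ["TC-110"],
    "REQ-003 user can delete account": [],   # no coverage yet!
}

uncovered = [req for req, tcs in matrix.items() if not tcs]
assert uncovered == ["REQ-003 user can delete account"]
```

Real projects usually hold this table in a test-management tool or spreadsheet, but the underlying structure is exactly this requirement-to-tests mapping.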

API Testing

is a type of software testing that involves testing application programming interfaces (APIs) directly and as part of integration testing to determine if they meet expectations for functionality, reliability, performance, and security. In order to test an API, you will need to either use a testing tool to drive the API or write your own code to test the API. Important: - performed by testers; - end-to-end functionality is tested; - testers cannot access the source code.

Alpha testing

is one of the most common software testing strategies used in software development. It takes place at the developer's site and is the final testing before the software is released to the general public. It has two phases: 1. The software is tested by in-house developers; the goal is to catch bugs quickly. 2. The software is handed over to the software QA staff for additional testing in an environment that is similar to the intended use. The focus of this testing is to simulate real users by using black box and white box techniques, carrying out the tasks that a typical user might perform.

Functionality testing?

is performed to verify that a software application performs and functions correctly according to design specifications. During functionality testing we check the core application functions, text input, menu functions, and installation and setup on localized machines. The following needs to be checked during functionality testing: installation and setup on localized machines running localized operating systems and local code pages; text input, including the use of extended characters or non-Latin scripts; core application functions; string handling, text, and data, especially when interfacing with non-Unicode applications or modules; regional settings defaults; text handling (such as copying, pasting, and editing) of extended characters, special fonts, and non-Latin scripts; accurate hot-key shortcuts without any duplication.

Negative testing

shows that the software behaves properly in situations when a user does not act as a user is supposed to act (invalid inputs, unreasonable selections of settings, etc.). • It makes no sense if positive testing fails. • Negative testing is to be done after positive testing. • It takes more time than positive testing. • Under time pressure, minimize (or skip) negative testing. • It results in more bugs than positive testing.
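A short sketch of the positive-first, then-negative order described above; `parse_quantity` is an invented SUT:

```python
# Negative-testing sketch: feed the SUT inputs a well-behaved user would
# never enter, and check that it rejects them rather than silently accepting.

def parse_quantity(text):
    value = int(text)          # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Positive check first: negative testing makes no sense if this fails.
assert parse_quantity("3") == 3

# Negative scenarios: each invalid input must be detected by the SUT.
for bad in ["abc", "", "-5", "0"]:
    try:
        parse_quantity(bad)
    except ValueError:
        pass                   # correct: the SUT detected the invalid input
    else:
        raise AssertionError(f"{bad!r} was wrongly accepted")
```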

Gray Box Testing

is a testing technique performed with limited information about the internal functionality of the system. It uses structural, design, and environment information (complete or incomplete) to expand or focus black box testing and to enhance testing productivity with appropriate methods and tools. It is an extension of black box testing; a lot of web testing is done in the gray box area. It is primarily useful in integration testing and penetration testing. Example of gray box testing: while testing a website feature like links, if the tester encounters a problem with a link, he can make changes straight away in the HTML code and check the result in real time. Gray box testing is performed for the following reasons: 1. It provides the combined benefits of both black box and white box testing. 2. It combines the input of developers as well as testers and improves overall product quality. 3. It reduces the overhead of the long process of testing functional and non-functional types. 4. It gives the developer enough time to fix defects. Gray box testing strategy: to perform gray box testing, it is not necessary that the tester has access to the source code. Tests are designed based on knowledge of algorithms, architectures, internal states, or other high-level descriptions of the program behavior. Techniques used for gray box testing are: 1. Matrix testing: defining all the variables that exist in the program. 2. Regression testing: checking whether a change in the previous version has regressed other aspects of the program in the new version, using strategies like retest all, retest risky use cases, retest within firewall. 3. Orthogonal array testing (OAT): provides maximum code coverage with minimum test cases. 4. Pattern testing: performed on the historical data of the previous system's defects.
Unlike black box testing, gray box testing digs within the code and determines why the failure happened. Usually, the gray box methodology uses automated software testing tools to conduct the testing; stubs and module drivers are created to relieve the tester from manually generating code. Steps to perform gray box testing: Step 1: Identify inputs. Step 2: Identify outputs. Step 3: Identify major paths. Step 4: Identify subfunctions. Step 5: Develop inputs for subfunctions. Step 6: Develop outputs for subfunctions. Step 7: Execute test cases for subfunctions. Step 8: Verify correct results for subfunctions. Step 9: Repeat steps 4-8 for other subfunctions. Step 10: Repeat steps 7 & 8 for other subfunctions. The test cases for gray box testing may include GUI-related, security-related, database-related, browser-related, operating-system-related cases, etc.

End-To-End

is the process of verifying a software system along with its sub-systems and external interfaces. The purpose of end-to-end testing is to exercise a complete production-like scenario. It is usually executed after functional and system testing.

Bug life cycle

is the specific set of states that a bug goes through from discovery to closure. The life cycle includes the following states:
1. New: a defect is logged and posted for the first time.
2. Assigned: after the tester has posted the bug, the test lead approves that the bug is genuine and assigns it to the corresponding developer and developer team.
3. Open: the developer has started analyzing and working on the defect fix.
4. Fixed: the developer makes the necessary code changes, verifies them, sets the bug status to 'Fixed', and passes the bug to the testing team.
5. Pending retest: after fixing the defect, the developer hands the changed code back to the tester for retesting; the retest is still pending on the tester's end.
6. Retest: the tester retests the changed code to check whether the defect is fixed.
7. Verified: the tester tests the bug again after the fix. If the bug is no longer present in the software, the status is changed to 'Verified'.
8. Reopen: if the bug still exists even after the fix, the tester changes the status to 'Reopened' and the bug goes through the life cycle once again.
9. Closed: once the bug is fixed and the tester confirms it no longer exists, the status is changed to 'Closed'. This state means the bug is fixed, tested, and approved.
10. Duplicate: if the bug is reported twice, or two bugs describe the same issue, one bug's status is changed to 'Duplicate'.
11. Rejected: if the developer feels the bug is not genuine, the bug is rejected.
12. Deferred: the bug is expected to be fixed in a future release. Possible reasons: the bug's priority is low, there is a lack of time before the release, or the bug does not have a major effect on the software.
13. Not a bug: there is no change in the functionality of the application. For example, if a customer asks for a change in the look and feel of the application, such as a change of colour of some text, it is not a bug but just a cosmetic change.
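The states above can be sketched as a small state machine; the set of allowed transitions below is one plausible reading of the list, not an authoritative workflow (real trackers like Jira let teams configure their own):

```python
# Bug life-cycle sketch: states and allowed transitions modeled explicitly,
# so illegal jumps (e.g. New -> Closed) are rejected.

TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Rejected", "Deferred", "Not a bug"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Reopen":         {"Assigned"},
    "Verified":       {"Closed"},
}

def advance(state, new_state):
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A bug travelling the happy path from discovery to closure:
state = "New"
for step in ["Assigned", "Open", "Fixed", "Pending retest",
             "Retest", "Verified", "Closed"]:
    state = advance(state, step)
assert state == "Closed"
```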

Localization Testing

Localization testing is the process of customizing a software application for the targeted language and country. The major areas affected by localization include content and UI. The purpose of localization testing is to check appropriate linguistic and cultural aspects for a particular locale. It includes changes in the user interface or even the initial settings according to the requirements. For typical localization testing, we set up build verification testing, functional testing, regression testing, and a final sign-off. Sample test cases: 1. Glossaries are available for reference and checking. 2. Time and date are properly formatted for the target region. 3. Phone number formats are appropriate for the target region. 4. Currency is correct for the target region. 5. The license and rules comply with the current website (region). 6. Text content layout in the pages is error free, font independent, and properly line-aligned. 7. Special characters, hyperlinks, and hot keys function correctly. 8. Validation messages appear for input fields. 9. The generated build includes all the necessary files. 10. The localized screen has the same types of elements and numbers as the source product. 11. The localized user interface of the software or web application matches the source user interface in the target operating systems and user environments. Things that often change because of localization: ● user interface and content files; ● keyboards; ● text filters; ● hot keys; ● spelling rules; ● sorting rules; ● paper sizes; ● date formats; ● rulers and measurements; ● voice user interface language/accent.
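Sample test case 2 (date formats per region) can be sketched as follows; the expected formats are illustrative test data supplied by hand, not taken from an OS locale database:

```python
import datetime

# Localization-check sketch: verify that one date renders in the format
# expected for each target locale. The locale -> format table is test data
# invented for this example.
EXPECTED_FORMATS = {
    "en_US": "%m/%d/%Y",   # 12/31/2024
    "de_DE": "%d.%m.%Y",   # 31.12.2024
    "ja_JP": "%Y/%m/%d",   # 2024/12/31
}

d = datetime.date(2024, 12, 31)
rendered = {loc: d.strftime(fmt) for loc, fmt in EXPECTED_FORMATS.items()}

assert rendered["en_US"] == "12/31/2024"
assert rendered["de_DE"] == "31.12.2024"
assert rendered["ja_JP"] == "2024/12/31"
```

A real localization suite would pull the expected values from a glossary or locale data (e.g. CLDR) rather than hard-coding them.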

What is the most frequently executed type of testing?

Release/build acceptance (smoke) testing, since it runs on every build; the next most frequent is regression testing.

Functional testing

is a type of testing which verifies that each function of the software application operates in conformance with the requirement specification. This testing mainly involves black box testing and is not concerned with the source code of the application. Each and every functionality of the system is tested by providing appropriate input, verifying the output, and comparing the actual results with the expected results. This testing involves checking the user interface, APIs, database, security, client/server applications, and functionality of the application under test. The testing can be done either manually or using automation, and it may occur at all test levels. It mainly concentrates on: 1. Mainline functions: testing the main functions of an application. 2. Basic usability: basic usability testing of the system, checking whether a user can freely navigate through the screens without any difficulties. 3. Accessibility: checking the accessibility of the system for the user. 4. Error conditions: using testing techniques to check error conditions and whether suitable error messages are displayed.

Manual Testing

testers manually execute test cases without using any automation tools. Manual testing: ● Manually testing software for defects; ● Tester plays the role of an end user; ● Test all features to ensure the correct behavior. When to conduct the Manual Testing? ● Expensive to do automation; ● Need for human judgment; ● Ongoing need for human intervention.

Software Quality Assurance

the process of monitoring and improving all activities associated with software development, during the whole software development life cycle, from requirements gathering to coding, testing, and implementation.

Versions

A unique version name or number used to identify the states of an application. Versions are generally assigned in increasing order and correspond to new developments in the software. The build number is often used as a version number.
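A sketch of why "increasing order" means numeric, not alphabetical, comparison; `parse_version` is a simplified helper (real projects often use a dedicated versioning library):

```python
# Version-number sketch: dotted version strings compared as integer tuples,
# so "1.10.0" correctly sorts after "1.9.2". Naive string comparison gets
# this wrong because it compares character by character.

def parse_version(v):
    return tuple(int(part) for part in v.split("."))

assert parse_version("1.10.0") > parse_version("1.9.2")   # numeric: correct
assert "1.10.0" < "1.9.2"                                 # string: misleading
```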

Why Test Cases?

• Better testing coverage; • Reusable; • Reviewable; • Traceable - know what is tested; • Tracking test results;

