Software Testing Fundamentals
Acceptance Testing
The final, complete piece of software is tested to make sure it is ready for deployment. - Typically performed by humans
Behavior Driven Development (BDD)
- Expansion of TDD: develop failing acceptance tests, then write program code to pass them
- Write user stories, turn them into tests, then write code to make them pass
- We will use Cucumber to accomplish this
Priority
How urgently a defect needs to be corrected
Test Driven Development (TDD)
Develop tests (typically unit tests) first and then write code to make them pass.
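The TDD cycle can be sketched as follows. This is a minimal Python sketch (the notes use JUnit for Java, but the idea is language-agnostic); the `add()` function is a hypothetical example, and plain `assert`s stand in for a test framework.

```python
# TDD sketch: the test is written first, and fails until the
# function below it is implemented.

def test_add():
    assert add(2, 3) == 5    # written before add() exists (red)
    assert add(-1, 1) == 0

# Step 2: write just enough code to make the test pass (green).
def add(a, b):
    return a + b

test_add()  # the test now passes
```

Because `test_add()` only looks up `add` when it is called, defining the test first works; running it before `add` exists is exactly the failing "red" state TDD starts from.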
Test Strategy
Document that sets the standards for the testing process.
- Typically company-wide; the same for every project
- High level / more generic
- Roles and responsibilities
- Objectives
- Scope
Test Plan
Document that tells us what to test, how to test it, and who will do it.
- Project-wide
- Features to be tested
- Testing techniques
- Test environments
- Schedule
- Low level / more specific
Regression Testing
Incremental testing of software to ensure that new additions to the software do not break existing functionality.
Retest
Running a previously failed test again after the defect has been fixed, to confirm the fix
Defect Lifecycle
- New: bug detected
- Assigned: assigned to a developer
- Rejected: defect is thrown out for various reasons, e.g., it is not actually a defect
- Deferred: defect is put off for various reasons, e.g., not important, or may be overwritten by upcoming changes
- Test/Fixed: test the cases and attempt to fix it
- Reassigned: fix failed; it might not have been assigned to a developer with the appropriate skills
- Verified: other developers or testers verify the bug is fixed
- Closed: bug is documented as fixed
Beta Testing
Releasing the software to a limited number of prospective end users who have no connection to the development team.
Smoke Testing
Running tests just to see whether your application is even in a state to be tested. The tests are extremely simple, and passing means it is okay to continue testing. For example, if you test a website's welcome screen and it fails to load, you know not to keep testing because there is a deeper problem.
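A smoke check can be sketched like this in Python; the check functions (`app_starts`, `welcome_screen_loads`) are hypothetical stand-ins for real checks such as constructing the application object or requesting the home page.

```python
# Smoke-test sketch: a few trivial checks run before the full suite.
# If any fails, there is no point running the remaining tests.

def app_starts():
    return True          # e.g., can the application object be created?

def welcome_screen_loads():
    return True          # e.g., does the home page return HTTP 200?

def smoke_ok():
    """Return True only if every basic check passes."""
    return all(check() for check in (app_starts, welcome_screen_loads))

if smoke_ok():
    print("smoke passed - run the full suite")
else:
    print("smoke failed - stop, the build is not testable")
```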
System Testing
Test a complete and integrated software application
Flaky Test
Test that has different outcomes given the same base conditions.
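A common source of flakiness is hidden state such as randomness or the system clock. The sketch below (a hypothetical discount picker, not from the notes) shows a test that can pass or fail on identical inputs, and a deterministic rewrite that controls the hidden state.

```python
import random

# Flaky: the outcome depends on the RNG's hidden state, so the
# "same" test run can pass or fail.
def pick_discount():
    return random.choice([10, 20])

def flaky_test():
    return pick_discount() == 10   # passes only sometimes

# Stable: inject the random source with a fixed seed, so the same
# base conditions always produce the same outcome.
def pick_discount_seeded(rng):
    return rng.choice([10, 20])

def stable_test():
    return pick_discount_seeded(random.Random(42)) == \
           pick_discount_seeded(random.Random(42))
```

Injecting the dependency (here, the RNG) rather than reaching for global state is the usual fix for this class of flaky test.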
Load Testing
Testing a system and evaluating its behavior under heavy use. - Ex. have 10,000 users use the application simultaneously - JMeter is a common tool for this
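The idea can be sketched with Python threads standing in for concurrent users; `place_order()` is a hypothetical stand-in for a real request (a tool like JMeter would drive an actual server instead).

```python
from concurrent.futures import ThreadPoolExecutor

# Load-test sketch: hit one operation with many concurrent calls
# and check that every call still succeeds.

def place_order(order_id):
    # Stand-in for a real request to the system under test.
    return {"id": order_id, "status": "accepted"}

def run_load(n_users):
    # Simulate n_users concurrent callers with a thread pool.
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(place_order, range(n_users)))
    # Count how many simulated users got a successful response.
    return sum(1 for r in results if r["status"] == "accepted")
```

A real load test would also record response times and error rates as the user count rises, not just pass/fail.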
Alpha Testing
Testing an application the way an end user might use it. - Usually done by internal employees
Spike Testing
Testing conducted to see how the software deals with a sudden surge of use. - Ex. start with no users, then log in 10,000 users all at once - Or: many users are logged in but idle; have them all suddenly make many requests
Integration Testing
Testing done to expose deficiencies between code components working together.
Testing Automation
Using software to perform tests that would otherwise be run manually. - Ex. JUnit automated tests - Tests are repeatable - Tests can run after hours - Increases code coverage
Defect/Bug
When the software doesn't behave as intended, often in a detrimental way.
Types of Testing
White box testing
- Testing based on knowledge of the internal code
- Applies mostly to low-level unit tests
- Ex. JUnit
Black box testing
- Testing an application without knowledge of how it works
- Typically focuses on functionality
- Mimics how a user would test/use the application
- Ex. Postman
Software Development Lifecycle
1. Analysis of the existing system
   a. Identify areas of improvement
2. System requirements are identified
   a. Propose new features
   b. Address existing issues/concerns
3. Proposed system is defined
   a. Overall design is developed
   b. Technology stack is chosen
      i. What your code should do, so that you can design it
4. New system is developed
   a. Actual coding/implementation
5. Testing/Integration
   a. Test your new feature(s)
   b. Test for compatibility
6. Maintenance/Monitoring
   a. Report areas of deficiency
   b. Use this information to start a new cycle
Software Testing Lifecycle - STLC
1. Planning/Analysis
   a. What needs to be tested?
   b. Is it testable?
2. Design
   a. How are we going to set up our tests?
   b. Will they be automated?
   c. What framework?
3. Develop your test cases
   a. Write the code for your tests
   b. Find edge cases
      i. Good test cases should not overlap or be redundant
      ii. However, they should cover all functionality
      iii. Edge cases: unexpected cases that should not happen but could
         1) Code coverage: how many lines of code the tests exercise; aim to cover all code
         2) Try to avoid redundant tests
4. Choose an environment for your tests
   a. What software/hardware are you using to run your tests?
5. Test execution: run your tests
6. Conclusion/Wrap-up
   a. Generate a report of the findings from the test execution
      i. Ex. 100 tests passed, 3 failed
      ii. What's my code coverage?
      iii. SonarCloud/SonarQube
Test Suite
A group of test cases related to each other (same functionality)
Test Case
A set of inputs and conditions with expected results. Every test that you write has an expected result, even if the test fails.
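Test cases are often written as data: each row pairs inputs with the expected result. A minimal Python sketch, where the function under test (`is_adult`) is a hypothetical example:

```python
# A test case as data: input + expected result, including boundary values.

def is_adult(age):
    return age >= 18

# Each row is one test case; every case states its expected result.
cases = [
    {"input": 17, "expected": False},   # boundary, just under
    {"input": 18, "expected": True},    # boundary, exact
    {"input": 40, "expected": True},    # typical value
]

def run_cases():
    # True for each case whose actual result matches its expected result.
    return [is_adult(c["input"]) == c["expected"] for c in cases]
```

Table-driven cases like this make the inputs, conditions, and expected results explicit, and keep the cases from overlapping.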
Severity
How much the functionality of a feature is compromised when it fails
Red/Green Testing
Mantra used in TDD: write tests that fail (red), then write code until they pass (green), then write more tests. - Should be incremental; don't write all tests at once
Reliability Testing
Testing whether the software can perform failure-free operation for a specified length of time.
- Feature testing: test that a feature's operations are all executed properly
- Load testing: software tends to perform well at first and degrade over time
- Regression testing: make sure new bugs haven't been introduced with new code
Stress Testing
Testing that evaluates a system's behavior when pushed beyond what it is designed to handle. - Check pressure points - Ex. submit 10,000 orders all at once
Non-functional Testing
Testing for non-functional aspects of the software application - Usability, Performance, Reliability
Usability Testing
Testing whether the application is friendly for users to use. - Has to be done by a human - Ex. a black screen with small black text: it might function, but it is unfriendly to use
Unit Testing
Testing of individual software components. - Tests are typically written per method/function - If a method is too large, break it down
Risk-Based Testing
Testing principle that prioritizes testing features based on: - Risk of failure - The importance of the function to be tested - Impact of failure - Prioritize and emphasize the most suitable tests during test execution - Improved quality: all critical functions are tested
Performance Testing
Testing the responsiveness, stability, and efficiency of the application
Functional Testing
Testing to see if a piece of software does what it is supposed to do. - Given a set of inputs there are expected outputs.
Endurance Testing
Testing whether the application can keep working for a prolonged period of time under heavy use.
Negative Testing
Testing to see that a certain procedure fails - User enters invalid data
Positive Testing
Testing to see that a certain procedure succeeds - User enters valid data
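Both kinds of test can be shown against one hypothetical validator (`parse_age` is an illustrative example, not from the notes): the positive test feeds it valid data and expects success, the negative test feeds it invalid data and expects a clean failure.

```python
# Positive vs. negative testing of a hypothetical input validator.

def parse_age(text):
    """Accept a digit string 0-130; raise ValueError otherwise."""
    if not text.isdigit() or not 0 <= int(text) <= 130:
        raise ValueError(f"invalid age: {text!r}")
    return int(text)

def positive_test():
    # Valid data: the procedure should succeed.
    return parse_age("42") == 42

def negative_test():
    # Invalid data: the procedure should fail, and fail cleanly.
    try:
        parse_age("abc")
    except ValueError:
        return True      # the expected failure occurred
    return False         # it wrongly accepted bad input
```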