Software Testing Terminology
Key terms and concepts in software testing
Software Testing
Definition: Software Testing is the process of evaluating a software item to detect differences between expected and actual results. It is also used to evaluate the features of the software item.
Main Software Testing Terms
Bug
Definition: A bug is an error, flaw, or fault in a computer program that causes it to produce an incorrect or unexpected result. The term is famously associated with a 1947 incident in which Grace Hopper's team found an actual moth trapped in a relay of the Harvard Mark II computer, causing it to malfunction. The moth was taped into the logbook with the note "First actual case of bug being found", wording that shows the term was already in use among engineers at the time.
Fault
Definition: A fault is a static defect in the software: a flaw in the code that may or may not lead to a failure. Faults are introduced during the development process (requirements, design, or coding). When a fault is executed, it can put the system into an incorrect internal state, which may then propagate to an observable failure.
Error
Definition: An error is a human action that produces an incorrect result. It is a mistake made by a person (developer, tester, or user) during any phase of the software development lifecycle. Errors lead to faults in the software: a developer's mistaken assumption, for example, becomes a defect in the code.
Failure
Definition: A failure is the inability of a system or component to perform its required functions within specified performance requirements. It occurs when a fault is executed and its effect propagates to the system's output, producing incorrect or unexpected behavior. Failures are the observable deviations from expected results; a fault that is never executed, or whose effect never reaches an output, produces no failure.
Standards & Processes
IEEE Standard 610
Definition: IEEE Standard 610 (IEEE Std 610.12-1990, the IEEE Standard Glossary of Software Engineering Terminology) provides consistent definitions for software engineering terms. It helps establish a common language for software development and testing professionals.
Verification
Definition: Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. It answers the question: "Are we building the product right?"
Validation
Definition: Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements. It answers the question: "Are we building the right product?"
Modelling & Simulation
Definition: Modelling & Simulation is the process of creating a model (a representation of a system) and using it to simulate the behavior of the system under various conditions. It is used to predict system behavior, test scenarios, and validate designs before actual implementation.
Software Quality Assurance
Definition: Software Quality Assurance (SQA) is a set of activities for ensuring quality in software engineering processes (that ultimately result in quality software products). It includes process definition and implementation, auditing, and training. SQA focuses on preventing defects rather than just finding them.
Accreditation
Definition: Accreditation is the formal recognition by an authorized body that an organization, individual, or product meets specified standards and is competent to carry out specific tasks. In software testing, it often refers to certification of testing processes, tools, or personnel.
Test Driven Development (Agile)
Definition: Test Driven Development (TDD) is a software development approach where test cases are written before the actual code. The development cycle follows a "Red-Green-Refactor" pattern: write a failing test (Red), write the minimum code to pass the test (Green), then refactor the code for better quality. TDD is a core practice in Agile methodologies.
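As a minimal sketch of one Red-Green-Refactor cycle, using Python's built-in unittest module (the add function is a hypothetical example, not part of any particular codebase):

```python
import unittest

# Red: this test is written first, and fails until add() exists.
class TestAdd(unittest.TestCase):
    def test_adds_two_integers(self):
        self.assertEqual(add(2, 3), 5)

# Green: the minimum implementation that makes the test pass.
# Refactor would follow: improve the code while keeping the test green.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()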
Test Cases
Definition: A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements. A test case typically includes:
- Inputs: The data or conditions provided to the system being tested
- Expected Outputs: The anticipated result or behavior of the system
- Pass/Fail Criteria: The conditions that determine whether the test has passed or failed
Test cases are documented with specific steps, preconditions, and postconditions to ensure reproducibility and clear understanding of what is being tested.
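For example, test cases can be captured directly as (input, expected output) pairs with an explicit pass/fail criterion; the is_leap_year function below is a hypothetical system under test:

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Hypothetical system under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # Each tuple is one test case: (input, expected output).
    CASES = [(2000, True), (1900, False), (2024, True), (2023, False)]

    def test_cases(self):
        for year, expected in self.CASES:
            with self.subTest(year=year):
                # Pass/fail criterion: actual output equals expected output.
                self.assertEqual(is_leap_year(year), expected)

if __name__ == "__main__":
    unittest.main()
```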
Traceability
Definition: Traceability is the ability to track and trace the relationships between different artifacts throughout the software development lifecycle. In software testing, traceability typically refers to:
- Requirements Traceability: Linking requirements to test cases to ensure all requirements are tested
- Test Traceability: Tracking test cases back to requirements, design documents, and code
- Defect Traceability: Linking defects to the test cases that found them and the requirements they affect
A Requirements Traceability Matrix (RTM) is commonly used to document these relationships, ensuring complete coverage and enabling impact analysis when changes occur.
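A sketch of how an RTM supports coverage checks and impact analysis (the requirement and test-case IDs are invented for illustration):

```python
# Hypothetical RTM: requirement IDs mapped to the test cases that cover them.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # not yet covered by any test case
}

# Coverage check: flag requirements with no linked test cases.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Uncovered requirements:", uncovered)  # ['REQ-003']

# Impact analysis: which requirements does a given test case verify?
def requirements_for(test_id: str) -> list[str]:
    return [req for req, tests in rtm.items() if test_id in tests]

print(requirements_for("TC-002"))  # ['REQ-001']
```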
Observability
Definition: Observability is the ability to observe the internal state and behavior of a software system during testing. It refers to how well a system's internal operations can be monitored and understood through external outputs such as logs, metrics, traces, and events. High observability enables testers to diagnose issues, understand system behavior, and verify that the software is functioning correctly. Observability is crucial for debugging, performance analysis, and ensuring system reliability in production environments.
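One common way to raise observability is structured logging with timing information, so internal state and latency are visible from the outside; a minimal sketch (the order-processing function is hypothetical):

```python
import logging
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("orders")

def process_order(order_id: str) -> None:
    start = time.perf_counter()
    log.info("order received id=%s", order_id)
    # ... business logic would run here ...
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Emitting state and timing makes the internal behavior observable
    # through logs and a simple latency metric.
    log.info("order processed id=%s latency_ms=%.2f", order_id, elapsed_ms)

process_order("A-123")
```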
Controllability
Definition: Controllability is the ability to control the inputs, state, and behavior of a software system during testing. It refers to how easily a tester can set up specific test conditions, manipulate system state, and control the execution flow to test various scenarios. High controllability enables comprehensive testing by allowing testers to reach specific code paths, test edge cases, and simulate different operating conditions. Controllability is essential for effective test design, automation, and ensuring thorough test coverage.
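A typical way to raise controllability is dependency injection, which lets a test set up specific conditions directly; a sketch assuming a hypothetical time-dependent greeting function:

```python
from datetime import datetime

def greeting(now_fn=datetime.now) -> str:
    # The clock is injected, so a test can control "time" directly
    # instead of depending on the real system clock.
    return "Good morning" if now_fn().hour < 12 else "Good afternoon"

# In production, greeting() uses the real clock.
# In a test, pass a controllable fake clock to reach both branches.
fixed_morning = lambda: datetime(2024, 1, 1, 9, 0)
fixed_afternoon = lambda: datetime(2024, 1, 1, 15, 0)
assert greeting(fixed_morning) == "Good morning"
assert greeting(fixed_afternoon) == "Good afternoon"
```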
Types of Testing (Classification)
Unit Testing
Definition: Unit testing is the process of testing individual components or modules of a software separately. It focuses on the smallest testable parts of an application, typically functions, methods, or classes. Unit tests are usually written and executed by developers to ensure that each unit of code performs as expected. This level of testing isolates dependencies and uses mocks or stubs to test the unit in isolation.
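A minimal unit test sketch using unittest.mock to isolate the unit from its email-service dependency (send_welcome and its mailer interface are hypothetical):

```python
import unittest
from unittest.mock import Mock

def send_welcome(user_id, mailer):
    """Hypothetical unit under test: composes and sends one email."""
    mailer.send(to=user_id, subject="Welcome!")
    return True

class TestSendWelcome(unittest.TestCase):
    def test_sends_exactly_one_email(self):
        mailer = Mock()  # stands in for the real email service
        self.assertTrue(send_welcome("u42", mailer))
        mailer.send.assert_called_once_with(to="u42", subject="Welcome!")

if __name__ == "__main__":
    unittest.main()
```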
Integration Testing
Definition: Integration testing is the process of testing the interfaces between components or modules to verify they work together correctly. It combines individual units and tests them as a group to expose faults in the interaction between integrated units. Integration testing can be performed incrementally (top-down, bottom-up, or sandwich approach) or all at once (big bang). It focuses on data communication between modules and the integration of external systems.
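A sketch of an integration test that, unlike a unit test, exercises two components together across their real interface (here a hypothetical repository plus an in-memory SQLite database):

```python
import sqlite3
import unittest

class UserRepository:
    """Hypothetical data-access module."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

class TestRepositoryIntegration(unittest.TestCase):
    def test_repository_with_real_database(self):
        # No mocks: the repository and the (in-memory) database are
        # exercised together, exposing faults in their interaction.
        repo = UserRepository(sqlite3.connect(":memory:"))
        repo.add("Ada")
        self.assertEqual(repo.count(), 1)

if __name__ == "__main__":
    unittest.main()
```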
System Testing
Definition: System testing is the process of testing the complete integrated system to verify it meets all specified requirements. It is performed on the entire system in an environment that closely resembles production. System testing evaluates both functional and non-functional requirements, including performance, security, usability, and reliability. This is the first level of testing where the entire application is tested as a whole.
Acceptance Testing
Definition: Acceptance testing is the process of testing to determine if a system satisfies acceptance criteria and is ready for delivery to the end user. It is typically performed by the customer or stakeholders to validate that the software meets business requirements and user needs. Acceptance testing can be formal (UAT - User Acceptance Testing) or informal, and it is the final testing phase before the software is released to production.
Beta Testing
Definition: Beta testing is the process of testing a pre-release version of software by a limited group of external users in a real-world environment. It is performed after alpha testing (internal testing) and before the final release. Beta testing helps identify bugs, usability issues, and performance problems that may not have been discovered during internal testing. Beta testers provide feedback on the software's functionality, stability, and overall user experience.
Quality-Based Testing Types
Functional Testing
Definition: Functional testing is a type of software testing that validates the software system against the functional requirements and specifications. It focuses on testing the features and behavior of the software by providing input and verifying the output against expected results. Functional testing ensures that the application performs as expected from the user's perspective, covering user scenarios, business rules, and use cases. It is typically performed with black-box techniques and is carried out at the unit, integration, system, and acceptance levels.
Stress Testing
Definition: Stress testing is a type of performance testing that evaluates how a system behaves under extreme conditions, often beyond its expected capacity. It involves testing the system with heavy loads, high concurrency, or limited resources to identify breaking points and ensure the system fails gracefully. Stress testing helps determine the system's stability and robustness by pushing it to its limits and observing how it recovers from failure conditions.
Performance Testing
Definition: Performance testing is a type of software testing that evaluates how a system performs in terms of responsiveness, stability, scalability, and speed under a particular workload. It measures various performance metrics such as response time, throughput, resource utilization, and latency. Performance testing ensures the system meets performance requirements and can handle expected user loads without degradation. It includes load testing, stress testing, spike testing, and endurance testing.
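A minimal sketch of measuring response-time metrics over repeated calls (handle_request is a stand-in for a real operation under test):

```python
import statistics
import time

def handle_request():
    """Hypothetical operation under test."""
    sum(range(10_000))

# Measure response time over repeated calls and report simple metrics.
latencies_ms = []
for _ in range(100):
    start = time.perf_counter()
    handle_request()
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
print(f"median: {statistics.median(latencies_ms):.3f} ms")
print(f"p95:    {latencies_ms[94]:.3f} ms")  # 95th of 100 sorted samples
```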
Usability Testing
Definition: Usability testing is a type of testing that evaluates how easy and user-friendly a software application is for its intended users. It involves observing real users as they interact with the system to complete specific tasks, identifying usability issues, and gathering feedback on the user experience. Usability testing focuses on aspects such as learnability, efficiency, memorability, error prevention and recovery, and user satisfaction. It helps ensure the software meets user expectations and provides a positive experience.
Regression Testing
Definition: Regression testing is a type of software testing that ensures that recent code changes have not adversely affected existing features. It involves re-running previously executed test cases to verify that the software still functions correctly after modifications such as bug fixes, enhancements, or configuration changes. Regression testing helps detect unintended side effects and ensures that the system's overall quality and stability are maintained throughout the development lifecycle. It can be performed manually or through automated test suites.
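Regression tests are often added when a defect is fixed, so the fix is re-verified on every subsequent run; a sketch (parse_price and bug #241 are invented for illustration):

```python
import unittest

def parse_price(text: str) -> float:
    """Hypothetical function; bug #241 was a crash on inputs like '  $5  '."""
    return float(text.strip().lstrip("$"))

class TestRegression(unittest.TestCase):
    def test_bug_241_whitespace_and_currency_symbol(self):
        # Added when (hypothetical) bug #241 was fixed; re-run after every
        # change so the defect cannot silently reappear.
        self.assertEqual(parse_price("  $5  "), 5.0)

if __name__ == "__main__":
    unittest.main()
```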
Methodology
Black Box Testing
Definition: Black box testing is a software testing method where the internal structure, design, and implementation of the software are not known to the tester. The tester focuses on the input and output of the software without knowledge of how the system processes the input. This approach is based on requirements and specifications, testing the software from an end-user perspective. Black box testing techniques include equivalence partitioning, boundary value analysis, decision table testing, and state transition testing.
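A sketch of boundary value analysis, a black box technique: test values are chosen at and around the specified boundaries, using only the specification and never the implementation (the grade function and its 0-100 range with a pass mark of 50 are hypothetical):

```python
import unittest

def grade(score: int) -> str:
    """Hypothetical system under test: valid range 0-100, pass mark 50."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

class TestGradeBoundaries(unittest.TestCase):
    def test_values_at_and_around_boundaries(self):
        for score, expected in [(0, "fail"), (49, "fail"),
                                (50, "pass"), (100, "pass")]:
            with self.subTest(score=score):
                self.assertEqual(grade(score), expected)

    def test_values_just_outside_range_are_rejected(self):
        for score in (-1, 101):
            with self.subTest(score=score):
                self.assertRaises(ValueError, grade, score)

if __name__ == "__main__":
    unittest.main()
```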
White Box Testing
Definition: White box testing, also known as glass box testing, clear box testing, or structural testing, is a software testing method where the internal structure, design, and implementation of the software are known to the tester. The tester examines the code, algorithms, and internal logic to verify that the software functions correctly at the code level. White box testing requires programming knowledge and focuses on internal paths, control structures, and data flow. Techniques include statement coverage, branch coverage, path coverage, and condition coverage.
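A sketch of branch coverage: the tests are designed by reading the code so that every branch executes at least once (classify is a hypothetical function):

```python
import unittest

def classify(n: int) -> str:
    if n < 0:          # branch A
        return "negative"
    if n == 0:         # branch B
        return "zero"
    return "positive"  # branch C

class TestClassifyBranchCoverage(unittest.TestCase):
    # One test per branch gives 100% branch coverage of classify().
    def test_negative_branch(self):
        self.assertEqual(classify(-5), "negative")

    def test_zero_branch(self):
        self.assertEqual(classify(0), "zero")

    def test_positive_branch(self):
        self.assertEqual(classify(7), "positive")

if __name__ == "__main__":
    unittest.main()
```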
Grey Box Testing
Definition: Grey box testing is a software testing method that combines elements of both black box and white box testing. The tester has partial knowledge of the internal structure and implementation of the software, but not complete access to the source code. This approach allows testers to design more effective test cases by understanding some of the internal workings while still testing from a user perspective. Grey box testing is particularly useful for integration testing, web application testing, and distributed systems where the tester has access to database schemas, algorithms, or internal state information.
Types of Test Activities
Test Design
Definition: Test design is the activity of creating test specifications, test cases, and test data based on requirements and design documents. It involves analyzing the software requirements, identifying test conditions, and determining the most effective testing approach. Test design includes selecting appropriate testing techniques, defining test coverage criteria, and documenting test procedures. The output of test design is a set of test cases that can be executed to validate the software against its requirements.
Test Automation
Definition: Test automation is the process of using software tools and scripts to execute test cases automatically, reducing manual effort and increasing testing efficiency. It involves writing automated test scripts, selecting appropriate automation frameworks, and integrating tests into continuous integration/continuous deployment (CI/CD) pipelines. Test automation is particularly useful for regression testing, performance testing, and repetitive test scenarios. It improves test coverage, reduces human error, and enables faster feedback on code changes.
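A CI pipeline typically runs the whole suite with a command such as python -m unittest discover; the same run can be scripted so the build fails when any test fails (a tests directory containing the suite is assumed to exist):

```python
import unittest

# Discover and run every test under the (assumed) tests/ directory,
# then exit nonzero on failure so the CI build is marked as broken.
loader = unittest.TestLoader()
suite = loader.discover(start_dir="tests")
result = unittest.TextTestRunner(verbosity=2).run(suite)
raise SystemExit(0 if result.wasSuccessful() else 1)
```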
Test Execution
Definition: Test execution is the activity of running test cases against the software under test and recording the results. It involves setting up the test environment, executing test cases manually or through automation, capturing actual outputs, and comparing them with expected results. Test execution includes logging defects, tracking test status, and generating test reports. This activity is critical for identifying bugs and validating that the software meets its requirements before release.
Test Evaluation
Definition: Test evaluation is the activity of analyzing test results to determine the quality of the software and the effectiveness of the testing process. It involves reviewing test execution reports, analyzing defect patterns, assessing test coverage, and determining whether the software is ready for release. Test evaluation includes identifying areas that need additional testing, evaluating the severity of remaining defects, and providing recommendations to stakeholders. The output of test evaluation is a test summary report that informs release decisions.