Jun 19, 2023

The Importance of Tests – When, Why, and How to Write Them

What is a test?

A test is a piece of code that verifies the functionality of other code based on predefined scenarios. Essentially, a test describes the behavior of the code being tested, the conditions being checked, and lists the input parameters and expected results.
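
For example, a minimal test might look like this (a sketch in Python, where the add function is a hypothetical example and pytest is assumed as the test runner):

    # Production code: the unit under test (a deliberately simple example).
    def add(a, b):
        return a + b

    # Test code: a predefined scenario with input parameters and an expected result.
    def test_add_returns_sum_of_two_numbers():
        assert add(2, 3) == 5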

Why is testing necessary?

Testing is performed to ensure that the code works correctly, exactly as intended. It helps minimize issues and errors in the code.

Tests are essential for early detection of defects in software, allowing for their elimination before users encounter them.

In a large project, tests are a requirement because without them, there would be constant regressions. When multiple people work on a project, automated tests become crucial. Otherwise, we cannot have confidence that a new feature implemented by one person does not break some existing part of the project developed by someone else.

What are the different methods (approaches) of testing?

Manual Testing: Testers manually test the program following predefined scenarios.

Automated Testing: Specialized software automatically tests the code based on pre-programmed scenarios.

As mentioned earlier, automated tests are an integral part of development in a large project. This should be accepted as a fundamental principle: a feature or bug fix is considered complete only when tests have been written for it.

What types of tests are there?

There are various types of tests, but I will mention the main and most commonly used ones:

  • Unit Tests: These focus on testing individual units or components of the software in isolation. Unit tests verify that each unit performs as expected and can help catch issues early in the development process (a short sketch contrasting unit and integration tests follows this list).
  • Integration Tests: Integration tests verify the interaction and compatibility between different components or modules of the software. They ensure that the integrated system functions correctly when multiple components are combined.
  • System Tests: System tests validate the behavior and functionality of the entire software system. They test the system as a whole, including its interfaces, external dependencies, and interactions with other systems.
  • Acceptance Tests: Acceptance tests are conducted to determine whether the software meets the client's requirements and satisfies the intended use cases. These tests are often performed by the client or end-users to validate that the software fulfills their needs.
  • Performance Tests: Performance tests evaluate the software's performance and scalability under various load conditions. They assess factors such as response times, throughput, resource usage, and stability.
  • Load and Stress Tests: Load and stress tests assess the software's performance under heavy workloads or stressful conditions. These tests measure the system's response and behavior when subjected to high user loads, peak traffic, or resource-intensive operations, helping identify performance bottlenecks or scalability issues.
  • Security Tests: Security tests focus on identifying vulnerabilities and weaknesses in the software's security mechanisms. They help ensure that sensitive data is protected and the software is resistant to attacks.
  • Usability Tests: Usability tests focus on assessing the user-friendliness and intuitiveness of the software's user interface and overall user experience. These tests involve observing and gathering feedback from users while they perform specific tasks or scenarios, helping identify any usability issues or areas for improvement.
  • Compatibility Tests: Compatibility tests verify that the software functions correctly across different platforms, operating systems, web browsers, or devices. These tests ensure that the software works as intended in various environments and configurations.
  • Regression Tests: Regression tests are performed to ensure that previously implemented features and functionalities of the software have not been adversely affected by recent changes or updates. They help identify any unintended side effects or regressions introduced during the development process.
  • Smoke Tests: Smoke tests are quick, basic tests executed to check that the critical functionalities of the software work as expected; the term is often used interchangeably with sanity tests, although some teams distinguish the two. These tests are typically run after a new build or deployment to ensure that the system is stable enough for further testing.
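
To make the first two of these concrete, here is a minimal sketch in Python (hypothetical names, pytest assumed as the runner): the unit test checks one function in isolation, while the integration test checks that two components produce the right result together.

    # Unit test: exercises a single function in isolation.
    def apply_discount(price, percent):
        return price * (1 - percent / 100)

    def test_apply_discount_unit():
        assert apply_discount(100, 10) == 90

    # Integration test: verifies that two components work together correctly.
    class Cart:
        def __init__(self):
            self.prices = []

        def add(self, price):
            self.prices.append(price)

        def total(self, discount_percent):
            return apply_discount(sum(self.prices), discount_percent)

    def test_cart_total_integration():
        cart = Cart()
        cart.add(60)
        cart.add(40)
        assert cart.total(10) == 90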

When should tests be written, and when can we skip them?

In general, it is advisable to always write tests.

Even if the program is small and being developed solo, it is still better to write tests, at least for the most critical parts of the project. It is a good habit to build: treating test writing as an automatic part of developing software and implementing features.

When we neglect writing tests due to circumstances, we accumulate technical debt. Technical debt is like a snowball: it keeps growing over time and becomes increasingly difficult to manage. Therefore, it is important to be deliberate from the very beginning and weigh the cost of accumulating technical debt (from the lack of tests) against the benefit of moving faster.

For example, suppose we have tested a hypothesis and found a group of "hot" users with a specific pain point. These users are willing to pay for a solution, but we don't yet have a feature that addresses their needs. The feature is substantial, so writing tests for it will also take time, and in the meantime users may lose interest and find another product that meets their needs. In this case, it may be acceptable to take the risk and release a "raw" solution that attracts paying users. Afterward, however, it is important to allocate time to add tests to that feature, since users are actively relying on it.

Who should write tests?

It is not enough to simply write a test. Having a test for the sake of having one will not make your product better. It is important for a test to cover all possible use cases and demonstrate how the code will behave under various input parameters.

Different tests (and test scenarios) are written and implemented by different people:

  • Developers: Developers are often responsible for writing unit tests. They create tests to verify the correctness and behavior of individual units or components of the software they are working on. Developers may also contribute to writing integration tests to ensure the proper interaction between different components.
  • Quality Assurance (QA) Engineers: QA engineers specialize in testing software to ensure its quality and compliance with requirements. They may write various types of tests, including functional tests, non-functional tests, and regression tests. QA engineers focus on validating the software against specified criteria and identifying any defects or issues.
  • Test Engineers: Test engineers are dedicated professionals who focus on designing and implementing test strategies and plans. They collaborate with different stakeholders to define the testing scope, identify test cases, and write comprehensive tests. Test engineers may also automate tests using testing frameworks or tools.
  • Business Analysts: Business analysts play a crucial role in understanding business requirements and translating them into testable scenarios. They collaborate with stakeholders to gather requirements and define acceptance criteria, which are then used to write acceptance tests. Business analysts ensure that the software meets the specified business needs.
  • Product Owners/Clients: Product owners or clients may contribute to defining acceptance tests based on their understanding of the software requirements and desired outcomes. They provide input on the expected behavior and use cases, which help shape the test scenarios.

It's important to note that collaboration among these roles is often key to successful test creation. Developers and QA professionals, in particular, work closely together to ensure proper coverage and test effectiveness. The specific distribution of test-writing responsibilities may vary depending on the organization's structure, team composition, and established processes.

How should tests be written?

There are many different types of testing, and each has its own approach and nuances.

However, in general, regardless of the type of test, it is always important to understand the requirements of the task that the code being tested is intended to solve. Based on these requirements, we need to create usage scenarios. For each usage scenario, we need to define input parameters and expected output. This information can be used to create a test requirements document that QA engineers/developers can refer to when writing tests.

When writing tests, it is important to consider a variety of input data. Every function has one or more main usage scenarios. In addition to testing the main scenario, we should also consider "boundary cases" - situations in which the code may behave differently:

  • Handling an empty string.
  • Handling null values.
  • Division by zero (which causes an error in most programming languages).
  • Specific situations for particular algorithms.

Special attention should be given to validating input data: for example, the function may expect numbers but receive a string as input, or vice versa.
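
As a sketch of how the main scenario, boundary cases, and input validation might be covered together (a hypothetical safe_divide function, with pytest assumed as the runner):

    import pytest

    # Hypothetical function under test: validates its input and guards
    # against division by zero.
    def safe_divide(a, b):
        if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
            raise TypeError("both arguments must be numbers")
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b

    # Main scenario plus boundary cases, driven by a table of inputs
    # and expected outputs.
    @pytest.mark.parametrize("a, b, expected", [
        (10, 2, 5),    # main scenario
        (0, 5, 0),     # boundary: zero numerator
        (-9, 3, -3),   # boundary: negative input
    ])
    def test_safe_divide_valid_inputs(a, b, expected):
        assert safe_divide(a, b) == expected

    # Invalid input: wrong types, null values, and a zero divisor must raise errors.
    def test_safe_divide_rejects_non_numeric_input():
        with pytest.raises(TypeError):
            safe_divide("10", 2)

    def test_safe_divide_rejects_none():
        with pytest.raises(TypeError):
            safe_divide(None, 2)

    def test_safe_divide_rejects_zero_divisor():
        with pytest.raises(ZeroDivisionError):
            safe_divide(1, 0)

Parameterizing the main scenario keeps the table of inputs and expected outputs in one place, so new boundary cases can be added as one-line entries.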

In practice, developers often create the test scenarios and write the tests themselves. This is not a bad practice, but there are two important considerations: first, the developer must have a good understanding of the context and business logic behind the feature in order to cover all possible use cases. Second, a developer tends to test only the cases they already considered while writing the code, so gaps in the implementation easily become gaps in the tests. Therefore, ideally, test scenarios should be created by a separate person.

Moreover, under ideal conditions, every significant feature under test should have a test plan, and every defect a bug report. The test plan is a separate document that outlines the testing process, checklist, and test scenarios. A bug report is a document that describes the sequence of actions that leads to incorrect code behavior.

TDD (Test-Driven Development)

TDD, or Test-Driven Development, is an approach where you start feature development by writing tests for the feature first. Then you write the code that makes those tests pass, ensuring its logic is correct.

It is a beneficial approach that encourages engineers to write code with better test coverage, fewer regressions, and easier maintenance.

It's important to think carefully about test coverage, ensuring that all relevant scenarios are exercised. This is always important, not just when using the TDD approach.

The TDD approach typically follows a cycle known as the "Red-Green-Refactor" cycle. Here's an overview of the TDD process:

  • Write a Test (Red): In TDD, the development cycle begins by writing a failing test. This test focuses on a specific requirement or functionality that needs to be implemented. At this stage, the test is expected to fail because the corresponding production code has not been written yet.
  • Write the Minimum Code (Green): After writing the failing test, the next step is to write the minimum amount of production code necessary to make the test pass. The focus here is on making the test pass successfully without adding any additional functionality. This helps keep the codebase simple and focused.
  • Run the Test (Verify): Once the minimal production code is written, the test is executed to verify that it now passes successfully. The test should demonstrate that the implemented functionality meets the desired behavior defined by the test.
  • Refactor the Code (Refactor): After the test has passed, the code can be refactored to improve its design, readability, or performance. Refactoring does not introduce new functionality but aims to enhance the code's maintainability and overall quality. The existing tests act as a safety net to ensure that the code changes do not break any existing functionality.
  • Repeat the Cycle: The TDD process is iterative and follows the Red-Green-Refactor cycle. The next step is to write another failing test to drive the development of the next piece of functionality. The cycle is repeated for each new requirement or enhancement, gradually building up the codebase with well-tested and maintainable code.
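
Here is a minimal sketch of how this cycle might look in practice, assuming pytest and a hypothetical slugify function. In a real TDD session these pieces appear one at a time; the listing shows the state of the file after two full cycles:

    # Step 1 (Red): write a failing test first. At this point the slugify
    # function does not exist yet, so the test fails.
    def test_slugify_replaces_spaces_with_hyphens():
        assert slugify("hello world") == "hello-world"

    # Step 2 (Green): write the minimum code to make the test pass.
    # The first version was simply: return text.replace(" ", "-")

    # Next cycle (Red again): a new failing test drives the next requirement.
    def test_slugify_lowercases_and_trims():
        assert slugify("  Hello World  ") == "hello-world"

    # Step 3 (Green, then Refactor): the implementation after two cycles;
    # the existing tests act as a safety net while refactoring.
    def slugify(text):
        return text.strip().lower().replace(" ", "-")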

In summary:

It is always better to write tests; it's a good habit to adopt.

Large projects are unlikely to thrive without tests. Otherwise, you will find yourself constantly fixing different things in different places.

Testing is a crucial step both before and after rolling out to production.

Involving non-technical people and individuals from different departments in testing is highly beneficial because engineers often have a specific approach and may inadvertently overlook some use cases.


originally posted on linkedin.com
