Is software testing worth it?

Yes, and no. Well, as with everything, it depends. In this post I'll try to illustrate my experience with testing, first as a developer and then as a startup founder.

About me

Hi! I'm Simone, a full-stack developer and CTO with 15 years of experience coding in various industries, including web agencies, e-commerce, adult, and enterprise software. In this blog post, I'd like to share insights on software testing and how to determine appropriate methods for different business needs and stages.
Currently, I'm CTO and Co-founder of Bugpilot, a bug monitoring tool that helps SaaS teams catch bugs that slipped through the development, testing, and QA processes.

Introduction

Different kinds of testing

In the software development world, many testing practices have long been regarded as essential for ensuring the reliability and quality of software products. However, it is important to recognize that while these methodologies have their merits, they are not without their drawbacks.
This blog post aims to spark a conversation and shed light on the potential harm that TDD and QA can inflict on software teams. By acknowledging that there is no one-size-fits-all approach and that different businesses and stages require appropriate methods, we can begin to navigate the complexities of software development more effectively.
I acknowledge there are many different methodologies, common practices, and recommendations around software testing; in this post I chose to focus mostly on Test-driven development, and Quality Assurance.
Here's an overview, in case you're not familiar with the terms:
TDD
Test-driven development (TDD) is a software development method that involves writing automated tests before writing code. This helps catch issues early in the development cycle, ensuring that the code is thoroughly tested and meets business requirements. Developers first define the expected behavior of a new feature with a test, then write the code to pass the test, and finally optimize the code through refactoring. This approach results in well-designed, maintainable code.
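To make that loop concrete, here is a minimal sketch of red-green-refactor in TypeScript with Jest-style tests; the coupon example and its names are invented purely for illustration.

```typescript
import { test, expect } from "@jest/globals";

// Step 1 (red): write the tests first, describing the expected behavior
// before any implementation exists. The coupon example is hypothetical.
test("isCouponActive rejects expired coupons", () => {
  expect(isCouponActive({ code: "SUMMER", expiresAt: new Date("2020-01-01") })).toBe(false);
});

test("isCouponActive accepts coupons that have not expired yet", () => {
  expect(isCouponActive({ code: "SUMMER", expiresAt: new Date("2999-01-01") })).toBe(true);
});

// Step 2 (green): write the simplest code that makes both tests pass.
// Step 3 (refactor): improve the implementation while the tests stay green.
interface Coupon {
  code: string;
  expiresAt: Date;
}

function isCouponActive(coupon: Coupon): boolean {
  return coupon.expiresAt.getTime() > Date.now();
}
```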
QA
QA ensures software meets quality standards before release. It identifies and fixes defects through manual and automated testing, such as verifying functional requirements and checking performance. This ensures reliable, stable software that meets business and end-user needs.
 

A couple of questions for you!

My goal with this post is to spark a conversation. I can't stress enough that I strongly believe tests are needed in many cases. Think of medical devices, aircraft software, control systems, banking, and much more, including software used by billions of people where the color of one button can make a significant difference in profit.
Software development requires a balance between structure and flexibility, and blindly following testing principles without considering the context can lead to suboptimal outcomes. Code libraries probably benefit from 99% coverage, but does your software need 99% coverage as well?
 
I prepared a couple of questions. Try to answer yes or no, then ask yourself why.
  • Do you really need to test your UI components?
  • Do you need E2E testing for that “update profile picture” functionality? (See the sketch after this list for what such a test involves.)
  • Do you fail QA and delay a release if there's a minor inconsistency between Figma designs and actual UI? What's the boundary?
  • Should I fire John for refusing to write tests?
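To make the E2E question concrete, here is roughly what a browser-level test for an “update profile picture” flow looks like, sketched with Playwright. The URLs, selectors, and fixture file are made up; your app will differ.

```typescript
import { test, expect } from "@playwright/test";

// Hypothetical flow: log in, upload a new avatar, verify it shows up.
test("user can update their profile picture", async ({ page }) => {
  await page.goto("https://app.example.com/login");
  await page.fill('input[name="email"]', "user@example.com");
  await page.fill('input[name="password"]', "correct-horse-battery-staple");
  await page.click('button[type="submit"]');

  await page.goto("https://app.example.com/settings/profile");
  await page.setInputFiles('input[type="file"]', "fixtures/avatar.png");
  await page.click('button:has-text("Save")');

  // The preview should now show the freshly uploaded image.
  await expect(page.locator('[data-testid="avatar-preview"]')).toBeVisible();
});
```

Keeping a test like this green through every redesign, selector change, and auth tweak is real, recurring work; whether that work pays off for this particular feature is exactly the point of the question.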
 

Chapter 1. My case against Test-Driven-Development

 

Challenging the dogma

Test-driven development (TDD) has gained an almost-religious following, often accompanied by the dogmatic belief that it is the only path to success. This chapter aims to explore the drawbacks of TDD and shed light on the rigidity trap, time constraints, false sense of security, and the challenges posed by testing silos. Drawing from personal experience, it challenges the notion that TDD is the ultimate solution for all scenarios and encourages a more nuanced approach to testing.
As a CTO, I have encountered colleagues who staunchly insisted on TDD for even the most trivial functions. The prevailing dogma seemed to be, "You must do TDD, or you're an idiot!" This unwavering belief in TDD as the only acceptable approach often made me question my own judgment and competence. I distinctly recall an incident where I felt like a fool for not adhering to TDD when writing a simple two-line function to filter an array.
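For a sense of scale, think of something like this (the names and data are made up); the test-first test ends up restating Array.prototype.filter almost word for word.

```typescript
import { test, expect } from "@jest/globals";

// The kind of trivial two-line function in question; names are made up.
function activeUsers(users: { name: string; active: boolean }[]) {
  return users.filter((user) => user.active);
}

test("activeUsers keeps only active users", () => {
  expect(
    activeUsers([
      { name: "Ada", active: true },
      { name: "Bob", active: false },
    ])
  ).toEqual([{ name: "Ada", active: true }]);
});
```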
 
This personal experience highlighted one of the key pitfalls of TDD—the rigidity trap. The dogmatic adherence to writing tests first can sometimes lead to a rigid code structure that is difficult to modify or refactor. While TDD can be invaluable for complex or critical components, it may not always be the most efficient or appropriate approach for every situation.
 

Is it really worth the time?

A significant challenge posed by TDD is the time constraint it imposes on the development process. Writing comprehensive tests for every piece of code can be time-consuming, especially for trivial functions that have minimal impact on the overall system. In fast-paced environments where quick iterations and timely delivery are crucial, the additional overhead of TDD can slow down the development cycle and hinder agility. Balancing thoroughness and efficiency is essential, and there are scenarios where a more streamlined testing approach may be more appropriate, allowing for faster iterations and quicker responses to market demands.
Moreover, the complexity and imperfection of writing tests that involve interactions with external dependencies or complex systems can further compound the time constraints of TDD. In such cases, developers often need to create mock objects or stubs to simulate the behavior of these dependencies during testing. While mocks can be valuable tools for isolating code and reducing dependencies, they can also introduce additional overhead and complexity.
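As a sketch of what I mean, here is a test where an external payment provider is mocked away entirely; the client interface and its response shape are invented for illustration.

```typescript
import { test, expect } from "@jest/globals";

// A hypothetical external dependency: a payment provider client.
interface PaymentClient {
  charge(amountCents: number): Promise<{ status: "succeeded" | "failed" }>;
}

async function checkout(client: PaymentClient, amountCents: number): Promise<boolean> {
  const result = await client.charge(amountCents);
  return result.status === "succeeded";
}

test("checkout succeeds when the provider accepts the charge", async () => {
  // A hand-rolled stub that always answers "succeeded". The real provider
  // may time out, rate-limit, or return an error shape this test never sees.
  const stubClient: PaymentClient = {
    charge: async () => ({ status: "succeeded" as const }),
  };
  expect(await checkout(stubClient, 4999)).toBe(true);
});
```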
The reliance on mocks may lead to an imperfect representation of real-world interactions, as it can be challenging to accurately replicate the behavior of external systems or third-party components. This can introduce a level of uncertainty into the testing process, potentially resulting in false positives or false negatives. Tests are passing, so I can have a good night's sleep, right? Right?
 

Tests are passing, I can sleep safe and sound!

There's an inherent but often overlooked danger in relying solely on tests: a false sense of security. While testing is undoubtedly essential for identifying and preventing defects, it is not a foolproof solution.
In the realm of web development, there is a multitude of factors that can impact the user experience, and testing alone cannot cover all the intricacies and variations that exist in the real world. One such factor is the diversity of end-user devices, including different screen sizes, resolutions, operating systems, and browsers. Each combination presents a unique set of challenges that can affect how the software functions and appears to users.
Consider the vast array of devices and configurations: smartphones, tablets, laptops, desktops, running on iOS, Android, Windows, or macOS, using various versions of browsers like Chrome, Safari, Firefox, or Internet Explorer. Each device, operating system, and browser combination may render web content differently, and user interactions may vary as well. It is virtually impossible to anticipate and account for every possible device and configuration, making it challenging for tests to provide complete coverage.
Furthermore, user personas and profiles add another layer of complexity. Your software may target a diverse audience with different preferences, behaviors, and accessibility needs. Testing can help uncover issues that arise in typical scenarios, but it may miss edge cases or specific user interactions that deviate from the norm. For example, users with visual impairments who rely on assistive technologies may encounter usability challenges that are difficult to capture through automated tests alone.
In addition to the technical variations, user context and real-world scenarios play a crucial role in shaping the user experience. Factors such as network conditions, bandwidth limitations, or concurrent usage can impact the performance and reliability of the software. While tests can provide a controlled environment, they may not accurately simulate the diverse network conditions and usage patterns that users encounter in their daily lives.
 

Chapter 2. QA & The need for speed

I've witnessed firsthand the challenges posed by “strict” Quality Assurance practices, particularly in smaller companies that must move swiftly to stay ahead of their competitors.
Especially at an early-stage startup, you're quite likely to throw away a whole feature soon after shipping it if your customers are not using it. So, was that whole week of writing and refining unit tests worth it?
Let me share my personal experiences and shed light on the dangers of getting caught up in perfectionism, especially when minor visual or presentational aspects become the focus of QA efforts.
 
 
In my previous role at a startup, we faced a constant battle between delivering features quickly and ensuring impeccable quality. There were instances when our release cycle was unnecessarily delayed due to minute issues such as a misaligned CSS margin, an incorrect font choice, or a missing line break. While attention to detail is important, I began to question the impact of obsessing over these cosmetic imperfections on our ability to stay ahead in the market.
One of the risks of excessive QA is the prioritization of perfection over practicality. In our pursuit of flawlessness, we often found ourselves investing significant time and resources into fixing minuscule visual glitches. While it's essential to maintain high standards, I started to realize that dedicating extensive effort to these minute details might be counterproductive, diverting our focus away from core functionality and user experience improvements that truly mattered to our customers.
The danger became apparent as we observed the consequences of an overly cautious QA approach. Our team began to exhibit risk-averse behavior, opting for a slow and meticulous release process. While the intention was to deliver a near-flawless product, we inadvertently stifled innovation and hindered our ability to respond quickly to market demands. As a smaller company, we relied on our agility to iterate rapidly and outmaneuver larger competitors. However, excessive QA practices were holding us back, impeding our ability to seize opportunities and stay ahead of the curve.

Chapter 3. Customers are the best testers

The financial implications of prolonged release cycles became evident. Missed market windows, delayed revenue generation, and potential customer churn began to take a toll. As a small company with limited resources, we couldn't afford to dawdle. We needed to leverage our agility and speed to seize opportunities and maintain a competitive edge. The time spent on perfecting minor details needed to be balanced against the need for rapid iteration and market responsiveness.
 
While testing helps identify and prevent defects, it is equally important to embrace user feedback as a valuable source of information. Users' experiences, preferences, and suggestions can provide insights that go beyond what testing alone can uncover. By fostering a feedback loop with users, actively listening to their needs, and incorporating their input into the development process, we can create a user-centric product that meets their expectations.
Rather than relying solely on extensive internal testing, involving users in the testing process can be highly beneficial. Early user testing, such as beta testing or usability studies, allows for real-world scenarios and user interactions to be observed. This user-centric approach helps identify pain points, usability issues, and unanticipated problems that may not be caught through internal testing alone. Incorporating this feedback early on can greatly enhance the user experience and satisfaction.
Unfortunately, I have personally witnessed a strong imbalance in many software teams. Here's a question for you: should QA fail for a minor UI inconsistency?

Chapter 4. “Users will leave if they encounter a bug!”

Research consistently highlights the negative impact of broken functionality on user retention. Studies have shown that users are less likely to continue using a product if they encounter frequent bugs, crashes, or errors that disrupt their experience. According to a survey by Qualtrics, 79% of users will abandon an app after just one or two instances of poor performance. These findings emphasize the critical role that functional stability plays in retaining users and building long-term engagement.
When users encounter broken functionality, it erodes their trust in the product and the development team behind it. Even if the issues are eventually resolved, users may develop a negative perception and be reluctant to return. Research by the Baymard Institute revealed that 42% of users lose trust in a website if they experience functional errors or glitches. This loss of trust can significantly impact user loyalty and their willingness to continue using the product.

But… is this still true in 2023?

While it's true that users may encounter bugs from time to time, it is important to distinguish between minor bugs and critical functionality issues. Users are nowadays more forgiving of minor bugs that do not significantly impact their overall experience; they have learned to accept bugs as an inevitable part of software development and of daily life.
However, when it comes to broken core functionality that hinders their ability to use the software as intended, users are likely to be less forgiving and may seek alternatives.
B2B bugs matter
In B2B scenarios, broken functionality can have severe consequences for users and their businesses. It can result in delayed project timelines, missed deadlines, and even financial losses.
Users may become frustrated and angry when they encounter bugs that prevent them from accomplishing their work. Their loyalty to a software product is closely tied to its reliability and ability to help them succeed in their professional responsibilities. Low reliability should equal higher churn.
However, that's not always the case. Once a whole organization adopts a technology, is it really that easy to make the whole company switch to a competitor's solution? Unlikely.
E-commerce has (more) loyalty challenges
In the e-commerce space, users have numerous alternatives readily available at their fingertips. If a website or app fails to function properly or provides a frustrating experience, users can easily switch to a competitor's platform to make their purchases. E-commerce users expect a smooth, hassle-free experience that allows them to find products, complete transactions, and receive support without unnecessary obstacles. Broken functionality or bugs can lead to abandoned shopping carts, decreased customer satisfaction, and lost business opportunities.
 

Chapter 5. Are TDD & QA the only solution?

Obviously, no. While tests play a crucial role in identifying and preventing some defects, there are additional measures that can be taken to minimize the impact of bugs on users. Here are a few approaches I have studied:

Monitoring and Error Tracking

Implementing robust monitoring and error tracking systems allows for proactive identification and resolution of issues. Real-time monitoring can help detect anomalies, performance issues, and errors, allowing for prompt remediation before they impact users. Error tracking enables the capture of error details and helps prioritize bug fixes based on their impact on users.
Tools such as Sentry, Rollbar, Bugsnag, and Bugpilot help teams automatically detect coding errors and problematic user behavior, so they can proactively address issues.
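For instance, wiring a browser error tracker into an app is usually only a few lines. This sketch uses Sentry's browser SDK; the DSN is a placeholder, and the failing function is a stand-in for any code that might throw in production.

```typescript
import * as Sentry from "@sentry/browser";

// Initialize once at application startup; the DSN below is a placeholder.
Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });

// A stand-in for any operation that might fail in production.
function riskyOperation(): void {
  throw new Error("something unexpected happened");
}

try {
  riskyOperation();
} catch (err) {
  // Report handled errors explicitly; unhandled ones are captured automatically.
  Sentry.captureException(err);
}
```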

User Feedback and Support

Actively encouraging and collecting user feedback provides valuable insights into usability issues, bugs, and areas for improvement. Promptly addressing user-reported issues and providing support demonstrates a commitment to resolving problems and maintaining a positive user experience.
Tools such as Canny, Hotjar, and Bugpilot help teams easily collect feedback from their users.
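If you prefer to start without a third-party widget, even a tiny endpoint that accepts free-form feedback goes a long way. This is a hypothetical sketch using Express; in practice you would persist the feedback and link it to the reporting user or session.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical endpoint: accept free-form feedback and log it for triage.
app.post("/api/feedback", (req, res) => {
  const { message, email } = req.body ?? {};
  if (!message) {
    res.status(400).json({ error: "message is required" });
    return;
  }
  console.log(`Feedback from ${email ?? "anonymous"}: ${message}`);
  res.status(204).end();
});

app.listen(3000);
```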

Documentation and User Education

Clear and comprehensive documentation, along with user education materials, can help users navigate the software effectively and minimize the risk of user-induced errors. Providing resources that explain common issues and troubleshooting steps empowers users to resolve minor problems independently.
 

Conclusion

When it comes to reducing the likelihood and impact of errors on users, a multi-faceted approach is necessary. While testing plays a vital role in preventing defects and ensuring software quality, it should not be the sole focus. A combination of preventive and mitigative measures is crucial to minimize user impact and maintain a positive user experience.
Testing acts as a crucial preventive measure by identifying issues early in the development process. Thorough testing, including unit tests, integration tests, and end-to-end tests, helps catch bugs (to some extent) and ensures the stability and functionality of the software. But over-testing and overly strict processes are likely to damage the company in the long run.
Still, despite our best efforts, errors will occasionally slip through the testing net. Therefore, it is equally important to have mitigation strategies in place. With such strategies, software teams can quickly detect and address errors, minimizing their impact on users and swiftly resolving any issues that arise.
Recognizing that no software is entirely bug-free, it is essential to create an environment that encourages user feedback and provides effective customer support. By actively listening to user reports and promptly addressing their concerns, software teams can maintain a positive relationship with their users and foster trust and loyalty.
 

Get automatic notifications when coding errors occur, network requests fail, or users rage-click. Bugpilot provides you with user session replay and all the technical information needed to reproduce and fix bugs quickly.

Never Miss a Bug Again!

Start a free trial today