The Scaling Problem with Manual Tests

The Problem

You are getting ready to release your product.  All features are finished, and it is time to test.

Time, Cost, and Quality

If a lot of your tests are manual, someone must run through a set of manual steps, which takes time.  Only after all of those steps pass is the product ready to be deployed.

That’s all well and good until a new release needs to be made.  Perhaps it is a bug fix for the release that just occurred.

To be thorough, you may need to run all those manual tests again.

Which isn’t too bad, unless you have 10 or 20 different versions of that product… and 5 other products that also need to be tested for that release.

Manual tests quickly become a scaling problem.  Adding new testers to the team for each new product or version is not feasible or cost-effective: with every new version that receives updates, you need another (or perhaps a fraction of another) tester, and the same goes for every new product.  Each new tester brings a learning curve, interviewing time, additional wage costs, and so on.

How are you going to combat the skyrocketing cost of testing all the different versions of every different product?

Automated Tests Scale Much Better

Let’s take the same scenario as above, where we have 10 different versions of 5 different products.  It is time to release version 2.1.1, which is a release for every product.  That means 5 different products need to be tested.

There are several different ways this release can be tested:

  • Serially, testing one product, then the next, then the next, etc.
  • In parallel, with a different tester testing each different product
  • In parallel, with one or zero testers kicking off automated scripts for each product (a minimal sketch of this follows below)
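The third option is what scales.  Here is a minimal sketch of kicking off per-product test scripts in parallel; the product names, the /opt/releases path, and the run_tests.sh entry point are all invented for illustration:

    # Kick off each product's (hypothetical) test script in parallel and
    # report which ones passed.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    PRODUCTS = ["product_a", "product_b", "product_c", "product_d", "product_e"]

    def run_product_tests(product):
        # Each product is assumed to ship a run_tests.sh entry point.
        result = subprocess.run(
            ["./run_tests.sh"],
            cwd=f"/opt/releases/2.1.1/{product}",
            capture_output=True,
            text=True,
        )
        return product, result.returncode == 0

    with ThreadPoolExecutor(max_workers=len(PRODUCTS)) as pool:
        for product, passed in pool.map(run_product_tests, PRODUCTS):
            print(f"{product}: {'PASS' if passed else 'FAIL'}")

With this approach, adding a sixth product means adding one entry to a list, not hiring another tester.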

Continuous Feedback

With a lot of automation and a continuous integration setup, tests run automatically each time changes are committed to the code repository.
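In practice, a CI job often boils down to invoking a single test entry point and keying off its exit code.  Here is a minimal sketch using Python's built-in unittest discovery (the "tests" directory name is an assumption):

    # run_tests.py - a minimal entry point a CI job could invoke on every
    # commit. Assumes test files live in a "tests" directory.
    import sys
    import unittest

    def main():
        suite = unittest.defaultTestLoader.discover("tests")
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        # A nonzero exit code tells the CI server to fail the build.
        sys.exit(0 if result.wasSuccessful() else 1)

    if __name__ == "__main__":
        main()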

Larger system tests can perhaps be run nightly instead.

Of course, with a lot of automation, either a tester or the development team needs to be responsible for maintaining those automated tests.  But as far as spending extra time or effort to run and test the system goes, it is all automatic.

Once set up (which is a big feat, so I don’t mean to trivialize it), testing the software can be done with no extra effort from anyone.  If there are failures, there will be effort to analyze and fix the issues… but if everything passes, a release is only an automated test run away.


Implementation Can Be Very Different…

Depending on your application, the implementation of an automated testing framework can be wildly different.  Nevertheless, there are usually a few distinct levels of testing.

Strategy

  • Unit testing: these tests run automatically against individual functions in your code (white-box testing); see the sketch after this list.
  • Integration testing: these tests run against several modules working together.  If your system is deployed on specialized hardware, this level involves getting the software to work properly on that hardware.
  • System testing: this involves potentially mimicking a customer’s setup and making sure requirements are met.
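To make the unit-testing level concrete, here is a minimal pyunit (unittest) sketch.  The apply_discount function under test is invented purely for illustration:

    import unittest

    def apply_discount(price, percent):
        # The (illustrative) function under test: reduce price by percent.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTests(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(19.99, 0), 19.99)

        def test_invalid_percent_raises(self):
            with self.assertRaises(ValueError):
                apply_discount(10.0, 150)

    if __name__ == "__main__":
        unittest.main()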

Depending on how far you want to go with automation, every one of those levels can be automated… completely in some cases, but at least to a certain extent in all of them.

How awesome would it be to have a customer’s environment set up in a lab and new code automatically deployed and run against it on a nightly basis?  That is where both quality and quick releases come from.  Doing this manually would never give you the same benefits as an automated test framework, because you simply cannot run manual tests as many times as automation can.

While unit tests look similar from product to product (since they should use a language-specific framework such as cppunit or pyunit), integration and system tests can look very different.  This is where implementations will vary between products and companies.

Perhaps some framework makes sense.  Or perhaps a simple bash script will suffice.  It really depends on the use case.

My recommendation for getting started is to begin with unit tests.  The framework already exists, and tying those tests into continuous integration is generally easier than it is for integration and system tests.

In addition, gaps in unit test coverage will become apparent as some dependencies prove too difficult to exercise at that level.  That’s when it makes sense to add integration or system tests to fill those gaps.
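For example, code that talks to real hardware is hard to exercise from a unit test.  You can stub the dependency with unittest.mock to cover the surrounding logic, but the real interaction still needs an integration test; that is the gap.  A minimal sketch (the sensor API here is invented):

    import unittest
    from unittest.mock import Mock

    def read_temperature(sensor):
        # Converts a raw reading from a (hypothetical) hardware sensor
        # into degrees Celsius.
        raw = sensor.read_raw()
        return raw / 10.0

    class ReadTemperatureTests(unittest.TestCase):
        def test_converts_raw_reading(self):
            # The real sensor is stubbed out, so only the conversion logic
            # is covered here. Talking to actual hardware is left to an
            # integration test.
            fake_sensor = Mock()
            fake_sensor.read_raw.return_value = 215
            self.assertEqual(read_temperature(fake_sensor), 21.5)

    if __name__ == "__main__":
        unittest.main()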

Conclusion

I hope I have made a good enough case to convince you to start using automated tests in your development.  Unless you want to spend a majority of your time debugging or testing, you need some automated tests for scaling purposes.

If you have a lot of existing code without automated tests, use the Boy Scout Rule: leave the code in a better state than when you first started modifying it.  Over time, the code quality will reach a point where it doesn’t have as many code smells.

If you want to get started, I recommend beginning with unit tests and building up to higher-level tests as you become more familiar with the practice.  Over time, you will get a better grasp of how to perform integration and system level testing on your code.
