Creating A Testing Culture


In the past, I’ve promised additional articles about testing software, and while I still plan to offer a more technical, in-depth look at the topic in future posts, for this one I’d like to talk about the idea of “testing culture”.  When I was writing about characterization tests, I mentioned that a lot of programming teams ‘talk a good game’ when it comes to automated testing, but when it comes time to actually execute and get decent test coverage on their software, they tend to fall short.  The truth is, though, they’re not usually falling short because they lack knowledge about strategies like characterization testing or acceptance tests.  In my experience, the problem is the absence of a workplace culture that places a premium on solid software testing.

It doesn’t matter how many cool testing tools and techniques you’re aware of or willing to use.  If you can’t commit yourself or your team to using them, and if you can’t make that use an integral part of how you get work done, your test coverage is going to continue to suffer, and by extension, so will your software. An oft-used argument against testing is the pressure to release software as quickly as possible and the perception (however inaccurate) that writing tests wastes valuable time. The reality is that not testing your software leads to bugs and failures that generally take much longer to fix than the tests would have taken to write. For those of you interested in avoiding that fate, read on for some tips for building a ‘testing culture’ on your team.

Get the Tools

You can’t use what you don’t have, so the first step to instituting a testing culture on your team is to make sure the tools you need are available. I have my preferences for every language/platform I work in, but you might have your own, or, depending on your platform, you may not have many choices.  In the Ruby/Rails community there are a ton of options for testing your software, but the popular front-runners these days seem to be RSpec and MiniTest.  On the PHP side, PHPUnit seems to be the go-to solution. For Java, JUnit is probably worth a look.  Whatever you choose, make sure it’s available to your project.
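For a Ruby project, “making it available” can be as simple as declaring the framework in your Gemfile. Here’s a minimal sketch, assuming you use Bundler (the choice of gems here is just an example):

```ruby
# Gemfile -- declare a testing framework so the whole team gets it
# automatically when they run `bundle install`.
group :test do
  gem 'rspec'        # BDD-style framework; running `rspec --init` scaffolds spec/
  # gem 'minitest'   # a lighter-weight alternative, if you prefer
end
```

After a bundle install, running rspec --init generates a .rspec file and a spec/spec_helper.rb to build on, so nobody on the team has an excuse not to write that first test.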

Be The Example

Testing tools are easy to ignore, even when they’re available for developers to use. If you actually cut code on your project (versus being a product lead or something like that), then you should be religiously adding test coverage to your code.  For one thing, it’s a lot easier for the uninitiated to ‘get into the groove’ of testing if there are real, practical examples of tests already present in the codebase.  Even a great book or tutorial on testing isn’t going to be as effective at pushing others to jump in and start writing tests themselves as working test code in their own project.
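To illustrate what those seed examples might look like, here’s a small spec in the RSpec style. The Invoice class and its API are hypothetical, stand-ins for your own domain objects; the point is that a newcomer can read it and copy the pattern:

```ruby
# spec/invoice_spec.rb -- a small, readable test to seed the codebase with.
# Invoice is a hypothetical domain object; substitute your own.
require 'spec_helper'
require 'invoice'

RSpec.describe Invoice do
  describe '#total' do
    it 'sums the line items and applies the tax rate' do
      invoice = Invoice.new(line_items: [100, 250], tax_rate: 0.10)
      expect(invoice.total).to eq(385)  # (100 + 250) * 1.10
    end
  end
end
```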

Even if you’re not a programmer, if you’re in a position to push for testing on the project you’re involved in, you can evangelize for it without necessarily demanding anything.  Trying to impose culture by mandate is almost certain to fail.  At least initially, expressing enthusiasm without handing down orders is going to be far more effective.  Now, that being said, if you’re the one running the show and you want to require that project code be tested, then by all means, make it a requirement. Still, simply making it a rule without justifying it or demonstrating the value of the practice will lead to lax testing standards and a general feeling that testing is a ‘chore’ rather than something integral and important to the overall development process.

Set the Bar

This bit of advice comes down to one concrete suggestion: use code-coverage tools. If your language/framework/platform has testing tools available to it, it almost certainly has code-coverage tools available as well.  For Ruby/Rails, you should take a look at simplecov.  PHP_CodeCoverage integrates with PHPUnit and does a similar job. Check out EMMA if you’re doing Java development.  If you’re not sure what I mean when I say ‘code-coverage’, it’s simple: code-coverage is the percentage of your application’s code that is exercised by automated tests. Tools like simplecov, PHP_CodeCoverage and EMMA provide insight into how much testing your project actually has.
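Wiring a coverage tool in is usually trivial. Here’s a minimal simplecov sketch for an RSpec suite; the key detail is that SimpleCov has to start before your application code is loaded, or those files won’t be instrumented:

```ruby
# spec/spec_helper.rb -- start coverage measurement before anything else.
require 'simplecov'
SimpleCov.start do
  add_filter '/spec/'   # don't count the tests themselves toward coverage
end

# ...require your application code below this point...
```

By default, each test run then writes an HTML report to coverage/index.html that anyone can open in a browser.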

A healthy software project maintains a minimum of 80% code-coverage at any given time. I personally like to shoot for a mark above 90%, but I’d forgive anything above 80%.  The key to using tools like this is to make sure that the results are well-known. The absolute best way to use code-coverage tools is to incorporate them into a continuous-integration process of some kind; in other words, a software build doesn’t pass unless code-coverage is above a certain threshold.  If you’re not in a position to set up something like that, you can still make a regular habit of running your coverage tools and publicizing the results.  Announce coverage percentages to team members, maybe even in the same breath as you’re telling them about the new testing tools you’ve added to the project.
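If you do have continuous integration, simplecov can enforce the threshold for you. Extending the spec_helper.rb sketch above, setting a minimum makes the test process exit non-zero when coverage falls below the bar, which any CI server will treat as a broken build:

```ruby
# spec/spec_helper.rb -- fail the run when coverage drops below the bar.
require 'simplecov'
SimpleCov.minimum_coverage 80   # pick whatever threshold your team agrees on
SimpleCov.start do
  add_filter '/spec/'
end
```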

Most of these tools provide visual feedback…that is, charts and graphs that express coverage percentages.  Speaking for myself, having that visual indicator is a great motivator for increasing coverage, so if you can manage it, put the results where they can be seen by others.

Run the Tests

Writing tests will feel pointless if those tests aren’t being run, and like your code-coverage results, the results of test runs should be made easily available. Again, the absolute best way to accomplish this is to institute some kind of continuous-integration system.  I’ll be writing an article soon about my experience using Heroku, BitBucket and Jenkins to set up a continuous-integration environment, but whatever combination of tools accomplishes this for you will pay for itself many times over.  If you can’t get continuous integration set up, you still have options.
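Whatever CI server you land on, it helps to give it one boring entry point. Here’s a sketch of a Rakefile for an RSpec suite; Jenkins (or a teammate) just runs bundle exec rake and reads the exit status:

```ruby
# Rakefile -- one command that runs the whole suite.
require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:spec)   # defines a `rake spec` task

task default: :spec                # so a bare `rake` runs the tests
```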

If you want to go the extreme route, you can write pre-commit hooks for Git or SVN that run your test suite before code is committed (and abort the commit if the tests fail).  If you can’t or don’t want to go quite that far, your best bet is to run the tests yourself and be loud when they fail.  Even for a team that isn’t totally sold on the idea of testing, a test that proves their code is broken is going to be difficult to ignore. Every time a test catches a bug or problem, you’ll be providing evidence as to why the whole team should be on the testing bandwagon.
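For Git, a pre-commit hook lives at .git/hooks/pre-commit and cancels the commit whenever it exits non-zero. Here’s a sketch in Ruby, assuming your suite runs with bundle exec rspec (remember to make the hook file executable with chmod +x):

```ruby
#!/usr/bin/env ruby
# .git/hooks/pre-commit -- block commits when the test suite is red.
puts 'Running test suite before commit...'
unless system('bundle exec rspec')   # false when the suite exits non-zero
  puts 'Tests failed -- commit aborted.'
  exit 1   # a non-zero exit from this hook makes Git abort the commit
end
```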

Concluding

I think the main takeaway from this article should be that instituting a testing culture on your team really comes down to individual action, at least to start with. As Gandhi said, “Be the change you wish to see in the world…”. As your individual actions start to pay dividends, you’ll get natural buy-in to your techniques, and the culture of software testing will grow organically, which is really the only way a culture can develop in the first place.  Once that culture has developed and everyone (all the way up to the managers and the people who control resource allocation for your team) is on board, you’ll find that your development processes and your software have improved dramatically as a result.