Agility And Software Testing

For as long as I’ve been writing Rails applications, I’ve been doing it in so-called “Agile” shops.  Within the Ruby community, I’ve always had the sense that Agile was one of its favorite buzzwords.  I’m so convinced of that that I’m willing to bet you’ve got the word “agile” on your resume at least once, if you’ve done any professional Rails development. Even if you’ve managed to keep it off your resume, have you ever responded to a Ruby on Rails job ad that didn’t include the word “Agile”?  At least in my experience, the idea of Rails development goes hand-in-hand with “Agile” development.

With that said, there are many ways to “do Agile”, and some of the most popular even have names: XP and Scrum come to mind. Even if your shop’s particular set of practices doesn’t have a fancy name or a lot of adherents, that doesn’t mean it’s not Agile. In fact, the single most valuable piece of advice I ever heard about Agile development was that if you’re doing it by the book, you’re doing it wrong. The very word “Agile” implies flexibility, adaptability, and speed. Agile is supposed to allow us to produce better software faster, and it umbrellas any process that allows us to do that.

One of the key components of Agile software development has always been automated testing.  Automated testing is a big part of what allows for the tight feedback loop that is the cornerstone of Agile development.  As we build out features, we write tests that help us to quickly validate that the feature is working as intended…lather, rinse, repeat, and then, finally, release.  Testing is such a big deal that Ruby and Rails have a ton of libraries designed for it, including one baked right into the core language.  The emphasis on testing is apparent in the explosion of CI software and code-coverage tools as well. Aside from the generally understood idea that we should be testing at all, though, what does it mean to do “Agile testing”?

If you come from the XP camp (or from one of any number of other “purist” camps), you’ll say that the definition of Agile testing is “test-first”.  Red, Green, Refactor, so says the mantra, right?  My disagreement with TDD as a definition for “Agile testing” is that it leaves no room for argument or adjustment.  Thou shalt write thy tests first!  That’s not Agile; that’s actually anti-Agile.  Better, in my opinion, would be, “You should write some tests first”. Some tests necessarily come later, and some tests that could be written first are just easier to write later.  If Agile is about flexibility, adaptability, and speed, then no rule or process that creates an impediment to reaching the overall goal of software development should be considered Agile.  We shouldn’t be interested in creating a bunch of arduous work for ourselves if we don’t have to. Also, we should remember that the point of software development is to write working programs, not to write a thorough set of tests. Contrary to purist belief, the former is not necessarily synonymous with the latter.

I feel like I should point out that I’m not advocating skipping testing altogether. Testing your software isn’t an “Agile” practice, it’s just a smart one, and not doing it is, conversely, dumb.  I’m simply saying that TDD as a definition for “Agile testing” just doesn’t work.  I think that the “true” definition, if in fact we even want to admit that there might be a true and complete definition, doesn’t come down to when or how you test (TDD, BDD, whatever…), but rather what you test.  It’s about finding the path that gets you the most valuable test cases with a minimum of effort.  Even if you adhere to the idea that your test code should heavily outweigh your production code (something I don’t necessarily disagree with), your test code should still be as brief and pointed as possible.  Wasted effort is wasted effort, no matter what you waste it on.  Wasted effort is not Agile.

So, what is “Agile” testing?  For starters, I’d say it’s whatever combination of processes you and your team come up with that allows you to quickly and efficiently achieve the goal of releasing good, working software. As far as I’m concerned, in order to be good, software must have _tests_, but beyond that, it’s probably negotiable. _That’s_ Agile thinking.  With that said, I don’t think “just have some tests” is a good enough yardstick either.  So, I’ve put together this list of what I think the idea of Agile testing implies:

It’s Driven By High-Level Interactions

One of the most common mistakes I see newer developers make is the tendency to write tests at an extremely low level before, or even instead of, writing them at a high level.  The thinking isn’t surprising.  Programmers are predisposed to envision systems as a bunch of small things stacked together to make larger things.  It follows from that logic that a bunch of small, low-level tests will eventually combine to derive and validate the behavior of the entire program, and encompass all of its possible interactions with human users, APIs, etc.  That may be true, but there are no guarantees, and if you stay at a low level, it’ll take you far too long to generate a set of tests that are actually valuable in validating higher-level application function.

Instead, I propose that Agile testing starts at the top, at the outside, and finds the outer edges.  It’s this thinking that has led to the creation and use of “integration” tests and the tools that allow us to build them, like Cucumber.  The beautiful thing about these tests is that it doesn’t matter when you write them.  You may actually find (as I do) that writing out Cucumber/integration scenarios actually works best before you start writing code, because they give you a nice little plain-language roadmap to follow.  With that said, I usually stick to pre-defining my “happy path” scenarios, and only add in sad path and edge cases as they occur to me and/or rear their ugly heads as I’m developing or testing.
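To make that concrete, here’s a minimal sketch of what a happy-path, outside-in test might look like.  Everything here is hypothetical: the `SignupFlow` class stands in for a real Rails app that you’d normally drive through Cucumber step definitions or Capybara.

```ruby
require "minitest/autorun"

# Hypothetical stand-in for the application under test. In a real Rails app
# you'd drive this through Cucumber step definitions or Capybara instead of
# calling a plain Ruby object directly.
class SignupFlow
  attr_reader :users

  def initialize
    @users = []
  end

  # Mimics a signup action: returns :welcome on success, :invalid otherwise.
  def register(email, password)
    return :invalid if email.empty? || password.length < 8
    @users << email
    :welcome
  end
end

# The happy-path scenario, written from the outside in: it describes what a
# visitor does and sees, not how the internals are built.
class SignupHappyPathTest < Minitest::Test
  def test_visitor_signs_up_and_is_welcomed
    flow = SignupFlow.new
    assert_equal :welcome, flow.register("jane@example.com", "s3cretpass")
    assert_includes flow.users, "jane@example.com"
  end
end
```

The sad paths (blank email, too-short password) can be added as they occur to you, exactly as described above, without reworking the happy-path scenario at all.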

It’s Supported By Low-Level “Validation” Tests

I don’t mean “validation test” in the sense of testing model validations or something like that.  If you’re spending your time testing the built-in features of your framework (like simple model relationships or validations), then you’re definitely not being Agile, since that’s the very definition of wasting your time. If Rails itself is broken, you’ve got bigger problems. What I mean is that you write tests that validate that certain crucial parts of your application are working as intended.  The key step in doing this is to determine what constitutes a “crucial” part of your application.
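For illustration, here’s the kind of hand-written logic that is worth a low-level validation test.  The `Order` class and its discount rule are invented for this sketch; the point is that the rule is ours, not the framework’s, so a test for it is not wasted time.

```ruby
require "minitest/autorun"

# A crucial, hand-written piece of business logic -- the kind of code worth a
# low-level validation test. (This class is illustrative, not from a real app.)
class Order
  def initialize(subtotal)
    @subtotal = subtotal
  end

  # Our rule, not Rails's: orders over 100 get a 10% discount.
  def total
    @subtotal > 100 ? (@subtotal * 0.9).round(2) : @subtotal
  end
end

class OrderTest < Minitest::Test
  # Worth testing: the discount rule we wrote ourselves.
  def test_large_orders_are_discounted
    assert_equal 180.0, Order.new(200).total
  end

  def test_small_orders_pay_full_price
    assert_equal 50, Order.new(50).total
  end

  # Not worth testing: that `attr_reader` works, that ActiveRecord can save a
  # record, or that `validates :x, presence: true` does what Rails documents.
end
```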

In my opinion, there are two ways you can answer that question.  The first way is the more “purist” way. If you assume that you wouldn’t waste time writing a non-essential function, then the essential functions you have written are, by definition, crucial.  In other words, you should test every single method that you’ve written yourself.  I actually like that definition, but it’s only half an answer.  The “other way” to answer that question actually helps complete the whole picture:  If the method is touched by a high-level interaction, you should test for the effects of that interaction.  Now, that doesn’t mean “test every single line of the method so that my code-coverage tool doesn’t get mad at me.”  It means to find and test the crucial parts, even within the “crucial parts”.

Testing The Effect

One of the ways in which developers (and even managers and executives of the CTO flavor) tend to measure their application’s testedness (is that a word?) is to utilize code-coverage tools, which tell them what percentage of the application’s code has been ‘exercised’ by a test.  Unfortunately, these tools can be extremely misleading, because they do nothing to measure the quality of the exercise.  If I told you to do 100 pushups in a minute, you’d be a hell of a lot more exercised than if I told you to stand in place and just wave your arms around for a minute.  The goal of testing should not be to make sure that a certain percentage of our code is touched; it should be to make sure that what is touched is thoroughly and usefully validated, and that the pieces we’re testing are actually worth the bother.

As long as you focus on testing the effect of interactions with your code, whether those are interactions with end-users, with APIs, or even with other methods within your own code, the number, type, and character of the tests you must write become extremely self-explanatory, and you’ll find yourself wasting a lot less time trying to figure out “what to test” and coming up with arbitrary and random test cases that don’t really add value.  This is another situation where test-first is neither necessary nor even valuable.  In fact, sometimes it’s better to observe the effect before testing for its reliability, which is something you can’t do in a “test-first” situation.
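Here’s a small sketch of what “testing the effect” looks like in practice.  The `Cart` class is hypothetical; the point is that the test asserts only what a caller can observe after the interaction, never the internal representation.

```ruby
require "minitest/autorun"

# Illustrative only: a cart whose internals (an array of line items) are an
# implementation detail that callers never see.
class Cart
  def initialize
    @items = []
  end

  def add(name, price, qty = 1)
    @items << [name, price, qty]
  end

  def total
    @items.sum { |_, price, qty| price * qty }
  end

  def count
    @items.sum { |_, _, qty| qty }
  end
end

class CartEffectTest < Minitest::Test
  def test_adding_an_item_changes_what_the_caller_observes
    cart = Cart.new
    cart.add("widget", 5.0, 2)
    # Assert the observable effect of the interaction...
    assert_equal 10.0, cart.total
    assert_equal 2, cart.count
    # ...not the internal storage (no peeking at @items, no stubbing).
  end
end
```

If `@items` were later swapped for a hash or a database table, this test would keep passing as long as the effect stays the same, which is exactly the property that connects it to your higher-level integration tests.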

What this thinking does more than anything is connect your lower-level unit tests to the higher-level goals of your integration tests.  Doing so will always “keep you on point”, as your integration tests should be defining the core use-cases of your application.  Following this pattern will not necessarily give you high percentages of code-coverage, but it should give you high levels of quality tests on crucial system components, and focusing on that is _definitely_ more Agile than spending time trying to creep a percentage up a few extra points.

Concluding

I’ve rambled a bit more than I intended to here, but I hope my overall point was clear enough.  Agility in software development implies extending that agility to our software testing.  Agility in testing can’t mean making sure all of our tests come before all of our code, and it can’t mean that we have to hold ourselves to an arbitrary coverage percentage that forces us to write pointless tests.  Agile software development is all about doing precisely what is needed and asked for, and doing it as quickly and efficiently as possible.  Agile testing, then, should follow the same definition.  This is my definition, and I’m sure there are all kinds of people who strongly disagree with me, and that’s totally fine.  After all, as I’m fond of saying and will repeat here…if you’re doing Agile by the book…you’re doing it wrong.