Jared Richardson points to Johanna Rothman's blog post on a method of jumpstarting test automation, which Jared coins Blitzkrieg Testing. In this post I describe our experience with the technique (though I was not aware of its formalized name).
Jared's Blitzkrieg Testing essentially proposes a process of installing critical automated testing coverage where none originally existed. Key points in this style of testing:
Aim for breadth, not depth. If you're testing a portal product that has ten main pages, write a test that logs in, visits a page, verifies the page, and then logs out. You've written the equivalent of a "Hello world!" program for that page. Next, add the same type of test for every page in your portal ... your preferences page, configurations, content pages, and so on. The point here is to run across the product and create a basic test for as many areas as you can.
Don't get stuck in any one area. You don't want to dig in; you want to roll across the country in a tank!
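To make that breadth-over-depth idea concrete, here is a rough sketch of what such a "Hello world!" pass over a handful of portal pages might look like in Watir (the tool we eventually landed on, more on that below). The URLs, page names, and element locators are all made up for illustration, not from any real product:

```ruby
require 'watir'

# Hypothetical main pages of a portal product.
PAGES = {
  'Preferences'    => '/preferences',
  'Configurations' => '/configurations',
  'Content'        => '/content'
}

browser = Watir::Browser.new

PAGES.each do |name, path|
  # Log in fresh for each page so one failure doesn't cascade into the rest.
  browser.goto 'http://portal.example.com/login'
  browser.text_field(name: 'username').set 'smoke_user'
  browser.text_field(name: 'password').set 'secret'
  browser.button(type: 'submit').click

  # Visit the page and do a "Hello world!" level check: it loads and
  # mentions its own name somewhere on the page.
  browser.goto "http://portal.example.com#{path}"
  puts "#{name}: #{browser.text.include?(name) ? 'PASS' : 'FAIL'}"

  # Log out and roll on to the next page.
  browser.link(text: 'Logout').click
end

browser.close
```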
Johanna's post recommends a similar practice: incorporating one test for each feature into the smoke test.
Interestingly enough, even though I had not read Jared's formalized description prior to tonight, our group still saw how effective the practice was during our latest release cycle. I had mentioned previously that we wrap our business logic into FitNesse acceptance tests. That needs clarification: we did manage to do this for a few cycles prior to this one, but this last release developed from an amalgamation of 3 concurrent initiatives, only one of which was maintaining the FitNesse tests. New team members were unfamiliar with FitNesse and the specific fixture code, and various pressures didn't allow us to back-track into familiarizing them with it.

The other risk factor during this release was a decision I made to switch our UI testing tool from Selenium to Watir a very short time ago, yes, in the middle of this release cycle. It may sound strange to make that decision while the automated acceptance tests were left broken; we were essentially compounding the issues by switching UI coverage tools mid-stream. Let's just say that once it was clear our UI tests would be our product's "testing vice", I had ramped up my use of Selenium but ran into maintenance and simple implementation issues that left me very nervous about putting all my eggs in that basket. Luckily, this is also when I found out about Watir for Ruby.

As a test of Watir, I recreated our base UI-driven test of logging into the system using every combination of User Type, Access Level, and User Origin (our web application customizes the features available to users according to these parameters) and verified that navigation to each of the feature pages was successful for each login. This is the standard hook-in test that we run to verify that access and page availability are there on our system, and it sounds a lot like the process Jared describes above.
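To give a sense of the shape of that test, here is a minimal sketch of how a combination-driven login check might look in Watir. The user type, access level, and origin lists, the URLs and locators, and the "Access Denied" check are all illustrative stand-ins rather than our actual values:

```ruby
require 'watir'

# Hypothetical parameter values; the real lists came from our user model.
USER_TYPES    = %w[admin manager clerk]
ACCESS_LEVELS = %w[full restricted read_only]
USER_ORIGINS  = %w[internal partner public]
FEATURE_PAGES = %w[clients transactions reports preferences]

browser = Watir::Browser.new

USER_TYPES.product(ACCESS_LEVELS, USER_ORIGINS).each do |type, level, origin|
  # Assume a known test account exists for each combination.
  username = "#{type}_#{level}_#{origin}"

  browser.goto 'http://app.example.com/login'
  browser.text_field(name: 'username').set username
  browser.text_field(name: 'password').set 'password'
  browser.button(type: 'submit').click

  # For each login, confirm every feature page is reachable (or correctly blocked).
  FEATURE_PAGES.each do |page|
    browser.goto "http://app.example.com/#{page}"
    status = browser.text.include?('Access Denied') ? 'blocked' : 'ok'
    puts "#{username} -> #{page}: #{status}"
  end

  browser.link(text: 'Logout').click
end

browser.close
```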
I had that going so painlessly that I was sold on Watir, so I continued to build on that login code to cover, at a very high level, the major existing functional features (client CRUD functions and basic transactions). From there I added a test to cover each of the new features being added in the release.
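In rough terms, that meant pulling the login steps into a helper and layering thin feature passes on top of it. A hedged sketch of that shape, with hypothetical URLs and locators:

```ruby
require 'watir'

# A reusable login helper grown out of the blitz's login test.
def login(browser, username, password = 'password')
  browser.goto 'http://app.example.com/login'
  browser.text_field(name: 'username').set username
  browser.text_field(name: 'password').set password
  browser.button(type: 'submit').click
end

browser = Watir::Browser.new
login(browser, 'manager_full_internal')

# High-level client CRUD pass: create a client, confirm it shows up,
# then delete it and confirm it is gone. Breadth, not depth.
browser.goto 'http://app.example.com/clients/new'
browser.text_field(name: 'client_name').set 'Blitz Test Client'
browser.button(type: 'submit').click
puts "create: #{browser.text.include?('Blitz Test Client') ? 'PASS' : 'FAIL'}"

browser.goto 'http://app.example.com/clients'
browser.link(text: 'Blitz Test Client').click
browser.button(value: 'Delete').click
puts "delete: #{browser.text.include?('Blitz Test Client') ? 'FAIL' : 'PASS'}"

browser.close
```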
From there I experienced what Jared describes as momentum: we made a mind-map of the application on whiteboards where we listed major components branching to features, then branching to nodes, then branching to business rules. We traced on it the branches where our blitz had provided coverage, and then prioritized which areas needed to be given coverage next, and which could not be covered with automation and would therefore require greater attention from our exploratory manual testing effort.
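If you wanted to capture that whiteboard in something executable, even a simple nested structure would do. This is just an illustrative sketch with made-up component and rule names, not something we actually built:

```ruby
# Components branch to features and business rules, each tagged with how it
# is (or isn't) covered after the blitz.
coverage_map = {
  'Clients' => {
    'Create client'    => :automated_ui,
    'Credit limits'    => :needs_fitnesse,  # business rule, belongs in FitNesse
    'Merge duplicates' => :manual_only      # can't automate yet; explore manually
  },
  'Transactions' => {
    'Basic payment' => :automated_ui,
    'Refund rules'  => :uncovered
  }
}

# Prioritize: uncovered branches get automation next; manual-only branches
# get flagged for the exploratory testing effort.
coverage_map.each do |component, rules|
  rules.select { |_, status| status == :uncovered }
       .each { |rule, _| puts "Cover next: #{component} / #{rule}" }
  rules.select { |_, status| status == :manual_only }
       .each { |rule, _| puts "Exploratory focus: #{component} / #{rule}" }
end
```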
Now, would I recommend doing away with any existing tests just to go through this experience? Definitely not: I sorely missed the practice of writing FitNesse tests as specifications by example and not just as automated tests. Having these back in place is my current priority. I am also very aware that many of the tests now placed in the UI script would be better placed in the FitNesse framework as business rule validations. Not to mention I was wickedly stressed until the release went into production (cleanly I might add) last week.
All said and done, we had rebuilt basic coverage of the existing features and the planned release features prior to the start of the testing cycle, and had a good picture of what needed to be accomplished once the application was built into our test environment. Our testing cycle was still very short and effective because we knew where to focus our manual tests, integration bugs were found and resolved quickly, and we added automated tests to cover many of the bugs found manually and guard against their regression. In the end it allowed 3 of the 5 main project members, including myself, to keep vacations this week that were booked months in advance. Niice.
This process captures a very effective way of introducing automated testing (in this case acceptance/integration testing) into an environment or project where none exists.