Lessons from a Watir success story

There aren’t enough UI test automation success stories documented on the net, so here’s my contribution.

This post describes how we started with zero automated UI testing capability and, a couple of months later, could run 1,000 tests in 20 minutes (you may also notice some gorilla-style chest-beating). Not only that: our test execution results were always spot on (no false passes, no false fails), and our framework was easy to maintain and extend. Here's how we did it and the lessons we learned along the way.

The Phoenix rises from the ashes…

After several false starts with UI testing (involving QTP, WatiN, FitNesse and more…), it finally got off the ground after I demonstrated what could be achieved with Watir. This was the first lesson for me: demonstrating working code makes a far better case than "just believe me, I've done it before, it'll work fine". Realizing this, I spent a weekend putting together a simple Ruby/Test::Unit/Watir testing framework.

When, on Monday morning, I demoed a bunch of working tests to a room full of doubters – tests that could run over and over and over and give the correct results every time – it was decided that we'd give Ruby/Test::Unit/Watir a go.

An important constraint

In order to avoid the biggest mistake made in previous attempts at UI automation, I insisted on agreement to a very important constraint: the Ruby/Test::Unit/Watir framework would only test the web app part of the system. One of my philosophies when it comes to UI automation is "use the right tool for the right job". There would be no scope creep: the framework would be used for the web app only. The WPF fat client could be tested with something else – anything else, just please don't subvert Ruby/Test::Unit/Watir into testing anything but the web app.

This was all part of expectations management; over the years I've learned that this is one of the most important aspects of automated UI testing. It's worth a separate post – I'll get around to it.

Framework overview

The framework I came up with that weekend was very simple (and very unoriginal – I've done similar stuff before): each page in the app would be represented by a Ruby module. Each "page module" would contain one method per control on the web page; the method, when called, would return the required object. E.g.:

module HomePage
  # @browser is provided by the test class this module is mixed into.
  def home_page_sign_in_link
    @browser.link(:xpath, "//a[contains(@href,'signin')]")
  end

  def home_page_join_button
    @browser.button(:id, "join")
  end
end

My experience is that IDs on objects often can't be trusted: they tend to get put in during development and removed later when they're no longer required. If I write my object recognition code to depend on an ID, I usually regret it; I end up maintaining ID-based object recognition far more often than xpath-based recognition. Where I have reliable IDs, I'll use them. For everything else, I use xpath. Your mileage may vary.

Each of these page modules was "mixed in" to a common module, which was then mixed into the test, making every object in the system available to the test without having to deal with namespacing. Not the prettiest solution, but it got us working very quickly. We surprised ourselves by having no method name clashes – mainly thanks to the object naming convention we used (see the sketch below).
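
For illustration, here's a minimal sketch of what that mixin chain might look like – AllPages and SignInPage are hypothetical names, not the ones we actually used:

module AllPages
  # Mixing every page module into one umbrella module means a test
  # only has to include AllPages to see every control in the system.
  include HomePage
  include SignInPage   # hypothetical second page module
end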

Our app uses RESTful services, so the Net::HTTP library was great. The calls were wrapped up in classes and placed alongside our test-data-generation code (random strings, valid usernames, etc.). We wrote them to be independent of the rest of the framework (apart from the central config file that pointed them at the right environment) so that we could use them whenever and however we liked. We applied this philosophy to as much of the framework as possible: keep everything independent and modular. That let us keep up with sweeping changes in the app we were testing – and there were a lot of them. To many people's amazement (not least our own!) we kept up with everything thrown at us.
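
As a rough illustration (the class name, the Config constant and the endpoint path are all hypothetical), a service wrapper looked something like this:

require 'net/http'
require 'uri'

# Hypothetical wrapper around one RESTful service call.
# Config::BASE_URL would come from the central config file.
class UserService
  def self.create_user(username)
    uri = URI.parse("#{Config::BASE_URL}/users")
    Net::HTTP.post_form(uri, 'username' => username)
  end

  # Hypothetical test-data helper: an 8-letter random username.
  def self.random_username
    "user_" + (0...8).map { (97 + rand(26)).chr }.join
  end
end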

One of Watir's greatest features, which doesn't get nearly enough air time, is the "checkers" functionality: every time a page is visited, Watir runs custom code you've wrapped up in a Ruby proc. Environment instability usually manifests itself in obvious ways, and checkers let you recognize it as soon as it appears. If, in your checker code, you raise an exception with a fixed message, you can find those exceptions when the run is done and mark the affected tests as "failed due to environment instability" – or be more specific, e.g. "failed due to the database connection going down, again". This saves us hours of work every day. Literally, hours.
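
A sketch of one such checker, assuming the add_checker API from the Watir 1.x series (the page text being matched is hypothetical):

# Raise a recognizable exception whenever the known failure
# banner appears on any page the tests visit.
db_down_checker = Proc.new do |browser|
  if browser.text.include?("database connection down")
    raise "ENVIRONMENT FAILURE: database connection down, again"
  end
end
@browser.add_checker(db_down_checker)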

Our test execution script keeps note of the tests that failed. When all the tests have been run, the failed ones get run again in the hope that the environment instability "got better". We'd have to rerun those tests manually at the end of a run anyway, so why not automate it? A big time saver.
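
The idea, in sketch form (the file name and the way tests are launched are assumptions):

# After the main run, give every failed test one more chance.
failed = File.readlines("failed_tests.txt").map { |line| line.strip }
failed.each do |test_file|
  system("ruby #{test_file}")   # environment may have "got better"
end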

Test::Unit

We used Test::Unit for executing the tests. Its simplicity and rock-solid nature made it a good choice to start the framework with. It's also easily extended/monkey-patched – we added a few things to it.

Though I have Ruby experience, everyone else on the team had a C#/NUnit background; Test::Unit helped bridge that knowledge gap. Conceptually, Test::Unit and NUnit are identical (both are xUnit tools), so the similarity meant one less thing for the others to learn. When under pressure, that was a good thing.
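
To give a flavor of what a test looked like (the URL is hypothetical, and AllPages is the umbrella module sketched earlier):

require 'test/unit'
require 'firewatir'

class SignInTest < Test::Unit::TestCase
  include AllPages   # every page module, mixed in at once

  def setup
    @browser = FireWatir::Firefox.new
    @browser.goto("http://test-env.example.com/")   # hypothetical URL
  end

  def teardown
    @browser.close
  end

  def test_home_page_has_sign_in_link
    assert(home_page_sign_in_link.exists?, "sign-in link missing from home page")
  end
end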

NetBeans as an IDE

NetBeans' integration with Test::Unit is great, so we decided to use it as our IDE. It turns out that NetBeans is a great IDE for Watir testing; I heartily recommend it. The green ticks and red crosses were comforting to all the right people, being able to view the source code of the watir/firewatir libraries was great, its code completion wasn't as bad as I was expecting, and its SVN integration is better than anything else I've used so far. It's also free, so I didn't have to get approval to buy it! It has loads of useful plugins too.

Firefox plugins

We used Firefox (and therefore FireWatir) because we needed to spoof user agents and headers – using Firefox profiles meant we could test the mobile, browser and fat-client webstores (there's a short profile sketch after the list below). Using Firefox also meant we could use plugins; here are the ones we found most useful:

  • Firebug – it is awesome. Install it, learn it, love it.
  • XPath Checker – a great tool that does one thing very well: write an xpath and it shows you what it evaluates to. There are other tools that do the same thing and more, but the experience of everyone on the team was that XPath Checker was all they needed. Everything else was bloat.
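
Here's the profile trick in sketch form, assuming a FireWatir build that accepts a :profile option (the profile names are hypothetical; each profile carries its own user-agent string via the general.useragent.override preference):

# Hypothetical profiles, each pre-configured with a different
# user-agent override, so the same tests can hit each webstore.
mobile_store  = FireWatir::Firefox.new(:profile => "mobile-ua")
desktop_store = FireWatir::Firefox.new(:profile => "default")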

Version Control

One area where testing is usually decades behind development is the use of version control. This is probably because most commercial tools make it anything but trivial (the QTP and RFT test file formats are hardly version-control-friendly).

One of the many joys of using Watir for UI testing is that version control is simple – there are no 'project files' to worry about, no accompanying files to check in when you change a test file, no bizarre links across directories, no weird inter-file dependencies.

Using version control for our tests and framework meant we were free to change stuff as we saw fit – if something didn't work, we could just revert. I can't recommend version control enough. The "blame" feature in SVN was particularly useful ;)

Parallelization of test execution

We got our super-fast execution time by cheating: we parallelized the execution of the tests. We used a Mac Pro (16GB RAM, 8 cores, 4xHDD), the Parallels virtualization software to run 12 VMs, and a shonky "n mod 12" function to decide which tests to run on each VM (we tried to get testjour working, but it needs a fair bit more development and documentation before it works as advertised).
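
The distribution really was that simple – something along these lines (the paths and the VM-index argument are assumptions):

# Each VM is started with its index (0-11) and runs every 12th test.
tests = Dir.glob("tests/**/*_test.rb").sort
vm_index = ARGV[0].to_i
tests.each_with_index do |test_file, n|
  system("ruby #{test_file}") if n % 12 == vm_index
end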

One advantage of using a Mac (apart from its rock-solid stability) was Exposé – we could see all 12 VMs running at once. The number of people who walked past our desk and got hypnotized by the sight of 12 tiny PCs, each executing tests as if they were self-aware… – good PR for our team!

Results sanitization

It doesn't matter if you can run a million tests in 5 minutes if the results aren't accurate and sanitized quickly. We're quite proud of the fact that we provide our results within 5 minutes of the last test finishing (usually straight away). We've put a lot of effort into being able to do this, and it's one of the biggest selling points of the framework we've written. Here's what we've done to get to this stage:

  1. Though we execute tests on many VMs, every VM writes its results to a central location when execution finishes. We don't have to worry about collating results; it's done automatically.
  2. We use the Watir checker feature heavily (see above). Any test that failed because of an environment problem is marked as such. This means we don't need to investigate why the test failed – we know it didn't get past the login screen because we saw the "database connection down" message, not because there's a bug in login.
  3. When we find a bug, we put the bug number in the relevant assertion message. If the test fails at the same point in the next run, it's because the bug is still there: the bug number is printed to the results file, and when our summary script runs over the results it marks the test as a known fail due to the referenced bug (see the sketch after this list). A particularly awesome feature!
  4. Every test that hasn’t been marked as a known fail (environment or known bug) gets added to a list of tests that need to be investigated. They’ve already been run twice (see above), so there’s a very high chance that a regression bug has crept in.
  5. As well as a count of the various flavors of failed tests, we provided percentages – these mean far more to the scrum masters.
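
A sketch of the known-bug mechanism (the bug id, the page element and the summary-script variable are all hypothetical):

# In the test: embed the bug id in the assertion message.
assert_equal("Welcome!", @browser.div(:id, "greeting").text,
             "BUG-1234: greeting text wrong on home page")

# In the summary script: any failure message carrying a bug id
# is reported as a known fail rather than a new regression.
known_fail = (failure_message =~ /BUG-(\d+)/)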

Making the effort public

Initially there was a lot of hostility toward the project – understandably! UI testing doesn't have a good history where I currently work. One of the scrum masters pointed out that the best PR was fast, consistent and accurate test results, and he was right. But a few more things gave people reason to change their opinion.

  1. Every time our tests found a regression bug, we updated our "bug count" on a big, visible-to-all whiteboard. There was no denying the value we provided (we crossed the 100-bug mark in only a few weeks).
  2. We gave the scrum masters remote access to our run machine. This allowed them to kick off test runs and watch them progress; the instant feedback blew them away. The Exposé functionality let them see environment instability as soon as it occurred, allowing them to go and kick some heads in the infrastructure team as soon as there was a problem.
  3. We put our test execution box on the end of our desk, facing out. As mentioned above, Exposé was great PR for the team. In the weeks before go-live, many evenings were spent huddled around the screen as we ran test run after test run, with 6 or 7 people from varying levels of management all hypnotized by the tiny windows!
  4. The test owners wrote their tests on a wiki page – as soon as a test was automated we would mark it as such. Test owners knew exactly where they stood.

Summary of the lessons we learned

  1. Sell the idea of Watir-based automated UI testing with working code
  2. Manage your managers’ expectations well
  3. Keep the framework simple – complexity you don’t need introduces risk you probably can’t afford
  4. Use the right tool for the right job
  5. Use free tools where possible; you’ll be freed from the hassle of purchasing stuff
  6. Use version control – it will be your salvation many times over
  7. For object recognition, use xpath unless you have reliable IDs you can depend on
  8. Attempt to recognize environment-related errors in the test and report them as such (Watir checkers)
  9. Keep as much of your framework code as modular and independent as possible
  10. Make the test team's work public. Provide test results as soon as possible.
  11. Put a lot of effort into making results sanitization as quick as possible. Few things are as frustrating for a scrum master/test manager as knowing that test execution is complete but the results "aren't ready yet".

That’s all for now!

Comments


  1. Thanks for sharing this. A couple of questions:
    1) I see you mention Cucumber on your blog – did you end up switching from Test::Unit?
    2) In your VMs, what guest OS were you running? Watir on Windows or FireWatir on Linux?


  2. Alister:

    1) We've migrated a whole load of stuff over to Cucumber – it's not done yet and I have some serious misgivings about it… I'll write it up in a couple of days.

    2) None of the above! FireWatir on Windows…


  3. Nice write-up. Yes, checkers is a great feature, and page adapters are the way to go IMHO. All the best, and thanks for sharing the success story.

  4. It's always useful to learn from other people's successes (and failures). Thanks for sharing, Nat – very well written!

  5. Nice work Nat. You're right that we certainly don't have enough automation success stories – but I'm not surprised. I've yet to see management's expectations of the benefits of test automation come down, despite the general failure that automation efforts seem to result in. Obviously we need to learn from our own mistakes (yes, I have learnt much!), but write-ups like this one will certainly help the rest of us achieve more success than we otherwise would. Personally, I'm more interested in the framework aspects, but your views on tool choice are also very helpful. Cheers.
