Thursday, April 20, 2006

Don't test everything!

We learnt a lesson today: don't test everything, especially not the things for which you cannot write a universally valid test.

I recently joined the great team at Atlassian behind JIRA (bug tracking, issue tracking and project management) and Confluence (the enterprise wiki). We released JIRA 3.6 just a few days ago and we are already working on 3.6.1. The JIRA team is a more or less pragmatic bunch, and we decided to try doing things the XP way. We started with story cards on the wall. We took one card per pair, since we also embrace pair programming. The first stories were mostly outstanding issues or bugs in the current version. We nailed them well by writing unit or functional tests to cover as much as possible, and then fixing the bug itself.

Unfortunately, there are always some things that are impossible to test. And if not impossible, then so hard to test that the time and effort spent writing such a test would not be paid back by its value anyway. So sometimes you can't test, but sometimes you can; the question is how far you will go with your tests. I usually write tests until at least 80% of the existing code is covered by my unit tests. Coverage should be nearly 100% for new code, since you write the test first and then keep implementing the class until it passes all tests, right? But for existing code, I am comfortable with anything above 80% (your comfort zone may be different).
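
To make that test-first rhythm concrete, here is a minimal JUnit 3 style sketch (IssueKeyValidator is a made-up example class, not JIRA code): the test is written first, and the class is then implemented just far enough to make it pass.

    import junit.framework.TestCase;

    // Test-first: this test exists before the class it exercises.
    public class TestIssueKeyValidator extends TestCase
    {
        public void testAcceptsWellFormedKey()
        {
            assertTrue(IssueKeyValidator.isValid("TST-1"));
        }

        public void testRejectsMalformedKey()
        {
            assertFalse(IssueKeyValidator.isValid("not a key"));
        }
    }

    // Implemented only as far as needed to make the tests above pass.
    class IssueKeyValidator
    {
        static boolean isValid(String key)
        {
            return key.matches("[A-Z]+-\\d+");
        }
    }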

In this case we were writing a test around an issue with two sub-tasks, and we wanted to make sure that those sub-tasks would be displayed on a particular page in the application. So we wrote the test. It checked that the sub-tasks appear on the screen one after another. All tests passed and we went home happy.
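
A sketch of what that test looked like, again in JUnit 3 style (the page is represented here by a plain string, and the sub-task summaries are made up; in the real functional test the page came from the test framework fetching the rendered screen):

    import junit.framework.TestCase;

    public class TestSubTaskDisplay extends TestCase
    {
        // Stand-in for the HTML of the view-issue page; the real test
        // fetched this from a running instance.
        static final String PAGE =
            "<td>Sub-task one</td><td>Sub-task two</td>";

        public void testSubTasksDisplayed()
        {
            int first = PAGE.indexOf("Sub-task one");
            int second = PAGE.indexOf("Sub-task two");

            // Both sub-tasks must appear on the page...
            assertTrue("first sub-task missing", first >= 0);
            assertTrue("second sub-task missing", second >= 0);

            // ...and this last assertion over-specifies: it also pins
            // down the display order, which was never a requirement.
            assertTrue("expected sub-task one first", first < second);
        }
    }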

The next morning we learnt a valuable lesson. Our test failed on one of the testing environments (we support and test on JDK 1.3, 1.4 and 5.0, plus several supported application servers and RDBMSs on top; a helluva lot of tests run every day, if you can imagine). A slight difference in one of the testing environments caused the sub-tasks to come out in a different order than in the others. The order of the sub-tasks was not important; it was neither a business rule nor a requirement, so we removed the condition that depended on the order of the two. Now the test only verifies that the two sub-tasks are present, and the order is ignored. Cool! Everything works now and exactly what we needed is tested.
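
After the fix, the test method from the sketch above keeps only the presence checks:

    public void testSubTasksDisplayed()
    {
        // Presence only; no assumption about which sub-task is rendered
        // first, since the order is not a requirement.
        assertTrue("first sub-task missing", PAGE.indexOf("Sub-task one") >= 0);
        assertTrue("second sub-task missing", PAGE.indexOf("Sub-task two") >= 0);
    }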

So the lesson is: stick to the XP principle of writing only the minimum code required to implement the required functionality. That applies to your tests as well. Test everything you can, but don't waste time testing something that is not required. Write a test for it once it becomes a requirement.



This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License.