Source Code and Self-Documentation
The XP movement posits that the documentation of a system's design is the source code itself. Keeping external documentation synchronized with the system it documents requires more discipline than most programmers have.
So why not put the documentation in the code? That's what JavaDoc is for - to document the structure of the system as it currently exists.
But why stop there? Most projects include components that are necessary (such as a bug tracker), yet are disjoint from the source code. Often, these artifacts depend upon dynamic analysis of the system, rather than upon static analysis such as JavaDoc and Lint. The perfect candidates for this attention are tests: they can generate dynamic analysis of the system, such as pass/fail statistics, every time a formal build is performed.
JUnit has been a boon to Java; it is a testing framework that has been accepted in nearly every project lifecycle. When combined with the XP model of debugging (when a bug is discovered, write a test that reproduces the bug, then fix the bug), we have a method for asserting that all bugs marked as "fixed" are indeed fixed, since the JUnit tests pass.
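The XP debugging pattern above can be sketched as follows. Everything here is hypothetical: an invented bug #1042 in an invented add() method, written as a plain Java class rather than a real JUnit TestCase so it stands alone.

```java
// A minimal sketch of the XP debugging pattern: first a test that
// reproduces a (hypothetical) bug #1042, then the unit with its fix.
// All names are illustrative, not from any real bug tracker.
public class Bug1042RegressionTest {

    // The unit under test. The original bug: adding two large ints
    // overflowed; the fix widens both operands to long before adding.
    static long add(int a, int b) {
        return (long) a + (long) b;
    }

    // Named after the bug so the test-to-bug link is explicit:
    // as long as this passes, bug #1042 may stay marked "fixed".
    static void testBug1042AddDoesNotOverflow() {
        long sum = add(Integer.MAX_VALUE, 1);
        if (sum != 2147483648L) {
            throw new AssertionError("bug 1042 regressed, got " + sum);
        }
    }

    public static void main(String[] args) {
        testBug1042AddDoesNotOverflow();
        System.out.println("bug 1042 stays fixed");
    }
}
```

Embedding the bug identifier in the test name is the simplest way to keep the test-to-bug link visible to any reader of the source.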
The XP way of thinking states that the code is the system design documentation. Right now, it views unit tests as documenting how a unit can be used properly and improperly.
Also, bug tracking software has become a cheap and easy product to add to any development cycle, partially due to the Open Source movement. Even though the various products differ in many ways, the fundamentals are the same.
The Next Good Thing
What happens when we combine these together? We can directly relate a bug to the test case which tests it. The tests now automatically document the bugs, based on the results of the test cases that exercise them. If a bug is marked as "fixed", but its corresponding test case fails, then the system can reopen the bug. If the test case is no longer valid (that is, its tested unit disappears), then the bug is no longer valid either.
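The reconciliation rule just described might look like the following sketch. The state names and the map-based shape of the test results are assumptions, not any real tracker's API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: after a build, compare each bug's tracker state against
// its tests' results and apply the two rules from the text:
// reopen a "fixed" bug whose tests fail; invalidate a bug whose
// tested unit (and thus its tests) no longer exists.
public class BugReconciler {

    public enum State { OPEN, FIXED, INVALID }

    // results maps bugId -> whether all of that bug's tests passed;
    // a missing entry means the tested unit no longer exists.
    public static Map<Integer, State> reconcile(Map<Integer, State> bugs,
                                                Map<Integer, Boolean> results) {
        Map<Integer, State> next = new HashMap<>(bugs);
        for (Map.Entry<Integer, State> e : bugs.entrySet()) {
            Boolean allPassed = results.get(e.getKey());
            if (allPassed == null) {
                next.put(e.getKey(), State.INVALID);  // unit gone: bug no longer valid
            } else if (e.getValue() == State.FIXED && !allPassed) {
                next.put(e.getKey(), State.OPEN);     // fix regressed: reopen
            }
        }
        return next;
    }

    public static void main(String[] args) {
        Map<Integer, State> bugs = new HashMap<>();
        bugs.put(7, State.FIXED);
        Map<Integer, Boolean> results = new HashMap<>();
        results.put(7, false);                        // bug 7's tests now fail
        System.out.println(reconcile(bugs, results).get(7)); // prints OPEN
    }
}
```

Note that the step never closes a bug on its own; it only reopens or invalidates, which matches the caution below about not trusting passing tests to prove a bug is gone.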
If one is so inclined, the tests can now document the work done on the code. When a requirement is created, the code to fulfil it does not yet exist, so it is a bug in the current system; enter it into the bug tracking software. When the developer completes the work for that requirement, the developer should also have tests which exercise the work. These initial tests should then be marked as testing that the requirement has been fulfilled.
So now each bug falls into one of four general categories: no tests exist for the bug; the bug has one test (or more), but not all of the tests pass; all of the bug's tests pass; and the bug is validated or closed. We can't let the software automatically mark the passed-test bugs as closed, since the tests may not be robust enough, but we can have the software reopen issues if a once-passing bug's tests begin to fail.
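Deriving those four categories from raw test data is mechanical. A sketch, with invented enum names; only a human ever sets the validated flag, never the code:

```java
// Sketch of classifying a bug into the four categories above from
// its test counts. The names and the "validated" flag are assumptions.
public class BugCategory {

    public enum Category { NO_TESTS, TESTS_FAILING, TESTS_PASSING, CLOSED }

    public static Category categorize(int tests, int passing, boolean validated) {
        if (validated) return Category.CLOSED;       // human sign-off wins
        if (tests == 0) return Category.NO_TESTS;
        return passing == tests ? Category.TESTS_PASSING
                                : Category.TESTS_FAILING;
    }

    public static void main(String[] args) {
        System.out.println(categorize(3, 3, false)); // prints TESTS_PASSING
        System.out.println(categorize(3, 2, false)); // prints TESTS_FAILING
    }
}
```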
The tests-to-bug relationship can be many-to-many, but ideally it should be one or more tests to one bug.
These advantages don't come for free. Proper organization of both the issues and the code takes discipline to maintain.
The bug tracking interface used to report on a test-run should be configured to reference only a single release. Also, the source code should be properly branched between releases if their bugs are different, and each release should be updated with the correct bug information.
Just like refactoring your code to make tests easier to write, your bug list should be maintained to match what's really going on in the source. If you find the maintenance of multiple software versions with bug-tracking IDs too difficult, it may be time to rethink the current SCM methodology, and move to a more scalable solution.
Test Case Step Documentation
While nothing can replace a solid test plan for a project, documenting the individual test cases can be tedious. If the goal is to have automated tests for each test case, then, as with most software documentation, it becomes all too easy for the automated tests to fall out of sync with the test documents.
Let's use the same techniques listed above for documenting our test cases. Since the documentation would be generated during test execution, each run can produce a test report listing each step executed, the date of execution, the duration of the run, the test results, and archives of the test artifacts.
While the test code would declare the process for the test, the execution of the test would generate the test case run results.
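One shape such a generated report record might take is sketched below. The field names and line format are assumptions for illustration, not a real GroboUtils API.

```java
import java.time.Duration;
import java.time.LocalDate;

// Sketch of a per-test-case report record, produced at execution
// time: name, run date, duration, and pass/fail result.
public class TestCaseRunReport {
    final String testName;
    final LocalDate runDate;
    final Duration duration;
    final boolean passed;

    TestCaseRunReport(String testName, LocalDate runDate,
                      Duration duration, boolean passed) {
        this.testName = testName;
        this.runDate = runDate;
        this.duration = duration;
        this.passed = passed;
    }

    // One report line per executed test case.
    String render() {
        return testName + " | " + runDate + " | "
             + duration.toMillis() + " ms | " + (passed ? "PASS" : "FAIL");
    }

    public static void main(String[] args) {
        TestCaseRunReport r = new TestCaseRunReport(
            "testLoginRejectsBadPassword",          // hypothetical test name
            LocalDate.of(2004, 3, 1),
            Duration.ofMillis(42),
            true);
        System.out.println(r.render());
        // prints: testLoginRejectsBadPassword | 2004-03-01 | 42 ms | PASS
    }
}
```

Because the record is built from the run itself, the report can never disagree with what the tests actually did.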
2002-2004 GroboUtils Project. All rights reserved.