Thursday, January 10, 2008

Defect Tracking by the Test Lead!!!

After meetings with the clients, the test lead categorizes the defects as follows:

Modify Cases: Test cases to be modified. This may arise when the tester's understanding is incorrect.

Discussion Items: Arises when there is a difference of opinion between the test team and the development team. The item is marked to the Domain consultant for a final verdict.

Change Technology: Arises when the development team has to fix the bug.

Data Related: Arises when the defect is due to data and not coding.

User Training: Arises when the defect is not severe, or is technically not feasible to fix; in that case it is decided to train the user around the defect. Such defects should ideally not be critical.

New Requirement: Inclusion of functionality agreed upon after discussion.

User Maintenance: Arises when masters and parameters maintained by the user cause the defect.

Observation: Any other observation that does not fall into the above categories, such as a GUI defect from the user's perspective.
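The categories above could be modelled in code so that every logged defect carries exactly one verdict from the client meeting. A minimal sketch, assuming a simple in-memory defect log (the `DefectCategory` names and the sample records are illustrative, not from the original post):

```python
from collections import Counter
from enum import Enum

# The test lead's categories, modelled as an enumeration so each
# defect gets exactly one verdict after the client meeting.
class DefectCategory(Enum):
    MODIFY_CASES = "Modify Cases"
    DISCUSSION_ITEMS = "Discussion Items"
    CHANGE_TECHNOLOGY = "Change Technology"
    DATA_RELATED = "Data Related"
    USER_TRAINING = "User Training"
    NEW_REQUIREMENT = "New Requirement"
    USER_MAINTENANCE = "User Maintenance"
    OBSERVATION = "Observation"

# Hypothetical defect log (illustrative data only).
defects = [
    {"id": 1, "category": DefectCategory.CHANGE_TECHNOLOGY},
    {"id": 2, "category": DefectCategory.MODIFY_CASES},
    {"id": 3, "category": DefectCategory.CHANGE_TECHNOLOGY},
]

# Tally defects per category for the lead's report.
counts = Counter(d["category"] for d in defects)
print(counts[DefectCategory.CHANGE_TECHNOLOGY])  # 2
```

An enum (rather than free-text labels) prevents the same category being spelled two different ways across reports.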

Reporting is done for defect evaluation and to ensure that the development team is aware of the defects found and is resolving them. A detailed report of the defects is generated every day and given to the development team for their feedback on defect resolution. A summary report is generated for every detailed report to evaluate the rate at which new defects are found and the rate at which defects are tracked to closure.

Defect counts are reported as a function of time, producing a Defect Trend diagram or report, and as a function of one or more defect parameters such as category or status, producing a Defect Density report. Together these analyses show the trend and the distribution of defects, which reveal the system's reliability.
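Both reports reduce to counting the same defect records along different axes. A minimal sketch, assuming each defect is recorded as a (date found, category, status) tuple with made-up sample data:

```python
from collections import Counter
from datetime import date

# Hypothetical defect records: (date found, category, status).
defects = [
    (date(2008, 1, 7), "Change Technology", "Open"),
    (date(2008, 1, 7), "Data Related", "Closed"),
    (date(2008, 1, 8), "Change Technology", "Open"),
    (date(2008, 1, 9), "Modify Cases", "Closed"),
]

# Defect Trend: counts as a function of time (new defects per day).
trend = Counter(found for found, _, _ in defects)

# Defect Density: counts as a function of a defect parameter (category here).
density = Counter(cat for _, cat, _ in defects)

print(trend[date(2008, 1, 7)])       # 2
print(density["Change Technology"])  # 2
```

The same `density` call grouped on the status field instead of the category would give the open-versus-closed breakdown the summary report needs.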

It is expected that the defect discovery rate will eventually diminish as testing and fixing progress. A threshold can be established below which the system can be deployed. Defect counts can also be reported by their origin in the implementation model, allowing detection of "weak modules" or "hot spots": parts of the system that are fixed again and again, indicating a fundamental design flaw.
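Hot-spot detection is again a counting exercise: tally fixes per module and flag any module whose count crosses a threshold. A sketch under assumed data, where the module names and the threshold value are purely illustrative:

```python
from collections import Counter

# Hypothetical fix log: the module touched by each defect fix.
fix_log = ["billing", "billing", "reports", "billing", "login", "billing"]

# Assumed threshold; a real project would tune this per release.
HOT_SPOT_THRESHOLD = 3

fix_counts = Counter(fix_log)
hot_spots = [module for module, n in fix_counts.items()
             if n >= HOT_SPOT_THRESHOLD]
print(hot_spots)  # ['billing']
```

A module that keeps reappearing in this list even after repeated fixes is the signal, per the post, that the flaw is in the design rather than in any single fix.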

Defects included in an analysis of this kind are confirmed defects. Not every reported defect describes an actual flaw: some may be enhancement requests, out of the scope of the system, or duplicates of an already reported defect. However, there is value in analysing why many of the reported defects are duplicates or unconfirmed.
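One simple way to start that analysis is to measure what fraction of incoming reports is noise (duplicates, enhancement requests, out-of-scope items) rather than confirmed defects. A sketch with hypothetical resolution labels:

```python
# Hypothetical resolution labels assigned to incoming defect reports.
reports = ["confirmed", "duplicate", "confirmed", "enhancement",
           "out-of-scope", "confirmed", "duplicate"]

confirmed = sum(1 for r in reports if r == "confirmed")
noise = len(reports) - confirmed

# Fraction of reports that were not actual, new defects.
noise_ratio = noise / len(reports)
print(round(noise_ratio, 2))  # 0.57
```

A persistently high noise ratio suggests a process problem, e.g. testers cannot search existing defects easily, or the requirements scope is unclear to the team.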
