What is defect density? The term refers to the number of defects detected and confirmed in a piece of software, divided by the size of that software. It is most commonly expressed as defects per thousand lines of code (KLOC).
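As a minimal sketch, the definition above can be computed like this (per-KLOC normalization assumed, figures invented for illustration):

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per 1,000 lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000)

# e.g. 30 defects found in a 15,000-line module:
print(defect_density(30, 15_000))  # 2.0 defects per KLOC
```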
Uses of Defect Density
- Defect density is used to compare the relative number of defects in different software components, so that high-risk parts can be identified and resources focused on them.
- It is also used to compare software products. The quality of each product can be quantified and resources focused on those with low quality.
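A short sketch of the component-comparison use above; the component names and figures are hypothetical:

```python
def defect_density(defects: int, loc: int) -> float:
    """Defects per 1,000 lines of code (KLOC)."""
    return defects / (loc / 1000)

# Hypothetical components: (name, defects found, lines of code)
components = [
    ("auth", 12, 4_000),
    ("billing", 45, 9_000),
    ("reporting", 8, 16_000),
]

# Rank components by defect density to flag high-risk areas first
ranked = sorted(components, key=lambda c: defect_density(c[1], c[2]), reverse=True)
for name, defects, loc in ranked:
    print(f"{name}: {defect_density(defects, loc):.1f} defects/KLOC")
# billing (5.0) ranks above auth (3.0) and reporting (0.5)
```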
Factors That Affect Defect Density
Time Spent Testing
Defect density does not take into account the amount of time spent testing. It is a snapshot: it states how many bugs have been found in this area or these lines of code at this moment. More time spent testing should yield more bugs, just as a more skilled tester will. So does more time spent testing mean the quality of the software is lower? Surely the more bugs found, the more bugs can be fixed, and the overall quality afterwards will be higher. Defect density gives us little useful information here beyond the obvious: there are bugs, and some should probably be fixed.
Gaming the Metric
One of the issues with using a metric like defect density is the temptation to “game” the system: raising many similar bugs to inflate the defect density, or bundling bugs together to make it seem like there are fewer. There may even be a temptation not to report bugs at all if a high defect density is seen as a bad reflection on the team. Just as a tester’s quality should not be judged by the number of bugs raised (another outdated metric), the software’s quality should not be judged by that number either.
Bug Severity
Something else defect density does not take into account is different bug severities: trivial, minor, major, critical, and blocker. If one product has four minor defects and another has four major defects, which has the lower quality? According to defect density, both are the same! Some products end up with hundreds of trivial and minor bugs, typically simple UI issues that no one cares to fix immediately. The bugs that are genuinely important and telling of the quality of the software are the major, critical, and blocker ones. Defect density does not distinguish them in its per-line-of-code or per-area calculation, and that is a crucial piece of information to be missing from any quality assessment.
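One way around this blind spot is to weight defects by severity before dividing by size. This is a sketch, not a standard formula, and the weights below are arbitrary, chosen only for illustration:

```python
# Hypothetical severity weights -- chosen for illustration, not a standard
WEIGHTS = {"trivial": 1, "minor": 2, "major": 5, "critical": 8, "blocker": 13}

def weighted_defect_density(defects: list, loc: int) -> float:
    """Severity-weighted defects per KLOC; `defects` is a list of severity labels."""
    weighted = sum(WEIGHTS[severity] for severity in defects)
    return weighted / (loc / 1000)

# Plain defect density treats these two 10,000-line products identically
# (4 defects each); weighting separates them:
print(weighted_defect_density(["minor"] * 4, 10_000))  # 0.8
print(weighted_defect_density(["major"] * 4, 10_000))  # 2.0
```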
Tester Skill
How skilled the testers are also affects defect density. A highly skilled tester is likely to find more bugs than a less experienced one. If a bug has not been found yet, does that mean the quality of the software is better? Or is the quality the same, since the bug is still in the system but not recorded anywhere? Defect density tells the team the number of bugs found in the software, which says nothing about actual quality. No defects found != good quality; it could merely mean unskilled testers have been employed.
User Experience
Defect density gives no information about the user experience of the software. It does not tell you that navigating the software with a keyboard is a nightmare, or that moving back and forth between web pages is clunky and annoying. Usability issues like these are usually reported as “improvements” or “suggestions” and do not end up in the bug count, yet they do give some indication of the quality of the software.
It is still important to know the number of defects detected in software, but as the factors above show, defect density on its own does not tell you whether the software is of high quality. Learning how to compute defect density is worthwhile, provided you also understand its limitations.