Bugs Quality Dashboard

I would like to share the bugs quality dashboard we use.

The dashboard reflects criteria we believe give a good picture of the quality of the bugs we open, and we measure them over time so we can see progress and improve.

To be sure, reports tell only part of the story. Bugs can be rejected for many reasons, some unrelated to the situation reported. For example, the failed component indicated in a bug may be a reasonable assumption even if the bug ultimately turns out to be elsewhere. So the reports are a starting point, not a conclusion.

The dashboard lives in Jira, alongside the bugs, and the reports were built with the eazyBI plugin for Jira.

The reports

Please review each item alongside the picture below.

  • The first gauge report, Bugs Integrity Level, aggregates the evaluations of most of the other reports into a general view.

  • The one under it on the left, Dev Efficiency - Bugs per resolution with drill across, shows only closed bugs (closed in that specific quarter), broken down by resolution.
    Green means resolution = Done: the bug was justified, and the code was fixed and tested.
    The other colors indicate rejected bugs (we have 4 reject reasons: Works as designed (WAD), Irreproducible, Environment, and Duplicated).
    We measure only closed/done bugs because we don't want to count premature rejects; we wait until QA confirms the reject was justified by closing the bug.
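In eazyBI this breakdown is defined in the report itself, but the calculation can be sketched in plain Python. The record shape and sample data below are hypothetical; only the resolution names come from the text above:

```python
from collections import Counter

# Hypothetical closed-bug records: (bug key, resolution) pairs.
closed_bugs = [
    ("BUG-1", "Done"),
    ("BUG-2", "Done"),
    ("BUG-3", "Works as designed"),
    ("BUG-4", "Irreproducible"),
    ("BUG-5", "Done"),
    ("BUG-6", "Duplicated"),
]

# The four reject reasons mentioned above; everything else counts as justified.
REJECT_REASONS = {"Works as designed", "Irreproducible", "Environment", "Duplicated"}

def rejection_breakdown(bugs):
    """Count closed bugs per resolution and compute the overall % rejected."""
    counts = Counter(resolution for _, resolution in bugs)
    rejected = sum(counts[r] for r in REJECT_REASONS)
    pct_rejected = 100.0 * rejected / len(bugs)
    return counts, pct_rejected

counts, pct = rejection_breakdown(closed_bugs)
```

The same `pct_rejected` figure is what the % Rejected gauge next to this report aggregates.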

  • The report next to it, Dev Efficiency - % Rejected, aggregates all 4 reject reasons shown to its left.

  • The next one, under the above two, Average time in status - bug, shows the average time the bug was in specific statuses.
    We treat it only as an indication, it is not measured in the aggregated evaluation because not all statuses are in the QA hands, and not always the person who opened it is responsible for the other statuses that are relevant for the QA.
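For illustration, time in status can be derived from a bug's status-transition history (in Jira this would come from the issue changelog). This is a sketch for a single bug, with made-up timestamps and status names; averaging the per-status hours over all bugs in a quarter gives the report's numbers:

```python
from datetime import datetime

# Hypothetical transition history of one bug: (timestamp, status entered).
transitions = [
    (datetime(2023, 1, 1, 9, 0), "Open"),
    (datetime(2023, 1, 2, 9, 0), "In Progress"),
    (datetime(2023, 1, 4, 9, 0), "In QA"),
    (datetime(2023, 1, 5, 9, 0), "Done"),
]

def hours_in_status(transitions):
    """Hours spent in each status: each stay ends when the next status begins."""
    hours = {}
    for (start, status), (end, _) in zip(transitions, transitions[1:]):
        hours[status] = hours.get(status, 0.0) + (end - start).total_seconds() / 3600
    return hours
```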

  • The next one, Bugs integrity - the number of missing fields, was the hardest to create from a technological point of view; we used a Jira plugin called ScriptRunner.
    We have a template in the bug description:
    -Steps to Reproduce
    -Expected results
    -Actual results
    The script checks for the existence of text after each of the template clauses. We expect each bug to be detailed with as much relevant data as possible.
    The black line is the number of all bugs opened in the relevant quarter.
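Our actual check runs inside ScriptRunner (Groovy), but the idea can be sketched in Python: find each template heading in the description and flag it when no text follows it before the next heading. The parsing details here are an illustration, not the production script:

```python
# The three headings from our bug-description template.
TEMPLATE_SECTIONS = ["Steps to Reproduce", "Expected results", "Actual results"]

def missing_sections(description):
    """Return the template sections with no text between their heading and the next."""
    positions = [description.find(s) for s in TEMPLATE_SECTIONS]  # -1 = heading absent
    missing = []
    for i, (section, pos) in enumerate(zip(TEMPLATE_SECTIONS, positions)):
        if pos == -1:
            missing.append(section)
            continue
        start = pos + len(section)
        # The body runs until the next heading that actually appears, or end of text.
        later = [p for p in positions[i + 1:] if p != -1]
        end = min(later) if later else len(description)
        if not description[start:end].strip(" \t\r\n-:"):
            missing.append(section)
    return missing
```

A bug whose description fills Steps and Actual results but leaves Expected results empty would be flagged for that one section; a bug with no template at all is flagged for all three.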

  • The report under it, Bugs integrity - % perfect bugs, shows the number of bugs that had none of the above issues - that is, they had a description, steps, expected results, and actual results.

  • The next line, on the left, # UI bugs without visual attachment, is relevant to UI bugs (identified by a predefined custom field named Flow with the value User Flow). For this type of bug we expect a visual attachment, such as a screenshot or a video. We don't expect to have 100% of bugs with attachments, but less than 10% is unusual.
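A sketch of that attachment check in Python, assuming each bug is a dict with hypothetical `flow` and `attachments` fields (in Jira, Flow is the custom field mentioned above, and the attachment names would come from the issue's attachment list):

```python
# File extensions we would treat as a visual attachment (illustrative list).
VISUAL_EXTS = (".png", ".jpg", ".jpeg", ".gif", ".mp4", ".mov", ".webm")

def ui_bugs_without_visuals(bugs):
    """Count User Flow bugs with no image or video attached."""
    count = 0
    for bug in bugs:
        if bug.get("flow") != "User Flow":
            continue  # only UI bugs, per the Flow custom field
        names = bug.get("attachments", [])
        if not any(n.lower().endswith(VISUAL_EXTS) for n in names):
            count += 1
    return count

bugs = [
    {"flow": "User Flow", "attachments": ["crash.PNG"]},
    {"flow": "User Flow", "attachments": ["log.txt"]},
    {"flow": "Backend", "attachments": []},
]
```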

  • The last one, on the right, unnecessary ping pong, needs some explanation of our system.
    The product we build is very complex, multi-disciplined, and contains dozens of different components. It is not always easy to determine which component failed, so our QA engineers need to be (and are) very professional.
    Since the programmers usually know their own components but not the others, while QA knows the whole system, not pointing to the right component can trigger a long and unnecessary process. If a bug is opened on the UI, for example, the UI team checks it, sees that the error comes from the component before them in the flow, and moves the bug to that team - and this can go on until the bug reaches the correct component. That is a lot of wasted time.
    The report counts closed bugs whose component, when moved to Done, was different from the one they were opened on.
    The blue part indicates bugs that were opened and closed on the same component; this measures the programmers, not the QA.
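The ping-pong split boils down to comparing the component a bug was opened on with the component it carried when moved to Done. A minimal Python sketch, with hypothetical field names and sample data:

```python
def ping_pong_split(closed_bugs):
    """Split closed bugs into same-component vs moved-component counts."""
    same = moved = 0
    for bug in closed_bugs:
        if bug["opened_component"] == bug["done_component"]:
            same += 1   # the blue part: no ping pong
        else:
            moved += 1  # the component changed somewhere along the way
    return same, moved

closed_bugs = [
    {"opened_component": "UI", "done_component": "UI"},
    {"opened_component": "UI", "done_component": "Billing"},
    {"opened_component": "API", "done_component": "API"},
]
```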