Tests Quality Dashboard (XRAY for Jira)

It is relatively easy to know what a high-quality bug looks like, but doing the same for tests is harder: what makes a test adequate?

The report below is one answer. To be honest, not all of our QA managers approved it (I will mention one good criticism below), but I think it is a good start :)

As with the bugs, the data is aggregated by quarters (Qs).

The reports are built with the Jira plugins eazyBI and ScriptRunner; the tests themselves are managed in XRAY.

I will now go over the reports:

  • The first one, Test Quality - Score, is an aggregated evaluation of the reports below.

  • Test Quality - approved vs. not approved shows the status of the tests. To be clear, this is the regular Jira issue status, not the test-run status. We decided that the Test workflow will include a few statuses, such as Internal Review, before a test reaches the Approved status and is ready to be used. So we check how many tests are in Approved vs. all tests created in the relevant Q, based on the Reporter.

  • Test Quality - Steps Count. We think that a good scripted test (meaning one with steps) should have between 4 and 12 steps. This report splits tests into "good" and "bad" based on their number of steps.

  • Test Quality - Description. We prefer to have a short description per test - what is the test about, what is important, or any other comment.

  • % Tests with attachments - not all tests should have attachments, but I guess one out of three is a good ratio.
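To make the criteria above concrete, here is a minimal Python sketch of how the per-test checks could be combined into the dashboard ratios. The field names, data shapes, and scoring scheme are assumptions for illustration only, not eazyBI's or XRAY's actual data model:

```python
# Sketch of the per-test quality checks described above.
# Field names ("status", "steps", etc.) are hypothetical.

def evaluate_test(test: dict) -> dict:
    """Return a pass/fail flag for each quality criterion of one test."""
    return {
        "approved": test.get("status") == "Approved",
        # A good scripted test should have between 4 and 12 steps.
        "steps_ok": 4 <= len(test.get("steps", [])) <= 12,
        "has_description": bool(test.get("description", "").strip()),
        "has_attachment": test.get("attachments", 0) > 0,
    }

def dashboard_summary(tests: list) -> dict:
    """Aggregate the per-test checks into the ratios shown on the dashboard."""
    total = len(tests)
    results = [evaluate_test(t) for t in tests]
    return {
        "approved_pct": 100.0 * sum(r["approved"] for r in results) / total,
        "steps_ok_pct": 100.0 * sum(r["steps_ok"] for r in results) / total,
        "description_pct": 100.0 * sum(r["has_description"] for r in results) / total,
        "attachments_pct": 100.0 * sum(r["has_attachment"] for r in results) / total,
    }

tests = [
    {"status": "Approved", "steps": ["s"] * 6, "description": "Login flow", "attachments": 1},
    {"status": "Internal Review", "steps": ["s"] * 2, "description": "", "attachments": 0},
]
print(dashboard_summary(tests))
```

In the real dashboard these ratios come from eazyBI measures rather than code, but the logic per criterion is the same.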

Some important remarks:

  1. As always, this is just an indication. Tests that the dashboard marks as "bad" can be good, and vice versa. I also think a team leader should know the quality of her team's tests. Still, it is a good trigger to start checking when the numbers don't look as expected.

  2. No, XRAY doesn't support reports based on attachments. This is something we managed to do ourselves.

  3. The criticism that I think has a point is this: for bugs, an over-time view makes sense, because a bug that was opened last year and closed as Rejected can't be undone; it is water under the bridge. Tests, on the other hand, can be fixed retroactively, so the argument goes that we should look only at the current aggregated test summary. Sounds reasonable, but I think the over-time view also shows that we write better tests now than we used to.
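On remark 2: one way an attachment-based report can be assembled (not necessarily how we did it, we used ScriptRunner) is to pull attachment counts per issue from the Jira REST API, which exposes attachments via `GET /rest/api/2/issue/{key}?fields=attachment`. A hedged Python sketch, where the base URL and token handling are hypothetical:

```python
# Sketch: counting attachments per test via the Jira REST API.
# The endpoint and "fields.attachment" shape follow Jira's public REST API;
# base_url, key, and token are placeholders.
import json
from urllib.request import Request, urlopen

def attachment_count(issue_json: dict) -> int:
    """Count attachments in an issue payload fetched with fields=attachment."""
    return len(issue_json.get("fields", {}).get("attachment", []))

def fetch_attachment_count(base_url: str, key: str, token: str) -> int:
    req = Request(
        f"{base_url}/rest/api/2/issue/{key}?fields=attachment",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return attachment_count(json.load(resp))

# Parsing works on any issue payload with an attachment field:
sample = {"fields": {"attachment": [{"filename": "a.png"}, {"filename": "b.png"}]}}
print(attachment_count(sample))
```

A scripted field or a nightly job can then feed these counts into a custom field that eazyBI can aggregate.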

Tests Quality Dashboard