STR = Software/System Testing Results/Report (all of these variants are in use)
Why do we need a STR?
The STR gives a snapshot of a particular aspect of product quality at the time of release: which tests were run and their results.
It ensures that there will never be a situation in which test cases are skipped and then forgotten.
The STR is also a historical record, in case we ever want to know the test status of one version or another.
The STR is issued with each version, and in some industries it must be retained for several years.
Some companies separate the specific test results from the results summary report. Although there is a difference, for me the term STR covers both, because there is no established terminology for each and they depend on each other.
Test manager / tester at the Agile group.
What is STR?
Once it was a physical document, and perhaps it still is in certain industries such as the pharmaceutical industry. Today the STR is usually viewed and managed in a test management tool such as TestRail. Beyond the obvious reasons (saving paper, convenience, aesthetics), the advantage of these tools is the ability to display test summaries as graphs. In fact, in many cases you go over the totals - rather than specific tests - at the end of the version when deciding whether or not to release the product.
The STR includes both the individual test results and a summary view, typically via the graphs.
STR is not the only consideration when discussing whether the version is good enough for release to the customer.
The life cycle of the STR
The STR is actually used throughout the test period of the version. It should be updated from the first day of the test period and fully prepared when the tests are completely finished. That is, STR is also a tool for reporting the progress of the tests.
Finally, it is also part of the considerations of whether the version is ready to be deployed. When it comes to releasing a version, sometimes the summary is the only part of the STR that is presented and not the individual results.
It is important to say that the test results are not the whole story in deciding whether the version is ready for release to the customer, but they are an important criterion. For example, releasing a version should also include a story about its quality beyond the factual results: what value the customer receives, and the quality of the features - whether they can be used properly, whether they solve the customer's problem, and so on.
What does the STR contain?
The most basic form of the STR is the "normal" test document with markings of what passed and what failed. But this is not the optimal form.
The STR should follow the same structure as all test documents, including the entire hierarchy - from the feature level, for example, down to the individual steps.
For the STR to be of value, the result of each step must be recorded: passed / failed / not run / blocked.
If a step fails, write down the bug number and its severity, and specify its current status. Open bugs should be highlighted. If the bug is closed - update the document.
If a test did not run - add a comment explaining why: because of a bug, lack of time, or we simply have not reached it yet. It is best to add a planned future run date (even when the test is blocked by a bug).
Leave room for comments.
It is preferable to have a hierarchical tool - one that contains all the tests down to the level of the individual step, but that can be viewed at different levels: the overall totals, the test subjects, down to a single step. If a test case has passed, its steps can be collapsed by default.
Of course, if the tool can also preserve run history, that is excellent: when a test is run twice, the records of both runs remain, not only the last one.
In addition, you must enter all relevant information: test environment, configuration, versions and more.
Of course, in order to produce such a document it is better to work with an automated tool, but I have also seen nice Excel sheets.
What else can you attach to STR?
I would attach statistics (i.e., graphs) to illustrate the status of the test results:
Total tests by status: how many steps there are, how many tests were executed, how many passed, how many failed, how many were blocked.
Test types vs results (performance, functional tests).
Test cases versus requirements: to verify that no requirement has been neglected. If there are 15 test cases against a single requirement, one might ask: "Is this requirement written at too high a level?" If there is only one test case: "Is that enough?"
Test cases vs. feature: how many tests were run against a feature? How many passed, and if not enough - did we address it? Will the feature really work? And more.
You can compare anything to a previous parallel version.
Bugs that were opened over the test period and their impact on the feature tests.
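The totals behind these graphs are easy to compute once the results exist in a flat, exportable form. A minimal sketch, assuming a hypothetical export of (test case, requirement, status) tuples - the record format and IDs here are invented for illustration:

```python
from collections import Counter

# Hypothetical flat export of STR records: (test case, requirement, status).
results = [
    ("TC-01", "REQ-1", "passed"),
    ("TC-02", "REQ-1", "failed"),
    ("TC-03", "REQ-2", "passed"),
    ("TC-04", "REQ-2", "blocked"),
    ("TC-05", "REQ-2", "not run"),
]
requirements = ["REQ-1", "REQ-2", "REQ-3"]  # the full requirement list

# Totals by status - the numbers behind the "tests vs. status" graph.
by_status = Counter(status for _, _, status in results)
print(by_status)  # e.g. Counter({'passed': 2, 'failed': 1, ...})

# Test cases per requirement - flags neglected requirements.
per_req = Counter(req for _, req, _ in results)
uncovered = [r for r in requirements if per_req[r] == 0]
print(uncovered)  # -> ['REQ-3']: a requirement with no test case at all
```

The same grouping trick works for any of the comparisons above (per feature, per test type, or against a previous version's export).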
What is the fate of STR?
At the end of the version, you review the STR with the stakeholders who were involved in the version, including an SQM*, and decide, together with the other exit criteria, whether the version is ready for release.
If someone still wants an STR template like in "the old days": https://strongqa.com/qa-portal/testing-docs-templates/test-report
* SQM - System Quality Manager - Responsible for the quality of development processes, not for testing.