Automation Strategy - Part I

Fairytale, the first version: The king is searching for a woman who can run the household and restore the economic balance of the kingdom, which was lost to the depletion of the kingdom's wealth and to waste. The local miller jumps up and tells the king that his daughter can spin straw into gold.

Fairytale, the second version: A manager is looking for a woman (or man) who can manage the company's economy and restore the economic balance that was lost to the depletion of the company's treasure and the great waste on manual tests.

What to do? The miller, sorry, the local provider of IT services jumps up and informs the manager that his people can turn manual checks into automation overnight.

Before long they arrive, the IT services provider's people, full of motivation and hubris. We sit with them for a day or two, explain a little about the product and a little about the tests, give them a few days to think about an infrastructure, but press them to come to a conclusion quickly. (No one points out that until now there was no automation and things still worked, even if not optimally, so maybe we could wait a little longer and make a wise choice.)

After that, we hand them the regression documents and ask how long it will take, and how much it will cost, to turn the test designs into automation code. Four months, we are told. Are you sure? we ask doubtfully. Maybe six months, if there are problems we did not anticipate. Of course: even a newly recruited automation coder will not want to say it will take at least nine months, and certainly not an outside company with an interest in getting a foot in the door. No one mentions that the automation people will need frequent supervision from a test engineer (because they do not know the product) and from a programmer (who will help with interfacing and with creating easier or deeper tests).

Even the "six months" seems like a lot to us, we think, but we will wait. To the managers we explain that within two months we will be gaining valuable working days and invisible workers who work every night.

After a week we have the first scenario. We present it to the group/company, including the managers, and everyone is amazed at the buttons pressed magically on the screen. Wow. Good work, everyone!

The rest is pretty predictable. Rumpelstiltskin does not arrive, so there is a mad rush to do as much as possible. There is no time to think about the important questions, such as which layer to automate; we just translate the regression tests.

Six months later, not only are we not close, we are also frustrated. Not only did we save nothing, not only do the "manual testers" continue with the same scope of work as before, but we are also paying the IT services provider and the licenses, with no return on investment on the horizon. The CEO no longer says hello to us in the corridor and the test engineers no longer bother to appreciate the automation people. We become part of the statistics that say only 20% of the tests that could be automated are actually implemented (in this survey, for example, only 4% of companies reach 90% automation coverage).

So what do we have here? A serious disconnect from reality, and a focus only on the technology aspect, which is, by the way, very important (you do not want to change infrastructure in the middle, as happened in several companies I know), but no more important than the strategy.

The larger context of automation: automation is part of the testing system and is not effective as a single test layer. It should come as part of an array, together with unit tests and exploratory testing.

This is how Google describes the testing system:

The "small" tests are unit tests, the "medium" tests are automation (integration), and the "large" tests are exploratory.

Do you want to achieve high quality and efficiency? That is probably the way. Otherwise you may save time through automation, but you will still not achieve good quality.

The next thing you should know: automation is not testing, and it will not, I repeat, will not replace human testing. When a person tests, they evaluate the situation, follow a professional logic, and have feelings (the results of an automation run will never contain the sentence: "What? How annoying it is to wait 4 seconds for that to happen"). Michael Bolton's example: the Mars Rover may be very sophisticated, but it does not "know" that it is on Mars.

And two other quotes: - “Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.” James Bach

- "The purpose of automated checking is to confirm that the system is running, which is the opposite of testing." Michael Bolton

So what is automation? Automation is checking, verifying. It verifies a deterministic process, and can only say "passed" or "failed."

This is why automation cannot "translate" regressions: it does not "test". It can take on the "verifying" part of the tests: if you press the X button in mode Y, Z will happen. If the UI is distorted and immediately rights itself, we will never know about it, at least not from the results of the run.
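A "check" in this sense reduces to a deterministic assertion. Here is a minimal sketch, where the `App` class is a hypothetical stand-in for a real UI driver; note that it can only answer "passed" or "failed", and says nothing about a UI that flickers and repairs itself:

```python
class App:
    """Hypothetical stand-in for a UI driver (e.g. something Selenium-like)."""
    def __init__(self, mode):
        self.mode = mode
        self.dialog_open = False

    def press(self, button):
        # In mode "Y", pressing "X" is supposed to open the Z dialog.
        if self.mode == "Y" and button == "X":
            self.dialog_open = True

def check_x_in_mode_y():
    """If you press the X button in mode Y, Z will happen."""
    app = App(mode="Y")
    app.press("X")
    # Deterministic verdict: the only two outputs a check can give.
    return "passed" if app.dialog_open else "failed"

print(check_x_in_mode_y())  # passed
```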

So Google, for example, still runs manual end-to-end testing. On the other hand, and here comes the positive part: testing can do much more with a tool like automation than without it, because automation is indeed an auxiliary tool. What can it do (inspired by Michael Bolton)?

  • Check CPU, memory, speed.

  • Run checks that are exhausting for a tester:

  • Are all files installed?

  • Are they in the right position? The right size? Encrypted?

  • Are all files signed?

  • Has the registry been properly updated?

  • Are the configuration files correct?
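A few of the installation checks above can be sketched in a handful of lines. The manifest, file names, and directory layout here are invented for illustration; a real manifest would be produced by the build:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_install(root, manifest):
    """Check that every file in the manifest exists under `root`, has the
    expected size, and has the expected SHA-256 hash.
    Returns a list of problems; an empty list means the install looks correct."""
    problems = []
    for rel_path, expected in manifest.items():
        path = os.path.join(root, rel_path)
        if not os.path.isfile(path):
            problems.append(f"missing: {rel_path}")
            continue
        if os.path.getsize(path) != expected["size"]:
            problems.append(f"wrong size: {rel_path}")
        if sha256_of(path) != expected["sha256"]:
            problems.append(f"wrong hash: {rel_path}")
    return problems

# Demo against a temporary "installation" directory.
root = tempfile.mkdtemp()
with open(os.path.join(root, "app.bin"), "wb") as f:
    f.write(b"binary payload")
manifest = {"app.bin": {"size": 14,
                        "sha256": sha256_of(os.path.join(root, "app.bin"))}}
print(verify_install(root, manifest))  # []
```

A human checking file sizes and hashes by hand is exactly the kind of exhausting work a machine does better, every night, without complaint.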

And besides:

  • Prepare data for tests.

  • Prepare environments.

  • Be (almost) random.

  • Change configuration.

  • Manipulate data.

  • Make alerts pop up.

  • Collect and even analyze logs.
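Collecting and cheaply analyzing logs, the last item above, might look like this; the log format and component names are invented for illustration:

```python
import re
from collections import Counter

def summarize_errors(log_lines):
    """Count ERROR lines per component: a cheap piece of log analysis
    that automation can run after every nightly execution."""
    pattern = re.compile(r"ERROR \[(?P<component>\w+)\]")
    counts = Counter()
    for line in log_lines:
        m = pattern.search(line)
        if m:
            counts[m.group("component")] += 1
    return dict(counts)

log = [
    "12:00:01 INFO  [ui] started",
    "12:00:02 ERROR [db] connection refused",
    "12:00:03 ERROR [db] connection refused",
    "12:00:04 ERROR [auth] token expired",
]
print(summarize_errors(log))  # {'db': 2, 'auth': 1}
```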

And, right, it can also press buttons.

Yes, manual testing can be wearing, slow, can get bogged down in less important or obvious bugs, and can come at a late stage and therefore be less effective. Automated checks are supposed to give the testers the room to deal with the important bugs that only a person can see.

What is most important when we write the automation?

Think for a moment. Did you answer "coverage percentage"? "Speed"? You were wrong.

Here's the right order:

1. Readable results. Yes, that is exactly what I said: the results. If:

  1. they are not clear: what exactly was checked and what exactly the result was, in plain language.

  2. they do not make the situation, and how things failed, understandable at a glance.

  3. they are missing screenshots, video and logs.

  4. they do not make it easier to open a bug.

  5. they are not accurate.

  6. they do not point exactly at the problem.

Then throw the tests honorably into the code cemetery. If the person who executes the automation has to spend three days analyzing the results of a two-hour run that saved two days of manual testing, it will not work.
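One way to bake these requirements into the automation is to encode them in the result record itself. A sketch; the field names are my own, not taken from any particular framework:

```python
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    name: str                 # what exactly was checked, in plain language
    verdict: str              # "passed" / "failed", readable at a glance
    detail: str = ""          # what exactly happened when a check failed
    screenshots: list = field(default_factory=list)
    logs: list = field(default_factory=list)

    def bug_report(self):
        """A failed result should be one step away from an opened bug."""
        return (f"{self.name}: {self.verdict}\n{self.detail}\n"
                f"attachments: {self.screenshots + self.logs}")

r = CheckResult(name="Login with expired password",
                verdict="failed",
                detail="Expected reset prompt, got a blank page",
                screenshots=["login_fail.png"],
                logs=["server.log"])
print(r.bug_report())
```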

2. The reliability of the results and/or their instability: if the results are not reliable, the same applies. If every time the tester who executes the automation has to spend a few hours analyzing "strange" problems, only to discover that the problem is in the code of... the automation, they will stop trying. This problem can be especially acute in GUI testing, where many things may influence the outcome.
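One common way to soften timing-related flakiness in GUI checks is to poll for a condition instead of asserting immediately. A generic sketch, not tied to any specific GUI framework:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or the timeout expires.
    Polling instead of asserting immediately keeps a check from failing
    on timing noise rather than on a real bug."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Demo: the "UI" becomes ready shortly after the check starts polling.
ready_at = time.monotonic() + 0.3
assert wait_until(lambda: time.monotonic() >= ready_at)
```

Mature UI drivers ship a built-in version of this idea (explicit waits); the point is that stability has to be designed into the checks, not debugged out of them afterwards.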