Automation Strategy - Part I

 Fairytale, the first version:


The king searches for a woman who can run the household and restore the kingdom's economic balance, which was lost through the depletion of its treasure and through waste. The local miller jumps up and tells the king that his daughter can turn straw into gold.

Fairytale, the second version:

A manager is looking for a woman (or a man) who can manage the company's economy and restore the economic balance that was lost through the depletion of the company's treasure and the great waste on manual tests.

What to do? The miller, sorry, the local provider of IT services, jumps up and informs the manager that his people can turn manual checks into automation overnight.


Before long they arrive, the people from the IT services provider, full of motivation and hubris. We sit with them for a day or two, explain a little about the product and a little about the tests, and give them a few days to think about infrastructure, while pressing them to reach a conclusion quickly (nobody stops to think that until now there was no automation and things still worked, even if not optimally, so perhaps we could wait a little longer and make a wiser choice).

After that, we give them the regression documents and ask how long, and how much, it will take to turn the test designs into automation code. Four months, we are told. Are you sure? we ask doubtfully. Maybe six months, if there are problems we did not anticipate. Of course: even a newly recruited automation coder would not want to say it will take at least nine months, and certainly not an outside company with an interest in getting its foot in the door. Nobody mentions that the automation people will need frequent support from a test engineer (because they do not know the product) and from a programmer (to help with interfaces and with writing easier or deeper tests).

The "six months" also seems to us a lot, we think, but we will wait. For managers, we will explain that within two months we will be earning valuable working days and invisible workers who work every night.

After a week we have the first scenario. We present it to the group/company, including managers, and everyone is amazed at the buttons being pressed as if by magic on the screen. Wow. Good work, everyone!


The rest is pretty predictable. Rumpelstiltskin does not arrive, so there is a mad rush to do as much as possible. There is no time to think about the important questions, such as which layer to automate; we just translate the regression tests.

Six months later, not only are we not close, we are also frustrated. Not only have we saved nothing, and not only do the "manual testers" keep the same scope of work as before, but we have paid the IT services provider and the licenses, and there is no return on investment on the horizon. The CEO no longer says hello to us in the corridor, and the test engineers no longer bother to appreciate the automation people. We become part of the statistics showing that only 20% of the tests that could be automated actually are (in one survey, for example, only 4% of companies have 90% automation coverage).


So what do we have here?
A serious collision with reality, and a focus on the technology aspect alone - which is, by the way, very important (you do not want to change infrastructure midway, as happened in several companies I know), but no more important than the strategy.

The broader context of automation:
Automation is part of the testing system and is not effective as a single test layer. It should come as part of a layered array, together with unit tests and exploratory testing.

This is how Google describes the testing system:


The "small" tests are unit test, medium - automation (integration), and the large tests are exploratory.

Do you want to achieve high quality and efficiency? That is probably the way. Otherwise, you may save time with automation, but you will not achieve good quality.

The next thing you should know: automation is not testing, and it will not, I repeat, will not replace human testing. When a person tests and evaluates the situation, they follow professional logic and they have feelings (the results of an automation run will never contain the sentence: "What? How annoying it is to wait 4 seconds for that to happen"). Michael Bolton's example: the Mars Rover may be very sophisticated, but it does not "know" that it is on Mars.

And two other quotes:

- “Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.” James Bach

- "Automation testing purpose is to confirm the system is running, which is the opposite of testing." Michael Bolton

So what is automation? Automation is checking and verifying. It verifies a deterministic process, and can only say "passed" or "failed."


This is why automation cannot "translate" regressions: it does not "test". It can take only the "verifying" part of the tests: if you press button X in mode Y, Z will happen. If the UI is distorted and immediately "recovers", we will never know about it, at least not from the results of the run.
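To make the "passed or failed" point concrete, here is a minimal, purely illustrative sketch in Python (pytest style); the function, the mode and button names, and the behavior behind them are all made up:

```python
# A purely illustrative sketch of a "check": a deterministic step plus a
# binary verdict. The function stands in for a real system under test.

def press_button(mode: str, button: str) -> str:
    """Stand-in for the system: given a mode and a button, return the resulting screen."""
    return "Z" if (mode, button) == ("Y", "X") else "unknown"

def test_press_x_in_mode_y_shows_z():
    # The check can only say passed or failed on this single, predefined fact.
    # A UI that momentarily distorted and then "recovered" would pass unnoticed.
    assert press_button("Y", "X") == "Z"
```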

So Google, for example, is still running manual end-to-end testing.

On the other hand, and here comes the positive part: it can do much more than testing can do without such a tool, because automation is, above all, an auxiliary tool. What can it do (inspired by Michael Bolton)? A short sketch follows the lists below.

  • Check CPU, memory, and speed.

  • Run checks that are exhausting for the tester:

  • Are all files installed?

  • Are they in the right position? The right size? Encrypted?

  • Are all files signed?

  • Has the registry been properly updated?

  • Are the configuration files correct?

And besides:

  • Prepare data for tests.

  • Prepare environments.

  • Behave (almost) randomly.

  • Change configuration.

  • Manipulate data.

  • Make alerts pop up.

  • Collect and even analyze logs.

And, right, it can also press buttons.
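To make the installation and configuration checks above concrete, here is a hedged sketch in Python, assuming a hypothetical product installed under /opt/myproduct with a known manifest of files and sizes; the paths, the manifest format, and the config layout are all invented. Signature and registry checks would follow the same pattern with the relevant platform tools.

```python
import json
from pathlib import Path

# Hypothetical install location and expected file manifest (sizes in bytes).
INSTALL_DIR = Path("/opt/myproduct")
MANIFEST = {
    "bin/server": 1_204_224,
    "conf/app.json": 512,
}

def check_installed_files() -> list:
    """Return a list of human-readable problems; an empty list means all checks passed."""
    problems = []
    for relative, expected_size in MANIFEST.items():
        path = INSTALL_DIR / relative
        if not path.exists():
            problems.append(f"missing file: {path}")
        elif path.stat().st_size != expected_size:
            problems.append(f"wrong size: {path} ({path.stat().st_size} bytes)")
    # Configuration sanity: the config file must be valid JSON with a version key.
    config = INSTALL_DIR / "conf/app.json"
    if config.exists():
        try:
            if "version" not in json.loads(config.read_text()):
                problems.append(f"config missing 'version': {config}")
        except json.JSONDecodeError:
            problems.append(f"config is not valid JSON: {config}")
    return problems

if __name__ == "__main__":
    for problem in check_installed_files():
        print("FAIL:", problem)
```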

Yes, manual testing can be eroding, slow, end up mostly with less important or obvious bugs, come at a late stage, and therefore be less effective. Automated checks are supposed to free the testers to deal with the important bugs that only a person can see.

What is most important when we write the automation?

Think for a moment.

Did you answer "coverage percentage"? "Speed"? You were wrong.

Here's the right order:

1. Result readability. Yes, that is exactly what I said: the results. If:

  1. they are not clear: what exactly was tested and what exactly the result was, in plain language;

  2. they do not let you understand the situation - what failed and how - at a glance;

  3. they are missing screenshots, video, and logs;

  4. they do not make it easier to open a bug;

  5. they are not accurate;

  6. they do not pinpoint the problem;

then give the tests an honorable burial in the code cemetery.

If the person who runs the automation has to spend three days analyzing the results of a two-hour run that saved two days of manual testing, it will not work. (A sketch of a readable result record appears after this list.)

2. Reliability of the results and/or instability: if the results are not reliable, the same applies. If every time the tester who runs the automation has to spend a few hours analyzing "strange" problems, only to find that the problem is in the code of... the automation, they will stop trying. This problem can be especially acute in GUI testing, where many things may influence the outcome (see the second sketch after this list).

3. Ease of running: two or three clicks and the run is underway.

4. The code must be robust and should not need to change frequently.

5. Runs must be fast (relative to the product).
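On point 1, here is a minimal sketch of the kind of result record that reads well at a glance; the field names and the example values are invented:

```python
from dataclasses import dataclass

# Illustrative result record: plain-language step description, artifacts, and
# enough detail to open a bug without re-running anything.
@dataclass
class StepResult:
    step: str              # what exactly was done, in plain language
    expected: str
    actual: str
    passed: bool
    screenshot: str = ""   # path to a screenshot captured at the moment of failure
    log_excerpt: str = ""  # the few log lines around the failure

    def bug_title(self) -> str:
        """A ready-made, accurate headline for the bug tracker."""
        return f"{self.step}: expected {self.expected!r}, got {self.actual!r}"

result = StepResult(
    step="Log in with a locked user",
    expected="'Account locked' message",
    actual="generic 'Wrong password' message",
    passed=False,
    screenshot="artifacts/login_locked_user.png",
)
print(result.bug_title())
```

On point 2, one common source of GUI instability is timing. A common mitigation, shown here with Selenium as an assumed toolset (the URL and element id are made up), is to wait explicitly for a condition instead of sleeping for a fixed time:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.test/orders")

# Wait up to 10 seconds for the button to become clickable; most
# timing-related "strange" failures disappear with explicit waits.
save_button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "save-order"))
)
save_button.click()
driver.quit()
```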

What will it be built on?

People often choose to automate against the GUI because it seems the most natural. The problem is that these checks become obsolete quickly: the GUI tends to change frequently and may give different results at different resolutions and on different devices.

Therefore, the automation should be as close as possible to the code: APIs, REST, the execution of a process, and so on. In most cases you will find that the business logic lives in that layer anyway.

If there is a lot of logic in the GUI, you have to consider whether it needs to be tested as well.
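For illustration, here is a hedged sketch of checking a business rule below the GUI, assuming a hypothetical REST endpoint; the URL, payload, and response fields are invented. The same rule checked through the GUI would break whenever the screen layout changes; this check would not.

```python
import requests

BASE_URL = "https://example.test/api"   # hypothetical service

def test_discount_applied_for_large_order():
    # Exercise the business rule directly through the API layer.
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"items": [{"sku": "A-100", "quantity": 50}]},
        timeout=10,
    )
    assert response.status_code == 201
    order = response.json()
    # The rule lives here, regardless of how the GUI renders it.
    assert order["discount_percent"] == 10
```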


What will it be built of?

From building blocks, that is, blocks that each perform one logical action such as login, password replacement, and so on. That way, whoever runs the automation can easily build a scenario according to the context, for example checking a bug fix: the test engineer drags the blocks so that the scenario is relevant. We also gain easy and focused maintenance.

It is also better if there is a way to save the flows that we will run as regression.

It should read much of its data from external files (Data-Driven Testing), so that we can change inputs, and it should let us use words and sentences as triggers for actions, so that we can build automated tests without even knowing the code (Keyword-Driven Testing).
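Here is a minimal sketch of how building blocks, data-driven input, and keyword-driven scenarios can fit together; all names, the CSV layout, and the in-memory "app" are invented for illustration:

```python
import csv

# Each block is one logical action; a scenario is just an ordered list of
# keywords; the inputs come from an external CSV file.

def login(app, data):
    app["user"] = data["user"]

def change_password(app, data):
    app["password"] = data["new_password"]

def logout(app, data):
    app["user"] = None

BLOCKS = {"login": login, "change_password": change_password, "logout": logout}

def run_scenario(keywords, data):
    """Execute the blocks named by `keywords`, in order, against a fresh app state."""
    app = {}
    for keyword in keywords:
        BLOCKS[keyword](app, data)
    return app

# scenarios.csv: one column naming the flow, the rest are the inputs, e.g.
#   flow,user,new_password
#   login change_password logout,alice,S3cret!
with open("scenarios.csv", newline="") as f:
    for row in csv.DictReader(f):
        run_scenario(row["flow"].split(), row)
```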


How will this be done in terms of roles?

Consider an infrastructure team: automation personnel who write only code. They create environments (virtual machines, mobile devices, emulators), run the code, produce the reports, and more.

In an agile setting (and even when it is not agile), the testers can build the scripts themselves, scripts that, for example, send requests and check the responses, or work on the GUI.

Usually, the testers execute the automation.

If there are no automation personnel within the testing teams or the agile teams, the automation people should still know the product and work very closely with the testers. Otherwise, they will be delayed all the time, will build things that are not relevant, and will probably fail.

What do you start with?

No, no, no. Not with the attitude of "we rush to implement all the regressions!" Start small: smoke tests; if that works, sanity; if that works, move on to the regressions. A sketch of one way to organize these stages follows.
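One possible way to organize these stages, sketched with pytest markers; the test names and the stand-in functions are invented, and the markers would be registered in pytest.ini:

```python
import pytest

def service_is_up() -> bool:                  # stand-in for a real health check
    return True

def login(user: str, password: str) -> bool:  # stand-in for a real login flow
    return bool(user and password)

@pytest.mark.smoke
def test_service_responds():
    assert service_is_up()                    # the absolute minimum: is it alive?

@pytest.mark.sanity
def test_login_happy_path():
    assert login("alice", "secret")

@pytest.mark.regression
def test_login_rejects_empty_password():
    assert not login("alice", "")

# Grow in stages:  pytest -m smoke  ->  pytest -m sanity  ->  pytest -m regression
```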

Regarding the regressions and the conversion order, I have created a tool to help.

And on the same subject: here, size does not matter! Whether you have 500 tests or a thousand, that is not impressive. The questions in this case are:

  • How much coverage do you have?

  • When did it last run?

  • How many tests fail regularly?

If your answers are something like 20%, a month ago, and 10% or more, it is not helping anyone.

Automation should not provide 100% coverage!

What should not be converted?

  • Acceptance tests, usability tests, or final testing of the product - things that need to be assessed by a person.

  • One-time tests.

  • Behavior or output that is not deterministic.

  • Whatever runs infrequently, costs a lot, or does not matter (see the section above on what to start with).

Rule of thumb: 80% of manual tests are convertible.


How do you know that automation is reliable in its results?

Automation tends to err toward false negatives, i.e., the run or its results do not find the problem, because we did not write the check correctly, because we did not assert on what actually indicates the problem, and so on. In that case we think the run (or the specific part) passed and there is no problem, when the actual situation is not good.

There is also a tendency toward false positives, but in that case we will see that something is wrong and it will be corrected.


How do we deal with the false negative issue? Beyond code review and careful conversion, while keeping an ongoing conversation with the testers, we do not start running new automation and stop manual testing at the same time. Instead, for a while we run the automation and the manual tests in parallel and check whether they find the same issues. If the automation has found everything and more, we are in a good place. A small sketch of such a comparison follows.
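A small, invented illustration of that comparison, with made-up issue ids:

```python
# For a few cycles, record which issues each channel found and see what the
# automation misses; the issue ids here are purely illustrative.
manual_findings = {"BUG-101", "BUG-102", "BUG-107"}
automation_findings = {"BUG-101", "BUG-107", "BUG-110"}

missed_by_automation = manual_findings - automation_findings   # candidate false negatives
extra_from_automation = automation_findings - manual_findings

print("Possible false negatives (found manually only):", missed_by_automation)
print("Found by automation only:", extra_from_automation)
# Only when "found manually only" stays empty for a while is it reasonably
# safe to reduce the parallel manual regression runs.
```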



