Applying for jobs, whether you are successful or not, is a worthwhile endeavour. Applications make you think, encourage you to research, and force you to stay relevant. Right, enough of me plugging for you all to go out and apply for jobs; down to the reason for this blog post. I was asked to answer a question for a recent application and was quite pleased with my answer. The question was:
what makes a good automated test?
My answer was SMART.
No, not that my answer was actually a smart answer (although if you think it is, please feel free to let me know), but that the Human Resources acronym SMART could actually apply here. Here's my full response:
Specific: Will the test cover more than one piece of functionality? An automated test, when it fails, should identify the area that is broken. Having an automated test that leads the QA on a wild goose chase through several areas of functionality isn't a sound return on investment. One of the benefits of automation is the quantity of tests you can have; therefore, break the tests down to be specific.
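To make that concrete, here is a minimal sketch (the function names `apply_discount` and `checkout_total` are made up for illustration) of two narrow tests instead of one broad journey test:

```python
# Hypothetical example: two narrow tests instead of one broad journey test.

def apply_discount(price, percent):
    """Reduce a price by the given percentage."""
    return round(price * (1 - percent / 100), 2)

def checkout_total(prices, discount_percent=0):
    """Sum a basket of prices, then apply any discount."""
    return apply_discount(sum(prices), discount_percent)

# Each test targets exactly one behaviour, so a failure points
# straight at the broken area rather than a whole user journey.
def test_discount_is_applied():
    assert apply_discount(100.0, 10) == 90.0

def test_total_sums_the_basket():
    assert checkout_total([10.0, 20.0]) == 30.0
```

If `test_discount_is_applied` fails, you know the discount logic is broken; a single end-to-end test covering both would leave you hunting.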
Measurable: Can the test give repeatable outputs that allow us to determine if it passes or fails? If it isn't repeatable (e.g. it will work until we merge the live database down to our test environment), then spending the time to write a test that will potentially give false negatives (or positives) isn't, in my humble opinion, a good use of said time.
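One common way to get that repeatability is for the test to create its own data rather than depend on whatever happens to be in a shared database. A minimal sketch, using an in-memory SQLite database and made-up names (`count_active_users`, a `users` table):

```python
# Hypothetical sketch: make a test repeatable by seeding its own data
# instead of relying on the contents of a shared environment.
import sqlite3

def count_active_users(conn):
    """Count users flagged as active -- the code under test."""
    return conn.execute(
        "SELECT COUNT(*) FROM users WHERE active = 1"
    ).fetchone()[0]

def test_count_active_users():
    # An in-memory database seeded by the test itself, so the result
    # is the same on every run and in every environment.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?)",
        [("alice", 1), ("bob", 0), ("carol", 1)],
    )
    assert count_active_users(conn) == 2
```

Merging the live database down can no longer change this test's outcome, because it never reads shared state.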
Achievable: Can the functionality be automated, or are there specific steps that just have to remain a manual action? Unfortunately, not everything can be automated; therefore, achievability must be a consideration.
Realistic: Is it worth automating the functionality? This is slightly similar to Measurable and Achievable but differs in that it looks at cost; it may be achievable to automate, and it may also be measurable, but does our upfront cost in doing so negate the return on investment? For example (and this example is massively exaggerated), if it took weeks to automate logging into one system and then checking whether another had logged your login, but this functionality never changed and was deemed low risk, is it realistic and sensible to invest that large amount of time in this area?
This is where I diverge from the acronym pathway; can we reuse the automation code anywhere else? A good automated test, in the process of being written, benefits others who write automation by adding to the framework or library of reusable code. For example, a new Wait method or a new way of interacting with the browser.
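A wait helper of the kind mentioned above might look like this. This is a hedged sketch, not any particular framework's API; `wait_until` and its parameters are names I've made up for illustration:

```python
# Hypothetical sketch of a reusable Wait helper: poll a condition
# until it returns a truthy value or a timeout expires. Any test in
# the suite can reuse this instead of scattering ad-hoc sleeps.
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Repeatedly call `condition` until truthy, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Usage: waiting for some asynchronous state to become ready.
flag = {"ready": False}
flag["ready"] = True  # in a real test, a background action sets this
assert wait_until(lambda: flag["ready"]) is True
```

Once a helper like this lives in the shared library, every test written after it gets the benefit, which is exactly the return on investment the paragraph above describes.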
Anyway, that's enough of me wittering on. I would love to hear your thoughts on this!