My article got published in the June ’11 issue of Automated Software Testing Magazine! That is cool news – check it out: http://www.automatedtestinginstitute.com/home/index.php?option=com_content&view=article&id=1276:ast-magazine&catid=105:ast-cover-description&Itemid=122
Direct link to download the PDF of that magazine issue:
I also want to put a copy of the content here as a blog post:
Test Automation Management: A Call For Better Tools
One of the cool things about beginning a test automation effort is that only a minimal level of technical expertise is required – basic skills in testing, some coding and knowledge of key technologies. The shockingly unobvious thing in test automation is the difficulty surrounding management of tests in mid-sized and large automation teams. The major difficulty comes into play when your team tries to analyze the results of automated tests that have been run against your daily build and attempts to keep those tests up to date and stable for the next day’s build. Moreover, as a release’s delivery date draws closer, there is often an increase in scope, instability, expected turnaround and visibility. To manage this chaos effectively, some mechanism must be employed to help organize the team’s work efficiently. The output from automated test scripts is typically low-level logs and stats that are useful for debugging and internal analysis by automators, but less useful for higher-level status and issue reporting. Not to mention, they can be monotonous and just plain boring! So this alone is not effective for managing the test effort.
To truly understand the need for an effective automated test management solution think about issue tracking, source control, and task management systems. These tools are all about easing our collaboration by making inter- and intra-team communication more efficient. The same standard should be placed on a test automation management system in ways similar to the following:
• Intra-team Business Requirements – Test automation engineers should have a collaborative environment that provides reporting, tasking, and a transparent view of the entire project. Ideally this environment enables clear traceability with other testing artifacts such as defect reports (product and internal automation), test cases and software specifications.
• Inter-team Business Requirements – Project stakeholders should have access to a flexible, transparent and multilevel reporting tool to see what is going on with automation and the tested product. It should offer a high level picture of the product’s evolution and trends, as well as aid in evaluating the maturity and progress of the test automation effort.
The Need for Something New?
A crop of test management systems already exists, but the question that must be answered is “do they meet our needs?” While you could theoretically use many of these tools to some effect, many of them have serious disadvantages in the context of heavy test automation.
The basic features that exist in test management systems include:
- Test case and test suite management
- Test cycle and test release management
- Planning, group tasking and scheduling
- Defect tracking
- Reporting and trends analysis (usually defects trends, execution trends)
Let’s take an honest look at how these features are often used on real projects:
Test case and test suite management – Once created, test cases are rarely kept up to date. Some of them may be reused for a subsequent release; some will be junk. Also, most of the tools I’ve seen tend to complicate test case writing – multiple tabs and double clicks to navigate back and forth, slow page responses, etc. I recognize the advantage of fielded data, but should it come at the expense of features such as auto-formatting, formulas, and very simple copying and cloning? Many testers think not and desire this type of simplicity, evident in the fact that they often create cases in Excel spreadsheets and then import them into the test management system for managers.
Test cycle and release management – Basically, this nice feature is often implemented as a folder that inherits test cases from the test plan. Since the test management tool is often disconnected from the Continuous Integration (CI) server, this feature is not automatically tied to the creation of a new release on the CI server, which creates a disconnect that can make it tedious to keep the CI server and test management tool in sync.
Planning – As a manager, you can assign any test or test suite for execution for a particular day. But who wants to repeatedly perform this boring activity? Typically, testing tasks are based on the functionality to which a tester has been assigned.
Defect tracking – Who knows, maybe some companies use this feature. Very often, however, this feature of a test management solution goes to waste, because the organization uses some separate defect tracking solution or has no defect tracking solution at all.
Reporting – I suppose this is the most important feature but only if other features are used accurately. There is no value in the reporting of incorrect or incomplete data.
Traceability – If you have granular requirements in a single Test or Application Lifecycle Management (ALM) tool, it is nice to be able to obtain automatic linkages between tests and requirements. If specifications live somewhere outside of the tool used for test management, there is minimal value from this feature because the linking process and the process of keeping the links up to date will be a manual task.
So, my conclusion is that existing test management tools are useful when their whole set of functionality is used, particularly on mid-to-large scale projects, on distributed and/or remote teams, and for cross-project analysis. Otherwise, they may be more trouble than they are worth. So we must ask ourselves the following question: “If these tools are flawed for manual test case management, why should we rely on them from a test automation perspective?”
What is expected?
From my experience I have drawn a few specific requirements for test automation management tools and I would like to share them here.
Useful Reporting – Reporting should be meaningful and should facilitate the continual learning of lessons over time. This implies the presence of trend analysis over historical data, calculating key performance indicators (KPIs), root cause analysis, and graphical data representation.
Compliance with development practices – A test management solution that seriously works for test automation should comply with basic software development practices. For example, offering access to standardized, xUnit formatted logs through a web services interface.
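As an illustration of the xUnit-formatted logs mentioned above, here is a minimal sketch of emitting automated test results in a JUnit-style XML layout, the de facto report format that most CI servers can consume. The suite and test names are purely illustrative, and a real tool would expose this report through its web services interface rather than just building the string:

```python
# Minimal sketch: serialize test results into xUnit/JUnit-style XML.
# Suite/test names below are illustrative, not from any specific tool.
import xml.etree.ElementTree as ET

def to_xunit(suite_name, results):
    """results: list of (test_name, error_message_or_None)."""
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)),
                       failures=str(sum(1 for _, err in results if err)))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:  # a failed test carries a <failure> child with its message
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")

xml_report = to_xunit("LoginTests", [
    ("test_valid_login", None),
    ("test_locked_account", "timeout waiting for page"),
])
print(xml_report)
```

Because the format is standardized, the same report can be parsed by a CI server, a trend dashboard, or the management tool itself without any tool-specific adapters.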
Traceability – It’s important to be able to trace other testing artifacts to the test automation process, thus making it integral within the project. It is very useful to have all relevant information or references in one place: i.e. references to defects that are covered by test, reference to test design, data and specifications, and references to the CI server (i.e. link to the build number).
Making test debugging less painful – Helping test developers organize their debugging work is often an essential task. Sometimes finding the cause of a test failure and applying fixes requires a lengthy effort, an effort that should be eased by the information provided by a test management tool.
Compliance with Agile lifecycle – The ability to obtain quick daily status information is important for getting an effective start on a new Scrum day. Just enough information to recognize changes is often all that’s necessary. The changes are usually marked by differences in errors found in the test logs between day A and day A+1. For example, the difference in the number of failures (log errors) signals probable changes either in the AUT or in test code (broken test code base, instabilities) or at times in both.
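The day-A versus day-A+1 comparison described above can be sketched as a simple set diff over failing test names; new failures point at probable AUT or test-code changes, while disappearing ones mark fixes. The test names are purely illustrative:

```python
# Sketch of the "what changed since yesterday" view: diff the sets of
# failing tests between two consecutive daily runs.
def failure_diff(yesterday, today):
    """Both arguments are sets of failing test names."""
    return {
        "new_failures": sorted(today - yesterday),   # probable AUT or test-code changes
        "fixed": sorted(yesterday - today),          # failures that went away
        "still_failing": sorted(today & yesterday),  # known, persistent issues
    }

diff = failure_diff({"test_login", "test_search"},
                    {"test_search", "test_checkout"})
print(diff)
# → {'new_failures': ['test_checkout'], 'fixed': ['test_login'], 'still_failing': ['test_search']}
```

Just these three buckets are often enough information to start a Scrum day: the team triages `new_failures` first and leaves `still_failing` to its existing owners.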
Interfaces for triggered and manual test execution – Ideally test automation possesses a facility to run a bunch of tests remotely against several machine instances. This includes hardware management (physical and virtual), tasks queue management and tasks prioritization to rule the order of execution if concurrent use of resources takes place. The interface should not be limited to UI elements, but should provide a simple API that may be used by a CI server to schedule and manage tasks. An example of a good practice is when the CI server triggers test execution through that API upon the deployment of a new build. The API might be a simple REST service in a POST or GET request implementation. The most challenging part of this component is the queuing mechanism: prioritization, watching, management and coordination.
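The queuing mechanism named above as the most challenging part can be sketched with a priority queue: the CI server would enqueue a run via the REST API on deployment of a new build, and the executor drains tasks in priority order. The HTTP layer is omitted here, and the suite names and priority values are assumptions for illustration:

```python
# Sketch of a prioritized test-task queue. A CI server would submit tasks
# through a REST endpoint (omitted); this shows only the ordering logic.
import heapq
import itertools

class TestTaskQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def submit(self, suite, priority=10):
        # Lower number = higher priority (e.g. smoke tests for a fresh build).
        heapq.heappush(self._heap, (priority, next(self._counter), suite))

    def next_task(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = TestTaskQueue()
q.submit("nightly_regression", priority=50)
q.submit("smoke_on_new_build", priority=1)   # e.g. triggered by CI on deployment
q.submit("component_suite", priority=10)
order = [q.next_task() for _ in range(3)]
print(order)  # smoke suite first, then component, then regression
```

A real implementation would add the watching, coordination and hardware-allocation concerns the article mentions on top of this ordering core.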
Tool-agnostic interface – All capabilities should be agnostic to the test automation tool in use. You can use multiple tools or migrate from one to another, but your test automation management is done via a single unified interface. Interfaces with CI should be decoupled so the tool has no preference for a particular CI server: it is the responsibility of the CI server to schedule tasks by calling the API, while test automation can return reporting in an xUnit format.
Transparency of deliverables and internal team collaboration – The idea is to make the tool clear and lightweight for any stakeholder. It should be flexible enough to provide information with an appropriate level of detail.
Keep it simple – The look and feel of the end-user interfaces, as well as the automatic interfaces (i.e. the results submission interface, integration with CI servers), should be intuitive enough to facilitate a quick start.
Reducing test automation costs – Ultimately, the test automation management tool should help make test automation profitable, responsive and valuable.
Current state of the art
There are a few tools that provide test automation management capabilities to some extent of the expected functionality described above. This article is not about comparing them; here are the tools I’m aware of:
Quality Center – does not segregate test automation from manual testing. There is scheduling capability through the Web UI and through an API, but the API is actually a complicated and unstable COM object. HP QC has straightforward interfaces with QTP only. Yes, you can hack it to run something else, but then why buy the tool?
QAComplete – an ALM product similar to HP QC, but it looks more flexible in terms of compatibility with automation tools (e.g. they provide a connector for the QTP test runner). In the same way as QC, the tool mixes up manual and automated efforts and does not provide specific tools for debugging or a collaborative environment tailored to test automation engineers.
Gredy – a lightweight Web tool that explicitly serves as a test automation management system and is agnostic to automation tools. It has advanced capabilities to manage and analyze test execution results, assists in debugging and internal automation team collaboration, and provides flexible reporting.
Tapper – an open source solution from AMD written in Ruby. It has ready-to-go interfaces for test scheduling and viewing test execution reports.
Bromine – provides the capability to manage test cases and a test lab. The tool is tightly tied to the Selenium automation tool; tests can be run on remote machines with Selenium IDE/RC installed.
Litmus – an open source tool from Mozilla, currently used by Mozilla to run and manage test automation suites. It has a nice look and feel and a thoughtful approach to managing tests and analyzing test execution results.