Imagine you are going to sell your test automation project to Project/Test/QA management the way sales managers and engineers present, propose, and cost a solution. By "solution" we can mean outsourced project development, COTS, open source with paid hosting, integration and support, or whatever else. After all, automated testing is a product itself. As James Bach said, "Useful test automation is a major software project." I'm not trying to escape from the original topic.
Really, how will you sell that unclear thing called automation? You need to prepare the best proposal: the solution itself, surveys, fact sheets, comparisons and, of course, happy case studies. Once your presentation is polished, you are ready to present it to your customer, stakeholders or, in general, decision-makers.
In fact, people like and want real cases; they want to hear that you or somebody else did the same for one of the "Fortune 100" companies. Moreover, they want to know how, why, and for how much. Your potential clients want to know the risks, issues, lessons learned, and benefits of your proposal from other people who have already been through the experience. They would like to ask those people directly but usually cannot. My goal as a seller is to forestall them, so I try to show them case studies.
Case studies can be personalized or anonymized. The latter appears as a pattern (proper or anti) and still serves as a message reflecting real-life situations and experience. Someone may say it's tricky to show anonymous reviews and surveys, and in most cases I'd agree. But what if your market is so limited and closed to sharing information? Then nothing is left but careful hinting. What if your study covers a huge group of customers or users? In that case you can extract representative groups with the same observed results, capabilities, behavior, or model, or generally segment the target audience into groups. I apologize for my passion, but statistical analysis is something I like as much as test automation.
Here are my fortunate case studies, cheerfully given; please consider them patterns of proper, happy automated software testing.
Case #1 Support for various environments, compatibility with other software and components
The test automation framework is designed to be platform-independent, so test engineers can easily run test sets against various environments and combinations of installed software in order to test compatibility and fault tolerance under different real-life usage scenarios. For Web systems this could mean running on various browsers and browser versions, with different combinations of installed plug-ins and add-ons, on different sets of OSes.
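The idea of one test set against many environments can be sketched as a simple combination matrix. The browser, OS, and plug-in names below are invented for illustration, not taken from any real framework:

```python
# A minimal sketch: enumerate every browser/OS/plug-in combination
# that a platform-independent test set should be run against.
from itertools import product

BROWSERS = ["Chrome 27", "Firefox 21", "IE 10"]
OSES = ["Windows 7", "Ubuntu 12.04", "OS X 10.8"]
PLUGIN_SETS = [(), ("Flash",), ("Flash", "Java")]

def environment_matrix(browsers, oses, plugin_sets):
    """Build the full cross-product of environments for a compatibility run."""
    return [
        {"browser": b, "os": o, "plugins": list(p)}
        for b, o, p in product(browsers, oses, plugin_sets)
    ]

matrix = environment_matrix(BROWSERS, OSES, PLUGIN_SETS)
print(len(matrix))  # 3 browsers x 3 OSes x 3 plug-in sets = 27 combinations
```

In a real framework each entry of this matrix would be handed to a runner that provisions the environment and executes the same test set against it.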
Case #2 Multiple Localization testing
Automated tests are not tied to visible GUI captions; recognition is implemented by checking invariant attributes (ID, location binding in the hierarchy, appearance index, and so on). Localization validation is therefore not a nightmare. For that, a localization database is designed which feeds the data for checking localization attributes in the GUI. Basically, you can verify the GUI either at runtime or as a batch state checkpoint. The latter is better, as it brings a separate level of testing called GUI localization testing. In other words, the whole automation scope splits into functional testing (the same code, without parameterization, tests product behavior for all locales) and GUI testing (which just verifies appearance using dynamic parameterization from the localization DB).
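The localization-DB-driven check can be sketched roughly as follows. The widget IDs, locales, and captions are made up for illustration; a real suite would capture `gui_state` from the application under test:

```python
# A minimal sketch of verifying GUI captions against a localization database.
# (widget_id, locale) -> expected caption; entries here are invented examples.
LOCALIZATION_DB = {
    ("btn_ok", "en"): "OK",
    ("btn_ok", "de"): "OK",
    ("btn_cancel", "en"): "Cancel",
    ("btn_cancel", "de"): "Abbrechen",
}

def verify_captions(gui_state, locale):
    """Compare captions captured from the GUI with the expected localized values.

    gui_state maps invariant widget IDs to the captions actually displayed.
    Returns a list of (widget_id, expected, actual) mismatches.
    """
    mismatches = []
    for widget_id, actual in gui_state.items():
        expected = LOCALIZATION_DB.get((widget_id, locale))
        if expected is not None and expected != actual:
            mismatches.append((widget_id, expected, actual))
    return mismatches

# Example: the German build accidentally shipped an English caption.
state = {"btn_ok": "OK", "btn_cancel": "Cancel"}
print(verify_captions(state, "de"))  # [('btn_cancel', 'Abbrechen', 'Cancel')]
```

Note how the functional tests never touch these captions: they locate widgets by the invariant IDs, while this batch checkpoint handles appearance separately.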
Case #3 Unstable GUI but stable API
Some companies made and enforced a decision to test by pulling the API, meaning the end-user UI is not touched during testing. The drawback is that this approach does not cover the real user interface, but test coverage and stability are incomparably better than with functional testing through the GUI. Other advantages are less expensive automation overall, quicker execution, and integration with the AUT's codebase.
However, the project decided to have GUI testing anyway. For that, the test automation guys came up with GUI testing as an additional verification layer inlined into the API tests. It's really simple and clever: call or change something in the application via the API, then verify the result again with the API and, additionally and independently, in the GUI.
Case #4 Business needs to support the software as-is for a few years, with bug fixes applied via patches, CFs, and so forth
This case is usual for COTS and enterprise systems. Support requires testing, and the main effort is regression: the business must make sure a fix does not affect existing functionality. It is very risky to ship a critical fix without a round of regression testing. Here automation can come as a cheap or almost free solution (someone just has to run it and review the results).
Case #5 The system does not evolve significantly between releases; a new release is just the old one plus some fixes and additional features
Again, this is a nice point to leave regression to test automation. The remaining effort is spent on new features and bugs. From this point automation becomes very effective, as the return on the automation investment is a continuous process. It's like planting: initially you just spend; after a while you gather the crops, though a little support effort still remains.
Case #6 Frequent commits, frequent builds
Make test automation part of continuous integration in an aggressive project environment, to provide fast and frequent feedback about build quality. Some tests can run as sanity checks before and/or after each commit; the rest of the suite runs upon each new build's arrival.
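The split between commit-time sanity checks and build-time full runs can be sketched as simple tag-based selection. The tags and test names below are invented:

```python
# A minimal sketch of selecting which tests run at which CI stage.
TESTS = [
    {"name": "test_login", "tags": {"sanity"}},
    {"name": "test_checkout", "tags": {"sanity", "payments"}},
    {"name": "test_full_report_export", "tags": {"slow"}},
]

def select(tests, stage):
    """'commit' stage runs only sanity-tagged tests; 'build' runs everything."""
    if stage == "commit":
        return [t["name"] for t in tests if "sanity" in t["tags"]]
    return [t["name"] for t in tests]

print(select(TESTS, "commit"))  # ['test_login', 'test_checkout']
print(select(TESTS, "build"))   # all three tests
```

Most CI servers and test runners support this natively via tags or markers; the point is only that the fast subset gates each commit while the full suite gates each build.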
Case #7 Mission critical project shipping
In this case stakeholders may agree to include the automated tests in the delivery package. For instance, a new build is not accepted if the test automation planned for that build/release is not ready yet, or if the whole test suite does not come out green (passed). This is a real challenge. Just imagine: test automation is an inseparable part of the product. I like this approach, especially for self-tests and embedded automated tests. Take a look at the example.
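Such an acceptance gate can be sketched in a few lines. The test names and the "planned vs. implemented" bookkeeping are illustrative assumptions:

```python
# A minimal sketch of a build acceptance gate: the build is rejected unless
# every planned test exists and every executed test is green.
def build_accepted(planned_tests, implemented_tests, results):
    """results maps test name -> 'passed'/'failed' for the run against this build."""
    all_implemented = set(planned_tests) <= set(implemented_tests)
    all_green = bool(results) and all(r == "passed" for r in results.values())
    return all_implemented and all_green

results = {"test_boot": "passed", "test_failover": "failed"}
print(build_accepted(["test_boot", "test_failover"],
                     ["test_boot", "test_failover"], results))  # False: one red test
```

The same check, embedded as a self-test inside the product, is what makes the automation a genuine part of the delivery package rather than an external afterthought.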
Case #8 Our system is a legacy back-end data processor plus a UI that is somewhat unstable (under continuous development). Almost no money for test automation
Under this case goes a typical claim: "We just need to support that unknown legacy system with little money." Well, if the effort must be small, consider automating system log parsing; checking database consistency, reliability, and data integrity by running queries periodically; automatic processing of memory dumps (worth learning in depth); built-in memory leak checkers; and continuous health monitoring of the environment.
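Log parsing is the cheapest of these to start with. A sketch, with made-up log lines and failure patterns:

```python
# A minimal sketch of automated log scanning for a legacy back end.
import re

# Patterns that indicate trouble; extend as the system's failure modes are learned.
ERROR_PATTERNS = [
    re.compile(r"\bERROR\b"),
    re.compile(r"OutOfMemory"),
    re.compile(r"deadlock detected", re.IGNORECASE),
]

def scan_log(lines):
    """Return (line_number, line) pairs that match any known failure pattern."""
    hits = []
    for i, line in enumerate(lines, start=1):
        if any(p.search(line) for p in ERROR_PATTERNS):
            hits.append((i, line))
    return hits

log = [
    "2013-05-01 12:00:01 INFO  batch started",
    "2013-05-01 12:00:09 ERROR failed to write record 42",
    "2013-05-01 12:00:10 WARN  Deadlock detected, retrying",
]
print(scan_log(log))  # flags lines 2 and 3
```

Run on a schedule against rotated logs, this gives the "unknown legacy system" a watchdog for almost no money, and the pattern list grows with every incident.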
Case #9 Test automation that looks chaotic but is in fact smart and distributed
Imagine you have a cloud of various environments and a smart dispatcher which decides which tests to run, where, and how, based on its knowledge of the cloud. This is an amazing abstraction that delivers accelerated test runs by relying on widely spread test stands and awareness of each stand's load and usage. The system completely controls execution; the engineer just checks the results in real time. IMO it's state of the art!
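The heart of such a dispatcher is a load-aware assignment loop. A toy sketch, with invented stand names and load counts (a real dispatcher would also match environment requirements, not just load):

```python
# A minimal sketch of a load-aware test dispatcher.
def dispatch(tests, stands):
    """Assign each test to the currently least-loaded stand, updating loads as we go.

    stands maps stand name -> number of tests already queued on it.
    """
    loads = dict(stands)  # work on a copy; don't mutate the caller's view
    assignment = {}
    for test in tests:
        stand = min(loads, key=loads.get)  # pick the least-loaded stand
        assignment[test] = stand
        loads[stand] += 1
    return assignment

stands = {"win-stand": 2, "linux-stand": 0, "mac-stand": 1}
print(dispatch(["t1", "t2", "t3"], stands))
```

Here `t1` lands on the idle `linux-stand` first; subsequent tests spread out as the simulated loads even up, which is exactly the "chaotic-looking but smart" behavior an engineer observes from outside.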
Case #10 Integration with 3rd-party systems
A test automation daemon may sit as a listener somewhere at an integration point. The automated test should understand the native communication language, so listening to traffic and verifying the results becomes straightforward. Say we build Web service testing (SOAP/WSDL) which utilizes the same data schema as the web service itself. A test can proxy, interpret, stub, or listen to the data.
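A passive listener mostly boils down to validating intercepted messages against the service's own schema. A sketch using a tiny invented JSON schema (a SOAP/WSDL listener would do the same against the XML schema):

```python
# A minimal sketch of a listener validating intercepted messages
# against the integration point's expected field types.
import json

# Invented example schema: field name -> expected Python type.
ORDER_SCHEMA = {"order_id": int, "customer": str, "total": float}

def validate_message(raw):
    """Check that an intercepted JSON message matches the expected field types."""
    payload = json.loads(raw)
    errors = []
    for field, ftype in ORDER_SCHEMA.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"bad type for {field}")
    return errors

good = json.dumps({"order_id": 7, "customer": "ACME", "total": 99.5})
bad = json.dumps({"order_id": "seven", "customer": "ACME"})
print(validate_message(good))  # []
print(validate_message(bad))   # ['bad type for order_id', 'missing field: total']
```

Because the check reuses the same schema the service itself incorporates, the listener never drifts out of sync with the contract it is guarding.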
Case #11 Security is priority number one on an Agile (shipping often) project
It's no secret that security testing can be perfectly modeled and automated. The core security tests will work against most web projects (XSS, SQL injection, session hijacking, tampering, sniffing, and so forth). If a project runs short iterations and releases frequently, security tests can be run over and over again against the pages and their content. For instance, a test engine walks through all web pages, feeds basic dangerous scripts (JS, SQL, VBS, sh) into every field, then triggers form submission. You can even simulate DoS attacks and brute-force password cracking. There are already many tools which will do this for you, so our job is just to automate the execution and reporting.
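The walk-and-feed idea can be sketched with a toy payload list and a deliberately vulnerable stand-in handler (real scanners are far more sophisticated; everything below is invented for illustration):

```python
# A minimal sketch of feeding canned attack payloads into form fields.
PAYLOADS = [
    "<script>alert(1)</script>",  # reflected XSS probe
    "' OR '1'='1",                # classic SQL injection probe
]

def naive_form_handler(value):
    """A deliberately vulnerable handler: echoes input straight back into HTML."""
    return f"<p>You searched for: {value}</p>"

def fuzz_field(handler, payloads):
    """Report payloads that come back unescaped, i.e. a likely injection point."""
    findings = []
    for p in payloads:
        if p in handler(p):
            findings.append(p)
    return findings

print(fuzz_field(naive_form_handler, PAYLOADS))  # both payloads reflected unescaped
```

Plug a real page crawler in front and a report generator behind, and this loop runs on every short iteration at near-zero marginal cost, which is exactly what a frequently shipping project needs.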
Is that enough? It is not, but I need to stop this story for fear it will never finish. Actually, any project can produce different cases and practices, since inventing a solution that best fits the project demands creativity and out-of-the-ordinary thinking. The patterns don't require using them as-is; they are rather reusable good practices which can be ported and adopted if that makes sense in your project. Combining patterns may bear new patterns, or a single pattern a level higher than the reused ones.
It would be very nice if we shared specific cases, patterns, and lessons learned from fortunate test automation. I just wonder why bloggers running test automation exchange their successes and achievements so rarely; instead, on the internet we mostly find risks, problems, obstacles, and why-not-to-run-low-ROI-automation arguments.