Test Automation Engineer



Introduction and Objectives for Test Automation


Keywords: API testing, CLI testing, GUI testing, System Under Test, test automation architecture, test automation framework, test automation strategy, test automation, test script, testware

Purpose of Test Automation

In software testing, test automation (which includes automated test execution) consists of one or more of the following tasks:

  • Using purpose-built software tools to control and set up test preconditions
  • Executing tests
  • Comparing actual outcomes to predicted outcomes
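These three tasks map directly onto the structure of a typical automated test. A minimal sketch in Python, in pytest style (the `transfer` function and the account data are hypothetical stand-ins for a real SUT):

```python
def transfer(accounts, src, dst, amount):
    """Hypothetical function under test: move funds between accounts."""
    accounts[src] -= amount
    accounts[dst] += amount
    return accounts

def test_transfer_moves_funds():
    # 1. Set up test preconditions.
    accounts = {"A": 100, "B": 50}
    # 2. Execute the test.
    result = transfer(accounts, "A", "B", 30)
    # 3. Compare actual outcomes to predicted outcomes.
    assert result == {"A": 70, "B": 80}

test_transfer_moves_funds()  # passes silently when expectations hold
```

A test execution tool such as pytest would discover and run `test_transfer_moves_funds` automatically; the point here is only the three-step structure.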

A good practice is to separate the software used for testing from the system under test (SUT) itself, to minimize interference between the two. There are exceptions, such as embedded systems, where the test software must be deployed to the SUT. Test automation is expected to help run many test cases consistently and repeatedly on different versions of the SUT and/or environments. But test automation is more than a mechanism for running a test suite without human interaction. It involves a process of designing the testware, including:

  • Software
  • Documentation
  • Test cases
  • Test environments
  • Test data

Testware is necessary for the testing activities that include:

  • Implementing automated test cases
  • Monitoring and controlling the execution of automated tests
  • Interpreting, reporting and logging the automated test results

Test automation has different approaches for interacting with a SUT:

  • Testing through the public interfaces to classes, modules or libraries of the SUT (API testing)
  • Testing through the user interface of the SUT (e.g., GUI testing or CLI testing)
  • Testing through a service or protocol
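The first two approaches can be contrasted in a short sketch: the same (hypothetical) `slugify` behavior is tested once through its public API and once through a command-line interface. The one-line subprocess script stands in for the SUT's real CLI entry point, which is an assumption for illustration:

```python
import subprocess
import sys

# API testing: call the public interface of the SUT directly.
def slugify(title: str) -> str:
    """Hypothetical SUT function: turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

assert slugify("Hello World") == "hello-world"

# CLI testing: drive equivalent behavior through a command line,
# observing only what the command prints to stdout.
proc = subprocess.run(
    [sys.executable, "-c", "print('-'.join('Hello World'.lower().split()))"],
    capture_output=True,
    text=True,
)
assert proc.returncode == 0
assert proc.stdout.strip() == "hello-world"
```

API tests see rich, structured results; CLI (and GUI) tests can only check what the interface exposes, which foreshadows the oracle limitation discussed below.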

Objectives of test automation include:

  • Improving test efficiency
  • Providing wider function coverage
  • Reducing the total test cost
  • Performing tests that manual testers cannot
  • Shortening the test execution period
  • Increasing the test frequency/reducing the time required for test cycles

Advantages of test automation include:

  • More tests can be run per build
  • The possibility to create tests that cannot be done manually (real-time, remote, parallel tests)
  • Tests can be more complex
  • Tests run faster
  • Tests are less subject to operator error
  • More effective and efficient use of testing resources
  • Quicker feedback regarding software quality
  • Improved system reliability (e.g., repeatability, consistency)
  • Improved consistency of tests

Disadvantages of test automation include:

  • Additional costs are involved
  • Initial investment required to set up the TAS
  • Requires additional technologies
  • Team needs to have development and automation skills
  • Ongoing TAS maintenance requirement
  • Can distract from testing objectives, e.g., focusing on automating test cases at the expense of executing tests
  • Tests can become more complex
  • Additional errors may be introduced by automation

Limitations of test automation include:

  • Not all manual tests can be automated
  • The automation can only check machine-interpretable results
  • The automation can only check actual results that can be verified by an automated test oracle
  • Not a replacement for exploratory testing

Success Factors in Test Automation

The following success factors apply to test automation projects that are already in operation; the focus is therefore on influences that affect the long-term success of the project. Factors influencing the success of test automation projects at the pilot stage are not considered here.

Major success factors for test automation include the following:

Test Automation Architecture (TAA)

The Test Automation Architecture (TAA) is very closely aligned with the architecture of a software product. It should be clear which functional and non-functional requirements the architecture is to support; typically these will be the most important requirements. Often the TAA is designed for maintainability, performance and learnability. (See ISO/IEC 25000:2014 for details of these and other non-functional characteristics.) It is helpful to involve software engineers who understand the architecture of the SUT.

SUT Testability

The SUT needs to be designed for testability that supports automated testing. In the case of GUI testing, this could mean that the SUT should decouple as much as possible the GUI interaction and data from the appearance of the graphical interface. In the case of API testing, this could mean that more classes, modules or the command-line interface need to be exposed as public so that they can be tested.

The testable parts of the SUT should be targeted first. Generally, a key factor in the success of test automation lies in the ease of implementing automated test scripts. With this goal in mind, and also to provide a successful proof of concept, the Test Automation Engineer (TAE) needs to identify modules or components of the SUT that are easily tested with automation and start from there.
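Decoupling logic from the interface is what makes such easy targets possible. In the sketch below (all names are hypothetical), the validation rule lives in a plain function rather than inside GUI event-handler code, so it can be automated first without driving the graphical interface at all:

```python
# Hypothetical SUT code: the validation rule is kept separate from the
# GUI layer, so it is testable without any GUI toolkit or GUI driver.
def is_valid_username(name: str) -> bool:
    """Business rule: 3-20 characters, letters and digits only."""
    return 3 <= len(name) <= 20 and name.isalnum()

# The GUI (not shown) would call is_valid_username() from its handlers.
# Automated tests target this easily testable module first:
assert is_valid_username("alice42")
assert not is_valid_username("ab")            # too short
assert not is_valid_username("no spaces")     # invalid character
```

Had the rule been embedded in GUI callback code, automating the same check would require a GUI test tool and a rendered interface, which is a far harder starting point.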

Test Automation Strategy

The test automation strategy must be practical and consistent, and must address the maintainability and consistency of the SUT.

It may not be possible to apply the test automation strategy in the same way to both old and new parts of the SUT. When creating the automation strategy, consider the costs, benefits and risks of applying it to different parts of the code.

Consideration should be given to testing both the user interface and the API with automated test cases to check the consistency of the results.
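Such a consistency check can be as simple as asserting that what the UI presents agrees with what the API computes. A minimal sketch, where both layers are hypothetical stand-ins for the SUT's real API and UI code:

```python
# Hypothetical SUT layers: an API function and a UI handler that
# should produce consistent results for the same input.
def api_get_total(items):
    """API layer: compute the order total."""
    return sum(items)

def ui_get_total_text(items):
    """UI layer: format the total for display."""
    return f"Total: {sum(items)}"

# Automated consistency check: the value the UI shows must match
# the value the API returns.
items = [10, 20, 12]
assert ui_get_total_text(items) == f"Total: {api_get_total(items)}"
```

A divergence here would reveal either a UI defect or an API defect that testing each interface in isolation might miss.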

Test Automation Framework (TAF)

A test automation framework (TAF) that is easy to use, well documented, and maintainable supports a consistent approach to automating tests.

In order to establish an easy to use and maintainable TAF, the following must be done:

  • Implement reporting facilities: The test reports should provide information (pass/fail/error/not run/aborted, statistics, etc.) about the quality of the SUT. Reporting should provide the information the involved testers, test managers, developers, project managers and other stakeholders need to obtain an overview of that quality.

  • Enable easy troubleshooting: In addition to test execution and logging, the TAF has to provide an easy way to troubleshoot failing tests. A test can fail due to:
    • failures found in the SUT
    • failures found in the TAS
    • problems with the tests themselves or the test environment
  • Address the test environment appropriately: Test tools are dependent upon consistency in the test environment. Having a dedicated test environment is necessary for automated testing. If there is no control of the test environment and test data, the setup for tests may not meet the requirements for test execution and it is likely to produce false execution results.
  • Document the automated test cases: The goals for test automation have to be clear, e.g., which parts of the application are to be tested, to what degree, and which attributes are to be tested (functional and non-functional). This must be clearly described and documented.
  • Trace the automated tests: The TAF shall support traceability, enabling the test automation engineer to trace individual steps to test cases.
  • Enable easy maintenance: Ideally, the automated test cases should be easily maintained so that maintenance will not consume a significant part of the test automation effort. In addition, the maintenance effort needs to be in proportion to the scale of the changes made to the SUT. To do this, the cases must be easily analyzable, changeable and expandable. Furthermore, automated testware reuse should be high to minimize the number of items requiring changes.
  • Keep the automated tests up to date: When new or changed requirements cause tests or entire test suites to fail, do not disable the failed tests – fix them.
  • Plan for deployment: Make sure that test scripts can be easily deployed, changed and redeployed.
  • Retire tests as needed: Make sure that automated test scripts can be easily retired if they are no longer useful or necessary.
  • Monitor and restore the SUT: In real practice, to continuously run a test case or set of test cases, the SUT must be monitored continuously. If the SUT encounters a fatal error (such as a crash), the TAF must have the capability to recover, skip the current case, and resume testing with the next case.
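The reporting, troubleshooting, and monitor-and-restore requirements above can be sketched as a minimal runner loop. The suite contents are hypothetical; the point is that a fatal error in one case is logged and skipped so the run resumes with the next case:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def run_suite(test_cases):
    """Run each case, recover from fatal errors, and report results."""
    results = {"pass": 0, "fail": 0, "error": 0}
    for name, case in test_cases:
        try:
            case()
            results["pass"] += 1
            logging.info("PASS  %s", name)
        except AssertionError as exc:
            # Expected-vs-actual mismatch: a failure in the SUT
            # (or in the test itself) - log it and continue.
            results["fail"] += 1
            logging.warning("FAIL  %s: %s", name, exc)
        except Exception as exc:
            # Fatal error (e.g., SUT crash): recover, skip this
            # case, and resume testing with the next one.
            results["error"] += 1
            logging.error("ERROR %s: %s", name, exc)
    return results

def t_pass():
    assert 1 + 1 == 2

def t_fail():
    assert 1 + 1 == 3, "expected 3"

def t_crash():
    raise RuntimeError("SUT crashed")

suite = [("t_pass", t_pass), ("t_fail", t_fail), ("t_crash", t_crash)]
print(run_suite(suite))  # {'pass': 1, 'fail': 1, 'error': 1}
```

Note the ordering of the handlers: `AssertionError` is caught before the general `Exception`, so test failures and SUT/TAS errors are reported as distinct outcomes, which is exactly what easy troubleshooting requires.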

Test automation code can be complex to maintain, owing to the different test tools used, the different types of verification applied, and the different testware artifacts that have to be maintained (such as test input data, test oracles and test reports). It is not unusual to have as much test code as there is code in the SUT, which is why it is of utmost importance that the test code be maintainable.

With these maintenance considerations in mind, in addition to the important items that should be done, there are a few that should not be done, as follows:

  • Do not create code that is sensitive to the interface (i.e., it would be affected by changes in the graphical interface or in non-essential parts of the API).
  • Do not create test automation that is sensitive to data changes or has a high dependency on particular data values (e.g., test input depending on other test outputs).
  • Do not create an automation environment that is sensitive to the context (e.g., operating system date and time, operating system localization parameters or the contents of another application). In this case, it is better to use test stubs as necessary so the environment can be controlled.
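The last point, controlling context sensitivity with a stub, can be illustrated by injecting the date source instead of reading the operating system clock inside the SUT code (the `greeting` function is hypothetical):

```python
from datetime import date

# The date source is injected rather than hard-coded, so a test can
# substitute a fixed stub and the result no longer depends on the
# operating system date at the moment the test happens to run.
def greeting(today=date.today):
    """Hypothetical SUT function with an injectable clock."""
    d = today()
    return "Happy New Year!" if (d.month, d.day) == (1, 1) else "Hello!"

# Test stubs: controlled dates, independent of the real environment.
assert greeting(lambda: date(2024, 1, 1)) == "Happy New Year!"
assert greeting(lambda: date(2024, 6, 15)) == "Hello!"
```

The same injection pattern applies to localization parameters or the contents of other applications: any environmental input the test cannot control should arrive through a seam where a stub can replace it.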

The more of these success factors that are met, the more likely the test automation project is to succeed. Not all factors are required, and in practice rarely are all factors met. Before starting a test automation project, it is important to analyze its chance of success by considering which factors are in place and which are missing, keeping in mind the risks of the chosen approach as well as the project context. Once the TAA is in place, it is important to investigate which items are missing or still need work.

Preparing for Test Automation


Keywords: testability, driver, level of intrusion, stub, test execution tool, test hook, test automation manager

SUT Factors Influencing Test Automation

Tool Evaluation and Selection

Design for Testability and Automation

The Generic Test Automation Architecture


Keywords: capture/playback, data-driven testing, generic test automation architecture, keyword-driven testing, linear scripting, model-based testing, process-driven scripting, structured scripting, test adaptation layer, test automation architecture, test automation framework, test automation solution, test definition layer, test execution layer, test generation layer

Introduction to gTAA

Overview of the gTAA

Test Generation Layer

Test Definition Layer

Test Execution Layer

Test Adaptation Layer

Configuration Management of a TAS

Project Management of a TAS

TAS Support for Test Management

TAA Design

Introduction to TAA Design

Approaches for Automating Test Cases

Technical considerations of the SUT

Considerations for Development/QA Processes

TAS Development

Introduction to TAS Development

Compatibility between the TAS and the SUT

Synchronization between TAS and SUT

Building Reuse into the TAS

Support for a Variety of Target Systems

Deployment Risks and Contingencies


Keywords: risk, risk mitigation, risk assessment, product risk

Selection of Test Automation Approach and Planning of Deployment/Rollout

Pilot Project


Deployment of the TAS Within the Software Lifecycle

Risk Assessment and Mitigation Strategies

Test Automation Maintenance

Types of Maintenance

Scope and Approach

Test Automation Reporting and Metrics


Keywords: automation code defect density, coverage, traceability matrix, equivalent manual test effort, metrics, test logging, test reporting

Selection of TAS Metrics

Implementation of Measurement

Logging of the TAS and the SUT

Test Automation Reporting

Transitioning Manual Testing to an Automated Environment


Keywords: confirmation testing, regression testing

Criteria for Automation

Identify Steps Needed to Implement Automation within Regression Testing

Factors to Consider when Implementing Automation within New Feature Testing

Factors to Consider when Implementing Automation of Confirmation Testing

Verifying the TAS



Verifying Automated Test Environment Components

Verifying the Automated Test Suite

Continuous Improvement



Options for Improving Test Automation

Planning the Implementation of Test Automation Improvement