
Introduction to Test Automation

The concept of test automation has been around since the mid-1980s, when record-and-playback tools were first created. A user would manually perform a test while the tool captured the inputs and outputs. The user could then play back the test, and the tool would repeat the recorded steps while comparing the output to the originally recorded output, reporting any differences as failures. These tests could be reused as long as the application under test (AUT) remained unchanged.

The problem with this type of test is that all data is hard-coded, and it lacks certain vital features of a testing tool: conditional statements, loops, logging, exception handling, reporting, database testing, re-execution of failed tests, and screenshot capture. These scripts were not reliable and would often fail when replayed due to alerts, popups, messages, or other unforeseeable events. If the tester made a mistake during the recording process, they would have to re-record the entire script. Likewise, each time the application changed, the tester would have to re-record each test. This became a maintenance nightmare, and the cost of keeping the scripts current was unacceptable.

In the early 1990s, more sophisticated keyword-driven testing tools were created that harnessed more advanced programming languages, allowing for conditional statements, loops, logging, exception handling, and more. They also allowed data to be stored in an external file, so scripts did not have to be re-recorded to use different data. However, using these tools required more advanced skills; the tester effectively needed to be a programmer. Although this opened the door to automatic execution and regression testing, there was still no way to measure automated coverage, and fragile tests still broke easily. These tools were also usually limited in the browsers in which they could execute a test.

More recently, the demand for automated testing has sparked an influx of testing tools, each with a different approach. These approaches include model-based testing, data-driven testing, and image-based testing, all of which have their pros and cons and should be weighed against the needs of the application under test.

Introduction to Crystal Test

Crystal Test is a web-based test management system, developed in .NET (C#), that serves as a test case repository and leverages Selenium WebDriver for automated testing.

Crystal Test has three distinct parts: a database, an automation engine, and a front-end GUI. The database serves not only as a repository for test cases and results, but is also used to define automated test cases. The automation engine is where the code for the tests themselves resides; it runs autonomously in the background, independently processing any requests for test execution that are inserted into the database. Any number of different front-ends could have been used, but Crystal Test was implemented as a web application to allow simultaneous configuration and execution of tests by authorized users anywhere in the world.
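The three-part split can be pictured as a shared database schema that both the front-end and the engine read and write. The sketch below uses Python with SQLite for brevity (Crystal Test itself is C#); every table and column name here is an illustrative assumption, not the actual Crystal Test schema:

```python
import sqlite3

# Illustrative shared schema (table and column names are assumptions,
# not Crystal Test's actual design). The front-end GUI and the automation
# engine never call each other; both simply read and write these tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_case (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    automated INTEGER NOT NULL DEFAULT 0   -- flag: manual vs. automated
);
CREATE TABLE execution_request (           -- the engine polls this queue
    id           INTEGER PRIMARY KEY,
    test_case_id INTEGER REFERENCES test_case(id),
    browser      TEXT,
    environment  TEXT,
    status       TEXT DEFAULT 'in queue'   -- 'in queue' / 'in progress' / 'done'
);
CREATE TABLE test_result (                 -- granular historical results
    id           INTEGER PRIMARY KEY,
    test_case_id INTEGER REFERENCES test_case(id),
    browser      TEXT,
    environment  TEXT,
    outcome      TEXT,                     -- 'pass' / 'fail'
    screenshot   TEXT,                     -- path of the saved screenshot file
    detail       TEXT                      -- error message or custom output
);
""")

# The front-end defines a test case; the engine will later act on requests.
conn.execute("INSERT INTO test_case (id, name, automated) "
             "VALUES (100, 'Reject refund', 1)")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # -> ['execution_request', 'test_case', 'test_result']
```

Because the database is the only shared contract, either side can be replaced independently, as the sections below describe.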

Crystal Test stores test results in very granular detail: each combination of project, test case, browser, environment, etc. is stored as a separate result record, and all historical results are retained for future analysis. Not only are simple pass/fail results stored, but also screenshots, detailed error messages, and custom output text (e.g. which unique email address was used for a registration test). To make all this historical data easy to query, views are created for such things as the latest test result record in each browser and environment. The database also tracks tests in progress and in queue, allowing for such functionality as displaying debugging messages for a test in progress.
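A "latest result per browser and environment" view could be built with an ordinary correlated subquery. The sketch below (Python/SQLite for brevity; the table shape and the use of the highest id as "newest" are assumptions, not Crystal Test's actual view) shows the idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_result (
    id INTEGER PRIMARY KEY,
    test_case_id INTEGER, browser TEXT, environment TEXT,
    outcome TEXT, run_at TEXT
);
-- One possible shape for a "latest result" view: keep only the newest
-- record (highest id) per test case / browser / environment combination.
CREATE VIEW latest_result AS
SELECT * FROM test_result r
WHERE r.id = (SELECT MAX(id) FROM test_result
              WHERE test_case_id = r.test_case_id
                AND browser      = r.browser
                AND environment  = r.environment);
""")
rows = [
    (1, 100, 'firefox', 'staging', 'fail', '2014-03-01'),
    (2, 100, 'firefox', 'staging', 'pass', '2014-03-10'),  # newest firefox run
    (3, 100, 'chrome',  'staging', 'pass', '2014-03-05'),
]
conn.executemany("INSERT INTO test_result VALUES (?,?,?,?,?,?)", rows)

# Only the newest record per browser survives in the view.
latest = {browser: outcome for (_, _, browser, _, outcome, _) in
          conn.execute("SELECT * FROM latest_result")}
print(latest)  # -> {'firefox': 'pass', 'chrome': 'pass'}
```

The older Firefox failure (row 1) remains in test_result for historical analysis; the view just hides it from "current status" displays.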
Within the Crystal Test database, a metadata table is created for each unique type of test; columns of these metadata tables represent configurable choices in the test, such as which option to choose for a radio button or drop-down, what text to enter in a field, how many times to iterate over a repeatable step, whether to refund a purchase or not, etc. A metadata table for a test like “request refund” might even reference rows in other metadata tables like “create user”, “purchase”, etc. (each of which might also serve as a standalone test).

Test code has to be developed only once per table; any number of rows can then be created to satisfy various test conditions. These metadata rows are linked to specific test cases so their results can be properly displayed; for example, test case #100 “Reject refund” might use metadata row 2 on the refund table and row 7 of the purchase table. Child test cases are also supported: the aforementioned test case #100 need not show up as just one record in the results. The system can also trigger child test cases, such as create user, purchase, and countless other smaller test cases, to be marked off as tested upon completion of that one automation test.
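The metadata linkage and the child-case fan-out described above can be sketched as follows. This is a minimal illustration in Python using in-memory dictionaries; the case numbers, table names, and the rule that children simply inherit the parent's outcome are all assumptions for the sake of the example:

```python
# Test case #100 "Reject refund" is driven by metadata row 2 of a "refund"
# table and row 7 of a "purchase" table (per the example in the text).
test_case_metadata = {
    100: {"refund": 2, "purchase": 7},
}

# Completing #100 also marks its child cases; the child ids are hypothetical
# (e.g. 101 = "create user", 102 = "purchase").
child_cases = {
    100: [101, 102],
}

results = {}

def record_completion(case_id, outcome):
    """Store the parent's result and propagate it to any child cases."""
    results[case_id] = outcome
    for child in child_cases.get(case_id, []):
        results[child] = outcome

record_completion(100, "pass")
print(results)  # -> {100: 'pass', 101: 'pass', 102: 'pass'}
```

One automated run thus ticks off several test cases in the repository at once, which is what makes the child-case mechanism valuable for coverage reporting.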

The automation engine is the real executor of automated test cases. It starts by scanning the database for test cases with a status of “in queue”; if it has available servers in its Selenium test grid, it flags those test cases as “in progress” as it begins its work. Based on the criteria defined in the database record, one or more threads might be kicked off to support the browsers being tested simultaneously. Depending on how the test fares, the final result might be pass or fail, but that is just the tip of the iceberg. The system also takes screenshots, saves them to the server with unique names, and stores the location of each saved file in the results. The engine can also store additional relevant data in the results; for example, if the test created a new user and performed a purchase, it might store the user credentials and other identifiers for the purchase. This way, if there is ever a problem or a concern about the results, you can always look up that user, their purchase, etc., in the target system to verify they were created properly.
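The scan-flag-execute flow above can be sketched as a small polling loop. The sketch below is Python rather than the engine's actual C#, uses an in-memory list in place of the database, and stubs out the Selenium session entirely; the field names and the one-thread-per-browser split are illustrative assumptions:

```python
import threading

# In-memory stand-in for the database queue (the real engine polls SQL
# tables and drives browsers on a Selenium grid).
requests = [
    {"id": 1, "test": "Reject refund", "browsers": ["firefox", "chrome"],
     "status": "in queue"},
]
results = []
lock = threading.Lock()

def run_in_browser(req, browser):
    """Stand-in for a Selenium session: record outcome, screenshot path,
    and extra identifiers (e.g. credentials of a user the test created)."""
    with lock:
        results.append({
            "request": req["id"],
            "browser": browser,
            "outcome": "pass",
            "screenshot": f"screenshots/req{req['id']}_{browser}.png",
            "extra": "user=test+req1@example.com",  # hypothetical identifier
        })

def process_queue():
    for req in requests:
        if req["status"] != "in queue":
            continue
        req["status"] = "in progress"            # flag before starting work
        threads = [threading.Thread(target=run_in_browser, args=(req, b))
                   for b in req["browsers"]]     # one thread per browser
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        req["status"] = "done"

process_queue()
print(len(results), requests[0]["status"])  # -> 2 done
```

Flagging the row as “in progress” before the threads start is what lets other pollers (or a second engine instance) skip work that is already claimed.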

The front-end of Crystal Test could take many forms. For example, a continuous integration system might directly insert “in queue” records into the database, skipping the UI altogether. Multiple different front-ends could even be implemented that all feed into the same back-end through the database. Likewise, multiple back-end applications can tie to the same database: one for handling Selenium tests, another for handling SoapUI tests, and so on. The front-end(s) are not programmatically tied to the automation engine back-end(s); each is developed separately. If desired, the automation engine could be developed in Java while the front-end is in ASP.NET; they do not need to interact with each other directly, only through their respective interactions with the shared database.
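Because the database row is the entire contract, a CI job that bypasses the UI only needs to insert one record. A minimal sketch (Python/SQLite; the table and column names are assumptions carried over from the earlier examples, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE execution_request (
    id INTEGER PRIMARY KEY, test_case_id INTEGER,
    browser TEXT, environment TEXT, status TEXT)""")

def enqueue(db, test_case_id, browser, environment):
    """What a CI job would do instead of using the UI: insert an
    'in queue' row that any engine polling this database will pick up."""
    db.execute(
        "INSERT INTO execution_request "
        "(test_case_id, browser, environment, status) "
        "VALUES (?, ?, ?, 'in queue')",
        (test_case_id, browser, environment))

enqueue(conn, 100, "firefox", "staging")
row = conn.execute(
    "SELECT test_case_id, status FROM execution_request").fetchone()
print(row)  # -> (100, 'in queue')
```

Any client that can reach the database, in any language, can act as a front-end this way, which is exactly the decoupling the paragraph above describes.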

The Crystal Test front-end was implemented as a website. The website includes pages with forms for inserting, updating, or deleting test cases and test results. It supports both automated and manual test cases, with both saved to the same tables, just with different flags to identify them. Tests can also be exported to or imported from Excel, allowing test cases or test results to be populated in bulk. Other pages display summaries or reports by date, user, project, or whatever criteria are desired. Finally, the test case page is where automation tests are kicked off. As with other pages, numerous filters and sorting options are available; in particular, filters by test case Project, Group, Release, Sprint, Environment, and Keyword have been implemented so far. This page also displays the most recent status per browser and when each test was last run, to help users decide which tests to execute. Additional administrative options are available, for such things as aborting tests, restarting the Selenium grid, and viewing log files on Selenium grid servers.

Last edited Mar 15, 2014 at 1:18 PM by jacquelinewalton, version 2