Crystal Test is an open-source Test Management System with a built-in test automation framework for Selenium that can be extended for use with other frameworks.
Crystal Test Features
- 100% Free Test Management System and Automation Test Architecture
- We took the leg-work out of automation and did it for you. All you need to do is set it up and start writing scripts.
- Web-based system hosted on your servers for security and privacy
- Run it locally on your machine or publish the entire test grid. (It's already set up for you! You are up and running after a few configuration steps.)
- Open Source so you can add to it or change it to meet your company's needs.
- Access maintained by Active Directory
- Track manual and automated test results
- Upload Excel sheets with test cases and/or results (No need to change the way your company is used to doing things)
- Ability to manually enter test cases and test results via the UI
- Export test cases to Excel
- Assign test cases to yourself or others
- Assign test cases to Sprints, Releases, Functional Groups
- Personal Groups visible only to you
- Personal Task Lists
- Dashboard with Test Result metrics, Test Case metrics, and Automation metrics
- Dashboard lets you see, at first glance, test cases in progress, in queue, or needing updates
- Allows for unlimited number of projects
- Each user can choose a default project, and Crystal Test always opens to that project's data for that user.
- Link with popular defect management applications such as Jira and TFS
- Cross-browser testing
- Mobile (Coming Soon)
- Control your automation test grid from the Crystal Test GUI
- Stop/start your servers/VMs
- View the Test Automation Engine, Selenium HUB, and Selenium Node consoles right from the Crystal Test GUI
- Start/Stop an entire suite of automated tests with a single click
- Innovative data-driven architecture for easy maintenance of automated scripts
- Plenty of documentation to walk you through every step of setup and script writing
Dashboard and Metrics
Crystal Test keeps track of all tests run, both manual and automated, and automatically tracks the number of passed, failed, and other results, as well as automation metrics. All of these metrics can be narrowed down by project, release, sprint, grouping, or date range, and can be viewed for all environments.
The dashboard also gives a quick view of the automated tests in progress or in queue and shows a list of automated scripts and/or test cases that need to be updated.
This screen also shows the number of test cases written/developed as well as execution metrics and defects. These metrics can be viewed by project, user, or browser.
Test Case List and Quick View
This screen lists all test cases by project. The list can be drilled down to view only the test cases associated with a release, sprint, or test group. These groups can be public (such as a regression suite, or test cases associated with the homepage), or each person can create personal groupings of test cases; personal groups are visible only to their creator via the group drop-down. A test case is written once and can be tested either manually or through automation, and the latest test result displays in this grid. The grid displays results for the environment selected in the environment drop-down. Keeping test cases in a system such as this reduces the chance of them getting lost when they would otherwise be kept on personal computers. Also, whenever a test case is updated, a copy of the test case's history is kept, so no information is lost.
When the View drop-down is set to All or Manual, all test cases are visible and the test button opens a dialog box allowing users to enter manual results. When the View drop-down is set to Automated, only the automated test cases and their children are visible and the test button queues the automated test. The test button does not display for children, since the parent test case will mark them as passed when it passes. The number of automated test cases that can be run simultaneously can be configured depending on the number of nodes available.
Multiple test cases can be marked as passed by checking the desired test cases and clicking the first icon at the top. Additionally, the entire test suite, or whichever test cases are selected, can be downloaded to an Excel spreadsheet, updated, and re-uploaded. Bulk test results can also be uploaded via a spreadsheet (second icon). The third through fifth icons allow you to assign test cases to a release, sprint, or group. The sixth icon allows you to assign test cases to a user, and those test cases will show up in the user's personal task list (see below). The seventh icon allows a user to delete checked test cases. Deleted test cases are never truly deleted: they are stored in a separate table and can be restored at any time. When a test case is restored, all associated releases, sprints, defects, history, and screenshots are restored with it, again ensuring no loss of data.
The quick-view section of this page gives an at-a-glance status of the current test results, including the percentage of total tests passed, the percentage of executed tests that passed, and the percentage of total tests executed.
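The three quick-view percentages can be sketched as below. This is an illustrative Python sketch, not the actual (C#) Crystal Test implementation, and the status values are assumed:

```python
# Hypothetical sketch of the quick-view percentages; statuses are assumed,
# not taken from the actual Crystal Test schema.

def quick_view_metrics(results):
    """results: list of dicts like {"status": "Pass" | "Fail" | "Untested"}."""
    total = len(results)
    tested = [r for r in results if r["status"] != "Untested"]
    passed = [r for r in results if r["status"] == "Pass"]

    def pct(part, whole):
        return round(100.0 * part / whole, 1) if whole else 0.0

    return {
        "pct_total_passed": pct(len(passed), total),        # % of total tests passed
        "pct_tested_passed": pct(len(passed), len(tested)), # % of executed tests passed
        "pct_total_tested": pct(len(tested), total),        # % of total tests executed
    }

sample = [{"status": "Pass"}, {"status": "Pass"}, {"status": "Fail"}, {"status": "Untested"}]
print(quick_view_metrics(sample))
# → {'pct_total_passed': 50.0, 'pct_tested_passed': 66.7, 'pct_total_tested': 75.0}
```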
Test Case Details
The Test Case Details page shows the current steps, expected results, notes, and screenshots for a test case. It also shows any releases, sprints, and groups associated with the test case and allows a user to add the test case to a sprint/release/group, etc.
(not shown below). This page also shows a history of all test results as well as any defects ever associated with the test case, with links to those defects (TFS or Jira). The History tab lists every change ever made to the test case, so you can see all previous versions here. The Automation tab is where you install the automated test case and assign child test cases. Often, when an automated test case is run, several other smaller test cases are also covered. For example, a test case that ensures a user can log in also satisfies the test cases verifying that a user can fill in their username and password. When you assign children to an automated test case, the children are automatically marked as passed whenever the parent passes. This reduces redundant testing.
Personal Metrics and Task List
When users click their name in the header, they are taken to their personal dashboard, which includes a quick view of their personal metrics for the current month as well as their personal task list containing all test cases assigned to them. Test cases that have passed within the last 7 days are highlighted green.
The Selenium Dashboard allows a user to administer the Selenium test grid with just a click of a button. A user can stop, start, or view the Automation Test Engine Console which gives information about test cases put in queue or started. The Selenium HUB Console
gives information as to which nodes test cases were assigned to. The Node Consoles allow you to watch the log files as test cases are executed. This can be helpful in debugging automated test cases. This page also allows you to reset the entire Selenium grid
with a click of a button: this resets all test cases, restarts the nodes and hub, then re-queues and kicks off all automated test cases that were running previously. This page also allows you to reset the entire Crystal Test website if needed. Many of these tasks would take a user quite a bit of time to do manually, or might depend on someone else when the servers are hosted elsewhere; this page eliminates that overhead.
The statuses below show UNKNOWN because I am currently running this on my local machine and it is not on a live server.
Installing an Automated Script
Automated scripts are written in C# and compiled into projects within the Crystal Test code and then installed on the website by entering the “Namespace.ClassName, AssemblyName” of the script as well as the database table and row for the test script data.
Even before automated scripts are written for test cases, the ‘Automated’ field below can be populated. This determines the automation status of a test case. This field is used to determine the automation metrics on the Crystal Test Dashboard page.
1. Yes: The test case is currently automated.
2. No: The test case is not automated and will not be automated in the future.
3. Future: The test case is not automated, but it may be in the future.
4. Automation Reason: This field is mainly to describe why a test case cannot be automated but could be used for other automation notes as well.
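The "Namespace.ClassName, AssemblyName" string above is resolved by .NET reflection in the actual C# engine. As a rough analogue, the same idea in Python looks like the following sketch, where the dotted-identifier format is a hypothetical stand-in:

```python
# The real engine is C# and resolves "Namespace.ClassName, AssemblyName" via
# .NET reflection; this Python sketch illustrates the same idea with importlib.
import importlib

def load_test_class(identifier):
    """identifier: 'package.module.ClassName' (hypothetical format)."""
    module_path, class_name = identifier.rsplit(".", 1)
    module = importlib.import_module(module_path)  # load the containing module
    return getattr(module, class_name)             # look the class up by name

# Example: resolve a standard-library class by its dotted name.
cls = load_test_class("collections.OrderedDict")
print(cls.__name__)  # → OrderedDict
```

Storing the identifier as data, rather than hard-coding it, is what lets new scripts be installed from the website without redeploying the engine.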
Tell Me More!
Crystal Test is a test management system created to standardize the way a Quality Assurance department writes, stores, and executes test cases and records their results. With its included data-driven automation test architecture (D-DATA), this system serves to dramatically reduce the time and cost of testing and to enhance manual efforts by increasing test coverage and replacing labor-intensive manual tasks.
Crystal Test consists of 3 main components:
- Website: The interactive web-based application that allows users to import, write, execute and view test cases and test results, view test metrics, and more.
- Automation Infrastructure: The behind-the-scenes data-driven automated test architecture (D-DATA) that executes automated test scripts and stores their results.
- Database: Contains both the data needed for the Crystal Test website as well as the data for D-DATA automated test scripts
All three are integrated but are built in such a way so they can be separated to be used with other systems.
The Test Management System
The Crystal Test frontend was initially implemented as a website. The website includes pages with forms for inserting, updating, or deleting test cases and test results. It supports both automated and manual test cases, with each being saved to the same tables,
just with different flags to identify them. Tests can also be exported in bulk to Excel or likewise imported, to allow for populating test cases or test results in bulk. Other pages display summaries or reports by date, user, project, or whatever criteria
are desired. Finally, the test case page is where automation tests are kicked off. As with other pages, numerous filters and sorting options are available; in particular, filters by Project, Automated/Manual, Functional Group, Release, Sprint, Environment, and Keyword have been implemented so far in the TMS. This page also displays the most recent status per browser and when each test was last run, to help provide users the information they may need when choosing which tests to execute. Additional administrative options are available for such things as aborting tests, restarting the Selenium grid, viewing log files on Selenium grid servers, etc.
D-DATA (Data-Driven Automation Test Architecture)
Within the Crystal Test database, a metadata table is created for each unique type of test; columns of these metadata tables represent configurable choices in the test, such as which option to choose for a radio button or dropdown, what text to enter in a field,
how many times to iterate over a repeatable step, etc. A metadata table for a test like “request refund” might even reference rows in other metadata tables like “create user”, “purchase”, etc (each of which might also serve as a standalone test). Testing code
has to be developed only once per table, then any number of rows can be created to satisfy various test conditions. These metadata rows are then linked to specific test cases so their results can be properly displayed.
While many systems claim to be data-driven, most of them merely allow for parameterization and require you to run all sets of parameters each time you run a test, failing everything if any of them fail. Crystal Test uses the database tables not only for parameterization but for decision making as well, allowing one script to process a multitude of similar tests. Each database row is linked to a separate test case, so you can run a handful of tests as a smoke test or all of them during a regression. You are in complete control of what you want to run and when.
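To make the metadata-table idea concrete, here is a minimal sketch using sqlite3. The table name, columns, and login logic are illustrative assumptions, not the actual Crystal Test schema; the point is that one script reads each row and lets its columns drive decisions, not just input values:

```python
# Minimal sketch of D-DATA: one "login_test" metadata table drives a single
# script. Table and column names are hypothetical, not Crystal Test's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE login_test (
    id INTEGER PRIMARY KEY, username TEXT, password TEXT,
    expect_success INTEGER, check_welcome_banner INTEGER)""")
conn.executemany(
    "INSERT INTO login_test VALUES (?, ?, ?, ?, ?)",
    [(1, "valid_user", "right_pw", 1, 1),   # happy path, also verify banner
     (2, "valid_user", "wrong_pw", 0, 0)])  # negative test, skip banner check

def run_login_test(row_id):
    """One script handles every row; columns make decisions, not just supply data."""
    row = conn.execute("SELECT * FROM login_test WHERE id = ?", (row_id,)).fetchone()
    _, user, pw, expect_success, check_banner = row
    logged_in = (pw == "right_pw")          # stand-in for real Selenium steps
    if logged_in != bool(expect_success):
        return "Fail"
    if check_banner and logged_in:
        pass                                # e.g. assert banner text via Selenium
    return "Pass"

print(run_login_test(1), run_login_test(2))  # → Pass Pass
```

Adding a new test condition means inserting a row, not writing a new script, and each row maps to its own test case so results stay separate.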
The Automation Engine
The Automation Engine is where the code for the tests themselves resides, and this engine runs autonomously in the background, independently processing any requests for test execution that are inserted into the database. It starts by scanning the database for
test cases with a status of “In Queue”, and if it has available servers in its Selenium test grid, it flags those test case(s) as “In Progress” as it begins its work. Based on the criteria defined in the database record, one or more threads might be kicked off
to support the browsers being tested in simultaneous tests. Depending on how the test fares, the final result might be pass or fail, but that’s just the tip of the iceberg. The system also takes screenshots, saves them to the server with unique names, and
stores the location of the saved file in the results. The engine stores additional relevant data in the results; for example if the test created a new user and performed a purchase, then it might store the user credentials and other identifiers for what they
purchased. This way if there’s ever a problem or a concern about the results, you can always look up that user, their purchase, etc., in the target system to verify they were created properly.
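The engine's polling cycle described above can be sketched as follows. This is a hedged Python sketch of the real C# engine; the table, column names, and runner are hypothetical stand-ins:

```python
# Hedged sketch of the engine's polling loop; schema and runner are assumed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_queue (id INTEGER PRIMARY KEY, status TEXT, result TEXT)")
conn.executemany("INSERT INTO test_queue (status, result) VALUES (?, ?)",
                 [("In Queue", None), ("In Queue", None)])

def poll_once(run_test, free_nodes):
    """Claim up to `free_nodes` queued tests, mark In Progress, execute, store results."""
    rows = conn.execute("SELECT id FROM test_queue WHERE status = 'In Queue' LIMIT ?",
                        (free_nodes,)).fetchall()
    for (test_id,) in rows:
        conn.execute("UPDATE test_queue SET status = 'In Progress' WHERE id = ?", (test_id,))
        result = run_test(test_id)  # would dispatch to a Selenium node, take screenshots, etc.
        conn.execute("UPDATE test_queue SET status = 'Complete', result = ? WHERE id = ?",
                     (result, test_id))

# With one free node, only the first queued test is claimed this cycle.
poll_once(lambda test_id: "Pass", free_nodes=1)
print(conn.execute("SELECT status, result FROM test_queue ORDER BY id").fetchall())
# → [('Complete', 'Pass'), ('In Queue', None)]
```

Because the engine only ever reads and writes the database, anything able to insert an “In Queue” row can drive it.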
Crystal Test stores test results with very granular detail; each combination of project, test case, browser, environment, etc. is stored as a separate result record, and all historical results are also stored for future analysis. Not only are simple results
like pass/fail stored, but also screenshots, detailed error messages, and custom output text (e.g. what unique email address was used for that registration test). To help easily parse through all this historical data, views are created for such things as the
latest test result record in each browser and environment. The database also tracks tests in progress and in queue, allowing for such functionality as the displaying of debugging messages for a test in progress.
Child Test Cases (Reduce redundant code)
Child test cases are also supported. Running a test script will often cover more than one test case; Crystal Test allows you to assign child test cases that are marked as passed when their parent passes.
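The cascading behavior can be sketched in a few lines. The data structures here are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch of parent/child result cascading; structures are assumed.
def record_result(test_id, status, children, results):
    """Store a result; if a parent passes, mark its children as passed too."""
    results[test_id] = status
    if status == "Pass":
        for child in children.get(test_id, []):
            results[child] = "Pass"  # child inherits the parent's pass
    return results

# A login test implicitly covers its username/password entry test cases.
children = {"login": ["enter_username", "enter_password"]}
print(record_result("login", "Pass", children, {}))
# → {'login': 'Pass', 'enter_username': 'Pass', 'enter_password': 'Pass'}
```

Note that a failing parent cascades nothing: the children simply remain untested rather than being marked failed.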
The frontend of Crystal Test could take many forms. For example, a continuous integration system might directly insert “In Queue” records into the database, skipping over the UI altogether. Multiple different frontends could even be implemented that all feed into the same backend through the database. Likewise, multiple backend applications could tie to the same database: one for handling Selenium tests, another for handling SoapUI tests, REST APIs, etc. The frontend(s) are not programmatically tied to the automation engine backend(s); each is developed separately, allowing for a more flexible and scalable system. If desired, the automation engine could be developed in Java while the frontend is in ASP.NET; they don't need to interact with each other directly, only through their respective interactions with the shared database. (Perhaps a developer may want to contribute a Java version of the Automation Engine.)
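A CI job acting as an alternate frontend reduces to a single insert. This sketch assumes a hypothetical queue-table schema, not Crystal Test's actual one:

```python
# Sketch of a CI job acting as an alternate frontend by inserting an
# "In Queue" record directly; the schema here is assumed, not Crystal Test's.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_queue (
    id INTEGER PRIMARY KEY, test_case_id INTEGER, browser TEXT,
    environment TEXT, status TEXT)""")

def enqueue_from_ci(test_case_id, browser, environment):
    """What a CI post-build step might do instead of clicking the UI's test button."""
    conn.execute("INSERT INTO test_queue (test_case_id, browser, environment, status) "
                 "VALUES (?, ?, ?, 'In Queue')", (test_case_id, browser, environment))
    conn.commit()

enqueue_from_ci(42, "chrome", "staging")
print(conn.execute("SELECT test_case_id, status FROM test_queue").fetchall())
# → [(42, 'In Queue')]
```

The engine would pick this record up on its next polling pass, exactly as if a user had queued it from the website.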
Can't decide if Crystal Test is right for you?
Read these articles comparing Crystal Test to other popular automation tools:
Crystal Test vs Microsoft Test Management
Crystal Test vs JUnit
Crystal Test vs QTP
Want to be a Crystal Test Developer or Contributor?
Email Jacqueline Walton for more information.
QAI Conference 2015
Monday, April 20: 8:30 AM – 4:30 PM | Format: One-Day Class | Focus: Automation
Automation Architecture - A Blueprint to Success, Instructor: Jacqueline Walton
Thursday, April 23: 1:00 PM - 2:30 PM | Format: Workshop | Focus: Automation
Quest Conference 2015 - 3 Steps to a Maintainable Automated Test Suite, Instructor: Jacqueline Walton
The Legal Stuff
Crystal Test is copyright 2013-2016 Pixeltrix, LLC. If you want to use, copy, modify or distribute Crystal Test, you may do so under the terms of the GNU General Public License as long as you include this copyright notice with any and all distributions. You may not remove the Powered By Crystal Test notification in the footer of Crystal Test.