While working at FusionCharts, every release confronted us with the daunting task of black box testing, daunting mainly because of the breadth and depth of the product. (This post talks only about black box testing.) The FusionCharts product as a whole contains roughly 90 charts, and each chart can be visually tweaked with a set of about 300 chart options/attributes. Ignoring any further permutations and combinations, that alone gives around 90 * 300 = 27,000 test cases. Apart from chart options, we also needed API testing, which covered events and methods. Clearly automation was required: manual testing could cover only a very small sample set, and even that relied on smart assumptions.
With this problem at hand, I broke it down as follows:
- Headless Browser
  - Visual Regression
  - API Testing
- User Browser
  - Visual Regression
  - API Testing
We liked to call it the FusionCharts Automated Testing Suite.
Headless testing would be based on a headless browser and integrated into our nightly builds. It was meant to be capable of running on individual developer machines as and when needed, serving as a quick smoke test that gave an idea of whether something major had broken.
User Browser based testing would require a more comprehensive server deployment. This part would spawn all the major user-facing browsers (Chrome, Firefox, IE), run the tests on them, and give a more in-depth report on the state of the entire product. It was meant to be run before tagging a customer-facing release.
Visual Regression testing would target the visual changes the product undergoes. It would be based on a version assumed to be a stable product release; any variation from this baseline would be tagged as a failure and then manually triaged as either an intended change or a bug.
API Testing would listen to the different events, or trigger the various methods, exposed by the product, compare the output to the expected value, and mark the test case as pass or fail accordingly.
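Every test case was described by an input spec in a custom JSON format (listed in the stack below). The format itself was internal, so the visual regression spec below is only a hypothetical illustration; every field name here is an assumption:

```json
{
  "id": "column2d-basic-render",
  "chartType": "column2d",
  "chartOptions": {
    "caption": "Monthly Revenue",
    "showValues": "1"
  },
  "baselineVersion": "3.4.0",
  "testVersion": "3.5.0",
  "viewport": { "width": 1024, "height": 768 },
  "misMatchTolerance": 0.1
}
```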
The Process and Stack
- Node.js (https://nodejs.org/) and npm for building a cross-platform CLI tool and for package management.
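The tool itself was internal, but as a sketch of the cross-platform CLI idea, its entry point in plain Node.js might look like the following; the `fats` name and every flag are invented for illustration:

```js
#!/usr/bin/env node
// Hypothetical entry point for the test-suite CLI.
// Assumed usage: fats --mode visual --specs ./specs --browser phantom

var args = process.argv.slice(2);
var options = { mode: 'visual', specs: './specs', browser: 'phantom' };

// Minimal flag parsing: each --key value pair overrides a default above.
for (var i = 0; i < args.length; i += 2) {
  var key = String(args[i]).replace(/^--/, '');
  if (key in options && args[i + 1] !== undefined) {
    options[key] = args[i + 1];
  }
}

console.log('Running %s tests from %s on %s', options.mode, options.specs, options.browser);
// From here the runner would load the specs, start the local server,
// and drive PhantomJS or the Selenium bindings depending on options.browser.
```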
Visual Regression
- Load the input spec for the test case.
- Generate the web page with the baseline version of the product.
- Host the web page on a local server.
- Load the page in the browser.
- Take a screenshot and save the image.
- Swap the baseline version of the product with the test version on the web page.
- Load the page in the browser again.
- Take another screenshot and save the image.
- Compare the two images.
- Mark the test case as pass or fail depending on the image comparison.
- Generate a result report as JSON and HTML.
- The HTML report would contain the baseline image, the test image, and a diff image of the two.
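The load-and-screenshot steps above map to a short PhantomJS script. Here is a minimal sketch; the URL, file names, and timeout are assumptions for illustration:

```js
// screenshot.js, run with: phantomjs screenshot.js
// Loads a locally hosted test page and saves a screenshot of it.
var page = require('webpage').create();

// Fix the viewport so baseline and test screenshots are comparable.
page.viewportSize = { width: 1024, height: 768 };

page.open('http://localhost:3000/column2d-basic-render.html', function (status) {
  if (status !== 'success') {
    console.error('Failed to load page');
    phantom.exit(1);
    return;
  }
  // Give the chart a moment to finish rendering before capturing.
  window.setTimeout(function () {
    page.render('output/column2d-basic-render.png');
    phantom.exit(0);
  }, 500);
});
```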
The stack for this part:
- Input spec / test case - custom JSON format
- Server - Express (http://expressjs.com/)
- Headless browser - PhantomJS (http://phantomjs.org/)
- Launching and controlling user browsers - Selenium WebDriver bindings (http://webdriver.io/)
- Image comparison - Resemble.js (http://huddle.github.io/Resemble.js/)
- Result reporting - EJS (http://www.embeddedjs.com/)
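The comparison step itself is a fluent Resemble.js call. The sketch below assumes the file names from the earlier screenshot sketch and a 0.1% mismatch tolerance; note that running Resemble.js under Node typically also needs a canvas implementation installed:

```js
// compare.js: decide pass/fail from the two saved screenshots.
var fs = require('fs');
var resemble = require('resemblejs');

var baseline = fs.readFileSync('baseline/column2d-basic-render.png');
var test = fs.readFileSync('output/column2d-basic-render.png');

resemble(baseline)
  .compareTo(test)
  .onComplete(function (data) {
    // misMatchPercentage comes back as a string such as "0.05".
    var mismatch = parseFloat(data.misMatchPercentage);
    var passed = mismatch <= 0.1;
    console.log(passed ? 'PASS' : 'FAIL', '(mismatch: ' + mismatch + '%)');
    // data.getImageDataUrl() yields the diff image for the HTML report.
  });
```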
API Testing
- Load the input spec for the test case.
- Generate the web page with the test version of the product.
- Host the web page on a local server.
- Load the page in the browser.
- Trigger events or call the intended API method.
- Listen to the events or resulting change from the api method.
- Compare the received result with the expected result.
- Mark the test case as pass or fail depending on the comparison.
- Generate a result report as JSON.
- Create an HTML report, appended to the larger testing report which includes visual regression.
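The listen-and-compare steps depend on getting results out of the page under test and back to the runner, which is what socket.io is used for. A rough sketch of both sides follows; the channel name, payload shape, and the chart event hookup are assumptions for illustration:

```js
// Runner side (Node.js): collect results emitted by the page under test.
var server = require('http').createServer();
var io = require('socket.io')(server);

io.on('connection', function (socket) {
  socket.on('api-result', function (result) {
    // result is assumed to be { testId, actual, expected }.
    var passed = JSON.stringify(result.actual) === JSON.stringify(result.expected);
    console.log(result.testId, passed ? 'PASS' : 'FAIL');
  });
});
server.listen(4000);
```

```js
// Page side (browser): report that the event under test actually fired.
var socket = io('http://localhost:4000');
chart.addEventListener('dataPlotClick', function () {
  socket.emit('api-result', {
    testId: 'dataplotclick-fires',
    actual: true, // the event fired
    expected: true
  });
});
```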
The stack for this part:
- Input spec / test case - custom JSON format
- Server - Express (http://expressjs.com/)
- Headless browser - PhantomJS (http://phantomjs.org/)
- Launching and controlling user browsers - Selenium WebDriver bindings (http://webdriver.io/)
- Listening to events and API methods on the browser - socket.io (http://socket.io/)
- Result reporting - EJS (http://www.embeddedjs.com/)
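Both pipelines also share the hosting step. With Express, serving the generated pages locally is only a few lines; the directory name and port are assumptions:

```js
// server.js: serve the generated test pages for the browsers to load.
var express = require('express');
var app = express();

// 'pages' is assumed to hold the HTML generated from each input spec.
app.use(express.static('pages'));

app.listen(3000, function () {
  console.log('Test pages available at http://localhost:3000/');
});
```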