FusionCharts Automated Testing Tool

While working at FusionCharts, every release confronted us with the daunting task of black box testing, daunting mainly because of the breadth and depth of the product. This post talks only about black box testing. The FusionCharts product as a whole contains roughly 90 charts, and each chart can be visually tweaked with a set of about 300 chart options/attributes. Ignoring any further permutations and combinations, that is around 90 × 300 = 27,000 test cases right there. Apart from chart options, API testing covering events and methods was also needed. Clearly automation was required, as our manual testing would cover only a very small sample set, and even that was based on smart assumptions.

With this problem at hand, I broke it down as follows:
  1. Headless Browser
    - Visual Regression
    - API Testing
  2. User Browser
    - Visual Regression
    - API Testing
We liked to call it the FusionCharts Automated Testing Suite.

Headless testing would be based on a headless browser and integrated into our nightly builds. It was also aimed at being capable of running on individual developer machines as and when needed. This was targeted to be a quick smoke test, giving an idea of whether something major had been broken.

User browser based testing would require a more comprehensive server deployment. This part would spawn all the major user-facing browsers (Chrome, Firefox, IE), run the tests on them, and give a more in-depth report on the state of the entire product. It was meant to be run before tagging a customer-facing release.

Visual regression testing would target the visual changes that the product undergoes. It would be based on a version assumed to be a stable product release. Any variation from this baseline version would be tagged as a failure and then manually triaged as either an intended change or a bug.

API testing would listen for the different events or trigger the various methods exposed by the product. It would compare the output to the expected value and mark the test as pass or fail accordingly.

The Process and Stack

Node.js and npm for building a cross-platform CLI tool and package management.

Visual Regression 

  1. Load the input spec for the test case.
  2. Generate the web page with the baseline version of the product.
  3. Host the web page on a local server.
  4. Load the page on the browser.
  5. Take a screenshot and save the image.
  6. Swap the baseline version of the product with the test version on the web page.
  7. Load the page on the browser.
  8. Take another screenshot and save the image.
  9. Compare the two images.
  10. Mark the test case as pass or fail depending on the image comparison.
  11. Generate a result report as JSON and HTML.
  12. The HTML report would contain the baseline image, the test image, and the diff image of the two.
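The pass/fail decision in steps 9 and 10 comes down to a mismatch figure from the image comparison. In the real suite Resemble.js produced that figure; the sketch below is a simplified stand-in that computes a mismatch percentage directly from raw RGBA pixel arrays, with an invented tolerance value:

```javascript
// Simplified stand-in for the image-comparison step. Resemble.js
// produced the mismatch figure in the actual suite; here we compute
// one directly over raw RGBA pixel arrays for illustration.
function mismatchPercentage(baselinePixels, testPixels) {
  if (baselinePixels.length !== testPixels.length) return 100;
  let differing = 0;
  for (let i = 0; i < baselinePixels.length; i += 4) {
    // Compare R, G, B channels; ignore alpha for simplicity.
    if (
      baselinePixels[i] !== testPixels[i] ||
      baselinePixels[i + 1] !== testPixels[i + 1] ||
      baselinePixels[i + 2] !== testPixels[i + 2]
    ) {
      differing++;
    }
  }
  return (differing / (baselinePixels.length / 4)) * 100;
}

// Hypothetical tolerance, not the suite's actual figure:
// anything above 0.5% mismatch is marked as a failure.
const MISMATCH_THRESHOLD = 0.5;
function verdict(baselinePixels, testPixels) {
  return mismatchPercentage(baselinePixels, testPixels) <= MISMATCH_THRESHOLD
    ? 'pass'
    : 'fail';
}
```

A small tolerance like this is what lets anti-aliasing and sub-pixel rendering differences across machines pass while genuine layout or color changes fail.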

Input spec / test case - Custom JSON Format
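A test-case spec might look like the following. The field names here are invented for illustration; the real format was internal to the suite:

```json
{
  "id": "column2d-basic-render",
  "type": "visual-regression",
  "chartType": "column2d",
  "chartOptions": {
    "caption": "Monthly Revenue",
    "showValues": "1"
  },
  "baselineVersion": "3.4.0",
  "testVersion": "3.5.0-nightly"
}
```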

Server - Express

Headless Browser - PhantomJS

Launching and Controlling User Browsers - Selenium Webdriver Bindings

Image Comparison - Resemble.js

Result Reporting - ejs

API Testing

  1. Load the input spec for the test case.
  2. Generate the web page with the test version of the product.
  3. Host the web page on a local server.
  4. Load the page on the browser.
  5. Trigger events or call the intended api method.
  6. Listen to the events or resulting change from the api method.
  7. Compare the received result with the expected result.
  8. Mark the test case as pass or fail depending on the comparison.
  9. Generate a result report as JSON.
  10. Create an HTML report appended to the larger testing report, which also covers visual regression.
Input spec / test case - Custom JSON Format

Server - Express

Headless Browser - PhantomJS

Launching and Controlling User Browsers - Selenium Webdriver Bindings

Listening to Events and API Methods on Browser -

Result Reporting - ejs

