Web page screenshots. What can go wrong?

The idea of visual regression testing of websites looks very easy on the surface: you create webpage screenshots and compare them. In practice, however, it is much harder than the concept suggests. Let's see what can go wrong when creating screenshots.


PhantomJS

Usually, when the question arises of what tool to use to create screenshots, PhantomJS is one of the candidates. It is based on WebKit, the rendering engine behind Safari, it is headless, and it doesn't take much effort to set up. Unfortunately, the problem is the quality of the screenshots it takes. If a web page relies heavily on JavaScript, there is a chance the screenshot won't be as accurate as you would expect. Moreover, if the website has JavaScript errors, the screenshot won't be created at all. See the example below: a screenshot of the Smashing Magazine website made with PhantomJS (pay attention to the sidebar blocks).

(image is clickable, 4 MB)

Link to the script to create the screenshot.
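The linked script is not reproduced here, but a minimal PhantomJS capture looks roughly like the sketch below. The URL, output filename, viewport size, and one-second render delay are all placeholders, not values from the original script; the wrapper only invokes PhantomJS if the binary is actually on your PATH.

```python
import shutil
import subprocess
import tempfile

# Minimal PhantomJS capture script (placeholder URL, path, and delay).
CAPTURE_JS = """
var page = require('webpage').create();
page.viewportSize = { width: 1280, height: 800 };
page.open('https://example.com/', function (status) {
    if (status !== 'success') {
        phantom.exit(1);  // load failures abort the capture entirely
    }
    // Give page scripts a moment to settle before rendering.
    window.setTimeout(function () {
        page.render('screenshot.png');
        phantom.exit(0);
    }, 1000);
});
"""

def capture_with_phantomjs(script=CAPTURE_JS):
    """Write the script to a temp file and run it, if phantomjs is installed."""
    binary = shutil.which("phantomjs")
    if binary is None:
        return None  # PhantomJS not available; nothing to do
    with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
        f.write(script)
        path = f.name
    return subprocess.run([binary, path]).returncode
```

Note that `phantom.exit(1)` on a failed load is exactly the behavior described above: a page with fatal JavaScript errors produces no screenshot at all.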


Selenium

Another commonly used tool is Selenium driving real browsers. You can host Selenium yourself or use one of the hosted services such as BrowserStack or Sauce Labs, which provide a variety of browser/OS combinations. Still, Selenium comes with some limitations. If possible, building your own grid of servers to host Selenium with browsers (it is pretty resource-intensive) is the best bet with the Selenium approach.
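With a Selenium grid, the capture itself is short. The sketch below assumes a Selenium 4 Python client and a grid endpoint at the default `http://localhost:4444/wd/hub`; swap in your own grid URL or a hosted service endpoint.

```python
def capture(url, out_path, grid_url="http://localhost:4444/wd/hub"):
    """Take a screenshot via a remote Selenium grid session.

    The grid URL is a placeholder for your own grid or a hosted
    service (BrowserStack / Sauce Labs) endpoint.
    """
    # Imported inside the function so the sketch reads, and the rest
    # of the module runs, without the selenium package installed.
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    driver = webdriver.Remote(command_executor=grid_url, options=options)
    try:
        driver.get(url)
        driver.save_screenshot(out_path)  # captures the current viewport
    finally:
        driver.quit()
```

Note that `save_screenshot` captures only what the driver renders; full-page capture behavior varies by browser, which is part of why grid resources matter for long pages.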

User Agent

Setting the appropriate User Agent in the browser is a vital part of creating accurate screenshots. First, some websites do device detection based on the User Agent header, so if you host Selenium yourself you can check the website at different breakpoints (i.e. with different User Agents). Second, if you monitor live websites, you will need to separate the traffic generated by Selenium from real visitors. Filtering on the User Agent is a very common practice that identifies your tool as a bot so its visits are not counted.

Memory limit

One of the biggest problems from our perspective is that some web pages can be very long, and we are talking about pages that are 15,000 pixels tall on desktop and more. Creating a screenshot of such a page takes a lot of memory, which SaaS Selenium platforms are not always willing to provide. For our project, we have set up several powerful servers with 16 GB of RAM that are capable of handling really massive pages.
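A back-of-the-envelope calculation shows why such pages are memory-hungry. A single uncompressed RGBA frame of a full-height page is already large, and the browser typically holds several copies at once (layout buffers, compositor surfaces, the image encoder), so peak usage is a multiple of this figure.

```python
def raw_bitmap_bytes(width, height, bytes_per_pixel=4):
    """Size of one uncompressed RGBA frame of the full page."""
    return width * height * bytes_per_pixel

# A 1920 x 15,000 px desktop page as a single RGBA bitmap:
full_page = raw_bitmap_bytes(1920, 15000)  # 115,200,000 bytes, ~110 MB
```

At roughly 110 MB per raw frame, multiplied by the browser's internal copies and by however many sessions run in parallel, a 16 GB server stops looking oversized.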

Scroll page or delay to load assets

Modern websites use lazy loading of images: an image is loaded only when it is about to be shown to the user. For example, images at the bottom of a page are loaded only when the user scrolls close enough to view them. Another technique is to load images after a timed delay. Therefore, scrolling through the page and delaying the capture (e.g. by 10 seconds) helps to capture the screenshot accurately.
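The scroll-through step reduces to visiting every viewport-sized offset down the page before capturing. The pure helper below computes those offsets; the Selenium calls that would consume them are shown as comments, since they need a live driver.

```python
def scroll_offsets(page_height, viewport_height):
    """Offsets to scroll to so every lazy-loaded image enters the viewport."""
    return list(range(0, page_height, viewport_height))

# e.g. a 15,000 px page viewed through an 800 px window:
#   scroll_offsets(15000, 800) -> [0, 800, 1600, ..., 14400]
#
# With Selenium, each offset would be visited roughly like:
#   driver.execute_script("window.scrollTo(0, arguments[0]);", offset)
#   time.sleep(0.5)  # give the lazy loaders time to fire
```

After walking every offset (plus a final fixed delay for timer-based loaders), all images should be in the DOM and the screenshot can be taken.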

Image manipulation

Another important part of creating accurate, high-quality screenshots for comparison is avoiding false positives triggered by ads or other highly dynamic elements. For this, you can use libraries like WebdriverCSS to remove or mask elements that should not appear in the screenshots.
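The masking idea itself is simple: black out the rectangle occupied by the dynamic element so it compares equal in every capture. The sketch below does this on a toy in-memory image (a list of rows of RGB tuples) as a stand-in for a real image library; tools like WebdriverCSS apply the same idea at capture time.

```python
def mask_region(pixels, left, top, width, height, fill=(0, 0, 0)):
    """Black out a rectangle (e.g. an ad slot) in an in-memory image.

    `pixels` is a row-major list of rows of (r, g, b) tuples -- a toy
    stand-in for a real image buffer. Out-of-bounds edges are clipped.
    """
    for y in range(top, min(top + height, len(pixels))):
        row = pixels[y]
        for x in range(left, min(left + width, len(row))):
            row[x] = fill
    return pixels
```

Masking the same region in both the baseline and the new screenshot guarantees the ad slot never triggers a visual diff, while the rest of the page is still compared pixel by pixel.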


As mentioned above, taking accurate, high-quality screenshots can be much harder than it looks. Fortunately, we have addressed many of these technical challenges and are happy to deliver the best results for your project. Sign up to try our BackTrac.