Published on 03/10/2015 | 10.03.12015 HE
I’ve spent the past week developing some features for our own website at work. We’re using WordPress, and compared to Magento projects I almost always enjoy working with it: it’s much easier to keep an overview, and with a good overview of the data flow and the available data it’s also easier to measure and find performance bottlenecks. In the following post I want to examine my workflow and the way I refactored our code base to increase the speed and overall performance of our site.
Measuring load times
With WebPageTest.org it is quite easy to measure load times, render times, asset loading times and uncached assets. WPT even shows which assets should be cached or served via a CDN. By default it checks First View and Repeat View and outputs a table of numbers and fancy graphics. The things I was most interested in were First View and Content Breakdown. The Content Breakdown showed that ~66% of our landing page consists of images, so optimizing these images (see below) was a natural way to boost performance. When testing with WebPageTest I like to select the second-fastest connection and a location somewhere relevant (for this site America, for our own site somewhere in Germany).
Then I watch the videos to see how the loading goes. At first there was a blank page, and after 2.5s the page “suddenly” rendered all content at once. This was not the desired behavior, so I started testing more.
(Chrome) Developer Tools
In Chrome’s Network panel there are two vertical lines, a blue and a red one. The blue line marks the time at which the DOMContentLoaded event is fired, that is, when the DOM is fully constructed. The red line marks the load event, which fires once all content (CSS, JavaScript, images, etc.) has been loaded.
Measuring performance and finding obvious bottlenecks has become fairly easy nowadays. With tools like WebPageTest.org, the browser’s Developer Tools or automated testing suites there are a ton of ways to analyze the loading behavior of a website.
Regularly refactor your code base to see where unused code exists or where code can be optimized. By using a pre-processor it’s easy to remove code from production but keep it for later by using an import system.
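With Sass, for example, excluding a feature from the compiled CSS is a one-line change (the file names here are placeholders):

```scss
// main.scss -- partials stay in the repository, but only the
// imported ones end up in the compiled stylesheet.
@import "base";
@import "layout";
// @import "legacy-slider"; // kept for later, excluded from production
```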
Next I found that our landing page is pretty image-heavy, especially with a big banner image. To reduce the size I searched for plugins to automate the process but couldn’t find a good one, so I tried out PNGQuant, a command line tool for PNG optimization. With this tool I could reduce the size of almost any image by 50-70%, which in turn reduced the load time by 100-200ms.
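A typical invocation looks something like this (the quality range and file name are just examples):

```bash
# Compress the PNG in place; --quality takes a min-max range, and
# --ext .png together with --force overwrites the original file.
pngquant --quality=65-80 --ext .png --force banner.png
```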
Always optimize images, either with a tool before uploading them or with a plugin. Not optimizing images is a waste of time (literally) and bad for users.
As I mentioned earlier, rendering was also blocked by Google Fonts being included directly in our stylesheets. Out of curiosity I checked whether loading them asynchronously via the snippet provided by Google Fonts would help, and indeed it did! We saved almost 100ms simply by loading the font files asynchronously.
On the very First View this can make the content ‘jump’ a bit, because the initial view loads with a fallback font (sans-serif, for example) and once the font is loaded from Google’s CDN it replaces the fallback on-the-fly. This may look ugly, and if you absolutely cannot live with it you must take the 100ms-slower pill and include the fonts in your CSS, where they’ll be loaded before the page is rendered - therefore no delay.
Below is the function that loads the fonts via a script tag that has the async attribute set.
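A sketch of it, based on the Web Font Loader snippet Google provides (the font family is a placeholder):

```js
// Configure the Web Font Loader, then inject its script with async
// set, so font loading never blocks rendering.
WebFontConfig = {
  google: { families: ['Open Sans:400,700'] } // placeholder family
};

(function (d) {
  var wf = d.createElement('script'),
      s  = d.scripts[0];
  wf.src = 'https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js';
  wf.async = true;
  s.parentNode.insertBefore(wf, s);
})(document);
```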
There’s quite a lot to do when optimizing from the server side, from optimizing and caching queries to using the variety of Apache or Nginx modules and settings to tune how the server works. Since our site lives at a specialized WordPress hoster, our access to server features is mainly limited to their admin interface and the .htaccess file - which is what I used.
With the snippet above we tell the server to have certain file types cached for one month (if they don’t change, of course). This way the browser can reuse the files from its cache instead of requesting them again on every visit.

Compression with mod_deflate

Next I used mod_deflate to compress the files before they are sent to the client.
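The corresponding .htaccess rules look roughly like this (the MIME types are examples):

```apache
# Compress text-based responses with mod_deflate before sending them.
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css text/plain
  AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```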
Now all our files are cached and gzipped (compressed). Additionally, our hoster runs its own cache (Varnish), which should benefit our site’s overall performance, too.
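The JavaScript bundle got the same treatment as the fonts. A sketch of the loader, assuming the bundle lives at /js/main.min.js:

```js
// Wait until the DOM is constructed, then inject the main bundle so
// its download and execution never block the initial render.
document.addEventListener('DOMContentLoaded', function () {
  var script = document.createElement('script');
  script.src = '/js/main.min.js'; // the path is an assumption
  script.async = true;
  document.body.appendChild(script);
});
```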
This little script loads the main.min.js file after the DOM is constructed, so it doesn’t block the rendering of the page.
What could be done next?
Next we could think about inlining our Critical Path CSS using a Grunt or Gulp task, as Google PageSpeed keeps suggesting. I’ve never done this before and need to try it before I can say whether it’s worth it or not.
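A rough sketch of what such a Gulp task might look like, using the critical npm package - untested on our setup, and the paths and viewport size are assumptions:

```js
// Extract the above-the-fold CSS from the front page and inline it
// into the HTML, so first paint doesn't wait for the full stylesheet.
var gulp = require('gulp');
var critical = require('critical');

gulp.task('critical', function (done) {
  critical.generate({
    base: 'public/',     // directory the paths below resolve against
    src: 'index.html',   // page to extract the critical CSS from
    dest: 'index.html',  // write the page back with the CSS inlined
    inline: true,
    width: 1300,         // viewport that defines "above the fold"
    height: 900
  }, done);
});
```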
In another round we could review the entire code base and replace the legacy Compass compiler with node-sass and Libsass, a C++ implementation of Sass that is a lot faster than the Ruby one, to speed up compilation. Most of the vendor prefixing is done using our own mixins or (legacy) Compass functions, so handing the prefixing job to Autoprefixer is another desirable improvement.
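A sketch of such a styles task (the entry point and output path are assumptions):

```js
// Compile Sass with Libsass via gulp-sass and let Autoprefixer take
// over the prefixing job from the Compass mixins.
var gulp = require('gulp');
var sass = require('gulp-sass');                 // wraps node-sass/Libsass
var autoprefixer = require('gulp-autoprefixer'); // adds vendor prefixes

gulp.task('styles', function () {
  return gulp.src('scss/main.scss')
    .pipe(sass({ outputStyle: 'compressed' }).on('error', sass.logError))
    .pipe(autoprefixer())
    .pipe(gulp.dest('css/'));
});
```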
At this point, our website loads in 900ms-1.2s for the front page and 700-800ms for most sub pages. Yet there is still optimization to be done, especially regarding the question of how to integrate optimization into everyone’s workflow. While developers can use a Grunt/Gulp/CLI task, the people who actually write content need an easy way to handle the optimization of uploaded files. The last remaining lever is the server response time, on which we don’t have any influence.