Improving Google PageSpeed Score
How we improved our PageSpeed score from 46 to 82 on wakefit.co
Wakefit serves around one million visitors every month across all sorts of furniture categories, with around 50 product pages.
Earlier, we made around 190 requests and the page size was ~5 MB.
Now we make only 56 requests, with the page size reduced to ~3 MB.
Load time has also improved by 4X (from 12 s to 3 s).
Below are the steps we followed to make this change:
- Using the WebP format for all images instead of JPEG. (saved ~2 MB)
- Combining all CSS into a single CSS file. (saved ~10 trips)
- Combining all JS into one JS file. (saved ~15 trips)
- Introducing Cache-Control headers for static assets like images, JS, and CSS.
- Embedding small SVG files (1–5 KB) into the HTML. (saved ~25 trips)
- Using native image lazy-loading.
- Pre-loading essential assets like critical images and fonts, and pre-connecting to important third-party servers in advance.
- Converting fonts to the modern WOFF2 format (which has built-in compression) instead of regular TTF.
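The last two items are both declared in the document head. A minimal sketch — the hostname and file names here are illustrative, not our actual assets:

```html
<head>
  <!-- Warm up DNS + TCP + TLS to a third-party origin we will definitely hit -->
  <link rel="preconnect" href="https://www.googletagmanager.com" crossorigin>
  <!-- Fetch the critical font early; WOFF2 is Brotli-compressed internally -->
  <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
  <!-- Hero image that must paint first -->
  <link rel="preload" href="/img/hero.webp" as="image">
</head>
```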
Let’s go into detail:
1) Since our website was image-heavy, we switched to a more optimal image format: WebP.
Our homepage alone loads more than 40 images, constituting more than 2 MB of data; converting them to WebP saved half of that space, ~1 MB.
In fact, our whole AWS S3 bucket used to hold more than 200 MB of JPEG images, which after conversion to WebP came to only ~100 MB.
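The conversion itself can be done with Google's `cwebp` tool (e.g. `cwebp -q 80 in.jpg -o out.webp`). For browsers that don't understand WebP, the `<picture>` element provides a JPEG fallback — file names below are illustrative:

```html
<picture>
  <!-- Browsers that support WebP pick this source -->
  <source srcset="/img/mattress.webp" type="image/webp">
  <!-- Everyone else falls back to the original JPEG -->
  <img src="/img/mattress.jpg" alt="Orthopaedic mattress" width="600" height="400">
</picture>
```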
2) We used to serve around 14 CSS files to every new user, which was a lot.
We combined them into a single CSS file, minified it, and served it with gzip/brotli compression.
That saved all those extra HTTP round trips. Earlier, the CSS for our homepage cost us ~500 KB; it now rests at ~40 KB. (A huge 10X saving!)
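The combine-and-compress step itself is simple; here is a minimal sketch with illustrative file names (a real build would also run a minifier such as clean-css or cssnano before compressing):

```shell
# Illustrative stand-ins for our real stylesheets:
mkdir -p cssdemo
printf 'body{margin:0}\n'     > cssdemo/base.css
printf 'h1{font-size:2rem}\n' > cssdemo/type.css
# One bundle instead of 14 separate requests:
cat cssdemo/base.css cssdemo/type.css > cssdemo/bundle.css
# Pre-compress once, so the server can serve gzip without per-request work:
gzip -9 -kf cssdemo/bundle.css
```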
3) For our JS, the investigation revealed that some files could luckily be removed altogether, and the rest could be combined into two bundles: one critical bundle needed right at the start, and a second, less critical bundle containing the analytics plugins and carousels.
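The two-bundle split can be expressed directly in the page markup — bundle names here are hypothetical:

```html
<head>
  <!-- Critical bundle: needed right at the start, so it loads first -->
  <script src="/js/critical.bundle.js"></script>
</head>
<body>
  <!-- Analytics + carousels: deferred, so they never block rendering -->
  <script src="/js/vendor.bundle.js" defer></script>
</body>
```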
4) Since we served all images from our Amazon S3 bucket, it was under heavy load, as every visit required fetching images from the server.
To resolve this, we added a Cache-Control header to each image resource, set to expire one month after download.
Result: our server load dropped by HALF!
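On S3 the header is stored as per-object metadata, so it can be applied by re-copying objects onto themselves with the AWS CLI — the bucket name below is illustrative, and 2592000 seconds is the one-month max-age:

```shell
aws s3 cp s3://example-assets/ s3://example-assets/ \
  --recursive --metadata-directive REPLACE \
  --cache-control "public, max-age=2592000"
```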
5) We noticed that ~20 calls were made to retrieve small SVG images on the homepage, including small icons for social media, the user account, etc.
We had two options:
- Add a Cache-Control header, so every repeat visitor is saved from downloading them again. But every user would still have to make all ~20 calls to download them in the first place.
- Embed all of them in the HTML document in base64 encoding. This would indeed save all those ~20 HTTP calls, but the downside is that the icons could no longer be cached separately.
After careful consideration, we chose option 2 to improve the first-time user experience.
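Inlining can take two forms: a base64 data URI inside a normal `img` tag, or the SVG markup pasted directly into the document. A minimal sketch (the icon and the truncated base64 string are illustrative):

```html
<!-- Option A: base64 data URI in an ordinary img tag -->
<img alt="user icon" src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0i...">

<!-- Option B: literal inline SVG, which can also be styled with CSS -->
<svg width="16" height="16" viewBox="0 0 16 16" aria-hidden="true">
  <circle cx="8" cy="8" r="7" fill="currentColor"/>
</svg>
```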
6) We used native image lazy-loading, which saved us around 40% of bandwidth.
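Native lazy-loading is a single attribute on below-the-fold images — the browser defers fetching them until they approach the viewport (file name illustrative):

```html
<img src="/img/sofa-2.webp" loading="lazy"
     alt="Three-seater sofa" width="600" height="400">
```

Setting explicit `width`/`height` alongside it avoids layout shifts when the deferred image finally loads.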
7) Next came the turn of analytics plugins. As every e-commerce site loads tons of plugins to deeply analyze its customers, we were no exception, running around 5–10 different plugins.
After a collaborative decision, we removed some of them, saving a few more network calls.
Analysis across Competitors
Wakefit lies in the e-commerce bucket in India and has many competitors, such as:
- Urban Ladder
We analysed PageSpeed scores across all of them and prepared charts showing that Wakefit stands tall.
A lot has been done, but a lot can still be done. Some things yet to be tried:
- Trying SSR (Server-Side Rendering).
- Using HHVM instead of regular PHP to boost throughput.
- Using LightHouse CI (Continuous Integration) to automate Audit checking in future as we build more.
- Using Varnish Cache at the Server or trying NGINX default cache.
- Using HTTP/2 server push?
- Utilizing Service workers for caching?