Google Core Web Vitals Report


Google's PageSpeed Insights report combines three of its existing metrics into one section called "Core Web Vitals": Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). Depending on the reporting system you use, First Contentful Paint (FCP) is shown alongside them as a fourth. This document explains those particular metrics.
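
Each metric comes with "good / needs improvement / poor" boundaries published in Google's web.dev documentation. As a rough sketch of how a field value maps to a rating (the threshold table and helper name here are ours, not part of the PageSpeed report):

```javascript
// Google's published "good" / "poor" boundaries for the Core Web Vitals
// (plus FCP). Times are in milliseconds; CLS is unitless.
// Values between the two boundaries fall into "needs improvement".
const THRESHOLDS = {
  FCP: { good: 1800, poor: 3000 },
  LCP: { good: 2500, poor: 4000 },
  FID: { good: 100,  poor: 300 },
  CLS: { good: 0.1,  poor: 0.25 },
};

function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`unknown metric: ${metric}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}

console.log(rate('LCP', 2400)); // "good"
console.log(rate('CLS', 0.3));  // "poor"
```

Note that the report buckets your real-user ("field") data this way per metric; the overall Core Web Vitals assessment passes only when the relevant metrics are in the "good" range.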

Google bases its measurements on a cross-section of Chrome users across various devices, operating systems, and network conditions, who have opted in to sending anonymous browsing data to Google. It is unclear whether this cross-section is from particular geographic locations or worldwide data.

Note: Site speed (performance) is only one of many factors in search engine ranking; a fast site is not equivalent to good SEO.

 

For your reference and for comparison purposes, we ran a PageSpeed Insights test on the CNN website, which fails the Core Web Vitals assessment on both mobile and desktop versions (April 2022). The same is true for The Washington Post and other globally renowned publications.

 

CNN website Core Web Vitals report for mobile (April 2022):

 

cnn-core-vitals-mobile-april-2022.png

 

CNN website Core Web Vitals report for desktop (April 2022):

 

cnn-core-vitals-desktop-april-2022.png

 

Publishers must balance their content presentation, site features, user-friendliness, digital revenue opportunities (e.g., ads and sponsored content), and performance. This means that, compared to a static website with evergreen content spread across a handful of pages, a digital publication will generally score much worse.

Google Search Console algorithms cannot distinguish between:

static and dynamic websites
sites that use lazy loading and sites that do not
sites that update content frequently and sites that do not
websites that use a CMS service and websites that do not
websites that need third-party services (for revenue or visitor features) and websites that do not

Below each metric, the report lists guidelines such as "minimize main thread work" or "remove unused JavaScript". For explanations of these and other improvement suggestions not outlined here, please review the following help document: PageSpeed Insights Report


 

First Contentful Paint (FCP)

Here are excerpts from Google's documentation defining the FCP metric:

The First Contentful Paint (FCP) metric measures the time from when the page starts loading to when any part of the page's content is rendered on the screen. For this metric, "content" refers to text, images (including background images), <svg> elements, or non-white <canvas> elements.

FCP marks the first point in the page load timeline where the user can see anything on the screen.

To provide a good user experience, sites should strive to have a First Contentful Paint of 1.8 seconds or less.

This means that the FCP metric is designed to find resources such as scripts or CSS stylesheets that may be impeding page elements from rendering quickly.

The <svg> elements mentioned by Google are an interesting example of a performance trade-off. SVG files are generally smaller than traditional image formats such as JPG or PNG, so they help the FCP metric compared to other images. However, because they are technically text rather than binary image files, their rendered dimensions may not be known before loading, which can worsen (raise) Google's Cumulative Layout Shift (CLS) score, discussed below.


 

Largest Contentful Paint (LCP)

Here are excerpts from Google's documentation defining the LCP metric:

Largest Contentful Paint (LCP) marks the point in the page load timeline when the page's main content has likely loaded.

LCP is the amount of time to render the largest content element visible in the viewport, relative to when the page first started loading. The largest element is typically an image or video, or perhaps a large block-level text element.

To provide a good user experience, sites should strive to have Largest Contentful Paint of 2.5 seconds or less.

Caution: Since users can open pages in a background tab, it's possible that the Largest Contentful Paint will not happen until the user focuses the tab, which can be much later than when they first loaded it.

In other words, LCP measures only partial load time of a page: the render time of the largest element above the fold. The LCP section of the report may link to that element. If it is an image, consider reducing its file size; PNG images tend to be much larger than JPGs, for example. Lossless Image Compression is also always a good thing for site performance.

However, every time a website user opens a link from their search engine results, or from within your site, in a new tab without immediately switching to that tab, it will skew the LCP toward a worse (higher) value, since the paint is not recorded until the tab gains focus. Google's report does not provide numbers detailing how often this occurs for your site.


 

First Input Delay (FID)

Here are excerpts from Google's documentation defining the FID metric:

First Input Delay (FID) is a metric that measures a page's responsiveness during load. As such, it only focuses on input events from discrete actions like clicks, taps, and key presses.

It is the time from when a user first interacts with your page (when they clicked a link, tapped on a button, and so on) to the time when the browser responds to that interaction. This measurement is taken from whatever interactive element the user first clicks. This is important on pages where the user needs to do something, because this is when the page has become interactive.

To provide a good user experience, sites should strive to have a First Input Delay of 100 milliseconds* or less.

Gotchas: FID only measures the "delay" in event processing. It does not measure the event processing time itself nor the time it takes the browser to update the UI after running event handlers. While this time does affect the user experience, including it as part of FID would incentivize developers to respond to events asynchronously—which would improve the metric but likely make the experience worse.

* 100 milliseconds = 0.1 seconds
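
The "delay only" definition above can be made concrete. In the browser's Event Timing data, the delay is the gap between when the input happened and when its handler could start running; the handler's own run time is excluded. A minimal sketch, using a plain object whose field names mirror a PerformanceEventTiming entry (the helper name is ours):

```javascript
// FID is only the gap between the user's input and the moment the
// browser can start running the event handler.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Example: input at t=1000 ms, handler starts at t=1080 ms and runs 200 ms.
const entry = { startTime: 1000, processingStart: 1080, processingEnd: 1280 };

console.log(firstInputDelay(entry)); // 80 ms, i.e. "good" (under 100 ms)
// The 200 ms of handler work (processingEnd - processingStart) is NOT
// counted, exactly as the "Gotchas" note above describes.
```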

Rest assured that Metro Publisher developers understand the FID metric, including the difference between asynchronous and synchronous event handling. Metro Publisher follows best practices for user experience and performance. Global web standards and best practice are not based on Google metrics alone.


 

Cumulative Layout Shift (CLS)

Here are excerpts from Google's documentation defining the CLS metric:

Cumulative Layout Shift is the amount that the page layout shifts during the loading phase. The score starts at 0, meaning no shifting, and increases with the amount of unexpected shifting; Google considers scores up to 0.1 good and scores above 0.25 poor.

CLS helps quantify how often users experience unexpected layout shifts. This is important because having page elements shift while a user is trying to interact with them is a bad user experience. If you can't seem to find the reason for a high value, try interacting with the page to see how that affects the score.

Unexpected movement of page content usually happens because resources are loaded asynchronously or DOM elements get dynamically added to the page above existing content. The culprit might be an image or video with unknown dimensions, a font that renders larger or smaller than its fallback, or a third-party ad or widget that dynamically resizes itself.
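
Google scores each individual shift as an impact fraction (how much of the viewport the moved element touched) multiplied by a distance fraction (how far it moved, relative to the viewport). A minimal sketch of that arithmetic, simplified to vertical shifts only (the function and rect shapes are ours, for illustration):

```javascript
// Each layout shift is scored as impact fraction * distance fraction,
// per Google's CLS definition. Rects are simplified to {top, height}.
function layoutShiftScore(before, after, viewportHeight) {
  // Impact fraction: share of the viewport touched by the element
  // in either frame (union of its before/after positions).
  const top = Math.min(before.top, after.top);
  const bottom = Math.max(before.top + before.height, after.top + after.height);
  const impactFraction = Math.min(bottom - top, viewportHeight) / viewportHeight;
  // Distance fraction: how far the element moved, relative to the viewport.
  const distanceFraction = Math.abs(after.top - before.top) / viewportHeight;
  return impactFraction * distanceFraction;
}

// A 300px-tall block pushed down 100px in an 800px-tall viewport:
const score = layoutShiftScore(
  { top: 0, height: 300 },
  { top: 100, height: 300 },
  800
);
console.log(score); // 0.0625 (= 0.5 impact * 0.125 distance)
```

This is why a late-loading ad that pushes an entire article column down scores far worse than a small element nudging a few pixels.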

In other words, this metric is about when and how elements display. Because we deliberately implement lazy loading for performance, elements loading as they scroll into view can register as layout shifts in Google's CLS metric.

Lazy loading means page elements (ads, images, Sprockets, maps, text) load as users scroll, so not until the elements are actually in the users' viewport (the area they can see on their screen). Therefore, text will usually load first on the screen on any image-heavy or script-heavy site. If the entire page has to load before anything displays, including all scripts, then it would take longer for visitors to see anything at all.

Please note that the largest effect on load times lies with third-party scripts, as illustrated by all Google PageSpeed Insights reports. We cannot influence third-party code or the performance resources it consumes.

A perfect CLS score would require the browser to know how much space elements will need before they load, which is difficult with CSS and responsive design in play: the space needed has to be computed and adjusted for whichever device a visitor is using. At Metro Publisher, we have already implemented best-practice features for the CLS metric.
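
When an element's intrinsic dimensions are known up front (for example, width and height attributes on an image), the browser can reserve the right amount of display space before the file arrives, so nothing shifts when it loads. A minimal sketch of that arithmetic across responsive layout widths (the function name is ours):

```javascript
// With intrinsic width/height known, the browser reserves the display
// height from the aspect ratio before the image file has loaded.
function reservedHeight(intrinsicWidth, intrinsicHeight, displayWidth) {
  return displayWidth * (intrinsicHeight / intrinsicWidth);
}

// A 1280x720 image rendered at different responsive column widths:
for (const w of [320, 768, 1280]) {
  console.log(w, '->', reservedHeight(1280, 720, w)); // 180, 432, 720
}
```

This is exactly the information that is missing when dimensions can only be discovered after a resource (or the script that injects it) has loaded.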

Google assumes that all the information needed to display every page element can appear right at the top of the HTML on every page. That is not possible when scripts on a site have to load first, and it is further complicated by responsive design.

CLS scores can potentially be improved by reducing the file size of especially large images, including ad creatives, but Metro Publisher and ad serving providers already have file size limits in place to prevent accidental uploading of oversized images.

The other approach is reducing the number of scripts required to display elements. Any time a script must execute before content can display (e.g., HTML widget code), you may see layout shifts as users scroll, because nearby elements may load much more quickly.

You can also increase image compression, but that depends on whether the uploaded images are of high enough quality to be compressed further, and on whether they need it at all. The default compression, if no override value is entered, is 85. Stick with images that are 1280px wide at 72 dpi.

We have information on compressing images in Metro Publisher here:

Lossless Image Compression
Responsive Image Sizing

SVG files are text rather than binary image data, so standard image compression does not apply to them (although, like any text asset, they can be minified and served gzip-compressed), and their rendered width and height cannot always be read out before loading. Being text keeps their file size low, but not knowing the size before an SVG loads means it can also negatively affect the CLS score, even though SVGs are still usually smaller than other image formats. This is an example of a performance trade-off worth considering, and such a comparison is not included in the Google report.

As long as pages are not visibly jumping around for readers, which is what the CLS metric is about, this metric is not an issue. CLS tries to measure user experience rather than technical errors, and any comparisons you make to interpret your scores should be with other magazine publishing websites.

 


 

More Help with Core Web Vitals Metrics

Please note that troubleshooting third-party services, as well as individual performance reviews of your site or of specific pages, is generally outside the scope of our standard support service.

You are welcome to submit a ticket with inquiries regarding your Core Web Vitals and PageSpeed Insights report, and we will respond to any general questions.

Questions requiring deeper investigation will be passed along to our custom services team for a quote.

