
WILCO Web Services

Google PageSpeed Insights: How To Read Scores & Fix Issues

  • Anthony Pataray
  • 11 minutes ago
  • 17 min read

You ran your site through Google PageSpeed Insights, and the score wasn't great. Maybe it was red. Maybe it was yellow. Either way, you're staring at a wall of metrics, acronyms, and suggestions that don't mean much without context. You're not alone: most local business owners we work with at Wilco Web Services see this tool for the first time and immediately feel overwhelmed by terms like LCP, CLS, and FCP.


Here's the thing: PageSpeed Insights is genuinely useful, but only if you know what the data actually means and which fixes will move the needle for your site. Not every suggestion carries the same weight, and chasing a perfect 100 score isn't always the right goal. What matters is real-world performance, how fast your pages load for the people trying to find and hire you. A slow site costs you leads, rankings, and revenue. We see it constantly with the local businesses we build and optimize websites for.


This guide breaks down exactly how to read your PageSpeed Insights scores, what each metric measures, and how to fix the most common issues dragging your site down. We'll walk through it step by step, with practical recommendations you can act on, whether you're handling it yourself or handing it off to your developer.


What Google PageSpeed Insights measures and why it matters


Google PageSpeed Insights (PSI) is a free tool that analyzes a URL and returns performance data about how that page loads and behaves for real users. It pulls from two different data sources simultaneously: a simulated test run in a controlled lab environment and actual field data collected from real Chrome users visiting your site. Understanding what each data source tells you is the first step to making sense of everything on your report. Before you act on a single suggestion, you need to know which numbers reflect reality and which reflect a controlled simulation.


The difference between lab data and field data


Lab data comes from Lighthouse, Google's open-source automated auditing tool. When you run a test, Lighthouse loads your page on a simulated mid-range mobile device using a throttled network connection. It measures how the page performs under those consistent, repeatable conditions. This is useful for debugging because it gives you a stable baseline, but the conditions are artificial by design.


Field data, labeled as "Real User Measurements" or CrUX data in your report, comes from the Chrome User Experience Report. Google collects this data passively from real Chrome users who have opted into sharing browsing statistics. It reflects actual load times, actual devices, and actual network conditions your visitors experience. If your audience is mostly on fast desktop connections, your field data will reflect that. If they're mostly on mobile with spotty connections, that shows up too.


Field data is what Google's ranking systems actually use to evaluate your site's performance, so treat it as the authoritative signal and use lab data as your diagnostic tool.

| Data Type | Source | Why It Matters |
| --- | --- | --- |
| Lab Data | Lighthouse simulation | Reproducible, useful for debugging fixes |
| Field Data (CrUX) | Real Chrome users | Reflects actual user experience; used in ranking |


Core Web Vitals: the metrics that directly affect rankings


Core Web Vitals are the subset of performance metrics Google has designated as ranking signals. As of 2024, there are three: Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). Each one measures a distinct dimension of user experience: how fast the main content appears, how quickly the page responds to user input, and how stable the page layout stays as it loads.


Google defines passing thresholds for each metric. For LCP, good is under 2.5 seconds. For INP, good is under 200 milliseconds. For CLS, good is a score under 0.1. Your PSI report color-codes each metric: green means good, orange means needs improvement, and red means poor. If your field data shows red or orange on any Core Web Vital, that's where your optimization effort needs to start because those metrics directly influence where your pages rank in search results.
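If you pull scores into a script or spreadsheet, those cutoffs are easy to encode. Here's a minimal sketch; the metric keys and function name are our own shorthand, not part of any Google API:

```javascript
// Classify a Core Web Vitals value against Google's published thresholds.
// LCP and INP are in milliseconds; CLS is unitless.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 },
  inp: { good: 200, poor: 500 },
  cls: { good: 0.1, poor: 0.25 },
};

function rateVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}
```

An LCP of 2,000 ms rates "good", while a CLS of 0.3 rates "poor", matching the color coding in your report.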


Diagnostic metrics and other signals


Beyond Core Web Vitals, Google PageSpeed Insights surfaces additional metrics under the lab data section: First Contentful Paint (FCP), Speed Index, and Total Blocking Time (TBT). These don't carry direct ranking weight on their own, but they help you pinpoint the specific technical causes behind a poor LCP or INP score.


Your report also includes an "Opportunities" section and a "Diagnostics" section. Opportunities are specific fixes with estimated time savings attached, like compressing images or eliminating render-blocking resources. Diagnostics highlight patterns that signal deeper performance problems, such as excessive DOM size or long main-thread tasks. Neither section replaces the Core Web Vitals data, but both tell you exactly where to look when you're ready to start making improvements. Treat them as a prioritized to-do list, not a checklist where every item needs immediate attention.


Run a clean PageSpeed test the right way


Before you react to any score, you need to make sure your test is actually giving you reliable data. A lot of people run Google PageSpeed Insights while logged into their site, with browser extensions active, or right after pushing a code change, and then wonder why the numbers look inconsistent from one test to the next. The tool is straightforward, but small variables in how you run it can produce misleading results that send you chasing problems that don't exist.


Set up your browser and environment correctly


The first thing to do is open a private or incognito browser window before you paste any URL into PageSpeed Insights. This prevents browser extensions, logged-in sessions, and cached data from interfering with how the page loads during the test. Extensions like ad blockers, password managers, and SEO toolbars can inject scripts that inflate your blocking time and artificially hurt your score.


Always disable any active VPN connections before running a test, since routing traffic through a VPN server adds latency that doesn't reflect what your real visitors experience.

You should also wait at least 15 minutes after deploying any changes before re-testing. Content delivery networks and caching layers need time to propagate updates across servers. Testing immediately after a deploy often means you're measuring a partially cached or stale version of your page.


Choose which pages to test


Not every page on your site performs the same way, so testing only your homepage gives you an incomplete picture. Focus your tests on the pages that drive the most traffic and conversions: your main service pages, landing pages tied to ad campaigns, and any page where a slow load directly costs you a lead or inquiry.



  • Homepage

  • Primary service or product pages

  • Contact or booking pages

  • Any page tied to active paid traffic campaigns

  • Blog posts or location pages already ranking on page one


Run multiple tests and average the results


A single test run can fluctuate based on server response time at that exact moment and network conditions during the simulated load. Run three to five tests on the same URL within a short window and note the range of scores you get. If your scores land consistently within five points of each other, you have a reliable baseline to work from. If they swing by 15 points or more, that's a signal your server response time is inconsistent and worth investigating before you touch anything else.
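If you'd rather not eyeball the spread, a few lines of JavaScript can summarize a batch of runs for you. This is a sketch that assumes you paste the performance scores in yourself:

```javascript
// Summarize repeated PSI runs: average score and point spread.
// A spread over 5 points suggests your baseline isn't stable yet.
function summarizeRuns(scores) {
  const average = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const spread = Math.max(...scores) - Math.min(...scores);
  return { average: Math.round(average), spread, stable: spread <= 5 };
}
```

Running summarizeRuns([70, 72, 74]) reports a stable baseline, while a set like [60, 78, 75] flags the inconsistency worth investigating first.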


Read the report without chasing a perfect score


The score at the top of your Google PageSpeed Insights report is the first thing you see, and it usually causes the most unnecessary stress. That number is a Lighthouse performance score, a weighted combination of lab metrics that gives you a quick reference point but doesn't reflect how your site performs for real users. Chasing a perfect 100 wastes time on marginal fixes when the real goal is passing the Core Web Vitals assessment in your field data section.


A site scoring 72 with passing field data Core Web Vitals outperforms a site scoring 95 with poor field LCP from a ranking standpoint.

Understand how the score is calculated


Lighthouse calculates your score as a weighted average of five lab metrics, with each one contributing a different percentage to the final number. (Recent Lighthouse versions dropped Time to Interactive and folded its weight into CLS.) Knowing the weights helps you understand why fixing one issue can move your score dramatically while another fix barely registers. TBT alone accounts for 30% of the total score, which is why reducing JavaScript execution time often produces the biggest jumps.


| Metric | Weight in Score |
| --- | --- |
| Total Blocking Time (TBT) | 30% |
| Largest Contentful Paint (LCP) | 25% |
| Cumulative Layout Shift (CLS) | 25% |
| First Contentful Paint (FCP) | 10% |
| Speed Index | 10% |
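The calculation itself is just a weighted average. Here's a sketch; the sub-scores and weights are passed in by you rather than pulled from Lighthouse:

```javascript
// Combine per-metric sub-scores (each 0-100) into an overall
// Lighthouse-style score using a weights map like the table above.
function weightedScore(subScores, weights) {
  let total = 0;
  for (const [metric, weight] of Object.entries(weights)) {
    total += subScores[metric] * weight;
  }
  return Math.round(total);
}
```

This is why a perfect CLS can't rescue a terrible TBT: each metric can only contribute up to its weight.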


Focus on field data before the lab score


When you open your report, skip past the headline score and go straight to the "Core Web Vitals Assessment" section. This section pulls from real Chrome user data and tells you whether your site passes or fails Google's thresholds. If field data shows green across LCP, INP, and CLS, your site already meets Google's performance standard for ranking purposes, regardless of what the lab score says.


If your site lacks enough traffic to generate field data, the report will display "insufficient data." In that situation, use your lab metrics as a working proxy and prioritize getting LCP under 2.5 seconds and TBT under 200 milliseconds before addressing anything else.


Use opportunity estimates to filter your work


Your report lists opportunities with estimated time savings next to each one. Sort by savings size and focus on the biggest gains first. Anything offering 0.5 seconds or more in savings deserves immediate attention, while items listed at 0.05 seconds can sit at the bottom of your list.


  • High priority: Savings of 0.5 seconds or more

  • Medium priority: Savings between 0.2 and 0.5 seconds

  • Low priority: Savings under 0.2 seconds
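If you export your opportunities to a script or spreadsheet, those cutoffs translate directly into code. A sketch, with savingsSeconds standing in for the report's estimated savings:

```javascript
// Bucket PSI opportunities by estimated savings, biggest wins first.
function prioritize(opportunities) {
  const bucket = ({ savingsSeconds }) =>
    savingsSeconds >= 0.5 ? 'high' : savingsSeconds >= 0.2 ? 'medium' : 'low';
  return opportunities
    .map(o => ({ ...o, priority: bucket(o) }))
    .sort((a, b) => b.savingsSeconds - a.savingsSeconds);
}
```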


Decide what to fix first with a simple priority plan


Your Google PageSpeed Insights report will hand you a list of 10 to 20 suggestions, and almost none of them come with a label telling you which one matters most. Jumping in randomly burns time and often produces small gains that don't move your Core Web Vitals scores. You need a structured approach to rank your fixes before you touch a single line of code.


Sort fixes by Core Web Vitals impact first


The clearest way to build your priority list is to tie each fix directly to a Core Web Vitals metric. If a recommendation in your Opportunities section directly improves LCP, INP, or CLS, it moves to the top of your list regardless of how small the time-saving estimate looks. Those three metrics are the ones Google uses in ranking, so every other fix is secondary until your field data shows green across all three.


Fix what fails your Core Web Vitals assessment before you optimize anything else, because passing that threshold is what affects your search rankings directly.

Start by marking each fix in your report with its associated metric using a simple tracking sheet:


| Fix | Associated Metric | Estimated Saving | Priority |
| --- | --- | --- | --- |
| Preload LCP image | LCP | 1.2s | High |
| Reduce JavaScript execution | INP / TBT | 0.8s | High |
| Set explicit image dimensions | CLS | N/A | High |
| Compress images | LCP | 0.6s | High |
| Remove unused CSS | TBT | 0.3s | Medium |
| Minify CSS | FCP / Speed Index | 0.1s | Low |


Group remaining fixes by effort versus gain


Once you've separated Core Web Vitals fixes from the rest, apply a second filter to the remaining items: estimate the effort required against the size of the gain. A fix that takes 15 minutes and saves 0.5 seconds belongs above one that takes two days of developer time for a 0.05-second improvement. Quick wins with meaningful savings should always land ahead of technically complex changes that deliver marginal results.


Use this simple framework to bucket remaining fixes by effort:


  • Do immediately: Low effort, saving above 0.3 seconds (image compression, adding explicit width and height attributes to images, enabling text compression on your server)

  • Schedule soon: Medium effort, saving between 0.2 and 0.5 seconds (deferring non-critical JavaScript, removing unused CSS)

  • Plan carefully: High effort, saving under 0.2 seconds (font subsetting, third-party script audits, server-side rendering changes)
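If you want to make the effort-versus-gain tradeoff explicit, a rough seconds-saved-per-hour ratio does the job. A sketch, where effortHours is your own estimate for each fix:

```javascript
// Rank fixes by estimated seconds saved per hour of developer effort.
function rankByValue(fixes) {
  return [...fixes]
    .map(f => ({ ...f, valuePerHour: f.savingsSeconds / f.effortHours }))
    .sort((a, b) => b.valuePerHour - a.valuePerHour);
}
```

A 15-minute image compression pass saving half a second will land far above a multi-day refactor saving a twentieth of one.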


Working through fixes in this order means every hour you invest moves the metrics that matter most before you spend time on refinements with limited real-world impact.


Improve LCP so the main content loads faster


Largest Contentful Paint measures how long it takes for the biggest visible element on your page to finish rendering in the viewport. That element is almost always a hero image, a large heading, or a video thumbnail. If your LCP is above 2.5 seconds in your field data, Google flags it as poor, and it directly pulls down your search performance. Before you can fix it, you need to know exactly which element is causing the delay.


Find your LCP element first


Open your Google PageSpeed Insights report and scroll to the lab data section. Click the "LCP" metric to expand its details, and the report will identify the specific element responsible for your score. Knowing whether your LCP element is an image, a background CSS image, a block of text, or something else determines which fix you apply, so don't skip this step.


If your LCP element is a CSS background image, preloading it requires a different approach than preloading a standard <img> tag, so confirm the element type before writing any code.

Preload the LCP image so the browser finds it sooner


The browser normally discovers your LCP image only after it parses your HTML and stylesheet. Adding a preload link in your <head> tells the browser to fetch the resource immediately, before it processes the rest of the page. This single change consistently produces the largest LCP improvements on image-heavy pages.


Add this tag inside your <head>:


<link rel="preload" as="image" href="/images/hero.webp" fetchpriority="high">


If your LCP image is responsive and uses srcset, use this version instead:


<link rel="preload" as="image" href="/images/hero.webp" imagesrcset="/images/hero-480.webp 480w, /images/hero-800.webp 800w" imagesizes="100vw" fetchpriority="high">


Only preload your LCP image specifically. Preloading multiple resources competes for bandwidth and can slow other critical assets down.


Compress and serve images in a next-gen format


Even a properly preloaded image loads slowly if the file size is too large. Convert your LCP image to WebP or AVIF format, which typically cuts file size by 30 to 50 percent compared to JPEG or PNG without a visible quality loss. Pair compression with correct sizing: serve an image at the exact pixel dimensions it displays at, not a 3,000-pixel wide version scaled down in CSS.


  • Use WebP for broad browser compatibility

  • Use AVIF for even smaller files on modern browsers

  • Set explicit width and height attributes on every <img> tag to prevent layout shifts while the image loads


Improve INP so your site responds faster to clicks


Interaction to Next Paint measures how long your page takes to visually respond after a user clicks a button, taps a link, or types into a field. Google's threshold for a good INP score is under 200 milliseconds. Anything above 500 milliseconds is considered poor, and that delay is noticeable enough that users abandon the action entirely. On most sites, slow INP traces back to JavaScript blocking the main thread at the exact moment the user interacts.


Find what's causing slow interactions


Your Google PageSpeed Insights report flags INP under the Core Web Vitals section, but it won't show you which specific interaction triggered the delay. To dig deeper, open Chrome DevTools (press F12), go to the Performance panel, and record a session while clicking through the interactions on your page. Look for long tasks shown as red blocks in the main thread row. Those are your primary culprits.


A long task is any script that runs for more than 50 milliseconds on the main thread, blocking the browser from responding to user input until it finishes.

Pay attention to which scripts own those long tasks: first-party application code, third-party analytics tags, chat widgets, and ad scripts are the most common offenders. Note the function names and source files so you can target them directly in the next step.


Break up long JavaScript tasks


The browser can only process one task at a time on the main thread. When your JavaScript runs a single long function, every user interaction waits in a queue until that function finishes. The fix is to break large tasks into smaller chunks that yield control back to the browser between steps. You can do this using the scheduler.yield() API where supported, or a setTimeout fallback.


Here's a basic pattern for yielding between chunks of work:


async function yieldToMain() {
  // Use scheduler.yield() where supported; otherwise fall back to setTimeout
  if (globalThis.scheduler && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}

async function runInChunks(items) {
  for (const item of items) {
    processItem(item);
    // Yield to the browser between each item so pending input can run
    await yieldToMain();
  }
}


This approach gives the browser a window to handle pending user input between each iteration instead of locking up until the entire loop finishes.


Defer or remove third-party scripts


Third-party scripts are one of the biggest sources of main-thread blocking on local business sites. Analytics platforms, live chat widgets, review badges, and ad pixels all execute JavaScript that competes with your page's responsiveness. Audit every third-party script by loading your page and filtering the DevTools Network tab by domain.


Remove scripts you don't actively use, and add the defer attribute to the ones you keep so they don't execute until the HTML has been fully parsed:


<script src="https://example.com/widget.js" defer></script>


Deferring third-party scripts alone can drop your INP by 100 milliseconds or more on pages carrying five or more external tags.


Improve CLS so the page stops shifting while loading


Cumulative Layout Shift measures how much your page's visual content moves around while it loads. Every time an image pops in and pushes your text down, or a font swap causes a heading to reflow, or an ad banner appears and shoves everything below it, that counts as a layout shift. Google's threshold for a good CLS score is under 0.1, and anything above 0.25 is flagged as poor. On local business sites, CLS problems are almost always caused by a handful of predictable issues: unsized images, late-loading fonts, and dynamically injected content like ads or cookie banners.


A layout shift that happens within 500 milliseconds of a user interaction doesn't count against your CLS score, but unexpected shifts during page load do, so focus on what loads without any user trigger.

Find what's causing layout shifts


Your Google PageSpeed Insights report flags CLS in the Core Web Vitals section, but identifying the exact elements responsible requires a closer look. Open Chrome DevTools, go to the Performance panel, and record a fresh page load. In the Experience row at the top of the timeline, red blocks mark layout shift events. Click any block to see exactly which elements shifted, how far they moved, and when the shift occurred relative to page load.


Look for patterns: if shifts cluster around the two to four second mark, late-loading images or fonts are usually the cause. If shifts happen later, dynamically injected content like chat widgets or ad slots is the more likely culprit. Knowing the timing helps you target the right fix instead of guessing.
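You can also log shifts from the browser console while you browse, using the Layout Instability API. Here's a sketch with the summing logic split out; note that shifts flagged with hadRecentInput are excluded, matching how CLS is scored:

```javascript
// Sum layout-shift entries the way CLS counts them: shifts that follow
// recent user input (hadRecentInput) don't count against the score.
function sumUnexpectedShifts(entries) {
  return entries
    .filter(entry => !entry.hadRecentInput)
    .reduce((total, entry) => total + entry.value, 0);
}

// In the browser, feed in real entries via PerformanceObserver:
// new PerformanceObserver(list => {
//   console.log('CLS so far:', sumUnexpectedShifts(list.getEntries()));
// }).observe({ type: 'layout-shift', buffered: true });
```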


Set explicit dimensions on every image and media element


The most common source of high CLS scores is images and videos without declared dimensions. When the browser encounters an <img> tag with no width or height, it reserves zero space for the element until the file loads, then suddenly expands the layout to fit it. Adding explicit dimensions tells the browser exactly how much space to hold before the asset arrives.


Add width and height attributes directly on every image tag:


<img src="/images/team-photo.webp" width="800" height="533" alt="Our team">


For responsive images that scale with CSS, pair the attributes with height: auto in your stylesheet to preserve the correct aspect ratio across screen sizes:


img { max-width: 100%; height: auto; }


Reserve space for ads and embedded content


Ad slots and third-party embeds like maps, videos, and review widgets inject content after the initial page render, which almost always causes a shift. The fix is to reserve the space those elements will occupy before they load, so the layout doesn't need to adjust when the content arrives.


Use a CSS aspect ratio container to hold space for any embed:


.embed-container {
  width: 100%;
  aspect-ratio: 16 / 9;
  background-color: #f0f0f0;
}


For ad slots with a fixed height, set a minimum height on the container that matches the largest ad unit you serve in that position. This prevents the page from collapsing the space and then expanding when the ad loads, which is one of the most disruptive shift patterns users encounter.
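As a concrete example, a standard 728x90 leaderboard slot could reserve its height like this (the class name is our own; match it to however your ad markup is structured):

```css
/* Hold 90px of vertical space so the layout doesn't jump when the ad loads. */
.ad-slot-leaderboard {
  min-height: 90px;
}
```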


Fix common PSI audits that slow most sites down


Your Google PageSpeed Insights report surfaces dozens of potential fixes, but the same four or five audits appear on nearly every local business site we analyze. Addressing these high-frequency issues will produce faster, more consistent gains than working through a random list of suggestions one by one. Each fix below maps to a specific audit flag you'll see in your Opportunities or Diagnostics section.


Enable text compression on your server


Your server sends HTML, CSS, and JavaScript files uncompressed by default, and those text files are far larger over the wire than they need to be. Enabling Gzip or Brotli compression instructs the server to compress text-based files before sending them to the browser, which typically reduces transfer size by 60 to 80 percent. Brotli compresses more efficiently than Gzip on modern browsers, so use Brotli where your server supports it.


Brotli compression is supported by all major modern browsers and typically outperforms Gzip by 15 to 25 percent on text-based assets.

Add these lines to your .htaccess file on Apache servers to enable both:


<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_brotli.c>
  AddOutputFilterByType BROTLI_COMPRESS text/html text/css application/javascript
</IfModule>


Eliminate render-blocking resources


Render-blocking CSS and JavaScript are files that force the browser to stop building the page until they finish downloading and parsing. Every render-blocking file delays your First Contentful Paint and indirectly raises your LCP. The fix is to load non-critical CSS asynchronously and defer any JavaScript that doesn't need to run before the page displays.


Load non-critical stylesheets without blocking render using this pattern:


<link rel="preload" href="/css/non-critical.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/css/non-critical.css"></noscript>


Add defer to any script tag that doesn't need to execute during the initial parse:


<script src="/js/analytics.js" defer></script>


Serve static assets with efficient cache policies


When your server sends images, fonts, and scripts without cache-control headers, browsers re-download those files on every visit instead of loading them from local cache. Setting long cache lifetimes for static assets cuts load time significantly for returning visitors, who represent a large portion of your leads on local business sites.


Add this to your .htaccess file to set a one-year cache on common static file types:


<FilesMatch "\.(webp|jpg|png|svg|woff2|css|js)$">
  Header set Cache-Control "max-age=31536000, public, immutable"
</FilesMatch>


Re-test, validate in the real world, and keep it fast


After you apply a fix, running your page through Google PageSpeed Insights again is the fastest way to confirm the change did what you expected. But re-testing alone isn't enough. Lab scores shift for many reasons unrelated to your work, so you need to pair your PSI results with real-world validation from tools that reflect how actual users experience your site. Building a simple re-testing rhythm into your workflow protects the gains you've made and catches new regressions before they affect your rankings.


Re-test after every fix, not after all fixes


Testing after each individual change instead of waiting until you've applied everything isolates the impact of each fix precisely. If you compress images, defer three scripts, and add preload tags all at once, then re-test, you can't tell which change moved the needle or whether one change accidentally counteracted another. Apply one fix at a time, re-test three times, average the results, and record the before-and-after numbers in a simple tracking sheet.


| Fix Applied | LCP Before | LCP After | CLS Before | CLS After | Score Before | Score After |
| --- | --- | --- | --- | --- | --- | --- |
| Preloaded hero image | 4.1s | 2.3s | 0.08 | 0.08 | 54 | 71 |
| Deferred analytics script | 2.3s | 2.1s | 0.08 | 0.08 | 71 | 78 |
| Set image dimensions | 2.1s | 2.0s | 0.22 | 0.06 | 78 | 85 |


Tracking changes this way gives you a clear record of what worked, which matters when a future update breaks something and you need to trace the cause back quickly.


Validate with real user data in Search Console


Your PSI lab score reflects a single simulated load, not thousands of real visitors. Open Google Search Console and navigate to the Core Web Vitals report under the Experience section. This report pulls from the same CrUX field data your PSI report uses and shows you which URLs pass or fail across your entire site, not just the one URL you tested manually.


Field data in Search Console reflects a rolling 28-day collection window, so give your fixes at least a month before expecting the report to fully reflect your improvements.

Check the "Poor URLs" list and confirm each fixed page has moved from poor to needs improvement or from needs improvement to good. URLs that remain in the poor category after a fix usually have a secondary issue you haven't addressed yet.


Set a recurring check schedule to stay fast


Performance degrades over time as you add plugins, update themes, and install new third-party scripts. Scheduling a monthly PSI check on your top five pages catches regressions before they compound into a significant ranking problem. Pair that with a quarterly audit of your third-party scripts to remove any tags that no longer serve a clear purpose.


  • Monthly: Run PSI on your homepage and top service pages; check Search Console Core Web Vitals report

  • Quarterly: Audit all third-party scripts, review image library for uncompressed additions, check cache headers on new assets

  • After every site update: Re-test any page affected by the change before considering it complete


Next steps to keep your site fast


You now have a clear, repeatable process for using Google PageSpeed Insights to find problems, prioritize fixes, and validate results with real user data. The work doesn't stop after your first round of improvements. Core Web Vitals scores drift as you add new content, update plugins, and bring on third-party tools, so treating performance as a one-time project will cost you the gains you just made.


Start with the highest-priority fix your report flagged today. Run three tests, record the before-and-after numbers, then move to the next item on your list. Build your monthly check-in into your calendar so regressions don't quietly compound over time.


If you'd rather hand this off to a team that handles it daily, get in touch with Wilco Web Services to talk through what your site needs. Faster pages convert better, and that directly affects how many leads walk through your door.

 
 
 
