Lighthouse and the Ops Mindset

I added search to my personal blog last week. Pagefind, a static search library that indexes your site at build time. Cmd+K opens an overlay, you type, results appear. No server, no API, no third party. The kind of thing I love: simple, self-contained, fast.

Then I ran Lighthouse.

The Score

I’d added the Netlify Lighthouse plugin to my build pipeline. Eight lines of config.

netlify.toml:

```toml
[[plugins]]
package = "@netlify/plugin-lighthouse"

[plugins.inputs.thresholds]
performance = 0.9
accessibility = 0.9
best-practices = 0.9
seo = 0.9
```

Performance was below 90. The build didn’t fail because Netlify’s plugin is advisory, but the numbers bothered me. I’m an ops person. If there’s a metric and it’s bad, I want to fix it.

I don’t write JavaScript for a living. I know enough to wire things together, read a stack trace, and screw things up in interesting ways. What I do know is systems. I know what render-blocking means. I know what lazy loading is. I know that prefetching exists as a strategy. I don’t always know the specific API or the right attribute name, but I know the concepts because they’re the same concepts from every other layer of the stack.

This is where working with Claude Code gets interesting for someone like me.

The Fixes

Lighthouse told me what was slow. I knew from experience what the strategies should be. Claude Code knew the implementation details. Here’s what we did in about an hour.

Pagefind was loading on every page. The CSS and JavaScript for search loaded whether or not anyone opened the search overlay. I knew this was wrong. Lazy load the assets, only fetch them when someone actually opens search.

baseof.html:

```javascript
var loaded = false; // set once pagefind-ui.js has finished loading

function loadPagefind(callback) {
  if (loaded) { callback(); return; }
  var css = document.createElement('link');
  css.rel = 'stylesheet';
  css.href = '/pagefind/pagefind-ui.css';
  document.head.appendChild(css);
  var js = document.createElement('script');
  js.src = '/pagefind/pagefind-ui.js';
  js.onload = function() { loaded = true; callback(); };
  document.head.appendChild(js);
}
```

First search open loads the assets. Every subsequent open is instant.
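The overlay trigger just has to call the loader first. A minimal sketch of the Cmd+K wiring, assuming an `openOverlay()` helper exists elsewhere in the template (that name is illustrative, not the actual code):

```javascript
// Hypothetical sketch: route Cmd+K (or Ctrl+K) through the lazy loader.
// loadPagefind and openOverlay are assumed to be defined by the template.
function isSearchShortcut(e) {
  return (e.metaKey || e.ctrlKey) && e.key.toLowerCase() === 'k';
}

if (typeof document !== 'undefined') {
  document.addEventListener('keydown', function (e) {
    if (!isSearchShortcut(e)) return;
    e.preventDefault();            // keep the browser's own Cmd+K binding out of the way
    loadPagefind(openOverlay);     // assets are fetched only on the first open
  });
}
```

Keeping the shortcut check in a small pure function makes the key logic trivial to test outside a browser.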

Google Fonts blocked first paint. The font stylesheet was render-blocking. The fix is a well-known trick: load it as media="print" and swap to media="all" on load.

head.html:

```html
<link href="https://fonts.googleapis.com/css2?family=..."
      rel="stylesheet" media="print"
      onload="this.media='all'">
<noscript>
  <link href="https://fonts.googleapis.com/css2?family=..."
        rel="stylesheet">
</noscript>
```

Images were JPG. Hugo’s image processing was outputting JPG. One-word change: webp. Smaller files, same quality.

single.html:

```go-html-template
{{ $processed := $img.Resize "1456x webp q85" }}
```

All thumbnails were eager-loaded. The blog list page loaded every post thumbnail immediately, even ones far below the fold. Added loading="lazy" to all of them, then Lighthouse came back and said the first thumbnail was above the fold and shouldn’t be lazy. Fair. Skip lazy on the first one.

list.html:

```go-html-template
{{ range $i, $_ := .Pages.ByDate.Reverse }}
  ...
  <img src="{{ $thumb.RelPermalink }}"
       alt="{{ $post.Title }}"
       {{ if $i }} loading="lazy"{{ end }}>
```

Click-through was slow because hero images are large. I love the art on this blog. Full-width hero images, high quality. They’re big files. The solution: on hover, prefetch both the post HTML and the full-size hero image. By the time someone clicks, the browser already has everything cached.

Hugo resolves the hero image URL at build time and puts it in a data attribute on each post card.
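A sketch of what that card markup could look like; `$post` and `$hero` are illustrative names, but the `data-prefetch` and `data-hero` attributes match what the prefetch script reads:

```go-html-template
{{/* Hypothetical sketch: each card exposes its URLs as data attributes.
     $post and $hero are illustrative variable names. */}}
<a class="post-card"
   href="{{ $post.RelPermalink }}"
   data-prefetch="{{ $post.RelPermalink }}"
   data-hero="{{ $hero.RelPermalink }}">
  ...
</a>
```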

index.html:

```javascript
var prefetched = {};
document.querySelectorAll('.post-card[data-prefetch]').forEach(function(card) {
  card.addEventListener('mouseenter', function() {
    var href = card.dataset.prefetch;
    var hero = card.dataset.hero;
    if (prefetched[href]) return;
    prefetched[href] = true;
    var link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = href;
    document.head.appendChild(link);
    if (hero) {
      var img = document.createElement('link');
      img.rel = 'prefetch';
      img.href = hero;
      img.as = 'image';
      document.head.appendChild(img);
    }
  });
});
```

Hover over a card with the network tab open. You’ll see the page and image fetched before you click.

Accessibility. Lighthouse flagged missing aria-labels on links that wrapped post summaries. Two lines of HTML.
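Something roughly like this, assuming each card wraps its summary in a single link (the class name and label text here are illustrative, not the actual template’s):

```html
<!-- Hypothetical sketch: aria-label gives the wrapping link an accessible name -->
<a class="post-card" href="{{ .RelPermalink }}"
   aria-label="Read post: {{ .Title }}">
```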

The Pattern

None of these fixes are complicated. Every one of them is a concept I already knew from ops work. Lazy loading is the same idea as lazy initialization in any system. Prefetching on hover is the same pattern as connection pooling or DNS prefetch. Async font loading is just “don’t block the critical path,” which is the first thing you learn in any performance work.

I didn’t know the specific HTML attributes or the JavaScript API for injecting link tags. That’s what the AI is good at. I said “I want to prefetch the hero image when someone hovers a post card” and got working code. The knowledge of what to do came from twenty years of thinking about systems. The knowledge of how to do it in this specific context came from the tool.

This is the version of AI-assisted development that I think actually works. Not “generate my app from a prompt.” More like “I know what the architecture should be, help me write the implementation in a language I use once a month.”

The Numbers

All of this took about an hour. Performance went above 90. Accessibility hit 100. The blog loads fast, search doesn’t cost anything until you use it, and hovering a post card feels like the click-through is instant.

Then Best Practices Dropped

The next build ran Lighthouse again. Performance was green. Best practices was not. No Content Security Policy. The score took a hit.

CSP is a security header that tells the browser what’s allowed to run on your page. Scripts, styles, fonts, where they can come from. If something isn’t on the list, the browser blocks it. It’s your last line of defense against cross-site scripting. I’ve configured CSP headers on production systems for years. I know the concept cold.

The problem was my own code. Every one of those inline scripts I’d just written, the search overlay, the prefetch, the blur effect, the font loader trick, they all violated the strict policy I wanted to set. script-src 'self' means no inline JavaScript. Period. You can weaken it with 'unsafe-inline', but that defeats the purpose. It’s like putting a lock on the door and leaving the window open.

So I had to extract every inline script into external files. Five of them: search, prefetch, hero blur, image pool, font loading. The blur effect was duplicated across three templates. The prefetch logic was copy-pasted in two. Extracting them actually cleaned things up.

The font loading was the interesting one. The original onload="this.media='all'" trick is an inline event handler, which CSP also blocks. The replacement uses a data attribute and an external script that listens for the load event.

head.html:

```html
<link href="https://fonts.googleapis.com/css2?family=..."
      rel="stylesheet" media="print"
      data-font-lazy>
<script src="/js/font-loader.js"></script>
```

Same result. No inline code. CSP stays strict.
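The external script itself isn’t shown above. A minimal sketch of what /js/font-loader.js could contain; the `data-font-lazy` selector matches the template, everything else is an assumption:

```javascript
// Hypothetical sketch of /js/font-loader.js.
// Swapping media from 'print' to 'all' applies the stylesheet
// without having blocked the initial render.
function activateStylesheet(link) {
  link.media = 'all';
}

if (typeof document !== 'undefined') {
  document.querySelectorAll('link[data-font-lazy]').forEach(function (link) {
    if (link.sheet) {
      activateStylesheet(link);          // already loaded (e.g. from cache)
    } else {
      link.addEventListener('load', function () {
        activateStylesheet(link);
      });
    }
  });
}
```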

The final header in netlify.toml:

netlify.toml:

```toml
[headers.values]
Content-Security-Policy = """
default-src 'self';
script-src 'self';
style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
font-src 'self' https://fonts.gstatic.com;
img-src 'self' data:;
connect-src 'self';
frame-ancestors 'none';
base-uri 'self';
form-action 'self'"""
```

'unsafe-inline' only appears in style-src because Google Fonts injects inline styles. Everything else is locked to 'self'. Scripts can only run from my own domain.

Same pattern as before. I knew what CSP was and why it mattered. I knew 'unsafe-inline' was a cop-out. I didn’t know the fastest way to extract five inline scripts from Hugo templates and wire them up as external files. That took about twenty minutes with Claude Code.

The Takeaway

You don’t need to be a frontend developer to fix frontend performance. You need to know what questions to ask. Lighthouse gives you the questions. Experience gives you the strategy. The AI gives you the syntax. That division of labor got me from a failing score to green in about an hour, and kept it green when I tightened the security policy right after.