Engineering
Performance budgets that stick
How we keep Lighthouse scores above 95 in production by treating performance as a build-time gate instead of a quarterly cleanup project.
Performance budgets fail for the same reason most engineering policies fail: they live in a wiki page nobody reads. Ours work because they’re enforced by the build, not by good intentions.
The numbers we hold ourselves to
Every page on every site we ship has to clear the same bar before it can merge to main:
- Lighthouse Performance at or above 95.
- Lighthouse Accessibility at exactly 100.
- Lighthouse SEO at exactly 100.
- JavaScript shipped per page under 50 KB, gzipped.
- Largest Contentful Paint under 2.0 seconds on a throttled mid-tier device.
These thresholds aren’t aspirational. They run on every pull request via Lighthouse CI, and a regression on any one of them turns the merge button red.
Why static-first is the easy answer
Astro’s static-first rendering does most of the heavy lifting for us. By default our pages ship zero JavaScript. When we need interactivity — a search box, a contact form, a CTA reveal — we opt into hydration island by island, with explicit `client:` directives that show up in code review.
The fastest JavaScript is the JavaScript you never sent.
The result is that a typical post page on this site weighs less than the HTML alone of most CMS-generated pages. We didn’t earn that with clever optimization; we earned it by saying no to almost every component that wanted to be interactive.
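The island pattern described above can be sketched in a hypothetical Astro page (the component name and the choice of directive are illustrative, not our actual code):

```astro
---
// Everything in this frontmatter runs at build time, not in the browser.
import SearchBox from '../components/SearchBox.jsx';
---
<article>
  <h1>Static content ships as plain HTML with zero JavaScript.</h1>
  <p>No framework runtime, no hydration cost.</p>

  <!-- The only JS on the page: this island hydrates when it scrolls into view. -->
  <SearchBox client:visible />
</article>
```

Because the `client:visible` directive is the only way the component gets JavaScript, every new island is a visible, reviewable line in the diff.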
The boring discipline
Below the framework choice, three habits keep the budget intact:
- Self-host every asset. Fonts, scripts, analytics — if it’s served from someone else’s origin, it’s a third-party request we don’t control. The Content Security Policy we ship enforces this; nothing external loads without an explicit allowlist entry and a written justification.
- Image optimization on by default. Astro’s image pipeline handles format selection, responsive sizing, and lazy loading. The only knob authors touch is the source image and its alt text.
- Audit on every PR. Lighthouse CI is unskippable. So is Linkinator. So is the bundle-size check. The cost of a regression is finding out within ten minutes, not three months later when a customer complains.
What we’d do differently
If we were starting over, we’d add real-user monitoring (RUM) on day one instead of deferring it. Synthetic Lighthouse runs catch most regressions, but a small share of users hit network conditions or device profiles that Lighthouse never simulates, and those are the regressions that hurt revenue.
We’re rolling a self-hosted RUM tool into the stack next quarter. When it ships, we’ll publish the dashboard.
- #performance
- #lighthouse
- #core-web-vitals