Most developers think you have to choose between a rich, animated UI and a perfect Lighthouse score. You don’t. Here is the engineering roadmap I used to optimize a heavy Astro + Svelte portfolio from a sluggish score in the low-to-mid 70s to a perfect 100, slicing main thread work by 48%.
TL;DR: The Numbers Don’t Lie
I took a portfolio site loaded with Lenis smooth scrolling, custom pointers, and Sanity CMS integrations and put it on a diet. I didn’t remove a single feature; I just changed how they loaded.
| Metric | Before optimization | After optimization | Impact |
|---|---|---|---|
| Lighthouse Score | ~75 (Yellow) | 100 (Green) | 🟢 Perfect |
| Main Thread Work | ~2.9s | ~1.5s | 🔻 48% Drop |
| Script Evaluation | 1019ms | ~550ms | 🔻 46% Drop |
| Total Blocking Time | >500ms | <200ms | ⚡ Non-blocking |
The Stack: Astro (Static), Svelte Islands, Sanity CMS, Lenis Scroll, Vercel.
The “Fast Framework” Trap
We’ve all been there. You choose Astro because you want that “Islands Architecture” speed. You choose Svelte because the bundle sizes are tiny. You deploy your site, expecting instant greens, and Lighthouse slaps you with a 72 Performance score.
That was my starting point.
My portfolio looked fast, but the metrics told a different story: 2.9 seconds of main thread work. The culprit wasn’t the framework—it was how I was managing hydration and third-party scripts.
Here is the exact engineering journey to fix it.
1. The Silent Bundle Killer: Barrel Imports
The first bottleneck was hidden in plain sight: Icon imports. I was using lucide-svelte via barrel exports.
The Mistake:
// ❌ This pulls the ENTIRE library into the bundle
import { Github, Linkedin, Mail } from "lucide-svelte";
Even with tree-shaking, a barrel file forces the bundler to resolve and parse every module it re-exports, and in practice far more code than you actually use can end up in the bundle.
The Fix: I switched to direct file imports. This reduced script parsing time by ~200ms instantly.
// ✅ This imports ONLY what you use
import Github from "lucide-svelte/icons/github";
import Linkedin from "lucide-svelte/icons/linkedin";
2. Hydration Discipline: Triage Your Components
Astro allows partial hydration, but I was abusing it. I had client:load on almost everything to ensure animations fired immediately. This meant the browser had to hydrate heavy interactive components before it could even think about painting the page.
I moved to a “Triage Strategy”:
- Above the Fold: client:idle. The Hero and Nav don’t need to be interactive the nanosecond the HTML parses. Waiting for the browser idle state saves critical milliseconds.
- Below the Fold: client:visible. Why hydrate the “Contact” form if the user hasn’t scrolled down yet?
<!-- ✅ Smart Hydration Strategy -->
<Navigation client:idle />
<Hero client:idle />
<Projects client:visible /> <!-- Only hydrates when scrolled to -->
<Contact client:visible />
Impact: This cut huge chunks out of the initial Total Blocking Time (TBT).
3. Taming the Smooth Scroll (Lenis)
Smooth scrolling is notoriously bad for performance metrics. It usually initializes immediately, fighting the browser for main thread resources during the critical initial render.
I realized scroll smoothing is a progressive enhancement, not a critical feature.
The Fix:
I deferred Lenis initialization using requestIdleCallback and added logic to adapt based on the user’s device power.
// src/lib/lenis.ts
export function initLenis() {
  // 1. Accessibility Check
  if (window.matchMedia("(prefers-reduced-motion: reduce)").matches) return;

  const startLenis = () => {
    import("lenis").then(({ default: Lenis }) => {
      // 2. Lightweight config for low-end devices
      const config = getDevicePerformance().isLowEnd
        ? lowSpecConfig
        : highSpecConfig;
      new Lenis(config);
    });
  };

  // 3. Defer until the Main Thread is chill
  if ('requestIdleCallback' in window) {
    requestIdleCallback(startLenis, { timeout: 2000 });
  } else {
    setTimeout(startLenis, 1000);
  }
}
Note: The UI paints instantly using native scroll. A second later, the “luxury” smooth scroll kicks in imperceptibly.
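The getDevicePerformance() helper and the two configs referenced above aren’t shown in the snippet. Here is a minimal sketch of the shape they take, assuming a simple hardwareConcurrency / deviceMemory heuristic and standard Lenis options (lerp, smoothWheel); treat the thresholds and values as placeholders rather than production code.

```ts
// src/lib/device.ts (illustrative sketch, not the project's actual helper)
export function getDevicePerformance() {
  // hardwareConcurrency is widely supported; deviceMemory is Chromium-only,
  // so missing values fall back to optimistic defaults instead of
  // penalizing non-Chromium browsers.
  const cores = navigator.hardwareConcurrency ?? 4;
  const memory = (navigator as any).deviceMemory ?? 8; // GB, Chromium-only
  return { isLowEnd: cores <= 4 || memory <= 4 };
}

// Full smoothing where the device can afford it...
export const highSpecConfig = {
  lerp: 0.1, // lower lerp = smoother, longer-running interpolation
  smoothWheel: true,
};

// ...and a snappier, cheaper feel on low-end hardware.
export const lowSpecConfig = {
  lerp: 0.25,
  smoothWheel: true,
};
```

One detail the snippet glosses over: unless your Lenis version drives its own loop via the autoRaf option, you still need to call lenis.raf(time) from a requestAnimationFrame loop for the smoothing to run.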
4. Advanced Code Splitting & Header Caching
Finally, I stopped letting Vite bundle everything into one massive generic chunk. I configured rollupOptions to split critical CSS, Svelte vendor files, and my custom code into separate, cacheable chunks.
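The split itself lives in the Vite section of the Astro config. A minimal sketch of the idea, assuming rollupOptions.output with manualChunks and a chunks/ file-name prefix (the chunk names and matching rules here are illustrative, chosen to line up with the cache rule below, not the project’s exact configuration):

```ts
// astro.config.ts (illustrative sketch)
import { defineConfig } from "astro/config";
import svelte from "@astrojs/svelte";

export default defineConfig({
  integrations: [svelte()],
  vite: {
    build: {
      rollupOptions: {
        output: {
          // Emit hashed chunks under /chunks/ so one Vercel header rule
          // can cover all of them.
          chunkFileNames: "chunks/[name].[hash].js",
          manualChunks(id) {
            // Keep the framework runtime in its own long-lived chunk...
            if (id.includes("node_modules/svelte")) return "svelte-vendor";
            // ...and isolate heavier optional dependencies like Lenis.
            if (id.includes("node_modules/lenis")) return "lenis";
            // Everything else falls back to Rollup's default splitting.
          },
        },
      },
    },
  },
});
```

Because every chunk gets a content hash in its file name, the files are safe to cache forever, which is exactly what the Vercel headers below rely on.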
I paired this with aggressive Vercel header configuration:
{
  "source": "/chunks/(.*)",
  "headers": [
    { "key": "Cache-Control", "value": "public, max-age=31536000, immutable" }
  ]
}
The Result: 100/100
After these changes, I re-ran the audit.
- Visual Regression: None. The site looks exactly the same. Animations are fluid.
- Performance: The main thread breathes easier. TTI (Time to Interactive) dropped from 4s to <2.5s.
Key Takeaways for Engineers
- Hydration is expensive. Don’t buy it if you don’t need it yet (client:visible).
- Smooth scroll is a luxury. Load it lazily.
- Barrel files are traps. Import directly.
- requestIdleCallback is your best friend. Use it for anything that isn’t critical to the First Contentful Paint (FCP).
🚀 Need to squeeze performance out of complex architecture?
AI tools can write code, but they struggle with the nuances of the browser’s main thread, complex hydration waterfalls, and device-specific bottlenecks.
If you have a high-stakes application suffering from render blocking, poor Core Web Vitals, or memory leaks that useEffect couldn’t fix, let’s talk.
👉 Hire me for Advanced Performance Engineering & Debugging