How I Made My Next.js Portfolio Actually Production-Ready (For $0)
I'll be honest. For a long time, my portfolio was held together with hope.

The animations were smooth. The Lighthouse scores were green. It looked great in a demo. But if you pulled back the curtain, the reality was embarrassing: no CI pipeline, no automated tests, no rate limiting on the API routes, and nothing stopping a broken import from nuking the entire live site.
I'd push to main, refresh the browser, and pray.
That's not how real production software works. At any serious company, code goes through layers of automated checks before it reaches users. Bad formatting gets rejected at commit time. Builds are verified in isolated environments. API endpoints are protected from abuse. These aren't luxuries. They're table stakes.
So I decided to build all of it into my portfolio. Not to over-engineer a personal site, but to actually understand, hands-on, what production infrastructure feels like when you're the one setting it up.
The surprising part? Every tool I used is open-source, and I didn't spend a single dollar.
Here's the full breakdown.
What I Was Actually Solving For
Before I started, I wrote down the specific failure modes I wanted to eliminate:
```mermaid
graph TD
    A["My Portfolio Before"] --> B["No CI Pipeline"]
    A --> C["No Automated Tests"]
    A --> D["Open API Routes"]
    A --> E["Analytics Blocking UI"]
    B --> F["Broken commits reach production"]
    C --> G["Silent regressions on 40+ routes"]
    D --> H["Bots can drain Firebase quota"]
    E --> I["Animations stutter on cheap phones"]
    style A fill:#991b1b,stroke:#fca5a5,color:#fca5a5
    style F fill:#7f1d1d,stroke:#f87171,color:#fca5a5
    style G fill:#7f1d1d,stroke:#f87171,color:#fca5a5
    style H fill:#7f1d1d,stroke:#f87171,color:#fca5a5
    style I fill:#7f1d1d,stroke:#f87171,color:#fca5a5
```

Each of these is a real problem. Not theoretical. The Firebase one especially. A basic `while true; do curl` loop pointed at my `/api/spotify` route could burn through my entire daily quota in under 10 minutes.
I needed five layers of defense, each catching a different class of failure.
Layer 1: Kill Bad Code at the Keyboard (Husky + Prettier)
The cheapest bug to fix is the one that never makes it into your Git history. That's what "shift left" means in practice: move your quality gates as early in the pipeline as possible.
I set up three tools that chain together:
```mermaid
sequenceDiagram
    participant Dev as Developer
    participant Git as git commit
    participant Husky as Husky Hook
    participant LS as lint-staged
    participant P as Prettier + ESLint
    Dev->>Git: git commit -m "fix header"
    Git->>Husky: Pre-commit hook triggered
    Husky->>LS: Identify staged files only
    LS->>P: Format & lint staged files
    alt Code passes
        P-->>Git: Allow commit
    else Code fails
        P-->>Dev: Reject commit with errors
    end
```

Husky intercepts `git commit` before Git records anything. lint-staged identifies only the files you've changed (no point scanning your entire codebase every time). Prettier reformats those files to a strict, consistent style.
If there's a syntax issue or ESLint violation, the commit gets rejected before it even exists in your local history.
Setup
```bash
pnpm add -D husky prettier lint-staged
npx husky init
```

The pre-commit hook (`.husky/pre-commit`):

```sh
#!/bin/sh
pnpm exec lint-staged
```

The lint-staged config (`.lintstagedrc.js`):

```js
module.exports = {
  "*.{js,jsx,ts,tsx,json,css,md}": ["prettier --write"],
};
```

Three files. That's it. Every developer who touches the repo now has automatic formatting enforced at the Git level. No IDE plugins required, no "please remember to run prettier" messages in Slack.
The First Run Is Painful (And That's the Point)
When I first ran Prettier across my codebase, it flagged 326 files with inconsistencies. Tabs vs. spaces, trailing commas, quote style. Years of accumulated drift. I ran `pnpm prettier --write` once, committed the mass-format, and never thought about it again.
That single commit was probably the highest-impact code quality improvement I've ever made.
Layer 2: Verify Every Push in Isolation (GitHub Actions)
Pre-commit hooks run on your laptop. But what if someone clones the repo without Husky installed? What if they force-push? You need a second gate that runs on neutral ground.
```mermaid
graph LR
    A["git push"] --> B["GitHub Actions Triggered"]
    B --> C["Install Dependencies"]
    C --> D["Run ESLint"]
    D --> E["Check Prettier"]
    E --> F["Full Production Build"]
    F -->|Pass| G["✅ Green Checkmark"]
    F -->|Fail| H["❌ PR Blocked"]
    style G fill:#166534,stroke:#4ade80,color:#bbf7d0
    style H fill:#991b1b,stroke:#f87171,color:#fca5a5
```

Every push to `main` and every pull request triggers this pipeline. It runs a clean `pnpm install` (the pnpm cache in the workflow only speeds up package downloads, it doesn't skip installation), lints the code, checks formatting in read-only mode, and runs a full production build.
```yaml
name: CI
on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]
jobs:
  build-and-lint:
    runs-on: ubuntu-latest
    env:
      FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v3
        with:
          version: 10
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "pnpm"
      - run: pnpm install
      - run: pnpm lint
      - run: pnpm prettier --check "**/*.{js,jsx,ts,tsx,json,css,md}"
      - run: pnpm build
```

If any step fails, the commit gets a red X and merging is blocked.
Quick Note: The Node.js Version Confusion
You might notice `FORCE_JAVASCRIPT_ACTIONS_TO_NODE24` alongside `node-version: "20"`. These are two completely different things.
GitHub Actions uses Node.js internally to run its own action scripts (actions/checkout, etc.). That internal runtime is migrating to Node 24. The env flag tells GitHub's scaffolding to use the newer version, which suppresses deprecation warnings.
Your actual application still builds on Node 20. They're separate execution contexts running on the same machine.
Layer 3: Catch What Linting Can't (Vitest + Playwright)
Linting catches syntax problems. The build step catches type errors. But neither catches behavioral regressions, the kind where a change to a shared utility function silently breaks three pages you haven't opened in months.
I set up two layers of testing:
```mermaid
graph TD
    A["Testing Strategy"] --> B["Unit Tests - Vitest"]
    A --> C["E2E Tests - Playwright"]
    B --> D["Component isolation"]
    B --> E["Utility function logic"]
    B --> F["Mocked JSDOM environment"]
    B --> G["Runs in milliseconds"]
    C --> H["Real browser engines"]
    C --> I["Full navigation flows"]
    C --> J["Network request assertions"]
    C --> K["Multi-browser: Chromium + WebKit"]
    style B fill:#1e3a5f,stroke:#60a5fa,color:#bfdbfe
    style C fill:#3b1f5e,stroke:#a78bfa,color:#ddd6fe
```

Vitest handles unit tests: isolated component rendering, utility function logic, expected return values. It uses JSDOM to simulate a browser environment without actually opening one, so tests complete in milliseconds.
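To make the unit layer concrete, here's the shape of test I mean. The utility (`truncateTitle`) is a made-up example, not something from my repo, and I've written the checks as plain assertions so the sketch is self-contained; in practice they'd live in a Vitest `*.test.ts` file using `expect()`.

```typescript
// Hypothetical utility: trim long track titles for a Spotify widget.
function truncateTitle(title: string, max = 24): string {
  if (title.length <= max) return title;
  // Cut one short of the budget, drop any trailing space, add an ellipsis.
  return title.slice(0, max - 1).trimEnd() + "…";
}

// The kind of behavior a unit spec pins down:
const short = truncateTitle("Short Song");
const long = truncateTitle("A Very Long Progressive Rock Suite, Pt. 2");

console.log(short); // unchanged: fits within the budget
console.log(long);  // truncated, ends with an ellipsis
```

The value isn't any single assertion. It's that a refactor of the shared utility instantly re-runs hundreds of checks like these.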
Playwright handles the scary stuff. It launches real Chromium and WebKit browsers, navigates your routes, clicks buttons, waits for API responses, and asserts that specific elements appeared on the page. If a hydration mismatch or CSS layout break happens, Playwright catches it.
```ts
// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  testDir: "./e2e",
  webServer: {
    command: "pnpm dev",
    port: 3000,
    reuseExistingServer: !process.env.CI,
  },
  projects: [
    { name: "chromium", use: { browserName: "chromium" } },
    { name: "webkit", use: { browserName: "webkit" } },
  ],
});
```

Together, these two layers mean I can refactor a shared component and know within seconds whether I've broken anything downstream.
Layer 4: Stop Bots at the Door (Upstash Edge Rate Limiting)
This is the part that I'm genuinely proud of. Not because the code is complicated. It's shockingly simple. But because the architecture is so clean.
The Problem
My portfolio makes real API calls. /api/spotify fetches my currently playing track. /api/gallery pulls photo metadata from Cloudinary. /api/push handles Firebase push notifications.
Each of these calls a downstream service with usage quotas. Firebase's free tier gives you 50,000 reads per day. A bot running curl in a loop at 100 requests/second would exhaust that quota in under 9 minutes.
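That "under 9 minutes" figure is just arithmetic:

```typescript
// Back-of-the-envelope: how fast a naive bot drains the free tier.
const dailyQuota = 50_000; // Firebase free-tier reads per day
const botRate = 100;       // bot requests per second

const secondsToDrain = dailyQuota / botRate; // 500 s
const minutesToDrain = secondsToDrain / 60;  // ≈ 8.3 min

console.log(minutesToDrain.toFixed(1)); // prints 8.3
```

And that's a single bot from a single IP. Anything distributed gets there faster.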
The Solution
Instead of adding rate limiting to each individual route (fragile, repetitive), I put it in the middleware layer. In Next.js, middleware runs at Vercel's Edge, geographically close to the user, before the request ever reaches your serverless function.
Here's how the request flow works at a high level:
```mermaid
sequenceDiagram
    participant User
    participant MW as Vercel Edge (Middleware)
    participant Redis as Upstash Redis
    participant API as Next.js API
    User->>MW: Request /api/data
    MW->>Redis: Check slidingWindow(IP)
    Redis-->>MW: Return status (Limit exceeded?)
    alt If Remaining > 0
        MW->>API: Pass traffic
        API-->>User: Return 200 OK
    else If Limit Exceeded
        MW-->>User: Return 429 Too Many Requests
    end
```

And here's the expanded version showing how the database stays protected:
```mermaid
sequenceDiagram
    participant User
    participant Edge as Vercel Edge
    participant Redis as Upstash Redis
    participant API as Next.js API Route
    participant DB as Firebase / Cloudinary
    User->>Edge: GET /api/spotify
    Edge->>Redis: Check IP rate (slidingWindow)
    Redis-->>Edge: 7/10 remaining
    alt Within Limit
        Edge->>API: Forward request
        API->>DB: Fetch data
        DB-->>API: Return data
        API-->>User: 200 OK + data
    else Limit Exceeded
        Edge-->>User: 429 Too Many Requests
        Note right of Edge: API never executes.<br/>DB quota untouched.
    end
```

The critical insight here: when the rate limit triggers, the API route never executes. The request gets bounced at the network edge. Your Firebase quota, your Cloudinary bandwidth, your serverless function invocations: none of them are touched.
The Code
```ts
// src/middleware.ts
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = process.env.UPSTASH_REDIS_REST_URL
  ? new Ratelimit({
      redis: Redis.fromEnv(),
      limiter: Ratelimit.slidingWindow(10, "10 s"),
      analytics: true,
    })
  : null;

export async function middleware(request: NextRequest) {
  const response = NextResponse.next();

  // Only rate-limit API routes
  if (request.nextUrl.pathname.startsWith("/api") && ratelimit) {
    const ip = request.ip ?? "127.0.0.1";
    const { success, limit, reset, remaining } = await ratelimit.limit(ip);

    // Professional rate limit headers (same pattern as GitHub/Stripe APIs)
    response.headers.set("X-RateLimit-Limit", limit.toString());
    response.headers.set("X-RateLimit-Remaining", remaining.toString());
    response.headers.set("X-RateLimit-Reset", reset.toString());

    if (!success) {
      return new NextResponse("Too many requests, slow down.", {
        status: 429,
      });
    }
  }

  return response;
}
```

`slidingWindow(10, "10 s")` means: 10 requests per 10-second rolling window, per IP. The "sliding" part matters. It's not a hard reset every 10 seconds. It continuously tracks request frequency, so you can't game it by timing your bursts.
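To build intuition for why a sliding window beats a fixed one, here's a toy in-memory version of the idea. Upstash's real implementation lives in Redis and is far more efficient; this timestamp-log variant exists only to show the behavior.

```typescript
// Toy sliding-window limiter: log request timestamps per key and count
// only those inside the trailing window. Unlike a fixed-window counter,
// there's no boundary where the count resets to zero, so you can't
// double up by timing a burst across the reset.
class SlidingWindow {
  private hits = new Map<string, number[]>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over budget inside the rolling window
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}

const rl = new SlidingWindow(10, 10_000); // 10 requests per rolling 10 s
let t = 0;
const burst = Array.from({ length: 12 }, () => rl.allow("1.2.3.4", (t += 100)));
console.log(burst.filter(Boolean).length); // 10 allowed, the last 2 rejected
```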
Why Upstash Specifically
Three reasons:
- Serverless-native. Upstash exposes Redis over a REST API (HTTPS), which means it works in Vercel Edge functions. Regular Redis uses TCP connections, which Edge functions don't support.
- Fast. Rate limit checks complete in under 5 milliseconds because Upstash replicates data globally.
- Free. 10,000 commands/day, 256 MB storage. For a rate limiter storing IP addresses (~15 bytes each), that's enough to track millions of concurrent visitors.
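The REST point is worth making concrete. With Upstash, a Redis command is just an HTTPS request: the command and its arguments become URL path segments, and auth is a bearer token. This helper sketches that shape (the hostname and token are placeholders, and a real call would also URL-encode the segments):

```typescript
// Sketch of "Redis over REST": INCR ratelimit:1.2.3.4 becomes a GET to
// /INCR/ratelimit:1.2.3.4 with a bearer token. Plain HTTPS is the whole
// trick -- it works from Edge runtimes that can't open TCP sockets.
function restCommand(baseUrl: string, token: string, ...parts: string[]) {
  return {
    url: `${baseUrl}/${parts.join("/")}`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

const req = restCommand(
  "https://example-123.upstash.io", // placeholder host
  "REDIS_TOKEN",                    // placeholder token
  "INCR",
  "ratelimit:1.2.3.4",
);
console.log(req.url); // https://example-123.upstash.io/INCR/ratelimit:1.2.3.4
```

The `@upstash/redis` SDK wraps this for you; you never build these URLs by hand.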
Graceful Degradation
Notice the conditional: `process.env.UPSTASH_REDIS_REST_URL ? new Ratelimit(...) : null`. If the environment variables aren't set (local development, for instance), the rate limiter just doesn't activate. The middleware still runs, still sets security headers, but skips rate limiting entirely.
No crashes. No errors. Features should degrade gracefully, not explode loudly.
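The same pattern, boiled down: treat the limiter as a nullable dependency and default to "allow" when it's absent. Everything here is a stand-in for the real Upstash client (and synchronous only to keep the sketch short; the real call is async):

```typescript
// Graceful degradation: a nullable dependency with a safe default.
type Limiter = { limit: (key: string) => { success: boolean } };

function makeLimiter(url: string | undefined): Limiter | null {
  // Mirrors the env check in middleware.ts: no URL, no limiter.
  // The stand-in limiter here always blocks, to make the contrast visible.
  return url ? { limit: () => ({ success: false }) } : null;
}

function isAllowed(limiter: Limiter | null, ip: string): boolean {
  if (!limiter) return true; // not configured (local dev): allow everything
  return limiter.limit(ip).success;
}

const local = isAllowed(makeLimiter(undefined), "127.0.0.1");            // true
const prod = isAllowed(makeLimiter("https://example.upstash.io"), "1.2.3.4"); // false
console.log(local, prod);
```

The point is the default: a missing config degrades to "no rate limiting", never to a crashed middleware.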
Layer 5: Lock Down the Browser (Content Security Policy)
While I was already inside middleware.ts, I hardened the Content Security Policy (CSP). This is probably the most underrated security mechanism in web development.
CSP tells the browser exactly which domains are allowed to load scripts, styles, images, and frames on your page. Without it, a cross-site scripting (XSS) attack could inject a <script> tag that exfiltrates user data. With CSP, the browser itself blocks anything that doesn't come from a whitelisted source.
```mermaid
graph LR
    A["Incoming Request"] --> B["Middleware"]
    B --> C["Inject CSP Header"]
    B --> D["Inject HSTS Header"]
    B --> E["Inject X-Frame-Options"]
    B --> F["Inject Referrer-Policy"]
    B --> G["Inject Permissions-Policy"]
    C --> H["Browser enforces:<br/>Only whitelisted domains<br/>can load scripts/styles"]
    E --> I["Prevents clickjacking<br/>via iframe embedding"]
    style B fill:#1e3a5f,stroke:#60a5fa,color:#bfdbfe
    style H fill:#166534,stroke:#4ade80,color:#bbf7d0
    style I fill:#166534,stroke:#4ade80,color:#bbf7d0
```

Here's a simplified version of my CSP:
```ts
const CSP_HEADER =
  "default-src 'self'; " +
  "script-src 'self' 'unsafe-inline' https://www.googletagmanager.com " +
  "https://challenges.cloudflare.com; " +
  "style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; " +
  "img-src 'self' data: https: blob:; " +
  "connect-src 'self' https://*.googleapis.com wss://*.firebaseio.com; " +
  "frame-src 'self' https://challenges.cloudflare.com; " +
  "frame-ancestors 'none';";
```

Every domain in that list is one I explicitly trust. If someone manages to inject a script pointing to sketchy-domain.com, the browser refuses to load it, let alone execute it.
The frame-ancestors 'none' directive is particularly important. It prevents anyone from embedding your site inside an iframe, which is the primary vector for clickjacking attacks.
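Debugging a CSP is easier when you treat it as data rather than one long string. This little parser (an illustration, not a library) splits the header into its directives so you can inspect exactly what's whitelisted:

```typescript
// Split a CSP header string into directive -> source list.
// Directives are ';'-separated; within each, tokens are whitespace-separated.
function parseCsp(header: string): Map<string, string[]> {
  const directives = new Map<string, string[]>();
  for (const part of header.split(";")) {
    const [name, ...sources] = part.trim().split(/\s+/);
    if (name) directives.set(name, sources);
  }
  return directives;
}

const csp = parseCsp(
  "default-src 'self'; frame-ancestors 'none'; img-src 'self' data: https:",
);
console.log(csp.get("frame-ancestors")); // [ "'none'" ]
console.log(csp.get("img-src"));         // three allowed source types
```

The same trick works in reverse: building the header from a map keeps you from fusing two directives with a missing semicolon.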
The Complete Pipeline
Here's what happens now when I write code and push it to production:
```mermaid
graph TD
    A["Write Code"] --> B["git commit"]
    B --> C["Husky Intercepts"]
    C --> D["lint-staged + Prettier"]
    D -->|Fail| E["❌ Commit Rejected"]
    D -->|Pass| F["Commit Recorded"]
    F --> G["git push"]
    G --> H["GitHub Actions CI"]
    H --> I["Lint → Format Check → Build"]
    I -->|Fail| J["❌ PR Blocked"]
    I -->|Pass| K["✅ Merge Allowed"]
    K --> L["Vercel Auto-Deploy"]
    L --> M["Production Live"]
    M --> N["Every Request Hits Middleware"]
    N --> O["CSP Headers Injected"]
    N --> P["API Rate Limiting Active"]
    style E fill:#991b1b,stroke:#f87171,color:#fca5a5
    style J fill:#991b1b,stroke:#f87171,color:#fca5a5
    style K fill:#166534,stroke:#4ade80,color:#bbf7d0
    style M fill:#166534,stroke:#4ade80,color:#bbf7d0
```

Five layers. Each one catches a different class of failure. And every single tool in this stack is free.
What I Actually Learned
Building this infrastructure taught me something that frontend development alone hadn't: the most important code in a production system is the code that prevents other code from breaking things.
Prettier doesn't ship features. GitHub Actions doesn't improve animations. Rate limiting doesn't make your page load faster. But together, they create a system where you can ship with confidence, iterate without fear, and go to sleep knowing a bot somewhere isn't quietly draining your database quota.
If you're building a portfolio or side project and you want it to feel professionally built, not just professionally designed, start here. Not with the UI. Start with the infrastructure that makes everything else reliable.
FAQ
Is this overkill for a portfolio site?
Technically? Sure. But the setup took an afternoon, and the ongoing maintenance cost is zero. More importantly, it's a working demonstration that you understand production infrastructure, not just React components. Hiring managers notice that.
How much does all of this cost?
Nothing. GitHub Actions is free for public repos. Upstash's free tier covers 10,000 commands/day. Husky, Prettier, Vitest, and Playwright are all open-source. The only investment is a few hours of your time.
Will the rate limiter accidentally block real users?
No. The limit is 10 API requests per 10 seconds per IP. Normal browsing generates 2-3 API calls per page load. You'd have to deliberately hammer the refresh button for 10 straight seconds to trigger it.
Can I use this setup outside Next.js?
The CI/CD layer (Husky, Prettier, GitHub Actions) works with any JavaScript framework. The Edge rate limiting is specific to Next.js middleware, but Upstash provides adapters for Cloudflare Workers, Deno, and standard Node.js servers.
Does the rate limiter work locally?
By design, no. The middleware checks for UPSTASH_REDIS_REST_URL, which only exists in production (Vercel). Locally, limiting is silently skipped. Your dev workflow stays completely unaffected.
What about error monitoring?
I'm currently evaluating Highlight.io and PostHog as open-source observability platforms. Both offer generous free tiers and can be self-hosted. That's a topic for a future post.
