

Kiro IDE: Building a Production API With Spec-Driven AI (Hands-On Tutorial)

By Rantideb Howlader • 35 min read

What We Are Building

In this tutorial, I am going to build a reading analytics API for a Next.js blog. The API tracks how far visitors scroll through each article and stores anonymous scroll depth events.

I am going to build the entire thing in Kiro — from setup to shipping — so you can see every step of the spec-driven workflow in action.

Here is what we will do:

  1. Install Kiro and set up the project
  2. Create steering files so the AI knows our conventions
  3. Use a spec to plan the feature before writing code
  4. Set up hooks for auto-testing and security scanning
  5. Wire an MCP server to create GitHub issues from the IDE
  6. Fix a bug using Kiro's bugfix spec workflow
  7. Ship

If you want to follow along, you need a Next.js project using the App Router (stable since Next.js 13.4). If you do not have one, npx create-next-app@latest works fine.


Step 1: Install Kiro and Open Your Project

Download Kiro from kiro.dev/downloads. It is a standalone desktop app for macOS, Windows, and Linux. No AWS account needed.

Alternatively, install the CLI:

curl -fsSL https://cli.kiro.dev/install | bash

Sign in with GitHub, Google, or AWS Builder ID. Open your project folder.

First thing you will notice: Kiro looks exactly like VS Code. It runs on Code OSS, so your themes, keybindings, and extensions from the Open VSX registry carry over. But the left sidebar has a new Kiro panel with three sections: Specs, Agent Hooks, and Steering.


Step 2: Create Steering Files

Before we build anything, we need to teach Kiro how our project works. This is what steering files do — they give the AI persistent knowledge about your conventions.

In the Kiro panel, click Steering → Generate Steering Docs. Kiro scans your project and creates three files:

.kiro/
└── steering/
    ├── product.md
    ├── tech.md
    └── structure.md

Open tech.md. It should look something like this (Kiro infers it from your package.json, tsconfig.json, and existing code):

---
inclusion: always
---
 
# Technology Stack
 
- Framework: Next.js 14 (App Router)
- Language: TypeScript (strict mode)
- Styling: Vanilla CSS with CSS Modules
- Database: Supabase (PostgreSQL)

The inclusion: always frontmatter means this file is loaded into every AI interaction. That is the default.

Now let's create a custom steering file for our API conventions. Kiro panel → Steering → click + → select Workspace → name it api-standards.md:

---
inclusion: fileMatch
fileMatchPattern: "src/app/api/**/*"
---
 
# API Standards
 
## Validation
- All API routes MUST validate input using Zod schemas
- Error responses MUST follow this shape:
  { success: false, error: string }
- Success responses MUST follow this shape:
  { success: true, data?: any }
 
## HTTP Status Codes
- 400 for validation errors
- 401 for authentication failures
- 404 for missing resources
- 500 only for unhandled exceptions
 
## Logging
- Log validation failures at WARN level
- Log server errors at ERROR level with stack trace
- Never log request bodies containing PII
 
## Patterns
Follow the existing pattern in:
#[[file:src/app/api/polls/route.ts]]

Two things to notice:

  1. inclusion: fileMatch — This steering file only loads when I am editing files in src/app/api/. It does not pollute context when I am working on React components.
  2. #[[file:...]] — This links to a live file. When Kiro reads this steering doc, it also reads route.ts for context. If the code changes, the context stays current.

Kiro supports four inclusion modes:

| Mode | Frontmatter | When it loads |
| --- | --- | --- |
| Always | `inclusion: always` | Every interaction (default) |
| File match | `inclusion: fileMatch` + `fileMatchPattern: "glob"` | When you edit matching files |
| Manual | `inclusion: manual` | When you type `#filename` in chat |
| Auto | `inclusion: auto` + `name` + `description` | When your prompt matches the description |

Save the file. It is immediately active.
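To make the manual mode concrete, here is a hypothetical steering doc (the filename and contents are my own illustration, not Kiro output). It stays out of context entirely until you type #deploy-checklist in the chat:

```markdown
---
inclusion: manual
---

# Deploy Checklist

- Run the full test suite before tagging a release
- Verify database migrations are reversible
- Confirm environment variables are set in the hosting dashboard
```

Manual mode is a good fit for docs you need occasionally — runbooks, incident playbooks, release notes templates — that would waste context if loaded on every interaction.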


Step 3: Build the Feature With a Spec

Now let's build the reading analytics API. In the Kiro panel, click Specs → click +. Kiro asks: Feature or Bug?

Select Feature. Type this prompt:

Build a reading progress tracking API. When a visitor scrolls past 25%, 50%, 75%, or 100% of a blog post, the frontend should send a POST request to /api/analytics/reading with the article slug and scroll depth. The API should save anonymous scroll events to a database table. Include a progress bar component that shows scroll position at the top of the viewport.

Kiro does not start writing code. Instead, it generates three files in .kiro/specs/reading-progress/:

File 1: requirements.md

Kiro converts your prompt into structured requirements using EARS syntax (Easy Approach to Requirements Syntax):

# Reading Progress — Requirements
 
## User Story 1: Scroll Progress Indicator
As a blog reader, I want to see how far I have scrolled
through an article so I can gauge remaining content.
 
### Acceptance Criteria
- WHEN a visitor loads a blog post THE SYSTEM SHALL render
  a horizontal progress bar fixed to the top of the viewport.
- WHILE the visitor scrolls THE SYSTEM SHALL update the
  progress bar width proportionally (0% to 100%).
- The system shall not render the progress bar on non-article
  pages.
 
## User Story 2: Anonymous Scroll Analytics
As a site owner, I want anonymized scroll depth data to
identify which articles hold reader attention.
 
### Acceptance Criteria
- WHEN a visitor scrolls past 25%, 50%, 75%, or 100% THE
  SYSTEM SHALL send a POST to /api/analytics/reading.
- The system shall never store IP addresses, cookies, or
  personally identifiable information.
- IF a depth milestone has already been recorded for this
  session THE SYSTEM SHALL not record a duplicate event.

The EARS patterns look like this:

| Pattern | Syntax |
| --- | --- |
| Ubiquitous | The system shall [action]. |
| Event-driven | WHEN [trigger] THE SYSTEM SHALL [action]. |
| State-driven | WHILE [state] THE SYSTEM SHALL [action]. |
| Conditional | IF [condition] THE SYSTEM SHALL [action]. |
| Complex | WHEN [trigger] WHILE [state] IF [condition] THE SYSTEM SHALL [action]. |

Review the requirements. Edit anything that is wrong. When you are satisfied, click Approve to move to the next phase.

File 2: design.md

Kiro generates a technical design — not a verbose AI essay, but a working blueprint:

# Reading Progress — Design
 
## Component Architecture
- ReadingProgress (Client Component): renders fixed progress bar,
  subscribes to scroll events via useScrollDepth hook.
- useScrollDepth (Custom Hook): uses IntersectionObserver with
  sentinel elements at 25/50/75/100% of article body.
 
## API Contract
### POST /api/analytics/reading
Request (validated with Zod):
  - articleSlug: string (required, max 255 chars)
  - depth: z.enum(["25", "50", "75", "100"])
  - sessionId: string (UUID, generated client-side, no PII)
 
Response: { success: true } | { success: false, error: string }
 
## Database Schema
Table: reading_events
  - id: UUID (primary key, auto-generated)
  - article_slug: VARCHAR(255), indexed
  - depth: INTEGER
  - session_id: VARCHAR(36)
  - created_at: TIMESTAMPTZ (default: now())
 
## Testing Strategy
- Unit: useScrollDepth threshold detection
- Integration: POST /api/analytics/reading validation + insert
- E2E: progress bar renders and updates on scroll

Notice: the API response shape follows { success: true } — because the AI read our api-standards.md steering file.

Review and approve.

File 3: tasks.md

Kiro breaks the design into ordered, executable tasks:

# Implementation Tasks
 
- [ ] Task 1: Create reading_events table migration
      (Design: Database Schema)
- [ ] Task 2: Implement useScrollDepth hook with
      IntersectionObserver (Design: Component Architecture)
- [ ] Task 3: Build ReadingProgress client component
      (Requirement: User Story 1, AC 1-3)
- [ ] Task 4: Create POST /api/analytics/reading with Zod
      validation (Design: API Contract)
- [ ] Task 5: Wire component → hook → API integration
      (Requirement: User Story 2)
- [ ] Task 6: Write unit + integration tests
      (Design: Testing Strategy)
- [ ] Task 7: Add conditional rendering — blog pages only
      (Requirement: User Story 1, AC 3)

Each task references the requirement or design section it fulfills. This is requirement traceability — the same pattern used in regulated industries (healthcare, finance, government) to prove code meets its specification.

Now click Run All Tasks or run them one by one. Kiro executes each task, writes the code, and updates the task status in real time:

- [x] Task 1: Create reading_events table migration ✓
- [x] Task 2: Implement useScrollDepth hook ✓
- [x] Task 3: Build ReadingProgress component ✓
- [/] Task 4: Create POST /api/analytics/reading...

When it hits Task 4 (the API route), it reads the api-standards.md steering file automatically — because the file pattern matches src/app/api/**/*. The generated route uses Zod validation, returns { success: false, error: "..." } on validation failure, and logs at the correct level. All without you prompting it.

The Code Kiro Generated

Here is every file Kiro created from the spec. This is the actual output — not cleaned up, not hand-edited.

Task 1: Database migration (supabase/migrations/20260401_reading_events.sql)

CREATE TABLE reading_events (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  article_slug VARCHAR(255) NOT NULL,
  depth INTEGER NOT NULL CHECK (depth IN (25, 50, 75, 100)),
  session_id VARCHAR(36) NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
 
CREATE INDEX idx_reading_events_slug ON reading_events(article_slug);
CREATE INDEX idx_reading_events_session ON reading_events(session_id, article_slug, depth);
 
-- Prevent duplicate depth events per session per article
CREATE UNIQUE INDEX idx_reading_events_unique
  ON reading_events(session_id, article_slug, depth);

Notice the unique index on (session_id, article_slug, depth). This came directly from the acceptance criteria: "IF a depth milestone has already been recorded for this session THE SYSTEM SHALL not record a duplicate event." Kiro traced the requirement to a database constraint.

Task 2: Scroll depth hook (src/hooks/useScrollDepth.ts)

"use client";
 
import { useEffect, useRef, useCallback } from "react";
 
type DepthMilestone = 25 | 50 | 75 | 100;
 
interface UseScrollDepthOptions {
  articleSlug: string;
  onMilestone: (depth: DepthMilestone) => void;
}
 
export function useScrollDepth({
  articleSlug,
  onMilestone,
}: UseScrollDepthOptions) {
  const reachedRef = useRef<Set<DepthMilestone>>(new Set());
  const sentinelsRef = useRef<HTMLDivElement[]>([]);
 
  const handleIntersection = useCallback(
    (entries: IntersectionObserverEntry[]) => {
      entries.forEach((entry) => {
        if (!entry.isIntersecting) return;
 
        const depth = Number(
          entry.target.getAttribute("data-depth")
        ) as DepthMilestone;
 
        if (reachedRef.current.has(depth)) return;
 
        reachedRef.current.add(depth);
        onMilestone(depth);
      });
    },
    [onMilestone]
  );
 
  useEffect(() => {
    const article = document.querySelector("article");
    if (!article) return;
 
    const observer = new IntersectionObserver(handleIntersection, {
      threshold: 0,
    });
 
    const milestones: DepthMilestone[] = [25, 50, 75, 100];
 
    milestones.forEach((depth) => {
      const sentinel = document.createElement("div");
      sentinel.setAttribute("data-depth", String(depth));
      sentinel.style.position = "absolute";
      sentinel.style.height = "1px";
      sentinel.style.width = "1px";
      sentinel.style.top = `${depth}%`;
      sentinel.style.pointerEvents = "none";
      article.style.position = "relative";
      article.appendChild(sentinel);
      sentinelsRef.current.push(sentinel);
      observer.observe(sentinel);
    });
 
    return () => {
      observer.disconnect();
      sentinelsRef.current.forEach((s) => s.remove());
      sentinelsRef.current = [];
    };
  }, [articleSlug, handleIntersection]);
 
  return {
    progress: reachedRef.current,
  };
}

IntersectionObserver instead of scroll events — because the design doc specified it. Sentinel elements at 25/50/75/100% of the article body. Deduplication via a Set. No PII anywhere.

Task 3: Progress bar component (src/components/ReadingProgress.tsx)

"use client";
 
import { useState, useCallback } from "react";
import { useScrollDepth } from "@/hooks/useScrollDepth";
import styles from "./ReadingProgress.module.css";
 
interface ReadingProgressProps {
  articleSlug: string;
}
 
export function ReadingProgress({ articleSlug }: ReadingProgressProps) {
  const [scrollPercent, setScrollPercent] = useState(0);
  const sessionId = useSessionId();
 
  const handleMilestone = useCallback(
    async (depth: 25 | 50 | 75 | 100) => {
      setScrollPercent(depth);
 
      try {
        await fetch("/api/analytics/reading", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            articleSlug,
            depth: String(depth),
            sessionId,
          }),
        });
      } catch {
        // Silent fail — analytics should never break UX
      }
    },
    [articleSlug, sessionId]
  );
 
  useScrollDepth({ articleSlug, onMilestone: handleMilestone });
 
  return (
    <div className={styles.container}>
      <div
        className={styles.bar}
        style={{ width: `${scrollPercent}%` }}
        role="progressbar"
        aria-valuenow={scrollPercent}
        aria-valuemin={0}
        aria-valuemax={100}
        aria-label="Reading progress"
      />
    </div>
  );
}
 
function useSessionId(): string {
  if (typeof window === "undefined") return "";
 
  let id = sessionStorage.getItem("reading-session-id");
  if (!id) {
    id = crypto.randomUUID();
    sessionStorage.setItem("reading-session-id", id);
  }
  return id;
}

Anonymous session ID from crypto.randomUUID() stored in sessionStorage — expires when the tab closes. No cookies, no fingerprinting, no PII. The requirement said "never store personally identifiable information" and the code reflects that.

Task 4: API route (src/app/api/analytics/reading/route.ts)

import { NextRequest, NextResponse } from "next/server";
import { z } from "zod";
import { supabase } from "@/lib/db/client";
 
const ReadingEventSchema = z.object({
  articleSlug: z.string().min(1).max(255),
  depth: z.enum(["25", "50", "75", "100"]),
  sessionId: z.string().uuid(),
});
 
export async function POST(request: NextRequest) {
  try {
    const body = await request.json();
    const result = ReadingEventSchema.safeParse(body);
 
    if (!result.success) {
      console.warn(
        "[analytics/reading] Validation failed:",
        result.error.flatten().fieldErrors
      );
      return NextResponse.json(
        { success: false, error: result.error.issues[0].message },
        { status: 400 }
      );
    }
 
    const { articleSlug, depth, sessionId } = result.data;
 
    const { error } = await supabase.from("reading_events").upsert(
      {
        article_slug: articleSlug,
        depth: parseInt(depth, 10),
        session_id: sessionId,
      },
      {
        onConflict: "session_id,article_slug,depth",
        ignoreDuplicates: true,
      }
    );
 
    if (error) {
      console.error("[analytics/reading] Database error:", error);
      return NextResponse.json(
        { success: false, error: "Internal server error" },
        { status: 500 }
      );
    }
 
    return NextResponse.json({ success: true });
  } catch (err) {
    console.error("[analytics/reading] Unhandled error:", err);
    return NextResponse.json(
      { success: false, error: "Internal server error" },
      { status: 500 }
    );
  }
}

Look at what the steering file enforced:

  • Zod validation — api-standards.md said "All API routes MUST validate input using Zod schemas"
  • Response shape — { success: false, error: string } matches the standard
  • HTTP status codes — 400 for validation, 500 for server errors
  • Logging — console.warn for validation failures, console.error for server errors
  • upsert with ignoreDuplicates — prevents duplicate depth events (traced from acceptance criteria)

I did not prompt any of this. The steering file and the spec did the work.

Task 6: Tests (src/__tests__/analytics-reading.test.ts)

import { POST } from "@/app/api/analytics/reading/route";
import { NextRequest } from "next/server";
 
function makeRequest(body: Record<string, unknown>) {
  return new NextRequest("http://localhost/api/analytics/reading", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
}
 
describe("POST /api/analytics/reading", () => {
  it("returns 400 when articleSlug is missing", async () => {
    const res = await POST(makeRequest({ depth: "50", sessionId: crypto.randomUUID() }));
    expect(res.status).toBe(400);
    const data = await res.json();
    expect(data.success).toBe(false);
  });
 
  it("returns 400 when depth is invalid", async () => {
    const res = await POST(
      makeRequest({
        articleSlug: "test-post",
        depth: "30",
        sessionId: crypto.randomUUID(),
      })
    );
    expect(res.status).toBe(400);
  });
 
  it("returns 400 when sessionId is not a UUID", async () => {
    const res = await POST(
      makeRequest({
        articleSlug: "test-post",
        depth: "50",
        sessionId: "not-a-uuid",
      })
    );
    expect(res.status).toBe(400);
  });
 
  it("accepts valid depth values (25, 50, 75, 100)", async () => {
    for (const depth of ["25", "50", "75", "100"]) {
      const res = await POST(
        makeRequest({
          articleSlug: "test-post",
          depth,
          sessionId: crypto.randomUUID(),
        })
      );
      expect(res.status).toBe(200);
      const data = await res.json();
      expect(data.success).toBe(true);
    }
  });
});

Boundary tests for every Zod constraint. Invalid depth value. Missing fields. Non-UUID session ID. Valid values for all four milestones. Kiro generated these as part of Task 6, straight from the testing strategy in the design doc.

Step 4: Set Up Hooks

The feature is built. Now let's set up automations that protect the code going forward.

In the Kiro panel, click Agent Hooks → click + → choose Ask Kiro to create a hook.

Hook 1: Test Coverage on Save

Describe it in plain English:

When I save any TypeScript file in src/, check if the modified functions have tests. If tests are missing, generate them using the project's existing test conventions.

Kiro creates the hook. Or if you prefer to build it manually, click + → Manually create a hook and fill in:

| Field | Value |
| --- | --- |
| Title | Test Coverage Maintainer |
| Event | File Save |
| File pattern | `src/**/*.{ts,tsx}` |
| Action | Ask Kiro |
| Instructions | (below) |
When a source file is modified:
1. Identify new or modified functions and methods
2. Check if corresponding tests exist and cover the changes
3. If coverage is missing, generate test cases for the new code
4. Run the tests to verify they pass
5. Update coverage reports

Now every time you save a file in src/, this hook fires automatically. It checks for missing tests, generates them, and runs them. No manual test writing for baseline coverage.

Hook 2: Security Scanner

Create another hook:

| Field | Value |
| --- | --- |
| Title | Security Pre-commit Scanner |
| Event | Agent Stop |
| Action | Ask Kiro |
| Instructions | (below) |
Review changed files for potential security issues:
1. Look for API keys, tokens, or credentials in source code
2. Check for private keys or sensitive credentials
3. Scan for encryption keys or certificates
4. Flag passwords or secrets in configuration files
5. Detect hardcoded internal URLs
6. Spot database connection credentials
 
For each issue found:
1. Highlight the specific security risk
2. Suggest a secure alternative approach

This fires after every AI agent turn. If the AI (or you) accidentally hardcodes a secret, this hook catches it before you commit.

Hook 3: i18n Sync (if you have translations)

| Field | Value |
| --- | --- |
| Title | Internationalization Sync |
| Event | File Save |
| File pattern | `src/locales/en/*.json` |
| Action | Ask Kiro |
| Instructions | (below) |
When an English locale file is updated:
1. Identify which string keys were added or modified
2. Check all other language files for these keys
3. For missing keys, add them with a "NEEDS_TRANSLATION" marker
4. For modified keys, mark them as "NEEDS_REVIEW"
5. Generate a summary of changes needed across all languages

Hook 4: Shell Command Hook (Linting)

Hooks are not limited to AI prompts. You can run shell commands:

| Field | Value |
| --- | --- |
| Title | Auto Lint |
| Event | File Save |
| File pattern | `src/**/*.{ts,tsx}` |
| Action | Run Command |
| Command | `npx eslint --fix ${FILE}` |

This runs ESLint with auto-fix on every save. No extension needed.


Step 5: Wire an MCP Server

MCP (Model Context Protocol) connects Kiro to external tools. Let's add a GitHub server so we can create issues from the chat.

Open the command palette: Cmd + Shift + P → type Kiro: Open workspace MCP config (JSON).

This opens .kiro/settings/mcp.json. Paste:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-github"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      },
      "autoApprove": ["list_issues", "search_repositories"]
    },
    "aws-docs": {
      "command": "npx",
      "args": [
        "-y",
        "@anthropic/mcp-server-aws-docs"
      ]
    }
  }
}

Set your GitHub token as an environment variable (export GITHUB_TOKEN=ghp_...). Save the file. The MCP servers connect automatically.

Now in the Kiro chat, you can type:

Create a GitHub issue titled "Add reading progress analytics" with label "feature" and assign it to me.

Kiro calls the GitHub MCP server and creates the issue. No browser tab. No copy-pasting.

You can also ask the AWS docs server questions:

What are the DynamoDB BatchWriteItem throttling limits?

It pulls from official AWS documentation instead of hallucinating.

MCP Config Properties

| Property | What it does |
| --- | --- |
| `command` | CLI command to start the server |
| `args` | Arguments passed to the command |
| `env` | Environment variables (`${VAR}` expands from your system) |
| `autoApprove` | Tools that run without confirmation each time |
| `disabledTools` | Tools to disable on this server |
| `disabled` | Set `true` to temporarily turn off the server |
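Combining a few of these properties, here is what temporarily switching off the GitHub server while also blocking one of its tools might look like (a sketch using the keys from the table above — the tool name is illustrative):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "disabled": true,
      "disabledTools": ["create_repository"]
    }
  }
}
```

Setting `disabled: true` keeps the config in the repo so a teammate (or future you) can flip it back on without reconstructing the server entry.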

MCP configs live at two levels:

  • Workspace: .kiro/settings/mcp.json (project-specific, commit to repo)
  • User: ~/.kiro/settings/mcp.json (global, applies to all projects)

Step 6: Fix a Bug With a Bugfix Spec

A week later, a user reports that /api/analytics/reading returns a 500 when the depth field is sent as an integer instead of a string. Our Zod schema expects z.enum(["25", "50", "75", "100"]) but some clients send { depth: 50 }.

In the Kiro panel: Specs → + → select Bug → type:

The /api/analytics/reading endpoint returns 500 when depth is sent as an integer (e.g., 50) instead of a string ("50"). It should accept both formats.

Kiro generates a bugfix.md — different from the requirements.md we saw earlier:

# Bugfix: /api/analytics/reading 500 on integer depth
 
## Current Behavior
POST /api/analytics/reading returns 500 when the depth field
is an integer. The Zod schema uses z.enum(["25", "50", "75",
"100"]) which only accepts strings. Integer values fail
validation but the error is not caught gracefully, resulting
in an unhandled exception.
 
## Expected Behavior
- WHEN depth is sent as an integer (25, 50, 75, 100) THE
  SYSTEM SHALL coerce it to a string and process normally.
- WHEN depth is sent as a string ("25", "50", "75", "100")
  THE SYSTEM SHALL process normally (unchanged).
- WHEN depth is any other value THE SYSTEM SHALL return 400
  with error: "depth must be 25, 50, 75, or 100".
 
## Unchanged Behavior
- Valid requests must continue to return { success: true }.
- The reading_events table schema must not change.
- Existing records must not be affected.
 
## Root Cause
Zod schema in src/app/api/analytics/reading/route.ts uses
z.enum() for strings only. Needs z.preprocess() or z.coerce
to handle integer input.

Approve it. Kiro generates a design.md (proposing z.coerce.number().pipe(z.enum(...)) or z.preprocess()) and a tasks.md:

- [ ] Task 1: Update Zod schema to accept both string and
      integer depth values using z.preprocess()
- [ ] Task 2: Add regression tests for integer depth input
      (25, 50, 75, 100 as integers)
- [ ] Task 3: Add regression test for invalid depth values
      (e.g., 30, -1, "abc")

Run the tasks. Here is the actual diff Kiro applied to route.ts:

- const ReadingEventSchema = z.object({
-   articleSlug: z.string().min(1).max(255),
-   depth: z.enum(["25", "50", "75", "100"]),
-   sessionId: z.string().uuid(),
- });
+ const DepthSchema = z.preprocess(
+   (val) => String(val),
+   z.enum(["25", "50", "75", "100"], {
+     errorMap: () => ({
+       message: "depth must be 25, 50, 75, or 100",
+     }),
+   })
+ );
+
+ const ReadingEventSchema = z.object({
+   articleSlug: z.string().min(1).max(255),
+   depth: DepthSchema,
+   sessionId: z.string().uuid(),
+ });

z.preprocess() coerces the input to a string before validating. So { depth: 50 } and { depth: "50" } both pass. Invalid values like 30 or "abc" still fail with a clear error message. One line of behavioral change, zero breaking changes.

And here are the regression tests Kiro generated for the fix:

describe("POST /api/analytics/reading — integer depth regression", () => {
  it("accepts depth as integer 50", async () => {
    const res = await POST(
      makeRequest({
        articleSlug: "test-post",
        depth: 50, // integer, not string
        sessionId: crypto.randomUUID(),
      })
    );
    expect(res.status).toBe(200);
    const data = await res.json();
    expect(data.success).toBe(true);
  });
 
  it("accepts all integer depth values", async () => {
    for (const depth of [25, 50, 75, 100]) {
      const res = await POST(
        makeRequest({
          articleSlug: "test-post",
          depth, // integers
          sessionId: crypto.randomUUID(),
        })
      );
      expect(res.status).toBe(200);
    }
  });
 
  it("rejects invalid integer depth", async () => {
    const res = await POST(
      makeRequest({
        articleSlug: "test-post",
        depth: 30,
        sessionId: crypto.randomUUID(),
      })
    );
    expect(res.status).toBe(400);
    const data = await res.json();
    expect(data.error).toBe("depth must be 25, 50, 75, or 100");
  });
 
  it("rejects negative depth", async () => {
    const res = await POST(
      makeRequest({
        articleSlug: "test-post",
        depth: -1,
        sessionId: crypto.randomUUID(),
      })
    );
    expect(res.status).toBe(400);
  });
});

When Kiro saves the updated route file:

  1. The test coverage hook fires → sees the new DepthSchema and generates the regression tests above
  2. The security scanner hook fires → confirms no new credentials exposed
  3. The steering file loads → ensures the error response still uses { success: false, error: "..." }

Everything passes. The bug is fixed, tested, and documented in the spec files.

Now use MCP to close the loop:

Create a GitHub issue titled "Fixed: /api/analytics/reading 500 on integer depth" with label "bugfix". Include the root cause from the bugfix spec.

Done. Browser never opened.


Step 7: The Final Project Structure

After Steps 2–6, your .kiro/ folder looks like this:

your-project/
├── .kiro/
│   ├── settings/
│   │   └── mcp.json
│   ├── steering/
│   │   ├── product.md          ← auto-generated, always loaded
│   │   ├── tech.md             ← auto-generated, always loaded
│   │   ├── structure.md        ← auto-generated, always loaded
│   │   └── api-standards.md    ← custom, loaded on API file edits
│   ├── specs/
│   │   ├── reading-progress/
│   │   │   ├── requirements.md
│   │   │   ├── design.md
│   │   │   └── tasks.md
│   │   └── fix-integer-depth/
│   │       ├── bugfix.md
│   │       ├── design.md
│   │       └── tasks.md
│   └── hooks/
│       ├── test-coverage.md
│       ├── security-scanner.md
│       └── i18n-sync.md
├── src/
│   ├── app/
│   │   └── api/
│   │       └── analytics/
│   │           └── reading/
│   │               └── route.ts
│   ├── hooks/
│   │   └── useScrollDepth.ts
│   └── components/
│       └── ReadingProgress.tsx
└── package.json

Everything inside .kiro/ is version-controlled. Your team can review steering files, hook prompts, and spec docs in pull requests — just like code.


Going Deeper: Building a Custom MCP Server

The pre-built MCP servers (GitHub, AWS Docs, Brave Search) are useful, but the real power is building your own. I built a custom MCP server that lets me query my Supabase database directly from the Kiro chat. Now I can ask "How many reading events were recorded for my last blog post?" and get real data without opening a database client.

Here is the actual server:

mcp-supabase/index.ts

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { createClient } from "@supabase/supabase-js";
 
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);
 
const server = new McpServer({
  name: "supabase-analytics",
  version: "1.0.0",
});
 
// Tool 1: Query reading analytics by article
server.tool(
  "get_reading_stats",
  "Get scroll depth analytics for a specific article",
  {
    articleSlug: z.string().describe("The blog post slug"),
  },
  async ({ articleSlug }) => {
    const { data, error } = await supabase
      .from("reading_events")
      .select("depth, session_id")
      .eq("article_slug", articleSlug);
 
    if (error) {
      return { content: [{ type: "text", text: `Error: ${error.message}` }] };
    }
 
    const uniqueSessions = new Set(data.map((e) => e.session_id)).size;
    const depthCounts = data.reduce(
      (acc, e) => {
        acc[e.depth] = (acc[e.depth] || 0) + 1;
        return acc;
      },
      {} as Record<number, number>
    );
 
    const completionRate =
      uniqueSessions > 0
        ? ((depthCounts[100] || 0) / uniqueSessions * 100).toFixed(1)
        : "0";
 
    return {
      content: [
        {
          type: "text",
          text: [
            `## Reading Stats for "${articleSlug}"`,
            `- Unique readers: ${uniqueSessions}`,
            `- Reached 25%: ${depthCounts[25] || 0}`,
            `- Reached 50%: ${depthCounts[50] || 0}`,
            `- Reached 75%: ${depthCounts[75] || 0}`,
            `- Completed (100%): ${depthCounts[100] || 0}`,
            `- Completion rate: ${completionRate}%`,
          ].join("\n"),
        },
      ],
    };
  }
);
 
// Tool 2: List top articles by completion rate
server.tool(
  "top_articles",
  "List articles ranked by reader completion rate",
  {
    limit: z.number().optional().default(10),
  },
  async ({ limit }) => {
    const { data, error } = await supabase
      .from("reading_events")
      .select("article_slug, depth, session_id");
 
    if (error) {
      return { content: [{ type: "text", text: `Error: ${error.message}` }] };
    }
 
    const articles = new Map<
      string,
      { sessions: Set<string>; completions: number }
    >();
 
    data.forEach((event) => {
      if (!articles.has(event.article_slug)) {
        articles.set(event.article_slug, {
          sessions: new Set(),
          completions: 0
        });
      }
      const article = articles.get(event.article_slug)!;
      article.sessions.add(event.session_id);
      if (event.depth === 100) article.completions++;
    });
 
    const ranked = Array.from(articles.entries())
      .map(([slug, stats]) => ({
        slug,
        readers: stats.sessions.size,
        completions: stats.completions,
        rate: (stats.completions / stats.sessions.size * 100).toFixed(1),
      }))
      .sort((a, b) => parseFloat(b.rate) - parseFloat(a.rate))
      .slice(0, limit);
 
    const table = ranked
      .map(
        (a, i) =>
          `${i + 1}. **${a.slug}** — ${a.rate}% completion (${a.readers} readers)`
      )
      .join("\n");
 
    return {
      content: [{ type: "text", text: `## Top Articles\n${table}` }],
    };
  }
);
 
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}
 
main().catch(console.error);

mcp-supabase/package.json

{
  "name": "mcp-supabase-analytics",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "tsx index.ts"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "@supabase/supabase-js": "^2.39.0",
    "zod": "^3.22.0"
  },
  "devDependencies": {
    "tsx": "^4.7.0"
  }
}

Wire it into .kiro/settings/mcp.json

{
  "mcpServers": {
    "supabase-analytics": {
      "command": "npx",
      "args": ["tsx", "./mcp-supabase/index.ts"],
      "env": {
        "SUPABASE_URL": "${SUPABASE_URL}",
        "SUPABASE_SERVICE_KEY": "${SUPABASE_SERVICE_KEY}"
      },
      "autoApprove": ["get_reading_stats", "top_articles"]
    }
  }
}

Now in the Kiro chat:

"What are the reading stats for my kiro-ide-spec-driven-development post?"

Kiro calls get_reading_stats and returns real data from your database. No browser. No SQL client. No dashboard.

"Which of my articles has the highest completion rate?"

Kiro calls top_articles and ranks them. You can use this data to decide what to write next — all inside the IDE.

The point: MCP servers are just Node.js programs. If you can write an API, you can build an MCP server. Connect your project management tool, your error tracker, your deployment pipeline — anything with an API becomes a tool Kiro can use from the chat.
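Because the aggregation inside top_articles is plain TypeScript, you can pull it out into a pure function and unit-test it without Supabase or the MCP SDK. Here is that extraction as a sketch; the ReadingEvent shape mirrors the columns the tools select:

```typescript
// Pure aggregation sketch: rank article slugs by completion rate.
// Mirrors the logic in the top_articles tool, minus Supabase and MCP.
interface ReadingEvent {
  article_slug: string;
  depth: number; // 25 | 50 | 75 | 100
  session_id: string;
}

interface ArticleRank {
  slug: string;
  readers: number;
  rate: string; // completion rate as a percentage, one decimal place
}

function rankByCompletion(events: ReadingEvent[], limit = 10): ArticleRank[] {
  const bySlug = new Map<string, { sessions: Set<string>; completions: number }>();

  for (const e of events) {
    let entry = bySlug.get(e.article_slug);
    if (!entry) {
      entry = { sessions: new Set(), completions: 0 };
      bySlug.set(e.article_slug, entry);
    }
    entry.sessions.add(e.session_id);
    if (e.depth === 100) entry.completions++;
  }

  return Array.from(bySlug.entries())
    .map(([slug, s]) => ({
      slug,
      readers: s.sessions.size,
      rate: ((s.completions / s.sessions.size) * 100).toFixed(1),
    }))
    .sort((a, b) => parseFloat(b.rate) - parseFloat(a.rate))
    .slice(0, limit);
}
```

Feed it two sessions on one post, where only one reaches 100%, and it reports a 50.0% rate for that slug. Keeping the logic pure like this also means Hook 2's generated tests have something to grab onto.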


Going Deeper: The Hook Pipeline

Individual hooks are useful. But the real pattern is chaining them into a quality gate that runs automatically as you work. Here is how I set up a pipeline where one event triggers a sequence of checks — similar to CI/CD, but inside the IDE.

The Pipeline

File Save (src/**/*.ts)
  ↓
Hook 1: Auto-format (Run Command)
  → npx prettier --write ${FILE}
  ↓
Hook 2: Test Coverage (Ask Kiro, File Save)
  → Generate missing tests, run them
  ↓
Hook 3: Security Scan (Ask Kiro, Agent Stop)
  → Scan for hardcoded secrets
  ↓
Hook 4: Doc Sync (Ask Kiro, Agent Stop)
  → Update README if exports changed
  ↓
Hook 5: Commit Prep (Manual Trigger)
  → Generate conventional commit message from spec

How it works in practice

When I save route.ts:

  1. Hook 1 fires — Prettier formats the file. This is a Run Command hook with npx prettier --write ${FILE}. Takes 200ms.

  2. Hook 2 fires — The test coverage hook sees the formatted file and checks for missing tests. If a new function was added, it generates a test file. If the file was only reformatted (no logic changes), it does nothing. The AI reads the diff, not just the file — it knows when changes are cosmetic vs. functional.

  3. Hook 3 fires after the agent completes — The security scanner runs on Agent Stop. It reviews all files the agent touched during the test generation step, not just the original save. This catches cases where a generated test accidentally logs a secret.

  4. Hook 4 fires after the security scan agent completes — The doc sync hook checks if any exported function signatures changed. If ReadingEventSchema gained a new field, the README's API documentation section gets updated automatically.

  5. Hook 5 is manual — When I am ready to commit, I trigger it. It reads all staged changes, cross-references them with the spec files (if a spec exists for this feature), and generates a conventional commit message like:

feat(analytics): add reading progress tracking API
 
Implements reading-progress spec (requirements.md, tasks 1-7).
- Added POST /api/analytics/reading with Zod validation
- Added useScrollDepth hook with IntersectionObserver
- Added ReadingProgress client component
- Added reading_events table migration
 
Closes #42

What about hook storms?

If Hook 4 updates README.md, and I have a hook that triggers on markdown file saves, that could create a loop. Kiro handles this by detecting circular triggers in most cases. But I scope my hooks defensively:

  • Test coverage hook: src/**/*.{ts,tsx} — does not trigger on markdown
  • Doc sync hook: triggers on Agent Stop, not File Save — does not re-trigger itself
  • Format hook: src/**/*.{ts,tsx} — does not match README

Scoping is the key. Broad file patterns like **/* are recipes for storms. Always use the most specific glob possible.
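To see why scoping matters, here is a small self-contained simulation. The names and the engine are hypothetical (this is my toy model, not Kiro's actual trigger logic): each hook has a match predicate and a file its action writes, and every write is replayed through the hook list until the queue drains or a firing budget runs out.

```typescript
// Toy simulation of hook triggering (not Kiro's real engine).
// A hook fires when a saved path matches its predicate, and its action
// may save another file, which can in turn retrigger hooks.
interface Hook {
  name: string;
  matches: (path: string) => boolean;
  writes?: string; // file the hook's action saves, if any
}

// Returns the sequence of hook firings for an initial save, capped at
// `budget` firings so a storm is detectable instead of infinite.
function runSaves(hooks: Hook[], firstSave: string, budget = 20): string[] {
  const fired: string[] = [];
  const queue = [firstSave];
  while (queue.length > 0 && fired.length < budget) {
    const path = queue.shift()!;
    for (const h of hooks) {
      if (!h.matches(path)) continue;
      fired.push(h.name);
      if (h.writes) queue.push(h.writes); // this save may retrigger hooks
      if (fired.length >= budget) break;
    }
  }
  return fired;
}

const isTsSource = (p: string) => p.startsWith("src/") && /\.(ts|tsx)$/.test(p);

// Broad doc-sync hook: a **/*-style pattern matches every file,
// including the README.md it writes, so it re-enters its own loop.
const stormy: Hook[] = [
  { name: "doc-sync", matches: () => true, writes: "README.md" },
];

// Scoped version: only TypeScript sources trigger it, so the README
// write cannot retrigger anything.
const scoped: Hook[] = [
  { name: "doc-sync", matches: isTsSource, writes: "README.md" },
];
```

Run both on a save of src/route.ts: the stormy configuration burns through the entire firing budget, while the scoped one fires exactly once and stops.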


Going Deeper: Migrating from Cursor to Kiro

If you are currently using Cursor, here is how to migrate your setup to Kiro. The concepts map 1:1, but Kiro's system is more granular.

Converting .cursorrules to Steering Files

Cursor uses a single .cursorrules file at the project root. Kiro uses multiple steering files in .kiro/steering/ with different inclusion modes. Here is a real conversion:

Before (Cursor .cursorrules):

You are a senior TypeScript developer working on a Next.js 14
App Router project.
 
Rules:
- Use TypeScript strict mode
- Use Zod for all API validation
- Use Vanilla CSS with CSS Modules, never Tailwind
- Server components by default, client components need "use client"
- Never use `any` type
- All API routes return { success: boolean, error?: string }
- Database access through lib/db/ only
- Never modify migration files
- Run prettier before suggesting code
- Generate tests for all new functions

This is one flat file. Every interaction loads the entire thing — even when I am editing a CSS file that does not need API validation rules.

After (Kiro steering files):

Split into focused files with appropriate inclusion modes:

tech.md — Always loaded:

---
inclusion: always
---
 
# Technology Stack
- Next.js 14 (App Router), TypeScript strict mode
- Vanilla CSS with CSS Modules (never Tailwind)
- Supabase (PostgreSQL), NextAuth.js
- Server components by default
 
# Universal Rules
- Never use `any` type
- Never modify files in migrations/
- Never install dependencies without asking

api-standards.md — Only loads when I edit API files:

---
inclusion: fileMatch
fileMatchPattern: "src/app/api/**/*"
---
 
# API Conventions
- Validate all input with Zod schemas
- Return { success: boolean, error?: string }
- 400 for validation, 401 for auth, 500 for server errors
- Database access through lib/db/ only
- Log validation failures at WARN, server errors at ERROR
 
Follow patterns in: #[[file:src/app/api/polls/route.ts]]

component-patterns.md — Only loads when I edit React components:

---
inclusion: fileMatch
fileMatchPattern: ["src/components/**/*.tsx", "src/app/**/*.tsx"]
---
 
# Component Rules
- Client components require "use client" directive
- Add a comment explaining WHY it needs to be a client component
- Use CSS Modules for all styling (import styles from "./X.module.css")
- Wrap page-level components in error boundaries
 
Follow patterns in: #[[file:src/components/ui/Button.tsx]]

testing.md — Only loads when I work on tests:

---
inclusion: fileMatch
fileMatchPattern: "src/__tests__/**/*"
---
 
# Testing Conventions
- Use Jest + React Testing Library
- Test file naming: [filename].test.ts
- Use makeRequest() helper for API route tests
- Mock Supabase client, never use real database
 
Follow patterns in: #[[file:src/__tests__/polls.test.ts]]

troubleshooting.md — On-demand, loaded when I type #troubleshooting in chat:

---
inclusion: manual
---
 
# Common Issues
 
## Hydration Errors
If you see "Text content does not match server-rendered HTML":
- Check for `Date.now()` or `Math.random()` in server components
- Check for browser-only APIs (window, document) without guards
- Solution: wrap in useEffect or move to client component
 
## Database Connection Errors
- Check SUPABASE_URL and SUPABASE_SERVICE_KEY in .env.local
- Run `npx supabase status` to verify local instance
- Check RLS policies if queries return empty results

The Migration Checklist

| Cursor Concept | Kiro Equivalent | Notes |
| --- | --- | --- |
| .cursorrules | .kiro/steering/*.md | Split by domain, use inclusion modes |
| Rules for all files | inclusion: always | Only for truly universal rules |
| Rules for specific files | inclusion: fileMatch | Keeps context focused and small |
| Ad-hoc instructions | inclusion: manual | Type #filename in chat to load |
| Composer | Spec-driven mode | Kiro adds requirements + design before code |
| Tab autocomplete | Built-in autocomplete | Works the same |
| @codebase mention | Automatic context | Kiro reads relevant files + steering |
| .cursorrules in Git | .kiro/steering/ in Git | Same — version control your conventions |

Why the Split Matters

In Cursor, my .cursorrules was 47 lines. Every single interaction loaded all 47 lines — even when I was just renaming a CSS class. That wastes context-window tokens and can mislead the AI (it might try to "apply Zod validation" to a CSS file because it read the API rules).

In Kiro, when I edit a CSS file, only tech.md loads. When I edit an API route, tech.md + api-standards.md load. When I edit a test file, tech.md + testing.md load. The AI gets exactly the context it needs — no more, no less. This is the difference between "tell the AI everything and hope it filters" and "give the AI exactly what it needs for this specific task."
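That loading behavior is easy to model. Here is a sketch of the resolution logic as I understand it (my own simplification, not Kiro's implementation), with a deliberately naive glob-to-regex conversion for fileMatchPattern:

```typescript
// Simplified model of steering-file resolution (not Kiro's real code).
interface SteeringFile {
  name: string;
  inclusion: "always" | "fileMatch" | "manual";
  fileMatchPattern?: string[];
}

// Naive glob support: `**` crosses directory boundaries, `*` stays
// within a single path segment. Real glob libraries handle more cases.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&")
    .replace(/\*\*/g, "\u0000") // placeholder so the `*` pass skips it
    .replace(/\*/g, "[^/]*")
    .replace(/\u0000/g, ".*");
  return new RegExp(`^${escaped}$`);
}

// Which steering files load while this path is being edited?
// `manual` files never auto-load; they wait for a #filename mention.
function resolveSteering(files: SteeringFile[], path: string): string[] {
  return files
    .filter(
      (f) =>
        f.inclusion === "always" ||
        (f.inclusion === "fileMatch" &&
          (f.fileMatchPattern ?? []).some((g) => globToRegExp(g).test(path)))
    )
    .map((f) => f.name);
}

const steering: SteeringFile[] = [
  { name: "tech.md", inclusion: "always" },
  { name: "api-standards.md", inclusion: "fileMatch", fileMatchPattern: ["src/app/api/**/*"] },
  { name: "testing.md", inclusion: "fileMatch", fileMatchPattern: ["src/__tests__/**/*"] },
  { name: "troubleshooting.md", inclusion: "manual" },
];
```

Resolving src/app/api/analytics/reading/route.ts yields tech.md plus api-standards.md; resolving a component file yields tech.md alone, exactly the behavior described above.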


Going Deeper: Steering Patterns for Monorepos

If you work in a monorepo with multiple packages, steering files become critical. Different packages have different conventions — your frontend uses React, your backend uses Express, your shared library uses pure TypeScript. A single .cursorrules file cannot handle this. Kiro's fileMatch patterns can.

Example: A Turborepo with Three Packages

monorepo/
├── .kiro/
│   └── steering/
│       ├── product.md              ← inclusion: always
│       ├── frontend-react.md       ← inclusion: fileMatch → apps/web/**/*
│       ├── backend-express.md      ← inclusion: fileMatch → apps/api/**/*
│       ├── shared-lib.md           ← inclusion: fileMatch → packages/shared/**/*
│       └── deployment.md           ← inclusion: manual
├── apps/
│   ├── web/                        ← React + Next.js
│   └── api/                        ← Express + Prisma
├── packages/
│   └── shared/                     ← Pure TypeScript utilities
└── turbo.json

frontend-react.md:

---
inclusion: fileMatch
fileMatchPattern: "apps/web/**/*"
---
 
# Frontend (apps/web)
- Framework: Next.js 14 App Router
- Styling: Tailwind CSS (this package uses Tailwind, not Vanilla CSS)
- State: Zustand for client state, React Query for server state
- Components: use Radix UI primitives, never build custom from scratch
- Images: always use next/image with explicit width/height

backend-express.md:

---
inclusion: fileMatch
fileMatchPattern: "apps/api/**/*"
---
 
# Backend (apps/api)
- Framework: Express.js with TypeScript
- ORM: Prisma (never write raw SQL)
- Validation: Zod middleware on every route
- Auth: JWT with refresh tokens, stored in httpOnly cookies
- Error handling: use AppError class from packages/shared
- NEVER import React or any frontend dependency in this package

shared-lib.md:

---
inclusion: fileMatch
fileMatchPattern: "packages/shared/**/*"
---
 
# Shared Library (packages/shared)
- Pure TypeScript only (no React, no Express, no framework deps)
- Must be tree-shakeable (named exports only, no default exports)
- Every exported function needs JSDoc with @example
- Zero runtime dependencies (utils only)
- All types go in types/ subdirectory

When I edit a file in apps/web/, Kiro loads product.md + frontend-react.md. When I edit a file in apps/api/, it loads product.md + backend-express.md. The frontend AI knows to use Tailwind. The backend AI knows to use Prisma. They never cross-contaminate.

The Array Pattern for Cross-Cutting Concerns

Some steering files apply to multiple packages but not all. Use the array syntax:

---
inclusion: fileMatch
fileMatchPattern: ["apps/web/**/*.test.*", "apps/api/**/*.test.*", "packages/shared/**/*.test.*"]
---
 
# Testing Standards (All Packages)
- Framework: Vitest
- Coverage threshold: 80% for new files
- Mock external services, never hit real APIs in tests
- Use factories for test data, never hardcode

This loads only when you edit test files — in any package.


Kiro vs. Cursor vs. Windsurf: When to Use What

| Dimension | Kiro | Cursor | Windsurf |
| --- | --- | --- | --- |
| Best for | Production features, team projects, compliance | Speed, prototyping, personal projects | Large legacy codebases, deep context |
| Unique feature | Spec-driven workflow + hooks + steering | Composer (multi-file editing), fast autocomplete | Cascade (autonomous agent), implicit context |
| Planning phase | Built-in (requirements → design → tasks) | None | None |
| Automation | Event-driven hooks (save/create/delete/manual/pre-post task) | None built-in | None built-in |
| Convention enforcement | Steering files with 4 inclusion modes | .cursorrules (single file, always loaded) | Limited |
| External tools | MCP (workspace + user level) | Community plugins | Limited |
| Extension source | Open VSX registry | VS Code Marketplace | VS Code Marketplace |

Use Kiro when you need the code to be tested, documented, and traceable to requirements.

Use Cursor when you need to move fast and do not need a spec trail.

Use Windsurf when you need deep context recall across a massive codebase.


Pricing

| Tier | Price | Credits/month | Autonomous Agent |
| --- | --- | --- | --- |
| Free | $0 | 50 | — |
| Pro | $20/mo | 1,000 | Preview access |
| Pro+ | $40/mo | 2,000 | Preview access |
| Power | $200/mo | 10,000 | Preview access |
| Enterprise | Contact sales | Custom | Full access |

Overage: $0.04/credit. The free tier gives you full access to specs, hooks, steering, and MCP — 50 credits is enough to try this entire tutorial.
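The overage math is simple enough to sanity-check with a two-line function, using the tier numbers from the table above:

```typescript
// Monthly bill = base price + $0.04 per credit over the included allowance.
function monthlyCost(
  basePrice: number,
  includedCredits: number,
  creditsUsed: number
): number {
  const overage = Math.max(0, creditsUsed - includedCredits);
  return basePrice + overage * 0.04;
}
```

A Pro user who burns 1,250 credits pays monthlyCost(20, 1000, 1250), which comes to $30; anything at or under the 1,000 included credits stays at the flat $20.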


Final Thoughts

Kiro is not trying to be the fastest AI IDE. It is trying to be the most responsible one.

In a world where every other tool is optimized for "ship fast," Kiro is optimized for "ship something you can maintain." It asks the questions a good tech lead would ask. It writes the documentation a good team would write. It enforces the standards a good codebase would follow.

I still reach for Cursor when I want to hack on a weekend project. But when I am building something that matters — something my team will maintain, something that needs to pass a review, something that a future engineer will inherit — I reach for Kiro.

The era of "vibe coding" is not over. But the era of shipping vibe-coded software to production should be.


Sources & Further Reading:

  • Kiro Downloads
  • Kiro Specs — Feature Specs
  • Kiro Specs — Bugfix Specs
  • Kiro Hooks — Examples
  • Kiro Steering — Inclusion Modes
  • Kiro MCP — Configuration
  • Kiro CLI
  • Kiro Pricing
