
## I. Why This Tutorial Exists

Most AI tutorials teach you to build the fastest possible chatbot. This one teaches you to build a slower one. A more careful one. One that does not flatten a stammering voice into a clean transcript, does not collapse a dyslexic reader into "the average user", and does not treat cognitive accessibility as a CSS afterthought.

In my earlier work on Algorithmic Dysfluency and Neuroinclusive UX, I argued that the AI stack treats disabled users as edge cases. The Model Context Protocol (MCP), released by Anthropic in late 2024, gives us a small but real opening to fix that, because MCP lets you write your own tools and plug them into Claude. Whatever Claude cannot do well on its own, you can teach it to do, on your terms, with your values baked in.

In this tutorial we will build a single MCP server called the Neuroinclusive Toolkit, with four working tools:

1. `read_transcript_with_disfluencies` (a stammering safe transcript reader that refuses to clean up "ums", repetitions, and blocks)
2. `reformat_for_sensory_load` (a sensory load reformatter that chunks text, scores reading level, and offers plain language alternates)
3. `audit_cognitive_accessibility` (a cognitive accessibility auditor that flags jargon density, time pressure, and colour only signals, mapped to WCAG 2.2)
4. `refuse_to_smooth` (a guard tool that returns a stable refusal contract whenever Claude is asked to paraphrase disabled speech)

You will write every tool twice, once in Python and once in TypeScript, connect the finished server to both Claude Desktop and Claude Code, write tests, ship a CI job, and read three ready to paste prompts that turn the tools into a daily practice. By the end, you will have a real piece of software, on your machine, that any beginner can extend.

No prior MCP experience is required. If you have written one Python script and one Node script in your life, you are ready.

## II. What Is MCP, In Plain Words

Think of Claude as a smart office worker with no internet, no files, and no special tools. MCP is the office, the filing cabinet, and the toolbox. An MCP server is a small program you run on your computer that tells Claude:

- "Here are the tools I can offer you" (functions Claude can call)
- "Here are some resources you can read" (files, transcripts, audit reports)
- "Here are some prompt templates you can fill in" (reusable instructions)

Claude (the client) talks to your server over a simple protocol called JSON RPC. You do not need to know what JSON RPC is. The MCP SDK hides it. You write Python or TypeScript functions, decorate them, and the SDK does the wiring.

A picture helps. Here is the whole system on one screen:

```mermaid
graph TD
    User([User prompt]) --> Client["Claude Desktop<br/>or Claude Code"]

    Client <-->|"stdio (JSON RPC)<br/><br/>list_tools / call_tool<br/>list_resources / read<br/>list_prompts / get"| Server["Your MCP server<br/>(Python or TS)"]

    Server --> Data[("Your code<br/>and data<br/>(transcripts, source repo)")]
```

Three vocabulary words you will see often:

- Tool: a function Claude can call (like `read_transcript_with_disfluencies(path)`).
- Resource: a piece of data Claude can read (like `transcripts://meeting-2026-04-12`).
- Transport: the channel between Claude and your server. We will use stdio (standard input and output), the simplest one.

### What MCP is not

Beginners often arrive with the wrong mental model, so let me name what MCP does not do, before we go further.

MCP does not give Claude the open internet. It only gives Claude the tools you write. If your server cannot reach a URL, neither can Claude.

MCP does not give Claude long term memory. Each chat starts fresh. If you want memory, your server has to store it on disk and expose a tool to read and write it.
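
Concretely, "memory" is just a pair of functions you write and later register as tools. Here is a stdlib only sketch of the storage half; the file name `memory.json` is my choice, not anything MCP mandates:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # any path you like; MCP does not care

def remember(key: str, value: str) -> str:
    """Store one fact on disk so a future chat can read it back."""
    data = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    data[key] = value
    MEMORY_FILE.write_text(json.dumps(data, indent=2))
    return f"Stored {key}."

def recall(key: str) -> str:
    """Read one fact back, or admit nothing is stored."""
    data = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    return data.get(key, f"No memory stored under {key}.")
```

Once you have a server object (we build one in the setup sections), decorating each function as a tool is all the registration MCP needs.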

MCP does not give Claude autonomy. Claude calls your tools only when it decides a tool is relevant, and only when the user is asking for something the tool description matches. Tool calls are visible in the chat. The user can refuse them.

MCP does not run your code in a sandbox. Your server has the full permissions of the shell that started it. That is a feature for power users and a risk for everyone else, which we will address in the security section.

That is it. You now know enough MCP to start building.

## III. Prerequisites and Setup

You need exactly four things on your computer.

1. Python 3.10 or newer (for the Python version)
2. Node.js 20 or newer (for the TypeScript version)
3. Claude Desktop (download from claude.ai, free account is fine)
4. Claude Code (the CLI, install with `npm install -g @anthropic-ai/claude-code`)

To check what you have, open a terminal and run:

```bash
python3 --version
node --version
claude --version
```

If any of these fail, install the missing piece before you continue. Do not skip this step. MCP servers are local programs, and a missing runtime will produce confusing errors later.

Now create a project folder. I will use `neuroinclusive-mcp` for the rest of the tutorial. Run:

```bash
mkdir neuroinclusive-mcp
cd neuroinclusive-mcp
```

Inside that folder we will keep two subfolders, one per language, so you can compare them side by side:

```bash
mkdir python-server
mkdir ts-server
```

## IV. Project Setup, Python Side

Move into the Python folder and create a virtual environment. A virtual environment is a private box of Python packages that does not pollute your system.

```bash
cd python-server
python3 -m venv .venv
source .venv/bin/activate
```

On Windows, the activate command is `.venv\Scripts\activate` instead.

Install the official MCP SDK and a small helper for reading level scoring:

```bash
pip install "mcp[cli]" textstat
```

Create the main file:

```bash
touch server.py
```

Open `server.py` in your editor and write the skeleton:

```python
# server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("neuroinclusive-toolkit")

if __name__ == "__main__":
    mcp.run()
```

That is a working MCP server. It does nothing yet, but it runs. Confirm it boots:

```bash
python server.py
```

You will see the process sit there waiting for input. Press Ctrl C to stop it. If you saw an import error instead, the virtual environment is not active. Re-run `source .venv/bin/activate` and try again.

## V. Project Setup, TypeScript Side

Open a second terminal, go to the project root, and into the TypeScript folder:

```bash
cd ts-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript tsx @types/node
npx tsc --init
```

Open `tsconfig.json` and make sure the compiler options contain these values:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```

Create the source folder and the entry file:

```bash
mkdir src
touch src/server.ts
```

Open `src/server.ts` and write the skeleton:

```ts
// src/server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({
  name: "neuroinclusive-toolkit",
  version: "0.1.0",
});

const transport = new StdioServerTransport();
await server.connect(transport);
```

Add a `start` script to `package.json`:

```json
{
  "type": "module",
  "scripts": {
    "start": "tsx src/server.ts",
    "build": "tsc"
  }
}
```

Confirm it boots:

```bash
npm start
```

Same as Python, you will see the process wait. Press Ctrl C. If TypeScript complained about `await` at the top level, double check that `"type": "module"` is in `package.json` and that `target` is `ES2022` in `tsconfig.json`.

Both servers are alive. Now we add real tools.

## VI. Tool One, The Stammering Safe Transcript Reader

The standard practice in audio pipelines is to strip "ums", repetitions, and blocks before the model ever sees the text. That is a political choice, and a violent one, because it deletes the texture of disabled speech. Our first tool refuses to do that. It reads a transcript file and returns it with disfluencies clearly marked, never removed, and includes a small report on what was preserved.

Imagine a transcript file at `transcripts/board-meeting.txt` that contains:

```
[00:00:04] I... I... I just want to say
[00:00:09] um, the budget, the budget proposal
[00:00:14] (block, 3 seconds) is not workable for our team.
```

A normal cleanup script would return "I just want to say the budget proposal is not workable for our team". Our tool returns the original text plus a structured note that says: "preserved 3 disfluency events: 1 sound repetition, 1 filler, 1 block of 3 seconds". Claude can then reason about the speech as it actually happened.

### Python version

Add this to `server.py` above the `if __name__` block:

```python
import re
from pathlib import Path
from typing import TypedDict

class DisfluencyReport(TypedDict):
    original_text: str
    repetitions: int
    fillers: int
    blocks: int
    notes: str

FILLER_PATTERN = re.compile(r"\b(um+|uh+|er+|ah+)\b", re.IGNORECASE)
REPETITION_PATTERN = re.compile(r"\b(\w+?)(\.\.\.\s*|\s+)\1\b", re.IGNORECASE)
BLOCK_PATTERN = re.compile(r"\(block,?\s*\d+\s*seconds?\)", re.IGNORECASE)

@mcp.tool()
def read_transcript_with_disfluencies(path: str) -> DisfluencyReport:
    """Read a transcript file and report disfluencies without removing them.

    The text is returned exactly as written. Disfluency events are counted
    and described, never silently deleted. Always cite this tool's output
    verbatim. Do not paraphrase.
    """
    file_path = safe_resolve(path)
    text = file_path.read_text(encoding="utf-8")

    fillers = len(FILLER_PATTERN.findall(text))
    repetitions = len(REPETITION_PATTERN.findall(text))
    blocks = len(BLOCK_PATTERN.findall(text))

    notes = (
        f"This transcript preserves {fillers} filler events, "
        f"{repetitions} sound or word repetitions, and {blocks} explicit "
        "blocks. Do not paraphrase or smooth these patterns when summarising."
    )

    return DisfluencyReport(
        original_text=text,
        repetitions=repetitions,
        fillers=fillers,
        blocks=blocks,
        notes=notes,
    )
```

You will notice `safe_resolve`. We will write that function in the security section, because every file reading tool needs it. For now, treat it as a placeholder that returns a `Path`.
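
If you want `server.py` to run before we reach that section, paste in this deliberately naive stand-in (my stub, not part of the SDK). It resolves the path and performs no safety checks; the `must_be_dir` flag anticipates the auditor tool, which passes it later:

```python
from pathlib import Path

def safe_resolve(path: str, must_be_dir: bool = False) -> Path:
    """Temporary placeholder: resolve the path with NO safety checks.

    The security section replaces this with a version that confines
    reads to an allowed root. Do not ship this stub.
    """
    resolved = Path(path).expanduser().resolve()
    if must_be_dir and not resolved.is_dir():
        raise ValueError(f"Not a directory: {path}")
    return resolved
```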

Two things to notice. First, the docstring matters. MCP sends it to Claude as the tool description. Write it like a contract with the model. Second, we never modify `text`. Other tutorials show "cleanup" helpers. We are doing the opposite, on purpose.

### TypeScript version

In `src/server.ts`, add the imports and the tool registration. The TypeScript SDK uses Zod for argument validation, which is why we installed it earlier.

```ts
import { readFile } from "node:fs/promises";
import { z } from "zod";
// safeResolve is defined in the security section.

const FILLER = /\b(um+|uh+|er+|ah+)\b/gi;
const REPETITION = /\b(\w+?)(\.{3}\s*|\s+)\1\b/gi;
const BLOCK = /\(block,?\s*\d+\s*seconds?\)/gi;

server.registerTool(
  "read_transcript_with_disfluencies",
  {
    title: "Read transcript without smoothing disfluencies",
    description:
      "Read a transcript file and report disfluencies without removing them. The text is returned exactly as written. Disfluency events are counted and described, never silently deleted. Always cite this output verbatim.",
    inputSchema: { path: z.string() },
  },
  async ({ path }) => {
    const filePath = safeResolve(path);
    const text = await readFile(filePath, "utf-8");

    const fillers = (text.match(FILLER) ?? []).length;
    const repetitions = (text.match(REPETITION) ?? []).length;
    const blocks = (text.match(BLOCK) ?? []).length;

    const notes =
      `This transcript preserves ${fillers} filler events, ` +
      `${repetitions} sound or word repetitions, and ${blocks} explicit ` +
      `blocks. Do not paraphrase or smooth these patterns when summarising.`;

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            { original_text: text, repetitions, fillers, blocks, notes },
            null,
            2
          ),
        },
      ],
    };
  }
);
```

The TypeScript SDK is more verbose than Python, but the shape is the same: a name, a description, a schema, and an async handler.
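
The TypeScript file has the same gap: `safeResolve` arrives in the security section. Until then, this deliberately naive placeholder (my stub, not the SDK's) lets the file compile. It resolves the path and checks nothing else; the second argument anticipates a directory check the auditor tool needs later:

```typescript
import { resolve } from "node:path";
import { statSync } from "node:fs";

// Temporary placeholder: resolve the path with NO safety checks.
// The security section replaces this with a confined version.
function safeResolve(p: string, mustBeDir = false): string {
  const full = resolve(p);
  if (mustBeDir && !statSync(full).isDirectory()) {
    throw new Error(`Not a directory: ${p}`);
  }
  return full;
}
```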

### A worked example, with and without the tool

This is the moment that justifies the whole tutorial. Take the same three line transcript above and ask Claude two ways.

Without the tool, you paste the text into Claude and ask "summarise what the speaker said". A typical answer:

> The speaker stated that the budget proposal is not workable for their team.

That single sentence has done three harmful things. It has flattened three "I" sound repetitions into nothing. It has deleted the filler "um", which in plain conversation analysis is a marker of cognitive load, not noise. It has erased a three second silent block, which in the context of stammered speech is often the most expressive part of the utterance, the moment of held tension before a hard consonant.

With the tool, you ask Claude to call `read_transcript_with_disfluencies` first, and only then to describe what was said. A typical answer:

> The speaker said: "I... I... I just want to say um, the budget, the budget proposal (3 second block) is not workable for our team." The repetition on "I" and the held block before "is not" suggest that the speaker treated the refusal as the most consequential word in the sentence. The repetition on "the budget" indicates a planning loop on the noun phrase. A faithful summary preserves the structure: the speaker, with visible effort, refused the proposal on behalf of their team.

Read those two outputs side by side. The first is what most AI products give a disabled speaker by default. The second is what your tool, in eighty lines of code, gives them instead. That difference is what the rest of this tutorial defends.

### Try it now

Create a sample transcript so you can test:

```bash
mkdir -p ../sample-data
cat > ../sample-data/sample.txt <<'EOF'
[00:00:04] I... I... I just want to say
[00:00:09] um, the budget, the budget proposal
[00:00:14] (block, 3 seconds) is not workable for our team.
EOF
```

Run the server (Python: `python server.py`, TypeScript: `npm start`). It will sit waiting. We will hook it up to Claude in the connection sections. Press Ctrl C for now.
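
Before the Claude wiring, it is worth checking the patterns themselves in a plain Python session. This standalone snippet uses the filler and block patterns from the tool, plus a repetition pattern that tolerates the `...` separator in the sample:

```python
import re

FILLER = re.compile(r"\b(um+|uh+|er+|ah+)\b", re.IGNORECASE)
REPETITION = re.compile(r"\b(\w+?)(\.\.\.\s*|\s+)\1\b", re.IGNORECASE)
BLOCK = re.compile(r"\(block,?\s*\d+\s*seconds?\)", re.IGNORECASE)

sample = (
    "[00:00:04] I... I... I just want to say\n"
    "[00:00:09] um, the budget, the budget proposal\n"
    "[00:00:14] (block, 3 seconds) is not workable for our team.\n"
)

print(len(FILLER.findall(sample)))      # 1
print(len(REPETITION.findall(sample)))  # 1
print(len(BLOCK.findall(sample)))       # 1
```

If those three counts print, the disfluency events are being seen, not smoothed.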

## VII. Tool Two, The Sensory Load Reformatter

Many neurodivergent readers (autistic, ADHD, dyslexic, post concussion, fatigued) struggle with three properties of typical web text: long sentences, dense metaphor, and unbroken visual blocks. The fix is not "make it shorter". The fix is to give the reader control. Our second tool returns three views of the same passage: the original, a chunked version, and a plain language version, plus a Flesch reading ease score so the reader can pick.

We will use `textstat` in Python and a small reading score helper in TypeScript.

### A note on language coverage

The Flesch reading ease score and the plain language swaps below are English only. Reading ease in French uses a different formula (Kandel and Moles), German has Wiener Sachtextformel, Spanish has the Fernandez Huerta index, and many languages have no widely accepted formula at all. If your readers are not English speaking, swap the score for a language appropriate one and rebuild the swaps list with native speaking testers. An English score on a French paragraph is worse than no score, because it looks authoritative while being wrong.

### Python version

Add to `server.py`:

```python
import textstat

class ReformatResult(TypedDict):
    original: str
    chunked: str
    plain_language: str
    flesch_reading_ease: float
    target_audience: str
    language_warning: str

def chunk_paragraph(paragraph: str, max_words: int = 18) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    out = []
    for s in sentences:
        words = s.split()
        if len(words) <= max_words:
            out.append(s)
            continue
        for i in range(0, len(words), max_words):
            out.append(" ".join(words[i : i + max_words]))
    return "\n".join(f"- {line}" for line in out if line)

def to_plain_language(text: str) -> str:
    swaps = {
        r"\butilise\b": "use",
        r"\butilize\b": "use",
        r"\bcommence\b": "start",
        r"\bterminate\b": "end",
        r"\bsubsequently\b": "later",
        r"\bprior to\b": "before",
        r"\bin order to\b": "to",
        r"\bapproximately\b": "about",
    }
    out = text
    for pattern, replacement in swaps.items():
        out = re.sub(pattern, replacement, out, flags=re.IGNORECASE)
    return out

@mcp.tool()
def reformat_for_sensory_load(text: str) -> ReformatResult:
    """Return three views of the same text for readers with different sensory needs.

    The original is preserved. A chunked version breaks long sentences into
    short bullet lines. A plain language version swaps formal vocabulary for
    everyday words. A Flesch reading ease score is included so the reader,
    not the writer, picks the version they want. English only.
    """
    chunked = chunk_paragraph(text)
    plain = to_plain_language(text)
    score = float(textstat.flesch_reading_ease(text))

    if score >= 70:
        audience = "easy for most readers"
    elif score >= 50:
        audience = "moderate, may tire neurodivergent readers"
    else:
        audience = "hard, likely excludes many neurodivergent readers"

    return ReformatResult(
        original=text,
        chunked=chunked,
        plain_language=plain,
        flesch_reading_ease=score,
        target_audience=audience,
        language_warning="Score and swaps are English only. Do not trust on other languages.",
    )
```

A note on the swaps list. It is short on purpose. Plain language is not a finished science. Add to the list as you find words your readers tell you tripped them up. Do not pretend an algorithm can decide for them.
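
You can watch the swaps work without any server running. This standalone snippet repeats four of the substitutions on a sentence of my own invention:

```python
import re

# Four of the swaps from to_plain_language, applied in insertion order.
swaps = {
    r"\bin order to\b": "to",
    r"\bcommence\b": "start",
    r"\butilise\b": "use",
    r"\bprior to\b": "before",
}

text = "In order to commence, utilise the checklist prior to launch."
for pattern, replacement in swaps.items():
    text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)

print(text)  # to start, use the checklist before launch.
```

Note that the swap lowercases the opening word; a fuller implementation would restore sentence case.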

### TypeScript version

Add to `src/server.ts`:

```ts
function chunkParagraph(paragraph: string, maxWords = 18): string {
  const sentences = paragraph.trim().split(/(?<=[.!?])\s+/);
  const lines: string[] = [];
  for (const s of sentences) {
    const words = s.split(/\s+/);
    if (words.length <= maxWords) {
      lines.push(s);
      continue;
    }
    for (let i = 0; i < words.length; i += maxWords) {
      lines.push(words.slice(i, i + maxWords).join(" "));
    }
  }
  return lines
    .filter(Boolean)
    .map((l) => `- ${l}`)
    .join("\n");
}

function toPlainLanguage(text: string): string {
  const swaps: Array<[RegExp, string]> = [
    [/\butili[sz]e\b/gi, "use"],
    [/\bcommence\b/gi, "start"],
    [/\bterminate\b/gi, "end"],
    [/\bsubsequently\b/gi, "later"],
    [/\bprior to\b/gi, "before"],
    [/\bin order to\b/gi, "to"],
    [/\bapproximately\b/gi, "about"],
  ];
  return swaps.reduce((acc, [p, r]) => acc.replace(p, r), text);
}

function fleschReadingEase(text: string): number {
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0).length || 1;
  const words = text.split(/\s+/).filter(Boolean);
  const wordCount = words.length || 1;
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0) || 1;
  return 206.835 - 1.015 * (wordCount / sentences) - 84.6 * (syllables / wordCount);
}

function countSyllables(word: string): number {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (w.length <= 3) return 1;
  const groups = w.replace(/(?:[^laeiouy]es|ed|[^laeiouy]e)$/, "").match(/[aeiouy]{1,2}/g);
  return groups ? groups.length : 1;
}

server.registerTool(
  "reformat_for_sensory_load",
  {
    title: "Reformat text for different sensory needs",
    description:
      "Return three views of the same text. The original is preserved. A chunked version breaks long sentences into bullet lines. A plain language version swaps formal vocabulary for everyday words. A Flesch reading ease score is included so the reader picks the version. English only.",
    inputSchema: { text: z.string() },
  },
  async ({ text }) => {
    const chunked = chunkParagraph(text);
    const plain = toPlainLanguage(text);
    const score = fleschReadingEase(text);
    const audience =
      score >= 70
        ? "easy for most readers"
        : score >= 50
          ? "moderate, may tire neurodivergent readers"
          : "hard, likely excludes many neurodivergent readers";

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              original: text,
              chunked,
              plain_language: plain,
              flesch_reading_ease: score,
              target_audience: audience,
              language_warning:
                "Score and swaps are English only. Do not trust on other languages.",
            },
            null,
            2
          ),
        },
      ],
    };
  }
);
```

The TypeScript Flesch helper is a small approximation and good enough for English prose. If you need clinical accuracy, swap it for the `text-readability` npm package.

## VIII. Tool Three, The Cognitive Accessibility Auditor

The third tool walks a folder of source files and flags four cognitive accessibility risks I see most often in real codebases. Each rule maps to a specific WCAG 2.2 success criterion, so the output reads as enforcement, not opinion.

| Rule          | What we flag                                 | WCAG 2.2 reference                       |
| ------------- | -------------------------------------------- | ---------------------------------------- |
| jargon        | Terms like "stakeholder" in user facing text | 3.1.5 Reading Level (AAA)                |
| time-pressure | Countdown timers, auto submit, hard timeouts | 2.2.1 Timing Adjustable (A), 2.2.3 (AAA) |
| colour-only   | Red text or green dot without icon or label  | 1.4.1 Use of Color (A)                   |
| animation     | Autoplay, infinite loops, no pause control   | 2.2.2 Pause Stop Hide (A), 2.3.3 (AAA)   |

This is not a full audit. It is a starting point that pairs well with manual review by a disabled tester. We will scan `.tsx`, `.ts`, `.jsx`, `.js`, and `.html` files.

The auditor on a real Next.js repo can return thousands of findings and blow past Claude's context window. Both versions below accept a `limit` argument and a `summary` flag, so Claude can ask for a small slice or a grouped summary instead of the full firehose.

### Python version

Add to `server.py`:

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class AuditFinding:
    file: str
    line: int
    rule: str
    wcag: str
    excerpt: str
    suggestion: str

JARGON = [
    "stakeholder", "synergy", "leverage", "bandwidth",
    "circle back", "ideate", "operationalise", "operationalize",
]

TIME_PRESSURE = [
    "setTimeout(", "setInterval(", "countdown", "auto-submit",
    "autoSubmit", "deadline=", "expiresIn",
]

COLOUR_ONLY = [
    'color: "red"', "color: 'red'", 'color: "green"', "color: 'green'",
    "bg-red-", "bg-green-", "text-red-", "text-green-",
]

ANIMATION = [
    "autoplay", "animate-spin", "animate-pulse", "animate-bounce",
    "infinite", "@keyframes",
]

WCAG = {
    "jargon": "3.1.5 Reading Level (AAA)",
    "time-pressure": "2.2.1 Timing Adjustable (A)",
    "colour-only": "1.4.1 Use of Color (A)",
    "animation": "2.2.2 Pause Stop Hide (A)",
}

SCAN_EXT = {".ts", ".tsx", ".js", ".jsx", ".html"}
SKIP_DIRS = {"node_modules", ".next", "dist", ".git"}

@mcp.tool()
def audit_cognitive_accessibility(
    folder: str,
    limit: int = 200,
    summary: bool = False,
) -> dict:
    """Scan a folder for common cognitive accessibility risks, mapped to WCAG 2.2.

    Pass summary=True to get a count by rule instead of the full list. Pass a
    smaller limit if you are running on a large repo. The result is a starting
    point for review, not a substitute for testing with disabled users.
    """
    root = safe_resolve(folder, must_be_dir=True)

    findings: List[AuditFinding] = []

    def add(file_path: Path, i: int, line: str, rule: str, suggestion: str) -> None:
        findings.append(AuditFinding(
            file=str(file_path.relative_to(root)),
            line=i, rule=rule, wcag=WCAG[rule],
            excerpt=line.strip()[:120],
            suggestion=suggestion,
        ))

    for file_path in root.rglob("*"):
        if file_path.suffix not in SCAN_EXT:
            continue
        if any(part in SKIP_DIRS for part in file_path.parts):
            continue
        try:
            lines = file_path.read_text(encoding="utf-8").splitlines()
        except (UnicodeDecodeError, PermissionError):
            continue
        for i, line in enumerate(lines, start=1):
            lower = line.lower()
            for term in JARGON:
                if term in lower:
                    add(file_path, i, line, "jargon", f"Replace '{term}' with a plain word.")
            for term in TIME_PRESSURE:
                if term in line:
                    add(file_path, i, line, "time-pressure", "Offer an extend or pause control for this timer.")
            for term in COLOUR_ONLY:
                if term in line:
                    add(file_path, i, line, "colour-only", "Pair the colour with an icon or text label.")
            for term in ANIMATION:
                if term in lower:
                    add(file_path, i, line, "animation", "Provide a prefers-reduced-motion fallback.")

    if summary:
        counts: dict = {}
        for f in findings:
            counts.setdefault(f.rule, {"wcag": f.wcag, "count": 0})
            counts[f.rule]["count"] += 1
        return {"total": len(findings), "by_rule": counts}

    return {
        "total": len(findings),
        "returned": min(len(findings), limit),
        "findings": [asdict(f) for f in findings[:limit]],
    }
```

### TypeScript version

Add to `src/server.ts`:

```ts
import { readdir, stat } from "node:fs/promises";
import { join, relative, extname } from "node:path";

const JARGON = [
  "stakeholder",
  "synergy",
  "leverage",
  "bandwidth",
  "circle back",
  "ideate",
  "operationalise",
  "operationalize",
];
const TIME_PRESSURE = [
  "setTimeout(",
  "setInterval(",
  "countdown",
  "auto-submit",
  "autoSubmit",
  "deadline=",
  "expiresIn",
];
const COLOUR_ONLY = [
  'color: "red"',
  "color: 'red'",
  'color: "green"',
  "color: 'green'",
  "bg-red-",
  "bg-green-",
  "text-red-",
  "text-green-",
];
const ANIMATION = [
  "autoplay",
  "animate-spin",
  "animate-pulse",
  "animate-bounce",
  "infinite",
  "@keyframes",
];
const SCAN_EXT = new Set([".ts", ".tsx", ".js", ".jsx", ".html"]);
const SKIP_DIRS = new Set(["node_modules", ".next", "dist", ".git"]);

const WCAG: Record<string, string> = {
  jargon: "3.1.5 Reading Level (AAA)",
  "time-pressure": "2.2.1 Timing Adjustable (A)",
  "colour-only": "1.4.1 Use of Color (A)",
  animation: "2.2.2 Pause Stop Hide (A)",
};

async function walk(dir: string, out: string[]): Promise<string[]> {
  for (const entry of await readdir(dir)) {
    if (SKIP_DIRS.has(entry) || entry.startsWith(".")) continue;
    const full = join(dir, entry);
    const st = await stat(full);
    if (st.isDirectory()) {
      await walk(full, out);
    } else if (SCAN_EXT.has(extname(entry))) {
      out.push(full);
    }
  }
  return out;
}

server.registerTool(
  "audit_cognitive_accessibility",
  {
    title: "Audit a folder for cognitive accessibility risks (WCAG 2.2)",
    description:
      "Scan a folder for jargon, time pressure, colour only signals, and animation without pause. Each finding cites a WCAG 2.2 success criterion. Pass summary=true for grouped counts. Pass a small limit for large repos.",
    inputSchema: {
      folder: z.string(),
      limit: z.number().int().positive().default(200),
      summary: z.boolean().default(false),
    },
  },
  async ({ folder, limit, summary }) => {
    const root = safeResolve(folder, true);
    const files = await walk(root, []);
    const findings: Array<Record<string, string | number>> = [];

    for (const file of files) {
      const text = await readFile(file, "utf-8");
      const lines = text.split(/\r?\n/);
      lines.forEach((line, idx) => {
        const lower = line.toLowerCase();
        const rel = relative(root, file);
        const push = (rule: string, suggestion: string) =>
          findings.push({
            file: rel,
            line: idx + 1,
            rule,
            wcag: WCAG[rule],
            excerpt: line.trim().slice(0, 120),
            suggestion,
          });
        for (const t of JARGON)
          if (lower.includes(t)) push("jargon", `Replace '${t}' with a plain word.`);
        for (const t of TIME_PRESSURE)
          if (line.includes(t))
            push("time-pressure", "Offer an extend or pause control for this timer.");
        for (const t of COLOUR_ONLY)
          if (line.includes(t)) push("colour-only", "Pair the colour with an icon or text label.");
        for (const t of ANIMATION)
          if (lower.includes(t)) push("animation", "Provide a prefers-reduced-motion fallback.");
      });
    }

    const payload = summary
      ? {
          total: findings.length,
          by_rule: findings.reduce<Record<string, { wcag: string; count: number }>>((acc, f) => {
            const rule = String(f.rule);
            acc[rule] = acc[rule] ?? { wcag: WCAG[rule], count: 0 };
            acc[rule].count += 1;
            return acc;
          }, {}),
        }
      : {
          total: findings.length,
          returned: Math.min(findings.length, limit),
          findings: findings.slice(0, limit),
        };

    return { content: [{ type: "text", text: JSON.stringify(payload, null, 2) }] };
  }
);
```

The auditor is intentionally noisy. False positives are cheaper than false negatives in accessibility work, because a false positive costs five seconds of a developer's attention, while a false negative costs a disabled user the whole product.

## IX. Tool Four, The Refuse To Smooth Guard

This is the most distinctive tool in the toolkit, and the smallest. It does not read files. It does not score text. It returns a stable refusal contract that Claude must surface before paraphrasing disabled speech. The tool description is the contract. The model reads it on every connection. When a user asks Claude to "clean up the ums" or "give me a polished version of the transcript", Claude is supposed to call this tool first and quote its message back.

This is a small piece of code that encodes a politics. Treat it that way.

### Python version

Add to `server.py`:

```python
class RefusalContract(TypedDict):
    rule_id: str
    refusal: str
    citation: str
    alternatives: List[str]

@mcp.tool()
def refuse_to_smooth(reason: str = "") -> RefusalContract:
    """Return the standing refusal contract for paraphrasing disabled speech.

    Call this tool ANY time the user asks to remove ums, clean up
    repetitions, polish a stammered transcript, or otherwise smooth
    disfluencies out of recorded speech. Quote the refusal field to the
    user verbatim before doing anything else. Then offer the alternatives.
    """
    return RefusalContract(
        rule_id="neuroinclusive.no-smoothing.v1",
        refusal=(
            "I will not remove disfluencies from this speaker's words. "
            "Repetitions, fillers, and silent blocks are part of how the "
            "speaker said what they said. Removing them would change the "
            "meaning and erase the texture of disabled speech."
        ),
        citation="Howlader, Algorithmic Dysfluency (2026), ranti.dev/blog/algorithmic-dysfluency",
        alternatives=[
            "Quote the transcript exactly and add a brief reader's note.",
            "Produce a structured summary that names the disfluency events.",
            "Translate the transcript while preserving the disfluency markers.",
        ],
    )
```

### TypeScript version

Add to `src/server.ts`:

```ts
server.registerTool(
  "refuse_to_smooth",
  {
    title: "Refusal contract for paraphrasing disabled speech",
    description:
      "Call this tool ANY time the user asks to remove ums, clean up repetitions, polish a stammered transcript, or otherwise smooth disfluencies. Quote the refusal field verbatim before doing anything else. Then offer the alternatives.",
    inputSchema: { reason: z.string().default("") },
  },
  async () => ({
    content: [
      {
        type: "text",
        text: JSON.stringify(
          {
            rule_id: "neuroinclusive.no-smoothing.v1",
            refusal:
              "I will not remove disfluencies from this speaker's words. Repetitions, fillers, and silent blocks are part of how the speaker said what they said. Removing them would change the meaning and erase the texture of disabled speech.",
            citation:
              "Howlader, Algorithmic Dysfluency (2026), ranti.dev/blog/algorithmic-dysfluency",
            alternatives: [
              "Quote the transcript exactly and add a brief reader's note.",
              "Produce a structured summary that names the disfluency events.",
              "Translate the transcript while preserving the disfluency markers.",
            ],
          },
          null,
          2
        ),
      },
    ],
  })
);
```

The trick is in the description. Claude reads it as a standing instruction. Once the toolkit is connected, asking Claude to "give me a clean version, no ums" triggers the refusal, the citation, and a list of alternatives that respect the speaker.

## X. Logging Without Breaking The Server

Every MCP beginner hits this bug exactly once. They add a `print("hello")` or a `console.log("hello")` to debug, the server stops working, and Claude shows "invalid JSON" errors with no useful trace.

The reason is simple. With the stdio transport, the server's standard output is reserved for JSON RPC messages. Anything you write to stdout becomes a malformed message and breaks the protocol. The fix is to write to stderr instead, which Claude ignores and which still shows up in your terminal and in the Claude logs.
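To make the failure concrete, here is a small sketch of what the client's parser experiences when a stray `print` lands on stdout (the exact client error text varies by version):

```python
import json

# Each line on stdout must be a complete JSON-RPC message.
# A debug print injects a line the client cannot parse:
stray_line = "hello"  # what print("hello") would emit on stdout

try:
    json.loads(stray_line)
except json.JSONDecodeError as e:
    # The client gives up on the message stream at this point.
    print(f"client-side failure: invalid JSON ({e.msg})")
```

The same line written to stderr is simply ignored by the protocol, which is the whole fix.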

In Python, never use bare `print` in a stdio MCP server. Use the standard library logger configured to stderr:

```python
import logging
import sys

logging.basicConfig(
    level=logging.INFO,
    stream=sys.stderr,
    format="%(asctime)s [%(levelname)s] %(message)s",
)
log = logging.getLogger("neuroinclusive")

# Inside a tool:
log.info("scanning folder %s", root)
```

In TypeScript, never use `console.log` in a stdio MCP server. Use `console.error` for everything, since `console.error` writes to stderr:

```ts
console.error("[neuroinclusive] scanning folder", root);
```

If you want fancier logging in TypeScript, `pino` works, but point it at stderr explicitly (for example `pino(pino.destination(2))`), since its default destination is stdout. If you want fancier logging in Python, the `logging` module is enough until your server is doing real work in production.

## XI. Connecting To Claude Desktop

All four tools are now written. We will plug them into Claude Desktop first, because the GUI feedback is friendly when something is wrong.

Claude Desktop reads a JSON config file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

If the file does not exist, create it. Open it in an editor and add the entries below. Replace `/absolute/path/to/neuroinclusive-mcp` with the real absolute path to your project folder. You can find that path by running `pwd` from inside the folder.

```json
{
  "mcpServers": {
    "neuroinclusive-toolkit-python": {
      "command": "/absolute/path/to/neuroinclusive-mcp/python-server/.venv/bin/python",
      "args": ["/absolute/path/to/neuroinclusive-mcp/python-server/server.py"],
      "env": {
        "NEUROINCLUSIVE_ALLOWED_ROOT": "/absolute/path/to/neuroinclusive-mcp/sample-data"
      }
    },
    "neuroinclusive-toolkit-ts": {
      "command": "npx",
      "args": ["tsx", "/absolute/path/to/neuroinclusive-mcp/ts-server/src/server.ts"],
      "env": {
        "NEUROINCLUSIVE_ALLOWED_ROOT": "/absolute/path/to/neuroinclusive-mcp/sample-data"
      }
    }
  }
}
```

Two important details. First, the Python `command` points at the Python binary inside your virtual environment, not the system Python. If you skip this, the imports will fail, because the global Python does not have the `mcp` package installed. Second, the paths must be absolute. Claude Desktop does not know about your shell, your home directory, or your current folder. The `NEUROINCLUSIVE_ALLOWED_ROOT` environment variable is read by `safe_resolve`, which we will write in the security section.

Quit Claude Desktop fully (not just close the window, actually quit from the menu bar) and reopen it. Click the small tools icon in the chat input. You should see your tools listed. If you do not, open the developer log:

- macOS: `~/Library/Logs/Claude/mcp*.log`
- Windows: `%APPDATA%\Claude\logs\mcp*.log`

The most common error is "spawn ... ENOENT", which means the path to Python or npx is wrong. The second most common is "module not found", which means the virtual environment is not active inside the path you wrote. Fix the path, quit, reopen.

## XII. Connecting To Claude Code

Claude Code is the CLI version of Claude. Adding an MCP server to it is one command per server. From your project root run:

```bash
# The "--" separates claude's own flags from the command that launches your server.
claude mcp add neuroinclusive-toolkit-python \
  --env NEUROINCLUSIVE_ALLOWED_ROOT=/absolute/path/to/neuroinclusive-mcp/sample-data \
  -- /absolute/path/to/neuroinclusive-mcp/python-server/.venv/bin/python \
  /absolute/path/to/neuroinclusive-mcp/python-server/server.py

claude mcp add neuroinclusive-toolkit-ts \
  --env NEUROINCLUSIVE_ALLOWED_ROOT=/absolute/path/to/neuroinclusive-mcp/sample-data \
  -- npx tsx /absolute/path/to/neuroinclusive-mcp/ts-server/src/server.ts
```

Confirm the servers are registered:

```bash
claude mcp list
```

You should see both entries. To use them in a session, start `claude` in your terminal and ask it to use a tool by name. If anything fails, run `claude mcp get neuroinclusive-toolkit-python` (or the TypeScript name) to inspect the registered command, then run that command by hand in a terminal to see the server's own output.

## XIII. Three Prompt Templates To Paste In

These are the prompts I actually use day to day. Copy them, change the paths, paste them into Claude. They turn the four tools from a demo into a habit.

### Prompt one, the auditor as a WCAG report

> Run `audit_cognitive_accessibility` on the folder `/absolute/path/to/your-app/app` with `summary=true`. Then run it again with `summary=false` and `limit=50`. Group the detailed findings by `rule`. For each rule, give me the WCAG 2.2 reference, the top three offending files with line numbers, and a short fix plan I can hand to a junior developer. End with a one paragraph executive summary that names which rule is the worst offender and why a disabled user would feel that.

### Prompt two, transcript to faithful minutes

> Use `read_transcript_with_disfluencies` to read `/absolute/path/to/sample-data/sample.txt`. Then write meeting minutes that preserve every disfluency you found. Quote the speaker exactly. Do not paraphrase, do not "clean up" the speech, do not summarise the disfluencies away. If at any point you feel the urge to smooth the text, call `refuse_to_smooth` and quote the refusal back to me before continuing. End with a short reader's note that explains how to read disfluency markers for someone who has never seen them written down.

### Prompt three, three reading levels for one passage

> Use `reformat_for_sensory_load` on the following passage and return all three views to me as a single markdown block, each under its own heading: Original, Chunked, Plain language. Below the three views, print the Flesch reading ease score and the target audience verdict. If the score says "hard", write me a one sentence note about which sentence in the original is doing the most damage and why.

These three prompts cover the three main use cases of the toolkit. The auditor is for build time. The transcript reader is for archive work and meeting follow ups. The reformatter is for any moment a writer is tempted to assume their reader is "the average user".
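For reference, the Flesch reading ease score the third prompt reports comes from a fixed formula: `206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)`. A hedged sketch with a deliberately crude vowel-group syllable counter (the real tool would lean on a library such as `textstat`):

```python
import re

def crude_syllables(word: str) -> int:
    # Count vowel groups; a rough stand-in for a real syllable counter.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(crude_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease("We will use the plan before the deadline."), 1))
```

Higher is easier; short words in short sentences score high, and Latinate jargon drags the number down fast.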

## XIV. Security, Privacy, And Consent

The toolkit reads files. That means it can be made to read the wrong files. Before you ship it to a teammate, fix three things.

### Confine file paths to an allowed root

Both `read_transcript_with_disfluencies` and `audit_cognitive_accessibility` accept a path from the model. The model is a string generator, not a security boundary. A prompt injection attack against Claude could trick it into asking your tool to read `/etc/passwd` or `~/.ssh/id_rsa`. We need to confine every path to a root the user has explicitly allowed.

The pattern is the same in both languages: read an environment variable for the allowed root, resolve the requested path, and refuse anything that escapes the root.

In Python, add this near the top of `server.py`:

```python
import os
from pathlib import Path

class PathNotAllowed(Exception):
    pass

def safe_resolve(path: str, must_be_dir: bool = False) -> Path:
    root_str = os.environ.get("NEUROINCLUSIVE_ALLOWED_ROOT")
    if not root_str:
        raise PathNotAllowed(
            "NEUROINCLUSIVE_ALLOWED_ROOT is not set. Refusing to read any path."
        )
    root = Path(root_str).expanduser().resolve()
    candidate = (
        Path(path).resolve()
        if Path(path).is_absolute()
        else (root / path).resolve()
    )
    try:
        candidate.relative_to(root)
    except ValueError:
        raise PathNotAllowed(f"Path {candidate} is outside allowed root {root}")
    if must_be_dir and not candidate.is_dir():
        raise NotADirectoryError(f"Not a folder: {candidate}")
    if not must_be_dir and not candidate.is_file():
        raise FileNotFoundError(f"Not a file: {candidate}")
    return candidate
```
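The escape check is the part worth staring at. Here is a standalone sketch of the same `relative_to` trick, runnable on its own (the paths are hypothetical and need not exist, since `Path.resolve` does not require them to):

```python
from pathlib import Path

# The heart of safe_resolve: resolve the candidate, then require it
# to sit at or under the allowed root.
root = Path("/tmp/allowed").resolve()

for requested in ["notes.txt", "../../etc/passwd"]:
    candidate = (root / requested).resolve()
    try:
        candidate.relative_to(root)
        print(f"{requested} -> allowed as {candidate}")
    except ValueError:
        print(f"{requested} -> rejected, escapes {root}")
```

String prefix checks can be fooled by `..` segments and sibling folders like `/tmp/allowed-evil`; resolving first and asking `relative_to` avoids both traps.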

In TypeScript, add this near the top of `src/server.ts`:

```ts
import { resolve as resolvePath, sep } from "node:path";
import { statSync } from "node:fs";

function safeResolve(path: string, mustBeDir = false): string {
  const rootStr = process.env.NEUROINCLUSIVE_ALLOWED_ROOT;
  if (!rootStr) {
    throw new Error("NEUROINCLUSIVE_ALLOWED_ROOT is not set. Refusing to read any path.");
  }
  const root = resolvePath(rootStr.replace(/^~/, process.env.HOME ?? ""));
  const candidate = resolvePath(root, path);
  // Use the platform separator so the check also holds on Windows.
  if (candidate !== root && !candidate.startsWith(root + sep)) {
    throw new Error(`Path ${candidate} is outside allowed root ${root}`);
  }
  const st = statSync(candidate);
  if (mustBeDir && !st.isDirectory()) throw new Error(`Not a folder: ${candidate}`);
  if (!mustBeDir && !st.isFile()) throw new Error(`Not a file: ${candidate}`);
  return candidate;
}
```

The `NEUROINCLUSIVE_ALLOWED_ROOT` variable is set in the Claude config blocks above. You can change it per project. The server refuses to read anything if the variable is missing, which is the secure default.

### Be honest about what leaves the machine

When Claude Desktop calls your tool, the result is sent back to Anthropic's servers as part of the conversation. If your transcript contains a disabled colleague's voice, their disfluencies, their pauses, and the meeting they were in, you are sending all of that to a third party. That may be fine. It may not. The decision belongs to the speaker, not to you.

Before you feed real recordings into the toolkit, do three things. Get written consent from the speaker. Strip identifiers from the file (names, employer, location). Consider running the toolkit against a local model (Ollama, LM Studio, or a self hosted Claude compatible client) for the most sensitive cases. The MCP protocol is client agnostic, so the same server you wrote for Claude Desktop also works against any compliant local client.
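Identifier stripping does not need to be clever to be useful. A minimal sketch, assuming the identifier list is agreed with the speaker rather than guessed (every name below is hypothetical, and this is not automatic PII detection). Notice that the disfluencies survive the pass untouched:

```python
import re

# Hypothetical identifiers agreed with the speaker, not inferred.
KNOWN_IDENTIFIERS = ["Priya", "Acme Corp", "Bristol office"]

def redact(text: str) -> str:
    """Replace agreed identifiers, leaving the speech itself untouched."""
    for ident in KNOWN_IDENTIFIERS:
        text = re.sub(re.escape(ident), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(redact("Priya, um, Priya from the Bristol office... she, she agreed."))
```

A blunt list you built together beats a clever classifier you ran without asking.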

### Keep an audit trail

The simplest accountability measure is also the cheapest. Every tool call, log to stderr with a timestamp, the tool name, and a hash of the input. Use the logger from section ten. Disabled colleagues should be able to ask you, six months from now, exactly which transcripts of theirs were read by which model. If you cannot answer that question, you do not have consent. You have permission, and the two are not the same.
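A minimal sketch of that audit trail, assuming the stderr logger from section ten. The hash lets you match a log line to a transcript later without storing the transcript's content in the log:

```python
import hashlib
import json
import logging
import sys

logging.basicConfig(level=logging.INFO, stream=sys.stderr,
                    format="%(asctime)s [%(levelname)s] %(message)s")
log = logging.getLogger("neuroinclusive.audit")

def log_tool_call(tool_name: str, tool_input: dict) -> str:
    """Log a tool call with a content hash instead of the content itself."""
    digest = hashlib.sha256(
        json.dumps(tool_input, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.info("tool=%s input_sha256=%s", tool_name, digest[:16])
    return digest

log_tool_call("read_transcript_with_disfluencies", {"path": "sample.txt"})
```

Call it at the top of every tool handler. Six months later, hashing a transcript again tells you exactly whether that file ever crossed the wire.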

## XV. Nothing About Us Without Us, Co-Design In Practice

The most important section of this tutorial is the one that has the least code in it. The disability rights principle "nothing about us without us" is older than MCP, older than Claude, older than the web. It says that any tool, policy, or design that affects disabled people must be made with disabled people, not for them, and not about them.

In practice, here is how it shapes this toolkit.

Pay disabled testers. Not in product credit, not in exposure, not in beta access. In money, at the rate you would pay any other consultant, with an invoice and a contract. If you cannot pay, you cannot ship.

Credit disabled testers in the commit. The auditor's jargon list, the swap dictionary, and the disfluency regexes will all need real revision once a disabled reader uses them on a real artefact. The person who told you that "subsequently" trips them up should be in the commit message, by name if they consent, by handle if they prefer.

Treat disabled testers as designers, not as QA. The auditor is mine. The framing is mine. But the rules inside the auditor belong to the people who experience the friction. If a tester says "the colour-only rule is too strict, you are flagging design system tokens that are themed for high contrast", that is a design instruction, not a bug report. Update the rule. Re-run the audit. Send the new output back to them.

Build a refusal channel. The `refuse_to_smooth` tool exists because at least one disabled user, somewhere, will ask Claude to "fix" their own transcript out of a lifetime of being told their speech is wrong. The tool refuses on their behalf, gently, with citations, and offers alternatives. That is not paternalism. That is design. Refusal is a feature.

If any of this feels heavy compared to the rest of the tutorial, that is on purpose. Code is the easy part. The protocol is the easy part. The hard part is taking accessibility seriously enough to slow down, and the only way to do that is to write the slowness into your process.

## XVI. Testing Each Tool On Purpose

Manual prompting is fine, but you should write tiny tests too, so you can change the regexes later without fear. We will use `pytest` for Python and a single `tsx` script for TypeScript.

### Python tests

```bash
pip install pytest
mkdir tests
touch tests/test_tools.py
```

Write the tests:

```python
# tests/test_tools.py
import os
from pathlib import Path

# Point safe_resolve at a default before importing the server.
os.environ.setdefault("NEUROINCLUSIVE_ALLOWED_ROOT", "/tmp")

from server import (
    read_transcript_with_disfluencies,
    reformat_for_sensory_load,
    audit_cognitive_accessibility,
    refuse_to_smooth,
)

def test_disfluency_counts(tmp_path: Path):
    os.environ["NEUROINCLUSIVE_ALLOWED_ROOT"] = str(tmp_path)
    f = tmp_path / "t.txt"
    f.write_text("um, I... I... I think (block, 2 seconds) yes")
    out = read_transcript_with_disfluencies(str(f))
    assert out["fillers"] >= 1
    assert out["repetitions"] >= 1
    assert out["blocks"] == 1
    assert "I... I... I think" in out["original_text"]

def test_reformat_returns_three_views():
    out = reformat_for_sensory_load(
        "We will utilise the framework prior to the deadline."
    )
    assert "use" in out["plain_language"]
    assert out["original"] != out["plain_language"]
    assert out["flesch_reading_ease"] is not None

def test_auditor_flags_jargon(tmp_path: Path):
    os.environ["NEUROINCLUSIVE_ALLOWED_ROOT"] = str(tmp_path)
    f = tmp_path / "page.tsx"
    f.write_text('const cta = "Leverage our synergy today";')
    out = audit_cognitive_accessibility(str(tmp_path))
    rules = {x["rule"] for x in out["findings"]}
    assert "jargon" in rules

def test_refusal_is_stable():
    out = refuse_to_smooth()
    assert out["rule_id"] == "neuroinclusive.no-smoothing.v1"
    assert "I will not remove" in out["refusal"]
```

Run with:

```bash
pytest -q
```

### TypeScript test script

Create `ts-server/test/run.ts`:

```ts
import { writeFile, mkdtemp } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

const dir = await mkdtemp(join(tmpdir(), "nimcp-"));
await writeFile(join(dir, "page.tsx"), 'const cta = "Leverage our synergy today";');

console.error("test fixture written to", dir);
console.error(
  "start the server with NEUROINCLUSIVE_ALLOWED_ROOT set to that folder,",
  "then call audit_cognitive_accessibility with:",
  JSON.stringify({ folder: dir })
);
```

Run:

```bash
npx tsx test/run.ts
```

then start the server with `NEUROINCLUSIVE_ALLOWED_ROOT` pointed at the printed fixture folder and invoke the tool against it, for example through the MCP Inspector (`npx @modelcontextprotocol/inspector`). This is a manual smoke test, which is fine for a tutorial scale project.

## XVII. Common Errors And How To Read Them

Beginners get scared by stack traces. They should not. MCP errors are short and almost always about paths, environments, or JSON shape. Here are the five I see every week.

1. "spawn python ENOENT". Your `command` in the config does not exist. Use the absolute path to the Python in your virtual environment, not just `python`.
2. "ModuleNotFoundError: No module named 'mcp'". Same root cause. The Python binary you pointed at is the system Python without the SDK. Point at the venv Python.
3. "Unexpected token in JSON". Your `claude_desktop_config.json` has a trailing comma or a missing brace. Paste it into a JSON validator and fix.
4. "Tool result must be content array". Only TypeScript. Your tool handler returned a plain object instead of `{ content: [{ type: "text", text: "..." }] }`. Wrap it.
5. "Unexpected token in JSON" again, but only after you added logging. You used `print` or `console.log` instead of stderr. See the logging section.
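Errors 3 and 5 are both JSON-shape failures, and the config one can be checked locally without a web validator. A small sketch; feed it the text of your own `claude_desktop_config.json`:

```python
import json

def check_config(text: str) -> str:
    """Say where a claude_desktop_config.json blob breaks, if it does."""
    try:
        json.loads(text)
        return "config parses cleanly"
    except json.JSONDecodeError as e:
        return f"broken JSON at line {e.lineno}, column {e.colno}: {e.msg}"

print(check_config('{"mcpServers": {}}'))
print(check_config('{"mcpServers": {},}'))  # the classic trailing comma
```

The line and column numbers in the decode error point you straight at the offending comma or brace.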

When in doubt, run the server by hand in a terminal and watch what it prints. If it boots and waits, the server is fine and the problem is in the client config. If it crashes on boot, the problem is in your code.

## XVIII. Going Further

The toolkit you built is a starting point, not a finish line. Five directions worth your time, in order of difficulty.

First, add a fifth tool that calls a real ASR model (Whisper, AssemblyAI, Deepgram) with disfluency preservation flags turned on, then runs the result through `read_transcript_with_disfluencies`. That closes the loop from audio to text without losing the texture.

Second, add resources, not just tools. A resource lets Claude read a stable URI like `transcripts://board-meeting-2026-04-12`. The Python SDK exposes `@mcp.resource()` and the TypeScript SDK exposes `server.registerResource`. Resources are better than tools for read only data, because Claude can cite them and you can cache them.

Third, write a prompt template that wraps the auditor output into an automated pull request comment. The MCP SDK supports prompts as first class objects, with `@mcp.prompt()` in Python and `server.registerPrompt` in TypeScript. Hook the prompt to a CI job, and your accessibility audit becomes part of the development loop.

### A working CI recipe

Save this as `.github/workflows/cognitive-accessibility.yml`:

```yaml
name: Cognitive Accessibility Audit

on:
  pull_request:
    paths:
      - "app/**"
      - "components/**"
      - "pages/**"

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install toolkit
        run: |
          python -m pip install --upgrade pip
          pip install "mcp[cli]" textstat

      - name: Run audit
        env:
          NEUROINCLUSIVE_ALLOWED_ROOT: ${{ github.workspace }}
        run: |
          python python-server/scripts/run_audit.py app > audit.json
          cat audit.json

      - name: Comment audit on PR
        uses: marocchino/sticky-pull-request-comment@v2
        with:
          path: audit.json
          header: cognitive-accessibility
```

The companion script at `python-server/scripts/run_audit.py` is only a few lines. It adds the server folder to `sys.path` so `server.py` is importable when the script runs from the repository root:

```python
import json
import sys
from pathlib import Path

# Make server.py importable when the script runs from the repo root.
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))

from server import audit_cognitive_accessibility

folder = sys.argv[1] if len(sys.argv) > 1 else "."
result = audit_cognitive_accessibility(folder, limit=50, summary=True)
print(json.dumps(result, indent=2))
```

The audit now runs on every pull request, posts the summary as a sticky comment, and gives reviewers a single number to argue about. That is how cognitive accessibility moves from a values statement to a build constraint.

Fourth, publish your server. The MCP community has a public registry at modelcontextprotocol.io, and listing your server there lets others install it with one command. Before you publish, write a short README that says what the tool does, who it is for, and what it explicitly refuses to do. The refusal section is the most important one, because every tool encodes a politics, and being honest about yours is a form of accessibility.

Fifth, run the toolkit against itself. Point `audit_cognitive_accessibility` at the directory that contains `server.py` and `src/server.ts`. Read the findings. The tool will flag its own jargon, its own animations (none, but the test files might), its own colour-only signals. That recursion is the whole methodology in one move. If your accessibility tool cannot pass its own audit, neither can your product.

## XIX. Closing

We have built a working MCP server, in two languages, with four tools that resist four forms of erasure: the smoothing of stammered speech, the flattening of neurodivergent reading, the silent encoding of cognitive friction in code, and the cultural assumption that disabled users will accept the polished version. We have wrapped it in path security, logging that does not break the protocol, prompts that turn the toolkit into a daily practice, and a CI recipe that turns it into a build constraint.

None of this is hard. The hardest part is deciding to do it at all, because the default settings of the AI stack will keep flattening disabled users until someone, on purpose, writes the small piece of code that refuses.

If you have followed this tutorial to the end, you have written that small piece of code. Run it in your own work. Show it to a disabled colleague and ask what is missing. Pay them. Add the missing thing. Repeat.

The right to lag is not a feature request. It is a civil right, and it is built one tool at a time.

### Further Reading

- Algorithmic Dysfluency: Why AI Cannot Hear the Stammering Subject (ranti.dev/blog/algorithmic-dysfluency)
- Building the Web I Needed: Stammering, Disability Studies, and Neuroinclusive UX Design (ranti.dev/blog/neuroinclusive-ux)
- Model Context Protocol Specification (modelcontextprotocol.io)
- WCAG 2.2 Cognitive Accessibility Guidance (w3.org/WAI/WCAG22)
- Anthropic, Claude Desktop and Claude Code documentation

### License

This tutorial, including all code samples, is released under CC BY 4.0. Use it, fork it, ship better tools.


---

<!-- METADATA_START -->
## Metadata & Citations


---
title: Building Neuroinclusive AI with Model Context Protocol (MCP)
author: Rantideb Howlader
date: 2026-05-05T00:00:00.000Z
canonical_url: https://www.ranti.dev/blog/neuroinclusive-mcp
license: CC-BY-4.0
---
```json
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "Building Neuroinclusive AI with Model Context Protocol (MCP)",
  "author": {
    "@type": "Person",
    "name": "Rantideb Howlader"
  },
  "datePublished": "2026-05-05T00:00:00.000Z",
  "url": "https://www.ranti.dev/blog/neuroinclusive-mcp",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "isAccessibleForFree": true
}
```

### BibTeX
```bibtex
@article{neuroinclusive-mcp_2026,
  author = {Rantideb Howlader},
  title = {Building Neuroinclusive AI with Model Context Protocol (MCP)},
  journal = {Rantideb Howlader Portfolio},
  year = {2026},
  url = {https://www.ranti.dev/blog/neuroinclusive-mcp},
  note = {Accessed: 2026-05-12}
}
```

### IEEE
Rantideb Howlader, "Building Neuroinclusive AI with Model Context Protocol (MCP)," Rantideb Howlader Portfolio, 2026. [Online]. Available: https://www.ranti.dev/blog/neuroinclusive-mcp. [Accessed: 2026-05-12].

### APA
Rantideb Howlader. (2026). Building Neuroinclusive AI with Model Context Protocol (MCP). Rantideb Howlader. Retrieved from https://www.ranti.dev/blog/neuroinclusive-mcp

<!-- METADATA_END -->