
MCP as CMS: Run Your Brand Admin as an MCP Server in 2026

MCP as CMS is the 2026 pattern for running your brand site admin as a private MCP server. The 9-tool surface, Bearer auth, and the convergence test.

Abhishek Chaudhary · 14 min read

MCP as CMS is the 2026 pattern of exposing your brand site's admin layer as a private Model Context Protocol server, so any MCP-aware client (Claude, Cursor, ChatGPT, an Agent SDK loop) can drive your content with typed tools instead of you opening a browser. I shipped this on my own site this past month: 9 tools that mirror the database shape, Bearer-secret auth that runs on every method including discovery, and discoverability hygiene that keeps the endpoint invisible to Google. This guide is the pattern, the security shape I picked, and the one primitive that made the whole thing earn its keep.

TL;DR

  • Mount one Streamable HTTP MCP route handler at a path you do not link from anywhere on the public site.
  • Authenticate every method, including initialize and tools/list, with a single shared secret and a timing-safe compare.
  • Ship 9 tools that mirror your DB shape: list, get, create, update, atomic patch, delete on each writable resource, plus a stats tool.
  • Add an X-Robots-Tag: noindex header on every response and confirm /sitemap.xml, /robots.txt, /llms.txt, your JSON-LD, and your nav all naturally exclude the path.
  • Treat atomic find-and-replace (patchBlog shape) as the cheap LLM-native edit primitive. It is the tool that justifies the whole server.

What an MCP-as-CMS server actually is

The Model Context Protocol is an open standard for exposing tools to LLM clients with typed inputs, typed outputs, and a discovery layer. Most published MCP servers wrap third-party APIs (GitHub, Linear, Slack) so a model can read and write them on your behalf. An MCP-as-CMS server is the inverse direction. You expose your own site's content layer (blog rows, media records, taxonomy, stats) as MCP tools, and your own LLM session writes to them.

The trade is small surface, big payoff. Instead of building a chat-driven admin UI, you let the client the model already runs in (Claude Code, Cursor, ChatGPT desktop, an Agent SDK loop) be the admin UI. You ship one route handler and a Zod schema per tool. The client takes care of the prompt, the rendering, the back-and-forth.

I run this site on the solo-founder AI stack I documented separately: Next.js 16 App Router, SQLite via Drizzle, BetterAuth, Docker on a small VPS. The MCP route is a single App Router file that imports the same Drizzle client and the same schema the human admin form uses. There is no second source of truth.

The 9-tool surface I shipped

Pick the smallest surface that lets a model do real work without falling back to a human form. For a content site with two writable resources, blog posts and media, that surface is 9 tools:

Tool       | Resource | Purpose
listBlogs  | blog     | Page through rows; filter by published / draft; substring search on title or slug
getBlog    | blog     | Fetch one row by id or slug; supports field projection and a markdown-snippet mode
createBlog | blog     | Insert a new row from validated input
updateBlog | blog     | Replace whole-row fields when the body is being rewritten
patchBlog  | blog     | Apply a batch of exact-string find-and-replace edits, atomic, with occurrence guards
deleteBlog | blog     | Hard delete; protected by id requirement
listMedia  | media    | Page through audio and image rows; filter by type and published state
getMedia   | media    | Fetch one media row by id or slug
getStats   | site     | Aggregate counts across blog, media, and users for a one-call site readout

Two design choices in this surface are worth naming. First, patchBlog is separate from updateBlog. A whole-body update for a typo fix is a wasted round-trip; a find-and-replace tool with occurrence guards is the cheap path. Second, getBlog accepts a searchMarkdown mode that returns matched snippets in place of the full body. That is the verification path after a patchBlog batch, and it is what makes atomic edits trustworthy.

Media is read-only in the MCP surface for now. Audio uploads still go through the admin form because file ingestion runs an ffmpeg pipeline that is awkward to expose as a tool argument. If you handle media as URLs only, you can expose write tools too.
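
For concreteness, here is roughly what one of these registrations looks like. This is a minimal sketch assuming the official TypeScript MCP SDK (@modelcontextprotocol/sdk) and Zod; the import paths, column names, and filter logic are illustrative, not the exact code on my site.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import { db } from "@/lib/db";        // the same Drizzle client the admin form uses (placeholder path)
import { blogs } from "@/db/schema";  // the same table definitions (placeholder path)

const server = new McpServer({ name: "brand-admin", version: "1.0.0" });

server.tool(
  "listBlogs",
  {
    published: z.boolean().optional(),                   // filter published vs draft
    search: z.string().optional(),                       // substring match on title or slug
    limit: z.number().int().min(1).max(50).default(20),
    offset: z.number().int().min(0).default(0),
  },
  async ({ published, search, limit, offset }) => {
    let rows = await db.select().from(blogs).limit(limit).offset(offset);
    // Filtering in JS keeps the sketch schema-agnostic; the real handler
    // pushes these predicates into the SQL query instead.
    if (published !== undefined) rows = rows.filter((r) => Boolean(r.published) === published);
    if (search) rows = rows.filter((r) => r.title.includes(search) || r.slug.includes(search));
    return { content: [{ type: "text", text: JSON.stringify(rows, null, 2) }] };
  }
);
```

The other eight tools follow the same pattern: one Zod shape, one handler against the shared Drizzle schema, one typed result.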

Why the auth runs on initialize and tools/list, not just tools/call

The naive auth pattern is "let anyone call tools/list so they can discover what is here, and only require a secret to actually invoke a tool". On a public CMS server that is fine. On a private one it is the leak.

The tool surface itself is sensitive. The tools/list response says "this server can write to a blog table, a media table, and a stats endpoint". An attacker who can hit that response without a secret has a full description of your attack surface: the field names, the validation rules, and the resource shape. From there, the only thing standing between them and a write is whatever rate limit you forgot to add.

So I authenticate every JSON-RPC method. The handler reads the Authorization: Bearer <secret> header, with x-api-key as a fallback. Bearer is the MCP-side standard, but x-api-key was already the convention in one of my SaaS products and the existing client code I wanted to point at this server expected that header. Supporting both was a five-line port instead of a refactor on the consumer side. Both code paths run a timingSafeEqual compare against process.env.ADMIN_MCP_SECRET and return an identical 401 response whether the secret is wrong or unset. The identical-401 detail matters. If the unset case returns "server not configured" and the wrong-secret case returns "unauthorized", an attacker can probe for misconfigured deployments before bothering to brute-force.

The whole auth block is twenty lines and runs before the MCP transport sees the request. I cribbed the pattern from the same observability-first posture I use across production: see the hono-honeypot post for how that posture compounds. Cheap to ship, expensive to skip.
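
For reference, here is a minimal sketch of that gate, assuming a fetch-style route handler (Next.js App Router or Hono both fit). The header names and the ADMIN_MCP_SECRET env var are the ones described above; everything else is illustrative.

```ts
import { timingSafeEqual } from "node:crypto";

function safeEqual(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return bufA.length === bufB.length && timingSafeEqual(bufA, bufB);
}

// Returns a 401 Response to short-circuit with, or null to let the MCP
// transport handle the request.
export function checkMcpAuth(req: Request): Response | null {
  const secret = process.env.ADMIN_MCP_SECRET ?? "";
  const bearer = req.headers.get("authorization")?.replace(/^Bearer\s+/i, "") ?? "";
  const apiKey = req.headers.get("x-api-key") ?? "";
  const presented = bearer || apiKey;

  // Identical 401 whether the secret is unset or simply wrong, so a probe
  // cannot tell a misconfigured deployment from a failed guess.
  if (!secret || !presented || !safeEqual(presented, secret)) {
    return new Response("Unauthorized", {
      status: 401,
      headers: { "X-Robots-Tag": "noindex" },
    });
  }
  return null;
}
```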

The discoverability convergence test: 5 surfaces to keep clean

A private MCP endpoint is only private if nothing on the public site points at it. There are five surfaces a 2026 brand site exposes to discovery, and the test is whether all five naturally exclude the route. The convergence test is a one-line audit per surface:

  1. /sitemap.xml: the dynamic sitemap walks public route directories and the published rows of the content tables. The MCP route is mounted under an admin-only directory that the sitemap generator never traverses, so it does not appear. Verify with curl https://<your-site>/sitemap.xml | grep mcp. Should be empty.
  2. /robots.txt: a Disallow: /admin rule is the wrong move because it advertises the path. The right move is to keep the admin directory out of sitemap and out of links, and rely on the noindex header for any crawler that finds the path another way. The MCP route returns X-Robots-Tag: noindex on every response, including 401s.
  3. /llms.txt: the LLM-discoverability route lists pages a bot should ingest. Build it from your sitemap or from a hand-curated list, and never include admin paths. See my honest case for llms.txt on a personal site for what should and should not live there.
  4. JSON-LD: your BreadcrumbList, WebSite, and Person blocks list canonical URLs. None of them should include the admin path. If you template breadcrumbs from the URL pathname, add an early return for admin segments.
  5. Site nav, footer, and any internal anchor: grep your codebase for the admin path. The MCP route should appear in exactly one place, the route file itself. Anywhere else and you have leaked it into HTML.

The convergence test is cheap to run and the failure mode is silent. A single stray <Link href="/admin/mcp"> in a debug page will get the path indexed inside a week. Run the audit on every release that touches discovery surfaces.
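
The audit is scriptable. Here is a rough version, assuming Node 18+ with top-level await; the site URL and admin path are placeholders for your own values.

```ts
// Checks the public discovery surfaces for any mention of the MCP route.
const SITE = "https://example.com";   // placeholder
const ADMIN_PATH = "/admin/mcp";      // placeholder

for (const path of ["/sitemap.xml", "/robots.txt", "/llms.txt"]) {
  const res = await fetch(`${SITE}${path}`);
  const body = res.ok ? await res.text() : "";
  console.log(`${path}: ${body.includes(ADMIN_PATH) ? "LEAKED" : "clean"}`);
}

// JSON-LD and nav links live in rendered HTML, so spot-check key pages too;
// the full audit also greps the codebase for the path.
const html = await (await fetch(SITE)).text();
console.log(`/ (homepage HTML): ${html.includes(ADMIN_PATH) ? "LEAKED" : "clean"}`);
```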

The cheap primitive: atomic find-and-replace as the LLM-native edit

patchBlog is the tool that earns the server. The shape is a batch of { find, replace, expectedOccurrences } edits. The handler runs the batch in a single transaction, asserts each edit's occurrence count matches the guard before applying, and rolls back the whole batch on any mismatch. On success it pings IndexNow so Bing, Yandex, and the rest re-crawl the changed URL within minutes.
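
Here is roughly what that shape looks like: a sketch assuming Zod for the input schema, with field names matching the description above; the transaction and IndexNow wiring are omitted.

```ts
import { z } from "zod";

const patchBlogInput = z.object({
  id: z.number().int(),
  edits: z
    .array(
      z.object({
        find: z.string().min(1),                              // exact substring to locate
        replace: z.string(),                                   // replacement text
        expectedOccurrences: z.number().int().positive().default(1),
      })
    )
    .min(1),
});

// Applies every edit or throws; the caller wraps this in a DB transaction so
// a mismatch rolls the whole batch back.
function applyEdits(body: string, edits: z.infer<typeof patchBlogInput>["edits"]): string {
  let next = body;
  for (const { find, replace, expectedOccurrences } of edits) {
    const count = next.split(find).length - 1;
    if (count !== expectedOccurrences) {
      throw new Error(`"${find}" occurs ${count} times, expected ${expectedOccurrences}`);
    }
    next = next.split(find).join(replace);
  }
  return next;
}
```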

Three reasons this primitive matters more than it looks:

  • Find-and-replace is cheaper than re-sending the body. A typo fix on a 2400-word post is one 60-byte edit, not a 2400-word updateBlog round-trip. At LLM token costs, the difference adds up across a content site.
  • Occurrence guards make the edit safe. "Replace Choudhary with Chaudhary" without a count guard turns a typo fix into a corpus-wide rewrite. With expectedOccurrences: 1 the same call refuses to run if the substring appears twice, which is exactly what you want when the model is acting on partial context.
  • Atomic batching matches how a model thinks about edits. A single Claude turn that says "fix three typos and one broken link in this post" becomes one MCP call with four edits in the batch, all-or-nothing. Either every edit landed or nothing did. No half-applied state to clean up.

The verification path is getBlog with searchMarkdown set to the substring you removed. If the snippet array comes back empty, the patch landed. If it comes back with matches, you missed a capitalization or a contraction. Same case-sensitivity rules as the patchBlog find: you'll and You'll are separate strings, and sentence-starting capitals are the common miss.

Why I built this, and what it changes day-to-day

The idea clicked when I wanted a personal AI agent that could audit content on a production site, check recent stats, and answer support tickets about typos or stale links without me opening a browser. The admin form was already fine for the long-form work. It was the small, frequent, in-and-out edits ("is this fact still right", "fix this one phrase across three posts", "what shipped in the last seven days") that the form made disproportionately heavy. An MCP surface that the model could just call, while I was doing something else, was the obvious shape.

I have been driving the server through Claude regularly for the past few days. The change is mostly about ease of access. I do not have to open a browser, navigate to the admin, move things from a draft tab, and click through the form. I can be eating dinner while asking my AI assistant to audit a markdown draft sitting locally on disk; once it is happy with the draft, I ask it to publish. And if a post I shipped five months ago needs a cleanup, a patchBlog from the same session is one short turn.

Three concrete shifts since I shipped the server:

  • Drafts go from disk to published in one conversation. I keep blog drafts in the gitignored /blogs/ folder per my repo-first writing rule. The model reads the draft from disk with its native filesystem tools, audits the body, and then calls createBlog over MCP to insert the row. No copy-paste, no tab switching, no manual title-slug-excerpt re-entry.
  • Stale-post cleanup is a one-turn task. A reader emails about a wrong tool version in a five-month-old post. I forward the message to a model session, the session calls getBlog with searchMarkdown to confirm the substring is still there, then calls patchBlog with an expectedOccurrences: 1 guard to fix it. IndexNow pings on success. Total time from email to live correction is under a minute.
  • The admin UI did not get more complex. No new tabs, no new buttons, no AI-assistant widget bolted on. The MCP server is invisible from the human admin path. The two surfaces share one DB and one schema, and that is it.

It is not a replacement for the admin form. Bulk uploads, file ingestion, and any work that needs a visual preview still go through the form. The MCP server picks up the long tail of small edits and audits that were never worth opening a browser for.

Where this goes next: an MIT-licensed AI-first CMS

The shape works on this site, and I want to take the pattern further. Two near-term refinements and one bigger move:

  • Add a dryRun mode to patchBlog. Right now I verify with a follow-up getBlog searchMarkdown call. A dryRun: true flag that returns the would-be-applied edit set without writing would save the round-trip and let a model self-check before committing.
  • Expose one read-only catalog tool to the public, separately. The private CMS server stays private. But a tiny separate MCP server that exposes searchTracks and getTrack for the music catalog could let other people's MCP-aware tools recommend the music. The discoverability story flips, the auth story flips, the security model is different. Worth a separate experiment, not a feature on this server.
  • Distill this into an MIT-licensed CMS, agent-first, very soon. WordPress is GPL-licensed and was built when the audience was humans only. A 2026 CMS should treat AI agents as first-class citizens alongside humans, ship the MCP server in the box, and stay permissively licensed so anyone can build on top. That is the next thing I am building.

The bigger lesson is the one that travels. Ship the smallest typed surface a model can use to do real work, authenticate every method including discovery, keep the path invisible to public surfaces, and pick one cheap primitive that justifies the whole server. Everything else is decoration.

FAQ

What is a private MCP server in plain English?

A private MCP server is a small backend that exposes a typed list of tools (functions with named arguments) over the Model Context Protocol so an LLM client like Claude, Cursor, or ChatGPT can call them on your behalf. The "private" part means a single shared secret is required to even discover what tools exist. You typically run it as one route on a site you already own, alongside your public pages, and use it to drive your own admin layer instead of building a separate chat UI.

How is an MCP server different from a normal REST API?

A REST API exposes raw HTTP endpoints that you (or a hand-coded client) consume. An MCP server exposes a discovery layer (tools/list) that returns the typed signature of every tool, so an LLM can read the surface at runtime and call tools without you wiring custom code per integration. The transport for HTTP MCP is JSON-RPC over a single endpoint, with auth and rate-limit applied to every method. The schema-first design is what lets one server work across Claude Code, Cursor, ChatGPT, and any future MCP client without per-client glue.
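
For illustration, this is roughly the wire shape of the discovery call. Real MCP clients handle the initialize handshake and session headers for you, and a strict Streamable HTTP server may reject a bare tools/list without them; the URL and secret are placeholders.

```ts
const res = await fetch("https://example.com/admin/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP expects clients to accept both JSON and SSE responses.
    Accept: "application/json, text/event-stream",
    Authorization: `Bearer ${process.env.ADMIN_MCP_SECRET}`,
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});
console.log(await res.json()); // the typed signature of every tool
```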

Do I need to authenticate tools/list and initialize, or only tools/call?

Authenticate everything. The default examples often gate only tools/call, which lets anyone enumerate your tool surface anonymously. On a private CMS the surface itself is sensitive: the field names and validation rules describe how to attack the server, so the secret check belongs in the request handler before the MCP transport dispatches the method. Use a constant-time compare like crypto.timingSafeEqual, return identical 401 responses for the wrong-secret and unset-secret cases, and log auth failures for ban-list logic if you have one.

How do I keep the MCP endpoint hidden from Google and from LLM crawlers?

Run the convergence test. The endpoint should not appear in /sitemap.xml, in /llms.txt, in your JSON-LD breadcrumbs, in your site nav, or in any internal link. Set X-Robots-Tag: noindex on every response from the route. Do not list the path in /robots.txt as Disallow: because that line itself advertises the path. Crawlers cannot index a URL they have not been told about, and the noindex header is the safety net for the rare crawler that guesses.

What is patchBlog and why is it the most useful tool?

patchBlog applies a batch of exact-string find-and-replace edits to a single blog row in one atomic transaction. Each edit carries an expectedOccurrences guard; if any guard mismatches, the whole batch rolls back. It is the cheap edit primitive: a typo fix is one 60-byte call, not a whole-row update. The atomic shape matches how an LLM reasons about edits ("fix all three of these things or none of them"), the occurrence guard prevents accidental corpus-wide rewrites, and a follow-up getBlog with searchMarkdown confirms the patch landed.

Should I expose media uploads as MCP tools too?

Probably not in the first iteration. File ingestion usually involves a transcoding pipeline (ffmpeg, image resizing, format conversion) that is awkward to express as a single MCP tool argument. Keep media uploads in the human admin form, expose listMedia and getMedia as read-only tools so the model can reference media in blog edits, and revisit media writes once the read shape is stable. If your media is URL-only and you do no transcoding, you can expose write tools from day one.

Can I open-source a template for this MCP-as-CMS pattern?

The pattern is small enough to template, and I am building exactly that: an MIT-licensed CMS designed to be managed by AI agents, with the MCP server shipped in the box and the audience treated as first-class AI citizens alongside humans. Think of it as a WordPress alternative for the agent era, without the GPL copyleft constraint, so anyone can build on top. Until that ships, the pattern in this post is small enough to lift directly: one Next.js or Hono route handler, the auth wrapper, a 9-tool registration block, a Zod schema per tool, and a Drizzle adapter against your existing schema.