In Part 1 I argued that AI Visibility is a real and underbuilt distribution layer, and that a public Model Context Protocol server is the cleanest first artifact for an EU SMB to ship. This part is the build log: stack, workflow, and the six places I caught a problem before it caught me.

I'll start with what worked, which is most of it, and then spend most of the word count on what didn't.

The build

The server is a Next.js 16.2 app on Vercel, using the official MCP SDK (@modelcontextprotocol/sdk 1.26.0) wired through mcp-handler 1.1.0. Eight tools, all read-only, all returning structured JSON. Cold start measured at 1.83 seconds. No environment variables, no authentication, no database. The data each tool returns lives in hard-coded TypeScript constants; for a product showcase that's the right tradeoff, since a pricing change means editing a constant and shipping.

I'm not the person writing the TypeScript. My workflow on this kind of project is architect→coder. I produce a tightly scoped specification for Claude (used as a coding agent) describing the server's tool surface, schemas, response shapes, and constraints. Claude generates the implementation in one pass. I review, deploy to Vercel, and validate against the running endpoint with the MCP Inspector — a free tool from the protocol authors that connects to any MCP server and exercises every tool through a UI.

Validation matters. I went through all eight tools in production, found two minor data inconsistencies — both safe to backlog rather than block launch — and signed off. The whole loop ran across roughly two evenings of focused work.

So: build went well. The interesting part started when I tried to make the server discoverable.

The submission

The Official MCP Registry launched in September 2025 and is the canonical directory for the protocol. Most downstream catalogs — PulseMCP, Glama, GitHub's MCP listing — pull metadata from it through its REST API. Publishing once propagates almost everywhere within a week.

The publish flow is a CLI tool called mcp-publisher, a Go binary that reads a server.json file from your repo, validates it against a JSON schema, authenticates you against the registry through GitHub OAuth, and POSTs the payload. Conceptually a thirty-minute task. In practice, six things bit me. Each is worth a paragraph.

1. A GitHub Personal Access Token in my git remote URL

I ran mcp-publisher init to generate a starter server.json. It autodetected the repository field from my local git config — and copied my remote URL verbatim, including a Personal Access Token embedded in it from when the repo had been cloned with credentials baked into the URL.

If I'd committed and pushed that server.json to a public GitHub repo, the token would have entered public git history for good: mirrors, forks, and git log would all have kept a copy.

I caught it on visual review, rotated the token immediately, cleaned the local git remote with git remote set-url origin <clean-url>, and rewrote server.json from scratch.

Lesson: any tool that reads your git config will copy whatever's in there. If you've ever pasted a token-bearing URL into a clone command, you have one of these landmines somewhere. Audit ~/.gitconfig and every .git/config you can find. Move to GitHub's credential manager or SSH keys; never put credentials in remote URLs.
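The audit is scriptable. A minimal sketch, assuming a Unix-ish shell and that your repos live under the current directory. Note that a bare username before the @ is harmless (see item 6 below); it's the user:token pair that leaks, and a colon in the userinfo is the tell:

```shell
# Flag remote URLs that embed a credential pair (https://user:token@host/...).
# A bare username before the @ is fine; a colon means a secret is in there.
grep -nE 'https://[^/@:]+:[^/@]+@' ~/.gitconfig 2>/dev/null

# Same check across every .git/config below the current directory.
find . -name config -path '*/.git/*' -print0 2>/dev/null |
  xargs -0 grep -nE 'https://[^/@:]+:[^/@]+@' 2>/dev/null
```

Anything this prints is a token to rotate and a remote URL to clean.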

2. The MCP Registry's GitHub namespace is case-sensitive

GitHub treats Cogniledger and cogniledger as the same organization. Most registries follow that convention. The MCP Registry doesn't.

When you authenticate via GitHub OAuth, the registry binds your publishing rights to the namespace io.github.<exact-case-of-your-username>. I'd written io.github.cogniledger/... in server.json because lowercase felt like the convention for package namespaces. The publish call returned 403: my permission was for io.github.Cogniledger/*, not io.github.cogniledger/*.

Lesson: don't infer naming conventions in unfamiliar systems. Open your GitHub org's profile page, copy the username with the exact casing displayed there, and use that. Machine-checked beats human-intuited.
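You can also ask the API instead of your eyes. GitHub's users endpoint accepts any casing on the way in, and the login field it returns is the canonical one. A sketch, assuming curl and a pretty-printed response; swap in your own account name:

```shell
# The API is case-insensitive on input; the "login" it returns is the exact
# casing the registry binds to: io.github.<login>
curl -s https://api.github.com/users/cogniledger |
  sed -n 's/.*"login": *"\([^"]*\)".*/\1/p'
```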

3. JWT lifetime is shorter than your validation cycle

The registry's auth flow gives you a short-lived JWT. Login, then publish. Easy.

Except between my first login and my first publish attempt, I did the casing fix above, re-ran validate, edited the description (see #4), and only then ran publish. The token had expired. 401.

Lesson: run validate (which doesn't need auth) until your server.json is clean. Then run login, and run publish immediately. Don't insert any other steps. The cheap operation goes before the expensive one — the same rule that governs running schema introspection before writing SQL against an unfamiliar database.
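The ordering can be encoded as one shell function so the steps can't be interleaved by accident. A sketch, assuming the mcp-publisher subcommands described above:

```shell
# Free check first; the authenticated pair runs back to back so the JWT
# can't expire between being minted and being spent.
publish_clean() {
  mcp-publisher validate || return 1   # unauthenticated; loop on this until clean
  mcp-publisher login github &&        # mints the short-lived JWT
    mcp-publisher publish              # immediately uses it
}
```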

4. Schema limits are not suggestions

My initial description was 156 characters. The schema caps it at 100. The registry returned a 422 with the exact error message — useful, but only because I bothered to read it instead of guessing.

Lesson: when a system rejects your input, the schema is sitting at a public URL. For the MCP Registry it's https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json. Open it. Read the relevant section. Tutorials are written for the happy path; schemas describe the actual law.
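A cheap local guard catches this class of error before the API does. A sketch, assuming the description sits on a single line of server.json:

```shell
# Extract the description (if present) and check the schema's 100-char cap.
if [ -f server.json ]; then
  desc=$(sed -n 's/.*"description": *"\([^"]*\)".*/\1/p' server.json)
else
  desc=""
fi
if [ "${#desc}" -le 100 ]; then
  echo "description OK (${#desc} chars)"
else
  echo "description too long (${#desc} chars, max 100)"
fi
```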

5. Remote-only servers are not npm packages

Most MCP server tutorials describe servers you install locally — via npm, pip, or Docker. They tell you to publish your package to npm, add an mcpName field, and place an MCP-name marker in your README.

For a Vercel-hosted server reachable over HTTP, none of that applies. The server.json should contain a remotes array (with type: "streamable-http" and the production URL) and not a packages array. There's no package to publish, no marker to add. The GitHub OAuth tied to your namespace is the entire ownership proof.
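For illustration, a remote-only server.json could be shaped like this. Field names follow the 2025-12-11 schema referenced above; the URL and description here are placeholders, not my actual values:

```json
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
  "name": "io.github.Cogniledger/cogniledger-mcp-makuri",
  "description": "Read-only product and pricing tools for Cogniledger.",
  "version": "1.0.0",
  "remotes": [
    {
      "type": "streamable-http",
      "url": "https://your-app.vercel.app/mcp"
    }
  ]
}
```

Note what isn't there: no packages array at all.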

I didn't waste much time here, but the autogenerated server.json from init was shaped like an npm package and led me toward unnecessary work. Lesson: when a tool generates a default, check whether that default matches your case. The "obvious" template is usually written for the most common scenario, not yours.

6. Multiple GitHub accounts on one machine

The repo for this project lives under a Cogniledger GitHub organization. My personal everyday work uses a different account on the same Windows machine. When I tried to push the new server.json commit, Windows Credential Manager presented credentials for the wrong account, and the push hit 403: "permission to Cogniledger/cogniledger-mcp-makuri denied to <other-username>".

The fix is a per-repo URL hint:

git remote set-url origin https://Cogniledger@github.com/Cogniledger/cogniledger-mcp-makuri.git

Windows Credential Manager keys credentials by host and username, so prefixing the username in the remote URL forces the right credential lookup. No global account switching, no SSH key gymnastics. Each repo carries its own account binding.
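An equivalent binding that leaves the URL untouched is git's credential.username setting, applied per repo. A sketch, demonstrated in a scratch repo, with the same effect as the URL prefix above:

```shell
# Bind a username for this repo's credential lookups without
# embedding it in the remote URL.
repo=$(mktemp -d) && cd "$repo" && git init -q
git remote add origin https://github.com/Cogniledger/cogniledger-mcp-makuri.git
git config credential.username Cogniledger
git config credential.username   # prints: Cogniledger
```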

Lesson: multi-account git on a single dev machine is solvable in one line. If you're a consultant working across client orgs, your future self will thank you for setting this up early.

End-to-end timing

The end-to-end submission, after I had a clean server.json, was about ten minutes. Including the false starts, the debugging, and the security recovery, it was closer to ninety. The asymmetry between "happy path" and "real path" on this kind of operation is roughly 9× in my experience, and that's worth budgeting for when you scope the work or quote a client.

The server is now indexed at https://registry.modelcontextprotocol.io/?q=cogniledger as io.github.Cogniledger/cogniledger-mcp-makuri v1.0.0, status active. Part 3 covers what that's actually worth.