Two articles in, here's the inventory.
There is now a public Model Context Protocol server for Makuri at mcp.cogniledger.eu, indexed in the Official MCP Registry as io.github.Cogniledger/cogniledger-mcp-makuri v1.0.0 with status active. Eight tools, all read-only, all working in production with a 1.83-second cold start. Source on GitHub under MIT, dependencies tracked, security scanning in place.
Total elapsed: roughly two weekends of focused work, after a longer planning phase. Total infrastructure cost going forward: under €10/month for Vercel. Total user data exposed: zero by design.
This part covers what that's actually worth — to Makuri, to CogniLedger, and as a pattern for other EU SMBs.
What it gives Makuri
For Makuri specifically, the immediate value is small but real.
Anyone running Claude Desktop, Cursor, or another MCP-aware client can now query authoritative product information about Makuri without scraping a website or relying on training-data residue. When someone asks an MCP-equipped assistant "what subjects does Makuri cover" or "is Makuri GDPR-compliant for children," the answer is the answer I've decided to give, not a hallucination averaged across the internet.
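Concretely, pointing a client at a remote server like this is a few lines of config. Here is a sketch for Claude Desktop's claude_desktop_config.json, assuming the mcp-remote bridge for clients that speak stdio only; the /mcp path is my assumption, and config formats vary by client and version:

```json
{
  "mcpServers": {
    "makuri": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.cogniledger.eu/mcp"]
    }
  }
}
```

Once the client restarts, the eight tools show up alongside its built-in capabilities.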
Today the MCP user base is small and skews technical. Practical traffic from this is modest. What matters more is being early on the directory side: as MCP clients proliferate (Claude Desktop already, ChatGPT and Le Chat increasingly, agentic frameworks broadly), the cost of being indexed in the Official Registry from May 2026 is much lower than the cost of trying to be discoverable from late 2026 forward, when the registry will be denser and ranking signals will start to matter.
Being there matters more than being read today.
What it gives CogniLedger
I run CogniLedger Solutions, a Bucharest-based AI consulting practice. One of the services in the catalog is AI Visibility — making EU SMBs discoverable to AI assistants. Until recently, I was selling that service with a methodology and a slide deck. Now I'm selling it with a working server in production, indexed in the canonical directory, with three articles documenting how it was built and where it broke.
That changes the sales conversation in a specific way. Before: "here's what we'd do for you." After: "here's what we already did for ourselves, and here's the public artifact you can verify in thirty seconds." Prospective clients can run npx @modelcontextprotocol/inspector against my server before our second call. The credibility cost of saying "trust me" is replaced by the credibility yield of saying "go look."
That's the real return on the build. Not Makuri's discoverability, which is a long game, but CogniLedger's positioning, which is an immediate one.
What I expect next
Three time horizons.
Within seven days. Downstream catalogs (PulseMCP, Glama, GitHub's MCP listing) auto-pull from the Official Registry. By the time you read this, the server will likely already be in three or four directories beyond the canonical one, with no additional submission work from me. I'll write a short follow-up if anything unexpected happens there.
Within thirty days. I'll have a real traction signal — how many MCP clients have my server in their config, what tool calls are being made most often, whether any of the eight tools sees enough use to suggest where to invest in expanding the surface. Vercel's structured logging makes this clean to measure.
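The measurement itself is nothing exotic: one structured log line per tool call, which Vercel's log pipeline can then aggregate by tool name. A minimal sketch of the pattern (function and field names are my own, not taken from the Makuri repo):

```typescript
// Wrap a tool handler so every invocation emits one JSON log line.
// Vercel captures stdout, so structured console output is enough to
// answer "which tools get called, and how often" from the dashboard.
type Handler<A, R> = (args: A) => Promise<R>;

function withCallLogging<A, R>(toolName: string, handler: Handler<A, R>): Handler<A, R> {
  return async (args: A) => {
    const started = Date.now();
    try {
      return await handler(args);
    } finally {
      // One line per call: easy to filter, count, and chart later.
      console.log(
        JSON.stringify({
          event: "tool_call",
          tool: toolName,
          duration_ms: Date.now() - started,
        })
      );
    }
  };
}

// Usage: a placeholder read-only tool, logged on each call.
const getPricing = withCallLogging("get_pricing", async () => ({
  plans: ["free", "school"],
}));

getPricing({}).then((r) => console.log(r.plans.join(",")));
```

The `finally` block means failed calls are logged too, which matters once you start asking which tools break rather than just which tools get used.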
Within a quarter. AI Visibility either becomes a real category or it doesn't. The honest read is that this is early. MCP adoption in EU SMB markets is still nascent. The case where I built this in May and it generates zero qualified inbound by August is plausible. The case where it becomes a meaningful pillar of CogniLedger's pipeline by Q4 is also plausible.
What I'll say with confidence is that the cost of being early was low. Two weekends of work, ten euros a month, no compliance overhead, no maintenance burden beyond occasional content updates. The expected-value calculation works even if the upside is moderate.
What you do this week if you're an SMB
If you're running an EU SMB and the AI Visibility framing in Part 1 resonated, here's the version of this you can ship without a consultant.
Week one: decide what authoritative facts about your company are worth exposing — pricing, locations, product features, compliance posture, contact channels. The cut should be "things I'd publish on my website anyway." Read-only. Public-safe.
Week two: find a developer who's done a Next.js or Hono project before, give them the MCP SDK quickstart, and have them stand up a server with three to five tools to start. The full pattern is in Makuri's GitHub repo (github.com/Cogniledger/cogniledger-mcp-makuri, MIT-licensed), and adapting it should be a one-week job for any reasonably experienced TypeScript developer.
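The shape of one such tool is small. A standalone sketch of a read-only handler, assuming nothing beyond the MCP text-content result shape; in a real server these handlers are registered through the MCP TypeScript SDK, and the facts below are placeholders:

```typescript
// MCP tool results are a list of content parts; plain text is the
// common case for fact-style tools.
type ToolResult = { content: Array<{ type: "text"; text: string }> };

// The authoritative facts live as data in the server itself,
// versioned with the code — the "things I'd publish on my website
// anyway" cut from the step above.
const facts = {
  compliance: "GDPR-aligned; no personal data collected",
  locations: ["Bucharest, RO"],
  pricing: "published plans only",
};

// One read-only tool: serialize the curated facts, nothing else.
function getCompanyFacts(): ToolResult {
  return { content: [{ type: "text", text: JSON.stringify(facts) }] };
}

console.log(getCompanyFacts().content[0].text);
```

Three to five of these, plus transport plumbing the SDK provides, is the whole server.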
Week three: publish to the Official MCP Registry following the path I described in Part 2. Budget ninety minutes for the actual submission, including the gotchas. Confirm the listing in the registry's web UI before declaring done.
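For orientation, the submission centers on a server.json manifest roughly like the following. The field names reflect my reading of the registry schema at the time of writing, and the /mcp path is an assumption; validate against the published schema before submitting:

```json
{
  "name": "io.github.your-org/your-mcp-server",
  "description": "Read-only MCP server exposing authoritative company facts",
  "version": "1.0.0",
  "remotes": [
    { "type": "streamable-http", "url": "https://mcp.example.eu/mcp" }
  ]
}
```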
After that, you wait. AI Visibility is not a campaign you run for ninety days and measure. It's an asset you ship once, maintain occasionally, and hold for the cycle in which AI assistants become a primary distribution layer.
That cycle is in motion now. The cost of joining it is rounding error. The cost of joining it late is the cost of joining a directory after the front-page slots are taken.
Closing
This series has been the case study for one specific shape of AI Visibility work: public, read-only, zero user data, EU-aligned. There are more interesting shapes out there — authenticated MCP servers for B2B platforms, write-enabled servers for AI agents acting on a user's behalf, and the Answer Engine Optimization side of the same problem (structured data on your own website that AI assistants can parse without an MCP server at all). I'll write those up when I've shipped them.
For now, the artifact is live, the directory entry is active, the playbook is documented, and the next thing to do is talk to people about it.
If you're an EU SMB founder, CTO, or operator who's been thinking about how AI assistants will or won't surface your business, I'd genuinely like to hear what you're seeing. The contact information is in the eighth tool of the server itself.