How to Build an MCP Server with Cloudflare and Convex

We built an MCP server that lets AI coding agents manage LaunchThatBot deployments from inside Cursor. Here is the architecture: a TypeScript MCP server, a Cloudflare Worker proxy, and Convex as the backend. No REST API required.

Feb 18, 2026 · LaunchThatBot Team

When Anthropic released the Model Context Protocol, we saw it immediately: this was how AI coding agents should talk to infrastructure.

Not through a dashboard. Not through a CLI that the agent has to parse. Through a structured protocol where the agent says "list my servers" and gets back typed data it can reason about.

We built the LaunchThatBot MCP server so that an AI agent inside Cursor can manage your entire deployment fleet -- create servers, deploy agents, check health, review audit logs -- without you ever leaving your editor. This is how we built it.

The architecture in three layers

The LaunchThatBot MCP server is not a monolith. It is three distinct pieces that each solve one problem:

  1. The MCP server -- a TypeScript package that speaks the Model Context Protocol over stdio, translating tool calls into Convex queries and mutations
  2. The Cloudflare Worker proxy -- a lightweight edge proxy that gives the Convex backend a clean, branded API URL without requiring Convex Pro custom domains
  3. The Convex backend -- queries, mutations, and actions that implement every operation the MCP server exposes, with API key authentication and tenant isolation

Each layer is independently deployable, independently testable, and small enough to understand in a single sitting.

Layer 1: the MCP server

The MCP server is a standalone TypeScript package that runs as a stdio process. When Cursor launches it, it connects over stdin/stdout using the Model Context Protocol SDK.

Server setup

The entry point is minimal. Create the server, create a stdio transport, connect them:

import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { createServer } from "./server.js";

const main = async () => {
  const server = createServer();
  const transport = new StdioServerTransport();
  await server.connect(transport);
};

main().catch((error) => {
  console.error("Fatal error in LaunchThatBot MCP server:", error);
  process.exit(1);
});

The createServer function instantiates an McpServer and registers tool groups. We split tools into separate modules by domain -- servers, deployments, agents, health, credentials, audit, overview -- so the codebase stays navigable as the tool count grows:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

export const createServer = (): McpServer => {
  const server = new McpServer({
    name: "launchthatbot",
    version: "0.1.0",
  });

  registerServerTools(server);
  registerDeploymentTools(server);
  registerAgentTools(server);
  registerHealthTools(server);
  registerOverviewTools(server);
  registerAuditTools(server);
  registerCredentialTools(server);

  return server;
};

Each register* function takes the McpServer instance and calls server.tool() to register individual tools with names, descriptions, Zod schemas for arguments, and handler functions.

Connecting to Convex

Here is the design decision that makes this architecture work: the MCP server does not implement any business logic. It is a translation layer. Every tool handler does the same thing -- call a Convex query, mutation, or action and return the result.

The Convex client is configured once with the API URL:

import { ConvexHttpClient } from "convex/browser";

const PRODUCTION_API_URL = "https://api.ltb.it.com";

let client: ConvexHttpClient | null = null;

export const getConvexClient = (): ConvexHttpClient => {
  if (client) return client;
  const url = process.env.LAUNCHTHATBOT_API_URL || PRODUCTION_API_URL;
  client = new ConvexHttpClient(url);
  return client;
};

Notice the URL: api.ltb.it.com. That is not Convex's default hostname. That is the Cloudflare Worker proxy (we will get to that).

Tool implementation pattern

Every tool follows the same structure. Here is list_servers as an example:

server.tool(
  "list_servers",
  "List all VPS servers in your LaunchThatBot account",
  {},
  async () => {
    const convex = getConvexClient();
    const result = await convex.query(api.mcp.queries.mcpListServers, {
      apiKey: getApiKey(),
    });
    return {
      content: [{ type: "text", text: JSON.stringify(result, null, 2) }],
    };
  },
);

And here is create_deployment, which is more complex:

import { z } from "zod";

server.tool(
  "create_deployment",
  "Create a new VPS deployment (provisions a new server with OpenClaw)",
  {
    name: z.string().describe("Name for the deployment"),
    provider: z.enum(["hetzner", "digitalocean"]).describe("VPS provider"),
    region: z.string().describe("Region code (e.g., 'nbg1', 'fsn1')"),
    serverType: z.string().describe("Server type (e.g., 'cx22', 'cx32')"),
    securityProfile: z
      .enum(["baseline_hardened", "production_hardened"])
      .describe("Security hardening level"),
    credentialId: z.string().describe("ID of the VPS provider credential"),
  },
  async (args) => {
    const convex = getConvexClient();
    const result = await convex.mutation(
      api.mcp.mutations.mcpCreateDeployment,
      {
        apiKey: getApiKey(),
        name: args.name,
        provider: args.provider,
        region: args.region,
        serverType: args.serverType,
        securityProfile: args.securityProfile,
        credentialId: args.credentialId,
      },
    );
    return {
      content: [{
        type: "text",
        text: `Deployment created!\n${JSON.stringify(result, null, 2)}`,
      }],
    };
  },
);

The pattern is consistent: Zod schemas define what the AI agent can pass in, the handler calls Convex, and the response is JSON. The AI agent in Cursor sees the tool descriptions and schemas, understands what each tool does, and calls them with the right arguments.
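
Because every handler ends the same way, the JSON-wrapping step can be factored into a small helper. This is a refactoring sketch, not code from our repo -- the toTextResult name is ours:

```typescript
// Wrap any serializable value in the MCP text-content response shape.
// `toTextResult` is a hypothetical helper name, not part of the SDK.
export const toTextResult = (value: unknown) => ({
  content: [{ type: "text" as const, text: JSON.stringify(value, null, 2) }],
});
```

A handler body then collapses to a one-liner like `return toTextResult(await convex.query(...))`, which keeps each tool registration focused on its schema and the Convex function it calls.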

API key authentication

Every call includes an API key. The key is loaded from the environment and validated on every request:

export const getApiKey = (): string => {
  const key = process.env.LAUNCHTHATBOT_API_KEY;
  if (!key) {
    throw new Error(
      "LAUNCHTHATBOT_API_KEY is required. Set it in your .cursor/mcp.json env.",
    );
  }
  if (!key.startsWith("ltb_sk_")) {
    throw new Error(
      'LAUNCHTHATBOT_API_KEY must start with "ltb_sk_".',
    );
  }
  return key;
};

The ltb_sk_ prefix is a convention borrowed from Stripe. It makes it immediately obvious what kind of key you are looking at, and it lets validation fail fast if someone accidentally passes the wrong credential.

The full tool surface

Here is every tool the MCP server exposes:

Infrastructure: list_servers, get_server, set_server_attachment, list_deployments, get_deployment, create_deployment, redeploy, get_job_status

Agents: list_agents, get_agent, create_agent, update_agent, delete_agent, discover_agents, sync_agent

Observability: get_overview, get_agent_health, get_agent_events, list_audit_events

Credentials & Connections: list_credentials, add_credential, list_convex_connections, list_discord_connections

That is 23 tools. An AI agent in Cursor can manage an entire fleet of OpenClaw deployments across multiple providers using nothing but natural language.

Layer 2: the Cloudflare Worker proxy

Convex deployments get a hostname like basic-hyena-244.convex.site. That is fine for internal use, but we wanted a clean branded URL for the MCP server to hit: api.ltb.it.com for production, api-dev.ltb.it.com for development.

Convex custom domains require the Pro plan. A Cloudflare Worker that proxies requests to the Convex backend costs effectively nothing and gives us the same result.

The Worker

The entire Worker is about 40 lines. It rewrites the Host header so Convex routes the request correctly, handles CORS preflight, and forwards everything else:

// CORS headers applied to every proxied response (values are illustrative)
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type, Authorization",
};

interface Env {
  CONVEX_SITE_URL: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method === "OPTIONS") {
      return new Response(null, { status: 204, headers: CORS_HEADERS });
    }

    const url = new URL(request.url);
    const targetUrl = env.CONVEX_SITE_URL + url.pathname + url.search;
    const convexHost = new URL(env.CONVEX_SITE_URL).host;

    const proxyHeaders = new Headers(request.headers);
    proxyHeaders.set("Host", convexHost);
    proxyHeaders.set("X-Forwarded-Host", url.host);

    const proxyRequest = new Request(targetUrl, {
      method: request.method,
      headers: proxyHeaders,
      body: request.body,
    });

    const response = await fetch(proxyRequest);

    const responseHeaders = new Headers(response.headers);
    for (const [key, value] of Object.entries(CORS_HEADERS)) {
      responseHeaders.set(key, value);
    }

    return new Response(response.body, {
      status: response.status,
      statusText: response.statusText,
      headers: responseHeaders,
    });
  },
} satisfies ExportedHandler<Env>;

Environment-based routing

The Wrangler config uses environments to route dev and production traffic to different Convex deployments:

name = "ltb-api-proxy-dev"
routes = [
  { pattern = "api-dev.ltb.it.com/*", zone_name = "ltb.it.com" }
]

[vars]
CONVEX_SITE_URL = "https://basic-hyena-244.convex.site"

[env.production]
name = "ltb-api-proxy"
routes = [
  { pattern = "api.ltb.it.com/*", zone_name = "ltb.it.com" }
]

[env.production.vars]
CONVEX_SITE_URL = "https://insightful-malamute-933.convex.site"

Deploy with wrangler deploy for dev, wrangler deploy --env production for prod. Cloudflare handles TLS termination on the custom domain. The Worker does the rest.

Layer 3: the Convex backend

The Convex backend is where the real logic lives. The MCP server is a thin client. The Cloudflare Worker is a transparent proxy. Convex is the source of truth.

API key auth in Convex

Every MCP query, mutation, and action validates the API key before doing anything. Keys are SHA-256 hashed at rest and looked up by hash:

export const validateApiKey = internalQuery({
  args: { apiKey: v.string() },
  returns: v.union(
    v.object({
      keyId: v.id("mcpApiKeys"),
      tenantId: v.string(),
      userId: v.string(),
      scopes: v.array(v.string()),
    }),
    v.null(),
  ),
  handler: async (ctx, args) => {
    const keyHash = await sha256Hex(args.apiKey);
    const record = await ctx.db
      .query("mcpApiKeys")
      .withIndex("by_keyHash", (q) => q.eq("keyHash", keyHash))
      .unique();
    if (!record) return null;
    if (record.revokedAt) return null;
    if (record.expiresAt && record.expiresAt < Date.now()) return null;
    return {
      keyId: record._id,
      tenantId: record.tenantId,
      userId: record.userId,
      scopes: record.scopes,
    };
  },
});

The auth result includes tenantId -- every subsequent database query is scoped to that tenant. There is no way for one user's MCP server to access another user's data.
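
To make the tenant scoping concrete, here is a sketch of what a query like mcpListServers could look like on the Convex side. The infraServers table name comes from elsewhere in this post; the by_tenant index and the internal.mcp.auth module path are our assumptions:

```typescript
// Sketch of a tenant-scoped public query. Index and module names are
// illustrative; only the infraServers table name appears in this post.
import { v } from "convex/values";
import { query } from "./_generated/server";
import { internal } from "./_generated/api";

export const mcpListServers = query({
  args: { apiKey: v.string() },
  returns: v.any(),
  handler: async (ctx, args) => {
    // Resolve the tenant from the key; reject invalid, revoked, or expired keys
    const auth = await ctx.runQuery(internal.mcp.auth.validateApiKey, {
      apiKey: args.apiKey,
    });
    if (!auth) throw new Error("Invalid API key");
    // Every read is scoped to the tenant the key belongs to
    return await ctx.db
      .query("infraServers")
      .withIndex("by_tenant", (q) => q.eq("tenantId", auth.tenantId))
      .collect();
  },
});
```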

Scope-based authorization

API keys carry scopes: read, write, deploy. Each action checks for the required scope before executing:

const requireScope = (scopes: string[], required: string) => {
  if (!scopes.includes(required))
    throw new Error(`API key missing required scope: ${required}`);
};

A read-only key can list servers and check health. It cannot create deployments or add credentials. Users generate keys with the minimum scopes they need.

Key generation

Keys are generated with a ltb_sk_ prefix, 32 bytes of randomness, and URL-safe base64 encoding. Only the hash is stored:

const KEY_PREFIX = "ltb_sk_";

const generateRawKey = (): string => {
  const bytes = new Uint8Array(32);
  crypto.getRandomValues(bytes);
  // Build a binary string from the random bytes, then base64url-encode it
  let binary = "";
  for (const b of bytes) {
    binary += String.fromCharCode(b);
  }
  const encoded = btoa(binary)
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
  return `${KEY_PREFIX}${encoded}`;
};

The raw key is returned exactly once during creation. After that, only the hash exists in the database. The dashboard shows a masked prefix so users can identify which key is which without exposing the secret.
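
The masked prefix can be derived once at creation time and stored alongside the hash. A sketch -- the 12/4 split here is our illustration, not necessarily the exact format the dashboard uses:

```typescript
// Keep enough of the key's prefix to identify it, hide the rest.
// The 12-character prefix / 4-character suffix split is an illustrative choice.
export const maskKey = (rawKey: string): string =>
  `${rawKey.slice(0, 12)}...${rawKey.slice(-4)}`;
```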

Actions for side effects

Some MCP operations trigger real-world side effects -- discovering agents on a remote server, syncing config files, or provisioning a VPS. These use Convex actions (which run in Node.js) and delegate to the existing LaunchThatBot backend:

export const mcpDiscoverAgents = action({
  args: {
    apiKey: v.string(),
    serverId: v.id("infraServers"),
  },
  returns: v.any(),
  handler: async (ctx, args) => {
    const auth = await authenticateMcp(ctx, args.apiKey);
    requireScope(auth.scopes, "write");
    return await ctx.runAction(
      publicApi.agents.discover.discoverAgentsFromServer,
      {
        tenantId: auth.tenantId,
        userId: auth.userId,
        serverId: args.serverId,
      },
    );
  },
});

The MCP action authenticates, checks scopes, and delegates to the same internal function the dashboard uses. There is one implementation of "discover agents" and two surfaces that call it -- the web dashboard and the MCP server. No logic duplication.

How it all fits together

Here is the full request path when an AI agent in Cursor runs list_servers:

  1. The agent calls the list_servers MCP tool
  2. The MCP server reads LAUNCHTHATBOT_API_KEY from the environment
  3. It creates a ConvexHttpClient pointing at api.ltb.it.com
  4. The HTTP request hits the Cloudflare Worker
  5. The Worker rewrites the Host header and proxies to insightful-malamute-933.convex.site
  6. Convex executes the mcpListServers query, which validates the API key, extracts the tenant, and returns tenant-scoped server data
  7. The response flows back: Convex -> Cloudflare -> MCP server -> Cursor
  8. Cursor's AI agent receives structured JSON and can reason about your infrastructure

The whole round trip takes a few hundred milliseconds. The user sees the AI agent describe their servers in natural language, already understanding the state of each deployment.

What we would do differently

Use anyApi sparingly

The MCP server uses anyApi to reference Convex functions because the MCP package lives outside the Convex app and does not have access to generated types. This works but loses type safety at the boundary. If we were starting over, we would explore generating a typed client from the Convex schema specifically for the MCP package.

Start with fewer tools

Twenty-three tools is a lot. We shipped them all at once because we wanted full coverage from day one. In practice, AI agents use get_overview, list_servers, list_agents, and create_deployment far more than anything else. Starting with five core tools and expanding based on usage would have been a faster path to value.

Building your own

If you want to build an MCP server backed by Convex, the pattern is straightforward:

  1. Create a TypeScript package with @modelcontextprotocol/sdk and convex as dependencies
  2. Set up a Cloudflare Worker to proxy your custom domain to your Convex deployment (skip this if you are fine with the default Convex hostname)
  3. Build a convex/mcp/ module in your Convex app with queries, mutations, and actions for each tool
  4. Implement API key auth -- hash keys at rest, validate on every call, scope to tenant
  5. Register tools on the MCP server with Zod schemas and handlers that call Convex
  6. Ship the bin entry so users can run your MCP server from their mcp.json config
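
For step 6, the Cursor-side configuration lives in .cursor/mcp.json. A sketch -- the @launchthatbot/mcp package name is illustrative:

```json
{
  "mcpServers": {
    "launchthatbot": {
      "command": "npx",
      "args": ["-y", "@launchthatbot/mcp"],
      "env": {
        "LAUNCHTHATBOT_API_KEY": "ltb_sk_..."
      }
    }
  }
}
```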

The MCP SDK handles the protocol. Convex handles the data layer. The Cloudflare Worker handles the edge routing. You write the tools and the auth. That is it.

The result is an AI-native interface to your product. Your users' coding agents become first-class clients of your platform, managing infrastructure through the same protocol they use to search the web and read documentation.

That is what we built for LaunchThatBot. And now every user who installs the MCP server can manage their deployments without leaving their editor.

Ready to apply this in your own deployment?

Try the MCP server
