Running a SPA inside ChatGPT using MCP Apps (Step-by-Step Guide)

nyaomaru

2026-02-02T17:32:58Z

4 min read

Hi there!

I’m @nyaomaru, a frontend engineer who still uses a hot water bottle in 2026.

A new era has arrived — you can now display and interact with your own web app directly inside the ChatGPT chat interface.

https://blog.modelcontextprotocol.io/posts/2026-01-26-mcp-apps/

If you watch the video in the announcement, you’ll see what this looks like. A new common standard called MCP Apps has been released, allowing interactive UIs to run inside ChatGPT and Claude 🎉

In simple terms:
You can now run your own web app inside a chat, using an iframe.

As a developer, that’s something I definitely wanted to try myself.

In this article, I’ll walk through how to display your app inside ChatGPT step by step.

Let’s dive in.

What are MCP Apps?

MCP Apps provide a standardized way to deliver interactive UIs from MCP servers. Your UI renders inline in the conversation, in context, in any compliant host.

In other words, MCP Apps are a standard that lets AI hosts display and interact with external web UIs.

Here’s the overall architecture:

In MCP Apps, your UI does not talk directly to your MCP server.
Instead, ChatGPT (the host) sits in the middle, fetching the UI and mediating all communication.
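A rough text version of that flow (my own summary, not an official diagram):

Your UI (iframe)  <──messages──>  ChatGPT (host)  <──MCP──>  Your MCP server

The host fetches the UI HTML from your MCP server as a resource, renders it in an iframe, and relays messages between the two sides.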

Docs:
https://modelcontextprotocol.github.io/ext-apps/api/documents/Overview.html

So what does this enable?

It lets ChatGPT or Claude display a UI built with ext-apps, delivered via an MCP server.

⚠️ Important note:

Right now, the AI host must explicitly enable your app.
This is not something users can trigger just by typing a prompt. The host needs to have your MCP server configured.

Now, let’s go through how to build and publish this.

Publishing a UI using ext-apps

First, we need a web app to display.

If you don’t have one, the official examples are a great starting point:

https://github.com/modelcontextprotocol/ext-apps/tree/main/examples

https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/quickstart

I added MCP support to my experimental project nyaomaru 3D. Here’s the PR diff:

https://github.com/nyaomaru/nyaomaru-3D/pull/11/changes

This works fine in a monorepo too, but here I’ll assume the frontend and MCP server live in separate repositories.

Setup

Here’s the SPA we’ll use:

https://github.com/nyaomaru/nyaomaru-3D

Install ext-apps:

https://www.npmjs.com/package/@modelcontextprotocol/ext-apps

pnpm add @modelcontextprotocol/ext-apps

Since my app uses Vue + Vite, I also added:

pnpm add -D vite-plugin-singlefile

Next, create an HTML entry file for MCP (I called mine mcp-app.html):

https://github.com/nyaomaru/nyaomaru-3D/blob/main/mcp-app.html
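If you're wondering what goes inside, mine boils down to roughly this (a minimal sketch; the title and mount point are just what I happened to use):

<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Nyaomaru MCP App</title>
  </head>
  <body>
    <!-- Vue mounts here; vite-plugin-singlefile inlines everything into this one page -->
    <div id="app"></div>
    <script type="module" src="/src/mcp-app.ts"></script>
  </body>
</html>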

Then create a bundle entry:

https://github.com/nyaomaru/nyaomaru-3D/blob/main/src/mcp-app.ts
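This is just a normal Vue bootstrap file (a sketch; McpApp.vue is a placeholder name for whatever root component you mount):

// src/mcp-app.ts: entry point for the MCP build
import { createApp } from 'vue';
import McpApp from './McpApp.vue'; // hypothetical root component

createApp(McpApp).mount('#app');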

My UI component looks like this:

import { App } from '@modelcontextprotocol/ext-apps';

// ...
const app = new App(
  {
    name: 'Nyaomaru MCP App',
    version: '0.1.0',
  },
  {},
  { autoResize: true },
);

onMounted(async () => {
  try {
    await app.connect();
    // ...
  } catch (error) {
    // ...
  }
});

onBeforeUnmount(() => {
  app.close();
});

The key part is app.connect() — this starts communication with the host (ChatGPT).
If you forget this, your UI won’t work inside ChatGPT.

Build config

I separated the MCP build output using a custom Vite mode:

import { fileURLToPath, URL } from 'node:url';
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';
import { viteSingleFile } from 'vite-plugin-singlefile';

export default defineConfig(({ mode }) => {
  const isMcpBuild = mode === 'mcp';

  return {
    plugins: [vue(), ...(isMcpBuild ? [viteSingleFile()] : [])],
    build: isMcpBuild
      ? {
          outDir: 'dist-mcp',
          emptyOutDir: true,
          rollupOptions: {
            input: fileURLToPath(new URL('mcp-app.html', import.meta.url)),
          },
        }
      : undefined,
  };
});

Add a script:

{
  "scripts": {
    "build:mcp": "vue-tsc -b && vite build --mode mcp"
  }
}

Deploy

Deploy the MCP build separately (I used Vercel):

  • Build Command: pnpm build:mcp
  • Output Directory: dist-mcp

Once deployed, the UI should be reachable at:

https://{project-name}.vercel.app/mcp-app.html

If this page loads, your UI is ready.
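If you'd rather keep those settings in the repo than in the Vercel dashboard, the equivalent vercel.json looks something like this (standard Vercel config fields, not copied from my setup):

{
  "buildCommand": "pnpm build:mcp",
  "outputDirectory": "dist-mcp"
}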

Building the MCP Server

Now we create the MCP server that tells ChatGPT how to load our UI.

Here’s my example repo:

https://github.com/nyaomaru/nyaomaru-3D-mcp-ui

Install:

pnpm add @modelcontextprotocol/ext-apps @modelcontextprotocol/sdk

Key parts: registerAppTool exposes a tool the host can call to open the UI (its _meta.ui.resourceUri points at the UI resource), and registerAppResource serves the HTML that gets embedded:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import {
  registerAppResource,
  registerAppTool,
  RESOURCE_MIME_TYPE,
} from '@modelcontextprotocol/ext-apps/server';

// ...

const server = new McpServer({
  name: 'nyaomaru-3d-ui',
  version: '1.0.0',
});

registerAppTool(
  server,
  'open-nyaomaru-3d-ui',
  {
    title: 'Open Nyaomaru 3D UI',
    description: 'Render the Nyaomaru 3D MCP UI.',
    inputSchema: {},
    _meta: {
      ui: {
        resourceUri: RESOURCE_URI,
      },
    },
  },
  async () => {
    return {
      content: [
        {
          type: 'text',
          text: 'Opening Nyaomaru 3D UI...',
        },
      ],
    };
  },
);

registerAppResource(
  server,
  RESOURCE_URI,
  RESOURCE_URI,
  {
    mimeType: RESOURCE_MIME_TYPE,
  },
  async () => {
    const html = await fetchUiHtml();
    return {
      contents: [
        {
          uri: RESOURCE_URI,
          mimeType: RESOURCE_MIME_TYPE,
          text: html,
        },
      ],
    };
  },
);
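The code above assumes a RESOURCE_URI constant and a fetchUiHtml() helper that aren't shown. A minimal sketch of what they could look like, assuming the HTML is pulled from the Vercel deployment (the ui:// URI and the helper body are illustrative, not copied from my repo):

// Illustrative only: adjust the URI and URL to your own project
const RESOURCE_URI = 'ui://nyaomaru-3d/mcp-app.html';
const UI_URL = 'https://{project-name}.vercel.app/mcp-app.html';

async function fetchUiHtml(): Promise<string> {
  // Fetch the single-file build deployed earlier and return it as the resource body
  const response = await fetch(UI_URL);
  if (!response.ok) {
    throw new Error(`Failed to fetch UI HTML: ${response.status}`);
  }
  return response.text();
}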

Then expose via HTTP:

import express from 'express';
import cors from 'cors';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';

const app = express();
app.use(cors());
// ...

app.post('/mcp', async (req, res) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true,
  });
  res.on('close', () => {
    transport.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

const port = Number(process.env.PORT ?? 3001);
app.listen(port, () => {
  console.log(`MCP UI server listening on http://localhost:${port}/mcp`);
});

Exposing the server with ngrok

ngrok http 3001

Copy the HTTPS URL.
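Before wiring it into ChatGPT, you can smoke-test the endpoint with a plain JSON-RPC initialize request (the Accept header and protocolVersion below are my assumptions; adjust them if your SDK version expects different values):

curl -X POST https://xxx.ngrok-free.dev/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-06-18","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'

If you get a JSON result back with the server name and version, the endpoint is working.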

Configuring ChatGPT

  1. Open ChatGPT
  2. Settings → Apps → Advanced → Enable Developer Mode
  3. Create a new app
  4. Enter your ngrok URL (e.g. https://xxx.ngrok-free.dev/mcp)
  5. No auth
  6. Save

Then enable the app in chat and ask ChatGPT to open it.

If everything worked — congrats! Your SPA is now running inside ChatGPT 🎉

Final thoughts

Seeing your own app running inside an AI chat interface feels like the future.

We’re moving from “AI responds with text” to “AI hosts applications.”

Right now, AI control over the iframe is limited, but this space is evolving fast.

If you build products, this could soon become a new way users interact with your services.

Definitely worth experimenting with now.

If you’re experimenting with MCP Apps too, I’d love to see what you build 👀