28 Mar 2026 · WebMCP · 12 min read

Your Landing Pages Have Two Audiences Now — And One Just Learned to Act

By Pardeep Dhingra

Your landing pages have always served two audiences: humans and machines. Search engine crawlers have been reading your pages for decades. But something fundamental changed in early 2026 — the machine audience upgraded from reading to doing.

WebMCP (Web Model Context Protocol), a W3C draft co-authored by Google and Microsoft, lets websites register JavaScript functions as structured tools that AI agents can discover and call directly. No DOM scraping, no guessing, no fragile heuristics. The agent asks the page what it can do, and the page tells it.

This isn't a theoretical future. Chrome Canary 146 shipped an early preview in February 2026. The MCP ecosystem has exploded to 97 million+ SDK downloads and 17,000+ servers. And we built a demo that shows the difference: the same restaurant reservation task completes in 8 seconds with WebMCP versus 38 seconds without it.

The Three-Layer Model

Most landing pages stop at Layer 1. Some do Layer 2. Almost nobody does Layer 3 — yet.

Layer 1: Semantic HTML
Tells agents the page structure. Most pages stop here.

Headings, nav, main, article, section, forms — the structural bones of your page. This is the foundation for both accessibility and AI readability. Without semantic HTML, agents are navigating a pile of divs with no map. This layer is table stakes — but it's only the beginning.
Layer 2: Schema.org / JSON-LD
Tells agents what the page contains: products, prices, reviews, FAQs.

Structured data adds a metadata layer: product types, prices, reviews, FAQs, business info. This is what Schema.org and JSON-LD provide. AI search engines (ChatGPT Search, Perplexity, Google AI Overviews) actively consume this data to decide which brands to cite. Even after Google restricted FAQ rich results in August 2023, AI systems picked up that exact structured data — pages with well-structured FAQ content see 20-40% higher citation rates in AI-generated answers.
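As a concrete sketch of Layer 2, here is a Restaurant JSON-LD block built as a plain object and injected at runtime. The schema.org types and properties are standard, but the restaurant details are invented for illustration, and a static script tag in the page head works just as well:

```javascript
// Layer 2 sketch: Restaurant structured data as JSON-LD.
// schema.org types/properties are real; the values are illustrative.
const restaurantSchema = {
  "@context": "https://schema.org",
  "@type": "Restaurant",
  name: "Chez Demo",              // illustrative name, not from the article's demo
  servesCuisine: "French",
  acceptsReservations: "True",
  priceRange: "$$$",
  address: {
    "@type": "PostalAddress",
    addressLocality: "San Francisco",
    addressRegion: "CA",
  },
};

// Inject at runtime (skipped outside a browser); static markup is equivalent.
if (typeof document !== "undefined") {
  const tag = document.createElement("script");
  tag.type = "application/ld+json";
  tag.textContent = JSON.stringify(restaurantSchema);
  document.head.appendChild(tag);
}
```

Whether you inline it statically or inject it from your framework, what matters is that the JSON parses cleanly and uses the exact schema.org type names.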
Layer 3: WebMCP — registerTool()
Tells agents what the page can do. This is the new action layer.

This is the breakthrough. Schema.org tells agents "this is a restaurant with a prix fixe menu." WebMCP tells agents "here's the fill_reservation_form() function to book a table." Together they enable full agentic interaction. The navigator.modelContext.registerTool() API lets you expose any JavaScript function as a structured, discoverable tool with typed inputs and descriptions.

The Speed Difference: 4.7x Faster

We built a side-by-side demo — same restaurant page, same reservation task. The difference is whether the AI agent has to guess at page structure or can call registered tools directly:

Traditional (DOM scraping): ~38s. The agent scans 62 DOM nodes, guesses each field's purpose, and fills the form one field at a time.

WebMCP (tool calls): ~8s. The agent discovers the registered tools, then calls fill_reservation_form() and confirm_reservation().

That's 4.7x faster with WebMCP.

How It Works: registerTool()

The implementation cost is minimal — you're wrapping existing JavaScript functions with a registration call:

webmcp-register.js
```javascript
// Load the WebMCP polyfill (until native browser support ships):
// <script src="https://unpkg.com/@mcp-b/global@latest/dist/index.iife.js">

// Register a "fill reservation" tool for AI agents
navigator.modelContext.registerTool({
  name: "fill_reservation_form",
  description: "Fill in reservation form fields in one call",
  inputSchema: {
    type: "object",
    properties: {
      guestName: { type: "string", description: "Guest's full name" },
      date: { type: "string", description: "Reservation date (YYYY-MM-DD)" }
    },
    required: ["guestName"]
  },
  handler: async ({ guestName, date, ...rest }) => {
    // Your existing business logic — nothing changes
    fillFormFields({ guestName, date, ...rest });
    return { success: true, filled: Object.keys(rest) };
  }
});
```

That's it. You're exposing your existing reservation logic as a structured tool that any WebMCP-compatible agent can discover and call. No new UI, no new API — just a registration wrapper around functions you already have.
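One practical caveat: since native support is still flag-gated, it's worth guarding registration behind feature detection so the page degrades gracefully when no agent runtime is present. A minimal sketch — registerWebMcpTools is a hypothetical helper of ours; registerTool is the API from the draft:

```javascript
// Register tools only when a WebMCP-capable context (native or polyfill) exists.
function registerWebMcpTools(ctx, tools) {
  if (!ctx || typeof ctx.registerTool !== "function") {
    return 0; // no agent runtime: the page keeps working as plain HTML
  }
  for (const tool of tools) ctx.registerTool(tool);
  return tools.length;
}

// Usage: pass navigator.modelContext when it exists, undefined otherwise.
const ctx = typeof navigator !== "undefined" ? navigator.modelContext : undefined;
registerWebMcpTools(ctx, [/* tool definitions like the one above */]);
```

The payoff of the guard is that the same bundle ships everywhere: browsers without WebMCP simply skip registration, and nothing else on the page changes.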

Funnel Compression

The traditional conversion funnel collapses when AI agents can act on behalf of users:

Stage           Human Journey                                        Agent Journey
Awareness       Google search → ad click → landing page              User asks: "book me a table for Friday"
Consideration   Browse menu, check reviews, compare restaurants      Agent calls get_menu(), parses structured data
Decision        Pick a time, fill form fields, hesitate, come back   Agent calls get_availability() immediately
Purchase        Fill reservation form, enter details, submit         Agent calls fill_reservation_form() + confirm_reservation()
Total time      5–30 minutes                                         8–30 seconds

SEO vs. AEO

A new discipline is forming alongside traditional SEO — Agent Engine Optimization:

Dimension       Traditional SEO                       Agent Engine Optimization
Discovery       Googlebot crawls and indexes          AI agents fetch, understand, and act
Optimization    Keywords, backlinks, page speed       Semantic HTML, structured data, tool registration
User journey    Search → click → browse → convert     Ask agent → agent executes → done
Conversion      Human clicks "Confirm Reservation"    Agent calls confirm_reservation()
Attribution     Well-established (GA4, UTM)           Fragmented — no standard yet

The attribution problem is real. AI agents that complete bookings on behalf of users may not carry UTM parameters, referrer headers, or session cookies. There's currently no standard way to attribute conversions from AI agents vs. direct human traffic. This is an unsolved analytics blind spot in 2026.
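Until a standard emerges, one stopgap is self-reporting: the tool handler is the one place where you know for certain that a conversion arrived through an agent, so it can emit a tagged analytics event. A sketch, where the event shape and the /analytics/agent-events endpoint are our assumptions, not any standard:

```javascript
// Tag conversions that arrive through WebMCP tools rather than human clicks.
// The "webmcp_tool" channel and the endpoint below are illustrative.
function buildConversionEvent(toolName, result) {
  return {
    event: "conversion",
    channel: "webmcp_tool",   // distinguishes agent traffic from human sessions
    tool: toolName,
    success: result.success === true,
    timestamp: new Date().toISOString(),
  };
}

async function reportAgentConversion(toolName, result) {
  const payload = buildConversionEvent(toolName, result);
  if (typeof fetch === "function") {
    // Hypothetical first-party endpoint; swap in your analytics pipeline.
    await fetch("/analytics/agent-events", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    }).catch(() => {}); // analytics failures must never break the booking
  }
  return payload;
}
```

Calling reportAgentConversion from inside confirm_reservation()'s handler would at least separate agent-driven bookings from human sessions in your own data, even while cross-site attribution stays unsolved.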

Security: The Elephant in the Room

A February 2026 scan found 8,000+ MCP servers exposed on the public internet, with 492 having zero authentication and zero encryption. And 88% of organizations reported confirmed or suspected AI agent security incidents.

The key threats are real: prompt injection via page content, unauthorized agent actions, abuse of registered tools, and data exfiltration via tool responses. If you're implementing WebMCP, authentication and input validation on your tool handlers aren't optional — they're the first thing you build.
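In practice that means validating every input against the declared schema and gating sensitive tools behind the user's existing session before the handler touches business logic. A minimal sketch, assuming a hypothetical isLoggedIn() session check and an illustrative date format rule:

```javascript
// Defensive wrapper for a WebMCP tool handler: validate, then authorize, then act.
function validateReservationInput({ guestName, date }) {
  const errors = [];
  if (typeof guestName !== "string" || guestName.trim().length === 0) {
    errors.push("guestName is required");
  }
  if (date !== undefined && !/^\d{4}-\d{2}-\d{2}$/.test(date)) {
    errors.push("date must be YYYY-MM-DD");
  }
  return errors;
}

// isLoggedIn() is a stand-in for your real session/auth check.
function guardedHandler(input, isLoggedIn) {
  const errors = validateReservationInput(input);
  if (errors.length > 0) return { success: false, errors };
  if (!isLoggedIn()) return { success: false, errors: ["authentication required"] };
  // Only now call the real fillFormFields / confirm logic.
  return { success: true };
}
```

Returning structured errors instead of throwing also gives the agent something it can act on, such as asking the user for the missing field.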

Try It Yourself

We built a complete side-by-side demo — a traditional restaurant page vs. the same page with 10 registered WebMCP tools. Each has a built-in AI agent simulation so you can see the difference in real time.


Is Your Page Ready?

A quick audit for your landing pages — check off what you already have:

Page Readiness Checklist
Layer 1: Semantic HTML — proper headings, nav, main, article, section elements
Layer 1: ARIA labels on interactive elements (buttons, forms, inputs)
Layer 2: JSON-LD structured data (Product, FAQ, Organization)
Layer 2: FAQ content visible on page AND in schema (not just hidden JSON-LD)
Layer 2: All Google-required fields present for your schema types
Layer 3: Core actions registered via navigator.modelContext.registerTool()
Layer 3: Tool handlers include input validation and auth checks
Layer 3: Autocomplete and inputmode attributes on all form fields
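The Layer 1 and Layer 2 items above can be spot-checked from the browser console with a few DOM queries. A sketch rather than a full audit; the checks are illustrative:

```javascript
// Spot-check Layer 1 and 2 readiness of the current page (illustrative checks).
function auditPage(doc) {
  return {
    hasMain: !!doc.querySelector("main"),
    headingCount: doc.querySelectorAll("h1, h2, h3").length,
    jsonLdBlocks: doc.querySelectorAll('script[type="application/ld+json"]').length,
    unlabeledInputs: [...doc.querySelectorAll("input")]
      .filter((el) => !el.labels?.length && !el.getAttribute("aria-label")).length,
  };
}

// Run in a browser console; skipped outside a browser environment.
if (typeof document !== "undefined") console.table(auditPage(document));
```

A page that reports a main landmark, real headings, at least one JSON-LD block, and zero unlabeled inputs has the first two layers broadly covered; Layer 3 still needs registerTool().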

The Honest Take

WebMCP is early. It ships only behind a flag in Chrome Canary, the standards landscape is fragmented (WebMCP, MCP, A2A, and agents.json all serve different layers), and browser support beyond Chrome is uncertain: Firefox is still evaluating, and Safari has announced no timeline.

But the direction is unmistakable. The MCP ecosystem grew from 714 servers to 17,000+ in a single year. Google and Microsoft co-authored the W3C draft. OpenAI adopted MCP across ChatGPT. Shopify shipped Storefront MCP for Hydrogen. Cloudflare is hosting edge MCP servers.

The question isn't whether AI agents will interact with your pages. It's whether your pages are ready when they do. The companies that optimize for both audiences — human and AI — will capture a compounding advantage as agent traffic scales.

Start with Layers 1 and 2 today. Experiment with Layer 3. The window to be early is still open.