


Create shortened links, generate QR codes, and track analytics with EdgeURL.
Most platforms are quietly bracing for AI agents to start showing up at the front door. We just opened ours.
EdgeURL is now the first identity platform with a fully spec-compliant Model Context Protocol server that lets any AI agent -- Claude Desktop, Claude.ai web, ChatGPT, Cursor, Continue, anything that speaks MCP -- create a verified user, manage their identity, and shorten links on their behalf, with Ed25519-signed receipts of every action and out-of-band approval channels for the writes that matter.
If you've been watching MCP get adopted across Claude Desktop, ChatGPT desktop, Cursor, and the rest, this is the missing piece: an identity layer your AI can actually act inside.
Live now: server at mcp.edgeurl.io, npm wrapper @edgeurl/mcp, full developer guide at /docs/mcp.
Three things that no production MCP server has shipped together:
Every other MCP server presumes you already have an account. Stripe MCP, Linear MCP, GitHub MCP, Notion MCP -- they all start with OAuth from an existing user. EdgeURL is the first to let an agent create a brand-new user from chat.
User says "sign me up for EdgeURL." Agent calls signup_user(claimed_email). EdgeURL emails the user a magic-link confirmation. The user clicks it in their own browser, sees the exact scopes the agent will receive, and approves. Agent polls check_signup_status, gets a scoped access token, and starts working.
The user never types a password into the chat. The agent never holds a long-lived credential it didn't get explicit consent for. The two-step ceremony is the security model -- the email owner is always the gate.
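Agent-side, that two-step ceremony reduces to one tool call plus a polling loop. A minimal sketch, assuming the tool names from the post (signup_user, check_signup_status); the signup_id field, the result shape, and the timing are invented for illustration:

```typescript
// Hypothetical agent-side signup flow. Tool names come from the post;
// the signup_id field, result shape, and poll interval are assumptions.
type SignupStatus = "pending" | "approved" | "denied";

interface SignupState {
  status: SignupStatus;
  accessToken?: string; // scoped token, present only once the user approves
}

async function signupAndWait(
  callTool: (name: string, args: object) => Promise<any>,
  email: string,
  pollMs = 5000,
): Promise<string> {
  // 1. Agent claims an email; EdgeURL sends the magic-link confirmation.
  const { signup_id } = await callTool("signup_user", { claimed_email: email });

  // 2. Poll until the email owner clicks the link and approves the scopes.
  for (let attempt = 0; attempt < 60; attempt++) {
    const state: SignupState = await callTool("check_signup_status", { signup_id });
    if (state.status === "approved" && state.accessToken) return state.accessToken;
    if (state.status === "denied") throw new Error("user declined the requested scopes");
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  throw new Error("signup confirmation timed out");
}
```

The shape matters more than the details: the token only ever appears after the email owner approves, so the agent's worst failure mode is a timeout, not a leaked credential.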
Every successful authenticated tool call writes a receipt to mcp_action_receipts. Receipts are JWS-EdDSA signed with our MCP_RECEIPT_SIGNING_KEY, hash-chained per user (prev_receipt_hash = sha256(prev.signed_payload)), and verifiable offline by anyone -- the public key lives at /.well-known/mcp.
What that means in practice: when an agent acts on your behalf, you get a tamper-evident receipt for it. If the receipt's payload mutates, the chain breaks and every subsequent receipt fails verification. You can publish a receipt URL and let any third party validate that EdgeURL really did sign that action on that date with that scope.
This is the audit trail that doesn't require trusting EdgeURL. It's the thing that makes "yes, my AI did that" provably true to a journalist, an auditor, or a brand partner.
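The hash-chain property is easy to check end-to-end. Here is a minimal Node sketch that assumes a simplified signed_payload format (the real receipts are JWS-EdDSA, verified against the key published at /.well-known/mcp):

```typescript
import {
  createHash, generateKeyPairSync, sign, verify, KeyObject,
} from "node:crypto";

// Simplified receipt: field names (signed_payload, prev_receipt_hash)
// follow the post, but the payload format here is an assumption --
// real receipts are JWS (EdDSA), not raw JSON + base64 signature.
interface Receipt {
  signed_payload: string;    // action + Ed25519 signature
  prev_receipt_hash: string; // sha256 of the previous receipt's signed_payload
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Issue a receipt: sign the action and link it to the previous receipt.
function issueReceipt(action: string, prev: Receipt | null, key: KeyObject): Receipt {
  const sig = sign(null, Buffer.from(action), key).toString("base64");
  return {
    signed_payload: JSON.stringify({ action, sig }),
    prev_receipt_hash: prev ? sha256(prev.signed_payload) : "genesis",
  };
}

// Offline verification: check every signature and every chain link.
function verifyChain(receipts: Receipt[], pub: KeyObject): boolean {
  return receipts.every((r, i) => {
    const { action, sig } = JSON.parse(r.signed_payload);
    const sigOk = verify(null, Buffer.from(action), pub, Buffer.from(sig, "base64"));
    const linkOk = i === 0 || r.prev_receipt_hash === sha256(receipts[i - 1].signed_payload);
    return sigOk && linkOk;
  });
}
```

Tamper with any receipt's signed_payload and both its own signature and the next receipt's prev_receipt_hash stop matching -- exactly the tamper-evidence described above.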
Read tools fire instantly. Write tools route through whatever channel rules the user configured: in-app push, SMS, Telegram bot, email, webhook. Bio edit needs a quick push. Marketplace payout above $50 routes through Telegram with a 60-second cancel window. Account deletion requires email + 2FA.
The agent gets back -32100 pending_approval with a poll URL. The user gets the notification on whatever channel they configured for that action class. When they approve, the agent retries with an X-MCP-Approval-Id header to bypass the gate. If they cancel within the window, the action never happens.
That's the difference between "graduated approval" and "binary y/n in chat." It's also why "agent went rogue" stops being a catastrophe.
We registered a fourth tool at launch -- bridge_delegate -- that lets an agent on Claude or ChatGPT forward a tool call to another MCP server (Stripe, Linear, GitHub) using EdgeURL as the trusted token broker. The schema is shipped, the tool is in tools/list, and the registry endpoint at /api/mcp/bridge enumerates targets being integrated. Each per-partner bridge ships independently as the target server's OAuth + tool contract stabilizes.
The strategic frame: every popular MCP server makes EdgeURL more valuable as the bridge. Compounding moat.
When an AI agent calls one of our tools today, the response embeds the canonical public URL for whatever it just touched. get_profile returns your profile_url and edit_url. list_links returns each link's full short_url. verified_socials returns the platform profile URL for every verified handle. account_info returns shortcuts to dashboard, billing, settings, and pricing. query_analytics returns the analytics dashboard URL.
In practice that means when Claude or ChatGPT reads your bio in chat, you can click straight from the assistant's reply to the live profile, the editable dashboard surface, or the verified social on its native platform. The agent renders the result, you click through, you verify. No "go find this in your account" handoff. The chat is the launchpad.
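Concretely, a client only needs a convention to make that work. The field names below (profile_url, edit_url) follow the post, but this response shape is an assumption, not the documented schema:

```typescript
// Illustrative get_profile result -- an assumed shape, not EdgeURL's schema.
const getProfileResult = {
  username: "rex0lux",
  profile_url: "https://edgeurl.io/rex0lux",    // live public profile
  edit_url: "https://edgeurl.io/dashboard/bio", // hypothetical dashboard path
};

// A chat client can turn every *_url field into an inline hyperlink.
const clickable = Object.entries(getProfileResult)
  .filter(([key]) => key.endsWith("_url"))
  .map(([, url]) => url);
```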
The screenshot above is a real verified_socials call on Claude.ai web -- six platforms come back with their verification dates, and the model uses the per-row platform_url field to spot the LinkedIn handle drift on its own. That observation is something only an agent staring at the structured payload can catch; a human scanning the bio page would never notice the slug pattern.
And here is the actual session, unedited:
Notice every handle is underlined -- those are real hyperlinks, not styling. verified_socials returned platform_url for each row plus profile_url for the top-level edgeurl.io page, and Claude rendered them inline. Click @rex0lux next to "Twitter/X" and you land on x.com/rex0lux. Click edgeurl.io/rex0lux and you land on the bio. The chat is the launchpad.
If you have an MCP-aware client, you can verify the server end-to-end right now:
npx @modelcontextprotocol/inspector https://mcp.edgeurl.io
You should see 14 tools, OAuth metadata at /.well-known/oauth-authorization-server, and the receipt-verifier public key at /.well-known/mcp. Calling lookup_profile({"username":"rex0lux"}) returns Schema.org Person JSON-LD with verified socials -- no auth required for the read tools.
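For a rough sense of what that Person object looks like: the sameAs convention is standard Schema.org practice for linking verified profiles, but every concrete field value below is an assumption, not EdgeURL's documented output:

```typescript
// Illustrative Schema.org Person JSON-LD, in the spirit of what
// lookup_profile returns per the post. Field values are assumptions.
const person = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "rex0lux",
  url: "https://edgeurl.io/rex0lux",
  sameAs: ["https://x.com/rex0lux"], // one entry per verified social
};
```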
For Claude Desktop, drop this into your claude_desktop_config.json:
{
  "mcpServers": {
    "edgeurl": {
      "command": "npx",
      "args": ["-y", "@edgeurl/mcp"],
      "env": { "EDGEURL_API_KEY": "" }
    }
  }
}
That npx -y @edgeurl/mcp line is now a real npm package: npmjs.com/package/@edgeurl/mcp. It's a thin stdio-to-HTTPS proxy -- no business logic in the wrapper; all the tools, OAuth, receipts, and approvals live on mcp.edgeurl.io. Update the server, and every client gets the new behavior immediately.
For Claude.ai web (Pro/Max/Team/Enterprise), the same server works as a Custom Connector. Add it from Settings > Connectors > Add custom connector with the URL https://mcp.edgeurl.io/api/mcp and Claude will walk through OAuth itself. Read the full developer guide -- including ChatGPT custom GPT setup, scope reference, receipt verification recipe, and the bridge protocol -- at /docs/mcp.
Most identity platforms in 2026 are still bolting "AI features" on. The frame stays the same: humans are the customers, AI is a feature.
We flipped the frame. AI agents are first-class customers of EdgeURL alongside humans. They have their own auth tier (mcp_at_* Bearer tokens, audience-bound to mcp.edgeurl.io), their own scope vocabulary, their own per-token rate limits, and their own audit chain. They sign actions cryptographically. They go through the same approval channels a human would for sensitive writes.
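The agent tier can be pictured as a gate like this. The mcp_at_ prefix and the mcp.edgeurl.io audience binding come from the post; the claims layout and scope check are assumptions, and real verification is cryptographic, not string parsing:

```typescript
// Sketch of an agent-tier token check. The mcp_at_ prefix and the
// mcp.edgeurl.io audience come from the post; the claims shape is assumed.
interface AgentTokenClaims {
  aud: string;      // audience the token is bound to
  scopes: string[]; // agent-specific scope vocabulary
}

function acceptAgentToken(bearer: string, claims: AgentTokenClaims): boolean {
  if (!bearer.startsWith("mcp_at_")) return false;   // wrong auth tier
  if (claims.aud !== "mcp.edgeurl.io") return false; // token replayed elsewhere
  return claims.scopes.length > 0;                   // must carry explicit scopes
}
```

Audience binding is the piece that matters: a token minted for the MCP server is useless anywhere else, even if it leaks.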
This is what AI-native infrastructure actually looks like. Not a chatbot bolted onto a dashboard. Not a fuzzy promise to "support agents soon." A spec-compliant, security-hardened, receipt-issuing identity layer that any AI client can connect to today.
Phase 5 of the rollout is shipped: server at mcp.edgeurl.io, npm wrapper at @edgeurl/mcp, public docs at /docs/mcp, Claude.ai Custom Connector compatibility, and the agent-clickable URL pass on every tool. After this, the focus moves to per-partner bridge integrations (Stripe, Linear, GitHub) and getting EdgeURL listed in Anthropic's public connector directory and ChatGPT's GPT store.
If you build with AI agents and you've been waiting for an identity layer that actually treats them like first-class citizens, that layer is live. The simplest test is the one we just walked through: connect the inspector, call tools/list, and look at what's there.
Create your EdgeURL account (free, no card) -- or, if you have an MCP-aware AI client open right now, ask it to do it for you. We will be the first place that actually answers when it asks.