AI Agents · Model Context Protocol
Run UserJot from the agent you already use.
Connect Claude, ChatGPT, or any MCP-compatible client to UserJot. Triage feedback, update the roadmap, or publish a changelog post with a single sentence to your agent. No install, no SDK, just a URL and a token.
Claude Code 1.4.2 · connected to userjot-mcp
⎿ userjot-mcp::listRequests → 7 requests across 3 boards
⎿ userjot-mcp::createChangelog → draft ready, 184 tokens
This week in Cadence
Workspace exports now respect team timezones. Bulk actions handle 2× larger selections. Fixed a rendering glitch in threaded replies.
⎿ userjot-mcp::updateChangelog → scheduled, notifies 1,284 subscribers
Workflows
One sentence moves the whole loop.
The roadmap and changelog are downstream of request status, so one sentence to your agent can cascade through all three. Here are three workflows customers actually run:
Monday triage
Close anything that looks like spam, tag the rest by area, and move what looks real into Review.
- userjot::listRequests
- userjot::updateRequestStatus
- userjot::addRequestTags
A weekend backlog ranked and tagged before standup, without opening the dashboard.
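Behind that one sentence, the agent issues ordinary MCP `tools/call` messages. Here is a minimal Python sketch of the JSON-RPC envelopes for this triage pass. The tool names come from the list above, but the argument shapes are illustrative guesses, not the real schemas; an agent discovers the actual schemas via `tools/list`.

```python
import json

MCP_URL = "https://api.userjot.com/v1/mcp"  # the remote endpoint from this page

def tool_call(name: str, arguments: dict, req_id: int) -> str:
    """Wrap one MCP tool invocation in a JSON-RPC 2.0 envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# The triage pass as three tool calls. The argument fields below
# (status values, requestId, tags) are hypothetical placeholders.
calls = [
    tool_call("listRequests", {"status": "open"}, 1),
    tool_call("updateRequestStatus", {"requestId": "req_123", "status": "review"}, 2),
    tool_call("addRequestTags", {"requestId": "req_123", "tags": ["exports"]}, 3),
]
```

The client takes care of sending these over HTTP with your bearer token attached; in practice you never write this by hand, which is the point of the workflow.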
Ship-day changelog
Anything completed this week without a changelog yet? Draft one in our voice, link the voters, and hold it for my review.
- userjot::listRequests
- userjot::createChangelog
- userjot::addChangelogRequests
- userjot::addChangelogAuthors
A draft waiting for review, every shipped request linked, every voter queued to notify on publish.
Grounded product research
Pull the top requests from companies over $10k ARR this quarter, and tell me who asked for each.
- userjot::lookupCompany
- userjot::listRequests
- userjot::lookupMember
A research brief built from real requests, tagged to the customers who asked, ready to drop into a planning doc.
Clients
Pick the agent you already live in.
Each client has a two-minute setup guide. Same server URL, same auth pattern everywhere. If your tool supports remote MCP, it works.
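The common shape looks like this. The snippet below follows Cursor's `mcp.json` convention; exact field names can vary by client, so check your client's setup guide, and replace `YOUR_API_TOKEN` with a token from Settings → API Tokens:

```json
{
  "mcpServers": {
    "userjot": {
      "url": "https://api.userjot.com/v1/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN"
      }
    }
  }
}
```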
Claude Code
First-class remote MCP with a single CLI command.
Cursor
Drop it in mcp.json, global or per-project.
Codex
Add UserJot to the OpenAI Codex config.
ChatGPT
Connect as a custom MCP app or connector.
Windsurf
Remote MCP configured in Windsurf settings.
GitHub Copilot
VS Code Copilot Chat with MCP support.
Gemini
Google Gemini with a custom MCP connector.
Antigravity
Google's agentic IDE, MCP-ready.
Perplexity
Perplexity with a custom MCP tool.
OpenCode
Open-source coding agent in your terminal.
OpenClaw
Open-source CLI for agentic workflows.
Why MCP
A thin protocol, a full product surface.
The Model Context Protocol is the lingua franca AI clients have settled on for reaching into other tools. UserJot speaks it natively, over HTTP, at api.userjot.com/v1/mcp. Nothing to install locally, nothing proprietary to learn.
Every MCP tool maps one-to-one to a UserJot REST endpoint, generated from the same OpenAPI spec the dashboard and public API are built against. That means no stripped-down subset. An agent can do whatever a teammate with the same token could do.
Auth is boring, which is the point: a bearer token scoped to the user who created it, revocable from the dashboard. And because it's just remote HTTP, the same endpoint serves every supported client, and every future one.
Questions
Frequently asked, plainly answered.
Everything worth knowing about running UserJot through an AI agent.
What is UserJot MCP?
A remote Model Context Protocol endpoint that exposes your UserJot workspace to any MCP-aware AI client. Your agent can read feedback, change request statuses, manage the roadmap, and publish changelogs directly, using the same permissions as the dashboard.
Do I need to install anything?
No. UserJot MCP is a remote HTTP server at https://api.userjot.com/v1/mcp. There is no local package, no bridge process, and no SDK. If your client supports remote MCP, you can connect it with a URL and a bearer token.
Which clients does it work with?
Claude Code, Cursor, Codex, ChatGPT, and Windsurf have dedicated setup guides. GitHub Copilot, Gemini, Antigravity, Perplexity, OpenCode, OpenClaw, and any other client that supports remote HTTP MCP servers with custom headers will work too. The endpoint and auth shape are the same everywhere.
How does authentication work?
Standard bearer-token auth with a UserJot API token. Create the token in Settings → API Tokens, pass it in the Authorization header, and revoke it from the dashboard at any time. The token inherits the permissions of the user who created it.
What can an agent actually do with my token?
The MCP surface mirrors the full UserJot REST API, so an agent with your token can create, update, and delete requests, boards, comments, and changelogs. Treat it like giving an API key to any automation: use scoped tokens, test in a non-production workspace first, and review draft output before publishing.
When should I use MCP instead of the REST API?
Use MCP when you are working through an AI client like Claude Code, Cursor, or ChatGPT, and want natural-language control. Use the REST API when you are building your own backend integration or custom automation where you need full control over requests and responses.
Is MCP available on the Free plan?
Yes. API access is available on every plan, including Free. The MCP server uses the same tokens, so if you can use the REST API, you can use MCP.
Does my traffic pass through any intermediary?
No. MCP requests go directly between your client and api.userjot.com. UserJot does not see your AI provider, and your AI provider does not see your token. The client forwards the bearer header on each tool call.
Start building
Hand UserJot to your agent.
Grab an API token, paste the MCP endpoint into your client, and ask your agent to do something it could not do an hour ago.
That's the loop on autopilot.
Free on every plan · Works with any remote MCP client