If you're searching for "how to connect Intercom to OpenClaw", the real question is usually not just whether the connection is possible. It's how to make Intercom usable inside an OpenClaw workflow with the right model, the right context, and the right level of control.
That's the practical framing.
OpenClaw gives you the orchestration layer: connectors, skills, tools, prompts, approvals, and the ability to run workflows where your team already works. Intercom provides the domain context. The integration becomes valuable when those two pieces are connected cleanly.
What “Connect Intercom to OpenClaw” Actually Means
In practice, connecting Intercom to OpenClaw usually involves four layers:
- Authentication so OpenClaw can securely access Intercom
- Tooling or proxy endpoints that expose the right Intercom actions and data
- Skills/instructions that tell OpenClaw how to reason over Intercom context
- Model selection so the assistant uses the right LLM for the job
That last piece matters more than most people expect.
Which Models Can You Use?
OpenClaw is model-flexible, so an Intercom integration does not need to be tied to a single provider. Depending on your setup, teams commonly want to use:
- OpenAI models like GPT-4o, GPT-4.1, and o3 for broad reasoning and tool use
- Anthropic models like Claude 3.5 Sonnet, Claude Sonnet 4/4.5, and Claude Opus for strong writing, analysis, and long-context work
- Google models like Gemini 1.5 Pro or newer Gemini models for multimodal and large-context workflows
- Other model backends if your OpenClaw environment exposes them
The practical point: you can connect Intercom to OpenClaw once, then run different workflows with different models depending on the job.
For example:
- Use Claude for nuanced summarisation or drafting
- Use OpenAI for structured extraction, tool-heavy workflows, or general-purpose copiloting
- Use Gemini when multimodal or very large context windows matter
A Good Integration Pattern for Intercom
A strong Intercom + OpenClaw setup usually looks like this:
- OpenClaw receives a request in chat or from an automation
- It calls the right Intercom endpoint or proxy
- The selected model reasons over the returned context
- OpenClaw returns an answer, draft, classification, or action
- High-risk actions stay behind approvals or structured guardrails
That is what makes the setup operational rather than just experimental.
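The pattern above can be sketched in a few lines. This is a minimal, illustrative guardrail layer, not OpenClaw's actual API: the action names and the injected `execute`/`approve` callables are placeholders for whatever your proxy and chat surface provide.

```python
from typing import Callable

# Hypothetical action names -- replace with whatever your proxy exposes.
HIGH_RISK_ACTIONS = {"send_reply", "close_conversation", "reassign"}

def requires_approval(action: str) -> bool:
    """Read-only lookups run freely; anything that mutates Intercom waits."""
    return action in HIGH_RISK_ACTIONS

def run_step(
    action: str,
    execute: Callable[[], str],
    approve: Callable[[str], bool],
) -> str:
    # The approval hook is injected so chat, CLI, or an automation
    # can each supply their own "ask a human" implementation.
    if requires_approval(action) and not approve(action):
        return f"blocked: {action} needs approval"
    return execute()
```

The key design choice is that the approval check lives in the integration layer, not in the prompt, so it holds no matter which model is running the workflow.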
Step-by-Step: Connect Intercom to OpenClaw
Step 1: Create an Intercom Access Token
Go to app.intercom.com/developers, create a new app, and generate an access token with the scopes you need (Conversations read, Contacts read at minimum). Use this as a Bearer token for all API requests to https://api.intercom.io.
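A quick way to verify the token works is a request to Intercom's `/me` endpoint, which returns the admin/app the token belongs to. This is a sketch using only the standard library; the `Intercom-Version` header value is an assumption you should pin to whatever version your app targets.

```python
import json
import os
import urllib.request

API_BASE = "https://api.intercom.io"

def auth_headers(token: str) -> dict:
    # Intercom accepts the access token as a standard Bearer token.
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
        "Intercom-Version": "2.11",  # assumed; pin to your app's API version
    }

def whoami(token: str) -> dict:
    # GET /me is a cheap sanity check that auth and scopes are set up.
    req = urllib.request.Request(f"{API_BASE}/me", headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(whoami(os.environ["INTERCOM_ACCESS_TOKEN"]))
```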
Step 2: Use the Search Conversations Endpoint
The /conversations/search endpoint accepts structured queries — filter by state (open, closed, snoozed), assignee, tag, and more. For contact history, the /contacts/{id}/conversations endpoint returns all conversations for a specific customer.
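A single-field search against `/conversations/search` looks roughly like this. The query shape (`field`/`operator`/`value`) follows Intercom's documented search syntax, but treat the details as a sketch and check them against the API reference for your version.

```python
import json
import os
import urllib.request

API_BASE = "https://api.intercom.io"

def build_state_query(state: str) -> dict:
    # Structured single-field filter for POST /conversations/search,
    # e.g. state in {"open", "closed", "snoozed"}.
    return {"query": {"field": "state", "operator": "=", "value": state}}

def search_conversations(token: str, query: dict) -> dict:
    req = urllib.request.Request(
        f"{API_BASE}/conversations/search",
        data=json.dumps(query).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    token = os.environ["INTERCOM_ACCESS_TOKEN"]
    print(search_conversations(token, build_state_query("open")))
```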
Step 3: Build the Proxy and Skill File
Build your proxy around conversation search and contact lookup. Write ~/.openclaw/skills/intercom.md with your team's tag names and assignment team names; Intercom uses human-readable labels that the model can work with directly.
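A skill file along these lines works well as a starting point. The tag and team names below are placeholders; substitute your own.

```markdown
# Intercom

Tags we use: `billing`, `bug-report`, `churn-risk` (placeholders -- use your own).
Teams for assignment: `Support Tier 1`, `Technical Escalations`.

When summarising a conversation, include: customer name, state, assignee,
tags, and the latest customer message.
Never close or reassign a conversation without explicit approval.
```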
Model-Specific Workflow Ideas
Intercom + OpenAI
Use this when you want a strong general-purpose setup for extraction, classification, action planning, and tool-driven workflows around Intercom.
Intercom + Claude
Use this when you want better writing quality, clearer summaries, stronger nuance, and reliable long-context reasoning over Intercom data.
Intercom + Gemini
Use this when the workflow benefits from large context windows, multimodal inputs, or Google-native ecosystem alignment.
Common Mistakes
Most teams do not fail because the model is bad. They fail because:
- the Intercom connection is too thin
- the model lacks the right live context
- prompts are vague
- no structured outputs are enforced
- permissions and approvals are skipped
- one model is forced to do every job, even when another would be a better fit
The best setup is usually one integration layer, multiple model options, and clear guardrails.
Challenges and Caveats
Rate Limits Are Strict on Lower Plans
Intercom's API rate limits are 500–1000 requests per minute depending on plan. Conversation search can involve multiple paginated requests for large inboxes. Cache results where possible.
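A tiny time-to-live cache in front of the proxy goes a long way here. This is a minimal sketch; in production you would likely also want request coalescing and retry-on-429 handling.

```python
import time

class TTLCache:
    """Tiny cache so repeated inbox queries don't burn the rate limit."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        hit = self._store.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if time.monotonic() - stored_at > self.ttl:
            # Entry is stale: evict and report a miss.
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())
```

Wrap your search calls so identical queries within the TTL window are served from the cache instead of hitting `/conversations/search` again.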
Conversation History Has Pagination
Long conversation threads are paginated. If a customer has been in contact many times, your proxy may need to handle cursor-based pagination to retrieve the full history.
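Cursor handling can be isolated into a small drain loop like the one below. The `pages.next.starting_after` field names are assumptions based on Intercom's REST pagination style; verify them against the docs for the endpoint you're calling.

```python
def next_cursor(page: dict):
    # Intercom responses carry a `pages` object; the next page (if any)
    # is identified by a `starting_after` cursor. Field names assumed.
    nxt = (page.get("pages") or {}).get("next") or {}
    return nxt.get("starting_after")

def fetch_all(fetch_page):
    """Drain a cursor-paginated endpoint.

    fetch_page(cursor) -> response dict; cursor is None on the first call.
    """
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page.get("conversations", []))
        cursor = next_cursor(page)
        if cursor is None:
            return items
```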
Want Intercom Connected to OpenClaw Without Building the Whole Stack Yourself?
Cody has Intercom integration built in. Get conversation context and inbox health in Slack without access token setup.
Related OpenClaw Guides
- How to Connect Zendesk to OpenClaw
- How to Connect Freshdesk to OpenClaw
- How to Connect HubSpot to OpenClaw
Looking for a more workflow-first angle? See: Intercom AI Automation and Intercom AI Assistant.