
How to Connect PhantomBuster to OpenClaw: Setup, Models, and Workflow Guide


If you're searching for "how to connect PhantomBuster to OpenClaw", the real question usually isn't whether the connection is possible. It's how to make PhantomBuster usable inside an OpenClaw workflow with the right model, the right context, and the right level of control.

That's the practical framing.

OpenClaw gives you the orchestration layer: connectors, skills, tools, prompts, approvals, and the ability to run workflows where your team already works. PhantomBuster provides the domain context. The integration becomes valuable when those two pieces are connected cleanly.

What “Connect PhantomBuster to OpenClaw” Actually Means

In practice, connecting PhantomBuster to OpenClaw usually involves four layers:

  • Authentication so OpenClaw can securely access PhantomBuster
  • Tooling or proxy endpoints that expose the right PhantomBuster actions and data
  • Skills/instructions that tell OpenClaw how to reason over PhantomBuster context
  • Model selection so the assistant uses the right LLM for the job

That last piece matters more than most people expect.

Which Models Can You Use?

OpenClaw is model-flexible, so a PhantomBuster integration does not need to be tied to a single provider. Depending on your setup, teams commonly want to use:

  • OpenAI models like GPT-4o, GPT-4.1, and o3 for broad reasoning and tool use
  • Anthropic models like Claude 3.5 Sonnet, Claude Sonnet 4/4.5, and Claude Opus for strong writing, analysis, and long-context work
  • Google models like Gemini 1.5 Pro or newer Gemini models for multimodal and large-context workflows
  • Other model backends if your OpenClaw environment exposes them

The practical point: you can connect PhantomBuster to OpenClaw once, then run different workflows with different models depending on the job.

For example:

  • Use Claude for nuanced summarisation or drafting
  • Use OpenAI for structured extraction, tool-heavy workflows, or general-purpose copiloting
  • Use Gemini when multimodal or very large context windows matter

A Good Integration Pattern for PhantomBuster

A strong PhantomBuster + OpenClaw setup usually looks like this:

  1. OpenClaw receives a request in chat or from an automation
  2. It calls the right PhantomBuster endpoint or proxy
  3. The selected model reasons over the returned context
  4. OpenClaw returns an answer, draft, classification, or action
  5. High-risk actions stay behind approvals or structured guardrails

That is what makes the setup operational rather than just experimental.
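The five steps above can be sketched as a single dispatch function. This is a minimal illustration, not OpenClaw's actual internals; the action names, the `HIGH_RISK` set, and the callback signatures are all hypothetical:

```python
# Sketch of the pattern: route a request, gate high-risk actions
# behind an approval check, and otherwise fetch and return context.
HIGH_RISK = {"launch"}  # hypothetical: actions that mutate state

def handle_request(action: str, fetch, approve) -> str:
    """Run `fetch` for read-style actions; require `approve` first
    for anything in HIGH_RISK (step 5 of the pattern above)."""
    if action in HIGH_RISK and not approve(action):
        return "blocked: awaiting approval"
    return fetch(action)
```

The useful property is that the guardrail lives in the orchestration layer, so it applies regardless of which model is doing the reasoning.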

Step-by-Step: Connect PhantomBuster to OpenClaw

Step 1: Get Your PhantomBuster API Key

Log into PhantomBuster and go to your profile → API. Copy your API key. The PhantomBuster API base URL is https://api.phantombuster.com/api/v2/. Authenticate with the key in the X-Phantombuster-Key header.

Step 2: List and Monitor Your Agents

The /agents/fetch-all endpoint returns all your configured Phantoms with their IDs and last run status. /agents/fetch-output lets you retrieve the output data from a completed run. /agents/launch triggers a Phantom to run on demand — useful for triggering scrapes from a Slack command.

Step 3: Build the Proxy and Skill File

Build your proxy around agent status and output retrieval. Write ~/.openclaw/skills/phantombuster.md listing your most-used Phantoms by name and agent ID, what they extract, and how often they run. This lets your team ask "what did the LinkedIn company scraper find today?" and get a real answer.
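A minimal skill file might look like the sketch below. The Phantom names, IDs, and schedules are invented placeholders; replace them with your own:

```markdown
# PhantomBuster Skills

## LinkedIn Company Scraper (agent ID: 1234567890)
- Extracts: company name, headcount, industry, profile URL
- Runs: daily at 06:00 UTC
- Typical question: "what did the LinkedIn company scraper find today?"
- Answer with a summary, not raw rows; link the dashboard for full data.

## Website Contact Finder (agent ID: 9876543210)
- Extracts: emails and contact pages from a URL list
- Runs: on demand only (launch requires approval)
```

The point is to give the model enough grounding to map a natural-language question onto a specific agent ID and the right endpoint call.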

Model-Specific Workflow Ideas

PhantomBuster + OpenAI

Use this when you want a strong general-purpose setup for extraction, classification, action planning, and tool-driven workflows around PhantomBuster.

PhantomBuster + Claude

Use this when you want better writing quality, clearer summaries, stronger nuance, and reliable long-context reasoning over PhantomBuster data.

PhantomBuster + Gemini

Use this when the workflow benefits from large context windows, multimodal inputs, or Google-native ecosystem alignment.

Common Mistakes

Most teams do not fail because the model is bad. They fail because:

  • the PhantomBuster connection is too thin
  • the model lacks the right live context
  • prompts are vague
  • no structured outputs are enforced
  • permissions and approvals are skipped
  • one model is forced to do every job, even when another would be a better fit

The best setup is usually one integration layer, multiple model options, and clear guardrails.

Challenges and Caveats

Phantoms Run on PhantomBuster's Infrastructure

PhantomBuster Phantoms run on PhantomBuster's servers, not yours. The API gives you control and visibility, but execution happens externally. If a Phantom fails due to platform changes (LinkedIn blocking the scraper, for example), you'll see the failure status via the API, but resolution happens in the PhantomBuster dashboard.

Output Size Can Be Large

Some Phantoms extract thousands of rows. Pulling full output into a Slack message isn't practical, so your skill file should instruct the model to summarise or filter output rather than dump raw data.
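One cheap safeguard is to trim results in the proxy before the model ever sees them. A minimal sketch, assuming the Phantom output is a list of dicts (the field names in the usage comment are hypothetical):

```python
def preview_rows(rows: list[dict], limit: int = 10,
                 fields: tuple = ()) -> list[dict]:
    """Trim a large Phantom result set before posting it to chat.

    Keeps only the first `limit` rows and, if `fields` is given,
    only those keys from each row.
    """
    trimmed = rows[:limit]
    if fields:
        trimmed = [{k: r.get(k) for k in fields} for r in trimmed]
    return trimmed

# e.g. preview_rows(output, limit=5, fields=("name", "url"))
```

Pair this with a count ("showing 5 of 2,340 rows") so the model can tell the user how much data was filtered out.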

Want PhantomBuster Connected to OpenClaw Without Building the Whole Stack Yourself?

Cody has PhantomBuster integration built in. Monitor your Phantom runs and pull extracted data into Slack without API setup.

Get started with Cody →


Related OpenClaw Guides


Looking for a more workflow-first angle? See: PhantomBuster AI Automation and PhantomBuster AI Assistant.