A ready-to-run example is available at examples/01_standalone_sdk/40_acp_agent_example.py.
ACPAgent lets you use any Agent Client Protocol server as the backend for an OpenHands conversation. Instead of calling an LLM directly, the agent spawns an ACP server subprocess and communicates with it over JSON-RPC. The server manages its own LLM, tools, and execution — your code just sends messages and collects responses.
Basic Usage
acp_command is the shell command used to spawn the server process. The SDK communicates with it over stdin/stdout JSON-RPC.
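The transport underneath can be sketched in plain Python: spawn a subprocess and exchange JSON-RPC messages over its stdin/stdout. The miniature echo "server" below is a stand-in for a real ACP server, not the SDK's implementation:

```python
import json
import subprocess
import sys

# Minimal illustration of the transport ACPAgent uses: JSON-RPC messages
# exchanged with a subprocess over stdin/stdout. The child process below
# is a toy echo "server", not a real ACP server.
server_code = (
    "import json, sys\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'],\n"
    "            'result': {'echoed': req['method']}}\n"
    "    print(json.dumps(resp), flush=True)\n"
)
proc = subprocess.Popen(
    [sys.executable, "-c", server_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# One request/response round trip over the pipe.
request = {"jsonrpc": "2.0", "id": 1, "method": "session/prompt"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
```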
Key difference from standard agents: With ACPAgent, you don’t need an LLM_API_KEY in your code. The ACP server handles its own LLM authentication and API calls. This is delegation — your code sends messages to the ACP server, which manages all LLM interactions internally.

Prompt Context (AgentContext)
ACPAgent supports agent_context for prompt-only extensions — skills, repository context, current datetime, and system/user message suffixes are appended to the user message before it reaches the ACP server. This lets you inject the same skill catalog and repo-specific guidance that the built-in Agent receives, without interfering with the server’s own tools or execution model.
- The conversation layer builds the user MessageEvent, including any per-turn extended_content (e.g. triggered-skill injections).
- ACPAgent._build_acp_prompt() collects all text blocks from the message and appends the rendered AgentContext prompt (datetime, repo context, available skills, system suffix) via to_acp_prompt_context().
- The combined text is sent as a single user message to the ACP server.
user_message_suffix is an ACP-compatible field, but it is not duplicated in to_acp_prompt_context() because the conversation layer already applies it through MessageEvent.to_llm_message().

Compatible AgentContext Fields
Each AgentContext field is tagged as ACP-compatible or not. At initialization, validate_acp_compatibility() rejects any context that uses unsupported fields.
| Field | ACP Compatible | Notes |
|---|---|---|
| skills | ✅ | Skill catalog and trigger-based injections |
| system_message_suffix | ✅ | Appended to the prompt context |
| user_message_suffix | ✅ | Applied by the conversation layer |
| current_datetime | ✅ | Included in the rendered prompt |
| load_user_skills | ✅ | Load skills from ~/.openhands/skills/ |
| load_public_skills | ✅ | Load skills from the public extensions repo |
| marketplace_path | ✅ | Filter public skills via marketplace JSON |
| secrets | ❌ | ACP subprocesses do not use OpenHands secret injection |
Passing secrets (or any future field marked acp_compatible: False) raises NotImplementedError.
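The validation described above can be sketched as a simple set check. The function name echoes validate_acp_compatibility(), but the dict-based context and the field set literal are illustrative, not the SDK's implementation:

```python
# Illustrative field-level compatibility check, mirroring the behaviour
# described in the table above. The real AgentContext is a typed object;
# a plain dict is used here to keep the sketch self-contained.
ACP_COMPATIBLE_FIELDS = {
    "skills", "system_message_suffix", "user_message_suffix",
    "current_datetime", "load_user_skills", "load_public_skills",
    "marketplace_path",
}

def validate_acp_compatibility(context: dict) -> None:
    """Raise if the context uses any field ACPAgent does not support."""
    unsupported = set(context) - ACP_COMPATIBLE_FIELDS
    if unsupported:
        raise NotImplementedError(
            f"AgentContext fields not supported by ACPAgent: {sorted(unsupported)}"
        )

validate_acp_compatibility({"skills": [], "current_datetime": "2025-01-01"})
try:
    validate_acp_compatibility({"secrets": {"TOKEN": "x"}})
except NotImplementedError as exc:
    error_message = str(exc)
```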
What ACPAgent Does Not Support
Because the ACP server manages its own tools, context window, and execution, these AgentBase features are not available on ACPAgent:
- tools / include_default_tools — the server has its own tools
- mcp_config — configure MCP on the server side
- condenser — the server manages its own context window
- critic — the server manages its own evaluation
Setting any of these raises NotImplementedError at initialization.
ACPAgent with RemoteConversation
ACPAgent also works with remote agent-server deployments such as APIRemoteWorkspace, DockerWorkspace, and other RemoteWorkspace-backed setups.
When RemoteConversation detects an ACPAgent, it automatically uses the ACP-capable conversation routes for:
- conversation creation
- conversation info reads
- conversation counting
These routes live under /api/acp/conversations.
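The routing decision can be illustrated as a simple branch. Only the /api/acp/conversations path is stated in these docs; the non-ACP default path and the helper's name are assumptions for the sketch:

```python
# Illustrative route selection: RemoteConversation is described as
# switching to the ACP-capable endpoints when the agent is an ACPAgent.
# The "/api/conversations" fallback is an assumed default.
def conversation_base_path(agent_kind: str) -> str:
    """Return the REST base path used for conversation operations."""
    if agent_kind == "ACPAgent":
        return "/api/acp/conversations"
    return "/api/conversations"
```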
How It Works
- Subprocess delegation: ACPAgent spawns the ACP server and communicates via JSON-RPC over stdin/stdout
- Server-managed execution: The ACP server handles its own LLM calls, tools, and context — your code just sends messages
- Auto-approval: Permission requests from the server are automatically granted, so ensure you trust the ACP server you’re running
- Metrics collection: Token usage and costs from the server are captured into the agent’s LLM.metrics
Configuration
Server Command and Arguments
| Parameter | Description |
|---|---|
| acp_command | Command to start the ACP server (required) |
| acp_args | Additional arguments appended to the command |
| acp_env | Additional environment variables for the server process |
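For illustration, the three parameters might be gathered like this. The command "my-acp-server" and the environment variable are placeholders, and the dict simply mirrors the table above rather than the SDK's constructor signature:

```python
# Placeholder values: "my-acp-server" stands in for whatever ACP server
# binary you run; the keys mirror the parameter table above.
acp_settings = {
    "acp_command": "my-acp-server",          # required: command to start the server
    "acp_args": ["--verbose"],               # appended to the command
    "acp_env": {"ACP_LOG_LEVEL": "debug"},   # extra env for the subprocess
}
```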
Authentication
When the ACP server advertises authentication methods, ACPAgent automatically selects a credential source:
- ChatGPT subscription login — If the server supports a chatgpt auth method and ~/.codex/auth.json exists (created by LLM.subscription_login()), this is selected first. This enables ACP-backed workflows to use device-code login credentials without an explicit API key.
- API key environment variables — Falls back to checking for ANTHROPIC_API_KEY, OPENAI_API_KEY, or GEMINI_API_KEY, depending on which auth methods the server supports.
Metrics
Token usage and cost data are automatically captured from the ACP server’s responses. You can inspect them through the standard LLM.metrics interface:
- PromptResponse.usage — per-turn token counts (input, output, cached, reasoning tokens)
- UsageUpdate notifications — cumulative session cost and context window size
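A hypothetical accumulator shows how the two data sources above could combine: per-turn usage is summed, while a cumulative UsageUpdate replaces the running cost. The class and field names are illustrative, not the SDK's Metrics object:

```python
from dataclasses import dataclass

# Illustrative metrics accumulator: per-turn PromptResponse.usage values
# are summed; UsageUpdate notifications carry a cumulative cost that
# overwrites the running total.
@dataclass
class Metrics:
    input_tokens: int = 0
    output_tokens: int = 0
    accumulated_cost: float = 0.0

    def add_usage(self, input_tokens: int, output_tokens: int) -> None:
        self.input_tokens += input_tokens
        self.output_tokens += output_tokens

metrics = Metrics()
metrics.add_usage(1200, 300)      # turn 1: PromptResponse.usage
metrics.add_usage(800, 150)       # turn 2: PromptResponse.usage
metrics.accumulated_cost = 0.042  # cumulative cost from a UsageUpdate
```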
Cleanup
Always call agent.close() when you are done to terminate the ACP server subprocess. A try/finally block is recommended.
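The try/finally guarantee can be demonstrated with a stand-in object: close() runs on every exit path, even when the conversation raises. FakeAgent below is purely illustrative; in real code the object would be an ACPAgent:

```python
# Stand-in sketch of the cleanup pattern. FakeAgent only models the
# close() contract; the real agent terminates its server subprocess.
class FakeAgent:
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        # In the real agent this terminates the ACP server subprocess.
        self.closed = True

agent = FakeAgent()
try:
    try:
        raise RuntimeError("server crashed mid-turn")  # simulated failure
    finally:
        agent.close()  # runs regardless of how the block exits
except RuntimeError:
    pass  # the error propagated, but cleanup already happened
```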
Ready-to-run Example
This example is available on GitHub: examples/01_standalone_sdk/40_acp_agent_example.py
Running the Example
Remote Runtime Example
This example is available on GitHub: examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py
This example uses APIRemoteWorkspace.
Running the Example
On the agent-server side, the ACP-capable REST surface lives under /api/acp/conversations, including POST, GET, search, batch get, and count.

Next Steps
- Creating Custom Agents — Build specialized agents with custom tool sets and system prompts
- Agent Delegation — Compose multiple agents for complex workflows
- LLM Metrics — Track token usage and costs across models

