MCP Protocol

The Model Context Protocol (MCP) is an open specification for connecting AI models to external data sources and tools. Published by Anthropic under the MIT License, MCP is a standardized way for AI agents to discover and call tools provided by external servers.

MCP solves a coordination problem: every AI coding agent previously had to invent its own plugin system, and every tool had to be reimplemented for each agent. With MCP, a tool that speaks the protocol works with any agent that speaks the protocol.

Canopy implements the MCP server side of the protocol. AI agents (Claude Code, Cursor, Windsurf, Zed, Continue, and others) implement the MCP client side.

Canopy uses the stdio transport: the MCP client (your AI agent) launches canopy serve as a subprocess and communicates over stdin/stdout. This is the most common MCP transport and is universally supported.

```json
{
  "mcpServers": {
    "canopy": {
      "command": "canopy",
      "args": ["serve", "."],
      "env": {}
    }
  }
}
```

No HTTP server. No ports to configure. No networking between agent and Canopy. The agent and Canopy run in the same process group, communicating through pipes.

All messages are JSON-RPC 2.0 over the stdio pipe. The key exchanges:

  1. initialize — Client announces its capabilities. Canopy responds with its server info and a block of behavioral instructions for the client to add to the agent’s context.
  2. tools/list — Client requests the full tool manifest. Canopy returns all 21 tools with their JSON Schema input definitions and descriptions.
  3. tools/call — Client calls a specific tool with arguments. Canopy runs the query and returns the result.
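As an illustration, a tools/call round trip over the stdio pipe might look like the following. The file argument name here is an assumption for the sketch — the authoritative input schema for each tool comes from the tools/list response.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "canopy_trace_imports",
    "arguments": { "file": "src/index.ts" }
  }
}
```

Per the MCP specification, the server replies with a result whose content array carries the tool output (payload illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      { "type": "text", "text": "{ \"imports\": [\"src/db.ts\", \"src/util.ts\"] }" }
    ]
  }
}
```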

The most important thing Canopy does at initialize time is inject a block of behavioral instructions into the agent’s context. These instructions:

  • Define the correct order of operations (canopy_prepare before edit, canopy_validate after)
  • Explain when to use workflow tools vs detailed tools
  • Teach the agent how to interpret GO/CAUTION/STOP assessments
  • Document the tool categories and when to reach for each

This is why Canopy’s behavior in properly configured agents is automatic. The agent learns the intended workflow from Canopy itself at the start of every session, without the user needing to write a system prompt or custom instructions.
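In protocol terms, these instructions travel in the instructions field of the initialize response. A trimmed sketch — the version strings and instruction text are illustrative, not Canopy’s actual values:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": {} },
    "serverInfo": { "name": "canopy", "version": "x.y.z" },
    "instructions": "Call canopy_prepare before editing a file and canopy_validate after. Treat STOP assessments as a signal to investigate before proceeding."
  }
}
```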

Canopy’s tools are organized into six categories. The three workflow tools handle the most common AI-agent use cases. The 18 detailed tools are for targeted analysis.

These three tools are the highest-leverage entry points. Each bundles multiple detailed tool calls into a single result:

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_prepare | Before any file modification | Dependents, imports, health findings, coverage, git activity, GO/CAUTION/STOP |
| canopy_validate | After file modifications complete | Health delta (new findings introduced vs fixed), import verification |
| canopy_understand | When encountering unfamiliar code | Full structural analysis: symbols, callers, dependencies, test coverage, recent history |

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_search | Find code by concept or keyword | Ranked file+snippet results, camelCase-aware |
| canopy_pattern_search | Find structural code patterns | All locations matching an ast-grep pattern |
| canopy_search_symbols | Find symbols by name | All functions/classes/types matching a name query |

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_trace_imports | What does this file depend on? | Outbound import edges from a file |
| canopy_trace_dependents | Who depends on this file? | Inbound import edges pointing to a file |
| canopy_check_wiring | Is this module connected? | Reachability from any entry point in the graph |
| canopy_find_cycles | Are there circular deps? | Circular dependency chains (if any) |
| canopy_dependency_graph | Full subgraph visualization | DOT-format or JSON dependency graph for a path |

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_health_check | Run all health checks | P0/P1/P2/info findings across the repo |

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_parse_file | Raw symbol list for a file | Every symbol in a file with type, line, exported status |
| canopy_extract_symbol | Get the full source of a symbol | Source code + context for a named function/class/type |

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_git_history | Recent commits for a file | Last N commits: hash, author, date, message |
| canopy_git_blame | Per-line authorship | Blame data for a line range |

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_ingest_scip | Upgrade to compiler-resolved edges | Confirmation + edge count before/after |
| canopy_coverage | Ingest test coverage | Coverage stored; per-file % queryable by workflow tools |

| Tool | When to use | Returns |
| --- | --- | --- |
| canopy_index_status | Check index health | File counts, stale files, last index time, which layers are ready |
| canopy_reindex | Trigger incremental re-index | Re-index result (files processed, duration) |
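As one concrete sketch, a structural search could be issued through tools/call like this. The pattern uses standard ast-grep metavariable syntax; the pattern argument name is an assumption — check the tool’s inputSchema from tools/list for the real contract:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "canopy_pattern_search",
    "arguments": { "pattern": "console.log($ARG)" }
  }
}
```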

canopy_prepare is not magic — it calls canopy_trace_dependents, canopy_trace_imports, canopy_health_check, canopy_git_history, and optionally canopy_coverage under the hood, then synthesizes the results into a single actionable summary.

This composability is intentional. You can call the detail tools directly when you need targeted information. Use workflow tools when you want a complete picture.

All tools return structured JSON. This makes results machine-readable and allows the agent to reason about them programmatically — counting dependents, sorting findings by severity, comparing before/after states.
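To make that concrete, a validation result might carry a payload along these lines. The shape and field names below are purely illustrative, not Canopy’s actual schema:

```json
{
  "assessment": "GO",
  "findings_introduced": 0,
  "findings_fixed": 2,
  "imports_verified": true
}
```

Because the result is structured rather than free text, the agent can, for example, branch on findings_introduced being nonzero instead of parsing prose.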

Each tool’s MCP description is written to be read by AI agents, not just humans. The descriptions explain not just what the tool does but when to use it, what inputs to provide, and how to interpret the output. This is part of why Canopy agents need minimal prompting — the tools are self-documenting.

Any agent that supports MCP stdio transport can use Canopy. Tested combinations:

| Agent | MCP support | Notes |
| --- | --- | --- |
| Claude Code | Full | Primary test target. Server instructions fully supported. |
| Cursor | Full | Config via cursor_mcp_config.json in project root |
| Windsurf | Full | Config via workspace settings |
| Zed | Full | Config via settings.json context server block |
| Continue (VS Code) | Full | Config via config.json |
| Codex CLI | Full | --mcp-config flag or CODEX_MCP_CONFIG env var |

For per-agent setup guides, see the How-To section.