An Injection of Documentation Context into Workflows#
FedRAMP 20x has pushed a lot more structured data into the official GitHub repo, which means we can stop opening PDFs and start asking questions…maybe? This project is a simple documentation server that pulls raw text and JSON from FedRAMP's repo, indexes it, and exposes it via an MCP server so tools can search and reason over the source material. For a long time, interpreting confusing compliance documents was the job of GRC incumbents who built entire careers on reading and interpreting these docs. But in 2025, virtually everyone is dropping PDFs into ChatGPT, Claude, or NotebookLM and telling their AI assistant to "explain this to me like I'm five." A documentation MCP server could structure that process more formally by letting agents query the source material directly rather than relying on RAG or ad-hoc document uploads. This implementation isn't so much about adding new tools as it is about injecting relevant context into existing AI workflows.
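The core of the server is just "fetch raw files, index them, answer queries." Here's a minimal sketch of the indexing step using only the standard library; the sample docs, file names, and function names are all illustrative, not the project's actual code, and in the real server the text would be fetched from FedRAMP's GitHub repo rather than hardcoded:

```python
import re
from collections import defaultdict

# Illustrative stand-ins for raw markdown pulled from FedRAMP's repo.
DOCS = {
    "ksi.md": "Key Security Indicators define measurable security outcomes.",
    "mas.md": "The Minimum Assessment Standard covers assessment scope and evidence.",
}

def build_index(docs):
    """Build a simple inverted index: token -> set of doc names."""
    index = defaultdict(set)
    for name, text in docs.items():
        for token in re.findall(r"[a-z]+", text.lower()):
            index[token].add(name)
    return index

def search(index, query):
    """Return doc names containing every token in the query (AND search)."""
    tokens = re.findall(r"[a-z]+", query.lower())
    if not tokens:
        return set()
    results = index[tokens[0]].copy()
    for token in tokens[1:]:
        results &= index[token]
    return results

idx = build_index(DOCS)
print(sorted(search(idx, "assessment evidence")))  # ['mas.md']
```

An MCP server would wrap `search` as a registered tool so agents can call it over the protocol, but the retrieval logic itself is this simple.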
I've been experimenting with documentation-based MCPs and the results are mixed. Focused docs can help coding agents give more accurate, actionable answers: the Solana Developer MCP (https://mcp.solana.com/) is a good example that has been helpful for structuring smart contracts. (But that might also be because I'm a little rusty on my Anchor framework and Rust.) However, MCP can consume a lot of LLM context tokens, so financially and operationally it may not make sense. The protocol makes a lot of sense in theory, but its current form and implementation are really heavy.
Problems with MCP#
From Anthropic’s Mouth#
Anthropic (the company that created the MCP protocol) even authored “Code execution with MCP: Building more efficient agents” (https://www.anthropic.com/engineering/code-execution-with-mcp), which highlights the following:
- Tool definitions overload the context window when you load lots of MCP tools directly as “tools” in models
- Intermediate tool results consume tons of tokens because large outputs (docs, sheets, transcripts) get passed through the model multiple times between tools
- This leads to higher cost and latency and makes agents less efficient as you add more MCP servers/tools
- Copying large data through the model increases error risk (models can make mistakes when re-copying big blobs between calls)
- Direct tool-calling doesn’t scale well once you have hundreds/thousands of tools connected over MCP for a single agent
- Code execution with MCP fixes a lot of this but adds complexity: you now need a secure sandbox, resource limits, and monitoring to safely run agent-generated code
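The second and last bullets are easiest to see side by side. In the naive flow, a large tool result (say, a full baseline document) is passed back through the model on every hop; in the code-execution pattern, agent-written code filters the data in a sandbox and only the small result re-enters the model's context. This toy sketch (all names hypothetical, no real MCP calls) illustrates the difference in what would hit the context window:

```python
# A large tool result standing in for a doc/sheet/transcript.
LARGE_DOC = "\n".join(f"line {i}: routine detail" for i in range(1, 10_001)) \
    + "\nline 10001: CRITICAL control gap in boundary definition"

def direct_tool_call(doc):
    # Naive flow: the entire result is returned to the model,
    # then re-sent to the next tool call. All of it consumes context.
    return doc

def code_execution_flow(doc, keyword):
    # Code-execution flow: an agent-generated snippet filters the data
    # in a sandbox; only the matching lines re-enter the model's context.
    return [line for line in doc.splitlines() if keyword in line]

full = direct_tool_call(LARGE_DOC)
excerpt = code_execution_flow(LARGE_DOC, "CRITICAL")
print(len(full.splitlines()), "lines vs", len(excerpt))  # 10001 lines vs 1
```

The trade-off Anthropic flags still applies: that filtering code is model-generated, so you now need the sandbox, resource limits, and monitoring to run it safely.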
Other issues with MCP#
The list doesn’t stop there, though.
Single point of failure for secrets
- MCP servers often hold powerful API keys/OAuth tokens for Gmail, Drive, Jira, etc. If a server is compromised, all those creds are at risk
Prompt injection and tool poisoning
- Malicious documents, emails, or even a malicious MCP server can trick the model into calling sensitive tools or exfiltrating data
Overbroad permissions / lack of least privilege
- It’s easy to wire MCP servers with very broad scopes (full tenant access) instead of tightly scoped, per-use-case permissions
No built-in governance / approval flows
- MCP itself doesn’t enforce RBAC, human approval, or policy checks (e.g., “never delete prod resources”); you have to build those layers in yourself
Stateful SSE (Server-Sent Events) scaling challenges
- Long-lived server-sent event connections can be awkward to scale and load-balance compared to stateless HTTP APIs
Tool orchestration complexity
- Deciding which tools to expose, handling failures, retries, and versioning across many MCP servers gets annoying fast
Identity and audit ambiguity
- Without careful design, it’s unclear whether an action should be attributed to the human, the agent, or a shared service account; audit logs can get messy. (https://acuvity.ai/ has an interesting solution for this)
Ecosystem immaturity
- Many MCP servers are early-stage and may have inconsistent security hardening, error handling, and upgrade paths
Conclusion on MCP Use for Compliance Tools#
I’m hoping that as MCP evolves there is a future where a FedRAMP MCP server interacts with MCP servers for other tools (Vulnerability MCP, PCI-DSS/SOC2 MCP, Trust Center MCP) and helps users reach better conclusions faster. Compliance is a context-heavy process, so focused documentation MCPs could speed things up if implemented well, even if these tools only serve to remind users what gaps or missing context exist in their risk decisions. Currently though, the heavy context and complexity overhead make it a tough sell for most use cases. So maybe stick with RAG for now until this matures.