Context7 MCP Server
Up-to-date documentation and code examples for any library, fetched on demand via the MCP protocol.
Dimension scores
Compatibility
| Framework | Status | Notes |
|---|---|---|
| Claude Code | ✓ | — |
| OpenAI Agents SDK | ✓ | Requires the separate @upstash/context7-tools-ai-sdk package for native integration; a direct MCP connection requires a custom adapter layer |
| LangChain | ~ | No native LangChain adapter in the repository; would require a custom MCPToolWrapper implementation (sketched below); documentation covers only the AI SDK and Claude Code integrations |
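The missing LangChain adapter could be bridged with a thin wrapper over the MCP client. Below is a minimal sketch, assuming the server can be launched via npx from the published @upstash/context7-mcp package and that @modelcontextprotocol/sdk and @langchain/core are installed; `loadContext7Tools` and the permissive passthrough schema are illustrative choices, not repository APIs.

```typescript
// Hypothetical adapter: expose Context7's MCP tools as LangChain tools.
// The launch command and all helper names here are assumptions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

export async function loadContext7Tools() {
  // Launch the published server over stdio.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@upstash/context7-mcp"],
  });
  const client = new Client({ name: "langchain-bridge", version: "0.0.1" });
  await client.connect(transport);

  // Wrap each advertised MCP tool. A permissive schema is used here;
  // a real adapter would convert the tool's JSON Schema to Zod.
  const { tools } = await client.listTools();
  return tools.map(
    (t) =>
      new DynamicStructuredTool({
        name: t.name,
        description: t.description ?? "",
        schema: z.object({}).passthrough(),
        func: async (args: Record<string, unknown>) => {
          const result = await client.callTool({
            name: t.name,
            arguments: args,
          });
          return JSON.stringify(result.content);
        },
      }),
  );
}
```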
Security findings
- API key transmitted in clear text via environment variables
- Limited documentation of input validation
- No rate limiting configuration visible (see the throttle sketch below)
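Given the absence of documented rate limiting, a consumer can enforce its own ceiling on outbound calls. Below is a minimal client-side throttle sketch; `throttle`, `maxPerSecond`, and the usage comment are hypothetical, not part of the Context7 API.

```typescript
// Hypothetical client-side throttle: the server documents no rate limits,
// so the caller spaces its own requests at most `maxPerSecond` per second.
export function throttle<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  maxPerSecond: number,
): (...args: A) => Promise<R> {
  const interval = 1000 / maxPerSecond;
  let nextSlot = 0; // earliest timestamp at which the next call may fire

  return async (...args: A) => {
    const now = Date.now();
    const wait = Math.max(0, nextSlot - now);
    nextSlot = Math.max(now, nextSlot) + interval;
    if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
    return fn(...args);
  };
}

// Usage (hypothetical): cap documentation lookups at 5 requests/second.
// const safeCall = throttle(
//   (name: string, args: object) => client.callTool({ name, arguments: args }),
//   5,
// );
```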
Reliability
| Metric | Value |
|---|---|
| Success rate | 0% |
| Calls made | 100 |
| Avg latency | 156.3 ms |
| P95 latency | 198 ms |
Failure modes
- Authentication failure: 401 Unauthorized returned for all requests
- API key required but not configured in the environment
- Server responds quickly with structured error messages
- No fallback or graceful degradation for missing credentials (see the preflight sketch below)
- Consistent error format across all tool invocations
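Because the server offers no graceful degradation for missing credentials, a caller can fail fast once instead of letting every tool call return 401. Below is a minimal preflight sketch, assuming the @modelcontextprotocol/sdk client; `assertAuthenticated` and the string matching on the error message are assumptions, and `listTools` serves only as a cheap probe (a stricter check might invoke one of the server's documented tools).

```typescript
// Hypothetical preflight: probe the server once before wiring it into an
// agent, so a missing API key surfaces as one actionable error rather
// than a 401 on every subsequent tool invocation.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

export async function assertAuthenticated(client: Client): Promise<void> {
  try {
    await client.listTools(); // any lightweight request works as a probe
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    if (msg.includes("401") || /unauthorized/i.test(msg)) {
      throw new Error(
        "Context7 rejected the request (401 Unauthorized). Verify that " +
          "the API key environment variable is set before starting the server.",
      );
    }
    throw err; // unrelated failures propagate unchanged
  }
}
```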
Code health
| Attribute | Value |
|---|---|
| License | MIT |
| Has tests | Yes |
| Has CI | No |
| Dependencies | 11 |
Context7 is a well-structured monorepo with strong documentation and code-quality foundations. Key strengths: a comprehensive README (11 KB), a security policy, an MIT license, TypeScript with strict typing, ESLint and Prettier configuration, Changesets for release management, and extensive documentation (25+ MDX files). The project is published to npm (@upstash/context7-mcp) and includes test scripts in package.json.

However, without access to git history, I cannot verify maintenance activity. Notable gaps: no CI/CD configuration files are visible (.github/workflows is missing), test coverage reporting is absent, and no test files appear in the provided structure (though test scripts exist). Dependencies are minimal (11 total) and focus on TypeScript tooling.

The monorepo structure with workspace packages suggests good organization, and the presence of SECURITY.md and comprehensive docs indicates professional maintenance standards. The score of 7 reflects a solid foundation of documentation and tooling, but uncertainty around active maintenance, the CI pipeline, and actual test implementation prevents a higher score.