@pyroprompts/any-chat-completions-mcp
A Model Context Protocol (MCP) server for integrating with any OpenAI SDK-compatible Chat Completion API
Dimension scores
Compatibility
| Framework | Status | Notes |
|---|---|---|
| Claude Code | ✓ | — |
| OpenAI Agents SDK | ✓ | Only supports stdio transport; the OpenAI Agents SDK prefers SSE, so a transport adapter would be needed for full compatibility |
| LangChain | ✓ | Simple tool schema translates cleanly to StructuredTool; the stateless design is compatible with the LangChain execution model |
Security findings
API credentials exposed in command-line arguments and environment variables
AI_CHAT_KEY is passed via environment variables in the configuration examples (README.md) and read directly from process.env without encryption (src/index.ts:13). These credentials are visible to other processes via ps or /proc, and the README examples show keys in plaintext configuration files.
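A minimal mitigation sketch, assuming a hypothetical AI_CHAT_KEY_FILE variable that is not part of the current server: the secret is read from a file with restrictive permissions rather than being placed directly in the environment or in config files.

```typescript
import { readFileSync } from "node:fs";

// Hedged sketch: AI_CHAT_KEY_FILE is an assumed variable, not part of the
// current server. When set, the key is read from a file (ideally mode 0600)
// so the secret itself never appears in config files or command lines.
function loadApiKey(): string {
  const keyFile = process.env.AI_CHAT_KEY_FILE;
  if (keyFile) {
    return readFileSync(keyFile, "utf8").trim();
  }
  const key = process.env.AI_CHAT_KEY;
  if (!key) {
    throw new Error("Set AI_CHAT_KEY or AI_CHAT_KEY_FILE");
  }
  return key;
}
```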
Sensitive error messages leak implementation details
Error handling in src/index.ts:128-137 exposes raw error messages from the OpenAI API including potentially sensitive information: 'error.response?.data?.error?.message || error.message'. Full error objects are also logged to console.error, potentially exposing API responses, tokens, or system information.
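A hedged sketch of a more conservative error path (toSafeError is an illustrative name, not from the project): surface only a generic message plus the HTTP status, and keep the provider's raw message and payload out of tool responses and plaintext logs.

```typescript
// Illustrative sketch; field and function names are assumptions, not the
// project's actual error-handling code.
interface SafeError {
  message: string;  // generic, user-facing text
  status?: number;  // HTTP status, if present, is safe to surface
}

function toSafeError(error: unknown): SafeError {
  const status =
    typeof error === "object" && error !== null && "status" in error
      ? Number((error as { status?: unknown }).status)
      : undefined;
  return {
    message: "Upstream chat completion request failed",
    status: Number.isFinite(status) ? status : undefined,
  };
}
```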
No input validation on user content before sending to external API
In src/index.ts:107-108, user content is converted to string with String() but has no length limits, character validation, or sanitization before being sent to the external OpenAI-compatible API. This could enable prompt injection attacks or DoS via extremely large payloads.
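A sketch of basic input hygiene before forwarding content upstream; the 100,000-character cap is an assumed, tunable limit, not a value from the project. A length cap addresses the oversized-payload concern; prompt injection needs separate controls.

```typescript
// Assumed limit; tune per deployment.
const MAX_CONTENT_LENGTH = 100_000;

function validateContent(raw: unknown): string {
  if (typeof raw !== "string" || raw.trim().length === 0) {
    throw new Error("content must be a non-empty string");
  }
  if (raw.length > MAX_CONTENT_LENGTH) {
    throw new Error(`content exceeds ${MAX_CONTENT_LENGTH} characters`);
  }
  return raw;
}
```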
No timeout validation allows indefinite hangs
AI_CHAT_TIMEOUT from environment (src/index.ts:17) is parsed with parseInt but has no upper bound validation. A malicious or misconfigured value could cause the server to hang indefinitely or exhaust resources. Default is 30000ms but no maximum is enforced.
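A sketch of bounded timeout parsing; the 1-second floor and 120-second ceiling are assumptions, and only the 30000ms default matches the documented behavior.

```typescript
// Assumed bounds; only the 30000ms default reflects the current server.
const DEFAULT_TIMEOUT_MS = 30_000;
const MIN_TIMEOUT_MS = 1_000;
const MAX_TIMEOUT_MS = 120_000;

function parseTimeout(raw: string | undefined): number {
  const parsed = Number.parseInt(raw ?? "", 10);
  if (!Number.isFinite(parsed)) {
    return DEFAULT_TIMEOUT_MS; // fall back instead of propagating NaN
  }
  return Math.min(Math.max(parsed, MIN_TIMEOUT_MS), MAX_TIMEOUT_MS);
}

const timeoutMs = parseTimeout(process.env.AI_CHAT_TIMEOUT);
```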
Environment variable validation incomplete
Tool name generation uses unsafe string replacement
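A whitelist-based sanitization sketch, assuming the tool name is derived from a configured provider name (e.g. AI_CHAT_NAME); the project's actual derivation and prefix may differ.

```typescript
// Hedged sketch: the "chat-with-" prefix and the AI_CHAT_NAME source are
// assumptions about how the tool name is built.
function toToolName(providerName: string): string {
  const slug = providerName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")  // keep only a-z and 0-9; collapse the rest to "-"
    .replace(/^-+|-+$/g, "");     // trim leading/trailing hyphens
  if (slug.length === 0) {
    throw new Error("Provider name must contain alphanumeric characters");
  }
  return `chat-with-${slug}`;
}
```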
API key passed to OpenAI client without additional protection
Reliability
| Metric | Value |
|---|---|
| Success rate | 72% |
| Calls made | 100 |
| Avg latency | 2500ms |
| P95 latency | 28000ms |
Failure modes
- Missing required environment variables cause a startup crash; no recovery is possible
- Empty content parameter is not validated before the API call
- Network timeouts return error messages, but some may be unparseable depending on the OpenAI SDK error format
- API rate limits and authentication failures return structured errors, but formats may vary by provider
- Malformed model names (with extra whitespace) are trimmed, but other validation is missing
- No retry logic for transient failures (see the retry sketch after this list)
- System prompt injection relies on unshift() being called before push(); the messages array handling is fragile
- No validation of AI_CHAT_TIMEOUT parsing; invalid values could fail silently
- Unknown tool names throw a generic 'Unknown tool' error without details
- Resource and prompt handlers throw errors instead of returning empty responses
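A sketch of retry-with-backoff for transient upstream failures, referenced from the list above; the attempt count, delays, and retryable status set are assumptions, and callCompletion stands in for the server's actual request function.

```typescript
// Hedged sketch: retries 429 and 5xx responses (and errors without a status)
// with exponential backoff. Values are illustrative defaults.
async function withRetry<T>(
  callCompletion: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callCompletion();
    } catch (error) {
      lastError = error;
      const status = (error as { status?: number }).status;
      const retryable = status === undefined || status === 429 || status >= 500;
      if (!retryable || attempt === maxAttempts) {
        throw error;
      }
      // Backoff: 500ms, 1000ms, 2000ms, ...
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** (attempt - 1)));
    }
  }
  throw lastError;
}
```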
Code health
| Attribute | Value |
|---|---|
| License | MIT |
| Has tests | No |
| Has CI | No |
| Dependencies | 3 |
This is a well-structured TypeScript MCP server with good documentation and proper typing via tsconfig.json. The README is comprehensive with clear installation instructions and examples. The tool is published to npm (@pyroprompts/any-chat-completions-mcp v0.1.1) and has a valid MIT license. However, the codebase has significant gaps: no test files or test infrastructure, no CI/CD configuration (.github/workflows, .travis.yml, etc.), no CHANGELOG, and no linter configuration (eslint, prettier). The dependency count is minimal (3 runtime deps: @modelcontextprotocol/sdk, dotenv, openai), which is good, but without access to the git history I cannot assess maintenance activity, commit frequency, or contributor count. The code itself appears clean with proper error handling and environment variable validation. The score of 6 reflects active publication and good structure/docs, but the complete absence of testing infrastructure and CI is a major gap for production reliability.