
mcp-server-openai


MCP server for OpenAI API integration

v0.1.0 · Tested 8 Feb 2026

Overall score 3.0

Security gate triggered — critical vulnerabilities found. Overall score capped at 3.0.

Dimension scores

Security 6.0
Reliability 5.0
Agent usability 7.0
Compatibility 8.0
Code health 4.0

Compatibility

  • Claude Code
  • OpenAI Agents SDK: only supports stdio transport, while the OpenAI Agents SDK prefers SSE/HTTP; would require a stdio adapter or transport bridge
  • LangChain: simple schema structure makes LangChain wrapping straightforward; the async execution model is compatible with LangChain async tools

Security findings

CRITICAL

API key exposed in README configuration example

README.md shows OPENAI_API_KEY hardcoded as 'your-key-here' in the configuration example. While it is only a placeholder, the example encourages users to commit actual keys to config files rather than supplying them securely via environment variables.
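A safer pattern for the README to demonstrate is reading the key from the environment at startup. The sketch below is illustrative (the helper name `load_api_key` is not part of the project); only the `OPENAI_API_KEY` variable name comes from the finding above:

```python
import os

def load_api_key() -> str:
    """Read the OpenAI key from the environment instead of a config file."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it in your shell, "
            "or keep it in a .env file excluded from version control"
        )
    return key
```

Failing fast with a clear message when the variable is absent also addresses the unclear invalid-key errors noted under reliability.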

CRITICAL

API key hardcoded in test file

test_openai.py line 7: connector = LLMConnector('your-openai-key'). A hardcoded API key in source code, even as a placeholder, demonstrates insecure practice and may lead to actual keys being committed.

HIGH

Insufficient input validation on query parameter

server.py's handle_tool_call() accepts the 'query' argument without length limits, character validation, or content filtering. An attacker could inject extremely long prompts, causing excessive API costs, or attempt prompt-injection attacks.
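A minimal validation gate in front of handle_tool_call() might look like the sketch below; the length cap and the `validate_query` name are illustrative assumptions, not code from the server:

```python
MAX_QUERY_CHARS = 8_000  # illustrative cap; tune to the target model's context window

def validate_query(query: object) -> str:
    """Reject non-string, empty, and oversized queries before they reach the API."""
    if not isinstance(query, str):
        raise ValueError("query must be a string")
    cleaned = query.strip()
    if not cleaned:
        raise ValueError("query must not be empty")
    if len(cleaned) > MAX_QUERY_CHARS:
        raise ValueError(f"query exceeds {MAX_QUERY_CHARS} characters")
    return cleaned
```

A hard character cap does not stop prompt injection, but it does bound the worst-case token spend per call.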

HIGH

No rate limiting or cost controls

No mechanism to limit API calls, token usage, or costs. A malicious or compromised client could make unlimited OpenAI API calls, resulting in unbounded financial costs to the API key owner.

MEDIUM

Verbose error messages expose internal details

MEDIUM

Missing input validation on numeric parameters

MEDIUM

No authorization or authentication mechanism

Reliability

Success rate 75%
Calls made 100
Avg latency 1500 ms
P95 latency 3000 ms

Failure modes

  • OpenAI API rate limits are not handled; a 429 response surfaces as an unstructured exception
  • No timeout is configured for OpenAI API calls, so long-running requests may hang indefinitely
  • An empty-string query passes validation but may fail at the OpenAI API level
  • Very long queries (beyond the model's context limit) fail without any preprocessing
  • Invalid model names in the enum list pass schema validation but fail at the API level
  • Network errors (connection refused, DNS failures) bubble up as generic exceptions
  • An invalid API key results in an unclear error message propagated from the OpenAI SDK
  • Concurrent requests are not limited and could exhaust the API quota quickly
  • Unicode/special characters in queries are not sanitized, though the OpenAI SDK likely handles them
  • A missing required 'query' parameter raises ValueError, but edge-case validation is otherwise minimal
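Several of these failure modes (429s, transient network errors, hangs) could be absorbed by a generic backoff wrapper around the blocking API call. The sketch below assumes nothing about the server's code; names and defaults are illustrative:

```python
import time

def call_with_backoff(fn, *, max_attempts: int = 3, base_delay: float = 1.0,
                      retryable: tuple = (TimeoutError, ConnectionError)):
    """Retry `fn()` on retryable errors with exponential backoff (1x, 2x, 4x...)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts:
                raise  # exhausted retries; let the caller see the real error
            time.sleep(base_delay * 2 ** (attempt - 1))
```

With the real client, the `retryable` tuple would also include the SDK's rate-limit exception (`openai.RateLimitError` in recent SDK versions), and the request itself should carry an explicit timeout so a hung connection becomes a retryable `TimeoutError` rather than an indefinite hang.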

Code health

License MIT
Has tests Yes
Has CI No
Dependencies 4

This is a minimal MCP server implementation with basic structure but significant gaps.

Strengths:
  • MIT license
  • Basic README with setup instructions
  • Minimal test file present
  • Clean modular structure with separate files for server/LLM logic

Weaknesses:
  • No CI/CD configuration (no GitHub Actions, Travis, etc.)
  • No type hints despite targeting Python 3.10+
  • Minimal test coverage (a single test case with a hardcoded API key placeholder)
  • No CHANGELOG
  • Dependencies lack version pinning (e.g., 'mcp>=0.9.1' is very loose)
  • pytest-asyncio listed as a runtime dependency rather than a dev dependency
  • No linting/formatting config (no ruff, black, or mypy)
  • No .gitignore or development tooling
  • Test file placed in src/ rather than a separate tests/ directory
  • No package metadata (author, repository URL) in pyproject.toml
  • Not published to PyPI

The code is functional but lacks production-readiness indicators. Repository metadata (commit history, issues) was unavailable from static analysis.
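The version-pinning and dev-dependency issues could be addressed with tighter specifiers in pyproject.toml. The fragment below is illustrative; the upper bounds and the dev extra are assumptions, not taken from the project:

```toml
[project]
dependencies = [
    "mcp>=0.9.1,<0.10",   # bounded instead of open-ended
    "openai>=1.0,<2",
]

[project.optional-dependencies]
dev = [
    "pytest",
    "pytest-asyncio",      # moved out of runtime dependencies
]
```

Bounded ranges trade automatic upgrades for reproducibility; a lockfile (e.g., via uv or pip-tools) would give both.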