The **Content Intelligence (CI) MCP Server** is a high-performance, stateless service designed to bridge the gap between raw development data and actionable project insights. Built with **Bun** and **TypeScript**, this server acts as an intelligent layer on top of the **Nexus** item-store, using LLMs to analyze workflows, predict effort, and surface potential risks before they impact your delivery timelines.
Managing large-scale software projects often involves navigating fragmented data across multiple platforms. The CI MCP server solves this by consuming raw evidence—such as ticket updates, time logs, and commit history—and translating it into "Content Intelligence."
By leveraging LLM-powered assessment, the server performs specialized tasks, including:

- **Ticket quality scoring**: grading tickets for completeness and clarity.
- **Effort prediction**: estimating effort from historical evidence such as time logs and commit history.
- **Health confidence**: surfacing delivery risks before they impact timelines.
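The kind of finding these assessments produce can be sketched as a simple data shape. The names below (`QualityFinding`, `scoreTicketQuality`) and the heuristic scoring are illustrative assumptions, not the server's actual API; a real implementation would call an LLM with the ticket body:

```typescript
// Hypothetical shape for a quality finding surfaced by the CI server.
interface QualityFinding {
  ticketId: string;
  score: number;   // 0-100 quality score from the assessor
  risks: string[]; // surfaced risks, e.g. missing acceptance criteria
}

// Stubbed scorer: placeholder heuristics stand in for the LLM call.
function scoreTicketQuality(ticketId: string, body: string): QualityFinding {
  const risks: string[] = [];
  if (!/acceptance criteria/i.test(body)) risks.push("missing acceptance criteria");
  if (body.length < 50) risks.push("description too short");
  return { ticketId, score: Math.max(0, 100 - risks.length * 40), risks };
}

const finding = scoreTicketQuality("NEX-42", "Fix login bug");
console.log(finding.score, finding.risks);
```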
The CI MCP server is unique in that it exposes the same logic through two distinct technical contracts, ensuring compatibility with both automated agents and traditional web clients:

- **MCP surface (`/mcp`)**: A protocol-compliant interface designed for AI agents. By registering a suite of specialized tools (e.g., `score_ticket_quality`, `get_health_confidence`, `predict_effort`), the server allows agents to query the state of a project loop in real time, triage findings, and provide data-driven feedback directly in chat interfaces.
- **HTTP surface (`/api/v1/ci/*`)**: A robust REST-like contract that serves as the backbone for the service. Whether you are building a dashboard to visualize team health or an automated pipeline to monitor ticket quality, this surface provides direct access to findings, evidence, calibration summaries, and diagnostic signals.

The server also operates in two modes to suit different environments.
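The dual-surface design can be sketched as a single handler map that backs both contracts. This is a minimal sketch under stated assumptions: the handler bodies are placeholders, and the real server would register the same map with an MCP SDK and a Bun HTTP router rather than the plain `callTool` dispatcher shown here:

```typescript
// One registry of tool handlers, shared by the MCP and HTTP surfaces.
type Handler = (args: Record<string, unknown>) => unknown;

const tools = new Map<string, Handler>([
  ["score_ticket_quality", ({ ticketId }) => ({ ticketId, score: 0 })],
  ["get_health_confidence", () => ({ confidence: 0 })],
  ["predict_effort", ({ ticketId }) => ({ ticketId, hours: 0 })],
]);

// MCP surface: an agent invokes a registered tool by name.
function callTool(name: string, args: Record<string, unknown>): unknown {
  const handler = tools.get(name);
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}

// HTTP surface: the same registry would serve /api/v1/ci/* routes,
// e.g. GET /api/v1/ci/predict_effort?ticketId=NEX-42 -> callTool("predict_effort", ...)
console.log(callTool("predict_effort", { ticketId: "NEX-42" }));
```

Keeping both surfaces on one registry is what guarantees that an agent in a chat interface and a dashboard polling the REST API see identical logic.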
Built with a "quality-first" mindset, the repository includes a comprehensive evaluation harness using Promptfoo. This ensures that the LLM prompts—ranging from effort estimators to quality scorers—remain consistent across model updates. These evals allow developers to backtest the engine against historical data, ensuring that the insights surfaced by the CI server are grounded in the team's actual past performance rather than abstract model guesses.
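As a rough illustration of such a backtest, a Promptfoo config might look like the following. This is a hedged sketch, not the repository's actual harness: the prompt path, provider, and test case are placeholder assumptions.

```yaml
# promptfooconfig.yaml -- illustrative only; paths and provider are placeholders.
prompts:
  - file://prompts/effort_estimator.txt
providers:
  - openai:gpt-4o-mini
tests:
  # Backtest against a historical ticket with a known outcome.
  - vars:
      ticket: "Add retry logic to the sync worker"
    assert:
      - type: contains-json # estimator must return structured output
```

Running `promptfoo eval` against a set of such cases flags prompt regressions whenever a model update shifts the estimator's output.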