## Why this exists
CodivDocs is a Mintlify competitor with one strategic bet: the fastest path to great documentation is an LLM that already speaks your docs dialect. To make that real, we publish three machine-readable specs of the entire component library at fixed URLs. Any LLM — Claude, GPT, Gemini, Cursor, Copilot — can ingest one of these and immediately generate valid CodivDocs MDX without fine-tuning.
## The three formats

### LLM markdown

Single-paste system prompt with every component, one example each, and a props summary. ~17KB. Fits any chat context.

### JSON schema

Structured manifest for LLMs that support function calling or JSON-mode output. ~67KB. Every component, every prop.

### llms.txt

The llmstxt.org standard discovery file. A crawler-friendly index of every page on the docs site.
## How LLMs use them

### One-shot prompt mode (easiest)

Copy `codivdocs-components.llm.md` and paste it as the first message in any LLM chat:

- Drops into Claude / ChatGPT / Cursor
- Replaces the need for any tool integration

```text
Here is the CodivDocs component reference. Use only these components when writing MDX for me.

[paste contents of https://docs.codivdocs.com/codivdocs-components.llm.md here]

Now help me write a docs page for my REST API authentication endpoint.
```
The LLM now knows every component name, every prop, and every authoring rule. Ask it to generate any MDX page and the output drops straight into your repo.
### Function-calling mode
For LLMs that support structured output (Claude, GPT-4 with tool use, Gemini), use codivdocs-components.schema.json as a function manifest. Each component becomes a callable function with typed props. The LLM can no longer hallucinate component names — only the ones in the schema can be invoked.
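A minimal sketch of that conversion, assuming the schema shape documented later on this page. The `{ name, description, parameters }` tool shape is the generic JSON Schema convention used by function-calling APIs; `toToolDefinitions` and the `render_*` naming are illustrative choices, not a CodivDocs API:

```javascript
// Hypothetical sketch: turn the CodivDocs component schema into
// function-calling tool definitions. The spec object below mirrors the
// schema example on this page.
const spec = {
  components: [
    {
      name: "Callout",
      slug: "callout",
      description: "Highlight important information...",
      props: [
        { name: "type", type: "string", required: false },
        { name: "title", type: "string", required: false },
      ],
    },
  ],
};

function toToolDefinitions(spec) {
  return spec.components.map((c) => ({
    name: `render_${c.slug}`,
    description: c.description,
    parameters: {
      type: "object",
      properties: Object.fromEntries(
        c.props.map((p) => [p.name, { type: p.type }])
      ),
      required: c.props.filter((p) => p.required).map((p) => p.name),
    },
  }));
}

const tools = toToolDefinitions(spec);
console.log(tools[0].name); // "render_callout"
```

Because the model can only call tools that exist in the manifest, an invented component name fails at the API layer instead of surfacing as broken MDX.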
### Crawler mode (zero-effort)
Search bots and AI crawlers (ChatGPT browse, Perplexity, Google AI Overviews) automatically discover https://docs.codivdocs.com/llms.txt. They follow the link to /llms-full.txt for the complete docs content. Your readers benefit even when they ask another tool — "how do I use CodivDocs callouts?" returns accurate answers because the LLM ingested our spec.
## What's in each format
### `codivdocs-components.llm.md`

````md
# CodivDocs Component Reference

You are writing MDX for a CodivDocs documentation site. Use ONLY
the components listed below — they are the entire allowed library.

## <Callout>

Highlight important information with callout boxes.

```mdx
<Callout type="info">
This is an informational callout.
</Callout>
```

Props: type: "note" | ..., title: string, icon: string
See: https://docs.codivdocs.com/components/callout
````
41 components, ~17KB total. Fits inside any LLM context window with room to spare for the user's actual question.
### `codivdocs-components.schema.json`
```json
{
  "$schema": "https://codivdocs.com/component-spec.json",
  "version": "1.0",
  "totalComponents": 41,
  "components": [
    {
      "name": "Callout",
      "slug": "callout",
      "description": "Highlight important information...",
      "url": "https://docs.codivdocs.com/components/callout",
      "props": [
        { "name": "type", "type": "string", "required": false },
        { "name": "title", "type": "string", "required": false }
      ],
      "examples": [
        { "language": "mdx", "code": "<Callout type=\"info\">..." }
      ]
    }
  ]
}
```

Use this format when you need the LLM to validate prop types or enforce required props. ~67KB total.
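A minimal sketch of that validation, assuming the schema shape shown above. The `validateUsage` helper is illustrative, not a CodivDocs API:

```javascript
// Hypothetical sketch: check one component usage against the schema
// before the MDX ships. The schema object mirrors the example above.
const schema = {
  components: [
    {
      name: "Callout",
      props: [
        { name: "type", type: "string", required: false },
        { name: "title", type: "string", required: false },
      ],
    },
  ],
};

function validateUsage(schema, componentName, props) {
  const component = schema.components.find((c) => c.name === componentName);
  if (!component) return [`Unknown component <${componentName}>`];

  // Unknown props on a known component
  const known = new Set(component.props.map((p) => p.name));
  const errors = Object.keys(props)
    .filter((name) => !known.has(name))
    .map((name) => `Unknown prop "${name}" on <${componentName}>`);

  // Required props that were omitted
  for (const p of component.props) {
    if (p.required && !(p.name in props)) {
      errors.push(`Missing required prop "${p.name}" on <${componentName}>`);
    }
  }
  return errors;
}

console.log(validateUsage(schema, "Callout", { type: "info" })); // []
console.log(validateUsage(schema, "Banner", {})); // ["Unknown component <Banner>"]
```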
### `llms.txt`

Follows the llmstxt.org standard for crawler discovery. A short, human-readable index pointing at the full content:

```md
# CodivDocs
> CodivDocs is a Mintlify-compatible documentation platform...

## Component reference
- [Callout](https://docs.codivdocs.com/components/callout): Highlight important information...
- [Card](https://docs.codivdocs.com/components/card): Display content in a card layout...
...

## LLM-optimized references
- [Component reference for LLMs](https://docs.codivdocs.com/codivdocs-components.llm.md)
- [Component schema (JSON)](https://docs.codivdocs.com/codivdocs-components.schema.json)
- [Full docs (Markdown)](https://docs.codivdocs.com/llms-full.txt)
```

## How the specs are generated
A build-time script (`scripts/generate-llm-spec.mjs`) walks every `docs/components/*.mdx` file and parses:

- Frontmatter → `title`, `description`
- First fenced code block → canonical example
- `## Props` section → typed props array

It outputs all four files into `public/` so they ship as static assets at the docs site root. The script runs as part of `npm run build` (wired into the build pipeline so the specs always match the deployed component library).
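The page doesn't show the script's internals, so here is a hedged sketch of that extraction step under the three rules above. The input format, the regexes, and the `extractSpec` helper are illustrative assumptions, not the actual `generate-llm-spec.mjs` code:

```javascript
// Hypothetical sketch of the extraction step. The fence marker is built
// at runtime only to avoid markdown-nesting issues in this document.
const FENCE = "`".repeat(3);

const mdx = [
  "---",
  "title: Callout",
  "description: Highlight important information with callout boxes.",
  "---",
  "",
  FENCE + "mdx",
  '<Callout type="info">',
  "This is an informational callout.",
  "</Callout>",
  FENCE,
  "",
  "## Props",
  "",
  "- type: string",
  "- title: string",
].join("\n");

function extractSpec(source) {
  // Frontmatter block -> title, description
  const fm = source.match(/^---\n([\s\S]*?)\n---/);
  const meta = Object.fromEntries(
    (fm ? fm[1].split("\n") : []).map((line) => {
      const i = line.indexOf(":");
      return [line.slice(0, i).trim(), line.slice(i + 1).trim()];
    })
  );

  // First fenced code block -> canonical example
  const example =
    source.match(new RegExp(FENCE + "mdx\\n([\\s\\S]*?)" + FENCE))?.[1].trim() ??
    null;

  // "## Props" section -> typed props array
  const propsSection = source.match(/## Props\n([\s\S]*)$/)?.[1] ?? "";
  const props = [...propsSection.matchAll(/^- (\w+): (\w+)$/gm)].map(
    ([, name, type]) => ({ name, type })
  );

  return { title: meta.title, description: meta.description, example, props };
}

console.log(extractSpec(mdx).title); // "Callout"
console.log(extractSpec(mdx).props.length); // 2
```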
## What's NOT in the specs
The specs are derived from the dogfood docs pages, not from React source. This is intentional — it means the specs reflect the author-facing API, not implementation details:
- React-internal types (refs, context, state) are not exposed
- Style props that aren't documented in the dogfood page are not exposed
- Components that don't have a dogfood page are not in the spec at all
If a component should be discoverable by LLMs, it needs a dogfood docs page. This is why every commit that adds a new component also ships `docs/components/{name}.mdx` in the same commit (the "convention rule" in the project plan).
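One way to enforce the convention rule mechanically is a small guard in CI. This is a hypothetical sketch, not an existing CodivDocs script; in real use the two lists would come from the component source directory and `docs/components/` via `fs`, and are inlined here only so the sketch runs standalone:

```javascript
// Hypothetical CI guard: every component name must have a matching
// dogfood docs page, or the build fails.
const componentNames = ["Callout", "Card", "Tabs"];
const dogfoodPages = ["callout", "card"]; // docs/components/*.mdx basenames

function missingDocsPages(components, pages) {
  const documented = new Set(pages);
  return components.filter((name) => !documented.has(name.toLowerCase()));
}

const missing = missingDocsPages(componentNames, dogfoodPages);
if (missing.length > 0) {
  console.error(`Missing dogfood pages: ${missing.join(", ")}`);
  // process.exitCode = 1; // fail the build in CI
}
```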
## Phase 3 follow-ups
The current spec is Phase 3.1. Phase 3.2 onwards builds on top:
- **Phase 3.2** — "Copy Page" dropdown gets new options: "Copy LLM spec", "Open in Cursor with spec preloaded", per-tenant custom menu items
- **Phase 3.3** — `codivdocs-mcp`: a Model Context Protocol server. LLMs in Cursor / Claude Desktop / VS Code connect to the server and query `listComponents()` / `getComponent(name)` / `validateMdx(source)` interactively, instead of pasting the spec
- **Phase 3.4** — `codivdocs-cli` with `codivdocs init`, `codivdocs migrate --from mintlify`, `codivdocs ai-generate --from openapi.yaml`
- **Phase 3.5** — Auto-generated `.cursorrules` + `CLAUDE.md` + `.github/copilot-instructions.md` files in tenant repos so the LLM-first workflow is one command
The spec endpoints from this page (`/codivdocs-components.llm.md`, etc.) are the primitive on which all of those build.
## Try it now
Click **Open Claude with the spec** to open a Claude chat with the spec preloaded. The LLM fetches the spec, ingests it, and generates a complete API reference page using `<ApiMethod>`, `<RequestExample>`, `<ResponseExample>`, `<ParamField>`, and `<ResponseField>`. No fine-tuning, no prompt engineering, no tool integration.