CLI Integration
The Skrypt CLI can deploy to your hosted project and run AI generation through Skrypt's LLM proxy. This page covers authentication, deployment, and usage tracking.
Connecting to your project
The CLI connects to your hosted project using an API key. You can authenticate interactively or with an environment variable.
Interactive login
npx skrypt-ai login
This opens app.skrypt.sh in your browser. Sign in to link the CLI to your account. The CLI stores your credentials locally using the system keychain when available.
API key authentication
Generate an API key from Dashboard > Settings > API Keys. Each key is scoped to a single project.
Set the key as an environment variable:
export SKRYPT_API_KEY=sk_live_...
Or pass it inline:
SKRYPT_API_KEY=sk_live_... npx skrypt-ai deploy
The CLI sends the API key as a Bearer token in the Authorization header. Keys are SHA-256 hashed on the server, so the raw key is never stored.
CLI deploy
Deploy your local docs to Cloudflare Pages:
npx skrypt-ai deploy
The deploy command:
- Reads your local doc files
- Uploads them as a JSON payload to POST /api/deploy
- Skrypt compiles your MDX, builds the site, and deploys to Cloudflare Pages
- Returns the live URL
On success, the CLI prints the live URL:
Deployed to https://my-project.skrypthq.dev
Limits
| Constraint | Value |
|---|---|
| Max request size | 10 MB |
| Max file size | 5 MB per file |
| Concurrent deploys | 1 per project |
If a deploy is already in progress for your project, the server responds with 409 Conflict. Wait for the current deploy to finish and try again.
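One way a client could handle the single-concurrent-deploy limit is to retry only on 409 Conflict with a capped backoff. The status codes below come from this page; the delays, cap, and function name are made-up defaults for illustration:

```javascript
// Hypothetical retry policy for the concurrent-deploy limit: retry only
// when the server reports 409 Conflict, backing off exponentially.
function nextRetryDelayMs(status, attempt, maxAttempts = 5) {
  if (status !== 409) return null;          // only 409 means "deploy in progress"
  if (attempt >= maxAttempts) return null;  // give up eventually
  return Math.min(1000 * 2 ** attempt, 30000); // 1s, 2s, 4s, ... capped at 30s
}

console.log(nextRetryDelayMs(409, 0)); // 1000
console.log(nextRetryDelayMs(409, 3)); // 8000
console.log(nextRetryDelayMs(500, 0)); // null (not a concurrency conflict)
```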
Deploy format
The CLI sends files as a JSON object containing a files array:
{
  "files": [
    { "path": "docs/getting-started.mdx", "content": "..." },
    { "path": "docs/api-reference.mdx", "content": "..." }
  ]
}
Files not included in the payload are removed from the project. Each deploy is a full replacement, not a diff.
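A minimal sketch of assembling that payload, validated against the documented limits (5 MB per file, 10 MB per request); the field names match the JSON example above, but the function itself is illustrative, not the CLI's internals:

```javascript
// Build the full-replacement deploy payload, enforcing the documented limits.
const MAX_FILE_BYTES = 5 * 1024 * 1024;      // 5 MB per file
const MAX_REQUEST_BYTES = 10 * 1024 * 1024;  // 10 MB per request

function buildDeployPayload(files) {
  for (const { path, content } of files) {
    if (Buffer.byteLength(content, "utf8") > MAX_FILE_BYTES) {
      throw new Error(`${path} exceeds the 5 MB per-file limit`);
    }
  }
  const body = JSON.stringify({ files });
  if (Buffer.byteLength(body, "utf8") > MAX_REQUEST_BYTES) {
    throw new Error("payload exceeds the 10 MB request limit");
  }
  return body;
}

const body = buildDeployPayload([
  { path: "docs/getting-started.mdx", content: "# Getting started" },
]);
console.log(JSON.parse(body).files.length); // 1
```

Because each deploy is a full replacement, the array must contain every file you want to keep, not just the ones that changed.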
CLI generate
Run AI documentation generation through Skrypt's LLM proxy:
npx skrypt-ai generate ./src -o ./content/docs
When the CLI detects a valid API key or active login session, it routes LLM calls through POST /api/proxy/generate instead of calling OpenAI or Anthropic directly. This means:
- No BYOK needed -- you do not need your own OpenAI or Anthropic API key
- Metered usage -- each generation is tracked against your plan
- Consistent model -- defaults to Claude Sonnet 4 via OpenRouter, with GPT-4o as fallback
The proxy validates your session, enforces rate limits, and returns the generated content along with token usage stats.
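The routing rule above can be sketched as a small decision function; the function name, the field names, and the `sk_` prefix check are assumptions for illustration, not the CLI's internals:

```javascript
// Sketch of the generate-routing rule: with a valid API key or an active
// login session, calls go through Skrypt's proxy; otherwise you would need
// your own provider key.
function resolveGenerateEndpoint({ apiKey, loggedIn }) {
  if ((apiKey && apiKey.startsWith("sk_")) || loggedIn) {
    return "POST /api/proxy/generate"; // metered, no BYOK
  }
  return "direct provider call"; // requires your own OpenAI/Anthropic key
}

console.log(resolveGenerateEndpoint({ apiKey: "sk_live_abc", loggedIn: false }));
// "POST /api/proxy/generate"
```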
Usage tracking
Every generation and deploy is counted against your plan limits.
| Action | Free plan | Pro plan |
|---|---|---|
| AI generations | 1/month | Unlimited |
| Deploys | 3/day | Unlimited |
| Agent chat | Shared with generation limit | Unlimited |
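The table above can be read as a simple quota model. This is a toy sketch of those limits, not Skrypt's billing code:

```javascript
// Toy model of the plan limits table: free is 1 generation/month and
// 3 deploys/day; pro is unlimited. Agent chat shares the generation quota.
const LIMITS = {
  free: { generationsPerMonth: 1, deploysPerDay: 3 },
  pro:  { generationsPerMonth: Infinity, deploysPerDay: Infinity },
};

function canGenerate(plan, usedThisMonth) {
  return usedThisMonth < LIMITS[plan].generationsPerMonth;
}

function canDeploy(plan, usedToday) {
  return usedToday < LIMITS[plan].deploysPerDay;
}

console.log(canGenerate("free", 0)); // true
console.log(canGenerate("free", 1)); // false: free plan allows 1/month
console.log(canDeploy("pro", 999)); // true: unlimited
```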
Checking usage
Dashboard
Go to Dashboard > Billing to see your current usage for the billing period, including generation count, deploy count, and agent chat messages.
CLI
npx skrypt-ai usage
This prints your current month's usage and remaining quota.
API key management
You can create multiple API keys per project. Each key can be independently revoked from the dashboard.
| Field | Description |
|---|---|
| key_hash | SHA-256 hash stored server-side |
| last_used_at | Timestamp of the most recent API call |
| created_at | When the key was generated |
Rotate keys by creating a new one and deleting the old one. There is no key expiration, but you can revoke keys at any time.
What's next
- Getting Started: create your first hosted project
- Webhooks: get notified when CLI deploys complete
- AI Agent: use the web editor's AI assistant alongside the CLI
- Deploy Guide: advanced deployment options and custom domains