CLI Integration

The Skrypt CLI can deploy to your hosted project and run AI generation through Skrypt's LLM proxy. This page covers authentication, deployment, and usage tracking.

Connecting to your project

The CLI connects to your hosted project using an API key. You can authenticate interactively or with an environment variable.

Interactive login

npx skrypt-ai login

This opens app.skrypt.sh in your browser. Sign in to link the CLI to your account. The CLI stores your credentials locally using the system keychain when available.

API key authentication

Generate an API key from Dashboard > Settings > API Keys. Each key is scoped to a single project.

Set the key as an environment variable:

export SKRYPT_API_KEY=sk_live_...

Or pass it inline:

SKRYPT_API_KEY=sk_live_... npx skrypt-ai deploy

The CLI sends the API key as a Bearer token in the Authorization header. Keys are SHA-256 hashed on the server, so the raw key is never stored.
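The two behaviors described above can be sketched in a few lines (a minimal illustration using Node's built-in crypto module; the exact digest encoding Skrypt uses server-side is not documented here and is an assumption):

```typescript
import { createHash } from "node:crypto";

// Server side: store only the SHA-256 digest of the key, never the raw
// key. (Illustrative sketch -- hex encoding is an assumption.)
function hashApiKey(rawKey: string): string {
  return createHash("sha256").update(rawKey, "utf8").digest("hex");
}

// Client side: the CLI sends the raw key as a Bearer token.
function authHeader(rawKey: string): Record<string, string> {
  return { Authorization: `Bearer ${rawKey}` };
}
```

Because only the hash is stored, a leaked database dump does not expose usable keys; the raw key exists only in your environment and in transit.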

CLI deploy

Deploy your local docs to Cloudflare Pages:

npx skrypt-ai deploy

The deploy command:

  1. Reads your local doc files
  2. Uploads them as a JSON payload to POST /api/deploy
  3. Waits while Skrypt compiles your MDX, builds the site, and deploys to Cloudflare Pages
  4. Returns the live URL:

Deployed to https://my-project.skrypthq.dev

Limits

Constraint           Value
Max request size     10 MB
Max file size        5 MB per file
Concurrent deploys   1 per project

If a deploy is already in progress for your project, the API responds with 409 Conflict. Wait for the current deploy to finish, then try again.
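A client handling the 409 case might wait and retry, along these lines (a hypothetical sketch; `doDeploy` stands in for the actual HTTP call and simply returns the response status code):

```typescript
// Retry a deploy that conflicts with one already in progress.
// Assumptions: doDeploy() performs the real request and resolves to
// its HTTP status; anything other than 409 is returned immediately.
async function deployWithRetry(
  doDeploy: () => Promise<number>,
  maxAttempts = 3,
  delayMs = 5000,
): Promise<number> {
  let status = 0;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    status = await doDeploy();
    if (status !== 409) return status; // success, or a non-conflict error
    if (attempt < maxAttempts) {
      // Back off before retrying while the other deploy finishes.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return status; // still conflicting after all attempts
}
```

Because each deploy is a full replacement (see the deploy format below), retrying a conflicted deploy is safe: the eventual upload supersedes whatever deployed before it.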

Deploy format

The CLI sends files as a JSON array:

{
  "files": [
    { "path": "docs/getting-started.mdx", "content": "..." },
    { "path": "docs/api-reference.mdx", "content": "..." }
  ]
}

Files not included in the payload are removed from the project. Each deploy is a full replacement, not a diff.
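Putting the format and the limits together, building the payload client-side might look like this (a sketch under stated assumptions: file reading is omitted, `files` is assumed to already be path/content pairs in memory, and the error messages are illustrative, not the CLI's actual output):

```typescript
// Build the JSON deploy payload described above, enforcing the
// documented limits: 5 MB per file, 10 MB per request.
interface DeployFile {
  path: string;
  content: string;
}

const MAX_FILE_BYTES = 5 * 1024 * 1024;
const MAX_REQUEST_BYTES = 10 * 1024 * 1024;

function buildDeployPayload(files: DeployFile[]): string {
  for (const f of files) {
    if (Buffer.byteLength(f.content, "utf8") > MAX_FILE_BYTES) {
      throw new Error(`${f.path} exceeds the 5 MB per-file limit`);
    }
  }
  const body = JSON.stringify({ files });
  if (Buffer.byteLength(body, "utf8") > MAX_REQUEST_BYTES) {
    throw new Error("payload exceeds the 10 MB request limit");
  }
  return body;
}
```

Note that the full-replacement semantics mean the payload must contain every file you want to keep, not just the changed ones.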

CLI generate

Run AI documentation generation through Skrypt's LLM proxy:

npx skrypt-ai generate ./src -o ./content/docs

When the CLI detects a valid API key or active login session, it routes LLM calls through POST /api/proxy/generate instead of calling OpenAI or Anthropic directly. This means:

  • No BYOK needed -- you do not need your own OpenAI or Anthropic API key
  • Metered usage -- each generation is tracked against your plan
  • Consistent model -- defaults to Claude Sonnet 4 via OpenRouter, with GPT-4o as fallback

The proxy validates your session, enforces rate limits, and returns the generated content along with token usage stats.
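The "generated content along with token usage stats" might be shaped roughly like the following. This is a hypothetical sketch for illustration only: the field names (`content`, `usage`, `inputTokens`, `outputTokens`) are assumptions, not Skrypt's documented response schema.

```typescript
// Hypothetical shape of a proxy generation response. Field names are
// assumptions -- this page does not specify the actual schema.
interface GenerateResponse {
  content: string;
  usage: {
    inputTokens: number;
    outputTokens: number;
  };
}

// Sum both directions of token traffic, e.g. for local usage logging.
function totalTokens(res: GenerateResponse): number {
  return res.usage.inputTokens + res.usage.outputTokens;
}
```

Having usage stats in the response lets the CLI print a per-run summary without a separate round trip to the usage endpoint.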

Usage tracking

Every generation and deploy is counted against your plan limits.

Action           Free plan                      Pro plan
AI generations   1/month                        Unlimited
Deploys          3/day                          Unlimited
Agent chat       Shared with generation limit   Unlimited

Checking usage

Dashboard

Go to Dashboard > Billing to see your current usage for the billing period, including generation count, deploy count, and agent chat messages.

CLI

npx skrypt-ai usage

This prints your current month's usage and remaining quota.

API key management

You can create multiple API keys per project. Each key can be independently revoked from the dashboard.

Field          Description
key_hash       SHA-256 hash stored server-side
last_used_at   Timestamp of the most recent API call
created_at     When the key was generated

Rotate keys by creating a new one and deleting the old one. There is no key expiration, but you can revoke keys at any time.
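The key fields from the table above can be expressed as a record type. The field names come from the docs; the wrapper type, the `string | null` choice for an unused key, and the helper are assumptions for illustration:

```typescript
// Server-side API key metadata, per the documented fields. Treating
// last_used_at as nullable (never used yet) is an assumption.
interface ApiKeyRecord {
  key_hash: string; // SHA-256 hash stored server-side
  last_used_at: string | null; // timestamp of the most recent API call
  created_at: string; // when the key was generated
}

// During rotation, last_used_at tells you whether the old key is still
// in active use before you delete it.
function hasBeenUsed(key: ApiKeyRecord): boolean {
  return key.last_used_at !== null;
}
```

Checking `last_used_at` before deleting the old key is a simple safeguard that you have actually switched every environment over to the new one.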

What's next

  • Getting Started: create your first hosted project
  • Webhooks: get notified when CLI deploys complete
  • AI Agent: use the web editor's AI assistant alongside the CLI
  • Deploy Guide: advanced deployment options and custom domains