Index
Functions
createLLMClient
function createLLMClient(config: {
  provider: LLMProvider
  model: string
  baseUrl?: string
  timeout?: number
  maxRetries?: number
}): LLMClient
Use `createLLMClient` to connect skrypt's documentation generator to your preferred AI provider — OpenAI, Anthropic, or any OpenAI-compatible endpoint.
Call this when configuring a custom generation pipeline or self-hosting skrypt against a private model endpoint. In the typical workflow, `skrypt generate` handles this automatically, but you'll reach for `createLLMClient` directly when you need to swap providers, point to a local model, or tune retry behavior for unreliable network conditions.
The returned client is a normalized interface over provider-specific SDKs — your generation code stays the same regardless of whether you're talking to GPT-4o or Claude 3.5.
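Because the interface is normalized, swapping providers is purely a config change. A minimal sketch, using the mock `createLLMClient` from the Example below (the real client surface may expose more than `complete`):
// Sketch: identical generation code against two different providers.
async function generateWithEither(): Promise<void> {
  for (const provider of ["openai", "anthropic"] as const) {
    const client = createLLMClient({
      provider,
      model: provider === "openai" ? "gpt-4o" : "claude-3-5-sonnet-20241022",
    })
    await client.complete("Document the createUser function") // same call either way
  }
}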
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| `config.provider` | `LLMProvider` | Yes | The AI provider to use. Accepted values: `"openai"`, `"anthropic"`, `"openai-compatible"`. Determines which underlying SDK and auth mechanism is used. |
| `config.model` | `string` | Yes | The model identifier passed directly to the provider — e.g. `"gpt-4o"`, `"claude-3-5-sonnet-20241022"`. Must be a model your API key has access to. |
| `config.baseUrl` | `string` | No | Override the provider's default API endpoint. Required when using `"openai-compatible"` with a local model server like Ollama or vLLM. |
| `config.timeout` | `number` | No | Milliseconds before a request is abandoned. Defaults to the provider SDK default (~30s). Increase this for large context windows that take longer to process. |
| `config.maxRetries` | `number` | No | Maximum retry attempts before the request fails and throws. Useful for rate-limited keys or flaky network paths. |
Returns
Returns an `LLMClient` instance. Pass this to skrypt's generation functions (e.g. `generateDocs`, `generateExamples`) as the `client` option to control which model powers the output.
**Heads up:**
- `"openai-compatible"` requires `baseUrl` — omitting it will throw at request time, not at client creation.
- API keys are read from environment variables (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`), not from this config object. Make sure the relevant env var is set before calling this (see the guard sketch below).
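For example, a guard before client creation fails fast instead of at request time. A minimal sketch; the guard itself is illustrative, only the env var names come from skrypt:
// Guard sketch: fail fast when the key for your chosen provider is missing.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set. Export it before calling createLLMClient.")
}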
Example:
type LLMProvider = "openai" | "anthropic" | "openai-compatible"
interface LLMClient {
  provider: LLMProvider
  model: string
  complete: (prompt: string) => Promise<string>
}
// Mock implementation — replace with actual createLLMClient from your setup
function createLLMClient(config: {
  provider: LLMProvider
  model: string
  baseUrl?: string
  timeout?: number
  maxRetries?: number
}): LLMClient {
  if (config.provider === "openai-compatible" && !config.baseUrl) {
    throw new Error("baseUrl is required for openai-compatible provider")
  }
  return {
    provider: config.provider,
    model: config.model,
    complete: async (prompt: string) => `Generated docs for: ${prompt.slice(0, 40)}...`,
  }
}
// Example: point skrypt at a local Ollama instance running Llama 3
async function main() {
  try {
    const client = createLLMClient({
      provider: "openai-compatible",
      baseUrl: "http://localhost:11434/v1",
      model: "llama3.2",
      timeout: 60000,
      maxRetries: 3,
    })
    // Pass `client` to your generation call
    const result = await client.complete("Document the createUser function")
    console.log(`Provider : ${client.provider}`)
    console.log(`Model    : ${client.model}`)
    console.log(`Response : ${result}`)
    // Provider : openai-compatible
    // Model    : llama3.2
    // Response : Generated docs for: Document the createUser function...
  } catch (err) {
    console.error("Failed to create or use LLM client:", err)
  }
}
main()
generateDocumentation
async function generateDocumentation(client: LLMClient, element: ElementContext, options?: { multiLanguage?: boolean; verify?: boolean; previousError?: string }): Promise<GeneratedDocResult>
Use `generateDocumentation` to produce AI-generated docs and code examples for a single code element — a function, class, or type extracted from your codebase.
Call this when you're building a custom documentation pipeline and need per-element control over generation. In skrypt's standard flow, `skrypt generate` calls this automatically for every scanned element, but reach for it directly when you want to regenerate a single element, handle errors with retry logic, or integrate doc generation into your own tooling.
`generateDocumentation` sends the element's signature, context, and any prior error feedback to your LLM provider, then returns structured documentation including descriptions, parameter tables, and working code examples. When `verify` is enabled, the output goes through an additional validation pass before returning.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| `client` | `LLMClient` | Yes | Configured LLM client (OpenAI, Anthropic, etc.) — create one with your provider credentials before calling this. |
| `element` | `ElementContext` | Yes | The parsed code element to document, including its name, signature, source context, and file location — typically produced by skrypt's scanner. |
| `options.multiLanguage` | `boolean` | No | Generate code examples in both TypeScript and Python. Defaults to `true`. Set to `false` to produce TypeScript-only examples and reduce token usage. |
| `options.verify` | `boolean` | No | Run a second LLM pass to validate the generated output for accuracy and completeness before returning. Adds latency but catches hallucinated parameter descriptions. |
| `options.previousError` | `string` | No | Error message from a prior failed generation attempt. Pass this to give the LLM context on what went wrong so it can correct the output on retry. |
Returns
Returns a `Promise<GeneratedDocResult>` containing the generated markdown documentation, code examples, and metadata about the generation. Use the result's doc content to write MDX files to your output directory, or pass `previousError` back into a retry call if validation fails.
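Persisting that output might look like the sketch below, which assumes Node's `fs/promises`, the `GeneratedDocResult` shape from the Example further down, and a hypothetical `docs/` output directory (the file-naming scheme is an assumption, not a skrypt default):
import { mkdir, writeFile } from "node:fs/promises"
import { join } from "node:path"

// Sketch: persist one generation result as an MDX file.
async function writeDocFile(elementName: string, result: GeneratedDocResult): Promise<void> {
  await mkdir("docs", { recursive: true })
  await writeFile(join("docs", `${elementName}.mdx`), result.markdown, "utf8")
}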
**Heads up:**
- `multiLanguage` defaults to `true` even if you don't pass `options` — if you're cost-sensitive or only need TypeScript examples, explicitly set `multiLanguage: false`.
- When building retry logic, always forward the caught error message as `previousError` on the next attempt. Without it, the LLM has no signal to produce a different result (see the sketch below).
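That pattern generalizes to a bounded retry loop. A sketch using the types from the Example below; `generateWithRetry` is illustrative, not part of skrypt:
// Bounded retry: feed the previous error back in so the model can correct itself.
async function generateWithRetry(
  client: LLMClient,
  element: ElementContext,
  maxAttempts = 3
): Promise<GeneratedDocResult> {
  let lastError: string | undefined
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await generateDocumentation(client, element, { previousError: lastError })
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err)
    }
  }
  throw new Error(`Generation failed after ${maxAttempts} attempts: ${lastError}`)
}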
Example:
type LLMClient = {
  complete: (prompt: string) => Promise<string>
}
type ElementContext = {
  name: string
  signature: string
  docstring?: string
  sourceContext: string
  filePath: string
  language: string
}
type GeneratedDocResult = {
  markdown: string
  codeExamples: { language: string; code: string }[]
  tokensUsed: number
}
// Inline mock of generateDocumentation — replace with real import in your project
async function generateDocumentation(
  client: LLMClient,
  element: ElementContext,
  options?: { multiLanguage?: boolean; verify?: boolean; previousError?: string }
): Promise<GeneratedDocResult> {
  const prompt = [
    `Generate documentation for the following ${element.language} function.`,
    options?.previousError ? `Previous attempt failed with: ${options.previousError}. Please correct this.` : "",
    `Name: ${element.name}`,
    `Signature: ${element.signature}`,
    `Context:\n${element.sourceContext}`,
    options?.multiLanguage !== false
      ? "Include code examples in both TypeScript and Python."
      : "Include a TypeScript code example only.",
  ]
    .filter(Boolean)
    .join("\n\n")
  const raw = await client.complete(prompt)
  return {
    markdown: raw,
    codeExamples: [
      { language: "typescript", code: `// example usage of ${element.name}` },
      ...(options?.multiLanguage !== false
        ? [{ language: "python", code: `# example usage of ${element.name}` }]
        : []),
    ],
    tokensUsed: 512,
  }
}
// Mock LLM client: returns a canned response; swap in your real provider-backed client
const client: LLMClient = {
  complete: async (prompt: string) => {
    // In production, this calls your real LLM provider
    return `Use \`createUser\` to register a new user in your system.\n\n**Parameters**\n\n| Name | Type | Required | Description |\n|---|---|---|---|\n| \`email\` | \`string\` | Yes | The user's email address. |`
  },
}
const element: ElementContext = {
  name: "createUser",
  signature: "async function createUser(email: string, role: 'admin' | 'member'): Promise<User>",
  docstring: "Creates a new user account and sends a welcome email.",
  sourceContext: `async function createUser(email: string, role: 'admin' | 'member'): Promise<User> {
  const user = await db.users.insert({ email, role, createdAt: new Date() })
  await mailer.sendWelcome(user)
  return user
}`,
  filePath: "src/users/service.ts",
  language: "TypeScript",
}
async function main() {
  let result: GeneratedDocResult | null = null
  try {
    result = await generateDocumentation(client, element, {
      multiLanguage: true,
      verify: false,
    })
    console.log("Documentation generated successfully")
    console.log("Tokens used:", result.tokensUsed)
    console.log("Code example languages:", result.codeExamples.map((e) => e.language))
    console.log("\nGenerated markdown:\n", result.markdown)
  } catch (err) {
    const errorMessage = err instanceof Error ? err.message : String(err)
    console.warn("First attempt failed, retrying with error context:", errorMessage)
    try {
      result = await generateDocumentation(client, element, {
        multiLanguage: true,
        verify: true,
        previousError: errorMessage,
      })
      console.log("Retry succeeded. Tokens used:", result.tokensUsed)
    } catch (retryErr) {
      console.error("Documentation generation failed after retry:", retryErr)
      process.exit(1)
    }
  }
}
main()
// Expected output:
// Documentation generated successfully
// Tokens used: 512
// Code example languages: [ 'typescript', 'python' ]
//
// Generated markdown:
// Use `createUser` to register a new user in your system. ...
normalizeDelimiters
function normalizeDelimiters(content: string): string
Use `normalizeDelimiters` to sanitize LLM-generated content so that section delimiters like `---MARKDOWN---` and `---TYPESCRIPT---` are reliably parseable regardless of how the model formatted them.
When you're processing raw AI output that contains structured delimiters, models like GPT-4o frequently introduce variations — extra dashes, surrounding backticks, bold markers, inconsistent casing, or stray spaces. This function is the cleanup step you run before attempting to split or parse that output into its component sections.
It scans the content line by line and collapses any recognizable delimiter variant into its canonical form. A line like `**---markdown---**` becomes `---MARKDOWN---`, and `---- TYPESCRIPT ----` becomes `---TYPESCRIPT---`, which downstream parsers can match with a simple exact string check.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| `content` | `string` | Yes | Raw text returned by an LLM, potentially containing malformed delimiters. The entire multi-section response string — not a single line. |
Returns
Returns the cleaned string with all delimiter variants normalized to their canonical `---KEYWORD---` form. Pass this directly to your section-splitting logic to reliably extract `MARKDOWN`, `TYPESCRIPT`, `PYTHON`, `CODE`, and `END` blocks.
**Heads up:**
- Only the five keywords `MARKDOWN`, `TYPESCRIPT`, `PYTHON`, `CODE`, and `END` are recognized. Any other delimiter-like pattern (e.g., `---EXAMPLE---`) is left untouched.
- Normalization is case-insensitive on input but always outputs uppercase — `---Markdown---` becomes `---MARKDOWN---`, never `---markdown---`.
Example:
function normalizeDelimiters(content: string): string {
  const delimiterPattern = /^[`*]*-{2,}\s*(MARKDOWN|TYPESCRIPT|PYTHON|CODE|END)\s*-{2,}[`*]*$/gim;
  return content.replace(delimiterPattern, (_match, keyword: string) => `---${keyword.toUpperCase()}---`);
}
// Simulate raw GPT-4o output with messy delimiters
const rawLLMOutput = `
Use \`createUser\` to register a new user in the system.
\`---TYPESCRIPT---\`
const user = await createUser({ email: "ada@example.com" });
console.log(user.id);
**---- end ----**
`;

const cleaned = normalizeDelimiters(rawLLMOutput);
console.log(cleaned);
// The backtick-wrapped and bold variants collapse to canonical form:
// ---TYPESCRIPT--- and ---END---
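Once normalized, the section-splitting step mentioned under Returns reduces to exact string checks. A minimal sketch; `splitSections` is illustrative, not skrypt's actual parser:
// Sketch: split normalized output into named sections keyed by delimiter keyword.
function splitSections(normalized: string): Record<string, string> {
  const sections: Record<string, string> = {}
  let current: string | null = null
  for (const line of normalized.split("\n")) {
    const match = line.match(/^---(MARKDOWN|TYPESCRIPT|PYTHON|CODE)---$/)
    if (match) {
      current = match[1]
      sections[current] = ""
    } else if (line === "---END---") {
      current = null
    } else if (current !== null) {
      sections[current] += line + "\n"
    }
  }
  return sections
}

const sections = splitSections(cleaned)
console.log(Object.keys(sections)) // [ 'TYPESCRIPT' ]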
stripResponseMarkers
function stripResponseMarkers(content: string): string
Use `stripResponseMarkers` to sanitize LLM-generated content before writing it to `.md` or `.mdx` files.
Call this as a final cleanup step whenever you're processing raw AI output that may contain structural markers like `---MARKDOWN---`.
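A minimal sketch of the stripping step described above, assuming it removes the canonical `---KEYWORD---` lines produced by `normalizeDelimiters`; this mock is an assumption, not skrypt's implementation:
// Sketch: drop canonical ---KEYWORD--- marker lines so only clean markdown
// remains for the .md/.mdx file. Assumes markers were normalized first.
function stripResponseMarkers(content: string): string {
  return content
    .split("\n")
    .filter((line) => !/^---(MARKDOWN|TYPESCRIPT|PYTHON|CODE|END)---$/.test(line.trim()))
    .join("\n")
}

console.log(stripResponseMarkers("---MARKDOWN---\nUse \`createUser\` to register a user.\n---END---"))
// Use `createUser` to register a user.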