OpenAICompatibleClient

class OpenAICompatibleClient implements LLMClient

Use OpenAICompatibleClient to connect skrypt's documentation generator to any OpenAI-compatible LLM — including OpenAI, DeepSeek, Ollama, OpenRouter, and Google Gemini — without changing your generation pipeline.

Reach for this class when configuring which AI provider powers your skrypt generate runs, or when you want to swap providers (say, from OpenAI to a self-hosted Ollama instance) without touching the rest of your setup.

The client normalizes provider differences — base URLs, authentication headers, retry behavior — behind a single interface, so the rest of skrypt treats every provider identically.

Constructor parameters

Name              | Type        | Required | Description
config.provider   | LLMProvider | Yes      | The provider to connect to ("openai", "deepseek", "ollama", "openrouter", "gemini"). Determines the base URL and any provider-specific headers automatically.
config.model      | string      | Yes      | The model identifier to use for completions — e.g. "gpt-4o", "deepseek-chat", "llama3". Must be a model available on the chosen provider.
config.apiKey     | string      | Yes      | Your provider API key. For Ollama (local), pass any non-empty string — it isn't validated.
config.maxRetries | number      | No       | How many times to retry a failed completion request before throwing. Defaults to 3.

Returns

An OpenAICompatibleClient instance that implements LLMClient. Pass it directly to skrypt's generator config — the generator calls .complete() on it to produce documentation and code examples.

Heads up

  • Ollama requires a locally running server (default http://localhost:11434). The client won't throw on construction if Ollama isn't running — the error surfaces only when a completion is attempted.
  • Provider base URLs are resolved automatically from the provider field. Don't try to override the endpoint by manipulating the API key or model string — use the correct provider value (or, for self-hosted endpoints, the constructor's baseUrl option) instead.
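Because a misconfigured Ollama setup only fails once a completion is attempted, it can help to probe the server before constructing the client. The sketch below is a hypothetical helper, not part of skrypt's API; it assumes Ollama's default port and that the server answers plain GET requests at its root.

```typescript
// Hypothetical pre-flight check, NOT part of skrypt's API.
// Probes the Ollama server's root endpoint so a dead server is caught
// before any completion is attempted.
async function isOllamaReachable(
  baseUrl: string = "http://localhost:11434"
): Promise<boolean> {
  try {
    // Abort quickly rather than hanging on an unreachable host
    const res = await fetch(baseUrl, { signal: AbortSignal.timeout(1000) });
    return res.ok;
  } catch {
    // Connection refused or timed out: the server isn't running
    return false;
  }
}
```

If it returns false, fail fast with a clear message instead of waiting for the first completion request to error out.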

Example:

// Inline types to keep this self-contained
type LLMProvider = "openai" | "deepseek" | "ollama" | "openrouter" | "gemini"

interface LLMClientConfig {
  provider: LLMProvider
  model: string
  apiKey: string
  maxRetries?: number
}

interface CompletionRequest {
  messages: { role: "system" | "user" | "assistant"; content: string }[]
  temperature?: number
}

interface CompletionResponse {
  content: string
  usage?: { promptTokens: number; completionTokens: number }
}

interface LLMClient {
  provider: LLMProvider
  complete(request: CompletionRequest): Promise<CompletionResponse>
}

// Minimal mock of OpenAICompatibleClient for demonstration
class OpenAICompatibleClient implements LLMClient {
  provider: LLMProvider
  private model: string
  private maxRetries: number
  private apiKey: string

  constructor(config: LLMClientConfig) {
    this.provider = config.provider
    this.model = config.model
    this.maxRetries = config.maxRetries ?? 3
    this.apiKey = config.apiKey
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    // In real usage, this calls the provider's OpenAI-compatible /chat/completions endpoint
    console.log(`[${this.provider}] Sending ${request.messages.length} message(s) to ${this.model}`)
    return {
      content: "## createUser\n\nCreates a new user and returns the created record.",
      usage: { promptTokens: 120, completionTokens: 34 },
    }
  }
}

// --- Example: swap from OpenAI to DeepSeek with no other changes ---

async function generateDocs(client: LLMClient) {
  return client.complete({
    messages: [
      { role: "system", content: "You are a technical documentation writer." },
      { role: "user", content: "Document this function: createUser(email: string): Promise<User>" },
    ],
    temperature: 0.2,
  })
}

async function main() {
  // OpenAI
  const openaiClient = new OpenAICompatibleClient({
    provider: "openai",
    model: "gpt-4o",
    apiKey: "sk-proj-abc123xyz789",
    maxRetries: 3,
  })

  // DeepSeek — same interface, different provider
  const deepseekClient = new OpenAICompatibleClient({
    provider: "deepseek",
    model: "deepseek-chat",
    apiKey: "dsk-abc123xyz789",
  })

  try {
    const result = await generateDocs(openaiClient)
    console.log("Provider:", openaiClient.provider)
    console.log("Generated doc:", result.content)
    console.log("Tokens used:", result.usage)
    // Provider: openai
    // Generated doc: ## createUser
    //
    // Creates a new user and returns the created record.
    // Tokens used: { promptTokens: 120, completionTokens: 34 }

    // Swap to DeepSeek — generateDocs() is unchanged
    const deepseekResult = await generateDocs(deepseekClient)
    console.log("\nProvider:", deepseekClient.provider)
    console.log("Generated doc:", deepseekResult.content)
  } catch (err) {
    console.error("Completion failed:", err)
  }
}

main()

Methods

constructor

constructor(config: LLMClientConfig)

Use new OpenAICompatibleClient(config) to connect skrypt's documentation generator to any OpenAI-compatible LLM provider — including OpenAI, Anthropic, and self-hosted models.

Instantiate this client once during your skrypt setup, then pass it to the generation pipeline. It handles authentication, model selection, and retry logic for all AI calls that produce your documentation and code examples.

The client resolves the correct base URL for known providers automatically, so you only need to supply a baseUrl when using a custom or self-hosted endpoint.

Parameters

Name              | Type   | Required | Description
config.provider   | string | Yes      | The LLM provider to target (e.g. "openai", "anthropic", "ollama"). Determines the default base URL and any provider-specific request headers.
config.model      | string | Yes      | The model identifier to use for generation (e.g. "gpt-4o", "claude-3-5-sonnet-20241022"). Must be a model available on your chosen provider.
config.apiKey     | string | Yes      | Your provider API key — used to authenticate every request. For OpenAI, find it at platform.openai.com/api-keys.
config.baseUrl    | string | No       | Override the provider's default API endpoint. Required for self-hosted models (e.g. a local Ollama instance at http://localhost:11434/v1).
config.maxRetries | number | No       | How many times to retry a failed request before throwing. Defaults to 3.

Returns

An OpenAICompatibleClient instance ready to make completion requests. Pass this instance to skrypt generate or your custom generation pipeline to drive all AI-powered doc and example output.

Heads up

  • If you specify a provider that skrypt recognizes, baseUrl is optional — the correct endpoint is resolved for you. If you're using an unknown or local provider, always set baseUrl explicitly or requests will fail.
  • maxRetries applies to transient network and rate-limit errors. It won't retry on authentication failures (4xx), so double-check your apiKey if you see immediate errors.
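The retry behavior described above can be sketched roughly as follows. This is an illustration of the assumed policy (retry transient failures, rethrow other client errors), not skrypt's actual implementation; withRetries, the backoff schedule, and the HttpError shape are all assumptions.

```typescript
// Sketch of the assumed retry policy: transient failures (network errors,
// 429 rate limits, 5xx responses) are retried with exponential backoff,
// while other 4xx errors such as a bad API key are rethrown at once.
interface HttpError extends Error {
  status?: number;
}

async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number = 3,
  baseDelayMs: number = 500
): Promise<T> {
  let lastErr: unknown;
  // One initial attempt plus up to maxRetries retries
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as HttpError).status;
      // Client errors other than 429 are not transient: rethrow immediately
      if (status !== undefined && status >= 400 && status < 500 && status !== 429) {
        throw err;
      }
      lastErr = err;
      if (attempt < maxRetries) {
        // Exponential backoff: baseDelayMs, then 2x, 4x, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastErr;
}
```

Note that under this reading, maxRetries counts retries after the initial attempt, so maxRetries: 3 allows up to four requests in total.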

Example:

// Inline types to keep this self-contained
type LLMProvider = "openai" | "anthropic" | "ollama";

interface LLMClientConfig {
  provider: LLMProvider;
  model: string;
  apiKey: string;
  baseUrl?: string;
  maxRetries?: number;
}

// Minimal mock of OpenAICompatibleClient
class OpenAICompatibleClient {
  private provider: LLMProvider;
  private model: string;
  private maxRetries: number;
  private apiKey: string;
  private baseUrl: string;

  private static PROVIDER_BASE_URLS: Record<LLMProvider, string> = {
    openai: "https://api.openai.com/v1",
    anthropic: "https://api.anthropic.com/v1",
    ollama: "http://localhost:11434/v1",
  };

  constructor(config: LLMClientConfig) {
    this.provider = config.provider;
    this.model = config.model;
    this.apiKey = config.apiKey;
    this.maxRetries = config.maxRetries ?? 3;
    this.baseUrl =
      config.baseUrl || OpenAICompatibleClient.PROVIDER_BASE_URLS[config.provider];
  }

  describe() {
    return {
      provider: this.provider,
      model: this.model,
      baseUrl: this.baseUrl,
      maxRetries: this.maxRetries,
    };
  }
}

// Connect to OpenAI with GPT-4o
const client = new OpenAICompatibleClient({
  provider: "openai",
  model: "gpt-4o",
  apiKey: "sk-proj-a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6",
  maxRetries: 5,
});

console.log(client.describe());
// {
//   provider: 'openai',
//   model: 'gpt-4o',
//   baseUrl: 'https://api.openai.com/v1',
//   maxRetries: 5
// }

// Connect to a local Ollama instance
const localClient = new OpenAICompatibleClient({
  provider: "ollama",
  model: "llama3.2",
  apiKey: "ollama", // Ollama doesn't require a real key
  baseUrl: "http://localhost:11434/v1",
});

console.log(localClient.describe());
// {
//   provider: 'ollama',
//   model: 'llama3.2',
//   baseUrl: 'http://localhost:11434/v1',
//   maxRetries: 3
// }

isConfigured

isConfigured(): boolean

Use isConfigured() to verify that an OpenAICompatibleClient instance has a valid API key before attempting any completions.

Call this before invoking complete() — especially at startup or before processing a batch of requests — to fail fast with a clear error rather than sending requests that will be rejected by the provider.

The check confirms the API key is neither an empty string nor the placeholder value 'not-set', which is the default when no key has been supplied.

Returns: true if the client holds a real API key and is ready to make requests. false if the key is missing or still set to its default placeholder. Use the result to gate any call to complete() or to surface a configuration error to the user early.

Heads up:

  • This only validates that a key exists — it doesn't verify the key is accepted by the provider. isConfigured() still returns true for a well-formed but revoked or incorrect key.

Example:

type LLMProvider = "openai" | "anthropic" | "groq";

interface ClientConfig {
  apiKey: string;
  model: string;
  provider: LLMProvider;
  maxRetries?: number;
}

class OpenAICompatibleClient {
  private apiKey: string;
  private model: string;
  private provider: LLMProvider;
  private maxRetries: number;

  constructor(config: ClientConfig) {
    this.apiKey = config.apiKey ?? "not-set";
    this.model = config.model;
    this.provider = config.provider;
    this.maxRetries = config.maxRetries ?? 3;
  }

  isConfigured(): boolean {
    return this.apiKey !== "" && this.apiKey !== "not-set";
  }
}

// Simulates loading API key from environment
const apiKey = process.env.OPENAI_API_KEY ?? "not-set";

const client = new OpenAICompatibleClient({
  apiKey,
  model: "gpt-4o",
  provider: "openai",
});

if (!client.isConfigured()) {
  console.error(
    "OpenAI client is not configured. Set the OPENAI_API_KEY environment variable."
  );
  process.exit(1);
}

console.log("Client ready:", client.isConfigured());
// Client ready: true  (when OPENAI_API_KEY is set)
// or: exits with error if key is missing

complete

async complete(request: CompletionRequest): Promise<CompletionResponse>

Use complete to send a prompt to any OpenAI-compatible LLM and get a structured response back — without writing raw HTTP calls or managing chat completion payloads yourself.

Reach for this when skrypt's documentation generator needs to call an LLM provider (OpenAI, Together, Mistral, etc.) to produce descriptions, parameter explanations, or code examples for a scanned API signature. It sits at the core of the generation pipeline: every doc block skrypt writes passes through here.

The method wraps the OpenAI chat completions API, falling back to the client's default model if you don't specify one in the request. Call isConfigured() before invoking complete to confirm the client holds a valid API key — complete will throw if the underlying HTTP request fails.

Parameters

Name                | Type                   | Required | Description
request             | CompletionRequest      | Yes      | The prompt payload to send. Set request.model to override the client's default model for this call only; omit it to use whatever model was set at construction time.
request.messages    | Array<{role, content}> | Yes      | The conversation turns to send. For single-shot doc generation, pass one system message and one user message containing the API signature.
request.model       | string                 | No       | Model identifier (e.g. "gpt-4o", "mistral-large-latest"). Overrides the client-level default for this request only.
request.temperature | number                 | No       | Controls output randomness. Lower values (0.2–0.4) produce more consistent, deterministic documentation; higher values increase creativity.

Returns

Returns a CompletionResponse containing the model's reply. Pull the generated text from response.choices[0].message.content and write it directly into your MDX output. The response also includes usage token counts — useful for tracking generation cost across a large codebase scan.

Heads up

  • complete throws if the provider returns a non-2xx status (rate limits, invalid keys, model not found). Wrap calls in try/catch and check isConfigured() first — an unconfigured client will fail immediately on the first request.
  • The model field in CompletionRequest is a per-call override, not a permanent change. The client's default model is unaffected after the call returns.

Example:

// Inline types — do not import from autodocs
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

interface CompletionRequest {
  messages: Message[];
  model?: string;
  temperature?: number;
}

interface CompletionResponse {
  choices: Array<{
    message: { role: string; content: string };
    finish_reason: string;
  }>;
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Minimal mock of OpenAICompatibleClient.complete
class OpenAICompatibleClient {
  private apiKey: string;
  private model: string;

  constructor(config: { apiKey: string; model: string }) {
    this.apiKey = config.apiKey;
    this.model = config.model;
  }

  isConfigured(): boolean {
    return this.apiKey !== "" && this.apiKey !== "not-set";
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const model = request.model || this.model;

    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ model, messages: request.messages, temperature: request.temperature }),
    });

    if (!res.ok) {
      throw new Error(`LLM request failed: ${res.status} ${res.statusText}`);
    }

    return res.json() as Promise<CompletionResponse>;
  }
}

// --- Usage ---
const client = new OpenAICompatibleClient({
  apiKey: "sk-proj-4fKzR9mXwQ2nLpT8vYcA1bNdEuJhGs7oC3iWxZqPmRtVlBk",
  model: "gpt-4o-mini",
});

const apiSignature = `async createUser(email: string, role: "admin" | "member"): Promise<User>`;

async function generateDoc() {
  try {
    if (!client.isConfigured()) {
      throw new Error("LLM client is not configured — check your API key.");
    }

    const response = await client.complete({
      messages: [
        {
          role: "system",
          content: "You are a technical writer. Generate concise API documentation for the given function signature.",
        },
        {
          role: "user",
          content: `Document this function:\n\n${apiSignature}`,
        },
      ],
      temperature: 0.3,
    });

    const doc = response.choices[0].message.content;
    console.log("Generated doc:\n", doc);
    console.log("\nTokens used:", response.usage.total_tokens);
  } catch (err) {
    console.error("Documentation generation failed:", err);
  }
}

generateDoc();
// With a valid API key, prints something like:
// Generated doc:
//  Use `createUser` to register a new user with a specified email and role...
// Tokens used: 187