
Integrations

AnthropicClient

class AnthropicClient implements LLMClient

Use this to send prompts to Anthropic's Claude models and receive structured completions — with automatic retries built in.

AnthropicClient wraps the Anthropic SDK and implements a common LLMClient interface, making it easy to swap between LLM providers without changing your application logic.
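Because every provider client implements the same LLMClient interface, application code can depend on that interface alone. The interface itself is not reproduced in this reference, so the sketch below reconstructs a plausible shape from the members documented here (provider and complete) — treat the exact definition as an assumption, and the StubClient as illustration only.

```typescript
// Hypothetical sketch — reconstructed from the documented members
// (provider, complete); the real interface may differ.
type LLMProvider = 'anthropic' | 'openai'

interface CompletionRequest {
  messages: Array<{ role: 'user' | 'assistant'; content: string }>
  systemPrompt?: string
  maxTokens?: number
}

interface CompletionResponse {
  content: string
  model: string
  usage: { inputTokens: number; outputTokens: number }
}

interface LLMClient {
  readonly provider: LLMProvider
  complete(request: CompletionRequest): Promise<CompletionResponse>
}

// Minimal stub implementation, to show what "implements LLMClient" buys:
// any conforming client can be swapped in behind the same two members.
class StubClient implements LLMClient {
  readonly provider = 'anthropic' as const

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const last = request.messages[request.messages.length - 1]
    return {
      content: `echo: ${last?.content ?? ''}`,
      model: 'stub-model',
      usage: { inputTokens: 0, outputTokens: 0 },
    }
  }
}
```

Code written against `LLMClient` never needs to know which concrete client it received.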

Constructor Parameters

  • config.model (string, required) — The Claude model to use (e.g. "claude-3-5-sonnet-20241022")
  • config.apiKey (string, required) — Your Anthropic API key
  • config.maxRetries (number, optional) — Number of retry attempts on failure. Defaults to 3

Properties

  • provider ('anthropic') — Always 'anthropic'; useful for provider-switching logic

Methods

complete(request: CompletionRequest): Promise<CompletionResponse>

Sends a completion request to Claude and returns the model's response.

  • request.messages (Array<{ role: 'user' | 'assistant', content: string }>, required) — Conversation history to send
  • request.systemPrompt (string, optional) — System-level instruction prepended to the conversation
  • request.maxTokens (number, optional) — Maximum tokens in the response

Returns: A CompletionResponse containing:

  • content — the model's reply text
  • usage — token counts (inputTokens, outputTokens)
  • model — the model that generated the response

Example

// ── Inline types (no external imports needed) ──────────────────────────────

type LLMProvider = 'anthropic' | 'openai'

type LLMClientConfig = {
  model: string
  apiKey: string
  maxRetries?: number
}

type CompletionRequest = {
  messages: Array<{ role: 'user' | 'assistant'; content: string }>
  systemPrompt?: string
  maxTokens?: number
}

type CompletionResponse = {
  content: string
  model: string
  usage: { inputTokens: number; outputTokens: number }
}

// ── Simulated AnthropicClient (mirrors real implementation) ────────────────

class AnthropicClient {
  provider: LLMProvider = 'anthropic'
  private model: string
  private apiKey: string
  private maxRetries: number

  constructor(config: LLMClientConfig) {
    this.model = config.model
    this.apiKey = config.apiKey
    this.maxRetries = config.maxRetries ?? 3
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const body = {
      model: this.model,
      max_tokens: request.maxTokens ?? 1024,
      system: request.systemPrompt,
      messages: request.messages,
    }

    let lastError: Error | undefined

    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      try {
        const response = await fetch('https://api.anthropic.com/v1/messages', {
          method: 'POST',
          headers: {
            'x-api-key': this.apiKey,
            'anthropic-version': '2023-06-01',
            'content-type': 'application/json',
          },
          body: JSON.stringify(body),
        })

        if (!response.ok) {
          throw new Error(`Anthropic API error: ${response.status} ${response.statusText}`)
        }

        const data = await response.json() as {
          content: Array<{ text: string }>
          model: string
          usage: { input_tokens: number; output_tokens: number }
        }

        return {
          content: data.content[0]?.text ?? '',
          model: data.model,
          usage: {
            inputTokens: data.usage.input_tokens,
            outputTokens: data.usage.output_tokens,
          },
        }
      } catch (error) {
        lastError = error instanceof Error ? error : new Error(String(error))
        if (attempt < this.maxRetries) {
          console.warn(`Attempt ${attempt} failed, retrying...`)
          await new Promise(resolve => setTimeout(resolve, 500 * attempt))
        }
      }
    }

    // lastError is always set after a failed attempt; the fallback guards maxRetries < 1
    throw lastError ?? new Error('complete() failed: no attempts were made')
  }
}

// ── Usage example ──────────────────────────────────────────────────────────

const client = new AnthropicClient({
  apiKey: process.env.ANTHROPIC_API_KEY || 'your-anthropic-api-key',
  model: 'claude-3-5-sonnet-20241022',
  maxRetries: 3,
})

async function main() {
  try {
    const response = await client.complete({
      systemPrompt: 'You are a concise assistant. Reply in one sentence.',
      messages: [
        { role: 'user', content: 'What is the capital of Japan?' },
      ],
      maxTokens: 100,
    })

    console.log('Provider: ', client.provider)
    console.log('Model:    ', response.model)
    console.log('Reply:    ', response.content)
    console.log('Tokens:   ', response.usage)
    // Expected output:
    // Provider:  anthropic
    // Model:     claude-3-5-sonnet-20241022
    // Reply:     The capital of Japan is Tokyo.
    // Tokens:    { inputTokens: 28, outputTokens: 10 }
  } catch (error) {
    console.error('Completion failed after all retries:', error)
  }
}

main()

OpenAICompatibleClient

class OpenAICompatibleClient implements LLMClient

Use this to connect to any OpenAI-compatible LLM provider — including OpenAI, DeepSeek, Ollama, OpenRouter, and Google Gemini — through a single unified client interface.

OpenAICompatibleClient normalizes provider differences (base URLs, auth headers, retry logic) so you can swap providers without changing your application code.

Constructor Config

  • provider (LLMProvider, required) — Provider identifier (e.g. 'openai', 'deepseek', 'ollama', 'openrouter', 'gemini')
  • model (string, required) — Model name to use (e.g. 'gpt-4o', 'deepseek-chat', 'llama3')
  • apiKey (string, required) — API key for the provider (use 'ollama' or any placeholder for local providers)
  • maxRetries (number, optional) — Number of retry attempts on failure. Defaults to 3

Properties

  • provider (LLMProvider) — The configured provider for this client instance

Methods

complete(request: CompletionRequest): Promise<CompletionResponse>

Sends a completion request to the configured provider and returns the model's response.

  • request.messages (Message[], required) — Array of { role: 'system' | 'user' | 'assistant', content: string }
  • request.temperature (number, optional) — Sampling temperature (0–2). Lower = more deterministic
  • request.maxTokens (number, optional) — Maximum tokens in the response

Returns: Promise<CompletionResponse> — resolves with { content: string, usage?: { promptTokens, completionTokens, totalTokens } }

Supported Providers

  • OpenAI — 'openai' — Requires OpenAI API key
  • DeepSeek — 'deepseek' — Requires DeepSeek API key
  • Ollama — 'ollama' — Local; no API key needed
  • OpenRouter — 'openrouter' — Routes to many models
  • Google Gemini — 'gemini' — Requires Google API key
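The pieces above can be exercised with a small provider-switching sketch. The client below is simulated (it echoes locally rather than wrapping the OpenAI SDK), and the PROVIDER_BASE_URLS values are illustrative defaults, not the library's actual map — treat both as assumptions.

```typescript
// Simulated sketch of provider switching — the real client calls a
// provider's OpenAI-compatible endpoint; this one echoes locally.
type LLMProvider = 'openai' | 'deepseek' | 'ollama' | 'openrouter' | 'gemini'

interface LLMClientConfig {
  provider: LLMProvider
  model: string
  apiKey: string
  maxRetries?: number
}

interface CompletionRequest {
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[]
  temperature?: number
  maxTokens?: number
}

interface CompletionResponse {
  content: string
  usage?: { promptTokens: number; completionTokens: number; totalTokens: number }
}

// Assumed base URLs for illustration — the library's built-in map may differ.
const PROVIDER_BASE_URLS: Record<LLMProvider, string> = {
  openai: 'https://api.openai.com/v1',
  deepseek: 'https://api.deepseek.com/v1',
  ollama: 'http://localhost:11434/v1',
  openrouter: 'https://openrouter.ai/api/v1',
  gemini: 'https://generativelanguage.googleapis.com/v1beta/openai',
}

class OpenAICompatibleClient {
  provider: LLMProvider
  readonly baseUrl: string
  private model: string

  constructor(config: LLMClientConfig) {
    this.provider = config.provider
    this.model = config.model
    this.baseUrl = PROVIDER_BASE_URLS[config.provider]
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const last = request.messages[request.messages.length - 1]
    return { content: `[${this.provider}/${this.model}] ${last?.content ?? ''}` }
  }
}

// Swapping providers changes only the config object, never the call site:
const configs: LLMClientConfig[] = [
  { provider: 'openai', model: 'gpt-4o', apiKey: process.env.OPENAI_API_KEY || 'placeholder' },
  { provider: 'ollama', model: 'llama3', apiKey: 'ollama' }, // local: key is a placeholder
]

for (const config of configs) {
  const client = new OpenAICompatibleClient(config)
  client
    .complete({ messages: [{ role: 'user', content: 'ping' }] })
    .then(r => console.log(client.baseUrl, '->', r.content))
}
```

Because each entry differs only in its config, adding a provider means adding one config object, not new call paths.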

AnthropicClient.complete

async complete(request: CompletionRequest): Promise<CompletionResponse>

Use this to send a conversation to Anthropic's Claude models and receive a completion response, handling system prompts and multi-turn conversations automatically.

This method separates system messages from conversation messages internally, so you can pass a unified message array without pre-processing. It wraps Anthropic's SDK and returns a normalized CompletionResponse regardless of which Claude model you target.

Parameters

  • request (CompletionRequest, required) — The completion request object containing messages, model, and generation settings
  • request.messages (Message[], required) — Array of messages with role ("system", "user", "assistant") and content string
  • request.model (string, optional) — Claude model ID (e.g. "claude-3-5-sonnet-20241022"). Falls back to the client's default model if omitted
  • request.maxTokens (number, optional) — Maximum tokens to generate in the response
  • request.temperature (number, optional) — Sampling temperature between 0 and 1. Lower = more deterministic
  • request.stopSequences (string[], optional) — Sequences that will halt generation when encountered

Returns

Returns a Promise<CompletionResponse> that resolves with:

  • content (string) — The generated text from Claude
  • model (string) — The model ID that produced the response
  • usage.inputTokens (number) — Number of tokens in the prompt
  • usage.outputTokens (number) — Number of tokens in the completion
  • stopReason (string) — Why generation stopped (e.g. "end_turn", "max_tokens")

Rejects with an error if the API key is invalid, the model ID is unrecognized, or the request exceeds context limits.

Example

// ── Inline types (no external imports needed) ──────────────────────────────

type MessageRole = 'system' | 'user' | 'assistant'

interface Message {
  role: MessageRole
  content: string
}

interface CompletionRequest {
  messages: Message[]
  model?: string
  maxTokens?: number
  temperature?: number
  stopSequences?: string[]
}

interface CompletionResponse {
  content: string
  model: string
  usage: {
    inputTokens: number
    outputTokens: number
  }
  stopReason: string
}

// ── Simulated AnthropicClient.complete implementation ──────────────────────

class AnthropicClient {
  private model: string
  private apiKey: string

  constructor(config: { apiKey: string; model?: string }) {
    this.apiKey = config.apiKey
    this.model = config.model || 'claude-3-5-sonnet-20241022'
  }

  isConfigured(): boolean {
    return true
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const model = request.model || this.model

    // Separate system message from conversation (mirrors real implementation)
    const systemMessage = request.messages.find(m => m.role === 'system')
    const conversationMessages = request.messages.filter(m => m.role !== 'system')

    // Build the Anthropic API payload
    const payload = {
      model,
      max_tokens: request.maxTokens ?? 1024,
      temperature: request.temperature ?? 0.7,
      ...(systemMessage && { system: systemMessage.content }),
      messages: conversationMessages.map(m => ({
        role: m.role,
        content: m.content,
      })),
      ...(request.stopSequences && { stop_sequences: request.stopSequences }),
    }

    const response = await fetch('https://api.anthropic.com/v1/messages', {
      method: 'POST',
      headers: {
        'x-api-key': this.apiKey,
        'anthropic-version': '2023-06-01',
        'content-type': 'application/json',
      },
      body: JSON.stringify(payload),
    })

    if (!response.ok) {
      const error = await response.json().catch(() => ({}))
      throw new Error(
        `Anthropic API error ${response.status}: ${(error as any).error?.message ?? response.statusText}`
      )
    }

    const data = await response.json() as any

    return {
      content: data.content[0]?.text ?? '',
      model: data.model,
      usage: {
        inputTokens: data.usage.input_tokens,
        outputTokens: data.usage.output_tokens,
      },
      stopReason: data.stop_reason,
    }
  }
}

// ── Usage example ──────────────────────────────────────────────────────────

const client = new AnthropicClient({
  apiKey: process.env.ANTHROPIC_API_KEY || 'your-anthropic-api-key',
  model: 'claude-3-5-sonnet-20241022',
})

async function main() {
  try {
    const response = await client.complete({
      messages: [
        {
          role: 'system',
          content: 'You are a concise assistant. Reply in one sentence.',
        },
        {
          role: 'user',
          content: 'What is the capital of Japan?',
        },
      ],
      maxTokens: 256,
      temperature: 0.3,
    })

    console.log('Reply:      ', response.content)
    console.log('Model:      ', response.model)
    console.log('Stop reason:', response.stopReason)
    console.log('Tokens used:', response.usage)

    // Expected output:
    // Reply:       The capital of Japan is Tokyo.
    // Model:       claude-3-5-sonnet-20241022
    // Stop reason: end_turn
    // Tokens used: { inputTokens: 32, outputTokens: 9 }
  } catch (error) {
    console.error('Completion failed:', error instanceof Error ? error.message : error)
    process.exit(1)
  }
}

main()

OpenAICompatibleClient.complete

async complete(request: CompletionRequest): Promise<CompletionResponse>

Use this to send a chat completion request to any OpenAI-compatible API endpoint (OpenAI, Azure, Groq, Together AI, etc.) and receive a structured response with the generated text and token usage.

Parameters

  • request (CompletionRequest, required) — The completion request object containing messages, model, and generation settings
  • request.messages (Array<{ role: string, content: string }>, required) — Conversation history as an array of role/content message objects
  • request.model (string, optional) — Model identifier to use (e.g. "gpt-4o", "llama-3-8b"). Falls back to the client's default model if omitted
  • request.temperature (number, optional) — Sampling temperature between 0–2. Lower = more deterministic, higher = more creative
  • request.maxTokens (number, optional) — Maximum number of tokens to generate in the response
  • request.systemPrompt (string, optional) — System-level instruction prepended to the conversation

Returns

Returns a Promise<CompletionResponse> that resolves with:

  • content (string) — The generated text from the model
  • model (string) — The model that was actually used to generate the response
  • usage.promptTokens (number) — Number of tokens consumed by the input messages
  • usage.completionTokens (number) — Number of tokens in the generated response
  • usage.totalTokens (number) — Total tokens used (prompt + completion)

Throws an error if the API request fails, the model is unavailable, or the API key is invalid.

Example

import OpenAI from 'openai'

// --- Inline types (do not import from library) ---
interface Message {
  role: 'system' | 'user' | 'assistant'
  content: string
}

interface CompletionRequest {
  messages: Message[]
  model?: string
  temperature?: number
  maxTokens?: number
  systemPrompt?: string
}

interface CompletionResponse {
  content: string
  model: string
  usage: {
    promptTokens: number
    completionTokens: number
    totalTokens: number
  }
}

// --- Inline OpenAI-compatible client implementation ---
class OpenAICompatibleClient {
  private client: OpenAI
  private model: string

  constructor(config: { apiKey: string; baseURL?: string; defaultModel?: string }) {
    this.client = new OpenAI({
      apiKey: config.apiKey,
      baseURL: config.baseURL,
    })
    this.model = config.defaultModel || 'gpt-4o-mini'
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const model = request.model || this.model

    const messages: Message[] = []

    if (request.systemPrompt) {
      messages.push({ role: 'system', content: request.systemPrompt })
    }

    messages.push(...request.messages)

    const response = await this.client.chat.completions.create({
      model,
      messages,
      temperature: request.temperature ?? 0.7,
      max_tokens: request.maxTokens,
    })

    const choice = response.choices[0]

    return {
      content: choice.message.content ?? '',
      model: response.model,
      usage: {
        promptTokens: response.usage?.prompt_tokens ?? 0,
        completionTokens: response.usage?.completion_tokens ?? 0,
        totalTokens: response.usage?.total_tokens ?? 0,
      },
    }
  }
}

// --- Usage example ---
const client = new OpenAICompatibleClient({
  apiKey: process.env.OPENAI_API_KEY || 'sk-your-api-key-here',
  defaultModel: 'gpt-4o-mini',
  // Swap baseURL to use Groq, Together AI, etc.:
  // baseURL: 'https://api.groq.com/openai/v1'
})

async function main() {
  try {
    const response = await client.complete({
      systemPrompt: 'You are a concise assistant. Reply in one sentence.',
      messages: [
        { role: 'user', content: 'What is the capital of Japan?' },
      ],
      temperature: 0.3,
      maxTokens: 60,
    })

    console.log('Generated text:', response.content)
    // Output: "The capital of Japan is Tokyo."

    console.log('Model used:', response.model)
    // Output: "gpt-4o-mini"

    console.log('Token usage:', response.usage)
    // Output: { promptTokens: 28, completionTokens: 10, totalTokens: 38 }
  } catch (error) {
    if (error instanceof Error) {
      console.error('Completion failed:', error.message)
      // Common causes: invalid API key, unknown model name, rate limit exceeded
    }
  }
}

main()

AnthropicClient.constructor

constructor(config: LLMClientConfig)

Use this to create an Anthropic-backed LLM client that handles retries and model configuration for sending completion requests.

The AnthropicClient constructor initializes a client instance configured with your Anthropic API key, target model, and retry behavior. It wraps the Anthropic SDK to provide a consistent interface for making LLM calls.

Parameters

  • config.apiKey (string, required) — Your Anthropic API key used to authenticate requests
  • config.model (string, required) — The Anthropic model to use (e.g., claude-3-5-sonnet-20241022)
  • config.maxRetries (number, optional) — Number of times to retry failed requests. Defaults to 3

Returns

Returns an AnthropicClient instance with:

  • provider — always 'anthropic'
  • Access to completion methods for sending prompts to the configured model

When to use

  • You need a reusable client scoped to a specific Anthropic model
  • You want automatic retry logic on transient API failures
  • You are building a multi-provider LLM system and need an Anthropic-compatible implementation

Example

// Inline type definitions (do not import from skrypt)
type LLMProvider = 'anthropic' | 'openai'

interface LLMClientConfig {
  apiKey: string
  model: string
  maxRetries?: number
}

interface CompletionRequest {
  prompt: string
  maxTokens?: number
}

interface CompletionResponse {
  text: string
  usage?: { inputTokens: number; outputTokens: number }
}

// Simulated AnthropicClient (mirrors the real implementation)
class AnthropicClient {
  provider: LLMProvider = 'anthropic'
  private model: string
  private maxRetries: number
  private apiKey: string

  constructor(config: LLMClientConfig) {
    if (!config.apiKey) throw new Error('apiKey is required')
    if (!config.model) throw new Error('model is required')

    this.apiKey = config.apiKey
    this.model = config.model
    this.maxRetries = config.maxRetries ?? 3
  }

  // Simulated completion method
  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    console.log(`[${this.provider}] Sending request to model: ${this.model}`)
    console.log(`[${this.provider}] Max retries configured: ${this.maxRetries}`)
    console.log(`[${this.provider}] Prompt: "${request.prompt}"`)

    // Simulated response (real client would call Anthropic API here)
    return {
      text: `Simulated response from ${this.model}`,
      usage: { inputTokens: 12, outputTokens: 8 },
    }
  }
}

// --- Usage Example ---

async function main() {
  try {
    // Initialize the client with your Anthropic credentials
    const client = new AnthropicClient({
      apiKey: process.env.ANTHROPIC_API_KEY || 'sk-ant-your-api-key-here',
      model: 'claude-3-5-sonnet-20241022',
      maxRetries: 5, // optional, defaults to 3
    })

    console.log('Provider:', client.provider)
    // Output: Provider: anthropic

    const response = await client.complete({
      prompt: 'Explain recursion in one sentence.',
      maxTokens: 100,
    })

    console.log('Response:', response.text)
    // Output: Response: Simulated response from claude-3-5-sonnet-20241022

    console.log('Token usage:', response.usage)
    // Output: Token usage: { inputTokens: 12, outputTokens: 8 }
  } catch (error) {
    if (error instanceof Error) {
      console.error('Failed to initialize AnthropicClient:', error.message)
    }
  }
}

main()

OpenAICompatibleClient.constructor

constructor(config: LLMClientConfig)

Use this to create an LLM client that connects to any OpenAI-compatible API provider (OpenAI, Azure, Anthropic, local models, etc.) with automatic retry logic and provider-specific configuration.

Parameters

  • config (LLMClientConfig, required) — Configuration object for the LLM client
  • config.provider (LLMProvider, required) — The LLM provider identifier (e.g., 'openai', 'anthropic', 'azure')
  • config.model (string, required) — The model name to use for completions (e.g., 'gpt-4o', 'claude-3-5-sonnet')
  • config.apiKey (string, required) — API key for authenticating with the provider
  • config.baseUrl (string, optional) — Override the default base URL for the provider. Useful for local models or custom endpoints
  • config.maxRetries (number, optional) — Number of times to retry failed requests. Defaults to 3

Returns

An OpenAICompatibleClient instance with:

  • provider — the resolved provider set on the instance
  • complete() — method to send completion requests
  • Automatic retry handling on transient failures

Notes

  • If baseUrl is omitted, the client falls back to a built-in map of known provider URLs (PROVIDER_BASE_URLS)
  • maxRetries defaults to 3 if not specified
  • The underlying HTTP client is an OpenAI SDK instance, making this compatible with any API that follows the OpenAI REST spec

Example

// ── Inline types (no external imports needed) ──────────────────────────────

type LLMProvider = 'openai' | 'anthropic' | 'azure' | 'ollama' | string

interface LLMClientConfig {
  provider: LLMProvider
  model: string
  apiKey: string
  baseUrl?: string
  maxRetries?: number
}

interface CompletionRequest {
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[]
  temperature?: number
  maxTokens?: number
}

interface CompletionResponse {
  content: string
  usage?: { promptTokens: number; completionTokens: number }
}

// ── Provider default URLs (mirrors PROVIDER_BASE_URLS) ─────────────────────

const PROVIDER_BASE_URLS: Record<string, string> = {
  openai: 'https://api.openai.com/v1',
  anthropic: 'https://api.anthropic.com/v1',
  ollama: 'http://localhost:11434/v1',
}

// ── Simulated OpenAICompatibleClient ──────────────────────────────────────

class OpenAICompatibleClient {
  provider: LLMProvider
  private model: string
  private maxRetries: number
  private baseUrl: string
  private apiKey: string

  constructor(config: LLMClientConfig) {
    this.provider = config.provider
    this.model = config.model
    this.maxRetries = config.maxRetries ?? 3
    this.apiKey = config.apiKey
    this.baseUrl = config.baseUrl || PROVIDER_BASE_URLS[config.provider] || ''

    console.log(`[OpenAICompatibleClient] Initialized`)
    console.log(`  Provider : ${this.provider}`)
    console.log(`  Model    : ${this.model}`)
    console.log(`  Base URL : ${this.baseUrl}`)
    console.log(`  Retries  : ${this.maxRetries}`)
  }

  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    // Simulated response — replace with real OpenAI SDK call in production
    const lastMessage = request.messages.at(-1)?.content ?? ''
    return {
      content: `[Simulated response to: "${lastMessage}"]`,
      usage: { promptTokens: 12, completionTokens: 20 },
    }
  }
}

// ── Example 1: Standard OpenAI setup ──────────────────────────────────────

async function main() {
  try {
    const client = new OpenAICompatibleClient({
      provider: 'openai',
      model: 'gpt-4o',
      apiKey: process.env.OPENAI_API_KEY || 'sk-your-api-key-here',
      maxRetries: 3,
    })

    const response = await client.complete({
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'What is the capital of France?' },
      ],
      temperature: 0.7,
    })

    console.log('\n[Response]', response.content)
    console.log('[Usage]', response.usage)
    // Output: [Response] [Simulated response to: "What is the capital of France?"]
    // Output: [Usage]    { promptTokens: 12, completionTokens: 20 }

    // ── Example 2: Local Ollama model with custom base URL ─────────────────

    const localClient = new OpenAICompatibleClient({
      provider: 'ollama',
      model: 'llama3.2',
      apiKey: 'ollama', // Ollama doesn't require a real key
      baseUrl: 'http://localhost:11434/v1', // explicit override
      maxRetries: 1,
    })

    console.log('\n[Local provider]', localClient.provider)
    // Output: [Local provider] ollama

  } catch (error) {
    console.error('Failed to initialize or call client:', error)
  }
}

main()

AnthropicClient.isConfigured

isConfigured(): boolean

Use this to verify that an AnthropicClient instance is ready to make API calls before attempting completions.

This method acts as a readiness check — it confirms the client has been initialized, while credential validation itself is delegated to the underlying Anthropic SDK. Call it as a guard before executing requests in conditional flows or health checks.

Returns

  • true — Always; the client is initialized and the Anthropic SDK handles API key validation internally

Note: A true return does not guarantee the API key is valid or that the network is reachable. It confirms the client object is properly constructed. Actual credential errors surface when calling complete().

Example

// Inline types to simulate AnthropicClient behavior
interface LLMClientConfig {
  apiKey: string
  model?: string
  timeout?: number
  maxRetries?: number
}

// Simulated AnthropicClient class (self-contained, no external imports)
class AnthropicClient {
  private apiKey: string
  private model: string
  private timeout: number
  private maxRetries: number

  constructor(config: LLMClientConfig) {
    this.apiKey = config.apiKey
    this.model = config.model ?? 'claude-3-5-sonnet-20241022'
    this.timeout = config.timeout ?? 60000
    this.maxRetries = config.maxRetries ?? 2
  }

  isConfigured(): boolean {
    return true // Anthropic SDK handles validation
  }
}

// --- Usage Example ---

const client = new AnthropicClient({
  apiKey: process.env.ANTHROPIC_API_KEY || 'sk-ant-your-api-key-here',
  model: 'claude-3-5-sonnet-20241022',
  timeout: 30000,
})

async function runCompletion() {
  try {
    // Guard against unconfigured clients before making requests
    if (!client.isConfigured()) {
      throw new Error('AnthropicClient is not configured. Check your API key and settings.')
    }

    console.log('Client configured:', client.isConfigured())
    // Output: Client configured: true

    console.log('Proceeding with API request...')
    // In real usage, you would call client.complete({ prompt: '...' }) here

  } catch (error) {
    console.error('Client setup error:', error instanceof Error ? error.message : error)
  }
}

runCompletion()

OpenAICompatibleClient.isConfigured

isConfigured(): boolean

Use this to verify that an OpenAICompatibleClient instance is ready to make API calls before attempting completions.

This method acts as a readiness check — it always returns true because the underlying OpenAI SDK handles API key validation and configuration errors at request time rather than at initialization. Use it in guard clauses or health checks to conform to a standard client interface without worrying about pre-flight validation logic.

Returns

  • true — Always; the OpenAI SDK manages its own configuration validation

Parameters

None.

Example

// Inline the minimal interface and class needed to demonstrate isConfigured()
interface LLMClientConfig {
  apiKey: string
  model?: string
  baseURL?: string
  maxRetries?: number
}

// Simulated OpenAICompatibleClient with isConfigured behavior
class OpenAICompatibleClient {
  private apiKey: string
  private model: string
  private baseURL?: string
  private maxRetries: number

  constructor(config: LLMClientConfig) {
    this.apiKey = config.apiKey
    this.model = config.model || 'gpt-4o'
    this.baseURL = config.baseURL
    this.maxRetries = config.maxRetries ?? 3
  }

  // Always returns true — OpenAI SDK handles validation internally
  isConfigured(): boolean {
    return true
  }
}

// --- Usage Example ---

const client = new OpenAICompatibleClient({
  apiKey: process.env.OPENAI_API_KEY || 'sk-your-api-key-here',
  model: 'gpt-4o',
  maxRetries: 2
})

try {
  // Guard clause: check before attempting any completions
  if (!client.isConfigured()) {
    throw new Error('LLM client is not configured — cannot proceed.')
  }

  console.log('Client configured:', client.isConfigured())
  // Output: Client configured: true

  console.log('Proceeding to make completion requests...')
  // Output: Proceeding to make completion requests...
} catch (error) {
  console.error('Configuration check failed:', error)
}
