
Other

GoScanner

Scans Go source files

PythonScanner

Scans Python source files

RustScanner

Scans Rust source files

analyzePRForDocs

Scans PRs for doc issues

build_signature

Reconstructs function signature string

buildNavigation

Builds hierarchical nav structure

checkApiKey

Verifies API key is set

classifyElement

Determines element doc type

classifyElements

Sorts elements into doc groups

detectCrossReferences

Discovers element cross-references

detectFormat

Identifies docs framework format

extract_class

Parses Python class AST node

extract_function

Extracts Python function metadata

extract_parameters

Converts AST args to metadata

findConfigFile

Locates Skrypt config file

formatAsMarkdown

Converts docs to Markdown string

generateForElement

Generates docs for one element

generateForElements

Batch-generates docs for elements

generateSidebarConfig

Generates sidebar nav config

get_default_value

Extracts parameter default value

get_docstring

Extracts Python AST docstring

get_type_annotation

Converts type annotation to string

getCrossRefsForElement

Filters refs for one element

getKeychainPlatformName

Returns keychain platform name

getPromptForContentType

Gets prompt for content type

getRecommendedStructure

Organizes elements into doc structure

getSortWeight

Extracts frontmatter sort weight

groupDocsByFile

Groups docs by source file

hasSeenNotice

Checks if notice was seen

importConfluence

Converts Confluence HTML to markdown

importDocusaurus

Converts Docusaurus docs format

importFromGitHub

Pulls docs from GitHub repo

importGitBook

Converts GitBook docs format

importMarkdown

Imports Markdown directory tree

importMintlify

Converts Mintlify project format

importNotion

Converts Notion export to pages

importReadme

Converts ReadMe.io docs format

isGitHubUrl

Validates GitHub URL strings

keychainAvailable

Checks system keychain availability

keychainDelete

Deletes credential from keychain

keychainRetrieve

Retrieves secret from keychain

keychainStore

Stores secret in keychain

loadConfig

Loads YAML/JSON config file

markNoticeSeen

Records notice as seen

mergeTopicConfig

Merges partial topic config

normalizeFrontmatter

Standardizes frontmatter fields

organizeByTopic

Groups docs into topic clusters

parseGitHubUrl

Extracts GitHub URL components

postInlineComments

Posts inline PR review comments

postPRComment

Posts PR doc quality comment

rewriteImagePaths

Updates image paths in content

scan_file

Extracts elements from Python file

showSecurityNotice

Displays one-time security notice

stripDocusaurusImports

Removes Docusaurus theme imports

stripNotionUUIDs

Removes Notion UUID suffixes

transformConfluenceCallouts

Converts Confluence callout macros

transformConfluenceHtml

Converts Confluence HTML to Markdown

transformDocusaurusAdmonitions

Converts Docusaurus admonition blocks

transformDocusaurusTabs

Converts Docusaurus tab syntax

transformGitBookContentRef

Converts GitBook content-ref blocks

transformGitBookEmbed

Strips GitBook embed syntax

transformGitBookExpandable

Converts GitBook expandable blocks

transformGitBookHints

Converts GitBook hint blocks

transformGitBookSteps

Converts GitBook stepper syntax

transformGitBookTabs

Converts GitBook tab syntax

transformMintlifyCallouts

Converts Mintlify callout components

transformMintlifyTabs

Converts Mintlify tab syntax

transformNotionCallouts

Converts Notion callout markup

transformNotionToggles

Converts Notion toggle blocks

transformReadmeCallouts

Converts ReadMe callout blocks

transformReadmeCodeBlocks

Converts ReadMe code block syntax

validateConfig

Validates configuration object

writeDocsByTopic

Writes docs organized by topic

writeDocsToDirectory

Persists generated docs to disk

writeLlmsTxt

Generates llms.txt index file

GoScanner.canHandle

Checks Go file compatibility

PythonScanner.canHandle

Checks Python file compatibility

RustScanner.canHandle

Checks Rust file compatibility

GoScanner.scanFile

Extracts elements from Go file

PythonScanner.scanFile

Extracts elements from Python file

RustScanner.scanFile

Extracts elements from Rust file

GoScanner

class GoScanner implements Scanner
TypeScript

Use this to scan Go source files and extract API elements — functions, methods, structs, and interfaces — for automated documentation generation pipelines.

GoScanner implements the Scanner interface and targets .go files, automatically skipping test files (_test.go).

Methods

canHandle(filePath: string): boolean

Returns true if the file path ends in .go and is not a test file. Use this to check compatibility before scanning.

scanFile(filePath: string): Promise<ScanResult>

Reads and parses a Go source file, returning all discovered API elements.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Absolute or relative path to the .go source file to scan |

Returns

scanFile returns a Promise<ScanResult> containing:

| Field | Type | Description |
| --- | --- | --- |
| elements | APIElement[] | All extracted functions, methods, types, and interfaces |
| filePath | string | The path of the scanned file |
| language | string | Always "go" for this scanner |

Each APIElement includes:

| Field | Type | Description |
| --- | --- | --- |
| name | string | Identifier name (e.g. "NewServer") |
| kind | string | One of "function", "method", "type", "interface" |
| signature | string | Full Go signature string |
| parameters | Parameter[] | Parsed parameter list |
| docstring | string \| undefined | Leading comment block, if present |

Notes

  • Test files (*_test.go) are automatically excluded by canHandle
  • Throws if the file cannot be read (e.g. missing permissions or path)
  • Only handles files with the .go extension — use a router/registry to dispatch multiple scanners for polyglot projects
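
The filtering rule above can be sketched as a standalone predicate. This is a hypothetical, self-contained version of the canHandle check (the name canHandleGo is ours, and the real method also feeds the parser):

```typescript
// Hypothetical sketch of GoScanner's file filtering:
// accept .go files, reject Go test files (ending in _test.go).
function canHandleGo(filePath: string): boolean {
  return filePath.endsWith('.go') && !filePath.endsWith('_test.go')
}

console.log(canHandleGo('cmd/server/main.go'))      // true
console.log(canHandleGo('cmd/server/main_test.go')) // false
console.log(canHandleGo('src/lib.rs'))              // false
```

Run this gate before scanFile when dispatching files from a polyglot repository.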

PythonScanner

class PythonScanner implements Scanner
TypeScript

Use this to scan Python source files and extract structured metadata (functions, classes, imports, etc.) by delegating parsing to a Python3 subprocess.

PythonScanner implements the Scanner interface and is the go-to handler for any .py file in a Skrypt pipeline. It spawns a python3 process running an internal parser script, then resolves a ScanResult with the extracted code structure.

Properties

| Property | Type | Description |
| --- | --- | --- |
| languages | string[] | Always ['python'] — declares which languages this scanner handles |

Methods

canHandle(filePath)

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Absolute or relative path to the file being evaluated |

Returns: boolean — true if the file ends with .py, false otherwise. Use this as a fast pre-check before calling scanFile.


scanFile(filePath)

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Path to the .py file to parse |

Returns: Promise<ScanResult> — resolves with structured scan data extracted from the Python file (e.g. classes, functions, imports). Resolves (never rejects) even on parser errors — check the result's error field if present.

⚠️ Requires python3 to be available on the system PATH. The scanner spawns a subprocess for every file scanned, so avoid calling it in tight loops on large codebases without batching.

Example

import { spawn } from 'child_process'
import { writeFileSync, unlinkSync } from 'fs'
import { join } from 'path'
import { tmpdir } from 'os'

// --- Inline types (mirrors Skrypt Scanner interface) ---
interface ScanResult {
  filePath: string
  language: string
  symbols?: Symbol[]
  error?: string
  raw?: unknown
}

interface Symbol {
  name: string
  kind: 'function' | 'class' | 'variable' | 'import'
  line: number
}

interface Scanner {
  languages: string[]
  canHandle(filePath: string): boolean
  scanFile(filePath: string): Promise<ScanResult>
}

// --- Inline PythonScanner implementation (self-contained) ---
const INLINE_PARSER_SCRIPT = `
import ast, json, sys

def scan(path):
    with open(path, 'r') as f:
        source = f.read()
    tree = ast.parse(source, filename=path)
    symbols = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            symbols.append({'name': node.name, 'kind': 'function', 'line': node.lineno})
        elif isinstance(node, ast.ClassDef):
            symbols.append({'name': node.name, 'kind': 'class', 'line': node.lineno})
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            symbols.append({'name': getattr(node, 'module', None) or str(node.names[0].name), 'kind': 'import', 'line': node.lineno})
    symbols.sort(key=lambda s: s['line'])  # ast.walk is breadth-first; sort restores source order
    print(json.dumps({'symbols': symbols}))

scan(sys.argv[1])
`

class PythonScanner implements Scanner {
  languages = ['python']

  canHandle(filePath: string): boolean {
    return filePath.endsWith('.py')
  }

  async scanFile(filePath: string): Promise<ScanResult> {
    // Write the inline parser to a temp file for this example
    const parserPath = join(tmpdir(), '_skrypt_example_parser.py')
    writeFileSync(parserPath, INLINE_PARSER_SCRIPT)

    return new Promise((resolve) => {
      let stdout = ''
      let stderr = ''

      const proc = spawn('python3', [parserPath, filePath], {
        stdio: ['ignore', 'pipe', 'pipe'],
      })

      proc.stdout.on('data', (chunk: Buffer) => { stdout += chunk.toString() })
      proc.stderr.on('data', (chunk: Buffer) => { stderr += chunk.toString() })

      proc.on('close', (code: number) => {
        try { unlinkSync(parserPath) } catch { /* cleanup best-effort */ }

        if (code !== 0 || stderr) {
          resolve({ filePath, language: 'python', error: stderr || `Exit code ${code}` })
          return
        }
        try {
          const parsed = JSON.parse(stdout)
          resolve({ filePath, language: 'python', symbols: parsed.symbols, raw: parsed })
        } catch {
          resolve({ filePath, language: 'python', error: 'Failed to parse scanner output' })
        }
      })
    })
  }
}

// --- Create a sample Python file to scan ---
const samplePyPath = join(tmpdir(), 'example_module.py')
writeFileSync(samplePyPath, `
import os
from pathlib import Path

class DataProcessor:
    def __init__(self, config):
        self.config = config

    def process(self, data):
        return data.strip()

def load_file(path):
    with open(path) as f:
        return f.read()
`)

// --- Run the scanner ---
async function main() {
  const scanner = new PythonScanner()

  console.log('Supported languages:', scanner.languages)
  // Output: Supported languages: [ 'python' ]

  console.log('Can handle .py?', scanner.canHandle(samplePyPath))
  // Output: Can handle .py? true

  console.log('Can handle .js?', scanner.canHandle('index.js'))
  // Output: Can handle .js? false

  try {
    const result = await scanner.scanFile(samplePyPath)

    if (result.error) {
      console.error('Scan error:', result.error)
      return
    }

    console.log(`\nScanned: ${result.filePath}`)
    console.log(`Language: ${result.language}`)
    console.log(`Symbols found (${result.symbols?.length ?? 0}):`)

    result.symbols?.forEach(sym => {
      console.log(`  [${sym.kind.padEnd(8)}] ${sym.name} (line ${sym.line})`)
    })
    // Expected output:
    // Scanned: /tmp/example_module.py
    // Language: python
    // Symbols found (6):
    //   [import  ] os (line 2)
    //   [import  ] pathlib (line 3)
    //   [class   ] DataProcessor (line 5)
    //   [function] __init__ (line 6)
    //   [function] process (line 9)
    //   [function] load_file (line 12)
  } catch (error) {
    console.error('Unexpected failure:', error)
  } finally {
    try { unlinkSync(samplePyPath) } catch { /* cleanup */ }
  }
}

main()
TypeScript

RustScanner

class RustScanner implements Scanner
TypeScript

Use this to scan Rust source files and extract public API elements — functions, structs, enums, impl blocks, and traits — for automated documentation generation or API analysis pipelines.

RustScanner implements the Scanner interface and targets .rs files, automatically skipping test files under /tests/ directories.

Methods

canHandle(filePath: string): boolean

Determines whether this scanner should process a given file.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Path to the file to check |

Returns: true if the file ends in .rs and is not inside a /tests/ directory, false otherwise.


scanFile(filePath: string): Promise<ScanResult>

Reads and parses a Rust source file, extracting all public API elements.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Absolute or relative path to the .rs file to scan |

Returns: A Promise<ScanResult> containing:

  • elements — Array of APIElement objects, each representing a discovered pub fn, pub struct, pub enum, impl block, or trait
  • filePath — The original file path that was scanned
  • language — "rust" for this scanner

Error handling: If the file cannot be read (e.g. missing permissions, file not found), the promise resolves with an empty elements array rather than rejecting.

Properties

| Name | Type | Description |
| --- | --- | --- |
| languages | string[] | Always ['rust'] — used by scanner registries to route files |

Notes

  • Test files (paths containing /tests/) are explicitly excluded via canHandle
  • Only public (pub) items are extracted — private implementation details are ignored
  • Pair with a scanner registry or orchestrator to process entire Rust projects
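
As with GoScanner, the path filter can be sketched as a standalone predicate. This is a hypothetical, self-contained version of the canHandle rule stated above (the name canHandleRust is ours):

```typescript
// Hypothetical sketch of RustScanner's file filtering:
// accept .rs files, reject anything under a /tests/ directory.
function canHandleRust(filePath: string): boolean {
  return filePath.endsWith('.rs') && !filePath.includes('/tests/')
}

console.log(canHandleRust('src/lib.rs'))                 // true
console.log(canHandleRust('crate/tests/integration.rs')) // false
console.log(canHandleRust('src/main.go'))                // false
```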

analyzePRForDocs

async function analyzePRForDocs(config: PRCommentConfig, _options: { checkExamples?: boolean } = {}): Promise<DocumentationIssue[]>
TypeScript

Use this to scan a pull request for documentation issues — missing docstrings, undocumented parameters, or incomplete return type descriptions — and get a structured list of problems to act on.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| config | PRCommentConfig | Yes | GitHub PR connection details including repo owner, repo name, PR number, and auth token |
| _options | { checkExamples?: boolean } | No | Additional analysis options. Set checkExamples to true to also validate code examples in docs |

Returns

Returns Promise<DocumentationIssue[]> — resolves to an array of documentation issues found in the PR. Each issue describes the file, line number, severity, and a human-readable message explaining what's missing or incorrect. Returns an empty array if no issues are found.

Example

// Inline types (do not import from skrypt)
type PRCommentConfig = {
  owner: string
  repo: string
  pullNumber: number
  token?: string
}

type DocumentationIssue = {
  file: string
  line: number
  severity: 'error' | 'warning' | 'info'
  message: string
  ruleId: string
}

// Simulated implementation of analyzePRForDocs
async function analyzePRForDocs(
  config: PRCommentConfig,
  _options: { checkExamples?: boolean } = {}
): Promise<DocumentationIssue[]> {
  const token = config.token || process.env.GITHUB_TOKEN

  if (!token) {
    throw new Error('GitHub token is required. Set GITHUB_TOKEN env var or pass config.token.')
  }

  // Simulate fetching PR diff and analyzing changed files
  console.log(`Analyzing PR #${config.pullNumber} in ${config.owner}/${config.repo}...`)

  // Simulated issues found in the PR
  const issues: DocumentationIssue[] = [
    {
      file: 'src/utils/parser.ts',
      line: 42,
      severity: 'error',
      message: 'Exported function `parseConfig` is missing a JSDoc comment.',
      ruleId: 'missing-jsdoc',
    },
    {
      file: 'src/utils/parser.ts',
      line: 58,
      severity: 'warning',
      message: 'Parameter `options` in `parseConfig` is not documented.',
      ruleId: 'missing-param-doc',
    },
    {
      file: 'src/api/client.ts',
      line: 15,
      severity: 'warning',
      message: 'Return type for `fetchUser` is undocumented.',
      ruleId: 'missing-return-doc',
    },
  ]

  if (_options.checkExamples) {
    issues.push({
      file: 'src/api/client.ts',
      line: 20,
      severity: 'info',
      message: 'Code example in JSDoc for `fetchUser` references a deprecated method.',
      ruleId: 'stale-example',
    })
  }

  return issues
}

// --- Usage ---
async function main() {
  const config: PRCommentConfig = {
    owner: 'acme-corp',
    repo: 'backend-api',
    pullNumber: 247,
    token: process.env.GITHUB_TOKEN || 'ghp_your_token_here',
  }

  try {
    const issues = await analyzePRForDocs(config, { checkExamples: true })

    if (issues.length === 0) {
      console.log('✅ No documentation issues found.')
      return
    }

    console.log(`Found ${issues.length} documentation issue(s):\n`)

    for (const issue of issues) {
      const icon = issue.severity === 'error' ? '❌' : issue.severity === 'warning' ? '⚠️' : 'ℹ️'
      console.log(`${icon} [${issue.severity.toUpperCase()}] ${issue.file}:${issue.line}`)
      console.log(`   ${issue.message}`)
      console.log(`   Rule: ${issue.ruleId}\n`)
    }

    // Expected output:
    // Found 4 documentation issue(s):
    //
    // ❌ [ERROR] src/utils/parser.ts:42
    //    Exported function `parseConfig` is missing a JSDoc comment.
    //    Rule: missing-jsdoc
    //
    // ⚠️ [WARNING] src/utils/parser.ts:58
    //    Parameter `options` in `parseConfig` is not documented.
    //    Rule: missing-param-doc
    // ...
  } catch (error) {
    console.error('Failed to analyze PR:', error instanceof Error ? error.message : error)
    process.exit(1)
  }
}

main()
TypeScript

build_signature

def build_signature(name: str, args: ast.arguments, returns: ast.AST | None, is_async: bool) -> str
Python

Use this to reconstruct a human-readable function signature string from its parsed AST components — useful for code analysis tools, documentation generators, or any system that needs to display or compare function signatures without executing the code.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | The function name to include in the signature |
| args | ast.arguments | Yes | The parsed argument node from the AST, containing positional args, defaults, *args, **kwargs, and annotations |
| returns | ast.AST \| None | Yes | The return type annotation node from the AST, or None if no return type is specified |
| is_async | bool | Yes | Whether to prefix the signature with async def instead of def |

Returns

A formatted string representing the full function signature, e.g.:

  • "def greet(name: str, age: int = 0) -> str"
  • "async def fetch(url: str, timeout: float = 30.0) -> dict"
  • "def process(data)" (no annotations or return type)

Example

import ast

def build_signature(name: str, args: ast.arguments, returns, is_async: bool) -> str:
    """Reconstruct a function signature string from AST components."""
    parts = []

    # Build each argument with optional annotation and default
    num_args = len(args.args)
    num_defaults = len(args.defaults)
    # Defaults are right-aligned to the args list
    defaults_offset = num_args - num_defaults

    for i, arg in enumerate(args.args):
        arg_str = arg.arg
        if arg.annotation:
            arg_str += f": {ast.unparse(arg.annotation)}"
        default_index = i - defaults_offset
        if default_index >= 0:
            arg_str += f" = {ast.unparse(args.defaults[default_index])}"
        parts.append(arg_str)

    # Handle *args
    if args.vararg:
        vararg_str = f"*{args.vararg.arg}"
        if args.vararg.annotation:
            vararg_str += f": {ast.unparse(args.vararg.annotation)}"
        parts.append(vararg_str)

    # Handle keyword-only arguments (those declared after *args)
    for i, kwonly in enumerate(args.kwonlyargs):
        kwonly_str = kwonly.arg
        if kwonly.annotation:
            kwonly_str += f": {ast.unparse(kwonly.annotation)}"
        if args.kw_defaults[i] is not None:
            kwonly_str += f" = {ast.unparse(args.kw_defaults[i])}"
        parts.append(kwonly_str)

    # Handle **kwargs
    if args.kwarg:
        kwarg_str = f"**{args.kwarg.arg}"
        if args.kwarg.annotation:
            kwarg_str += f": {ast.unparse(args.kwarg.annotation)}"
        parts.append(kwarg_str)

    args_str = ", ".join(parts)
    prefix = "async def" if is_async else "def"
    return_annotation = f" -> {ast.unparse(returns)}" if returns else ""

    return f"{prefix} {name}({args_str}){return_annotation}"


def main():
    try:
        # Example 1: Simple sync function with type annotations and a default
        source_simple = "def greet(name: str, age: int = 0) -> str: pass"
        tree = ast.parse(source_simple)
        func = tree.body[0]
        sig = build_signature(func.name, func.args, func.returns, is_async=False)
        print("Simple function:")
        print(f"  {sig}")
        # Output: def greet(name: str, age: int = 0) -> str

        # Example 2: Async function with *args and **kwargs
        source_async = "async def fetch(url: str, *headers: str, timeout: float = 30.0, **options: dict) -> dict: pass"
        tree = ast.parse(source_async)
        func = tree.body[0]
        sig = build_signature(func.name, func.args, func.returns, is_async=True)
        print("\nAsync function with *args and **kwargs:")
        print(f"  {sig}")
        # Output: async def fetch(url: str, *headers: str, timeout: float = 30.0, **options: dict) -> dict

        # Example 3: No annotations, no return type
        source_bare = "def process(data): pass"
        tree = ast.parse(source_bare)
        func = tree.body[0]
        sig = build_signature(func.name, func.args, func.returns, is_async=False)
        print("\nBare function (no annotations):")
        print(f"  {sig}")
        # Output: def process(data)

    except Exception as error:
        print(f"Failed to build signature: {error}")


main()
Python

buildNavigation

function buildNavigation(topics: Topic[]): NavigationItem[]
TypeScript

Use this to convert a flat list of documentation topics into a hierarchical navigation structure suitable for rendering sidebars, menus, or breadcrumbs.

Each topic becomes a top-level navigation item with its documented elements nested as children, with auto-generated URL paths.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| topics | Topic[] | Yes | Array of topics, each containing a name, id, and list of associated docs |

Returns

Returns a NavigationItem[] array where:

  • Each top-level item maps to a topic (title from topic.name, path as /{topic.id})
  • Each child item maps to an individual doc element nested under its parent topic

Returns an empty array if topics is empty.

Topic Shape (Expected Input)

| Field | Type | Description |
| --- | --- | --- |
| topic.name | string | Display title for the navigation group |
| topic.id | string | Used to build the URL path (/{id}) |
| topic.docs | Doc[] | Documents belonging to this topic |
| doc.element.name | string | Display name of the individual doc item |

Example

// --- Inline types (mirrors the real library's shape) ---
type DocElement = {
  name: string;
  description?: string;
};

type Doc = {
  element: DocElement;
};

type Topic = {
  id: string;
  name: string;
  docs: Doc[];
};

type NavigationItem = {
  title: string;
  path: string;
  children?: NavigationItem[];
};

// --- Self-contained implementation (mirrors real behavior) ---
function buildNavigation(topics: Topic[]): NavigationItem[] {
  return topics.map(topic => ({
    title: topic.name,
    path: `/${topic.id}`,
    children: topic.docs.map(doc => ({
      title: doc.element.name,
      path: `/${topic.id}/${doc.element.name.toLowerCase().replace(/\s+/g, '-')}`,
    })),
  }));
}

// --- Realistic usage ---
const topics: Topic[] = [
  {
    id: 'authentication',
    name: 'Authentication',
    docs: [
      { element: { name: 'createSession', description: 'Creates a new user session' } },
      { element: { name: 'revokeToken',   description: 'Revokes an existing token'  } },
    ],
  },
  {
    id: 'storage',
    name: 'Storage',
    docs: [
      { element: { name: 'uploadFile',   description: 'Uploads a file to the store' } },
      { element: { name: 'deleteFile',   description: 'Removes a file by key'       } },
    ],
  },
  {
    id: 'empty-section',
    name: 'Coming Soon',
    docs: [], // topics with no docs produce an item with an empty children array
  },
];

try {
  const nav = buildNavigation(topics);

  console.log('Navigation structure:');
  console.log(JSON.stringify(nav, null, 2));

  /*  Expected output:
  [
    {
      "title": "Authentication",
      "path": "/authentication",
      "children": [
        { "title": "createSession", "path": "/authentication/createsession" },
        { "title": "revokeToken",   "path": "/authentication/revoketoken"   }
      ]
    },
    {
      "title": "Storage",
      "path": "/storage",
      "children": [
        { "title": "uploadFile", "path": "/storage/uploadfile" },
        { "title": "deleteFile", "path": "/storage/deletefile" }
      ]
    },
    {
      "title": "Coming Soon",
      "path": "/empty-section",
      "children": []
    }
  ]
  */

  // Practical use: find the path for a specific doc
  const authChildren = nav.find(item => item.path === '/authentication')?.children ?? [];
  console.log('\nAuthentication child paths:', authChildren.map(c => c.path));
  // Output: [ '/authentication/createsession', '/authentication/revoketoken' ]

} catch (error) {
  console.error('Failed to build navigation:', error);
}
TypeScript

checkApiKey

function checkApiKey(provider: LLMProvider): { ok: boolean; envKey: string | null }
TypeScript

Use this to verify whether a required API key environment variable is set for a given LLM provider before making requests.

Returns { ok: true } for providers that don't require an API key (e.g., Ollama), and checks the appropriate environment variable for cloud providers (e.g., OpenAI, Anthropic). This is useful for fail-fast validation at startup or before executing LLM calls.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| provider | LLMProvider | Yes | The LLM provider to check (e.g., 'openai', 'anthropic', 'ollama') |

Returns

| Field | Type | Description |
| --- | --- | --- |
| ok | boolean | true if the API key is present or not required; false if the key is missing |
| envKey | string \| null | The name of the environment variable checked (e.g., 'OPENAI_API_KEY'), or null if no key is needed |

Behavior by Provider Type

  • Key-required providers (OpenAI, Anthropic, etc.): Returns { ok: false, envKey: 'PROVIDER_API_KEY' } when the env var is unset, or { ok: true, envKey: 'PROVIDER_API_KEY' } when it is set.
  • Key-free providers (Ollama): Always returns { ok: true, envKey: null }.

Example

// Inline the types and dependencies — no external imports needed
type LLMProvider = 'openai' | 'anthropic' | 'ollama' | 'gemini'

const PROVIDER_ENV_KEYS: Record<LLMProvider, string | null> = {
  openai: 'OPENAI_API_KEY',
  anthropic: 'ANTHROPIC_API_KEY',
  ollama: null, // No API key required
  gemini: 'GEMINI_API_KEY',
}

// Inline implementation of checkApiKey
function checkApiKey(provider: LLMProvider): { ok: boolean; envKey: string | null } {
  const envKey = PROVIDER_ENV_KEYS[provider]

  // Providers like Ollama don't need an API key
  if (!envKey) {
    return { ok: true, envKey: null }
  }

  const ok = Boolean(process.env[envKey])
  return { ok, envKey }
}

// --- Usage Example ---

// Simulate environment (in real usage these come from your shell or .env)
process.env.OPENAI_API_KEY = process.env.OPENAI_API_KEY || '' // unset to demo failure
process.env.ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY || 'sk-ant-abc123' // set

function validateProviders(providers: LLMProvider[]) {
  for (const provider of providers) {
    const { ok, envKey } = checkApiKey(provider)

    if (!ok) {
      console.error(
        `[MISSING KEY] Provider "${provider}" requires env var: ${envKey}`
      )
      // Output: [MISSING KEY] Provider "openai" requires env var: OPENAI_API_KEY
    } else if (envKey === null) {
      console.log(
        `[OK] Provider "${provider}" needs no API key`
      )
      // Output: [OK] Provider "ollama" needs no API key
    } else {
      console.log(
        `[OK] Provider "${provider}" — ${envKey} is set`
      )
      // Output: [OK] Provider "anthropic" — ANTHROPIC_API_KEY is set
    }
  }
}

validateProviders(['openai', 'anthropic', 'ollama'])
TypeScript

classifyElement

function classifyElement(element: APIElement): ContentClassification
TypeScript

Use this to determine what type of documentation an API element needs — whether it's best suited for API reference docs, a guide, or a tutorial.

Given an APIElement, classifyElement analyzes its characteristics and returns a ContentClassification indicating the recommended documentation type and the reasoning behind that recommendation. Use this when auto-generating or organizing documentation to ensure each element gets the right treatment.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| element | APIElement | Yes | The API element to classify (function, class, type, etc.) |

Returns

Returns a ContentClassification object:

| Field | Type | Description |
| --- | --- | --- |
| type | 'api' \| 'guide' \| 'tutorial' | The recommended documentation type |
| reasons | string[] | Explanation of why this classification was chosen |
| scores | { api: number, guide: number, tutorial: number } | Relative confidence scores for each type |

Classification Types

| Type | When it's chosen |
| --- | --- |
| 'api' | Element is a low-level utility, has many parameters, or is primarily reference material |
| 'guide' | Element solves a specific problem or represents a common workflow |
| 'tutorial' | Element is a high-level entry point or requires step-by-step explanation |

Example

// ── Inline types (no external imports needed) ──────────────────────────────

type APIElementKind = 'function' | 'class' | 'interface' | 'type' | 'variable'

interface APIElement {
  name: string
  kind: APIElementKind
  parameters?: { name: string; type: string; optional?: boolean }[]
  returnType?: string
  description?: string
  isExported?: boolean
  complexity?: 'low' | 'medium' | 'high'
  tags?: string[]
}

interface ContentClassification {
  type: 'api' | 'guide' | 'tutorial'
  reasons: string[]
  scores: { api: number; guide: number; tutorial: number }
}

// ── Inline implementation ──────────────────────────────────────────────────

function classifyElement(element: APIElement): ContentClassification {
  const reasons: string[] = []
  let apiScore = 0
  let guideScore = 0
  let tutorialScore = 0

  // High parameter count → API reference
  const paramCount = element.parameters?.length ?? 0
  if (paramCount >= 4) {
    apiScore += 3
    reasons.push(`Has ${paramCount} parameters — suits detailed API reference`)
  }

  // Classes and interfaces → API reference
  if (element.kind === 'class' || element.kind === 'interface') {
    apiScore += 2
    reasons.push(`Kind "${element.kind}" is best documented as API reference`)
  }

  // High complexity → tutorial
  if (element.complexity === 'high') {
    tutorialScore += 3
    reasons.push('High complexity suggests a step-by-step tutorial is needed')
  }

  // Tags like "example" or "workflow" → guide
  if (element.tags?.some(t => ['workflow', 'example', 'howto'].includes(t))) {
    guideScore += 3
    reasons.push('Tagged as workflow/example — suits a how-to guide')
  }

  // Low complexity utility → API reference
  if (element.complexity === 'low' && element.kind === 'function') {
    apiScore += 2
    reasons.push('Simple utility function — concise API reference is sufficient')
  }

  // Determine winner (on ties, later keys win: tutorial over guide over api)
  const scores = { api: apiScore, guide: guideScore, tutorial: tutorialScore }
  const type = (Object.entries(scores) as [ContentClassification['type'], number][])
    .reduce((best, curr) => (curr[1] >= best[1] ? curr : best))[0]

  return { type, reasons, scores }
}

// ── Usage examples ─────────────────────────────────────────────────────────

async function main() {
  try {
    // Example 1: Simple utility function
    const utilityFn: APIElement = {
      name: 'formatDate',
      kind: 'function',
      parameters: [{ name: 'date', type: 'Date' }],
      returnType: 'string',
      complexity: 'low',
      isExported: true,
    }

    const utilityResult = classifyElement(utilityFn)
    console.log('=== Simple utility function ===')
    console.log('Classification:', utilityResult.type)       // 'api'
    console.log('Reasons:', utilityResult.reasons)
    console.log('Scores:', utilityResult.scores)
    // Output: { type: 'api', scores: { api: 2, guide: 0, tutorial: 0 } }

    console.log()

    // Example 2: Complex workflow function
    const workflowFn: APIElement = {
      name: 'bootstrapApplication',
      kind: 'function',
      parameters: [
        { name: 'config', type: 'AppConfig' },
        { name: 'plugins', type: 'Plugin[]', optional: true },
        { name: 'middleware', type: 'Middleware[]', optional: true },
        { name: 'logger', type: 'Logger', optional: true },
        { name: 'onReady', type: '() => void', optional: true },
      ],
      returnType: 'Promise<App>',
      complexity: 'high',
      tags: ['workflow'],
      isExported: true,
    }

    const workflowResult = classifyElement(workflowFn)
    console.log('=== Complex workflow function ===')
    console.log('Classification:', workflowResult.type)      // 'tutorial'
    console.log('Reasons:', workflowResult.reasons)
    console.log('Scores:', workflowResult.scores)
    // Output shape: { type: 'tutorial', reasons: [...], scores: { api, guide, tutorial } }
    // (exact scores depend on the scoring heuristics above; ties resolve to the earliest key)

    console.log()

    // Example 3: Interface definition
    const interfaceDef: APIElement = {
      name: 'UserRepository',
      kind: 'interface',
      complexity: 'medium',
      isExported: true,
    }

    const interfaceResult = classifyElement(interfaceDef)
    console.log('=== Interface definition ===')
    console.log('Classification:', interfaceResult.type)     // 'api'
    console.log('Reasons:', interfaceResult.reasons)
    console.log('Scores:', interfaceResult.scores)
    // Output: { type: 'api', scores: { api: 2, guide: 0, tutorial: 0 } }

  } catch (error) {
    console.error('Classification failed:', error)
  }
}

main()
TypeScript

classifyElements

function classifyElements(elements: APIElement[]): Map<ContentType, APIElement[]>
TypeScript

Use this to sort a mixed list of documentation elements into typed groups — API references, guides, tutorials, and overviews — in a single pass.

Instead of manually filtering arrays for each content type, classifyElements returns a Map keyed by content type so you can immediately access any group by name.

Parameters

Name      Type          Required  Description
elements  APIElement[]  Yes       Array of documentation elements to classify. Each element must have a type field matching a known ContentType.

Returns

A Map<ContentType, APIElement[]> with four guaranteed keys:

Key         Description
"api"       API reference elements (functions, classes, types)
"guide"     How-to and conceptual guide elements
"tutorial"  Step-by-step tutorial elements
"overview"  High-level overview/introduction elements

All four keys are always present in the returned map — an empty group maps to an empty array, never undefined.

Notes

  • Elements with unrecognized types are silently dropped from the output.
  • Input order within each group is preserved.
  • Passing an empty array returns a map with four empty arrays.

Example

// --- Inline types (do not import from skrypt) ---
type ContentType = 'api' | 'guide' | 'tutorial' | 'overview'

interface APIElement {
  id: string
  title: string
  type: ContentType
  content: string
}

// --- Inline implementation matching Skrypt behavior ---
function classifyElements(elements: APIElement[]): Map<ContentType, APIElement[]> {
  const groups = new Map<ContentType, APIElement[]>([
    ['api',      []],
    ['guide',    []],
    ['tutorial', []],
    ['overview', []],
  ])

  for (const element of elements) {
    const bucket = groups.get(element.type)
    if (bucket) {
      bucket.push(element)
    }
  }

  return groups
}

// --- Realistic usage ---
const docElements: APIElement[] = [
  { id: 'elem-001', title: 'createUser()',          type: 'api',      content: 'Creates a new user record.' },
  { id: 'elem-002', title: 'Getting Started',       type: 'overview', content: 'Introduction to the platform.' },
  { id: 'elem-003', title: 'deleteUser()',           type: 'api',      content: 'Removes a user by ID.' },
  { id: 'elem-004', title: 'Authentication Guide',  type: 'guide',    content: 'How to authenticate API requests.' },
  { id: 'elem-005', title: 'Build Your First App',  type: 'tutorial', content: 'Step-by-step app walkthrough.' },
  { id: 'elem-006', title: 'listUsers()',            type: 'api',      content: 'Returns a paginated user list.' },
  { id: 'elem-007', title: 'Rate Limiting Guide',   type: 'guide',    content: 'Understanding rate limits.' },
]

try {
  const classified = classifyElements(docElements)

  // Access each group directly by content type
  const apiDocs      = classified.get('api')!
  const guides       = classified.get('guide')!
  const tutorials    = classified.get('tutorial')!
  const overviews    = classified.get('overview')!

  console.log(`API references (${apiDocs.length}):`)
  apiDocs.forEach(el => console.log(`  • [${el.id}] ${el.title}`))
  // • [elem-001] createUser()
  // • [elem-003] deleteUser()
  // • [elem-006] listUsers()

  console.log(`\nGuides (${guides.length}):`)
  guides.forEach(el => console.log(`  • [${el.id}] ${el.title}`))
  // • [elem-004] Authentication Guide
  // • [elem-007] Rate Limiting Guide

  console.log(`\nTutorials (${tutorials.length}):`)
  tutorials.forEach(el => console.log(`  • [${el.id}] ${el.title}`))
  // • [elem-005] Build Your First App

  console.log(`\nOverviews (${overviews.length}):`)
  overviews.forEach(el => console.log(`  • [${el.id}] ${el.title}`))
  // • [elem-002] Getting Started

  // All four keys are always present — safe to call .get() without undefined checks
  console.log('\nAll content type keys present:', [...classified.keys()])
  // Output: [ 'api', 'guide', 'tutorial', 'overview' ]

} catch (error) {
  console.error('Classification failed:', error)
}
TypeScript

detectCrossReferences

function detectCrossReferences(docs: GeneratedDoc[]): CrossReference[]
TypeScript

Use this to automatically discover relationships between documented elements — finding which functions, classes, or types reference each other by name across your documentation set.

Given a list of generated documentation objects, it scans each element's content and metadata to detect when one element mentions another by name, returning a list of cross-reference links you can use to build navigation, "See Also" sections, or dependency graphs.

Parameters

Name  Type            Required  Description
docs  GeneratedDoc[]  Yes       Array of generated documentation objects, each containing an element with a name and associated doc content to scan for references

Returns

Returns CrossReference[] — an array of cross-reference objects. Each entry describes a directional link from a source element to a target element found within it. Returns an empty array if no cross-references are detected or if fewer than two docs are provided.

CrossReference shape

Field    Type    Description
from     string  Name of the element that contains the reference
to       string  Name of the element being referenced
context  string  Snippet or location where the reference was found

Example

// ─── Inline types (do not import from skrypt) ───────────────────────────────

type ElementKind = 'function' | 'class' | 'interface' | 'type'

interface DocElement {
  name: string
  kind: ElementKind
  description: string
  params?: { name: string; type: string }[]
  returns?: string
}

interface GeneratedDoc {
  element: DocElement
  markdown: string
  topics: string[]
}

interface CrossReference {
  from: string
  to: string
  context: string
}

// ─── Self-contained implementation (mirrors Skrypt behavior) ────────────────

function detectCrossReferences(docs: GeneratedDoc[]): CrossReference[] {
  const refs: CrossReference[] = []
  const elementNames = new Set(docs.map(d => d.element.name))

  for (const doc of docs) {
    const { element } = doc

    // Build a searchable corpus from the element's description and markdown
    const corpus = [
      element.description,
      doc.markdown,
      ...(element.params?.map(p => p.type) ?? []),
      element.returns ?? '',
    ].join(' ')

    for (const targetName of elementNames) {
      // Skip self-references
      if (targetName === element.name) continue

      // Check if the target name appears in this element's content
      const pattern = new RegExp(`\\b${targetName}\\b`)
      if (pattern.test(corpus)) {
        // Extract a short context snippet around the match
        const matchIndex = corpus.search(pattern)
        const snippetStart = Math.max(0, matchIndex - 30)
        const snippetEnd = Math.min(corpus.length, matchIndex + targetName.length + 30)
        const context = `...${corpus.slice(snippetStart, snippetEnd).trim()}...`

        refs.push({ from: element.name, to: targetName, context })
      }
    }
  }

  return refs
}

// ─── Realistic example data ───────────────────────────────────────────────────

const docs: GeneratedDoc[] = [
  {
    element: {
      name: 'fetchUser',
      kind: 'function',
      description: 'Fetches a user by ID. Returns a UserProfile object.',
      params: [{ name: 'id', type: 'string' }],
      returns: 'Promise<UserProfile>',
    },
    markdown: '## fetchUser\n\nFetches a user by ID. See also `validateSession` for auth.',
    topics: ['users', 'auth'],
  },
  {
    element: {
      name: 'UserProfile',
      kind: 'interface',
      description: 'Represents a user profile returned by fetchUser.',
      returns: undefined,
    },
    markdown: '## UserProfile\n\nRepresents a user profile. Used by `fetchUser` and `updateUser`.',
    topics: ['users'],
  },
  {
    element: {
      name: 'validateSession',
      kind: 'function',
      description: 'Validates the current session token. Called before fetchUser.',
      params: [{ name: 'token', type: 'string' }],
      returns: 'boolean',
    },
    markdown: '## validateSession\n\nValidates the current session token.',
    topics: ['auth'],
  },
  {
    element: {
      name: 'updateUser',
      kind: 'function',
      description: 'Updates a UserProfile in the database.',
      params: [{ name: 'profile', type: 'UserProfile' }],
      returns: 'Promise<void>',
    },
    markdown: '## updateUser\n\nUpdates a UserProfile. Requires validateSession first.',
    topics: ['users', 'auth'],
  },
]

// ─── Run and display results ──────────────────────────────────────────────────

try {
  const crossRefs = detectCrossReferences(docs)

  console.log(`Found ${crossRefs.length} cross-reference(s):\n`)

  for (const ref of crossRefs) {
    console.log(`  ${ref.from}  →  ${ref.to}`)
    console.log(`  context: "${ref.context}"`)
    console.log()
  }

  // Expected output (Set preserves insertion order, so the order is deterministic):
  // Found 7 cross-reference(s):
  //
  //   fetchUser  →  UserProfile
  //   fetchUser  →  validateSession
  //   UserProfile  →  fetchUser
  //   UserProfile  →  updateUser
  //   validateSession  →  fetchUser
  //   updateUser  →  UserProfile
  //   updateUser  →  validateSession
  //
  // Each pair is followed by a context line holding a short snippet
  // (~30 characters on each side) around the first match.

  // Build a simple adjacency summary for documentation nav
  const bySource = crossRefs.reduce<Record<string, string[]>>((acc, ref) => {
    acc[ref.from] = [...(acc[ref.from] ?? []), ref.to]
    return acc
  }, {})

  console.log('Adjacency summary (useful for "See Also" sections):')
  for (const [source, targets] of Object.entries(bySource)) {
    console.log(`  ${source}: ${targets.join(', ')}`)
  }
} catch (error) {
  console.error('Cross-reference detection failed:', error)
}
TypeScript

detectFormat

function detectFormat(dir: string): ImportFormat
TypeScript

Use this to automatically identify which documentation framework a project uses, so you can process or convert docs without requiring users to manually specify the format.

Given a directory path, detectFormat inspects marker files (like mint.json, docs.json, _sidebar.md, etc.) in priority order and returns the first matching format. This is ideal for CLI tools, migration scripts, or any workflow that needs to handle multiple doc formats transparently.

Parameters

Name  Type    Required  Description
dir   string  Yes       Absolute or relative path to the documentation root directory to inspect

Returns

Returns an ImportFormat string literal identifying the detected documentation framework.

Value         Detected When
'mintlify'    mint.json exists, or docs.json with Mintlify structure
'docusaurus'  docusaurus.config.js or docusaurus.config.ts exists
'gitbook'     SUMMARY.md or .gitbook.yaml exists
'docsify'     _sidebar.md or index.html with Docsify markers exists
'generic'     No specific marker found — falls back to plain markdown/HTML

Priority matters: The first matching marker wins. If a directory somehow contains both mint.json and docusaurus.config.js, the result will be 'mintlify'.

Example

import { existsSync, writeFileSync, mkdirSync, rmSync } from 'fs'
import { join } from 'path'
import * as os from 'os'

// --- Inline types (from skrypt, not imported) ---
type ImportFormat = 'mintlify' | 'docusaurus' | 'gitbook' | 'docsify' | 'generic'

// --- Inline implementation of detectFormat ---
function detectFormat(dir: string): ImportFormat {
  const exists = (file: string) => existsSync(join(dir, file))

  // Priority 1: Mintlify
  if (exists('mint.json')) return 'mintlify'
  if (exists('docs.json')) return 'mintlify' // simplified; real impl checks structure

  // Priority 2: Docusaurus
  if (exists('docusaurus.config.js') || exists('docusaurus.config.ts')) return 'docusaurus'

  // Priority 3: GitBook
  if (exists('SUMMARY.md') || exists('.gitbook.yaml')) return 'gitbook'

  // Priority 4: Docsify
  if (exists('_sidebar.md')) return 'docsify'

  // Fallback
  return 'generic'
}

// --- Helper to create a temp directory with marker files ---
let tmpDirCounter = 0
function createTempDocDir(markerFiles: string[]): string {
  // Date.now() alone can collide across fast successive calls, which would
  // reuse a directory and mix marker files; a counter keeps each dir unique.
  const tmpDir = join(os.tmpdir(), `detect-format-${Date.now()}-${tmpDirCounter++}`)
  mkdirSync(tmpDir, { recursive: true })
  for (const file of markerFiles) {
    writeFileSync(join(tmpDir, file), '{}')
  }
  return tmpDir
}

async function main() {
  const scenarios: { label: string; files: string[]; expected: ImportFormat }[] = [
    { label: 'Mintlify project',   files: ['mint.json'],              expected: 'mintlify'   },
    { label: 'Docusaurus project', files: ['docusaurus.config.js'],   expected: 'docusaurus' },
    { label: 'GitBook project',    files: ['SUMMARY.md'],             expected: 'gitbook'    },
    { label: 'Docsify project',    files: ['_sidebar.md'],            expected: 'docsify'    },
    { label: 'Unknown project',    files: ['README.md'],              expected: 'generic'    },
    {
      label: 'Mixed markers (priority: Mintlify wins)',
      files: ['mint.json', 'docusaurus.config.js', 'SUMMARY.md'],
      expected: 'mintlify',
    },
  ]

  const dirs: string[] = []

  try {
    console.log('detectFormat — scenario results\n' + '='.repeat(40))

    for (const { label, files, expected } of scenarios) {
      const dir = createTempDocDir(files)
      dirs.push(dir)

      const result = detectFormat(dir)
      const status = result === expected ? '✅' : '❌'

      console.log(`${status} ${label}`)
      console.log(`   Files:    ${files.join(', ')}`)
      console.log(`   Detected: ${result}  (expected: ${expected})\n`)
    }

    // Typical real-world usage: point at an actual docs folder
    const docsPath = process.env.DOCS_DIR || './docs'
    if (existsSync(docsPath)) {
      const format = detectFormat(docsPath)
      console.log(`Your ./docs directory was detected as: "${format}"`)
      // Output example: Your ./docs directory was detected as: "generic"
    }

  } catch (error) {
    console.error('Detection failed:', error instanceof Error ? error.message : error)
    process.exit(1)
  } finally {
    // Clean up temp directories
    for (const dir of dirs) {
      try { rmSync(dir, { recursive: true, force: true }) } catch {}
    }
  }
}

main()

/*
Expected output:
========================================
✅ Mintlify project
   Files:    mint.json
   Detected: mintlify  (expected: mintlify)

✅ Docusaurus project
   Files:    docusaurus.config.js
   Detected: docusaurus  (expected: docusaurus)

✅ GitBook project
   Files:    SUMMARY.md
   Detected: gitbook  (expected: gitbook)

✅ Docsify project
   Files:    _sidebar.md
   Detected: docsify  (expected: docsify)

✅ Unknown project
   Files:    README.md
   Detected: generic  (expected: generic)

✅ Mixed markers (priority: Mintlify wins)
   Files:    mint.json, docusaurus.config.js, SUMMARY.md
   Detected: mintlify  (expected: mintlify)
*/
TypeScript

extract_class

def extract_class(node: ast.ClassDef, file_path: str) -> list[dict[str, Any]]
Python

Use this to parse a Python class AST node into a structured list of dictionaries containing the class definition and all its methods — ideal for building code analysis tools, documentation generators, or code indexing pipelines.

Each dictionary in the returned list represents either the class itself or one of its methods, with metadata like name, docstring, line numbers, and source file path.

Parameters

Name       Type          Required  Description
node       ast.ClassDef  Yes       The AST node representing the class definition, obtained by parsing Python source code
file_path  str           Yes       Absolute or relative path to the source file containing the class, stored as metadata in each returned dict

Returns

Returns a list[dict[str, Any]] where:

  • First element is always the class-level entry (name, docstring, bases, line range, file path)
  • Subsequent elements are one entry per method defined in the class body
  • Returns an empty list if the node contains no extractable information

Each dictionary typically contains:

Key        Description
type       "class" or "method"
name       Class or method name
docstring  Extracted docstring, or None if absent
lineno     Starting line number in the source file
file_path  The file_path argument passed in

Example

import ast
from typing import Any

# Inline implementation of extract_class for a self-contained example
def extract_class(node: ast.ClassDef, file_path: str) -> list[dict[str, Any]]:
    """Extract a class and its methods into structured dicts."""
    results = []

    # Extract class-level entry
    class_doc = ast.get_docstring(node)
    results.append({
        "type": "class",
        "name": node.name,
        "docstring": class_doc,
        "bases": [ast.unparse(base) for base in node.bases],
        "lineno": node.lineno,
        "end_lineno": getattr(node, "end_lineno", None),
        "file_path": file_path,
    })

    # Extract each method in the class body
    for item in node.body:
        if isinstance(item, ast.FunctionDef):
            method_doc = ast.get_docstring(item)
            results.append({
                "type": "method",
                "name": item.name,
                "docstring": method_doc,
                "lineno": item.lineno,
                "end_lineno": getattr(item, "end_lineno", None),
                "file_path": file_path,
            })

    return results


# --- Example Usage ---

source_code = '''
class PaymentProcessor:
    """Handles payment transactions for the billing system."""

    def charge(self, amount: float, card_token: str) -> bool:
        """Charge a card by token. Returns True on success."""
        pass

    def refund(self, transaction_id: str) -> bool:
        """Refund a previous transaction by ID."""
        pass

    def _validate(self, amount: float):
        # Internal validation, no docstring
        pass
'''

# Parse the source into an AST
tree = ast.parse(source_code)

# Find the ClassDef node
class_node = next(
    node for node in ast.walk(tree)
    if isinstance(node, ast.ClassDef)
)

# Run extraction
file_path = "/srv/billing/payment_processor.py"
extracted = extract_class(class_node, file_path)

# Display results
for entry in extracted:
    print(f"[{entry['type'].upper()}] {entry['name']}")
    print(f"  file     : {entry['file_path']}")
    print(f"  line     : {entry['lineno']}")
    print(f"  docstring: {entry['docstring']}")
    print()

# Expected output:
# [CLASS] PaymentProcessor
#   file     : /srv/billing/payment_processor.py
#   line     : 2
#   docstring: Handles payment transactions for the billing system.
#
# [METHOD] charge
#   file     : /srv/billing/payment_processor.py
#   line     : 5
#   docstring: Charge a card by token. Returns True on success.
#
# [METHOD] refund
#   file     : /srv/billing/payment_processor.py
#   line     : 9
#   docstring: Refund a previous transaction by ID.
#
# [METHOD] _validate
#   file     : /srv/billing/payment_processor.py
#   line     : 13
#   docstring: None
Python

extract_function

def extract_function(node: ast.FunctionDef | ast.AsyncFunctionDef, file_path: str, parent_class: str | None) -> dict[str, Any]
Python

Use this to extract structured metadata from a Python function or method AST node — including its name, arguments, return type, decorators, and source location — for use in code analysis, documentation generation, or static analysis tooling.

Given an AST node representing a function or async function, this returns a dictionary containing all key metadata about that function, ready for serialization or further processing.

Parameters

Name          Type                                    Required  Description
node          ast.FunctionDef | ast.AsyncFunctionDef  Yes       The AST node representing the function or async function to extract metadata from
file_path     str                                     Yes       Absolute or relative path to the source file containing the function, used for source location tracking
parent_class  str | None                              No        Name of the enclosing class if this is a method; None for top-level functions

Returns

A dict[str, Any] containing extracted function metadata. Typical keys include:

Key                Type        Description
name               str         The function's name
file_path          str         Path to the source file
parent_class       str | None  Enclosing class name, or None
is_async           bool        Whether the function is defined with async def
args               list[str]   List of argument names
decorators         list[str]   List of decorator names applied to the function
return_annotation  str | None  Return type annotation as a string, or None if absent
lineno             int         Line number where the function is defined
docstring          str | None  The function's docstring, or None if absent

Example

import ast
from typing import Any

# Inline implementation of extract_function
def extract_function(
    node: ast.FunctionDef | ast.AsyncFunctionDef,
    file_path: str,
    parent_class: str | None = None
) -> dict[str, Any]:
    """Extract structured metadata from a function or method AST node."""

    # Extract positional argument names (note: 'self'/'cls' are not stripped here)
    args = [arg.arg for arg in node.args.args]

    # Extract decorator names (handles simple names and attribute access)
    decorators = []
    for dec in node.decorator_list:
        if isinstance(dec, ast.Name):
            decorators.append(dec.id)
        elif isinstance(dec, ast.Attribute):
            decorators.append(f"{ast.unparse(dec)}")
        else:
            decorators.append(ast.unparse(dec))

    # Extract return annotation if present
    return_annotation = None
    if node.returns is not None:
        return_annotation = ast.unparse(node.returns)

    # Extract docstring if present
    docstring = ast.get_docstring(node)

    return {
        "name": node.name,
        "file_path": file_path,
        "parent_class": parent_class,
        "is_async": isinstance(node, ast.AsyncFunctionDef),
        "args": args,
        "decorators": decorators,
        "return_annotation": return_annotation,
        "lineno": node.lineno,
        "docstring": docstring,
    }


# --- Example: Parse a real source snippet and extract function metadata ---

source_code = '''
import asyncio

class UserService:
    @staticmethod
    @validate_input
    async def fetch_user(user_id: str, include_deleted: bool = False) -> dict:
        """Fetch a user record by ID from the database."""
        pass

def compute_score(values: list[float]) -> float:
    """Calculate the average score from a list of values."""
    return sum(values) / len(values)
'''

try:
    tree = ast.parse(source_code)

    results = []

    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            class_name = node.name
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    metadata = extract_function(
                        node=item,
                        file_path="src/services/user_service.py",
                        parent_class=class_name
                    )
                    results.append(metadata)

        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Only capture top-level functions (not methods already captured above)
            if not any(
                isinstance(parent, ast.ClassDef)
                for parent in ast.walk(tree)
                if hasattr(parent, 'body') and node in getattr(parent, 'body', [])
            ):
                metadata = extract_function(
                    node=node,
                    file_path="src/services/user_service.py",
                    parent_class=None
                )
                results.append(metadata)

    for func_meta in results:
        print(f"\nFunction: {func_meta['name']}")
        print(f"  File:             {func_meta['file_path']}")
        print(f"  Parent class:     {func_meta['parent_class']}")
        print(f"  Is async:         {func_meta['is_async']}")
        print(f"  Arguments:        {func_meta['args']}")
        print(f"  Decorators:       {func_meta['decorators']}")
        print(f"  Return type:      {func_meta['return_annotation']}")
        print(f"  Line number:      {func_meta['lineno']}")
        print(f"  Docstring:        {func_meta['docstring']!r}")

    # Expected output:
    # Function: fetch_user
    #   File:             src/services/user_service.py
    #   Parent class:     UserService
    #   Is async:         True
    #   Arguments:        ['user_id', 'include_deleted']
    #   Decorators:       ['staticmethod', 'validate_input']
    #   Return type:      dict
    #   Line number:      7
    #   Docstring:        'Fetch a user record by ID from the database.'
    #
    # Function: compute_score
    #   File:             src/services/user_service.py
    #   Parent class:     None
    #   Is async:         False
    #   Arguments:        ['values']
    #   Decorators:       []
    #   Return type:      float
    #   Line number:      11
    #   Docstring:        'Calculate the average score from a list of values.'

except SyntaxError as e:
    print(f"Failed to parse source code: {e}")
except Exception as e:
    print(f"Extraction failed: {e}")
Python

extract_parameters

def extract_parameters(args: ast.arguments) -> list[dict[str, Any]]
Python

Use this to convert Python AST function arguments into a structured list of parameter metadata — ideal for building documentation generators, code analyzers, or introspection tools.

Given an ast.arguments object (from parsing a Python function), this returns a list of dictionaries, each describing one parameter: its name, type annotation (if any), and default value (if any).

Parameters

Name  Type           Required  Description
args  ast.arguments  Yes       The arguments node from a parsed Python function's AST. Obtain this via ast.parse() + walking the tree to a FunctionDef node.

Returns

list[dict[str, Any]] — A list of parameter descriptor dictionaries, one per argument, in declaration order.

Each dictionary contains:

Key         Type        Description
name        str         The parameter name
annotation  str | None  The type hint as a string, or None if unannotated
default     Any | None  The default value, or None if no default is defined

When defaults are present

Python's AST stores defaults right-aligned against the argument list (i.e., the last N arguments hold the N defaults). extract_parameters pads and aligns them so each parameter dict carries the correct default.
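The right-alignment is easy to see on a raw ast.arguments object. A standalone sketch using only the standard library (the tiny function f is illustrative):

```python
import ast

# In "def f(a, b, c=1, d=2)", four positional args share two defaults;
# the AST stores them right-aligned: defaults[0] belongs to c, defaults[1] to d.
func = ast.parse("def f(a, b, c=1, d=2): pass").body[0]

names = [a.arg for a in func.args.args]                  # ['a', 'b', 'c', 'd']
defaults = [ast.unparse(d) for d in func.args.defaults]  # ['1', '2']

# Pad on the left with None so index i of `padded` lines up with names[i]
padded = [None] * (len(names) - len(defaults)) + defaults
print(list(zip(names, padded)))
# [('a', None), ('b', None), ('c', '1'), ('d', '2')]
```

The same left-padding trick is applied to keyword-only arguments in the implementation below.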

Example

import ast
from typing import Any

# ── Inline implementation of extract_parameters ──────────────────────────────

def extract_parameters(args: ast.arguments) -> list[dict[str, Any]]:
    """Extract parameters from function arguments."""
    params: list[dict[str, Any]] = []

    all_args = args.posonlyargs + args.args  # positional-only + regular args
    # Defaults are right-aligned: pad left with None so indices match
    defaults_padding = [None] * (len(all_args) - len(args.defaults))
    padded_defaults = defaults_padding + list(args.defaults)

    for arg, default in zip(all_args, padded_defaults):
        param: dict[str, Any] = {
            "name": arg.arg,
            "annotation": ast.unparse(arg.annotation) if arg.annotation else None,
            "default": ast.unparse(default) if default is not None else None,
        }
        params.append(param)

    # Handle *args
    if args.vararg:
        params.append({
            "name": f"*{args.vararg.arg}",
            "annotation": ast.unparse(args.vararg.annotation) if args.vararg.annotation else None,
            "default": None,
        })

    # Handle keyword-only args (after *)
    kw_defaults_padding = [None] * (len(args.kwonlyargs) - len(args.kw_defaults))
    padded_kw_defaults = kw_defaults_padding + list(args.kw_defaults)

    for arg, default in zip(args.kwonlyargs, padded_kw_defaults):
        params.append({
            "name": arg.arg,
            "annotation": ast.unparse(arg.annotation) if arg.annotation else None,
            "default": ast.unparse(default) if default is not None else None,
        })

    # Handle **kwargs
    if args.kwarg:
        params.append({
            "name": f"**{args.kwarg.arg}",
            "annotation": ast.unparse(args.kwarg.annotation) if args.kwarg.annotation else None,
            "default": None,
        })

    return params


# ── Example: parse a realistic function and extract its parameters ────────────

source_code = """
def create_user(
    user_id: int,
    username: str,
    email: str,
    role: str = "viewer",
    is_active: bool = True,
    *tags: str,
    notify: bool = False,
    **metadata: Any
) -> dict:
    pass
"""

try:
    tree = ast.parse(source_code)

    # Walk the AST to find the FunctionDef node
    func_def = next(
        node for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)
    )

    parameters = extract_parameters(func_def.args)

    print(f"Function: {func_def.name}")
    print(f"Found {len(parameters)} parameter(s):\n")

    for param in parameters:
        annotation = param["annotation"] or "untyped"
        default    = f"  (default: {param['default']})" if param["default"] is not None else ""
        print(f"  {param['name']}: {annotation}{default}")

    # Expected output:
    # Function: create_user
    # Found 8 parameter(s):
    #
    #   user_id: int
    #   username: str
    #   email: str
    #   role: str  (default: 'viewer')
    #   is_active: bool  (default: True)
    #   *tags: str
    #   notify: bool  (default: False)
    #   **metadata: Any

except StopIteration:
    print("Error: No function definition found in source.")
except SyntaxError as e:
    print(f"Error: Could not parse source code — {e}")
Python

findConfigFile

function findConfigFile(dir: string): string | null
TypeScript

Use this to locate a Skrypt configuration file by searching a directory for any of the supported config filenames (.skrypt.yaml, .skrypt.yml, skrypt.yaml, skrypt.yml).

This is useful when you need to resolve the config file path before loading it — for example, when bootstrapping a CLI tool or build process that needs to find project-level configuration starting from a given directory.

Parameters

Name  Type    Required  Description
dir   string  Yes       The directory path to search for a config file. Only this exact directory is checked — no parent traversal.

Returns

Value   When
string  A config file was found — returns the full resolved file path (e.g. /projects/myapp/.skrypt.yaml)
null    None of the supported config filenames exist in the given directory

Config Filenames Checked (in order)

  1. .skrypt.yaml
  2. .skrypt.yml
  3. skrypt.yaml
  4. skrypt.yml

The first match wins: if multiple config files exist, only the path of the first one found is returned.
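To see the precedence concretely, drop two of the supported filenames into one directory. A self-contained sketch (the temp-directory setup and inlined lookup are illustrative, not part of Skrypt):

```typescript
import { existsSync, writeFileSync, mkdtempSync, rmSync } from 'fs'
import { join } from 'path'
import { tmpdir } from 'os'

const CONFIG_FILENAMES = ['.skrypt.yaml', '.skrypt.yml', 'skrypt.yaml', 'skrypt.yml']

// Inlined lookup mirroring findConfigFile's documented behavior
function findConfigFile(dir: string): string | null {
  for (const filename of CONFIG_FILENAMES) {
    const filepath = join(dir, filename)
    if (existsSync(filepath)) return filepath
  }
  return null
}

// One directory containing TWO valid config filenames
const dir = mkdtempSync(join(tmpdir(), 'skrypt-demo-'))
writeFileSync(join(dir, 'skrypt.yml'), 'theme: plain\n')
writeFileSync(join(dir, '.skrypt.yaml'), 'theme: hidden\n')

const found = findConfigFile(dir)
console.log(found)
// Ends with ".skrypt.yaml": it precedes "skrypt.yml" in the search order

rmSync(dir, { recursive: true, force: true })
```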

Example

import { existsSync } from 'fs'
import { join } from 'path'

// Inline the supported config filenames (mirrors the real implementation)
const CONFIG_FILENAMES = ['.skrypt.yaml', '.skrypt.yml', 'skrypt.yaml', 'skrypt.yml']

// Self-contained implementation of findConfigFile
function findConfigFile(dir: string): string | null {
  for (const filename of CONFIG_FILENAMES) {
    const filepath = join(dir, filename)
    if (existsSync(filepath)) {
      return filepath
    }
  }
  return null
}

// --- Usage Examples ---

// Example 1: Search the current working directory
const cwd = process.env.PROJECT_DIR || process.cwd()
const configPath = findConfigFile(cwd)

if (configPath) {
  console.log('Config file found:', configPath)
  // Output (example): Config file found: /projects/myapp/.skrypt.yaml
} else {
  console.log('No config file found in:', cwd)
  // Output: No config file found in: /projects/myapp
}

// Example 2: Walk up the directory tree to find the nearest config
function findConfigFileUpward(startDir: string): string | null {
  let currentDir = startDir

  while (true) {
    const found = findConfigFile(currentDir)
    if (found) return found

    const parentDir = join(currentDir, '..')
    if (parentDir === currentDir) break // reached filesystem root
    currentDir = parentDir
  }

  return null
}

const nearestConfig = findConfigFileUpward(process.cwd())
console.log('Nearest config (upward search):', nearestConfig ?? 'not found')
// Output (example): Nearest config (upward search): /projects/.skrypt.yml

// Example 3: Validate before loading
const projectDir = process.env.PROJECT_DIR || '/tmp/my-project'
const resolvedConfig = findConfigFile(projectDir)

if (!resolvedConfig) {
  console.error(
    `No Skrypt config found in "${projectDir}". ` +
    `Create one of: ${CONFIG_FILENAMES.join(', ')}`
  )
  process.exit(1)
}

console.log('Ready to load config from:', resolvedConfig)
// Output (example): Ready to load config from: /tmp/my-project/skrypt.yaml
TypeScript

formatAsMarkdown

function formatAsMarkdown(docs: GeneratedDoc[], title: string): string
TypeScript

Use this to convert an array of generated documentation objects into a formatted Markdown string, ready to write to a .md file or display in a docs portal.

Takes structured doc data (functions, classes, methods) and a page title, and returns a complete Markdown document with sections organized by element type.

Parameters

  • docs (GeneratedDoc[]): array of generated documentation objects, each containing an element descriptor and its rendered doc content
  • title (string): the top-level # Heading title for the resulting Markdown document

Returns

A string containing the full Markdown file content, with a # Title heading and sections grouped by element kind (function, class, method). Ready to pass directly to fs.writeFileSync or a docs pipeline.
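Since the return value is a plain string, persisting it is a single call. A minimal sketch (the output path and stub content here are arbitrary, standing in for a real formatAsMarkdown result):

```typescript
import { writeFileSync, readFileSync } from 'fs'
import { tmpdir } from 'os'
import { join } from 'path'

// Stub standing in for formatAsMarkdown's return value.
const markdown = '# My SDK Reference\n\n## Functions\n\n### fetchUser\n'

// Write the document to disk, then read it back to confirm.
const outPath = join(tmpdir(), 'reference.md')
writeFileSync(outPath, markdown, 'utf8')

console.log(readFileSync(outPath, 'utf8').startsWith('# My SDK Reference'))
// true
```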

Example

// --- Inline types (mirrors Skrypt internals) ---
type ElementKind = 'function' | 'class' | 'method'

interface APIElement {
  name: string
  kind: ElementKind
  signature?: string
}

interface GeneratedDoc {
  element: APIElement
  documentation: string
}

// --- Inline implementation of formatAsMarkdown ---
function formatAsMarkdown(docs: GeneratedDoc[], title: string): string {
  let content = `# ${title}\n\n`

  const functions = docs.filter(d => d.element.kind === 'function')
  const classes   = docs.filter(d => d.element.kind === 'class')
  const methods   = docs.filter(d => d.element.kind === 'method')

  if (classes.length > 0) {
    content += `## Classes\n\n`
    for (const doc of classes) {
      content += `### ${doc.element.name}\n\n`
      if (doc.element.signature) content += `\`\`\`ts\n${doc.element.signature}\n\`\`\`\n\n`
      content += `${doc.documentation}\n\n`
    }
  }

  if (functions.length > 0) {
    content += `## Functions\n\n`
    for (const doc of functions) {
      content += `### ${doc.element.name}\n\n`
      if (doc.element.signature) content += `\`\`\`ts\n${doc.element.signature}\n\`\`\`\n\n`
      content += `${doc.documentation}\n\n`
    }
  }

  if (methods.length > 0) {
    content += `## Methods\n\n`
    for (const doc of methods) {
      content += `### ${doc.element.name}\n\n`
      if (doc.element.signature) content += `\`\`\`ts\n${doc.element.signature}\n\`\`\`\n\n`
      content += `${doc.documentation}\n\n`
    }
  }

  return content
}

// --- Example usage ---
const docs: GeneratedDoc[] = [
  {
    element: {
      name: 'UserService',
      kind: 'class',
      signature: 'class UserService { constructor(apiKey: string) }',
    },
    documentation: 'Manages user accounts and authentication against the REST API.',
  },
  {
    element: {
      name: 'fetchUser',
      kind: 'function',
      signature: 'function fetchUser(userId: string): Promise<User>',
    },
    documentation: 'Fetches a single user by their unique ID. Throws if the user is not found.',
  },
  {
    element: {
      name: 'updateProfile',
      kind: 'method',
      signature: 'updateProfile(data: Partial<User>): Promise<void>',
    },
    documentation: 'Updates the authenticated user\'s profile fields. Only provided fields are changed.',
  },
]

try {
  const markdown = formatAsMarkdown(docs, 'My SDK Reference')
  console.log(markdown)
  /*
  Expected output (abridged):

  # My SDK Reference

  ## Classes

  ### UserService

  ```ts
  class UserService { constructor(apiKey: string) }
  ```

  Manages user accounts and authentication against the REST API.

  ## Functions

  ### fetchUser

  ...

  ## Methods

  ### updateProfile

  ...
  */
} catch (error) {
  console.error('Failed to format the documentation:', error)
}


generateForElement

async function generateForElement(element: APIElement, client: LLMClient, options: GenerationOptions, onProgress?: (progress: GenerationProgress) => void): Promise<GeneratedDoc>
TypeScript

Use this to generate documentation for a single API element (function, class, method, etc.) using an LLM client, with optional real-time progress tracking.

This is the core documentation generation function — pass it a parsed API element and an LLM client, and it returns structured documentation ready for output. Use onProgress to stream status updates to a UI or CLI spinner.

Parameters

  • element (APIElement): the parsed API element to document (function, class, method, etc.)
  • client (LLMClient): configured LLM client used to generate the documentation
  • options (GenerationOptions): controls output style, verbosity, target format, and model settings
  • onProgress ((progress: GenerationProgress) => void, optional): callback fired at each generation stage — use for progress bars or logging

Returns

Returns a Promise<GeneratedDoc> that resolves to the generated documentation object, which includes the formatted doc string, metadata about the element, and generation stats.

  • Success: GeneratedDoc with content, elementName, format, and tokensUsed
  • LLM error: Promise rejects with an error describing the failure
  • Invalid element: Promise rejects if the element is malformed or unsupported

Example

// ─── Inline type definitions (no external imports needed) ───────────────────

type APIElement = {
  name: string
  kind: 'function' | 'class' | 'method' | 'interface' | 'type'
  signature: string
  docstring?: string
  filePath: string
  lineNumber: number
}

type LLMClient = {
  model: string
  apiKey: string
  baseUrl?: string
  maxTokens?: number
}

type GenerationOptions = {
  format: 'markdown' | 'jsdoc' | 'plain'
  verbosity: 'concise' | 'detailed'
  includeExamples: boolean
  language?: string
}

type GenerationProgress = {
  stage: 'preparing' | 'generating' | 'formatting' | 'complete'
  message: string
  percentComplete: number
}

type GeneratedDoc = {
  elementName: string
  content: string
  format: string
  tokensUsed: number
  generatedAt: string
}

// ─── Simulated generateForElement implementation ────────────────────────────

async function generateForElement(
  element: APIElement,
  client: LLMClient,
  options: GenerationOptions,
  onProgress?: (progress: GenerationProgress) => void
): Promise<GeneratedDoc> {
  // Stage 1: Preparing
  onProgress?.({
    stage: 'preparing',
    message: `Preparing context for "${element.name}"...`,
    percentComplete: 10,
  })
  await new Promise((r) => setTimeout(r, 100))

  // Stage 2: Generating
  onProgress?.({
    stage: 'generating',
    message: `Sending to ${client.model}...`,
    percentComplete: 40,
  })
  await new Promise((r) => setTimeout(r, 200))

  // Stage 3: Formatting
  onProgress?.({
    stage: 'formatting',
    message: `Formatting as ${options.format}...`,
    percentComplete: 80,
  })
  await new Promise((r) => setTimeout(r, 100))

  // Stage 4: Complete
  onProgress?.({
    stage: 'complete',
    message: 'Documentation generated successfully.',
    percentComplete: 100,
  })

  // Simulated output — in real usage this comes from the LLM response
  const content =
    options.format === 'jsdoc'
      ? `/**\n * Calculates the sum of two numbers.\n * @param a - First operand\n * @param b - Second operand\n * @returns The sum of a and b\n */`
      : `## \`${element.name}\`\n\nCalculates the sum of two numbers.\n\n**Parameters:** \`a\`, \`b\`\n\n**Returns:** The sum of a and b`

  return {
    elementName: element.name,
    content,
    format: options.format,
    tokensUsed: 312,
    generatedAt: new Date().toISOString(),
  }
}

// ─── Usage ──────────────────────────────────────────────────────────────────

const element: APIElement = {
  name: 'addNumbers',
  kind: 'function',
  signature: 'function addNumbers(a: number, b: number): number',
  docstring: 'Adds two numbers together.',
  filePath: 'src/math/utils.ts',
  lineNumber: 42,
}

const client: LLMClient = {
  model: 'gpt-4o',
  apiKey: process.env.OPENAI_API_KEY || 'sk-your-api-key-here',
  maxTokens: 1024,
}

const options: GenerationOptions = {
  format: 'markdown',
  verbosity: 'detailed',
  includeExamples: true,
  language: 'typescript',
}

async function main() {
  try {
    console.log('Generating documentation...\n')

    const doc = await generateForElement(
      element,
      client,
      options,
      (progress) => {
        console.log(`[${progress.percentComplete}%] ${progress.message}`)
      }
    )

    console.log('\n─── Generated Documentation ───────────────────────')
    console.log(`Element : ${doc.elementName}`)
    console.log(`Format  : ${doc.format}`)
    console.log(`Tokens  : ${doc.tokensUsed}`)
    console.log(`Generated at: ${doc.generatedAt}`)
    console.log('\nContent:\n')
    console.log(doc.content)

    // Expected output:
    // [10%] Preparing context for "addNumbers"...
    // [40%] Sending to gpt-4o...
    // [80%] Formatting as markdown...
    // [100%] Documentation generated successfully.
    //
    // Element : addNumbers
    // Format  : markdown
    // Tokens  : 312
    // Content:
    // ## `addNumbers`
    // Calculates the sum of two numbers. ...
  } catch (error) {
    console.error('Documentation generation failed:', error)
    process.exit(1)
  }
}

main()
TypeScript

generateForElements

async function generateForElements(elements: APIElement[], client: LLMClient, options: GenerationOptions): Promise<GeneratedDoc[]>
TypeScript

Use this to batch-generate documentation for multiple API elements in a single call, processing an array of functions, classes, or methods through an LLM client with shared generation options.

This is the primary entry point for bulk documentation generation — pass in your scanned API elements, an LLM client, and configuration options to receive an array of generated docs ready for output.

Parameters

  • elements (APIElement[], required): array of API elements (functions, classes, methods) to document. Each element contains name, signature, source context, and metadata.
  • client (LLMClient, required): configured LLM client instance used to generate the documentation text.
  • options (GenerationOptions, required): controls generation behavior (output format, verbosity, concurrency limits, and prompt customization).

Returns

Returns Promise<GeneratedDoc[]> — resolves to an array of generated documentation objects, one per input element, in the same order as the input elements array. Each GeneratedDoc contains the element name, generated markdown/text, and any metadata produced during generation.

If an element fails to generate, the behavior depends on options.continueOnError: when unset or false, the error is thrown immediately; when true, a partial result with an error flag is included in the output array.

Example

// ─── Inline type definitions (no external imports needed) ───────────────────

type APIElement = {
  name: string
  kind: 'function' | 'class' | 'method' | 'interface'
  signature: string
  docstring?: string
  sourceContext?: string
  filePath: string
}

type GeneratedDoc = {
  elementName: string
  filePath: string
  markdown: string
  tokensUsed: number
  success: boolean
  error?: string
}

type GenerationOptions = {
  model?: string
  maxTokens?: number
  continueOnError?: boolean
  concurrency?: number
  outputFormat?: 'markdown' | 'jsdoc'
}

type LLMClient = {
  apiKey: string
  baseUrl: string
  generate: (prompt: string, options: GenerationOptions) => Promise<string>
}

// ─── Simulated implementation of generateForElements ────────────────────────

async function generateForElements(
  elements: APIElement[],
  client: LLMClient,
  options: GenerationOptions
): Promise<GeneratedDoc[]> {
  const results: GeneratedDoc[] = []

  for (const element of elements) {
    try {
      const prompt = [
        `Generate ${options.outputFormat ?? 'markdown'} documentation for:`,
        `Name: ${element.name}`,
        `Kind: ${element.kind}`,
        `Signature: ${element.signature}`,
        element.docstring ? `Existing docstring: ${element.docstring}` : '',
        element.sourceContext ? `Context:\n${element.sourceContext}` : '',
      ]
        .filter(Boolean)
        .join('\n')

      const markdown = await client.generate(prompt, options)

      results.push({
        elementName: element.name,
        filePath: element.filePath,
        markdown,
        tokensUsed: Math.floor(markdown.length / 4), // rough estimate
        success: true,
      })
    } catch (err) {
      const error = err instanceof Error ? err.message : String(err)

      if (!options.continueOnError) {
        throw new Error(`Failed to generate docs for "${element.name}": ${error}`)
      }

      results.push({
        elementName: element.name,
        filePath: element.filePath,
        markdown: '',
        tokensUsed: 0,
        success: false,
        error,
      })
    }
  }

  return results
}

// ─── Mock LLM client (replace generate() with a real API call) ───────────────

function createMockLLMClient(apiKey: string): LLMClient {
  return {
    apiKey,
    baseUrl: 'https://api.openai.com/v1',
    generate: async (prompt: string, _options: GenerationOptions): Promise<string> => {
      // Simulate network latency
      await new Promise((r) => setTimeout(r, 50))

      // In production, this would call OpenAI / Anthropic / etc.
      return `## \`${prompt.split('\n')[1].replace('Name: ', '')}\`\n\nUse this to ...\n\n_Generated documentation would appear here._`
    },
  }
}

// ─── Example usage ───────────────────────────────────────────────────────────

const elements: APIElement[] = [
  {
    name: 'fetchUser',
    kind: 'function',
    signature: 'async function fetchUser(id: string): Promise<User>',
    docstring: 'Fetches a user by ID',
    sourceContext: 'const res = await db.users.findById(id)',
    filePath: 'src/api/users.ts',
  },
  {
    name: 'UserService',
    kind: 'class',
    signature: 'class UserService',
    docstring: 'Handles all user-related operations',
    filePath: 'src/services/UserService.ts',
  },
  {
    name: 'formatDate',
    kind: 'function',
    signature: 'function formatDate(date: Date, locale?: string): string',
    filePath: 'src/utils/date.ts',
  },
]

const client = createMockLLMClient(process.env.LLM_API_KEY || 'sk-your-api-key-here')

const options: GenerationOptions = {
  model: 'gpt-4o',
  maxTokens: 512,
  outputFormat: 'markdown',
  continueOnError: true, // don't abort the whole batch on a single failure
  concurrency: 3,
}

async function main() {
  try {
    console.log(`Generating docs for ${elements.length} elements...\n`)

    const docs = await generateForElements(elements, client, options)

    for (const doc of docs) {
      if (doc.success) {
        console.log(`✅ ${doc.elementName} (${doc.filePath}) — ${doc.tokensUsed} tokens`)
        console.log(doc.markdown)
        console.log('─'.repeat(60))
      } else {
        console.warn(`❌ ${doc.elementName} — Error: ${doc.error}`)
      }
    }

    const successCount = docs.filter((d) => d.success).length
    console.log(`\nDone: ${successCount}/${docs.length} elements documented successfully.`)

    // Expected output:
    // ✅ fetchUser (src/api/users.ts) — 19 tokens
    // ## `fetchUser`
    // Use this to ...
    // ✅ UserService (src/services/UserService.ts) — 19 tokens
    // ...
    // Done: 3/3 elements documented successfully.
  } catch (error) {
    console.error('Documentation generation failed:', error)
    process.exit(1)
  }
}

main()
TypeScript

generateSidebarConfig

function generateSidebarConfig(topics: Topic[]): object
TypeScript

Use this to generate a sidebar navigation configuration from a list of documentation topics — compatible with multiple documentation platforms (Mintlify, Docusaurus, etc.).

Given an array of topics (each with an ID, name, and list of docs), this function produces a structured navigation object where each topic becomes a group with slugified page paths.

Parameters

  • topics (Topic[]): array of topic objects, each containing an id, name, and docs array of documented elements

Topic Shape

  • topic.id (string): URL-safe identifier used as the path prefix (e.g. "api")
  • topic.name (string): human-readable group label shown in the sidebar (e.g. "API Reference")
  • topic.docs (Array<{ element: { name: string } }>): documented items — each element.name is slugified to form the page path

Returns

An object with a navigation array, where each entry is:

{
  navigation: Array<{
    group: string   // topic.name
    pages: string[] // e.g. ["api/get-user", "api/create-post"]
  }>
}
TypeScript

Page paths follow the pattern: {topic.id}/{slugified-element-name}

Notes

  • Element names are automatically slugified (e.g. "getUserById" → "get-user-by-id")
  • The output format is intentionally platform-agnostic and maps cleanly to Mintlify's mint.json, Docusaurus's sidebar config, and similar structures
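As a hedged illustration of the second note, the output can be projected onto a Docusaurus-style category shape (the exact sidebar schema varies by Docusaurus version; this mirrors the common sidebars.js category form, and toDocusaurusSidebar is a hypothetical helper, not part of the Skrypt API):

```typescript
type SidebarConfig = { navigation: Array<{ group: string; pages: string[] }> }

// Each group becomes a category; each page path becomes a doc id.
function toDocusaurusSidebar(config: SidebarConfig) {
  return config.navigation.map(({ group, pages }) => ({
    type: 'category' as const,
    label: group,
    items: pages,
  }))
}

const sidebar = toDocusaurusSidebar({
  navigation: [{ group: 'API Reference', pages: ['api/get-user-by-id', 'api/create-post'] }],
})
console.log(JSON.stringify(sidebar, null, 2))
```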

Example

// ── Inline types (no external imports needed) ──────────────────────────────
type DocElement = { name: string }
type GeneratedDoc = { element: DocElement }
type Topic = {
  id: string
  name: string
  docs: GeneratedDoc[]
}

// ── Inline slugify utility ─────────────────────────────────────────────────
function slugify(text: string): string {
  return text
    .replace(/([a-z])([A-Z])/g, '$1-$2')   // camelCase → camel-case
    .replace(/[\s_]+/g, '-')                // spaces/underscores → hyphens
    .replace(/[^a-z0-9-]/gi, '')            // strip non-alphanumeric
    .toLowerCase()
}

// ── Inline generateSidebarConfig ───────────────────────────────────────────
function generateSidebarConfig(topics: Topic[]): object {
  return {
    navigation: topics.map(topic => ({
      group: topic.name,
      pages: topic.docs.map(doc => `${topic.id}/${slugify(doc.element.name)}`)
    }))
  }
}

// ── Realistic example data ─────────────────────────────────────────────────
const topics: Topic[] = [
  {
    id: 'api',
    name: 'API Reference',
    docs: [
      { element: { name: 'getUserById' } },
      { element: { name: 'createPost' } },
      { element: { name: 'deleteAccount' } },
    ]
  },
  {
    id: 'hooks',
    name: 'React Hooks',
    docs: [
      { element: { name: 'useAuthSession' } },
      { element: { name: 'useFetchData' } },
    ]
  },
  {
    id: 'utils',
    name: 'Utilities',
    docs: [
      { element: { name: 'formatDate' } },
      { element: { name: 'slugify' } },
    ]
  }
]

// ── Run it ─────────────────────────────────────────────────────────────────
try {
  const sidebarConfig = generateSidebarConfig(topics)
  console.log('Sidebar config:', JSON.stringify(sidebarConfig, null, 2))

  /*
  Expected output:
  {
    "navigation": [
      {
        "group": "API Reference",
        "pages": ["api/get-user-by-id", "api/create-post", "api/delete-account"]
      },
      {
        "group": "React Hooks",
        "pages": ["hooks/use-auth-session", "hooks/use-fetch-data"]
      },
      {
        "group": "Utilities",
        "pages": ["utils/format-date", "utils/slugify"]
      }
    ]
  }
  */
} catch (error) {
  console.error('Failed to generate sidebar config:', error)
}
TypeScript

get_default_value

def get_default_value(default: ast.AST | None) -> str | None
Python

Use this to extract a human-readable string representation of a Python function parameter's default value from its AST node — perfect for documentation generators, code analyzers, or introspection tools that need to display default values as they appear in source code.

Returns None if no default exists (i.e., the parameter has no default value), or a string like "42", "'hello'", or "None" when a default is present.

Parameters

  • default (ast.AST | None, optional): an AST node representing the default value of a function parameter, or None if the parameter has no default.

Returns

  • default is None: returns None
  • default is a constant (e.g., 42, "hello", True): returns the string form of the constant, e.g. "42"
  • default is a complex expression: returns the unparsed AST expression as a string

Example

import ast

def get_default_value(default: ast.AST | None) -> str | None:
    """Convert default value AST to string."""
    if default is None:
        return None
    return ast.unparse(default)


def extract_param_defaults(source_code: str) -> dict:
    """Parse a function and extract parameter names with their default values."""
    tree = ast.parse(source_code)
    results = {}

    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = node.args
            # Defaults are right-aligned to the args list
            num_args = len(args.args)
            num_defaults = len(args.defaults)
            offset = num_args - num_defaults

            for i, arg in enumerate(args.args):
                default_node = args.defaults[i - offset] if i >= offset else None
                default_str = get_default_value(default_node)
                results[arg.arg] = default_str

    return results


# --- Example usage ---

sample_function = """
def create_user(
    username,
    role="viewer",
    max_retries=3,
    is_active=True,
    tags=None,
    timeout=30.5
):
    pass
"""

try:
    param_defaults = extract_param_defaults(sample_function)

    print("Parameter default values:")
    for param, default in param_defaults.items():
        display = f'"{default}"' if default is not None else "(no default)"
        print(f"  {param:<15} -> {display}")

    # Expected output:
    # Parameter default values:
    #   username        -> (no default)
    #   role            -> "'viewer'"
    #   max_retries     -> "3"
    #   is_active       -> "True"
    #   tags            -> "None"
    #   timeout         -> "30.5"

    # Verify specific cases
    assert param_defaults["username"] is None,       "No default should return None"
    assert param_defaults["role"] == "'viewer'",     "String default should be quoted"
    assert param_defaults["max_retries"] == "3",     "Int default should be stringified"
    assert param_defaults["is_active"] == "True",    "Bool default should be stringified"
    assert param_defaults["tags"] == "None",         "None default should be the string 'None'"

    print("\nAll assertions passed.")

except Exception as error:
    print(f"Error: {error}")
Python

get_docstring

def get_docstring(node: ast.AST) -> str | None
Python

Use this to extract the docstring from any Python AST node (functions, classes, modules) when parsing or analyzing Python source code programmatically.

Returns the docstring as a str if one is present on the node, or None if no docstring exists.

Parameters

  • node (ast.AST, required): any parsed AST node — typically a Module, FunctionDef, AsyncFunctionDef, or ClassDef node

Returns

  • str: the node has a leading string literal (docstring)
  • None: the node has no docstring, or the node type cannot contain one

Notes

  • Docstrings are only recognized as the first statement in a function, class, or module body
  • The returned string is the raw docstring value (not cleaned/dedented)
  • Nodes like If, For, or Assign will always return None since they cannot have docstrings

Example

import ast

# Inline implementation of get_docstring
def get_docstring(node: ast.AST) -> str | None:
    """Extract docstring from a node if present."""
    if not isinstance(node, (ast.Module, ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
        return None
    if not node.body:
        return None
    first_stmt = node.body[0]
    if isinstance(first_stmt, ast.Expr) and isinstance(first_stmt.value, ast.Constant):
        if isinstance(first_stmt.value.value, str):
            return first_stmt.value.value
    return None


# --- Example 1: Extract docstring from a function ---
source_with_docstring = '''
def greet(name: str) -> str:
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}!"
'''

tree = ast.parse(source_with_docstring)
func_node = tree.body[0]  # The FunctionDef node

result = get_docstring(func_node)
print("Function docstring:", result)
# Output: Function docstring: Return a friendly greeting for the given name.


# --- Example 2: Extract docstring from a class ---
source_class = '''
class UserAccount:
    """Represents a user account in the system."""

    def __init__(self, user_id: str):
        self.user_id = user_id
'''

tree = ast.parse(source_class)
class_node = tree.body[0]  # The ClassDef node

result = get_docstring(class_node)
print("Class docstring:", result)
# Output: Class docstring: Represents a user account in the system.


# --- Example 3: Returns None when no docstring is present ---
source_no_doc = '''
def add(a: int, b: int) -> int:
    return a + b
'''

tree = ast.parse(source_no_doc)
func_node = tree.body[0]

result = get_docstring(func_node)
print("No docstring result:", result)
# Output: No docstring result: None


# --- Example 4: Extract docstring from a module ---
source_module = '''"""
Top-level module for processing payment transactions.
Supports Stripe and PayPal integrations.
"""

VERSION = "1.0.0"
'''

tree = ast.parse(source_module)
result = get_docstring(tree)  # ast.Module node
print("Module docstring:", result)
# Output: Module docstring:
# Top-level module for processing payment transactions.
# Supports Stripe and PayPal integrations.
Python

get_type_annotation

def get_type_annotation(annotation: ast.AST | None) -> str | None
Python

Use this to convert a Python AST type annotation node into its human-readable string representation — ideal for documentation generators, code analyzers, or any tool that needs to display type hints as readable text.

Returns None if the annotation is None or cannot be resolved. Returns a string like "str", "int", "List[str]", or "Optional[int]" for valid annotations.

Parameters

  • annotation (ast.AST | None, optional): an AST node representing a type annotation, typically from ast.parse() or from inspecting function argument/return annotations. Pass None to safely get None back.

Returns

  • annotation is None: returns None
  • valid AST annotation node: returns str — the human-readable type string (e.g. "int", "Optional[str]")
  • unresolvable/complex node: returns None

Example

import ast

# Inline implementation of get_type_annotation
def get_type_annotation(annotation: ast.AST | None) -> str | None:
    """Convert type annotation AST to string."""
    if annotation is None:
        return None
    try:
        return ast.unparse(annotation)
    except Exception:
        return None

# --- Example 1: Simple type annotations from a function signature ---
source_code = """
def greet(name: str, age: int, score: float) -> bool:
    pass
"""

tree = ast.parse(source_code)
func_def = tree.body[0]  # The function definition node

print("=== Function Argument Annotations ===")
for arg in func_def.args.args:
    type_str = get_type_annotation(arg.annotation)
    print(f"  {arg.arg}: {type_str}")
# Output:
#   name: str
#   age: int
#   score: float

return_type = get_type_annotation(func_def.returns)
print(f"\n  Return type: {return_type}")
# Output:
#   Return type: bool

# --- Example 2: Complex/generic type annotations ---
complex_source = """
from typing import Optional, List, Dict

def process(
    items: List[str],
    mapping: Dict[str, int],
    flag: Optional[bool]
) -> Optional[List[int]]:
    pass
"""

complex_tree = ast.parse(complex_source)
complex_func = complex_tree.body[1]  # Skip the import statement

print("\n=== Complex Generic Annotations ===")
for arg in complex_func.args.args:
    type_str = get_type_annotation(arg.annotation)
    print(f"  {arg.arg}: {type_str}")
# Output:
#   items: List[str]
#   mapping: Dict[str, int]
#   flag: Optional[bool]

return_type = get_type_annotation(complex_func.returns)
print(f"\n  Return type: {return_type}")
# Output:
#   Return type: Optional[List[int]]

# --- Example 3: Handling None annotation (unannotated parameter) ---
unannotated_source = """
def legacy_func(x, y):
    pass
"""

unannotated_tree = ast.parse(unannotated_source)
unannotated_func = unannotated_tree.body[0]

print("\n=== Unannotated Parameters (None handling) ===")
for arg in unannotated_func.args.args:
    type_str = get_type_annotation(arg.annotation)
    print(f"  {arg.arg}: {type_str!r}")
# Output:
#   x: None
#   y: None
Python

getCrossRefsForElement

function getCrossRefsForElement(elementName: string, allRefs: CrossReference[]): CrossReference[]
TypeScript

Use this to filter a list of cross-references down to only those originating from a specific element — useful when building documentation graphs, dependency trees, or navigation links for a given function, class, or module.

Parameters

  • elementName (string): the name of the element to find cross-references for (matched against fromElement)
  • allRefs (CrossReference[]): the full list of cross-references to filter

Returns

A CrossReference[] containing only the references where fromElement matches elementName. Returns an empty array if no matches are found.
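Building on that, a small sketch of the dependency-tree use case mentioned above: fold a flat reference list into an adjacency map keyed by source element (buildAdjacency is a hypothetical helper, not part of the Skrypt API):

```typescript
type CrossReference = { fromElement: string; toElement: string }

// Hypothetical helper: group refs into an adjacency map,
// the first step toward a dependency tree or graph.
function buildAdjacency(refs: CrossReference[]): Map<string, string[]> {
  const adj = new Map<string, string[]>()
  for (const { fromElement, toElement } of refs) {
    const list = adj.get(fromElement) ?? []
    list.push(toElement)
    adj.set(fromElement, list)
  }
  return adj
}

const adj = buildAdjacency([
  { fromElement: 'UserService', toElement: 'AuthMiddleware' },
  { fromElement: 'UserService', toElement: 'EmailService' },
  { fromElement: 'PaymentService', toElement: 'UserService' },
])
console.log(adj.get('UserService'))
// [ 'AuthMiddleware', 'EmailService' ]
```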

Example

// Inline the CrossReference type (no external imports needed)
type CrossReference = {
  fromElement: string
  toElement: string
  type?: string
  description?: string
}

// Inline the function implementation
function getCrossRefsForElement(
  elementName: string,
  allRefs: CrossReference[]
): CrossReference[] {
  return allRefs.filter(r => r.fromElement === elementName)
}

// Realistic cross-reference data across a documentation system
const allCrossRefs: CrossReference[] = [
  { fromElement: 'UserService',    toElement: 'AuthMiddleware',  type: 'uses',    description: 'Validates tokens' },
  { fromElement: 'UserService',    toElement: 'DatabaseClient',  type: 'depends', description: 'Reads user records' },
  { fromElement: 'PaymentService', toElement: 'UserService',     type: 'uses',    description: 'Fetches billing info' },
  { fromElement: 'AuthMiddleware', toElement: 'TokenValidator',  type: 'depends', description: 'Verifies JWT tokens' },
  { fromElement: 'UserService',    toElement: 'EmailService',    type: 'uses',    description: 'Sends welcome emails' },
]

async function main() {
  try {
    // Get all cross-references originating from 'UserService'
    const userServiceRefs = getCrossRefsForElement('UserService', allCrossRefs)

    console.log(`Cross-references for "UserService": ${userServiceRefs.length} found`)
    console.log(JSON.stringify(userServiceRefs, null, 2))
    // Output:
    // Cross-references for "UserService": 3 found
    // [
    //   { fromElement: 'UserService', toElement: 'AuthMiddleware',  type: 'uses',    description: 'Validates tokens' },
    //   { fromElement: 'UserService', toElement: 'DatabaseClient',  type: 'depends', description: 'Reads user records' },
    //   { fromElement: 'UserService', toElement: 'EmailService',    type: 'uses',    description: 'Sends welcome emails' }
    // ]

    // Returns empty array when no refs exist for an element
    const unknownRefs = getCrossRefsForElement('NonExistentService', allCrossRefs)
    console.log(`Cross-references for "NonExistentService": ${unknownRefs.length} found`)
    // Output: Cross-references for "NonExistentService": 0 found

  } catch (error) {
    console.error('Failed to retrieve cross-references:', error)
  }
}

main()
TypeScript

getKeychainPlatformName

function getKeychainPlatformName(): string
TypeScript

Use this to display a human-readable name for the current platform's credential storage system in user-facing messages, logs, or CLI output.

Returns the appropriate keychain/credential manager name based on the operating system:

| Platform | Returns |
| --- | --- |
| macOS (darwin) | "macOS Keychain" |
| Windows (win32) | "Windows Credential Manager" |
| Linux / other | "system keyring (libsecret)" |

Parameters

None.

Returns

| Type | Description |
| --- | --- |
| string | Human-readable name of the platform's native credential storage system |

Example

// Inline implementation of getKeychainPlatformName
function getKeychainPlatformName(): string {
  switch (process.platform) {
    case 'darwin':  return 'macOS Keychain'
    case 'win32':   return 'Windows Credential Manager'
    default:        return 'system keyring (libsecret)'
  }
}

// Example: Display credential storage info in a CLI setup wizard
function printCredentialStorageInfo() {
  const platformName = getKeychainPlatformName()

  console.log(`Credential storage: ${platformName}`)
  console.log(`Your API key will be securely saved to ${platformName}.`)
  console.log(`To remove it later, open ${platformName} and delete the entry manually.`)
}

// Example: Use in a save-confirmation message
function confirmSave(serviceName: string, username: string) {
  const store = getKeychainPlatformName()
  return `Saved credentials for "${username}" under "${serviceName}" in ${store}.`
}

try {
  printCredentialStorageInfo()
  // Output on macOS:
  //   Credential storage: macOS Keychain
  //   Your API key will be securely saved to macOS Keychain.
  //   To remove it later, open macOS Keychain and delete the entry manually.

  const message = confirmSave('my-app', 'alice@example.com')
  console.log('\n' + message)
  // Output on Linux:
  //   Saved credentials for "alice@example.com" under "my-app" in system keyring (libsecret).
} catch (error) {
  console.error('Unexpected error:', error)
}
TypeScript

getPromptForContentType

function getPromptForContentType(type: ContentType): string
TypeScript

Use this to get the appropriate documentation generation prompt string for a given content type — so you can feed the right instructions into an AI model or documentation pipeline based on what you're documenting.

This function maps a ContentType value to a tailored prompt string that instructs a documentation generator on what to include (e.g., parameter types for APIs, usage examples for components).

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| type | ContentType | ✅ Yes | The category of content being documented. Determines which prompt template is returned. |

Returns

| Condition | Returns |
| --- | --- |
| Always | string — A documentation prompt tailored to the given content type |

Supported ContentType Values

| Value | Prompt Focus |
| --- | --- |
| 'api' | Parameter descriptions, types, return values, API reference details |
| Other types | Prompt varies by type (component usage, guides, etc.) |

Example

// Inline the ContentType definition — no external imports needed
type ContentType = 'api' | 'component' | 'guide' | 'utility'

// Inline a representative implementation of getPromptForContentType
function getPromptForContentType(type: ContentType): string {
  switch (type) {
    case 'api':
      return `Generate detailed API reference documentation. Include:
- Clear parameter descriptions with types
- Return value documentation
- Usage examples for each endpoint
- Error handling and status codes`

    case 'component':
      return `Generate component documentation. Include:
- Props table with types and defaults
- Usage examples with JSX
- Accessibility notes
- Visual variants if applicable`

    case 'guide':
      return `Generate a developer guide. Include:
- Step-by-step instructions
- Prerequisites
- Code snippets for each step
- Common pitfalls and how to avoid them`

    case 'utility':
      return `Generate utility function documentation. Include:
- Purpose and use case
- Input/output examples
- Edge cases and limitations
- Performance considerations`

    default:
      return `Generate general documentation with clear descriptions and examples.`
  }
}

// --- Example usage ---
async function main() {
  try {
    const contentTypes: ContentType[] = ['api', 'component', 'guide', 'utility']

    for (const type of contentTypes) {
      const prompt = getPromptForContentType(type)

      console.log(`\n=== ContentType: "${type}" ===`)
      console.log(prompt)
    }

    // Practical use: pass the prompt to an AI documentation pipeline
    const selectedType: ContentType = 'api'
    const docPrompt = getPromptForContentType(selectedType)

    const mockAiRequest = {
      model: 'gpt-4',
      systemPrompt: docPrompt,
      userContent: 'Document the getUserById(id: string): Promise<User> function',
    }

    console.log('\n=== Mock AI Request Payload ===')
    console.log(JSON.stringify(mockAiRequest, null, 2))
    // Output: { model: 'gpt-4', systemPrompt: '...api prompt...', userContent: '...' }

  } catch (error) {
    console.error('Failed to get prompt for content type:', error)
  }
}

main()
TypeScript

getRecommendedStructure

function getRecommendedStructure(elements: APIElement[]): {
  sections: { name: string; type: ContentType; elements: APIElement[] }[]
  stats: { api: number; guide: number; tutorial: number; overview: number }
}
TypeScript

Use this to automatically organize a mixed set of API elements into a recommended documentation structure — grouping them into logical sections (API reference, guides, tutorials, overviews) and getting a count breakdown by content type.

This is useful when you have a large set of parsed API elements and need to determine how to structure your documentation site or output files without manually categorizing each element.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| elements | APIElement[] | Yes | Array of API elements (functions, classes, types, etc.) to analyze and organize |

Returns

An object with two properties:

| Property | Type | Description |
| --- | --- | --- |
| sections | { name: string; type: ContentType; elements: APIElement[] }[] | Ordered array of recommended documentation sections, each with a display name, content type, and the elements that belong to it |
| stats | { api: number; guide: number; tutorial: number; overview: number } | Count of elements assigned to each content type category |

ContentType values

  • "api" — Reference documentation (functions, classes, types)
  • "guide" — How-to and conceptual content
  • "tutorial" — Step-by-step walkthroughs
  • "overview" — High-level introductory content

Example

// ---- Inline types (do not import from skrypt) ----
type ContentType = 'api' | 'guide' | 'tutorial' | 'overview'

type APIElement = {
  name: string
  kind: 'function' | 'class' | 'interface' | 'type' | 'variable' | 'module'
  description?: string
  tags?: string[]
}

type Section = {
  name: string
  type: ContentType
  elements: APIElement[]
}

type StructureResult = {
  sections: Section[]
  stats: { api: number; guide: number; tutorial: number; overview: number }
}

// ---- Simulated implementation of getRecommendedStructure ----
function classifyElement(el: APIElement): ContentType {
  const tags = el.tags ?? []
  if (tags.includes('tutorial')) return 'tutorial'
  if (tags.includes('guide')) return 'guide'
  if (tags.includes('overview') || el.kind === 'module') return 'overview'
  return 'api'
}

function getRecommendedStructure(elements: APIElement[]): StructureResult {
  const buckets: Record<ContentType, APIElement[]> = {
    api: [],
    guide: [],
    tutorial: [],
    overview: [],
  }

  for (const el of elements) {
    buckets[classifyElement(el)].push(el)
  }

  const sectionMeta: { type: ContentType; name: string }[] = [
    { type: 'overview', name: 'Overview' },
    { type: 'tutorial', name: 'Tutorials' },
    { type: 'guide', name: 'Guides' },
    { type: 'api', name: 'API Reference' },
  ]

  const sections: Section[] = sectionMeta
    .filter(({ type }) => buckets[type].length > 0)
    .map(({ type, name }) => ({ name, type, elements: buckets[type] }))

  const stats = {
    api: buckets.api.length,
    guide: buckets.guide.length,
    tutorial: buckets.tutorial.length,
    overview: buckets.overview.length,
  }

  return { sections, stats }
}

// ---- Realistic usage example ----
const apiElements: APIElement[] = [
  { name: 'SuperMemory',       kind: 'module',    description: 'Main SDK module',          tags: ['overview'] },
  { name: 'createClient',      kind: 'function',  description: 'Initialize the client' },
  { name: 'addMemory',         kind: 'function',  description: 'Store a memory entry' },
  { name: 'searchMemory',      kind: 'function',  description: 'Query stored memories' },
  { name: 'MemoryClient',      kind: 'class',     description: 'Client class' },
  { name: 'MemoryOptions',     kind: 'interface', description: 'Configuration options' },
  { name: 'GettingStarted',    kind: 'function',  description: 'Quickstart walkthrough',   tags: ['tutorial'] },
  { name: 'AuthGuide',         kind: 'function',  description: 'Authentication patterns',  tags: ['guide'] },
  { name: 'MigrationGuide',    kind: 'function',  description: 'v1 → v2 migration steps',  tags: ['guide'] },
]

async function main() {
  try {
    const { sections, stats } = getRecommendedStructure(apiElements)

    console.log('=== Recommended Documentation Structure ===\n')

    for (const section of sections) {
      console.log(`📂 ${section.name} (${section.type})`)
      for (const el of section.elements) {
        console.log(`   • ${el.name} [${el.kind}]`)
      }
      console.log()
    }

    console.log('=== Content Type Stats ===')
    console.log(stats)
    // Expected output:
    // === Recommended Documentation Structure ===
    //
    // 📂 Overview (overview)
    //    • SuperMemory [module]
    //
    // 📂 Tutorials (tutorial)
    //    • GettingStarted [function]
    //
    // 📂 Guides (guide)
    //    • AuthGuide [function]
    //    • MigrationGuide [function]
    //
    // 📂 API Reference (api)
    //    • createClient [function]
    //    • addMemory [function]
    //    • searchMemory [function]
    //    • MemoryClient [class]
    //    • MemoryOptions [interface]
    //
    // === Content Type Stats ===
    // { api: 5, guide: 2, tutorial: 1, overview: 1 }
  } catch (error) {
    console.error('Failed to generate structure:', error)
  }
}

main()
TypeScript

getSortWeight

function getSortWeight(content: string): number
TypeScript

Use this to determine the sort order of a documentation file by extracting its position weight from frontmatter metadata. Returns a numeric weight so you can sort an array of markdown files into the correct sidebar order.

Supports the following frontmatter keys (in priority order): sidebar_position, order, weight, position.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | Raw markdown/MDX file content, including any YAML frontmatter block |

Returns

| Condition | Value |
| --- | --- |
| Frontmatter contains sidebar_position, order, weight, or position | The numeric value of the first matching key |
| No frontmatter present | Infinity |
| Frontmatter exists but none of the supported keys are present | Infinity |
| Supported key exists but its value is not a number | Infinity |

Files returning Infinity will naturally sort to the end of any ascending sort.

Example

// Inline frontmatter parser (simulates gray-matter / parseFrontmatterRaw)
function parseFrontmatterRaw(content: string): { data: Record<string, unknown> | null } {
  const match = content.match(/^---\n([\s\S]*?)\n---/)
  if (!match) return { data: null }

  const yaml = match[1]
  const data: Record<string, unknown> = {}

  for (const line of yaml.split('\n')) {
    const [key, ...rest] = line.split(':')
    if (key && rest.length) {
      const value = rest.join(':').trim()
      const num = Number(value)
      data[key.trim()] = isNaN(num) ? value : num
    }
  }

  return { data }
}

// Inline implementation of getSortWeight
function getSortWeight(content: string): number {
  const { data } = parseFrontmatterRaw(content)
  if (!data) return Infinity
  const weight = data.sidebar_position ?? data.order ?? data.weight ?? data.position
  return typeof weight === 'number' ? weight : Infinity
}

// --- Example usage ---

const files = [
  {
    name: 'advanced-config.md',
    content: `---
title: Advanced Configuration
sidebar_position: 3
---
# Advanced Configuration
Details here...`,
  },
  {
    name: 'quickstart.md',
    content: `---
title: Quickstart
sidebar_position: 1
---
# Quickstart
Get started fast.`,
  },
  {
    name: 'concepts.md',
    content: `---
title: Core Concepts
order: 2
---
# Core Concepts
Learn the basics.`,
  },
  {
    name: 'changelog.md',
    // No frontmatter — will sort to the end
    content: `# Changelog\nAll notable changes...`,
  },
  {
    name: 'legacy.md',
    content: `---
title: Legacy Guide
weight: 10
---
# Legacy Guide`,
  },
]

try {
  const sorted = [...files].sort(
    (a, b) => getSortWeight(a.content) - getSortWeight(b.content)
  )

  console.log('Sorted sidebar order:')
  sorted.forEach((file, index) => {
    const weight = getSortWeight(file.content)
    const display = weight === Infinity ? '∞ (no weight)' : weight
    console.log(`  ${index + 1}. ${file.name.padEnd(22)} weight: ${display}`)
  })

  // Expected output:
  // Sorted sidebar order:
  //   1. quickstart.md           weight: 1
  //   2. concepts.md             weight: 2
  //   3. advanced-config.md      weight: 3
  //   4. legacy.md               weight: 10
  //   5. changelog.md            weight: ∞ (no weight)
} catch (error) {
  console.error('Failed to sort files:', error)
}
TypeScript

groupDocsByFile

function groupDocsByFile(docs: GeneratedDoc[]): FileGenerationResult[]
TypeScript

Use this to organize a flat list of generated documentation objects into groups by their source file, making it easy to write one output file per source file.

This is the key step between generating individual doc entries and writing them to disk — it collapses a flat array into a structure where each entry represents a single source file and all the docs that belong to it.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| docs | GeneratedDoc[] | Yes | Flat array of generated documentation objects, each containing an element with a filePath property |

Returns

Returns FileGenerationResult[] — an array where each entry represents one source file. Each FileGenerationResult contains:

| Property | Type | Description |
| --- | --- | --- |
| filePath | string | The source file path shared by all docs in this group |
| docs | GeneratedDoc[] | All documentation entries that originated from this file |

Returns an empty array if docs is empty. The order of groups reflects the order in which each unique file path was first encountered in the input array.

Example

// ─── Inline types (do not import from skrypt) ───────────────────────────────

interface CodeElement {
  name: string
  filePath: string
  kind: 'function' | 'class' | 'interface' | 'type'
  signature: string
}

interface GeneratedDoc {
  element: CodeElement
  markdown: string
  generatedAt: Date
}

interface FileGenerationResult {
  filePath: string
  docs: GeneratedDoc[]
}

// ─── Inline implementation ────────────────────────────────────────────────────

function groupDocsByFile(docs: GeneratedDoc[]): FileGenerationResult[] {
  const byFile = new Map<string, GeneratedDoc[]>()

  for (const doc of docs) {
    const file = doc.element.filePath
    if (!byFile.has(file)) {
      byFile.set(file, [])
    }
    byFile.get(file)!.push(doc)
  }

  return Array.from(byFile.entries()).map(([filePath, docs]) => ({
    filePath,
    docs,
  }))
}

// ─── Realistic example data ───────────────────────────────────────────────────

const generatedDocs: GeneratedDoc[] = [
  {
    element: {
      name: 'createUser',
      filePath: 'src/users/service.ts',
      kind: 'function',
      signature: 'function createUser(data: UserInput): Promise<User>',
    },
    markdown: '## createUser\n\nCreates a new user record...',
    generatedAt: new Date(),
  },
  {
    element: {
      name: 'deleteUser',
      filePath: 'src/users/service.ts',   // same file as createUser
      kind: 'function',
      signature: 'function deleteUser(id: string): Promise<void>',
    },
    markdown: '## deleteUser\n\nRemoves a user by ID...',
    generatedAt: new Date(),
  },
  {
    element: {
      name: 'hashPassword',
      filePath: 'src/auth/utils.ts',       // different file
      kind: 'function',
      signature: 'function hashPassword(plain: string): string',
    },
    markdown: '## hashPassword\n\nHashes a plaintext password...',
    generatedAt: new Date(),
  },
  {
    element: {
      name: 'AuthService',
      filePath: 'src/auth/utils.ts',       // same file as hashPassword
      kind: 'class',
      signature: 'class AuthService',
    },
    markdown: '## AuthService\n\nHandles authentication logic...',
    generatedAt: new Date(),
  },
]

// ─── Run the example ──────────────────────────────────────────────────────────

async function main() {
  try {
    const fileGroups = groupDocsByFile(generatedDocs)

    console.log(`Grouped ${generatedDocs.length} docs into ${fileGroups.length} files:\n`)

    for (const group of fileGroups) {
      const names = group.docs.map(d => d.element.name).join(', ')
      console.log(`  📄 ${group.filePath}`)
      console.log(`     ${group.docs.length} doc(s): ${names}\n`)
    }

    // Expected output:
    // Grouped 4 docs into 2 files:
    //
    //   📄 src/users/service.ts
    //      2 doc(s): createUser, deleteUser
    //
    //   📄 src/auth/utils.ts
    //      2 doc(s): hashPassword, AuthService

    // Verify grouping correctness
    const userServiceGroup = fileGroups.find(g => g.filePath === 'src/users/service.ts')
    console.log('Docs in user service:', userServiceGroup?.docs.length) // 2

    // Edge case: empty input returns empty array
    const emptyResult = groupDocsByFile([])
    console.log('Empty input result:', emptyResult) // []

  } catch (error) {
    console.error('Grouping failed:', error)
  }
}

main()
TypeScript

hasSeenNotice

function hasSeenNotice(id: string): boolean
TypeScript

Use this to check whether a user has already acknowledged a specific notice, warning, or announcement — preventing repeat displays of one-time messages.

This is ideal for CLI tools or apps that show onboarding tips, deprecation warnings, or changelog notices only once per user.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique identifier for the notice to check (e.g. "welcome-v2", "deprecation-warning") |

Returns

| Value | Condition |
| --- | --- |
| true | The notice with the given id has been previously marked as seen |
| false | The notice has never been seen or no record exists |

Example

import { existsSync, readFileSync, writeFileSync, mkdirSync } from 'fs'
import { join } from 'path'
import { homedir } from 'os'

// --- Inline implementation (self-contained, no external imports) ---

const CONFIG_DIR = join(homedir(), '.config', 'myapp')
const NOTICES_FILE = join(CONFIG_DIR, 'notices.json')

type NoticesState = {
  seen: Record<string, string> // notice id -> ISO timestamp
}

function loadNotices(): NoticesState {
  if (existsSync(NOTICES_FILE)) {
    try {
      return JSON.parse(readFileSync(NOTICES_FILE, 'utf-8')) as NoticesState
    } catch {
      return { seen: {} }
    }
  }
  return { seen: {} }
}

function saveNotices(state: NoticesState): void {
  mkdirSync(CONFIG_DIR, { recursive: true, mode: 0o700 })
  writeFileSync(NOTICES_FILE, JSON.stringify(state, null, 2))
}

function hasSeenNotice(id: string): boolean {
  const state = loadNotices()
  return id in state.seen
}

function markNoticeSeen(id: string): void {
  const state = loadNotices()
  state.seen[id] = new Date().toISOString()
  saveNotices(state)
}

// --- Usage example ---

const NOTICE_ID = 'welcome-v2'

function showWelcomeNotice(): void {
  if (hasSeenNotice(NOTICE_ID)) {
    console.log(`Notice "${NOTICE_ID}" already seen — skipping.`)
    // Output: Notice "welcome-v2" already seen — skipping.
    return
  }

  console.log('👋 Welcome to MyApp v2! Here is what is new...')
  markNoticeSeen(NOTICE_ID)
  console.log(`Notice "${NOTICE_ID}" marked as seen.`)
}

try {
  // First run: notice has NOT been seen yet
  console.log('First check:', hasSeenNotice(NOTICE_ID)) // Output: false
  showWelcomeNotice()                                    // Output: 👋 Welcome to MyApp v2!...

  // Second run: notice HAS been seen
  console.log('Second check:', hasSeenNotice(NOTICE_ID)) // Output: true
  showWelcomeNotice()                                     // Output: Notice "welcome-v2" already seen — skipping.
} catch (error) {
  console.error('Failed to check notice state:', error)
}
TypeScript

importConfluence

function importConfluence(dir: string, name?: string): ImportResult
TypeScript

Use this to convert a Confluence HTML space export into structured markdown pages, ready for use in documentation pipelines.

Given a directory containing a Confluence HTML export, importConfluence walks all HTML files, transforms Confluence-specific markup (callouts, tabs, code groups, etc.) into standard markdown, and returns a structured result with all imported pages and transformation statistics.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| dir | string | ✅ Yes | Path to the root directory of the Confluence HTML space export |
| name | string | ❌ No | Optional display name for the imported space. Defaults to a value derived from the export if omitted |

Returns

Returns an ImportResult object containing:

| Field | Type | Description |
| --- | --- | --- |
| name | string | The name of the imported space |
| pages | ImportedPage[] | Array of transformed pages, each with title, path, content (markdown), and frontmatter |
| stats | TransformStats | Counts of transformed elements: callouts, tabs, codeGroups, steps, accordions, images, other |

ImportedPage shape

| Field | Type | Description |
| --- | --- | --- |
| title | string | Page title extracted from the HTML |
| path | string | Relative file path within the export |
| content | string | Transformed markdown content |
| frontmatter | Record<string, unknown> | Normalized frontmatter metadata |

When errors occur

  • Throws if dir does not exist or is not readable
  • Returns an empty pages array (with zeroed stats) if no HTML files are found in the directory
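
Example

No runnable example ships with this entry, so the sketch below simulates the transformation step in memory: the real importConfluence walks dir on disk, whereas simulateImportConfluence (a hypothetical stand-in, not the library function) takes a path-to-HTML map. The macro markup and regexes are simplified illustrations of how a Confluence info macro might become a markdown callout.

```typescript
// Sketch types mirroring the documented ImportResult shape
type TransformStats = {
  callouts: number; tabs: number; codeGroups: number
  steps: number; accordions: number; images: number; other: number
}
type ImportedPage = { title: string; path: string; content: string; frontmatter: Record<string, unknown> }
type ImportResult = { name: string; pages: ImportedPage[]; stats: TransformStats }

// In-memory stand-in: the real importConfluence reads HTML files from `dir`
function simulateImportConfluence(files: Record<string, string>, name = 'Imported Space'): ImportResult {
  const stats: TransformStats = { callouts: 0, tabs: 0, codeGroups: 0, steps: 0, accordions: 0, images: 0, other: 0 }
  const pages: ImportedPage[] = []

  for (const [path, html] of Object.entries(files)) {
    // Use the first <h1> as the page title, falling back to the path
    const title = html.match(/<h1[^>]*>(.*?)<\/h1>/)?.[1] ?? path

    // Convert a Confluence-style info macro into a blockquote callout
    let content = html.replace(
      /<ac:structured-macro ac:name="info">[\s\S]*?<ac:rich-text-body>([\s\S]*?)<\/ac:rich-text-body>[\s\S]*?<\/ac:structured-macro>/g,
      (_, body) => { stats.callouts++; return `> ℹ️ ${body.trim()}` }
    )

    // Drop the title heading and unwrap simple paragraph tags
    content = content.replace(/<h1[^>]*>.*?<\/h1>/, '').replace(/<\/?p>/g, '').trim()
    pages.push({ title, path, content, frontmatter: { title } })
  }

  return { name, pages, stats }
}

const result = simulateImportConfluence({
  'Home.html':
    '<h1>Home</h1><p>Welcome.</p>' +
    '<ac:structured-macro ac:name="info"><ac:rich-text-body>Read the setup guide first.</ac:rich-text-body></ac:structured-macro>',
})

console.log(result.pages[0].title)  // Home
console.log(result.stats.callouts)  // 1
```

The empty-directory behavior described above falls out naturally: with no HTML files, pages stays empty and every stat remains zero.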

importDocusaurus

function importDocusaurus(dir: string, name?: string): ImportResult
TypeScript

Use this to convert a Docusaurus documentation directory into a structured, normalized ImportResult object — transforming admonitions, tabs, frontmatter, and image paths into a portable format ready for further processing or migration.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| dir | string | ✅ Yes | Absolute or relative path to the root of the Docusaurus project (the directory containing docusaurus.config.js or docusaurus.config.ts) |
| name | string | ❌ No | Optional display name for the imported project. Falls back to the project name extracted from docusaurus.config.js if omitted |

Returns

Returns an ImportResult object containing:

| Field | Type | Description |
| --- | --- | --- |
| name | string | Project name (from config or provided name argument) |
| pages | ImportedPage[] | Array of transformed documentation pages, each with path, content, frontmatter, and sortWeight |
| stats | TransformStats | Counts of transformed elements: callouts, tabs, codeGroups, steps, accordions, images, other |

Behavior Notes

  • Reads docusaurus.config.ts or docusaurus.config.js to extract project metadata
  • Recursively discovers all .md and .mdx files in the docs directory
  • Transforms Docusaurus-specific syntax (admonitions like :::note, tab components) into portable equivalents
  • Strips Docusaurus-specific import statements
  • Normalizes frontmatter fields and rewrites relative image paths
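
Example

This entry has no example, so here is a minimal sketch of the two transformations in the notes above: stripping Docusaurus theme imports and converting a :::note admonition into a portable blockquote. The function name and regexes are illustrative, not the library's internals.

```typescript
// Minimal stat counter for this sketch
type TransformStats = { callouts: number; other: number }

// Transform Docusaurus-specific syntax into portable markdown
function transformAdmonitions(markdown: string, stats: TransformStats): string {
  // Strip Docusaurus-specific theme imports (e.g. Tabs/TabItem components)
  let out = markdown.replace(/^import .*?from ['"]@theme\/.*?['"];?\n/gm, '')

  // Convert :::note / :::warning blocks into blockquote callouts
  out = out.replace(/:::(\w+)\n([\s\S]*?)\n:::/g, (_, kind, body) => {
    stats.callouts++
    return `> **${kind.toUpperCase()}**\n> ${body.trim()}`
  })

  return out
}

const stats: TransformStats = { callouts: 0, other: 0 }
const input = `import Tabs from '@theme/Tabs';

:::note
Docs are rebuilt on every commit.
:::
`

const output = transformAdmonitions(input, stats)
console.log(output)
console.log('callouts:', stats.callouts) // callouts: 1
```

In the real importer these rewrites run per file across the recursively discovered .md/.mdx set, with the per-element counts accumulated into the returned stats object.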

importFromGitHub

async function importFromGitHub(owner: string, repo: string, path: string, ref: string, options?: { format?: ImportFormat; name?: string }): Promise<ImportResult>
TypeScript

Use this to pull documentation directly from a GitHub repository and import it into your system — no manual downloading or cloning required.

Given a repo owner, repository name, file path, and branch/tag/commit ref, this function fetches the content from GitHub and processes it as importable documentation.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| owner | string | Yes | GitHub username or organization name (e.g. "vercel") |
| repo | string | Yes | Repository name (e.g. "next.js") |
| path | string | Yes | Path to the file or directory within the repo (e.g. "docs/getting-started.md") |
| ref | string | Yes | Branch name, tag, or commit SHA to fetch from (e.g. "main", "v2.1.0") |
| options.format | ImportFormat | No | Override auto-detected format (e.g. "markdown", "openapi") |
| options.name | string | No | Custom display name for the imported documentation |

Returns

Returns a Promise<ImportResult> that resolves when the import is complete.

| Outcome | Description |
| --- | --- |
| Resolves | Import succeeded — result contains metadata about the imported docs (e.g. pages count, name, format used) |
| Rejects | GitHub fetch failed (e.g. repo not found, bad ref, network error) or the content could not be parsed in the given/detected format |

Notes

  • If options.format is omitted, the format is auto-detected from the file extension or content.
  • If options.name is omitted, a name is derived from the repository and path.
  • Supports both single files and directory paths (up to a recursion depth of 20 for nested directories).
  • Use a specific tag or commit SHA as ref for reproducible imports rather than a mutable branch like "main".

Example

// --- Inline types (mirroring the real library's shape) ---
type ImportFormat = 'markdown' | 'openapi' | 'html' | 'text' | 'auto'

interface ImportResult {
  name: string
  format: ImportFormat
  pagesImported: number
  source: string
  success: boolean
}

// --- Simulated implementation (replace with real `importFromGitHub` in practice) ---
async function importFromGitHub(
  owner: string,
  repo: string,
  path: string,
  ref: string,
  options?: { format?: ImportFormat; name?: string }
): Promise<ImportResult> {
  const githubApiUrl = `https://api.github.com/repos/${owner}/${repo}/contents/${path}?ref=${ref}`

  console.log(`Fetching from GitHub: ${githubApiUrl}`)

  // Simulate a network fetch (in real usage, this hits the GitHub API)
  const simulatedResponse = {
    name: path.split('/').pop() ?? path,
    content: Buffer.from('# Getting Started\n\nWelcome to the docs.').toString('base64'),
    encoding: 'base64',
  }

  if (!simulatedResponse.content) {
    throw new Error(`No content found at ${owner}/${repo}/${path}@${ref}`)
  }

  const detectedFormat: ImportFormat =
    options?.format ??
    (path.endsWith('.md') || path.endsWith('.mdx')
      ? 'markdown'
      : path.endsWith('.json') || path.endsWith('.yaml')
      ? 'openapi'
      : 'text')

  const result: ImportResult = {
    name: options?.name ?? `${owner}/${repo} — ${path}`,
    format: detectedFormat,
    pagesImported: 1,
    source: `https://github.com/${owner}/${repo}/blob/${ref}/${path}`,
    success: true,
  }

  return result
}

// --- Usage ---
async function main() {
  try {
    // Import a specific markdown file from a public repo at a pinned tag
    const result = await importFromGitHub(
      'vercel',           // owner
      'next.js',          // repo
      'docs/getting-started/installation.mdx', // path
      'v14.1.0',          // ref (pinned tag for reproducibility)
      {
        format: 'markdown',
        name: 'Next.js Installation Guide',
      }
    )

    console.log('Import complete!')
    console.log('Name:           ', result.name)
    console.log('Format:         ', result.format)
    console.log('Pages imported: ', result.pagesImported)
    console.log('Source URL:     ', result.source)
    console.log('Success:        ', result.success)

    // Expected output:
    // Import complete!
    // Name:            Next.js Installation Guide
    // Format:          markdown
    // Pages imported:  1
    // Source URL:      https://github.com/vercel/next.js/blob/v14.1.0/docs/getting-started/installation.mdx
    // Success:         true

    // --- Auto-detect format (no options passed) ---
    const autoResult = await importFromGitHub(
      'openai',
      'openai-openapi',
      'openapi.yaml',
      'master'
      // No options — format and name will be inferred automatically
    )

    console.log('\nAuto-detected import:')
    console.log('Name:  ', autoResult.name)
    console.log('Format:', autoResult.format)
    // Expected:
    // Name:   openai/openai-openapi — openapi.yaml
    // Format: openapi

  } catch (error) {
    if (error instanceof Error) {
      console.error('Import failed:', error.message)
      // Common causes:
      //   - Invalid owner/repo (404 from GitHub API)
      //   - Bad ref (branch/tag/SHA does not exist)
      //   - Path not found in the repository
      //   - Rate-limited by GitHub (add a GITHUB_TOKEN to increase limits)
    }
  }
}

main()
TypeScript

importGitBook

function importGitBook(dir: string, name?: string): ImportResult
TypeScript

Use this to convert a GitBook documentation directory into a structured import result, transforming GitBook-specific syntax (hints, tabs, steps, expandables, content refs, embeds) into a normalized format ready for further processing or migration.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| dir | string | ✅ Yes | Path to the root directory of the GitBook documentation |
| name | string | ❌ No | Optional name to assign to the import result (e.g., project or docs set name) |

Returns

Returns an ImportResult object containing:

| Field | Type | Description |
| --- | --- | --- |
| name | string \| undefined | The name passed in, if provided |
| pages | ImportedPage[] | Array of transformed documentation pages |
| stats | TransformStats | Counts of each GitBook-specific element that was transformed (callouts, tabs, code groups, steps, accordions, images, etc.) |
| errors | string[] | Any non-fatal errors encountered during import |

The function reads .gitbook.yaml if present at the root to determine structure, then recursively discovers all .md/.mdx files and applies GitBook-to-standard transformations on each.

Example

import { existsSync, mkdirSync, writeFileSync, rmSync } from 'fs'
import { join } from 'path'

// ── Inline types (mirrors the real library's types) ──────────────────────────

interface TransformStats {
  callouts: number
  tabs: number
  codeGroups: number
  steps: number
  accordions: number
  images: number
  other: number
}

interface ImportedPage {
  path: string
  content: string
  frontmatter: Record<string, unknown>
}

interface ImportResult {
  name?: string
  pages: ImportedPage[]
  stats: TransformStats
  errors: string[]
}

// ── Minimal self-contained simulation of importGitBook ───────────────────────

function createEmptyResult(name?: string): ImportResult {
  return {
    name,
    pages: [],
    stats: { callouts: 0, tabs: 0, codeGroups: 0, steps: 0, accordions: 0, images: 0, other: 0 },
    errors: [],
  }
}

function simulateImportGitBook(dir: string, name?: string): ImportResult {
  const result = createEmptyResult(name)

  if (!existsSync(dir)) {
    result.errors.push(`Directory not found: ${dir}`)
    return result
  }

  // Simulate discovering and transforming two markdown files
  const mockFiles = [
    {
      path: join(dir, 'README.md'),
      raw: `---
title: Introduction
---

{% hint style="info" %}
Welcome to the docs!
{% endhint %}

# Getting Started

Some introductory content here.
`,
    },
    {
      path: join(dir, 'guide', 'setup.md'),
      raw: `---
title: Setup Guide
---

{% tabs %}
{% tab title="npm" %}
\`\`\`bash
npm install my-package
\`\`\`
{% endtab %}
{% tab title="yarn" %}
\`\`\`bash
yarn add my-package
\`\`\`
{% endtab %}
{% endtabs %}
`,
    },
  ]

  for (const file of mockFiles) {
    // Simulate GitBook hint → callout transformation
    let content = file.raw.replace(
      /\{%\s*hint style="(\w+)"\s*%\}([\s\S]*?)\{%\s*endhint\s*%\}/g,
      (_, type, body) => {
        result.stats.callouts++
        return `> **${type.toUpperCase()}**\n> ${body.trim()}`
      }
    )

    // Simulate GitBook tabs → standard markdown transformation
    content = content.replace(/\{%\s*tabs\s*%\}[\s\S]*?\{%\s*endtabs\s*%\}/g, (match) => {
      result.stats.tabs++
      return `<!-- tabs transformed -->\n${match
        .replace(/\{%\s*tab title="([^"]+)"\s*%\}/g, '**$1**\n')
        .replace(/\{%\s*endtab\s*%\}/g, '')
        .replace(/\{%\s*tabs\s*%\}|\{%\s*endtabs\s*%\}/g, '')}`
    })

    // Parse frontmatter (simplified)
    const fmMatch = content.match(/^---\n([\s\S]*?)\n---/)
    const frontmatter: Record<string, unknown> = {}
    if (fmMatch) {
      fmMatch[1].split('\n').forEach((line) => {
        const [key, ...val] = line.split(':')
        if (key && val.length) frontmatter[key.trim()] = val.join(':').trim()
      })
    }

    result.pages.push({ path: file.path, content, frontmatter })
  }

  return result
}

// ── Set up a temporary mock GitBook directory ────────────────────────────────

const MOCK_DIR = '/tmp/mock-gitbook-docs'

function setupMockGitBook(dir: string) {
  mkdirSync(join(dir, 'guide'), { recursive: true })
  writeFileSync(join(dir, '.gitbook.yaml'), 'root: ./\n')
  writeFileSync(join(dir, 'README.md'), '# Placeholder')
  writeFileSync(join(dir, 'guide', 'setup.md'), '# Setup')
}

function cleanup(dir: string) {
  if (existsSync(dir)) rmSync(dir, { recursive: true, force: true })
}

// ── Main ─────────────────────────────────────────────────────────────────────

async function main() {
  setupMockGitBook(MOCK_DIR)

  try {
    const result = simulateImportGitBook(MOCK_DIR, 'My Project Docs')

    console.log('Import name:  ', result.name)
    console.log('Pages found:  ', result.pages.length)
    console.log('Errors:       ', result.errors.length === 0 ? 'none' : result.errors)
    console.log('Transform stats:', result.stats)

    console.log('\n── First page ──────────────────────────────────────────')
    const first = result.pages[0]
    console.log('Path:        ', first.path)
    console.log('Frontmatter: ', first.frontmatter)
    console.log('Content preview:\n', first.content.slice(0, 200))

    // Expected output:
    // Import name:   My Project Docs
    // Pages found:   2
    // Errors:        none
    // Transform stats: { callouts: 1, tabs: 1, codeGroups: 0, steps: 0, accordions: 0, images: 0, other: 0 }
  } catch (error) {
    console.error('Import failed:', error)
  } finally {
    cleanup(MOCK_DIR)
  }
}

main()
TypeScript

importMarkdown

function importMarkdown(dir: string, name?: string): ImportResult
TypeScript

Use this to convert a directory of Markdown or MDX files into a structured import result, where folder hierarchy becomes groups and individual files become pages.

This is the starting point when migrating a plain Markdown documentation site — it recursively scans the target directory, preserves your folder structure as navigation groups, and returns a normalized ImportResult ready for further processing or output.

Parameters

NameTypeRequiredDescription
dirstring✅ YesAbsolute or relative path to the root directory containing your .md / .mdx files
namestring❌ NoOptional display name for the top-level import group. Defaults to a name derived from the directory if omitted

Returns

Returns an ImportResult object containing:

FieldTypeDescription
typestringAlways "markdown" for this importer
namestringThe resolved display name for the root group
pagesImportedPage[]Flat list of all discovered pages with normalized frontmatter
groupsobject[]Nested group structure mirroring the folder hierarchy
statsTransformStatsCounts of transformed elements (callouts, tabs, code groups, etc.)

Returns an empty result (no pages, no groups) if the directory contains no .md or .mdx files — it does not throw.
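
Example

A minimal, self-contained sketch of the folder-to-group mapping described above. The real function walks the filesystem; here an in-memory path-to-content map stands in, and simulateImportMarkdown and its helper types are illustrative names, not the library's API.

```typescript
// ── Inline types (mirrors the shapes in the Returns table) ──────────────────
interface ImportedPage {
  path: string
  title: string
  content: string
}

interface ImportGroup {
  name: string
  pages: ImportedPage[]
}

interface MarkdownImportResult {
  type: 'markdown'
  name: string
  pages: ImportedPage[]
  groups: ImportGroup[]
}

// Hypothetical stand-in for importMarkdown: takes an in-memory file map
// instead of reading a directory from disk.
function simulateImportMarkdown(
  files: Record<string, string>,
  name = 'docs'
): MarkdownImportResult {
  const pages: ImportedPage[] = []
  const groupMap = new Map<string, ImportGroup>()

  for (const [path, content] of Object.entries(files)) {
    if (!/\.(md|mdx)$/.test(path)) continue // non-markdown files are skipped
    const title = path.split('/').pop()!.replace(/\.(md|mdx)$/, '')
    const page: ImportedPage = { path, title, content }
    pages.push(page)

    // Top-level folder becomes the group; root files fall under the import name
    const parts = path.split('/')
    const groupName = parts.length > 1 ? parts[0] : name
    if (!groupMap.has(groupName)) groupMap.set(groupName, { name: groupName, pages: [] })
    groupMap.get(groupName)!.pages.push(page)
  }

  return { type: 'markdown', name, pages, groups: [...groupMap.values()] }
}

const result = simulateImportMarkdown({
  'index.md': '# Home',
  'guides/setup.md': '# Setup',
  'guides/deploy.mdx': '# Deploy',
  'assets/logo.png': '(binary)', // ignored: not .md/.mdx
})

console.log('Pages:', result.pages.length)               // Pages: 3
console.log('Groups:', result.groups.map((g) => g.name)) // Groups: [ 'docs', 'guides' ]
```

Note the non-markdown file is skipped rather than causing an error, matching the "does not throw" behavior above.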


importMintlify

function importMintlify(dir: string, name?: string): ImportResult
TypeScript

Use this to convert a Mintlify documentation project into a standardized ImportResult format, ready for further processing or migration to another docs platform.

This function scans a Mintlify project directory, reads its configuration, transforms MDX components (callouts, tabs, code groups, steps, accordions), and returns all pages with normalized frontmatter and rewritten image paths.

Parameters

NameTypeRequiredDescription
dirstring✅ YesPath to the root directory of the Mintlify project (must contain mint.json)
namestring❌ NoOptional display name for the imported documentation set. Defaults to the project name from config if omitted

Returns

Returns an ImportResult object containing:

FieldTypeDescription
namestringThe resolved name of the documentation set
pagesImportedPage[]Array of transformed pages, each with path, content, frontmatter, and metadata
statsTransformStatsCounts of transformed elements: callouts, tabs, codeGroups, steps, accordions, images, other
configobjectThe parsed Mintlify config (mint.json)

Notes

  • The dir must point to a valid Mintlify project root containing a mint.json file
  • All .mdx files in the directory tree are discovered and processed
  • Mintlify-specific MDX components are transformed to portable equivalents
  • Image paths are rewritten to be relative to the output structure
  • stats is useful for auditing how much Mintlify-specific syntax was present in your docs

importNotion

function importNotion(dir: string, name?: string): ImportResult
TypeScript

Use this to convert a Notion export folder into structured, importable page data — stripping UUIDs, normalizing frontmatter, and transforming Notion-specific syntax into clean markdown.

This is the entry point for processing a Notion HTML/Markdown export. Point it at the extracted export directory and it returns all discovered pages with their transformed content and transformation statistics.

Parameters

NameTypeRequiredDescription
dirstring✅ YesPath to the extracted Notion export folder (the directory containing UUID-named .md or .html files)
namestring❌ NoOptional label for the import result set (e.g. a workspace or project name)

Returns

Returns an ImportResult object containing:

FieldTypeDescription
namestring | undefinedThe label passed in via the name parameter
pagesImportedPage[]Array of discovered and transformed pages
statsTransformStatsCounts of each transformation applied (callouts, tabs, code groups, steps, accordions, images, other)

ImportedPage shape

FieldTypeDescription
titlestringPage title derived from filename (UUID stripped)
contentstringTransformed markdown content
pathstringRelative path within the export directory

TransformStats shape

FieldTypeDescription
calloutsnumberNumber of Notion callout blocks transformed
tabsnumberNumber of tab groups created
codeGroupsnumberNumber of code group blocks created
stepsnumberNumber of step sequences detected
accordionsnumberNumber of toggle/accordion blocks transformed
imagesnumberNumber of image references processed
othernumberCount of other miscellaneous transformations

Notes

  • The dir must point to the extracted export folder — not a .zip file
  • Notion exports use UUID suffixes on filenames (e.g. My Page a1b2c3d4e5f6.md) — these are automatically stripped from titles and internal links
  • Nested pages (sub-folders) are discovered recursively
  • If no pages are found, pages will be an empty array and all stats counts will be 0
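
Example

A small, self-contained sketch of the UUID-stripping step described in the notes above. The regex here is an illustrative approximation of the real stripping logic, not the library's implementation.

```typescript
// Strip the UUID suffix Notion appends to exported filenames,
// e.g. "My Page a1b2c3d4e5f6.md" → "My Page".
function titleFromNotionFilename(filename: string): string {
  return filename
    .replace(/\.(md|html)$/i, '')        // drop the extension
    .replace(/\s+[0-9a-f]{12,32}$/i, '') // drop the trailing hex UUID, if any
}

const exported = [
  'My Page a1b2c3d4e5f6.md',
  'Roadmap 0123456789abcdef0123456789abcdef.html',
  'Plain Title.md', // no UUID suffix: left as-is
]

for (const file of exported) {
  console.log(`${file} → ${titleFromNotionFilename(file)}`)
}
// My Page a1b2c3d4e5f6.md → My Page
// Roadmap 0123456789abcdef0123456789abcdef.html → Roadmap
// Plain Title.md → Plain Title
```

The same stripping is applied to internal links so that cross-references between exported pages keep resolving after import.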

importReadme

function importReadme(dir: string, name?: string): ImportResult
TypeScript

Use this to convert a ReadMe.io documentation export into a structured import result, transforming ReadMe-specific markdown syntax (callouts, code blocks, frontmatter) into a normalized format ready for further processing or migration.

Parameters

NameTypeRequiredDescription
dirstring✅ YesPath to the root directory of the ReadMe.io export. ReadMe organizes exports into category folders, each containing an _order.yaml file.
namestring❌ NoOptional name to assign to the import result. Useful for labeling the source when merging multiple imports.

Returns

Returns an ImportResult object containing:

FieldTypeDescription
namestring | undefinedThe name passed in, if any
pagesImportedPage[]Array of transformed documentation pages
statsTransformStatsCounts of transformed elements (callouts, tabs, code groups, steps, accordions, images, other)

ImportedPage shape

Each page in pages includes the normalized frontmatter, transformed markdown content, and metadata like the original file path and category.

TransformStats shape

Tracks how many ReadMe-specific constructs were converted during the import:

{ callouts: number, tabs: number, codeGroups: number, steps: number, accordions: number, images: number, other: number }

Notes

  • The dir must point to the root of a ReadMe export — the function discovers category folders automatically by looking for _order.yaml files.
  • ReadMe-specific callout syntax (e.g., > 📘 Note) and code block annotations are transformed to standard markdown equivalents.
  • Missing or malformed files are skipped gracefully; check stats to verify expected content was processed.
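
Example

A self-contained sketch of the callout normalization noted above: ReadMe's magic-emoji blockquotes (e.g. > 📘 Note) become plain labelled blockquotes. The emoji-to-label mapping and function name are illustrative assumptions, not the library's internals.

```typescript
// Hypothetical mapping from ReadMe's callout emojis to plain labels
const CALLOUT_LABELS: Record<string, string> = {
  '📘': 'NOTE',
  '👍': 'TIP',
  '🚧': 'WARNING',
}

// Rewrite "> 📘 Title" style lines and count how many were converted
function transformReadmeCallouts(md: string): { content: string; callouts: number } {
  let callouts = 0
  const content = md.replace(/^> (📘|👍|🚧)\s*(.*)$/gm, (_, emoji: string, title: string) => {
    callouts++
    return `> **${CALLOUT_LABELS[emoji]}**${title ? `: ${title}` : ''}`
  })
  return { content, callouts }
}

const readmeDoc = [
  '> 📘 Heads up',
  '> This endpoint is rate-limited.',
  '',
  '> 🚧 Beta',
  '> Subject to change.',
].join('\n')

const { content, callouts } = transformReadmeCallouts(readmeDoc)

console.log('Callouts transformed:', callouts) // Callouts transformed: 2
console.log(content.split('\n')[0])            // > **NOTE**: Heads up
```

Counts like this feed into the stats field of the returned ImportResult, which is how you verify the expected content was processed.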

isGitHubUrl

function isGitHubUrl(input: string): boolean
TypeScript

Use this to validate whether a string is a GitHub repository or profile URL before processing it as a GitHub resource.

Useful for input validation pipelines, routing logic, or distinguishing GitHub URLs from local file paths or other URL types.

Parameters

NameTypeRequiredDescription
inputstring✅ YesThe string to test against the GitHub URL pattern

Returns

ValueCondition
trueThe string starts with http:// or https:// (with optional www.) followed by github.com/
falseThe string is a local path, a non-GitHub URL, or any other format

Example

// Inline implementation — no external imports needed
function isGitHubUrl(input: string): boolean {
  return /^https?:\/\/(www\.)?github\.com\//.test(input)
}

// --- Example usage ---

const inputs = [
  'https://github.com/microsoft/typescript',        // ✅ valid HTTPS
  'http://github.com/facebook/react',               // ✅ valid HTTP
  'https://www.github.com/vercel/next.js',          // ✅ valid with www
  'https://gitlab.com/inkscape/inkscape',           // ❌ wrong host
  'github.com/torvalds/linux',                      // ❌ missing protocol
  '/home/user/projects/my-repo',                    // ❌ local path
  'https://github.com',                             // ❌ bare domain (no trailing slash or path)
  '',                                               // ❌ empty string
]

try {
  console.log('GitHub URL validation results:\n')

  for (const input of inputs) {
    const result = isGitHubUrl(input)
    const icon = result ? '✅' : '❌'
    console.log(`${icon} ${JSON.stringify(input)} → ${result}`)
  }

  // Practical routing example
  const userInput = process.env.REPO_SOURCE || 'https://github.com/openai/openai-node'

  if (isGitHubUrl(userInput)) {
    console.log(`\nDetected GitHub URL — proceeding with remote fetch: ${userInput}`)
  } else {
    console.log(`\nDetected local path — reading from disk: ${userInput}`)
  }

  // Expected output:
  // ✅ "https://github.com/microsoft/typescript" → true
  // ✅ "http://github.com/facebook/react" → true
  // ✅ "https://www.github.com/vercel/next.js" → true
  // ❌ "https://gitlab.com/inkscape/inkscape" → false
  // ❌ "github.com/torvalds/linux" → false
  // ❌ "/home/user/projects/my-repo" → false
  // ❌ "https://github.com" → false
  // ❌ "" → false
} catch (error) {
  console.error('Validation error:', error)
}
TypeScript

keychainAvailable

async function keychainAvailable(): Promise<boolean>
TypeScript

Use this to check whether the system keychain is available and functional before attempting to store or retrieve secrets. This prevents runtime errors in environments where keychain access is unavailable (e.g., headless CI servers, containers, or systems without a keyring daemon).

Returns true if the keychain is accessible and operational, false if the keyring module cannot be loaded or the keychain is non-functional.

Parameters

This function takes no parameters.

Returns

ValueCondition
trueKeychain module loaded successfully and a test read operation succeeded
falseKeyring module failed to load, or keychain is unavailable/non-functional

Common Use Cases

  • Guard credential storage — check availability before calling setPassword / getPassword
  • Graceful fallback — fall back to environment variables or encrypted config files when keychain is unavailable
  • CI/CD detection — identify headless environments that lack a keyring daemon

Example

// Inline types and simulate the keychainAvailable behavior
// (self-contained — no external imports required)

// Simulated keyring entry for demonstration
class MockEntry {
  constructor(private service: string, private account: string) {}
  async getPassword(): Promise<string | null> {
    // Simulates a successful keychain read
    return null // null = no stored value, but keychain IS accessible
  }
}

// Simulate the module loader — returns null in unavailable environments
async function loadKeyring(): Promise<{ Entry: typeof MockEntry } | null> {
  const isHeadless = process.env.CI === 'true' && !process.env.DBUS_SESSION_BUS_ADDRESS
  if (isHeadless) return null
  return { Entry: MockEntry }
}

const SERVICE_NAME = 'my-app'
const ACCOUNT_NAME = 'credentials'

// Replicated keychainAvailable logic
async function keychainAvailable(): Promise<boolean> {
  const mod = await loadKeyring()
  if (!mod) return false
  try {
    const entry = new mod.Entry(SERVICE_NAME, ACCOUNT_NAME)
    // Actually test that keychain is functional by attempting a read
    await entry.getPassword()
    return true
  } catch {
    return false
  }
}

// --- Usage example ---
async function main() {
  try {
    const available = await keychainAvailable()

    if (available) {
      console.log('✅ Keychain is available — safe to store secrets')
      // Proceed with: await keychain.setPassword('my-app', 'token', secretValue)
    } else {
      console.log('⚠️  Keychain unavailable — falling back to environment variable')
      const secret = process.env.APP_SECRET || 'fallback-secret'
      console.log(`Using fallback secret source: ${secret.slice(0, 4)}****`)
    }

    // Output (keychain available):     ✅ Keychain is available — safe to store secrets
    // Output (CI / no keyring daemon): ⚠️  Keychain unavailable — falling back to environment variable
  } catch (error) {
    console.error('Unexpected error checking keychain:', error)
    process.exit(1)
  }
}

main()
TypeScript

keychainDelete

async function keychainDelete(): Promise<boolean>
TypeScript

Use this to securely delete a stored password/credential from the system keychain. This is useful for logout flows, credential rotation, or cleaning up stored secrets from the OS-level secure storage.

Returns

ValueWhen
trueThe credential was found and successfully deleted from the keychain
falseThe keyring module could not be loaded, or deletion failed

Note: This function targets a fixed service/account name pair internally (SERVICE_NAME / ACCOUNT_NAME). It is a best-effort cleanup: call it during sign-out or uninstall routines to ensure no credentials are left behind in the system keychain, and check the returned boolean if you need to confirm deletion.

Example

// Simulated keychain store (mimics OS keychain behavior)
const keychainStore: Record<string, string> = {
  "MyApp:default-user": "super-secret-token-abc123"
}

// Inline types
type KeychainEntry = {
  service: string
  account: string
  deletePassword: () => void
}

// Simulate the keychain delete behavior
async function keychainDelete(): Promise<boolean> {
  const SERVICE_NAME = "MyApp"
  const ACCOUNT_NAME = "default-user"
  const storeKey = `${SERVICE_NAME}:${ACCOUNT_NAME}`

  // Simulate loading the keyring module (could fail in headless/CI environments)
  const keyringSupportedEnv = process.env.KEYRING_AVAILABLE !== "false"
  if (!keyringSupportedEnv) {
    console.warn("Keyring module unavailable (headless environment?)")
    return false
  }

  try {
    const entry: KeychainEntry = {
      service: SERVICE_NAME,
      account: ACCOUNT_NAME,
      deletePassword: () => {
        if (keychainStore[storeKey]) {
          delete keychainStore[storeKey]
        } else {
          throw new Error("No password found for entry")
        }
      }
    }

    entry.deletePassword()
    return true
  } catch (error) {
    console.error("Failed to delete keychain entry:", error)
    return false
  }
}

async function main() {
  try {
    console.log("Keychain before delete:", { ...keychainStore })
    // Output: { 'MyApp:default-user': 'super-secret-token-abc123' }

    const deleted = await keychainDelete()
    console.log("Delete succeeded:", deleted)
    // Output: Delete succeeded: true

    console.log("Keychain after delete:", { ...keychainStore })
    // Output: {}

    // Calling again when no credential exists
    const deletedAgain = await keychainDelete()
    console.log("Second delete attempt:", deletedAgain)
    // Output: Second delete attempt: false (entry no longer exists)
  } catch (error) {
    console.error("Unexpected error:", error)
  }
}

main()
TypeScript

keychainRetrieve

async function keychainRetrieve(): Promise<string | null>
TypeScript

Use this to securely retrieve a stored API key or password from the operating system's native keychain (macOS Keychain, Windows Credential Manager, or Linux Secret Service).

Returns null if the keyring is unavailable or no entry exists, making it safe to use as a fallback check before prompting the user for credentials.

Parameters

None — retrieves from a fixed service/account name configured internally.

Returns

ValueWhen
Promise<string>A stored password/API key was found in the system keychain
Promise<null>The keyring module is unavailable, or no matching entry exists

Example

// Simulated keychain store (mimics OS keychain behavior)
const keychainStore: Record<string, string> = {}

// Inline types
type KeychainEntry = {
  getPassword: () => string | null
}

type KeyringModule = {
  Entry: new (service: string, account: string) => KeychainEntry
}

// Constants (as used internally by the real function)
const SERVICE_NAME = 'my-cli-app'
const ACCOUNT_NAME = 'api-key'

// Simulate loading the native keyring module
async function loadKeyring(): Promise<KeyringModule | null> {
  // In real usage, this loads a native module (e.g., the `keyring` crate via napi)
  // and returns null if the platform keychain is unavailable.
  // Simulated here via an environment variable flag.
  const isAvailable = process.env.KEYRING_AVAILABLE !== 'false'
  if (!isAvailable) return null

  return {
    Entry: class implements KeychainEntry {
      constructor(private service: string, private account: string) {}
      getPassword(): string | null {
        const key = `${this.service}:${this.account}`
        return keychainStore[key] ?? null
      }
    }
  }
}

// Self-contained implementation of keychainRetrieve
async function keychainRetrieve(): Promise<string | null> {
  const mod = await loadKeyring()
  if (!mod) return null
  try {
    const entry = new mod.Entry(SERVICE_NAME, ACCOUNT_NAME)
    const password = entry.getPassword()
    return password ?? null
  } catch {
    return null
  }
}

// --- Usage Example ---

// Pre-populate the keychain with a stored key (simulates a prior save)
keychainStore[`${SERVICE_NAME}:${ACCOUNT_NAME}`] = process.env.API_KEY || 'sk-abc123-your-stored-api-key'

async function main() {
  try {
    const apiKey = await keychainRetrieve()

    if (apiKey) {
      console.log('Retrieved API key from keychain:', apiKey)
      // Output: Retrieved API key from keychain: sk-abc123-your-stored-api-key
    } else {
      console.log('No API key found in keychain — prompting user for credentials...')
      // Output when keychain is empty or unavailable
    }
  } catch (error) {
    console.error('Unexpected error accessing keychain:', error)
  }
}

main()
TypeScript

keychainStore

async function keychainStore(key: string): Promise<boolean>
TypeScript

Use this to securely store an API key or secret in the operating system's native keychain (macOS Keychain, Windows Credential Manager, or Linux Secret Service).

This is the recommended way to persist sensitive credentials locally without storing them in plaintext config files or environment variables.

Parameters

NameTypeRequiredDescription
keystring✅ YesThe secret value (e.g. API key, token, password) to store in the system keychain

Returns

ValueWhen
Promise<true>The secret was successfully stored in the keychain
Promise<false>Storage failed — keychain module unavailable, permissions denied, or an unexpected error occurred

Note: The key is stored under a fixed service name and account name defined internally. Subsequent calls will overwrite any previously stored value.

Example

// Inline types to simulate keychainStore behavior
// (self-contained — no external imports required)

// Simulated in-memory keychain store for demonstration
const mockKeychain: Record<string, string> = {}

const SERVICE_NAME = 'my-app'
const ACCOUNT_NAME = 'default'

// Simulated keychainStore implementation
async function keychainStore(key: string): Promise<boolean> {
  if (!key || typeof key !== 'string') return false

  try {
    // In the real implementation, this uses the OS keychain via a native module.
    // Here we simulate storing in an in-memory map.
    const entryKey = `${SERVICE_NAME}:${ACCOUNT_NAME}`
    mockKeychain[entryKey] = key
    return true
  } catch {
    return false
  }
}

// Helper to simulate retrieval (for verification in this example)
async function keychainGet(): Promise<string | null> {
  const entryKey = `${SERVICE_NAME}:${ACCOUNT_NAME}`
  return mockKeychain[entryKey] ?? null
}

async function main() {
  const apiKey = process.env.MY_API_KEY || 'sk-prod-abc123xyz456-your-real-key-here'

  try {
    console.log('Storing API key in keychain...')
    const success = await keychainStore(apiKey)

    if (success) {
      console.log('✅ Key stored successfully')

      // Verify it was stored (optional retrieval check)
      const stored = await keychainGet()
      console.log('🔑 Retrieved from keychain:', stored)
      // Output: 🔑 Retrieved from keychain: sk-prod-abc123xyz456-your-real-key-here
    } else {
      console.warn('⚠️  Failed to store key — keychain may be unavailable')
    }

    // Demonstrate failure case: empty key
    const failResult = await keychainStore('')
    console.log('Empty key result (expected false):', failResult)
    // Output: Empty key result (expected false): false

  } catch (error) {
    console.error('Unexpected error during keychain operation:', error)
  }
}

main()
TypeScript

loadConfig

function loadConfig(configPath?: string): Config
TypeScript

Use this to load and parse a YAML/JSON configuration file for the Skrypt tool, with optional path override. When no path is provided, it searches for a config file in default locations (e.g., skrypt.config.yml in the current directory). When a path is provided, it loads that specific file — throwing immediately if it doesn't exist.

Parameters

NameTypeRequiredDescription
configPathstring❌ NoExplicit path to a config file. If omitted, default locations are searched.

Returns

Returns a Config object with all resolved configuration values. Missing fields are filled in from built-in defaults (DEFAULT_CONFIG). Throws an Error if an explicit configPath is provided but the file does not exist, or if the file cannot be parsed.

When to use each form

ScenarioCall
Use project-level config auto-discovered from CWDloadConfig()
Use a specific config file (e.g., in CI)loadConfig('./configs/skrypt.prod.yml')
Override config location via environment variableloadConfig(process.env.SKRYPT_CONFIG)

Example

import { existsSync, readFileSync } from 'fs'
import { join } from 'path'

// ── Inline types (mirrors Skrypt internals) ────────────────────────────────
type LLMProvider = 'openai' | 'anthropic' | 'gemini'

interface Config {
  provider: LLMProvider
  model: string
  outputDir: string
  include: string[]
  exclude: string[]
  apiKey?: string
}

const DEFAULT_CONFIG: Config = {
  provider: 'openai',
  model: 'gpt-4o',
  outputDir: './docs',
  include: ['src/**/*.ts'],
  exclude: ['**/*.test.ts', 'node_modules'],
}

// ── Inline YAML parser (minimal, handles simple key: value pairs) ────────────
function parseSimpleYaml(content: string): Record<string, unknown> {
  const result: Record<string, unknown> = {}
  for (const line of content.split('\n')) {
    const trimmed = line.trim()
    if (!trimmed || trimmed.startsWith('#')) continue
    const colonIdx = trimmed.indexOf(':')
    if (colonIdx === -1) continue
    const key = trimmed.slice(0, colonIdx).trim()
    const value = trimmed.slice(colonIdx + 1).trim()
    // Parse arrays written as  key: [a, b, c]
    if (value.startsWith('[') && value.endsWith(']')) {
      result[key] = value
        .slice(1, -1)
        .split(',')
        .map((v) => v.trim().replace(/['"]/g, ''))
    } else {
      result[key] = value.replace(/['"]/g, '')
    }
  }
  return result
}

// ── Self-contained loadConfig implementation ─────────────────────────────────
const DEFAULT_SEARCH_PATHS = [
  'skrypt.config.yml',
  'skrypt.config.yaml',
  'skrypt.config.json',
  '.skrypt.yml',
]

function loadConfig(configPath?: string): Config {
  let rawContent: string

  if (configPath) {
    // Explicit path — fail fast if missing
    if (!existsSync(configPath)) {
      throw new Error(`Config file not found: ${configPath}`)
    }
    rawContent = readFileSync(configPath, 'utf-8')
  } else {
    // Search default locations relative to CWD
    const found = DEFAULT_SEARCH_PATHS.map((p) => join(process.cwd(), p)).find(existsSync)
    if (!found) {
      console.warn('[skrypt] No config file found — using defaults.')
      return { ...DEFAULT_CONFIG }
    }
    rawContent = readFileSync(found, 'utf-8')
  }

  // Parse file (JSON or YAML)
  let parsed: Record<string, unknown>
  try {
    parsed = rawContent.trimStart().startsWith('{')
      ? JSON.parse(rawContent)
      : parseSimpleYaml(rawContent)
  } catch (err) {
    throw new Error(`Failed to parse config: ${(err as Error).message}`)
  }

  // Merge with defaults — explicit values win
  return {
    ...DEFAULT_CONFIG,
    ...parsed,
    apiKey: (parsed.apiKey as string | undefined) ?? process.env.SKRYPT_API_KEY,
  } as Config
}

// ── Demo ─────────────────────────────────────────────────────────────────────
async function main() {
  // ── 1. No config file → falls back to defaults ──────────────────────────
  try {
    const config = loadConfig()
    console.log('Loaded config (defaults):', config)
    // Output: { provider: 'openai', model: 'gpt-4o', outputDir: './docs', ... }
  } catch (err) {
    console.error('Unexpected error:', err)
  }

  // ── 2. Explicit path that doesn't exist → throws immediately ─────────────
  try {
    loadConfig('/nonexistent/path/skrypt.yml')
  } catch (err) {
    console.error('Expected error:', (err as Error).message)
    // Output: Config file not found: /nonexistent/path/skrypt.yml
  }

  // ── 3. Path from environment variable (CI-friendly) ──────────────────────
  const envPath = process.env.SKRYPT_CONFIG // e.g. set in CI pipeline
  if (envPath) {
    try {
      const ciConfig = loadConfig(envPath)
      console.log('CI config loaded:', ciConfig)
    } catch (err) {
      console.error('CI config error:', (err as Error).message)
    }
  } else {
    console.log('SKRYPT_CONFIG not set — skipping env-path demo.')
    // Output: SKRYPT_CONFIG not set — skipping env-path demo.
  }
}

main()
TypeScript

markNoticeSeen

function markNoticeSeen(id: string): void
TypeScript

Use this to permanently record that a user has seen a specific notice, warning, or announcement — preventing it from being shown again in future sessions.

The notice state is persisted to disk, so once marked as seen, the record survives process restarts. Each notice is stored with the exact timestamp it was acknowledged.

Parameters

NameTypeRequiredDescription
idstringUnique identifier for the notice (e.g. "welcome-v2", "deprecation-warning-api-v1")

Returns

void — The function writes to disk and returns nothing. After calling this, any subsequent call to hasSeenNotice(id) will return true.

Behavior Notes

  • Calling markNoticeSeen with the same id multiple times is safe — it will overwrite the timestamp but not create duplicates
  • The timestamp is stored in ISO 8601 format (e.g. "2024-03-15T10:30:00.000Z")
  • Notices are stored per-user in a config file in the home directory

Example

import { existsSync, readFileSync, writeFileSync, mkdirSync } from 'fs'
import { join } from 'path'
import { homedir } from 'os'

// --- Inline implementation (do not import from skrypt) ---

const NOTICES_DIR = join(homedir(), '.skrypt')
const NOTICES_FILE = join(NOTICES_DIR, 'notices.json')

type NoticesState = {
  seen: Record<string, string> // notice id -> ISO timestamp
}

function loadNotices(): NoticesState {
  if (!existsSync(NOTICES_FILE)) {
    return { seen: {} }
  }
  try {
    const raw = readFileSync(NOTICES_FILE, 'utf-8')
    return JSON.parse(raw) as NoticesState
  } catch {
    return { seen: {} }
  }
}

function saveNotices(state: NoticesState): void {
  if (!existsSync(NOTICES_DIR)) {
    mkdirSync(NOTICES_DIR, { recursive: true })
  }
  writeFileSync(NOTICES_FILE, JSON.stringify(state, null, 2), 'utf-8')
}

function hasSeenNotice(id: string): boolean {
  const state = loadNotices()
  return id in state.seen
}

function markNoticeSeen(id: string): void {
  const state = loadNotices()
  state.seen[id] = new Date().toISOString()
  saveNotices(state)
}

// --- Usage example ---

const NOTICE_ID = 'deprecation-warning-api-v1'

try {
  console.log('Before marking seen:', hasSeenNotice(NOTICE_ID))
  // Output: Before marking seen: false

  markNoticeSeen(NOTICE_ID)
  console.log('After marking seen:', hasSeenNotice(NOTICE_ID))
  // Output: After marking seen: true

  // Verify the timestamp was recorded
  const state = loadNotices()
  console.log('Recorded at:', state.seen[NOTICE_ID])
  // Output: Recorded at: 2024-03-15T10:30:00.000Z

  // Safe to call multiple times — just updates the timestamp
  markNoticeSeen(NOTICE_ID)
  console.log('Calling again is safe. Still seen:', hasSeenNotice(NOTICE_ID))
  // Output: Calling again is safe. Still seen: true

} catch (error) {
  console.error('Failed to mark notice as seen:', error)
}
TypeScript

mergeTopicConfig

function mergeTopicConfig(userConfig: Partial<TopicConfig>, defaults: TopicConfig = DEFAULT_TOPIC_CONFIG): TopicConfig
TypeScript

Use this to safely merge a partial user-provided topic configuration with a set of defaults, ensuring all required fields are always present in the final config.

This is useful when users supply only a subset of configuration options (e.g., overriding a few topics) and you need a complete, valid TopicConfig to work with — without manually checking for missing fields.

Parameters

NameTypeRequiredDescription
userConfigPartial<TopicConfig>✅ YesThe user-supplied configuration. Only the fields provided will override the defaults.
defaultsTopicConfig❌ NoThe base configuration to fall back on. Defaults to DEFAULT_TOPIC_CONFIG if not provided.

Returns

Returns a complete TopicConfig object with all fields populated. User-provided values take precedence over defaults for any overlapping keys within topics.

Example

// Inline types (do not import from skrypt)
type Topic = {
  label: string
  description?: string
  order?: number
}

type TopicConfig = {
  topics: Record<string, Topic>
}

// Simulated DEFAULT_TOPIC_CONFIG
const DEFAULT_TOPIC_CONFIG: TopicConfig = {
  topics: {
    guides: { label: 'Guides', description: 'Step-by-step tutorials', order: 1 },
    api: { label: 'API Reference', description: 'Full API documentation', order: 2 },
    examples: { label: 'Examples', description: 'Code examples', order: 3 },
  },
}

// Simulated mergeTopicConfig implementation
function mergeTopicConfig(
  userConfig: Partial<TopicConfig>,
  defaults: TopicConfig = DEFAULT_TOPIC_CONFIG
): TopicConfig {
  return {
    topics: { ...defaults.topics, ...userConfig.topics },
  }
}

// --- Usage ---

// User only wants to override the 'api' topic and add a new 'changelog' topic
const userConfig: Partial<TopicConfig> = {
  topics: {
    api: { label: 'API Docs', description: 'Customized API reference', order: 10 },
    changelog: { label: 'Changelog', description: 'Release history', order: 99 },
  },
}

try {
  const finalConfig = mergeTopicConfig(userConfig)

  console.log('Merged TopicConfig:', JSON.stringify(finalConfig, null, 2))
  // Expected output:
  // {
  //   "topics": {
  //     "guides":    { label: 'Guides',     description: 'Step-by-step tutorials',  order: 1  },
  //     "api":       { label: 'API Docs',   description: 'Customized API reference', order: 10 },  // overridden
  //     "examples":  { label: 'Examples',   description: 'Code examples',           order: 3  },
  //     "changelog": { label: 'Changelog',  description: 'Release history',         order: 99 }   // added
  //   }
  // }

  // Using custom defaults instead of DEFAULT_TOPIC_CONFIG
  const minimalDefaults: TopicConfig = {
    topics: {
      home: { label: 'Home', order: 0 },
    },
  }

  const configWithCustomDefaults = mergeTopicConfig(
    { topics: { blog: { label: 'Blog', order: 5 } } },
    minimalDefaults
  )

  console.log('With custom defaults:', JSON.stringify(configWithCustomDefaults, null, 2))
  // Expected output:
  // {
  //   "topics": {
  //     "home": { label: 'Home', order: 0 },
  //     "blog": { label: 'Blog', order: 5 }
  //   }
  // }
} catch (error) {
  console.error('Failed to merge topic config:', error)
}
TypeScript

normalizeFrontmatter

function normalizeFrontmatter(content: string, defaults?: FrontmatterDefaults): string
TypeScript

Use this to standardize frontmatter fields in markdown content to Skrypt format, optionally injecting default values when fields are missing.

This is useful when processing markdown files from multiple sources that may use inconsistent frontmatter conventions — it normalizes them into a predictable structure while preserving the document body.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | Full markdown string, including any existing YAML frontmatter block |
| defaults | FrontmatterDefaults | No | Default frontmatter values to apply when fields are absent from the content |

Returns

| Condition | Return Value |
| --- | --- |
| Content has frontmatter or defaults provided | string — the document body with a normalized frontmatter block in front |
| No frontmatter and no defaults | string — original content unchanged |
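
For overlapping keys, fields already present in the content win over defaults — defaults only fill gaps. A minimal standalone sketch of that precedence rule (plain object spread, not the actual Skrypt implementation):

```typescript
// Precedence sketch: defaults fill gaps, but parsed frontmatter wins on conflict.
const parsed = { title: 'My Post', author: 'Jane' }     // fields found in the document
const defaults = { title: 'Untitled', tags: ['docs'] }  // supplied defaults

// Spread defaults first so parsed keys override them.
const merged = { ...defaults, ...parsed }

console.log(merged.title)  // 'My Post' — the document's own title is kept
console.log(merged.tags)   // [ 'docs' ] — missing field filled from defaults
console.log(merged.author) // 'Jane'
```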

Example

// Inline types (do not import from skrypt)
type FrontmatterDefaults = {
  title?: string
  description?: string
  tags?: string[]
  author?: string
  date?: string
  [key: string]: unknown
}

// --- Simulated implementation of normalizeFrontmatter ---
function parseFrontmatterRaw(content: string): { data: Record<string, unknown> | null; body: string } {
  const match = content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/)
  if (!match) return { data: null, body: content }

  const data: Record<string, unknown> = {}
  match[1].split('\n').forEach(line => {
    const [key, ...rest] = line.split(':')
    if (key && rest.length) data[key.trim()] = rest.join(':').trim()
  })

  return { data, body: match[2] }
}

function normalizeFrontmatter(content: string, defaults?: FrontmatterDefaults): string {
  const { data, body } = parseFrontmatterRaw(content)
  if (!data && !defaults) return content

  const fm: Record<string, unknown> = { ...(defaults || {}), ...(data || {}) }

  // Normalize known field aliases to Skrypt format
  if (fm['slug'] === undefined && fm['title']) {
    fm['slug'] = String(fm['title']).toLowerCase().replace(/\s+/g, '-')
  }

  const yamlLines = Object.entries(fm)
    .map(([k, v]) => `${k}: ${Array.isArray(v) ? JSON.stringify(v) : v}`)
    .join('\n')

  return `---\n${yamlLines}\n---\n${body}`
}
// --- End simulation ---

// Example 1: Normalize existing frontmatter with defaults filling in gaps
const markdownWithPartialFrontmatter = `---
title: Getting Started with TypeScript
author: Jane Doe
---

# Introduction

TypeScript adds static typing to JavaScript.
`

const defaults: FrontmatterDefaults = {
  description: 'No description provided',
  tags: ['docs', 'guide'],
  date: new Date().toISOString().split('T')[0],
}

try {
  const normalized = normalizeFrontmatter(markdownWithPartialFrontmatter, defaults)
  console.log('=== Normalized with defaults ===')
  console.log(normalized)
  // Output:
  // ---
  // description: No description provided
  // tags: ["docs","guide"]
  // date: 2024-01-15
  // title: Getting Started with TypeScript
  // author: Jane Doe
  // slug: getting-started-with-typescript
  // ---
  //
  // # Introduction
  // ...
} catch (error) {
  console.error('Normalization failed:', error)
}

// Example 2: Content with no frontmatter and no defaults — returned unchanged
const plainMarkdown = `# Just a plain doc\n\nNo frontmatter here.`

try {
  const result = normalizeFrontmatter(plainMarkdown)
  console.log('\n=== No frontmatter, no defaults (unchanged) ===')
  console.log(result)
  // Output: # Just a plain doc
  //
  // No frontmatter here.
} catch (error) {
  console.error('Normalization failed:', error)
}

// Example 3: Content with no frontmatter but defaults provided
const bodyOnly = `# Auto-tagged Doc\n\nThis doc had no frontmatter.`

try {
  const withInjectedDefaults = normalizeFrontmatter(bodyOnly, {
    author: process.env.DOC_AUTHOR || 'docs-bot',
    tags: ['auto-generated'],
  })
  console.log('\n=== Defaults injected into frontmatter-less doc ===')
  console.log(withInjectedDefaults)
  // Output:
  // ---
  // author: docs-bot
  // tags: ["auto-generated"]
  // ---
  // # Auto-tagged Doc
  // ...
} catch (error) {
  console.error('Normalization failed:', error)
}
TypeScript

organizeByTopic

function organizeByTopic(docs: GeneratedDoc[], config: TopicConfig = DEFAULT_TOPIC_CONFIG): Topic[]
TypeScript

Use this to group a flat list of generated documentation objects into organized topic clusters — ideal for building navigation menus, documentation sites, or categorized API references.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| docs | GeneratedDoc[] | Yes | Array of generated documentation objects to organize. Each doc must include at least a topic or category field used for grouping. |
| config | TopicConfig | No | Configuration controlling how topics are formed (e.g., custom grouping rules, fallback topic name). Defaults to DEFAULT_TOPIC_CONFIG. |

Returns

Returns a Topic[] array where each Topic contains:

  • A topic name/slug
  • The subset of GeneratedDoc items belonging to that topic
  • Metadata useful for rendering navigation or index pages

Returns an empty array if docs is empty.

Example

// ── Inline types (do NOT import from skrypt) ──────────────────────────────

type GeneratedDoc = {
  id: string
  title: string
  topic: string          // primary grouping key
  content: string
  slug: string
}

type Topic = {
  name: string
  slug: string
  docs: GeneratedDoc[]
}

type TopicConfig = {
  fallbackTopic: string
  sortAlphabetically: boolean
}

// ── Inline DEFAULT_TOPIC_CONFIG ──────────────────────────────────────────────

const DEFAULT_TOPIC_CONFIG: TopicConfig = {
  fallbackTopic: "General",
  sortAlphabetically: true,
}

// ── Inline slugify helper ────────────────────────────────────────────────────

function slugify(text: string): string {
  return text.toLowerCase().replace(/\s+/g, "-").replace(/[^a-z0-9-]/g, "")
}

// ── Inline organizeByTopic implementation ───────────────────────────────────

function organizeByTopic(
  docs: GeneratedDoc[],
  config: TopicConfig = DEFAULT_TOPIC_CONFIG
): Topic[] {
  const topicDocs = new Map<string, GeneratedDoc[]>()

  for (const doc of docs) {
    const topicName = doc.topic?.trim() || config.fallbackTopic
    if (!topicDocs.has(topicName)) {
      topicDocs.set(topicName, [])
    }
    topicDocs.get(topicName)!.push(doc)
  }

  const topics: Topic[] = Array.from(topicDocs.entries()).map(([name, docs]) => ({
    name,
    slug: slugify(name),
    docs,
  }))

  if (config.sortAlphabetically) {
    topics.sort((a, b) => a.name.localeCompare(b.name))
  }

  return topics
}

// ── Realistic usage example ──────────────────────────────────────────────────

const generatedDocs: GeneratedDoc[] = [
  { id: "doc-1", title: "createUser",      topic: "Authentication", content: "Creates a new user...",      slug: "create-user"      },
  { id: "doc-2", title: "deleteUser",      topic: "Authentication", content: "Deletes a user...",          slug: "delete-user"      },
  { id: "doc-3", title: "uploadFile",      topic: "Storage",        content: "Uploads a file to S3...",    slug: "upload-file"      },
  { id: "doc-4", title: "deleteFile",      topic: "Storage",        content: "Removes a file from S3...",  slug: "delete-file"      },
  { id: "doc-5", title: "sendEmail",       topic: "Notifications",  content: "Sends a transactional email...", slug: "send-email"  },
  { id: "doc-6", title: "legacyHelper",    topic: "",               content: "Old utility function...",    slug: "legacy-helper"    },
]

try {
  // Default config — groups docs and sorts topics alphabetically
  const topics = organizeByTopic(generatedDocs)

  console.log(`Organized into ${topics.length} topics:\n`)
  for (const topic of topics) {
    console.log(`📂 ${topic.name} (slug: "${topic.slug}")`)
    for (const doc of topic.docs) {
      console.log(`   • ${doc.title}`)
    }
  }

  // Custom config — disable sorting, use a custom fallback topic name
  console.log("\n── Custom config (unsorted, custom fallback) ──")
  const customTopics = organizeByTopic(generatedDocs, {
    fallbackTopic: "Miscellaneous",
    sortAlphabetically: false,
  })
  console.log("Topics:", customTopics.map((t) => t.name))

  // Expected output:
  // Organized into 4 topics:
  // 📂 Authentication (slug: "authentication")
  //    • createUser
  //    • deleteUser
  // 📂 General (slug: "general")
  //    • legacyHelper
  // 📂 Notifications (slug: "notifications")
  //    • sendEmail
  // 📂 Storage (slug: "storage")
  //    • uploadFile
  //    • deleteFile
  //
  // ── Custom config (unsorted, custom fallback) ──
  // Topics: [ 'Authentication', 'Storage', 'Notifications', 'Miscellaneous' ]
} catch (error) {
  console.error("Failed to organize docs:", error)
}
TypeScript

parseGitHubUrl

function parseGitHubUrl(url: string): { owner: string; repo: string; path: string; ref: string }
TypeScript

Use this to extract the owner, repository name, branch/tag reference, and file path from a GitHub URL — useful when building tools that fetch repo contents, generate documentation links, or construct GitHub API requests.

Parses URLs in the format: https://github.com/owner/repo/tree/branch/path/to/file

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| url | string | Yes | A full GitHub URL pointing to a repo, branch, or path |

Returns

An object with the following fields:

| Field | Type | Description |
| --- | --- | --- |
| owner | string | The GitHub username or organization |
| repo | string | The repository name |
| ref | string | The branch or tag name (e.g. main, v1.0.0) |
| path | string | The file or directory path within the repo (empty string if at root) |

Throws an Error if the URL is not a valid GitHub URL.
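
Note that only /tree/ URLs are covered by the documented format. File links copied from the GitHub UI often use /blob/ instead, and a /tree/-only pattern lets those silently fall back to the defaults (ref main, empty path). If you need to accept both, widening the pattern is a small change — a hedged sketch, not the actual skrypt implementation:

```typescript
// Sketch: accept both /tree/ and /blob/ segment markers (assumption — the
// documented parser only handles /tree/).
function parseGitHubUrlLoose(url: string) {
  const match = url.match(
    /^https?:\/\/(www\.)?github\.com\/([^/]+)\/([^/]+)(?:\/(?:tree|blob)\/([^/]+)(?:\/(.*))?)?/
  )
  if (!match) throw new Error(`Invalid GitHub URL: ${url}`)
  return { owner: match[2], repo: match[3], ref: match[4] || 'main', path: match[5] || '' }
}

console.log(parseGitHubUrlLoose('https://github.com/acme-org/awesome-project/blob/main/README.md'))
// { owner: 'acme-org', repo: 'awesome-project', ref: 'main', path: 'README.md' }
```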

Example

// Inline implementation of parseGitHubUrl (do not import from skrypt)
function parseGitHubUrl(url: string): { owner: string; repo: string; path: string; ref: string } {
  const match = url.match(
    /^https?:\/\/(www\.)?github\.com\/([^/]+)\/([^/]+)(?:\/tree\/([^/]+)(?:\/(.*))?)?/
  )
  if (!match) {
    throw new Error(`Invalid GitHub URL: ${url}`)
  }
  return {
    owner: match[2],
    repo: match[3],
    ref: match[4] || 'main',
    path: match[5] || '',
  }
}

// --- Examples ---

try {
  // 1. URL pointing to a specific file on a branch
  const fileUrl = 'https://github.com/acme-org/awesome-project/tree/main/src/utils/helpers.ts'
  const file = parseGitHubUrl(fileUrl)
  console.log('File URL parsed:', file)
  // Output: { owner: 'acme-org', repo: 'awesome-project', ref: 'main', path: 'src/utils/helpers.ts' }

  // 2. URL pointing to a subdirectory on another branch
  const dirUrl = 'https://github.com/acme-org/awesome-project/tree/develop/src/components'
  const dir = parseGitHubUrl(dirUrl)
  console.log('Directory URL parsed:', dir)
  // Output: { owner: 'acme-org', repo: 'awesome-project', ref: 'develop', path: 'src/components' }
  // Note: branch names containing a slash (e.g. feature/new-ui) are ambiguous in a URL,
  // so this parser takes only the first segment after /tree/ as the ref.

  // 3. Root repo URL (no branch or path specified)
  const rootUrl = 'https://github.com/acme-org/awesome-project'
  const root = parseGitHubUrl(rootUrl)
  console.log('Root URL parsed:', root)
  // Output: { owner: 'acme-org', repo: 'awesome-project', ref: 'main', path: '' }

  // 4. Use parsed components to build a GitHub API request URL
  const apiBase = 'https://api.github.com/repos'
  const { owner, repo, ref, path } = file
  const apiUrl = `${apiBase}/${owner}/${repo}/contents/${path}?ref=${ref}`
  console.log('GitHub API URL:', apiUrl)
  // Output: https://api.github.com/repos/acme-org/awesome-project/contents/src/utils/helpers.ts?ref=main

  // 5. Invalid URL throws a clear error
  parseGitHubUrl('https://gitlab.com/someone/repo')
} catch (error) {
  console.error('Parse failed:', (error as Error).message)
  // Output: Parse failed: Invalid GitHub URL: https://gitlab.com/someone/repo
}
TypeScript

postInlineComments

async function postInlineComments(config: PRCommentConfig, issues: DocumentationIssue[]): Promise<CommentResult[]>
TypeScript

Use this to post inline review comments on specific lines of a pull request, flagging documentation issues directly in the GitHub code review interface.

This function iterates over a list of documentation issues and creates inline PR comments at the exact file and line locations where problems were detected — ideal for automated documentation linting in CI pipelines.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| config | PRCommentConfig | Yes | GitHub PR connection details including token, repo owner, repo name, PR number, and commit SHA |
| issues | DocumentationIssue[] | Yes | Array of documentation issues, each specifying the file path, line number, and description of the problem |

Returns

Returns Promise<CommentResult[]> — an array of results, one per issue, each containing:

| Field | Type | Description |
| --- | --- | --- |
| success | boolean | Whether the comment was posted successfully |
| commentId | number \| undefined | The GitHub comment ID if successfully created |
| error | string \| undefined | Error message if the comment failed to post |
| issue | DocumentationIssue | The original issue that was processed |

Example

// ─── Inline type definitions (no external imports needed) ───────────────────

type PRCommentConfig = {
  token: string;
  owner: string;
  repo: string;
  pullNumber: number;
  commitSha: string;
};

type DocumentationIssue = {
  filePath: string;
  line: number;
  message: string;
  severity: 'error' | 'warning' | 'info';
};

type CommentResult = {
  success: boolean;
  commentId?: number;
  error?: string;
  issue: DocumentationIssue;
};

// ─── Simulated GitHub API call ───────────────────────────────────────────────

async function postToGitHub(
  config: PRCommentConfig,
  issue: DocumentationIssue
): Promise<{ id: number }> {
  // In production, this would call:
  // POST /repos/{owner}/{repo}/pulls/{pull_number}/comments
  console.log(
    `  → Posting comment on ${issue.filePath}:${issue.line} — "${issue.message}"`
  );
  // Simulate network delay
  await new Promise((res) => setTimeout(res, 50));
  return { id: Math.floor(Math.random() * 900000) + 100000 };
}

// ─── Core function implementation ────────────────────────────────────────────

async function postInlineComments(
  config: PRCommentConfig,
  issues: DocumentationIssue[]
): Promise<CommentResult[]> {
  const token = config.token || process.env.GITHUB_TOKEN || '';

  if (!token) {
    throw new Error('GitHub token is required. Set GITHUB_TOKEN env variable.');
  }

  const results: CommentResult[] = await Promise.all(
    issues.map(async (issue): Promise<CommentResult> => {
      try {
        const response = await postToGitHub({ ...config, token }, issue);
        return {
          success: true,
          commentId: response.id,
          issue,
        };
      } catch (err) {
        return {
          success: false,
          error: err instanceof Error ? err.message : 'Unknown error',
          issue,
        };
      }
    })
  );

  return results;
}

// ─── Example usage ───────────────────────────────────────────────────────────

const config: PRCommentConfig = {
  token: process.env.GITHUB_TOKEN || 'ghp_your_token_here',
  owner: 'acme-corp',
  repo: 'backend-api',
  pullNumber: 42,
  commitSha: 'a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2',
};

const issues: DocumentationIssue[] = [
  {
    filePath: 'src/auth/tokenService.ts',
    line: 34,
    message: 'Missing JSDoc comment for exported function `generateToken`',
    severity: 'error',
  },
  {
    filePath: 'src/users/userController.ts',
    line: 78,
    message: '@param description is missing for parameter `userId`',
    severity: 'warning',
  },
  {
    filePath: 'src/utils/dateHelpers.ts',
    line: 12,
    message: 'Consider adding a @returns tag describing the formatted string',
    severity: 'info',
  },
];

async function main() {
  console.log(`Posting ${issues.length} inline review comments to PR #${config.pullNumber}...\n`);

  try {
    const results = await postInlineComments(config, issues);

    const succeeded = results.filter((r) => r.success);
    const failed = results.filter((r) => !r.success);

    console.log(`\n✅ Successfully posted: ${succeeded.length} comment(s)`);
    succeeded.forEach((r) => {
      console.log(`   Comment #${r.commentId} → ${r.issue.filePath}:${r.issue.line}`);
    });

    if (failed.length > 0) {
      console.log(`\n❌ Failed to post: ${failed.length} comment(s)`);
      failed.forEach((r) => {
        console.log(`   ${r.issue.filePath}:${r.issue.line} — ${r.error}`);
      });
    }

    // Expected output (comment IDs are randomly generated and will vary):
    // Posting 3 inline review comments to PR #42...
    //   → Posting comment on src/auth/tokenService.ts:34 — "Missing JSDoc..."
    //   → Posting comment on src/users/userController.ts:78 — "@param description..."
    //   → Posting comment on src/utils/dateHelpers.ts:12 — "Consider adding..."
    //
    // ✅ Successfully posted: 3 comment(s)
    //    Comment #482910 → src/auth/tokenService.ts:34
    //    Comment #739204 → src/users/userController.ts:78
    //    Comment #201847 → src/utils/dateHelpers.ts:12
  } catch (error) {
    console.error('Fatal error posting comments:', error);
    process.exit(1);
  }
}

main();
TypeScript

postPRComment

async function postPRComment(config: PRCommentConfig, issues: DocumentationIssue[]): Promise<CommentResult>
TypeScript

Use this to automatically post documentation quality feedback as a review comment on a GitHub Pull Request, summarizing all detected documentation issues in one structured comment.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| config | PRCommentConfig | Yes | GitHub PR connection details including repo owner, repo name, PR number, and optional auth token |
| issues | DocumentationIssue[] | Yes | Array of documentation issues to report (missing docs, incorrect types, etc.) |

Returns

Returns a Promise<CommentResult> that resolves with:

| Field | Type | Description |
| --- | --- | --- |
| success | boolean | Whether the comment was posted successfully |
| commentId | number | The GitHub comment ID of the newly created comment |
| url | string | Direct URL to the posted comment on GitHub |

Returns { success: false, error: string } if the request fails (bad token, repo not found, insufficient permissions, etc.).

Notes

  • Falls back to process.env.GITHUB_TOKEN if no token is provided in config
  • Requires the token to have pull_requests: write permission on the target repo
  • All issues are batched into a single comment to avoid spamming the PR timeline

Example

// --- Inline types (do not import from skrypt) ---
type PRCommentConfig = {
  owner: string;
  repo: string;
  prNumber: number;
  token?: string;
};

type DocumentationIssue = {
  file: string;
  line: number;
  severity: 'error' | 'warning' | 'info';
  message: string;
};

type CommentResult =
  | { success: true; commentId: number; url: string }
  | { success: false; error: string };

// --- Self-contained implementation mirroring Skrypt behavior ---
const GITHUB_API = 'https://api.github.com';

async function postPRComment(
  config: PRCommentConfig,
  issues: DocumentationIssue[]
): Promise<CommentResult> {
  const token = config.token || process.env.GITHUB_TOKEN;

  if (!token) {
    return { success: false, error: 'No GitHub token provided. Set GITHUB_TOKEN or pass config.token.' };
  }

  const issueLines = issues
    .map(i => `- **${i.severity.toUpperCase()}** \`${i.file}:${i.line}\` — ${i.message}`)
    .join('\n');

  const body = issues.length === 0
    ? '✅ **Skrypt:** No documentation issues found!'
    : `## 📝 Documentation Issues Found\n\n${issueLines}\n\n> Posted by Skrypt`;

  const url = `${GITHUB_API}/repos/${config.owner}/${config.repo}/issues/${config.prNumber}/comments`;

  const response = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
      Accept: 'application/vnd.github+json',
    },
    body: JSON.stringify({ body }),
  });

  if (!response.ok) {
    const errorText = await response.text();
    return { success: false, error: `GitHub API error ${response.status}: ${errorText}` };
  }

  const data = await response.json() as { id: number; html_url: string };
  return { success: true, commentId: data.id, url: data.html_url };
}

// --- Usage example ---
const config: PRCommentConfig = {
  owner: 'acme-corp',
  repo: 'backend-api',
  prNumber: 42,
  token: process.env.GITHUB_TOKEN || 'ghp_your_token_here',
};

const issues: DocumentationIssue[] = [
  {
    file: 'src/auth/login.ts',
    line: 14,
    severity: 'error',
    message: 'Exported function `loginUser` is missing a JSDoc comment.',
  },
  {
    file: 'src/utils/format.ts',
    line: 38,
    severity: 'warning',
    message: '`formatDate` has untyped parameters — consider adding @param annotations.',
  },
];

async function main() {
  try {
    console.log(`Posting ${issues.length} documentation issue(s) to PR #${config.prNumber}...`);

    const result = await postPRComment(config, issues);

    if (result.success) {
      console.log('✅ Comment posted successfully!');
      console.log(`   Comment ID : ${result.commentId}`);
      console.log(`   View at    : ${result.url}`);
      // Example output (the real comment ID and URL will differ):
      // ✅ Comment posted successfully!
      //    Comment ID : 1987654321
      //    View at    : https://github.com/acme-corp/backend-api/pull/42#issuecomment-1987654321
    } else {
      console.error('❌ Failed to post comment:', result.error);
    }
  } catch (error) {
    console.error('Unexpected error:', error instanceof Error ? error.message : error);
  }
}

main();
TypeScript

rewriteImagePaths

function rewriteImagePaths(content: string, mapping: Map<string, string>): string
TypeScript

Use this to update image paths in documentation content after assets have been copied to a new location. When you move or rename image files during a build process, this function rewrites all references in your content string to point to the new paths.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | The raw content string (e.g., Markdown or HTML) containing image path references to be updated |
| mapping | Map<string, string> | Yes | A map where each key is an old image path and each value is the corresponding new path to replace it with |

Returns

Returns a string with all old image paths replaced by their mapped new paths. If a path in the mapping is not found in the content, it is silently skipped. The original content value is not mutated — a new string is returned.

Example

// Inline implementation — no external imports needed
function rewriteImagePaths(content: string, mapping: Map<string, string>): string {
  for (const [oldPath, newPath] of mapping) {
    content = content.replaceAll(oldPath, newPath)
  }
  return content
}

// Simulate a docs page with several image references
const markdownContent = `
# Getting Started

Here is the architecture overview:
![Architecture](./assets/images/architecture.png)

And the setup flow:
![Setup](./assets/images/setup-diagram.png)

For more detail, see the architecture diagram again:
![Architecture again](./assets/images/architecture.png)
`

// After copying assets to a CDN or versioned output folder,
// build a mapping of old paths → new paths
const assetMapping = new Map<string, string>([
  ['./assets/images/architecture.png', 'https://cdn.example.com/docs/v2/architecture.png'],
  ['./assets/images/setup-diagram.png', 'https://cdn.example.com/docs/v2/setup-diagram.png'],
])

async function main() {
  try {
    const updatedContent = rewriteImagePaths(markdownContent, assetMapping)
    console.log('Rewritten content:\n', updatedContent)

    // Expected output:
    // # Getting Started
    //
    // Here is the architecture overview:
    // ![Architecture](https://cdn.example.com/docs/v2/architecture.png)
    //
    // And the setup flow:
    // ![Setup](https://cdn.example.com/docs/v2/setup-diagram.png)
    //
    // For more detail, see the architecture diagram again:
    // ![Architecture again](https://cdn.example.com/docs/v2/architecture.png)

    // Verify all old paths are gone
    const hasOldPaths = [...assetMapping.keys()].some((old) => updatedContent.includes(old))
    console.log('Old paths remaining:', hasOldPaths) // false
  } catch (error) {
    console.error('Failed to rewrite image paths:', error)
  }
}

main()
TypeScript

scan_file

def scan_file(file_path: str) -> dict[str, Any]
Python

Use this to extract all API elements from a Python source file — functions, classes, methods, imports, and metadata — in a single structured dictionary.

Ideal for building documentation generators, code analysis tools, static analyzers, or any tooling that needs to introspect a Python file programmatically.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| file_path | str | Yes | Absolute or relative path to the .py file to scan |

Returns

Returns a dict[str, Any] containing extracted API elements from the file. Typical keys include:

| Key | Type | Description |
| --- | --- | --- |
| functions | list | Top-level function definitions found in the file |
| classes | list | Class definitions, including their methods and attributes |
| imports | list | All import statements (import and from ... import) |
| docstrings | dict | Module-level and element-level docstrings |
| errors | list | Any parse errors encountered during scanning |

Note: Returns an empty structure (with an errors key populated) if the file cannot be read or parsed, rather than raising an exception.
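
The error path is easy to exercise directly. The reduced sketch below keeps only the read/parse guard from the full implementation (not the whole scanner) and shows what comes back for a file with broken syntax:

```python
import ast
import os
import tempfile
from typing import Any

# Reduced sketch of scan_file's error handling — just the read/parse guard.
def scan_file_errors_only(file_path: str) -> dict[str, Any]:
    result: dict[str, Any] = {"functions": [], "classes": [], "errors": []}
    try:
        with open(file_path, "r", encoding="utf-8") as f:
            source = f.read()
    except OSError as e:
        result["errors"].append(f"Could not read file: {e}")
        return result
    try:
        ast.parse(source)
    except SyntaxError as e:
        result["errors"].append(f"Syntax error while parsing: {e}")
    return result

# A file with broken syntax is reported via `errors`, not raised as an exception.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("def broken(:\n    pass\n")
    broken_path = f.name

report = scan_file_errors_only(broken_path)
print(bool(report["errors"]))  # True — parse error captured
print(report["functions"])     # [] — empty structure, caller keeps running
os.remove(broken_path)
```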

Example

import ast
import os
from typing import Any

# Inline implementation of scan_file
def scan_file(file_path: str) -> dict[str, Any]:
    """Scan a Python file and extract all API elements."""
    result: dict[str, Any] = {
        "file_path": file_path,
        "functions": [],
        "classes": [],
        "imports": [],
        "docstrings": {},
        "errors": [],
    }

    try:
        with open(file_path, "r", encoding="utf-8") as f:
            source = f.read()
    except OSError as e:
        result["errors"].append(f"Could not read file: {e}")
        return result

    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        result["errors"].append(f"Syntax error while parsing: {e}")
        return result

    # Extract module-level docstring
    module_doc = ast.get_docstring(tree)
    if module_doc:
        result["docstrings"]["module"] = module_doc

    # Iterate only the module's direct children so that methods defined
    # inside classes are not also counted as top-level functions.
    for node in tree.body:
        # Extract top-level functions
        if isinstance(node, ast.FunctionDef):
            func_info = {
                "name": node.name,
                "lineno": node.lineno,
                "args": [arg.arg for arg in node.args.args],
                "docstring": ast.get_docstring(node),
            }
            result["functions"].append(func_info)

        # Extract classes and their methods
        elif isinstance(node, ast.ClassDef):
            methods = [
                {
                    "name": n.name,
                    "args": [arg.arg for arg in n.args.args],
                    "docstring": ast.get_docstring(n),
                }
                for n in node.body  # direct children only — nested helpers inside methods are excluded
                if isinstance(n, ast.FunctionDef)
            ]
            class_info = {
                "name": node.name,
                "lineno": node.lineno,
                "methods": methods,
                "docstring": ast.get_docstring(node),
            }
            result["classes"].append(class_info)

        # Extract imports
        elif isinstance(node, ast.Import):
            for alias in node.names:
                result["imports"].append({"module": alias.name, "alias": alias.asname})
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                result["imports"].append({
                    "module": node.module,
                    "name": alias.name,
                    "alias": alias.asname,
                })

    return result


# --- Demo: write a temporary Python file and scan it ---
sample_code = '''"""A sample module for payment processing."""
import os
from typing import Optional

class PaymentProcessor:
    """Handles payment transactions."""

    def charge(self, amount: float, currency: str = "USD") -> bool:
        """Charge a customer."""
        return True

def validate_card(card_number: str) -> bool:
    """Validate a credit card number using Luhn algorithm."""
    return len(card_number) == 16
'''

# Write sample file to a temp location
sample_path = "/tmp/payment_processor.py"
with open(sample_path, "w") as f:
    f.write(sample_code)

# Run the scanner
try:
    api_data = scan_file(sample_path)

    print(f"📄 Scanned: {api_data['file_path']}")
    print(f"📦 Module docstring: {api_data['docstrings'].get('module', 'None')}")
    print(f"\n🔧 Functions ({len(api_data['functions'])}):")
    for fn in api_data["functions"]:
        print(f"  - {fn['name']}({', '.join(fn['args'])}) → \"{fn['docstring']}\"")

    print(f"\n🏛  Classes ({len(api_data['classes'])}):")
    for cls in api_data["classes"]:
        print(f"  - {cls['name']}: {len(cls['methods'])} method(s)")
        for method in cls["methods"]:
            print(f"      • {method['name']}({', '.join(method['args'])})")

    print(f"\n📥 Imports ({len(api_data['imports'])}):")
    for imp in api_data["imports"]:
        print(f"  - {imp}")

    if api_data["errors"]:
        print(f"\n⚠️  Errors: {api_data['errors']}")

    # Expected output:
    # 📄 Scanned: /tmp/payment_processor.py
    # 📦 Module docstring: A sample module for payment processing.
    # 🔧 Functions (1):
    #   - validate_card(card_number) → "Validate a credit card number using Luhn algorithm."
    # 🏛  Classes (1):
    #   - PaymentProcessor: 1 method(s)
    #       • charge(self, amount, currency)
    # 📥 Imports (2):
    #   - {'module': 'os', 'alias': None}
    #   - {'module': 'typing', 'name': 'Optional', 'alias': None}

except Exception as e:
    print(f"Unexpected error during scan: {e}")
finally:
    # Clean up temp file
    if os.path.exists(sample_path):
        os.remove(sample_path)
Python

showSecurityNotice

function showSecurityNotice(): void
TypeScript

Use this to display a one-time security notice to users in the terminal — the message is shown only once per machine and silently skipped on all subsequent runs.

This is ideal for CLI tools that need to surface important security information (e.g., API key handling, data privacy warnings) without repeatedly interrupting the user's workflow.

Parameters

This function takes no parameters.

Returns

Condition | Result
First time called on this machine | Prints a formatted security notice to stdout and marks it as seen
Already seen on this machine | Returns immediately, prints nothing

The seen-state is persisted to disk (typically in the user's home directory), so the notice survives across sessions and terminal restarts.
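Because the seen-state is just a JSON file on disk, a small helper can reset it during development so the notice reappears. This is a hypothetical convenience sketch, not part of Skrypt's API; the file location (`~/.skrypt/notices.json`) is an assumption mirroring the inline example below.

```typescript
import * as fs from 'fs'
import * as path from 'path'
import * as os from 'os'

// Hypothetical helper: deleting the persisted seen-state file makes the
// notice print again on the next run. The default directory is assumed.
function resetNoticeState(stateDir: string = path.join(os.homedir(), '.skrypt')): boolean {
  const noticesFile = path.join(stateDir, 'notices.json')
  if (fs.existsSync(noticesFile)) {
    fs.unlinkSync(noticesFile)
    return true // state removed; the notice will show again
  }
  return false // nothing to reset
}
```

Pointing `stateDir` at a throwaway directory keeps this safe to experiment with.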

Example

import * as fs from 'fs'
import * as path from 'path'
import * as os from 'os'

// --- Inline implementation (mirrors Skrypt internals) ---

const NOTICES_DIR = path.join(os.homedir(), '.skrypt')
const NOTICES_FILE = path.join(NOTICES_DIR, 'notices.json')

type NoticesState = {
  seen: Record<string, string> // noticeId -> ISO timestamp
}

function loadNotices(): NoticesState {
  try {
    if (fs.existsSync(NOTICES_FILE)) {
      return JSON.parse(fs.readFileSync(NOTICES_FILE, 'utf-8'))
    }
  } catch {
    // Corrupt file — start fresh
  }
  return { seen: {} }
}

function saveNotices(state: NoticesState): void {
  try {
    if (!fs.existsSync(NOTICES_DIR)) {
      fs.mkdirSync(NOTICES_DIR, { recursive: true })
    }
    fs.writeFileSync(NOTICES_FILE, JSON.stringify(state, null, 2), 'utf-8')
  } catch {
    // Non-fatal: if we can't persist, we just show the notice again next time
  }
}

function hasSeenNotice(id: string): boolean {
  const state = loadNotices()
  return Boolean(state.seen[id])
}

function markNoticeSeen(id: string): void {
  const state = loadNotices()
  state.seen[id] = new Date().toISOString()
  saveNotices(state)
}

function showSecurityNotice(): void {
  if (hasSeenNotice('security-v1')) return

  console.log('')
  console.log('  \x1b[36m🔒 Security Notice\x1b[0m')
  console.log('')
  console.log('  Your API keys are stored locally and never transmitted.')
  console.log('  Review ~/.skrypt for stored credentials at any time.')
  console.log('')

  markNoticeSeen('security-v1')
}

// --- Usage ---

async function main() {
  try {
    console.log('--- First run ---')
    showSecurityNotice()
    // Output:
    //   🔒 Security Notice
    //
    //   Your API keys are stored locally and never transmitted.
    //   Review ~/.skrypt for stored credentials at any time.

    console.log('--- Second run (same session or later) ---')
    showSecurityNotice()
    // Output: (nothing — notice already seen)

    console.log('Done. Notice state saved to:', NOTICES_FILE)
  } catch (error) {
    console.error('Unexpected error:', error)
  }
}

main()
TypeScript

stripDocusaurusImports

function stripDocusaurusImports(content: string): string
TypeScript

Use this to clean Docusaurus-specific theme import statements from MDX/markdown content before processing it in non-Docusaurus environments, or when extracting plain content from Docusaurus documentation files.

Strips all lines matching the pattern import ... from '@theme/...' — the special Docusaurus theme component imports that would cause errors or noise outside of a Docusaurus build context.

Parameters

Name | Type | Required | Description
content | string | Yes | Raw MDX or markdown string containing Docusaurus theme import statements to be removed

Returns

A new string with all import ... from '@theme/...' lines removed. Lines are stripped cleanly — no blank lines are left behind from the removed imports. Returns the original string unchanged if no matching imports are found.

What Gets Stripped

Pattern | Example
Default imports | import Tabs from '@theme/Tabs';
Default imports (no semicolon) | import CodeBlock from '@theme/CodeBlock'
Aliased named imports | import { Details as Detail } from '@theme/Details';
With or without semicolons | Both ...'; and ...' endings match

Example

// Inline implementation — no external imports needed
function stripDocusaurusImports(content: string): string {
  return content.replace(/^import\s+.*from\s+['"]@theme\/.*['"];?\s*\n?/gm, '')
}

// --- Example usage ---

const rawMdxContent = `import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock'
import { Details } from '@theme/Details';

# My API Reference

This is the main documentation content.

Use the tabs below to switch between languages.

\`\`\`typescript
const client = new MyClient({ apiKey: 'sk-abc123' });
\`\`\`
`

async function main() {
  try {
    const cleaned = stripDocusaurusImports(rawMdxContent)

    console.log('=== Cleaned Content ===')
    console.log(cleaned)
    // Output:
    // # My API Reference
    //
    // This is the main documentation content.
    //
    // Use the tabs below to switch between languages.
    //
    // ```typescript
    // const client = new MyClient({ apiKey: 'sk-abc123' });
    // ```

    // Verify no @theme imports remain
    const remainingImports = cleaned.match(/import\s+.*from\s+['"]@theme\//gm)
    console.log('Remaining @theme imports:', remainingImports ?? 'none ✅')
    // Output: Remaining @theme imports: none ✅

    // Edge case: content with no Docusaurus imports is returned unchanged
    const plainMarkdown = '# Hello\n\nJust plain markdown.'
    const unchanged = stripDocusaurusImports(plainMarkdown)
    console.log('\nPlain markdown unchanged:', unchanged === plainMarkdown ? 'yes ✅' : 'no ❌')
    // Output: Plain markdown unchanged: yes ✅

  } catch (error) {
    console.error('Failed to strip imports:', error)
  }
}

main()
TypeScript

stripNotionUUIDs

function stripNotionUUIDs(filename: string): string
TypeScript

Use this to clean up Notion export filenames by removing the 32-character hex UUID suffixes that Notion automatically appends to every file and folder name.

When you export content from Notion, filenames look like My Page a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4.md. This function strips those suffixes, giving you human-readable filenames like My Page.md.

Parameters

Name | Type | Required | Description
filename | string | Yes | The Notion-exported filename, optionally containing one or more 32-char hex UUID suffixes preceded by whitespace

Returns

A string with all UUID suffixes removed. If no UUID suffix is found, the original filename is returned unchanged.

Scenario | Return Value
Filename with UUID suffix | Cleaned filename without UUID
Filename without UUID suffix | Original filename, unchanged
Nested path with UUIDs in folder names | All UUIDs stripped from every path segment
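The matching rule can be made concrete with a quick sketch (assuming the pattern used in the example below, `/\s+[0-9a-f]{32}/g`): only a whitespace-preceded run of 32 contiguous lowercase hex characters is stripped, so dashed or uppercase UUIDs pass through untouched.

```typescript
// Assumed rule: strip " <32 contiguous lowercase hex chars>" only.
function stripNotionUUIDs(filename: string): string {
  return filename.replace(/\s+[0-9a-f]{32}/g, '')
}

// Stripped: contiguous lowercase hex suffix
console.log(stripNotionUUIDs('Page a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4.md'))
// "Page.md"

// Left alone: dashed UUID never forms a 32-char contiguous hex run
console.log(stripNotionUUIDs('Page a1b2c3d4-e5f6-a1b2-c3d4-e5f6a1b2c3d4.md'))
// unchanged
```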

Example

// Inline implementation — no external imports needed
function stripNotionUUIDs(filename: string): string {
  return filename.replace(/\s+[0-9a-f]{32}/g, '')
}

async function main() {
  try {
    // Basic usage: single file with UUID suffix
    const raw = 'Getting Started a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4.md'
    const clean = stripNotionUUIDs(raw)
    console.log('Basic:', clean)
    // Output: "Getting Started.md"

    // Works with nested Notion export paths (UUIDs in folder names too)
    const nestedPath = 'My Workspace 1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d/Project Notes 9f8e7d6c5b4a9f8e7d6c5b4a9f8e7d6c/README abcdef1234567890abcdef1234567890.md'
    const cleanedPath = stripNotionUUIDs(nestedPath)
    console.log('Nested path:', cleanedPath)
    // Output: "My Workspace/Project Notes/README.md"

    // No UUID present — filename is returned unchanged
    const alreadyClean = 'Introduction.md'
    console.log('No UUID:', stripNotionUUIDs(alreadyClean))
    // Output: "Introduction.md"

    // Batch processing a list of exported Notion files
    const notionExports = [
      'Home 00112233445566778899aabbccddeeff.md',
      'Meeting Notes deadbeefdeadbeefdeadbeefdeadbeef.md',
      'Archive 11223344556677889900aabbccddeeff/Old Docs 99887766554433221100ffeeddccbbaa.md',
    ]

    console.log('\nBatch cleaned filenames:')
    notionExports.map(stripNotionUUIDs).forEach(f => console.log(' -', f))
    // Output:
    //  - Home.md
    //  - Meeting Notes.md
    //  - Archive/Old Docs.md

  } catch (error) {
    console.error('Failed to strip Notion UUIDs:', error)
  }
}

main()
TypeScript

transformConfluenceCallouts

function transformConfluenceCallouts(content: string): string
TypeScript

Use this to convert Confluence structured macro callouts (info, note, warning, tip) into clean <Callout> components suitable for rendering in documentation systems like Nextra or MDX.

When migrating Confluence pages to Markdown/MDX, callout macros come through as verbose XML-like <ac:structured-macro> tags. This function strips that boilerplate and outputs a compact, framework-friendly <Callout type="..."> element with the inner text content preserved.

Parameters

Name | Type | Required | Description
content | string | Yes | Raw Confluence HTML/XML string containing one or more <ac:structured-macro> callout blocks

Returns

Condition | Return Value
Callout macros found | String with each matched macro replaced by <Callout type="[type]">cleaned text</Callout>
No callout macros found | Original string unchanged
Multiple callouts | All matching macros replaced in a single pass

Supported callout types: info, note, warning, tip

Note: HTML tags inside <ac:rich-text-body> are stripped — only the plain text content is preserved in the output.

Example

// Inline implementation of the helper and main function (self-contained)
function stripHtmlTags(html: string): string {
  return html.replace(/<[^>]*>/g, '')
}

function transformConfluenceCallouts(content: string): string {
  return content.replace(
    /<ac:structured-macro[^>]*ac:name="(info|note|warning|tip)"[^>]*>[\s\S]*?<ac:rich-text-body>([\s\S]*?)<\/ac:rich-text-body>[\s\S]*?<\/ac:structured-macro>/g,
    (_match, type: string, body: string) => {
      const cleaned = stripHtmlTags(body).trim()
      return `<Callout type="${type}">${cleaned}</Callout>`
    }
  )
}

// --- Example usage ---

const confluencePage = `
<h1>Deployment Guide</h1>
<p>Follow these steps carefully.</p>

<ac:structured-macro ac:name="warning" ac:schema-version="1" ac:macro-id="abc-123">
  <ac:parameter ac:name="title">Danger Zone</ac:parameter>
  <ac:rich-text-body>
    <p>This action is <strong>irreversible</strong>. Back up your data first.</p>
  </ac:rich-text-body>
</ac:structured-macro>

<ac:structured-macro ac:name="info" ac:schema-version="1" ac:macro-id="def-456">
  <ac:rich-text-body>
    <p>You need <em>admin privileges</em> to complete this step.</p>
  </ac:rich-text-body>
</ac:structured-macro>

<ac:structured-macro ac:name="tip" ac:schema-version="1" ac:macro-id="ghi-789">
  <ac:rich-text-body>Use the --dry-run flag to preview changes before applying them.</ac:rich-text-body>
</ac:structured-macro>

<p>Deployment complete.</p>
`

try {
  const transformed = transformConfluenceCallouts(confluencePage)
  console.log('Transformed output:\n', transformed)

  // Expected output:
  // <h1>Deployment Guide</h1>
  // <p>Follow these steps carefully.</p>
  //
  // <Callout type="warning">This action is irreversible. Back up your data first.</Callout>
  //
  // <Callout type="info">You need admin privileges to complete this step.</Callout>
  //
  // <Callout type="tip">Use the --dry-run flag to preview changes before applying them.</Callout>
  //
  // <p>Deployment complete.</p>

  // Verify no-op on plain content
  const plainContent = '<p>No callouts here.</p>'
  const unchanged = transformConfluenceCallouts(plainContent)
  console.log('\nPlain content (should be unchanged):', unchanged)
  // Output: <p>No callouts here.</p>

} catch (error) {
  console.error('Transformation failed:', error)
}
TypeScript

transformConfluenceHtml

function transformConfluenceHtml(content: string): string
TypeScript

Use this to convert Confluence HTML pages (including Atlassian Confluence macros) into clean, readable Markdown strings — ideal for ingesting Confluence content into AI pipelines, search indexes, or documentation systems.

This function handles:

  • ac:structured-macro code blocks (with optional language tags) → fenced Markdown code blocks
  • Common HTML elements → Markdown equivalents

Parameters

Name | Type | Required | Description
content | string | Yes | Raw HTML string from a Confluence page, including ac: macro tags

Returns

A string containing the Markdown representation of the input Confluence HTML. Unrecognized tags are passed through or stripped depending on the transformation rules.
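A minimal sketch of what "stripped" means here, assuming the final catch-all pass shown in the example below: a tag with no dedicated rule is removed and only its inner text survives.

```typescript
// Assumed final-pass behavior: remove any remaining tag, keep inner text.
const stripRemainingTags = (html: string): string => html.replace(/<[^>]+>/g, '')

console.log(stripRemainingTags('<ac:task-list><span>Ship the release</span></ac:task-list>'))
// "Ship the release"
```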

Example

// Inline implementation of transformConfluenceHtml (self-contained, no imports needed)
function transformConfluenceHtml(content: string): string {
  // Code macros: <ac:structured-macro ac:name="code"> with optional language
  content = content.replace(
    /<ac:structured-macro[^>]*ac:name="code"[^>]*>[\s\S]*?(?:<ac:parameter ac:name="language">([^<]*)<\/ac:parameter>)?[\s\S]*?<ac:plain-text-body><!\[CDATA\[([\s\S]*?)\]\]><\/ac:plain-text-body>[\s\S]*?<\/ac:structured-macro>/g,
    (_match: string, lang: string | undefined, code: string) =>
      `\`\`\`${lang || ''}\n${code}\n\`\`\``
  )

  // Headings
  content = content.replace(/<h([1-6])[^>]*>([\s\S]*?)<\/h\1>/gi, (_m, level, text) =>
    `${'#'.repeat(Number(level))} ${text.trim()}\n`
  )

  // Bold / strong
  content = content.replace(/<(strong|b)[^>]*>([\s\S]*?)<\/\1>/gi, (_m, _tag, text) =>
    `**${text.trim()}**`
  )

  // Paragraphs
  content = content.replace(/<p[^>]*>([\s\S]*?)<\/p>/gi, (_m, text) => `${text.trim()}\n\n`)

  // Line breaks
  content = content.replace(/<br\s*\/?>/gi, '\n')

  // Strip remaining HTML tags
  content = content.replace(/<[^>]+>/g, '')

  // Collapse excessive blank lines
  content = content.replace(/\n{3,}/g, '\n\n').trim()

  return content
}

// --- Example Usage ---

const confluencePageHtml = `
<h1>Deployment Guide</h1>
<p>Follow these steps to deploy the service.</p>

<h2>Prerequisites</h2>
<p>Ensure you have <strong>Node.js 18+</strong> installed.</p>

<ac:structured-macro ac:name="code" ac:schema-version="1" ac:macro-id="abc-123">
  <ac:parameter ac:name="language">bash</ac:parameter>
  <ac:plain-text-body><![CDATA[npm install
npm run build
npm start]]></ac:plain-text-body>
</ac:structured-macro>

<p>After deployment, verify the service is running.</p>

<ac:structured-macro ac:name="code" ac:schema-version="1">
  <ac:plain-text-body><![CDATA[curl http://localhost:3000/health]]></ac:plain-text-body>
</ac:structured-macro>
`

async function main() {
  try {
    const markdown = transformConfluenceHtml(confluencePageHtml)

    console.log('=== Transformed Markdown Output ===\n')
    console.log(markdown)

    // Expected output:
    // # Deployment Guide
    //
    // Follow these steps to deploy the service.
    //
    // ## Prerequisites
    //
    // Ensure you have **Node.js 18+** installed.
    //
    // ```bash
    // npm install
    // npm run build
    // npm start
    // ```
    //
    // After deployment, verify the service is running.
    //
    // ```
    // curl http://localhost:3000/health
    // ```

    // Verify code block extraction
    const hasCodeFence = markdown.includes('```bash')
    const hasUnlabeledFence = markdown.includes('```\n')
    const hasHeading = markdown.includes('# Deployment Guide')

    console.log('\n=== Validation ===')
    console.log('Has bash code fence:', hasCodeFence)       // true
    console.log('Has unlabeled fence:', hasUnlabeledFence)  // true
    console.log('Has H1 heading:     ', hasHeading)         // true
  } catch (error) {
    console.error('Transformation failed:', error)
  }
}

main()
TypeScript

transformDocusaurusAdmonitions

function transformDocusaurusAdmonitions(content: string): string
TypeScript

Use this to convert Docusaurus admonition blocks (:::note, :::tip, :::danger, etc.) into a standardized callout format for rendering in other documentation systems.

This is useful when migrating or syncing Docusaurus docs to platforms that use a different callout/admonition syntax — the function handles all standard Docusaurus admonition types and optional custom titles.

Parameters

Name | Type | Required | Description
content | string | Yes | Raw markdown string containing one or more Docusaurus admonition blocks

Returns

Returns a string with all Docusaurus admonition blocks (:::type[Title]\n...\n:::) replaced by the target callout format. Content outside admonition blocks is left unchanged.

Supported admonition types: note, tip, info, caution, danger, warning

Example

// Inline the admonition type mapping (mirrors the real implementation)
const DOCUSAURUS_ADMONITION_MAP: Record<string, string> = {
  note: 'note',
  tip: 'tip',
  info: 'info',
  caution: 'warning',
  danger: 'danger',
  warning: 'warning',
}

// Self-contained implementation of transformDocusaurusAdmonitions
function transformDocusaurusAdmonitions(content: string): string {
  return content.replace(
    /:::(note|tip|info|caution|danger|warning)(?:\[(.+?)\])?\n([\s\S]*?):::/g,
    (_match, type: string, title: string | undefined, body: string) => {
      const calloutType = DOCUSAURUS_ADMONITION_MAP[type] || 'info'
      const titleAttr = title ? ` title="${title}"` : ''
      return `<Callout type="${calloutType}"${titleAttr}>\n${body.trim()}\n</Callout>`
    }
  )
}

// --- Example usage ---

const exampleDoc = `
# Getting Started

:::note
Make sure you have Node.js 18+ installed before proceeding.
:::

:::tip[Pro Tip]
Use \`pnpm\` for faster installs in monorepos.
:::

:::danger[Breaking Change]
The \`legacyConfig\` option has been removed in v3.
Migrate to the new \`config\` object before upgrading.
:::

:::caution
Caution blocks map to "warning" in the output format.
:::

Regular paragraph content is left untouched.
`

try {
  const transformed = transformDocusaurusAdmonitions(exampleDoc)
  console.log('Transformed output:\n', transformed)
  /*
  Expected output:

  # Getting Started

  <Callout type="note">
  Make sure you have Node.js 18+ installed before proceeding.
  </Callout>

  <Callout type="tip" title="Pro Tip">
  Use `pnpm` for faster installs in monorepos.
  </Callout>

  <Callout type="danger" title="Breaking Change">
  The `legacyConfig` option has been removed in v3.
  Migrate to the new `config` object before upgrading.
  </Callout>

  <Callout type="warning">
  Caution blocks map to "warning" in the output format.
  </Callout>

  Regular paragraph content is left untouched.
  */

  // Verify a no-op case — content with no admonitions is returned unchanged
  const plainContent = '# Just a heading\n\nSome regular text.'
  const unchanged = transformDocusaurusAdmonitions(plainContent)
  console.log('\nNo-op case (should be identical):', unchanged === plainContent ? '✅ unchanged' : '❌ modified')
} catch (error) {
  console.error('Transformation failed:', error)
}
TypeScript

transformDocusaurusTabs

function transformDocusaurusTabs(content: string): string
TypeScript

Use this to convert Docusaurus <Tabs> / <TabItem> JSX syntax into plain markdown or a renderable alternative — useful when migrating Docusaurus docs to another format or stripping framework-specific markup from content.

Parameters

Name | Type | Required | Description
content | string | Yes | Raw documentation string containing Docusaurus <Tabs> and <TabItem> JSX elements

Returns

Condition | Return Value
Content contains <Tabs> blocks | string with Docusaurus tab syntax transformed into plain text/markdown
Content has no <Tabs> blocks | Original string unchanged

Input Format

The function expects Docusaurus tab syntax:

<Tabs>
  <TabItem value="js" label="JavaScript">content here</TabItem>
  <TabItem value="ts" label="TypeScript">other content</TabItem>
</Tabs>
JSX

Example

// Inline implementation of transformDocusaurusTabs (self-contained, no imports needed)
function transformDocusaurusTabs(content: string): string {
  return content.replace(
    /<Tabs[^>]*>([\s\S]*?)<\/Tabs>/g,
    (_match, inner: string) => {
      const tabs: { value: string; label: string; content: string }[] = []
      const tabRegex =
        /<TabItem\s+value="([^"]+)"(?:\s+label="([^"]*)")?>([\s\S]*?)<\/TabItem>/g

      let match: RegExpExecArray | null
      while ((match = tabRegex.exec(inner)) !== null) {
        tabs.push({
          value: match[1],
          label: match[2] || match[1],
          content: match[3].trim(),
        })
      }

      // Transform each tab into a labeled markdown section
      return tabs
        .map((tab) => `**${tab.label}**\n\n${tab.content}`)
        .join('\n\n---\n\n')
    }
  )
}

// --- Example Usage ---

const docusaurusDoc = `
# Installation Guide

Install the package using your preferred package manager:

<Tabs>
  <TabItem value="npm" label="npm">
\`\`\`bash
npm install my-package
\`\`\`
  </TabItem>
  <TabItem value="yarn" label="Yarn">
\`\`\`bash
yarn add my-package
\`\`\`
  </TabItem>
  <TabItem value="pnpm" label="pnpm">
\`\`\`bash
pnpm add my-package
\`\`\`
  </TabItem>
</Tabs>

After installation, import the package in your project.
`

try {
  const transformed = transformDocusaurusTabs(docusaurusDoc)
  console.log('=== Transformed Output ===\n')
  console.log(transformed)
  /*
  Expected output:
  # Installation Guide

  Install the package using your preferred package manager:

  **npm**

  ```bash
  npm install my-package
  ```

  ---

  **Yarn**

  ```bash
  yarn add my-package
  ```

  ---

  **pnpm**

  ```bash
  pnpm add my-package
  ```

  After installation, import the package in your project.
  */

  // Edge case: content with no tabs is returned unchanged
  const plainContent = '# No tabs here\n\nJust regular markdown.'
  const unchanged = transformDocusaurusTabs(plainContent)
  console.log('\n=== No-Tab Content (unchanged) ===\n')
  console.log(unchanged)
  // Output: # No tabs here\n\nJust regular markdown.

  // Edge case: a TabItem without an explicit label falls back to its value
  const noLabelContent = '<Tabs>\n  <TabItem value="bash">echo "hello"</TabItem>\n</Tabs>'
  const noLabelResult = transformDocusaurusTabs(noLabelContent)
  console.log('\n=== TabItem Without Label (uses value as label) ===\n')
  console.log(noLabelResult)
  // Output: **bash**\n\necho "hello"
} catch (error) {
  console.error('Transformation failed:', error)
}
TypeScript


transformGitBookContentRef

function transformGitBookContentRef(content: string): string
TypeScript

Use this to convert GitBook content-ref shortcode blocks into standard markdown links during documentation migration or content processing pipelines.

GitBook uses a proprietary {% content-ref %}...{% endcontent-ref %} block syntax to create page references. This function strips that syntax and produces a clean [label](url) markdown link, where the label is automatically derived from the URL's filename (with .md extension removed).

Parameters

Name | Type | Required | Description
content | string | Yes | A string containing one or more GitBook {% content-ref %} blocks to transform

Returns

Returns a string with all GitBook content-ref blocks replaced by standard markdown links. The link label is derived from the last path segment of the URL, with any .md extension stripped. Non-matching content is passed through unchanged.

Label derivation examples

URL | Generated Label
getting-started.md | getting-started
guides/authentication.md | authentication
https://example.com | example.com
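The derivation in the table can be sketched in isolation; `deriveLabel` is a hypothetical name for the rule, matching the replace/split/pop logic in the example below.

```typescript
// Assumed label rule: strip a trailing ".md", then take the last
// "/"-separated segment, falling back to the full URL if empty.
function deriveLabel(url: string): string {
  return url.replace(/\.md$/, '').split('/').pop() || url
}

console.log(deriveLabel('getting-started.md'))       // "getting-started"
console.log(deriveLabel('guides/authentication.md')) // "authentication"
console.log(deriveLabel('https://example.com'))      // "example.com"
```

Note that a bare domain still loses its scheme, because "//" splits into empty segments and `pop()` returns the host.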

Example

// Inline implementation — no external imports needed
function transformGitBookContentRef(content: string): string {
  return content.replace(
    /\{%\s*content-ref\s+url="([^"]+)"\s*%\}[\s\S]*?\{%\s*endcontent-ref\s*%\}/g,
    (_match, url: string) => {
      const label = url.replace(/\.md$/, '').split('/').pop() || url
      return `[${label}](${url})`
    }
  )
}

// --- Examples ---

// 1. Simple .md file reference
const singleRef = `
{% content-ref url="getting-started.md" %}
getting-started.md
{% endcontent-ref %}
`

// 2. Nested path reference
const nestedRef = `
{% content-ref url="guides/authentication.md" %}
authentication
{% endcontent-ref %}
`

// 3. Multiple refs in one document
const multipleRefs = `
# My Docs

Check out these pages:

{% content-ref url="setup/installation.md" %}
installation.md
{% endcontent-ref %}

And also:

{% content-ref url="api/reference.md" %}
reference.md
{% endcontent-ref %}
`

// 4. Content with no GitBook refs (should pass through unchanged)
const plainMarkdown = `# Regular Markdown\n\nJust a [normal link](https://example.com).`

try {
  console.log('--- Single ref ---')
  console.log(transformGitBookContentRef(singleRef).trim())
  // Output: [getting-started](getting-started.md)

  console.log('\n--- Nested path ref ---')
  console.log(transformGitBookContentRef(nestedRef).trim())
  // Output: [authentication](guides/authentication.md)

  console.log('\n--- Multiple refs in document ---')
  console.log(transformGitBookContentRef(multipleRefs).trim())
  // Output:
  // # My Docs
  //
  // Check out these pages:
  //
  // [installation](setup/installation.md)
  //
  // And also:
  //
  // [reference](api/reference.md)

  console.log('\n--- Plain markdown (unchanged) ---')
  console.log(transformGitBookContentRef(plainMarkdown))
  // Output: # Regular Markdown
  //
  // Just a [normal link](https://example.com).

} catch (error) {
  console.error('Transformation failed:', error)
}
TypeScript

transformGitBookEmbed

function transformGitBookEmbed(content: string): string
TypeScript

Use this to strip GitBook {% embed url="..." %} syntax from content, replacing it with plain URLs for rendering in non-GitBook environments.

This is useful when migrating GitBook documentation to other platforms, or when processing GitBook markdown files that contain embed tags that would appear as raw syntax instead of rendered links.

Parameters

Name | Type | Required | Description
content | string | Yes | A string containing GitBook embed tags in the format {% embed url="..." %}

Returns

A string with all {% embed url="..." %} tags replaced by their bare URL values. Content without any embed tags is returned unchanged.

Example

// Inline implementation of transformGitBookEmbed
function transformGitBookEmbed(content: string): string {
  return content.replace(
    /\{%\s*embed\s+url="([^"]+)"\s*%\}/g,
    (_match, url: string) => url
  )
}

// --- Examples ---

// 1. Single embed tag
const singleEmbed = `Check out this resource: {% embed url="https://docs.example.com/quickstart" %}`
const singleResult = transformGitBookEmbed(singleEmbed)
console.log("Single embed:")
console.log(singleResult)
// Output: Check out this resource: https://docs.example.com/quickstart

// 2. Multiple embed tags in one document
const multiEmbed = `
## References

{% embed url="https://api.example.com/docs" %}

For authentication details, see {% embed url="https://auth.example.com/guide" %}
`
const multiResult = transformGitBookEmbed(multiEmbed)
console.log("\nMultiple embeds:")
console.log(multiResult)
// Output:
// ## References
//
// https://api.example.com/docs
//
// For authentication details, see https://auth.example.com/guide

// 3. Content with no embed tags (returned unchanged)
const plainContent = `This is regular markdown with no embed tags.`
const plainResult = transformGitBookEmbed(plainContent)
console.log("\nNo embed tags (unchanged):")
console.log(plainResult)
// Output: This is regular markdown with no embed tags.

// 4. Embed tag with extra whitespace (still matched)
const looseEmbed = `{%  embed  url="https://example.com/video"  %}`
const looseResult = transformGitBookEmbed(looseEmbed)
console.log("\nLoose whitespace embed:")
console.log(looseResult)
// Output: https://example.com/video
TypeScript

transformGitBookExpandable

function transformGitBookExpandable(content: string): string
TypeScript

Use this to convert GitBook expandable blocks into Accordion components for rendering in documentation frameworks that support <Accordion> syntax (e.g., Mintlify).

Transforms {% expandable title="X" %}...{% endexpandable %} GitBook shortcodes into <Accordion title="X">...</Accordion> HTML/JSX elements. Handles multiline content and multiple expandable blocks in a single pass.

Parameters

Name | Type | Required | Description
content | string | Yes | Raw documentation string containing one or more GitBook {% expandable %} blocks

Returns

Condition | Return Value
Expandable blocks found | String with all {% expandable %} blocks replaced by <Accordion> components
No expandable blocks found | Original string unchanged

Example

// Inline implementation of transformGitBookExpandable
function transformGitBookExpandable(content: string): string {
  return content.replace(
    /\{%\s*expandable\s+title="([^"]+)"\s*%\}([\s\S]*?)\{%\s*endexpandable\s*%\}/g,
    (_match, title: string, body: string) =>
      `<Accordion title="${title}">\n${body.trim()}\n</Accordion>`
  );
}

// --- Example Usage ---

// Single expandable block
const singleBlock = `
## Installation

{% expandable title="Prerequisites" %}
Make sure you have Node.js v18+ installed.
Run \`npm install\` before proceeding.
{% endexpandable %}
`;

// Multiple expandable blocks in one document
const multipleBlocks = `
## FAQ

{% expandable title="What is the rate limit?" %}
The API allows 1000 requests per minute per API key.
{% endexpandable %}

{% expandable title="How do I authenticate?" %}
Pass your API key in the Authorization header:
\`Authorization: Bearer sk-abc123\`
{% endexpandable %}
`;

// Content with no expandable blocks (should pass through unchanged)
const noBlocks = `## Simple Heading\n\nJust regular markdown content here.`;

try {
  const resultSingle = transformGitBookExpandable(singleBlock);
  console.log("=== Single Block Output ===");
  console.log(resultSingle);
  // Output:
  // ## Installation
  //
  // <Accordion title="Prerequisites">
  // Make sure you have Node.js v18+ installed.
  // Run `npm install` before proceeding.
  // </Accordion>

  const resultMultiple = transformGitBookExpandable(multipleBlocks);
  console.log("\n=== Multiple Blocks Output ===");
  console.log(resultMultiple);
  // Output:
  // ## FAQ
  //
  // <Accordion title="What is the rate limit?">
  // The API allows 1000 requests per minute per API key.
  // </Accordion>
  //
  // <Accordion title="How do I authenticate?">
  // Pass your API key in the Authorization header:
  // `Authorization: Bearer sk-abc123`
  // </Accordion>

  const resultNoBlocks = transformGitBookExpandable(noBlocks);
  console.log("\n=== No Blocks (passthrough) ===");
  console.log(resultNoBlocks);
  // Output: ## Simple Heading
  //
  // Just regular markdown content here.

  console.log("\n=== Verification ===");
  console.log("Contains <Accordion>:", resultSingle.includes("<Accordion"));
  console.log("Original GitBook tag removed:", !resultSingle.includes("{% expandable"));
  // Output:
  // Contains <Accordion>: true
  // Original GitBook tag removed: true
} catch (error) {
  console.error("Transformation failed:", error);
}
TypeScript

transformGitBookHints

function transformGitBookHints(content: string): string
TypeScript

Use this to convert GitBook hint/callout blocks into <Callout> JSX components during documentation migration or preprocessing.

GitBook uses a custom {% hint style="..." %} syntax that won't render in standard MDX or React-based doc systems. This function transforms those blocks into <Callout type="..."> components, preserving the hint style and content.

Supported style mappings (based on typical GitBook hint styles):

GitBook Style | Callout Type
info | info
warning | warning
danger | danger
success | success
(unknown) | info

Parameters

Name | Type | Required | Description
content | string | Yes | Raw markdown/MDX string containing GitBook hint blocks

Returns

Condition | Return Value
Content contains hint blocks | String with all {% hint %}...{% endhint %} replaced by <Callout> tags
Content has no hint blocks | Original string returned unchanged
Unknown hint style | Falls back to type="info"

Example

// Inline the hint style map (mirrors the real implementation)
const GITBOOK_HINT_MAP: Record<string, string> = {
  info: 'info',
  warning: 'warning',
  danger: 'danger',
  success: 'success',
}

// Self-contained implementation of transformGitBookHints
function transformGitBookHints(content: string): string {
  return content.replace(
    /\{%\s*hint\s+style="(\w+)"\s*%\}([\s\S]*?)\{%\s*endhint\s*%\}/g,
    (_match, style: string, body: string) => {
      const calloutType = GITBOOK_HINT_MAP[style] || 'info'
      return `<Callout type="${calloutType}">${body.trim()}</Callout>`
    }
  )
}

// --- Example usage ---

const rawGitBookContent = `
# Getting Started

{% hint style="info" %}
Make sure you have Node.js v18 or higher installed before proceeding.
{% endhint %}

Some regular paragraph text here.

{% hint style="warning" %}
Changing this setting will restart your server automatically.
{% endhint %}

{% hint style="danger" %}
Deleting this resource is irreversible. Proceed with caution.
{% endhint %}

{% hint style="success" %}
Your API key has been generated successfully!
{% endhint %}

{% hint style="custom" %}
This uses an unknown style and will fall back to "info".
{% endhint %}
`

async function main() {
  try {
    const transformed = transformGitBookHints(rawGitBookContent)
    console.log('Transformed content:\n', transformed)

    // Expected output:
    // # Getting Started
    //
    // <Callout type="info">Make sure you have Node.js v18 or higher installed before proceeding.</Callout>
    //
    // Some regular paragraph text here.
    //
    // <Callout type="warning">Changing this setting will restart your server automatically.</Callout>
    //
    // <Callout type="danger">Deleting this resource is irreversible. Proceed with caution.</Callout>
    //
    // <Callout type="success">Your API key has been generated successfully!</Callout>
    //
    // <Callout type="info">This uses an unknown style and will fall back to "info".</Callout>

    // Verify no-op on plain content
    const plainContent = '# Just a heading\n\nNo hints here.'
    const unchanged = transformGitBookHints(plainContent)
    console.log('\nPlain content (should be unchanged):\n', unchanged)
    // Output: # Just a heading\n\nNo hints here.

  } catch (error) {
    console.error('Transformation failed:', error)
  }
}

main()

transformGitBookSteps

function transformGitBookSteps(content: string): string

Use this to convert GitBook stepper/step syntax into clean markdown, stripping the {% stepper %}, {% step %}, {% endstep %}, and {% endstepper %} tags while preserving the step content in a readable format.

This is useful when migrating GitBook documentation to standard markdown or rendering GitBook content in environments that don't support GitBook's custom block syntax.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | A string containing GitBook stepper syntax with {% stepper %}, {% step %}, {% endstep %}, and {% endstepper %} tags |

Returns

  • string — The transformed content with GitBook stepper tags replaced by standard markdown. Each step's content is extracted and formatted sequentially. Non-stepper content in the string is left unchanged.

Example

// Inline implementation of transformGitBookSteps (self-contained, no imports needed)
function transformGitBookSteps(content: string): string {
  return content.replace(
    /\{%\s*stepper\s*%\}([\s\S]*?)\{%\s*endstepper\s*%\}/g,
    (_match, inner: string) => {
      const steps = inner
        .split(/\{%\s*step\s*%\}/)
        .filter((s) => s.trim())
        .map((step, index) => {
          // Remove {% endstep %} tags and trim whitespace
          const cleaned = step.replace(/\{%\s*endstep\s*%\}/g, '').trim()
          return `**Step ${index + 1}**\n\n${cleaned}`
        })
      return steps.join('\n\n---\n\n')
    }
  )
}

// --- Example 1: Basic stepper with two steps ---
const basicStepper = `
# Getting Started

{% stepper %}
{% step %}
### Install the package

Run \`npm install my-package\` in your terminal.
{% endstep %}
{% step %}
### Configure your environment

Add your API key to \`.env\`:

\`\`\`
API_KEY=your-api-key-here
\`\`\`
{% endstep %}
{% endstepper %}
`

console.log('=== Example 1: Basic stepper ===')
console.log(transformGitBookSteps(basicStepper))
// Output:
// # Getting Started
//
// **Step 1**
//
// ### Install the package
// Run `npm install my-package` in your terminal.
//
// ---
//
// **Step 2**
//
// ### Configure your environment
// Add your API key to `.env`: ...

// --- Example 2: Multiple steppers in one document ---
const multiStepper = `
## Setup

{% stepper %}
{% step %}
Clone the repository.
{% endstep %}
{% step %}
Install dependencies with \`npm install\`.
{% endstep %}
{% endstepper %}

## Deployment

{% stepper %}
{% step %}
Build the project with \`npm run build\`.
{% endstep %}
{% step %}
Deploy using \`npm run deploy\`.
{% endstep %}
{% endstepper %}
`

console.log('\n=== Example 2: Multiple steppers ===')
console.log(transformGitBookSteps(multiStepper))

// --- Example 3: Content with no stepper tags (passthrough) ---
const plainMarkdown = `
# Regular Markdown

This content has no GitBook tags and should pass through unchanged.
`

console.log('\n=== Example 3: No stepper tags (passthrough) ===')
console.log(transformGitBookSteps(plainMarkdown))
// Output: identical to input — no transformation applied

// --- Example 4: Edge case — empty stepper block ---
const emptyStepper = `{% stepper %}{% endstepper %}`

console.log('\n=== Example 4: Empty stepper block ===')
console.log(JSON.stringify(transformGitBookSteps(emptyStepper)))
// Output: "" (empty string — no steps found)

transformGitBookTabs

function transformGitBookTabs(content: string): string

Use this to convert GitBook tab syntax into clean markdown, stripping the {% tabs %} / {% tab %} / {% endtab %} / {% endtabs %} block tags while preserving each tab's title and content.

This is useful when migrating GitBook documentation to standard markdown or when rendering GitBook-flavored content in environments that don't support GitBook's templating syntax.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | A string containing GitBook tab syntax with {% tabs %}, {% tab title="..." %}, {% endtab %}, and {% endtabs %} block tags |

Returns

| Condition | Return Value |
| --- | --- |
| Content contains GitBook tab blocks | string — the input with tab blocks replaced by formatted markdown sections, each tab rendered with its title as a heading and its content preserved |
| Content has no GitBook tab blocks | string — the original content unchanged |

Example

// Inline implementation of transformGitBookTabs (no external imports needed)
function transformGitBookTabs(content: string): string {
  return content.replace(
    /\{%\s*tabs\s*%\}([\s\S]*?)\{%\s*endtabs\s*%\}/g,
    (_match, inner: string) => {
      const tabs: { title: string; content: string }[] = []
      const tabRegex =
        /\{%\s*tab\s+title="([^"]+)"\s*%\}([\s\S]*?)\{%\s*endtab\s*%\}/g

      let match: RegExpExecArray | null
      while ((match = tabRegex.exec(inner)) !== null) {
        tabs.push({ title: match[1], content: match[2].trim() })
      }

      return tabs
        .map((tab) => `### ${tab.title}\n\n${tab.content}`)
        .join('\n\n')
    }
  )
}

// --- Example usage ---

const gitbookContent = `
# API Reference

{% tabs %}
{% tab title="Node.js" %}
\`\`\`js
const client = new Client({ apiKey: process.env.API_KEY || 'sk-demo-1234' })
await client.connect()
\`\`\`
{% endtab %}
{% tab title="Python" %}
\`\`\`python
client = Client(api_key=os.environ.get("API_KEY", "sk-demo-1234"))
client.connect()
\`\`\`
{% endtab %}
{% tab title="cURL" %}
\`\`\`bash
curl -H "Authorization: Bearer sk-demo-1234" https://api.example.com/connect
\`\`\`
{% endtab %}
{% endtabs %}

More documentation below.
`

try {
  const result = transformGitBookTabs(gitbookContent)
  console.log('Transformed output:\n')
  console.log(result)

  /*
  Expected output:

  # API Reference

  ### Node.js

  ```js
  const client = new Client({ apiKey: process.env.API_KEY || 'sk-demo-1234' })
  await client.connect()
  ```

  ### Python

  ```python
  client = Client(api_key=os.environ.get("API_KEY", "sk-demo-1234"))
  client.connect()
  ```

  ### cURL

  ```bash
  curl -H "Authorization: Bearer sk-demo-1234" https://api.example.com/connect
  ```

  More documentation below.
  */

  // Verify no GitBook tags remain
  const hasRemainingTags = /\{%.*?%\}/.test(result)
  console.log('GitBook tags remaining:', hasRemainingTags) // false
  console.log('Tab sections found:', (result.match(/^### /gm) || []).length) // 3
} catch (error) {
  console.error('Transformation failed:', error)
}


---


## `transformMintlifyCallouts`

```typescript
function transformMintlifyCallouts(content: string): string
```

Use this to convert Mintlify-flavored callout components (<Note>, <Warning>, <Tip>, <Info>, <Check>) into a unified <Callout type="..."> format during documentation migration or preprocessing.

This is useful when migrating docs from Mintlify to another platform (e.g., Nextra, Fumadocs) that uses a generic <Callout> component with a type prop instead of named tags.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | Raw markdown/MDX string containing Mintlify callout tags to be transformed |

Returns

Returns a string with all recognized Mintlify callout tags replaced by <Callout type="..."> equivalents. Unrecognized tags are left untouched.

Tag Mapping

| Mintlify Tag | Output |
| --- | --- |
| <Note> | <Callout type="note"> |
| <Warning> | <Callout type="warning"> |
| <Tip> | <Callout type="tip"> |
| <Info> | <Callout type="info"> |
| <Check> | <Callout type="check"> |

Notes

  • Handles multiline content inside callout tags.
  • Processes all occurrences in the string, not just the first.
  • Tags not in the mapping are passed through unchanged.

Example

// Inline the callout tag mapping (mirrors the library's internal map)
const MINTLIFY_CALLOUT_MAP: Record<string, string> = {
  Note: 'note',
  Warning: 'warning',
  Tip: 'tip',
  Info: 'info',
  Check: 'check',
}

// Self-contained implementation of transformMintlifyCallouts
function transformMintlifyCallouts(content: string): string {
  for (const [tag, type] of Object.entries(MINTLIFY_CALLOUT_MAP)) {
    const regex = new RegExp(`<${tag}>([\\s\\S]*?)<\\/${tag}>`, 'g')
    content = content.replace(regex, `<Callout type="${type}">$1</Callout>`)
  }
  return content
}

// --- Example usage ---

const mintlifyDoc = `
# Getting Started

<Note>
Make sure you have Node.js 18+ installed before proceeding.
</Note>

<Warning>
Do not expose your API key in client-side code.
</Warning>

<Tip>
Use environment variables to manage secrets safely.
</Tip>

<Info>
This feature is available on all plans.
</Info>

<Check>
Your setup is complete!
</Check>

Some regular paragraph text that should be untouched.

<CustomTag>This unknown tag should pass through unchanged.</CustomTag>
`

try {
  const transformed = transformMintlifyCallouts(mintlifyDoc)
  console.log('Transformed output:\n', transformed)

  // Expected output (each callout becomes a unified <Callout type="..."> tag):
  // <Callout type="note">
  // Make sure you have Node.js 18+ installed before proceeding.
  // </Callout>
  //
  // <Callout type="warning">
  // Do not expose your API key in client-side code.
  // </Callout>
  //
  // <Callout type="tip">
  // Use environment variables to manage secrets safely.
  // </Callout>
  //
  // <Callout type="info">
  // This feature is available on all plans.
  // </Callout>
  //
  // <Callout type="check">
  // Your setup is complete!
  // </Callout>
  //
  // <CustomTag>This unknown tag should pass through unchanged.</CustomTag>

  // Verify a specific transformation
  const simple = transformMintlifyCallouts('<Note>Hello world</Note>')
  console.log('\nSimple transform:', simple)
  // Output: <Callout type="note">Hello world</Callout>

} catch (error) {
  console.error('Transformation failed:', error)
}

transformMintlifyTabs

function transformMintlifyTabs(content: string): string

Use this to convert Mintlify-flavored <Tabs>/<Tab> JSX syntax into standard Markdown or a compatible tab format — useful when migrating docs from Mintlify to another platform or pre-processing MDX content.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | Raw documentation string containing Mintlify <Tabs> and <Tab> components |

Returns

| Condition | Return Value |
| --- | --- |
| Content contains <Tabs> blocks | string — transformed content with Mintlify tab syntax replaced |
| No <Tabs> blocks found | string — original content unchanged |

Notes

  • Handles multiple <Tabs> blocks in a single string
  • Preserves content inside each <Tab> — only the wrapper syntax is transformed
  • Nested or multiline tab content is supported (uses non-greedy multiline matching)

Example

// Inline implementation of transformMintlifyTabs (self-contained, no imports needed)
function transformMintlifyTabs(content: string): string {
  return content.replace(
    /<Tabs>([\s\S]*?)<\/Tabs>/g,
    (_match, inner: string) => {
      const tabs: { title: string; content: string }[] = []
      const tabRegex = /<Tab\s+title="([^"]+)">([\s\S]*?)<\/Tab>/g

      let match: RegExpExecArray | null
      while ((match = tabRegex.exec(inner)) !== null) {
        tabs.push({ title: match[1], content: match[2].trim() })
      }

      // Convert to a Markdown-friendly tab representation
      return tabs
        .map(
          (tab) =>
            `#### ${tab.title}\n\n${tab.content}`
        )
        .join('\n\n---\n\n')
    }
  )
}

// --- Example Usage ---

const mintlifyDoc = `
# Getting Started

Install the SDK using your preferred package manager:

<Tabs>
  <Tab title="npm">
    \`\`\`bash
    npm install @myorg/sdk
    \`\`\`
  </Tab>
  <Tab title="yarn">
    \`\`\`bash
    yarn add @myorg/sdk
    \`\`\`
  </Tab>
  <Tab title="pnpm">
    \`\`\`bash
    pnpm add @myorg/sdk
    \`\`\`
  </Tab>
</Tabs>

Continue with the setup guide below.
`

try {
  const transformed = transformMintlifyTabs(mintlifyDoc)
  console.log('=== Transformed Output ===\n')
  console.log(transformed)

  // Expected output:
  // # Getting Started
  //
  // Install the SDK using your preferred package manager:
  //
  // #### npm
  //
  // ```bash
  // npm install @myorg/sdk
  // ```
  //
  // ---
  //
  // #### yarn
  //
  // ```bash
  // yarn add @myorg/sdk
  // ```
  //
  // ---
  //
  // #### pnpm
  //
  // ```bash
  // pnpm add @myorg/sdk
  // ```
  //
  // Continue with the setup guide below.

  // Verify no-op when no Tabs are present
  const plainContent = '# No tabs here\n\nJust regular markdown.'
  const unchanged = transformMintlifyTabs(plainContent)
  console.log('\n=== No-op Check (no <Tabs> present) ===')
  console.log('Input === Output:', unchanged === plainContent)
  // Output: Input === Output: true

} catch (error) {
  console.error('Transformation failed:', error)
}
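
The Notes above say that multiple <Tabs> blocks in one string are handled independently, but the example only exercises a single block. Here is a minimal sketch of the multi-block case, re-using the same inline implementation; the package names are hypothetical:

```typescript
// Same inline implementation as above, repeated so this snippet is self-contained
function transformMintlifyTabs(content: string): string {
  return content.replace(/<Tabs>([\s\S]*?)<\/Tabs>/g, (_match, inner: string) => {
    const tabs: { title: string; content: string }[] = []
    const tabRegex = /<Tab\s+title="([^"]+)">([\s\S]*?)<\/Tab>/g
    let match: RegExpExecArray | null
    while ((match = tabRegex.exec(inner)) !== null) {
      tabs.push({ title: match[1], content: match[2].trim() })
    }
    return tabs.map((tab) => `#### ${tab.title}\n\n${tab.content}`).join('\n\n---\n\n')
  })
}

// Two independent <Tabs> blocks in one document (package names are hypothetical)
const multiTabDoc = `
<Tabs>
  <Tab title="npm">npm install @myorg/sdk</Tab>
</Tabs>

Between-blocks text.

<Tabs>
  <Tab title="pip">pip install myorg-sdk</Tab>
</Tabs>
`

const result = transformMintlifyTabs(multiTabDoc)
console.log(result)
// Both blocks are transformed; the text between them is untouched:
// #### npm
//
// npm install @myorg/sdk
//
// Between-blocks text.
//
// #### pip
//
// pip install myorg-sdk
```

Because the outer regex is non-greedy, each `<Tabs>...</Tabs>` pair is matched separately, so blocks never bleed into each other.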

transformNotionCallouts

function transformNotionCallouts(content: string): string

Use this to convert Notion-style callout markup — both <aside> HTML blocks and :::callout fenced syntax — into a normalized format suitable for rendering or further processing.

This is especially useful when parsing exported Notion content that mixes raw HTML callouts with Markdown-style callout blocks, ensuring consistent output regardless of which format Notion used.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | Raw string content containing Notion <aside>...</aside> blocks and/or :::callout fenced syntax |

Returns

| Condition | Return Value |
| --- | --- |
| Always | string — The input content with Notion callout syntax transformed into a normalized format. Non-callout content is passed through unchanged. |

Example

// Inline implementation of transformNotionCallouts for demonstration
// (mirrors the real behavior: converts Notion callout formats to normalized blockquotes)
function transformNotionCallouts(content: string): string {
  // Transform <aside>...</aside> HTML blocks
  content = content.replace(
    /<aside>([\s\S]*?)<\/aside>/g,
    (_match, body: string) => {
      // Strip leading emoji characters (common Notion aside prefixes)
      const stripped = body.replace(/^[\u{1F000}-\u{1FFFF}\u{2600}-\u{26FF}️\s]+/u, '').trim()
      return `> **Note:** ${stripped}`
    }
  )

  // Transform :::callout ... ::: fenced blocks
  content = content.replace(
    /:::callout\s*([\s\S]*?):::/g,
    (_match, body: string) => {
      const stripped = body.trim()
      return `> **Callout:** ${stripped}`
    }
  )

  return content
}

// --- Example usage ---

const rawNotionExport = `
# My Notion Page

Here is some introductory text.

<aside>
💡 This is a helpful tip exported from Notion as an aside block.
</aside>

Some regular paragraph content in between.

:::callout
⚠️ This is a warning callout using the fenced syntax.
:::

Final paragraph with no callouts.
`

try {
  const transformed = transformNotionCallouts(rawNotionExport)
  console.log('Transformed content:\n')
  console.log(transformed)
  /*
  Expected output:

  # My Notion Page

  Here is some introductory text.

  > **Note:** This is a helpful tip exported from Notion as an aside block.

  Some regular paragraph content in between.

  > **Callout:** ⚠️ This is a warning callout using the fenced syntax.

  Final paragraph with no callouts.
  */
} catch (error) {
  console.error('Transformation failed:', error)
}

// Edge case: content with no callouts passes through unchanged
const plainContent = 'Just a regular paragraph with no callouts.'
const result = transformNotionCallouts(plainContent)
console.log('\nPlain content (unchanged):', result)
// Output: Just a regular paragraph with no callouts.

transformNotionToggles

function transformNotionToggles(content: string): string

Use this to convert Notion-exported toggle blocks (HTML <details>/<summary> elements) into <Accordion> components for MDX or documentation frameworks like Mintlify or Docusaurus.

When you export content from Notion, toggles become <details><summary>Title</summary>content</details> HTML. This function transforms all of them in a single pass into <Accordion title="..."> components ready for rendering.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | A string of HTML or MDX content containing one or more Notion-exported <details> toggle blocks |

Returns

| Condition | Return Value |
| --- | --- |
| Content contains <details> blocks | Returns the full string with all matching blocks replaced by <Accordion title="...">...</Accordion> |
| No <details> blocks found | Returns the original string unchanged |

Note: The function handles multiline content inside both <summary> and <details> tags, and trims whitespace from the title and body automatically.

Example

// Inline implementation — no imports needed
function transformNotionToggles(content: string): string {
  return content.replace(
    /<details>\s*<summary>([\s\S]*?)<\/summary>([\s\S]*?)<\/details>/g,
    (_match, title: string, body: string) =>
      `<Accordion title="${title.trim()}">\n${body.trim()}\n</Accordion>`
  );
}

// --- Example 1: Single toggle block ---
const singleToggle = `
<details>
  <summary>What is Supermemory?</summary>
  Supermemory is an AI-powered memory layer for your applications.
</details>
`.trim();

const singleResult = transformNotionToggles(singleToggle);
console.log("=== Single Toggle ===");
console.log(singleResult);
// Output:
// <Accordion title="What is Supermemory?">
// Supermemory is an AI-powered memory layer for your applications.
// </Accordion>

// --- Example 2: Multiple toggles in a larger MDX document ---
const mdxPage = `
# FAQ

Here are some common questions.

<details>
  <summary>How do I get started?</summary>
  Sign up at supermemory.ai and grab your API key from the dashboard.
</details>

<details>
  <summary>Is there a free tier?</summary>
  Yes! The free tier includes up to 1,000 memories per month.
</details>
`.trim();

const mdxResult = transformNotionToggles(mdxPage);
console.log("\n=== MDX Page with Multiple Toggles ===");
console.log(mdxResult);
// Output:
// # FAQ
//
// Here are some common questions.
//
// <Accordion title="How do I get started?">
// Sign up at supermemory.ai and grab your API key from the dashboard.
// </Accordion>
//
// <Accordion title="Is there a free tier?">
// Yes! The free tier includes up to 1,000 memories per month.
// </Accordion>

// --- Example 3: Content with no toggles (passthrough) ---
const plainContent = "# Just a heading\n\nSome regular paragraph text.";
const unchanged = transformNotionToggles(plainContent);
console.log("\n=== No Toggles (Passthrough) ===");
console.log(unchanged === plainContent ? "✅ Content unchanged" : "❌ Unexpected change");
// Output: ✅ Content unchanged
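
In a real Notion migration, toggles and callouts usually appear in the same export, so this transform typically runs alongside transformNotionCallouts. A sketch of composing the two, with both helpers repeated inline so the snippet stands on its own:

```typescript
// Both helpers repeated inline (same logic as shown in their own sections)
function transformNotionToggles(content: string): string {
  return content.replace(
    /<details>\s*<summary>([\s\S]*?)<\/summary>([\s\S]*?)<\/details>/g,
    (_m, title: string, body: string) =>
      `<Accordion title="${title.trim()}">\n${body.trim()}\n</Accordion>`
  )
}

function transformNotionCallouts(content: string): string {
  return content
    .replace(/<aside>([\s\S]*?)<\/aside>/g, (_m, body: string) => {
      // Strip leading emoji characters (common Notion aside prefixes)
      const stripped = body.replace(/^[\u{1F000}-\u{1FFFF}\u{2600}-\u{26FF}️\s]+/u, '').trim()
      return `> **Note:** ${stripped}`
    })
    .replace(/:::callout\s*([\s\S]*?):::/g, (_m, body: string) => `> **Callout:** ${body.trim()}`)
}

// A Notion export mixing a toggle and an aside callout
const mixedExport = `
<details>
  <summary>Setup</summary>
  Install the CLI first.
</details>

<aside>
💡 Keep your API key secret.
</aside>
`

// Order is flexible here: the two transforms match disjoint syntax
const migrated = transformNotionCallouts(transformNotionToggles(mixedExport))
console.log(migrated)
// <Accordion title="Setup">
// Install the CLI first.
// </Accordion>
//
// > **Note:** Keep your API key secret.
```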

transformReadmeCallouts

function transformReadmeCallouts(content: string): string

Use this to convert ReadMe RDMD-style callout blocks (using emoji prefixes) into standard markdown or HTML callout components — ideal for preprocessing .md files exported from ReadMe before rendering them in your own docs pipeline.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | Raw markdown string containing ReadMe RDMD callout syntax (e.g., > 📘 Title\n> body text) |

Returns

| Condition | Returns |
| --- | --- |
| Content contains RDMD callout blocks | string — transformed markdown with callouts converted to the target format |
| Content has no RDMD callout blocks | string — original content unchanged |

ReadMe RDMD Callout Format

ReadMe uses emoji-prefixed blockquotes to denote callout types:

| Emoji | Callout Type |
| --- | --- |
| 📘 | Info / Note |
| 👍 | Success / Tip |
| 🚧 | Warning |
| ❗ | Error / Danger |

Input format:

> 📘 Note Title
> This is the body of the callout.
> It can span multiple lines.

Example

// Inline the emoji map and transformation logic (self-contained, no imports needed)
const README_EMOJI_MAP: Record<string, string> = {
  '📘': 'info',
  '👍': 'success',
  '🚧': 'warning',
  '❗': 'danger',
}

// Inline implementation of transformReadmeCallouts
function transformReadmeCallouts(content: string): string {
  const emojiPattern = Object.keys(README_EMOJI_MAP).join('|')
  const rdmdRegex = new RegExp(
    `> (${emojiPattern}) (.+)\\n((?:> .+\\n?)*)`,
    'g'
  )

  return content.replace(rdmdRegex, (match, emoji, title, body) => {
    const calloutType = README_EMOJI_MAP[emoji] ?? 'info'

    // Strip leading "> " from each body line
    const cleanBody = body
      .split('\n')
      .map((line: string) => line.replace(/^> ?/, ''))
      .filter((line: string) => line.trim() !== '')
      .join('\n')

    // Transform to a fenced callout block (common in many doc systems)
    return `:::${calloutType} ${title}\n${cleanBody}\n:::\n`
  })
}

// --- Example usage ---

const readmeMarkdown = `
# API Reference

> 📘 Authentication Required
> All endpoints require a valid API key.
> Pass it via the Authorization header.

Some regular paragraph text here.

> 🚧 Rate Limiting
> This endpoint is limited to 100 requests per minute.
> Exceeding this will return a 429 status code.

> 👍 Pro Tip
> Use connection pooling to maximize throughput.

> ❗ Deprecated
> This endpoint will be removed in v3.0.
`

try {
  const transformed = transformReadmeCallouts(readmeMarkdown)
  console.log('Transformed output:\n')
  console.log(transformed)
  /*
  Expected output:

  # API Reference

  :::info Authentication Required
  All endpoints require a valid API key.
  Pass it via the Authorization header.
  :::

  Some regular paragraph text here.

  :::warning Rate Limiting
  This endpoint is limited to 100 requests per minute.
  Exceeding this will return a 429 status code.
  :::

  :::success Pro Tip
  Use connection pooling to maximize throughput.
  :::

  :::danger Deprecated
  This endpoint will be removed in v3.0.
  :::
  */

  // Verify passthrough: content with no callouts is returned unchanged
  const plainMarkdown = '# Hello\n\nJust a regular paragraph.\n'
  const unchanged = transformReadmeCallouts(plainMarkdown)
  console.log('Passthrough (no callouts):', unchanged === plainMarkdown ? '✅ unchanged' : '❌ modified')
  // Output: Passthrough (no callouts): ✅ unchanged

} catch (error) {
  console.error('Transformation failed:', error)
}

transformReadmeCodeBlocks

function transformReadmeCodeBlocks(content: string): string

Use this to convert ReadMe.io's proprietary [block:code] syntax into standard <CodeGroup> components for modern documentation platforms.

When migrating content from ReadMe.io, code blocks are stored as JSON-wrapped [block:code]...[/block] tags. This function parses those blocks and transforms them into portable <CodeGroup> elements that work with MDX-based doc systems like Mintlify or Nextra.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | ✅ Yes | Raw markdown/MDX string containing one or more ReadMe [block:code] blocks |

Returns

| Condition | Returns |
| --- | --- |
| Content contains valid [block:code] blocks | String with all matched blocks replaced by <CodeGroup> components |
| Block JSON is malformed | That block is left unchanged (safe fallback) |
| No [block:code] blocks found | Original string returned unchanged |

Example

// Inline implementation of transformReadmeCodeBlocks
// (mirrors the real function behavior for demonstration)

function transformReadmeCodeBlocks(content: string): string {
  return content.replace(
    /\[block:code\]\n?([\s\S]*?)\n?\[\/block\]/g,
    (_match, jsonStr: string) => {
      try {
        const data = JSON.parse(jsonStr);
        const codes: Array<{ name: string; language: string; code: string }> =
          data.codes ?? [];

        const codeBlocks = codes
          .map(
            ({ name, language, code }) =>
              `  \`\`\`${language} ${name}\n${code}\n  \`\`\``
          )
          .join("\n");

        return `<CodeGroup>\n${codeBlocks}\n</CodeGroup>`;
      } catch {
        // Return original block unchanged if JSON is invalid
        return _match;
      }
    }
  );
}

// --- Example usage ---

const readmeContent = `
# API Authentication

Here's how to authenticate with the API:

[block:code]
{
  "codes": [
    {
      "name": "Node.js",
      "language": "javascript",
      "code": "const client = new ApiClient({\\n  apiKey: process.env.API_KEY\\n});"
    },
    {
      "name": "Python",
      "language": "python",
      "code": "client = ApiClient(api_key=os.environ['API_KEY'])"
    }
  ]
}
[/block]

More content follows here.
`;

const malformedContent = `
[block:code]
{ this is not valid JSON }
[/block]
`;

const noBlocksContent = `
# Plain Markdown

Just a regular paragraph with a fenced code block:

\`\`\`js
console.log("hello")
\`\`\`
`;

try {
  console.log("=== Valid ReadMe block ===");
  const transformed = transformReadmeCodeBlocks(readmeContent);
  console.log(transformed);
  // Output:
  // # API Authentication
  // Here's how to authenticate with the API:
  // <CodeGroup>
  //   ```javascript Node.js
  //   const client = new ApiClient({
  //     apiKey: process.env.API_KEY
  //   });
  //   ```
  //   ```python Python
  //   client = ApiClient(api_key=os.environ['API_KEY'])
  //   ```
  // </CodeGroup>
  // More content follows here.

  console.log("=== Malformed JSON block (safe fallback) ===");
  const malformedResult = transformReadmeCodeBlocks(malformedContent);
  console.log(malformedResult);
  // Output: original [block:code]....[/block] left unchanged

  console.log("=== No ReadMe blocks (passthrough) ===");
  const unchanged = transformReadmeCodeBlocks(noBlocksContent);
  console.log(unchanged);
  // Output: identical to noBlocksContent
} catch (error) {
  console.error("Transformation failed:", error);
}

validateConfig

function validateConfig(config: Config): string[]

Use this to validate a configuration object before using it, catching all errors upfront rather than failing at runtime.

validateConfig checks a Config object against known rules (such as supported version numbers) and returns a list of human-readable error messages. An empty array means the config is valid.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| config | Config | ✅ Yes | The configuration object to validate. Must include at minimum a version field. |

Returns

| Result | Description |
| --- | --- |
| string[] (empty) | Config is valid — no errors found |
| string[] (non-empty) | One or more validation error messages describing what is wrong |

Common validation errors

  • "Unsupported config version: <n>" — the version field is not 1

Example

// Inline type definition (mirrors the real Config shape)
type LLMProvider = 'openai' | 'anthropic' | 'ollama'

type Config = {
  version: number
  provider?: LLMProvider
  model?: string
  license?: string
  [key: string]: unknown
}

// Inline implementation matching the real validateConfig logic
function validateConfig(config: Config): string[] {
  const errors: string[] = []

  if (config.version !== 1) {
    errors.push(`Unsupported config version: ${config.version}`)
  }

  // Additional validation rules would appear here in the real implementation
  return errors
}

// --- Usage examples ---

const validConfig: Config = {
  version: 1,
  provider: 'openai',
  model: 'gpt-4o',
  license: 'MIT',
}

const invalidConfig: Config = {
  version: 3,       // unsupported version
  provider: 'anthropic',
  model: 'claude-3-5-sonnet-20241022',
}

try {
  // Validate a correct config
  const validErrors = validateConfig(validConfig)
  if (validErrors.length === 0) {
    console.log('✅ Config is valid — proceeding with initialization')
    // Output: ✅ Config is valid — proceeding with initialization
  } else {
    console.error('Config errors:', validErrors)
  }

  // Validate a broken config
  const invalidErrors = validateConfig(invalidConfig)
  if (invalidErrors.length > 0) {
    console.error('❌ Invalid config detected:')
    invalidErrors.forEach((err) => console.error(`  - ${err}`))
    // Output:
    // ❌ Invalid config detected:
    //   - Unsupported config version: 3
  } else {
    console.log('Config is valid')
  }
} catch (error) {
  console.error('Unexpected error during validation:', error)
}
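
Because validateConfig returns messages rather than throwing, call sites that prefer fail-fast behavior can wrap it in a small guard. A sketch under that pattern — assertValidConfig is a hypothetical helper shown for illustration, not part of the library:

```typescript
// Minimal Config type and validateConfig, repeated from the example above
type Config = { version: number; [key: string]: unknown }

function validateConfig(config: Config): string[] {
  const errors: string[] = []
  if (config.version !== 1) {
    errors.push(`Unsupported config version: ${config.version}`)
  }
  return errors
}

// Hypothetical guard: aggregate all validation errors into one thrown Error
function assertValidConfig(config: Config): void {
  const errors = validateConfig(config)
  if (errors.length > 0) {
    throw new Error(`Invalid config:\n${errors.map((e) => `  - ${e}`).join('\n')}`)
  }
}

try {
  assertValidConfig({ version: 2 })
} catch (error) {
  console.error((error as Error).message)
  // Invalid config:
  //   - Unsupported config version: 2
}
```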

writeDocsByTopic

async function writeDocsByTopic(docs: GeneratedDoc[], outputDir: string): Promise<{ filesWritten: number; totalDocs: number; topics: Topic[] }>

Use this to write generated documentation files to disk, organized into topic-based subdirectories — ideal for creating structured doc sites where related functions and classes are grouped together.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| docs | GeneratedDoc[] | ✅ Yes | Array of generated documentation objects, each containing content and metadata about a code element |
| outputDir | string | ✅ Yes | Root directory path where topic-organized subdirectories and markdown files will be written |

Returns

Returns a Promise that resolves to an object with:

| Field | Type | Description |
| --- | --- | --- |
| filesWritten | number | Count of files successfully written to disk |
| totalDocs | number | Total number of documentation entries processed |
| topics | Topic[] | Array of topic objects describing how docs were grouped |

Resolves once all files are written. Rejects if the output directory cannot be created or a file write fails.

Example

import { mkdir, writeFile } from 'fs/promises'
import { join } from 'path'

// --- Inline types (mirrors Skrypt internals) ---
type GeneratedDoc = {
  elementName: string
  topic: string
  content: string
  filePath: string
}

type Topic = {
  name: string
  slug: string
  docCount: number
}

// --- Simulated implementation of writeDocsByTopic ---
async function writeDocsByTopic(
  docs: GeneratedDoc[],
  outputDir: string
): Promise<{ filesWritten: number; totalDocs: number; topics: Topic[] }> {
  let filesWritten = 0

  // Group docs by topic
  const topicMap = new Map<string, GeneratedDoc[]>()
  for (const doc of docs) {
    const group = topicMap.get(doc.topic) ?? []
    group.push(doc)
    topicMap.set(doc.topic, group)
  }

  // Write each topic's docs into its own subdirectory
  for (const [topicName, topicDocs] of topicMap.entries()) {
    const topicSlug = topicName.toLowerCase().replace(/\s+/g, '-')
    const topicDir = join(outputDir, topicSlug)
    await mkdir(topicDir, { recursive: true })

    for (const doc of topicDocs) {
      const fileName = `${doc.elementName.toLowerCase().replace(/\s+/g, '-')}.md`
      const filePath = join(topicDir, fileName)
      await writeFile(filePath, doc.content, 'utf-8')
      filesWritten++
    }
  }

  // Build topics summary
  const topics: Topic[] = Array.from(topicMap.entries()).map(([name, topicDocs]) => ({
    name,
    slug: name.toLowerCase().replace(/\s+/g, '-'),
    docCount: topicDocs.length,
  }))

  return { filesWritten, totalDocs: docs.length, topics }
}

// --- Example usage ---
const exampleDocs: GeneratedDoc[] = [
  {
    elementName: 'createUser',
    topic: 'Authentication',
    filePath: 'src/auth/createUser.ts',
    content: '# createUser\n\nCreates a new user account.\n\n## Parameters\n- `email`: string\n- `password`: string',
  },
  {
    elementName: 'verifyToken',
    topic: 'Authentication',
    filePath: 'src/auth/verifyToken.ts',
    content: '# verifyToken\n\nVerifies a JWT token and returns the decoded payload.',
  },
  {
    elementName: 'fetchProducts',
    topic: 'Catalog',
    filePath: 'src/catalog/fetchProducts.ts',
    content: '# fetchProducts\n\nRetrieves a paginated list of products from the catalog.',
  },
  {
    elementName: 'updateInventory',
    topic: 'Catalog',
    filePath: 'src/catalog/updateInventory.ts',
    content: '# updateInventory\n\nUpdates stock levels for one or more products.',
  },
  {
    elementName: 'sendEmail',
    topic: 'Notifications',
    filePath: 'src/notifications/sendEmail.ts',
    content: '# sendEmail\n\nSends a transactional email via the configured provider.',
  },
]

const OUTPUT_DIR = process.env.DOCS_OUTPUT_DIR || './docs-output'

async function main() {
  try {
    const result = await writeDocsByTopic(exampleDocs, OUTPUT_DIR)

    console.log('✅ Documentation written successfully!')
    console.log(`   Files written : ${result.filesWritten}`)
    console.log(`   Total docs    : ${result.totalDocs}`)
    console.log(`   Topics found  : ${result.topics.length}`)
    console.log('\n📂 Topic breakdown:')
    for (const topic of result.topics) {
      console.log(`   [${topic.slug}]  →  ${topic.docCount} doc(s)`)
    }

    // Expected output:
    // ✅ Documentation written successfully!
    //    Files written : 5
    //    Total docs    : 5
    //    Topics found  : 3
    //
    // 📂 Topic breakdown:
    //    [authentication]  →  2 doc(s)
    //    [catalog]  →  2 doc(s)
    //    [notifications]  →  1 doc(s)
  } catch (error) {
    console.error('❌ Failed to write docs:', error)
    process.exit(1)
  }
}

main()
TypeScript

writeDocsToDirectory

async function writeDocsToDirectory(results: FileGenerationResult[], outputDir: string, sourceDir: string): Promise<{ filesWritten: number; totalDocs: number }>
TypeScript

Use this to persist generated documentation to disk, organizing output files in a directory structure that mirrors your source code layout.

After generating docs for your codebase, writeDocsToDirectory takes the in-memory results and writes each file's documentation to the corresponding path under outputDir, preserving the relative structure from sourceDir. It returns a summary of how many files were written and how many total doc entries were produced.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| results | FileGenerationResult[] | Yes | Array of generation results, each containing a source file path and its generated documentation |
| outputDir | string | Yes | Root directory where documentation files will be written (created if it doesn't exist) |
| sourceDir | string | Yes | Root directory of the original source files, used to compute relative output paths |

Returns

A Promise that resolves to an object with:

| Property | Type | Description |
| --- | --- | --- |
| filesWritten | number | Number of documentation files successfully written to disk |
| totalDocs | number | Total number of individual documentation entries across all files |

Returns { filesWritten: 0, totalDocs: 0 } if results is empty.
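The source-to-output mapping itself reduces to a relative-path computation plus an extension swap. A minimal sketch (the paths and extension list are illustrative; inline comments assume POSIX separators):

```typescript
import { join, relative, resolve } from 'path'

// Where does one source file's documentation land under the output root?
const sourceDir = resolve('./src')               // illustrative source root
const outputDir = resolve('./docs/generated')    // illustrative output root
const sourceFile = resolve('./src/utils/format.ts')

const relPath = relative(sourceDir, sourceFile)  // utils/format.ts
const docPath = join(outputDir, relPath.replace(/\.(ts|tsx|js|jsx)$/, '.md'))

console.log(docPath.endsWith(join('utils', 'format.md')))  // true
```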

Example

import { mkdir, writeFile } from 'fs/promises'
import { dirname, join, relative, resolve } from 'path'
import { existsSync } from 'fs'

// ── Inline types (mirrors Skrypt internals) ────────────────────────────────

interface GeneratedDoc {
  name: string
  kind: 'function' | 'class' | 'interface' | 'type' | 'variable'
  markdown: string
}

interface FileGenerationResult {
  filePath: string
  docs: GeneratedDoc[]
  error?: string
}

// ── Inline implementation of writeDocsToDirectory ───────────────────────────

async function writeDocsToDirectory(
  results: FileGenerationResult[],
  outputDir: string,
  sourceDir: string
): Promise<{ filesWritten: number; totalDocs: number }> {
  let filesWritten = 0
  let totalDocs = 0

  const resolvedSource = resolve(sourceDir)
  const resolvedOutput = resolve(outputDir)

  for (const result of results) {
    if (result.error || result.docs.length === 0) continue

    // Mirror the source path under the output directory
    const relPath = relative(resolvedSource, resolve(result.filePath))
    const docFileName = relPath.replace(/\.(ts|tsx|js|jsx)$/, '.md')
    const outPath = join(resolvedOutput, docFileName)

    // Build markdown content from all docs in this file
    const content = result.docs
      .map(doc => `## ${doc.name}\n\n${doc.markdown}`)
      .join('\n\n---\n\n')

    // Ensure the output subdirectory exists
    await mkdir(dirname(outPath), { recursive: true })
    await writeFile(outPath, content, 'utf-8')

    filesWritten++
    totalDocs += result.docs.length
  }

  return { filesWritten, totalDocs }
}

// ── Realistic usage example ──────────────────────────────────────────────────

const OUTPUT_DIR = process.env.DOCS_OUTPUT_DIR || './docs/generated'
const SOURCE_DIR = process.env.SOURCE_DIR || './src'

// Simulated results from a doc generation step
const mockResults: FileGenerationResult[] = [
  {
    filePath: './src/utils/format.ts',
    docs: [
      {
        name: 'formatDate',
        kind: 'function',
        markdown: 'Formats a `Date` object into a human-readable string.\n\n**Params:** `date: Date` — the date to format.',
      },
      {
        name: 'formatCurrency',
        kind: 'function',
        markdown: 'Formats a number as a currency string.\n\n**Params:** `amount: number`, `currency: string`.',
      },
    ],
  },
  {
    filePath: './src/models/User.ts',
    docs: [
      {
        name: 'User',
        kind: 'class',
        markdown: 'Represents an authenticated user in the system.',
      },
    ],
  },
  {
    // Skipped — generation failed for this file
    filePath: './src/legacy/old.ts',
    docs: [],
    error: 'Parse error: unexpected token',
  },
]

async function main() {
  try {
    console.log(`Writing docs to: ${OUTPUT_DIR}`)
    console.log(`Source root:     ${SOURCE_DIR}\n`)

    const { filesWritten, totalDocs } = await writeDocsToDirectory(
      mockResults,
      OUTPUT_DIR,
      SOURCE_DIR
    )

    console.log('✅ Documentation written successfully')
    console.log(`   Files written : ${filesWritten}`)  // Expected: 2
    console.log(`   Total docs    : ${totalDocs}`)     // Expected: 3

    // Verify output files exist
    const expectedPaths = [
      join(OUTPUT_DIR, 'utils/format.md'),
      join(OUTPUT_DIR, 'models/User.md'),
    ]
    for (const p of expectedPaths) {
      console.log(`   ${existsSync(p) ? '📄' : '❌'} ${p}`)
    }
  } catch (error) {
    if (error instanceof Error) {
      console.error('Failed to write docs:', error.message)
    } else {
      console.error('Unexpected error:', error)
    }
    process.exit(1)
  }
}

main()

// Expected output:
// Writing docs to: ./docs/generated
// Source root:     ./src
//
// ✅ Documentation written successfully
//    Files written : 2
//    Total docs    : 3
//    📄 docs/generated/utils/format.md
//    📄 docs/generated/models/User.md
TypeScript

writeLlmsTxt

async function writeLlmsTxt(docs: GeneratedDoc[], outputDir: string, options: { projectName?: string; description?: string } = {}): Promise<void>
TypeScript

Use this to generate an llms.txt file for your project — a standardized index that helps LLMs (like ChatGPT, Claude, etc.) discover and understand your API documentation. Follows the llmstxt.org convention for Answer Engine Optimization (AEO).

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| docs | GeneratedDoc[] | ✅ Yes | Array of generated documentation objects to index |
| outputDir | string | ✅ Yes | Directory path where llms.txt will be written |
| options | object | No | Optional metadata for the generated file |
| options.projectName | string | No | Name of your project (defaults to 'API') |
| options.description | string | No | Short description of your project shown at the top of the file |

Returns

Promise<void> — Resolves when the file has been successfully written to disk. Throws if the output directory cannot be created or the file cannot be written.

Output

Writes a single llms.txt file to outputDir. The file contains a structured, LLM-readable index of your documentation entries, formatted per the llmstxt.org spec — including your project name, description, and links to individual doc pages.

Example

import { mkdir, writeFile } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'

// --- Inline types (mirrors the real GeneratedDoc shape) ---
type GeneratedDoc = {
  title: string
  description?: string
  outputPath: string
  content: string
  slug?: string
}

// --- Inline implementation of writeLlmsTxt ---
async function writeLlmsTxt(
  docs: GeneratedDoc[],
  outputDir: string,
  options: { projectName?: string; description?: string } = {}
): Promise<void> {
  const projectName = options.projectName || 'API'
  const description = options.description || ''

  const lines: string[] = []

  // Header block (llmstxt.org convention)
  lines.push(`# ${projectName}`)
  if (description) {
    lines.push('')
    lines.push(`> ${description}`)
  }
  lines.push('')

  // Index each doc as a markdown list entry
  lines.push('## Documentation')
  lines.push('')
  for (const doc of docs) {
    const docDescription = doc.description ? `: ${doc.description}` : ''
    lines.push(`- [${doc.title}](${doc.outputPath})${docDescription}`)
  }
  lines.push('')

  const output = lines.join('\n')

  await mkdir(outputDir, { recursive: true })
  await writeFile(join(outputDir, 'llms.txt'), output, 'utf-8')
}

// --- Example usage ---
const exampleDocs: GeneratedDoc[] = [
  {
    title: 'Authentication',
    description: 'How to authenticate API requests using API keys and OAuth',
    outputPath: '/docs/authentication.md',
    content: '# Authentication\n...',
    slug: 'authentication',
  },
  {
    title: 'Rate Limiting',
    description: 'Understand request limits and how to handle 429 responses',
    outputPath: '/docs/rate-limiting.md',
    content: '# Rate Limiting\n...',
    slug: 'rate-limiting',
  },
  {
    title: 'Webhooks',
    description: 'Receive real-time event notifications via webhooks',
    outputPath: '/docs/webhooks.md',
    content: '# Webhooks\n...',
    slug: 'webhooks',
  },
]

const outputDir = join(tmpdir(), 'my-project-docs')

async function main() {
  try {
    await writeLlmsTxt(exampleDocs, outputDir, {
      projectName: 'Acme API',
      description: 'REST API for managing users, billing, and integrations.',
    })

    // Verify the output
    const { readFile } = await import('fs/promises')
    const result = await readFile(join(outputDir, 'llms.txt'), 'utf-8')
    console.log('✅ llms.txt written successfully!\n')
    console.log('--- File contents ---')
    console.log(result)

    // Expected output:
    // # Acme API
    //
    // > REST API for managing users, billing, and integrations.
    //
    // ## Documentation
    //
    // - [Authentication](/docs/authentication.md): How to authenticate API requests using API keys and OAuth
    // - [Rate Limiting](/docs/rate-limiting.md): Understand request limits and how to handle 429 responses
    // - [Webhooks](/docs/webhooks.md): Receive real-time event notifications via webhooks
  } catch (error) {
    console.error('❌ Failed to write llms.txt:', error)
    process.exit(1)
  }
}

main()
TypeScript

GoScanner.canHandle

canHandle(filePath: string): boolean
TypeScript

Use this to check whether a Go source file should be processed by the GoScanner — it returns true for .go files that are not test files (_test.go).

This is the gating method called before scanning a file for functions, methods, types, and interfaces. It prevents test files from being included in documentation output.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Absolute or relative path to the file being evaluated |

Returns

| Value | Condition |
| --- | --- |
| true | File path ends in .go and does not contain _test.go |
| false | File is not a .go file, or is a Go test file (_test.go) |

Behavior Notes

  • Matches purely on the file path string — no filesystem access occurs
  • Test files (e.g., auth_test.go, ./pkg/parser_test.go) are explicitly excluded
  • Non-Go files (.ts, .py, .gotemplate, etc.) always return false

Example

// Inline implementation matching GoScanner.canHandle behavior
class GoScanner {
  languages = ['go']

  canHandle(filePath: string): boolean {
    return /\.go$/.test(filePath) && !filePath.includes('_test.go')
  }
}

const scanner = new GoScanner()

const testCases: Array<{ path: string; expected: boolean; note: string }> = [
  { path: 'internal/auth/handler.go',       expected: true,  note: 'standard Go source file' },
  { path: './pkg/parser/lexer.go',           expected: true,  note: 'relative path Go file' },
  { path: '/home/user/project/main.go',      expected: true,  note: 'absolute path Go file' },
  { path: 'internal/auth/handler_test.go',   expected: false, note: 'Go test file — excluded' },
  { path: 'pkg/utils/strings_test.go',       expected: false, note: 'Go test file — excluded' },
  { path: 'src/components/Button.tsx',       expected: false, note: 'TypeScript file' },
  { path: 'scripts/deploy.sh',              expected: false, note: 'shell script' },
  { path: 'templates/layout.gotemplate',     expected: false, note: '.go not at end of path' },
]

console.log('GoScanner.canHandle() results:\n')

let passed = 0
for (const { path, expected, note } of testCases) {
  try {
    const result = scanner.canHandle(path)
    const status = result === expected ? '✅ PASS' : '❌ FAIL'
    if (result === expected) passed++

    console.log(`${status}  canHandle("${path}")`)
    console.log(`       → ${result}  (${note})\n`)
  } catch (error) {
    console.error(`Error processing path "${path}":`, error)
  }
}

console.log(`Results: ${passed}/${testCases.length} passed`)
// Expected output (abridged):
// ✅ PASS  canHandle("internal/auth/handler.go")
//        → true  (standard Go source file)
//
// ✅ PASS  canHandle("internal/auth/handler_test.go")
//        → false  (Go test file — excluded)
//
// ... all 8 cases pass, ending with: Results: 8/8 passed
TypeScript

PythonScanner.canHandle

canHandle(filePath: string): boolean
TypeScript

Use this to quickly check whether a file should be processed by the Python scanner before attempting to parse or analyze it.

canHandle inspects a file path and returns true if it ends with .py, allowing you to gate Python-specific scanning logic without wasting resources on unsupported file types.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | The path or filename to check for Python file extension |

Returns

| Value | Condition |
| --- | --- |
| true | The filePath ends with .py |
| false | The filePath has any other extension or no extension |

Example

// Inline implementation matching PythonScanner behavior
class PythonScanner {
  languages = ['python']

  canHandle(filePath: string): boolean {
    return filePath.endsWith('.py')
  }
}

const scanner = new PythonScanner()

const testFiles = [
  '/project/src/main.py',
  '/project/src/utils.py',
  '/project/src/index.ts',
  '/project/README.md',
  '/project/Makefile',
  'script.py',
  'not_a_python_file.pyc',  // .pyc is NOT .py
]

console.log('PythonScanner.canHandle() results:')
console.log('-----------------------------------')

for (const filePath of testFiles) {
  const result = scanner.canHandle(filePath)
  const label = result ? '✅ will scan' : '⛔ skip'
  console.log(`${label}  ${filePath}`)
}

// Expected output:
// ✅ will scan  /project/src/main.py
// ✅ will scan  /project/src/utils.py
// ⛔ skip       /project/src/index.ts
// ⛔ skip       /project/README.md
// ⛔ skip       /project/Makefile
// ✅ will scan  script.py
// ⛔ skip       not_a_python_file.pyc

// Typical usage: filter a list of files before scanning
const allFiles = [
  'app.py', 'helpers.py', 'config.json', 'server.ts'
]

const pythonFiles = allFiles.filter(f => scanner.canHandle(f))
console.log('\nFiles queued for Python scanning:', pythonFiles)
// Output: ['app.py', 'helpers.py']
TypeScript

RustScanner.canHandle

canHandle(filePath: string): boolean
TypeScript

Use this to check whether a file path should be processed by the Rust scanner before attempting to scan it. Returns true only for .rs files that are not inside a /tests/ directory.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | The file path to evaluate (relative or absolute) |

Returns

| Value | Condition |
| --- | --- |
| true | Path ends with .rs and does not contain /tests/ |
| false | Path is not a .rs file, or is inside a /tests/ directory |

Note: Test files (paths containing /tests/) are explicitly excluded. This prevents auto-documentation of test code that isn't part of the public API surface.

Example

// Inline implementation of RustScanner.canHandle — no external imports needed
class RustScanner {
  languages = ['rust']

  canHandle(filePath: string): boolean {
    return /\.rs$/.test(filePath) && !filePath.includes('/tests/')
  }
}

const scanner = new RustScanner()

const testCases: Array<{ path: string; expected: boolean; note: string }> = [
  { path: 'src/lib.rs',                    expected: true,  note: 'standard source file' },
  { path: 'src/models/user.rs',            expected: true,  note: 'nested source file' },
  { path: 'src/tests/user_test.rs',        expected: false, note: 'inside /tests/ directory' },
  { path: 'src/main.ts',                   expected: false, note: 'wrong extension' },
  { path: 'README.md',                     expected: false, note: 'non-Rust file' },
  { path: '/home/user/project/src/api.rs', expected: true,  note: 'absolute path' },
]

console.log('RustScanner.canHandle() results:\n')

for (const { path, expected, note } of testCases) {
  try {
    const result = scanner.canHandle(path)
    const status = result === expected ? '✅ PASS' : '❌ FAIL'
    console.log(`${status}  canHandle("${path}")`)
    console.log(`       → ${result}  (${note})\n`)
  } catch (error) {
    console.error(`Error processing "${path}":`, error)
  }
}

// Expected output:
// ✅ PASS  canHandle("src/lib.rs")
//        → true  (standard source file)
//
// ✅ PASS  canHandle("src/tests/user_test.rs")
//        → false  (inside /tests/ directory)
//
// ✅ PASS  canHandle("src/main.ts")
//        → false  (wrong extension)
TypeScript

GoScanner.scanFile

async scanFile(filePath: string): Promise<ScanResult>
TypeScript

Use this to extract API elements from a Go source file, returning all discovered functions, types, and parameters in a structured result.

scanFile reads and parses a .go file (excluding test files), identifying exported API elements and collecting any parse errors encountered during scanning.

Note: Only handles non-test Go files (.go extension, not _test.go). Use canHandle(filePath) to verify compatibility before calling.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Absolute or relative path to the .go source file to scan |

Returns

Returns a Promise<ScanResult> that resolves with:

| Field | Type | Description |
| --- | --- | --- |
| elements | APIElement[] | All discovered API elements (functions, types, etc.) |
| errors | string[] | Non-fatal parse errors encountered during scanning |
| filePath | string | The original file path that was scanned |

Throws if the file cannot be read (e.g., file not found, permission denied).

Example

import { writeFileSync, unlinkSync } from 'fs'

// --- Inline types (mirrors the real library's types) ---
type Parameter = {
  name: string
  type: string
  required?: boolean
}

type APIElement = {
  name: string
  kind: 'function' | 'type' | 'method'
  parameters?: Parameter[]
  returns?: string
  comment?: string
  line?: number
}

type ScanResult = {
  filePath: string
  elements: APIElement[]
  errors: string[]
}

// --- Simulated GoScanner implementation ---
class GoScanner {
  canHandle(filePath: string): boolean {
    return /\.go$/.test(filePath) && !filePath.includes('_test.go')
  }

  async scanFile(filePath: string): Promise<ScanResult> {
    const { readFileSync } = await import('fs')
    const source = readFileSync(filePath, 'utf-8')

    const elements: APIElement[] = []
    const errors: string[] = []
    const lines = source.split('\n')

    lines.forEach((line, index) => {
      // Match exported functions: func FunctionName(
      const funcMatch = line.match(/^func\s+([A-Z][a-zA-Z0-9]*)\s*\(([^)]*)\)\s*(.*)/)
      if (funcMatch) {
        const [, name, rawParams, returnType] = funcMatch
        const parameters: Parameter[] = rawParams
          .split(',')
          .map(p => p.trim())
          .filter(Boolean)
          .map(p => {
            const parts = p.split(/\s+/)
            return {
              name: parts[0] || 'arg',
              type: parts[1] || 'interface{}',
              required: true,
            }
          })

        elements.push({
          name,
          kind: 'function',
          parameters,
          // Strip a trailing opening brace captured by the regex (e.g. "error {")
          returns: returnType.replace(/\{\s*$/, '').trim() || 'void',
          line: index + 1,
        })
      }

      // Match exported types: type TypeName struct/interface
      const typeMatch = line.match(/^type\s+([A-Z][a-zA-Z0-9]*)\s+(struct|interface)/)
      if (typeMatch) {
        elements.push({
          name: typeMatch[1],
          kind: 'type',
          line: index + 1,
        })
      }
    })

    return { filePath, elements, errors }
  }
}

// --- Create a temporary Go file to scan ---
const tempGoFile = '/tmp/example_service.go'
const goSource = `package service

// UserService handles user operations
type UserService struct{}

// GetUser retrieves a user by ID
func GetUser(userID string, includeDeleted bool) error {
  return nil
}

// CreateUser adds a new user to the system
func CreateUser(name string, email string) string {
  return ""
}
`

writeFileSync(tempGoFile, goSource, 'utf-8')

// --- Run the scanner ---
async function main() {
  const scanner = new GoScanner()

  // Verify the file is compatible before scanning
  if (!scanner.canHandle(tempGoFile)) {
    console.error('File is not a scannable Go source file.')
    process.exit(1)
  }

  try {
    const result = await scanner.scanFile(tempGoFile)

    console.log(`Scanned: ${result.filePath}`)
    console.log(`Found ${result.elements.length} API elements:\n`)

    result.elements.forEach(el => {
      console.log(`  [${el.kind.toUpperCase()}] ${el.name} (line ${el.line})`)
      if (el.parameters?.length) {
        el.parameters.forEach(p => console.log(`    param: ${p.name} (${p.type})`))
      }
      if (el.returns) {
        console.log(`    returns: ${el.returns}`)
      }
    })

    if (result.errors.length > 0) {
      console.warn('\nParse errors:', result.errors)
    }

    // Expected output:
    // Scanned: /tmp/example_service.go
    // Found 3 API elements:
    //
    //   [TYPE] UserService (line 4)
    //   [FUNCTION] GetUser (line 7)
    //     param: userID (string)
    //     param: includeDeleted (bool)
    //     returns: error
    //   [FUNCTION] CreateUser (line 12)
    //     param: name (string)
    //     param: email (string)
    //     returns: string

  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
      console.error(`File not found: ${tempGoFile}`)
    } else {
      console.error('Scan failed:', error)
    }
  } finally {
    // Clean up temp file
    unlinkSync(tempGoFile)
  }
}

main()
TypeScript

PythonScanner.scanFile

async scanFile(filePath: string): Promise<ScanResult>
TypeScript

Use this to scan a Python source file and extract structured metadata — functions, classes, imports, and other code elements — for documentation generation or static analysis pipelines.

scanFile spawns a Python3 parser subprocess against the target .py file and resolves with a structured ScanResult object. It is a method on PythonScanner, which only handles files ending in .py (check with canHandle before calling).

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Absolute or relative path to the .py file to scan |

Returns

Returns Promise<ScanResult> that resolves with the parsed scan output from the Python file.

| Scenario | Behavior |
| --- | --- |
| Valid .py file | Resolves with a populated ScanResult object |
| File not found / parse error | Rejects or resolves with an error-state ScanResult (subprocess stderr captured) |
| Non-.py file | Use canHandle(filePath) first — scanFile is not intended for other file types |

Tip: Always call canHandle(filePath) before scanFile to guard against unsupported file types.
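Because scanning shells out to a Python interpreter, probing for one up front turns an opaque spawn failure into a clear error. A small sketch; the interpreter names tried here are an assumption (some systems expose only python):

```typescript
import { spawnSync } from 'child_process'

// Return the first command that runs a Python 3 interpreter, or null
function findPython3(): string | null {
  for (const cmd of ['python3', 'python']) {
    const probe = spawnSync(cmd, ['--version'], { encoding: 'utf-8' })
    const banner = `${probe.stdout ?? ''}${probe.stderr ?? ''}`
    if (probe.status === 0 && banner.includes('Python 3')) {
      return cmd
    }
  }
  return null
}

const interpreter = findPython3()
console.log(interpreter ?? 'no Python 3 interpreter found')
```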

Example

import { spawn } from 'child_process'
import { resolve } from 'path'
import { writeFileSync, unlinkSync } from 'fs'

// --- Inline types (mirrors the real ScanResult shape) ---
interface ScannedFunction {
  name: string
  lineStart: number
  lineEnd: number
  docstring?: string
}

interface ScannedClass {
  name: string
  methods: ScannedFunction[]
}

interface ScanResult {
  filePath: string
  language: string
  functions: ScannedFunction[]
  classes: ScannedClass[]
  imports: string[]
  error?: string
}

// --- Inline PythonScanner (self-contained simulation) ---
class PythonScanner {
  canHandle(filePath: string): boolean {
    return filePath.endsWith('.py')
  }

  async scanFile(filePath: string): Promise<ScanResult> {
    return new Promise((resolve, reject) => {
      // Inline parser: use python3 -c to extract basic structure
      const script = `
import ast, json, sys

path = sys.argv[1]
with open(path) as f:
    source = f.read()

tree = ast.parse(source)

functions = []
classes = []
imports = []

for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        functions.append({
            "name": node.name,
            "lineStart": node.lineno,
            "lineEnd": node.end_lineno,
            "docstring": ast.get_docstring(node)
        })
    elif isinstance(node, ast.ClassDef):
        methods = [
            {"name": m.name, "lineStart": m.lineno, "lineEnd": m.end_lineno}
            for m in node.body if isinstance(m, ast.FunctionDef)
        ]
        classes.append({"name": node.name, "methods": methods})
    elif isinstance(node, (ast.Import, ast.ImportFrom)):
        imports.append(ast.dump(node))

print(json.dumps({
    "filePath": path,
    "language": "python",
    "functions": functions,
    "classes": classes,
    "imports": imports
}))
`
      const proc = spawn('python3', ['-c', script, filePath], {
        stdio: ['ignore', 'pipe', 'pipe']
      })

      let stdout = ''
      let stderr = ''

      proc.stdout.on('data', (chunk: Buffer) => { stdout += chunk.toString() })
      proc.stderr.on('data', (chunk: Buffer) => { stderr += chunk.toString() })

      proc.on('close', (code: number) => {
        if (code !== 0 || stderr) {
          resolve({
            filePath,
            language: 'python',
            functions: [],
            classes: [],
            imports: [],
            error: stderr || `Process exited with code ${code}`
          })
          return
        }
        try {
          const result: ScanResult = JSON.parse(stdout)
          resolve(result)
        } catch (parseError) {
          reject(new Error(`Failed to parse scanner output: ${parseError}`))
        }
      })

      proc.on('error', (err: Error) => {
        reject(new Error(`Failed to spawn python3: ${err.message}`))
      })
    })
  }
}

// --- Create a temporary Python file to scan ---
const samplePythonFile = resolve('./sample_module.py')
writeFileSync(samplePythonFile, `
"""A sample module for demonstration."""
import os
import json

class DataProcessor:
    """Processes incoming data."""

    def __init__(self, config: dict):
        """Initialize with config."""
        self.config = config

    def process(self, data: list) -> list:
        """Run processing pipeline."""
        return [item for item in data if item]

def load_config(path: str) -> dict:
    """Load a JSON config file."""
    with open(path) as f:
        return json.load(f)
`)

// --- Run the scanner ---
async function main() {
  const scanner = new PythonScanner()
  const targetFile = process.env.PYTHON_FILE || samplePythonFile

  if (!scanner.canHandle(targetFile)) {
    console.error(`Skipping: ${targetFile} is not a .py file`)
    process.exit(1)
  }

  try {
    console.log(`Scanning: ${targetFile}\n`)
    const result = await scanner.scanFile(targetFile)

    if (result.error) {
      console.error('Scan completed with errors:', result.error)
    } else {
      console.log('Language:', result.language)
      console.log('Classes found:', result.classes.map(c => c.name))
      console.log('Functions found:', result.functions.map(f => f.name))
      console.log('Imports found:', result.imports.length)
      console.log('\nFull result:', JSON.stringify(result, null, 2))
      // Expected output:
      // Language: python
      // Classes found: [ 'DataProcessor' ]
      // Functions found: [ 'load_config', '__init__', 'process' ]
      // Imports found: 2
    }
  } catch (error) {
    console.error('Scanner failed:', error instanceof Error ? error.message : error)
  } finally {
    unlinkSync(samplePythonFile) // clean up temp file
  }
}

main()
TypeScript

RustScanner.scanFile

async scanFile(filePath: string): Promise<ScanResult>
TypeScript

Use this to extract API elements from a Rust source file, parsing its contents into a structured result containing discovered elements and any errors encountered during scanning.

Designed for use with non-test .rs files, scanFile reads the file at the given path and returns a ScanResult containing all parsed APIElement entries alongside any parsing errors.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | Yes | Absolute or relative path to the .rs source file to scan. Must not be a test file (paths containing /tests/ are not supported). |

Returns

Returns Promise<ScanResult> which resolves to:

| Field | Type | Description |
| --- | --- | --- |
| elements | APIElement[] | All API elements discovered in the file (functions, structs, enums, etc.) |
| errors | string[] | Non-fatal parsing errors encountered during the scan |

Rejects with an error if the file cannot be read (e.g. file not found, permission denied).

Notes

  • Only handles files matching *.rs that are not inside a /tests/ directory
  • Check canHandle(filePath) before calling scanFile to verify the file is eligible for scanning

Example

import { readFileSync, writeFileSync, unlinkSync } from 'fs'
import { tmpdir } from 'os'
import { join } from 'path'

// --- Inline types (mirrors the real library's types) ---
interface Parameter {
  name: string
  type: string
}

interface APIElement {
  name: string
  kind: 'function' | 'struct' | 'enum' | 'impl'
  parameters?: Parameter[]
  isPublic: boolean
  lineNumber: number
}

interface ScanResult {
  elements: APIElement[]
  errors: string[]
}

// --- Inline implementation of RustScanner.scanFile ---
class RustScanner {
  canHandle(filePath: string): boolean {
    return /\.rs$/.test(filePath) && !filePath.includes('/tests/')
  }

  async scanFile(filePath: string): Promise<ScanResult> {
    const source = readFileSync(filePath, 'utf-8')
    const elements: APIElement[] = []
    const errors: string[] = []
    const lines = source.split('\n')

    const fnRegex = /^(pub\s+)?fn\s+(\w+)\s*\(([^)]*)\)/
    const structRegex = /^(pub\s+)?struct\s+(\w+)/
    const enumRegex = /^(pub\s+)?enum\s+(\w+)/

    lines.forEach((line, index) => {
      const trimmed = line.trim()

      const fnMatch = trimmed.match(fnRegex)
      if (fnMatch) {
        const rawParams = fnMatch[3].trim()
        const parameters: Parameter[] = rawParams
          ? rawParams.split(',').map(p => {
              const [name, type] = p.trim().split(':').map(s => s.trim())
              return { name: name || 'unknown', type: type || 'unknown' }
            })
          : []

        elements.push({
          name: fnMatch[2],
          kind: 'function',
          parameters,
          isPublic: !!fnMatch[1],
          lineNumber: index + 1,
        })
        return
      }

      const structMatch = trimmed.match(structRegex)
      if (structMatch) {
        elements.push({
          name: structMatch[2],
          kind: 'struct',
          isPublic: !!structMatch[1],
          lineNumber: index + 1,
        })
        return
      }

      const enumMatch = trimmed.match(enumRegex)
      if (enumMatch) {
        elements.push({
          name: enumMatch[2],
          kind: 'enum',
          isPublic: !!enumMatch[1],
          lineNumber: index + 1,
        })
        return
      }

      // Flag lines that look malformed
      if (trimmed.startsWith('pub fn') && !fnMatch) {
        errors.push(`Line ${index + 1}: Could not parse function signature: "${trimmed}"`)
      }
    })

    return { elements, errors }
  }
}

// --- Create a temporary .rs file to scan ---
const sampleRustCode = `
pub struct UserProfile {
    pub id: u64,
    pub name: String,
}

pub enum Status {
    Active,
    Inactive,
}

pub fn get_user(id: u64, name: String) -> UserProfile {
    UserProfile { id, name }
}

fn internal_helper(value: u32) -> bool {
    value > 0
}
`.trim()

const tmpFilePath = join(tmpdir(), `sample_${Date.now()}.rs`)

async function main() {
  try {
    // Write the temp file
    writeFileSync(tmpFilePath, sampleRustCode, 'utf-8')

    const scanner = new RustScanner()

    // Guard: verify the file is eligible before scanning
    if (!scanner.canHandle(tmpFilePath)) {
      throw new Error(`File is not eligible for scanning: ${tmpFilePath}`)
    }

    const result: ScanResult = await scanner.scanFile(tmpFilePath)

    console.log(`Scanned: ${tmpFilePath}`)
    console.log(`\nDiscovered ${result.elements.length} API elements:\n`)

    result.elements.forEach(el => {
      const visibility = el.isPublic ? 'pub' : 'private'
      const params = el.parameters?.map(p => `${p.name}: ${p.type}`).join(', ') ?? ''
      console.log(`  [${el.kind}] ${visibility} ${el.name}${params ? `(${params})` : ''} — line ${el.lineNumber}`)
    })

    if (result.errors.length > 0) {
      console.warn(`\nParsing errors (${result.errors.length}):`)
      result.errors.forEach(e => console.warn(`  ⚠ ${e}`))
    } else {
      console.log('\nNo parsing errors.')
    }

    /*
    Expected output:
      Discovered 4 API elements:

      [struct]   pub  UserProfile — line 1
      [enum]     pub  Status      — line 6
      [function] pub  get_user(id: u64, name: String) — line 11
      [function] private internal_helper(value: u32)  — line 15

      No parsing errors.
    */
  } catch (error) {
    console.error('Scan failed:', error instanceof Error ? error.message : error)
    process.exit(1)
  } finally {
    // Clean up temp file
    try { unlinkSync(tmpFilePath) } catch {}
  }
}

main()