applyFeedback
async function applyFeedback(feedback: ReviewFeedback, client: LLMClient): Promise<ReviewResult>
Use applyFeedback to regenerate a specific section of documentation based on a reviewer's critique, without re-processing your entire codebase.
When skrypt's review pass flags a doc entry as inaccurate, incomplete, or poorly written, applyFeedback lets you act on that signal immediately. It fits naturally after reviewDocs in a quality loop: review → collect feedback items → apply each one.
It reads the existing generated file at feedback.filePath, sends the flagged section back to the LLM with the critique as context, and writes the improved content in place. Only the relevant section is regenerated — the rest of the file is untouched.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| feedback | ReviewFeedback | Yes | A single feedback item from a review pass — contains the file path, the flagged section, and the critique to address. Iterate over the array returned by reviewDocs to process multiple items. |
| client | LLMClient | Yes | The configured LLM client used to regenerate the section. Must be the same provider and model you used during the original generation pass to keep output style consistent. |
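The parameter shapes can be sketched from the fields this page mentions. This is a reconstruction for illustration, not skrypt's actual type definitions — any detail beyond filePath, elementName, critique, and complete() is an assumption.

```typescript
// Sketch of the shapes implied by this page — not the library's actual
// definitions; treat anything beyond these fields as an assumption.
interface ReviewFeedback {
  filePath: string;    // generated doc file the critique targets
  elementName: string; // the flagged section (e.g. a function name)
  critique: string;    // the reviewer's note to address
}

interface LLMClient {
  // Any object with a complete() method satisfies this contract.
  complete(prompt: string): Promise<string>;
}

const sample: ReviewFeedback = {
  filePath: "./content/docs/payments.mdx",
  elementName: "createCharge",
  critique: "Parameter descriptions restate the type without explaining impact.",
};
```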
Returns
Returns a Promise<ReviewResult> describing whether the section was successfully rewritten, along with the updated content. Check result.status to confirm the fix was applied before moving to the next feedback item — a failed result leaves the original file unchanged.
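When processing several feedback items, the status check fits naturally into a sequential loop. A minimal sketch — applyFeedbackStub is a stand-in for the real applyFeedback, which would call the LLM and rewrite files:

```typescript
type Status = "success" | "failure";
interface ReviewResult { status: Status; elementName: string; error?: string }

// Stand-in for applyFeedback, so the loop shape is runnable in isolation.
async function applyFeedbackStub(name: string): Promise<ReviewResult> {
  return name === "badEntry"
    ? { status: "failure", elementName: name, error: "rewrite failed" }
    : { status: "success", elementName: name };
}

// Sequential on purpose: concurrent calls against the same file can
// cause write conflicts (see Heads up below).
async function applyAll(names: string[]): Promise<string[]> {
  const fixed: string[] = [];
  for (const name of names) {
    const result = await applyFeedbackStub(name);
    if (result.status === "success") fixed.push(name);
    else console.error(`skipping ${name}: ${result.error ?? "unknown error"}`);
  }
  return fixed;
}
```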
Heads up
- applyFeedback writes directly to the file at feedback.filePath. If you're running this in CI or want a dry run, snapshot the file beforehand or check ReviewResult before committing.
- Process feedback items sequentially if multiple items target the same file — concurrent calls can cause write conflicts.
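One way to get the snapshot behavior suggested above is a small wrapper that backs the file up before applying and restores it on failure. A sketch assuming Node's fs module — withSnapshot is a hypothetical helper, not part of skrypt:

```typescript
import { copyFileSync, readFileSync, writeFileSync, unlinkSync } from "fs";

// Hypothetical safety wrapper: snapshot the file, run the apply step
// (e.g. applyFeedback), and roll back if it reports failure.
async function withSnapshot(
  filePath: string,
  apply: () => Promise<boolean>,
): Promise<boolean> {
  const backup = filePath + ".bak";
  copyFileSync(filePath, backup);            // snapshot before any write
  const ok = await apply();
  if (!ok) copyFileSync(backup, filePath);   // restore original on failure
  unlinkSync(backup);                        // drop the snapshot either way
  return ok;
}
```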
Example:
const { readFileSync, writeFileSync } = require("fs");
const { OpenAI } = require("openai");
// --- Inline types (mirrors autodocs internals) ---
const ReviewStatus = { SUCCESS: "success", FAILURE: "failure" };
// --- Minimal LLMClient implementation ---
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); // never hardcode keys
const client = {
async complete(prompt) {
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: prompt }],
});
return response.choices[0].message.content ?? "";
},
};
// --- Inline applyFeedback implementation ---
async function applyFeedback(feedback, client) {
try {
const content = readFileSync(feedback.filePath, "utf-8");
const prompt = `
You are improving API documentation. Here is the existing documentation:
${content}
A reviewer flagged the section for "${feedback.elementName}" with this critique:
"${feedback.critique}"
Rewrite only that section to address the critique. Return the full updated file content.
`.trim();
const updated = await client.complete(prompt);
writeFileSync(feedback.filePath, updated, "utf-8");
return {
status: ReviewStatus.SUCCESS,
filePath: feedback.filePath,
elementName: feedback.elementName,
updatedContent: updated,
};
} catch (err) {
return {
status: ReviewStatus.FAILURE,
filePath: feedback.filePath,
elementName: feedback.elementName,
error: err.message,
};
}
}
// --- Example usage ---
const feedbackItem = {
filePath: "./content/docs/payments.mdx",
elementName: "createCharge",
critique:
"The parameter descriptions restate the type without explaining impact. The code example uses placeholder data instead of realistic values.",
};
(async () => {
try {
const result = await applyFeedback(feedbackItem, client);
if (result.status === "success") {
console.log(`✓ Rewrote docs for "${result.elementName}" in ${result.filePath}`);
} else {
console.error(`✗ Failed to apply feedback: ${result.error}`);
}
} catch (err) {
console.error("Unexpected error:", err.message);
}
})();
// Expected output:
// ✓ Rewrote docs for "createCharge" in ./content/docs/payments.mdx