Agentic AI

Evaluator-Optimizer Agent

An iterative refinement loop in which an Evaluator provides granular feedback on an Optimizer's output until quality thresholds are met.


The Challenge

LLMs often struggle with complex tasks requiring high precision in a single pass. A single prompt may produce output that is partially correct but contains subtle errors in logic, formatting, or completeness that are difficult to catch without structured review.

The Solution

Decouple the roles. One agent (Optimizer) focuses on creative generation, while a second agent (Evaluator) applies structured rubrics to critique the output. The loop continues—refine, evaluate, refine—until the Evaluator’s criteria are fully satisfied or a maximum iteration count is reached.
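The Evaluator's "structured rubric" can be as simple as a checklist of pass/fail criteria, each producing a critique the Optimizer can act on. Below is a minimal, deterministic sketch; the rubric items and the `evaluate` helper are illustrative assumptions (in a real system each check would typically be an LLM judgment), and the Feedback shape matches the one in the Implementation section.

```typescript
interface RubricItem {
  name: string;
  passes: (output: string) => boolean;
}

interface Feedback {
  isPass: boolean;
  critique: string; // actionable list of failed criteria
  score: number;    // fraction of rubric items satisfied, 0..1
}

// Hypothetical rubric: three checks a draft must satisfy.
const rubric: RubricItem[] = [
  { name: "has a non-empty title line", passes: (o) => o.trim().split("\n")[0].length > 0 },
  { name: "is at least 50 characters", passes: (o) => o.length >= 50 },
  { name: "mentions 'refinement'", passes: (o) => o.includes("refinement") },
];

function evaluate(output: string): Feedback {
  const failures = rubric.filter((item) => !item.passes(output));
  return {
    isPass: failures.length === 0,
    critique: failures.map((f) => `Missing: ${f.name}`).join("; "),
    score: (rubric.length - failures.length) / rubric.length,
  };
}
```

Because every failed check maps to a named criterion, the critique string doubles as a concrete revision instruction rather than a vague "try again".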

Architecture Overview

(Diagram: the Optimizer generates a draft, the Evaluator critiques it against its rubric, and the feedback loops back into the Optimizer until the criteria pass or the iteration budget is exhausted.)

Implementation

interface Feedback {
  isPass: boolean;
  critique: string;
  score: number;
}

interface Optimizer {
  generate(task: string): Promise<string>;
  refine(current: string, feedback: string): Promise<string>;
}

interface Evaluator {
  check(output: string): Promise<Feedback>;
}

// Cap the loop so a never-satisfied Evaluator cannot run forever.
const MAX_ITERATIONS = 5;

async function refinementLoop(
  task: string,
  optimizer: Optimizer,
  evaluator: Evaluator
): Promise<string> {
  // Initial draft from the Optimizer.
  let currentOutput = await optimizer.generate(task);

  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const feedback = await evaluator.check(currentOutput);

    // Stop as soon as the Evaluator's criteria are satisfied.
    if (feedback.isPass) return currentOutput;

    // Otherwise feed the critique back into the Optimizer.
    currentOutput = await optimizer.refine(
      currentOutput,
      feedback.critique
    );
  }

  // Budget exhausted: return the best effort so far.
  return currentOutput;
}
Compiled with TypeScript 5.6
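To see the loop terminate, it can be exercised with stub agents that involve no LLM calls. The stubs below are hypothetical: the Optimizer appends one "fix" per round of critique, and the Evaluator passes once two fixes are present. The interfaces and `refinementLoop` are repeated from above so the snippet runs standalone.

```typescript
interface Feedback { isPass: boolean; critique: string; score: number; }

interface Optimizer {
  generate(task: string): Promise<string>;
  refine(current: string, feedback: string): Promise<string>;
}

interface Evaluator {
  check(output: string): Promise<Feedback>;
}

const MAX_ITERATIONS = 5;

async function refinementLoop(
  task: string, optimizer: Optimizer, evaluator: Evaluator
): Promise<string> {
  let current = await optimizer.generate(task);
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const feedback = await evaluator.check(current);
    if (feedback.isPass) return current;
    current = await optimizer.refine(current, feedback.critique);
  }
  return current;
}

// Stub Optimizer: produces a draft, then appends one fix per critique.
const optimizer: Optimizer = {
  async generate(task) { return `draft: ${task}`; },
  async refine(current, critique) { return `${current} [fixed: ${critique}]`; },
};

// Stub Evaluator: requires two revisions before passing.
const evaluator: Evaluator = {
  async check(output) {
    const fixes = (output.match(/\[fixed:/g) ?? []).length;
    return {
      isPass: fixes >= 2,
      critique: `apply revision ${fixes + 1}`,
      score: fixes / 2,
    };
  },
};

refinementLoop("summarize the report", optimizer, evaluator).then((result) => {
  console.log(result);
  // → draft: summarize the report [fixed: apply revision 1] [fixed: apply revision 2]
});
```

With these stubs the loop runs exactly three evaluations (fail, fail, pass) and returns after two refinements, well inside the MAX_ITERATIONS budget.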

Real-World Analogy

Think of a student writing an essay (Optimizer) and a teacher grading it with detailed feedback (Evaluator). The student revises based on the red-ink comments and resubmits. This cycle repeats until the essay meets the teacher’s standards—or the deadline (max iterations) is reached.