CritiqueLoopAgent

CritiqueLoopAgent is a Crowe-native, iteration-first agent that improves outputs via a propose → judge → mentor → revise loop. It’s designed for tasks where quality emerges through cycles rather than one-shot generation.

Overview

Internally, CritiqueLoopAgent coordinates three specialized roles:

  • Proposer — drafts the initial answer or plan

  • Judge — scores the draft against a rubric and flags defects

  • Mentor — formulates actionable guidance and heuristics for revision

Each cycle leverages prior feedback and stored insights to steadily raise answer quality.
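
Conceptually, one cycle chains these roles in order. The sketch below is illustrative only, assuming an agent instance like the one constructed under Initialization below; the per-role method signatures are assumptions, not the documented API:

# Illustrative composition of one cycle (signatures are assumed, not documented)
draft   = agent.propose(task)                        # Proposer: first-pass answer
report  = agent.judge(task, draft)                   # Judge: rubric scores and flagged defects
advice  = agent.mentor(task, draft, report)          # Mentor: actionable guidance for revision
revised = agent.revise(task, draft, report, advice)  # improved draft carried into the next cycle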


Initialization

from crowe.agents import CritiqueLoopAgent

agent = CritiqueLoopAgent(
    agent_name="critique-loop",
    model="openai/gpt-4o",
    max_cycles=3,
    memory_limit=128,
    system_prompt=None  # optional
)

Parameters

Parameter      Type  Default           Description
agent_name     str   "critique-loop"   Identifier for logging/routing
model          str   "openai/gpt-4o"   Backbone LLM
max_cycles     int   3                 Max improvement cycles
memory_limit   int   128               Max long-term insight items
system_prompt  str   None              Optional global system prompt
return_list    bool  False             Return conversation as list
return_dict    bool  False             Return conversation as dict


Methods

propose

Generates a first-pass answer (Proposer).

judge

Evaluates quality against a rubric (Judge).

mentor

Creates actionable guidance and reusable heuristics (Mentor).

revise

Refines the draft using the Judge's report and the Mentor's advice.

cycle

Runs one full loop: propose → judge → mentor → revise.

Returns: { task, draft, report, advice, revised, score, iteration }
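
A hedged one-cycle sketch, assuming cycle() accepts the task string and returns the dictionary shown above (the real signature may differ):

# Assumption: cycle() takes the task directly; the actual signature is not documented here.
result = agent.cycle("Summarize this incident report for executives.")
print(result["score"])    # Judge's rubric score for this iteration
print(result["revised"])  # draft after applying the Mentor's advice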

run

Executes multiple cycles until convergence or until max_cycles is reached.

  • If include_intermediates=False: returns final revised text

  • If True: returns a full trace (per-iteration artifacts)


Example
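
A minimal usage sketch. run() and its include_intermediates flag are taken from the method description above; the exact keyword name and the shape of each trace entry are assumptions, not a verified signature.

from crowe.agents import CritiqueLoopAgent

agent = CritiqueLoopAgent(
    agent_name="critique-loop",
    model="openai/gpt-4o",
    max_cycles=3,
)

task = "Write a 200-word launch note for our new audit dashboard, aimed at non-technical executives."

# Final revised text only
final_text = agent.run(task)
print(final_text)

# Full per-iteration trace (keyword name assumed from the run() description above)
trace = agent.run(task, include_intermediates=True)
for step in trace:  # assuming each entry mirrors the cycle() dictionary above
    print(step["iteration"], step["score"])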


Memory System

Backed by GuidanceMemory, which stores:

  • Recent drafts, reports, and advice (short-term)

  • Durable heuristics and do/don’t rules (long-term)

  • Relevance retrieval + deduplication to avoid noise
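
The snippet below is purely hypothetical: the import path and the add_insight / retrieve / prune names are placeholders used to illustrate the behaviors listed above, not the documented GuidanceMemory interface.

from crowe.memory import GuidanceMemory  # import path is an assumption

memory = GuidanceMemory(limit=128)
# Placeholder method names, not the real API:
memory.add_insight("Prefer active voice in executive summaries.")  # long-term heuristic
hits = memory.retrieve("executive summary tone", top_k=3)          # relevance retrieval
memory.prune()                                                     # deduplicate / drop stale items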


Best Practices

  • Be explicit: Provide target style, audience, length, constraints.

  • Use a rubric: e.g., clarity, accuracy, structure, tone (0–1 per dimension); see the sketch after this list.

  • Tune cycles: Increase max_cycles for complex domains.

  • Curate memory: Periodically prune stale or redundant insights.

  • Harden outputs: Validate and sanitize if results feed production systems.
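
For the rubric practice above, a minimal sketch of a rubric folded directly into the task text; the dimensions mirror the example in the list, and nothing here assumes a dedicated rubric parameter on the agent:

# Example rubric (0-1 per dimension), embedded in the task prompt
rubric = {
    "clarity": "Is the answer easy to follow for the stated audience?",
    "accuracy": "Are all claims correct and verifiable?",
    "structure": "Does the layout match the requested format?",
    "tone": "Is the voice appropriate for the audience?",
}

task = (
    "Draft a migration guide for the v2 API.\n"
    "Judge each dimension from 0 to 1 and revise until every score is at least 0.8:\n"
    + "\n".join(f"- {name}: {question}" for name, question in rubric.items())
)
result = agent.run(task)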
