CritiqueLoopAgent is a Crowe-native, iteration-first agent that improves outputs via a propose → judge → mentor → revise loop. It’s designed for tasks where quality emerges through cycles rather than one-shot generation.
## Overview

Internally, CritiqueLoopAgent coordinates three specialized roles:

- **Proposer** — drafts the initial answer or plan
- **Judge** — scores the draft against a rubric and flags defects
- **Mentor** — formulates actionable guidance and heuristics for revision

Each cycle leverages prior feedback and stored insights to steadily raise answer quality.
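The propose → judge → mentor → revise flow can be sketched with plain Python stand-ins for the three roles. Everything here — function names, the toy rubric, the `threshold` parameter — is illustrative, not the crowe API:

```python
def propose(task):
    # Proposer: produce a first-pass draft
    return f"Draft answer for: {task}"

def judge(draft):
    # Judge: score against a toy rubric and flag defects
    defects = [] if "revised" in draft else ["too shallow"]
    return {"score": 1.0 - 0.5 * len(defects), "defects": defects}

def mentor(report):
    # Mentor: turn flagged defects into actionable advice
    return [f"address: {d}" for d in report["defects"]]

def revise(draft, advice):
    # Revise the draft using the mentor's advice
    return f"{draft} (revised: {'; '.join(advice)})"

def run_loop(task, max_cycles=3, threshold=0.9):
    draft = propose(task)
    for _ in range(max_cycles):
        report = judge(draft)
        if report["score"] >= threshold:
            break  # quality bar met; stop early
        draft = revise(draft, mentor(report))
    return draft
```

The key point of the shape: the Judge's report feeds the Mentor, and both feed the revision, so each pass is conditioned on concrete feedback rather than a blind retry.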
## Initialization

```python
from crowe.agents import CritiqueLoopAgent

agent = CritiqueLoopAgent(
    agent_name="critique-loop",
    model="openai/gpt-4o",
    max_cycles=3,
    memory_limit=128,
    system_prompt=None,  # optional
)
```
## Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `agent_name` | `str` | `"critique-loop"` | Identifier for logging/routing |
| `model` | `str` | `"openai/gpt-4o"` | Backbone LLM |
| `max_cycles` | `int` | `3` | Maximum number of improvement cycles |
| `memory_limit` | `int` | `128` | Maximum number of long-term insight items |
| `system_prompt` | `str` | `None` | Optional global system prompt |
| `return_list` | `bool` | `False` | Return the conversation as a list |
| `return_dict` | `bool` | `False` | Return the conversation as a dict |
## Methods

- `propose`: Generates a first-pass answer (Proposer).
- `judge`: Evaluates quality against a rubric (Judge).
- `mentor`: Creates actionable guidance and reusable heuristics (Mentor).
- `revise`: Refines the draft using the judge report and mentor advice.
- `cycle`: Runs one full loop: propose → judge → mentor → revise.
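One way `cycle` could compose the four methods while capping stored mentor insights at `memory_limit` is sketched below. This is a standalone toy (the scoring rule and internal state are invented for illustration); the real agent's internals may differ:

```python
from collections import deque

class MiniCritiqueLoop:
    """Standalone sketch: cycle() chains propose -> judge -> mentor -> revise."""

    def __init__(self, max_cycles=3, memory_limit=128):
        self.max_cycles = max_cycles
        # Long-term insight store: oldest entries are evicted past memory_limit.
        self.insights = deque(maxlen=memory_limit)

    def propose(self, task):
        return {"task": task, "text": f"draft: {task}", "revisions": 0}

    def judge(self, draft):
        # Toy rubric: each revision raises the score toward 1.0.
        return {"score": min(1.0, 0.4 + 0.3 * draft["revisions"]),
                "defects": ["add detail"]}

    def mentor(self, report):
        advice = [f"fix: {d}" for d in report["defects"]]
        self.insights.extend(advice)  # remember heuristics for later cycles
        return advice

    def revise(self, draft, advice):
        draft["text"] += f" [rev {draft['revisions'] + 1}: {'; '.join(advice)}]"
        draft["revisions"] += 1
        return draft

    def cycle(self, task, threshold=0.9):
        draft = self.propose(task)
        for _ in range(self.max_cycles):
            report = self.judge(draft)
            if report["score"] >= threshold:
                break  # rubric satisfied; stop revising
            draft = self.revise(draft, self.mentor(report))
        return draft
```

Using a bounded deque for the insight store means the agent keeps only the most recent `memory_limit` heuristics, which is one plausible reading of how the parameter bounds long-term memory.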