A prompt optimization system that adapts your prompts for different AI providers.
This is not a simple prompt-in/prompt-out system. Each Optimizer Agent is a true autonomous agent that:

- **Discovers knowledge** - Uses the `list_provider_docs` tool to find available documentation
- **Reads selectively** - Uses the `read_provider_doc` tool to retrieve specific guidelines (12K+ chars each)
- **Applies learning** - Transforms prompts based on the provider-specific patterns it learned
- **Reports changes** - Uses `submit_optimization` to return structured results with a detailed changelog

The agent makes autonomous decisions in a ReAct loop (Reason → Act → Observe → Repeat).
```
                           AGENTIC SYSTEM

┌─────────────────────────────────────────────────────────────────
│ ORCHESTRATOR AGENT
│
│ • Validates providers against the docs/ directory
│ • Spawns parallel optimizer agents (asyncio.gather)
│ • Aggregates results from all agents
└──────────────────────────────┬──────────────────────────────────
         ┌─────────────────────┼─────────────────────┐
         ▼                     ▼                     ▼
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│ OPTIMIZER AGENT  │  │ OPTIMIZER AGENT  │  │ OPTIMIZER AGENT  │
│     (OpenAI)     │  │   (Anthropic)    │  │     (Google)     │
│                  │  │                  │  │                  │
│  ReAct Loop:     │  │  ReAct Loop:     │  │  ReAct Loop:     │
│  1. Reason       │  │  1. Reason       │  │  1. Reason       │
│  2. Act (tool)   │  │  2. Act (tool)   │  │  2. Act (tool)   │
│  3. Observe      │  │  3. Observe      │  │  3. Observe      │
│  4. Repeat       │  │  4. Repeat       │  │  4. Repeat       │
│                  │  │                  │  │                  │
│  [FileLogger]    │  │  [FileLogger]    │  │  [FileLogger]    │
└────────┬─────────┘  └────────┬─────────┘  └────────┬─────────┘
         └─────────────────────┼─────────────────────┘
                               ▼
┌─────────────────────────────────────────────────────────────────
│ TOOL LAYER
│
│ list_provider_docs(provider)           → ["index.md", "prompting.md"]
│ read_provider_doc(provider, doc_name)  → "12K chars of guidelines..."
│ submit_optimization(prompt, changes)   → Final structured result
└──────────────────────────────┬──────────────────────────────────
                               ▼
┌─────────────────────────────────────────────────────────────────
│ KNOWLEDGE BASE (docs/)
│
│ ├── openai/prompting.md     (Official prompting guide)
│ ├── anthropic/prompting.md  (Be clear, direct, detailed)
│ ├── google/prompting.md     (Prompt design strategies)
│ └── kimi/prompting.md       (Kimi-specific guidelines)
│
│ → Auto-detected on startup (add folder = new provider)
└─────────────────────────────────────────────────────────────────
```
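The orchestrator's fan-out step can be sketched as follows. This is a minimal sketch with hypothetical function names; the real `orchestrator.py` adds provider validation and error handling:

```python
import asyncio

async def optimize_for_provider(prompt: str, provider: str) -> dict:
    # Stand-in for one OptimizerAgent run (ReAct loop + tools).
    await asyncio.sleep(0)  # simulate async LLM/tool work
    return {"provider": provider, "prompt": prompt, "success": True}

async def orchestrate(prompt: str, providers: list[str]) -> dict:
    # Spawn one optimizer agent per provider and run them concurrently.
    results = await asyncio.gather(
        *(optimize_for_provider(prompt, p) for p in providers)
    )
    # Aggregate results keyed by provider name.
    return {r["provider"]: r for r in results}

out = asyncio.run(orchestrate("You are a helpful assistant.", ["openai", "anthropic"]))
```

Because `asyncio.gather` awaits all agents concurrently, total latency is roughly that of the slowest provider, not the sum of all of them.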
Each optimizer runs an autonomous ReAct loop (Reasoning + Acting):
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage


class OptimizerAgent:
    """
    Agentic optimizer using the ReAct pattern.

    The agent autonomously decides which documents to read,
    rather than having context pre-loaded (prevents context rot).
    """

    def __init__(self, max_iterations: int = 10):
        self.max_iterations = max_iterations
        self.llm = ChatOpenAI(
            model=PRIMARY_MODEL,          # from config.py
            api_key=OPENROUTER_API_KEY,
            base_url=OPENROUTER_BASE_URL,
            max_tokens=16384,             # Prevent output truncation
        )

    async def _run_agent_loop(self, task, provider, original, log, file_log):
        """
        The core ReAct loop:
        1. Send messages to LLM
        2. Parse tool call from response
        3. Execute tool, get result
        4. Add result to conversation
        5. Repeat until submission
        """
        messages = [
            SystemMessage(content=AGENT_SYSTEM_PROMPT),
            HumanMessage(content=task),
        ]
        for iteration in range(self.max_iterations):
            # REASON: LLM decides what to do
            response = await self.llm.ainvoke(messages)

            # Check for final submission
            if "submit_optimization" in response.content:
                return self._parse_final_submission(response.content, ...)

            # ACT: Parse and execute tool
            tool_call = self._parse_tool_call(response.content)
            if tool_call:
                name, args = tool_call
                result = self._execute_tool(name, args)

                # OBSERVE: Add result to conversation
                messages.append(AIMessage(content=response.content))
                messages.append(HumanMessage(content=f"TOOL RESULT:\n{result}"))
```

The agent uses a simple text-based tool-calling format:
```
TOOL: list_provider_docs | ARGS: provider=openai
TOOL: read_provider_doc | ARGS: provider=openai, doc_name=prompting.md
TOOL: submit_optimization | ARGS: done
```
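A line in this format can be pulled apart with a small regex. This is a hypothetical sketch of what `_parse_tool_call` might do; the real parser may be more forgiving:

```python
import re

def parse_tool_call(text: str):
    """Return (tool_name, args_dict) for the first TOOL: line, or None."""
    match = re.search(r"TOOL:\s*(\w+)\s*\|\s*ARGS:\s*(.+)", text)
    if not match:
        return None
    name, raw_args = match.group(1), match.group(2).strip()
    args = {}
    for pair in raw_args.split(","):
        if "=" in pair:
            key, value = pair.split("=", 1)
            args[key.strip()] = value.strip()
    return name, args
```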
```python
from pathlib import Path

def _list_provider_docs(provider: str) -> str:
    """List available docs for a provider."""
    provider_path = Path(DOCS_BASE_PATH) / provider.lower()
    files = sorted(f.name for f in provider_path.iterdir() if f.suffix == ".md")
    return f"Available docs for {provider.upper()}: {', '.join(files)}"

def _read_provider_doc(provider: str, doc_name: str) -> str:
    """Read a specific documentation file (returns full content, ~12K chars)."""
    doc_path = Path(DOCS_BASE_PATH) / provider.lower() / doc_name
    content = doc_path.read_text()
    return f"=== {provider.upper()}: {doc_name} ===\n\n{content}"
```

An example run:

User: "Optimize 'You are a helpful assistant' for Anthropic"
```
  │
  ▼
┌──────────────────────────────────────────────────────────────────
│ ITERATION 1 (0 ms)
│
│ LLM Response: "TOOL: list_provider_docs | ARGS: provider=anthropic"
│
│ Tool Result:  "Available docs for ANTHROPIC: index.md, prompting.md"
└──────────────────────────────────────────────────────────────────
  │
  ▼
┌──────────────────────────────────────────────────────────────────
│ ITERATION 2 (2.5 s)
│
│ LLM Response: "TOOL: read_provider_doc | ARGS: provider=anthropic, doc_name=prompting.md"
│
│ Tool Result:  "=== ANTHROPIC: prompting.md ===
│
│                Prompt engineering
│                Be clear, direct, and detailed
│
│                When interacting with Claude, think of it as a brilliant but
│                very new employee (with amnesia) who needs explicit instructions
│                ..." (12,082 characters)
└──────────────────────────────────────────────────────────────────
  │
  ▼
┌──────────────────────────────────────────────────────────────────
│ ITERATION 3 (8.8 s)
│
│ LLM Response: "TOOL: read_provider_doc | ARGS: provider=anthropic, doc_name=index.md"
│
│ Tool Result:  (416 characters of index content)
└──────────────────────────────────────────────────────────────────
  │
  ▼
┌──────────────────────────────────────────────────────────────────
│ ITERATION 4 (11.8 s) - FINAL SUBMISSION
│
│ LLM Response:
│ "Based on my review of Anthropic's prompting guidelines...
│
│  TOOL: submit_optimization | ARGS: done
│
│  OPTIMIZED_PROMPT:
│  You are a helpful assistant designed to provide clear,
│  accurate, and thoughtful responses to user questions...
│
│  CHANGES:
│  1. [clarity] - Added explicit description following the
│     guideline to be specific about what you want Claude to do"
└──────────────────────────────────────────────────────────────────

OPTIMIZATION COMPLETE (26 s total, 4 iterations)
```
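The final submission in iteration 4 has to be parsed back out of free text. A hypothetical sketch of that extraction (the real `_parse_final_submission` may differ):

```python
import re

def parse_final_submission(text: str) -> dict:
    """Extract the optimized prompt and changelog from the agent's last message."""
    # Everything between OPTIMIZED_PROMPT: and CHANGES: is the new prompt.
    prompt_match = re.search(r"OPTIMIZED_PROMPT:\s*(.*?)\s*CHANGES:", text, re.DOTALL)
    prompt = prompt_match.group(1).strip() if prompt_match else ""
    # Each change line looks like: 1. [category] - description
    changes = [
        {"category": cat, "description": desc.strip()}
        for cat, desc in re.findall(r"\d+\.\s*\[(\w+)\]\s*-\s*(.+)", text)
    ]
    return {"prompt": prompt, "changes": changes}
```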
Every agent execution is logged in two ways. First, each optimization result includes detailed `agent_logs` in the JSON response:
```json
{
  "agent_logs": [
    {"timestamp": "...", "elapsed_ms": 0, "type": "system", "content": "Starting optimization for ANTHROPIC"},
    {"timestamp": "...", "elapsed_ms": 2475, "type": "tool_call", "content": "Calling tool: list_provider_docs", "metadata": {"args": {"provider": "anthropic"}}},
    {"timestamp": "...", "elapsed_ms": 2476, "type": "tool_result", "content": "Available docs for ANTHROPIC: index.md, prompting.md"},
    {"timestamp": "...", "elapsed_ms": 8790, "type": "tool_call", "content": "Calling tool: read_provider_doc", "metadata": {"args": {"provider": "anthropic", "doc_name": "prompting.md"}}},
    {"timestamp": "...", "elapsed_ms": 8792, "type": "tool_result", "content": "=== ANTHROPIC: prompting.md ===...", "metadata": {"result_length": 12082}},
    {"timestamp": "...", "elapsed_ms": 26142, "type": "submit", "content": "Agent submitting final result"}
  ]
}
```

Full execution traces are also saved to `rosetta_prompt/logs/`:
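Each entry has a small, regular shape. The real schema is a Pydantic model (`AgentLogEntry` in `models/schemas.py`); here is a stdlib dataclass sketch of the same shape, with field names inferred from the JSON above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentLogEntry:
    timestamp: str
    elapsed_ms: int
    type: str                 # "system" | "tool_call" | "tool_result" | "submit"
    content: str
    metadata: Optional[dict] = None

entry = AgentLogEntry(
    timestamp="2025-12-05T05:51:09",
    elapsed_ms=2475,
    type="tool_call",
    content="Calling tool: list_provider_docs",
    metadata={"args": {"provider": "anthropic"}},
)
```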
```shell
$ ls rosetta_prompt/logs/
20251205_055106_859143_anthropic.log   # 37KB
20251205_055106_861382_google.log      # 37KB

$ cat rosetta_prompt/logs/20251205_055106_859143_anthropic.log
================================================================================
ROSETTA PROMPT - AGENT EXECUTION LOG
================================================================================
Provider: ANTHROPIC
Started: 2025-12-05T05:51:06.859193
================================================================================

------------------------------------------------------------
[2025-12-05T05:51:06.859383] [0.000s] SYSTEM
------------------------------------------------------------
Starting optimization for ANTHROPIC
Model: anthropic/claude-opus-4.5
Original prompt length: 28 chars

------------------------------------------------------------
[2025-12-05T05:51:06.859441] [0.000s] TASK_INPUT
------------------------------------------------------------
## TASK: Optimize for ANTHROPIC
...

------------------------------------------------------------
[2025-12-05T05:51:15.651451] [8.792s] TOOL_RESULT
------------------------------------------------------------
Tool: read_provider_doc
Result (12082 chars):

=== ANTHROPIC: prompting.md ===

Prompt engineering
Be clear, direct, and detailed
...
```

Log files contain:
- Full system prompt sent to LLM
- Complete LLM responses (not truncated)
- Full tool results (12K+ chars of documentation)
- Timing data for each step
- Final parsed output
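A minimal logger along these lines could be written as follows. This is a sketch with hypothetical method names; the real `FileLogger` in `utils/logger.py` may differ:

```python
import time
from datetime import datetime
from pathlib import Path

class FileLogger:
    """Append timestamped sections to a per-run log file."""

    def __init__(self, provider: str, log_dir: str = "logs"):
        Path(log_dir).mkdir(exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
        self.path = Path(log_dir) / f"{stamp}_{provider}.log"
        self.start = time.monotonic()

    def log(self, kind: str, content: str) -> None:
        # Write a section header with wall-clock time and elapsed seconds,
        # then the full (untruncated) content.
        elapsed = time.monotonic() - self.start
        with self.path.open("a") as f:
            f.write("-" * 60 + "\n")
            f.write(f"[{datetime.now().isoformat()}] [{elapsed:.3f}s] {kind.upper()}\n")
            f.write("-" * 60 + "\n")
            f.write(content + "\n\n")
```

Appending on every call (rather than buffering) means the log survives even if the agent crashes mid-run.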
```shell
curl -X POST http://localhost:8000/optimize \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "You are a helpful assistant.",
    "providers": ["openai", "anthropic", "google"]
  }'
```

Response:
```json
{
  "original": "You are a helpful assistant.",
  "optimized": {
    "openai": {
      "provider": "openai",
      "prompt": "# Identity\nYou are an AI assistant designed to help...",
      "changes": [
        {"category": "structure", "description": "Added markdown sections..."},
        {"category": "formatting", "description": "Included examples..."}
      ],
      "success": true,
      "agent_logs": [...]
    },
    "anthropic": {
      "provider": "anthropic",
      "prompt": "You are a helpful assistant designed to provide clear...",
      "changes": [...],
      "success": true,
      "agent_logs": [...]
    }
  }
}
```

```shell
curl http://localhost:8000/providers
# ["anthropic", "google", "kimi", "openai"]
```

| Component | Technology | Why |
|---|---|---|
| LLM | OpenRouter (free tier) | Zero cost to experiment |
| Agent Pattern | ReAct (Reason + Act) | Industry standard for tool-using agents |
| Messages | LangChain `SystemMessage`, `HumanMessage`, `AIMessage` | Clean conversation management |
| LLM Client | `langchain_openai.ChatOpenAI` | OpenRouter compatible |
| API | FastAPI | Async support for parallel agents |
| Frontend | React + Three.js | 3D visualization of results |
| State | Zustand | Minimal React state management |
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

# LLM client compatible with OpenRouter
llm = ChatOpenAI(
    model="amazon/nova-2-lite-v1:free",
    api_key=OPENROUTER_API_KEY,
    base_url="https://openrouter.ai/api/v1",
)

# Async invocation
response = await llm.ainvoke([
    SystemMessage(content="You are an optimizer agent..."),
    HumanMessage(content="Optimize this prompt for OpenAI..."),
])
```

```
TheRosettaPrompt/
├── rosetta_prompt/
│   ├── main.py                  # FastAPI endpoints
│   ├── config.py                # LLM + provider config
│   │
│   ├── agents/
│   │   ├── orchestrator.py      # Parallel agent coordination
│   │   └── optimizer.py         # ReAct agent with tool loop
│   │
│   ├── utils/
│   │   └── logger.py            # FileLogger for local logs
│   │
│   ├── models/
│   │   └── schemas.py           # Pydantic models + AgentLogEntry
│   │
│   ├── logs/                    # Agent execution logs (auto-created)
│   │   └── *.log
│   │
│   └── docs/                    # Knowledge base (auto-detected)
│       ├── openai/prompting.md
│       ├── anthropic/prompting.md
│       ├── google/prompting.md
│       └── kimi/prompting.md
│
├── updater/                     # Claude Agent SDK doc updater
│   ├── agent.py                 # Main updater agent
│   ├── tools.py                 # Custom tools (Firecrawl, file ops)
│   ├── config.py                # Provider URLs configuration
│   └── scheduler.py             # Weekly update scheduler
│
└── ui/
    └── src/
        ├── components/
        │   ├── InputScreen.js       # Prompt input + provider selection
        │   ├── ProcessingScreen.js  # Live agent logs
        │   └── ResultsScreen.js     # 3D card carousel
        └── store.js                 # API calls + Zustand state
```
Providers are auto-detected from docs/. To add one:
```shell
# 1. Create provider directory
mkdir rosetta_prompt/docs/mistral

# 2. Add documentation (scrape from official docs)
cat > rosetta_prompt/docs/mistral/prompting.md << 'EOF'
# Mistral Prompting Guidelines

## Best Practices
- Use clear, structured instructions
- Mistral models respond well to...
EOF

# 3. Restart server - new provider appears automatically
```

The agent will now:

1. Call `list_provider_docs("mistral")` → `["prompting.md"]`
2. Call `read_provider_doc("mistral", "prompting.md")` → full guidelines
3. Apply Mistral-specific patterns to optimize prompts
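Auto-detection can be as simple as listing the subdirectories of `docs/`. This is a hypothetical sketch; the real detection code may differ:

```python
from pathlib import Path

def discover_providers(docs_base: str = "rosetta_prompt/docs") -> list[str]:
    """Each subdirectory of docs/ containing at least one .md file is a provider."""
    base = Path(docs_base)
    if not base.is_dir():
        return []
    return sorted(
        d.name for d in base.iterdir()
        if d.is_dir() and any(f.suffix == ".md" for f in d.iterdir())
    )
```

Because detection happens on startup, no code change or registration step is needed for a new provider.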
The `updater/` directory contains an autonomous agent that keeps the prompting guides current: it scrapes provider documentation with Firecrawl and synthesizes updated content with Claude Opus.
```
                  Scheduler (Weekly)
                         │
                         ▼
┌────────────────────────────────────────────────────────
│ Claude Opus (Anthropic SDK)
│ Native Tool Calling + ReAct Loop
│
│ Tools:
│ • list_providers                → Get configured providers
│ • batch_scrape_urls (Firecrawl) → Fetch all docs
│ • read_current_guide            → Compare with existing
│ • update_guide                  → Write synthesized content
│ • write_update_log              → Record update status
└───────────────────────────┬────────────────────────────
                            ▼
                 rosetta_prompt/docs/*.md
```
```shell
cd updater
pip install -r requirements.txt

# Manual update (all providers)
python agent.py

# Update specific providers
python agent.py anthropic openai

# Update multiple providers
python agent.py google kimi

# Weekly scheduler
python scheduler.py
```

Add URLs for new providers in `updater/config.py`:
```python
PROVIDER_CONFIGS = {
    "mistral": {
        "name": "Mistral",
        "urls": ["https://docs.mistral.ai/capabilities/completion/"],
        "doc_file": "prompting.md"
    }
}

CLAUDE_MODEL = "claude-opus-4-5-20251101"  # Model for synthesis
MAX_TURNS = 5                              # Max agent iterations
```

Requires `ANTHROPIC_API_KEY` and `FIRECRAWL_API_KEY` in `.env`.
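The weekly cadence could be implemented with a plain sleep loop. This is a hypothetical sketch, not necessarily how `scheduler.py` is written:

```python
import time
from datetime import datetime, timedelta

WEEK_SECONDS = 7 * 24 * 3600

def seconds_until_next_run(last_run: datetime, now: datetime) -> float:
    """Time to wait so updates happen at most once per week."""
    next_run = last_run + timedelta(seconds=WEEK_SECONDS)
    return max(0.0, (next_run - now).total_seconds())

def run_forever(update_all) -> None:
    # Run one update pass, then sleep until a full week has elapsed.
    while True:
        started = datetime.now()
        update_all()
        time.sleep(seconds_until_next_run(started, datetime.now()))
```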
```shell
# Backend
cd rosetta_prompt
pip install -r requirements.txt
echo "OPENROUTER_API_KEY=your_key" > .env
uvicorn main:app --reload --port 8000

# Frontend
cd ui
npm install
npm start
```

License: MIT