Documentation
PromptEngineer API reference and integration guide
Quick Start
Set up
Set your ADMIN_KEY environment variable and start the backend, then create an API key from the Dashboard admin panel.
```shell
# Windows (PowerShell)
$env:ADMIN_KEY = "your-secret"

# Mac/Linux
export ADMIN_KEY=your-secret

python -m uvicorn backend.main:app --reload --port 8000
```
Set your API key in the UI
Click the key icon in the top-right corner and paste your API key. It is saved to localStorage automatically.
Optimize your first prompt
Go to the Optimizer, paste any prompt, and click Optimize. Results are adapted for ChatGPT, Midjourney, and Stable Diffusion.
Score a prompt
Use the Score page to analyze any prompt across five quality dimensions: clarity, specificity, structure, conciseness, and context richness. If ANTHROPIC_API_KEY is set, scoring uses Claude for deep analysis; otherwise a rule-based fallback runs automatically.
Run a batch job
Paste multiple prompts (one per line) on the Batch page. Job state can persist in localStorage for up to 2 hours so you can navigate away and return.
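The one-prompt-per-line convention is easy to mirror in scripts that prepare batch input. A minimal sketch (an illustrative helper, not part of the app):

```python
def parse_batch_input(text: str) -> list[str]:
    """Split pasted text into one prompt per line, skipping blank lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]
```

For example, `parse_batch_input("rewrite the intro\n\npolish the outro\n")` yields two prompts, matching what the Batch page would run.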
Save and manage prompts
Save results to the Library from the Optimizer or Batch flows where available. Browse, filter, and edit saved prompts on the Library page.
Claude Integration
How it works
PromptEngineer uses Claude (claude-sonnet-4-20250514) when ANTHROPIC_API_KEY is set: the optimizer can rewrite prompts per target, and the Score endpoint returns richer analysis.
| Feature | Without Claude | With Claude |
|---|---|---|
| Optimization | Rule-based pipeline | Claude rewrites for each target model |
| Scoring | Heuristic analysis | Deep multi-dimension analysis |
| Intent detection | Keyword classifier | Stronger context when optimizing via Claude |
| Quality report | Fixed formula scores | Nuanced issues & suggestions (mapped to app schema) |
Setup
```shell
# Add to your environment before starting the backend
ANTHROPIC_API_KEY=sk-ant-...
```
Once set, the Optimizer shows a ✦ Claude-powered badge when enabled, and the Score page returns richer analysis with actionable issues.
Fallback behavior
If ANTHROPIC_API_KEY is not set, or if an API call fails, features fall back to the rule-based pipeline automatically. No extra configuration is needed; degradation is silent.
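The fallback behavior can be sketched as a simple guard. Everything below is illustrative rather than the app's actual code: the heuristic is a crude stand-in for the real rule-based pipeline, and `claude_score` is a hypothetical placeholder for the Claude-backed path.

```python
import os

def rule_based_score(prompt: str) -> dict:
    """Crude stand-in heuristic; the real rule-based pipeline is richer."""
    score = min(100, 40 + 2 * len(prompt.split()))
    return {"overall_score": score, "powered_by": "rules"}

def claude_score(prompt: str) -> dict:
    """Hypothetical Claude-backed path; wire up the Anthropic SDK here."""
    raise NotImplementedError

def score(prompt: str) -> dict:
    # Try Claude only when the key is present; degrade silently on any failure.
    if os.environ.get("ANTHROPIC_API_KEY"):
        try:
            return claude_score(prompt)
        except Exception:
            pass
    return rule_based_score(prompt)
```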
Cross-Page Features
Shared result links
After running a prompt, use Copy Link in the results toolbar to share a URL: /?run={run_id}. Opening it loads the saved run and full adapter outputs.
Batch → Optimizer
From Batch results, open a row in the Optimizer when available. URLs use /?prompt={encoded_prompt} — the textarea pre-fills; optimization does not auto-run.
Optimizer → Score
In the results toolbar, Score opens /score?prompt={encoded_prompt} with the original input. The Score page auto-runs when opened via this link.
Batch → Library
Where the UI offers it, save a batch result row to the Library (e.g. bookmark action) to keep that prompt for later browsing and editing.
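The cross-page links above are plain query-string URLs, so they can be generated outside the UI as well. A sketch using only the standard library, with the paths copied from this section:

```python
from urllib.parse import quote

def optimizer_link(prompt: str, base: str = "") -> str:
    """/?prompt={encoded_prompt}: pre-fills the Optimizer textarea."""
    return f"{base}/?prompt={quote(prompt)}"

def score_link(prompt: str, base: str = "") -> str:
    """/score?prompt={encoded_prompt}: the Score page auto-runs."""
    return f"{base}/score?prompt={quote(prompt)}"

def run_link(run_id: str, base: str = "") -> str:
    """/?run={run_id}: loads a saved run with full adapter outputs."""
    return f"{base}/?run={run_id}"
```

Pass your deployment's origin as `base` to get absolute URLs.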
Endpoint reference
66 routes are served at https://api.promptengineer.libanreach.com. The Core and Score endpoint groups are documented below.
Core routes:

| Route | Auth | Request body | Response |
|---|---|---|---|
| / | None | — | MessageResponse |
| /parse | None | { prompt } | ParsedPromptResponse |
| /enhance | None | { prompt } | EnhancedPromptResponse |
| /optimize | None | { prompt } | OptimizeResponse |
| /adapt | None | { prompt } | OptimizeResponse |
| /process | API Key | { prompt, model_output? } | ProcessResponse |

ScoreResponse fields: overall_score, grade (A–F), clarity, specificity, structure, conciseness, context_richness, estimated_token_count, issues[], strengths[], summary, powered_by (claude | rules).
Score and Playground routes:

| Route | Auth | Request body | Response |
|---|---|---|---|
| /score | API Key | { prompt, target? } | ScoreResponse |
| /playground/run | API Key | { prompt, system_prompt?, max_tokens?, temperature? } | SSE stream (text/event-stream) |

/playground/run streams the response as Server-Sent Events with three event types. It requires ANTHROPIC_API_KEY and returns 503 if not configured.
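Server-Sent Events are a line-oriented text format, so the stream can be consumed with a small parser. A minimal sketch, independent of any particular HTTP client and not the app's own code:

```python
def iter_sse_events(lines):
    """Yield (event, data) pairs from an iterable of SSE text lines.

    A blank line terminates each event; multiple data: lines are joined
    with newlines, per the SSE format.
    """
    event, data = "message", []
    for raw in lines:
        line = raw.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif not line:
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
```

Feed it the decoded lines of the /playground/run response body to recover each event as it arrives.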
SDK Reference
Use sdk/promptengineer.py (repo root) or backend/sdk/client.py. The examples below use the facade client with an API key.
```python
from sdk.promptengineer import PromptEngineerClient

client = PromptEngineerClient(api_key="pe_live_your_key_here")

# Optimize a prompt (full /process pipeline)
result = client.optimize(
    "write me a story",
    goals=["clarity", "specificity"],  # accepted for API compat; optional
)
print(result["chatgpt_prompt"])

# Score a prompt (POST /score)
score = client.score("write me a story", target="general")
print(f"Grade: {score['grade']} ({score['overall_score']}/100)")

# Batch optimize (async job)
job_id = client.batch_optimize([
    "prompt one",
    "prompt two",
    "prompt three",
])
status = client.get_batch_status(job_id)
print(status.get("status"))  # queued | running | completed | failed
```

Methods (facade / base client)

- optimize(prompt, goals=None, model="any") → POST /process — full ProcessResponse
- score(prompt, target="general") → POST /score — ScoreResponse
- batch_optimize(prompts, goals=None) → POST /jobs/batch — returns a job_id string
- get_batch_status(job_id) → GET /jobs/{id}
- save_prompt(title, content, category=None) → POST /library/prompts
- list_prompts(category=None, search=None) → GET /library/prompts — list of items
- compare(prompt_a, prompt_b, context=None) → POST /compare — CompareResponse
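One pattern this guide does not show the facade shipping is waiting for a batch job to finish. A hedged polling sketch built on get_batch_status, with arbitrary interval and timeout defaults:

```python
import time

def wait_for_job(client, job_id: str, interval: float = 2.0,
                 timeout: float = 300.0) -> dict:
    """Poll client.get_batch_status until the job completes or fails."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_batch_status(job_id)
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} not finished after {timeout:.0f}s")
```

Usage: `wait_for_job(client, client.batch_optimize(prompts))` blocks until the job leaves the queued/running states.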
Base PromptEngineerClient in backend/sdk/client.py also exposes parse, enhance, batch_process, webhooks, analytics helpers, and admin methods when an admin key is configured.
HTTP errors
Common codes: 401 missing or invalid API key, 429 rate limit exceeded, 422 validation error, 503 admin features not configured.
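Client code can turn those codes into a typed exception. A small sketch; since the error-body shape is not specified here, only the status code is used:

```python
class PromptEngineerError(Exception):
    """Raised for the documented API error codes."""

    def __init__(self, status: int, reason: str):
        self.status = status
        super().__init__(f"{status}: {reason}")

# Reasons taken from the list of common codes above.
_REASONS = {
    401: "missing or invalid API key",
    429: "rate limit exceeded",
    422: "validation error",
    503: "admin features not configured",
}

def raise_for_status(status: int) -> None:
    """Raise PromptEngineerError for a known error code; otherwise return."""
    if status in _REASONS:
        raise PromptEngineerError(status, _REASONS[status])
```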