Documentation

PromptEngineer API reference and integration guide

Quick Start

1. Set up

Start the backend and set your ADMIN_KEY environment variable. Then create an API key from the Dashboard admin panel.

# Windows (PowerShell)
$env:ADMIN_KEY = "your-secret"

# Mac/Linux
export ADMIN_KEY=your-secret

python -m uvicorn backend.main:app --reload --port 8000

2. Set your API key in the UI

Click the key icon in the top-right corner and paste your API key. It is saved to localStorage automatically.

3. Optimize your first prompt

Go to the Optimizer, paste any prompt, and click Optimize. Results are adapted for ChatGPT, Midjourney, and Stable Diffusion.

4. Score a prompt

Use the Score page to analyze any prompt across five quality dimensions: clarity, specificity, structure, conciseness, and context richness. If ANTHROPIC_API_KEY is set, scoring uses Claude for deep analysis; otherwise a rule-based fallback runs automatically.
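The exact scoring formula is not documented here, but as an illustration, a minimal sketch of how five dimension scores might be combined into an overall score and letter grade. The equal weighting and the grade cut-offs below are assumptions, not the app's actual rules:

```python
def overall_score(clarity, specificity, structure, conciseness, context_richness):
    """Average the five dimension scores (each 0-100) into an overall score.

    Equal weighting is assumed for illustration; the real pipeline may
    weight dimensions differently.
    """
    dims = [clarity, specificity, structure, conciseness, context_richness]
    return round(sum(dims) / len(dims))

def grade(score):
    """Map a 0-100 overall score to an A-F grade (cut-offs are assumed)."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

The real ScoreResponse also returns issues, strengths, and a summary alongside these numbers.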

5. Run a batch job

Paste multiple prompts (one per line) on the Batch page. Job state can persist in localStorage for up to 2 hours so you can navigate away and return.

6. Save and manage prompts

Save results to the Library from the Optimizer or Batch flows where available. Browse, filter, and edit saved prompts on the Library page.

Claude Integration

How it works

PromptEngineer uses Claude (claude-sonnet-4-20250514) when ANTHROPIC_API_KEY is set: the optimizer can rewrite prompts per target, and the Score endpoint returns richer analysis.

Feature          | Without Claude       | With Claude
Optimization     | Rule-based pipeline  | Claude rewrites for each target model
Scoring          | Heuristic analysis   | Deep multi-dimension analysis
Intent detection | Keyword classifier   | Stronger context when optimizing via Claude
Quality report   | Fixed formula scores | Nuanced issues & suggestions (mapped to app schema)

Setup

# Add to your environment before starting the backend
ANTHROPIC_API_KEY=sk-ant-...

Once the key is set, the Optimizer shows a ✦ Claude-powered badge, and the Score page returns richer analysis with actionable issues.

Fallback behavior

If ANTHROPIC_API_KEY is not set or an API call fails, features fall back to the rule-based pipeline automatically. No extra configuration — degradation is silent.
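The selection logic can be pictured as a simple dispatch on whether the key is present. The function below is illustrative, not the backend's actual internals; its return values mirror the documented powered_by field (claude | rules):

```python
import os

def pick_scoring_backend(env=os.environ):
    """Return which scoring pipeline would run: 'claude' when
    ANTHROPIC_API_KEY is set, otherwise the rule-based fallback.

    This mirrors the documented behavior at startup; the real backend
    additionally falls back when a Claude API call fails at runtime.
    """
    return "claude" if env.get("ANTHROPIC_API_KEY") else "rules"
```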

Cross-Page Features

Shared result links

After running a prompt, use Copy Link in the results toolbar to share a URL: /?run={run_id}. Opening it loads the saved run and full adapter outputs.

Batch → Optimizer

From Batch results, open a row in the Optimizer when available. URLs use /?prompt={encoded_prompt} — the textarea pre-fills; optimization does not auto-run.

Optimizer → Score

In the results toolbar, Score opens /score?prompt={encoded_prompt} with the original input. The Score page auto-runs when opened via this link.

Batch → Library

Where the UI offers it, save a batch result row to the Library (e.g. bookmark action) to keep that prompt for later browsing and editing.
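The deep links above can be built with standard URL encoding. The paths come from this section; the helper functions themselves are just a sketch:

```python
from urllib.parse import quote

def optimizer_link(prompt: str) -> str:
    """Pre-fill the Optimizer textarea (does not auto-run)."""
    return f"/?prompt={quote(prompt)}"

def score_link(prompt: str) -> str:
    """Open the Score page; scoring auto-runs for this link."""
    return f"/score?prompt={quote(prompt)}"

def run_link(run_id: str) -> str:
    """Load a saved run with full adapter outputs."""
    return f"/?run={run_id}"
```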

Endpoint reference

The API exposes 66 routes at https://api.promptengineer.libanreach.com. The core and Score endpoints are documented below.

Method | Path            | Auth    | Body                                                  | Response
GET    | /               | none    | (none)                                                | MessageResponse
POST   | /parse          | none    | { prompt }                                            | ParsedPromptResponse
POST   | /enhance        | none    | { prompt }                                            | EnhancedPromptResponse
POST   | /optimize       | none    | { prompt }                                            | OptimizeResponse
POST   | /adapt          | none    | { prompt }                                            | OptimizeResponse
POST   | /process        | API key | { prompt, model_output? }                             | ProcessResponse
POST   | /score          | API key | { prompt, target? }                                   | ScoreResponse
POST   | /playground/run | API key | { prompt, system_prompt?, max_tokens?, temperature? } | SSE stream (text/event-stream)

ScoreResponse fields: overall_score, grade (A–F), clarity, specificity, structure, conciseness, context_richness, estimated_token_count, issues[], strengths[], summary, powered_by (claude | rules).

POST /playground/run streams its response as Server-Sent Events. Three event types:

  • { type: 'text', text: string } — streamed token chunk
  • { type: 'done' } — stream complete
  • { type: 'error', message: string } — failure

Requires ANTHROPIC_API_KEY. Returns 503 if not configured.
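A client consuming this stream has to split SSE frames and decode the JSON payloads. A minimal parser for the documented event shapes might look like the sketch below; it assumes each event arrives on a standard `data:` line, which is the usual SSE framing but is not spelled out above:

```python
import json

def parse_sse_line(line: str):
    """Decode one SSE 'data:' line into an event dict.

    Returns None for blank keep-alive lines and ':' comment lines.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())

def collect_text(lines):
    """Accumulate streamed 'text' chunks until a 'done' event.

    Raises RuntimeError if an 'error' event arrives mid-stream.
    """
    chunks = []
    for raw in lines:
        event = parse_sse_line(raw)
        if event is None:
            continue
        if event["type"] == "text":
            chunks.append(event["text"])
        elif event["type"] == "error":
            raise RuntimeError(event["message"])
        elif event["type"] == "done":
            break
    return "".join(chunks)
```

In a real client, `lines` would come from iterating over the HTTP response body of the POST to /playground/run.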

SDK Reference

Use sdk/promptengineer.py (repo root) or backend/sdk/client.py. The example below uses the facade client with an API key.

from sdk.promptengineer import PromptEngineerClient

client = PromptEngineerClient(api_key="pe_live_your_key_here")

# Optimize a prompt (full /process pipeline)
result = client.optimize(
    "write me a story",
    goals=["clarity", "specificity"],  # accepted for API compat; optional
)
print(result["chatgpt_prompt"])

# Score a prompt (POST /score)
score = client.score("write me a story", target="general")
print(f"Grade: {score['grade']} ({score['overall_score']}/100)")

# Batch optimize (async job)
job_id = client.batch_optimize([
    "prompt one",
    "prompt two",
    "prompt three",
])
status = client.get_batch_status(job_id)
print(status.get("status"))  # queued | running | completed | failed

Methods (facade / base client)

  • optimize(prompt, goals=None, model="any") POST /process — full ProcessResponse
  • score(prompt, target="general") POST /score — ScoreResponse
  • batch_optimize(prompts, goals=None) POST /jobs/batch — returns job_id string
  • get_batch_status(job_id) GET /jobs/{id}
  • save_prompt(title, content, category=None) POST /library/prompts
  • list_prompts(category=None, search=None) GET /library/prompts — list of items
  • compare(prompt_a, prompt_b, context=None) POST /compare — CompareResponse
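Because batch jobs are asynchronous, callers typically poll get_batch_status until a terminal state. Here is a small polling helper written against any status-fetching callable so it can be exercised without a live server; the states follow the queued | running | completed | failed values shown above, while the interval and timeout defaults are arbitrary:

```python
import time

TERMINAL_STATES = {"completed", "failed"}

def poll_job(get_status, job_id, interval=2.0, timeout=120.0):
    """Poll get_status(job_id) until the job reaches a terminal state.

    get_status can be client.get_batch_status from the SDK; it is
    injected as a parameter so the loop itself is easy to test.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status(job_id)
        if status.get("status") in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still {status.get('status')!r}")
        time.sleep(interval)
```

Usage with the SDK would be `poll_job(client.get_batch_status, job_id)`.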

Base PromptEngineerClient in backend/sdk/client.py also exposes parse, enhance, batch_process, webhooks, analytics helpers, and admin methods when an admin key is configured.

HTTP errors

Common codes: 401 missing/invalid API key, 429 rate limit, 422 validation, 503 admin not configured.
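Of these, 429 is the one worth handling programmatically. A sketch of retrying with exponential backoff follows; since no rate-limit headers are documented here, it relies only on the status code, and the delay schedule is an assumption:

```python
import time

def with_retry(call, max_attempts=4, base_delay=1.0):
    """Run call() and retry on a 429 rate-limit error with exponential
    backoff (1s, 2s, 4s, ... by default); re-raise anything else.

    call should raise an exception carrying a .status_code attribute,
    as most HTTP client libraries' error types do.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            last_attempt = attempt == max_attempts - 1
            if getattr(exc, "status_code", None) != 429 or last_attempt:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

For example, `with_retry(lambda: client.score("write me a story"))` would transparently absorb transient rate limits.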