Teach AI to amaze you with better prompts
Design clear, consistent, and OpenAI-friendly prompts. Build → audit → share in seconds.
Interactive
OpenAI-ready
Copy & share
Interactive Prompt Builder
Role / Persona
Task
Context
Constraints
Output format
• Bullet list
1. Numbered steps
Markdown with code
JSON
Markdown table
Style / Tone
Concise
Friendly
Expert
Enthusiastic
Formal
Teacher
Language
Extras
Ask clarifying questions before answering
Provide at least one example
Cite sources when possible
Use headings and subheadings
Verify the final answer against the constraints
Shortcut: press Ctrl + Enter to compose.
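Under the hood, composing is just joining the filled fields in a fixed order. A minimal Python sketch (field names and layout are illustrative, not the builder's actual code):

def compose_prompt(**fields: str) -> str:
    # Join whichever builder fields are filled, in a fixed order.
    order = ["role", "task", "context", "constraints", "output", "style", "language"]
    lines = [f"{name.capitalize()}: {fields[name]}" for name in order if fields.get(name)]
    return "\n".join(lines)

print(compose_prompt(
    role="senior technical writer",
    task="summarize the attached RFC",
    output="bullet list, max 7 items",
    style="concise",
))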
Preview
Live character, word, and token counts, plus a heuristic quality score.
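The token estimate can be reproduced locally with the tiktoken package. A rough sketch, assuming an OpenAI-style BPE encoding (exact counts vary by model):

import tiktoken  # pip install tiktoken

def estimate_tokens(text: str, encoding: str = "cl100k_base") -> int:
    # Rough count using a generic OpenAI encoding; model-specific
    # encodings give slightly different numbers.
    return len(tiktoken.get_encoding(encoding).encode(text))

print(estimate_tokens("You are a concise technical writer."))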
Quick Templates
Click to load into the builder.
JSON Output Helper
Ask for JSON that conforms to a schema. Works well with the OpenAI Responses API.
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "steps": { "type": "array", "items": { "type": "string" } },
    "risk_notes": { "type": "string" }
  },
  "required": ["title", "steps"]
}
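The schema above can be passed to the Responses API as a structured output format. A minimal sketch, assuming the openai Python SDK (parameter layout follows recent SDK versions; for strict mode the schema would also need additionalProperties: false and every property listed in required):

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "steps": {"type": "array", "items": {"type": "string"}},
        "risk_notes": {"type": "string"},
    },
    "required": ["title", "steps"],
}

response = client.responses.create(
    model="gpt-4o-mini",
    input="Plan a zero-downtime database migration.",
    text={"format": {"type": "json_schema", "name": "plan", "schema": schema}},
)
print(response.output_text)  # a JSON string shaped like the schema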
Quality Checklist
- Clear role and task
- Concrete constraints and metrics
- Explicit output format
- Audience and tone specified
- Examples or test cases
- Safety and scope boundaries
What hurts prompts?
Ambiguity, missing output format, no constraints, hidden assumptions, and overlong context.
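A score like the preview's can be approximated by checking a prompt against this checklist with keyword heuristics. A deliberately naive sketch (the marker strings are illustrative, not the site's actual scoring):

CHECKS = {
    "role": ("you are", "role:"),
    "output format": ("output:", "json", "table", "bullet"),
    "constraints": ("constraints:", "must", "avoid", "max"),
    "audience": ("audience", "beginner", "expert"),
}

def lint_prompt(prompt: str) -> list[str]:
    # Return checklist items with no matching keyword in the prompt.
    text = prompt.lower()
    return [item for item, markers in CHECKS.items()
            if not any(m in text for m in markers)]

print(lint_prompt("Analyze my data."))  # flags all four items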
Prompt Patterns
Role + Task + Format
Role: [specialist persona]
Task: [clear directive]
Constraints: [limits and must/avoid]
Output: [exact format]
Style: [tone]
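Filled in, the pattern might read like this (values are illustrative):

Role: senior database engineer
Task: review this migration plan for risks
Constraints: max 200 words; must flag irreversible steps; avoid vendor-specific advice
Output: Markdown table with columns Risk, Severity, Mitigation
Style: concise, expert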
Few-shot with Rubric
You will see examples and a rubric. Follow the rubric strictly.
Examples:
- Input: ... → Output: ...
- Input: ... → Output: ...
Rubric: completeness(0-3), accuracy(0-3), clarity(0-3).
Return a JSON object with scores and final_answer.
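The JSON this template asks for might look like the reply below; parsing it defensively catches rubric drift (the exact shape is an assumption based on the template, not an enforced schema):

import json

reply = '{"scores": {"completeness": 3, "accuracy": 2, "clarity": 3}, "final_answer": "..."}'
data = json.loads(reply)

assert set(data["scores"]) == {"completeness", "accuracy", "clarity"}
assert all(0 <= v <= 3 for v in data["scores"].values())
print(data["final_answer"])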
Self-Check
After producing an answer, verify it against the constraints and fix any issues before returning the final output. Report any assumptions made.
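As a pipeline, this pattern is a draft call followed by a verification call. A minimal two-pass sketch, assuming the openai Python SDK (the prompt wording is illustrative):

from openai import OpenAI  # pip install openai

client = OpenAI()

def answer_with_self_check(task: str, constraints: str) -> str:
    # Pass 1: draft an answer under the stated constraints.
    draft = client.responses.create(
        model="gpt-4o-mini",
        input=f"{task}\n\nConstraints: {constraints}",
    ).output_text
    # Pass 2: verify the draft against the constraints and repair it.
    return client.responses.create(
        model="gpt-4o-mini",
        input=(
            f"Constraints: {constraints}\n\nDraft answer:\n{draft}\n\n"
            "Verify the draft against the constraints, fix any violations, "
            "and list assumptions made. Return only the corrected answer."
        ),
    ).output_text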
Learn Prompt Engineering
Prompt Anti-Patterns
- “Do everything” requests: split them into separate tasks instead
- Missing target audience
- Unbounded length or timeframe
- Vague verbs: “analyze” without criteria
- Implicit format: be explicit
Micro-tips
- Start with a persona: “You are...”
- Replace “good” with measurable criteria
- Prefer JSON or tables for programmatic use
- Add a rubric for grading or selection
- Tell the model when to ask clarifying questions