Prompt Best Practices
Get better results from AI by following these proven strategies when using HeroPrompt templates.
1. Be Specific
AI models work best with clear, detailed instructions.
❌ Avoid vague requests:
"Make it better"
"Add some features"
"Improve the design"✅ Be specific:
"Improve performance by reducing database queries — use caching"
"Add real-time validation with error messages below each form field"
"Update design to use Material Design 3 with rounded corners and elevated cards"2. Provide Context
Give the AI background information so it can make informed decisions.
Essential context:
- Target audience — Beginners, experts, general users?
- Use case — Production code, prototype, learning exercise?
- Constraints — Time, budget, technology limitations?
- Environment — Languages, frameworks, platforms in use?
Example:
Context: Building a React dashboard for internal ops team (20 users).
Tech stack: React 18, TypeScript, TailwindCSS, React Query.
Priority: Fast development over perfect code. Must be mobile-responsive.
3. Use Examples
Show, don't just tell. Examples clarify expectations.
Instead of:
"Return the data in a good format"Provide an example:
Return data in this format:
{
"id": "user-123",
"name": "Alice Smith",
"created_at": "2024-01-15T10:30:00Z",
"tags": ["admin", "verified"]
}
4. Specify Output Format
Tell the AI exactly how you want the response structured.
Format options:
- Code only — No explanations, just implementation
- Code + comments — Inline documentation
- Step-by-step — Numbered instructions
- Markdown — Headers, lists, code blocks
- JSON — Structured data
- Table — Comparison or feature matrix
Example:
Provide the response as:
1. Executive summary (3 sentences)
2. Implementation steps (numbered list)
3. Code example (TypeScript with comments)
4. Trade-offs (pros/cons table)
5. Set Constraints
Define what not to do.
Common constraints:
- No external libraries — Use standard library only
- Max lines — Keep functions under 50 lines
- No premature optimization — Simple and readable first
- Specific style — Follow PEP 8, ESLint Airbnb, etc.
- Performance — Must handle 10k items without lag
Example:
Constraints:
- No React class components (hooks only)
- No external state management (use useState/useContext)
- Must work in browsers back to Chrome 90
- Total bundle size < 100KB
6. Ask for Reasoning
Request explanations to understand the AI's decisions.
Add to your prompt:
Explain your reasoning for:
- Why you chose this approach
- What alternatives you considered
- Any trade-offs made
This helps you evaluate the solution and learn best practices.
7. Iterate Incrementally
Start simple, then add complexity.
Good workflow:
- First prompt: Basic implementation
- Second prompt: "Add error handling"
- Third prompt: "Optimize for performance"
- Fourth prompt: "Add comprehensive tests"
Don't try to get everything perfect in one prompt.
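The workflow above can be sketched as a conversation loop. This is only an illustration: `ask` is a hypothetical stand-in for whatever model call you actually use; the point is that each follow-up prompt builds on the accumulated history rather than starting over.

```python
def ask(prompt: str, history: list[dict]) -> str:
    # Placeholder: call your model here with `history` plus the new prompt.
    return f"response to: {prompt}"

history: list[dict] = []
steps = [
    "Implement a basic CSV parser in Python",  # first prompt: basic implementation
    "Add error handling for malformed rows",   # second prompt: error handling
    "Optimize for files over 1 GB",            # third prompt: performance
    "Add comprehensive unit tests",            # fourth prompt: tests
]
for step in steps:
    answer = ask(step, history)
    # Keep both sides of the exchange so the next prompt refines this answer.
    history += [{"role": "user", "content": step},
                {"role": "assistant", "content": answer}]

print(len(history))  # 8 entries: four request/response pairs
```

Each step is small enough to review before moving on, which is the whole benefit of iterating.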
8. Use Role-Play
Frame the AI as an expert in the domain.
Examples:
"You are a senior DevOps engineer with 10 years of Kubernetes experience..."
"You are a database performance specialist optimizing PostgreSQL queries..."
"You are a UX designer following Material Design 3 guidelines..."This primes the AI to respond with appropriate expertise level.
9. Request Multiple Options
Ask for alternatives to compare.
Provide 3 approaches:
1. **Simple** — Easiest to implement, fewer features
2. **Balanced** — Good trade-off between complexity and features
3. **Advanced** — Full-featured, production-grade
For each, explain pros/cons and implementation complexity.
10. Check for Errors
Don't trust AI output blindly.
Always:
- Run the code — Test before using in production
- Review logic — Verify the approach makes sense
- Check edge cases — Does it handle errors, empty inputs, etc.?
- Validate — Use linters, type checkers, security scanners
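The edge-case check is worth a concrete look. Suppose the AI produced the slug helper below (a hypothetical example); a few assertions covering empty input, punctuation-only input, and stray whitespace catch most of the surprises before production does:

```python
import re

def slugify(text: str) -> str:
    # (AI-generated) Lowercase, collapse runs of non-alphanumerics into "-".
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Edge cases: punctuation, empty input, symbols only, leading/trailing junk.
assert slugify("Hello, World!") == "hello-world"
assert slugify("") == ""
assert slugify("!!!") == ""
assert slugify("  spaced  out  ") == "spaced-out"
```

Five minutes of assertions like these is far cheaper than debugging the same cases in production.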
11. Use the Right AI Model
Different models excel at different tasks.
| Task | Best Model |
|---|---|
| Code generation | Claude, GPT-4 |
| Creative writing | Claude, GPT-4 |
| Data analysis | GPT-4, Claude |
| Image generation | DALL-E 3, Midjourney |
| Fast prototyping | GPT-3.5 Turbo |
| Long context | Claude 2 (100k tokens) |
Check the "AI Models" tag on each HeroPrompt for recommendations.
12. Leverage the Optimizer
Use the Prompt Optimizer to improve any prompt:
- Enhance & Clarify — Make vague prompts specific
- Structured Format — Organize complex requirements
- Chain of Thought — Add step-by-step reasoning
- Few-Shot Examples — Add input/output examples
- Model-Specific — Tailor for Claude, GPT, or Gemini
Common Mistakes to Avoid
❌ Too vague — "Make it work"
✅ Specific — "Fix the authentication bug where users can't log in with special characters in passwords"
❌ No context — "Write a function"
✅ Context — "Write a Python function for a FastAPI backend that validates JWT tokens"
❌ Overloading — One massive prompt with 10 requirements
✅ Incremental — Break into smaller, focused prompts
❌ Assuming knowledge — Using jargon without explanation
✅ Clear — Define acronyms and domain-specific terms
Advanced: Prompt Chaining
For complex tasks, chain multiple prompts together:
Prompt 1: "Design the database schema for a blog platform"
↓ (use output)
Prompt 2: "Write SQLAlchemy models for this schema"
↓ (use output)
Prompt 3: "Create Alembic migrations for these models"
↓ (use output)
Prompt 4: "Generate API endpoints using FastAPI"For automated prompt chaining, upgrade to PRO Kit for Skills.
Next Steps
- Try the Prompt Optimizer
- Explore Skills for multi-step workflows
- Install the CLI to sync prompts locally