Why Review Prompts?
- Catches errors before production
- Enforces consistency
- Spreads knowledge
- Reduces security risks
Simple 3-Step Review Process
1. Author writes and tests the prompt
2. Reviewer checks clarity, examples, edge cases, and safety
3. Reviewer approves or requests changes
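The three steps above can be sketched as a tiny state machine. This is a hypothetical illustration, not any tool's API; the class and method names (`PromptVersion`, `submit_for_review`, etc.) are assumptions invented for the example.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    CHANGES_REQUESTED = "changes_requested"
    APPROVED = "approved"

@dataclass
class PromptVersion:
    """One prompt version moving through the 3-step review."""
    name: str
    text: str
    status: Status = Status.DRAFT
    review_notes: list[str] = field(default_factory=list)

    def submit_for_review(self) -> None:
        # Step 1 complete: author has written and tested the prompt.
        assert self.status in (Status.DRAFT, Status.CHANGES_REQUESTED)
        self.status = Status.IN_REVIEW

    def request_changes(self, note: str) -> None:
        # Step 2/3: reviewer found an issue; record why (see "Common
        # Mistakes" below: undocumented rejections are a known failure).
        assert self.status is Status.IN_REVIEW
        self.review_notes.append(note)
        self.status = Status.CHANGES_REQUESTED

    def approve(self) -> None:
        # Step 3: reviewer signs off.
        assert self.status is Status.IN_REVIEW
        self.status = Status.APPROVED
```

A rejected version keeps its notes, so the reasoning survives when the author resubmits.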
Review Checklist
- Is purpose clear?
- Are examples sufficient (3+)?
- Are edge cases handled?
- Is output format specified?
- Are there safety risks?
- Is metadata complete?
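The checklist is mechanical enough to automate. A minimal sketch, assuming prompts are stored as dicts with hypothetical fields (`purpose`, `examples`, `owner`, and so on; none of these names come from a specific tool):

```python
def run_checklist(prompt: dict) -> list[str]:
    """Return a list of checklist failures; empty means the prompt passes."""
    issues = []
    if not prompt.get("purpose"):
        issues.append("purpose is not clear")
    if len(prompt.get("examples", [])) < 3:
        issues.append("fewer than 3 examples")
    if not prompt.get("edge_cases"):
        issues.append("edge cases not handled")
    if not prompt.get("output_format"):
        issues.append("output format not specified")
    if not prompt.get("safety_reviewed"):
        issues.append("safety risks not assessed")
    if not (prompt.get("owner") and prompt.get("version")):
        issues.append("metadata incomplete")
    return issues
```

Running this before human review leaves the reviewer free to judge the parts a script cannot, such as whether the examples are actually representative.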
Tools That Support Review
- Braintrust: Built-in approvals
- GitHub: Code review via pull requests
- Notion: Comment-based feedback
- Custom: Spreadsheet with approval column
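The "spreadsheet with approval column" option can be scripted with nothing but the standard library. A sketch, assuming a hypothetical sheet exported as CSV with `prompt` and `approved` columns:

```python
import csv
import io

def pending_review(sheet_csv: str) -> list[str]:
    """Return prompt names whose 'approved' column is not 'yes'."""
    reader = csv.DictReader(io.StringIO(sheet_csv))
    return [row["prompt"] for row in reader
            if row.get("approved", "").strip().lower() != "yes"]
```

This is enough to flag unapproved prompts in CI, which closes the loophole of "urgent" changes skipping review.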
Governance: Who Reviews What?
Low-risk (internal tools): Self-approval + spot checks
Medium-risk (customer-facing): 1 peer review
High-risk (legal, security): 2+ reviews, specialist sign-off
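The three tiers above can be encoded as a policy table so the rule is applied consistently. A minimal sketch; the function and field names are assumptions for illustration:

```python
# Governance tiers from the table above.
POLICY = {
    "low":    {"min_reviews": 0, "self_approve": True,  "specialist_signoff": False},
    "medium": {"min_reviews": 1, "self_approve": False, "specialist_signoff": False},
    "high":   {"min_reviews": 2, "self_approve": False, "specialist_signoff": True},
}

def can_ship(risk: str, reviews: int, specialist_ok: bool = False) -> bool:
    """Check a prompt change against the governance tier for its risk level."""
    rule = POLICY[risk]
    if reviews < rule["min_reviews"]:
        return False
    if rule["specialist_signoff"] and not specialist_ok:
        return False
    return True
```

Keeping the policy in one table makes later changes (say, tightening medium-risk to two reviews) a one-line edit.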
Common Mistakes
- Reviewing too strictly (slows iteration)
- No clear acceptance criteria
- Not documenting why prompts were rejected
- Bypassing review for "urgent" changes
- Not archiving rejected versions