System Prompt vs User Prompt: The Core Difference
A system prompt defines how the AI thinks for an entire session; a user prompt defines what it does for that specific request. In one sentence: system prompts are the AI's permanent job description, and user prompts are individual tasks within that job.
Every LLM conversation has both. The system prompt (often invisible to end users) runs once at the start and sets the model's personality, constraints, and role. The user prompt runs per-request and specifies the task or question. Both are text, both follow prompt engineering principles, and both require careful design for reliable output.
Key Takeaways
- System prompts define the model's role, constraints, and behavior for the entire session – set once, used for all requests
- User prompts define the specific task for each interaction – provided by the user, changing with every request
- System prompts account for ~70% of behavioral consistency in PromptQuorum's testing across GPT-4o, Claude 4.6 Sonnet, and Gemini 1.5 Pro; user prompts shape the specifics of each output
- Invisible system prompts in apps like ChatGPT and Claude contain hidden logic – PromptQuorum shows you all of it
- Local LLMs (Ollama, LM Studio) with hidden system prompts cause debugging problems – transparency solves this
- Bad system prompts force user prompts to work harder; good system prompts make every user prompt work better
Where Do System and User Prompts Live in the API Stack?
System prompts live in the application layer; user prompts live in the interaction layer. Every major chat API keeps the two separate: when you call GPT-4o via the OpenAI API, the system prompt travels as a `system`-role message at the head of the `messages` array, while Anthropic's API for Claude 4.6 Sonnet takes a top-level `system` parameter alongside the per-request `messages`. Gemini 1.5 Pro via Google's API and any local LLM run through Ollama or LM Studio follow the same pattern.
All models support the system + user prompt pattern:
- Model layer: The base LLM (GPT-4o, Claude 4.6 Sonnet, Gemini 1.5 Pro, LLaMA 3.1, Mistral Large) – all accept both system and user prompts
- API layer: The interface developers use – OpenAI API, Anthropic API, Google API, Ollama REST endpoint, LM Studio – all distinguish persistent system instructions from per-request user messages
- Application layer: The product built on the API (ChatGPT, Claude.ai, Gemini, PromptQuorum, your custom app) – developers decide what system prompt to use
- User interaction layer: What the end user sees – the chat input, the task specification – this becomes the user prompt
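The separation is visible in the raw request payloads. As a minimal sketch in Python (model IDs and prompt text are illustrative, and no request is actually sent), the same system/user pair maps onto OpenAI's format, where the system prompt leads the `messages` array, and Anthropic's format, where it is a top-level `system` field:

```python
def openai_payload(system_prompt: str, user_prompt: str) -> dict:
    # OpenAI Chat Completions: the system prompt is a "system"-role
    # message at the head of the messages array.
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def anthropic_payload(system_prompt: str, user_prompt: str) -> dict:
    # Anthropic Messages API: the system prompt is a separate top-level
    # field; the messages array carries only the conversation turns.
    return {
        "model": "claude-sonnet",  # illustrative model ID
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_prompt}],
    }

sys_p = "You are a customer support specialist for a SaaS product."
usr_p = "How do I reset my API key?"
print(openai_payload(sys_p, usr_p)["messages"][0]["role"])  # system
print(anthropic_payload(sys_p, usr_p)["system"] == sys_p)   # True
```

The structural difference matters when porting a prompt between providers: the content is identical, only its placement in the request changes.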
What Is a System Prompt?
A system prompt is a set of persistent instructions that define how a language model behaves for the entire conversation session. It is sent to the model once at the beginning, before any user input. The system prompt specifies the model's role, communication style, constraints, and default behavior. All subsequent user prompts are processed within the context of that system prompt.
A well-designed system prompt typically includes:
- Role definition: "You are a Python expert," "You are a technical writer," "You are a financial advisor" – establishes the model's persona and expertise
- Constraints: "Do not provide medical advice," "Do not reference content after 2024," "Refuse requests for harmful code" – sets hard limits on behavior
- Output format: "Respond in JSON," "Use Markdown," "Provide numbered steps" – defines how answers should be structured
- Communication style: "Be concise and direct," "Use analogies for beginners," "Adopt a professional tone" – shapes the voice and tone
- Scope boundaries: "Answer only questions about Python," "Ignore political questions," "Handle technical support only" – defines what the model will and will not do
- Interaction rules: "Ask clarifying questions," "Always cite sources," "Admit uncertainty explicitly" – governs how the model handles edge cases
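As a sketch, the six components above can be composed into one system prompt string. The helper below is illustrative only: the parameter names follow this list, not any official API.

```python
def build_system_prompt(role, constraints, output_format, style, scope, rules):
    # Assemble the six components into a single system prompt,
    # one labeled section per line.
    sections = [
        f"Role: {role}",
        "Constraints: " + " ".join(f"({i}) {c}" for i, c in enumerate(constraints, 1)),
        f"Output format: {output_format}",
        f"Style: {style}",
        f"Scope: {scope}",
        "Interaction rules: " + "; ".join(rules) + ".",
    ]
    return "\n".join(sections)

prompt = build_system_prompt(
    role="You are a Python expert.",
    constraints=["Do not reference content after 2024.",
                 "Refuse requests for harmful code."],
    output_format="Use Markdown with fenced code blocks.",
    style="Be concise and direct.",
    scope="Answer only questions about Python.",
    rules=["Ask clarifying questions", "Admit uncertainty explicitly"],
)
print(prompt)
```

Keeping the components separate in code makes it easy to vary one section (say, the constraints) while holding the rest of the prompt fixed during testing.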
System Prompt Example
Here is a production-grade system prompt for a customer support chatbot:
You are a customer support specialist for a SaaS product. Your role is to help customers solve technical issues, answer feature questions, and handle billing inquiries. Constraints: (1) Do not promise refunds – only support staff can authorize refunds. (2) Do not share internal documentation. (3) Do not speculate about future features. (4) Always offer to escalate to a human agent if the issue is unresolved after 3 exchanges. Style: Be empathetic, clear, and solution-focused. Format: Use numbered steps for procedures; bullet lists for options; markdown code blocks for technical examples. Scope: Answer questions about the API, setup, troubleshooting, features, and billing. Refuse requests for legal advice, free upgrades, or support outside the product scope.
What Is a User Prompt?
A user prompt is the per-request input – the specific task, question, or instruction the end user provides for that single interaction. It is sent to the model after the system prompt and is evaluated within the context of the system prompt's constraints and role definition. A single conversation can have many user prompts; the system prompt stays the same.
A user prompt typically includes:
- The specific task or question: "Summarize this article," "Write product copy," "Debug this error" – the concrete request for that interaction
- Context for that request: "For a B2B audience," "For beginners," "For documentation" – clarifies who and what this is for
- Additional instructions for this task: "In 200 words," "With examples," "In professional tone" – refines output for this specific ask
- Examples (if needed): "Here is a good example:" – teaches the model the style you want
- Constraints for this task: "Do not mention pricing," "Avoid jargon," "In French" – limits that apply to this request only
User Prompt Example
Here is a complete user prompt sent to the customer support chatbot defined above:
I've been trying to set up single sign-on (SSO) via SAML 2.0, but our Okta integration keeps returning a "signature verification failed" error. I followed the setup guide and uploaded the metadata file, but it's still not working. Can you walk me through the troubleshooting steps?
System Prompt vs User Prompt at a Glance
| Dimension | System Prompt | User Prompt |
|---|---|---|
| Scope | Entire session | Single request |
| Set by | Developer/product team | End user |
| Frequency | Once at start | Every request |
| Defines | Role, constraints, style, behavior | Task, context, format for this request |
| Visibility | Usually hidden from users | Always visible to users |
| Changes | Rarely (app update required) | Every interaction |
| Prompt engineering share | ~70% of consistent output quality | ~30% of consistent output quality |
| Override risk | Hard to override – persistent, developer-controlled | Easy to adjust – user-controlled per request |
| Best for | Role consistency, safety guardrails, output format | Task-specific detail, context, few-shot examples |
What Makes an Effective System Prompt?
A system prompt must be specific, layered, and constraint-focused to produce consistent behavior across all user interactions. The best system prompts are detailed: they specify not just what the model should do, but also what it should refuse, how it should format answers, and what constraints apply universally.
Five principles for effective system prompts:
- 1. Explicit role definition: Do not assume the model knows its job. State the role explicitly at the start: "You are a [role]." Compare: "Help with writing" (vague) vs. "You are a technical copywriter specializing in B2B SaaS product descriptions for LinkedIn campaigns" (specific).
- 2. Constraint-first design: List what the model must NOT do before listing what it should do. "Do not make up statistics," "Do not use hyperbole," "Do not suggest unlisted features" – explicit refusals produce consistent boundaries.
- 3. Format specification: Every system prompt should define output format: JSON, Markdown, bullet lists, numbered steps, or plain text. A system prompt without a format specification forces every user prompt to specify it repeatedly.
- 4. Scope boundaries: Define the universe of requests you will handle. "Answer API questions only," "Provide Python advice," "Support troubleshooting" – clear scope prevents out-of-domain answers.
- 5. Testing across models: Test the system prompt on multiple models – GPT-4o, Claude 4.6 Sonnet, Gemini 1.5 Pro. Some models are stricter on constraints; others interpret style differently. A robust system prompt works consistently across all three.
The PromptQuorum System Prompt Toggle
PromptQuorum includes a "Show System Prompts" toggle. When enabled, you see the actual system prompt running on each model – GPT-4o, Claude 4.6 Sonnet, Gemini, Ollama, LM Studio, all of them. This is especially valuable when dispatching one prompt to multiple local backends simultaneously.
Practical Recipes: Three Production System Prompts
Here are three system prompts you can adapt for your own use:
Recipe 1: Customer Support Bot
You are a level-1 support specialist for a SaaS product. Your role: help customers troubleshoot, answer account and billing questions, and escalate complex issues to senior support. Constraints: (1) Never promise refunds – only senior support approves refunds. (2) Never share internal documentation. (3) Admit when you do not know. Output format: Numbered steps for procedures, bullet lists for options, markdown code blocks for examples. Tone: Professional, empathetic, solution-focused. Escalate after 3 failed resolution attempts. Scope: Account access, billing, features, setup, integration, troubleshooting. Refuse: Legal, tax, or accounting advice.
Recipe 2: Data Analyst
You are a senior data analyst. Your role: analyze datasets, identify trends, provide recommendations. Constraints: (1) Always cite the data source. (2) Never assume causation without evidence. (3) Quantify uncertainty – if confidence is low, say so. (4) Do not extrapolate beyond the data. Output format: Executive summary (3 key findings) + detailed analysis with tables + recommendations. Include confidence levels. Tone: Clear, precise, data-driven. Scope: Analyze provided data only. Refuse: Fabricating data, overriding uncertainty with speculation.
Recipe 3: Code Reviewer
You are an expert code reviewer. Your role: evaluate code for correctness, performance, maintainability, and security. Constraints: (1) Point out strengths and weaknesses. (2) Suggest specific improvements, not generic advice. (3) Respect the author's choices – explain the reasoning behind each suggestion rather than issuing demands. (4) Do not suggest premature optimization. (5) Flag security issues as critical. Output format: Summary + line-by-line feedback with code snippets. Use markdown code blocks. Tone: Respectful, constructive. Scope: Code review only. Refuse: Refactoring or architectural changes outside scope.
Frequently Asked Questions
What is a system prompt?
A system prompt is a set of persistent instructions that define how a language model behaves for an entire conversation session. It is set once at the start and applies to all user interactions. The system prompt specifies the model's role, constraints, output format, and communication style.
What is a user prompt?
A user prompt is the per-request input – the specific task, question, or instruction provided for that single interaction. It is created by the end user and changes with each request. User prompts are evaluated within the context of the system prompt's rules and role.
Who writes the system prompt vs. the user prompt?
Developers and product teams write system prompts and ship them in the product. End users write user prompts when they interact with the product. In tools like PromptQuorum, users can see and edit both.
Why should I see the system prompt if I'm an end user?
When using local LLMs like LM Studio or Ollama, hidden system prompts cause unexpected behavior and debugging problems. Seeing the system prompt enables trust, lets you understand the model's constraints, and helps you write better user prompts.
Do all LLMs use system prompts?
Yes. All major LLMs – GPT-4o, Claude 4.6 Sonnet, Gemini 1.5 Pro, Ollama models, LM Studio – support the system prompt + user prompt pattern. Some come with default system prompts; others let you define your own.
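Local backends follow the same convention. As a sketch, a chat request to Ollama's REST endpoint (default port 11434; the model name is illustrative) carries the system prompt as a `system`-role message, exactly like the hosted APIs. The request below is built but never sent:

```python
import json
import urllib.request

def ollama_chat_request(system_prompt: str, user_prompt: str) -> urllib.request.Request:
    # Build (but do not send) a chat request for a local Ollama server.
    payload = {
        "model": "llama3.1",  # illustrative; use whatever model you have pulled
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    return urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = ollama_chat_request("Answer only questions about Python.", "What is a dict?")
print(json.loads(req.data)["messages"][0]["role"])  # system
```

Because the role convention is shared, the same system prompt can move between a hosted API and a local model with no structural changes.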
Can a user prompt override a system prompt?
Not directly. System prompts have structural precedence – the model processes them first and treats them as persistent constraints. A user prompt cannot explicitly disable or overwrite the system prompt. However, a poorly designed system prompt with vague constraints can be ignored if the user prompt strongly contradicts it. Well-designed system prompts include explicit refusal rules that resist user override.
What happens if there is no system prompt?
The model falls back to its default training behavior. GPT-4o, Claude 4.6 Sonnet, and Gemini 1.5 Pro all have built-in baseline behavior (helpful, harmless, honest) when no system prompt is present. The model will still respond to user prompts, but without role definition, output format constraints, or scope boundaries – results will be less consistent and less specialized.
Sources & Further Reading
- OpenAI, 2024. "Prompt Engineering Guide" – official OpenAI documentation on system and user prompts, techniques, and best practices
- Anthropic, 2024. "Prompt Engineering" – Anthropic's guide to structuring prompts and designing system instructions for Claude models
- Schulhoff et al., 2024. "The Prompt Report: A Systematic Survey of Prompting Techniques" – comprehensive academic survey cataloguing 58+ discrete prompting techniques