How to Write a System Prompt for ChatGPT
System prompts are the most powerful and most misunderstood feature of modern AI models. They let you define who the AI is, how it behaves, what it knows, and what it refuses to do, all before a single user message is sent. Master system prompts and you stop getting generic chatbot responses. You start getting a purpose-built AI assistant that behaves exactly the way you need it to. This guide covers everything: what system prompts are, why they matter, how to structure them, five real examples you can steal and adapt, and the mistakes that trip up even experienced users.
What is a system prompt, and why does it matter?
A system prompt is a set of instructions given to a large language model before any user interaction begins. Think of it as a briefing document. When you open ChatGPT and just start typing, you are using it with a default system prompt that OpenAI has written: a generic one designed to be helpful, harmless, and safe. When you write your own system prompt, you replace that generic briefing with one tailored to your exact use case.
The system prompt sits in a privileged position in the model's context window. It is processed before any user message, which means it shapes how the model interprets everything that follows. A well-written system prompt can turn a general-purpose chatbot into a specialised tool: a customer support agent that knows your product inside out, a writing assistant that matches your brand voice, a code reviewer that enforces your team's standards.
Why does this matter? Because the quality gap between a prompted and unprompted AI is enormous. Without a system prompt, you get a polite, generic assistant. With a good system prompt, you get a tool that understands context, follows rules, maintains consistency, and produces output you can actually use in professional workflows. If you have completed Enigmatica's Foundations level, you already understand how context shapes AI output. System prompts are that principle taken to its logical extreme: you set the context once, and it applies to every interaction.
System prompts work across all major models: ChatGPT (via the API or Custom GPTs), Claude (via the API or Projects), Gemini, and open-source models like Llama. The syntax varies slightly, but the principles are universal. Learn them once, apply them everywhere.
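Whichever model you use, the underlying mechanics are the same: the system prompt travels as the first message in the conversation payload. Here is a minimal sketch of that structure in Python. The exact client call varies by provider, so the API line at the end is shown only as an illustrative comment; the example prompt text is hypothetical.

```python
# Minimal sketch: how a system prompt is positioned in a chat-style API payload.
# A "system" message comes first, then the user's turns follow it.

def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Place the system prompt before any user message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    "You are a senior customer support agent for Acme Software.",
    "My invoice shows a duplicate charge.",
)

# With the OpenAI Python SDK, this payload would be sent roughly as:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Custom GPTs, Claude Projects, and similar features build this same structure for you behind the scenes; writing the payload by hand simply makes the privileged position of the system message explicit.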
The anatomy of an effective system prompt
A strong system prompt has a consistent structure. You do not need every element in every prompt, but knowing the full toolkit lets you pick what matters for your use case. Here is the structure, broken into six components; not coincidentally, it maps closely to the CONTEXT Framework taught throughout Enigmatica's curriculum.
Role and identity. Start by telling the model who it is. "You are a senior customer support agent for Acme Software" is more effective than "Help me with customer support." The role anchors the model's behaviour, vocabulary, and decision-making. Be specific: include the domain, seniority level, and any specialisation.
Core task. Define the primary job. What is this AI supposed to do? "Your job is to respond to customer tickets, diagnose issues, and provide step-by-step solutions" is clear and actionable. Vague instructions like "be helpful" produce vague output.
Rules and constraints. This is where system prompts earn their keep. Spell out what the model must always do and what it must never do. "Always ask for the customer's account ID before troubleshooting." "Never promise refunds; escalate refund requests to a human agent." "Do not speculate about features that are not in the current product documentation." Rules prevent the model from freelancing in dangerous directions.
Knowledge and context. Provide the reference material the model needs. This might be a product FAQ, a style guide, a list of pricing tiers, or a set of company policies. The more specific context you provide, the less the model has to guess, and guessing is where hallucinations come from.
Output format. Specify exactly how you want responses structured. "Respond in three sections: Diagnosis, Steps to Resolve, and Follow-up Action." "Use bullet points, not paragraphs." "Keep responses under 200 words." Format instructions eliminate the back-and-forth of asking the model to restructure its output.
Tone and style. Define the voice. "Professional but warm. Use the customer's first name. Avoid jargon unless the customer uses it first." Tone instructions are especially important for customer-facing applications where brand consistency matters.
You do not need to label these sections explicitly in your system prompt, but having all six elements covered produces dramatically better results than a one-line instruction.
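One way to keep all six components in view while drafting is to assemble the prompt from named parts. The sketch below does exactly that; the section labels and the Acme example content are illustrative, not a required format.

```python
# A small helper that assembles the six components into one system prompt.
# Explicit section labels are optional in practice; they are used here for clarity.

def assemble_system_prompt(role: str, task: str, rules: list[str],
                           knowledge: str, output_format: str, tone: str) -> str:
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"{role}\n\n"
        f"Task: {task}\n\n"
        f"Rules:\n{rule_lines}\n\n"
        f"Reference material:\n{knowledge}\n\n"
        f"Output format: {output_format}\n\n"
        f"Tone: {tone}"
    )

prompt = assemble_system_prompt(
    role="You are a senior customer support agent for Acme Software.",
    task="Respond to customer tickets, diagnose issues, and provide step-by-step solutions.",
    rules=[
        "Always ask for the customer's account ID before troubleshooting.",
        "Never promise refunds; escalate refund requests to a human agent.",
    ],
    knowledge="Acme Software product FAQ (paste the relevant excerpts here).",
    output_format="Three sections: Diagnosis, Steps to Resolve, Follow-up Action.",
    tone="Professional but warm; avoid jargon unless the customer uses it first.",
)
```

A builder like this also makes iteration easier later: each rule you add during testing is one more entry in the `rules` list rather than an edit buried in a wall of text.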
5 real system prompts you can steal and adapt
Here are five production-ready system prompts for common use cases. Each one demonstrates the structure above in practice. Copy them, modify them for your context, and deploy them.
Example 1: Customer Service Bot
"You are a senior customer support specialist for Meridian SaaS, a project management platform. Your job is to help customers resolve issues with their accounts, billing, and product features. Always greet the customer by name if provided. Ask clarifying questions before jumping to solutions; never assume you understand the problem from the first message. If a customer requests a refund or cancellation, or has a billing dispute, acknowledge their frustration, explain that these requests require a specialist, and offer to escalate immediately. Never promise outcomes you cannot guarantee. Reference the product documentation below for accurate feature information. Keep responses concise: under 150 words unless a step-by-step walkthrough is needed. Tone: professional, patient, genuinely helpful. Never use phrases like 'I understand your frustration'; instead, directly address what went wrong."
Example 2: Writing Assistant
"You are a senior content editor with 15 years of experience in B2B technology writing. Your job is to help the user draft, edit, and improve written content. When drafting: write in active voice, use short sentences (average 15 words), and favour concrete language over abstractions. When editing: identify the three biggest weaknesses in the draft and suggest specific fixes, then provide a revised version. Follow AP style for punctuation and formatting. The user's brand voice is authoritative but approachable: imagine explaining complex topics to an intelligent friend who is not a specialist. Never use the following words: leverage, synergy, ecosystem, paradigm, holistic, innovative, cutting-edge. Flag any factual claims that need verification. If the user asks you to write something misleading or factually incorrect, decline and explain why."
Example 3: Code Reviewer
"You are a principal software engineer conducting code reviews. Your job is to review code snippets and pull requests for correctness, readability, performance, and security. For each review, structure your feedback as: (1) Summary: one sentence on overall quality, (2) Critical Issues: bugs, security vulnerabilities, or logic errors that must be fixed, (3) Improvements: suggestions that would make the code better but are not blocking, (4) Nitpicks: style and formatting notes, clearly labelled as optional. Be direct and specific. Reference line numbers. When suggesting changes, show the improved code, do not just describe it. Assume the author is competent: explain why a change matters, not how to make it. If the code is good, say so briefly and move on. Do not invent issues to seem thorough. Languages you are most experienced with: TypeScript, Python, Go, and Rust."
Example 4: Data Analyst
"You are a senior data analyst helping the user explore, clean, and analyse datasets. When the user shares data or describes a dataset, your first step is always to ask clarifying questions: What is the business question? What decisions will this analysis inform? What is the data source and how recent is it? When writing analysis code, use Python with pandas and matplotlib unless the user specifies otherwise. Always include data validation steps: check for nulls, duplicates, outliers, and type mismatches before proceeding to analysis. Present findings in plain English first, then support with charts or tables. When reporting statistics, always include sample size, confidence intervals where applicable, and caveats about what the data cannot tell you. Never present correlation as causation. If the dataset is too small or too messy to support reliable conclusions, say so clearly rather than forcing an answer."
Example 5: Meeting Summariser
"You are an executive assistant summarising meeting transcripts. For each transcript, produce a structured summary with exactly four sections: (1) Key Decisions: bullet list of decisions made, including who made them, (2) Action Items: bullet list with owner, task, and deadline for each, (3) Open Questions: unresolved topics that need follow-up, (4) Context: a 2-3 sentence summary of the meeting's purpose and overall trajectory. Keep the total summary under 300 words. Use the participants' names, not pronouns. If a decision or action item is ambiguous in the transcript, flag it with '[UNCLEAR: verify with participants]' rather than guessing. Do not editorialise or add opinions about the meeting's productivity. Omit small talk, tangents, and off-topic discussions entirely."
Each of these prompts is opinionated, specific, and packed with constraints. That is what makes them work. The more precisely you define the boundaries, the better the model performs within them.
Common mistakes that sabotage your system prompts
Most system prompts fail for predictable reasons. Here are the five mistakes I see most often, along with the fix for each.
Mistake 1: Being too vague. "Be helpful and professional" gives the model almost nothing to work with. It will fall back on its default behaviour, which may not match what you need. The fix: replace adjectives with rules. Instead of "be professional," write "use formal English, address the user by their surname, and never use slang or emoji."
Mistake 2: Conflicting instructions. "Be concise" followed three paragraphs later by "provide comprehensive, detailed explanations" leaves the model guessing which instruction to prioritise. The fix: review your prompt for contradictions. If you want both brevity and depth, specify when each applies: "Keep initial responses under 100 words. If the user asks for more detail, provide a comprehensive explanation."
Mistake 3: No output format specification. Without format instructions, the model will choose its own structure, and it will choose differently every time. If you need consistent output (and in professional workflows, you always do), you must specify the format. The fix: include explicit structure requirements. "Respond with a numbered list," "Use markdown headers," or "Return JSON with the following keys": be prescriptive.
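Format instructions pay off most when something downstream consumes the output. If your prompt demands JSON with specific keys, a few lines of validation catch the cases where the model drifts from the format. The key names below are illustrative, not a standard.

```python
# Sketch: validating a model reply when the system prompt demands
# "Return JSON with the following keys". The required keys are examples.
import json

REQUIRED_KEYS = {"diagnosis", "steps", "follow_up"}

def parse_structured_reply(reply: str) -> dict:
    data = json.loads(reply)  # raises a ValueError subclass if the model broke the JSON format
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Model omitted required keys: {sorted(missing)}")
    return data

reply = '{"diagnosis": "expired token", "steps": ["re-authenticate"], "follow_up": "none"}'
parsed = parse_structured_reply(reply)
```

When validation fails repeatedly in the same way, that failure is a signal to tighten the format section of the system prompt itself.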
Mistake 4: Trying to cover everything in one prompt. A system prompt that tries to handle customer support, sales, technical troubleshooting, and billing disputes in a single set of instructions will do none of them well. The fix: create separate system prompts for separate use cases. One well-scoped prompt will outperform one sprawling prompt every time.
Mistake 5: Never testing or iterating. Your first draft of a system prompt will not be your best. The fix: test with real queries, identify where the model deviates from your expectations, and add specific rules to address those deviations. The best system prompts are built through iteration, not inspiration. This iterative approach is central to prompt engineering as a discipline, and it is the core skill taught in Enigmatica's Essentials and Practitioner levels.
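Iteration goes faster when the checks are written down rather than eyeballed. Here is a minimal sketch of an offline check harness: run real queries, capture the replies, and assert that your rules hold. The specific rules shown (a word limit and a banned phrase, echoing the customer-service example above) are placeholders for whatever your prompt enforces.

```python
# Sketch of a tiny check harness for iterating on a system prompt.
# Each rule mirrors a constraint written into the prompt itself.

def check_reply(reply: str, max_words: int = 150,
                banned_phrases: tuple[str, ...] = ("I understand your frustration",)) -> list[str]:
    """Return a list of rule violations found in a model reply."""
    violations = []
    if len(reply.split()) > max_words:
        violations.append(f"over {max_words} words")
    for phrase in banned_phrases:
        if phrase.lower() in reply.lower():
            violations.append(f"banned phrase: {phrase!r}")
    return violations

# Every violation found is a candidate for a new or sharper rule
# in the next draft of the system prompt.
issues = check_reply("I understand your frustration. Please restart the app.")
```

Run this over a handful of real transcripts after each prompt revision and you get a concrete, repeatable measure of whether the change helped.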
Advanced techniques for power users
Once you have the fundamentals down, these advanced techniques will take your system prompts further.
Few-shot examples in the system prompt. Instead of just describing what you want, show it. Include two or three example interactions (a user message paired with the ideal response) directly in your system prompt. The model learns from concrete examples far more reliably than from abstract instructions, so if you want responses in a specific format or with a specific reasoning style, showing beats telling every time.
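In API-based setups there are two common places to put those examples: pasted as text inside the system prompt, or injected as synthetic user/assistant turns ahead of the real conversation. The sketch below shows the second pattern; the example pair is invented for illustration.

```python
# Sketch: embedding few-shot examples as synthetic message-history pairs.
# The same examples could equally be pasted as text into the system prompt.

FEW_SHOT = [
    ("Summarise: the meeting ran long and nothing was decided.",
     "Key Decisions: none. Action Items: none. Open Questions: agenda for follow-up."),
]

def messages_with_examples(system_prompt: str, user_message: str) -> list[dict]:
    msgs = [{"role": "system", "content": system_prompt}]
    for example_input, ideal_output in FEW_SHOT:
        msgs.append({"role": "user", "content": example_input})
        msgs.append({"role": "assistant", "content": ideal_output})
    msgs.append({"role": "user", "content": user_message})
    return msgs
```

The synthetic turns look to the model like a conversation it has already had, which anchors the format of its next reply more firmly than a description of that format would.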
Conditional logic. Build branching behaviour into your prompt: "If the user's question is about billing, follow process A. If the user's question is about a technical issue, follow process B. If you cannot determine the category, ask the user to clarify before proceeding." This turns a simple prompt into a decision tree that handles multiple scenarios gracefully.
Chain-of-thought enforcement. For complex tasks, instruct the model to reason step by step before producing output: "Before answering, work through the problem in a <thinking> block. Consider at least two possible approaches. Then provide your final answer." This reduces errors on multi-step reasoning tasks; the same principle underpins Enigmatica's CONTEXT Framework, which structures the thinking process before generating output.
Negative examples. Show the model what bad output looks like and tell it to avoid it: "Do NOT respond like this: [bad example]. Instead, respond like this: [good example]." Negative examples are particularly effective for eliminating persistent bad habits, like excessive hedging or unnecessary caveats.
Version control your prompts. Treat system prompts like code. Keep them in a document, version them, and log changes. When you modify a prompt, note what you changed and why. This makes it possible to roll back when a change makes things worse, and it builds a knowledge base of what works for your specific use case.
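A real setup would keep each prompt in a file under git, but the record-keeping habit can be sketched in a few lines. The structure below (version number, text, change note, rollback) is illustrative, not a prescribed tool.

```python
# Sketch: treating system prompts like code with a minimal version log.
from dataclasses import dataclass, field

@dataclass
class PromptVersion:
    version: int
    text: str
    change_note: str  # what changed and why, recorded at commit time

@dataclass
class PromptHistory:
    versions: list[PromptVersion] = field(default_factory=list)

    def commit(self, text: str, change_note: str) -> None:
        self.versions.append(PromptVersion(len(self.versions) + 1, text, change_note))

    def rollback(self) -> PromptVersion:
        """Drop the latest version and return the one before it."""
        self.versions.pop()
        return self.versions[-1]

history = PromptHistory()
history.commit("You are a support agent.", "initial draft")
history.commit("You are a support agent. Keep replies under 150 words.", "add word limit")
```

Pairing each version with the check harness from the testing section turns "this change made things worse" from a hunch into something you can verify before rolling back.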
Temperature and model selection. Your system prompt interacts with model settings. A detailed, constraining system prompt pairs well with a lower temperature (0.2-0.5) for consistent, predictable output. A creative system prompt, for brainstorming or writing, works better with a higher temperature (0.7-1.0). Match your settings to your intent.
System prompts are the bridge between "using AI" and "deploying AI." They are the foundation of every custom GPT, every AI-powered workflow, and every enterprise AI deployment. If you want to go deeper, Enigmatica's Practitioner and Advanced levels cover prompt architecture, chaining, and workflow design in detail, building on the principles covered here.
Getting started: your first system prompt in 10 minutes
You do not need to build a perfect system prompt on your first attempt. Here is a practical exercise to get you started right now.
Pick a task you do repeatedly: answering customer emails, summarising documents, writing social media posts, reviewing reports. Something you do at least a few times per week.
Write a rough system prompt using the six-component structure: role, task, rules, knowledge, format, and tone. Do not overthink it. Spend five minutes getting the basics down.
Test it with three real examples from your recent work. Paste your system prompt into ChatGPT's custom instructions or Claude's project instructions, then give it three real inputs from tasks you have actually done.
Compare the output against what you would have produced yourself. Where does the AI nail it? Where does it miss? Every gap is a clue about what your system prompt is missing.
Add 2-3 rules to address the gaps. If the AI was too verbose, add a word limit. If it missed an important consideration, add it to the knowledge section. If the tone was off, be more specific about voice.
Test again. Two or three rounds of this iterative refinement will produce a system prompt that genuinely saves you time on a real task. That is the whole point β not theoretical elegance, but practical utility.
If you want a structured approach to this process, Enigmatica's CONTEXT Framework provides a step-by-step methodology for building effective prompts. It is taught in our free Essentials level and reinforced throughout every subsequent lesson. You can also explore the Prompt Template Library in our tools section for pre-built templates across dozens of use cases.
Put this into practice, for free
Enigmatica's curriculum covers these topics in depth with interactive lessons and quizzes. Completely free.
Start Learning Free