Frameworks

The APE Framework: Analyze, Plan, Execute — Structured Prompts That Show Their Reasoning

7 min read · By Hans Kuepper, Founder of PromptQuorum, a multi-model AI dispatch tool

The APE Framework is a simple three-step prompt structure built around Analyze, Plan, and Execute so that large language models produce clear, traceable outputs from a single instruction. In PromptQuorum, the APE Framework is available as a ready-made option that any user can select and apply across all supported models.

⚡ Quick Facts

  • APE stands for: Analyze → Plan → Execute
  • Purpose: Make AI reasoning visible and inspectable before the final output
  • When to use: Complex, high-stakes tasks where reasoning matters (analysis, strategy, code review, research)
  • When NOT to use: Simple factual questions, short tasks, quick drafts
  • Complexity: 3 stages; simpler than CRAFT (5), CO-STAR (6), SPECS (5)
  • Works on: All language models, both cloud APIs and local models via Ollama/LM Studio

What the APE Framework Is

The APE Framework is a prompt template that forces large language models to separate their thinking into analysis, planning, and execution. Instead of getting one undifferentiated answer, you see how the model understood the problem, how it intends to solve it, and the final output. This structure improves reliability because you can inspect each stage.

APE is especially useful when you are dealing with complex or high-stakes tasks. By asking the model to show its reasoning path explicitly, you reduce the chance that hidden assumptions or shortcuts stay invisible. The same three-part pattern works across all models—cloud APIs and local models via Ollama or LM Studio—keeping results consistent.

The Three Steps: Analyze, Plan, Execute

The core of the APE Framework is that every prompt instructs the model to first analyze the problem, then plan the solution, and only then execute the final answer. These three steps map directly to how humans handle reasoning and give you clear checkpoints.

A typical breakdown looks like this:

  • Analyze: Restate the task in your own words, identify key constraints, and surface any missing information.
  • Plan: Propose a short step-by-step approach that you will follow to solve the task.
  • Execute: Produce the final answer following the plan, with the requested structure and formatting.
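Because each stage is a labeled checkpoint, a response can be split apart programmatically before you read the final output. As a minimal sketch (assuming your prompt asks the model to label each section with the stage name followed by a colon, which is not guaranteed by any model), the following Python snippet separates a response into its three parts:

```python
import re

STAGES = ("Analyze", "Plan", "Execute")

def split_ape_response(text: str) -> dict:
    """Split a model response into its Analyze/Plan/Execute sections.

    Assumes the response labels each stage as "Analyze:", "Plan:",
    "Execute:" (request this labeling explicitly in your prompt).
    """
    # Capture each labeled section up to the next label or end of text.
    pattern = r"(Analyze|Plan|Execute)\s*:\s*(.*?)(?=(?:Analyze|Plan|Execute)\s*:|\Z)"
    sections = {name: body.strip()
                for name, body in re.findall(pattern, text, flags=re.DOTALL)}
    missing = [s for s in STAGES if s not in sections]
    if missing:
        raise ValueError(f"Response is missing stages: {missing}")
    return sections

response = """Analyze: The task asks for a launch email to existing customers.
Plan: 1) hook, 2) benefits, 3) call to action.
Execute: Subject: Meet your new analytics dashboard..."""

parts = split_ape_response(response)
print(parts["Plan"])  # review the plan before reading the final output
```

Splitting like this makes the "inspect each stage" step concrete: you can log the Analyze and Plan sections separately, or fail fast when a model skips a stage.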

When to Use the APE Framework

You should use the APE Framework when your task is complex enough that you care about the model's reasoning, not just its final output. This includes technical analysis, multi-step research, strategic writing, and any situation where errors are costly.

Typical use cases include:

  • Breaking down a product requirement into user stories and acceptance criteria.
  • Designing a content strategy from raw notes and market information.
  • Reviewing and refactoring code while explaining trade-offs and risks.
  • Planning and drafting long-form reports where structure matters as much as wording.

⚠️ Warning

APE adds token overhead — the Analyze and Plan stages consume output tokens before the final Execute. For high-volume production tasks where you only need the final answer and don't inspect reasoning, use a simpler single-step prompt to reduce cost and latency.

How to Write an APE Prompt

An effective APE prompt mentions the three stages by name and specifies what you expect in each part: analysis notes, a step-by-step plan, and a final output. You can do this in a compact way so that it still counts as a single prompt.

A generic pattern is:

"You are [role]. First, Analyze the task by listing the key goals, constraints, and missing information. Then, Plan your approach in 3–5 bullet points. Finally, Execute by producing the [desired output format], strictly following your plan."

You can then customize this base pattern with domain details such as audience, tone, file structure, or citation requirements. Once defined, you can reuse the same APE prompt across multiple tasks by changing only the objective and context.
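As a sketch of that reuse, the generic pattern can be wrapped in a small helper so only the role, objective, and context change between tasks. The function and parameter names here are illustrative, not part of any PromptQuorum API:

```python
def build_ape_prompt(role: str, objective: str, context: str = "",
                     output_format: str = "a clear, well-structured answer") -> str:
    """Assemble a single APE-structured prompt from reusable parts."""
    context_line = f"Context: {context}\n" if context else ""
    return (
        f"You are {role}.\n"
        f"Objective: {objective}\n"
        f"{context_line}"
        "1) Analyze: List the key goals, constraints, and missing information.\n"
        "2) Plan: Outline your approach in 3-5 bullet points.\n"
        f"3) Execute: Produce {output_format}, strictly following your plan."
    )

prompt = build_ape_prompt(
    role="a SaaS product marketer",
    objective="an announcement email for our new analytics dashboard",
    output_format="the final email (max 220 words) with a subject line and body",
)
print(prompt)
```

Swapping only the arguments keeps the three-stage scaffolding identical across tasks, which is what makes side-by-side comparisons between runs meaningful.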

🔍 Pro Tip

After the model completes its Analysis and Plan, read them BEFORE looking at the Execute output. If the Analysis missed a constraint or the Plan has a wrong step, tell the model to revise — this is cheaper and faster than regenerating the entire response.

Example: Bad vs Good APE Prompt

The difference between an unstructured prompt and an APE prompt becomes clear when you compare them on the same task. Here is a simple example for a product launch email.

Bad Prompt

"Write an email announcing our new analytics dashboard."

Good Prompt

"You are a SaaS product marketer. Objective: Create an announcement email for our new analytics dashboard aimed at existing customers. APE structure: 1) Analyze: Briefly list the target audience, their main pain points, and the key benefits this dashboard addresses. 2) Plan: Outline the email structure in 3–5 bullet points (hook, key benefits, call to action, etc.). 3) Execute: Write the final email (max 220 words) in a clear, professional tone. Include a subject line, preview text, and body."

With the APE Framework, the model shows its understanding of the problem and the plan before producing the email, which makes it easier to spot misalignment early.

How PromptQuorum Implements the APE Framework

PromptQuorum is a multi-model AI dispatch tool that includes the APE Framework as one of its built-in prompt structures so users can apply Analyze–Plan–Execute prompting with a single click. When you choose the APE option in PromptQuorum, the app automatically injects the three-step structure around your objective and context.

Within PromptQuorum, the APE Framework:

  • Provides labeled sections for analysis, planning, and execution expectations so you do not have to remember the pattern each time.
  • Sends the same APE-structured prompt to multiple models in parallel, making it easy to compare how different models respond at each stage.
  • Can be saved as a template for repeated workflows such as code reviews, strategy memos, or research briefs.

Choosing APE vs Other Frameworks

You should choose the APE Framework over other prompt frameworks when you want explicit reasoning steps but do not need a large number of parameters or sections. APE is deliberately compact: three stages are often enough to improve clarity without overwhelming the user.

In practice:

  • Pick APE for complex but self-contained tasks where reasoning matters.
  • Pick a Single Step-style framework when you already know the exact output format and only need one well-specified instruction.
  • Pick more detailed frameworks (with many sections and parameters) only when you have strict internal standards that must be encoded into the prompt.
| Framework | Stages/Sections | Best For | Reasoning Visible? |
| --- | --- | --- | --- |
| APE | 3 (Analyze, Plan, Execute) | Complex tasks needing inspectable reasoning | Yes (explicit stages) |
| Single-Step | 1 (one instruction) | Simple, well-defined tasks | No |
| CRAFT | 5 (Context, Role, Action, Format, Target) | Comprehensive context definition | Optional |
| CO-STAR | 6 (Context, Objective, Style, Tone, Audience, Response) | Marketing and communications | No |
| SPECS | 5 (Situation, Problem, Examples, Constraints, Style) | Problem-solving with examples | Optional |
| Chain-of-Thought | 1 (with "think step by step") | Math, logic, single-pass reasoning | Yes, but unstructured |

🔍 Did You Know

APE's three-stage structure maps directly to how human experts approach complex problems: understand the problem (Analyze), design the approach (Plan), then produce the deliverable (Execute). Cognitive science research shows this separation reduces errors in both human and AI reasoning.

APE Also Means: Automatic Prompt Engineering (Different Concept)

Note: "APE" is also used to refer to Automatic Prompt Engineering — a separate technique from the Analyze-Plan-Execute framework described above. Automatic Prompt Engineering (Zhou et al., 2022) uses AI to generate and score prompt variants automatically, finding optimal phrasings without manual trial-and-error. Here's how that works:

  1. Define your task, success metric, and a few seed examples. Example: Task = "classify customer feedback sentiment." Success metric = "accuracy on 20 labeled examples." Seed examples = 3 diverse customer messages with correct sentiment labels.
  2. Use an APE tool or ChatGPT to automatically generate prompt variants. Provide your task and examples, and ask: "Generate 5 different prompt variations that might solve this task. Vary instruction style, examples, and constraints." Evaluate each variant on your test set.
  3. Score each variant on your success metric. Run all variants on your held-out examples and record accuracy, speed, and cost. APE's goal is to find the best prompt without manual trial and error.
  4. Iterate: pick the top 2 variants and ask the optimizer to generate mutations of those. If variant 3 scored 85% accuracy and variant 5 scored 82%, ask the optimizer to "generate variations similar to variant 3 but with [specific tweak]." Refine iteratively.
  5. Once you have a strong prompt, test it on fresh data to confirm it generalizes. Your optimized prompt scored well on your test set; now verify it works on new, unseen examples. If performance drops, you may have overfit to your test data.
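The generate-score-iterate loop in the steps above can be sketched in a few lines of Python. The `score` and `mutate` functions below are deterministic stubs standing in for real model calls (scoring variants against labeled examples, and asking an optimizer model for rewrites); only the loop structure is the point:

```python
import random

def score(prompt: str, examples: list[tuple[str, str]]) -> float:
    """Stub metric: in practice, run the prompt on each labeled example
    and return accuracy. Here it is a deterministic placeholder so the
    loop is runnable."""
    random.seed(sum(map(ord, prompt)))
    return random.random()

def mutate(prompt: str, n: int) -> list[str]:
    """Stub optimizer step: in practice, ask a model to generate
    variations of a high-scoring prompt."""
    return [f"{prompt} (variation {i + 1})" for i in range(n)]

def optimize(seed_prompts: list[str], examples, rounds: int = 3, keep: int = 2) -> str:
    """Generate-score-iterate: keep the top variants each round and
    expand the pool with mutations of them."""
    pool = list(seed_prompts)
    for _ in range(rounds):
        ranked = sorted(pool, key=lambda p: score(p, examples), reverse=True)
        best = ranked[:keep]                      # step 4: keep top variants
        pool = best + [m for p in best for m in mutate(p, 2)]
    return max(pool, key=lambda p: score(p, examples))

examples = [("I love this!", "positive"), ("Terrible support.", "negative")]
best = optimize(["Classify the sentiment of this customer message."], examples)
print(best)
```

With real model calls substituted for the stubs, the final step from the list still applies: validate the winning prompt on fresh, unseen examples before trusting its score.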


Apply these techniques to 25+ AI models simultaneously with PromptQuorum.

Try PromptQuorum for free →

