PromptQuorum

How PromptQuorum Works

A 4-stage workflow: write a structured prompt using one of 9 frameworks, optimize it with your own LLM, dispatch simultaneously to 25+ AI services, then analyze all responses using 13 consensus analysis types.

Runs entirely in your browser – no PromptQuorum server ever sees your prompts or API keys.
Step 1: Write

Structure Your Prompt

Prompts structured with frameworks produce higher quality outputs. PromptQuorum includes 9 built-in frameworks (Single Prompt Line, CRAFT, CO-STAR, RISEN, TRACE, APE, SPECS, Google Prompt, RTF) plus 2 fully custom framework slots.

  • ✓ Single Prompt Line – minimal structure for quick tasks
  • ✓ CRAFT – Context, Role, Action, Format, Target (creative writing)
  • ✓ CO-STAR – Context, Objective, Style, Tone, Audience, Response (marketing, business)
  • ✓ RISEN – Role, Instructions, Steps, End Goal, Narrowing (sequential enterprise tasks)
  • ✓ TRACE – Task, Request, Action, Context, Example (few-shot learning)
  • ✓ APE, SPECS, Google Prompt, RTF – optimized for specific task types
A Framework Wizard recommends the best framework based on your task type.
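A structured framework is ultimately just a labeled template filled with your inputs. As a minimal sketch, here is how CRAFT fields might be assembled into a single prompt string; the function name, layout, and example values are illustrative assumptions, not PromptQuorum's actual rendering.

```python
# Sketch: assemble a structured prompt from CRAFT fields
# (Context, Role, Action, Format, Target, as expanded above).

def build_craft_prompt(context: str, role: str, action: str,
                       format_: str, target: str) -> str:
    """Render CRAFT fields as a labeled, multi-section prompt."""
    sections = [
        ("Context", context),
        ("Role", role),
        ("Action", action),
        ("Format", format_),
        ("Target", target),
    ]
    return "\n\n".join(f"{label}: {text}" for label, text in sections)

prompt = build_craft_prompt(
    context="A SaaS startup launching a new analytics dashboard",
    role="You are a senior product marketer",
    action="Write three launch-announcement taglines",
    format_="A numbered list",
    target="Engineering managers evaluating BI tools",
)
```

Each framework would swap in its own field list; the rendering logic stays the same.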
Step 2: Optimize

Refine with Your Own LLM

Prompt quality improves measurably with optimization – structured prompts score 25–45% higher in LLM evaluation. PromptQuorum applies 8 refinement types (Make Concise, Expand Detail, Break Into Steps, Increase Specificity, Simplify, Add Quality Controls, Multi-Expert Consultation, Compress to Essence) plus smart temperature detection.

  • ✓ Quality Assessment – 0–100% scoring on clarity, specificity, structure, and constraints
  • ✓ Smart Temperature – recommends an optimal creativity level (0.0–1.0) based on task type
  • ✓ Version History – every refinement saved; branch and compare refinement paths
  • ✓ Teaching Mode – explains why each change improves quality and clarity
  • ✓ 8 One-Click Refinements – apply structured transformations instantly
  • ✓ Custom Instruction – free-text refinement using your own LLM
Your LLM. Your API key. Nothing passes through PromptQuorum servers.
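Smart temperature detection boils down to mapping a task category to a sampling temperature in the 0.0–1.0 range mentioned above. The categories and exact values below are illustrative assumptions, not PromptQuorum's actual heuristics.

```python
# Sketch of smart temperature detection: coarse task type -> recommended
# temperature. Lower = more deterministic, higher = more creative.

TEMPERATURE_BY_TASK = {
    "code": 0.0,       # deterministic output preferred
    "analysis": 0.2,
    "business": 0.4,
    "marketing": 0.7,
    "creative": 0.9,   # maximize variety
}

def recommend_temperature(task_type: str) -> float:
    """Return a recommended temperature, defaulting to a neutral 0.5."""
    return TEMPERATURE_BY_TASK.get(task_type, 0.5)
```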
Step 3: Dispatch

Send to 25+ AI Services

Sending the same prompt to multiple AI models reveals which model performs best for your task. PromptQuorum opens parallel browser tabs to 25+ destinations with zero copy-pasting required.

  • ✓ Auto-dispatch (17 services): OpenAI ChatGPT, Google Gemini, Anthropic Claude, Perplexity, xAI Grok, DeepSeek, Mistral, Cohere, Azure, Together, Groq, and more
  • ✓ Copy-paste (8 services): Qwen, Meta AI, Poe, Kimi, LM Studio, Jan AI, GPT4All, and others
  • ✓ Perplexity auto-submits – prompt sent immediately on arrival
  • ✓ 2 custom URL slots – configure any AI service not on the default list
  • ✓ Optional pre-dispatch refinement – final LLM enhancement before sending
  • ✓ Parallel execution – all tabs open simultaneously; collect responses in under 1 minute
All browser tabs open in parallel. No manual copy-pasting between tabs.
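URL-based dispatch means URL-encoding the prompt into each service's query string before opening the tab. The query-parameter names below are placeholders – each real service defines its own (or accepts none) – so treat the URL templates as assumptions, not PromptQuorum's actual destination list.

```python
# Sketch of URL-based dispatch: pre-load a prompt into each service by
# encoding it into a query parameter, one URL per destination tab.

from urllib.parse import quote

SERVICE_URL_TEMPLATES = {
    "perplexity": "https://www.perplexity.ai/search?q={prompt}",
    "example-ai": "https://example-ai.invalid/chat?prompt={prompt}",
}

def build_dispatch_urls(prompt: str) -> dict[str, str]:
    """Return {service_name: fully encoded URL} for every destination."""
    encoded = quote(prompt, safe="")
    return {name: template.format(prompt=encoded)
            for name, template in SERVICE_URL_TEMPLATES.items()}
```

In the browser, each URL would then be opened with `window.open`, which is why all tabs launch in parallel.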
Step 4: Quorum

Find Consensus Across All Models

When 5+ independent models agree on an answer, confidence is higher than with a single model. Paste all responses back into PromptQuorum and apply 13 consensus analysis types.

  • ✓ Consensus Summary – identifies shared themes and unanimous agreements
  • ✓ Contradiction Detection – flags where models diverge; identifies minority opinions
  • ✓ Hallucination Detection – identifies claims appearing in few models as potential false facts
  • ✓ Confidence Scoring – certainty level per model and per claim
  • ✓ Best Answer Selection – selects the highest-quality individual response
  • ✓ Weighted Merge – synthesizes a hybrid response using the best elements from all models
When 5+ independent models converge on the same answer, hallucination risk is lower than with a single model.
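The core counting logic behind consensus and hallucination scoring can be sketched in a few lines: score each claim by the fraction of models asserting it, and flag low-agreement claims. In PromptQuorum the actual analysis is performed by your connected LLM; this pure-Python version only illustrates the principle, and the 0.4 threshold is an arbitrary assumption.

```python
# Sketch: consensus scores from claims extracted per model, plus a
# minority-claim (potential hallucination) filter.

from collections import Counter

def consensus_scores(claims_per_model: list[set[str]]) -> dict[str, float]:
    """Fraction of models asserting each claim (1.0 = unanimous)."""
    counts = Counter(c for claims in claims_per_model for c in claims)
    n = len(claims_per_model)
    return {claim: count / n for claim, count in counts.items()}

def flag_hallucinations(scores: dict[str, float],
                        threshold: float = 0.4) -> list[str]:
    """Claims asserted by fewer than `threshold` of the models."""
    return sorted(c for c, s in scores.items() if s < threshold)
```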

9 Built-in Prompt Frameworks

Structured prompts using frameworks produce measurably better outputs than unstructured requests. Each framework organizes input differently for specific task types. A Framework Wizard recommends the best fit, or build 2 custom frameworks.

Framework – Optimal For
  • Single Prompt Line – Quick, ad-hoc queries without structure
  • APE – 3-field minimal structure; simple tasks
  • CRAFT – Creative writing; general-purpose tasks
  • CO-STAR – Marketing copy; business communication
  • SPECS – Analysis; research; technical writing
  • RISEN – Multi-step enterprise workflows
  • TRACE – Few-shot learning; example-based tasks
  • Google Prompt – Professional tasks; role-based prompts
  • RTF – Minimal structure; 3 core fields only
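In its simplest form, the Framework Wizard's recommendation is a lookup from task type to framework, mirroring the pairings above. The task-type labels below are illustrative assumptions; the real wizard asks guided questions rather than taking a single label.

```python
# Minimal sketch of a framework recommender based on the table above.

FRAMEWORK_FOR_TASK = {
    "quick-query": "Single Prompt Line",
    "creative-writing": "CRAFT",
    "marketing": "CO-STAR",
    "research": "SPECS",
    "enterprise-workflow": "RISEN",
    "few-shot": "TRACE",
}

def recommend_framework(task_type: str) -> str:
    """Suggest a framework, defaulting to the unstructured option."""
    return FRAMEWORK_FOR_TASK.get(task_type, "Single Prompt Line")
```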

13 Quorum Analysis Types

Apply any subset of the 13 analyses – or all of them at once – to responses from multiple models. Each analysis is executed by your connected LLM, not by PromptQuorum servers. Identify consensus, contradictions, hallucinations, and confidence levels across all model outputs.

Synthesis (3)
  • → Consensus Summary – shared themes across all models
  • → Weighted Merge – hybrid answer combining the best from each model
  • → Atomic Facts Extraction – break all claims into discrete facts; count model agreement
Comparison (3)
  • → Overlap Mapping – identify which models produced identical outputs
  • → Contradiction Detection – flag claims where models diverge; identify disagreements
  • → Confidence Scoring – measure certainty level per model and per claim
Quality (3)
  • → Completeness Check – verify all required information is present
  • → Hallucination Detection – identify claims appearing in few models as potential false facts
  • → Redundancy Elimination – remove duplicate or near-duplicate claims
Selection (4)
  • → Best Answer Selection – pick the single highest-quality response
  • → Multi-Model Ensemble – combine outputs using model reliability weighting
  • → Controversy Flag – highlight claims where model agreement is weak
  • → Custom Analysis – user-defined analysis template
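To illustrate one of these concretely, Redundancy Elimination can be sketched as deduplication over normalized claim text. Real near-duplicate detection would go through the connected LLM (or embeddings); the crude lowercase-and-strip-punctuation normalization here is purely an assumption for the sketch.

```python
# Sketch of Redundancy Elimination: drop claims whose normalized text
# has already been seen, keeping the first occurrence verbatim.

import re

def normalize(claim: str) -> str:
    """Lowercase and strip punctuation for duplicate comparison."""
    return re.sub(r"[^a-z0-9 ]", "", claim.lower()).strip()

def deduplicate_claims(claims: list[str]) -> list[str]:
    seen: set[str] = set()
    unique: list[str] = []
    for claim in claims:
        key = normalize(claim)
        if key not in seen:
            seen.add(key)
            unique.append(claim)
    return unique
```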
Export results in 6 formats: .txt, .md, .json, .csv, .html, .pdf

Multiple formats → downloaded as a single .zip archive. Folder selection uses the File System Access API (Chrome/Edge/Safari 16+).
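Bundling several export formats into one .zip can be sketched with an in-memory archive; the filenames and contents below are placeholders, and the browser-only File System Access API step is omitted.

```python
# Sketch: zip {filename: content} pairs into a single in-memory archive,
# as when several export formats are selected at once.

import io
import zipfile

def bundle_exports(results: dict[str, str]) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in results.items():
            zf.writestr(name, content)
    return buf.getvalue()
```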

Key Concepts

Multi-Model Dispatch
Sending one prompt simultaneously to 25+ AI models in a single click. PromptQuorum pre-loads your prompt into each destination via URL – no copy-pasting, and all tabs open in parallel.
Quorum Analysis
Structured comparison of responses from multiple AI models to identify consensus, contradictions, and confidence levels. PromptQuorum offers 13 analysis types including Hallucination Detection and Best Answer Selection.
Consensus Scoring
A confidence rating derived from the degree of agreement across multiple model responses. Higher consensus = higher reliability. Lower consensus flags areas of uncertainty or potential hallucination.
Hallucination Detection
Identifying factual claims that appear in only one or a minority of model responses, indicating potential AI fabrication. Cross-referencing 5+ independent models dramatically reduces the rate of undetected hallucinations.
BYOM – Bring Your Own Model
Connecting your own API keys directly to AI providers. Keys are stored only in your browser's localStorage and connect directly to providers – no PromptQuorum server ever receives or transmits your credentials.

Bring Your Own Model (BYOM) – No PromptQuorum Infrastructure

PromptQuorum does not host or execute any LLM models. Every API call goes directly from your browser to your chosen provider. Your API keys stay in browser localStorage and are never transmitted to PromptQuorum servers.
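A direct browser-to-provider call means the client builds the full request itself. The sketch below constructs such a request in the widely used OpenAI-compatible chat-completions shape; only request construction is shown (no network call), and the function name and values are illustrative.

```python
# Sketch of a BYOM request: the key goes straight into the request
# headers sent to the provider, never to any intermediary server.

def build_chat_request(base_url: str, api_key: str, model: str,
                       prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-compatible chat-completions request description."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
        },
    }
```

Because many cloud and local providers accept this same shape, one request builder can serve every BYOM backend.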

Cloud APIs (bring your own API key)
  • OpenAI (GPT-4, GPT-4o)
  • Anthropic (Claude 3.5)
  • Google Gemini 1.5
  • Grok (xAI)
  • DeepSeek
  • Mistral
  • Cohere
  • Together AI
  • Groq
  • OpenRouter (free tier)
Local models (no API key needed; runs on your machine)
  • Ollama (localhost:11434)
  • LM Studio (localhost:1234)
  • Jan AI (localhost:1337)
  • GPT4All (localhost:4891)
  • Open WebUI
  • KoboldCpp
  • vLLM
  • oobabooga
  • Any OpenAI-compatible endpoint
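The local runtimes above listen on fixed default ports, so endpoint resolution is a small lookup. This sketch uses the ports given in the list and assumes each runtime exposes an OpenAI-compatible `/v1` API at its default address, with a caller-supplied fallback for other endpoints.

```python
# Sketch: default base URLs for the local runtimes listed above.

LOCAL_BASE_URLS = {
    "ollama": "http://localhost:11434/v1",
    "lmstudio": "http://localhost:1234/v1",
    "jan": "http://localhost:1337/v1",
    "gpt4all": "http://localhost:4891/v1",
}

def local_base_url(provider: str, fallback: str = "") -> str:
    """Resolve a provider to its default local endpoint, or a fallback."""
    url = LOCAL_BASE_URLS.get(provider) or fallback
    if not url:
        raise ValueError(f"unknown local provider: {provider}")
    return url
```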
✓ No telemetry

No analytics, tracking, logging, or data collection. Not even anonymous usage statistics or session timing.

✓ No registration

Zero signup required. No email, no account, no login. Open the app; start immediately.

✓ Offline-capable

Desktop app (Electron) and mobile app (Capacitor) support full offline operation with local models via Ollama, LM Studio, Jan AI, or compatible endpoints.

Ready to try it?

Join the waitlist for early access. First users get lifetime premium features.

Join the Waitlist

