The DevSpark Tiered Prompt Model: Resolving Context at Scale
How DevSpark's cascading prompt hierarchy — framework defaults, project overrides, user personalization — injects the right context without repetition.
The most tedious part of working with AI coding agents isn't writing prompts — it's re-explaining context that shouldn't need explaining. Every new session, I found myself restating the same architectural boundaries, the same logging patterns, the same testing requirements. A flat .github/copilot-instructions.md file was the first answer. But a file that tries to capture everything becomes too broad to be useful for specific tasks and too specific to share across projects.
The problem isn't the file. It's the assumption that context belongs in one place.
That tension — how do you balance consistency across a team or organization with the flexibility that individual projects and developers need — is what the tiered prompt model was built to resolve. The answer that emerged after months of iteration is a cascading hierarchy: framework defaults at the base, project-specific overrides in the middle, and developer personalization at the top. Each layer narrows the context window to what actually matters for the task at hand.
Two Tiers That Actually Matter
The most useful framing I've found is to think about ownership rather than granularity. In DevSpark, context files live in exactly two places:
.devspark/ is framework-managed. These are the baseline prompts, templates, and agent configurations that come with DevSpark and get updated when you upgrade. I don't edit these directly — treating them as an upstream dependency I pull from rather than modify means I always have a clean path to receiving improvements from the evolving framework.
.documentation/ is repository-owned. This is where the project-specific context lives: the constitution, the spec artifacts, command overrides tuned to this codebase's specific patterns. When the DevSpark CLI prepares a prompt, it checks .documentation/ for a shadow file before falling back to the .devspark/ baseline. The override wins. The baseline applies only when no override exists.
This two-tier model — formalized as ADR-001 in DevSpark v2.1.0 — is the structural decision that makes everything else composable. The framework can evolve without overwriting project customizations. Projects can customize without forking the framework. And when I upgrade DevSpark in a project, I can clearly see what changed in .devspark/ (framework) versus what I own in .documentation/ (mine).
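The shadow-file check behind this two-tier model can be sketched in a few lines. This is an illustrative sketch only — the function name and directory constants are my assumptions, not DevSpark's actual internals:

```python
from pathlib import Path

# Assumed layout, mirroring the article's description:
# .devspark/ is framework-managed, .documentation/ is repository-owned.
FRAMEWORK_DIR = Path(".devspark/templates")
PROJECT_DIR = Path(".documentation")

def resolve_prompt(relative: str) -> Path:
    """Return the repository-owned shadow file if one exists;
    otherwise fall back to the framework baseline."""
    override = PROJECT_DIR / relative
    if override.exists():
        return override              # the override wins
    return FRAMEWORK_DIR / relative  # baseline applies only when no override exists
```

Because the framework tree is never edited in place, an upgrade can freely rewrite anything under `.devspark/` without touching what the lookup actually serves for customized prompts.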
Personalization as the Third Layer
On top of these two tiers sits the developer-level personalization that the v0.1.0 article covers in depth. User-scoped overrides live at .documentation/{git-username}/commands/, where they take priority over the shared canonical default without touching anyone else's workflow.
The resolution order at command invocation time:
- User-scoped override (.documentation/{username}/commands/{command}.md) — if it exists, use it
- Project override (.documentation/commands/{command}.md) — if it exists, use it
- Framework baseline (.devspark/templates/commands/{command}.md) — fallback
- Constitution — injected universally into every resolution, regardless of which tier supplied the prompt
That final step is the one that holds the system together. The constitution is never overridable by any lower tier. A user personalization can change how I approach writing a specification; it can't change what the specification must satisfy. The prompt content is customizable. The architectural standards it enforces are not.
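The full resolution order, constitution injection included, can be sketched as a single pass down the tiers. Again, the function and path names here are assumptions for illustration, not DevSpark's actual implementation:

```python
from pathlib import Path

def resolve_command(command: str, username: str) -> str:
    """Resolve a command prompt through the three tiers, then
    append the constitution, which no tier can override."""
    candidates = [
        Path(f".documentation/{username}/commands/{command}.md"),  # user-scoped
        Path(f".documentation/commands/{command}.md"),             # project override
        Path(f".devspark/templates/commands/{command}.md"),        # framework baseline
    ]
    prompt = next(p for p in candidates if p.exists()).read_text()
    # Injected into every resolution, regardless of which tier won above.
    constitution = Path(".documentation/constitution.md").read_text()
    return prompt + "\n\n" + constitution
```

The key property is that the constitution sits outside the candidate list entirely: a tier can replace the prompt, but nothing in the cascade can prevent the final injection step.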
What This Looks Like at Runtime
The /devspark.specify command is the best example of tiered resolution in practice. When I invoke it in a new project, it runs through route-aware intake — asking a handful of questions to determine whether I need a one-off fix, a quick spec from the intermediate template, or a full spec with the complete clarification loop. The intake logic lives in the framework baseline.
But on a project where my team has established norms — a specific acceptance criteria format, a particular way of documenting failure modes — I have a .documentation/commands/specify.md override that applies those norms automatically. I don't answer intake questions I've already answered at the project level. The override folds them in silently.
And if I have a personal preference for leading with failure modes before user stories, my user-scoped override adds that emphasis without changing the team's shared template. The same command, tuned at three layers, producing output that reflects both institutional standards and individual thinking style.
Context Is a Scalability Problem
The underlying insight that shaped this architecture isn't about AI — it's about knowledge management. The models are capable enough to write useful code given the right constraints. The hard problem is getting the right constraints into the context window at the right time without flooding it with irrelevant guidance.
The tiered model mirrors how human organizations actually distribute knowledge. There are standards that apply everywhere: security requirements, commit signing, minimum test coverage. There are standards that apply to specific projects or technology stacks: which logging framework, which testing patterns, which ORM. And there are preferences that belong to individual contributors: how they structure requirements, how aggressive they want adversarial review to be.
Flattening all of that into one file is where context management breaks down. The tiered model keeps each layer responsible for its own concerns — and lets the resolution engine assemble exactly what the current task needs, without the overhead of everything it doesn't.
The result is a system where adding a new project takes minutes (point to the framework, add a constitution, optionally override what's project-specific), and switching between projects costs nothing — because the context travels with the code, not with my memory.
