
The DevSpark Tiered Prompt Model: Resolving Context at Scale

April 10, 2026 · 8 min read

How DevSpark's cascading prompt hierarchy — framework defaults, project overrides, user personalization — injects the right context without repetition.

DevSpark Series — 24 articles
  1. DevSpark: Constitution-Driven AI for Software Development
  2. Getting Started with DevSpark: Requirements Quality Matters
  3. DevSpark: Constitution-Based Pull Request Reviews
  4. Why I Built DevSpark
  5. Taking DevSpark to the Next Level
  6. From Oracle CASE to Spec-Driven AI Development
  7. Fork Management: Automating Upstream Integration
  8. DevSpark: The Evolution of AI-Assisted Software Development
  9. DevSpark: Months Later, Lessons Learned
  10. DevSpark in Practice: A NuGet Package Case Study
  11. DevSpark: From Fork to Framework — What the Commits Reveal
  12. DevSpark v0.1.0: Agent-Agnostic, Multi-User, and Built for Teams
  13. DevSpark Monorepo Support: Governing Multiple Apps in One Repository
  14. The DevSpark Tiered Prompt Model: Resolving Context at Scale
  15. A Governed Contribution Model for DevSpark Prompts
  16. Prompt Metadata: Enforcing the DevSpark Constitution
  17. Bring Your Own AI: DevSpark Unlocks Multi-Agent Collaboration
  18. Workflows as First-Class Artifacts: Defining Operations for AI
  19. Observability in AI Workflows: Exposing the Black Box
  20. Autonomy Guardrails: Bounding Agent Action Safely
  21. Dogfooding DevSpark: Building the Plane While Flying It
  22. Closing the Loop: Automating Feedback with Suggest-Improvement
  23. Designing the DevSpark CLI UX: Commands vs Prompts
  24. The Alias Layer: Masking Complexity in Agent Invocations

The most tedious part of working with AI coding agents isn't writing prompts — it's re-explaining context that shouldn't need explaining. Every new session, I found myself restating the same architectural boundaries, the same logging patterns, the same testing requirements. A flat .github/copilot-instructions.md file was the first answer. But a file that tries to capture everything becomes too broad to be useful for specific tasks and too specific to share across projects.

The problem isn't the file. It's the assumption that context belongs in one place.

That tension — how do you balance consistency across a team or organization with the flexibility that individual projects and developers need — is what the tiered prompt model was built to resolve. The answer that emerged after months of iteration is a cascading hierarchy: framework defaults at the base, project-specific overrides in the middle, and developer personalization at the top. Each layer narrows the context window to what actually matters for the task at hand.

Two Tiers That Actually Matter

The most useful framing I've found is to think about ownership rather than granularity. In DevSpark, context files live in exactly two places:

.devspark/ is framework-managed. These are the baseline prompts, templates, and agent configurations that come with DevSpark and get updated when you upgrade. I don't edit these directly — treating them as an upstream dependency I pull from rather than modify means I always have a clean path to receiving improvements from the evolving framework.

.documentation/ is repository-owned. This is where the project-specific context lives: the constitution, the spec artifacts, command overrides tuned to this codebase's specific patterns. When the DevSpark CLI prepares a prompt, it checks .documentation/ for a shadow file before falling back to the .devspark/ baseline. The override wins. The baseline applies only when no override exists.

This two-tier model — formalized as ADR-001 in DevSpark v2.1.0 — is the structural decision that makes everything else composable. The framework can evolve without overwriting project customizations. Projects can customize without forking the framework. And when I upgrade DevSpark in a project, I can clearly see what changed in .devspark/ (framework) versus what I own in .documentation/ (mine).
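The shadow-file lookup described above can be sketched in a few lines. This is a minimal illustration of the override-wins rule, not DevSpark's actual implementation; the directory names come from the article, but the function name is mine.

```python
from pathlib import Path

def resolve_tier(repo: Path, relative: str) -> Path:
    """Prefer the repository-owned shadow file over the framework baseline."""
    override = repo / ".documentation" / relative  # repository-owned tier
    baseline = repo / ".devspark" / relative       # framework-managed tier
    # The override wins; the baseline applies only when no override exists.
    return override if override.exists() else baseline
```

Because the check happens at prompt-preparation time, upgrading the framework rewrites only `.devspark/` and every `.documentation/` customization survives untouched.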

Personalization as the Third Layer

On top of these two tiers sits the developer-level personalization that the v0.1.0 article covers in depth. User-scoped overrides live at .documentation/{git-username}/commands/, where they take priority over the shared canonical default without touching anyone else's workflow.

The resolution order at command invocation time:

  1. User-scoped override (.documentation/{username}/commands/{command}.md) — if it exists, use it
  2. Project override (.documentation/commands/{command}.md) — if it exists, use it
  3. Framework baseline (.devspark/templates/commands/{command}.md) — fallback
  4. Constitution — injected universally into every resolution, regardless of which tier supplied the prompt

That final step is the one that holds the system together. The constitution is never overridable by any lower tier. A user personalization can change how I approach writing a specification; it can't change what the specification must satisfy. The prompt content is customizable. The architectural standards it enforces are not.
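The full four-step order, including the unconditional constitution injection, can be sketched as follows. The paths mirror the list above; the function name and the `constitution.md` filename are assumptions of mine, not DevSpark's actual API.

```python
from pathlib import Path

def resolve_command(repo: Path, username: str, command: str) -> str:
    """Walk the tiers in priority order, then inject the constitution."""
    candidates = [
        repo / ".documentation" / username / "commands" / f"{command}.md",  # 1. user-scoped
        repo / ".documentation" / "commands" / f"{command}.md",             # 2. project
        repo / ".devspark" / "templates" / "commands" / f"{command}.md",    # 3. baseline
    ]
    prompt = next(p for p in candidates if p.exists()).read_text()
    # 4. Constitution is appended universally; no tier can override or suppress it.
    constitution = (repo / ".documentation" / "constitution.md").read_text()
    return prompt + "\n\n" + constitution
```

The key property is visible in the last two lines: whichever tier supplies the prompt body, the constitution is concatenated afterward, so personalization can change the approach but never the standards the output must satisfy.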

What This Looks Like at Runtime

The /devspark.specify command is the best example of tiered resolution in practice. When I invoke it in a new project, it runs through route-aware intake — asking a handful of questions to determine whether I need a one-off fix, a quick spec from the intermediate template, or a full spec with the complete clarification loop. The intake logic lives in the framework baseline.

But on a project where my team has established norms — a specific acceptance criteria format, a particular way of documenting failure modes — I have a .documentation/commands/specify.md override that applies those norms automatically. I don't answer intake questions I've already answered at the project level. The override folds them in silently.

And if I have a personal preference for leading with failure modes before user stories, my user-scoped override adds that emphasis without changing the team's shared template. The same command, tuned at three layers, producing output that reflects both institutional standards and individual thinking style.
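On disk, those three layers for /devspark.specify might look like this (paths follow the patterns quoted earlier; the username `alice` stands in for the `{git-username}` placeholder):

```
.devspark/templates/commands/specify.md         # framework baseline: route-aware intake
.documentation/commands/specify.md              # project override: team's spec norms
.documentation/alice/commands/specify.md        # user override: failure modes first
```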

Context Is a Scalability Problem

The underlying insight that shaped this architecture isn't about AI — it's about knowledge management. The models are capable enough to write useful code given the right constraints. The hard problem is getting the right constraints into the context window at the right time without flooding it with irrelevant guidance.

The tiered model mirrors how human organizations actually distribute knowledge. There are standards that apply everywhere: security requirements, commit signing, minimum test coverage. There are standards that apply to specific projects or technology stacks: which logging framework, which testing patterns, which ORM. And there are preferences that belong to individual contributors: how they structure requirements, how aggressive they want adversarial review to be.

Flattening all of that into one file is where context management breaks down. The tiered model keeps each layer responsible for its own concerns — and lets the resolution engine assemble exactly what the current task needs, without the overhead of everything it doesn't.

The result is a system where adding a new project takes minutes (point to the framework, add a constitution, optionally override what's project-specific), and switching between projects costs nothing — because the context travels with the code, not with my memory.