
DevSpark: From Fork to Framework — What the Commits Reveal

March 19, 2026 · 12 min read

Writing about building something and actually building it are two different activities. This article uses the DevSpark commit history as primary source material — tracking what got built, when, and why from the first fork through the many iterations that produced DevSpark v0.1.0. The result is a practitioner's record of how an idea becomes a tool through persistence, iteration, and a willingness to throw things out and start again.

DevSpark Series — 24 articles
  1. DevSpark: Constitution-Driven AI for Software Development
  2. Getting Started with DevSpark: Requirements Quality Matters
  3. DevSpark: Constitution-Based Pull Request Reviews
  4. Why I Built DevSpark
  5. Taking DevSpark to the Next Level
  6. From Oracle CASE to Spec-Driven AI Development
  7. Fork Management: Automating Upstream Integration
  8. DevSpark: The Evolution of AI-Assisted Software Development
  9. DevSpark: Months Later, Lessons Learned
  10. DevSpark in Practice: A NuGet Package Case Study
  11. DevSpark: From Fork to Framework — What the Commits Reveal
  12. DevSpark v0.1.0: Agent-Agnostic, Multi-User, and Built for Teams
  13. DevSpark Monorepo Support: Governing Multiple Apps in One Repository
  14. The DevSpark Tiered Prompt Model: Resolving Context at Scale
  15. A Governed Contribution Model for DevSpark Prompts
  16. Prompt Metadata: Enforcing the DevSpark Constitution
  17. Bring Your Own AI: DevSpark Unlocks Multi-Agent Collaboration
  18. Workflows as First-Class Artifacts: Defining Operations for AI
  19. Observability in AI Workflows: Exposing the Black Box
  20. Autonomy Guardrails: Bounding Agent Action Safely
  21. Dogfooding DevSpark: Building the Plane While Flying It
  22. Closing the Loop: Automating Feedback with Suggest-Improvement
  23. Designing the DevSpark CLI UX: Commands vs Prompts
  24. The Alias Layer: Masking Complexity in Agent Invocations

Why Track the Commits

Articles about frameworks describe what they should do. Commit histories record what they actually did.

There's a version of this series that stays entirely in the abstract — philosophy, principles, methodology, lessons learned. That version is useful. But there's a more honest version, the one that starts with git log and works backward from the evidence to understand what actually happened. That's this article.

What follows is a chronicle of the DevSpark fork from its first commit on January 25, 2026 through the iterations that produced v0.1.0. Many commits. Many small releases and revisions. Eight weeks. Not what I planned to build — what I actually built, in the order I built it, including the false starts and the fixes that followed every feature. v0.1.0 wasn't a plan. It was the point where the accumulated iterations felt stable enough to name.

The Starting Line: January 25, 2026

The fork began on January 25, 2026. DevSpark started from a well-maintained upstream — actively developed, with multiple contributors and a community of AI coding agent integrations. At the time of the initial fork, it was already a complete specification-driven development tool.

What it wasn't was a brownfield tool. It had no mechanism for reviewing existing pull requests against a constitution. No adversarial pre-mortem. No codebase auditing. Those were the gaps I wanted to fill.

The first two commits after the fork — on January 26 — were infrastructure: scripts and documentation for release management. Not the features themselves. The first thing I built was the machinery to ship things, which, in retrospect, was the right call. Everything that came after depended on being able to release reliably.

Week One: Three Commands

Within the first four days, three commands that didn't exist in upstream were added.

January 26: /devspark.pr-review — Constitution-based PR reviews. The commit message is detailed enough to tell the story:

Add pr-review command template with comprehensive review workflow. Implement get-pr-context scripts (bash and PowerShell) for GitHub CLI integration. Works for any PR in any branch (main, develop, feature, etc.). Auto-detects PR number from environment/branch. Persistent reviews in /specs/pr-review/pr-{id}.md. Tracks commit SHA and timestamps for version tracking. Severity classification (Critical/High/Medium/Low).

That's a complete feature shipped in a single commit. The PR review command is what Part 2 of this series described — making constitutions enforce standards at review time, not just at development time.
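The auto-detection behavior the commit message describes can be sketched in a few lines. This is an illustrative reconstruction, not the shipped bash/PowerShell scripts; the function name and the branch-name patterns are assumptions:

```python
import re
from typing import Optional

def detect_pr_number(env: dict, branch: str) -> Optional[int]:
    """Hypothetical sketch: prefer an explicit environment variable,
    then fall back to parsing the branch name (e.g. "pr/123" or
    "feature/123-add-login"). Returns None when nothing matches."""
    if env.get("PR_NUMBER"):
        return int(env["PR_NUMBER"])
    match = re.search(r"(?:^pr/|/)(\d+)", branch)
    return int(match.group(1)) if match else None
```

The useful property is the fallback order: an explicit override always wins, and a review can still start from nothing but the checked-out branch.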

January 27: /devspark.critic — Adversarial risk analysis. A shorter commit message: "Add /devspark.critic command for adversarial risk analysis and update changelog." But the critic became the command that Part 8 identified as the single most valuable element of the workflow. It started as a one-line commit entry. It became the thing I run on everything.

January 29: /devspark.discover-constitution — The brownfield tool. This arrived alongside the first rebranding commit: "Update documentation and references for Mark Hazleton Edition of Spec Kit." The discover-constitution command was the capability that made the fork meaningfully different from upstream: the ability to analyze an existing codebase and extract its implicit standards into a working constitution.

Three commands in four days. That pace didn't last — and it shouldn't have. What followed was slower and more important.

The Naming Decision

The January 29 commit references the "Mark Hazleton Edition of Spec Kit." By January 31, after several development iterations, the commit message used a different name: "DevSpark."

That naming shift matters more than it sounds. "Mark Hazleton Edition" is a fork. "DevSpark" is a product. The rename committed to maintaining something with an identity of its own — a version scheme, a roadmap, a design philosophy distinct from but related to upstream. It also meant owning the responsibility for that identity.

The January 31 release notes explicitly articulated what the fork was adding:

  • /devspark.pr-review — Constitution-aware PR review command
  • /devspark.site-audit — Comprehensive codebase audit command
  • /devspark.critic — Adversarial risk analysis command

Three things that didn't exist upstream. That's a fork with a reason to exist.

The Structural Pivot: .documentation/

The most consequential decision in the fork's history arrived on February 3, in a commit titled: "feat: use .documentation/ instead of .specify/ as Spark fork identifier."

The commit body explains it plainly:

Updated all templates, scripts, and documentation to use .documentation/. Modified CLI to create .documentation/ directory instead of .specify/. This distinguishes DevSpark from the upstream project. Consolidates all AI agent output in .documentation/ separate from code.

This was architectural, not cosmetic. The upstream tool uses .specify/ as its working directory. DevSpark uses .documentation/. Every project that installs DevSpark creates a .documentation/ folder. That directory name is visible in every repository that uses the tool: it's the mark of the fork in the filesystem itself.

The decision also had downstream costs. Every script, template, and reference had to be updated. Path bugs followed — a cluster of hotfix iterations in early February was almost entirely path corrections. But the structural clarity was worth it: when you look at a project directory, you know immediately whether it's using the upstream project or DevSpark.

Finding a Stable Foundation: The First Real Iteration

The structural pivot on February 3 coincided with an important shift in mindset. Before this point, the versions were upstream-adjacent development builds. After this, DevSpark was on its own iteration track — closely related to the upstream project but with a distinct identity and scope.

The divergence is intentional and communicates something real: DevSpark and the upstream project are related tools with different scopes. One is a specification pipeline. The other is a lifecycle governance framework. They share DNA but have diverged architecturally.

By this iteration, the fork contained:

Upstream Commands         DevSpark-Only Commands
/devspark.specify         /devspark.pr-review
/devspark.plan            /devspark.critic
/devspark.tasks           /devspark.site-audit
/devspark.implement       /devspark.discover-constitution
/devspark.quickfix        /devspark.quickfix (enhanced)

The quickfix command is interesting: it appears in both lists because DevSpark's implementation is substantially enhanced from upstream's. The DevSpark version validates against the constitution and generates structured quickfix records. It's the same command name serving a richer workflow.

The Governance Layer: Three More Commands

February 2 produced the most feature-dense day in the fork's history. Three commands in a single commit:

/devspark.evolve-constitution — Analyzes PR review findings and generates amendment proposals. The command that makes the constitution a living document rather than a write-once artifact. It's the mechanism behind the governance principle stated in Part 4: every PR is both a compliance check and a potential evolution trigger.

/devspark.quickfix (enhanced) — Rapid lightweight fixes that bypass the full spec-plan-task pipeline while preserving constitutional compliance. The "right-sized workflow" principle in practice: not every task deserves the same process.

/devspark.release — Archives development artifacts, distills key decisions into ADRs, generates CHANGELOG entries, and prepares for the next cycle. The command that closes the lifecycle loop and makes "adaptive documentation" operational rather than theoretical.

These three commands complete the governance layer. With them, the toolkit covers the full lifecycle: new features get the full pipeline, bug fixes get quickfix, PRs get constitutional review, the constitution evolves from those reviews, and releases archive what was learned.

Upstream Sync in Practice

Part 6 of this series described the upstream sync automation in detail. February 20 is when that automation was actually built.

The commit on February 20 added sync-upstream.ps1: a PowerShell script implementing the four-category decision matrix (auto-cherry-pick, adapt-and-merge, ignore, evaluate) that the article described. The day after, the script was used in practice — three upstream commits were evaluated, two were applied, one was deferred.

The commit message from February 21 tells the story concisely:

chore: remove incoming integration plans folder. All integration plans have been evaluated and decisions made: fc3b98e (Qoder fix): Applied. aeed11f (Extension system): Applied. 12405c0 (Template refactoring): Approved for future manual merge. 07077d0 (V-Model v0.2.0): Deferred until extension testing complete.

Four decisions, documented, with rationale. That's the sync process working exactly as designed.
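The four-category matrix lends itself to a small classifier. The category names come from the article; the commit fields and the rule ordering below are illustrative assumptions, not the actual sync-upstream.ps1 logic:

```python
from dataclasses import dataclass

# The four categories of the sync decision matrix.
AUTO_CHERRY_PICK = "auto-cherry-pick"
ADAPT_AND_MERGE = "adapt-and-merge"
IGNORE = "ignore"
EVALUATE = "evaluate"

@dataclass
class UpstreamCommit:
    sha: str
    touches_removed_files: bool  # changes files the fork deleted
    touches_fork_paths: bool     # e.g. rewrites paths DevSpark renamed
    is_mechanical: bool          # typo fixes, doc tweaks, dependency bumps

def classify(commit: UpstreamCommit) -> str:
    """Illustrative rules only; the real script also records a
    per-commit rationale alongside the decision."""
    if commit.touches_removed_files:
        return IGNORE
    if commit.touches_fork_paths:
        return ADAPT_AND_MERGE
    if commit.is_mechanical:
        return AUTO_CHERRY_PICK
    return EVALUATE
```

The point of the ordering is that anything ambiguous falls through to EVALUATE, which is exactly what the deferred V-Model commit above received.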

The same day brought the extension system — a community catalog structure borrowed and adapted from upstream, making DevSpark capable of hosting community-contributed workflows without requiring changes to the core templates.

The Upgrade Problem: An Iteration That Earned Its Keep

By early February, DevSpark was being used on real projects. That created a problem the upstream project doesn't face as acutely: what happens when users need to upgrade an existing installation?

A February 8 iteration addressed this directly: upgrade command and migration tools. The commit added 317 lines of upgrade command implementation, migration scripts for both PowerShell and Bash, and safety features including dry-run mode and automatic git state checks.

The upgrade problem is a sign of maturity, not a flaw. You only need migration tooling when people are actually using the tool in ways they want to preserve. The existence of migration tooling in these early iterations is evidence that DevSpark had real users with real installations that needed to be handled carefully.
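The dry-run and git-state safety features follow a common pattern: compute the plan first, and only execute it when explicitly asked. A minimal sketch, assuming the specs/ to .documentation/specs/ move; all function names here are hypothetical:

```python
import subprocess
from typing import List

def working_tree_clean() -> bool:
    """Safety check before migrating: refuse to run over
    uncommitted changes in the user's repository."""
    result = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and result.stdout.strip() == ""

def plan_upgrade(spec_files: List[str], dry_run: bool = True) -> List[str]:
    """Build the migration plan; in dry-run mode the plan is all that happens."""
    steps = [
        f"move specs/{name} -> .documentation/specs/{name}"
        for name in spec_files
    ]
    if not dry_run:
        # A real implementation would perform each move here,
        # skipping any destination that already exists.
        pass
    return steps
```

Separating plan from execution is what makes dry-run mode essentially free: the same code path produces the preview and the upgrade.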

Production Hardening: The Iterations Before v0.1.0

The most intense bug-fixing period in the fork's history happened in early March — the final set of iterations before declaring v0.1.0. Six commits between March 7 and March 17 addressed a cascade of path-related issues in the migration system:

  1. Migrate specs/ to .documentation/specs/ and defer migration until after template install
  2. Include migration scripts in release packages (they were being bundled in the wrong location)
  3. Fix migration to use Python instead of PowerShell/bash subprocesses (Windows encoding issues)
  4. Fix migration to never overwrite user files with blank templates
  5. Update paths to specs directory in scripts and documentation
  6. Add settings.local.json for Bash permissions configuration

This is the "felt slow, finished fast" dynamic from Part 8 in commit form. Each fix looks like overhead. Cumulatively, they represent a migration system that reliably handles the transition from older DevSpark installations to the current .documentation/ structure without data loss. These were the final iterations — the ones that transformed "mostly works" into the v0.1.0 stability target.

The Windows-specific fix is worth calling out:

fix: replace PS1/bash migration subprocess with Python implementation. Eliminates line-ending and encoding issues on Windows (PowerShell 5.1 would misparse UTF-8 scripts with LF line endings).

This is the kind of problem you only discover when someone on Windows tries to upgrade. The fix is a pure Python implementation that sidesteps the encoding divergence between platforms. It's invisible to users who never hit the bug and essential to users who did.
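A minimal sketch of what a Python migration step like this can look like, combining two properties the hardening commits describe: explicit UTF-8 plus LF handling, and never overwriting user files. The function name and structure are assumptions, not DevSpark's actual code:

```python
from pathlib import Path

def migrate_file(src: Path, dest: Path) -> bool:
    """Copy one artifact into the new layout. Returns False (and does
    nothing) when the destination already exists: user content wins."""
    if dest.exists():
        return False
    # Explicit decoding: no dependence on the platform shell's defaults,
    # which is what tripped up PowerShell 5.1 on UTF-8 scripts.
    text = src.read_text(encoding="utf-8")
    dest.parent.mkdir(parents=True, exist_ok=True)
    # newline="\n" pins LF endings regardless of platform.
    with open(dest, "w", encoding="utf-8", newline="\n") as f:
        f.write(text)
    return True
```

Because Python owns both the decoding and the newline translation, the same bytes come out on Windows and Linux, which is the whole reason for dropping the shell subprocesses.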

What v0.1.0 Actually Represents

DevSpark v0.1.0 isn't a declaration that the tool is finished. It's a declaration that the tool is stable enough to use with confidence — that the iterations have produced something I'd be comfortable recommending to another developer without a long list of caveats.

The upstream project and DevSpark are now genuinely different tools. Upstream is a specification pipeline optimized for greenfield projects. DevSpark is a lifecycle governance framework that works on real codebases — existing, messy, evolving ones. They share DNA but have diverged architecturally, and v0.1.0 is the first release that makes that divergence explicit and intentional.

What DevSpark v0.1.0 has that upstream doesn't:

  • Constitution-based PR reviews and evolution
  • Adversarial critic analysis
  • Comprehensive site auditing
  • Brownfield constitution discovery
  • Structured release archiving
  • Version tracking and upgrade automation
  • Upstream sync tooling with documented decision criteria

What upstream has that DevSpark watches and selectively absorbs:

  • New AI agent integrations (17+ supported)
  • Template improvements and terminology updates
  • Extension system enhancements
  • Community contributions from a much larger contributor base

Neither track is wrong. They're different tools for different purposes. v0.1.0 is simply where the iterations converged into something I was ready to stand behind.

Many Commits, Many Iterations, Eight Weeks

The complete record — every period an iteration that moved the tool one step closer to v0.1.0:

Period      Focus                            Key Deliverable
Jan 25-27   Core commands                    PR review, critic commands
Jan 29-31   Brownfield + branding            discover-constitution, DevSpark name, early dev build
Feb 2-3     Governance layer                 evolve-constitution, quickfix, release commands, .documentation/ pivot
Feb 8       Upgrade tooling                  Migration scripts, upgrade command
Feb 20-21   Upstream sync                    sync-upstream.ps1, extension system, 3 upstream commits absorbed
Mar 7       Version tracking                 DEVSPARK_VERSION stamp, /devspark.upgrade command
Mar 7-17    Production hardening → v0.1.0    Migration fixes, Python implementation, path corrections, v0.1.0

What the commit log doesn't show is the specification work that drove many of these features — the specs and constitutions applied to the DevSpark repository itself. That recursive quality (using the tool to govern the tool) is either charming or inevitable, depending on how you look at it. I find it both.

The fork started with a gap between upstream's greenfield focus and real-world brownfield needs. Eight weeks later, that gap is filled — not perfectly, not finally, but meaningfully. The commits are the evidence.
