DevSpark Blogging Workflow: How I Built Better Articles
Writing the Cloudflare and IIS article made me realize I needed the same kind of governed workflow for content that I expect from code. This is how I layered write-article, critique, editorial, and SEO prompts on top of DevSpark so a rough idea could become a stronger, more publishable article through deliberate iteration.
DevSpark Series — 25 articles
- DevSpark: Constitution-Driven AI for Software Development
- Getting Started with DevSpark: Requirements Quality Matters
- DevSpark: Constitution-Based Pull Request Reviews
- Why I Built DevSpark
- Taking DevSpark to the Next Level
- From Oracle CASE to Spec-Driven AI Development
- Fork Management: Automating Upstream Integration
- DevSpark: The Evolution of AI-Assisted Software Development
- DevSpark: Months Later, Lessons Learned
- DevSpark in Practice: A NuGet Package Case Study
- DevSpark: From Fork to Framework — What the Commits Reveal
- DevSpark v0.1.0: Agent-Agnostic, Multi-User, and Built for Teams
- DevSpark Monorepo Support: Governing Multiple Apps in One Repository
- The DevSpark Tiered Prompt Model: Resolving Context at Scale
- A Governed Contribution Model for DevSpark Prompts
- Prompt Metadata: Enforcing the DevSpark Constitution
- Bring Your Own AI: DevSpark Unlocks Multi-Agent Collaboration
- Workflows as First-Class Artifacts: Defining Operations for AI
- Observability in AI Workflows: Exposing the Black Box
- Autonomy Guardrails: Bounding Agent Action Safely
- Dogfooding DevSpark: Building the Plane While Flying It
- Closing the Loop: Automating Feedback with Suggest-Improvement
- Designing the DevSpark CLI UX: Commands vs Prompts
- The Alias Layer: Masking Complexity in Agent Invocations
- DevSpark Blogging Workflow: How I Built Better Articles
When Writing Started Looking More Like Engineering
I do not usually think of writing an article as the same kind of task as implementing a feature or tightening a deployment workflow. It feels softer at first. You have an idea, you get words on the page, you tighten them up, and eventually you publish.
The recent Cloudflare and IIS article made that assumption much harder to keep.
What began as a straightforward request to write about hosting .NET applications on a Windows VM behind Cloudflare turned into something much more revealing. The first pass looked finished faster than it should have. The title worked. The technical nouns were all there. The generator ran. But when I read it back, it felt wrong in a way that was harder to tolerate than a build failure. The article was competent, but it had no center of gravity. It knew about Cloudflare, IIS, Origin CA certificates, and SNI bindings, but it did not yet know why I had needed to write it.
That was the first real signal that the writing workflow itself was underdesigned. In that working session, the article did not simply get drafted once and lightly polished. It got critiqued, rewritten to become more personal, checked editorially, checked for SEO, challenged again when the terminal started showing a lot of red, and then traced back through the docs and scripts until the real PowerShell issue became visible. By the end, I was no longer just writing a post. I was refining a workflow.
That is the part I want to capture here. The interesting outcome was not only the finished Cloudflare/IIS article. It was the process that produced it: a set of writing-specific DevSpark prompts and scripts that let me draft, challenge, revise, and validate a post with far more discipline than a single chat prompt ever could.
The Cloudflare and IIS Post Became the Test Case
The article itself was grounded in a real operational problem. I had been moving multiple .NET systems onto one Windows VM, with Cloudflare in front and IIS behind it. The key turning point in that story was renaming SampleCRUD to UISampleSpark and relaunching it, then realizing that what looked like a small cleanup step actually forced me to revalidate DNS, bindings, certificates, and firewall assumptions across the whole stack.
That gave me a real anchor. But the first draft still behaved too much like a polished technical memo. It described sensible practices, yet it had not fully earned its narrative shape. The piece knew about Cloudflare Origin CA certificates, SNI bindings, and trust boundaries, but it had not yet committed to the lived moment that made those ideas matter.
That mismatch is exactly why I started thinking harder about the writing workflow itself. A single drafting prompt can produce useful content quickly. It is much worse at telling you where the draft is emotionally flat, structurally over-explanatory, or missing the one remembered incident that would make the article stick.
In other words, the Cloudflare/IIS post was not only the subject of the workflow. It became the proof that the workflow needed to exist.
Why One Prompt Was Not Enough
What I have found with AI-assisted writing is that the first answer is often good enough to be dangerous. It can sound polished long before it becomes specific. It can sound organized long before it becomes honest. And for a site like mine, that is a real problem, because the writing is not filler around the engineering work. The writing is part of the product.
That is why I did not want one large catch-all prompt that claimed to handle everything. Writing quality actually breaks in different ways, and each failure mode benefits from a different kind of pressure.
The old way was not disastrous. It was just expensive in the wrong places. A single drafting prompt could get me 80 percent of the way to something readable, but that last 20 percent was where the real risk lived: generic prose, vague structure, too much cleanup hidden in chat history, and no reliable way to prove that an article really sounded like me or met the standards I wanted the site to represent.
The draft stage needs help with structure, metadata, and getting a complete article file onto disk. Narrative review needs a sharper voice that cares about tension, scene, and payoff. Editorial review needs to hold the article against the constitution so it does not slip into lecture mode or generic filler. SEO review needs to care about frontmatter quality, heading hierarchy, canonical URLs, and internal links. Those are related problems, but they are not the same problem.
So instead of trying to force one prompt to do all of that at once, I created a sequence.
How the Cloudflare and IIS Draft Actually Moved Through the Workflow
The first useful design decision was not glamorous. I kept the editor-facing prompt names simple and stable, then pushed the actual behavior into repository-owned command files. That gave me a thin surface at the prompt layer and a more explicit, version-controlled place to define what each command was supposed to do.
The result was a content workflow built around four specialized prompts. What mattered in practice was not that I had four commands. It was that the Cloudflare/IIS article moved through them in order, and each stage exposed a different failure that the earlier stage had missed.
Draft: /devspark.write-article
This is the drafting and file-creation step. It takes a seed idea, an angle, or an existing draft and turns that into a full article file with frontmatter, body structure, internal links, and a generated articles.json update.
That mattered immediately because the first output was not just prose pasted into chat. It was a real article file that could be critiqued, regenerated, checked, and revised in place. A writing prompt that only returns a block of prose still leaves too much manual work behind: picking a slug, populating metadata, assigning the piece to the right section of the site, adding related links, and remembering to run the generator. The write-article command closes that gap.
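To make the file-creation step concrete, here is a minimal Python sketch of what it saves me from doing by hand. The frontmatter fields, file layout, and articles.json shape are my assumptions for illustration; the real behavior lives in DevSpark's repository-owned command files.

```python
import json
import re
from pathlib import Path


def slugify(title: str) -> str:
    """Lowercase the title, replace punctuation runs with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def create_article(title: str, description: str, content_dir: Path) -> Path:
    """Scaffold an article file with frontmatter and register it in articles.json."""
    slug = slugify(title)
    frontmatter = "\n".join([
        "---",
        f"title: {title}",
        f"slug: {slug}",
        f"description: {description}",
        "draft: true",
        "---",
        "",
    ])
    article_path = content_dir / f"{slug}.md"
    article_path.write_text(frontmatter, encoding="utf-8")

    # Keep the site index in sync so the generator can find the new piece.
    index_path = content_dir / "articles.json"
    index = json.loads(index_path.read_text(encoding="utf-8")) if index_path.exists() else []
    index.append({"title": title, "slug": slug})
    index_path.write_text(json.dumps(index, indent=2), encoding="utf-8")
    return article_path
```

Even this toy version shows why the command matters: the slug, the metadata, and the index update all happen together, so the draft starts life as a reviewable artifact rather than loose prose.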
Narrative Pressure: /devspark.critique
This is the part I needed once I saw how easily a technically correct article could still miss the real story. The critique prompt is not there to score compliance. It is there to ask harder questions: What is the actual tension? Is the narrator really in the piece? Does the opening promise something the ending earns?
For the Cloudflare/IIS article, that is where the most valuable guidance appeared. The critique pushed the piece away from being a generic guide and toward being a more personal account of how the architecture became real once the operational risk was shared across multiple sites. That was the moment the draft stopped feeling merely unfinished and started feeling mis-aimed. The critique was not telling me to add polish. It was telling me the story was still hiding behind the information.
Voice and Compliance: /devspark.editorial
Editorial is a different kind of pressure. It is less interested in hidden story shape and more interested in whether the article actually sounds like it belongs on this site.
That means checking the piece against the constitution: opening with tension instead of a definition, keeping the tone exploratory rather than prescriptive, maintaining good paragraph rhythm, and preserving the kind of practitioner voice that feels observed instead of performed. In the Cloudflare/IIS workflow, editorial helped catch where the article still sounded a little too much like a platform guide and where the phrasing was slipping toward polished explanation instead of lived observation.
Discoverability: /devspark.seo-check
This is the final validation layer for metadata quality and discoverability. It checks the frontmatter, social metadata, heading structure, internal links, and whether the article actually delivers on the promise its metadata is making.
That sounds mechanical, but it matters. A good article with weak metadata is still easy to miss. In this case, the SEO pass helped tighten title choices, improve image alt text, and make sure the article had a contextual internal link rather than relying only on a footer list. By the time the article cleared that pass, I had more confidence that the workflow was not only producing a stronger essay but also a cleaner publishing artifact.
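The kinds of checks that pass makes can be sketched in a few lines of Python. The field names and thresholds below are my own assumptions, not DevSpark's actual rules, but they show why this layer is worth automating rather than eyeballing.

```python
import re


def seo_check(frontmatter: dict, body: str) -> list:
    """Return a list of SEO findings for one article. An empty list means clean."""
    findings = []

    # Frontmatter completeness: these field names are illustrative.
    for field in ("title", "description", "canonical", "image_alt"):
        if not frontmatter.get(field):
            findings.append(f"missing frontmatter field: {field}")

    # Title length: roughly what survives truncation in a search snippet.
    if len(frontmatter.get("title", "")) > 60:
        findings.append("title longer than ~60 characters")

    # Heading hierarchy: the first subheading should be an H2, not deeper.
    levels = [len(m.group(1)) for m in re.finditer(r"^(#{2,4}) ", body, re.M)]
    if levels and levels[0] > 2:
        findings.append("first subheading skips the H2 level")

    # At least one contextual internal link, not just a footer list.
    if "](/" not in body:
        findings.append("no internal links found in the body")

    return findings
```

A run that returns an empty list does not prove the article is good, only that its publishing artifact is not quietly broken, which is exactly the job of this stage.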
Repair and Rerun: The Part I Had Not Planned On
The stage I had not designed explicitly at first was what happened when the workflow itself looked broken. While I was working through the article updates, the terminal started showing a lot of red text. That could have been a content problem, a generator problem, or a script problem. Instead, it turned out to be a shell problem: PowerShell treats cat as an alias for Get-Content, so a command pattern that looked harmless in Unix-style examples was producing errors in the shell I was actually using.
That moment mattered because it exposed the same design flaw the article itself had exposed. The workflow was still carrying too much implicit knowledge. Fixing the article was only part of the job. Fixing the surrounding documentation so the workflow behaved cleanly in the actual shell I was using was part of the same system improvement.
The Process Worked Because the Prompts Disagreed Usefully
What I like about this setup is that the prompts are not pretending to be one unified voice. They are allowed to care about different things.
The critique prompt wanted the Cloudflare/IIS article to stop hovering above the problem and name the scene that made it real. The editorial pass wanted the tone to stay aligned with the constitution and avoid slipping into checklist language. The SEO pass cared about title length, image metadata, and link distribution. None of those reviews were redundant. They were useful precisely because they were not asking the same question.
That changed the cadence of the work. Instead of trying to make the first draft perfect, I could let each stage expose a different weakness. The story became more personal. The phrasing became less generic. The metadata became more deliberate. By the time I reran the critique after the revisions, the conversation had shifted from "What is this article trying to be?" to "What small changes would make it more memorable?"
That is a much better place to be.
Scripts and Docs Matter as Much as Prompts
One of the more useful surprises in this process had nothing to do with prose. When the red text first filled the terminal, it would have been easy to assume the article generator was broken.
It was not. The actual problem was that some of the surrounding docs and examples were written with Unix shells in mind. In PowerShell, cat is an alias for Get-Content rather than the Unix cat binary, which changes what happens when you pipe command output the way you would on another shell. The red text was coming from that mismatch, not from the article generation step itself.
That ended up being part of the content workflow story too. If the prompts are supposed to help produce reliable writing, then the scripts and quickstarts around them cannot quietly push the wrong shell habits. So part of finishing the article process meant fixing the surrounding docs to be PowerShell-safe, not just polishing the prose inside the article.
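Since I was already thinking in governed-workflow terms, the fix suggested a small lint step: scan the docs for shell examples that assume Unix semantics. This sketch is my own illustration, not an actual DevSpark script, and the pattern list is deliberately tiny.

```python
import re

# Patterns that work in a Unix shell but behave differently in PowerShell,
# where `cat` is an alias for Get-Content rather than the Unix binary.
# This list is illustrative, not an exhaustive set of lint rules.
UNIX_ONLY = [
    (re.compile(r"\bcat\s+\S+\s*\|"),
     "pipe from `cat`: spell out Get-Content in PowerShell-facing docs"),
    (re.compile(r"\bwhich\s+\S+"),
     "`which`: use Get-Command in PowerShell-facing docs"),
]


def lint_shell_snippets(doc_text: str) -> list:
    """Flag doc lines whose shell examples assume a Unix shell."""
    findings = []
    for lineno, line in enumerate(doc_text.splitlines(), start=1):
        for pattern, advice in UNIX_ONLY:
            if pattern.search(line):
                findings.append(f"line {lineno}: {advice}")
    return findings
```

The point is not the specific patterns. It is that "PowerShell-safe docs" can be a checkable property instead of a habit I have to remember.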
That is one of the things I keep relearning with DevSpark. Prompts do not live in isolation. They sit inside a system of scripts, documentation, generators, conventions, and validation steps. If one part of that system is sloppy, the overall workflow feels less trustworthy than it should.
Built on DevSpark, Not Beside It
The reason this content workflow came together quickly is that I was not inventing it from scratch. The broader DevSpark foundation was already there.
The constitution was already doing the job of making project standards explicit. The command model was already there. The out-of-the-box workflows for things like /devspark.create-pr, /devspark.pr-review, and /devspark.address-pr-review were already part of the toolchain. I did not need a second system for writing. I needed a content-specific layer that respected the same philosophy.
That matters to me professionally for the same reason governed code matters to me professionally. If the site is part of how I demonstrate judgment, then the writing process cannot depend on vague cleanup habits and private memory any more than a codebase should depend on them.
That distinction matters. I was not trying to turn DevSpark into a blogging toy. I was extending a constitution-driven workflow system into another area where quality drifts easily when the rules stay implicit. Software changes are not the only artifacts that benefit from governed iteration. Articles do too.
That is also why Workflows as First-Class Artifacts still feels like the right framing for this direction. Once the workflow becomes explicit, version-controlled, and repeatable, it stops being a one-off trick and starts becoming something I can trust and improve.
What Changed in How I Think About Writing
Before this, it would have been easy for me to treat article writing as a mostly private act with a little cleanup at the end. Draft the idea. Read it once or twice. Fix whatever feels awkward. Publish it.
What changed is not that I now think writing should become rigid. The point is almost the opposite. A better workflow gives me more room to think clearly, because I am not trying to hold every quality bar in my head at once.
I can let the first step focus on getting a real article file created. I can let critique push on narrative honesty. I can let editorial defend the voice. I can let SEO make sure the piece is discoverable and structurally sound. The result is not formulaic. If anything, it gives the article a better chance of becoming more personal, because the workflow keeps exposing where I am still hiding behind competence.
That is what the Cloudflare/IIS article gave me in the end. Yes, it produced a finished post I like more than the first version. But more importantly, it forced me to admit that writing a strong technical article deserves the same kind of deliberate process I already expect everywhere else.
Final Thought
I started with a request to write about Cloudflare, IIS, and a Windows VM. I ended up with a much more useful result: a writing workflow that now feels native to the way I already build software.
That is the part worth carrying forward. The article was the immediate deliverable. The deeper win was realizing that the prompts, scripts, constitution, and review stages could work together as a content system instead of a loose collection of helpful tricks.
That is why the moment still matters to me. A draft that looked done and a terminal that looked broken turned out to be warnings about the same thing: when the workflow is implicit, you spend too much time guessing. Once that became explicit, writing a new article stopped feeling like starting over every time.
Explore More
- Cloudflare and IIS: Hosting My .NET Sites on One VM -- The article that became the test case for this writing workflow
- Why I Built DevSpark -- The brownfield governance problem that led me to build the broader framework
- Workflows as First-Class Artifacts: Defining Operations for AI -- Why explicit, version-controlled workflows matter beyond one-off prompts
- DevSpark: Constitution-Based Pull Request Reviews -- How the same constitution-driven model applies to PR governance
- DevSpark: Constitution-Driven AI for Software Development -- The broader series context for how DevSpark keeps evolving
