Cloudflare and IIS: Hosting My .NET Sites on One VM
Getting one .NET site online behind Cloudflare is manageable. Hosting several low-traffic demonstration sites on one Windows VM to keep cost and maintenance low forced me to think less about hosting checklists and more about blast radius, boundaries, and what I would do if one of them ever outgrew this setup.
Case Studies Series — 19 articles
- Mastering Web Project Mechanics
- From Concept to Live: Unveiling WichitaSewer.com
- Taking FastEndpoints for a Test Drive
- Fixing a Runaway Node.js Recursive Folder Issue
- Windows to Mac: Broadening My Horizons
- Using NotebookLM, Clipchamp, and ChatGPT for Podcasts
- A Full History of the EDS Super Bowl Commercials
- OpenAI Sora: First Impressions and Impact
- Riffusion AI: Revolutionizing Music Creation
- The Creation of ShareSmallBiz.com: A Platform for Small Business Success
- Kendrick Lamar's Super Bowl LIX Halftime Show
- Pedernales Cellars Winery in Texas Hill Country
- From README to Reality: Teaching an Agent to Bootstrap a UI Theme
- Building ArtSpark: Where AI Meets Art History
- Building TeachSpark: AI-Powered Educational Technology for Teachers
- Exploring Microsoft Copilot Studio
- Safely Launching a New MarkHazleton.com
- SupportSpark: A Lightweight Support Network Without the Noise
- Cloudflare and IIS: Hosting My .NET Sites on One VM
When "Getting the Site Online" Stopped Being a Small Task
The part that changed the story for me was not Cloudflare or IIS by themselves. It was realizing that several low-traffic demonstration sites were now sharing one public Windows VM, and that a mistake in one layer could widen the blast radius for all of them at once.
That choice was deliberate. These are demonstration sites, reference applications, and smaller experiments that I want online for as little cost and maintenance overhead as possible. I am not pretending this is the final architecture I would choose for a high-growth production workload. If one of these sites really takes off, I can move it to something more isolated, scalable, and dependable. But for low traffic and ease of maintenance today, one VM is a pragmatic compromise.
I did not arrive at that compromise by sitting down with a clean diagram and deciding to build something elegant. I arrived there the way a lot of infrastructure decisions actually happen: one site at a time, one hostname at a time, one failure at a time.
The moment it stopped feeling like ordinary hosting work was when I needed to rename SampleCRUD to UISampleSpark and relaunch the site. That should have been a simple cleanup step. Instead, it sent me back through Cloudflare and IIS at the same time, checking DNS and proxy behavior on one side, bindings and certificates on the other, and realizing I was not just publishing a renamed application. I was validating whether I actually understood the path a request was taking from the edge to the VM, and whether a routine change in one application could accidentally break several others.

At first it was just about getting applications online for people to see. Then it became a mix of hosting patterns that all had to feel intentional. WebSpark became a master site at webspark.markhazleton.com with subsites such as /promptspark and /asyncspark. Alongside that, artspark.markhazleton.com, dataanalysisdemo.markhazleton.com, prismspark.markhazleton.com, and uisamplespark.markhazleton.com each needed to stand on their own without feeling like experiments held together with loose DNS records and luck.
That combination changed the question for me. I was no longer asking, "How do I expose an IIS site to the internet?" I was asking, "What does a more deliberate hosting architecture look like when several .NET systems with different purposes have to coexist on one public VM without stepping on each other?"
The Architecture Only Became Real When the Risk Became Shared
One site can hide a lot of loose thinking. Multiple sites cannot.
If a binding is vague, the wrong application answers. If SSL is misconfigured, Cloudflare might look healthy while the origin remains untrustworthy. If the firewall stays too open, the clean edge story disappears because traffic can still bypass Cloudflare and hit the VM directly. With one site, those problems are easy to wave away as temporary. With a mix of standalone subdomains and path-based subsites under markhazleton.com, they start to look less like setup details and more like architectural debt.
That was the turning point for me. I stopped thinking of Cloudflare as a convenience layer and started treating the Cloudflare-to-origin boundary as the actual system that needed to be designed.
It also forced me to be honest about the single point of failure. One Windows VM is easier for me to manage than a spread of containers, app services, or orchestrated infrastructure for a set of sites that mostly exist as portfolio pieces and working demonstrations. The trade-off is obvious: lower cost and lower maintenance in exchange for a larger shared blast radius. Once I accepted that trade-off explicitly, the rest of the security work stopped feeling optional.
That matters because the alternative is not theoretical. If one of these properties starts attracting sustained traffic, needs stronger availability guarantees, or deserves its own deployment cadence, then the right answer is probably to move it. This setup makes sense because of the current workload profile, not because I think every multi-site .NET environment belongs on one VM forever.
Cloudflare Was Not Just DNS Anymore
The first practical lesson was that Cloudflare changes the meaning of "the site is online."
It is easy to look at a resolved hostname, a running IIS application pool, and a socket listening on port 443 and assume the job is basically done. In my experience, that is exactly when the confusing failures start showing up. A 521 from the edge, which means Cloudflare could not reach the origin at all. A redirect loop that makes no sense until you remember who is terminating TLS. A hostname landing on the wrong site because the certificate looked right, but the binding logic was not explicit enough.
Once I had multiple systems behind the same front door, I had a harder time tolerating fuzzy assumptions. Cloudflare was handling the public trust relationship. That meant the connection from Cloudflare to the VM could not be an afterthought.
What I found effective was moving decisively to Full (Strict) mode and using Cloudflare Origin CA certificates for the origin connection. The wildcard support mattered because I was dealing with multiple subdomains under markhazleton.com. More importantly, it clarified the model. The browser trusts Cloudflare. Cloudflare trusts the origin certificate. The VM is not pretending to be public PKI for the whole internet; it is proving its identity to the edge service sitting in front of it.
That may sound obvious written out cleanly. It felt less obvious in the middle of configuration work, especially when the application itself looked healthy and the real issue was the contract between the edge and the origin.
That contract mattered even more because I was using one origin for multiple low-traffic sites. In a higher-scale environment, I would expect isolation to come from the platform itself. Here, I had to create more of that safety manually through explicit boundaries, because I was deliberately choosing a simpler and cheaper hosting model.
IIS Stopped Feeling Like a Web Server and Started Feeling Like an Identity Boundary
The more sites I hosted on the same VM, the less I thought about IIS as "the thing that serves pages" and the more I thought about it as a routing and identity boundary.
For uisamplespark.markhazleton.com, artspark.markhazleton.com, dataanalysisdemo.markhazleton.com, and prismspark.markhazleton.com, the shared IP address was not really the hard part. The hard part was making sure each request had exactly one valid place to land while webspark.markhazleton.com also carried its own internal structure through path-based subsites. Renaming SampleCRUD to UISampleSpark made that more obvious, because a small naming change was enough to force a full review of how hostnames, certificates, and fallback behavior were actually wired together. That is where SNI, explicit hostnames, and a default catch-all site stopped being optional hygiene and started looking like architecture.
What I have learned is that wildcard certificates can create a false sense of completion. It is easy to think, "The cert covers the names, so the hosting layer is basically solved." It is not. The certificate proves a name can be served. It does not decide which application should respond. IIS still needs to be explicit.
After seeing how easily loose bindings create confusion, these are the three habits I keep coming back to whenever I am hosting multiple sites on one Windows VM:
- Every HTTPS binding uses SNI.
- Every HTTPS binding has an explicit hostname.
- There is a default site with nothing interesting in it for direct-IP or unmatched traffic.
I have a hard time treating those as optional anymore, because loose bindings are exactly how a polished setup quietly turns into the wrong app answering the right request.
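Those habits are also easy to audit mechanically. IIS stores bindings in applicationHost.config, where bindingInformation is "IP:port:hostname" and an sslFlags value with bit 1 set means the binding requires SNI. The XML below is a trimmed, hypothetical slice of that file's sites section; the audit logic is the point, not the sample data.

```python
# Audit HTTPS bindings for the habits above: explicit hostname + SNI.
# The XML is a trimmed, hypothetical slice of IIS applicationHost.config.
# In IIS, bindingInformation is "IP:port:hostname" and sslFlags & 1 means
# the binding requires SNI.
import xml.etree.ElementTree as ET

SAMPLE = """
<sites>
  <site name="UISampleSpark">
    <bindings>
      <binding protocol="https" bindingInformation="*:443:uisamplespark.markhazleton.com" sslFlags="1" />
    </bindings>
  </site>
  <site name="LegacyDemo">
    <bindings>
      <binding protocol="https" bindingInformation="*:443:" sslFlags="0" />
    </bindings>
  </site>
</sites>
"""

def audit_bindings(xml_text: str) -> list[str]:
    problems = []
    for site in ET.fromstring(xml_text).iter("site"):
        name = site.get("name")
        for b in site.iter("binding"):
            if b.get("protocol") != "https":
                continue
            ip, port, host = b.get("bindingInformation").split(":", 2)
            if not host:
                problems.append(f"{name}: HTTPS binding has no hostname")
            if not int(b.get("sslFlags", "0")) & 1:
                problems.append(f"{name}: HTTPS binding does not require SNI")
    return problems

for problem in audit_bindings(SAMPLE):
    print(problem)
```

A script like this does not fix anything, but it turns "I think the bindings are explicit" into something I can check after every rename or relaunch.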
"Behind Cloudflare" Has to Mean Something
One of the more uncomfortable realizations in this kind of setup is that you can spend time configuring Cloudflare beautifully and still leave the VM exposed in a way that undermines most of that effort.
If the server continues accepting arbitrary inbound traffic on ports 80 and 443 from anywhere, then Cloudflare is a preferred path, not an enforced path. That distinction matters more when the same VM hosts both a master site with subsites and several standalone applications. The blast radius is wider. A direct hit to the origin is no longer just a nuisance to one demo app.
The pattern that made the most sense to me was to restrict inbound web traffic to Cloudflare's published IP ranges and let the firewall posture reinforce the architectural story. If Cloudflare is the front door, the network needs to behave like it. For more sensitive workloads, Authenticated Origin Pulls push that model further by having IIS validate that the caller is actually Cloudflare, not just some client that happened to reach the origin.
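The core of that check is simple enough to sketch. The ranges below are a hard-coded subset of Cloudflare's published list, included only for illustration; in practice the firewall rules should be built from the current list Cloudflare publishes at cloudflare.com/ips.

```python
# Check whether an inbound client IP belongs to Cloudflare's published
# ranges. The CIDR blocks below are an illustrative subset; fetch the
# current list from cloudflare.com/ips before relying on this.
import ipaddress

CLOUDFLARE_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "173.245.48.0/20",
    "103.21.244.0/22",
    "104.16.0.0/13",
    "104.24.0.0/14",
    "172.64.0.0/13",
)]

def from_cloudflare(client_ip: str) -> bool:
    """True if the client address falls inside any known Cloudflare range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in CLOUDFLARE_RANGES)

print(from_cloudflare("104.16.1.1"))    # inside 104.16.0.0/13 -> True
print(from_cloudflare("203.0.113.10"))  # documentation range, not Cloudflare -> False
```

On the VM itself this logic lives in the Windows Firewall rules rather than in code, but having the same test in script form makes log review and spot checks much faster.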
I also found it worth tightening the protocol surface itself. Disabling TLS 1.0 and 1.1 on the VM is not glamorous work, but it is the sort of quiet improvement that makes the environment feel intentional. What I keep coming back to is that a professional setup usually looks less like new tooling and more like removing ambiguity.
The Performance Part Was Mostly About Restraint
I expected the interesting work to be in certificates and firewall rules. Some of it was. But performance and reliability turned out to depend just as much on not making the origin do work the edge was already better positioned to handle.
That was especially relevant because these systems are not identical. WebSpark carries multiple experiences inside one site, which means path behavior and redirect behavior matter differently there than they do for a standalone hostname like UISampleSpark. ArtSpark has a different interaction model. PrismSpark and DataAnalysisDemo represent older or narrower slices of the stack that still deserve stable hosting. Once several workloads share a VM, every unnecessary origin round trip feels a little more expensive.
What I settled into was fairly simple:
- Make sure the DNS records are actually proxied in Cloudflare.
- Leave Keep-Alive enabled in IIS so Cloudflare can reuse origin connections.
- Prefer edge-level HTTP-to-HTTPS redirects when the redirect logic is straightforward.
None of those choices are exotic. What changed for me was understanding why they matter together. They reduce friction at the exact point where several applications are sharing the same hosting environment. That is part of what a professional architecture looks like to me now: not squeezing every possible feature into the stack, but letting each layer do the work it is best suited to do.
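A quick way I sanity-check the first two items is from response headers. A proxied request carries Cloudflare response headers such as cf-ray, and HTTP/1.1 connections stay reusable unless the origin explicitly sends Connection: close. The classifier below is my own sketch over a plain header dict; only the header names are real.

```python
# Classify a response's headers: did it come through the Cloudflare proxy,
# and is the connection reusable? "cf-ray" is a real Cloudflare response
# header; the classifier itself is just an illustrative sketch.

def classify_response(headers: dict[str, str]) -> dict[str, bool]:
    h = {k.lower(): v.lower() for k, v in headers.items()}
    return {
        # Cloudflare attaches a cf-ray ID to every proxied response.
        "proxied_by_cloudflare": "cf-ray" in h,
        # HTTP/1.1 defaults to persistent connections unless the origin
        # explicitly sends "Connection: close".
        "connection_reusable": h.get("connection", "keep-alive") != "close",
    }

# Hypothetical headers from a proxied, keep-alive response:
print(classify_response({"CF-RAY": "8a1b2c3d4e5f0000-DFW", "Server": "cloudflare"}))
# An origin that closes every connection defeats Cloudflare's reuse:
print(classify_response({"Connection": "close"}))
```

If a hostname I expected to be proxied comes back with no cf-ray header, that usually means the DNS record quietly slipped to DNS Only, which is exactly the kind of drift this check catches.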
The Troubleshooting Steps That Actually Helped
The most useful operational pattern I found was learning how to separate "application problem" from "origin problem" from "edge problem" quickly.
When something looked wrong, I kept coming back to the same three checks.
- Check C:\inetpub\logs\LogFiles. If the request never shows up there and Cloudflare is returning a 521, I start by assuming the edge could not complete a TCP or TLS conversation with the origin at all. In practice, that usually sends me toward firewall rules, listener state, or certificate trust before I waste time on application code.
- Add a temporary hosts-file entry on the VM and browse locally using the actual hostname. If the right site loads there, the IIS bindings are probably doing their job. If the browser throws a certificate warning or I land on the wrong site, that is usually my sign that an SNI binding, hostname binding, or certificate assignment is not as explicit as I thought it was.
- Pause Cloudflare or temporarily switch to DNS Only for controlled testing when I need to isolate whether the issue lives in Cloudflare policy or on the server itself. If direct requests start working but proxied requests do not, the problem usually sits in the edge-to-origin contract rather than in the application.
That hosts-file test in particular earned its place. It sounds almost too simple, but it answers an important question quickly: if artspark.markhazleton.com or uisamplespark.markhazleton.com resolves locally on the VM and lands on the correct site, then I know the problem is probably not the binding. That kind of fast elimination matters when several sites share the same machine and any one symptom can send you digging in the wrong layer for too long.
The default catch-all site also became more useful once I started thinking of it as a diagnostic tool rather than just a safety net. If I see traffic in the IIS logs for an unexpected host header or from an IP range that is clearly not Cloudflare, that tells me something is bypassing the intended front door. Even when the site itself returns nothing interesting, the fact that the request fell into that catch-all bucket tells me the firewall or hostname routing still has a gap worth closing.
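That diagnostic use of the logs can be scripted. IIS writes W3C-format logs, where a "#Fields:" header names the space-delimited columns that follow. The sketch below uses a made-up log excerpt and an illustrative subset of Cloudflare's ranges (the real list lives at cloudflare.com/ips) to flag requests that reached the origin from outside Cloudflare.

```python
# Flag IIS log entries that reached the origin from outside Cloudflare.
# The log excerpt is made up but follows the W3C format IIS writes
# (a "#Fields:" header names the space-delimited columns). The Cloudflare
# ranges are an illustrative subset -- use the current published list.
import ipaddress

CLOUDFLARE_RANGES = [ipaddress.ip_network(c) for c in ("104.16.0.0/13", "172.64.0.0/13")]

LOG = """\
#Fields: date time c-ip cs-method cs-uri-stem sc-status
2024-05-01 12:00:01 104.17.2.3 GET /promptspark 200
2024-05-01 12:00:02 198.51.100.7 GET /wp-login.php 404
"""

def non_cloudflare_hits(log_text: str) -> list[str]:
    fields, hits = [], []
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]  # column names after "#Fields:"
        elif line and not line.startswith("#"):
            row = dict(zip(fields, line.split()))
            ip = ipaddress.ip_address(row["c-ip"])
            if not any(ip in net for net in CLOUDFLARE_RANGES):
                hits.append(f"{row['c-ip']} {row['cs-method']} {row['cs-uri-stem']}")
    return hits

for hit in non_cloudflare_hits(LOG):
    print(hit)  # anything printed bypassed the intended front door
```

Anything this surfaces is a firewall or routing gap, not an application bug, which is precisely the layer separation the three checks above are meant to give me.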
What I Mean by a More Deliberate Setup
I do not mean gold-plating. I do not mean pretending a single Windows VM is the same thing as a large multi-region platform. What I mean is something narrower and, honestly, more useful: a setup where the architecture is explicit enough that I can trust what happens when I add another site, rotate a certificate, or troubleshoot a failure under pressure.
For me, that means Cloudflare is the public edge rather than just a DNS provider, the origin connection stays encrypted with Full (Strict) TLS, IIS bindings are explicit enough that each hostname has one clear destination, and the VM firewall posture reinforces the story that Cloudflare is the intended path. Just as important, the troubleshooting path stays simple enough that I can locate the failing layer quickly.
What I like about this model is that it scales mentally even before it scales horizontally. It gives me a cleaner way to think about bringing new systems online under markhazleton.com, whether they live as standalone sites like ArtSpark and UISampleSpark or as subsites inside WebSpark.
That is probably the part I would underline most strongly now: this is an architecture for the current reality of low-traffic demonstration systems, not a claim that one Windows VM is the ideal destination for every serious workload. It is a low-cost, low-maintenance way to keep useful projects online while keeping the path open to split them apart later if their traffic, importance, or risk profile changes.
Final Thought
What started as an effort to get a few systems online turned into a better lesson about architecture. WebSpark, PromptSpark, AsyncSpark, ArtSpark, DataAnalysisDemo, PrismSpark, and UISampleSpark are different kinds of applications, but putting them on the same public VM forced the same question every time: is this setup explicit enough that I trust it when something goes wrong?
That is the standard I keep coming back to now. Not whether the site loads on a good day, but whether the boundaries are clear on a bad one. Renaming and relaunching UISampleSpark made the cost of ambiguity hard to ignore: more guesswork, more cross-layer debugging, and less confidence that the next change would behave the way I expected. Once I started treating Cloudflare, IIS, and the VM firewall as one connected design problem, the path to a more professional hosting setup became much easier to see.
For a small collection of low-traffic demo sites, that feels like the right level of rigor. It keeps the monthly cost and maintenance load reasonable, and it keeps me honest about the fact that success changes architecture. If one of these projects grows beyond the boundaries this VM can safely support, the next responsible step is not to defend the compromise forever. It is to move the project into an environment that matches what it has become.