
TaskListProcessor - Enterprise Async Orchestration for .NET

May 30, 2025 · 20 min read

Explore TaskListProcessor, an enterprise-grade .NET 10 library for orchestrating asynchronous operations. Learn about circuit breakers, dependency injection, interface segregation, and building fault-tolerant systems with comprehensive telemetry.

TaskListProcessor: Enterprise Async Orchestration for .NET

A production-ready .NET 10 library for orchestrating asynchronous operations with circuit breakers, dependency injection, advanced scheduling, and comprehensive telemetry.

Source Code Available: The complete source code for TaskListProcessor is available on GitHub. Clone the repository to explore the examples and follow along with the implementation.

Modern applications require sophisticated coordination of multiple async operations—API calls, database queries, file I/O, microservice interactions—while maintaining resilience, observability, and performance under varying loads. This challenge is worth examining closely, as it touches on fundamental patterns in distributed systems and concurrent programming.

What started as an exploration of concurrent processing fundamentals has evolved into an enterprise-grade library that addresses the practical challenges developers face when orchestrating multiple asynchronous operations. The TaskListProcessor library provides a battle-tested framework with fault isolation, comprehensive observability, and advanced scheduling capabilities.

Building on Concurrent Processing Fundamentals: This journey began with exploring concurrent processing fundamentals and learning how to manage multiple tasks with SemaphoreSlim. If you're new to concurrent programming concepts, consider starting with the foundational article first: Concurrent Processing Basics.

Contents

  1. Why TaskListProcessor?
  2. Enterprise Features
  3. Architecture & Design Patterns
  4. Development Challenges
  5. Getting Started
  6. Task.WhenAll vs Parallel Methods
  7. Travel Website Use Case
  8. Core Implementations
  9. Advanced Features
  10. Travel Dashboard Demo
  11. Performance & Telemetry
  12. Explore Further

Why TaskListProcessor?

The question worth asking is: what makes managing multiple asynchronous operations so challenging? In my experience building distributed systems, the complexity doesn't come from executing tasks—it comes from coordinating them reliably at scale.

Consider these common scenarios:

  • A dashboard aggregating data from multiple microservices, where some services might be slower or fail intermittently
  • Batch processing workflows with dependencies between tasks
  • API gateways coordinating calls to downstream services with varying SLAs
  • Data pipelines processing streams of events with different priorities

Traditional approaches often lead to tangled code where task management becomes brittle. You might use Task.WhenAll, but what happens when one task fails? How do you track which operations took how long? How do you prevent cascading failures when a downstream service starts timing out?
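
To make that failure mode concrete without any library code, here is a minimal illustration: when several tasks fail, awaiting Task.WhenAll rethrows only the first exception, and the remaining failures are recoverable only by inspecting each task yourself.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Illustrative only: two of three tasks fail, but awaiting Task.WhenAll
// rethrows just the FIRST exception.
var tasks = new Task[]
{
    Task.FromException(new InvalidOperationException("flights API down")),
    Task.FromException(new TimeoutException("weather API timed out")),
    Task.CompletedTask
};

string? observed = null;
try
{
    await Task.WhenAll(tasks);
}
catch (Exception ex)
{
    observed = ex.Message; // only "flights API down" is observed here
}

// Every failure is still reachable, but only by inspecting each task yourself:
var allFailures = tasks.Where(t => t.IsFaulted)
                       .Select(t => t.Exception!.InnerException!.Message)
                       .ToList();

Console.WriteLine(observed);          // flights API down
Console.WriteLine(allFailures.Count); // 2
```

This per-task bookkeeping is exactly what TaskListProcessor wraps up for you in structured task results.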

TaskListProcessor emerged from wrestling with these questions in production environments. It provides:

  • Fault Isolation: Circuit breakers and individual task failure isolation prevent one failing operation from cascading
  • Enterprise Observability: OpenTelemetry integration with rich metrics and distributed tracing
  • Advanced Scheduling: Priority-based, dependency-aware task execution for complex workflows
  • Type Safety: Strongly-typed results with comprehensive error categorization
  • Dependency Injection: Native .NET DI integration following modern architectural patterns
  • Interface Segregation: Clean, focused interfaces following SOLID principles

Enterprise Features

What distinguishes TaskListProcessor from simpler concurrent processing utilities is its focus on production-ready, enterprise scenarios. These features emerged from real-world requirements in high-scale systems.

Core Processing Capabilities

Concurrent Execution: Parallel task processing with configurable concurrency limits. The library uses efficient load balancing to distribute work across available threads without overwhelming system resources.

Circuit Breaker Pattern: Automatic failure detection prevents cascading failures. When a particular operation starts failing consistently, the circuit breaker opens to prevent further attempts, then gradually allows test requests to check if the service has recovered.

Rich Telemetry: Comprehensive timing, success rates, error tracking, and OpenTelemetry integration provide the observability needed for production systems. Every task execution is instrumented with detailed metrics.

Type Safety: Strongly-typed results with full IntelliSense support and error categorization help catch issues at compile time and provide clear error handling patterns.

Timeout & Cancellation: Built-in support for graceful shutdown and per-task timeouts ensures your application can shut down cleanly and operations don't hang indefinitely.

Task Dependencies: Dependency resolution with topological sorting enables complex workflows where certain tasks must complete before others can begin.

Architectural Features

The library follows SOLID principles and modern .NET architectural patterns:

Dependency Injection: Native .NET DI integration with a fluent configuration API makes it easy to integrate into ASP.NET Core and other DI-based applications.

Interface Segregation: Clean, focused interfaces (ITaskProcessor, ITaskBatchProcessor, ITaskStreamProcessor) let you depend only on what you need.

Decorator Pattern: Pluggable cross-cutting concerns for logging, metrics, and circuit breakers allow you to compose functionality without modifying core components.

Advanced Scheduling: Priority-based, FIFO, LIFO, and custom scheduling strategies support different execution patterns.

Thread Safety: Lock-free concurrent collections and thread-safe operations throughout ensure correctness in highly concurrent scenarios.

Memory Optimization: Object pooling and efficient memory management reduce GC pressure in high-throughput scenarios.

Architecture and Design Patterns

Understanding the architecture helps explain why TaskListProcessor behaves the way it does. The design follows a layered approach with clear separation of concerns:

┌─────────────────────────────────────────────────────────────────┐
│                    Dependency Injection Layer                   │
│        services.AddTaskListProcessor().WithAllDecorators()      │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                      Decorator Chain                            │
│  LoggingDecorator → MetricsDecorator → CircuitBreakerDecorator  │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                Interface Segregation Layer                      │
│  ITaskProcessor │ ITaskBatchProcessor │ ITaskStreamProcessor    │
│              ITaskTelemetryProvider                             │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                    Core Processing Engine                       │
│            TaskListProcessorEnhanced (Backward Compatible)      │
└─────────────────────────────────────────────────────────────────┘
                                │
              ┌─────────────────┼─────────────────┐
              │                 │                 │
    ┌─────────▼────────┐ ┌─────▼──────┐ ┌───────▼──────┐
    │ TaskDefinition   │ │TaskTelemetry│ │TaskProgress  │
    │ + Dependencies   │ │ + Metrics   │ │ + Reporting  │
    │ + Priority       │ │ + Tracing   │ │ + Streaming  │
    │ + Scheduling     │ │ + Health    │ │ + Estimates  │
    └──────────────────┘ └────────────┘ └──────────────┘

This layered architecture provides several benefits:

Dependency Injection Layer: Integrates seamlessly with ASP.NET Core and other .NET applications using Microsoft's DI container. The fluent configuration API makes setup intuitive.

Decorator Layer: Cross-cutting concerns like logging, metrics, and circuit breakers are implemented as decorators. This keeps the core processing logic clean and allows you to compose functionality as needed.

Interface Segregation: Following the Interface Segregation Principle, the library provides focused interfaces for different scenarios. Need to process a single task? Use ITaskProcessor. Processing batches? Use ITaskBatchProcessor. Want streaming results? Use ITaskStreamProcessor.

Processing Engine: The core engine handles thread-safe orchestration, dependency resolution, and scheduling. It's backward compatible with existing code while supporting new enterprise features.

Supporting Components: TaskDefinition models tasks with metadata, TaskTelemetry captures detailed metrics, and TaskProgress provides real-time feedback for long-running operations.

Development Challenges

The TaskListProcessor addresses common challenges in .NET concurrent programming, providing a structured approach to running concurrent tasks with different return types.

Issue: Diverse Return Types

A common issue with concurrent async methods in .NET is handling different return types. When you're calling multiple APIs or services, each might return a different type: weather data, user information, stock prices, etc. Traditional approaches like Task.WhenAll require homogeneous types, leading to awkward workarounds with Task<object> casts or separate collections for each type.

The TaskListProcessor uses generics and a unified result wrapper to handle heterogeneous task results elegantly. Each task can return its own type, with results wrapped in a consistent TaskResult<T> structure.

Issue: Error Propagation

Without proper structure, errors from individual tasks can propagate and cause widespread failures. A single failing API call might bring down an entire dashboard. Or worse, exceptions bubble up uncaught and crash the application.

In distributed systems, partial failures are normal. A weather service might be down while activity recommendations still work. Users expect to see available data rather than an error page when one service fails.

Solution: Fault Isolation

The TaskListProcessor isolates failures at the task level. When one task throws an exception, it's caught, logged, and recorded in the task's result—but other tasks continue executing. This fault isolation prevents cascading failures and allows partial success scenarios.

The circuit breaker pattern takes this further. If a particular operation starts failing repeatedly, the circuit breaker opens to prevent further attempts, reducing load on failing services and speeding up failure responses.

Benefit: Enhanced Performance

Beyond fault tolerance, the library enhances performance through intelligent scheduling and concurrency management. Features like WhenAllWithLoggingAsync enhance the standard Task.WhenAll with error oversight, while configurable concurrency limits prevent overwhelming system resources.

Dependency-aware scheduling ensures tasks execute in the correct order when there are dependencies, while independent tasks execute in parallel for maximum throughput.

Getting Started

There are two approaches to using TaskListProcessor, depending on your architecture preferences and requirements.

Direct Instantiation (Quick Start)

For simpler scenarios or when you want explicit control:

using TaskListProcessing.Core;
using Microsoft.Extensions.Logging;

// Set up logging (optional but recommended)
using var loggerFactory = LoggerFactory.Create(builder => builder.AddConsole());
var logger = loggerFactory.CreateLogger<Program>();

// Create the processor
using var processor = new TaskListProcessorEnhanced("My Tasks", logger);

// Define your tasks using the factory pattern
var taskFactories = new Dictionary<string, Func<CancellationToken, Task<object?>>>
{
    ["Weather Data"] = async ct => await GetWeatherAsync("London"),
    ["Stock Prices"] = async ct => await GetStockPricesAsync("MSFT"),
    ["User Data"] = async ct => await GetUserDataAsync(userId)
};

// Execute all tasks concurrently
await processor.ProcessTasksAsync(taskFactories, cancellationToken);

// Access results and telemetry
foreach (var result in processor.TaskResults)
{
    Console.WriteLine($"{result.Name}: {(result.IsSuccessful ? "✅" : "❌")}");
}

This approach is straightforward and works well for console applications, background jobs, or scenarios where you're managing dependencies manually.

Dependency Injection (Recommended)

For ASP.NET Core applications and services that use dependency injection:

using TaskListProcessing.Extensions;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Program.cs or Startup.cs
var builder = Host.CreateApplicationBuilder(args);

// Configure TaskListProcessor with decorators
builder.Services.AddTaskListProcessor(options =>
{
    options.MaxConcurrentTasks = 10;
    options.EnableDetailedTelemetry = true;
    options.CircuitBreakerOptions = new() { FailureThreshold = 3 };
})
.WithLogging()
.WithMetrics()
.WithCircuitBreaker();

var host = builder.Build();

// Usage in your services
public class MyService
{
    private readonly ITaskBatchProcessor _processor;
    
    public MyService(ITaskBatchProcessor processor)
    {
        _processor = processor;
    }
    
    public async Task ProcessDataAsync()
    {
        var tasks = new Dictionary<string, Func<CancellationToken, Task<object?>>>
        {
            ["API Call"] = async ct => await CallApiAsync(ct),
            ["DB Query"] = async ct => await QueryDatabaseAsync(ct)
        };
        
        await _processor.ProcessTasksAsync(tasks);
    }
}

The DI approach provides several advantages: automatic lifetime management, easy testing with mock implementations, and integration with ASP.NET Core's configuration and logging systems.


Task.WhenAll vs. Parallel Methods

Task.WhenAll vs Parallel.ForEach comparison. Image Credit: ChatGPT with DALL-E

The choice between Task.WhenAll and Parallel methods can be effectively illustrated through the metaphor of runners on a track and a tug-of-war contest. Understanding these differences will help you appreciate the benefits of the TaskListProcessor and select the right approach for your concurrent processing needs.

Task.WhenAll as Runners

Imagine runners, each in their own lane on a track. This represents Task.WhenAll for handling asynchronous, I/O-bound tasks. Each task runs independently without blocking others, ensuring efficiency for network requests or file I/O operations.

Parallel Methods as Tug-of-War

A tug-of-war contest represents Parallel methods for CPU-bound tasks. Teams work together with synchronized effort, similar to how parallel processing distributes computational weight across multiple threads for maximum CPU utilization.

Travel Website Use Case

Task List Processor use case for a travel website. Image Credit: ChatGPT with DALL-E

Consider a travel website displaying a dashboard of top destination cities, aggregating data like weather, attractions, events, and flights from multiple sources. This scenario illustrates why concurrent processing architecture matters.

The Challenge

Each city's dashboard requires data from multiple external services:

  • Weather API for current conditions and forecasts
  • Activities service for local attractions and events
  • Flight API for pricing and availability
  • Hotel search for accommodation options

Fetching this data sequentially would be unacceptably slow. If each service averages 500ms response time and you need four calls per city for eight cities, that's 16 seconds of sequential processing. Users won't wait that long.
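
That arithmetic can be demonstrated with a scaled-down sketch (25 ms stands in for the 500 ms calls; SimulatedCall is a stand-in, not a library API): sequential awaits cost the sum of all latencies, while concurrent execution costs roughly the slowest single call.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

// A simulated service call; 25 ms stands in for the ~500 ms real latency.
static Task SimulatedCall() => Task.Delay(25);

// Sequential: total cost is the SUM of all latencies (32 calls x ~25 ms).
var sw = Stopwatch.StartNew();
for (var i = 0; i < 32; i++) await SimulatedCall();
var sequentialMs = sw.ElapsedMilliseconds;

// Concurrent: total cost approaches the SLOWEST single call (~25 ms).
sw.Restart();
await Task.WhenAll(Enumerable.Range(0, 32).Select(_ => SimulatedCall()));
var concurrentMs = sw.ElapsedMilliseconds;

Console.WriteLine($"sequential: {sequentialMs} ms, concurrent: {concurrentMs} ms");
```

Scaled back up to 500 ms per call, the same ratio is the difference between a 16-second page and a sub-second one.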

The naive solution is to fire off all requests in parallel using Task.WhenAll. But what happens when the flights API is down? Or when the weather service in Tokyo takes 5 seconds instead of 500ms? Without proper orchestration, these failures cascade, timeouts accumulate, and the entire dashboard fails to load.

The TaskListProcessor Solution

With TaskListProcessor, you get:

Concurrent Data Retrieval: All service calls execute in parallel, reducing total load time to roughly the slowest single call rather than the sum of all calls.

Fault Isolation: When the flights API fails, weather and activities data still display. Users see partial results rather than an error page.

Circuit Breaker Protection: If the Tokyo weather service consistently times out, the circuit breaker opens after a threshold, failing fast rather than waiting for timeouts on subsequent requests.

Comprehensive Telemetry: Detailed metrics show which services are slow, which are failing, and overall dashboard performance. This data drives optimization decisions.

Graceful Degradation: Priority-based scheduling ensures critical data (like weather) loads before nice-to-have data (like event recommendations).

Business Impact

This architecture translates directly to business outcomes:

  • Faster page loads improve user engagement and conversion
  • Partial data display maintains functionality during service degradation
  • Detailed telemetry enables proactive performance optimization
  • Circuit breakers reduce load on failing services, aiding recovery

Core Implementations

Let's examine the key methods that power TaskListProcessor's functionality. Understanding these implementations helps explain how the library achieves its fault tolerance and observability goals.

WhenAllWithLoggingAsync Method

The WhenAllWithLoggingAsync method enhances the standard Task.WhenAll with robust error handling and centralized logging capabilities.

public static async Task WhenAllWithLoggingAsync(IEnumerable<Task> tasks, ILogger logger)
{
  ArgumentNullException.ThrowIfNull(logger);
  try
  {
    await Task.WhenAll(tasks);
  }
  catch (Exception ex)
  {
    logger.LogError(ex, "TLP: An error occurred while executing one or more tasks.");
  }
}

This simple wrapper provides several benefits worth considering:

Enhanced Error Handling: Instead of allowing exceptions to propagate and potentially crash the application, it catches exceptions and logs them for debugging and analysis. In production systems, this structured error handling is essential for stability.

Consolidated Logging: Centralized logging of task exceptions with consistent formatting makes integration with logging solutions like Serilog, NLog, or Application Insights straightforward. All task failures flow through a single point with consistent context.

Non-Blocking Operation: The method logs errors internally and allows program continuation. This is beneficial for non-critical tasks that shouldn't block the overall process. A failing recommendation service shouldn't prevent weather data from displaying.

Improved Maintenance: Detailed error information aids in faster debugging and simplifies maintenance in complex systems with many concurrent tasks. When something fails at 3 AM, good logs make the difference between a 5-minute fix and a 5-hour investigation.

GetTaskResultAsync Method

The GetTaskResultAsync method wraps async calls with telemetry features, measuring execution time and providing performance metrics.

public async Task GetTaskResultAsync<T>(string taskName, Task<T> task) where T : class
{
  var sw = Stopwatch.StartNew();
  var taskResult = new TaskResult<T> { Name = taskName };
  try
  {
    taskResult.Data = await task;
    sw.Stop();
    Telemetry.Add(GetTelemetry(taskName, sw.ElapsedMilliseconds));
  }
  catch (Exception ex)
  {
    sw.Stop();
    Telemetry.Add(GetTelemetry(taskName, sw.ElapsedMilliseconds, "Exception", ex.Message));
    taskResult.Data = null;
  }
  finally
  {
    TaskResults.Add(taskResult);
  }
}

This implementation demonstrates several interesting patterns:

Performance Metrics: Using Stopwatch to measure and record task execution time provides valuable performance insights. Over time, this data reveals trends—is the weather API getting slower? Are certain cities consistently slow?

Error Tracking: Catching exceptions during task execution and logging them with task names and elapsed time enables comprehensive failure analysis. You can track not just what failed, but how long it took before failing.

Execution Isolation: Each task executes in a separate logical block with its own error handling. One task's failure doesn't interfere with others. The finally block ensures results are always recorded, even on failure.

Generic Flexibility: The generic type parameter allows returning various object types from different tasks within a single list. This enables heterogeneous task processing without losing type safety.

TaskResult Class

The TaskResult class is a cornerstone of the TaskListProcessor architecture, designed to encapsulate task outcomes with a unified structure.

public class TaskResult<T> : ITaskResult
{
  public TaskResult()
  {
    Name = "UNKNOWN";
    Data = null;
  }

  public TaskResult(string name, T data)
  {
    Name = name;
    Data = data;
  }
  public T? Data { get; set; }
  public string Name { get; set; }
}

This simple class provides important capabilities:

Purpose & Standardization: It offers a standardized object representing any task outcome, regardless of the task's nature or return data type. This consistency simplifies result handling across applications.

Generic Flexibility: The generic design allows holding any result data type, making it versatile across projects and scenarios. Whether you're processing weather data, user records, or financial transactions, the same pattern applies.

Error Handling: When tasks fail, the result stores error details alongside the original task information. This makes it invaluable for error tracking and debugging processes—you know exactly which operation failed and why.

Telemetry Integration: The class can be extended to include telemetry data like execution duration, which is crucial for performance monitoring and optimization in complex systems.

Advanced Features

Beyond basic concurrent execution, TaskListProcessor provides enterprise-grade features for complex scenarios.

Task Dependencies & Scheduling

Real-world workflows often have dependencies. Database initialization must complete before running queries. Authentication must succeed before API calls. TaskListProcessor handles these scenarios with dependency resolution:

using TaskListProcessing.Models;
using TaskListProcessing.Scheduling;

// Configure with dependency resolution
var options = new TaskListProcessorOptions
{
    DependencyResolver = new TopologicalTaskDependencyResolver(),
    SchedulingStrategy = TaskSchedulingStrategy.Priority,
    MaxConcurrentTasks = Environment.ProcessorCount * 2
};

using var processor = new TaskListProcessorEnhanced("Advanced Tasks", logger, options);

// Define tasks with dependencies and priorities
var taskDefinitions = new[]
{
    new TaskDefinition
    {
        Name = "Initialize",
        Factory = async ct => await InitializeAsync(ct),
        Priority = TaskPriority.High
    },
    new TaskDefinition
    {
        Name = "Process Data",
        Factory = async ct => await ProcessDataAsync(ct),
        Dependencies = new[] { "Initialize" },
        Priority = TaskPriority.Medium
    },
    new TaskDefinition
    {
        Name = "Generate Report",
        Factory = async ct => await GenerateReportAsync(ct),
        Dependencies = new[] { "Process Data" },
        Priority = TaskPriority.Low
    }
};

await processor.ProcessTaskDefinitionsAsync(taskDefinitions);

The dependency resolver uses topological sorting to determine the correct execution order. Tasks with no dependencies execute immediately, while dependent tasks wait for their prerequisites. Priority determines execution order among tasks that are ready to run.
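To illustrate the idea (this is a sketch of Kahn's algorithm, not the library's actual resolver), a dependency map can be flattened into a valid execution order like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Kahn's algorithm: repeatedly emit tasks whose dependencies are all satisfied.
List<string> TopologicalOrder(Dictionary<string, string[]> dependsOn)
{
    // In-degree = number of unsatisfied dependencies per task.
    var inDegree = dependsOn.Keys.ToDictionary(t => t, t => dependsOn[t].Length);
    var dependents = dependsOn.Keys.ToDictionary(t => t, _ => new List<string>());
    foreach (var (task, deps) in dependsOn)
        foreach (var dep in deps)
            dependents[dep].Add(task);

    // Tasks with no dependencies are ready immediately.
    var ready = new Queue<string>(inDegree.Where(kv => kv.Value == 0).Select(kv => kv.Key));
    var order = new List<string>();
    while (ready.Count > 0)
    {
        var task = ready.Dequeue();
        order.Add(task);
        foreach (var dependent in dependents[task])
            if (--inDegree[dependent] == 0)
                ready.Enqueue(dependent);
    }

    if (order.Count != dependsOn.Count)
        throw new InvalidOperationException("Cycle detected in task dependencies.");
    return order;
}

var order = TopologicalOrder(new()
{
    ["Initialize"] = Array.Empty<string>(),
    ["Process Data"] = new[] { "Initialize" },
    ["Generate Report"] = new[] { "Process Data" }
});
Console.WriteLine(string.Join(" -> ", order));
// Initialize -> Process Data -> Generate Report
```

In the real library, tasks at the same "depth" of this ordering can still run concurrently; only genuine prerequisites serialize execution.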

Circuit Breaker Pattern

The circuit breaker pattern prevents cascading failures in distributed systems. When a service starts failing, continuing to call it wastes resources and delays failure responses. The circuit breaker pattern addresses this:

var options = new TaskListProcessorOptions
{
    CircuitBreakerOptions = new CircuitBreakerOptions
    {
        FailureThreshold = 5,
        RecoveryTimeout = TimeSpan.FromMinutes(2),
        MinimumThroughput = 10
    }
};

using var processor = new TaskListProcessorEnhanced("Resilient Tasks", logger, options);

// Tasks will automatically trigger circuit breaker on repeated failures
var taskFactories = new Dictionary<string, Func<CancellationToken, Task<object?>>>
{
    ["Resilient API"] = async ct => await CallExternalApiAsync(ct),
    ["Fallback Service"] = async ct => await CallFallbackServiceAsync(ct)
};

await processor.ProcessTasksAsync(taskFactories);

// Check circuit breaker status
var cbStats = processor.CircuitBreakerStats;
if (cbStats?.State == CircuitBreakerState.Open)
{
    Console.WriteLine($"Circuit breaker opened at {cbStats.OpenedAt}");
}

When failures exceed the threshold, the circuit opens, immediately failing subsequent requests without attempting the operation. After the recovery timeout, the circuit moves to a half-open state, attempting a test request to see if the service has recovered.
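
The transitions described above can be sketched in a few lines. This is illustrative only: the state names match the pattern, but none of the identifiers here are the library's API.

```csharp
using System;

// Breaker state, kept in plain locals for the sketch.
var failureThreshold = 3;
var recoveryTimeout = TimeSpan.FromSeconds(30);
var consecutiveFailures = 0;
DateTime? openedAt = null;

// Closed -> Open once failures hit the threshold; Open -> HalfOpen after the timeout.
string StateAt(DateTime now) =>
    openedAt is null ? "Closed"
    : now - openedAt < recoveryTimeout ? "Open"
    : "HalfOpen";

// Returns whether the call was attempted; updates breaker state from the outcome.
bool Invoke(Func<bool> call, DateTime now)
{
    if (StateAt(now) == "Open") return false;          // fail fast, no attempt
    var success = call();
    if (success) { consecutiveFailures = 0; openedAt = null; }            // close
    else if (++consecutiveFailures >= failureThreshold) openedAt = now;   // open
    return true;
}

var t0 = new DateTime(2025, 1, 1, 12, 0, 0, DateTimeKind.Utc);
for (int i = 0; i < 3; i++) Invoke(() => false, t0);      // three failures -> opens
Console.WriteLine(StateAt(t0));                           // Open
Console.WriteLine(Invoke(() => true, t0.AddSeconds(5)));  // rejected while open: False
Console.WriteLine(StateAt(t0.AddSeconds(31)));            // HalfOpen after timeout
Invoke(() => true, t0.AddSeconds(31));                    // test request succeeds
Console.WriteLine(StateAt(t0.AddSeconds(31)));            // Closed
```

Passing the clock in explicitly (rather than reading DateTime.UtcNow) is what makes the state machine easy to unit test.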

Streaming Results

For long-running batch operations, waiting for all tasks to complete before processing results isn't always desirable. Streaming results allows processing data as it becomes available:

using TaskListProcessing.Interfaces;

public class StreamingService
{
    private readonly ITaskStreamProcessor _streamProcessor;
    
    public StreamingService(ITaskStreamProcessor streamProcessor)
    {
        _streamProcessor = streamProcessor;
    }
    
    public async Task ProcessWithStreamingAsync()
    {
        var tasks = CreateLongRunningTasks();
        
        // Process results as they complete
        await foreach (var result in _streamProcessor.ProcessTasksStreamAsync(tasks))
        {
            Console.WriteLine($"Completed: {result.Name} - {result.IsSuccessful}");
            
            // Process result immediately without waiting for all tasks
            await HandleResultAsync(result);
        }
    }
}

This pattern is particularly useful for dashboards, where you can update the UI as each piece of data arrives rather than waiting for all data to load.
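
The underlying as-they-complete pattern can be sketched with plain Task.WhenAny (illustrative only; StreamAsCompleted is a hypothetical helper, not the library's ITaskStreamProcessor):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Yield each named result as soon as its task finishes, regardless of
// the order the tasks were registered in.
static async IAsyncEnumerable<string> StreamAsCompleted(Dictionary<string, Task<string>> tasks)
{
    var pending = tasks.Select(async kv => (kv.Key, await kv.Value)).ToList();
    while (pending.Count > 0)
    {
        var finished = await Task.WhenAny(pending); // first task to complete
        pending.Remove(finished);
        var (name, value) = await finished;
        yield return $"{name}: {value}";
    }
}

var completed = new List<string>();
await foreach (var line in StreamAsCompleted(new()
{
    ["slow"] = Task.Delay(120).ContinueWith(_ => "ready"),
    ["fast"] = Task.Delay(10).ContinueWith(_ => "ready")
}))
{
    completed.Add(line); // "fast" arrives first, long before "slow" finishes
}

Console.WriteLine(string.Join(" | ", completed));
```

Note the O(n) Remove per completion: fine for a sketch, but a production processor would use a channel or completion callbacks instead.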

Travel Dashboard Demo

Task List Processor dashboard showing concurrent data loading

Let's examine a practical implementation that demonstrates fetching weather forecasts and activities for multiple cities concurrently, simulating a real-world travel dashboard.

The Implementation

using var processor = new TaskListProcessorEnhanced("Travel Dashboard", logger);
using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(2));

var cities = new[] { "London", "Paris", "New York", "Tokyo", "Sydney", "Chicago", "Dallas", "Wichita" };
var taskFactories = new Dictionary<string, Func<CancellationToken, Task<object?>>>();

// Create tasks for each city
foreach (var city in cities)
{
    taskFactories[$"{city} Weather"] = ct => weatherService.GetWeatherAsync(city, ct);
    taskFactories[$"{city} Activities"] = ct => activitiesService.GetActivitiesAsync(city, ct);
}

// Execute and handle results
try 
{
    await processor.ProcessTasksAsync(taskFactories, cts.Token);
    
    // Group results by city
    var cityData = processor.TaskResults
        .GroupBy(r => r.Name.Split(' ')[0])
        .ToDictionary(g => g.Key, g => g.ToList());
    
    // Display results with rich formatting
    foreach (var (city, results) in cityData)
    {
        Console.WriteLine($"\n🌍 {city}:");
        foreach (var result in results)
        {
            var status = result.IsSuccessful ? "✅" : "❌";
            Console.WriteLine($"  {status} {result.Name.Split(' ')[1]}");
        }
    }
}
catch (OperationCanceledException)
{
    logger.LogWarning("Operation timed out after 2 minutes");
}

This example demonstrates several important concepts:

Concurrent Data Retrieval: Asynchronous programming makes non-blocking calls to multiple services for each city. Instead of sequentially fetching weather then activities for London, then Paris, etc., all calls execute in parallel. The total execution time approaches the slowest single call rather than the sum of all calls.

Timeout Management: The CancellationTokenSource with a 2-minute timeout ensures the operation doesn't hang indefinitely if services are unresponsive. This is critical for production systems where timeouts must be enforced.

Result Grouping: After execution, results are grouped by city, demonstrating how to organize heterogeneous task results for presentation. This pattern is useful for dashboards where related data should display together.

Graceful Degradation: The status indicator (✅ or ❌) shows which services succeeded and which failed. Users see available data rather than a blank page when some services fail.

Sample Output

Telemetry:
Chicago Activities: Task completed in 602 ms with ERROR Exception: Random failure occurred fetching activities data.
Paris Weather: Task completed in 723 ms with ERROR Exception: Random failure occurred fetching weather data.
Dallas Activities: Task completed in 1,009 ms
Sydney Weather: Task completed in 1,318 ms
Tokyo Activities: Task completed in 1,921 ms
London Weather: Task completed in 2,789 ms

Results:
🌍 Dallas:
  ✅ Weather
  ✅ Activities

🌍 Sydney:
  ✅ Weather
  ❌ Activities

🌍 Tokyo:
  ✅ Weather
  ✅ Activities

🌍 London:
  ✅ Weather
  ✅ Activities

Notice how failures don't prevent successful results from displaying. Chicago activities failed, but Chicago weather (if available) would still show. This partial success pattern is essential for resilient user experiences.

Enhanced Implementation

Enhanced Task List Processor dashboard showing both weather and activities

The latest enhancement demonstrates mixing different service types seamlessly:

var thingsToDoService = new CityThingsToDoService();
var weatherService = new WeatherService();
var cityDashboards = new TaskListProcessorGeneric();
var cities = new List<string> { "London", "Paris", "New York", "Tokyo", "Sydney", "Chicago", "Dallas", "Wichita" };
var tasks = new List<Task>();

foreach (var city in cities)
{
  tasks.Add(cityDashboards.GetTaskResultAsync($"{city} Weather", weatherService.GetWeather(city)));
  tasks.Add(cityDashboards.GetTaskResultAsync($"{city} Things To Do", thingsToDoService.GetThingsToDoAsync(city)));
}

await cityDashboards.WhenAllWithLoggingAsync(tasks, logger);

This enhancement shows how TaskListProcessor handles diverse data sources with uniform error handling and telemetry, reflecting real-world scenarios where dashboards aggregate data from multiple microservices.

Performance and Telemetry

TaskListProcessor provides comprehensive telemetry out of the box, giving you deep insights into how your concurrent operations perform.

Built-in Metrics

After execution, detailed telemetry is available for analysis:

// Access telemetry summary
var telemetrySummary = processor.GetTelemetrySummary();
Console.WriteLine($"📊 Success Rate: {telemetrySummary.SuccessRate:F1}%");
Console.WriteLine($"⏱️ Average Time: {telemetrySummary.AverageExecutionTime:F0}ms");
Console.WriteLine($"🚀 Throughput: {telemetrySummary.TasksPerSecond:F1} tasks/second");

// Individual task telemetry
var telemetry = processor.Telemetry;
var slowTasks = telemetry.Where(t => t.DurationMs > 1000).ToList();
foreach (var task in slowTasks)
{
    logger.LogWarning("Slow task detected: {TaskName} took {Duration}ms", 
        task.TaskName, task.DurationMs);
}

Sample Telemetry Output

=== 📊 TELEMETRY SUMMARY ===
📈 Total Tasks: 16
✅ Successful: 13 (81.2%)
❌ Failed: 3
⏱️ Average Time: 1,305ms
🏃 Fastest: 157ms | 🐌 Slowest: 2,841ms
⏰ Total Execution Time: 20,884ms

=== 📋 DETAILED TELEMETRY ===
✅ Successful Tasks (sorted by execution time):
  🚀 London Things To Do: 157ms
  🚀 Dallas Things To Do: 339ms
  ⚡ Chicago Things To Do: 557ms
  🏃 London Weather: 1,242ms
  ...

❌ Failed Tasks:
  💥 Sydney Things To Do: ArgumentException after 807ms
  💥 Tokyo Things To Do: ArgumentException after 424ms

Health Monitoring

Built-in health checks provide operational insights:

var options = new TaskListProcessorOptions
{
    HealthCheckOptions = new HealthCheckOptions
    {
        MinSuccessRate = 0.8, // 80% success rate threshold
        MaxAverageExecutionTime = TimeSpan.FromSeconds(5),
        IncludeCircuitBreakerState = true
    }
};

using var processor = new TaskListProcessorEnhanced("Health Monitored", logger, options);

// After processing tasks
var healthResult = processor.PerformHealthCheck();
if (!healthResult.IsHealthy)
{
    logger.LogWarning("Health check failed: {Message}", healthResult.Message);
    
    // Take corrective action: restart services, trigger alerts, etc.
    await NotifyOpsTeamAsync(healthResult);
}

This telemetry integration enables:

  • Real-time performance monitoring
  • Trend analysis over time
  • Proactive alerting on degradation
  • Data-driven optimization decisions
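
The headline metrics in the summary are straightforward to derive from raw per-task records. A sketch of the arithmetic, where the `TaskStat` record shape is an assumption for illustration, not the library's telemetry type:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative per-task record (an assumption, not the library's type).
public record TaskStat(string TaskName, long DurationMs, bool Succeeded);

public static class TelemetryMath
{
    // Success rate as a percentage of all recorded tasks.
    public static double SuccessRate(IReadOnlyList<TaskStat> stats) =>
        stats.Count == 0 ? 0 : 100.0 * stats.Count(s => s.Succeeded) / stats.Count;

    // Mean wall-clock duration across all tasks.
    public static double AverageMs(IReadOnlyList<TaskStat> stats) =>
        stats.Count == 0 ? 0 : stats.Average(s => s.DurationMs);

    // Throughput relative to the batch's total elapsed wall-clock time,
    // which is shorter than the sum of durations when tasks overlap.
    public static double TasksPerSecond(IReadOnlyList<TaskStat> stats, double totalElapsedMs) =>
        totalElapsedMs <= 0 ? 0 : stats.Count * 1000.0 / totalElapsedMs;
}
```

Computing throughput against elapsed time rather than summed durations is what makes the metric reflect concurrency: sixteen one-second tasks that run in two seconds of wall-clock time yield eight tasks per second, not one.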

OpenTelemetry Integration

For enterprise environments with existing observability infrastructure, TaskListProcessor integrates with OpenTelemetry:

builder.Services.AddTaskListProcessor(options =>
{
    options.EnableDetailedTelemetry = true;
})
.WithOpenTelemetry(telemetryOptions =>
{
    telemetryOptions.ServiceName = "TravelDashboard";
    telemetryOptions.ExportToJaeger("localhost", 6831);
});

This provides distributed tracing across your entire stack, showing how task execution correlates with downstream service performance.

Explore Further

The journey from basic concurrent processing to enterprise-grade async orchestration raises interesting questions about system design and resilience. TaskListProcessor represents one approach, evolved through practical use in production environments.

Try It Yourself

Clone the repository and experiment with the examples:

git clone https://github.com/markhazleton/TaskListProcessor.git
cd TaskListProcessor
dotnet run --project examples/TaskListProcessor.Console

The console demo showcases multi-city travel data aggregation with dependency resolution, showing how circuit breakers, telemetry, and fault isolation work together in a realistic scenario.

Learning Outcomes

Working with TaskListProcessor offers insights into:

Concurrent Programming Patterns: Understanding the difference between Task.WhenAll, Task.WhenAny, and the Parallel methods, and when each is appropriate.
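
A minimal illustration of the core distinction: Task.WhenAny resolves as soon as the first task finishes, while Task.WhenAll waits for every task and returns the results in input order.

```csharp
using System;
using System.Threading.Tasks;

async Task<string> LabelAfterAsync(string label, int delayMs)
{
    await Task.Delay(delayMs);
    return label;
}

var quick = LabelAfterAsync("quick", 10);
var slow  = LabelAfterAsync("slow", 300);

// WhenAny resolves with the first task to finish...
Task<string> first = await Task.WhenAny(quick, slow);
Console.WriteLine(first.Result);            // quick

// ...while WhenAll waits for every task and preserves input order.
string[] all = await Task.WhenAll(quick, slow);
Console.WriteLine(string.Join(", ", all));  // quick, slow
```

The Parallel methods (Parallel.For, Parallel.ForEachAsync) differ again: they are designed for CPU-bound partitioned work with a configurable degree of parallelism, rather than for awaiting a heterogeneous set of already-started tasks.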

Error Handling Strategies: Learning how fault isolation and circuit breakers prevent cascading failures in distributed systems.
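
The circuit-breaker idea behind this fits in a few lines: after N consecutive failures the breaker opens and short-circuits further calls until a cooldown elapses, giving the failing dependency room to recover. This is a teaching sketch, not the library's implementation:

```csharp
using System;

// Minimal consecutive-failure circuit breaker (illustrative only).
public class SimpleCircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _cooldown;
    private int _consecutiveFailures;
    private DateTime _openedAt = DateTime.MinValue;

    public SimpleCircuitBreaker(int failureThreshold, TimeSpan cooldown) =>
        (_failureThreshold, _cooldown) = (failureThreshold, cooldown);

    // Open means: too many recent failures and the cooldown has not elapsed.
    // Callers check IsOpen and skip the dependency instead of calling it.
    public bool IsOpen =>
        _consecutiveFailures >= _failureThreshold &&
        DateTime.UtcNow - _openedAt < _cooldown;

    public void RecordSuccess() => _consecutiveFailures = 0;

    public void RecordFailure()
    {
        if (++_consecutiveFailures >= _failureThreshold)
            _openedAt = DateTime.UtcNow;
    }
}
```

A production breaker adds a half-open state that lets a single probe call through after the cooldown, plus thread safety; this sketch only shows why an open breaker stops a struggling service from being hammered further.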

Performance Monitoring: Implementing comprehensive telemetry to understand and optimize async operations.

SOLID Principles in Practice: Seeing how interface segregation, dependency injection, and the decorator pattern create maintainable, testable systems.

Production Readiness: Understanding what separates a working demo from a production-ready library: timeout handling, cancellation support, health monitoring, and observability.
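
Cancellation support, one of those production concerns, usually means flowing a linked token so that either the caller's cancellation or an internal deadline stops the work, whichever fires first. A sketch using the standard BCL primitives (the helper name is illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch: combine an external caller token with an internal per-call
// timeout. The operation sees one token that trips for either reason.
static async Task<T> RunWithDeadlineAsync<T>(
    Func<CancellationToken, Task<T>> operation,
    TimeSpan timeout,
    CancellationToken callerToken)
{
    using var cts = CancellationTokenSource.CreateLinkedTokenSource(callerToken);
    cts.CancelAfter(timeout);          // internal deadline
    return await operation(cts.Token); // cancelled by either source
}
```

The key design point is that the operation itself must honor the token (for example by passing it to Task.Delay or HttpClient calls); linked tokens only deliver the signal, they cannot forcibly stop code that ignores it.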

Version Compatibility

TaskListProcessor is developed for .NET 10, offering the latest framework features and optimizations. The library follows semantic versioning, with regular updates ensuring compatibility with new .NET releases and performance enhancements.

For applications on earlier .NET versions, check the repository's release notes for compatible versions and migration guidance.

Community and Contributing

The library is open source under the MIT License, welcoming contributions from the community. Whether you're fixing bugs, adding features, or improving documentation, contributions help make the library better for everyone.

GitHub Issues and Discussions provide spaces for questions, feature requests, and technical discussions about concurrent programming patterns and async orchestration.

Further Reading

For deeper exploration of the concepts underlying TaskListProcessor, start with the foundational article on concurrent processing linked above and the repository's documentation.

The intersection of concurrent programming, fault tolerance, and observability continues to evolve. TaskListProcessor represents current thinking on these topics, but the conversation continues as systems scale and requirements change. What patterns have you found effective in your async orchestration challenges?
