Azure – Durable Functions


In this post, we will see what a Durable Function is and how we can implement one.
First of all, what is a Durable Function?
Azure Durable Functions is essentially an extension of Azure Functions that allows us to write stateful workflows in a serverless environment. Instead of manually managing state or relying on external storage, Durable Functions uses an orchestration pattern under the hood. We write a special orchestrator function in code, and Durable Functions takes care of persisting its state, replaying the workflow after restarts, and handling scale-out.
In practice, our orchestrator function uses the IDurableOrchestrationContext to call activity functions, create timers, and manage control flow.

We should consider Durable Functions when we are dealing with function chaining scenarios, where the output of one function becomes the input of another. Instead of managing this coordination manually (and dealing with all the potential failure points), Durable Functions handles the orchestration for us, as the sketch below shows.
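As a quick illustration, here is a minimal chaining sketch; the activity names (Activity_Download, Activity_Parse, Activity_Store) are hypothetical:

[FunctionName("Orchestrator_Chaining")]
public static async Task<string> RunChain(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Each await is a durable checkpoint: if the host restarts,
    // the orchestrator replays and resumes from the last completed step.
    string raw = await context.CallActivityAsync<string>("Activity_Download", context.GetInput<string>());
    string parsed = await context.CallActivityAsync<string>("Activity_Parse", raw);
    return await context.CallActivityAsync<string>("Activity_Store", parsed);
}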
The main scenarios where Durable Functions is a perfect fit are:

  • Human interaction patterns: sometimes we need to pause a workflow and wait for human input or external events. Traditional functions would time out, but Durable Functions can wait patiently for days or even weeks if needed.
  • Long-running processes: workflows that exceed the typical function timeout limits become manageable with Durable Functions, which breaks them down into smaller, manageable chunks while maintaining the overall state and progress.
  • Fan-out/fan-in scenarios: executing multiple operations in parallel and then aggregating the results is handled elegantly (see the sketch after this list).

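For the fan-out/fan-in case, a minimal sketch could look like this (Activity_GetWorkItems and Activity_ProcessItem are hypothetical; it also assumes using System.Collections.Generic and System.Linq):

[FunctionName("Orchestrator_FanOutFanIn")]
public static async Task<int> RunFanOutFanIn(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Fan out: schedule one activity per work item without awaiting each call.
    string[] workItems = await context.CallActivityAsync<string[]>("Activity_GetWorkItems", null);
    var tasks = new List<Task<int>>();
    foreach (string item in workItems)
        tasks.Add(context.CallActivityAsync<int>("Activity_ProcessItem", item));

    // Fan in: wait for all parallel activities, then aggregate their results.
    int[] results = await Task.WhenAll(tasks);
    return results.Sum();
}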

Let’s see a simple Durable Functions example: we’ll kick it off via an HTTP endpoint, then let an orchestrator “wake up” every 10 seconds to check for a blob named data.csv.
Once the file shows up, the orchestrator calls an activity to process it and, thanks to Durable Functions’ built-in state persistence and timers, we get a clean, straight-line code path without worrying about manual retries, storage bookkeeping, or function timeouts.
It’s a simple example, but it demonstrates the core power of durable orchestrations for long-running, stateful serverless workflows.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Azure.Storage.Blobs;

public static class FilePollingOrchestration
{
    // 1. Starter Function: Launches the orchestrator via HTTP
    [FunctionName("HttpStart_Polling")]
    public static async Task<IActionResult> HttpStart(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "start/{fileName}")] HttpRequest req,
        [DurableClient] IDurableOrchestrationClient client,
        string fileName,
        ILogger log)
    {
        // Note: with a string payload we must use the generic overload;
        // otherwise the string is interpreted as the instance ID, not the input.
        string instanceId = await client.StartNewAsync<string>("Orchestrator_PollForFile", fileName);
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
        return client.CreateCheckStatusResponse(req, instanceId);
    }

    // 2. Orchestrator Function: Polls until the file appears, then processes it
    [FunctionName("Orchestrator_PollForFile")]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string fileName = context.GetInput<string>();
        bool exists = false;

        while (!exists)
        {
            // Check existence
            exists = await context.CallActivityAsync<bool>("Activity_CheckFileExists", fileName);

            if (!exists)
            {
                // Wait 10 seconds before next check
                DateTime nextCheck = context.CurrentUtcDateTime.AddSeconds(10);
                await context.CreateTimer(nextCheck, CancellationToken.None);
            }
        }

        // Once the file exists, process it
        await context.CallActivityAsync("Activity_ProcessFile", fileName);
        return $"Processing of '{fileName}' completed at {context.CurrentUtcDateTime}.";
    }

    // 3. Activity: Checks if the blob exists
    [FunctionName("Activity_CheckFileExists")]
    public static bool CheckFileExists([ActivityTrigger] string fileName, ILogger log)
    {
        var containerClient = new BlobContainerClient(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"), "incoming");
        bool exists = containerClient.GetBlobClient(fileName).Exists().Value;
        log.LogInformation($"Checked existence of '{fileName}': {exists}");
        return exists;
    }

    // 4. Activity: Processes the file (dummy implementation)
    [FunctionName("Activity_ProcessFile")]
    public static void ProcessFile([ActivityTrigger] string fileName, ILogger log)
    {
        // Imagine parsing CSV, storing data, etc.
        log.LogInformation($"Processing file '{fileName}'...");
        // ... our processing logic here ...
    }
}

Below is a walkthrough of what’s happening in each part of the example:

  • HTTP Starter (HttpStart_Polling): when we hit the /api/start/{fileName} endpoint with an HTTP GET, this function spins up a new orchestration instance. Behind the scenes it uses the IDurableOrchestrationClient to call StartNewAsync, passing in the name of the orchestrator function and the file name we want to watch for. The response we get back contains a unique instance ID plus a set of handy URLs (status checks, event injection, termination, etc.) so we can monitor or control that specific workflow.
  • Orchestrator (Orchestrator_PollForFile): this is the heart of our stateful logic. It retrieves the blob name from context.GetInput<string>(), then enters a simple loop:
    – Call an activity to see if the file exists.
    – If it’s not there yet, create a 10-second timer and yield control back to the Durable runtime.
    – When the timer fires, the orchestrator “replays” from the last checkpoint with the updated current time.
    – As soon as the check returns true, it breaks out of the loop and invokes the processing activity.
  • Activity–Check (Activity_CheckFileExists): this function does one thing: connect to our blob container and ask “Does this file exist?” It returns a boolean, and it’s where we’d add any storage-connection logic or configuration. Because it’s a plain activity function, we can retry it on failure (see the retry sketch after this list), scale it independently, and keep our orchestrator clean.
  • Activity–Process (Activity_ProcessFile): once the file shows up, this final activity kicks in. In a real project we’d parse the CSV, write to a database, call other APIs, or whatever our business logic demands. Here it simply logs that it’s “processing” the blob, but we can expand it into any unit of work, knowing that the orchestrator will only call it when the preconditions are met.

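On the retry point above: Durable Functions has built-in retry support for activities. A minimal sketch, assuming we replace the plain CallActivityAsync call inside the orchestrator’s polling loop:

            // Retry Activity_CheckFileExists up to 3 times, starting 5 seconds apart.
            var retryOptions = new RetryOptions(
                firstRetryInterval: TimeSpan.FromSeconds(5),
                maxNumberOfAttempts: 3);

            exists = await context.CallActivityWithRetryAsync<bool>(
                "Activity_CheckFileExists", retryOptions, fileName);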

FUNCTION OUTPUT, ERROR HANDLING AND MANAGEMENT ENDPOINTS
When we call HttpStart_Polling, we’ll immediately get a 202 Accepted with a JSON body like this:

{
  "id": "abc123",                     // Unique orchestration instance ID
  "statusQueryGetUri": "...",         // Poll here to get status, input, output
  "sendEventPostUri": "...",          // POST here to raise an external event
  "terminatePostUri": "...",          // POST here to force-stop the orchestration
  "rewindPostUri": "...",             // POST here to reset a failed run
  "purgeHistoryDeleteUri": "..."      // DELETE here to remove all history/state
}


[statusQueryGetUri]
It shows us how our workflow is doing:

{
  "instanceId":     "abc123",
  "runtimeStatus":  "Completed",      // Finished normally
  "input":          "data.csv",
  "output":         "Processing of 'data.csv' completed at 2025-06-23T14:05:10Z.",
  "createdTime":    "2025-06-23T14:00:00Z",
  "lastUpdatedTime":"2025-06-23T14:05:10Z"
}

If the orchestrator throws an unhandled exception:

{
  "instanceId":    "abc123",
  "runtimeStatus": "Failed",         // Uncaught exception
  "output":        null,             // No output on failure
  "lastUpdatedTime":"2025-06-23T14:02:00Z"
}

A friendlier error (with a try/catch in the orchestrator, as sketched below):

{
  "instanceId":    "abc123",
  "runtimeStatus": "Completed",      // Completed because exception was caught
  "output":        "Workflow failed: Connection to blob storage timed out.",
  "lastUpdatedTime":"2025-06-23T14:02:00Z"
}
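A minimal sketch of that try/catch pattern in the orchestrator; activity failures surface there as FunctionFailedException:

[FunctionName("Orchestrator_PollForFile_Safe")]
public static async Task<string> RunSafe(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    string fileName = context.GetInput<string>();
    try
    {
        await context.CallActivityAsync("Activity_ProcessFile", fileName);
        return $"Processing of '{fileName}' completed.";
    }
    catch (FunctionFailedException ex)
    {
        // The orchestration completes normally; the failure becomes the output.
        return $"Workflow failed: {ex.Message}";
    }
}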


[sendEventPostUri]
If our orchestrator is waiting on WaitForExternalEvent("MyEvent"), we can push data into it:

https://myfuncapp.azurewebsites.net/runtime/webhooks/durabletask/instances/abc123/raiseEvent/MyEvent?code=XYZ
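For context, a minimal sketch of an orchestrator that would consume this event; the boolean approval payload is just an assumption for illustration:

[FunctionName("Orchestrator_WaitForApproval")]
public static async Task<string> RunApproval(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // The orchestrator is durably suspended here (no compute billed)
    // until someone POSTs to sendEventPostUri with a JSON body.
    bool approved = await context.WaitForExternalEvent<bool>("MyEvent");
    return approved ? "Approved" : "Rejected";
}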


[terminatePostUri]
Force-stop any running instance. After this, runtimeStatus becomes Terminated:

https://myfuncapp.azurewebsites.net/runtime/webhooks/durabletask/instances/abc123/terminate?code=XYZ


[rewindPostUri]
Reset a Failed orchestration back to its start so we can try again:

"https://myfuncapp.azurewebsites.net/runtime/webhooks/durabletask/instances/abc123/rewind?reason=FixedBlobName&code=XYZ


[purgeHistoryDeleteUri]
Once we’re done, delete all stored state and history for abc123. Subsequent status checks return 404 Not Found:

https://myfuncapp.azurewebsites.net/runtime/webhooks/durabletask/instances/abc123/purge?code=XYZ
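These webhook URLs also have programmatic counterparts on IDurableOrchestrationClient. As a minimal sketch (the function name and route are hypothetical), here is the equivalent of statusQueryGetUri exposed from our own endpoint:

[FunctionName("HttpGetStatus")]
public static async Task<IActionResult> GetStatus(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "status/{instanceId}")] HttpRequest req,
    [DurableClient] IDurableOrchestrationClient client,
    string instanceId)
{
    // GetStatusAsync returns null when the instance is unknown (or purged).
    DurableOrchestrationStatus status = await client.GetStatusAsync(instanceId);
    if (status == null)
        return new NotFoundResult();   // Same 404 we get after a purge

    return new OkObjectResult(new
    {
        status.InstanceId,
        RuntimeStatus = status.RuntimeStatus.ToString(),
        status.Output
    });
}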


Azure Durable Functions delivers a comprehensive, serverless orchestration framework that streamlines complex workflow scenarios—incorporating state persistence, built-in timers, and configurable retry policies—all within a single, maintainable codebase.
Although equivalent functionality can be assembled using discrete Azure Functions paired with Storage Queues, custom correlation logic, and manual retry handling, this approach introduces additional boilerplate, operational overhead, and potential points of failure.
By leveraging Durable Functions, we benefit from automatic checkpointing, seamless error recovery, and a unified developer experience, allowing us to focus squarely on business requirements rather than infrastructure plumbing. For any multi-step, long-running, or stateful process, Durable Functions represents the more resilient, concise, and scalable solution.



