Modern distributed tracing with dotnet

For any modern dotnet system, distributed tracing is already built into the default web client, server, and other operations.

You can see this with a basic example, by configuring your logging to display the correlation identifier. Many logging systems, such as Elasticsearch, display correlation identifiers by default. The identifiers can also be passed across messaging systems, such as Azure Service Bus.

Logging has always been a useful way to understand what is happening within a system and for troubleshooting. However, modern systems are very complex, with multiple layers and tiers, including third party systems.

Trying to understand what has happened when a back end service log reports an error, and then correlating that with the various messages and front end actions that triggered it, requires some kind of correlation identifier.

This problem has existed for as long as there have been commercial websites, over 20 years of my career, with various custom solutions along the way.

Finally, in 2020, a standard was created for the format of the correlation identifier, and for how to pass the values: W3C Trace Context. In a very short amount of time, all major technology providers have implemented solutions.
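The standard defines a traceparent header carrying a version, a trace ID, a parent span ID, and trace flags. The values below are the example values from the W3C specification itself:

```
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```

The trace ID (the second field) is what remains constant across all tiers of a single operation, and is what the log correlation in the examples below is based on.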

The examples below show how this is already implemented in modern dotnet.

Built in distributed tracing

These examples are also available on GitHub.

A basic example of the built in distributed tracing consists of communication between a front end web app and a back end service.

Create a back end Demo.Service and a front end Demo.WebApp, using the Dotnet weather forecast example.

Instead of having the web app API return the data directly, use the standard HttpClient to call the back end service, registering the built in client factory in Startup.cs to ensure the correct lifecycle is applied.

public void ConfigureServices(IServiceCollection services)
{
    // Registers IHttpClientFactory (and a default HttpClient),
    // with the correct handler lifetime management.
    services.AddHttpClient();
    services.AddControllers();
}

Modify WeatherForecastController.cs in the web app to inject the HttpClient and ILogger<T>, and then use them in the Get handler.

private readonly System.Net.Http.HttpClient _httpClient;
private readonly ILogger<WeatherForecastController> _logger;

public WeatherForecastController(ILogger<WeatherForecastController> logger,
    System.Net.Http.HttpClient httpClient)
{
    _logger = logger;
    _httpClient = httpClient;
}

public Task<string> Get(System.Threading.CancellationToken cancellationToken)
{
    _logger.LogInformation(2001, "WebApp API weather forecast request forwarded");
    return _httpClient.GetStringAsync("https://localhost:44301/WeatherForecast", cancellationToken);
}

You also need to configure the default logging to output scopes, in appsettings.Development.json.

  "Logging": {
    "Console": {
      "FormatterName": "simple",
      "FormatterOptions": {
        "IncludeScopes": true
      }
    }
  }
Make similar changes in the back end to write out a log message and include scopes in the output, and then run both applications.
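The back end change can be sketched as follows, based on the standard weather forecast template controller; the event ID 2002 and summary value are assumptions for illustration. No correlation code is needed: the server reads the incoming traceparent header and applies the trace identifier to the logging scope automatically.

```csharp
// Demo.Service WeatherForecastController.cs (sketch): log a message
// in the Get handler; the correlated trace identifier appears in the
// log scope without any extra code.
[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    _logger.LogInformation(2002, "Service API weather forecast requested");
    var random = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = random.Next(-20, 55),
        Summary = "Mild"
    });
}
```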

Distributed tracing correlation is built into the default HttpClient and web server, and is passed between the tiers to be included in log messages. All you need to do is enable the output in the logger configuration.

Example console output showing basic logs with correlated trace identifiers

Elasticsearch + Kibana example

Distributed trace correlation is also supported out of the box by many logging providers.

For example, you can run a local Elasticsearch instance that multiple services send their logs to, so the logs can be viewed together.

Using the simple example above, with a web API and a back end service, you can easily add the Elasticsearch.Extensions.Logging provider.

dotnet add Demo.WebApp package Elasticsearch.Extensions.Logging --version 1.6.0-alpha1

And then configure it in Program.cs.

using Elasticsearch.Extensions.Logging;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging((hostContext, loggingBuilder) =>
        {
            loggingBuilder.AddElasticsearch();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

You can run a local instance of Elasticsearch and Kibana, e.g. via Docker, to view the combined logs and filter on specific traces.
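A minimal docker-compose sketch for a local single-node setup follows; the image versions and port mappings are assumptions, so adjust them to your environment. Kibana will then be available at http://localhost:5601.

```yaml
# docker-compose.yml (sketch): single-node Elasticsearch plus Kibana,
# for local development only (no security configured).
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```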

Example Kibana log output

Azure message bus example

Although the Azure Service Bus documentation says that "Service Bus calls done by your service are automatically tracked and correlated", and the library does provide tracing instrumentation points, the tracing is only automatic if you are using a tracing provider, such as Application Insights or OpenTelemetry.

If you do not have a tracing provider, then traces are not directly correlated (and activities aren't even used if there is no DiagnosticListener attached).

For manual correlation, the Diagnostic-Id application property is automatically set when sending messages with the traceparent details of the source activity, so you only need to configure the receiving side.

Refer to GitHub for the full example. You need to:

  • Create a message bus and queue in Azure to use (a script is provided).
  • Configure Azure message bus in the web app project and send a demo message.
  • Create a new console Worker app and configure it with basic code to receive the demo message.

With basic message receiving code, the trace correlation identifier is sent automatically (it can be seen when debugging), but it is not handled at the receiving end (in the worker).
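The basic receiving side can be sketched as below, using the Azure.Messaging.ServiceBus client; the queue name and the connection string variable are assumptions for illustration.

```csharp
// Worker setup (sketch): create a processor for the queue and log
// each received message. At this point no correlation is handled;
// the Diagnostic-Id application property arrives but is ignored.
await using var client = new ServiceBusClient(connectionString);
await using var serviceBusProcessor = client.CreateProcessor("demo-queue");
serviceBusProcessor.ProcessMessageAsync += args =>
{
    _logger.LogInformation(2003, "Message received: {MessageBody}", args.Message.Body);
    return Task.CompletedTask;
};
serviceBusProcessor.ProcessErrorAsync += args => Task.CompletedTask;
await serviceBusProcessor.StartProcessingAsync();
```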

To handle the correlation identifier, add the following to the beginning of the message processing code, to start an Activity with the provided parent.

serviceBusProcessor.ProcessMessageAsync += args =>
{
    using var activity = new System.Diagnostics.Activity("ServiceBusProcessor.ProcessMessage");
    if (args.Message.ApplicationProperties.TryGetValue("Diagnostic-Id", out var objectId) &&
        objectId is string traceparent)
    {
        activity.SetParentId(traceparent);
    }
    activity.Start();
    _logger.LogInformation(2003, "Message received: {MessageBody}", args.Message.Body);
    return Task.CompletedTask;
};

With all three tiers logging to Elasticsearch, you can see the correlated results by Trace ID in Kibana.

Example Kibana output with correlated identifiers

Next steps

The examples above show how distributed tracing correlation is already built into modern dotnet applications, fully supported in the HttpClient and web server, and partially supported in Azure Service Bus.

Many modern applications use Azure, which has Application Insights (part of Azure Monitor), so future examples will also show how the built in distributed tracing is supported in Application Insights, and displayed in things like the Application Map.
