
In the last article we covered two advanced scenarios related to configuring Azure diagnostics. We explored how to remotely change diagnostic configuration at runtime, and how to perform an on-demand transfer. We are nearing the end of this series on Azure diagnostics, but fear not – there is one more important topic to cover!

This time we are going to explore how to include the popular .NET logging component, NLog, into a Cloud Service web/worker role and configure Azure diagnostics to save the log data to Azure blob storage. Please know that the method explained here uses NLog for example purposes only; any custom logging component can be used with the same results.

We will wrap up this series on Azure diagnostics by taking a quick look at some of the new features and changes introduced in Azure diagnostics 1.3. So far, we’ve been exploring Azure diagnostics 1.0 and there are some fundamental differences in 1.3 that you should be aware of before deciding to use the new version.

Directory Configuration

Azure diagnostics has the ability to transfer files from local storage on the Azure role instance to Azure blob storage. In this scenario, we will use this feature for logging purposes, but it can be used for any scenario that needs to write files to the configured directory.

NOTE: The procedure outlined here is relative to Azure SDK 2.4 and Azure diagnostics 1.0.

To enable this feature, you’ll need to edit the diagnostics.wadcfg file and modify (or create if not already present) the <Directories> element, as seen in the XML configuration sample below.
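A sketch of such a <Directories> configuration, based on the Azure diagnostics 1.0 wadcfg schema and using the attribute values discussed below, might look like this (treat it as an illustrative fragment rather than a complete file):

```xml
<Directories bufferQuotaInMB="0" scheduledTransferPeriod="PT1M">
  <DataSources>
    <!-- Copy monitored files to the "logs" blob container -->
    <DirectoryConfiguration container="logs" directoryQuotaInMB="4096">
      <!-- Monitor the "archive" folder within the LogStorage local resource -->
      <LocalResource name="LogStorage" relativePath="archive" />
    </DirectoryConfiguration>
  </DataSources>
</Directories>
```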

Let’s break down this XML configuration:

  • The container attribute is instructing Azure diagnostics to copy all files to the “logs” container in Azure blob storage. If this container does not already exist, Azure diagnostics will create the container.
  • The directoryQuotaInMB attribute indicates there is 4096 MB of local storage available (recall the buffering process discussed in previous articles).
  • The <LocalResource> element indicates to Azure diagnostics to use local storage as the place to look for the source data before copying to Azure blob storage.
  • The name attribute indicates the name of the local resource containing the directory to monitor.
  • The relativePath attribute indicates the path relative to the local resource to monitor. You can set this value to “.” to monitor the root of the directory if so desired.

Within <DirectoryConfiguration> there is an option to use either <Absolute> or <LocalResource>. Using the local resource approach is generally preferred, especially for logging, because the absolute path can be harder to ascertain, particularly between emulated and Azure environments. Please see the MSDN guidance for the Azure diagnostics 1.0 configuration schema for an example of using the absolute path.

An astute reader might notice that I have yet to discuss the Local Resource portion of the above configuration. Let’s get to that right now. In the properties for the role, you can add one or more Local Storage options, effectively carving out space on the local VM where your application can read, write, or delete files as necessary. You can see an example of this in Figure 1 below.

2220-Diag-4-Fig1-620.png

Figure 1 – Configuration for Local Storage

Alternatively, the ServiceDefinition.csdef file contains the same information:
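For reference, the corresponding <LocalResources> section of the role in ServiceDefinition.csdef might look like the following sketch (the names and sizes match the values discussed in this article):

```xml
<LocalResources>
  <!-- Storage the application's logging component writes to -->
  <LocalStorage name="LogStorage" cleanOnRoleRecycle="false" sizeInMB="4096" />
  <!-- Special name: overrides the default diagnostics store size -->
  <LocalStorage name="DiagnosticStore" cleanOnRoleRecycle="false" sizeInMB="8192" />
</LocalResources>
```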

Note the ‘DiagnosticStore’ local storage referenced above. This special setting alters the default diagnostic storage size of 4096 MB to be double that, 8192 MB. Why? In this case, more local space is anticipated for the log files. Be sure to update the overallQuotaInMB attribute in diagnostics.wadcfg to reflect this new size.

Let’s take a minute to recap the process so far:

  1. Set up local storage as a place on the role instance (virtual machine) where log files are written.
  2. Add a <DirectoryConfiguration> element to the diagnostics.wadcfg file to instruct Azure diagnostics to create and use the “logs” container in blob storage.
  3. Add a <LocalResource> element within <DirectoryConfiguration> to instruct Azure diagnostics to monitor the “archive” folder within the LogStorage local resource location.

Any file(s) placed in the “archive” folder within the LogStorage local resource will be copied to the “logs” container in blob storage. Azure diagnostics does not delete or remove any files – it simply copies files from the source to the destination. If you want to clean up files in blob storage (i.e. to save some pennies), then you’ll need to implement your own clean-up strategy.
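As a sketch of one such clean-up strategy, the following hypothetical routine deletes log blobs older than 30 days from the “logs” container using the Azure Storage client library (the class name and 30-day cutoff are illustrative):

```csharp
using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class LogCleanup
{
    public static void DeleteOldLogs(string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("logs");
        DateTimeOffset cutoff = DateTimeOffset.UtcNow.AddDays(-30);

        // Walk the container flat (ignoring virtual directories) and
        // remove anything last modified before the cutoff.
        foreach (CloudBlockBlob blob in container.ListBlobs(useFlatBlobListing: true)
                                                 .OfType<CloudBlockBlob>())
        {
            if (blob.Properties.LastModified.HasValue &&
                blob.Properties.LastModified.Value < cutoff)
            {
                blob.DeleteIfExists();
            }
        }
    }
}
```

A scheduled task or worker role timer could call this periodically; pick a retention window that suits your compliance and cost requirements.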

What does this setup look like? Glad you asked! In an emulated environment, you can view local storage by right-clicking on an instance and selecting Open local store…, as seen in Figure 2.

2220-Diag-4-Fig2-620.png

Figure 2 – Open the local store on the Microsoft Azure Compute Emulator

That will open Windows Explorer at the root of the local storage. From there, navigate to the “directory” folder and then to “LogStorage” (the name of the local storage configured previously). You will see something like the image in Figure 3 below:

2220-Diag-4-Fig3-620.png

Figure 3 – Local storage for the Compute Emulator as seen through Windows Explorer

For an actual Azure role instance in Azure, you’ll see something like the image in Figure 4:

2220-Diag-4-Fig4-620.png

Figure 4 – Local storage for an Azure role instance

Configure NLog

Now that the basics of Azure diagnostics are configured, the next step is to configure NLog to write to the designated local storage directory. There are a few steps necessary (remember, these steps are for NLog; your logging mechanism may differ, and that is fine as long as you end up writing to the expected location):

  1. Add NLog via NuGet
  2. Configure the NLog.config file
  3. Add logic to role startup / logging initialization to modify a few NLog configuration settings
  4. Write log statements

Add NLog Via Nuget

This one is pretty basic – install the NLog package via NuGet. From the Package Manager Console in Visual Studio, execute Install-Package NLog. This will add the necessary NLog assembly references to your project. At the time of writing, NLog version 3.1.0 was the current version.

Configure the NLog.config file

You can get a starter version of an NLog configuration file via NuGet by executing Install-Package NLog.Config. This will add a default configuration file to your project. You will next need to edit the file to include your necessary logging configuration. Below is a basic example that will write to a local file and keep a set number of archive files (rolling every hour).
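A minimal NLog.config along those lines might look like the following sketch (the target name, layout, and archive settings are illustrative; the “logs” and “archive” paths are rewritten at startup to point at local storage, as shown in the next section):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Write to "logs", roll into "archive" every hour, keep one day's worth -->
    <target xsi:type="File" name="file"
            fileName="logs/app.log"
            archiveFileName="archive/app.{#}.log"
            archiveEvery="Hour"
            archiveNumbering="Sequence"
            maxArchiveFiles="24"
            layout="${longdate}|${level:uppercase=true}|${logger}|${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="file" />
  </rules>
</nlog>
```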

There are two important things to notice about the above configuration. The fileName attribute includes the name of a directory, “logs”. In addition, the archiveFileName attribute includes the name of the archive directory, “archive”. Why is this important? Recall that earlier the Azure diagnostics directory configuration was set to monitor the “archive” directory, meaning only archived files will be persisted to blob storage. In this case, I’ve chosen to accept potentially losing up to an hour’s worth of log data, since the archiveEvery property is set to an hour.

The cleanOnRoleRecycle setting for local storage was previously set to false. If the role instance is recycled, the local files should remain, be archived, and then be persisted to blob storage. Only in the event of a hardware failure is there a risk of losing some logging data. In this case, I’ve decided that retaining historical data is most important. You may make a different choice for your situation.

It is also worth pointing out here that you should not use log files (or any other scheduled transfer mechanism via Azure diagnostics) for emergency or “real time” notifications. Instead, you will want to consider an alternative strategy (a service such as Twilio for sending text messages, notifications via your monitoring system of choice, pager notifications, a bat-signal, etc. . . . ok, maybe not the last two, but you get the idea).

Modify NLog Configuration on Startup

There are a few minor changes that need to be made before NLog is fully ready. Somewhere in the initial startup logic for your application’s logging logic (where you do the initial NLog configuration/setup), you will need to add code such as in the sample below.
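A sketch of that startup logic might look like this (it assumes a file target named “file” in NLog.config and a local resource named “LogStorage”; adjust both to your configuration):

```csharp
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;
using NLog;
using NLog.Targets;

public static class LoggingBootstrap
{
    public static void Configure()
    {
        // 1. Resolve the local resource path; this differs between the
        //    Azure Compute Emulator and the cloud.
        LocalResource storage = RoleEnvironment.GetLocalResource("LogStorage");

        // 2. Ensure the "logs" and "archive" directories exist.
        string logDir = Path.Combine(storage.RootPath, "logs");
        string archiveDir = Path.Combine(storage.RootPath, "archive");
        Directory.CreateDirectory(logDir);
        Directory.CreateDirectory(archiveDir);

        // 3. Capture the role and instance names for unique file names.
        string roleName = RoleEnvironment.CurrentRoleInstance.Role.Name;
        string instanceId = RoleEnvironment.CurrentRoleInstance.Id;

        // 4. Point the file target at local storage, embedding the role and
        //    instance names so files from multiple instances don't collide.
        var target = (FileTarget)LogManager.Configuration.FindTargetByName("file");
        target.FileName = Path.Combine(logDir, roleName + "." + instanceId + ".log");
        target.ArchiveFileName = Path.Combine(archiveDir,
            roleName + "." + instanceId + ".{#}.log");
        LogManager.ReconfigExistingLoggers();
    }
}
```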

To view a complete sample you can download the zip file from the link at the bottom of the article, or for the latest code please visit the JustAzure.Diagnostics repository on GitHub.

The above code is taking care of a few things for us:

  1. Obtaining a reference to the local resource path. This will vary based on whether the code is running in the Azure Compute Emulator or in the cloud.
  2. Ensuring the necessary ‘logs’ and ‘archive’ directories are created.
  3. Setting variables for the role and instance name.
  4. Modifying the file target used for logging to use the full path to the local storage resource, and updating the file name to include the name of the role and instance. Including the role and instance in the file name is helpful when running multiple instances, since each writes into the same container in blob storage; a unique name ensures files are not overwritten and can be appropriately identified.

Write Log Statements

That’s pretty much all there is to it. You can add log statements to the code as you see fit.
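For example, a typical class-level logger might be used like this (the class and messages are, of course, illustrative):

```csharp
using System;
using NLog;

public class OrderProcessor
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void Process(int orderId)
    {
        Logger.Info("Processing order {0}", orderId);
        try
        {
            // ... business logic ...
        }
        catch (Exception ex)
        {
            Logger.Error("Failed to process order {0}: {1}", orderId, ex.Message);
            throw;
        }
    }
}
```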

To view the logs, you can use any tool that can access Azure blob storage, such as Visual Studio or Cerebrata Azure Management Studio. First, query the WADDirectories table to get a sense of the directories Azure diagnostics is monitoring and where the related files are in blob storage. You can see an example in Figure 5 below.

2220-Diag-4-Fig5-620.png

Figure 5 – View of the WADDirectories table indicating where the log files are located.

Then, if you were to browse to the “logs” container in the storage account, you could find the files as indicated in the WADDirectories table, as seen in Figure 6 below.

2220-Diag-4-Fig4-6-620.png

Figure 6 – The application’s NLog log files saved to Azure blob storage

Overall, not too bad. The basic premise is to set up local storage and have your custom logging code write to it. Then, have Azure diagnostics monitor local storage and automatically transfer any files it finds there to Azure blob storage.

Azure Diagnostics 1.3

Azure Diagnostics 1.3 marks a fundamental shift in how Azure diagnostics are collected. Previously, diagnostics for Azure Cloud Services was handled via a plug-in module: Diagnostics. The mere presence of the Diagnostic module would establish basic diagnostic data (as discussed in previous articles in this series). One of the “problems” with this approach is that it worked only for Azure Cloud Service web and worker roles – not Azure Virtual Machines. Starting in Azure SDK 2.5 and Azure Diagnostics 1.3, diagnostics are configured via an extension model. The extension model is becoming increasingly popular in working with Azure Cloud Services and Azure Virtual Machines. Diagnostics 1.3 enables an extension that works the same with Cloud Services as it does with Virtual Machines.

Rather than provide an introductory look at Azure SDK 2.5 and Diagnostics 1.3 myself, I will encourage you to read the Azure blog post that introduces the topic, titled “Announcing Azure SDK 2.5 for .NET and Visual Studio 2015 Preview”. This post does a good job of covering the highlights of the new approach to diagnostics.

Instead, I think it is worthwhile to highlight some of the key differences between Azure diagnostics 1.0 and 1.3.

  • Azure Diagnostics 1.3, as an extension, is supported in Cloud Services (web and worker roles) and Azure Virtual Machines. Diagnostics 1.0 is supported only in Azure Cloud Services.
  • Azure Diagnostics 1.3 does not support System.Diagnostics.Trace logs. Instead, you are to use EventSource. The System.Diagnostics.Trace logging statements can still be used in the emulated environment to write trace/debug statements to the Azure Compute Emulator console.
  • Azure Diagnostics 1.3 and Azure SDK 2.5 automatically configure collection of common crash dumps.
  • The schema of the various diagnostic collection tables (WADLogsTable, WADDiagnosticInfrastructureLogsTable, etc.) has changed. Azure Diagnostics 1.3 tables use a few new fields, including PreciseTimeStamp, ActivityId, RowIndex, and TimeStamp (amongst a few others depending on the table). If you have existing code or tooling that expects specific columns to be in these tables, be aware they might have changed.
  • Azure Diagnostics 1.3 uses a new configuration file, diagnostics.wadcfgx. The configuration schema is similar, but not exactly the same. For example, you will notice there is a <PublicConfig> and a <PrivateConfig> section in the new configuration file.
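The EventSource replacement for System.Diagnostics.Trace mentioned above can be sketched as follows (the source name and event IDs are illustrative):

```csharp
using System.Diagnostics.Tracing;

// Minimal EventSource sketch; Diagnostics 1.3 collects events from
// sources like this instead of System.Diagnostics.Trace output.
[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void Starting(string roleInstance)
    {
        WriteEvent(1, roleInstance);
    }

    [Event(2, Level = EventLevel.Error)]
    public void Failure(string message)
    {
        WriteEvent(2, message);
    }
}

// Usage: AppEventSource.Log.Starting("WebRole1_IN_0");
```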

A rather big change (ok, one of them) in Azure Diagnostics 1.3 is the deployment model. Looking back at Azure Diagnostics 1.0, it is fully integrated with the deployment experience: because Diagnostics 1.0 is based on the plug-in model, and the plug-in is part of the deployment configuration, there is no extra step necessary to deploy or update the diagnostics configuration. Simply deploy the cloud package and all is well. This is not the case with Azure Diagnostics 1.3.

As it stands today, Visual Studio automates the deployment experience and the steps necessary to update the deployment using the extension model. If you are not using Visual Studio, there are two steps necessary to deploy your solution: deploy the cloud package and then deploy the diagnostic extension.

  1. Deploy the cloud solution just as you normally would: PowerShell, third-party tools or the Azure Management Portal being common choices.
  2. Deploy the diagnostic extension
    1. From the diagnostics.wadcfgx file, copy the <PublicConfig> section to a new XML file, and name it diagnostics_config.xml
    2. You can optionally obtain the configuration schema by executing the PowerShell script listed below in case you want to validate your file against the schema.

      In Visual Studio, associate the schema with the new diagnostics_config.xml file. Validating the XML file is important to ensure it is what the diagnostic extension is anticipating. To associate the schema, make sure the diagnostics_config.xml file is open as the active window. Open the Properties window (press F4 or select the window). Click the Schemas property in the Properties window, then click the ellipsis (…) in the Schemas property. Click the Add… button, navigate to the location where you saved the XSD file, select it, and click OK.


  3. For a new service, execute the following PowerShell script:

  4. To update an existing service, the command is nearly the same. The difference is that you need to specify the role(s) to which the update applies (by providing the -Role parameter, which accepts an array of role names):
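Putting those PowerShell steps together, a sketch might look like the following (these cmdlets come from the Azure Service Management PowerShell module; the variable names are illustrative):

```powershell
# Optional: export the configuration schema so the XML file can be validated.
(Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' `
    -ProviderNamespace 'Microsoft.Azure.Diagnostics').PublicConfigurationSchema |
    Out-File -Encoding utf8 WadConfig.xsd

# Apply the diagnostics extension to a newly deployed service.
$storageContext = New-AzureStorageContext -StorageAccountName $storageName `
    -StorageAccountKey $storageKey
Set-AzureServiceDiagnosticsExtension -ServiceName $serviceName `
    -StorageContext $storageContext `
    -DiagnosticsConfigurationPath 'diagnostics_config.xml'

# Update an existing service: same command, scoped to specific role(s).
Set-AzureServiceDiagnosticsExtension -ServiceName $serviceName `
    -StorageContext $storageContext `
    -DiagnosticsConfigurationPath 'diagnostics_config.xml' `
    -Role 'WebRole1'
```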

Azure Diagnostics 1.3 (included in Azure SDK 2.5) includes two important breaking changes:

  • You will no longer be able to collect diagnostic logs when using the Azure compute emulator. If you want to see how your diagnostic configuration is working with respect to performance counters, IIS logs, event logs, etc., then you will need to deploy the Cloud Service to Azure and inspect the resulting tables or blobs. No more emulator support.
  • We talked previously about how you should not configure diagnostics via code. Starting in Azure SDK 2.5 you have no choice; diagnostic configuration via code is no longer supported. All configuration will need to be done via the new extension or diagnostics.wadcfgx file in Visual Studio.

For more information, please see Azure SDK for .NET 2.5 Release Notes. If you run into issues with the new diagnostics you can also read the documentation on Enabling Diagnostics in Azure Cloud Services and Virtual Machines.

Let’s Wrap It Up Already

In this series we have reviewed several features of Azure diagnostics, including what data can be collected, how it is collected, where the data is stored, and a few different scenarios related to diagnostic configuration. In this article we explored how to configure NLog to work with Azure diagnostics and ensure the log files are safely persisted to Azure blob storage. The technique discussed in this article can be used by any solution (it doesn’t have to be logging or diagnostics related) that needs to write files to the local instance and periodically transfer them to Azure blob storage.

This article also highlighted a few important differences between Azure diagnostics 1.0 (which this series has been based upon) and the new Azure diagnostics 1.3 model available starting with Azure SDK 2.5. There are some important differences to be aware of when using the new model, so plan accordingly.

Finally, I would like to thank you for reading this series on Azure diagnostics. I hope you have enjoyed it and maybe even learned a little along the way.