It’s time to get serious: after the first two parts covering the main concepts and basic API calls for Azure Storage Queues, I’d like to focus on two topics that no one likes to implement but everybody needs: security and performance.

Security with Storage Access Keys

As you’ve seen in the samples of part 2, you can connect to your storage account by using storage access keys. These keys are generated by Azure and have to be passed in addition to the storage account name to authenticate requests.

What sometimes confuses people starting with Azure Storage is the presence of two keys: the primary and the secondary access key. Both are full-access keys to the whole storage account, with no difference between them at all. So does it matter which one you use? There are two main scenarios that explain why there are two keys.

As with every password, you should change the key regularly to improve security. That’s not an easy task if you have a production environment running, because changing the key leads to failed connections from your code until you’ve copied the new key into your application configuration. You can avoid that by using the secondary key: change your application configuration to use the secondary access key, and after that regenerate the now unused primary access key. The next time you rotate, do it the other way around. Your system is never offline and you can regenerate keys safely.

The secondary key can also be useful when you want to grant access to some other client while keeping your own production key secret. To revoke the other client’s rights you just regenerate the key it uses, which doesn’t affect your own system. Be aware that this is only an option if you completely trust the client: the secondary key is still a full-access key with no limitations.

You can learn more about how to regenerate the keys from the Azure documentation: How to: View, copy, and regenerate storage access keys.

Security with Shared Access Signatures

When talking about granting and revoking rights, another concept becomes important and could be a better approach than exposing your full-access storage account keys: Shared Access Signatures (SAS).

“A shared access signature provides delegated access to resources in your storage account. This means that you can grant a client limited permissions to your blobs, queues, or tables for a specified period of time and with a specified set of permissions…” (Shared Access Signatures, Part 1: Understanding the SAS Model)

You can still serve clients with presumably “permanent” access, but require them to fetch a new Shared Access Signature every hour or every day, for example. If the signature gets lost or you want to lock the client out, you simply don’t provide a new one upon the next request.

Depending on the scenario it might be useful to give out Shared Access Signatures that are meant to enable a one-time access: very limited permissions for a very limited time period. Just enough to put a message in the cloud.

Let’s take a look at how to create a SAS with the .NET Storage API (4.3.0). I’m extending my Ticketing as a Service (TaaS) sample application from Part 2 to hand out a write-only SAS to external clients (e.g. ticket agencies).

Creating a SAS for a queue:
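The following is a sketch of what this can look like; the account name (“taas”), the key placeholder, and the queue name (“tickets”) are assumptions based on the TaaS sample, not real values.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Queue;

// Connect via HTTPS (second constructor parameter) to prevent
// man-in-the-middle attacks.
var credentials = new StorageCredentials("taas", "<storage access key>");
var account = new CloudStorageAccount(credentials, useHttps: true);

CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tickets");

// Write-only (Add) permission, valid for one hour. The start time lies
// a few minutes in the past to compensate for clock skew.
var policy = new SharedAccessQueuePolicy
{
    Permissions = SharedAccessQueuePermissions.Add,
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
};

string sasToken = queue.GetSharedAccessSignature(policy);
```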

Please notice a few things here: whenever you create a SAS, use HTTPS (the second parameter in the CloudStorageAccount constructor) to avoid man-in-the-middle attacks. The SAS string is a secret and should remain one.

The Permissions property can be set to any of the following options, alone or in combination:

  • None (just for checking the existence of the queue and reading the name)
  • Add
  • Read (for reading messages but not deleting them)
  • ProcessMessages (for reading and deleting messages)
  • Update

Due to clock skew between different machines, it is strongly recommended not to set the SharedAccessStartTime property to the current time, but to specify a value slightly in the past. The same applies to the SharedAccessExpiryTime: clients should start renewing their Shared Access Signatures early enough.


Fig 1: Warnings for setting SharedAccessStartTime in VS 2015

With the GetSharedAccessSignature method you get a string that contains all the necessary parameters. Just concatenate your queue Uri and the Shared Access Signature string and pass the result to the client. For example, below is the result of combining the queue Uri with the output of the call above. Note that the query string (from the question mark on) comes from the GetSharedAccessSignature call.
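A sketch of what the combined address can look like and how a client uses it; the host name and all query-string values below are placeholders, since the real signature is generated per account and policy:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Queue;

// 'queueUri' stands for the queue's Uri, 'sasToken' for the string returned
// by GetSharedAccessSignature; the query values are placeholders.
Uri queueUri = new Uri("https://taas.queue.core.windows.net/tickets");
string sasToken = "?sv=2014-02-14&st=...&se=...&sp=a&sig=...";
string delegatedAddress = queueUri + sasToken;

// Client side: a CloudQueue built from that address needs no storage access key.
var clientQueue = new CloudQueue(new Uri(delegatedAddress));
clientQueue.AddMessage(new CloudQueueMessage("ticket #4711 sold"));
```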

There’s an alternative way of creating a SAS that lets you reuse permissions. This is especially useful when you want the ability to extend or revoke the rights of already generated SAS. Unlike the ad-hoc SAS string generated above, the configuration of permissions is not included in the signature string; it is stored in Azure and re-evaluated on each request, which is why these are referred to as stored SAS Policies. Note that the only real difference between generating a stored SAS Policy and an ad-hoc one is adding the SharedAccessQueuePolicy to the queue’s permissions using the SetPermissions method.

Creating a Stored SAS Policy
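A sketch of registering a stored policy and generating a SAS that references it; the policy name “ticketagency” and the connection-string placeholder are assumptions:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse("<connection string>");
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tickets");

// The permission configuration lives in the stored policy on the queue,
// not in the SAS string itself.
var storedPolicy = new SharedAccessQueuePolicy
{
    Permissions = SharedAccessQueuePermissions.Add,
    SharedAccessExpiryTime = DateTime.UtcNow.AddDays(1)
};

QueuePermissions permissions = queue.GetPermissions();
permissions.SharedAccessPolicies.Add("ticketagency", storedPolicy);
queue.SetPermissions(permissions);

// The generated SAS string now only carries the policy identifier.
string sasToken = queue.GetSharedAccessSignature(
    new SharedAccessQueuePolicy(), "ticketagency");
```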

To revoke the SAS, just retrieve the permissions for the queue, remove the policy, and then set the permissions again. You can also overwrite the policies using the same methods with a different ExpiryTime or modified permissions. Here’s an example of removing the policy completely.
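A minimal sketch of the revocation, again assuming the “ticketagency” policy name and a connection-string placeholder:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse("<connection string>");
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tickets");

// Retrieve the current permissions, remove the stored policy, write back.
// Any SAS referencing the policy stops working on the next request.
QueuePermissions permissions = queue.GetPermissions();
permissions.SharedAccessPolicies.Remove("ticketagency");
queue.SetPermissions(permissions);
```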

Note that in the code above we call GetPermissions and then modify the result. If you don’t get the current permissions for a queue and instead send in a new QueuePermissions object, as in the first example, you will overwrite all existing permissions. To make changes to the permission set, always retrieve what is there first, unless you actually want to replace all of the permissions.

What’s the advantage of giving direct access to the Queue?

Maybe there’s one question on your mind: why would we want to give our clients this direct access at all? As with databases, we could provide a service that handles authentication, performs some business logic and writes to the database afterwards. In most scenarios we provide APIs, not connection strings, don’t we?

The reason is scalability. If you have large amounts of data or many requests, a service in the middle can easily become a bottleneck. Instead, the service only handles the creation of Shared Access Signatures, while the heavy load goes directly between the client and the queue without any indirection.

What if you don’t need that scalability? Maybe you don’t need SAS and you’re fine with the Storage Access Keys or with creating your own API to broker the messages onto your queues. But if you rely only on Storage Access Keys, be careful about who is in possession of your keys and think about how to lock bad clients out.

Azure Storage Queues: How to improve Performance

So what can you do to improve performance when adding or reading lots of messages? First of all: there’s no free lunch. But there are some tips and tricks you should know, so you can see whether they work for your scenario.

First, let’s take a look at the official performance targets for Azure Storage Queues: a single queue should be able to process up to 2,000 messages per second, and each storage account has a limit of about 20,000 requests per second (assuming 1 KB object size). That’s a lot.

Create Queues Wisely

What’s important to know is that every queue is treated as a partition in Azure Storage. That means all the messages of one queue are served by a single server, while different queues in the same storage account can be served by different servers. If you need more than 2,000 messages per second, you have to think about creating more than one queue, or even more than one storage account, to get more throughput. I recommend reading the blog post by Alexandre Brisebois for further details about distributing storage across many storage accounts (it’s about blob storage, but the concepts can be applied to queues as well).
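One way to exceed the per-queue target is to shard messages across several queues. The sketch below assumes queue names “tickets-0” through “tickets-3” and a connection-string placeholder; the round-robin scheme is just one possible distribution strategy:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse("<connection string>");
var client = account.CreateCloudQueueClient();

// Each queue is its own partition, so four queues give four servers to spread load over.
const int queueCount = 4;
var queues = new CloudQueue[queueCount];
for (int i = 0; i < queueCount; i++)
{
    queues[i] = client.GetQueueReference("tickets-" + i);
    queues[i].CreateIfNotExists();
}

// Round-robin (or hash-based) distribution of outgoing messages.
int ticketId = 4711; // example value
queues[ticketId % queueCount].AddMessage(new CloudQueueMessage("ticket " + ticketId));
```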

Writing Messages

If you have ever measured the time needed to insert messages, you might have been surprised by how slow it is. The throughput target of 2,000 messages per second might seem very far away indeed, and even if you can assume that this number of messages won’t be produced by only one client, the time per message seems very high. For example, a test inserting 1,000 messages from a client running on an Azure VM took about 100 seconds (about 10 messages per second).

One main reason can be found in the TCP/IP protocol. As you know, the Storage API sends REST calls to perform queue operations. The underlying .NET implementation uses Nagle’s algorithm for “improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network” (Wikipedia). Unfortunately, for the small requests typical of Azure Storage this is usually bad for performance, and you should turn it off whenever possible (more information can be found on the Azure Storage Team Blog). With only one line you can improve throughput significantly, to about 180 messages per second (in my personal tests on one Azure VM without multi-threading).

Turning off the Nagle algorithm:
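A sketch of the one-line fix, using a connection-string placeholder; the setting is applied per service point, here for the queue endpoint:

```csharp
using System.Net;
using Microsoft.WindowsAzure.Storage;

var account = CloudStorageAccount.Parse("<connection string>");

// Disable Nagle's algorithm for the queue endpoint *before* any
// connection to the queue service is opened.
ServicePoint queueServicePoint =
    ServicePointManager.FindServicePoint(account.QueueEndpoint);
queueServicePoint.UseNagleAlgorithm = false;
```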

Be careful: It’s important to turn off the algorithm before you connect to your queue.

In addition, the Expect100Continue and DefaultConnectionLimit properties of the ServicePointManager class can have a positive performance impact; see this forum post by the Azure Storage Team for more details.
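These settings can be sketched as follows; the connection limit of 100 is an example value, not a recommendation from the source:

```csharp
using System.Net;

// Both settings must be applied before the first request is sent.
ServicePointManager.Expect100Continue = false;    // skip the 100-Continue handshake round trip
ServicePointManager.DefaultConnectionLimit = 100; // example value; the default of 2 throttles parallel requests
```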

Reading Messages

When reading messages, take advantage of the possibility to read more than one message at a time. By pulling several messages per request you reduce the number of round trips needed to process them. With one request you can get up to 32 messages, but be careful: all of them have to be processed within the specified visibility timeout. It might take some experimenting to find the right balance of how many messages to pull at a time.
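A sketch of batch reading with GetMessages; the queue name and connection-string placeholder are assumptions, and the five-minute timeout is an example:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse("<connection string>");
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tickets");

// Fetch up to 32 messages in one round trip; they stay invisible to other
// consumers for the given visibility timeout and must be processed
// (and deleted) within it.
foreach (CloudQueueMessage message in queue.GetMessages(32, TimeSpan.FromMinutes(5)))
{
    Console.WriteLine(message.AsString);
    queue.DeleteMessage(message);
}
```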


Azure Storage Queues are really great and well prepared for large-scale scenarios, but you have to be aware of some facts to make the right decisions. Who should be in possession of your full-access key? Are Shared Access Signatures a better way of handling security and scalability? How do you get the power and performance out of Azure that you need? A good summary – also for other Azure Storage topics – is the Microsoft Azure Storage Performance and Scalability Checklist.

This was the third part of the article series; next time we’ll take a look at some useful tools and how to debug Azure Storage Queues. And as always: any suggestions or feedback are very welcome!