In the previous article, we were introduced to the idea of Microsoft Azure Cloud Services, the different ways you can deploy your application to that platform, and a few examples of the benefits of using Cloud Services. In part 2 of this series, we’ll be focusing on the different ways you can build with a Cloud Service, and what exactly happens in terms of virtual machines when you deploy your application.

What does an application consist of?

Modern applications are much more advanced than the traditional master/detail software we used to build a few years ago. They’re built with multiple layers, modules, and so many other things to solve the challenges we’re facing today. For the purposes of discussion, let’s break out the internals of an “average” application.

  • To start with, our application typically stores data somewhere – this could be a relational/document/key-value database or a (distributed) file system, for example.
  • Then we probably have some application tier which hosts web services, a web API, workflows, and business logic.
  • In addition to that, we probably have some (scheduled) jobs running in the background which might be invoked by cron jobs, by the Windows Task Scheduler, or we might even have a Windows Service that is running all the time.
  • Finally, there’s a good chance our application comes with a few dashboards, views, reports or some other UI elements, which might have been created with one of the many web platforms like ASP.NET MVC, Node.js, PHP, or any technology that supports a presentation layer.

All of these components and layers could be running on a single machine, but in a typical setup you’ll have multiple machines each fulfilling their own role in the application. Some servers will be responsible for storing the data, others will be responsible for hosting the services, and still others will be responsible for the public-facing components (like the public web application).

The easiest way to move any kind of application to the cloud – or to Microsoft Azure in particular – would be to create a few Virtual Machines (VMs), configure each VM to match a specific role in your application (as if they were physical servers), and then configure the network to make sure that not all your components and machines are available over the internet. Cloud-hosted Virtual Machines are Infrastructure-as-a-Service (IaaS) and, while they do give you the most flexibility of any cloud-based offering, it’s also entirely up to you to do everything in terms of configuration and management!

At the other end of the spectrum we have Azure Web Sites, which are Platform-as-a-Service (PaaS). These allow you to easily deploy web applications in an environment which is completely managed and controlled by Microsoft to the extent that, other than actually uploading your web application, everything is managed for you. In fact, with the Azure Web Jobs functionality, it’s even possible to run your own background jobs (similar to Windows Services). The only downside to this option is that you’re not the administrator of the machine on which your application is deployed, and you have much less control over the network topology.

Between Virtual Machines and Web Sites you’ll find the Cloud Services functionality, which is the best of both worlds. You get the power of Virtual Machines (you’re an administrator on the machines that are deployed) and the automated management of Web Sites (the application is automatically deployed by the Fabric Controller, an intelligent management component running in Microsoft Azure). As a bonus, in addition to automated deployments, the Fabric Controller is also responsible for monitoring and repairing your application if something goes wrong. This is something we’ll be covering in one of the next articles of this series.

Cloud Services, Roles, Role Instances

A Cloud Service is a deployment boundary which, in most cases, matches your application. Within a Cloud Service you’ll typically have one or more Roles (like your web front-end, your backend services, your scheduled jobs etc.)

Each Role you define in your Cloud Service will have one or more Role Instances – these are the actual virtual machines that are provisioned, and on which specific parts of your application are deployed.


Figure 1: Sample Cloud Service

Figure 1 shows an example of what a Cloud Service could look like when building an ecommerce site. In this case my web shop will have a public-facing web application, which might also host an API for our mobile applications. This could be one of the Roles in our application, deployed over 3 Role Instances.

We should also be able to manage products, orders, and invoices, and in order to gain some insights into our business we will also need to generate some reports. To make sure we’re not impacting the visitors of our site, it could be a good idea to deploy this aspect of the application on a different set of machines. This means we already have at least one additional Role in our application, which in this case we will only be deploying on 2 instances.

In the background, we’ll also have a few scheduled/batch jobs running every few hours or at specific intervals. These could be jobs responsible for sending out newsletters, jobs that go over all products to verify which products could go out of stock, or jobs for generating and sending out invoices. More importantly, this could be yet another different Role in our application, with 4 Role Instances available because we’ll probably need the resources to handle all those important processes.

Finally, as mentioned earlier, if a Role needs to be accessible from the internet then we’ll be able to specify that the load balancer should route requests arriving on a specific port to one of the Role Instances within that Role (I know, it can get a bit confusing.)

It’s all about the service package

By now you’re probably wondering how Azure will know how to provision those servers, and how the different tiers of your application will be deployed over the different Role Instances. Thankfully, all of this is possible thanks to the Service Package and the Service Configuration.

The Service Package (what you get when building an Azure project) contains everything the Fabric Controller needs to know in order to successfully deploy your application over different Roles and Role Instances. In the next article we’ll be taking a closer look at what exactly is deployed to Azure, and the different ways this can be done, but for now you can think of the Service Package as one big compressed file that contains the code of all your Roles. In addition to that, the Service Package also comes with an XML file (the Service Definition) that explains how the Roles should be deployed: which ports need to be opened, certificates that need to be installed, custom configuration of the machines, which OSes we want to use… everything.

Finally, there’s also the Service Configuration, which contains all the variable settings that can still change after deployment: the number of Role Instances we want in each Role, connection strings and app settings, certificate thumbprints, and so on.
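To make the split between the two files concrete, here is a heavily trimmed sketch of what they could contain for our ecommerce example. The role names, endpoint names, and setting names are purely illustrative:

```xml
<!-- ServiceDefinition.csdef (sketch) – fixed at build time -->
<ServiceDefinition name="MyShop"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebShop" vmsize="Small">
    <Endpoints>
      <!-- Which ports the load balancer should expose for this Role -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
  <WorkerRole name="BatchJobs" vmsize="Medium" />
</ServiceDefinition>

<!-- ServiceConfiguration.cscfg (sketch) – changeable after deployment -->
<ServiceConfiguration serviceName="MyShop"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebShop">
    <Instances count="3" />
    <ConfigurationSettings>
      <Setting name="StorageConnectionString" value="..." />
    </ConfigurationSettings>
  </Role>
  <Role name="BatchJobs">
    <Instances count="4" />
  </Role>
</ServiceConfiguration>
```

Notice that the instance counts live in the Service Configuration, which is exactly why they can be changed (for scaling) without rebuilding and redeploying the package.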

Implications – Scaling, Repairing, Storing

The Fabric Controller (FC) knows exactly how to deploy and configure every aspect of your application by simply looking at the Service Package and the Service Configuration. This means that, if a server should go down for some reason, the FC can simply start a new VM and configure it exactly as needed by looking at the Service Package and Service Configuration.

The same is also true for scaling out – if we want to scale from 2 to 10 Role Instances, the FC can immediately provision 8 new servers and configure them exactly as needed. Or if we want to scale down, it can simply delete the Role Instances on the fly (because it knows how to rebuild them when we need them again).

Our application is now what defines the fundamental units of infrastructure we’re provisioning, and the virtual machines are just the invisible “things” our application can run on and which can be disposed of when we no longer need them. We’ve achieved commoditized virtual hardware!

The only consequence of this model is that it requires you to make sure your application is stateless – storing data locally on the VM is not an option because said machine could disappear at any moment. However, in this series we’ll also be looking at a few options for moving the state away from your compute resources (the Role Instances) to some centralized storage.

Different types of Roles

Looking back at Figure 1, we can identify 2 types of Roles – Roles that host a web application and Roles that don’t – and these are actually the 2 different types of Roles that are available in Microsoft Azure today: Web Roles and Worker Roles.

Web and Worker Roles are very similar – the main difference is that a Web Role comes with IIS and a Worker Role doesn’t, but this doesn’t stop you from hosting a web application on a Worker Role. Remember that with Cloud Services we get the power of Virtual Machines, and that we’re therefore administrators on the Role Instances that are deployed.

This means we’re able to control whatever happens when these instances start, allowing us to do virtually anything we want with a Worker Role. The rule of thumb you should remember: if it runs on Windows, it will run on your Web/Worker Role.

Worker Roles

Let’s start by looking at Worker Roles. When you deploy a Worker Role as part of a Cloud Service you’re able to run some custom code each time an instance (re)starts. This means you could write some code that hosts a WCF Service, hosts an API using ASP.NET Web API, starts a Node.js server… literally anything you need.


Figure 2: Worker Role with three Role Instances

After the virtual machines are created, your code will be deployed on each machine, each of which will then start a process called WaWorkerHost.exe. This process will run your custom code (a class inheriting from the RoleEntryPoint class available in the SDK), or you could also specify a custom executable that should be executed when the Role Instance starts.
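The skeleton WaWorkerHost.exe looks for is small; a minimal sketch, assuming the Microsoft.WindowsAzure.ServiceRuntime assembly from the SDK, could look like this:

```csharp
// Minimal Worker Role entry point (sketch) – this is the class that
// WaWorkerHost.exe loads and runs on each Role Instance.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization before the instance starts doing work.
        return base.OnStart();
    }

    public override void Run()
    {
        // If Run() ever returns, the instance is considered unhealthy
        // and will be recycled by the Fabric Controller.
        while (true)
        {
            // The actual work goes here.
            Thread.Sleep(10000);
        }
    }
}
```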

If the custom code or executable fails to start or fails afterwards, then the affected Role Instances will restart (and cycle) until they’re back online (one of the self-repair responsibilities of the Fabric Controller).

Listening Worker Role Pattern

The code in Figure 3 below is a very simple example of what you could call the “Listening Worker Role Pattern”. In this code I’m simply starting the ASP.NET Web API Self Host – this could be my application tier, which could then be consumed by my presentation tier. In this case we’ll want to run the Self Host on the IP address of the instance, and on a port that we configured in the Service Definition. All of this info is accessible through the RoleEnvironment class (part of the SDK) and this is what I’m using to define the base URI to start my Self Host. If you want to try this yourself, you can download the code from the link above, or get the latest from the JustAzure.CloudServices repo on GitHub.

Figure 3: Web API Self Host
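As a rough sketch of what the listing in Figure 3 does – assuming the ASP.NET Web API self-host package, and an internal endpoint named ApiEndpoint declared in the Service Definition (both names illustrative):

```csharp
// "Listening Worker Role" sketch: self-hosting ASP.NET Web API on the
// IP address and port assigned to this Role Instance.
using System.Threading;
using System.Web.Http;
using System.Web.Http.SelfHost;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // The RoleEnvironment class tells us which IP/port this instance
        // was given for the endpoint we declared in the Service Definition.
        var endpoint = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["ApiEndpoint"].IPEndpoint;
        var baseUri = string.Format("http://{0}", endpoint);

        var config = new HttpSelfHostConfiguration(baseUri);
        config.Routes.MapHttpRoute("Default", "api/{controller}/{id}",
            new { id = RouteParameter.Optional });

        using (var server = new HttpSelfHostServer(config))
        {
            server.OpenAsync().Wait();
            // Block forever – returning from Run() recycles the instance.
            Thread.Sleep(Timeout.Infinite);
        }
    }
}
```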

Queue Polling Worker Role Pattern

Unlike the application pool of a Web Application, there’s no idle-timeout for your code running in the Worker Role, which means it’s a great place to do things that should keep running indefinitely (For example, hosting a Web API, a WCF Service, etc.)

This also means that Worker Roles are perfect for handling your asynchronous workloads. Take the example of an online photo album: after the user has uploaded pictures, we’ll need to generate thumbnails to display in many different views of our application. In that case, we could simply store the newly uploaded files at a location that is accessible by the Worker Role, and then notify the Worker Role that it should extract a thumbnail for each image.

The compute-intensive work is now being offloaded to the Worker Roles, which reduces the load on your frontend and also improves the responsiveness of your application. Notifying a Worker Role that something should happen is typically done through Storage Queues or Service Bus Queues, hence the name of this pattern: Queue Polling Worker Roles, whose key characteristic is that they start listening for new messages in the Run method of the Worker Role.

As it happens, this pattern is common enough that when you add a Worker Role to your Cloud Service, you’ll see the option to add a Worker Role with Service Bus Queues.

Figure 4: A basic example of a Queue Polling Worker Role
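The heart of the pattern is the polling loop in the Run method. A minimal sketch using an Azure Storage Queue (queue name and setting name are illustrative) could look like this:

```csharp
// Queue Polling Worker Role sketch: the Run method keeps pulling
// messages from a Storage Queue and processes them one at a time.
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("thumbnails");
        queue.CreateIfNotExists();

        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message == null)
            {
                // Nothing to do – back off before polling again.
                Thread.Sleep(TimeSpan.FromSeconds(5));
                continue;
            }

            // Process the message here (e.g. generate the thumbnail),
            // then delete it so it isn't picked up again once its
            // invisibility timeout expires.
            queue.DeleteMessage(message);
        }
    }
}
```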

External Process Worker Role Pattern

A final pattern is the External Process Worker Role, where you simply start an executable that can run on Windows. This means you could run virtually anything, up to and including a Node.js server, in a Worker Role. This is configured through the ProgramEntryPoint element in the ServiceDefinition file:

Figure 5: External Process Worker Role
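A sketch of the Service Definition fragment for this pattern (role name and command line are illustrative) looks like this:

```xml
<!-- ServiceDefinition.csdef fragment (sketch) -->
<WorkerRole name="NodeWorker" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- The Fabric Controller starts and monitors this process;
           if it exits, the Role Instance is considered unhealthy. -->
      <ProgramEntryPoint commandLine="node.exe server.js"
                         setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>
```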

You might be thinking: “Why not simply start a new process from within the Run method through Process.Start?” While this would probably work fine, you would be missing out on something great: free monitoring. If the process in the ProgramEntryPoint fails, then the Fabric Controller knows that something is wrong with your Role Instances and it will be able to take the required actions. On the other hand, if the process crashes after you call Process.Start, the Fabric Controller will have no clue that something went wrong, and will take no action.

Web Roles

As I mentioned before, Web Roles are essentially just Worker Roles with IIS. There’s a process that starts whenever the instance starts (WaIISHost.exe) which allows you to write the same logic as you would in your Worker Role, and then there’s IIS to run your Web Application.


Seeing Double

And this is where it can get a little confusing. Essentially, when you use a Web Role, you have to keep in mind that your code will be running in 2 processes on each Role Instance: the WaIISHost.exe that starts when the instance starts, and the process of the Application Pool (IIS) which runs your Web Application.


Figure 7: A sample Web Role project, where some code will be running in the WaIISHost.exe process and other code will be running in the w3wp.exe process

This also means that, if you want to do something when the Web Application starts, you’ll still do this from the Global.asax.cs file (or an HttpModule). On the other hand, if you want to do something when the Instance starts you’ll do this from the WebRole.cs file (as this code runs in the WaIISHost.exe process). Since these are 2 different processes, there’s no way to share variables/objects/etc. between the code running in your WebRole.cs and the code running in your Global.asax.cs/Web Application.
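The process boundary is easy to trip over, so here is a small sketch that makes it visible (the field and value are purely illustrative):

```csharp
// Sketch: code in WebRole.cs runs in WaIISHost.exe, while the web
// application runs in w3wp.exe – state set in one process is
// invisible to the other.
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public static string Message;

    public override bool OnStart()
    {
        // Runs in WaIISHost.exe when the Role Instance starts.
        Message = "set in WaIISHost.exe";
        return base.OnStart();
    }
}

// Meanwhile, in Global.asax.cs (running in w3wp.exe), WebRole.Message
// will still be null: w3wp.exe has its own copy of the static field,
// because it is a completely separate process.
```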

Working with Web Roles

In order to reduce the cost of your application you could even decide not to use a Worker Role at all, but to write your code in the WebRole.cs file, since this code runs outside of your Web Application. This is mainly interesting for small workloads, where it would be overkill to pay for a dedicated Worker Role. I don’t advise you to do this for resource-intensive tasks because it might impact the performance of the Web Application running on that same instance.

The Service Definition file also allows you to do some basic configuration of IIS: you’ll be able to define multiple sites that should run within 1 Web Role (again, good if you want to reduce the cost of your application), configure the endpoints, configure SSL certificates, configure host headers if you want to run multiple applications on port 80, and so on.


Figure 8: Running multiple web applications in a single Web Role and using host headers
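A sketch of what this looks like in the Service Definition – two sites sharing port 80 through host headers (all names, directories, and host names are illustrative):

```xml
<!-- ServiceDefinition.csdef fragment (sketch) -->
<WebRole name="WebShop" vmsize="Small">
  <Sites>
    <Site name="Shop" physicalDirectory="..\..\Shop">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn"
                 hostHeader="shop.example.com" />
      </Bindings>
    </Site>
    <Site name="Admin" physicalDirectory="..\..\Admin">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn"
                 hostHeader="admin.example.com" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <!-- Both sites share this single endpoint on port 80 -->
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
  </Endpoints>
</WebRole>
```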

But what if that’s not enough control? Maybe you want to make some advanced changes at the IIS level, like configuring the idle timeout on Application Pools, or ASP.NET App Suspend. Remember that the FC could potentially remove instances, deploy them somewhere else, or scale out the application, so using Remote Desktop to make changes to each new or recycled instance really isn’t an option.

Alternatively, we could script the change using a batch file or PowerShell run by Startup Tasks (more about this in a following article), or we could do this with the WebRole.cs file which, just like the WorkerRole.cs file, contains an OnStart method. For a Web Role this code is executed before the load balancer routes traffic to the instance, so this is the best way to write custom code before the application starts.

There’s one more consideration you should bear in mind when working with Web Roles – in order to make changes to IIS, our code needs to run with administrative permissions. Using the Service Definition file (Figure 9, below) we need to change the executionContext to elevated, and then the code in the WebRole.cs class will run with elevated privileges, sufficient to make changes to IIS.

Figure 9: Execution Context
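The change itself is a one-line addition to the Service Definition (role name is illustrative):

```xml
<!-- ServiceDefinition.csdef fragment (sketch) -->
<WebRole name="WebShop" vmsize="Small">
  <!-- Run the WebRole.cs code (WaIISHost.exe) with administrative
       rights; the web application in w3wp.exe is NOT elevated. -->
  <Runtime executionContext="elevated" />
</WebRole>
```

Note that only the WaIISHost.exe process is elevated; your Web Application itself keeps running with its normal, limited permissions.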

With the application running in elevated mode we can add the Microsoft.Web.Administration NuGet package to our project, and we can then use the ServerManager class to make whatever change we need to IIS. Figure 10 shows how I’m changing the idle timeout of my application pool.

Figure 10: IIS configuration using the OnStart method
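As a sketch of the kind of code in Figure 10 – assuming the Microsoft.Web.Administration NuGet package and the elevated execution context from Figure 9:

```csharp
// Sketch: disabling the application pool idle timeout from OnStart
// using the ServerManager class. Requires executionContext="elevated"
// in the Service Definition, since it writes to the IIS configuration.
using System;
using System.Linq;
using Microsoft.Web.Administration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        using (var serverManager = new ServerManager())
        {
            // Find the application pool behind the first site in this
            // Web Role and look it up by name.
            var site = serverManager.Sites.First();
            var appPoolName = site.Applications.First().ApplicationPoolName;
            var appPool = serverManager.ApplicationPools[appPoolName];

            // Zero means "never unload the app pool when idle".
            appPool.ProcessModel.IdleTimeout = TimeSpan.Zero;
            serverManager.CommitChanges();
        }
        return base.OnStart();
    }
}
```

Because OnStart runs before the load balancer sends any traffic to the instance, the change is in place before the first request arrives.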

The point of this example is to demonstrate that you really can do virtually anything with the current instance from the OnStart method, because it can run with elevated privileges. This gives you an enormous amount of control and power, and in a future article we’ll look at a better way to configure IIS and the rest of the instance that opens up fewer opportunities for mishap.

Tip of the iceberg

In this article we’ve cracked open Azure Cloud Services to see how they are composed of Roles and Role Instances, and we’ve seen how the Fabric Controller can deploy your application over multiple Roles and Role Instances in a matter of minutes, as opposed to the hours or days of manual work you would need to do the same on-premises. We’ve also seen how, thanks to the deployment and configuration definition files, the Fabric Controller can also automatically repair your application if your roles fail for whatever reason.

We’ve also discussed the differences between the Web and Worker Roles – two ways to run your code in a Cloud Service which, between them, allow you to run virtually any application in this PaaS offering. With that understood, we touched upon some of the considerations you should bear in mind when working with each of these roles, and how to make the most out of their unique features.

Now that we have a basic understanding of the platform, we can continue on our journey by looking at more specific concepts like deployment, configuration, life cycle, networking, and so on.

Hope to see you soon for part 3, where we’ll be looking at the contents of a Service Package and everything that can be set up in the Service Configuration file!