Caching: the Good, the Bad and the Hype

One of the more important aspects of the scalability of an ASP.NET site is caching. To do this effectively, you must understand the relative permanence and importance of the data being presented to the user, and work out which of the four major aspects of caching should be used. There is always a compromise, but in most cases it is an easy compromise to make, considering its effects in a heavily-loaded production system.

At the end of the day, being a web developer building a web site on top of the ASP.NET stack is not the hardest thing you can be called upon to do. Whether you're an MVC person or still find Web Forms more comfortable, it is no big deal to give life to a bunch of pages that follow some business logic. The real challenge today is not so much making web things work; it is making them work in the real world. And the real world is getting wild.

Web sites must be responsive across devices and must provide an attractive user experience. This doesn't simply mean classy styles and top-quality aesthetics, but also interactivity, smooth refreshes and updates, and perhaps a local memory of what was done and what happened in previous sessions. Finally, a web site must be fast. We could almost say that it doesn't really matter what a web site does as long as it does it as fast as possible.

Any user can tell you whether a given web site is fast or slow, and often it's a matter of perceived, rather than measured, speed. It's a subtle distinction, but it is the combination of ingredients and the overall recipe that leads to that perception of speed. In a nutshell, it's not obvious how to build a web site that performs like a champ. It depends on the tasks it has to perform; it depends on the business domain; it depends on the users (number, average skill, age, method of work, et cetera).

In this article, I'll focus on one aspect of web site development that is related in some way to performance, whatever your definition of performance may be. Whenever performance is discussed, caching is involved in one way or another.

Four Aspects of Caching

In the context of a web application, caching comes in at least four flavors. Even though all are termed caching, they touch on some very different technologies. The four flavors of web caching are:

  • Application output caching
  • Proxy output caching
  • Application data caching
  • Query caching

In all cases, caching saves the web server and the running application a lot of work, and avoids the blind recalculation of data that might already be there. Caching makes serving a request faster, sometimes much faster, with the obvious result that the number of requests the web server can serve per unit of time is higher. From there, you get better throughput for the site and better overall performance.

Before delving deeper into these four aspects of web caching, I must start by emphasizing that caching is helpful and deserves serious consideration, but it is not free. The bill for caching has two items. The first is the cost of calculating, in advance, the data that can be shared with subsequent requests. The other is the cost of serving stale data for a time. Cached data is any data that you calculate at some time (T1) and that remains valid, and will be served, until some later time (T2), regardless of what happens in the system between T1 and T2. For example, if a response is cached at 12:00:00 with a five-second duration, a database update made at 12:00:02 will not be visible to users until 12:00:05. As an application developer, you are in control of the interval T1-T2 and are well positioned to determine the impact of stale data on your application.

Application Output Caching

From the very first version of ASP.NET, the team built in an extremely powerful feature called output caching. The feature takes the form of the OutputCache attribute which, in ASP.NET MVC, you can attach to any controller method, as below:

[OutputCache(Duration=5, VaryByParam="None")]
public ActionResult Index()
{
    ...
}

If you attach the attribute to the controller class instead, the feature works for all action methods in the controller. The net effect of the OutputCache attribute is that the response of the method, whether HTML, JSON, XML or whatever else, stays cached in ASP.NET memory for the number of seconds set through the Duration parameter.
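As a minimal sketch of the class-level form (the controller and action names here are just placeholders), the attribute simply moves up to the class declaration:

[OutputCache(Duration=5, VaryByParam="None")]
public class HomeController : Controller
{
    // Every action method in this controller now shares
    // the same five-second output-caching policy.
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult About()
    {
        return View();
    }
}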

According to the code snippets above, the code within the body of the controller method runs at most once every five seconds. All requests for Index arriving within that interval require no work from the ASP.NET application; IIS silently returns the cached response. The first request for Index that arrives after the five seconds have elapsed triggers regular server-side work; the cached result is replaced and the cycle repeats.

Whether this contrivance works for you mostly depends on what the method is expected to do. If the method runs a complex and time-consuming database query, that query will now occur at most once every five seconds. It's up to you to determine whether a five-second delay in database reads is harmful to the application. For what it's worth, my experience says that most applications can happily survive a delay of a few seconds, and a few seconds of delay is typically enough to give clogged sites some relief at peak times.

Another point to consider is that you might need different responses depending on the parameters being passed. When you call a URL, for example, you may add query parameters, each of which can determine a different output. The VaryByParam parameter of the OutputCache attribute indicates how many distinct versions of the response you want to cache: one for each distinct value of the specified parameter. If you use None, then only one version of the output will be cached.
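As an illustration, here is a minimal sketch of per-parameter caching; the controller, the action and the LoadProduct helper are hypothetical:

using System.Web.Mvc;

public class ProductController : Controller
{
    [OutputCache(Duration=5, VaryByParam="id")]
    public ActionResult Details(int id)
    {
        // One cached copy is kept per distinct value of "id":
        // /product/details?id=1 and /product/details?id=2 each
        // get their own five-second cache entry.
        var product = LoadProduct(id);
        return View(product);
    }

    private object LoadProduct(int id)
    {
        // Placeholder for the actual, expensive data access.
        return new { ProductId = id };
    }
}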

The output-caching mechanism is implemented via a predefined HTTP module that kicks in for each request that hits the application. The module fires after request authorization, thus ensuring that you still go through authentication and authorization even for cached requests. Frankly, output caching may not be the decisive factor in speeding up your site, but I see no reason not to use it on any site where clogging could be causing real problems.

Proxy Output Caching

Proxy output caching is the same idea as application output caching, just moved one level up and outside the realm of the application. Proxy output caching adds yet another tier of servers on top of the web servers so that the workload is further distributed. Proxy caching is widely used by sites that face considerable traffic, such as airlines or news portals. Interestingly, proxy caching also enables you to optimize sites geographically. Overall, I think the best way to illustrate proxy caching is through a concrete example whose details are public through a series of slide decks, interviews and blog posts. Globo.com is the largest portal and Internet service provider in Latin America. To give you an idea of the traffic on their portal servers, it has been measured at 45 million views per day. Internally, Globo.com is essentially a CMS written for the most part in Python and Django.

A CMS is often based on monolithic software which is hard to cluster, and when it grows big it ends up working on top of a huge database. As a result, content generation becomes expensive and the site slows down. Caching comes to the rescue in the form of a reverse caching proxy such as Varnish. A reverse caching proxy is a tool that sits between clients and servers. It receives requests directly and either serves them itself or decides where to forward them, returning any content as if it were itself the server. Similar to Varnish are Squid and products from Akamai, Google and Microsoft (Azure CDN).

It’s interesting to compare proxy caches with a Content Delivery Network (CDN). Both a CDN and proxy caches are essentially a collection of servers deployed in multiple data centers. Anything that passes through them doesn’t hit the web servers which end up facing a more sustainable workload. Also both a CDN and proxy caches allow you distributing content geographically to locations closer to end users.

The difference is that all CDN URLs must be referenced in the code. This also means that in situations where the largest share of clients resides in the same area, a CDN is not ideal, as it will be receiving a significant workload itself. A reverse caching proxy, instead, is a plain tier of servers that appears like the origin servers to the client's eyes, so no changes are required in code anywhere. If you want to improve the performance of the site only in a certain geographical area, then you can place, say, a Varnish server just for that geographical area. That said, consider that CDNs and proxies are not an either/or choice. They often go together.
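From the ASP.NET side, cooperating with a proxy cache is mostly a matter of emitting the right HTTP caching headers. As a minimal sketch (the controller and action names are placeholders), the familiar OutputCache attribute can address downstream caches instead of the web server's own memory:

using System.Web.Mvc;
using System.Web.UI;

public class NewsController : Controller
{
    // Location=Downstream asks HTTP 1.1 caches between server and
    // client (a reverse proxy, a CDN edge, the browser) to hold the
    // response for 60 seconds; the origin server itself does not cache it.
    [OutputCache(Duration = 60,
                 Location = OutputCacheLocation.Downstream,
                 VaryByParam = "None")]
    public ActionResult Headlines()
    {
        return View();
    }
}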

Application Data Caching

Generally speaking, caching indicates the application's ability to save frequently-used data to an intermediate storage medium. In a typical web scenario, the canonical intermediate storage medium is the web server's memory. However, you can design caching around the requirements and characteristics of each application, using as many layers of caching as needed to reach your performance goals. A popular site like StackOverflow.com, for example, uses up to five levels of data caching. First, it caches at the network level through a CDN. Next, it caches at the application level using the built-in ASP.NET Cache object and Redis. Finally, it leverages the SQL Server cache at the database level. For more information on the StackOverflow.com architecture, you might want to check out http://speakerdeck.com/sklivvz.

ASP.NET offers the Cache object to store data that must be globally available and survive the boundaries of individual sessions. Created on a per-AppDomain basis, the Cache object remains valid while that AppDomain is up and running. The object is unique in its capability to automatically scavenge memory and get rid of unused items. Developers can assign cached items a priority, and associate them with various types of dependencies, such as disk files, other cached items, and database tables. When any of these (external) items change, the cached item is automatically invalidated and removed. Aside from that, the Cache object provides a plain dictionary-based programming model.
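As a minimal sketch of that programming model (the key and the file path are hypothetical), an item can be inserted with an absolute expiration, a file dependency and a priority:

using System;
using System.Web;
using System.Web.Caching;

public static class CountryCache
{
    public static void Store(string[] countries)
    {
        // The entry expires after 10 minutes, or as soon as the backing
        // file changes; High priority makes it one of the last items
        // scavenged when memory runs short.
        HttpRuntime.Cache.Insert(
            "countries",                                   // hypothetical key
            countries,
            new CacheDependency(@"C:\data\countries.xml"), // hypothetical file
            DateTime.UtcNow.AddMinutes(10),
            Cache.NoSlidingExpiration,
            CacheItemPriority.High,
            null);                                         // no removal callback
    }

    public static string[] Load()
    {
        // Plain dictionary-style read; returns null if expired or evicted.
        return HttpRuntime.Cache["countries"] as string[];
    }
}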

The Cache object has a dark side, too. More specifically, the Cache object is limited to the current AppDomain and, consequently, to the current process. This design was fairly good a decade ago, but it shows more and more limitations today. If you're looking for a global repository that works across a web farm, the native ASP.NET Cache object is not for you. You must resort to a distributed cache such as Windows Server AppFabric Caching services, to a commercial framework (such as ScaleOut or NCache), or to an open-source framework (such as Redis or Memcached). Better yet, you can use a homemade caching API that can switch between a variety of caching technologies. In my book "Programming ASP.NET MVC" I provide an infrastructure for such a service. An even better example is the .NET Cache Adapter you can get from NuGet at http://www.nuget.org/packages/Glav.CacheAdapter.
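To give an idea of what such a homemade API might look like, here is a hedged sketch (not the infrastructure from the book, nor the Cache Adapter's actual interface): the application codes against a small interface, and the concrete provider can be swapped underneath.

using System;
using System.Web;

// Minimal cache abstraction; other implementations could wrap
// Redis, Memcached, AppFabric, and so forth.
public interface ICacheService
{
    T Get<T>(string key) where T : class;
    void Set(string key, object value, TimeSpan duration);
}

// Sample provider built on the in-process ASP.NET Cache.
public class AspNetCacheService : ICacheService
{
    public T Get<T>(string key) where T : class
    {
        return HttpRuntime.Cache[key] as T;
    }

    public void Set(string key, object value, TimeSpan duration)
    {
        HttpRuntime.Cache.Insert(
            key, value, null,
            DateTime.UtcNow.Add(duration),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
}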

Query Caching

Most of the time, the data you return from web requests is taken from a location that is expensive to reach. The benefit of caching is therefore in reducing the time it takes to get the raw data, or the time it takes to massage that data into a format that's quicker and easier for clients to consume. The canonical location that's expensive to reach is the database.

Caching-wise, the database is a critical resource in the sense that it is the repository of your information. If anything changes, it changes in the database. And once it has changed in the database, everybody expects the change to spread throughout the application's user interface. Do you have any control over caching when it comes to database queries?

All DBMSs do a bit of work the first time they process a query, such as preparing and caching the execution plan. While significant, this doesn't explain what many have experienced: a complex query or stored procedure may take several seconds the first time and far less the second time. The point here is that when, say, SQL Server performs a query, it reads data into pages held in memory. Depending on the amount of memory installed on the server, those pages may be discarded soon. In other words, every time a new query is run, SQL Server attempts to cache the pages it reads; if it runs short of space, it simply evicts the least recently used pages from memory. It goes without saying that a query is much faster if the pages it must read are already in memory. There's not much a developer can do, though, to force certain pages into memory or to keep them there. No prioritization mechanism is provided as in the ASP.NET Cache. The only tool you can leverage is physical memory. The SQL Server instance behind the StackOverflow.com web site, for example, has almost half a terabyte of RAM.
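Since you can't pin pages in SQL Server's buffer pool, the one practical lever left on the application side is avoiding the repeated query altogether. A minimal cache-aside sketch (the cache key and the QueryTopSellersFromDatabase helper are hypothetical) could look like this:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class SalesReportService
{
    public IList<string> GetTopSellers()
    {
        // Serve the cached result if the query ran in the last 30 seconds.
        var cached = HttpRuntime.Cache["top-sellers"] as IList<string>;
        if (cached != null)
            return cached;

        // Otherwise hit the database once and park the result.
        var result = QueryTopSellersFromDatabase();
        HttpRuntime.Cache.Insert(
            "top-sellers", result, null,
            DateTime.UtcNow.AddSeconds(30),
            Cache.NoSlidingExpiration);
        return result;
    }

    private IList<string> QueryTopSellersFromDatabase()
    {
        // Placeholder for the actual, expensive database query.
        return new List<string>();
    }
}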

Summary

When talking about caching, I sometimes receive objections along the lines of "mine is a truly real-time system and I need up-to-date responses, always." Software makes life a lot easier, but it is not magic. You can hardly have thousands of users calling into a site every second, each one triggering a long-running operation that lasts a few seconds. It's against all laws of physics and common sense.

Caching is a powerful mechanism for speeding up applications, and it shows its effect in production more than in debugging. The power of caching shines when you have concurrent requests, and saving heavy operations from occurring concurrently makes a big difference. Beyond this, caching is a matter of tradeoffs, but overall I think it is probably one of the best compromises you'll face in IT.