Static Content CAT

From CloudScale
Revision as of 12:20, 8 July 2014 by Marianocecowski


The Static Content CAT is an architectural pattern that aims to increase efficiency by directing requests for static content (static HTML, images, media and other mostly static files) to an instance (or set of instances) that is independent of the computing nodes and directly accessible to the end users.

This allows for a deployment dedicated to serving the static content, which can be tuned for high throughput, for instance with an in-memory file cache and fast read-only storage, while requiring minimal computing power. It also enables other improvements such as geographically distributed deployment.


Also Known As

Static Content Hosting, Cloud Storage Service, File Hosting Service, Web Storage




A web application communicates with a cloud service that must handle very different tasks, among them keeping a user's session, obtaining information from different sources, computing the business logic and providing static content to the web application. Depending on the amount of static content, the service provider might spend a significant fraction of its time transmitting static files, along with disk and memory resources that could otherwise be used to serve more clients. At the same time, a single server configuration benefits neither the computation nor the fast delivery of the static content.


Splitting off the service for static content is a very simple solution to this problem. The generated web content must then reference the static content on a separate deployment created specifically for that purpose.
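The idea of referencing static content on a separate deployment can be sketched as follows. This is a minimal, illustrative example: the host name `static.example.com` and the helper names are assumptions for illustration, not part of the pattern's definition.

```python
# Hypothetical dedicated static-content deployment; in practice this would
# be configured per environment.
STATIC_HOST = "https://static.example.com"

def static_url(path: str) -> str:
    """Build an absolute URL for a static asset on the dedicated host."""
    return f"{STATIC_HOST}/{path.lstrip('/')}"

def render_page(user: str) -> str:
    """The dynamic tier computes the page body, while every static
    reference points at the separate static-content deployment."""
    return (
        "<html><head>"
        f'<link rel="stylesheet" href="{static_url("css/site.css")}">'
        "</head><body>"
        f"<h1>Welcome, {user}</h1>"
        f'<img src="{static_url("img/logo.png")}">'
        "</body></html>"
    )
```

With this indirection in place, the static deployment can be moved, scaled or replaced by changing a single configuration value, without touching the page-generation logic.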

Even if we don't add new machines and only re-purpose some of the original ones for static and dynamic content respectively, we benefit from the ability to configure each one to best suit its particular job. Computing nodes might not need fast SSD disks but could benefit from having several CPUs, while a static content server can run with a single CPU, plenty of RAM for caching, and fast RAIDed SSD disks.

Beyond specialization, in a geographically distributed situation we can make use of Content Caching to reduce traffic, or, in the case of secure content, deploy instances of the static content service in different locations, making sure the generated web pages point to the most appropriate one (e.g. based on the client's IP address). A simple example would be a company with two main offices on different continents. Having a static service deployment at each location, reached over fast in-house Ethernet, avoids slow and expensive intercontinental traffic while keeping the hardware requirements to a minimum. Even public services can benefit from this geographic distribution, deploying instances close to the largest concentrations of users.
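The per-location selection described above can be sketched as a simple lookup. The endpoint names and the IP-prefix table below are purely hypothetical; a real deployment would use a GeoIP database or DNS-based routing rather than a hard-coded mapping.

```python
# Hypothetical static-content deployments per region.
STATIC_ENDPOINTS = {
    "eu": "https://static-eu.example.com",
    "us": "https://static-us.example.com",
}
DEFAULT_REGION = "us"

# Toy mapping of client IP prefixes to regions (illustrative only;
# replace with a GeoIP lookup in practice).
IP_PREFIX_TO_REGION = {
    "10.1.": "eu",
    "10.2.": "us",
}

def region_for_ip(ip: str) -> str:
    """Resolve a client IP to a region, falling back to a default."""
    for prefix, region in IP_PREFIX_TO_REGION.items():
        if ip.startswith(prefix):
            return region
    return DEFAULT_REGION

def static_host_for(ip: str) -> str:
    """Pick the static-content endpoint closest to the client."""
    return STATIC_ENDPOINTS[region_for_ip(ip)]
```

The page-generation code then uses `static_host_for(client_ip)` when building asset URLs, so each office's users are served from their local deployment.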

Another benefit of separating the static content service is the possibility of using an existing PaaS to reduce the complexity of the system, or even a third-party service such as Amazon S3 or Azure Storage, which might offer flexible pricing models.
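When a third-party service such as Amazon S3 hosts the static content, the generated pages only need to reference the provider's public URLs. As a small sketch, the helper below builds a virtual-hosted-style S3 URL; the bucket and region values are hypothetical examples.

```python
def s3_public_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style URL for a public object in Amazon S3.

    Assumes the bucket allows public reads of the given object; bucket
    and region here are illustrative placeholders.
    """
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key.lstrip('/')}"
```

The dynamic tier can then emit such URLs in place of locally hosted asset paths, leaving storage, bandwidth and geographic distribution to the provider.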