By Iqbal Khan, Jeremiah Talkar
Microsoft Azure is rapidly becoming the cloud of choice for .NET applications. Besides its rich set of cloud features, Azure provides full integration with the Microsoft .NET Framework. It's also a good choice for Java, PHP, Ruby and Python apps. Many of the applications moving to Azure are high-traffic, so high scalability is a key requirement. An in-memory distributed cache can be an important component of a scalable environment.
This article will cover distributed caching in general and what it can provide.
The features described here relate to a general-purpose in-memory distributed cache, not specifically to Azure Cache or NCache for Azure. For .NET applications deployed in Azure, an in-memory distributed cache has three primary benefits:
Azure makes it easy to scale an application infrastructure. For example, you can easily add more Web roles, worker roles or virtual machines (VMs) when you anticipate higher transaction load. Despite that flexibility, data storage can be a bottleneck that could keep you from being able to scale your app.
This is where an in-memory distributed cache can help. It lets you cache as much data as you want, and it can reduce expensive database reads by as much as 90 percent, which also reduces transactional pressure on the database. The database can then perform faster and take on a greater transaction load.
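The read-reduction described above is usually achieved with the cache-aside pattern: check the cache first and touch the database only on a miss. Here is a minimal sketch in Python; the `CacheAsideRepository` class is illustrative, with a plain dictionary standing in for both the database and the distributed cache client, not a real cache API.

```python
import time

class CacheAsideRepository:
    """Illustrates the cache-aside pattern: check the cache first,
    fall back to the database only on a miss, then populate the cache."""

    def __init__(self, database, cache, ttl_seconds=300):
        self.database = database          # dict standing in for the database
        self.cache = cache                # dict standing in for a distributed cache client
        self.ttl_seconds = ttl_seconds    # expiration keeps cached data from going stale
        self.db_reads = 0                 # counts expensive database reads

    def get(self, key):
        entry = self.cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.time() < expires_at:  # honor the expiration
                return value
        # Cache miss (or expired): do the expensive database read, then cache it.
        self.db_reads += 1
        value = self.database[key]
        self.cache[key] = (value, time.time() + self.ttl_seconds)
        return value

# Ten reads of the same key hit the database only once.
db = {"customer:42": {"name": "Contoso"}}
repo = CacheAsideRepository(db, cache={})
for _ in range(10):
    repo.get("customer:42")
print(repo.db_reads)  # 1
```

With a real distributed cache, `self.cache` would be a network client shared by all Web and worker roles, so a value cached by one instance serves misses on every other instance as well.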
Unlike a relational database, an in-memory distributed cache scales linearly. It generally won't become a scalability bottleneck, even when 90 percent of the read traffic goes to the cache instead of the database. All data in the cache is distributed across multiple cache servers, and you can easily add more cache servers as your transaction load increases. Figure 1 shows how to direct apps to the cache.
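The linear scaling comes from partitioning: each key is owned by exactly one cache server, so adding servers spreads the same key space over more memory and more network cards. The sketch below shows the idea with simple hash-modulo partitioning; the server addresses are made up, and production caches typically use consistent hashing instead, so that adding a server moves only a fraction of the keys.

```python
import hashlib

def server_for(key, servers):
    """Pick the cache server that owns a key by hashing the key.
    Modulo partitioning is a simplification; real distributed caches
    usually use consistent hashing to limit data movement on resize."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["cache1:9800", "cache2:9800"]          # hypothetical addresses
keys = [f"session:{i}" for i in range(1000)]

# The 1,000 keys spread across both servers...
counts = {s: 0 for s in servers}
for k in keys:
    counts[server_for(k, servers)] += 1

# ...and adding a third server absorbs its share of the same load.
servers.append("cache3:9800")
counts3 = {s: 0 for s in servers}
for k in keys:
    counts3[server_for(k, servers)] += 1
```

Because every server owns roughly an equal slice of the keys, doubling the servers roughly doubles both cache capacity and read throughput, which is the linear scaling the article describes.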