
Once upon a time in Azureland!

Premise

Let me tell you my story about Azure. While Azure is packed with services now, it was very experimental during its first incarnation back in 2009/2010. There were only a handful of services available and very few bells and whistles to play with.

This story is from 2009/2010. During the summer of 2009, Accenture acquired a company called Origin Digital. As the name suggests, Origin Digital was in the digital media business. They had an online product that transcoded videos and images from any format to any other required format. They used to get tons of traffic during Thanksgiving and the holiday season, since more folks logged on to transcode videos. During the 2008 holiday season, Origin Digital saw a spike so large that it took down the online transcoder service; the on-premises data center in Virginia was simply overwhelmed.

While this was happening back in the US, a couple of very important events were happening in Redmond and India respectively.

  • Event 1: While Amazon's AWS was already extremely popular, Microsoft decided to take a completely fresh approach to cloud computing with its flagship Platform-as-a-Service offering called Azure (then called Windows Azure).
  • Event 2: Avanade and Accenture partnered with Microsoft and decided to form a Center of Excellence (CoE) in Pune, India.

One of the first challenges given to the Azure Factory was to take Origin Digital's online transcoder service to Azure. The gauntlet was thrown!

Breaking Shackles

The idea was simple: since Azure provides scalability, elasticity and processing muscle on demand, it should be able to handle the kind of traffic spike that Origin Digital's on-premises datacenter had seen during the holiday season.

Back then, Azure PaaS had a very limited set of services: Cloud Services (Web Roles and Worker Roles), Azure SQL Database to handle relational data, and Azure Storage to handle unstructured data like files.

We decided to do this in two phases. The first phase targeted only "lift and shift": move the code as-is to Azure with as few changes as possible. The second phase was all about making the architecture conducive to Microsoft Azure.

As part of Phase 1, the web APIs and web applications were migrated and encapsulated into Web Roles, and the Windows Services were packaged as Worker Roles. The existing SQL database was migrated to Azure SQL. Some code changes were required, since the old code stored files in on-premises storage, while Azure Blob Storage was the best fit for scalability and cost. The Azure Migration Assistant did not exist back then, yet we were not only able to lift and shift the existing app to the cloud but also able to share a lot of our lessons with the community. Remember, these were early days, and lessons learned were like gold, since the industry was still struggling to understand what Azure even was!
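
To give a flavour of the kind of change involved, here is a minimal sketch of swapping a local file write for a Blob Storage upload. It uses a later Microsoft.WindowsAzure.Storage SDK than the 1.x StorageClient library we had in 2010, and the container name and connection string are hypothetical.

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class MediaStore
{
    private readonly CloudBlobContainer container;

    public MediaStore(string connectionString)
    {
        // Hypothetical connection string; in a role this would come from the service configuration.
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudBlobClient();
        this.container = client.GetContainerReference("uploaded-videos");
        this.container.CreateIfNotExists();
    }

    // Replaces the old "write to on-premises storage" code path.
    public void Save(string blobName, Stream content)
    {
        var blob = this.container.GetBlockBlobReference(blobName);
        blob.UploadFromStream(content);
    }
}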

Phase 2 took a while, as we designed and refactored for the cloud without much documentation or patterns and practices to lean on. We were able to implement auto-scaling. Per the problem statement, the business wanted a design that would spin up more encoder service instances based on the number of jobs on the queue and the CPU/memory of the other encoder instances. We wrote a custom orchestrator service that monitored the metadata emitted by each instance and added or removed worker role instances using the Windows Azure Service Management APIs. (Remember, this was early 2010 😊).
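
The decision logic of that orchestrator looked roughly like the sketch below. The thresholds and the IInstanceScaler wrapper are illustrative placeholders; the real implementation changed the instance count by calling the Service Management REST API with an updated service configuration.

// Illustrative sketch only: IInstanceScaler is a hypothetical thin wrapper
// over the Windows Azure Service Management API, and the thresholds are made up.
public interface IInstanceScaler
{
    int GetEncoderInstanceCount();
    void SetEncoderInstanceCount(int count);
}

public class EncoderOrchestrator
{
    private readonly IInstanceScaler scaler;

    public EncoderOrchestrator(IInstanceScaler scaler)
    {
        this.scaler = scaler;
    }

    // Called periodically with the queue depth and average CPU reported by the encoder instances.
    public void Evaluate(int queuedJobs, double averageCpuPercent)
    {
        var current = this.scaler.GetEncoderInstanceCount();

        // Scale out when the queue is deep or the encoders are running hot.
        if (queuedJobs > current * 10 || averageCpuPercent > 80)
        {
            this.scaler.SetEncoderInstanceCount(current + 1);
        }
        // Scale back in when there is clearly spare capacity.
        else if (current > 1 && queuedJobs < current * 2 && averageCpuPercent < 20)
        {
            this.scaler.SetEncoderInstanceCount(current - 1);
        }
    }
}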

We had a lot of fun making the service production-ready! This was pre-VSTS, when DevOps itself was a relatively new concept for the industry, so implementing CI/CD pipelines was interesting. Microsoft had published a few cmdlets and PowerShell commands wrapping the Service Management APIs, and we configured those in Team Foundation Server (TFS) to deploy the services and web apps into Azure.

The monitoring and logging story was interesting as well. Azure had Windows Azure Diagnostics (WAD), and WAD agents ran on all the cloud service VMs. The agent collected diagnostics events, event logs, errors and crash dumps and uploaded them to an Azure Storage account.
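
For context, wiring up WAD in a role's OnStart() looked roughly like the snippet below. This is a sketch from memory of the 1.x SDK era, and the transfer intervals and log level shown are arbitrary choices, not what we actually shipped.

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default WAD configuration and opt in to the data we care about.
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Periodically ship trace logs and the System event log to the storage account.
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        config.WindowsEventLog.DataSources.Add("System!*");
        config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        // The named connection string points WAD at the diagnostics storage account.
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}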

Conclusion

It was fascinating what we were able to achieve with so little! 

 

When I see the new and shiny toys that engineers get to play with in Azure these days, it puts a smile on my face. It is very apparent that the vision Microsoft had for Azure was different, and that it has transformed into this reliable platform after years of innovation and evolution. I am glad I was part of the journey, and there is more to come! This Origin Digital story was published as a success story in MSDN Magazine and was showcased by Microsoft in 2010.

Where are all the Cowboys?

Premise

Where are all the Cowboys? Where did all the men with strong, straight spines go? (I am not being a chauvinist here; I am using the word "men" in a gender-non-specific way throughout this post.) Where are the folks who take pride in asking questions and accept defeat gracefully? Where are those strong-shouldered fathers who carried their children on their shoulders to see the Independence Day parade?

Faceless Cowards!

Call it social pressure or the structure of society, but we are lacking people who take a decisive stance! Folks are happy to troll someone online and run away when challenged. The so-called leaders and faceless politicians are interested in under-the-table deals instead of facing challenges head on. People look for ways to throw others under the bus, and the heartless, greedy ones wagging their tongues for a taste of money are dying within.

Call me old-fashioned, but I really admire my father's generation. People kept it very simple back then. They had courage and took pride in raising families and earning the hard, right way. They worked with a chip on the shoulder and a smile on the face. There were pressures back then as well; frustration, poverty and crime plagued that generation too, but people were honest, had a blue-collar mentality and showed immense strength in facing adversity. The social fabric, the structure of society, was rock solid.

Where did these people go? Dinosaurs?

Searching..

Searching for the cowboys who hunted for bad blood, rode under clear skies and faced every challenge with a grin and a swagger. Searching for the strong-spined men who stood for the truth, fought wars for humanity, took pride in respecting others and believed in action rather than timid opinions on social media.

Belichick’s Team Building Theory

I love American Football, especially the NFL! I admire the hard work NFL coaches and players put in. Football is fun, it’s a sport — but for these coaches and players — it’s a way of life.

I particularly love and respect one (future Hall of Fame) coach. He coaches the New England Patriots and his name is Bill Belichick. He is well respected throughout the league and considered the GOAT (greatest of all time). The guy has won five Super Bowls with the Patriots.

Apart from his rich legacy as a sports coach, as a software engineering team lead I really love his philosophy. The way he handles his teams, the way he instils confidence and a winning spirit in them, is absolutely commendable. As an engineering team lead, I like to draw parallels with Belichick's football philosophies.

In one of his recent interviews, he said: "I like dependable players. I like players who put the team in front of anything and everything. Brady (the Patriots' captain and an all-time great quarterback) is not the best athlete, he is not the best player football has seen, but he is dependable. He brings his A+ game in pressure situations, utilizes the resources at his disposal and wins the game. He's a very smart, instinctive football player. It's not all about talent, it's about dependability, consistency, and being able to improve."

Belichick has proven this philosophy right again and again. Many times in his career he has retained dependable, consistent players over superstar athletes, and he has been right every time. (Going to eight Super Bowls and winning five of them is no joke.)

What does it say to us who are building software teams?

  • Build your teams using dependable members.
  • Members might not be rockstar performers, they might not be the best programmers, but you can depend on them when the time comes.
  • Prefer consistent members who show the attitude and aptitude to improve over time to folks who have more or less already peaked!
  • Last but not least – members who put the team above anything and everything. Delivering software projects is like playing a team sport, and we need members who can stick together.

Enjoy team building. Adiós!

Test transformers

Hello rock stars!

I have a story to tell: the story of two transformers, heroes who changed my outlook on test code. Writing a bunch of test cases was *not* boring anymore.

Premise

The biggest problem I see with typical automated test code is that it is dumb and dry! It does not resonate. It does not connect with customers, because it is just a pile of syntax and assertions, while business folks are interested in semantics.

How about writing test cases focused on business scenarios? How about using the same domain language, the same vocabulary that business folks like to use, right in the test cases? How about treating test cases and the associated code as first-class citizens? Suddenly the appreciation level and interest in the test code increases. The tests start talking. This idea of writing test cases using the business vocabulary and then using them for quality control and go/no-go decision making is called acceptance testing, in my book.

Let's see how to get it done. The goal of this write-up is to introduce the tooling for getting acceptance tests done.

Breaking shackles

Optimus Prime

Let me introduce you to Optimus Prime (check out: SpecFlow). SpecFlow is Cucumber for .NET. Some of you, especially folks with a Java background, might know about Cucumber. SpecFlow is all about describing business rules and requirements in an English-like syntax that is extremely "readable". The idea is that business folks or PMs describe a business feature, which consists of multiple scenarios. Each scenario is then described in a structured manner using simple keywords like Given, When and Then.

Here is a simple example to demonstrate it. We all like to write validator classes that do some sort of validation, and then we write a bunch of test cases to make sure the response is as expected. Here is the simple validator class I will use for the demonstration.

public class ClientRequestIdValidator
{
    private readonly string clientRequestId;

    public ClientRequestIdValidator(string clientRequestId)
    {
        this.clientRequestId = clientRequestId;
    }

    public ValidatorResponse Validate()
    {
        var validatorResponse = new ValidatorResponse(string.Empty, string.Empty);

        if (string.IsNullOrEmpty(this.clientRequestId))
        {
            validatorResponse.ErrorCode = "1000";
            validatorResponse.ErrorMessage = "Client Request Id can not be empty or null value";
            return validatorResponse;
        }

        Guid clientRequestGuid;
        var isGuidParsingSuccessful = Guid.TryParse(this.clientRequestId, out clientRequestGuid);

        if (!isGuidParsingSuccessful)
        {
            validatorResponse.ErrorCode = "2000";
            validatorResponse.ErrorMessage = "Client Request Id must be a guid value";
            return validatorResponse;
        }

        if (clientRequestGuid.Equals(Guid.Empty))
        {
            validatorResponse.ErrorCode = "3000";
            validatorResponse.ErrorMessage = "Client Request Id can not be empty guid";
            return validatorResponse;
        }

        return validatorResponse;
    }
}

public class ValidatorResponse
{
    public ValidatorResponse(string errorCode, string errorMessage)
    {
        this.ErrorCode = errorCode;
        this.ErrorMessage = errorMessage;
    }
    public string ErrorCode { get; set; }
    public string ErrorMessage { get; set; }
} 

As you can see, there is no rocket science going on here: standard, simple C# code that validates a GUID value. Let's see how we can write test cases for it using SpecFlow. Ideally I should have written the test cases first! So sorry, Uncle Bob! 🙂

In SpecFlow, as in any other test framework, you put the world in a known state, change it by performing some action, and then assert that the changed state matches the expected state. We use the Given keyword to set the state, the When keyword to perform the action and the Then keyword to assert.

Assuming you have the SpecFlow extension for Visual Studio installed, let's add a SpecFlow Feature. A Feature in SpecFlow is a top-level entity; a feature can have multiple scenarios.

[Screenshot: adding a new SpecFlow Feature file in Visual Studio]

This is how a feature file looks like:

As shown below, Given initializes the Client Request Id, When invokes the validator that actually performs the validation, and Then asserts that the response code and response message are as expected. Using the keywords Scenario Outline and Examples we can set up a table of values to be used across multiple cases.

Feature: Validate
	Make sure that API call validates the client request id

@mytag
Scenario Outline: Validate Client Request Id
	Given Client Request Id provided is <ClientRequestId>
	When Client Request Id validated
	Then Error Code should be <ErrorCode>
	And Error Message should be <ErrorMessage>

Examples: 

| TestCase               | ClientRequestId                      | ErrorCode | ErrorMessage                                     |
| EmptyClientRequestId   |                                      | 1000      | Client Request Id can not be empty or null value |
| EmptyGuid              | 00000000-0000-0000-0000-000000000000 | 3000      | Client Request Id can not be empty guid          |
| InvalidClientRequestId | Blah                                 | 2000      | Client Request Id must be a guid value           |
| ValidClientRequestId   | 76E6B8C8-7D26-4A83-B7C5-A052B82B6A21 |           |                                                  |

Once the feature file is set, just right-click and use the Generate Step Definitions option as shown below; it generates the code-behind file for the feature file.

[Screenshot: the Generate Step Definitions option in the feature file context menu]

Make sure you have the SpecFlow NuGet package installed and correctly configured.

This is how the feature code behind looks like:

[Binding]
public class ValidateClientRequestIdSteps
{
    [Given(@"Client Request Id provided is (.*)")]
    public void GivenClientRequestIdProvidedIs(string clientRequestId)
    {
        ScenarioContext.Current["ClientRequestId"] = clientRequestId;
    }

    [When(@"Client Request Id validated")]
    public void WhenClientRequestIdValidated()
    {
        var validator = new ClientRequestIdValidator(
            ScenarioContext.Current["ClientRequestId"].ToString());
        var result = validator.Validate();
        ScenarioContext.Current["ValidationResult"] = result;
    }

    [Then(@"Error Code should be (.*)")]
    public void ThenErrorCodeShouldBe(string errorCode)
    {
        var validationResult = ScenarioContext.Current["ValidationResult"]
            as ValidatorResponse;
        validationResult.Should().NotBeNull();
        validationResult.ErrorCode.ShouldBeEquivalentTo(errorCode);
    }

    [Then(@"Error Message should be (.*)")]
    public void ThenErrorMessageShouldBe(string errorMessage)
    {
        var validationResult = ScenarioContext.Current["ValidationResult"]
            as ValidatorResponse;
        validationResult.ErrorMessage.ShouldBeEquivalentTo(errorMessage);
    }
}

You can then bind the SpecFlow-generated test cases to your favorite framework (NUnit, xUnit, MSTest, etc.) right from configuration and use your favorite test runner to execute them. In my case, I have the following lines in my app.config:

<specFlow>
  <unitTestProvider name="MsTest" />
</specFlow>

Bumblebee

The second hero I would like to introduce is Bumblebee (FluentAssertions). FluentAssertions, as the name suggests, is a nice and really intuitive way of writing assertions. With fluent assertions, the asserts look beautiful, natural and, most importantly, extremely readable. To add to the joy, when a test case fails, it tells you exactly why it failed with a nice descriptive message.

Make sure you pull in the FluentAssertions NuGet package to make it work.

[Screenshot: the FluentAssertions NuGet package in the package manager]

As you can see in the snippet below, I am asserting that the actual response is null or empty; in this scenario it throws an exception, since the response is neither null nor empty. It also throws an exception when actual and expected are not equal. You can explore more and have fun with it.

var actual = "actual";
var expected = "expected";

//Message displayed:Expected string to be <null> or empty, 
actual.Should().BeNullOrEmpty();

//Message displayed:Expected subject to be "expected"
actual.ShouldBeEquivalentTo(expected); 

Happy testing!

Power of startup tasks in Azure

Premise

Hello rock stars, ready for some hacking? Of course!

Some of us might know that Azure (or for that matter many cloud providers) presents two main implementation paradigms:

  • PaaS : Platform as a Service is more like renting a serviced apartment, which means the focus is on implementing requirements. Consumers need *not* think about procuring infrastructure, installing operating system patches, etc. Just zip up the package and give it to Azure, which then manages the show for you.
  • IaaS : Infrastructure as a Service is like renting a condo, which means that apart from implementing business requirements, it's the consumer's duty to maintain and manage the virtual infrastructure.

IaaS provides more power for sure, but remember what Uncle Ben told Peter Parker: with great power comes great responsibility! 🙂

The PaaS world is a little different, and especially if you are a dev-test-deploy kind of shop, it makes sense to invest in the PaaS paradigm. But then many people ask: if Microsoft is providing all the software and the platform, how do I run my custom software in Azure PaaS? How do I install a piece of software that Azure does not provide out of the box? Azure answers that question in the form of startup tasks.

The idea is that you run a scripted command, in the form of a *.bat or *.cmd file, in a bootstrap manner. So even if Azure recycles or reimages your machines, the startup task/script always makes sure things are correctly installed. It is a very powerful approach when you have a small task to perform or a tiny custom widget to install. E.g., you want to install an ISAPI extension so that you can run PHP on the hosted IIS, or you want to install a custom diagnostics monitor that records a stream of SLL events and does something with it.

Just to be clear, startup tasks are *not* meant for heavy-duty operations. E.g., if you want to install, say, an SAP instance on Azure, you should go for IaaS; startup tasks are not meant for that.

Breaking shackles

Scenario

We all might know that Azure offers a distributed Redis cache as a service. It is true Platform as a Service, wherein consumers need not worry about disaster recovery, business continuity planning, etc. But let's just say you want to run and consume your own Redis instance as a local cache inside an Azure web role. Let's see if we can pull it off using startup tasks.

MS Open Tech has forked Redis and ported it to Windows. We will be using those bits.

Step 1 : Building the correct file structure

Let's download the Redis bits for Windows from the MS Open Tech page. We need to add two files at the root of the role that will be used in a startup task.

  • InstallPatch.bat : You can name this file whatever you please. This is where we write the command that will be executed by the startup task.
  • Redis-x64.msi : This is the actual Redis Windows service installer (an MSI in this case) we are installing.

Make sure both these files are marked as "Copy to Output Directory". This ensures the two files are copied as part of the package.

[Screenshot: InstallPatch.bat and Redis-x64.msi at the root of the role]

Step 2 : Building a bootstrap command

Let's build a command that can be used to install the Redis Windows service (an .msi in this case) on the Azure VMs. The idea is to install this MSI in silent/quiet mode, since you do not want a GUI waiting for a user's action. After looking into the documentation (which gets downloaded along with the bits), here is how we can achieve it.

As you can see below, we can pass the /quiet switch to the msiexec command, which makes sure the installation happens silently. Step 1 already ensures the .bat and .msi files are copied into the package.

To get to the root of the role we can use an environment variable called %RoleRoot%. The PORT=6379 parameter makes sure the Redis server starts listening on port 6379 after installation.

msiexec /quiet /i %RoleRoot%\AppRoot\Redis-x64.msi PORT=6379 FIREWALL_ON=1

Step 3 : Encapsulating the command in a startup task

Now that the command is set up, let's see how to encapsulate it in a startup task. The good news is that it is configuration driven – the service definition configuration (csdef), to be specific. Insert the following config in your csdef file:

<Startup>
  <Task commandLine="InstallPatch.bat" executionContext="elevated" 
    taskType="background" />
</Startup>

The above config instructs the Azure role to run the InstallPatch.bat file in an elevated manner (as an admin) in the background. It executes the command we prepared and makes sure that Redis-x64.msi is installed correctly on the Azure VM. Apart from background, taskType can take the following two values:

  • foreground : foreground tasks are executed asynchronously, like background tasks. The key difference is that a foreground task keeps the role in a running state, which means the role cannot recycle until the foreground task completes or fails. If the task is background in nature, the role can be recycled even while the task is still running.
  • simple : a simple task is run synchronously, one at a time.

Step 4 : Using the Redis cache

If the startup task described above completes successfully, we have a Redis Windows service installed and listening on port 6379. Let's see how we can use it in an app.

Let's pull in a NuGet package called ServiceStack.Redis.Signed, which is a client library for consuming the Redis service. There are many Redis clients available for .NET; you can choose any one you like. We can do something like this:

using System;
using ServiceStack.Redis;

namespace WebRole1
{
    public partial class Index : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Creates a proxy to the Redis service listening on 127.0.0.1:6379
            var client = new RedisClient("127.0.0.1", 6379);

            var key = Guid.NewGuid().ToString();

            // Sets a key-value pair in the cache
            client.Set(key, Guid.NewGuid().ToString());

            // Gets the value from the cache
            var val = client.Get(key);

            Response.Write("Cache Value : " + val);
        }
    }
}

Once again, I am not advocating that we use a Redis service exactly as described above. We should really use the Azure Redis Cache service for any cache implementation; I used this approach just to demonstrate what we can achieve with startup tasks. It's a little hack, which I am proud of 🙂
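
For comparison, consuming the managed Azure Redis Cache is mostly a connection-string change on the client side. Here is a minimal sketch using the StackExchange.Redis client (a different client than the ServiceStack one used above); the cache name and access key are placeholders.

using StackExchange.Redis;

public static class ManagedRedisExample
{
    public static void Run()
    {
        // Placeholder cache name and access key; Azure Redis Cache exposes SSL on port 6380.
        var connection = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

        IDatabase cache = connection.GetDatabase();

        // Same get/set semantics as the self-hosted instance.
        cache.StringSet("greeting", "hello from managed Redis");
        string value = cache.StringGet("greeting");
    }
}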

Happy coding, hacking!

Azure worker role as a service

Premise

Hello rock stars, ready to try something crazy? OK, here you go. We know that Azure worker roles are meant to perform long-running, background tasks. The most popular implementation pattern is that the frontend website/service accepts requests, and then a worker role running in a separate process works through those requests asynchronously, as sketched below. It is a very powerful approach, especially since you can simply scale out and churn through millions of incoming requests.
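
To make that pattern concrete, a worker role's Run() method typically drains an Azure Storage queue along the lines of the sketch below; the configuration setting name, queue name and polling interval are illustrative.

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class QueueWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Illustrative storage setting and queue name.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("requests");
        queue.CreateIfNotExists();

        while (true)
        {
            var message = queue.GetMessage();
            if (message == null)
            {
                // Nothing queued by the frontend yet; back off briefly.
                Thread.Sleep(TimeSpan.FromSeconds(5));
                continue;
            }

            // The long-running work happens here, out of band from the frontend.
            ProcessRequest(message.AsString);
            queue.DeleteMessage(message);
        }
    }

    private void ProcessRequest(string payload)
    {
        // Actual background processing goes here.
    }
}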

One important point to note, though: worker roles do not have IIS. They can have endpoints, but they do not have IIS (or Hosted Web Core, for that matter). This means that although you can have endpoints, you cannot really have IIS-hosted services in worker roles; you need to self-host them.

Another angle that bothers many people is a worker role's testability. To test a worker role as a white box, you can just take the worker role dll/code and use hooks and mocks to test it as a unit. But there is no real easy story for testing a worker role as a black box from the outside.

Breaking Shackles

Here I am making a proposition: host the worker role code as a service, which can then be invoked from outside in a request-response manner. Again, this is a slightly crazy, out-of-the-box approach, so weigh it carefully and use it only if it fits the bill.

Defining input endpoint

Azure worker roles can have internal or input endpoints. To keep it simple: input endpoints are exposed externally, while internal endpoints are not. More about internal and input endpoints in some other post (or you can Google it). In this scenario, since we want the worker role code to be invoked from outside, we are going with an input endpoint.

As you can see below, a new HTTP input endpoint has been created that uses port 80.

[Screenshot: the worker role endpoint configuration showing an HTTP input endpoint named Endpoint1 on port 80]

Self hosting service

Now that we have an endpoint, we can host the service on it. Going back to the worker role discussion above: worker roles do not contain IIS, so we need to host this HTTP service without IIS. While I was thinking about it (and believe me, there are many ways of doing it), the cleanest approach was to use OWIN, or what we refer to as Katana in the Microsoft world. Katana is probably the simplest, most lightweight way to self-host an HTTP service. Again, I am not going into much Katana detail here.

Go ahead and pull the OWIN self-host NuGet packages (e.g., Microsoft.AspNet.WebApi.OwinSelfHost and its dependencies) into your worker role project.

As you can see in the code snippet below, we are using the endpoint (named Endpoint1) created in the step above. The best place to write the self-hosting code is the worker role's OnStart() method, which is the first thing executed after deployment. We are hosting a basic ASP.NET Web API style REST service using Katana.

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"];
        var baseUri = String.Format("{0}://{1}", endpoint.Protocol, endpoint.IPEndpoint);

        // Self-host the Web API service on the input endpoint using Katana.
        WebApp.Start<KatanaStartup>(new StartOptions(baseUri));

        return base.OnStart();
    }
}

public class KatanaStartup
{
    public void Configuration(IAppBuilder app)
    {
        var configuration = new HttpConfiguration();

        configuration.Routes.MapHttpRoute("Default", "{controller}/{id}",
            new { id = RouteParameter.Optional });

        app.UseWebApi(configuration);
    }
}

Once the service is hosted, it's just a matter of defining a controller that matches the URI template defined in the service host. Here is how my basic controller looks:

/// <summary>
/// Actual worker role code that gets called in the worker role's Run()
/// </summary>
public class Processor
{
    public bool Process(int value)
    {
        return true;
    }
}

/// <summary>
/// Service controller implementation that invokes the actual worker role code
/// </summary>
public class ProcessorController : ApiController
{
    public HttpResponseMessage Get()
    {
        try
        {
            var flag = new Processor().Process(12);
        }
        catch (Exception ex)
        {
            return new HttpResponseMessage()
            {
                Content = new StringContent(ex.Message + ex.StackTrace)
            };
        }

        return new HttpResponseMessage()
        {
            Content = new StringContent("Success!")
        };
    }
}

As shown in the above code snippet, the Processor class encapsulates the actual worker role logic that gets executed every time the worker role's Run() method is called. ProcessorController (forgive me for naming it ProcessorController 🙂) is the class that gets instantiated and invoked by the OWIN-hosted service. This class is nothing but a pass-through piece that ultimately invokes the actual worker role code. I have shown a basic Get implementation in which we respond with exception details in the failure case and a dummy string in the success case. You are encouraged to be creative, apply proper REST service patterns and practices, and pass parameters using a Post implementation.

Using service for testing

Go ahead and host it in Azure or in the local emulator, then try hitting the controller route (/processor in this example) over the input endpoint.

Now that the service is hosted, it can be invoked from outside to perform the black box testing as we were planning.
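
A black-box test against that endpoint can then be as simple as the sketch below (MSTest plus HttpClient; the base address is a placeholder for whatever your cloud service or the emulator exposes).

using System.Net;
using System.Net.Http;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ProcessorServiceBlackBoxTests
{
    // Placeholder address; use http://127.0.0.1 when running against the local emulator.
    private const string BaseAddress = "http://yourservice.cloudapp.net";

    [TestMethod]
    public void Get_Processor_ReturnsSuccess()
    {
        using (var client = new HttpClient())
        {
            // Hits the ProcessorController.Get() implementation shown above.
            var response = client.GetAsync(BaseAddress + "/processor").Result;
            var body = response.Content.ReadAsStringAsync().Result;

            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
            Assert.AreEqual("Success!", body);
        }
    }
}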

PS: I haven't given much thought to endpoint security here. I know it is extremely important, but that's not the focus of this post.

Service slow? Blame it on IIS…

Situation

Alright, here is the situation we were thrown into recently. Our bread-and-butter API got released. It went well initially, and then it intermittently started producing extraordinarily high response times. Everybody freaked out. We had tested the API for large, consistent load, and here it was not withstanding moderate, intermittent traffic.

Speculation

Our initial suspicion was that the cache (we are using the in-memory, old-fashioned System.Runtime.Caching) was getting invalidated every 30-odd minutes as an absolute expiry. But the instrumentation logs were *not* suggesting that; there was no fixed pattern. It was totally weird! After some more instrumentation and analysis we zeroed in on some heavy singleton objects. We use Ninject as our IoC container. We have a couple of objects created at startup and thrown into Ninject's singleton bucket to be used later during the app's lifetime. What we observed was that these objects were getting destroyed and recreated after a certain amount of time, and the creation of the objects was obviously taking a hit. We freaked out even more. Isn't the definition of a singleton to create the object only once? Or is it Ninject that is destroying the objects? What we found would surprise some of you. Ready for the magic?
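
For reference, the singletons were registered roughly like this (the type names below are illustrative, not our actual bindings). The detail that matters is that InSingletonScope() means one instance per process, so anything that tears the process or app pool down throws that expensive warm-up work away.

using Ninject;

public interface IReferenceDataProvider { }

public class ReferenceDataProvider : IReferenceDataProvider
{
    public ReferenceDataProvider()
    {
        // Imagine slow warm-up work here: loading reference data, building lookups, etc.
    }
}

public static class ContainerBootstrapper
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();

        // Created once at startup and reused for the lifetime of the process.
        kernel.Bind<IReferenceDataProvider>().To<ReferenceDataProvider>().InSingletonScope();

        return kernel;
    }
}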

Solution

While just browsing through the event logs, we found a warning in the System log (event source: WAS) about the application pool being shut down due to inactivity. In short:

The default idle timeout value is twenty minutes, which means your app pool is shut down after twenty minutes if it’s not being used. Some people want to change this, because it means their apps are a bit slow after twenty minutes of inactivity.

That got the ball rolling. Ah, so it was our good old buddy IIS and the app pool. Even if it's Azure and you want to call it Hosted Web Core, at the end of the day it is IIS. IIS-hosted websites and web services run under app pools, and app pools, the good babies that they are, tend to recycle 🙂 They recycle after 20 minutes (the default value) if kept idle.

We had 4 instances, and with the moderate traffic, the Azure load balancer was not routing requests to all instances consistently. So a couple of instances would go idle and the app pools on them would recycle. Later, when the Azure load balancer diverted traffic to those idle instances, the singleton objects that had been destroyed would get created again, and the incoming service call would pay the price of creating them, which is unfair!

To resolve this, we decided to do a couple of things. (This solution worked for us; I do not claim it applies to your scenarios as well, so if you want to try this at home, make sure to wear a helmet 🙂)

1. We had Akamai (or Traffic Manager) ping the service every 5-10 seconds. This way the app pool does not go idle and, hopefully, never recycles.

2. We added a startup task in the web role to set the IIS app pool idle timeout to 0, which means the app pool lives forever. Obviously, every now and then the Azure fabric controller recycles the machines themselves, but that's OK; I do not think we can prevent that.

Startup task

We added the following command as the startup task in the web role:

%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.idleTimeout:00:00:00