Curious case of Windows Phone

Scenario

I like Microsoft. I like their products. I am comfy and feel at home using their stuff. Call me a Microsoft fanboy, but I like Windows Phone as well. That's why, when I was planning to buy a new phone, I wanted to go for the shiny new flagship Windows Phone on the block. After some research and reading reviews, I zeroed in on the Nokia Lumia 930.

Like any other guy interested in buying a smartphone, I went to Best Buy hoping to touch and feel the device. While Best Buy had too many people jumping on the new iPhone 6 [on a side note: if you have so many people jumping on your phone, your device is going to bend for sure ;)], Microsoft's section was relatively empty. When I asked the attendant about the Nokia Lumia 930,

He said, "You mean the 920?"

I said, "No, I mean the 930, and I am looking forward to buying one."

He said, "Well, there is nothing called the Nokia Lumia 930." I google-binged it on my phone and showed him.

He seemed convinced and said, "Oh, you mean the Nokia Icon?"

I replied, "No, I mean the 930. I want to buy it unlocked, and I am not interested in buying the Verizon version."

Blue shirt replied, "We do not have one!"

Duh! So the shiny new flagship Nokia Lumia 930 was not available in the Best Buy store. Please note that the store I am talking about is in Redmond/Bellevue, where Microsoft is headquartered! I was ready to spend money, and the opportunity was denied.

Being a good citizen, I walked out of Best Buy and went to the Microsoft Store in Bellevue. The unlocked Lumia 930 was not on the shelves there either, and the folks there didn't know the estimated arrival date.

I was saddened. I was taken aback. How can you announce a device and not have it available on shelves for people to walk in and buy?

Issue

After thinking it over for a few days, I think I might know the reason. I might be wrong, but do not correct me. The reason could be exclusivity. Verizon got exclusive rights (at least in the early days of the 930, and maybe only in the US) and packaged it as the Nokia Icon. And they are keeping the unlocked 930 off the shelves so that other people cannot buy it. That's evil. Pure evil.

I still love Windows Phone and would still wait for the 930 to arrive unlocked. But maybe this is one of the reasons why Windows Phone is not doing as well as it should. The software is great. The hardware is great. The ecosystem is nice and improving as we speak. But what about marketing it? What about taking the extra effort to make sure devices are ready to be picked up by customers from various stores? What about retail? Maybe that's not Microsoft's forte, and they are losing the race in the last lap. Maybe.

Stepping out with WebJobs

A while back, the Microsoft Azure team announced the preview of WebJobs, and Hanselman blogged about it. Since then, I had always wanted to check out WebJobs, and I finally did recently. And... holy mother of god! :)

Premise

WebJobs are essentially cron jobs. For operations that need to be performed repetitively or on a schedule, a WebJob is a good proposition. WebJobs are packaged along with Azure Websites. The popular use case Microsoft is targeting is WebJobs as the backend for websites.

Let's say you have an e-commerce website and you are receiving tons of orders. For better scalability, your website (the front end) would submit those orders to, say, an Azure queue. You would then have WebJob(s) configured as the backend to read messages off the top of the queue and process them. Or you can have a nightly cron job that goes through the orders and sends a digest or suggestions in an email. Or you can have it do monthly invoicing. So all those backend jobs that you do not want to perform in the front end, for obvious reasons, can be done using WebJobs.

Even though WebJobs come packaged along with Azure Websites, nobody is stopping us from using them without websites. For doing certain operations repetitively, Azure PaaS already offers what is termed a Worker Role. But I personally find worker roles very ceremonial. They do make sense when you want to perform long-running, heavy-duty operations and you need horizontal/vertical scaling to do that. But for tiny, repetitive, loopy code blocks, or for doing certain things on a schedule, worker roles are expensive (time- and money-wise). Worker roles are powerful, no doubt, but with great power comes great responsibility. (Can't believe I just said that :)) Primarily what they offer is an infinite while loop, and then it's your responsibility to implement scheduling, triggers, etc. WebJobs are lightweight, deployment-friendly and provide an in-built mechanism to schedule stuff.
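For contrast, here is roughly the shape a worker role hands you out of the box; everything beyond the loop (scheduling, triggers, retries) is yours to build. This is just a sketch, and DoWork() is a hypothetical placeholder:

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // The canonical worker role shape: an infinite loop where
            // scheduling, triggers and error handling are all on you.
            while (true)
            {
                DoWork();                               // hypothetical work item
                Thread.Sleep(TimeSpan.FromSeconds(30)); // hand-rolled "schedule"
            }
        }

        private void DoWork()
        {
            // ... repetitive business logic goes here ...
        }
    }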

Implementing popular use cases using WebJobs

Conceptually, a WebJob (the way I understand them) is an infinite while loop that listens for an event or trigger. You can make this listener fire again and again continuously, or you can schedule it if that's the story you want to tell.

For all sorts of WebJobs development, the Azure WebJobs SDK and the respective NuGet packages need to be pulled in.

Here is an example of a WebJob implemented as a command line .NET exe. This one reads a message off the top of a storage queue and writes that entity to Azure table storage. Every time a new message is available in the queue, the ProcessQueueMessage() method is triggered automatically along with the respective trigger inputs.

 

[Screenshot: a queue-triggered WebJob implemented as a console app]
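A minimal sketch of what such a WebJob looks like with the WebJobs SDK. The queue name ("orders"), table name ("Orders") and the Order entity are all hypothetical, and the storage connection strings (AzureWebJobsStorage/AzureWebJobsDashboard) are assumed to be in the app config:

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.WindowsAzure.Storage.Table;

    // Hypothetical entity persisted to table storage.
    public class Order : TableEntity
    {
        public string Product { get; set; }
        public int Quantity { get; set; }
    }

    public class Program
    {
        // Triggered automatically whenever a new message lands on the
        // "orders" queue; the [Table] output binding appends the entity
        // to the "Orders" table.
        public static void ProcessQueueMessage(
            [QueueTrigger("orders")] Order order,
            [Table("Orders")] ICollector<Order> ordersTable)
        {
            order.PartitionKey = "orders";
            order.RowKey = Guid.NewGuid().ToString();
            ordersTable.Add(order);
        }

        static void Main()
        {
            // JobHost discovers the triggered methods and keeps listening.
            var host = new JobHost();
            host.RunAndBlock();
        }
    }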

The code block below shows another type of trigger, one that listens on a topic/subscription. As soon as a new message is delivered to the subscription, it picks up that brokered message and writes it to table storage.
 
[Screenshot: a topic/subscription-triggered WebJob]
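Again, a minimal sketch along the same lines as a separate console app, assuming the Microsoft.Azure.WebJobs.ServiceBus package, a Service Bus connection string in config, and hypothetical topic/subscription names:

    using Microsoft.Azure.WebJobs;
    using Microsoft.ServiceBus.Messaging;
    using Microsoft.WindowsAzure.Storage.Table;

    public class OrderEntity : TableEntity
    {
        public string Payload { get; set; }
    }

    public class Program
    {
        // Fires as soon as a message is delivered to the "orders"
        // subscription of the "orders-topic" topic.
        public static void ProcessTopicMessage(
            [ServiceBusTrigger("orders-topic", "orders")] BrokeredMessage message,
            [Table("Orders")] ICollector<OrderEntity> ordersTable)
        {
            ordersTable.Add(new OrderEntity
            {
                PartitionKey = "orders",
                RowKey = message.MessageId,
                Payload = message.GetBody<string>() // assumes a string payload
            });
        }

        static void Main()
        {
            var config = new JobHostConfiguration();
            config.UseServiceBus(); // wires up the Service Bus triggers
            new JobHost(config).RunAndBlock();
        }
    }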
 
In both examples, an instance of JobHost is created and run in Main(), which ensures the continuous or scheduled execution of the WebJob.
 

Deployment

Interestingly, WebJobs support multiple ways to deliver the package. (I prefer and like the command line .NET exe, but suit yourself.) Using the Azure management portal, you drop a file in a certain location on the IIS box and the rest is magic. The following file types are supported:

  • .cmd, .bat, .exe (including .NET console exes)
  • .sh
  • .php
  • .py
  • .js

Once the deployment is done, one can manage the WebJob: start, stop, delete, etc.

A new job can be added using the Website => WebJobs => Add a job switch.

[Screenshot: adding a new WebJob from the portal]

A WebJob can be set to run continuously or on a schedule.

[Screenshots: the run-continuously and run-on-schedule options]

The schedule can be as granular as your requirement demands. Interestingly, a WebJob can be configured as recurring or as a one-time deal as well.

[Screenshots: recurring and one-time schedule configuration]

A quick word of caution: WebJobs are deployed on the same IIS box as the website, so make sure you are not doing anything in them that hogs all the memory and CPU. The good news, though, is that they do not use IIS's thread pool. To scale WebJobs, you scale your website.

Feel free to ping me in case you need any help.

Importance of dead lettering

I am not sure "dead lettering" is a real word, but I have used it repeatedly in the post below. :)

In every messaging/queuing technology, there is a provision for a poison queue, or some mechanism to deal with bad messages. The idea is: if processing keeps faltering even after you have retried a message, move it aside and deal with it later. Microsoft Azure Service Bus topics and subscriptions have a solid provision to move such poison messages aside so that the next set of messages can be picked up for processing. In the Azure Service Bus world, it's called dead lettering the messages.
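Under the covers, every subscription gets a companion dead letter queue that is itself addressable, so dead lettered messages are parked rather than lost. A minimal sketch of reaching it with the .NET Service Bus SDK (topic/subscription names are hypothetical):

    using System;
    using Microsoft.ServiceBus.Messaging;

    static void PeekAtDeadLetters(string connectionString)
    {
        // The dead letter queue lives at a well-known sub-path of the subscription.
        string dlqPath = SubscriptionClient.FormatDeadLetterPath("orders-topic", "orders-sub");

        var factory = MessagingFactory.CreateFromConnectionString(connectionString);
        var receiver = factory.CreateMessageReceiver(dlqPath);

        BrokeredMessage poison = receiver.Receive();
        if (poison != null)
        {
            // The reason/description recorded at dead letter time ride along as properties.
            Console.WriteLine(poison.Properties["DeadLetterReason"]);
            poison.Complete();
        }
    }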

When and how should we dead letter a message?

There are many scenarios in which messages are moved to the dead letter queue implicitly, and some scenarios in which it's wise to move a message there explicitly.

As with all other Azure managed services, Service Bus throttles, as it needs to protect itself from denial of service (DoS) attacks. As a developer, it's your responsibility to have a retry mechanism and retry policy in place (recommended: an exponential backoff retry interval policy). One of the ways to implement it is by using the Delivery Count attribute on the message itself. The delivery count on a message gets incremented every time the message is read/de-queued off the top of a subscription. Max Delivery Count can be set on the subscription to make sure the retries do not happen indefinitely. Once the Max Delivery Count is reached, the retry attempts can be assumed to be exhausted and the message is moved to the dead letter queue. The following screenshot shows the Max Delivery Count property config for a subscription.

[Screenshot: the Max Delivery Count property of a subscription]
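The same can be set programmatically when creating the subscription. A sketch using NamespaceManager; the names are hypothetical, and the two Enable* flags are the implicit dead lettering switches discussed further below:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    static void CreateSubscriptionWithDeadLettering(string connectionString)
    {
        var ns = NamespaceManager.CreateFromConnectionString(connectionString);

        var description = new SubscriptionDescription("orders-topic", "orders-sub")
        {
            // After this many failed deliveries the message is dead
            // lettered automatically.
            MaxDeliveryCount = 10,

            // Dead letter implicitly on expiry and on filter evaluation errors.
            EnableDeadLetteringOnMessageExpiration = true,
            EnableDeadLetteringOnFilterEvaluationExceptions = true
        };

        if (!ns.SubscriptionExists("orders-topic", "orders-sub"))
            ns.CreateSubscription(description);
    }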

In some cases we should not retry multiple times, e.g. when the message itself is bad: required attributes are missing, or the message is not getting serialized correctly. In such non-transient cases, even if you retry multiple times, the processing is going to fail every time. In scenarios like this, we can explicitly dead letter the message and save some CPU cycles.

This is how you can move a message to the dead letter queue explicitly:

    if (message != null)
    {
        message.BrokeredMessage.DeadLetter(
            string.Format("DeadLetter Reason {0} Error message {1}",
                "SerializationException", ex.Message),
            ex.StackTrace);
    }

 

Important things to remember while dead lettering a message

As shown in the above snippet, make sure you specify the dead letter reason and the exception details, which obviously helps with debugging.

Also make sure the following two properties are enabled for the subscription: Enable Dead Lettering On Message Expiration and Enable Dead Lettering On Filter Evaluation Exceptions. They make sure messages are dead lettered implicitly if the filter evaluation goes wrong or the message expires.

[Screenshot: the dead lettering properties of a subscription]

Remember, dead letter messages will not be processed until something is done to them. The recommended practice here is to fix the messages (fill in the missing attributes, etc.) and move them back to the subscription. How to do that? That's for another blog post. :)

Azure Endpoint Monitoring

I came across this cool feature on the Windows Azure Management Portal called "Endpoint Monitoring". The feature is still in preview, but it's worth a shout-out. The Azure org has lately hit this nice momentum of releasing features one after the other. They initially release a feature as a preview, and once it stabilizes, once enough hands are dirty and the issues are ironed out, they GA it. This is the right way of releasing features to production, in my opinion.

Endpoint Monitoring, as the name suggests, monitors whether your web service/website endpoint is up or not. The idea is, you provide an endpoint in the Azure configuration, and Azure calls that endpoint back periodically and maintains a log of the results. The good thing is, you can make Azure call your endpoint from different datacenters across the geography. This helps in scenarios where the endpoint is up in, let's say, the Chicago datacenter but down in Dublin.

Endpoint monitoring lets you monitor the availability of HTTP or HTTPS endpoints from geo-distributed locations. You can test an endpoint from up to three geo-distributed locations at a time (for now). A monitoring test fails if the HTTP response code is greater than or equal to 400 or if the response takes more than 30 seconds. An endpoint is considered available if its monitoring tests succeed from all the specified locations.

In the Configuration tab of your service/website on the Azure management portal, you will see the Endpoint Monitoring section.

 

[Screenshot: the Endpoint Monitoring section of the Configuration tab]

As you can see above, two endpoints have been configured, called Ping-Chicago and Ping-Dublin. This means whichever endpoint you provide there will be called back periodically from Chicago and Dublin.

The results of the endpoint monitoring are shown on the Dashboard as below:

[Screenshot: endpoint monitoring results on the Dashboard]

The detailed log can be found by clicking on the endpoint hyperlink.

[Screenshot: the detailed endpoint monitoring log]

A typical ping endpoint should ping all the dependencies the service relies on. E.g. if the service uses a SQL Azure database, Azure Storage, etc., then your ping endpoint should call these dependent endpoints and return HTTP 200 if all is good and HTTP 500 if not. Here is simple code that can be used in your SOAP/REST service as the ping method.

[Screenshot: a sample ping method]
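A minimal sketch of such a ping method as an ASP.NET Web API controller. The connection string names ("SqlDb", "Storage") and the "ping" container are hypothetical:

    using System;
    using System.Configuration;
    using System.Data.SqlClient;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;
    using Microsoft.WindowsAzure.Storage;

    public class PingController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage Get()
        {
            try
            {
                // Dependency 1: SQL Azure. Opening a connection is enough
                // to prove reachability.
                using (var conn = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["SqlDb"].ConnectionString))
                {
                    conn.Open();
                }

                // Dependency 2: Azure Storage.
                var account = CloudStorageAccount.Parse(
                    ConfigurationManager.AppSettings["Storage"]);
                account.CreateCloudBlobClient().GetContainerReference("ping").Exists();

                return Request.CreateResponse(HttpStatusCode.OK, "Healthy");
            }
            catch (Exception ex)
            {
                // Any failed dependency surfaces as HTTP 500, which endpoint
                // monitoring counts as "down".
                return Request.CreateErrorResponse(
                    HttpStatusCode.InternalServerError, ex.Message);
            }
        }
    }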

Know it. Learn it. Love it.

 

Monitoring Azure deployed services using NewRelic

Windows Azure is becoming the de facto platform for distributing applications and services built on the Microsoft stack. With the platform gaining momentum and tons of new features getting added every three weeks, one dimension cannot be missed: maintaining these apps and services. For IT folks, it's a total paradigm shift. Laying down the infrastructure topology, deployment, formulating a disaster recovery strategy, etc. are a few examples. Here we are going to discuss only the monitoring aspect.

Application monitoring can be defined as the processing and use of related IT tools to detect, diagnose, remediate and report on an application's performance, to ensure that it meets or exceeds end users' and businesses' expectations.

There are two basic methods by which application performance can be assessed.

  1. Measuring the resources used by the application
  2. Measuring the response time of applications

The most common aspect from an operations perspective is the monitoring of all components and services, and the ability to troubleshoot when things go wrong. This typically involves monitoring various performance counters, monitoring log files for alerts, monitoring the availability of the application, and the ability to configure automated alerts for all these monitoring activities.

With Azure deployed applications, the applications/services need to be enabled in a certain way so that they can be monitored by the operations team. For health monitoring of such applications/services, the operations team can follow the following strategies:

Inside-out monitoring

If the application and services have logging, tracing and performance counters embedded directly in the code, then we can use this strategy. The code reports all the health and performance counters, which the Windows Azure Diagnostics monitor can then collect and report.
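For instance, with the classic cloud service diagnostics API, a role can register its performance counters in OnStart() so the Windows Azure Diagnostics monitor picks them up. A sketch; the counter and transfer period here are just examples:

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Sample CPU usage every 30 seconds...
            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });

            // ...and ship the samples to storage every minute.
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

            return base.OnStart();
        }
    }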

This can also be achieved using a health dashboard page that only the operations team is authorized to view. This page or component makes dummy service calls to all the services and components at a scheduled frequency and displays success/failure for each. It can ping the database (SQL Azure) and external endpoints (e.g. endpoints exposed on Service Bus, Storage, etc.) and publish the responses on the page.

Outside-in monitoring

This means that the application itself does not take care of reporting health and performance; it's the duty of an external component. In this technique, you need a piece of software (an agent) that gets deployed along with the package, runs as a low-priority daemon, and reports health and performance to a remote server. The agent looks at the deployed code and "reflects" the status dynamically.

My vote will always be with the outside-in strategy. It gives you more flexibility and achieves what's popularly known as separation of concerns: separating the actual product code from the monitoring code. E.g. if the Ops team wants to change, let's say, some perf counter, they do not need to touch the actual code for that.

I came across this awesome tool/service in the Windows Azure Store called New Relic. New Relic can be used to monitor Web Sites, Web/Worker Roles and VMs. The kind of statistics it produces, with a large degree of simplicity, is just breathtaking. New Relic is an all-in-one web application performance tool that lets you see performance from the end-user experience, through servers, and down to the line of app code. As far as I know, New Relic is the only SaaS app monitoring tool that monitors the entire app using a single product and a single UI. It also gives you a beautiful real-time performance dashboard that can be accessed anytime, anywhere.

In the Windows Azure portal, if you go to New -> Store -> Choose an Add-on, you will see the New Relic add-on. This can be used to create a New Relic endpoint.

[Screenshot: the New Relic add-on in the Azure Store]

[Screenshot: New Relic plans]

As you can see, New Relic has multiple plans to choose from. The paid plans offer more detailed diagnostics and data; visit their website for more details on the plans. For the tutorial's sake, I am sticking to Standard (Free), which is more than sufficient. Go ahead and provide a name for your New Relic endpoint, which creates the endpoint.

[Screenshot: creating the New Relic endpoint]

If you go to your dashboard, it's going to show as follows. Click on the endpoint name to see more details.

[Screenshot: the New Relic endpoint on the dashboard]

Clicking on Connection Info will give you the API/license key. Save it; we are going to need it.

[Screenshot: the Connection Info dialog with the API/license key]

In Visual Studio, go to your Azure Web Site or Web Role project. Open the NuGet Package Manager window and install the NewRelicWindowsAzure package. You will be prompted for authorization; provide the API key we got in the above step. After the setup is done, deploy the application/website.

[Screenshot: installing the NewRelicWindowsAzure package]

Go back to the New Relic endpoint and click on "Manage" to navigate to the New Relic dashboard for your application.

[Screenshot: the Manage link for the New Relic endpoint]

If things are deployed correctly and successfully, you will see various beautiful graphs like the following:

[Screenshots: New Relic performance dashboards]

Azure Pack – my 20 cents

A couple of years back, Microsoft started talking about the "mystical" Azure Appliance, and the entire community thought they had finally found the Holy Grail solution for running their services/applications securely on the cloud. The idea was that Microsoft would give you all the Windows Azure goodness in "one box", and enterprises could run their stuff from behind their firewalls using all the awesome things that Azure provides. The Azure Appliance never happened. Nobody knows why.

Meanwhile Scott Guthrie, the rockstar performer, was asked to govern and own the Windows Azure organization. Scott, with his technical brilliance and his roots firmly planted in the community, completely shifted the momentum, and Windows Azure became this cool, "open" technology. ScottGu quickly read the pulse and got a number of features onboarded one after another, one of the most important being a huge release called Infrastructure as a Service (IaaS). Suddenly Microsoft was talking about the Cloud OS. The Cloud OS has been Microsoft's vision of a simple, uniform cloud platform for everybody, providing a clear interface for public cloud as well as enterprise folks to get quickly onboarded and use the power of the cloud. OS no bar. Technology no bar. Just plain cloud power. Well, Windows Azure was always a complete (sorta ;)) solution for folks who want to deploy their apps/services in the public domain. But in the light of fierce competition, a price war and the failed attempt to address the enterprise cloud (the Appliance), it became essential for Microsoft to provide a consistent story for enterprises as well. Microsoft always had Hyper-V and System Center for enterprise people, but how could this infrastructure use Windows Azure's appeal and goodness? In comes Azure Pack.

Azure Pack, if I go by the definition, is a technology that runs inside enterprise datacenters, behind the firewall, and provides self-served, multi-tenant services like Windows Azure. The community starts dreaming again. Private cloud in a box? The Holy Grail? The Azure Appliance reborn? Well, not really; let me explain.

Azure Pack is far from a "private cloud in a box". A better description would be: a Windows Azure-like management portal and management API in front of your existing Hyper-V/System Center infrastructure. It has absolutely nothing to do with the Azure Appliance. So Azure Pack works as a wrapper on your Hyper-V with a better user experience. But it definitely becomes a critical piece of the Cloud OS jigsaw puzzle, as it enables enterprises to put a very nice Azure-like portal/interface in front of their private cloud infrastructure.

The fact that you can use your in-house infrastructure to scale your enterprise with VMs, Web Sites and Service Bus is really exciting. That the same administrators and developers can build and distribute code very securely without any special training gives me goose bumps. Azure Pack definitely opens up tons of new business scenarios; e.g. now you can create and distribute "well-defined" VM templates for your enterprise very seamlessly from the Gallery.

As of now, Azure Pack comes with the following stuff:

  • Management portal for tenants

A Windows Azure-like self-service portal experience for provisioning, monitoring and managing services such as Web Sites, Virtual Machines and Service Bus.

  • Management portal for admins

A portal for admins to configure and manage resources, user accounts, tenants and billing.

  • Service management API

All the management stuff is exposed through neatly designed APIs, so anything and everything can be scripted.

  • Web Sites
  • VMs
  • Service Bus

Nobody knows how tough it's going to be to set up Azure Pack in a datacenter. The pricing model is not defined yet either, so I can't comment on its success yet. The exciting thing for me as a Windows Azure developer is that there are more avenues to go and implement stuff now. Hopefully this release gives more and more enterprises (especially in the banking and healthcare sectors) the opportunity to go and taste Windows Azure. Amen!

DevOps – my 20 cents

Context

Before going into DevOps details, let's think about how business requirements are delivered these days. Many organizations have understood the importance of delivering features in multiple small sprints instead of one big monolithic waterfall block, and practically every organization is practicing some version of an "agile" methodology (XP, Kanban and so on). This means teams get requirements in the form of user stories that traverse to production from the development team to the test team and, at the end, the Ops team. Nothing wrong with this methodology really; it works OK sprint after sprint. But in the fast-paced world we live in, what if critical business requirements pour in at a brisk pace and need to be deployed to production immediately? The version of agile we use probably might not scale.

We know and exercise "Continuous Integration", which means any piece of code being checked in has to be rock solid and is tested automatically every time. Now organizations (especially the ones who use some sort of virtualized environment to deploy their services/apps) have gone a little further and implemented what is known as "Continuous Delivery". This means every check-in is not only run against the test suite on the build server, but gets deployed to staging and production as well. Many organizations do multiple deployments to production on any particular day. This becomes possible when everything your team does is very well automated, planned and scripted, and all your engineering departments collaborate with each other.

E.g. Facebook does multiple deployments a day. The way they have implemented it, any feature the development team commits is tested rigorously against automated test cases, locally and in staging, before being exposed to production. Obviously the feature is not enabled for the entire audience directly. Using traffic routing in the load balancer, partial traffic is routed to the new feature, which yields performance numbers and user feedback. Once confident, they make the feature available globally to all users. This concept is known as testing in production, which is the topic of my next post. All of this, obviously, is done with precise planning, scripting and automation.

What is DevOps?

DevOps, in my mind, is the way continuous delivery/deployment is achieved: by making development and operations/IT folks talk with each other, collaborate with each other and, last but not least, trust each other. Just as test engineers collaborate with dev teams and start writing test cases from day one, with the number of deployments happening to the cloud these days, the time has come to invite the Ops team to design reviews and get them working from day one.

Applications/services getting deployed to the cloud differ from traditional deployments in many ways. Logging/monitoring, SCOM packs and alerting, security, scalability and availability are the buzzwords of cloud deployments. E.g. for logging and monitoring, the dev team cannot afford to say that it's Ops' headache and a low-priority task for them. Logging and monitoring have to be baked into the code from day one. Your services and apps need to be secure from day one, because once your feature is checked in, it might be deployed to production straight away. The DevOps practice essentially tears down the walls between the Dev and Ops teams. Both teams work collaboratively on planning, preparing environments, configuring security, writing the SCOM packs, etc. To implement DevOps, everything, including integration tests, BVTs, smoke tests, environments, configuration and deployments, needs to be scripted and automated.

Delivery methodologies traditionally used by organizations, with their classic setup of separate teams for Dev, Test and Ops, did not have deep cross-team integration. DevOps promotes a suite of common processes, methods and tools for communication and collaboration between teams. DevOps really is a collaborative working relationship between the development team and the operations team.

Conclusion

Traditionally, devs target a set of goals like architecture/implementation, while the Ops team worries about availability and stability. It was designed that way with separation of concerns in mind. With DevOps, these two teams no longer have two separate sets of goals. The dev team needs to care about the availability of the service in production, and the Ops team needs to care about how security has been implemented. Like I said before, it's about tearing the walls down, or at least making holes in the walls, while marching towards a common set of goals. In fact, many organizations have started hiring developers with some sort of IT/Ops background. Looking at the cloud computing boom, the day is not far off when there will not be two separate departments for engineering and IT.

To conclude, let me try to define DevOps one more time! In my mind, it's a relationship that increases overall delivery efficiency and reduces the production risk associated with frequent releases.