An invisible Azure Message

 

Azure queues have a notion of a lock duration: once a message is read from the queue, it's marked as invisible to other readers for a period of time, e.g. one minute.

image

Choosing the invisibility time is a trade-off between expected processing time and application recovery time.

When a message is dequeued, the application specifies the amount of time for which the message is invisible to workers dequeueing messages from the same queue. This time should be large enough to complete the operation specified by the queue message.

If the timeout is too large, the time it takes to finish processing the message suffers when there are failures. For example, if the invisibility time is set at 30 minutes and the application crashes after 10 minutes, the message cannot be picked up again for another 20 minutes.

If the invisibility time is too small, the message may become visible while a worker is still processing it. Multiple workers could then end up processing the same message, and one of them may not be able to delete the message from the queue (see the next section).

The application could address this as follows:

1. If the amount of time to process a message is predictable, set the invisibility timeout large enough so that a message can be completed within that time.

2. Sometimes the processing time for different types of messages varies significantly. In that case, use separate queues for different types of messages, so that the messages in each queue take a similar amount of time to process. An appropriate invisibility timeout can then be set for each queue.

3. Furthermore, ensure that the operations performed on the messages are idempotent and resumable. The following can be done to improve efficiency:

a. The processing should be stopped before the invisibility time is reached to avoid redundant work.

b. The work for a message can be done in small chunks, so a small invisibility time may be sufficient. That way, the next time the message is picked up from the queue after becoming visible again, the work can resume from where it left off.

4. Finally, if the message invisibility time is too short and too many dequeued messages are becoming visible before they can be deleted, the application may want to dynamically change the invisibility time it sets for new messages put onto the queues. This can be detected at the worker roles by counting how many message deletes fail because the message has become visible again; once a threshold is crossed, that can be communicated back to the front-end web roles so they can increase the invisibility time for new messages put onto the queue.
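As a sketch of points 2 and 3, a worker might dequeue with an invisibility window sized for one chunk of work. The names below assume the Windows Azure storage client library; the connection string, queue name and ProcessChunk helper are hypothetical:

```csharp
// Hypothetical worker step: dequeue with an invisibility window a
// little larger than the expected time for one chunk of work.
var account = CloudStorageAccount.Parse(connectionString); // placeholder
var queue = account.CreateCloudQueueClient().GetQueueReference("workitems");

var message = queue.GetMessage(TimeSpan.FromMinutes(2)); // invisibility timeout
if (message != null)
{
    ProcessChunk(message);        // idempotent, resumable chunk of work
    queue.DeleteMessage(message); // fails if the message became visible
                                  // again and was dequeued by another worker
}
```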

Manage the invisibility on the fly

The “Update Message” REST API is used to extend the lease period (aka visibility timeout) and/or update the message content. A worker that is processing a message can now determine the extra processing time it needs based on the content of the message. The lease period, specified in seconds, must be >= 0 and is relative to the current time; 0 makes the message immediately visible in the queue as a candidate for processing. The maximum lease period is 7 days. Note that the updated visibility timeout can go beyond the expiry time (or time to live) defined when the message was added to the queue, but the expiry time takes precedence and the message will be deleted from the queue at that time.
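Through the storage client library this maps onto an UpdateMessage call over the REST API; a sketch, where the queue variable and the 30-second values are assumptions for illustration:

```csharp
// Dequeue with a short initial lease.
var message = queue.GetMessage(TimeSpan.FromSeconds(30));

// Still busy near the end of the lease? Extend the visibility another
// 30 seconds, relative to now.
queue.UpdateMessage(message, TimeSpan.FromSeconds(30),
    MessageUpdateFields.Visibility);

// The content can be updated at the same time, e.g. to checkpoint
// progress so another worker could resume where we left off.
message.SetMessageContent("processed up to record 500"); // hypothetical payload
queue.UpdateMessage(message, TimeSpan.FromSeconds(30),
    MessageUpdateFields.Content | MessageUpdateFields.Visibility);
```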

Azure Service Bus

 

When communicating between roles in an Azure application we have a few options, including:

  • HTTP
  • TCP
  • Queues

While HTTP and TCP are tried and trusted, they come with limitations that queues help overcome.

In the last few months Microsoft has released the pub/sub Service Bus to the world. With a basic queue, each message is consumed by a single consumer; with subscription topics, multiple clients can consume the same message, and each subscription logically maintains its own queue of messages.
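A sketch of the difference, using the Microsoft.ServiceBus.Messaging API (the connection string, topic and subscription names are made up, and each subscription is assumed to already exist):

```csharp
var factory = MessagingFactory.CreateFromConnectionString(connectionString);

// One publisher sends to the topic...
var topic = factory.CreateTopicClient("orders");
topic.Send(new BrokeredMessage("order #42"));

// ...and every subscription gets its own copy of the message.
var billing = factory.CreateSubscriptionClient("orders", "billing");
var audit   = factory.CreateSubscriptionClient("orders", "audit");

var msg = billing.Receive(); // billing's copy
msg.Complete();              // removes it from billing's queue only;
                             // the audit subscription still sees the message
```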

image


The diagram above shows a typical communication between worker roles and web roles on the Azure platform.

As previously stated, this decoupling has several advantages over direct messaging.

Load Leveling

The load on a system can vary over time, while the amount of effort to process the mid-tier business logic remains somewhat constant. With the queue in place it's only necessary to have enough servers to handle the average load rather than the peak load, which can save money on the infrastructure that would otherwise be needed to handle peaks.

Temporal Decoupling

Queues decouple the messaging, effectively making it asynchronous: publishers and subscribers need not be online at the same time, as the Service Bus reliably stores the messages in the queue until the subscriber pulls them off and processes them. This allows different roles to be taken offline for maintenance etc.

Load Balancing

As load increases, more worker roles can be added to service the queue (e.g. an online toy shop around the Christmas period). The system ensures that only one worker role will process each message. And given that the worker roles pull the messages off the queue, they don't have to be running on the same infrastructure (Azure favours multiple low-powered roles over fewer high-powered ones).
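A sketch of such a worker-role loop, again with the Microsoft.ServiceBus.Messaging names (the queue name and ProcessOrder are hypothetical). Several instances of this loop can run side by side, and the queue delivers each message to only one of them:

```csharp
var client = QueueClient.CreateFromConnectionString(connectionString, "toy-orders");

while (true)
{
    // Peek-lock receive: the message is locked for us, not yet removed.
    var message = client.Receive(TimeSpan.FromSeconds(30));
    if (message == null) continue; // nothing to do right now

    try
    {
        ProcessOrder(message);  // the real work goes here
        message.Complete();     // success: remove from the queue
    }
    catch (Exception)
    {
        message.Abandon();      // release the lock so another worker can retry
    }
}
```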

image

Migrate a SQL Server DB to SQL Azure

 

Here’s one way to migrate your SQL Server database to the Azure platform.

1) Get the SQL Azure Migration Wizard http://sqlazuremw.codeplex.com/

image

2) Start the wizard and select SQL Database Migrate option

image

3) Select your source database

image

4) Choose the objects you wish to migrate (all in my case)

image

image

5) See the results and review the SQL Script if necessary.

image

6) Now we need SQL Azure in the cloud for the next part. Log into your http://windows.azure.com account (get a 3-month free trial if you don’t have one).

Select your Azure Server and create a new database.

image

7) You’ll be prompted to select where you want your server located if you don’t already have one.

image

 

image

8) Add some firewall rules to your database; you’ll need to do this to allow access for Microsoft Services and Visual Studio.

image

image

9) Now that you have a database in the cloud, continue with the migration wizard by selecting this database as the target.

image

 

image

 

image

10) That’s pretty much it. Hope these screenshots help someone out.

Azure Tools

 

This evening I decided I’d install the new Azure tools after watching the latest vids that have appeared.

I right-click on my MVC3 app and choose Add Windows Azure Deployment Project.

image

 

Then I hit F5 to run the project and I get an error:

Microsoft Visual Studio: Unable to find file DFUI.exe

Solution

 

In the 1.5 SDK there was a registry key that pointed to the emulator; with 1.6 this no longer exists, and Visual Studio looks for dfui.exe in a different location (use Process Monitor from Sysinternals to find out where).

image

Once you find where Visual Studio is looking for it, it’s a matter of copying the files in
C:\Program Files\Windows Azure Emulator\emulator\ to this location.

Try running your app now and it should work.

image

Synchronize your controllers when necessary

Earlier today I happened to lend a hand to a friend of mine who was experiencing a race condition in an ASP.NET MVC application; multithreading is to me like a rag to a bull.

Here’s the scenario: my friend was calling two web services using BeginXXX/EndXXX methods. Because her website was IO-bound she was correctly using an AsyncController.

She called a method to increment the outstanding operations count by 2, then proceeded to call:

service1.BeginGetValuations(v, ar => {
    AsyncManager.Parameters["valuations"] = service1.EndGetValuations(ar);
    AsyncManager.OutstandingOperations.Decrement();
}, null);

service2.BeginGetValuations(v, ar => {
    AsyncManager.Parameters["valuationsActual"] = service2.EndGetValuations(ar);
    AsyncManager.OutstandingOperations.Decrement();
}, null);

 

This looked pretty much OK, except that once in a while under load testing the valuationsActual parameter was null.
So what could be the cause? Well, it turned out that there was a race condition accessing the dictionary from two threads.

The solution:

Synchronize access to the Parameters. I first thought of doing this with a plain old lock, but I was worried about other accesses to the parameters from the framework itself, so I had a quick read of the documentation and it turns out that the AsyncManager has a Sync method.

 
service1.BeginGetValuations(v, ar => {
    AsyncManager.Sync(() => {
        AsyncManager.Parameters["valuations"] = service1.EndGetValuations(ar);
        AsyncManager.OutstandingOperations.Decrement();
    });
}, null);
    

Do the same for service2.
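For completeness, the second call wrapped the same way:

```csharp
service2.BeginGetValuations(v, ar => {
    AsyncManager.Sync(() => {
        AsyncManager.Parameters["valuationsActual"] = service2.EndGetValuations(ar);
        AsyncManager.OutstandingOperations.Decrement();
    });
}, null);
```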