Thursday, January 20, 2011

Azure Bootcamp Pittsburgh January 2011: Show Notes

I had a fun time visiting Pittsburgh and presenting some Azure Goodness during Day 2 of the Azure Bootcamp! Thanks to Rich Dudley for inviting me – he organized a great event and the audience was very engaged. Rich, along with Scott Klein, presented the bulk of the material.

During my presentation, we discussed a few Azure-101 tips and hints. I did my best to capture them here. Please let me know if I missed any and I’ll update this post.

My presentation slide deck is here:

Configuration Setting Publisher

With Azure SDK v1.3, web roles now run under full IIS, which means your web application runs in a separate process from your RoleEntryPoint code; you'll need to separately tell the web app to use the Azure configuration setting publisher. The easiest way to do this is in the Application_Start() method in global.asax:

        protected void Application_Start(object sender, EventArgs e)
        {
            // Requires Microsoft.WindowsAzure and Microsoft.WindowsAzure.ServiceRuntime.
            // Tell CloudStorageAccount how to resolve named connection-string settings:
            CloudStorageAccount.SetConfigurationSettingPublisher(
                (configName, configSettingPublisher) =>
                {
                    // Pull the value from the role's service configuration
                    var connectionString =
                        RoleEnvironment.GetConfigurationSettingValue(configName);
                    configSettingPublisher(connectionString);
                });
        }


You’ll need this when working with the Azure Platform Training Kit samples, such as the Guestbook demo we walked through.
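Once the publisher is registered, calls that resolve storage settings by name will work from the web app. Here's a minimal sketch of what that looks like (the "DataConnectionString" setting name is just an example; use whatever setting your role defines):

        // Resolve a named setting via the publisher registered above
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

        // From here, create blob, table, or queue clients as usual:
        var queueClient = account.CreateCloudQueueClient();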


Connection Strings



When creating your new Azure application, your default diagnostic connection strings point to the local simulation environment:



[Screenshot: the default connection strings pointing at local development storage]



When testing locally, this works fine. However, when you deploy to Azure, you’ll quickly discover that you have no local development storage, and must use “real” Azure storage. Don’t forget to change these settings prior to publishing! This is easily done through Visual Studio:



[Screenshot: configuring the storage account connection string in Visual Studio]
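
For reference, here's roughly what the two flavors of connection string look like if you set up the storage account in code instead; the account name and key below are placeholders, not real values:

        // Local development storage (the simulation environment):
        var devAccount = CloudStorageAccount.DevelopmentStorageAccount;

        // A real Azure storage account (placeholder name/key -- use your own):
        var cloudAccount = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<your-key>");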



Queues and Poison Messages



We talked a bit about handling poison messages on a queue; that is, messages that consistently cause the processing code to fail. This can happen for any number of reasons, with no way to correct the problem while the role is running. If a message is failing simply because the GetMessage() timeout wasn't set long enough, you can use a longer timeout when the message is read a 2nd (or 3rd, or nth) time from the queue.



So, assuming the message truly cannot be processed, it needs to be removed from the main queue permanently and stored in a separate poison-message area to be evaluated later (maybe the developer downloads these messages and tries processing them locally in a debugger…). The typical pattern is to retry a message a specific number of times (maybe three attempts), or to let it live for a specific amount of time, before moving it to the poison area. This area could be another queue, an Azure table, etc.



Here’s a very simple example of a queue-processing loop that incorporates a poison message queue:



                while (true)
                {
                    // Ask for the next message, keeping it invisible to other
                    // readers for one minute while we process it
                    var msg = queue.GetMessage(TimeSpan.FromMinutes(1));

                    if (msg == null)
                    {
                        // Queue is empty - back off briefly before polling again
                        Thread.Sleep(TimeSpan.FromSeconds(10));
                        continue;
                    }

                    if (msg.DequeueCount > 3)
                    {
                        // Too many failed attempts - copy the message to the
                        // poison queue and remove it from the main queue
                        poisonQueue.AddMessage(new CloudQueueMessage(msg.AsString));
                        queue.DeleteMessage(msg);
                        continue;
                    }

                    // process queue message normally

                    queue.DeleteMessage(msg);
                }


A question came up about queue-processing time limits: a queue message's visibility timeout can be set from 30 seconds up to two hours. See the MSDN documentation for more detail.
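For example, for work that's known to run long, you can ask for the maximum up front (a quick sketch using the same queue variable as above):

                // Keep the message invisible to other readers for the full two-hour maximum
                var longRunningMsg = queue.GetMessage(TimeSpan.FromHours(2));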
