Monday, January 31, 2011

The Jawbone: Icon for the dog, Era for the human

Last week, I was the proud owner of a most-excellent Bluetooth headset, the Jawbone ICON. Compared to my years-old Motorola H700, the Jawbone was simply outstanding. I had owned this beautiful piece of technology less than 3 months, and I was still enjoying its newness, along with its excellent audio quality.

Unfortunately, my puppy Nimbus felt that he, too, should have the privilege of playing with a Jawbone. As it turns out, Bluetooth headsets are not very durable as a chew-toy:

IMG_7464

In a desperate plea for empathy, I tweeted my Horrible Headset Happenstance to the world:

jawbone-tweet

The great folks at Jawbone Customer Support heard (ok, saw) my plea, along with the photo, and reached out to me. They told me they laughed when they saw my mutilated headset (and assured me they wept a bit, too). They assumed my dog must be really cute to be able to get away with something this mischievous and still live to bark about it. I assured them he was:

IMG_6389

Graciously, Customer Support offered to assist me in my replacement quest. I decided to eschew (no, not chew) another ICON, and upgrade to the brand-newest Jawbone: The ERA. And much to my surprise, it arrived in the mail today. I can’t imagine these photos do it justice, but I wanted to share my unboxing experience. First, the box itself:

IMG_7765

After staring at it for an undetermined amount of time, it was time to open it up:

IMG_7766

And peeling back the front cover revealed a cornucopia of ear-fitting goodness.

IMG_7769

I gently removed the ERA from its perch and gave it a full charge. It was time… Time for its maiden voyage. I called my wife, listening intently to the tonal quality of her phone ringing across the airwaves. She picked up, and I asked her to say something that I could quote to the world. “Anything,” I said. “Just lay it on me.” And she responded, so eloquently and with zero distortion:

“That’s what she said.”

Sunday, January 30, 2011

My BitLocker Moment of Panic

Back in October 2010, when I joined Microsoft, I received my shiny (well, matte) new laptop. Security is of paramount importance around here, and I had to enable BitLocker, included with Windows 7 Ultimate. The encryption process was actually painless. I saw minimal performance degradation, and the only (minor) annoyance was the boot workflow, which requires a PIN to be entered before Windows boots.

Fast-forward 2 months: I decided it was time for a performance boost, so I picked up a 256GB Kingston SSD drive. It came with a copy of Acronis’ disk-cloning software, which made the transfer extremely easy. I contacted Microsoft IT Tech Support before doing the transfer, to see if there were any known caveats; the only thing they suggested was removing encryption prior to cloning. So I did, and the cloning went smoothly. I was back up and running with my new drive in about 2 hours, with BitLocker re-applied.

So there I was, with my new SSD. I was set for Blazing Speed. After 2 weeks, I was only seeing a moderate improvement, certainly not worthy of the high cost of the drive (my disk performance index jumped from about 5.4 to around 5.7). So I went hunting for SSD optimization information. Thanks to a tweet by Brian Prince, I found a few good tips like disabling SuperFetch (which I did), updating the controller driver (which I did), and updating the BIOS (I held off).

So… what about the panic???

My performance index was now at 6.8, with a very noticeable performance improvement. But I wanted more, so I decided to update my BIOS. And that, my friends, did not go as planned.

The BIOS upgrade itself was easy, thanks to Lenovo’s updater tool. It then told me to reboot, which I did. I was taken to the BitLocker PIN Entry screen, and I entered my secret key.  But then… BitLocker told me that something on my computer changed since last booting, and that I needed to enter my recovery key. Oh, you mean that key on my USB drive? That key from when I encrypted my original drive?

Yes, that’s right, I had not backed up my new key after re-encrypting. At this point, I was unable to boot. I was, um, toast.

Recovery

I know what you’re thinking: Just restore from a backup. Easy enough, since I had one handy: my old drive, with a relatively current copy of the OS. And my working files were all backed up to offsite storage. However, I kept thinking there was Some Important File I hadn’t backed up.

On a whim, I decided to download the previous BIOS version from Lenovo. I created a bootable CD on another computer, and booted up. Interestingly, the DOS version of the BIOS updater gave a stern warning about updating BIOS firmware when BitLocker is enabled (the Windows version has no such warning). I down-rev’d, rebooted, and… just like magic, I was able to boot once again into my SSD.

Lessons Learned

Maybe you’re way smarter than me and will never make this mistake, but I thought it’d be worth pointing out the obvious anyway:

  • When encrypting with BitLocker, always create a recovery disk afterward.
  • When updating BIOS firmware, be sure to suspend BitLocker prior to the update (you don’t need to decrypt the drive; you just need to suspend protection); see the example commands after this list.
  • Prior to any type of system update when BitLocker is enabled, be sure to have a backup handy, just in case.
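
For reference, here’s a rough sketch of the suspend/resume dance using the built-in manage-bde tool, assuming an elevated command prompt and that C: is the BitLocker-protected OS drive:

    rem Print the key protectors, including the numerical recovery password - save it somewhere safe
    manage-bde -protectors -get C:

    rem Suspend BitLocker protection before flashing the BIOS
    rem (the drive stays encrypted; the key protectors are just temporarily disabled)
    manage-bde -protectors -disable C:

    rem ...update the BIOS and reboot...

    rem Resume protection afterward
    manage-bde -protectors -enable C: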

Thursday, January 20, 2011

Azure Bootcamp Pittsburgh January 2011: Show Notes

I had a fun time visiting Pittsburgh and presenting some Azure Goodness during Day 2 of the Azure Bootcamp! Thanks to Rich Dudley for inviting me – he organized a great event and the audience was very engaged. Rich, along with Scott Klein, presented the bulk of the material.

During my presentation, we discussed a few Azure-101 tips and hints. I did my best to capture them here. Please let me know if I missed any and I’ll update this post.

My presentation slide deck is here:

Configuration Setting Publisher

With Azure SDK v1.3, Web roles now run with full IIS, which means your web application runs in a separate process from your RoleEntryPoint. You’ll need to separately tell your web app to use the Azure configuration setting publisher so that CloudStorageAccount.FromConfigurationSetting() can resolve your connection strings. The easiest way to do this is in the Application_Start() method in global.asax:

        protected void Application_Start(object sender, EventArgs e)
        {
            CloudStorageAccount.SetConfigurationSettingPublisher(
                (configName, configSettingPublisher) =>
                {
                    var connectionString =
                        RoleEnvironment.GetConfigurationSettingValue(configName);
                    configSettingPublisher(connectionString);
                });
        }

You’ll need this when working with the Azure Platform Training Kit samples, such as the Guestbook demo we walked through.

Connection Strings

When creating your new Azure application, your default diagnostic connection strings point to the local simulation environment:

devstorage

When testing locally, this works fine. However, when you deploy to Azure, you’ll quickly discover that you have no local development storage, and must use “real” Azure storage. Don’t forget to change these settings prior to publishing! This is easily done through Visual Studio:

configurestorage
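
For reference, here’s roughly what that diagnostics setting looks like in ServiceConfiguration.cscfg before and after the switch; the account name and key below are placeholders for your own storage account, and the exact setting name depends on your SDK version:

    <!-- Local development storage: fine for local testing only -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="UseDevelopmentStorage=true" />

    <!-- "Real" Azure storage: required before publishing to the cloud -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />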

Queues and Poison Messages

We talked a bit about handling poison messages on a queue; that is, messages that consistently cause the processing code to fail. This could happen for any number of reasons, with no way to correct the problem while the role is running. If a message is failing to be processed simply because the GetMessage() visibility timeout wasn’t set long enough, the timeout can be adjusted the 2nd (or 3rd, or nth) time the message is read from the queue.

So, assuming the message truly cannot be processed, it needs to be removed from the main queue permanently and stored in a separate poison-message area to be evaluated later (maybe the developer downloads these messages and tries processing them locally in a debugger…). The typical pattern is to allow message processing to be retried a specific number of times (maybe 3 attempts), or to allow the message to live for a specific amount of time, before moving it to the poison area. This area could be another queue, an Azure table, etc.

Here’s a very simple example of a queue-processing loop that incorporates a poison message queue:

                while (true)
                {
                    var msg = queue.GetMessage(TimeSpan.FromMinutes(1));
                    if (msg == null)
                    {
                        Thread.Sleep(TimeSpan.FromSeconds(10));
                        continue;
                    }
                    if (msg.DequeueCount > 3)
                    {
                        poisonQueue.AddMessage(new CloudQueueMessage(msg.AsString));
                        queue.DeleteMessage(msg);
                        continue;
                    }

                    // process queue message normally

                    queue.DeleteMessage(msg);
                }

A question came up about queue processing time limits: a queue message’s visibility timeout (the amount of time a message stays hidden after being read) can be set anywhere from 30 seconds to two hours. See the MSDN documentation for more detail.
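
As a quick illustration (reusing the queue object from the example above), the processing window is just the visibility timeout you pass to GetMessage():

    // Hide the message from other consumers for the maximum allowed window (two hours)
    var msg = queue.GetMessage(TimeSpan.FromHours(2));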

Azure Tip: Consider Remote Desktop when Planning Role Endpoints

Azure roles offer the ability to configure endpoints for exposing things such as WCF services, web servers, and other services (such as MongoDB). Endpoints may be exposed externally or restricted to inter-role communication. External endpoints may be http, https, or tcp, and internal endpoints may be http or tcp.

Each role in a deployment can be configured to have up to 5 endpoints, in any combination of external and internal endpoints. For instance, you might configure a worker role with 3 input endpoints (meaning the outside world can reach these endpoints) and 2 internal endpoints (meaning only other role instances in your deployment can reach these endpoints). If you’re optimizing your Azure deployment for a cost-saving configuration, you might be combining multiple services into a single role.
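
To make that concrete, here’s a rough sketch of what such a worker role’s endpoint definitions might look like in ServiceDefinition.csdef; the names and ports are made up for illustration:

    <WorkerRole name="MyWorkerRole">
      <Endpoints>
        <!-- Input endpoints: reachable from the outside world -->
        <InputEndpoint name="HttpIn" protocol="http" port="80" />
        <InputEndpoint name="TcpIn1" protocol="tcp" port="10000" />
        <InputEndpoint name="TcpIn2" protocol="tcp" port="10001" />
        <!-- Internal endpoints: reachable only by other role instances in this deployment -->
        <InternalEndpoint name="InternalHttp" protocol="http" />
        <InternalEndpoint name="InternalTcp" protocol="tcp" />
      </Endpoints>
    </WorkerRole>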

This brings me to today’s tip: Consider Remote Desktop when planning role endpoints.

Remote Desktop is a new capability of Azure SDK v1.3. You now have the ability to RDP into any specific instance of your deployment. It might not seem obvious (it wasn’t to me until I received a deployment error), but Remote Desktop consumes an endpoint for its RemoteAccess module (leaving you with 4 to work with). If, say, you had 5 endpoints defined, and you then enabled Remote Desktop and attempted to deploy from Visual Studio, you’d see something like this:

deployment-error

What might seem even less obvious is that your Azure deployment requires an RDP RemoteForwarder module as well. This module is responsible for handling all RDP traffic and forwarding it to the appropriate role instance. Just like the RemoteAccess module, this forwarding module consumes an endpoint. But this doesn’t necessarily mean you’re down to 3 endpoints to work with, as your deployment only requires a single role to be designated as the forwarder. You can enable it on any role in your deployment, so you can move the forwarder to a role with fewer endpoints in use. Of course, if you don’t need more than 3 endpoints, you can leave the forwarder as-is.

You can see the settings for RemoteAccess and RemoteForwarder if you look at the Settings tab of your role’s properties:

rdp-settings
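
Under the covers, enabling Remote Desktop adds module imports to ServiceDefinition.csdef and matching settings to ServiceConfiguration.cscfg, roughly like this (RemoteAccess is imported on every role; RemoteForwarder on just the one role acting as the forwarder):

    <!-- ServiceDefinition.csdef -->
    <Imports>
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>

    <!-- ServiceConfiguration.cscfg -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />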

So… as you plan your Azure deployment, keep Remote Desktop in mind when working through your endpoint configurations. RDP is a very powerful debugging tool, and requires at least one endpoint on each of your roles (possibly two, especially if you only have one role defined).

Additional information

Jim O’Neil has this post detailing RDP configuration for the Azure@home project.