Sunday, October 16, 2011

New Role, Same Great Windows Azure Goodness

It’s October already! This means I’m now three months late posting this little nugget. In July, I changed roles at Microsoft. I’m still on the Windows Azure ISV Incubation team, but I’m no longer a member of the US East Region sub-team; I now have a broader position covering architecture and guidance across our worldwide team. Last year, as an Architect Evangelist (AE), I worked with specific ISVs within my region (the mid-Atlantic states), providing architectural and technical guidance. This year, I’m still focused on Windows Azure application architecture for our ISVs; however, I no longer cover a specific region. I’ll be providing architectural guidance for several key ISVs, as well as supporting our illustrious team of AEs worldwide.

One interesting side-effect of the new position: I’ve had the opportunity to broaden my view of opportunities and challenges when going beyond the US border. There are ISV challenges, such as data sovereignty. There are team dynamics differences within our organization, as roles vary in responsibility depending on region. There are differences in the general cloud computing landscape, where a particular cloud vendor has a strong presence in one region but little to no presence in another.

My new team hosts a weekly internal “power hour” Live Meeting for our Windows Azure architects worldwide, where we discuss challenges in building or migrating our customers’ applications and dive into the details of new or updated Windows Azure features. Three power hours, actually: one for the Americas, one for Europe, and one for Asia. I lead the Americas and Europe power hours. I plan to blog a few tips & tricks based on some of these discussions (the non-NDA stuff, of course…).

I’ve been traveling quite a bit over the past few months, with several trips to Seattle and one to Europe. As I write this post, I’m traveling yet again, only this time for a much-needed vacation with my wife (who’s shown considerable patience with me!). Destination: Aruba. Time for a bit of R&R…

Wednesday, June 29, 2011

My team is hiring–Work with ISVs and Windows Azure!

I work with a really cool team at Microsoft. Our charter is to help Independent Software Vendors (ISVs) build or migrate applications to the Windows Azure platform. We’re growing, and looking to fill two specific roles in the US:

Architect Evangelist (DC area, Silicon Valley area). 80% technical, 20% business development experience. Excerpt from the job description:

“You will be responsible for identifying, driving, and closing Azure ISV opportunities in your region. You will help these partners bring their cloud applications to market by providing architectural guidance and deep technical support during the ISV’s evaluation, development, and deployment of the Azure services. This position entails evangelizing the Windows Azure Platform to ISV partners, removing roadblocks to their deployment, and driving partner satisfaction.”

Platform Strategy Advisor (New York area).  50% technical, 50% business development experience. Excerpt from the job description:

“You will be responsible for identifying, driving, and closing Azure ISV opportunities in your region. You will help these partners bring their cloud applications to market by providing business model and architectural guidance and supporting them through their development and go-to-market activities. This position entails evangelizing the Windows Azure Platform to ISV partners, removing roadblocks to their deployment, and driving partner satisfaction.”

Both of these roles are work-from-home positions, with onsite customer visits and additional travel as necessary.

Job postings are on LinkedIn and the Microsoft Careers site.

I’ll post an update as soon as the DC-area position is posted (should be later today).

Wednesday, May 25, 2011

Why did the turtle cross the road?

This morning, I was prepping a camera for my wife to take on our daughter’s field trip. This meant that I was surrounded by camera bodies, lenses, and a big cup of coffee.

I happened to look out the front window just as two cars stopped in front of my house. In the middle of the road sat what looked like a black, flattened basketball. A basketball with a head sticking out.

Reaching for the nearest camera+lens combo (and almost knocking my coffee cup over), I bolted outside to get a closer look:



Its shell looked to be about 15 inches long, and its nails were in need of a mani-pedi.


I had time for one more close-up before another car drove up:


So… why did the turtle cross the road? To remind me that there’s an entire world around me that doesn’t involve staring at a computer screen.

Time to go back to work now, staring at the computer screen…

Tuesday, May 17, 2011

Windows Azure Tip: Go Beyond 5 Endpoints per Role, beyond 25 per Deployment

A few months ago, I blogged about the impact Remote Desktop has on your Windows Azure Deployment. The basic premise was simple: Remote Desktop consumes one endpoint on each of your roles. And, if you only had one role in your deployment, you’d actually lose two endpoints on your role, because of the Remote Desktop Forwarder. Given the restriction of 5 endpoints per role, having only three usable endpoints could be limiting if, say, you were trying to host a public-facing website (port 80), secure website (port 443), and a few WCF services (port 8000, and then… what???), all in a single role.

This brings me to today’s tip: Go beyond 5 endpoints in a role

The way deployments were originally set up, there was a maximum of 25 total endpoints:

  • Five total roles per deployment
  • Five endpoints per role

While there’s still an endpoint total in effect, there’s been a subtle change to role definitions, which went into effect sometime in March. In fact, if you look at the What’s New in Windows Azure MSDN Library page, you’ll see the change mentioned under the March 31, 2011 update summary:

A recent update has changed the manner in which endpoints can be distributed among roles in a hosted service. A service can now have a total of 25 input endpoints which can be allocated across the 5 roles allowed in a service. For example, you can allocate 5 input endpoints per role or you can allocate 25 input endpoints to a single role. Internal endpoints are limited to 5 per role. Input and Internal endpoints are allocated separately.

For internal endpoints (meaning inter-role communication only), nothing changes – only five per role. However, with input endpoints (meaning public-facing ports), you can divide the 25 up any way you want across your roles.
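As a sketch of what this split looks like in practice (the role and endpoint names here are hypothetical, not from a real project), input and internal endpoints are declared side by side in your ServiceDefinition.csdef:

```xml
<WebRole name="WebRole1">
  <Endpoints>
    <!-- Input endpoints are public-facing; the pool of 25 is shared across all roles -->
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
    <InputEndpoint name="WcfIn"  protocol="tcp"  port="8000" />
    <!-- Internal endpoints (inter-role traffic only) are still capped at 5 per role -->
    <InternalEndpoint name="InternalSvc" protocol="tcp" />
  </Endpoints>
</WebRole>
```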

Just for fun, I wanted to see what would happen if I went with all 25 input endpoints, along with five internal endpoints per role, across 5 roles.

Here’s part of my web role’s endpoint definition, with 25 input endpoints and 5 internal endpoints.


I added 4 more roles (all worker roles), with 5 internal endpoints apiece:


I published this to Windows Azure and was able to see my website on each port. The default.aspx page shows the total endpoint count for my web role, along with the port number for Endpoint15, which is read from the role environment with this simple code:

txtInstanceCount.Text = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints.Count.ToString();
txtEndpoint15.Text = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["Endpoint15"].IPEndpoint.Port.ToString();

I also enumerated the total endpoint count across the entire deployment.

The output shows that these ports, indeed, exist and are active. Take a good look at my total endpoint count!


What this shows is:

  • 25 input endpoints on my web role, plus 5 internal endpoints on the same role

  • 5 internal endpoints each on the 4 worker roles, adding another 20 endpoints.

  • A grand total of 50 endpoints in my deployment!

If you’ve ever tried publishing to Windows Azure with more than 5 endpoints per role, you received a deployment error. This app published with no issues. Here’s a snapshot of the portal, with everything humming along:


Disclaimer: I did not wire up all the endpoints to services. I’m assuming everything will work, since it published with no errors. Feel free to disprove this.

So, as you’re planning your Windows Azure role usage, you can breathe a bit easier if you were bumping into endpoint limitations. Enjoy!

Sunday, February 13, 2011

Windows Azure Tip: Overload your Web Role

Recently, I blogged about endpoint usage when using Remote Desktop with Windows Azure 1.3. The gist was that, even though roles support up to five endpoints, Remote Desktop consumes one of those endpoints, and an additional endpoint is required for the Remote Desktop forwarder (this endpoint may be on any of your roles, so you can move it to any role definition).

To create the demo for the RDP tip, I created a simple Web Role with a handful of endpoints defined, to demonstrate the error seen when going beyond 5 total endpoints. The key detail here is that my demo was based on a Web Role. Why is this significant???

This brings me to today’s tip: Overload your Web Role.

First, a quick bit of history is in order. Prior to Windows Azure 1.3, there was an interesting limit related to role definitions. The Worker Role supported up to 5 endpoints, in any mix of input and internal endpoints. Input endpoints are public-facing, while internal endpoints are accessible only by role instances in your deployment. Input endpoints supported http, https, and tcp; internal endpoints supported http and tcp.

However, the Web Role, while also supporting 5 total endpoints, only supported two input endpoints: one http and one https. Because of this limitation, if your deployment required any additional externally-facing services (for example, a WCF endpoint), you’d need a Web Role for the customer-facing web application and a Worker Role for additional service hosting. When considering a live deployment taking advantage of Windows Azure’s SLA (which requires 2 instances of a role), this equates to a minimum of 4 instances: 2 Web Role instances and 2 Worker Role instances (though if your worker role is processing lower-priority background tasks, it might be ok to maintain a single instance).

With Windows Azure 1.3, the Web Role endpoint restriction no longer exists. You may now define endpoints any way you see fit, just like with a Worker Role. This is a significant enhancement, especially when building low-volume web sites. Let’s say you had a hypothetical deployment scenario with the following moving parts:

  • Customer-facing website (http port)
  • Management website (https port)
  • sftp server for file uploads (tcp port)
  • MongoDB (or other) database server (tcp port)
  • WCF service stack (tcp port)
  • Some background processing tasks that work asynchronously off an Azure queue

Let’s further assume that your application’s traffic is relatively light, and that combining all these services still provides an acceptable user experience. With Windows Azure 1.3, you can now run all of these moving parts within a single Web Role. This is easily configurable in the role’s property page, on the Endpoints tab:
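In csdef terms, the hypothetical deployment above might look roughly like the fragment below (ports and names are illustrative; the https management site would also need a certificate attribute, omitted here for brevity, and the queue-driven background tasks need no endpoint at all):

```xml
<WebRole name="OverloadedWebRole">
  <Endpoints>
    <InputEndpoint name="CustomerSite" protocol="http" port="80" />   <!-- customer-facing website -->
    <InputEndpoint name="SftpUploads"  protocol="tcp"  port="22" />   <!-- sftp file uploads -->
    <InputEndpoint name="WcfServices"  protocol="tcp"  port="8000" /> <!-- WCF service stack -->
    <!-- database reachable only from within the deployment -->
    <InternalEndpoint name="MongoDB" protocol="tcp" />
  </Endpoints>
</WebRole>
```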


Your minimum usage footprint is now 2 instances! And if you felt like living on the wild side and forgoing SLA peace-of-mind, you could drop this to a single instance and accept the fact that your application will have periodic downtime (for OS updates, hardware failure/recovery, etc.).


This example might seem a bit extreme, as I’m loading quite a bit into a single VM. If traffic spikes, I’ll need to scale out to multiple instances, which scales all of these services together. This is probably not an ideal model for a high-volume site, as you’ll want the ability to scale different parts of your system independently (for instance, scaling out your customer-facing web while leaving your background processes scaled back).

Don’t forget about Remote Desktop: If you plan on having an RDP connection to your overloaded Web Role, restrict your Web Role to only 3 or 4 endpoints (see my Remote Desktop tip for more information about this).

Lastly: since you’re loading a significant number of services onto a single role, you’ll want to carefully monitor performance: CPU, web page connection latency, average page time, IIS request queue length, Azure queue length (assuming you’re using one to control a background worker process), and so on. As traffic grows, you might want to consider separating processes into different roles.

Monday, January 31, 2011

The Jawbone: Icon for the dog, Era for the human

Until last week, I was the proud owner of a most-excellent Bluetooth headset, the Jawbone ICON. Compared to my years-old Motorola H700, the Jawbone was simply outstanding. I had owned this beautiful piece of technology for less than 3 months, and I was still enjoying its newness, along with its excellent audio quality.

Unfortunately, my puppy Nimbus felt that he, too, should have the privilege of playing with a Jawbone. As it turns out, Bluetooth headsets are not very durable as a chew-toy:


In a desperate plea for empathy, I tweeted my Horrible Headset Happenstance to the world:


The great folks at Jawbone Customer Support heard (ok, saw) my plea, along with the photo, and reached out to me. They told me they laughed when they saw my mutilated headset (and assured me they wept a bit, too). They assumed my dog must be really cute to be able to get away with something this mischievous and still live to bark about it. I assured them he was:


Graciously, Customer Support offered to assist me in my replacement quest. I decided to eschew (no, not chew) another ICON, and upgrade to the brand-newest Jawbone: The ERA. And much to my surprise, it arrived in the mail today. I can’t imagine these photos do it justice, but I wanted to share my unboxing experience. First, the box itself:


After staring at it for an undetermined amount of time, it was time to open it up:


And peeling back the front cover revealed a cornucopia of ear-fitting goodness.


I gently removed the ERA from its perch and gave it a full charge. It was time… time for its maiden voyage. I called my wife, listening intently to the tonal quality of her phone ringing across the airwaves. She picked up, and I asked her to say something I could quote to the world. “Anything,” I said. “Just lay it on me.” And she responded, so eloquently and with zero distortion:

“That’s what she said.”

Sunday, January 30, 2011

My BitLocker Moment of Panic

Back in October 2010, when I joined Microsoft, I received my shiny (well, matte) new laptop. Security is of paramount importance around here, so I had to enable BitLocker, included with Windows 7 Ultimate. The encryption process was actually painless. I saw minimal performance degradation, and the only (minor) annoyance was the bootup workflow, which requires a PIN to be entered before Windows boots.

Fast-forward 2 months: I decided it was time for a performance boost, so I picked up a 256GB Kingston SSD drive. It came with a copy of Acronis’ disk-cloning software, which made the transfer extremely easy. I contacted Microsoft IT Tech Support before doing the transfer, to see if there were any known caveats; the only thing they suggested was removing encryption prior to cloning. So I did, and the cloning went smoothly. I was back up and running with my new drive in about 2 hours, with BitLocker re-applied.

So there I was, with my new SSD. I was set for Blazing Speed. After 2 weeks, I was only seeing a moderate improvement, certainly not worthy of the high cost of the drive (my disk performance index jumped from about 5.4 to around 5.7). So I went hunting for SSD optimization information. Thanks to a tweet by Brian Prince, I found a few good tips like disabling SuperFetch (which I did), updating the controller driver (which I did), and updating the BIOS (I held off).

So… what about the panic???

My performance index was now at 6.8, with a very noticeable performance improvement. But I wanted more, so I decided to update my BIOS. And that, my friends, did not go as planned.

The BIOS upgrade itself was easy, thanks to Lenovo’s updater tool. It then told me to reboot, which I did. I was taken to the BitLocker PIN entry screen, and I entered my PIN. But then… BitLocker told me that something on my computer had changed since the last boot, and that I needed to enter my recovery key. Oh, you mean that key on my USB drive? That key from when I encrypted my original drive?

Yes, that’s right, I had not backed up my new key after re-encrypting. At this point, I was unable to boot. I was, um, toast.


I know what you’re thinking: just restore from a backup. Easy enough; I had one handy: my old drive, with a relatively current OS. And my working files were all backed up to offsite storage. However, I kept thinking there was Some Important File I hadn’t backed up.

On a whim, I decided to download the previous BIOS version from Lenovo. I created a bootable CD on another computer and booted up. Interestingly, the DOS version of the BIOS updater gave a stern warning about updating BIOS firmware when BitLocker is enabled (the Windows version has no such warning). I down-rev’d, rebooted, and… just like magic, I was able to boot once again into my SSD.

Lessons Learned

Maybe you’re way smarter than me and will never make this mistake, but I thought it’d be worth pointing out the obvious anyway:

  • When encrypting with BitLocker, always create a recovery disk afterward.
  • When updating BIOS firmware, be sure to suspend BitLocker prior to the update (you don’t need to unencrypt; you just need to suspend BitLocker).
  • Prior to any type of system update when BitLocker is enabled, be sure to have a backup handy, just in case.

Thursday, January 20, 2011

Azure Bootcamp Pittsburgh January 2011: Show Notes

I had a fun time visiting Pittsburgh and presenting some Azure Goodness during Day 2 of the Azure Bootcamp! Thanks to Rich Dudley for inviting me – he organized a great event and the audience was very engaged. Rich, along with Scott Klein, presented the bulk of the material.

During my presentation, we discussed a few Azure-101 tips and hints. I did my best to capture them here. Please let me know if I missed any and I’ll update this post.

My presentation slide deck is here:

Configuration Setting Publisher

With Azure v1.3, Web Roles now run with full IIS, which means you’ll need to explicitly tell your web app to use the Azure configuration setting publisher. The easiest way to do this is in the Application_Start() method in global.asax:

        protected void Application_Start(object sender, EventArgs e)
        {
            CloudStorageAccount.SetConfigurationSettingPublisher(
                (configName, configSettingPublisher) =>
                {
                    var connectionString =
                        RoleEnvironment.GetConfigurationSettingValue(configName);
                    configSettingPublisher(connectionString);
                });
        }

You’ll need this when working with the Azure Platform Training Kit samples, such as the Guestbook demo we walked through.

Connection Strings

When creating your new Azure application, your default diagnostic connection strings point to the local simulation environment:


When testing locally, this works fine. However, when you deploy to Azure, you’ll quickly discover that there is no local development storage in the cloud, and you must use “real” Azure storage. Don’t forget to change these settings prior to publishing! This is easily done through Visual Studio:
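For reference, the two variants look roughly like this in ServiceConfiguration.cscfg (the setting name matches the SDK 1.3 Diagnostics plugin; the account name and key are placeholders you’d replace with your own):

```xml
<!-- Local: points at the storage emulator -->
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
         value="UseDevelopmentStorage=true" />

<!-- Cloud: switch to a real storage account before publishing -->
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey" />
```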


Queues and Poison Messages

We talked a bit about handling poison messages on a queue; that is, messages that consistently cause the processing code to fail. This could happen for any number of reasons, with no way to correct the processing while the role is running. Note: if a message is failing simply because the GetMessage() visibility timeout wasn’t set for a long enough period, that timeout can be increased when the message is read a 2nd (or 3rd, or nth) time from the queue.

So, assuming the message truly cannot be processed, it needs to be removed from the main queue permanently, and then stored in a separate poison message area queue to be evaluated later (maybe the developer downloads these messages and tries processing them locally in a debugger…). The typical pattern is to allow message-processing to be retried a specific number of times (maybe allow for 3 attempts) or allowed to live for a specific amount of time before moving it to a poison area. This area could be another queue, an Azure table, etc.

Here’s a very simple example of a queue-processing loop that incorporates a poison message queue:

                while (true)
                {
                    var msg = queue.GetMessage(TimeSpan.FromMinutes(1)); // visibility timeout
                    if (msg == null)
                    {
                        Thread.Sleep(1000); // nothing to do; back off briefly
                        continue;
                    }
                    if (msg.DequeueCount > 3)
                    {
                        // poison message: move it aside, then remove it from the main queue
                        poisonQueue.AddMessage(new CloudQueueMessage(msg.AsBytes));
                        queue.DeleteMessage(msg);
                        continue;
                    }
                    // process queue message normally, then delete it
                    queue.DeleteMessage(msg);
                }


There was a question that came up about queue processing time limits. A queue message processing timeout can be set from 30 seconds to two hours. See the MSDN documentation for more detail.

Azure Tip: Consider Remote Desktop when Planning Role Endpoints


Azure roles offer the ability to configure endpoints for exposing things such as WCF services, web servers, and other services (such as MongoDB). Endpoints may be exposed externally or restricted to inter-role communication purposes. External endpoints may be http, https, or tcp, and internal endpoints may be http or tcp.

Each role in a deployment can be configured to have up to 5 endpoints. This can be any combination of external and internal endpoints. For instance, you might configure a worker role with 3 input endpoints (meaning the outside world can reach these endpoints) and 2 internal endpoints (meaning only other role instances in your deployment can reach these endpoints). If you’re optimizing your Azure deployment for a cost-saving configuration, you might be combining multiple services into a single role.

This brings me to today’s tip: Consider Remote Desktop when planning role endpoints.

Remote Desktop is a new capability of Azure SDK v1.3. You now have the ability to RDP into any specific instance of your deployment. It might not seem obvious (it wasn’t to me until I received a deployment error), but Remote Desktop consumes an endpoint for its RemoteAccess module (leaving you with 4 to work with). If, say, you had 5 endpoints defined, and you then enabled Remote Desktop and attempted to deploy from Visual Studio, you’d see something like this:


What might seem even less obvious is that your Azure deployment requires an RDP RemoteForwarder module as well. This module is responsible for handling all RDP traffic and forwarding it to the appropriate role instance. Just like the RemoteAccess module, the forwarding module consumes an endpoint. But this doesn’t necessarily mean you’re down to 3 endpoints to work with, as your deployment only requires a single role to be designated as the forwarder. You can enable it on any role in your deployment, so you can move the forwarder to a role with fewer endpoints in use. Of course, if you don’t need more than 3 endpoints, you can leave the forwarder as-is.

You can see the settings for RemoteAccess and RemoteForwarder if you look at the Settings tab of your role’s properties:
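Under the covers, enabling Remote Desktop adds module imports to your ServiceDefinition.csdef, which is where those extra endpoints come from. Roughly (role name is illustrative):

```xml
<WebRole name="WebRole1">
  <Imports>
    <!-- RemoteAccess goes on every role you want to RDP into: one endpoint each -->
    <Import moduleName="RemoteAccess" />
    <!-- RemoteForwarder goes on exactly one role in the deployment: one more endpoint -->
    <Import moduleName="RemoteForwarder" />
  </Imports>
</WebRole>
```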


So… as you plan your Azure deployment, keep Remote Desktop in mind when working through your endpoint configurations. RDP is a very powerful debugging tool, and requires at least one endpoint on each of your roles (possibly two, especially if you only have one role defined).

Additional information

Jim O’Neil has this post detailing RDP configuration for the Azure@home project.