Recently I was helping someone debug a bizarre Azure Table storage issue. For some reason, the role cycled endlessly between busy and running states as soon as the OnStart() method attempted to set up some Azure tables. To make matters worse, once the startup code attempted to connect to the storage account and create a table, we no longer received Trace log output. This doesn’t help much when the only log message is “In OnStart()…”.
To get our diagnostics back, we created a separate storage account exclusively for diagnostics. Once we did this, we had an uninterrupted flow of trace statements, even though the table-access code was still having issues with the table storage account.
This leads me to my tip of the day: Set up a separate storage account for diagnostics.
Aside from isolating storage connectivity issues, there are other benefits to having a separate storage account for diagnostics:
- You can have a separate access key for diagnostics and grant it to a broader audience. For instance, you could hand out the diagnostics account’s access key to people using a diagnostics tool such as Cerebrata’s Diagnostics Manager, without having to give out the access key to your production data storage account.
- Storage accounts have a transactional limit of approx. 500 transactions / second. Beyond that, the Azure fabric throttles your access. If your app is writing even a single trace statement to diagnostic tables for every real data transaction, you’re doubling your transaction rate and you could experience throttling much sooner than expected.
- An additional storage account does not necessarily equate to additional cost. You’re simply billed for the storage you consume. If the total amount of storage across two accounts remains the same as with a single account, your cost will remain the same.
Setting things up
First, head to the Azure portal and set up two storage accounts. I advise putting them in the same affinity group as your Azure services.
Each account will have its own access keys. Simply configure a connection string setting for each account in your role.
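A minimal ServiceConfiguration sketch, assuming the setting names MyAppDiagnosticStorage and MyAppDataStorage; the role name, account names, and keys are placeholders:

```xml
<!-- ServiceConfiguration.cscfg (sketch; names and keys are placeholders) -->
<Role name="MyWorkerRole">
  <ConfigurationSettings>
    <Setting name="MyAppDiagnosticStorage"
             value="DefaultEndpointsProtocol=https;AccountName=myappdiag;AccountKey=..." />
    <Setting name="MyAppDataStorage"
             value="DefaultEndpointsProtocol=https;AccountName=myappdata;AccountKey=..." />
  </ConfigurationSettings>
</Role>
```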
Now, all that’s left is specifying the diagnostics storage account for the DiagnosticMonitor, and the data storage account for “real” data access. For instance, this example starts the DiagnosticMonitor using MyAppDiagnosticStorage, while the table service uses MyAppDataStorage:

```csharp
public override bool OnStart()
{
    // Start the DiagnosticMonitor against the dedicated diagnostics account
    DiagnosticMonitor.Start("MyAppDiagnosticStorage");
    Trace.TraceInformation("Writing to diagnostic storage");

    // "Real" data access goes to the data storage account
    var dataStorageAccount =
        CloudStorageAccount.FromConfigurationSetting("MyAppDataStorage");
    var tableClient = dataStorageAccount.CreateCloudTableClient();

    RoleEnvironment.Changing += RoleEnvironmentChanging;
    return base.OnStart();
}
```
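One caveat, assuming the 1.x SDK: CloudStorageAccount.FromConfigurationSetting() throws unless a configuration-setting publisher has been registered first. A typical sketch, registered once at role startup:

```csharp
// Register once (e.g., at the top of OnStart) so that
// CloudStorageAccount.FromConfigurationSetting can resolve
// setting names against the role's configuration.
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));
```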
I think this is very much a best practice rather than merely a tip of the day.
The following is not strictly true:
-- Storage accounts have a transactional limit of approx. 500 transactions / second. Beyond that, and the Azure fabric throttles your access.
The 500 transactions/second target is per partition.
The Azure Storage Team post on scalability explicitly states:
The throughput target for a single partition is:
- Up to 500 transactions per second
- Note, this is for a single partition, and not a single table. Therefore, a table with good partitioning can process up to a few thousand requests per second (up to the storage account target).