Don't be afraid to store it twice

In most cases we try to follow the DRY approach - Don't Repeat Yourself. This simple yet powerful rule, which - if executed correctly - has the power to turn meh projects into good ones, doesn't work well in every scenario and use case. As the saying goes, "one size doesn't fit all" - you shouldn't follow any pattern or approach blindly. You can easily get burnt.

Problem

Imagine the following scenario (or rather requirement):

We have the following concepts in the application: Company, Group and User. A User can have access to multiple Companies (one of which is their main one). A Group is a logical construct which exists within a Company.

The question is - how do we store this data in Table Storage so we can easily query both all Groups within a Company and a single Group?

Let's skip User for a second and consider the following implementation:

[FunctionName("GroupList")]
public static Task<HttpResponseMessage> GroupList(
	[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "group")] HttpRequestMessage req,
	[Table(TableName, "AzureWebJobsStorage")] IQueryable<GroupEntity> groups,
	[Table(Company.TableName, "AzureWebJobsStorage")] IQueryable<Company.CompanyEntity> companies,
	TraceWriter log)
{
	// materialize up to 100 groups, then resolve each company name with a separate lookup per row
	var groupsQuery = groups.Take(100).ToList().Select(_ => new
	{
		Id = _.RowKey,
		Name = _.Name,
		Description = _.Description,
		// ReSharper disable once ReplaceWithSingleCallToFirst
		CompanyName = companies.Where(company => company.RowKey == _.CompanyId.ToString()).FirstOrDefault()?.Name
	}).ToList();

	var response = req.CreateResponse(HttpStatusCode.OK,
		groupsQuery);

	return Task.FromResult(response);
}

[FunctionName("GroupGet")]
public static Task<HttpResponseMessage> GroupGet(
	[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "group/{id}")] HttpRequestMessage req,
	[Table(TableName, "{partitionKey}", "{id}", Connection = "AzureWebJobsStorage")] GroupEntity group,
	TraceWriter log)
{
	var response = req.CreateResponse(HttpStatusCode.OK, group);
	return Task.FromResult(response);
}

public class GroupEntity : TableEntity
{
	// parameterless constructor, required by the Table binding and by the
	// object-initializer syntax used later in this post
	public GroupEntity() { }

	public GroupEntity(Guid companyId)
	{
		PartitionKey = companyId.ToString();
		RowKey = Guid.NewGuid().ToString();
	}

	public string Name { get; set; }
	public Guid CompanyId { get; set; }
	public string Description { get; set; }
}

All right, initially it all looks fine. If we want to get a list of User-specific Groups, we'll just add some context (e.g. store all of a user's companies somewhere and use their identifiers as parameters in a query). Now let's request a single Group. We have a Group identifier, so we can just get the whole row from the database...

NO WE CAN'T!

Since GroupEntity's PK is equal to a Company's identifier, we'd have to make one query per each company a user has access to. Not very smart, not very clean. What to do? Change the PK in GroupEntity to a generic one? We'd lose the ability to make fast queries for all groups within a company. Make a combined identifier and use it as a PK? We'd still have to perform multiple queries. Go for SQL and perform proper JOINs? That's definitely a possibility - but we don't need the other features of a relational database. Is it a dead end?
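To make the pain concrete, here's roughly what fetching a single Group by its id would look like with the current design - a sketch of mine, where FindGroupAsync, table and userCompanyIds are assumed names rather than code from this post (CloudTable and TableOperation come from Microsoft.WindowsAzure.Storage.Table):

public static async Task<GroupEntity> FindGroupAsync(
	CloudTable table, IEnumerable<Guid> userCompanyIds, Guid groupId)
{
	// With PK = CompanyId we don't know which partition holds the row,
	// so we have to probe every company the user has access to.
	foreach (var companyId in userCompanyIds)
	{
		var result = await table.ExecuteAsync(
			TableOperation.Retrieve<GroupEntity>(companyId.ToString(), groupId.ToString()));
		if (result.Result is GroupEntity group)
			return group; // found - but possibly after N round-trips
	}

	return null;
}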

Solution

One thing in Azure Storage is really cheap - the storage itself. So how can we remodel our tables to both improve performance and lower the number of transactions? Well, we can store our data twice!

Before you start throwing tomatoes at me, consider the following example:

[FunctionName("GroupCreate")]
public static async Task<HttpResponseMessage> GroupCreate(
	[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "group")] HttpRequestMessage req,
	[Table(TableName, "AzureWebJobsStorage")] IAsyncCollector<GroupEntity> groups,
	TraceWriter log)
{
	var data = JsonConvert.DeserializeObject<GroupEntity>(await req.Content.ReadAsStringAsync());

	var group = new GroupEntity(data.CompanyId)
	{
		Name = data.Name,
		CompanyId = data.CompanyId,
		Description = data.Description
	};

	await groups.AddAsync(group);
	// second copy: same data, but stored under a generic "group" partition
	// so a single Group can be fetched with one point query
	await groups.AddAsync(new GroupEntity
	{
		PartitionKey = "group",
		RowKey = group.RowKey,
		Name = data.Name,
		CompanyId = data.CompanyId,
		Description = data.Description
	});

	var response = req.CreateResponse(HttpStatusCode.OK, group);
	return response;
}

Here we're storing a row twice, but with one slight change. The first insert stores the data with no changes from the previous version. The second one is the crucial one - it changes the PK to a generic value named "group". Thanks to this solution we have two almost identical rows: one to be displayed as part of a company's list, and one for fetching a single row with all its info.
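With the duplicated row in place, a single Group can be fetched without knowing its Company at all. Here's a sketch of how GroupGet from the earlier snippet could now bind, using the generic partition:

[FunctionName("GroupGet")]
public static Task<HttpResponseMessage> GroupGet(
	[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "group/{id}")] HttpRequestMessage req,
	// the PK is now a known constant, so {id} alone identifies the row
	[Table(TableName, "group", "{id}", Connection = "AzureWebJobsStorage")] GroupEntity group,
	TraceWriter log)
{
	var response = req.CreateResponse(HttpStatusCode.OK, group);
	return Task.FromResult(response);
}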

Now you may ask - how does this secure a row from being displayed when a User doesn't have access to a given Company? That's why we're storing the company identifier along with the row in the CompanyId column. This is a much quicker and cleaner solution than performing several requests to Table Storage - we can cache the data locally and just check whether the identifiers match.
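Assuming the user's company identifiers are already cached in memory, that check boils down to a single comparison - HasAccess and userCompanyIds below are illustrative names of mine:

// no extra Table Storage calls - just an in-memory membership test
private static bool HasAccess(GroupEntity group, ISet<Guid> userCompanyIds) =>
	userCompanyIds.Contains(group.CompanyId);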

Summary

Modeling Table Storage is both challenging and rewarding - you can easily hit performance problems if tables are not designed carefully; on the other hand, a wise design allows you to really push the limits. Such redesigns matter for one more reason - they save time. And in the cloud, time = money. Make sure you pay only as much as needed.

Appending data to Azure Storage Blob concurrently

Append Blob is a fairly common feature of Azure Storage, which makes all kinds of logs and data aggregations a piece of cake. While the whole concept is super easy (at least from the SDK client's point of view), using it in a real scenario can give you headaches. Why, you may ask? Well, the nature of appends is not so obvious, and at first glance our perception can be deceived.

A quick look at the documentation reveals how limited our options are when appending data to a blob concurrently.

Four Horsemen of the Parallelypse

There are 5 append methods in total available on the CloudAppendBlob type:

  • AppendBlock
  • AppendFromStream
  • AppendText
  • AppendFromByteArray
  • AppendFromFile

As you can see, I grouped them a little so we have two categories:

  • methods for a multiple writers scenario (only AppendBlock)
  • methods for a single writer scenario (the remaining four - hence the title of this section)

Now the question is - how do we know that a given method is designed for a specific scenario? Well, the easiest option is to read the documentation. This is the description of AppendText taken from the API reference:

Appends a string of text to an append blob.
This API should be used strictly in a single writer scenario because the API internally uses the append-offset conditional header to avoid duplicate blocks which does not work in a multiple writer scenario.
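In other words, those methods send each block with an "append position must equal X" precondition. We can reproduce the mechanism (and its failure mode) explicitly - the snippet below is just an illustration of what happens under the hood, not the SDK's actual internals:

// Appends a block only if the blob's current length equals expectedOffset.
// If another writer has appended in the meantime, the service answers with 412.
public static Task AppendAtAsync(CloudAppendBlob blob, Stream data, long expectedOffset)
{
	var condition = AccessCondition.GenerateIfAppendPositionEqualCondition(expectedOffset);
	return blob.AppendBlockAsync(data, null, condition, null, null);
}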

So what happens if you try to use such a method in e.g. an Azure Function, which can easily scale out to multiple concurrent writers?

The remote server returned an error: (412) The append position condition specified was not met 

This is not the best description, is it?

Gimme a code snippet!

The easiest way to fix this issue is to transform the text into a stream and use AppendBlock instead:

// blob is a CloudAppendBlob reference obtained earlier
using (var ms = new MemoryStream())
using (var sw = new StreamWriter(ms))
{
	await sw.WriteLineAsync("Serialized_data");
	await sw.FlushAsync();

	// rewind, so AppendBlock reads the stream from the beginning
	ms.Position = 0;

	await blob.AppendBlockAsync(ms);
}
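If you append text in more than one place, it might be worth wrapping the pattern in a small helper - AppendLineAsync below is my own extension method, not something the SDK ships:

public static class AppendBlobExtensions
{
	// safe for multiple concurrent writers, because it relies solely on AppendBlock
	public static async Task AppendLineAsync(this CloudAppendBlob blob, string line)
	{
		var bytes = Encoding.UTF8.GetBytes(line + Environment.NewLine);
		using (var ms = new MemoryStream(bytes))
		{
			await blob.AppendBlockAsync(ms);
		}
	}
}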

Summary

Personally, I didn't find this issue obvious at all - I involuntarily used the AppendText method, which looked like the best match for my code, and only after some time did I notice those 412 error codes. The one thing you have to remember when using AppendBlock is that each block cannot exceed 4 MB in size. This - along with the limit of 50,000 committed blocks per blob - allows for building a blob with a max size of roughly 195 GB (50,000 × 4 MB), which should be fine for most projects.