Analyzing Table Storage bindings locally in Azure Functions

In one of my current projects I rely heavily on Table Storage bindings, used in many of my functions. In fact I have several dozen API functions, which are the base of the whole system. Because the codebase grows each day, I needed a tool that would let me easily validate whether I query Table Storage correctly - that is, whether I follow some basic principles like:

  • using both PartitionKey and RowKey in a query
  • if RowKey is unavailable - using PartitionKey so I won't have to read the whole table
  • using $top whenever possible so I won't load the whole partition
  • using query projection - leveraging $select for selecting only a subset of columns in a row

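To make those rules concrete, here is a minimal sketch (the helper class and its names are mine, not part of any SDK) of the `$filter` expressions the first two principles translate into on the wire:

```csharp
using System;

public static class TableQueryFilters
{
    // Point query: both keys known - the fastest possible lookup.
    public static string PointQuery(string partitionKey, string rowKey) =>
        $"PartitionKey eq '{partitionKey}' and RowKey eq '{rowKey}'";

    // Partition scan: RowKey unknown, but at least only one partition is read.
    public static string PartitionScan(string partitionKey) =>
        $"PartitionKey eq '{partitionKey}'";
}
```

On the wire, such a filter is combined with the remaining two principles as query string parameters, e.g. `?$filter=PartitionKey eq 'device'&$top=100&$select=Name,SerialNumber`.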
In fact I knew two ways of doing that:

  1. Checking the logs of Storage Emulator, which I described in this blog post. The disadvantage of that solution is that it logs nearly each and every request, so it is hard to find the particular one you're interested in
  2. Using SQL Server Profiler to check what kind of queries are materialized

As you can see above, the logs from Storage Emulator are quite detailed, yet painful to work with

I needed a tool that would combine the features of both solutions.

Reading SQL Server Profiler

The idea was to somehow read what SQL Server Profiler outputs when queries are sent to Storage Emulator. Fortunately, it is really simple using the following classes:

  • SqlConnectionInfo
  • TraceServer

Both are easily accessible in the SQL Server directory:

  • C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\Microsoft.SqlServer.ConnectionInfo.dll
  • C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\Microsoft.SqlServer.ConnectionInfoExtended.dll

There is, however, a small gotcha. Since SQL Server Profiler is a 32-bit application, you cannot use the above classes in a 64-bit one. Additionally, those assemblies are SQL Server version-sensitive - locally I have an instance of SQL Server 2017; if you have another version, you'll have to change the path to point to the correct one.
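
Putting the two classes together is straightforward. The sketch below is only a minimal illustration of reading a live trace - the server name, the trace template path and the integrated-security setting are assumptions for a local setup, so adjust them to yours:

```csharp
using System;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Trace;

class ProfilerReader
{
	// Note: compile as x86 - these assemblies won't load in a 64-bit process.
	static void Main()
	{
		// The server name is an assumption - point it at the instance backing your Storage Emulator.
		var connectionInfo = new SqlConnectionInfo(@"(localdb)\MSSQLLocalDB")
		{
			UseIntegratedSecurity = true
		};

		var trace = new TraceServer();
		// Any Profiler trace template (.tdf) will do - this path is a placeholder.
		trace.InitializeAsReader(connectionInfo, @"C:\Templates\Standard.tdf");

		while (trace.Read())
		{
			// TextData contains the actual SQL sent to the emulator's database.
			var textData = trace["TextData"] as string;
			if (textData != null)
				Console.WriteLine(textData);
		}

		trace.Close();
	}
}
```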

Does it work?

After some initial testing, it seems it works. Let's assume you have the following code:

public static Task<HttpResponseMessage> DeviceList(
	[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "device")] HttpRequestMessage req,
	[Table(TableName, Connection = Constants.TableStorageConnectionName)] IQueryable<DeviceEntity> devices,
	[Table(Firmware.Firmware.TableName, Connection = Constants.TableStorageConnectionName)] IQueryable<Firmware.Firmware.FirmwareEntity> firmware,
	[Table(CounterType.CounterType.TableName, Connection = Constants.TableStorageConnectionName)] IQueryable<CounterType.CounterType.CounterTypeEntity> counterTypes,
	[Table(Location.Location.TableName, Connection = Constants.TableStorageConnectionName)] IQueryable<Location.Location.LocationEntity> locations,
	[Identity] UserIdentity identity,
	TraceWriter log)
{
	if (identity.IsAuthenticated() == false) return identity.CreateUnauthorizedResponse();

	// Small lookup tables are cached in memory so each one is read only once
	var firmwareVersionsCached = firmware.Take(50).ToList();
	var counterTypesCached = counterTypes.Take(50).ToList();
	var locationsCached = locations.Take(50).ToList();

	var query = devices.Where(_ => _.PartitionKey != "device").Take(100).ToList().Select(_ => new
	{
		Id = _.RowKey,
		Name = _.Name,
		SerialNumber = _.SerialNumber,
		Firmware = firmwareVersionsCached.First(f => f.RowKey == _.FirmwareId.ToString()).Version,
		CounterType = counterTypesCached.First(ct => ct.RowKey == _.CounterTypeId.ToString()).Name,
		Location = locationsCached.First(l => l.RowKey == _.LocationId.ToString()).Name
	});

	var response = req.CreateResponse(HttpStatusCode.OK, query);

	return Task.FromResult(response);
}

Here you can find a part of the diagnostic logs from executing the above function:

You can find the whole project on GitHub.

After some time spent with this tool, I found plenty of issues in my code, like:

  • not using PartitionKey
  • reading the same table twice
  • materializing all rows from a table when I needed only a subset
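
Spotting the first of those issues can even be automated: a trivial heuristic over the captured query text (the helper below is my own sketch, not part of the project) flags any query whose filter never mentions the PartitionKey column:

```csharp
using System;

public static class QuerySmells
{
    // A query that never filters on PartitionKey forces a full table scan.
    public static bool IsFullTableScan(string queryText) =>
        queryText.IndexOf("PartitionKey", StringComparison.OrdinalIgnoreCase) < 0;
}
```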

I guess I will find even more flaws in the coming days.

Application Insights Analytics - digging deeper into your application metrics

The data provided by an instance of Application Insights connected to your application is in most cases more than enough. As long as you're logging a sufficient amount of information, you can easily track all your metrics and diagnose problems with ease. But what if you'd like to get a deeper insight into "what is really going on there"? I guess it'd be possible to use AI's REST API and fetch all the data into your custom tool (or any kind of 3rd-party software) to analyze it - but who needs that when you have Application Insights Analytics?

First look

To access Analytics, you only need to go to this link - when you do, you'll see a welcome screen which, out of the box, lets you access some common queries.

The welcome screen gives you a rapid start when it comes to analyzing common statistics

I strongly recommend trying out the common queries - they let you quickly get an overview of the capabilities of this tool.

Querying the data

You'll surely notice that the charts and other statistics are the result of a query. This is what makes Analytics a really powerful tool - you can query any kind of metric available to you (like dependency duration, custom events, client OS and many, many more) and combine them to get exactly what you're looking for.
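
As a quick illustration (this query is my own example, not one from the screenshots), counting failed requests per hour over the last day looks like this:

```kusto
requests
| where timestamp > ago(1d)
| where success == false
| summarize failedCount = count() by bin(timestamp, 1h)
| order by timestamp asc
```

Each tab on the left corresponds to a queryable table (requests, traces, dependencies and so on), and the pipe operator chains filters and aggregations together.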

An expanded tab of traces on the left - and there are still some missing from the screen...

What is more, when creating or editing a query you can take advantage of the inbuilt editor, which helps with syntax and highlights all your errors. It's definitely much more polished than the one in Function Apps :)

You can easily add metrics from the tabs on the left and then use an intuitive editor to combine them

Smart Diagnostics

There's one cool feature which makes Analytics really helpful in finding the root cause of a problem - Smart Diagnostics. It lets you quickly discover what is "strange" in a particular fragment of your logs (maybe one dependency fails to respond, or responds three times slower than usual).

The highlighted dots on the chart let you run smart detection on the parts of the data which don't match the rest.

You have to be aware that this diagnostics feature is not perfect and relies on the data you provide (so if you don't provide enough data, it will tell you that something is wrong, but not what exactly it is). Nonetheless, I encourage you to gather more and more data, so its proposals become more and more valid and precise.

In the next post we'll try to run more advanced queries and find where the limits of this tool lie.