Imperative bindings in Azure Functions

The built-in bindings provided by Azure Functions are more than sufficient for most cases - you don't have to manage or configure them on your own, and all you're supposed to do is focus on your business logic and codebase. There's a gap, however, when using bindings from a function.json file - what if I have to create a binding at runtime? Fortunately, even this case is covered and you can easily build bindings dynamically, even using data delivered in your HTTP request.

Use case

Let's say you'd like to create a function which fetches data from a CosmosDB instance. The catch is that you want to query a particular collection of documents. Normally you'd use the following binding:

/
{
  "name": "inputDocument",
  "type": "documentDB",
  "databaseName": "MyDatabase",
  "collectionName": "{collectionId}",
  "connection": "MyAccount_COSMOSDB",     
  "direction": "in"
}

but there's no way to dynamically replace {collectionId} with data from an HTTP request (it does work for other triggers - with a queue trigger, for instance, the sqlQuery parameter can reference properties of the queue message). We need another way of doing this that avoids building our own repositories and services, which would obscure the whole function.
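
Just for comparison, this is roughly what the queue-triggered variant looks like - the {departmentId} token below is resolved from a property of the queue message (the database, collection and parameter names are illustrative):

/
{
  "name": "inputDocument",
  "type": "documentDB",
  "databaseName": "MyDatabase",
  "collectionName": "MyCollection",
  "sqlQuery": "SELECT * FROM c WHERE c.departmentId = {departmentId}",
  "connection": "MyAccount_COSMOSDB",
  "direction": "in"
}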

If you read the documentation carefully (especially the C# reference), you'll find that it's possible to create bindings at runtime via imperative bindings. An example of usage could look like this:

/
using (var output = await binder.BindAsync<T>(new BindingTypeAttribute(...)))
{
    ...
}

With this feature we can implement our dynamic CosmosDB collection binding.

Implementing imperative binding in your function

I'll show you an example of using a CosmosDB imperative binding in a CSX file, but the same rules apply to regular C# files. The first thing we need here is a reference to the binding attribute - in our case DocumentDBAttribute from Microsoft.Azure.WebJobs.Extensions.DocumentDB:

/
#r "Microsoft.Azure.WebJobs.Extensions.DocumentDB"

Then we have to update the function's signature so it accepts the Binder type:

/
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, Binder binder, TraceWriter log)

Now we can use an imperative binding, e.g. to load a collection dynamically:

/
var collection = await binder.BindAsync<IEnumerable<dynamic>>(
        new DocumentDBAttribute("foo", "{dynamic_parameter}"));

The whole function could look like this (don't pay attention to all those dynamics - normally I'd use slightly different types and methods):

/
#r "Microsoft.Azure.WebJobs.Extensions.DocumentDB"

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, Binder binder, TraceWriter log)
{
    // Read the collection name from the request payload
    dynamic data = await req.Content.ReadAsAsync<object>();

    // No using block here - IEnumerable<dynamic> isn't IDisposable,
    // so a plain assignment is all we need
    var collection = await binder.BindAsync<IEnumerable<dynamic>>(
        new DocumentDBAttribute("foo", data.collectionId.Value.ToString()));

    var firstDocumentId = collection.First().id.Value.ToString();
    return req.CreateResponse(HttpStatusCode.OK, new { id = firstDocumentId });
}
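
Since this is a C# script function, the HTTP trigger itself is still declared in function.json. A minimal sketch of it could look like this (the authLevel, methods and binding names are my assumptions, not taken from the original setup):

/
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}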

When I call my function with the following payload:

/
{
    "collectionId": "bar"
}

I get the following result:

/
{"id":"I am FOO!"}

Summary

This was just a simple example of how you can extend your functions using imperative bindings. The best thing is that it allows you to avoid implementing patterns typical for traditional web applications and keeps functions relatively clean. Feel free to experiment with those - dynamic bindings are a really powerful feature and will surely make your functions even more flexible.

Limiting data being logged using Application Insights in Azure Functions

As you may know, Azure Functions has a preview of Application Insights integration. This is another great addition to our serverless architecture, since we don't have to add this dependency on our own - it's just there. However, there are some problems when it comes to handling the amount of data being collected, especially when you're on an MSDN subscription.

Problem

When you enable Application Insights for your Function App, each function will start collecting different metrics (traces, errors, requests) at different scales. When you go to the Azure Portal and access the Data volume management tab in the Application Insights blade, you'll see that there's one metric which really exceeds expectations (at least when it comes to the volume of data traced):

As you can see, Message data takes 75% of the total amount of data collected

When you click on any bar, you'll access the Data point volume tab - now we can understand what kind of 'message' is really being logged:

Although the chart says Message, the data type related to this particular kind of message is Trace

Configuring AI integration

Logging traces is perfectly fine, but we don't always want to do it (especially if you're on an MSDN subscription and don't want to be blocked). If you go to this page, you'll find detailed information regarding both enabling and working with Application Insights. The most interesting part for us is the configuration section, which lives in the host.json file:

/
{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Information",
      "categoryLevels": {
        "Host.Results": "Error",
        "Function": "Error",
        "Host.Aggregator": "Information"
      }
    },
    "aggregator": {
      "batchSize": 1000,
      "flushTimeout": "00:00:30"
    }
  },
  "applicationInsights": {
    "sampling": {
      "isEnabled": true,
      "maxTelemetryItemsPerSecond" : 5
    }
  }
}

As you can see, we're able to set a different level for each category of data being logged. According to comments in this issue on GitHub, the easiest way to actually limit the data being logged is to set your configuration to the following:

/
{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Error",
      "categoryLevels": {
        "Host.Aggregator": "Information"
      }
    }
  }
}

This way you should be able to avoid logging too much data and hitting your daily cap for Application Insights.
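
To quickly sanity-check such a configuration, you can compare what happens to different log levels inside a function. A hypothetical queue-triggered example - assuming the config above, only the error should still reach Application Insights:

/
public static void Run(string myQueueItem, TraceWriter log)
{
    // Information-level trace - filtered out by the "Error" default level
    log.Info($"Processing item: {myQueueItem}");

    // Error-level trace - still collected by Application Insights
    log.Error("Something went wrong!");
}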

I strongly recommend playing with the AI integration in Azure Functions and providing feedback regarding possible features or enhancements. It's a great way to collaborate with the team responsible for the product and a chance to make it even better.