NDC Oslo 2017

I was very fortunate to be selected as one of the speakers at this year’s NDC conference. Together with my colleague and friend Hans, I delivered our blockchain talk. It was based on the talk we gave at Cloudbrew in December, although the structure and the code have undergone a lot of changes.

The idea of the talk hasn’t changed though. We dived straight into the technology and explained how Bitcoin provides the security it needs to make sure transactions are safe. We then zoomed out and illustrated how blockchain technology can be used for much more than just the financial world, and proved it with our own DApp, or distributed application, running on Ethereum.

The slides can be downloaded here and the sample code is up at GitHub. The app is still a work in progress. It might be unrelated but there’s currently a spike in Ethereum transactions :).

Today we received the scores for the talk. Of those that placed a card in the box, 60 were green, 23 were yellow and 5 were red. So overall about 68% liked it. I do wonder what the yellow voters wanted to see, or what they didn’t like.

We tried to pack into the 60 minutes all the things we would love to hear in a talk ourselves. There were a lot of questions during and after the session, so I think we achieved what we set out to do, and that’s to inspire people to look at blockchain technology. It’s no silver bullet, but it might be a perfect fit for some projects.

Raspberry Pi Meetup

Last week I was at the 3rd meetup of Raspberry Pi Belgium. I presented my endeavour to monitor utility meters with a Raspberry Pi and Mono: the successes and the failures.

My talk was based on a couple of previous blog posts, but also on an article I have yet to write, where I use a DHT22 sensor to monitor humidity and temperature. Like every step in this journey, it had its challenges.

The other presentation was done by Jan Tielens. He showed us around IoT Hub in Azure.

Slides are available here.

Creating solid classes with AutoFac

Let’s say we’ve encapsulated an operation our application has to perform. I tend to write little classes which express the desired behaviour, or which delegate to other classes if the need arises. An example of such an operation is the ‘UpdateCustomerCommand’ class illustrated below.

class UpdateCustomerCommand
    : IRequestHandler<UpdateCustomerRequest, UpdateCustomerResponse>
{
    private readonly IDbContext _context;

    public UpdateCustomerCommand(IDbContext context)
    {
        _context = context;
    }

    public async Task<UpdateCustomerResponse> Handle(UpdateCustomerRequest request)
    {
        var customer = _context.Customers.Single(x => x.Id == request.Id);
        customer.Name = request.Name;
        customer.Address = request.Address.ToModel();
        await _context.SaveChangesAsync();
        return new UpdateCustomerResponse();
    }
}

In any application of reasonable size you end up with a lot of these classes. You want to keep them small and easy to read, and keep any infrastructure out of the way. Let’s say we want to use a TransactionScope to manage the transaction; perhaps we’re connecting to a bunch of databases or reading from a message queue. You don’t want to modify all these command classes to add that behaviour. One solution is to introduce a base class which has this behaviour, but we all prefer composition over inheritance, right? Let’s create a simple transactional handler.

class TransactionalCommandHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest
    where TResponse : IResponse
{
    private readonly IRequestHandler<TRequest, TResponse> _innerHandler;

    public TransactionalCommandHandler(IRequestHandler<TRequest, TResponse> handler)
    {
        _innerHandler = handler;
    }

    public async Task<TResponse> Handle(TRequest request)
    {
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            var response = await _innerHandler.Handle(request);
            scope.Complete();
            return response;
        }
    }
}

This class is short and sweet and allows us to write this code once instead of repeating it again and again. We just need to find a way to create the transactional handler and pass in an instance of the class which does the actual work as the inner handler. Enter AutoFac (or pretty much any IoC framework).

You can wire this up in AutoFac with the following code (copied and adapted from their website).

var builder = new ContainerBuilder();

// Register the concrete handler under a named service so the
// decorator can wrap it.
builder.RegisterType<UpdateCustomerCommand>()
    .Named<IRequestHandler<UpdateCustomerRequest, UpdateCustomerResponse>>("handler");

builder.RegisterGenericDecorator(
    typeof(TransactionalCommandHandler<,>),
    typeof(IRequestHandler<,>),
    fromKey: "handler");

var container = builder.Build();

// You can then resolve closed generics and they'll be
// wrapped with your decorator.
var updateCustomerHandler = container.Resolve<IRequestHandler<UpdateCustomerRequest, UpdateCustomerResponse>>();

This approach is great if you have one cross-cutting handler (like the transactional handler) and want to apply it to every implementation. There is, however, no built-in way to apply this behaviour to only some classes. If you need more control, you have to write some infrastructure code.

I introduce an attribute “NotTransactional” and apply it to the classes that should not take part in this decoration process. I then change the registration process.
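The attribute itself is just an empty marker class. A minimal sketch of what it can look like, together with a hypothetical read-only handler that opts out:

[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = false)]
sealed class NotTransactionalAttribute : Attribute
{
}

// Hypothetical handler that opts out of the transaction decorator.
[NotTransactional]
class GetCustomerQuery : IRequestHandler<GetCustomerRequest, GetCustomerResponse>
{
    //omitted for brevity
}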

private void RegisterCommandHandlers(ContainerBuilder containerBuilder)
{
    containerBuilder.RegisterGeneric(typeof(TransactionalCommandHandler<,>));
 
    foreach (var handlerType in typeof(CommandModule).Assembly
        .GetTypes().Where(i => i.IsClosedTypeOf(typeof(IRequestHandler<,>))))
    {
        containerBuilder.RegisterType(handlerType);
        var registerAsInterfaceType = handlerType.GetInterfaces()
            .Single(t => t.IsGenericType && t.GetGenericTypeDefinition() == typeof(IRequestHandler<,>));
        containerBuilder.Register(c =>
        {
            var handler = c.Resolve(handlerType);
 
            if (!handlerType.GetCustomAttributes(typeof(NotTransactionalAttribute), true).Any())
            {
                handler = c.Resolve(typeof(TransactionalCommandHandler<,>)
                    .MakeGenericType(registerAsInterfaceType.GetGenericArguments()),
                    new TypedParameter(registerAsInterfaceType, handler));
            }
            return handler;
        }).As(registerAsInterfaceType);
    }
}

  1. First I register the open generic type of the transactional handler.
  2. I then loop through every concrete class that implements the request handler interface and register it in the container.
  3. The classes that request the concrete handlers will always ask for an instance of the IRequestHandler interface with concrete request and response types, UpdateCustomerRequest and UpdateCustomerResponse for instance. So I register that interface type as well.
  4. That last registration is done with a lambda. Whenever someone asks for a command handler we resolve the concrete implementation, and if it should participate in a transaction we also resolve the transactional handler and pass our concrete handler to its constructor.

The same approach can be used to further decorate your commands. Adding a claims check can become pretty trivial.

First create an attribute.

[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = false)]
sealed class AuthorizeUserAttribute : Attribute
{  
    public string ClaimType { get; set; }
    public string ClaimValue { get; set; }         
}

Then apply it to the necessary classes.

[AuthorizeUser(ClaimType = AppClaims.Administrator)]
class ManageCustomerCommand
    : IRequestHandler<ManageCustomerRequest, ManageCustomerResponse>
{
    //omitted for brevity
}

Create a handler which does the user authorization.

class AuthorizedUserCommandHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest
    where TResponse : IResponse
{
    private IRequestHandler<TRequest, TResponse> _innerHandler;
 
    private string _checkExistingClaim;
    private string _existingClaimValue;        
 
    public AuthorizedUserCommandHandler(
        IRequestHandler<TRequest, TResponse> handler,
        string checkExistingClaim,
        string existingClaimValue)
    {
        _innerHandler = handler;         
        _checkExistingClaim = checkExistingClaim;
        _existingClaimValue = existingClaimValue;
    }
 
    public async Task<TResponse> Handle(TRequest request)
    {            
        CheckAccess(System.Threading.Thread.CurrentPrincipal as ClaimsPrincipal);
        return await _innerHandler.Handle(request);
    }
 
    private void CheckAccess(ClaimsPrincipal principal)
    {
        //omitted for brevity
    }
}

And finally wire everything up.

private void RegisterCommandHandlers(ContainerBuilder containerBuilder)
{
    containerBuilder.RegisterGeneric(typeof(TransactionalCommandHandler<,>));
    containerBuilder.RegisterGeneric(typeof(AuthorizedUserCommandHandler<,>));
    foreach (var handlerType in typeof(CommandModule).Assembly
        .GetTypes().Where(i => i.IsClosedTypeOf(typeof(IRequestHandler<,>))))
    {
        containerBuilder.RegisterType(handlerType);
        var registerAsInterfaceType = handlerType.GetInterfaces()
            .Single(t => t.IsGenericType && t.GetGenericTypeDefinition() == typeof(IRequestHandler<,>));
        containerBuilder.Register(c =>
        {
            var handler = c.Resolve(handlerType);
            handler = ConfigureAuthorizationHandler(c, handlerType, registerAsInterfaceType, handler);
            handler = ConfigureTransactionalHandler(c, handlerType, registerAsInterfaceType, handler);
            return handler;
        }).As(registerAsInterfaceType);
    }
 
}
 
private static object ConfigureTransactionalHandler(IComponentContext c, Type handlerType, Type registerAsInterfaceType, object handler)
{
    if (!handlerType.GetCustomAttributes(typeof(NotTransactionalAttribute), true).Any())
    {
        handler = c.Resolve(typeof(TransactionalCommandHandler<,>).MakeGenericType(registerAsInterfaceType.GetGenericArguments()),
            new TypedParameter(registerAsInterfaceType, handler));
    }
 
    return handler;
}
 
private static object ConfigureAuthorizationHandler(IComponentContext c, Type handlerType, Type registerAsInterfaceType, object handler)
{
    var authorizeAttr = (AuthorizeUserAttribute)handlerType
        .GetCustomAttributes(typeof(AuthorizeUserAttribute), true).SingleOrDefault();
    if (authorizeAttr != null)
    {
        var parameters = new List<Parameter>{
            new TypedParameter(registerAsInterfaceType, handler),
            new PositionalParameter(1, authorizeAttr.ClaimType),
            new PositionalParameter(2, authorizeAttr.ClaimValue)
        };                
        handler = c.Resolve(typeof(AuthorizedUserCommandHandler<,>).MakeGenericType(registerAsInterfaceType.GetGenericArguments()), parameters);
    }
 
    return handler;
}

And we’re done!

Streaming files with httpclient and multiple controllers

In a recent project I was faced with a requirement which stated that all access to databases, the file system and whatnot had to go via trusted endpoints. It’s not uncommon, but you do hit some roadblocks. Since the application was file-system intensive, I had to look for a way to stream files across machine and web application boundaries. Simplified, the application looked like this: the user’s browser talks to a public-facing web app, which in turn calls a private web API that has access to the file store.

If you want to stream files in this situation, you don’t want to load the entire file into memory in the web API part, then send it to the public-facing web app, which in turn loads the complete file into memory before giving it to the client browser. To keep the system performant, the file should be streamed from where it’s stored straight to the user’s browser.

The API controller in the private API part is pretty straightforward. Lookup the file and pass it along using a custom HttpActionResult: FileResult.

[RoutePrefix("api/files")]
[Authorize]
public class FilesController : ApiController
{       
    [Route("{fileId:long}")]
    [HttpGet]
    public IHttpActionResult Get(long fileId)
    {
        FileMetaData metadata = LoadFileMetaData(fileId);
        if (File.Exists(metadata.Location))
        {
            return new FileResult(File.Open(metadata.Location, FileMode.Open, FileAccess.Read), metadata.ContentType, metadata.FileName);
        }
        else
        {
            return NotFound();
        }   
    }
}

Creating a custom HttpActionResult is pretty straightforward; if you look around online you’ll find plenty of examples. This is the one I ended up with myself. What’s important in this case, apart from loading the file, is populating the MIME type and setting the Content-Disposition header.

class FileResult : IHttpActionResult
{
    private readonly string _filePath;
    private readonly string _contentType;
    private readonly string _filename;
    private readonly Stream _stream;
 
    public FileResult(string filePath, string contentType = null, string filename = null)
    {
        if (filePath == null) throw new ArgumentNullException("filePath");
 
        _filePath = filePath;
        _contentType = contentType;
        _filename = filename;
    }
 
    public FileResult(Stream stream, string contentType = null, string filename = null)
    {
        if (stream == null) throw new ArgumentNullException("stream");
 
        _stream = stream;
        _contentType = contentType;
        _filename = filename;
    }
 
    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(_stream ?? File.OpenRead(_filePath))
        };
 
        var contentType = _contentType
            ?? (_filePath != null ? MimeMapping.GetMimeMapping(_filePath) : "application/octet-stream");
        response.Content.Headers.ContentType = new MediaTypeHeaderValue(contentType);
        if (!string.IsNullOrWhiteSpace(_filename))
        {
            response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
            {
                FileName = _filename
            };
        }
 
        return Task.FromResult(response);
    }
}

Then comes the tricky part: streaming the file directly to the user from the public-facing web application, in this case an MVC site. I’m using HttpClient to call the private API and indicate that I want my code to continue as soon as the response headers have been received. This allows me to check whether the file was found. If we don’t get a 404, I check for any other error and, if there is none, asynchronously get the response stream and pass it along to MVC’s standard File result. An essential detail is Response.BufferOutput: by default it’s set to true, which forces the web app to load the file completely into memory before handing it to the client.

[Route("file/{fileId:long}")]
[HttpGet]
public virtual async Task<ActionResult> File(long fileId)
{
    var httpClient = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });
 
    var response = await httpClient.GetAsync(ConfigurationManager.AppSettings["App.PrivateApi"] + $"/{fileId}", HttpCompletionOption.ResponseHeadersRead);
    if (response.StatusCode == System.Net.HttpStatusCode.NotFound)
    {
        return HttpNotFound();
    }
    else
    {
        response.EnsureSuccessStatusCode();
        var fileStream = await response.Content.ReadAsStreamAsync();
        Response.BufferOutput = false;
        return File(fileStream, response.Content.Headers?.ContentType?.MediaType, response.Content.Headers?.ContentDisposition?.FileName);
    }
}

CloudBrew: The blockchain and you

Last Saturday I presented at CloudBrew. In 2015 my talk focused on doing IoT in your own home. This year it was time for something completely different.

The past months I’ve been looking at blockchain technology, and together with my colleague Hans I introduced the audience to this new world. Since it was a technical conference we explained how the basic principles are implemented and gave a demo of our own little distributed application.

You can get the slides here and the demo code is located in my GitHub account. Over the coming months I plan to write down what I’ve learned so far, and I will also improve the sample application.

Tracking water usage with Raspbian, Mono and Azure

Although I had this up and running back in March, it took me a while to find some time to write everything down. Here it goes.

Part 1: how does stuff work?

With my .NET application running on the Pi, I now had to see how I could monitor my water usage. Typical water meters in Belgium, or at least where I live, look quite dull, but they have everything that’s needed to make them smart.

One way to get automatic readouts is to contact the supplier and have them add a logger, but that’s not cheap. Their website is not really helpful, but it looks like it would cost more than €100, plus a yearly or monthly fee. Not a valid option.

However, if you do some research you’ll find info on how these meters work, and chances are your meter is outputting a magnetic pulse. The video below gives a very nice explanation of it at around 0:35.

With my newfound knowledge I installed the app “Magnetic Detector!” on my iPhone and headed down into the basement while the tap was running. Sure enough, a sine wave appeared.

[image: sine wave]

While doing further research I learned that detecting magnetism can be done with a reed switch or a Hall effect sensor. I chose the former, since a Hall effect sensor needs power all the time and eventually I want to replace the Pi with a tiny board. Basically, a reed switch is just like a manually operated switch, except that it closes whenever a magnet is nearby.

Part 2: Wiring it up

I bought a reed switch for €3 and set everything up. I first tested with a regular magnet to see if everything worked, and then placed my setup near the water meter.

[image: wiring schema]

The reed switch can be inserted into a cavity of the water meter; apparently it’s really built to accept these kinds of devices.

[image: water meter]
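To give an idea of the software side, here’s a minimal pulse-counting sketch using the Raspberry Sharp IO event model from my earlier post. The pin choice and the one-closure-per-revolution assumption are mine, not the exact production code:

var reedSwitch = ConnectorPin.P1Pin11.Input();

var connection = new GpioConnection(reedSwitch);
var pulseCount = 0;
connection.PinStatusChanged += (sender, args) =>
{
    // Count only the closing edge: each closure marks one revolution
    // of the meter, i.e. a fixed volume of water.
    if (args.Enabled)
        pulseCount++;
};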

Part 3: Storing the readouts

With the hard part behind me, I created a free Azure web app and connected it to a free SQL database. Note that you can only create a free SQL database from within an Azure web app; if you go directly to SQL databases you will not find that option.

[image: free SQL database option]

Since it’s a .NET app, I also installed the Newtonsoft.Json package to transfer my pulse counts to the Azure web app. I spent several hours trying to get it working though, as I was once again faced with a mysterious error.

System.TypeInitializationException: The type initializer for 'Newtonsoft.Json.JsonWriter' threw an exception. ---> System.BadImageFormatException: Could not resolve field token 0x04000493
File name: 'Newtonsoft.Json'
  at Newtonsoft.Json.JsonWriter.BuildStateArray () <0x76860c90 + 0x0009f> in <filename unknown>:0

I don’t know why, but I eventually went looking at the dependencies of Newtonsoft.Json and then explicitly updated my Mono installation with the necessary bits from the Debian package list. Everything started working, and uploading a pulse was just plain C#.

public void Upload(Tick tick)
{
    using (HttpClient client = new HttpClient())
    {
        // Serialize the pulse count and block until the POST completes.
        client.PostAsync(_apiEndpoint,
            new StringContent(JsonConvert.SerializeObject(tick), Encoding.UTF8, "application/json"))
            .Wait();
    }
}

My database started filling up and after only one day I could calculate that my showers were consuming 70 liters and the dishwasher 30 liters. Time to cut back on the time spent in the shower!

[image: ticks table filling up]

In order to keep the program running in the background I’m using a program called screen; you can find more info on that in this excellent post.
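For reference, the basic screen workflow looks something like this (the session name is just an example):

screen -S meter              # start a named session
mono TickTackConsole.exe     # start the app inside it
# detach with Ctrl+A, D; the app keeps running
screen -r meter              # reattach later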

Part 4: Next steps

I had the Pi running in April, but had some issues with the Wifi. Some days I received no pulses at all and had to reboot to regain access. Since then, well actually yesterday, I’ve changed the code to keep track of the pulse counts that failed to be uploaded and to transfer them at a later time, as sketched below. Next up will be creating a dashboard to view the pulses, or adding further sensors to monitor gas and electricity.
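A rough sketch of that retry logic; the queue-based buffering shown here is my interpretation, not the exact code:

private readonly Queue<Tick> _pending = new Queue<Tick>();

public void UploadOrBuffer(Tick tick)
{
    _pending.Enqueue(tick);

    // Flush everything that is still pending; stop at the first failure
    // and keep the remaining ticks for a later attempt.
    while (_pending.Count > 0)
    {
        try
        {
            Upload(_pending.Peek());
            _pending.Dequeue();
        }
        catch (Exception)
        {
            break;
        }
    }
}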

Getting up and running with Mono and Raspberry Pi 3

Last week I got my hands on a Raspberry Pi 3, and this weekend I finally found some time to sit down and get my first project with a Pi going. Naturally I ran into several issues, and with today being Pi Day, I thought I’d share my notes.

Last year I started a project with a bunch of colleagues where we try to monitor our gas, water and electricity meters. I’ve presented what we achieved so far at Cloudbrew last year. The aim is to build an IoT solution with a mobile application, a cloud backend and lots of devices (Arduinos and Pis for now). I didn’t want to wait until we finished all that to gather readouts from my own utility meters, so I grabbed a Pi or two and got started. Since we had already figured out how to get the readouts, I thought an hour or two would be all I needed to put the solution on a Pi.

The first thing I did was download Raspbian Jessie Lite and follow the steps from the official site. Jessie Lite is a headless operating system, which is fine since I won’t be connecting a monitor. I’m not choosing Windows 10 IoT Core for now because the onboard Wifi, a main selling point of the new Pi, is not supported at the time of this writing.

After connecting an ethernet cable and a power supply I used PuTTY to open an SSH session to connect to the Pi. So far so good.

The next task was to get the Wifi going. This turned out to be rather easy: open “/etc/wpa_supplicant/wpa_supplicant.conf” in your editor and add the SSID and password of the network you want to connect to. As is usually the case, someone else had already written down the instructions.

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

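The network block you append to that file looks like this (SSID and passphrase are placeholders):

network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}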

I unplugged the ethernet cable and connected via Wifi. I then updated the Pi with “sudo apt-get update” and “sudo apt-get upgrade” so I was running the latest bits.

sudo apt-get update
sudo apt-get upgrade

Next up: programming. I briefly looked at the options I had. I could program in C, Python, C++ and many others, but time was limited this weekend. I live and breathe .NET, so I narrowed the list down to .NET Core or Mono. I chose Mono because I had experimented with it years ago, and .NET Core had not yet reached a stable point. Toying around with alpha and beta releases was not on my to-do list for today.

The default package repository has a very old version of Mono, so you need to follow the instructions on the Mono site: add the signing key and package repository to your system, then run “sudo apt-get install mono-runtime”.

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
sudo apt-get install mono-runtime

I created a Hello World console application on my Windows 10 laptop and used PSFTP to copy the exe to the Pi. It just worked.
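If you haven’t used PSFTP before, the copy boils down to something like this (host name and paths are placeholders):

psftp pi@raspberrypi
psftp> put HelloWorld.exe /home/pi/HelloWorld.exe
psftp> quit

Running it on the Pi is then just “mono HelloWorld.exe”.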


Then the search was on for a library to interface with the GPIO pins on the Pi. After looking around I found Raspberry Sharp IO. It had the API I wanted: you can use the event model to track changes on the GPIO pins, just what I needed.

var pin2Sensor = ConnectorPin.P1Pin11.Input();

GpioConnection connection = new GpioConnection(pin2Sensor);
connection.PinStatusChanged += (sender, statusArgs)
                    => Console.WriteLine("Pin changed: {0}", statusArgs.Configuration.Name);

Deploying this to the Pi, however, resulted in catastrophic failure with a rather cryptic error message:

pi@raspberrypi:~/ticktack $ sudo mono TickTackConsole.exe
Missing method .ctor in assembly /home/pi/ticktack/Raspberry.IO.GeneralPurpose.dll, type System.Runtime.CompilerServices.ExtensionAttribute
Can't find custom attr constructor image: /home/pi/ticktack/Raspberry.IO.GeneralPurpose.dll mtoken: 0x0a000014
* Assertion at class.c:5597, condition `!mono_loader_get_last_error ()' not met
 
Stacktrace:
 
 
Native stacktrace:
 
 
Debug info from gdb:
 
[New LWP 1965]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
0x76e67ee8 in __libc_waitpid (Cannot access memory at address 0x1
pid=1966, stat_loc=0x7e904960, options=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
40      ../sysdeps/unix/sysv/linux/waitpid.c: No such file or directory.
  Id   Target Id         Frame
  2    Thread 0x769f3430 (LWP 1965) "mono" 0x76e65a40 in do_futex_wait (isem=isem@entry=0x3181a4) at ../nptl/sysdeps/unix/sysv/linux/sem_wait.c:48
* 1    Thread 0x76f5e000 (LWP 1961) "mono" 0x76e67ee8 in __libc_waitpid (Cannot access memory at address 0x1
pid=1966, stat_loc=0x7e904960, options=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
 
Thread 2 (Thread 0x769f3430 (LWP 1965)):
#0  0x76e65a40 in do_futex_wait (isem=isem@entry=0x3181a4) at ../nptl/sysdeps/unix/sysv/linux/sem_wait.c:48
#1  0x76e65af4 in __new_sem_wait (sem=0x3181a4) at ../nptl/sysdeps/unix/sysv/linux/sem_wait.c:69
#2  0x00219f98 in mono_sem_wait ()
#3  0x0019091c in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
 
Thread 1 (Thread 0x76f5e000 (LWP 1961)):
Cannot access memory at address 0x1
#0  0x76e67ee8 in __libc_waitpid (pid=1966, stat_loc=0x7e904960, options=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#1  0x000c0ba4 in ?? ()
Cannot access memory at address 0x1
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
 
=================================================================
Got a SIGABRT while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================

Fast forward 4 hours: it turns out installing mono-runtime doesn’t quite get you all the bits you need. You also need to run the following command:

sudo apt-get install libmono-system-core4.0-cil

After that the application started again, and the button I had connected to the pins activated my code. Yay! With that, my time for the weekend was all used up, but I’m now ready to create my actual application.


To be continued. #MarchIsForMakers

Running Gulp and NPM in VSO Build fails with “Run as Administrator” message

The past months I’ve been heavily involved with Angular and ASP.NET Web API projects. While moving a project from an on-premise build server to Visual Studio Online, I ran into an issue for which I could not find any solution at first: the default npm task was failing for no apparent reason, and only outputted that the task had to be run as Administrator.

Long story short: I added the --force argument and suddenly the issue was resolved.

[image: npm task with the force argument]

Since I wanted to write this down, I had to reproduce the issue. So I removed the argument again, but the build somehow kept succeeding. I had also deleted the failed builds, so I no longer have a log to illustrate the issue.

Something has changed though, as the builds now take much longer than before, even without the force argument. You can see that in the image below. Hopefully I don’t run into it again; I originally added the force argument after reading this GitHub issue.

[image: build durations]

Session: Storm with HDInsight

Two weeks ago I spoke at the Belgian Azure user group (AZUG), where I gave an introduction to Storm with HDInsight. You can find a recording of the session on their website.

My talk was divided into three parts: an introduction, a deep dive giving an overview of the main concepts of a Storm topology, and then several scenarios and how they can be solved.

The deep dive centered around creating a Twitter battle where hashtags were counted and the results then displayed on a website. You can find the code on my GitHub account.

Scaling an Azure Event Hub: Throughput units

When navigating to the scale tab of an event hub there are only two settings you can change: the messaging tier and the number of Event Hub throughput units.

[image: scale settings]

The messaging tier enables features and sets the amount you pay for messages or connections. You can find more info on the Azure website.

A throughput unit (TU) has quite a direct impact on performance. A TU currently has these limits: 1 MB/s ingress, 2 MB/s egress, and up to 84 GB of event storage. The default value is 1 TU.

In the picture below you can see that I had one cloud service pushing messages to an event hub until 10:00. I then scaled the service out to 20 instances. This resulted in only about twice the number of messages being sent (from 200k to 400k per 5 minutes), not what you’d expect from a 20x scale-out. I was also getting more errors: from time to time the event hub was sending back server busy messages.
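Some quick math shows why: 400k messages per 5 minutes is roughly 1,300 messages per second, so at an assumed average message size of about 1 KB the ingress is already around 1.3 MB/s, above the 1 MB/s limit of a single TU. That would explain the server busy responses.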

[image: throughput graph]

At about 10:30 I increased the TUs from 1 to 3. This not only stopped the errors from occurring, but also further increased the throughput: from 400k to over 1 million messages received on the event hub per 5 minutes.