Auditing with EF Core and SQL Server – Part 2: Triggers, session context and dependency injection

In part 1 I looked at using temporal tables to cover auditing requirements. Although a great feature, it did not quite cover the scenario I was after. I wanted a solution that did not impact my .NET code, or at least kept the necessary changes as small as possible.

So I decided to use a SQL feature I tend to avoid. Triggers. There, I'll say it again: triggers.

They are very powerful, but easily misused. Certainly in legacy systems where people went all out with them, you can spend quite some time figuring out where your SQL statements are actually processed, or why your delete is doing an update. But for my current use case they seemed to be the right fit: it's a feature of the datastore, and it keeps all the logic out of my code.

So first I created an AuditInfo class in my codebase so I could generate a migration with the create table statement and, should I need it, be able to query it in code.

public class AuditInfo
{
    public long Id { get; set; }
    public string SchemaName { get; set; }
    public string TableName { get; set; }
    public long RowId { get; set; }
    public DateTime ActionDate { get; set; }
    public string Action { get; set; }
    public string Values { get; set; }
    public string ModifiedBy { get; set; }
}

Short and simple. In my current project I gave every entity an Id, even if I could use a composite key. It just makes things a lot easier and means I can get the history of a specific row in an easy way. I also created an extension method to add an audit trigger on every entity.
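Because every entity has a single Id, pulling up the audit trail of one row is a single query against this table. A minimal sketch (the 'Products' table name and the RowId value are hypothetical):

```sql
-- History of one row, newest change first. Table and column names
-- come from the AuditInfo entity above.
SELECT ActionDate, [Action], ModifiedBy, [Values]
FROM dbo.AuditInfo
WHERE SchemaName = 'dbo'
  AND TableName = 'Products'
  AND RowId = 42
ORDER BY ActionDate DESC;
```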

public static class EnableAuditTriggerExtensions
{
	public static void EnableAuditTrigger(this MigrationBuilder migrationBuilder, IEntityType entityType)
	{
		var tableName = entityType.Relational().TableName;
		var schemaName = entityType.Relational().Schema ?? "dbo";
		migrationBuilder.Sql($@"
IF EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[{schemaName}].[{tableName}_Audit]'))
DROP TRIGGER [{schemaName}].[{tableName}_Audit]");
		migrationBuilder.Sql($@"
CREATE TRIGGER [{schemaName}].[{tableName}_Audit] ON  [{schemaName}].[{tableName}]
AFTER INSERT, DELETE, UPDATE
AS 
BEGIN
 
SET NOCOUNT ON;
DECLARE @Action AS CHAR(1)
DECLARE @Id AS BIGINT
DECLARE @Values AS NVARCHAR(MAX)
SET @Action = NULL
SET @Id = NULL
SET @Values = NULL
 
IF EXISTS (SELECT * FROM inserted)
BEGIN
	IF EXISTS(SELECT * FROM deleted)
	BEGIN
		SET @Action = 'U'
	END
	ELSE
	BEGIN
		SET @Action = 'I'
	END
END
ELSE IF EXISTS(SELECT * FROM deleted)
BEGIN
	SET @Action = 'D'
END
IF @Action = 'D'
BEGIN
	INSERT INTO dbo.AuditInfo (
		SchemaName,
		TableName,
		RowId,
                ActionDate,
		ModifiedBy,
		[Action],
		[Values])
		SELECT
		'{schemaName}',
		'{tableName}',
		d.Id,
                SYSUTCDATETIME(),
		(SELECT CONVERT(nvarchar(250), SESSION_CONTEXT(N'CurrentApplicationUser'))),
		@Action,
		(SELECT * FROM deleted vals where vals.Id = d.Id FOR JSON PATH)
		FROM deleted d
END
ELSE
BEGIN
	INSERT INTO dbo.AuditInfo (
		SchemaName,
		TableName,
		RowId,
                ActionDate,
		ModifiedBy,
		[Action],
		[Values])
		SELECT
		'{schemaName}',
		'{tableName}',
		i.Id,
                SYSUTCDATETIME(),
		(SELECT CONVERT(nvarchar(250), SESSION_CONTEXT(N'CurrentApplicationUser'))),
		@Action,
		(SELECT * FROM inserted vals where vals.Id = i.Id FOR JSON PATH)
		FROM inserted i
END
END");
	}
}

Since a trigger gives you access to the deleted and inserted pseudo-tables, I can inspect them and insert the correct values into my audit table. It's not that complicated.

You can also see that I'm using "SESSION_CONTEXT". All users connect to the database with the same connection string, because I want to use connection pooling, but I still want to log the correct application user along with the changes they made. There are several ways to solve this, but "SESSION_CONTEXT" gives me an easy way to share information between my application and its database. It is a feature introduced in SQL Server 2016 and can also be used with SQL Azure. SESSION_CONTEXT allows you to associate key-value pairs with your current SQL connection.

Won't this then fail because you are using the same connection for multiple users? No, because if you trace what happens on your SQL Server you'll notice that "sp_reset_connection" gets called whenever a pooled connection is handed out. This does a couple of things, including clearing whatever is in the SESSION_CONTEXT. I tried hard to find this explicitly mentioned on MSDN but could not find any reference; this article proved valuable to clear up some of the underlying functionality. The stored procedure also clears "CONTEXT_INFO", which I could have used instead, but it is a bit more cumbersome to work with.
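The mechanism is easy to try out directly in a query window. A quick sketch (the user name is made up):

```sql
-- Attach a key-value pair to the current session. With @read_only = 1
-- the value cannot be changed again until the connection is reset.
EXEC sp_set_session_context
    @key = N'CurrentApplicationUser',
    @value = N'alice@example.com',
    @read_only = 1;

-- Read it back, exactly as the trigger above does.
SELECT CONVERT(nvarchar(250), SESSION_CONTEXT(N'CurrentApplicationUser'));
```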

The migration itself is then pretty straightforward. The same caveat as before applies: many-to-many associations need to be modelled as an explicit entity with this approach, as I iterate over all entities to add the trigger to each underlying table.

public partial class LogWithTrigger : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        foreach (var entity in TargetModel.GetEntityTypes())
        {
            if (entity.Name != typeof(AuditInfo).FullName)
            {
                migrationBuilder.EnableAuditTrigger(entity);
            }
        }
    }
}

What remained was to actually call the stored procedure that assigns the current username to the key ('CurrentApplicationUser') referenced in my trigger. I had to dig a bit into how Entity Framework works to get this going, as I needed control over the actual connection. I could simply create a SqlConnection and hand it to the DbContext; if you do that, Entity Framework will just use the connection you supply. However, I wanted to use dependency injection, so I subclassed SqlServerConnection to add the extra behaviour.

This is the version for EF Core < 2.0:

public class PrincipalAwareConnection
    : SqlServerConnection
{
    private IPrincipal principal;
 
    public PrincipalAwareConnection(IDbContextOptions options, ILogger<SqlServerConnection> logger, IPrincipal principal)
        : base(options, logger)
    {
        this.principal = principal;
    }
 
    public override async Task OpenAsync(CancellationToken cancellationToken = default(CancellationToken))
    {
        bool setSessionState = this.DbConnection.State != ConnectionState.Open;
        await base.OpenAsync(cancellationToken);
        if (setSessionState)
        {
            SetSessionInfo();
        }
    }
 
    private void SetSessionInfo()
    {
        using (var cmd = base.DbConnection.CreateCommand())
        {
            cmd.Transaction = base.CurrentTransaction?.GetDbTransaction();
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.CommandText = "sp_set_session_context";
            var keyParameter = cmd.CreateParameter();
            cmd.Parameters.Add(keyParameter);
            keyParameter.ParameterName = "key";
            keyParameter.Value = "CurrentApplicationUser";
            var valueParameter = cmd.CreateParameter();
            cmd.Parameters.Add(valueParameter);
            valueParameter.ParameterName = "value";
            valueParameter.Value = principal.Identity.Name;
            var readOnlyParameter = cmd.CreateParameter();
            cmd.Parameters.Add(readOnlyParameter);
            readOnlyParameter.ParameterName = "read_only";
            readOnlyParameter.Value = 1;
            cmd.ExecuteNonQuery();
        }
    }
 
    public override void Open()
    {
        bool setSessionState = this.DbConnection.State != ConnectionState.Open;
        base.Open();
        if (setSessionState)
        {
            SetSessionInfo();
        }
    }
}

This is the version for EF Core >= 2.0:

public class PrincipalAwareConnection
    : SqlServerConnection
{
    private IPrincipal principal;
 
    public PrincipalAwareConnection(RelationalConnectionDependencies dependencies, IPrincipal principal)
        : base(dependencies)
    {
        this.principal = principal;
    }
 
    public override async Task<bool> OpenAsync(CancellationToken cancellationToken, bool errorsExpected = false)
    {
        bool setSessionState = this.DbConnection.State != ConnectionState.Open;
        bool opened = await base.OpenAsync(cancellationToken, errorsExpected);
        if (setSessionState)
        {
            SetSessionInfo();
        }
        return opened;
    }
 
    private void SetSessionInfo()
    {
        using (var cmd = base.DbConnection.CreateCommand())
        {
            cmd.Transaction = base.CurrentTransaction?.GetDbTransaction();
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.CommandText = "sp_set_session_context";
            var keyParameter = cmd.CreateParameter();
            cmd.Parameters.Add(keyParameter);
            keyParameter.ParameterName = "key";
            keyParameter.Value = "CurrentApplicationUser";
            var valueParameter = cmd.CreateParameter();
            cmd.Parameters.Add(valueParameter);
            valueParameter.ParameterName = "value";
            valueParameter.Value = principal.Identity.Name;
            var readOnlyParameter = cmd.CreateParameter();
            cmd.Parameters.Add(readOnlyParameter);
            readOnlyParameter.ParameterName = "read_only";
            readOnlyParameter.Value = 1;
            cmd.ExecuteNonQuery();
        }
    }
 
    public override bool Open(bool errorsExpected = false)
    {
        bool setSessionState = this.DbConnection.State != ConnectionState.Open;
        var opened = base.Open(errorsExpected);
        if (setSessionState)
        {
            SetSessionInfo();
        }
        return opened;
    }
}

In order to override the default SqlServerConnection and allow for the dependency injection of “IPrincipal” I also had to modify the “ConfigureServices” method in my “Startup.cs” class.

services.AddEntityFrameworkSqlServer();
services.AddTransient<ISqlServerConnection, PrincipalAwareConnection>();
services.AddDbContext<MyContext>((sp, options) =>
    options.UseSqlServer(Configuration.GetConnectionString("MyConnectionString"))
           .UseInternalServiceProvider(sp));
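For this to resolve, IPrincipal itself must also be registered in the container. One possible sketch, assuming an ASP.NET Core app where the current user comes from the HttpContext (this registration is an assumption on my part, not part of the setup above):

```csharp
// ClaimsPrincipal implements IPrincipal, so the current request's
// user can be exposed directly. Assumes ASP.NET Core.
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
services.AddTransient<IPrincipal>(sp =>
    sp.GetRequiredService<IHttpContextAccessor>().HttpContext?.User);
```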

This snippet hooks up Entity Framework with the container your application uses. Now, whenever a row is added, updated or deleted, the audit table is populated with the correct values. Perhaps I'll look into building a diff screen, but for now I consider this the best fit for my goals. What do you think?

Topconf Tallinn 2017

I'm currently in Tallinn, where I'm taking part in Topconf Tallinn 2017. After a day of travelling and looking around the city, my talk was scheduled for the morning of the first day.

I presented my blockchain talk, which I had slightly modified to fit the 40-minute time slot. This forced me to remove some parts, but it was still difficult to get through it in the allotted time. Luckily I took part in the open space lab in the afternoon, where a couple of attendees approached me to learn more about the subject. My talk currently has a rating of 4.6/5.

Slides can be downloaded here and the source of my Ethereum game is available on GitHub.

I haven't submitted this talk to any future conferences, so for now this was the last time I gave it. A recording of my session at NDC Oslo 2017 is also available on YouTube.

PowerBI with .NET Core

In May I integrated Power BI in a .NET Core and Angular application. In June a new version of Power BI was released with an entirely new way of getting reports to your customers, so I had to do the exercise again. The C# SDK only supports the .NET Framework, so I used the Power BI API directly to implement it. Although the API is documented, I was only able to get everything working by reading it carefully and following every step mentioned in this article. The end result is great: Power BI offers an excellent reporting solution. Getting it to run, however, was not a walk in the park. You can read about the different scenarios on the official website.

The ability to alter and publish reports without writing any code is awesome. In my current project we have an admin user who creates reports in Power BI Desktop and uploads them in our web application; we then publish them to each customer. They all have their own database, and Power BI basically allows you to change the connection string. In theory (it's mentioned on the Power BI site) we could also allow the customers to modify the published report or create their own. It's a scenario we currently don't need, but one I might investigate nonetheless.

This weekend I decided to take another look at the C# SDK. It's really just a small layer between the API and your application, and you don't strictly need it, but it's the natural place to start looking around. Since it only targets the .NET Framework I couldn't use it in .NET Core. I forked the project, modified the csproj files to target .NET Standard 2.0 and updated the referenced packages. Zero warnings, zero errors. Not bad for a conversion!

The only thing that differs from the official samples is how you obtain a token to call the API. The official samples use UserPasswordCredentials, which is not available here. So, as another GitHub user mentioned, you need to do the heavy lifting yourself.

private async Task<TokenCredentials> GetAccessToken()
{
    using (HttpClient client = new HttpClient())
    {
        string tenantId = "";
        var tokenEndpoint = "";
        var accept = "application/json";
        var userName = "";
        var password = "";
        var clientId = "";
 
        client.DefaultRequestHeaders.Add("Accept", accept);
        // Use FormUrlEncodedContent so the values are URL-encoded and
        // no whitespace from the source code ends up in the body.
        var postBody = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["resource"] = "https://analysis.windows.net/powerbi/api",
            ["client_id"] = clientId,
            ["grant_type"] = "password",
            ["username"] = userName,
            ["password"] = password,
            ["scope"] = "openid"
        });
 
        var tokenResult = await client.PostAsync(tokenEndpoint, postBody);
        tokenResult.EnsureSuccessStatusCode();
        var tokenData = await tokenResult.Content.ReadAsStringAsync();
 
        JObject parsedTokenData = JObject.Parse(tokenData);
 
        var token = parsedTokenData["access_token"].Value<string>();
        return new TokenCredentials(token, "Bearer");
    }
}

Once you have the token you can use the SDK as is.

    var tokenCredentials = await GetAccessToken();
 
    using (var powerBiClient = new PowerBIClient(new Uri("https://api.powerbi.com/"), tokenCredentials))
    {
        var groups = powerBiClient.Groups.GetGroups();
    }

I’ve pushed the code to GitHub and published it as a NuGet package.

Auditing with EF Core and SQL Server – Part 1: Temporal tables

It's part of a lot of projects I'm involved in: keeping track of which data changed, when the change occurred and, most importantly, who made the change. Apart from tracking down unauthorised changes, it also helps when users claim they didn't make a change; they tend to forget the changes they made. It's a recurring requirement, but most of the time it stops there, without the need for any further features. I've never had to create a diff screen or the like. We just need to keep track of all the changes to be on the safe side.

Most of the time you'll see this implemented with an interface your entities have to implement: an IAuditable interface with ModifiedByUser and ModifiedDate properties. It's not that hard to implement and there are plenty of examples out there. Entity Framework has all the info you need; you can even get access to the old and new values of the properties and log them as well.

A quick Google search shows you how others are doing it, like here or even a library that keeps track of it. It’s not that hard to roll your own implementation and you’ll learn more about EF in the process of implementing it.

I always consider it to be noise. It is a cross-cutting concern showing up in all your entities, and then there is the additional code in the DbContext which needs to inspect the changed entities and their properties. For my current project I wanted to look at it from a different angle and keep everything out of my entities and, if possible, the DbContext.

I'm using SQL Azure (or SQL Server 2016) and had read about temporal tables last year; this seemed like a perfect fit. SQL Server can automatically record the period during which a row was valid and keep the old values in a history table.

When enabling it on a table you need to add two columns for the validity period and the rest is taken care of by SQL Server. After inserting one record in the Products table, it will contain one row and its history table will be empty.

If I then issue an update statement, I get one row in the products table and one in its history table.

So far so good: we get an automatic history log. I was concerned that adding a column to the source table would require a matching change to the corresponding history table, but that is also handled by SQL Server, so migrations just work. There are some restrictions, however: cascading deletes are no longer possible and need to be handled by the application. You can find more limitations here.
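Querying the history is done with the FOR SYSTEM_TIME clause rather than by touching the history table directly. A sketch, assuming a hypothetical Products table (with a Name column) that has been made temporal:

```sql
-- Every version this row has gone through. The period columns are
-- created as HIDDEN, so they must be selected explicitly.
SELECT p.Id, p.Name, p.ValidFrom, p.ValidTo
FROM dbo.Products FOR SYSTEM_TIME ALL AS p
WHERE p.Id = 42
ORDER BY p.ValidFrom;
```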

To enable temporal tables on all my entities I created a couple of extension methods and added a migration to my code base.

public partial class EnableTemporalTable : Migration
{
    private string schemaName = "Logging";
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql($"CREATE SCHEMA {schemaName}");
        var entityTypes = base.TargetModel.GetEntityTypes();
        foreach (var entityType in entityTypes)
        {
            migrationBuilder.AddAsTemporalTable(entityType, schemaName);
        }
    }
 
    protected override void Down(MigrationBuilder migrationBuilder)
    {
        var entityTypes = base.TargetModel.GetEntityTypes();
        foreach (var entityType in entityTypes)
        {
            migrationBuilder.RemoveAsTemporalTable(entityType, schemaName);
        }
        migrationBuilder.Sql($"DROP SCHEMA {schemaName}");
    }
}

Note that I’m iterating all entities. If you have a many to many relationship you will have to model it as an association entity to have it also gain the temporal features.

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata;
using Microsoft.EntityFrameworkCore.Migrations;
 
public static class EnableTemporalDataOnTable
{
    public static void AddAsTemporalTable(this MigrationBuilder migrationBuilder, IEntityType entityType, string temporalScheme, string temporalTableName)
    {
        var tableName = entityType.Relational().TableName;
        var schemaName = entityType.Relational().Schema ?? "dbo";
        migrationBuilder.Sql($@"
                    IF NOT EXISTS (SELECT * FROM sys.[tables] t INNER JOIN sys.schemas s ON s.schema_id = t.schema_id WHERE t.name = '{tableName}' AND temporal_type = 2 and s.name = '{schemaName}')
                    BEGIN
                        ALTER TABLE {schemaName}.{tableName}   
                        ADD  ValidFrom datetime2 (2) GENERATED ALWAYS AS ROW START HIDDEN    
                                constraint DF_{tableName}_ValidFrom DEFAULT DATEADD(second, -1, SYSUTCDATETIME())  
                            , ValidTo datetime2 (2)  GENERATED ALWAYS AS ROW END HIDDEN     
                                constraint DF_{tableName}_ValidTo DEFAULT '9999-12-31 23:59:59.99'  
                            , PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);   
 
                        ALTER TABLE {schemaName}.{tableName}    
                        SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = {temporalScheme}.{temporalTableName})); 
                    END
                ");
 
    }
 
    public static void AddAsTemporalTable(this MigrationBuilder migrationBuilder, IEntityType entityType, string temporalScheme)
    {
        var tableName = entityType.Relational().TableName;
        AddAsTemporalTable(migrationBuilder, entityType, temporalScheme, tableName);
    }
 
    public static void RemoveAsTemporalTable(this MigrationBuilder migrationBuilder, IEntityType entityType, string temporalScheme, string temporalTableName)
    {
        var tableName = entityType.Relational().TableName;
        var schemaName = entityType.Relational().Schema ?? "dbo";
        // Disable versioning first, then drop the period, the default
        // constraints and the period columns created in AddAsTemporalTable.
        migrationBuilder.Sql($@"ALTER TABLE {schemaName}.{tableName} SET (SYSTEM_VERSIONING = OFF);");
        migrationBuilder.Sql($@"ALTER TABLE {schemaName}.{tableName} DROP PERIOD FOR SYSTEM_TIME;");
        migrationBuilder.Sql($@"ALTER TABLE {schemaName}.{tableName} DROP CONSTRAINT DF_{tableName}_ValidFrom, DF_{tableName}_ValidTo;");
        migrationBuilder.Sql($@"ALTER TABLE {schemaName}.{tableName} DROP COLUMN ValidFrom, ValidTo;");
        migrationBuilder.Sql($@"DROP TABLE {temporalScheme}.{temporalTableName};");
    }
 
    public static void RemoveAsTemporalTable(this MigrationBuilder migrationBuilder, IEntityType entityType, string temporalScheme)
    {
        var tableName = entityType.Relational().TableName;
        RemoveAsTemporalTable(migrationBuilder, entityType, temporalScheme, tableName);
    }
}

This is largely based on this blog post; I just adapted it to be a bit more generic. When I create a migration for a new table, I can simply call the extension method and make it temporal in one go.

But there was still one missing piece. Who made the change? I have no property on my entities to store this information, and thus no column in the table or history table to hold it. One of the first versions of the MSDN article introducing temporal tables mentioned that you could track the user, but that reference has since been deleted. So custom code is necessary. I didn't want to add the property to each of my entities, so I used a new EF Core feature called shadow properties. Funnily enough, the example on the MSDN site is about auditing.

So I created overrides for the OnModelCreating, SaveChanges and SaveChangesAsync methods of the DbContext, thereby not completely honouring my initial goal to keep it out of my application code.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    foreach (var entityType in modelBuilder.Model.GetEntityTypes())
    {
        modelBuilder.Entity(entityType.ClrType).Property<string>("ModifiedBy");
    }
}
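Even though ModifiedBy doesn't exist on the CLR type, it remains queryable through EF.Property (available via Microsoft.EntityFrameworkCore). A small sketch, where the Products DbSet and the user name are hypothetical:

```csharp
// Filter on the shadow property without adding it to the entity class.
var changedByAlice = context.Products
    .Where(p => EF.Property<string>(p, "ModifiedBy") == "alice")
    .ToList();
```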

SaveChanges and SaveChangesAsync call into the following method; the principal is injected through the constructor of the DbContext.

public void SetModifiedInformation()
{
    var modifiedByEntries = ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified || e.State == EntityState.Deleted);
 
    foreach (EntityEntry entry in modifiedByEntries)
    {
        entry.Property("ModifiedBy").CurrentValue = this.principal?.Identity?.Name;
    }
}
 
public override int SaveChanges()
{
    SetModifiedInformation();
    return base.SaveChanges();
}
 
public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = new CancellationToken())
{
    SetModifiedInformation();
    return await base.SaveChangesAsync(cancellationToken);
}

This worked, but one scenario is still not covered. We know who created or updated a row, but on a delete we lose the value in the ModifiedBy property: the deleted row is simply copied from the source table to the history table. I considered the ultimate hack of creating an INSTEAD OF trigger that issues an update followed by the delete, but luckily SQL Server prevented me from even trying it: INSTEAD OF triggers are not allowed on a temporal table. I considered adding a separate table to keep track of who deleted a certain row, but that would again introduce more code than I wanted and clutter the codebase. I could introduce soft deletes, so that a delete becomes an update, but that just opens another can of worms.

So I went back to the drawing board to find a better solution, which turned out to be a blast from the past combined with a new feature in SQL Server 2016. But that's for next time.

Migrating databases with EF Core 2.0 during a VSTS release

A while ago I wrote down how to migrate a database using the default tooling with EF Core 1.1. With 2.0 being released last August it was finally time to upgrade projects on which I’m working to the latest bits. A reader had already reported that the same approach as used with 1.1 no longer worked for EF Core 2.0.

Again I ran the database update command with the verbose option and saw that the command itself has not changed. What has changed, however, is the location of EF.dll. Even when forcing the package restore to skip all caching and explicitly stating where the packages had to be installed, EF.dll just would not show up. It is, however, located (at least on my Mac) at /usr/local/share/dotnet/sdk/NuGetFallbackFolder/microsoft.entityframeworkcore.tools.dotnet/2.0.0/tools/netcoreapp2.0/ef.dll.

I tried for an hour or two to get this file in a zip archive on Windows and on macOS. The easy way out would be to just find the dll and add it to source control. Of course I went for another solution: download the EF.dll during a release and use it to update the databases.

I edited the release definition of VSTS and in my first attempt I tried to use the package management task which you get when you install this extension. Unfortunately this is not meant to be used to download public packages from the NuGet servers.

Second attempt was to use the Download file task. It was not able to download the NuGet package I needed giving me a 404 error.

So when everything fails, Powershell will fix it. With just a single line I was able to download the NuGet package containing EF.dll.

I used an inline script, but you could just create a script and add it to source control so you can use variables for the package name.

wget "https://www.nuget.org/api/v2/package/Microsoft.EntityFrameworkCore.Tools.DotNet/2.0.0" -outfile "EF.nupkg"

Then I extract all the files, since the build output contains one or more zip files and a nupkg file is also just a zip archive. My sample project just contained one zip.
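The extraction can be done in the same inline PowerShell script. A sketch under the assumption that the build output zip is named after my test project (Expand-Archive insists on a .zip extension, hence the rename):

```powershell
# A nupkg is just a zip archive, but Expand-Archive wants .zip.
Rename-Item EF.nupkg EF.zip
Expand-Archive EF.zip -DestinationPath .
# Extract the build output containing the app and its deps files.
Expand-Archive migratetest20.zip -DestinationPath .
```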

Then move the EF.dll to the location where the assembly resides containing the DbContext and migrations. The final step is to execute the update command.

mv ./tools/netcoreapp2.0/ef.dll ./
dotnet exec --depsfile migratetest20.deps.json --runtimeconfig migratetest20.runtimeconfig.json ef.dll database update --assembly migratetest20.dll --startup-assembly migratetest20.dll

Migratetest20 was my test project.

Topconf Duesseldorf 2017

Last week I was fortunate enough to be part of Topconf Duesseldorf. It was the first time the conference was organized in Duesseldorf. My talk on blockchain technology, and more specifically on Bitcoin and Ethereum, had been selected, and together with 46 other speakers I helped fill two days packed with sessions.

I had changed the abstract a little after reviewing the feedback from NDC and also removed parts of the presentation to fit it into 40 minutes. In the end I still went over time, and with only 10 minutes between sessions I had to rush through the last part. Lessons learned for next month, when I will be giving this talk at Topconf Tallinn.

There were a lot of questions during and after the talk, so the audience was quite engaged; in fact, the atmosphere of the entire conference was very amicable. The reviews submitted for my session also reflect my own thoughts on how it went: at the time of writing, my session has an average score of 4.29/5. The slides can be downloaded here and the code of the DApp is available on GitHub. I've now written the game three times, and this was the first time I felt confident enough to let the attendees participate.

In the audience was also Joy Clarck who made a great summary.

Migrating databases with EF Core during a VSTS release

I wanted to migrate databases during a release. The default way of doing this is to run code when the application starts. That's great for little demo projects, but not when you are releasing a real app to production. When you push a new version to production and the migration fails, the release should fail, alarms should go off, and so on. With the "old" Entity Framework I usually wrapped the migration code in a console application, packaged it alongside the other code, and then executed an additional release step to run the migrations. Since I've been using .NET Core on my latest projects, I wanted to see if I could use the default tooling.

If you run the update command with the verbose option you will see that “dotnet exec” is called using “EF.dll” together with a bunch of other options. This is the behaviour I want to replicate during a release.

dotnet ef database update -v

The first thing to figure out is where to find “EF.dll”. This is of course in a NuGet package. By default they are downloaded in your user’s home folder. I was not able to figure out a way to just grab it from there using a hosted build agent. However by modifying the restore command I can specify where the packages need to be downloaded.

dotnet restore --packages /my/location

I then added an additional “Archive files” build step to my build definition. This step creates a zip file and adds it to the build artifacts staging directory. This means it will automatically be picked up with all the other deliverables.

Those are all the necessary changes for the build definition. Now the extra bits in the release definition. I need four additional steps, but it’s also important to select the correct agent. As I’ll be running .NET Core commands I need a VS2017 agent.

The first step is to extract the archive that contains the appsettings.json with the connection string; in my case that's the output of a project housing the Web API and an Angular app. The second archive we extract contains the EF Core bits we need.

I then need the correct connection string, which is of course a variable in my release environment. The easiest way to replace the connection string in the appsettings.json of the extracted web application files is to use a VSTS extension. It's a two-step process: we first tokenize the file so the second step can find the placeholders.

The final step is to run the same command we execute on our development machine. The hard part is figuring out all the parameters. In my case the migrations (and all of the logic) live in a DLL named App.dll, while the web application and the connection string come from another DLL, App.Web.dll.

dotnet exec --depsfile App.Web.deps.json --runtimeconfig App.Web.runtimeconfig.json ef.dll database update --assembly App.dll --startup-assembly App.Web.dll --verbose --root-namespace MyAppNamespace

And that’s it, now the database is upgraded during a release.

Managing settings for an Angular app with VSTS

Recently I needed client-side settings in an Angular app. My app had to call some third-party web services whose location differed per environment (test, QA and production), and I also had to configure a couple of parameters needed when calling them.

There's support for this in Angular using "environments". If you scaffold a new Angular app with angular-cli you'll get an "environments" folder.

environments

These files are used while building the application: specifying the environment bundles the environment-specific file with the rest of your application. Which is great, but I don't want to rebuild the application when I go from test to QA and finally to production. Typically settings are changed when deploying to an environment; at least that's what I usually do with my .NET projects.

Having one unified way of handling settings raises two challenges: how to bootstrap the Angular app with dynamic settings, and how to manage those settings during deployment.

I added a client-config.json file to my assets folder to hold all the settings, with the values used during development.

client-config.json
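As an illustration of what goes in that file, it might look like this (the setting names and values here are assumptions for the sake of the example):

```json
{
  "thirdPartyServiceUrl": "http://localhost:8080/service",
  "apiKey": "dev-key"
}
```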

This file needs to be downloaded before most of the application actually starts so I added a settings provider.

import { Http } from '@angular/http';
import { Injectable } from '@angular/core';
// RxJS 5 patch-style imports for the operators used below.
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/toPromise';
 
@Injectable()
export class SettingsProvider {
 
  private config: any;
 
  constructor(private http: Http) {
  }
 
  // Downloads and caches the settings; called once before the app starts.
  public loadConfig(): Promise<any> {
    return this.http.get("assets/client-config.json")
      .map(res => res.json())
      .toPromise()
      .then(settings => this.config = settings);
  }
 
  // Synchronous access to the cached settings.
  public get configuration(): any {
    return this.config;
  }
}

Then in my app.module.ts I add a new function to load the configuration.

export function init(settingsProvider: SettingsProvider) {
  return () => settingsProvider.loadConfig();
}

I also changed the NgModule declaration to include

  providers: [{
    provide: APP_INITIALIZER,  // imported from '@angular/core'
    useFactory: init,
    deps: [SettingsProvider],
    multi: true
  },
  SettingsProvider]

The crux here is the APP_INITIALIZER token which Angular provides to run logic before the application starts. You can read more about it here and here.
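Stripped of the Angular plumbing, the pattern boils down to: load and cache the settings once before startup, then read them synchronously everywhere else. A minimal TypeScript sketch, with a hypothetical `fetchJson` stub standing in for the Http call:

```typescript
type Settings = { [key: string]: any };

class SettingsProvider {
  private config: Settings | undefined;

  // fetchJson stands in for the Http-based download in the real app.
  constructor(private fetchJson: (url: string) => Promise<Settings>) {}

  // Called once at bootstrap; resolves when the settings are cached.
  loadConfig(): Promise<Settings> {
    return this.fetchJson('assets/client-config.json')
      .then(settings => (this.config = settings));
  }

  // Synchronous access, valid after loadConfig has resolved.
  get configuration(): Settings {
    if (!this.config) {
      throw new Error('Settings not loaded yet');
    }
    return this.config;
  }
}

// Simulated bootstrap: APP_INITIALIZER would call loadConfig before
// the first component is created.
const provider = new SettingsProvider(
  _url => Promise.resolve({ serviceUrl: 'https://example.test/api' })
);

provider.loadConfig().then(() => {
  console.log(provider.configuration.serviceUrl);
});
```

Reading `configuration` before `loadConfig` resolves throws, which mirrors why APP_INITIALIZER matters: it guarantees the load has finished before any consumer runs.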

With the client side done, the application still needs the correct values when it's pushed to different environments. This is the easy part, as VSTS has support for file transforms and variable substitution. I just add my client-config.json to the list of files that need to be changed, and VSTS replaces any variable it finds that matches a variable defined for the environment I'm deploying to.

VSTS Config

The client-config.json is also added to the list of files the Angular build needs to pick up.

 "apps": [
    {
      "root": "src",
      "outDir": "../wwwroot",
      "assets": [
        "assets",
        "favicon.ico",
        "client-config.json"
      ],
      ....
    }]

The end result is a single way to manage environment specific settings for my .NET Core and Angular applications.

NDC Oslo 2017

I was very fortunate to be selected as one of the speakers for this year’s NDC conference. Together with my colleague and friend Hans we delivered our blockchain talk. It was based on the talk we gave at Cloudbrew in December although the structure and the code have undergone a lot of changes.

The idea of the talk hasn’t changed though. We dived straight into technology and explained how Bitcoin provides the security it needs to make sure transactions are safe. We then zoomed out and illustrated how blockchain technology can be used for much more than just the financial world and proved that with our own DApp or distributed application running on Ethereum.

The slides can be downloaded here and the sample code is up at GitHub. The app is still a work in progress. It might be unrelated but there’s currently a spike in Ethereum transactions :).

Today we received the scores of the talk. Of those that placed a card in the box, 60 were green, 23 yellow and 5 red, so roughly 68% liked it. I do wonder what the yellow voters wanted to see, or what they didn't like.

We tried to put into the 60 minutes all the things we would love to hear in a talk ourselves. There were a lot of questions during and after the session, so I think we achieved what we set out to do: inspire people to look at blockchain technology. It's no silver bullet, but it might be a perfect fit for some projects.

Raspberry Pi Meetup

Last week I was at the 3rd meetup of Raspberry Pi Belgium. I presented my endeavour to monitor utility meters with a Raspberry Pi and Mono, the successes and the failures.

My talk was based on a couple of previous blogposts:

But it also drew on an article I have yet to write, in which I use a DHT22 sensor to monitor humidity and temperature; like every step in this journey, it had its challenges.

The other presentation was done by Jan Tielens. He showed us around the IoT hub in Azure.

Slides are available here.