Flemish media HTTPS bankruptcy

Note: I usually blog in something that resembles English; this post, however, is aimed at the Flemish corner of the internet, in an effort to make it a little safer.

For those who didn't know yet: I work in the IT sector. Every now and then I give a presentation, both for professionals and for laypeople. One of the topics I sometimes cover is security. I don't need to invent a fictional case for it, because unfortunately there are plenty of real examples available. In the past, for instance, I have pointed out Canvas's insecure login page to them several times. First, some context.

Web 101

The web is built on the HTTP protocol. In essence, a lot of text is sent back and forth. When you navigate to a website with your browser, a “GET” request is sent to a certain URL. When you fill in a form on a website, your browser normally performs a “POST” request. For example, I can browse to the website “tweakers.net”. You can see what happens behind the scenes by pressing the “F12” key in your browser. What appears is the “developer console”, which developers use when they build a website or need to troubleshoot a problem. You can also use it yourself to learn how it all works. In the screenshot you can see the technical information at the bottom. The first request my browser makes is a GET request to the URL tweakers.net.

[screenshot: 01tweakers]
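As a sketch, the plain text a browser sends for such a GET request looks roughly like this (headers simplified for illustration):

```python
# Build the plain-text HTTP/1.1 GET request a browser would send.
# The host name and header set are simplified for illustration.
def build_get_request(host, path="/"):
    return ("GET {path} HTTP/1.1\r\n"
            "Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").format(path=path, host=host)

print(build_get_request("tweakers.net"))
```

The server answers with a similar block of text (status line, headers, and the HTML body) that your browser then renders.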

The computer the website runs on receives this request and sends back a lot of text. Your browser then interprets it and you get to see a website. All this text is sent in readable form, which in itself is not a problem. Sometimes, however, there is information you don't want to send as readable text, for example when you have to enter a password or credit card number.

We can check this on the website too. When you click on log in, you are redirected to another page. In the address bar we can see that we are no longer using HTTP, but HTTPS.

[screenshot: 02tweakers]

The developers of the website chose to use HTTPS. The S stands for secure, and as long as it is there, the data you enter and submit is encrypted. The data the website sends back to you is encrypted as well. Other people can no longer eavesdrop.

[screenshot: 03tweakers]

So is all your internet traffic just out in the open? Essentially yes, but when you are at home on your own network, the odds of somebody snooping are small. If you are in a restaurant, a station, or any other public place offering free WiFi, however, you find yourself in a potential jungle. With a tool like Wireshark you can inspect all network traffic, wired and wireless. If you're willing to spend some money, you can buy a WiFi Pineapple, which makes a man-in-the-middle attack child's play, certainly in this age of smartphones. Enough theory; let's take a look at some Flemish media websites.

Knack

At the top of the Knack site there is an “Aanmelden” (sign in) link. When you click through, you get a popup asking for your credentials. At first sight it doesn't appear to use HTTPS; we have to open the browser's developer tools to find out. There we can see that, fortunately, the content of this popup is loaded over HTTPS.

[screenshot: 04knack]

When you enter a username and password, everything is also neatly sent over HTTPS. The password isn't even sent as plain text. Interesting.

[screenshot: 05knack]

[screenshot: 07knack]

Knack's servers receive an MD5 hash of my password. You can look up exactly what happens in the JavaScript the website uses. From this I can assume my password is also stored in this form, which carries a risk of its own, but that is not the focus of this blog post.

[screenshot: 06knack]
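For illustration, hashing a password client-side before sending it looks like this (a sketch of the idea; Knack's actual JavaScript may differ in details):

```python
import hashlib

# Sketch of client-side password hashing as described above: the page
# sends md5(password) instead of the password itself. MD5 is fast and
# unsalted, which is exactly why storing passwords this way is risky.
password = "test"
digest = hashlib.md5(password.encode("utf-8")).hexdigest()
print(digest)  # 098f6bcd4621d373cade4e832627b4f6
```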

Vier

I can also sign in on the Vier website. When you click through to the profile page, an extra panel appears. This panel was already present, hidden, in the page when I surfed to it, and that page is served over plain HTTP, so a man-in-the-middle attack is possible.

[screenshot: 08vier]

After entering a random username (test@test.be) and password (test), to my surprise I was not only logged in, but my username and password were also sent unencrypted. Anyone can therefore see my credentials. This is probably a test account used at Vier. You could also ask questions about the enforced password complexity, but that too is something I won't go deeper into in this blog post.

[screenshot: 09vier]

Vijf

On the Vijf site you can read that you can log in with a Vier account. So I logged in on this site as well with my random test account, and here too my password was sent unencrypted.

[screenshot: 10vijf]

Flemish media websites

I applied the same method to other sites; the results are in the table below. I used the following scoring criteria:

  • Is the login page requested over an encrypted connection: 2.5 points
  • Are the login credentials submitted over an encrypted connection: 5 points
  • The certificate used to secure everything was checked via the SSL Labs website. An A earns the site 2.5 points, an A- earns 2, and so on.
Loading Submitting Certificate Total
Knack 2.5 5 2.5 10
Vier 0 0 0 0
Vijf 0 0 0 0
VTM 2.5 5 2 9.5
GVA 0 0 0 0
Beleggerscompetitie 0 0 0 0
Canvas 0 0 0 0
Radio1 0 0 0 0
MNM 0 0 0 0
Belang van Limburg 0 0 0 0
Nieuwsblad 0 0 0 0
De Standaard 0 0 0 0
De Morgen 2.5 5 1.5 9

You can find the SSL Labs results here for Knack, VTM and De Morgen.
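The scoring scheme can be summarized in a small helper. Note that the grade-to-points mapping below A- is my own extrapolation of the “and so on” above:

```python
def score(loads_over_https, submits_over_https, ssllabs_grade):
    """Compute the 10-point score used in the table above."""
    points = 0.0
    if loads_over_https:
        points += 2.5
    if submits_over_https:
        points += 5.0
    # A = 2.5, each notch below loses half a point (assumed past A-).
    grade_points = {"A": 2.5, "A-": 2.0, "B": 1.5, "C": 1.0}
    points += grade_points.get(ssllabs_grade, 0.0)
    return points

print(score(True, True, "A"))     # Knack
print(score(False, False, None))  # Vier, Vijf, ...
```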

Conclusion

I had expected there would be one bad student in the class. That the situation is this bad, however, is food for thought. These websites are currently choosing to be the weakest link. People reuse passwords, so every time you connect to one of these websites, there is a chance somebody can see your password. It's just like entering your PIN while the ATM's screen is being projected for everyone to see.

As I mentioned, I have informed Canvas of the problem several times. I was told it was too difficult. If that is truly the case, they would be better off removing their login page altogether or looking for an alternative. VTM, for example, uses a third party, and on Newsmonkey you can only log in via social networks. That way you don't have to create yet another username/password combination.

Knack comes out of the comparison on top, but the Beleggerscompetitie (stock market competition) they organize does fail the test.

Future

Hopefully the websites mentioned above will change soon and I will be able to write here that the world has become a little safer again. Also think twice before entering sensitive information, check that you are on HTTPS, and don't connect to every WiFi network you come across.

#httpscrusade

Global Azure Bootcamp 2015 – Belgium

Last Saturday I organized, together with my employer Info Support and the Azure user group Azug, the Belgian location for the Global Azure Bootcamp. Across the entire globe, 183 locations were doing the same thing: getting people together to learn and hack away on Microsoft's cloud platform. We organized sessions and provided room so everyone could join in on the labs that had been created.

The first session was presented by Johnny Hooyberghs. He gave an introduction to Visual Studio Online with a focus on getting builds set up. His session covered both the hosted build environment and creating a custom VM with your own build server. He also showed how you can add custom build steps to extend the process.

The second session was presented by Tim Mahy. He dived into Azure Search as a Service. He used his own experiences to fuel his talk, an approach I always like. He also explained everything that works underneath the public API of Azure Search which showed that it’s built on proven technology.

Session Setup

This third session was presented by myself. I’ve been experimenting with Azure Machine Learning for some time now and wanted to share what I’ve learned so far. I introduced the basic concepts of machine learning and how they relate to concepts in AzureML. I created one experiment to predict the income level of somebody, based on sample labs you can find in AzureML. For the second half of my talk I had created an online movie database (how original). I used the API of The Movie Database to get some realistic data. I then created an experiment in AzureML to get suggestions for these movies. I closed with some info on what I’ve been working on in my spare time.

The fourth session was presented by Hans Peeters and Glenn Dierckx. They had created an enormous demo around everything App service related. They started off with an API service and eventually created a Web App, a mobile app and closed by creating a logic app which combined everything they had done so far.

Last Session

The final session was presented by Reinhart De Lille. Not a deep dive in technology this time, his talk showed the other side of the coin: “How to get your company from on-premise to a cloud first world”. Quite a way to end the day, as many of the attendees probably don’t dwell on this much.

I’ve gathered the slides here.

People could also deploy compute instances to aid in breast cancer research. At the end of the day 117 billion data points had been analysed and little Belgium was in the top 10 of contributing countries!

[screenshot: ScienceLab top 10]

Looking forward to next year!

WCF HTTPS And Request Entity Too Large

While working on a project yesterday I ran into a typical WCF error scenario: the requests being sent to the service were larger than the defaults allow. This results in a 413 error with the text ‘Request Entity Too Large’. The default limit of 64 KB is there to prevent DoS attacks. If you want to increase the allowed request size, you need to modify the binding configuration. You can find many topics on this on the internet. Below you can find a sample:

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding maxReceivedMessageSize="5242880">
        <readerQuotas ... />
      </binding>
    </basicHttpBinding>
  </bindings>  
</system.serviceModel>

Unfortunately this did not solve the problem for me, and I spent quite some time resolving the issue. It turns out I also had to modify the IIS settings of the website hosting the WCF service. The setting is called ‘uploadReadAheadSize’ and can be found in the serverRuntime section under system.webServer; you can use the configuration editor feature of IIS Manager to modify it. Give it the same value as, or a larger value than, the one you specified in your WCF configuration.
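For reference, the resulting fragment looks something like this; the value shown is an assumption chosen to match the binding sample above:

```xml
<!-- In applicationHost.config, or in the site's web.config when overrides are allowed -->
<system.webServer>
  <serverRuntime uploadReadAheadSize="5242880" />
</system.webServer>
```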

Model binding with Headers in ASP.NET WebAPI

While writing the previous blog post I noticed that Outlook sends an additional header, If-Modified-Since, when updating its subscription to the iCal feed. It would be nice to support this additional parameter in the API that retrieves appointments. Instead of always returning the entire list of appointments, an additional filter will limit the results to the appointments that changed since the last synchronization.

[screenshot: ifModifiedSince]

While we could just read the header value inside our controller action, it would be much nicer if our action received it as a parameter. The extension points we need in this case fall under the model binding category, and while WebAPI shares the same ideas and goals with ASP.NET MVC, there are some differences between the two. There is a great MSDN article which covers most of the things we need.

The two concepts we need to grasp are model binders and value providers. Value providers are an abstraction over, well, values. For instance, there's a query string value provider that reads the query string and makes those values available as parameters in your actions or to model binders. Model binders actually do something with the values: they use value providers to retrieve, for example, a first name and a last name, and build a more complex instance from them.

So in this case we need to read a value from a header inside our HttpRequestMessage. Let’s implement a value provider for our IfModifiedSince header by implementing the IValueProvider interface.

public class IfModifiedValuesProvider
    : IValueProvider
{
    private HttpRequestMessage _request;
    private const string header = "IfModifiedSince";
 
    public IfModifiedValuesProvider(HttpRequestMessage requestMessage)
    {
        _request = requestMessage;
    }
 
    public bool ContainsPrefix(string prefix)
    {
        bool found = false;
        if (string.Equals(header, prefix, StringComparison.OrdinalIgnoreCase))
        {
            found = _request.Headers.Any(x => x.Key == prefix);    
        }
        return found;            
    }
 
    public ValueProviderResult GetValue(string key)
    {
        var headerValue = _request.Headers.IfModifiedSince;
        ValueProviderResult result = null;
        if (headerValue.HasValue)
        {
            result = new ValueProviderResult(headerValue, headerValue.ToString(), CultureInfo.InvariantCulture);
        }
        return result;
    }
}

The two methods we need to implement are ContainsPrefix and GetValue. ContainsPrefix is of little importance in this case; GetValue is where the magic happens. In this method we read the value from the header and, if it exists, return a ValueProviderResult populated with the current value. Value providers are always accompanied by value provider factories. It's the responsibility of the factory to create and set up the value provider. In this case we want to supply our value provider with a reference to the current HttpRequestMessage.

public class IfModifiedValuesProviderFactory 
    : ValueProviderFactory
{
    public override IValueProvider GetValueProvider(HttpActionContext actionContext)
    {
        return new IfModifiedValuesProvider(actionContext.Request);
    }
}

Inheriting from the abstract base ValueProviderFactory allows us to override the GetValueProvider method where we initialize our value provider. We now have enough infrastructure to go back to our AppointmentController.

public IEnumerable<AppointmentModel> Get([ValueProvider(typeof(IfModifiedValuesProviderFactory))]DateTimeOffset? ifModifiedSince = null)
{
    IEnumerable<AppointmentModel> models = null;
    using (var context = new AppointmentsEntities())
    {
        // Only filter when the client actually supplied an If-Modified-Since value.
        IQueryable<Appointment> appointments = context.Appointments;
        if (ifModifiedSince.HasValue)
        {
            appointments = appointments.Where(x => x.LastModifiedDate >= ifModifiedSince.Value);
        }
        models = MapAppointments(appointments);
    }
    return models;
}

By decorating the ifModifiedSince parameter with the ValueProvider attribute, it will be populated with the result of the GetValue call. This resolves our issue, but it would be even better if users of our API could pass the ifModifiedSince date via the header or via a parameter in the query string. There are several ways to make this happen.

One approach would be to use the ValueProvider attribute again, chaining along every value provider we want to use.

public IEnumerable<AppointmentModel> Get(
    [ValueProvider(typeof(IfModifiedValuesProviderFactory), typeof(QueryStringValueProviderFactory))]
    DateTimeOffset? ifModifiedSince = null)
{
    // omitted
}

Adding the QueryStringValueProviderFactory to the list of value providers will help us, but every time we want to add another source of our ifModifiedSince parameter we will have to add it here.

A better approach is to remove the attribute on the parameter entirely and add our value provider to the configuration of our WebAPI.

public static void Register(HttpConfiguration config)
{
    config.Services.Add(typeof(ValueProviderFactory), new IfModifiedValuesProviderFactory());
}

If we now run the application and use a query string to supply the value for our action, we will see that the date is passed along to our controller. Unfortunately if a client application uses the header, our custom value provider is not invoked at all. What’s missing?

Well, it turns out that when you declare actions on your controller, by default only data present in the route data dictionary or the query string is passed to the controller; it's as if [FromUri] were applied to your parameters. If we want our own value provider to come into play, we have to use the [ModelBinder] attribute as well.

public IEnumerable<AppointmentModel> Get([ModelBinder]DateTimeOffset? ifModifiedSince = null)
{
   // omitted
}

Now we’re telling WebAPI to use the model binding infrastructure. The default model binder will use all the registered value providers to create a match. Since we’ve registered our IfModifiedValuesProviderFactory in the WebAPI configuration, it will be automatically picked up. If a user of our API uses a query string to pass along the ifModifiedSince value, that will keep working as well. If we add a CookieValueProvider in the future, we will only have to implement the value provider and add it to the configuration of our application. We will not have to inspect every method to see where we should add them explicitly. Best of both worlds. There’s a nice poster of the lifecycle of an HttpRequestMessage on MSDN which includes an illustration on how model binding works.

Exposing iCal data in WebAPI

With ASP.NET Web API it's now easier than ever to create lightweight HTTP services in .NET. Out of the box, the ApiControllers you implement can read json, xml and form-encoded values from the HTTP request, and write xml and json to the HTTP response.

HTTP has the concept of content negotiation. This means that when a client requests a resource, it can tell the server that it wants the result in a specific format.

For example, a client can ask for json by sending an Accept header such as:

Accept: application/json

and the data in the HTTP response is formatted accordingly. If the client requests the response to be formatted as xml (Accept: application/xml), the result will be returned as xml.

This mechanism can be extended to support different kinds of formatters to read from or write to the body of a request or response.
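The selection logic behind content negotiation can be sketched as follows. This is not WebAPI's actual implementation, just the idea: walk the client's accepted media types in order and pick the first formatter that matches (the formatter names here are illustrative):

```python
# Minimal sketch of server-side content negotiation: pick a formatter
# based on the Accept header.
def negotiate(accept_header, formatters, default="json-formatter"):
    """Return the formatter for the first acceptable media type."""
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",")]
    for media_type in accepted:
        if media_type in formatters:
            return formatters[media_type]
    return default

formatters = {
    "application/json": "json-formatter",
    "application/xml": "xml-formatter",
    "text/ical": "ical-formatter",  # the custom mapping from this post
}

print(negotiate("text/iCal", formatters))              # ical-formatter
print(negotiate("application/xml;q=0.9", formatters))  # xml-formatter
```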

Let’s say we want to support an additional format that can write appointments in iCal format. To create a custom formatter you inherit from BufferedMediaTypeFormatter or MediaTypeFormatter. For this example I chose the first one.

The code is pretty straightforward and represents a very simple implementation of the iCal standard. The only WebAPI-specific code can be found in the constructor. There we add the mapping for the headers we want the formatter to be invoked for. After we add the formatter to the configuration object, it will be invoked automatically whenever a client says it accepts “text/iCal”.
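The gist of such a formatter is producing the iCalendar text itself. A minimal sketch (the field names on the appointment records are my own illustration, not the post's actual model):

```python
from datetime import datetime

def to_ical(appointments):
    """Render appointments as a very simple iCalendar document."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//demo//EN"]
    for appt in appointments:
        lines += [
            "BEGIN:VEVENT",
            "SUMMARY:" + appt["subject"],
            "DTSTART:" + appt["start"].strftime("%Y%m%dT%H%M%SZ"),
            "DTEND:" + appt["end"].strftime("%Y%m%dT%H%M%SZ"),
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)  # iCalendar lines end with CRLF

sample = [{"subject": "Demo",
           "start": datetime(2013, 1, 1, 9, 0),
           "end": datetime(2013, 1, 1, 10, 0)}]
print(to_ical(sample).splitlines()[0])  # BEGIN:VCALENDAR
```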

The current setup works fine in Fiddler, or when you use a custom client (JavaScript or HttpClient). But for a true end-to-end sample I want to use Outlook to connect to my appointment service.

Unfortunately Outlook does not send an accept header with text/iCal when it requests resources from an internet calendar. So we need to work around this problem.

Here another extensibility point of ASP.NET Web API comes into play: MessageHandlers.

MessageHandlers allow you to plug into the request processing pipeline on its lowest level. You can inspect the request and response message and make changes. In this case we can inspect the user agent that is added to the request when Outlook contacts our service. When we find a match, we will add an additional header to the incoming request.
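The rewrite the handler performs boils down to this (a sketch; the user-agent substring to match on is an assumption, since real Outlook user agents vary per version):

```python
def add_ical_accept(headers):
    """Force the Accept header for requests that look like Outlook."""
    user_agent = headers.get("User-Agent", "")
    if "Microsoft Office" in user_agent or "Outlook" in user_agent:
        headers["Accept"] = "text/iCal"
    return headers

req = {"User-Agent": "Microsoft Office/15.0 (Microsoft Outlook 15.0)"}
print(add_ical_accept(req)["Accept"])  # text/iCal
```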

We also add this message handler to the configuration object.

We now have everything in place to add an internet calendar in Outlook and view the appointments in our WebAPI.

  1. Open Outlook
  2. Go to Calendar
  3. In the ribbon, click on “Open Calendar” and then “From Internet”
  4. Fill in the url of the AppointmentService in WebAPI i.e. http://localhost:58250/api/appointments
  5. Click Ok.

You now have one AppointmentController serving json, xml and iCal! The complete source can be downloaded here.

Update MVC4 project to MVC5 within Visual Studio

If you are using VS2012 and start a new ASP.NET MVC4 project, you will be greeted by an enormous list of packages which can be updated when clicking through to the NuGet package manager.

[screenshot: Capture01]

With the new release of Visual Studio 2013, MVC5, WebAPI2,… a lot of new binaries are ready to be used in your application. So updating the packages in Visual Studio should get you going. After clicking “yes” and “I agree” several times though, you will receive this error message:

[screenshot: broken]

If you now close the NuGet package manager and then open it again, only one package needs to be updated at the moment (ANTLRv3). So click update once more.

If you now start the application, instead of receiving a nice MVC start screen you will run into a yellow screen of death:

[screenshot: yellowscreenofdeath]

We are almost there. Navigate to the Web.config inside of the Views directory and change all references from MVC 4.0.0.0 to 5.0.0.0 and the Razor version from 2.0.0.0 to 3.0.0.0. I’ve included the changes in this gist.

You are now ready to go!

UPDATE: Ran into this MSDN article which shows you the steps I mentioned and more!

SignalR, Ninject and WebActivator sitting in a tree

On a project I’ve been working on I’ve been having some issues with combining these technologies. We already had an MVC application running using Ninject to wire everything together.

The Ninject MVC package, which you can install via NuGet, uses WebActivator to initialize the kernel. WebActivator enables you to wire up different packages without putting everything in your Global.asax. It's also more powerful than the default PreApplicationStartMethod attribute (which WebActivator actually leverages), because you can have multiple startup methods in your assembly.

In SignalR I needed my own ConnectionIdGenerator. The framework has been built to allow an IoC container to manage dependencies, so it was trivial to add my custom class. I just added one line to the RegisterServices method in the NinjectMVC3 class, which is added to the App_Start folder when you install the Ninject MVC package. By default this code is wired up by WebActivator's PreApplicationStartMethod.

private static IKernel CreateKernel()
{
    var kernel = new StandardKernel();
    // more wire up stuff
 
    kernel.Bind<IConnectionIdGenerator>().To<MyConnectionFactory>();
    GlobalHost.DependencyResolver = new SignalR.Ninject.NinjectDependencyResolver(kernel);
    return kernel;
}

Everything was looking good and it worked.

Sometimes.

Occasionally I could see that my own ConnectionIdGenerator was being used, and sometimes the default ConnectionIdGenerator that ships with SignalR was still active. I could not really find useful information on the web and moved code around from WebActivator to the Global.asax file and back again, even changing the order of the steps being executed to see if that had any effect, but nothing really worked.

I went back to the SignalR wiki and reread the page on extensibility, especially the part about changing the DependencyResolver.

It clearly states that you need to configure SignalR first and ASP.NET MVC later. A bit further down the page it states that if you want to use another DependencyResolver and you are using WebActivator, you need to wire it up in PostApplicationStart.

Clearly there were more moving pieces than I had thought.

So, working with this new insight, I removed all MVC configuration code from the Global.asax file, where it lives by default when you create a new project, and moved it to a method decorated with the PostApplicationStartMethod attribute.

[assembly: WebActivator.PostApplicationStartMethod(typeof(MyNamespace.App_Start.NinjectMVC3), "PostStart")]
 
public static void PostStart()
{
    GlobalHost.DependencyResolver = new SignalR.Ninject.NinjectDependencyResolver(bootstrapper.Kernel);
    RouteTable.Routes.MapHubs();
 
    AreaRegistration.RegisterAllAreas();
 
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
}

Running the application again clearly showed everything was in the right place now. SignalR was now properly configured to use my own Ninject kernel. The kernel itself is still being constructed and configured in the PreApplicationStart phase, SignalR and MVC are configured in the PostApplicationStart phase.

Getting started with node.js tutorials and books

I found these great resources to get me going with node.

Totally new to it? Take a look at this video from the creator of node himself.

If you are unafraid and want a somewhat more lengthy tutorial, in which you actually create something following best practices, you should check out nodebeginner.org. I was able to compare a lot of the instructions with my own way of working in .NET.

Hands on node.js is an ebook with accompanying exercises, great for a more traditional way of learning. The first part of the book is free and the full version will only set you back a few dollars.

Finally there’s another Manning book under way, you can already grab the MEAP.

There’s plenty more out there but I found these to be the most helpful for me.

Legacy Code Retreat Leuven

On Saturday I spent the day near Leuven at the first publicly announced legacy code retreat. We were given an existing code base which seemed rather small, I guess around 400 lines of code, but after digging a bit deeper my colleagues and I found out that it was quite a monster.

Several techniques were covered: subclassing to override behaviour, introducing a golden master, moving behaviour into collaborating classes and making “pure” functions.

The golden master technique is probably best suited when you encounter a black box and need to make a change. The code base had a bunch of Console.WriteLine calls, so if you redirect the output and run the application a number of times with different kinds of input (10,000 runs was suggested as a good number), you end up with a set of test files. With those in place you can then make the change and compare the new output with your golden master. Automate this and you have a reasonable safety net. It all depends on having some kind of instrumentation in place so you can harvest this kind of information.
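The idea can be sketched in a few lines. Here a seeded random function stands in for the legacy black box; the point is capturing its output once and diffing later runs against it:

```python
import random

# Golden-master sketch: record the output of a legacy routine for many
# seeded runs, then compare any later run against the stored "master".
def legacy(seed):
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(5)]  # stand-in for the black box

golden_master = {seed: legacy(seed) for seed in range(100)}

def verify(fn, master):
    """Replay every recorded seed and check the output still matches."""
    return all(fn(seed) == expected for seed, expected in master.items())

print(verify(legacy, golden_master))  # True
```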

Creating subclasses to override, or rather get around, methods in order to test which paths the program flow follows is something I had done before. You even do that with “new” code when you're stubbing/mocking, but we were not allowed to use any framework, so we had to hand-roll the test doubles. This eventually leads to a situation where you're testing the subclass more than the system under test.
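A hand-rolled version of the trick looks like this (the class names are illustrative, not the retreat's actual code base):

```python
class LegacyGame:
    def roll(self):
        # In the real legacy code this would read from the console,
        # which makes the class hard to exercise in a test.
        raise IOError("reads from the console in the real code")

    def play(self):
        return "you rolled %d" % self.roll()

class TestableGame(LegacyGame):
    def roll(self):
        # Override the awkward method in a subclass to force a path.
        return 4

print(TestableGame().play())  # you rolled 4
```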

The answer to that resulting code base is to abstract the code you have to other new classes which are injected through the constructor. Typical dependency injection and inversion.

The last technique was the most eye-opening to me. It was suggested to write “pure functions”, meaning no method was allowed to directly change the object's state. Somehow this made me spot code duplication and underlying algorithms a lot faster throughout the code base. I probably need to look into functional programming a bit, as it was quite interesting.
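The contrast between the two styles, in a deliberately tiny example of my own:

```python
# Impure: the method mutates object state, so the result of any call
# depends on what happened before it.
class ImpureScore:
    def __init__(self):
        self.total = 0

    def add(self, points):
        self.total += points  # hidden state change

# Pure: same inputs always give the same output, nothing is mutated.
def add_points(total, points):
    return total + points

print(add_points(add_points(0, 3), 4))  # 7
```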

For the last two iterations we were allowed to keep the code we were working on, in sharp contrast to a normal code retreat, which gave us more of a sense of accomplishment by improving the code base.

I really liked my first code retreat; it gave me a chance to work with people from different parts of the industry (embedded programming, to name one) and from different parts of Europe.

Looking forward to the next one.