Algorithm Computation in the Cloud: Microsoft Azure Web Roles

Worker and Web Roles are two of the great features that Microsoft Azure has to offer. Both are designed to do computation for you in the cloud. A worker role is essentially a virtual machine that serves as a back-end application server in the cloud. A web role is also a virtual machine hosted in the cloud, but it runs Internet Information Services (IIS) and acts as a front-end server. So you use a web role when you want an interface exposed to the client – for example an ASP.NET web site – that makes interaction from the outside possible.

In the following tutorial, I will focus on web roles. I will show you how to create a web role and host it in an Azure cloud service. The web role’s task will be to launch a console application – a path-finding algorithm that I wrote in C++ – that reads input from the client, computes, and then returns the results. The reason for choosing a web role for this is to make communication possible through REST, so the client can use a web site or simply do a GET request to fire up the console application in the cloud and get back the results as JSON.

Creating the Azure Web Role

The first step is to download the Microsoft Azure SDK. Since I’m a .NET developer, I downloaded the Visual Studio 2013 version. The SDK includes an Azure emulator that we will be using locally. The web role has to be hosted somewhere, and we will use a cloud service for that. So, start up Visual Studio and create a Windows Azure Cloud Service project:

cloud

Choose ASP.NET Web Role:

cloud2

Choosing the front-end stack is up to you; I chose an MVC Web API without authentication:

cloud3

Go to the WebRole class in the WebRole project and set a breakpoint inside the OnStart() method. Now build and run your cloud service; you will see that the Azure Compute Emulator starts up and the breakpoint is hit:

cloud4

Press F5 to continue. Your web role web site is now up and running. Right-click the Azure Compute Emulator icon in the task bar and choose “Show Compute Emulator UI”. A new window shows up; click on your web role in the left column:

cloud5

This shows you the status of the web role. A web role inherits the abstract class RoleEntryPoint, which has three virtual methods: OnStart(), Run() and OnStop(). These methods are called as their names suggest and can be overridden. We already overrode the OnStart() method, as we saw earlier. The next step is to launch the console application as a process from the web role.

Starting a Console Application Process from the Azure Web Role

Delete the default override of OnStart() in the WebRole class. Instead, we want a custom method on the web role that we can call on demand. Create an interface IWebRole that looks like this:

public interface IWebRole
{
   string RunInternalProcess(string stringParams);
}

Make WebRole implement this interface. RunInternalProcess(string stringParams) is a custom method that we will call from the client. The method launches a console application process and returns the results back to the client as JSON. We want the process to do its work asynchronously. Here is part of the implementation:

public string RunInternalProcess(string stringParams)
{
    // The executable is deployed with the web role; MapPath resolves its location.
    var path = HttpContext.Current.Server.MapPath("..\\TSP_Genetic_Algorithm.exe");
    var result = RunProcessAndGetOutputAsync(stringParams, path).Result;
    return result.Replace(" ", "\n");
}

private static async Task<string> RunProcessAndGetOutputAsync(string stringParams, string path)
{
    return await RunProcessAndGetOutput(stringParams, path);
}

private static Task<string> RunProcessAndGetOutput(string stringParams, string path)
{
    // CreateProcess (not shown here) is assumed to set UseShellExecute = false and
    // RedirectStandardOutput = true, so that the output can be read below.
    var process = CreateProcess(stringParams, path);
    process.Start();
    var result = process.StandardOutput.ReadToEnd();
    process.WaitForExit();
    var taskCompletionSource = CreateTaskCompletionSourceAndSet(result);
    return taskCompletionSource.Task;
}

As you can see, the method starts a process called TSP_Genetic_Algorithm.exe, which is included in the project. It’s important to set the “Copy to Output Directory” property of this executable to “Copy always”, so that it’s always copied to the output directory. You can do this by right-clicking the executable and choosing “Properties”:

cloud15

The next step is to make the client call up the web role through an HTTP GET request.

Calling the Azure Web Role from the Client

We need to make it possible for the client to call the web role. We will do this by creating an HttpGet ActionResult. Go to HomeController and inject the WebRole interface there:

// Constructor injection assumes an IoC container (or a custom controller
// factory) is configured to resolve IWebRole.
private readonly IWebRole _webRole;

public HomeController(IWebRole webRole)
{
    _webRole = webRole;
}

Create an HttpGet ActionResult; it can look like this:

[HttpGet]
public ActionResult Solve(string[] c)
{
    var coordinates = c.Aggregate(string.Empty, (current, t) => current + (t + " "));
    var results = _webRole.RunInternalProcess(coordinates);
    return Json(new { results }, JsonRequestBehavior.AllowGet);
}

This GET request takes in an array of string coordinates and calls the web role with them; the web role in turn launches the process with that input and finally returns the results back to the client as JSON. Beautiful, isn’t it? Build and run your cloud service, then type this in the browser address field:

http://127.0.0.1:81/Home/Solve?c=200,300&c=400,300&c=400,400&c=400,200&c=500,200&c=200,400&c=400,300

And here are the results:

cloud11
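
If you’d rather call the endpoint from script than from the browser address field, here is a minimal TypeScript sketch of the same GET request. It assumes the local emulator address and route used above, and the { results: ... } JSON shape returned by the Solve action:

// Build the query string from the coordinate pairs and call the Solve action.
const coordinates: string[] = ["200,300", "400,300", "400,400", "400,200"];
const query = coordinates.map(c => "c=" + encodeURIComponent(c)).join("&");

const request = new XMLHttpRequest();
request.open("GET", "http://127.0.0.1:81/Home/Solve?" + query);
request.onload = () => {
    // The action returns JSON shaped like { results: "..." }.
    const response = JSON.parse(request.responseText);
    console.log(response.results);
};
request.send();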

Publishing the Azure Cloud Service

The final step is to publish our Azure cloud service so it goes online. This process is pretty straightforward, but it assumes that you have a Microsoft Azure subscription and a target profile. Once you have registered a subscription, right-click the cloud service project and choose “Publish…”:

cloud7

Go through the wizard to create a target profile:

cloud8

Enable Remote Desktop if you want remote access to your virtual machine in Azure; this is pretty handy. Once done, click Publish in the last dialog:

cloud9

The publishing process will start in Visual Studio:

cloud10

And then complete:

cloud12

That’s it! Your Azure web role web site is now online, and you can do the GET request over the web:

cloud14

You can go to the Azure Portal to view statistics and maintain your web role there:

cloud13

Hope you enjoyed this tutorial!

I will speak at the Norwegian Developers Conference in Oslo

It’s finally official: I will be speaking at the Norwegian Developers Conference in Oslo this year! If you didn’t know already, NDC is a famous worldwide conference that takes place in June and lasts for five whole days (including two days of pre-conference workshops). This means a great deal to me, since it is the first time that I will do a talk there. NDC is my favorite conference and I always wished this day would come, so I am very excited! Best of all, I have been granted two full hours at the conference. I will be doing a two-part workshop about test-driven development in AngularJS and TypeScript. So make sure to book your conference tickets today and I will see you in June! :)

ReSharper Downfalls and Anti-Patterns

As part of a dedicated refactoring team in a big customer project, I get to use JetBrains ReSharper heavily on a daily basis. If you didn’t know already, ReSharper is the best refactoring tool made for Visual Studio. Not only does it greatly increase programming efficiency, it also changes the way you think as a programmer, especially when you have just started out in your programming career. I’ve been using ReSharper professionally for at least three years, and today I can’t even imagine how I survived as a (.NET) programmer without it.

Ironically, it wasn’t until I joined a professional refactoring team that I discovered the downfalls and anti-patterns of using ReSharper. Without doubt, the tool is continuously being developed by a brilliant team, so the downfalls that I see today may very well be addressed in the next versions of the tool. There are also some features that ReSharper simply lacks today, which I hope will be added in the future. I will explain what I am talking about in the next sections.

Helper Methods are Extracted as Static by Default

If you press Ctrl+R, M you’ll get the option to extract a method. A local helper method is by default extracted as static:

Capture

You get to choose whether to make it static, but it is static by default. You’ll find this extremely annoying, as chances are you will need to use instance members in the method later. So, the fix? Remove the static keyword manually (!). Note, in the animation below, the lack of IntelliSense as I start typing the _webRole object:

animation

This may not be a big deal in a small project, but it quickly becomes cumbersome in a big legacy application.

Lack of Listing Multiple Object Properties Feature

Very often you will create an object and want to set its properties. There is no way to list all the properties of the created object, leaving you to manually type and set each property:

properties

Wouldn’t it be nice to have ReSharper list all the properties for us?

Lack of “Find Usages in Multiple Solutions” Feature

One of the most fundamental things when you are refactoring a huge application with many solutions is the ability to locate the usages of a component across those solutions. If you right-click on a class or method and choose “Find Usages Advanced…”:

find_usages

ReSharper will show you a dialog where you can choose where to search:

find_usages_2

What would be nice is an option “Solutions…” where you can specify solutions or simply choose all of them. This would make our lives easier and save us from opening each solution manually and performing the search per solution.

Refactoring Overkill

This one falls under anti-patterns. Every now and then, ReSharper will suggest refactorings that actually hurt the readability of your code. Inverted ifs and complicated LINQ expressions fall under this category. How many times has ReSharper asked you to invert your if statement when it looked perfectly readable? Or how many refactorings has ReSharper asked you to make on one single LINQ expression? Chances are, many times. There is no need to invert your if if you and your code reviewer agree that it looks fine. Although you can probably turn off this type of suggestion, ReSharper should be intelligent enough not to ask you in the first place.

Complicated LINQ expressions, where do I start? Once you write a LINQ expression that does something, ReSharper will often suggest writing it differently. As the expression gets more complicated, so do ReSharper’s suggestions. It will ask you to keep refactoring, sometimes up to three times (!) on a single LINQ expression. The end result is a hideous piece of code that takes time to understand. So again, ReSharper should be intelligent enough to take readability into account here.

Different Key Binding Schemes?!

Something that has annoyed me recently is that the key binding schemes of ReSharper seem to vary from one environment to another. Ever since I installed ReSharper 8.1 (which I upgraded to 8.2 today, by the way), the scheme on my development machine has changed, and I have to memorize different key bindings. This is a hassle when I pair program with another developer on his machine, as he has the older scheme. Ultimately I memorized both schemes in order to work efficiently. You would think that simply applying the Visual Studio scheme would make things consistent on all developer machines:

Capture

In closing, ReSharper is a wonderful refactoring tool that I will be using for years and years to come. That doesn’t by any means make it perfect, and there are still features that I see lacking. Seeing how the ReSharper team is doing an incredible job developing the product (ReSharper 8.2 was just released), I have no doubt that they’ll look into this and make our favorite tool even better. :)

My Genetic Algorithm Solving the TSP Problem

I wrote a genetic algorithm (GA) back in 2011 to solve the traveling salesman problem (TSP). I wrote the algorithm in C++ and made a recording of it that I later uploaded to YouTube. It was during the time that I was obsessed with algorithms. While I had a major interest in GAs, my primary focus was on writing a new algorithm to efficiently solve the boolean satisfiability problem (SAT) – which I eventually did, getting my results published in the Scientific Research Journal. Oh, those were fun times! So the recording of the GA has been on YouTube for close to three years, and has over 10,000 views today. I thought I’d share it with you:

The video shows my algorithm solving the TSP with 100 cities. I used an elitist approach, with 60% order crossover and 10% mutation, and the algorithm ran for 10,000 generations. What you see is the cities being plotted on a circle and the different paths being computed by the algorithm. Instead of showing you all the candidate paths, I probably should have shown you only the best paths being picked (elitism). But if I had done that, you wouldn’t have been able to see so many pretty lines being drawn.
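
For the curious, here is a compact TypeScript sketch of the same idea – an elitist GA with order crossover and swap mutation, with the cities placed evenly on a circle. This is only an illustration of the approach, not the original C++ implementation; the population size and the helper names are made up:

type Tour = number[];

const CITIES = 100, POPULATION = 200, GENERATIONS = 10000;
const CROSSOVER_RATE = 0.6, MUTATION_RATE = 0.1;

// Cities placed evenly on a unit circle, as in the recording.
const cities = Array.from({ length: CITIES }, (_, i) => ({
    x: Math.cos((2 * Math.PI * i) / CITIES),
    y: Math.sin((2 * Math.PI * i) / CITIES)
}));

const tourLength = (tour: Tour): number =>
    tour.reduce((sum, city, i) => {
        const next = cities[tour[(i + 1) % tour.length]];
        const dx = cities[city].x - next.x, dy = cities[city].y - next.y;
        return sum + Math.sqrt(dx * dx + dy * dy);
    }, 0);

const shuffled = (tour: Tour): Tour => {
    const copy = tour.slice();
    for (let i = copy.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [copy[i], copy[j]] = [copy[j], copy[i]];
    }
    return copy;
};

// Order crossover: keep a random slice of the first parent and fill the
// remaining positions with the missing cities in the second parent's order.
const orderCrossover = (a: Tour, b: Tour): Tour => {
    const start = Math.floor(Math.random() * a.length);
    const end = start + Math.floor(Math.random() * (a.length - start));
    const slice = a.slice(start, end);
    const kept = new Set(slice);
    const rest = b.filter(city => !kept.has(city));
    return [...rest.slice(0, start), ...slice, ...rest.slice(start)];
};

// Swap mutation: exchange two random cities in the tour.
const mutated = (tour: Tour): Tour => {
    const copy = tour.slice();
    const i = Math.floor(Math.random() * copy.length);
    const j = Math.floor(Math.random() * copy.length);
    [copy[i], copy[j]] = [copy[j], copy[i]];
    return copy;
};

let population: Tour[] = Array.from({ length: POPULATION }, () =>
    shuffled(Array.from({ length: CITIES }, (_, i) => i)));

for (let generation = 0; generation < GENERATIONS; generation++) {
    // Elitism: the better half survives unchanged and breeds the other half.
    const elite = population
        .map(tour => ({ tour, length: tourLength(tour) }))
        .sort((a, b) => a.length - b.length)
        .slice(0, POPULATION / 2)
        .map(scored => scored.tour);
    const offspring = elite.map(parent => {
        const partner = elite[Math.floor(Math.random() * elite.length)];
        const child = Math.random() < CROSSOVER_RATE
            ? orderCrossover(parent, partner)
            : parent.slice();
        return Math.random() < MUTATION_RATE ? mutated(child) : child;
    });
    population = [...elite, ...offspring];
}

console.log('Best tour length:', Math.min(...population.map(tourLength)));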

If I were to re-write the algorithm today, I would have made it better in many ways. Primarily in terms of readability of the code, but also structurally. Oh, and I wouldn’t use a crappy laptop with 2 GB of RAM :)

TypeScript Release Candidate

A release candidate version of TypeScript was finally released last week. New features have been added alongside many bug fixes. As of the Spring Update CTP2 for Visual Studio 2013, TypeScript is fully supported, making it a first-class citizen. That means after getting the Spring Update CTP2, you won’t need to download TypeScript as a plugin for Visual Studio 2013. Notable additions are a simpler, more flexible generic type system and declaration merging of interfaces. Improvements to the lib.d.ts typings library have also been made, adding typings for touch and WebGL development and making your life easier when working with HTML5.

Generic Type System

The type system has been enhanced, making it more flexible. It is now possible to use any more freely, which makes type checking more lenient when working with inheritance. Consider the class Person that extends Human:

class Human {
    eyeColor: string;
}

class Person extends Human {
    eyeColor: any;
}

Giving the property eyeColor the type any in the subclass was not possible prior to the RC version. Now that it is, you are not forced to repeat the specific type from the superclass, which may be inaccessible at times. The same thing is possible with generics. Consider the interface IPerson with a generic function and the implementing class Person:

interface IPerson<Value> {
    then<T>(f: (v: Value) => Person<T>): Person<T>;
}

class Person<Value> implements IPerson<Value> {
    then<T>(f: (v: Value) => Person<T>): Person<T>;
    then<T>(f: (v: Value) => any): Person<T> {        // This is also possible
        return undefined;
    }    
}

Declaration Merging Precedence

In addition, declaration merging of interfaces is now possible. This gives you a predictable precedence order when you work with external libraries. Say you have two declarations of an external interface IExternal, each with some functions:

interface IExternal {
    a1(): void;
    a2(): number;
}

interface IExternal {
    b1(): string;
    b2(): Date;
}

When these two interfaces are merged, they become this interface:

interface IExternal {
    b1(): string;
    b2(): Date;
    a1(): void;
    a2(): number;    
}

Notice the precedence order of the declared functions: the members of the interface declared last appear first. Declaration merging is absolutely recommended, especially when you don’t want to change how the external libraries are initially referenced in your web application.
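
As a small usage sketch (the values here are hypothetical), any object typed as the merged IExternal now has to provide all four members:

const external: IExternal = {
    a1: () => { },
    a2: () => 42,
    b1: () => "hello",
    b2: () => new Date()
};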

These are awesome features from the TypeScript team! I am happy that the community influenced the team to add these features, and I also would like to congratulate the team on reaching version 1.0. Well done! :)

I will speak at The Gathering 2014 event in Hamar

I will be speaking at The Gathering 2014 in Hamar (Norway) this April. If you didn’t know already, this is the biggest gaming event in Norway. It is a huge computer LAN event that started back in 1996 and has since taken place every year during the Easter holidays in Norway. The event takes place at the awesome Viking Ship Olympic Arena, where over five thousand national and international participants meet to have fun. The majority of the participants are gamers, but there are also artists, designers and developers. Participants get to play games and attend creative seminars and workshops over the span of five days.

I will be holding a talk about “Clean Code” in the creative lounge section, and I will also hold various HTML5/JavaScript workshops. I’m excited about this and hope to see you there too! :)

Capture

Directives in AngularJS

So I thought I’d write a detailed post about directives in AngularJS. If you’ve just started out with AngularJS, chances are you are curious to learn about this powerful feature. Directives are great if you want to create your own custom HTML elements. In fact, directives give us a glimpse of the future: HTML5 gave us new elements, and the goal is to add more over time so you won’t need to rely on the class attribute so often. Learning to use directives may seem a bit complex for the AngularJS beginner, but trust me, it’s really easy once you get the hang of it. Using directives isn’t really mandatory when you work with AngularJS, which unfortunately leads to many developers not using this feature. I hope to change that with my post, and make you use it if you don’t already.

A directive can be written in four different ways: as an element, attribute, comment or class. The best practice is to write it as an element or attribute. Here are the four ways a directive can be written in HTML:

<my-directive></my-directive>
<div my-directive></div>
<!-- directive: my-directive -->
<div class="my-directive"></div>

The directive is loaded into Angular at the startup of a web application, usually in a bootstrap.js file where you load your other Angular components, such as controllers and factories. The directive is loaded like this (line 2):

var myApp = angular.module('myApp', []);
myApp.directive('myDirective', myDirective);
function myDirective() {
    return {};
}

Note that in the HTML we called the directive my-directive; Angular normalizes this to camel case, so we register the directive as myDirective. As you can see, this directive doesn’t do much – it simply returns an empty directive definition object. A directive definition has up to eleven different attributes that can be used: priority, template, templateUrl, replace, transclude, restrict, scope, controller, require, link and compile. By no means will you be using all of those attributes at once; usually you will use three or four of them, depending on your web application. I’m going to give a short description of each attribute.

Priority defines the order in which directives are compiled when you have multiple directives on a single DOM element. Directives with a higher priority number get compiled first.

Template is used to add your HTML template code that you want to be shown, and templateUrl is a URL to your HTML template.

Replace determines whether your HTML template replaces the directive element itself (true) or is inserted as the element’s content (false). It is set to false by default.

Transclude lets the content you place inside the directive element be carried over into the directive’s template. The transcluded content stays bound to the parent scope rather than the directive’s own (possibly isolated) scope, so your directive can still have its own private state. transclude can be set to true (transclude the element’s content) or to 'element' (transclude the whole element).
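
As a quick sketch (the directive and CSS class names are made up), a directive that wraps whatever content is placed inside it could look like this:

myApp.directive('myPanel', myPanel);
function myPanel() {
    return {
        restrict: 'E',
        transclude: true,
        // ng-transclude marks where the original content is inserted;
        // that content stays bound to the parent scope.
        template: '<div class="panel" ng-transclude></div>'
    };
}

Used as <my-panel>{{message}}</my-panel>, the message expression is still evaluated against the parent controller’s scope.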

Restrict is used to define the type of your directive. It can be set to E (element), A (attribute), M (comment) or C (class). The default value is A.

By using scope, you can define an isolated scope for your directive’s content. If scope is not used, the directive uses the parent scope by default. scope can be set to true (a new child scope is created for the directive), false (the parent scope is used), or an object literal that defines an isolate scope, in which you declare the bindings you want – for example two-way bindings with '=' or plain string attributes with '@'.

Controller is used if you want your directive to have its own controller. You simply set it with the name of the controller that you’ve loaded in Angular.

Use require if your directive depends on another directive; the required directive’s controller is then injected as the fourth argument of the link function (which we will get to shortly). require is set to the name of the directive whose controller you need; if there are several dependencies, you use an array of names.
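
As a quick sketch (both directive names here are made up), the required directive exposes a controller, and the requiring directive receives that controller as the fourth argument of its link function:

myApp.directive('myTabs', myTabs);
function myTabs() {
    return {
        restrict: 'E',
        controller: function () {
            // A small API that child directives can call.
            this.addPane = function (title) { console.log('added ' + title); };
        }
    };
}

myApp.directive('myPane', myPane);
function myPane() {
    return {
        restrict: 'E',
        require: '^myTabs', // the ^ prefix searches parent elements for the controller
        link: function (scope, element, attributes, tabsController) {
            tabsController.addPane('First pane');
        }
    };
}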

Link is where you define the directive’s logic; it is responsible for registering DOM listeners and updating the DOM. Link takes in several arguments. This attribute is unnecessary, however, if you are using the controller attribute and keep the directive logic inside a controller.

Compile is used to transform the template DOM before it is linked; ngRepeat is an example of a directive that needs to transform its template. Since most directives do not transform their template DOM, this attribute is not often used. Compile takes in several arguments.
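
As a sketch (the directive name is made up), compile receives the template element and its attributes, can modify them once before the template is cloned, and may return a link function:

myApp.directive('myStamp', myStamp);
function myStamp() {
    return {
        restrict: 'A',
        compile: function (templateElement, templateAttributes) {
            // Runs once per template, before it is cloned and linked.
            templateElement.addClass('stamped');
            // Optionally return the link function for per-instance work.
            return function (scope, element, attributes) { };
        }
    };
}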

For more information on each of the directive attributes, I suggest checking the AngularJS documentation.

Time for examples of usage. Let’s assume we have written the following directive in the HTML:

<my-directive></my-directive>

Now, in the JavaScript this directive is implemented like this (line 3):

var myApp = angular.module('myApp', []);
myApp.directive('myDirective', myDirective);
function myDirective() {
    return {
        restrict: 'EA',
        template: '<div>Hello World!</div>'
    };
}

I set restrict to 'EA' on purpose, to show you that you can make the directive implementation work on both an element and an attribute if you wish. This is a simple directive that returns a template div with the text “Hello World!”. Now, we can extract this template to a file:

function myDirective() {
    return {
        restrict: 'EA',
        templateUrl: 'helloWorld.html'
    };
}

And the content of helloWorld.html:

<div>Hello World!</div>

Let’s put our directive inside a controller:

<div ng-controller="myController"> 
<my-directive></my-directive>
</div>

The controller is simple; it looks like this:

myApp.controller('myController', myController);
function myController($scope) {
    $scope.message = "Hello World!";
};

Now, we change the directive:

function myDirective() {
    return {
        restrict: 'EA',
        template: '{{message}}'
    };
}

Can you guess what happens? The directive uses the parent controller and scope, and returns the message defined in the controller – “Hello World!”. Now let’s create a controller specifically for our directive, and call it myDirectiveController:

myApp.controller('myDirectiveController', myDirectiveController);
function myDirectiveController($scope) {
    $scope.directiveMessage = "Hello World!";
};

And change the directive to use this controller and a new scope of its own:

function myDirective() {
    return {
        restrict: 'EA',
        scope: true,
        controller: 'myDirectiveController',
        template: '{{directiveMessage}}'
    };
}

Again, the directive will return the message “Hello World!”, this time from its own controller. If you want to pass parameters from the parent controller, that’s possible. Let’s assume that the parent controller myController has a messages collection, that we loop through in the HTML:

<div ng-controller="myController"> 
<div ng-repeat="message in messages">
<my-directive message="message"></my-directive>
</div>
</div>

We loop through the collection and pass each message to the directive. The directive then looks like this:

function myDirective() {
    return {
        restrict: 'EA',
        scope: {
            message: '='
        },
        controller: 'myDirectiveController',
        template: '{{message}}'
    };
}

The message is passed to the directive by setting the message property of the scope object to '='. You can also use a different attribute name: if you write my-message="message" in the HTML instead of message="message", you map it with message: '=myMessage' in the scope. Using '=?' makes a binding optional. To pass a plain string value instead, use, for example, a type attribute in the HTML and declare type: '@' in the scope; '@' binds the attribute’s text value rather than an expression.
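
Here is a small sketch of those bindings side by side (the extra note and type attributes are just for illustration):

function myDirective() {
    return {
        restrict: 'EA',
        scope: {
            message: '=',   // two-way binding to a parent scope expression
            note: '=?',     // same binding, but the attribute may be omitted
            type: '@'       // the plain string value of the attribute
        },
        template: '<div class="{{type}}">{{message}} {{note}}</div>'
    };
}

In the HTML this could be used as, for example, <my-directive message="message" type="alert"></my-directive>.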

Another thing you can do, of course, is use the link attribute and define your logic there. Let’s do that:

function myDirective() {
    return {
        restrict: 'EA',
        link: function (scope, element, attributes) {
            element.addClass('alert');
        }
    };
}

Link takes in the current scope, the directive element itself and the attributes on the element. The element is wrapped in jqLite, a lighter version of jQuery, so you can do jQuery-style operations inside this function; in this case we are adding the CSS class alert to the directive element. You can do a lot of other things with directives; for more I suggest checking out the AngularJS documentation.

I’ve explained most of what you need to know about directives. When you write directives, I recommend that you create controllers specifically for the directives. Put all logic in the controllers, and do not use the link function. Avoid jQuery operations and stick to pure Angular. That way you keep your code clean and nicely isolated. I hope that this gives you a good introduction to directives, and helps you get started using them. As you can see, it’s not that hard, is it? :)