Trying my hand at the MEAN stack.


I came up with the idea for a web application a while back but have been procrastinating about actually building it. I was initially going to build it with my bread-and-butter languages and frameworks (namely C#, ASP.Net, Entity Framework etc.) but I had been hearing about this MEAN stack and wanted to broaden my horizons in the web dev world.

I have used Linux and OS X before (I'm currently mostly on a MacBook), I am used to the Bash shell, and I have also used MySQL, so I wasn't coming from a completely Microsoft-focused point of view.

I have also been using JavaScript for years, and lately I have been using more and more JavaScript frameworks in my commercial projects, like jQuery, Bootstrap and Knockout.

All this stood me in good stead for jumping into the NodeJS and Angular world and getting started creating this web app on the MEAN stack.

As a predominantly Microsoft-focused developer in my commercial career, moving over to creating an application on the MEAN stack was a slow process, and I was definitely less productive in getting the basics of my application up and running at first. In spite of this I was pleasantly surprised at how easy the MEAN stack was to get to grips with and how abundant the information out on the web is to help you get up and running.

I wanted to write down my rough experience as someone coming from the .Net world into the MEAN stack: which technologies I struggled with and which I needed to read up on.

What is this MEAN stack?

MEAN stands for MongoDB, Express, Angular and NodeJS. MongoDB is the document database where all my data is persisted. Express is a NodeJS web framework for serving up my REST API and my web front end. Angular is the popular client-side framework I am building the application out with, and NodeJS is the server-side JavaScript runtime it all runs on.

What should I get to know?

Bash shell

The main thing to brush up on is using the Bash shell to get your work done. Whether that is starting your NodeJS server, connecting to MongoDB or committing your changes to GitHub, make sure you're comfortable at the command line.
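For reference, these are the kinds of commands I found myself running constantly (file names and paths here are just examples):

```shell
# Start the Node server
node server.js

# Start the local MongoDB daemon, then open a shell against it
mongod --dbpath ./data &
mongo

# Commit and push changes up to GitHub
git add .
git commit -m "Add login route"
git push origin master
```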

Package management

Just like NuGet in .Net there are package managers in the MEAN world. The main one to know is npm: every Node module you use will be installed with it.

Bower is used for installing client-side frameworks into your project. I used it to install Angular, jQuery and Bootstrap.

What I did was split my project in two: my back-end services and my front-end client. I used npm for my back end's libraries and Bower for my front end's. This seems to work well.
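The split looks roughly like this on the command line (the package lists are from my project; yours will differ):

```shell
# Back end: server-side modules via npm
cd api
npm install express mongoose passport bcrypt --save

# Front end: client libraries via Bower
cd ../client
bower install angular jquery bootstrap --save
```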

You will then need to learn the obvious technologies: MongoDB, Express, Angular and Node. There will also be external libraries to pick up, like bcrypt, which you will need for hashing passwords and other sensitive data; a must for web applications.

Getting started with MongoDB

The first thing I needed to do was download MongoDB and install it on my local machine. This step probably took the longest as I needed to learn how to set up and connect to the database from my MacBook. I then needed to learn to use the Mongoose library to persist my models in the database. Coming from an RDBMS background I also had to get my head around the document storage approach to persistence. I already had some knowledge around this so it didn't take long.
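To illustrate the mind-shift, here is a hypothetical document next to how you would read from it; in an RDBMS the nested addresses would be a separate table joined in by a foreign key:

```javascript
// In an RDBMS this would be a users table joined to an addresses table.
// In MongoDB the related rows simply nest inside one document.
const userDocument = {
  _id: "54b8f0e2c1a4",            // stand-in for a real ObjectId
  email: "jane@example.com",
  roles: ["admin", "editor"],     // arrays need no join table
  addresses: [                    // one-to-many lives inline
    { type: "home", city: "London" },
    { type: "work", city: "Leeds" }
  ]
};

// Reading nested data is just property access - no JOIN required.
function citiesFor(user) {
  return user.addresses.map(a => a.city);
}
```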

Build an API with NodeJS

My architecture approach was going to be service based. I built an API using Express and NodeJS that serves up all my data to the client, and the client then just makes HTTP calls to the API from my Angular controllers.

I needed to learn about routing in Express and how to load and save documents in MongoDB with Mongoose.
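At its core, Express routing maps an HTTP method and path to a handler function. Here is a minimal sketch of that idea in plain JavaScript (no Express, and the handlers are made up; in the real app each one would load and save documents with Mongoose):

```javascript
// A tiny routing table in the spirit of Express's app.get()/app.post().
const routes = {};

function register(method, path, handler) {
  routes[method + " " + path] = handler;
}

function dispatch(method, path) {
  const handler = routes[method + " " + path];
  return handler ? handler() : { status: 404, body: "Not found" };
}

// Hypothetical handlers; the real ones would query MongoDB via Mongoose.
register("GET", "/api/items", () => ({ status: 200, body: [{ name: "widget" }] }));
register("POST", "/api/items", () => ({ status: 201, body: "Created" }));
```

With Express this becomes `app.get('/api/items', handler)` and the framework does the dispatching for you.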

Authentication with Passport

My next biggest problem was how to do authentication. I wanted to provide a simple form-based login option and also an option to log in with a Facebook account. I used the Passport library for NodeJS, which supports most authentication methods around today.

I followed this great article series by Scott Smith http://scottksmith.com/blog/2014/05/05/beer-locker-building-a-restful-api-with-node-crud/ that helped me get started with my CRUD and authentication pieces.

I chose to just use Basic Auth with HTTPS to start with and I integrated Facebook authentication on top to allow a user to create an account with their Facebook login details.

When the user hits the client's Facebook login they are first authenticated with Facebook, and my application gets back a user id and other details about the user. My front end then calls an endpoint in my API that checks whether the user has previously registered; if they have, it redirects them to their dashboard. If not, they are redirected to a register page where they can enter a password, and an account is then created for them in the database.

This all has to be over HTTPS, as with Basic Auth the credentials are passed with each request. The passwords are also hashed in the database using the bcrypt library.

Front end development with Angular

I did a great Pluralsight course on Angular fundamentals so I was up to speed on the basics of Angular. In my client I created controllers that make HTTP calls to my REST API, and my dashboard page is essentially a single page application (SPA) with various components making calls to the API. This is a basic service architecture, and it means I can later add OAuth authentication to my service layer and allow other clients to connect to it.

Continuous deployment, GitHub and Docker

I wanted to get my basic application lifecycle management in place so I could version and deploy my source code into dev and production environments.

I chose to host my application on Azure as I was pretty comfortable with it and I had an account. Azure has great support for GitHub and Docker so I decided to use those to deploy my application.

First I wanted to set up my dev environment with CI. I created two GitHub repositories, one for my API project and one for my client front end. I decided to keep them as separate modules that can be deployed individually, which avoids tight coupling in my application's CI strategy.

I decided to use Azure WebSites to host my dev environment as the CI support for GitHub is excellent. I just needed to point my websites to the GitHub repository and each time I pushed a commit it would deploy the changes and even start my Node servers automatically.

For production I decided to use Docker containers. I created a Linux VM with Docker support in Azure, then created a Bash build script that automatically SSHs into the VM, does a git pull to get the latest code, then builds and runs the Docker container, which stands up the Node servers.
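The script boils down to a handful of commands; the host, user and image names below are placeholders for the real ones:

```shell
#!/bin/bash
# Deploy: SSH to the Docker VM, pull the latest code, rebuild and restart.
ssh deploy@myapp-vm.cloudapp.net <<'EOF'
cd ~/myapp
git pull origin master
docker build -t myapp .
docker rm -f myapp-running 2>/dev/null
docker run -d --name myapp-running -p 80:3000 myapp
EOF
```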

This seems to be a good approach so far. As I build out the application I will post updates on anything I find that may be of interest to anyone moving to the MEAN stack.

 

Azure AD JavaScript authentication tutorial series (Part 1)

I have put together a video tutorial series that walks step by step through a full end-to-end solution showing how to authenticate against an Azure AD Web API application from JavaScript code using the adal.js library.

Nowadays I find myself designing my applications to use a web service layer to serve up data from data stores. Providing REST API endpoints on top of your data gives a lot of benefits when it comes to integrating your data across different client applications. JavaScript runs pretty much everywhere now and is the go-to language for building client-side apps, so accessing your REST endpoints from JavaScript is a really appealing solution, which is why the JavaScript-plus-REST combination has become so common.

The JavaScript ecosystem today is massive, with libraries to help you build pretty comprehensive applications. When I am building SharePoint Add-Ins I tend to expose the data using Web API and stick to using JavaScript in the application to render the data and build out the UI. Most of the time there is no need for server-side code inside my client application.

Inevitably you will want to secure your web service layer at some point, and if you are building on the Azure platform then Azure AD is a great OAuth solution.

It is an especially good solution if you are building SharePoint Add-Ins in Office 365. When you are logged into your Office 365 SharePoint site you have already authenticated against your Azure AD, and as long as you deploy your applications to the same Azure AD instance you get automatically authenticated when accessing your Web API layer.

When building these apps I found that there were plenty of examples of authenticating from C# code, but the examples were lacking if I just wanted to use JavaScript to authenticate against my Web API.

The adal.js library comes in very handy here, but I found all the examples were based around using it with Angular. Although Angular is a great framework for building client-side apps, most of the time I found it was overkill for what I wanted to do. So this set of videos shows how you might design and build a client-side application, in this case a SharePoint Add-In, that uses Azure AD authentication, Web API, JavaScript and TypeScript.

The general architecture looks like this.

[Architecture diagram]

 

The first video is up and it shows how to create a SQL Azure database, create a Web API layer and how to model and scaffold the data using Entity Framework.

 

CORS Support in WebAPI and XDomainRequest with IE

The WebAPI framework in the latest release of .Net 4.5 is a great way to easily create HTTP-based web services from scratch. It gives you a lot of great features out of the box, allowing you to return JSON or XML data back to a client application using JavaScript or the server-side HttpClient class.

If you would like to know more about WebAPI then head over to the official site to get up to speed with how it works: http://www.asp.net/web-api

One thing that isn't included with the 4.5 release is the ability to make cross-domain calls into your WebAPI services; there is no support for CORS out of the box with the current release. However, CORS support is coming with the next release of ASP.Net and can be seen if you browse the ASP.Net source over at CodePlex http://aspnetwebstack.codeplex.com/.

If you are reading this and wondering what CORS stands for, it's Cross-Origin Resource Sharing, a new specification from the W3C that aims to standardise the mechanism for cross-domain requests by using standard HTTP headers in the request and response. You can read more about the specification at the W3C site http://www.w3.org/TR/cors/.
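For a simple GET, the exchange looks like this: the browser adds the Origin header and the server grants access with a matching response header (domains below are made up):

```http
GET /api/products HTTP/1.1
Host: api.example.com
Origin: http://app.example.org

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://app.example.org
Content-Type: application/json
```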

You can also read about CORS support for ASP.Net and WebAPI at this site http://aspnetwebstack.codeplex.com/wikipage?title=CORS%20support%20for%20ASP.NET%20Web%20API.

The problem is that if you want CORS support right now then you have two choices:

  1. Download the full ASP.Net web stack dev branch, compile it and use the 5.0.0.0 assemblies in your application.
  2. Write your own HTTP handler to add support.

The problem with option one is that most people don't want to build their application on an unstable dev release of the framework. CORS support comes in the form of two new assemblies, System.Web.Cors.dll and System.Web.Http.Cors.dll. The latter is the assembly you would use for WebAPI and the former is what you would use for ASP.Net. The problem is that these assemblies are both compiled against version 5.0.0.0 of System.Web.dll and System.Web.Http.dll, so you can't just download the code for these assemblies and compile them against version 4.0.0.0 of the relevant assemblies, or even grab the 5.0.0.0 version of the dependencies and compile against them. You will get either a compile-time or a runtime security exception stating that there is a version mismatch between dependencies.

So the bottom line is that unless you are willing to build your application on a dev release, you are stuck with creating your own HTTP handler to deal with this. The approach below was taken from this blog post by Carlos Figueira http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d and shows how you would implement a handler to deal with CORS support.

Creating an HTTP Handler for CORS Support

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Configuration;

public class CorsDelegatingHandler : DelegatingHandler
{
    private const string Origin = "Origin";
    private const string AccessControlRequestMethod = "Access-Control-Request-Method";
    private const string AccessControlRequestHeaders = "Access-Control-Request-Headers";
    private const string AccessControlAllowOrigin = "Access-Control-Allow-Origin";
    private const string AccessControlAllowMethods = "Access-Control-Allow-Methods";
    private const string AccessControlAllowHeaders = "Access-Control-Allow-Headers";

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        // CORS support is switched on by adding this app setting to web.config.
        string allowedDomains = WebConfigurationManager.AppSettings["CORSAllowCaller"];

        if (string.IsNullOrEmpty(allowedDomains))
        {
            // CORS is not configured - process the request as normal.
            return base.SendAsync(request, cancellationToken);
        }

        // The browser adds an Origin header to any cross domain request.
        bool isCorsRequest = request.Headers.Contains(Origin);
        bool isPreflightRequest = request.Method == HttpMethod.Options;

        if (!isCorsRequest)
        {
            return base.SendAsync(request, cancellationToken);
        }

        if (isPreflightRequest)
        {
            // Answer the OPTIONS preflight ourselves, echoing back the
            // method and headers the browser asked permission for.
            return Task.Factory.StartNew(() =>
            {
                HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);
                response.Headers.Add(AccessControlAllowOrigin, request.Headers.GetValues(Origin).First());

                // These two headers are optional on a preflight, so use
                // TryGetValues rather than GetValues (which throws when
                // the header is missing).
                IEnumerable<string> requestedMethods;
                if (request.Headers.TryGetValues(AccessControlRequestMethod, out requestedMethods))
                {
                    response.Headers.Add(AccessControlAllowMethods, requestedMethods.First());
                }

                IEnumerable<string> requestedHeaders;
                if (request.Headers.TryGetValues(AccessControlRequestHeaders, out requestedHeaders))
                {
                    response.Headers.Add(AccessControlAllowHeaders, string.Join(", ", requestedHeaders));
                }

                return response;
            }, cancellationToken);
        }

        // An actual cross domain request - let it run through the pipeline,
        // then stamp the allow-origin header onto the response.
        return base.SendAsync(request, cancellationToken).ContinueWith(t =>
        {
            HttpResponseMessage response = t.Result;
            response.Headers.Add(AccessControlAllowOrigin, request.Headers.GetValues(Origin).First());
            return response;
        });
    }
}

The code above essentially looks for an Origin header in the request, which indicates that the caller is coming from another domain; the requesting browser adds this header when the originating domain is different to the requested domain. The handler then adds the relevant CORS headers to the response, which tell the browser that this call is allowed by the server. Most browsers send the Origin header when using the XMLHttpRequest object, so jQuery AJAX requests work fine; however, IE does not add this header when using XMLHttpRequest, so you need to use the XDomainRequest object instead, which I will show in the next section.
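One thing not shown above is wiring the handler in: it needs adding to the WebAPI configuration at application start. Assuming the standard WebApiConfig template (class and route names depend on your project), something like:

```csharp
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Run the CORS handler for every request before it reaches a controller.
        config.MessageHandlers.Add(new CorsDelegatingHandler());

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```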

Calling WebAPI from JavaScript

As I said above, in IE you will have to use the XDomainRequest object instead of XMLHttpRequest when making a cross-domain request using CORS, otherwise the Origin header does not get added to the request.

You can see below how to achieve this:


if ($.browser.msie && window.XDomainRequest) {

    // IE: use Microsoft's XDomainRequest so the Origin header is sent.
    // (Note: $.browser was removed in jQuery 1.9, so this check assumes
    // an older version of jQuery.)
    var xdr = new XDomainRequest();
    xdr.open("get", this.url);
    xdr.onload = function () {
        // XDomainRequest only hands back a string, so parse it ourselves.
        bindData(JSON.parse(xdr.responseText), bindingNodeName);
    };
    xdr.send();

} else {

    // Everyone else: a normal jQuery AJAX request works fine.
    $.ajax({
        type: "GET",
        url: this.url,
        dataType: "json",
        success: function (data) {
            bindData(data, bindingNodeName);
        }
    });
}

We essentially just need to check whether the browser is IE and supports the XDomainRequest object, and if it does, use that object instead. The only thing to note is that you will get back a string of data rather than a parsed object when using the XDomainRequest object, and you will need to get your data out of the string before you use it. In the case of JSON it's as easy as using the JSON.parse helper method to get at your raw JSON object.