Thursday, March 5, 2015

Unit and E2E Testing with AngularJs

Testing is a crucial part of software development: it is how we verify that a project behaves as intended. There are two types of tests that we should write in a software project.
  • Unit testing     
  • End to end testing

Unit Testing
Unit testing is about testing individual units of code. A unit test selects a few lines of code written for a specific task, mocks the other dependencies of that code, then runs it and checks that it produces the expected output for some given inputs.

End to end testing (integration testing)
End to end testing looks at the software from the user's perspective. It integrates all the modules and runs user workflows as tests, checking the output just as an end user would see it. It is tedious to test an application manually for all the use cases, in all the browsers and on all the devices, so we automate that process with e2e testing.

When we work with AngularJs, we have some interesting tools for the above purposes. Most of them are based on Node and the command line. Let's have a look at those tools and libraries.

Node and npm
Node comes with the npm package manager. npm is used to download tools and libraries that run on Node.

Grunt is a JavaScript-based task runner. When we integrate Grunt into a project, we create two files in the root directory of the project:
package.json: This file is used by npm to store metadata for projects published as npm modules. The Grunt plugins a project needs are listed under devDependencies in this file.
Gruntfile.js: This file is used to configure or define tasks and load Grunt plugins.

Bower is a client-side package manager. It finds and downloads packages from Git repositories, so it needs Git installed on your machine as well. It keeps track of packages under the "dependencies" section of the bower.json file.

Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks and does not require a DOM. It has a simple, very descriptive syntax for defining the behavior of selected code units, with keywords like describe, it, beforeEach, etc.
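To illustrate that syntax, here is a tiny spec. The stand-in describe/it/expect functions are included only so the sketch runs on its own (Karma loads the real ones from the Jasmine framework), and the add function is a made-up unit under test:

```javascript
// Stand-ins so this sketch runs without Jasmine installed; the real
// describe/it/expect come from the Jasmine framework.
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
  return {
    toEqual: function (expected) {
      if (actual !== expected) throw new Error(actual + ' !== ' + expected);
    }
  };
}

// A made-up unit under test.
function add(a, b) { return a + b; }

// Jasmine-style behavior description.
describe('add', function () {
  it('sums two numbers', function () {
    expect(add(2, 3)).toEqual(5);
  });
});
```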

Karma is a JavaScript command line tool that can be used to spawn a web server which loads your application's source code and executes your tests. You can configure Karma to run against a number of browsers, which is useful for being confident that your application works on all the browsers you need to support. Karma is executed on the command line and displays the results of your tests once they have run in the browser. Karma is a NodeJS application and should be installed through npm. Karma uses Jasmine to define test cases.

Protractor is a Node.js program, and runs end-to-end tests that are also written in JavaScript and run with node. Protractor uses WebDriver to control browsers and simulate user actions. Protractor uses Jasmine for its test syntax.

WebDriver is a tool for automating the testing of web applications, and in particular for verifying that they work as expected. It aims to provide a friendly API that's easy to explore and understand, which will help make your tests easier to read and maintain.

Setting up the environment
(It is better to work with the ConEmu console on Windows; an alpha version is available.)

  • Install Node.js
  • Create a new folder and cd to new folder
  • Type npm init. It will ask parameters for package.json file and generate it for you.
  • Install bower using, npm install bower --save-dev
  • Install jQuery and angular for your project
    • bower install jquery --save
    • bower install angular --save
  • create an app folder and add your angular application into it
  • install the http-server package globally using, npm install -g http-server
  • run your application using, http-server -a localhost -p 8000 -c-1
If you want to do these things properly, add the following npm commands to the scripts object of the package.json file:
            "postinstall": "bower install",
            "prestart": "npm install",
            "start": "http-server -a localhost -p 8000 -c-1"

Using the browser's console, you can find the errors of your angular app.
After installing all the packages and starting a working application, it is high time to start writing tests. Here are the steps.

Unit Testing
Here we write test cases with Jasmine and run them with Karma test runner.

  • Install karma and its plugins into your project using npm
    • npm install karma --save-dev
    • npm install karma-jasmine --save-dev
    • npm install -g karma-cli // (-g installs globally)
    • npm install karma-chrome-launcher --save-dev

  • add a unit test file to the module and write test cases using Jasmine syntax
  • start karma and run unit tests
    • karma start
    • karma run

End to End Testing
This is exactly what you do in test automation with Selenium or Coded UI. Here we use Selenium WebDriver on Node, and we run the tests using the Protractor test runner. Again, the tests are written with Jasmine.

  • Install protractor into your project using, npm install -g protractor
  • Write the test case that you want, using basic Jasmine syntax. But you also have to use some global variables created by Protractor, like browser, element, by, etc.

Full documentation is available on the Protractor site.


describe('angularjs homepage todo list', function() {
  it('should add a todo', function() {
    browser.get('https://angularjs.org');

    element(by.model('todoText')).sendKeys('write a protractor test');
    element(by.css('[value="add"]')).click();

    var todoList = element.all(by.repeater('todo in todos'));
    expect(todoList.get(2).getText()).toEqual('write a protractor test');
  });
});

  • Create a Protractor configuration file (there is no command for this; you have to do it manually)
Protractor uses Selenium WebDriver to control browsers. Protractor ships with a helper tool called webdriver-manager. You can use it to download the Selenium binaries and start a Selenium server instance, using these commands.

    • webdriver-manager update
    • webdriver-manager start
  • Run your test cases with, protractor e2eTesting/protractor.conf.js

This is the end of this brief guide to unit testing and e2e testing with AngularJs. I hope you have got the basic idea, and I encourage you to go through all the reference materials and build a sound knowledge of these test frameworks. All the best!

Monday, April 16, 2012

Creating a Structure for a web page using Razor Syntax

Generally a web page contains several different parts: for example, header content, a navigation pane, common page layout items, common page markup (CSS, scripts), etc. These parts should be managed properly: they should not all be loaded on every request, and each should behave separately. To handle this, we should maintain a proper structure for the page.

In ASP.NET web forms, you can do this using master pages, content pages, user controls, placeholders and the like. But with ASP.NET MVC 3 and the Razor View Engine, you can achieve more complex structures and behaviors than with web forms. The Razor View Engine is the power of ASP.NET MVC 3, and its API lies on top of the ASP.NET API. This API gives us all the facilities to maintain the structure of the page.


 I’ll explain step by step, how to make a structure for your ASP.NET web page.

01). In the default MVC 3 project, you can see a folder "~/Views/Shared". This folder is the place for content that is shared among all controllers.

 It contains a file called “_Layout.cshtml”.

(Note: As a convention, we put '_' before the file name if it is only ever used from some other view.)

This file is the master page for all the views. You can add all the unaltered content of your site, like the search bar, navigation pane, and script and CSS links (common page markup), into this file.

How is this set as the master page?

Here you can see a code line in this file as “@RenderBody()”. This line allocates a space to render a View into this page. Then you should set the layout on top of your View as,

                     @{
                         Layout = "~/Views/Shared/_Layout.cshtml";
                     }

But this is not required in every view file. You can do it once in the _ViewStart.cshtml file, as you can see in the default MVC 3 project, because the view engine runs the _ViewStart file for every request before it executes any other view file. Only when you use a different layout do you need to set it in a particular view file.

02). There is another Razor method called "RenderSection()". It gives more control over rendering: you can specify that a particular part of the view file should be rendered in a particular place.

Passing false as the second argument, as in @RenderSection("footer", false), says that the section is not "required". Otherwise, a view that does not define the section causes an error saying the section has not been defined.

03). If you want to render the content of another view file into the file you are working in, you can use the "RenderPartial(<file name>)" method.
It doesn't need any more work: you just type @RenderPartial(<file name>), and it renders the whole content into the file.

I think that, using these methods, you can create any complex structure for your web site.

Tuesday, April 10, 2012

Working with JSON

JSON, JavaScript Object Notation, is a lightweight data-interchange format. Simply put, it's a text format used to represent objects of any programming language. When two systems communicate using JSON, they parse JSON objects into their runtime objects and send the results back again as JSON.

The official JSON site explains the structure and grammar of JSON clearly.

I found a Stack Overflow question on how to check a JSON object for a particular property. While working on it, I found that there are not many options for working with JSON objects directly. The most popular way is to convert the JSON object into a JS object and work with that. So this is the way to convert a JSON object into a JS object and check for a property in it.

//The JSON text (a string)
var JSONObject = '{ "element1": {' +
    ' "element2": { "Number": "0" },' +
    ' "element3": { "Number": "1" },' +
    ' "element4": { "Number": "2" }' +
    ' } }';

//Converting JSON into a JS object
var JSObject = eval('(' + JSONObject + ')');

//Check whether there is a property called "element3"
var IsExistElement3 = JSObject.element1.hasOwnProperty("element3");
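Note that eval will execute any JavaScript in the string, so on modern browsers JSON.parse is the safer way to do the same conversion. A sketch, with the property names mirroring the example above:

```javascript
// Safer alternative: JSON.parse instead of eval.
var jsonText = '{ "element1": {' +
    ' "element2": { "Number": "0" },' +
    ' "element3": { "Number": "1" },' +
    ' "element4": { "Number": "2" }' +
    ' } }';

var obj = JSON.parse(jsonText);

// hasOwnProperty checks for a property on the object itself.
var hasElement3 = obj.element1.hasOwnProperty('element3'); // true
var hasElement5 = obj.element1.hasOwnProperty('element5'); // false
```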

Thursday, April 5, 2012

Introduction to ASP.NET MVC3

If you are an ASP.NET web forms developer, you will not want to shift to MVC3 quickly, because it does not provide "drag and drop". But once you get familiar with MVC3 and the Razor view engine, you will be amazed by its customizability. You can use the web forms view engine in MVC3 as well, but Razor provides an easier syntax than web forms.

If you are confused about what exactly this "view engine" means: simply, it is the part of the .NET Framework that compiles the server-side code that we write in our .aspx or .cshtml files.

You can shift from HTML code to C# code and vice versa just using an "@" sign, rather than the open and close tags of web forms, which is much easier.

If you are using VS 2010, it has a lot of support for scaffolding (generating) views and controllers for basic Add, Edit, Delete, Details and List operations.

This is the MSDN article for MVC3.

and the Video Tutorials for “MVC3 – Razor”

This set of tutorials explains clearly how to make a simple MVC3 site with the support of SQL Server and Entity Framework.

As mentioned in the tutorial, adding controllers and views is supported with dedicated windows in VS 2010. They are very helpful for generating the basic code that is used in any project.

Add Controller Window

Add View Window

But if you want to do advanced development, you should definitely dig into this generated code and learn to modify it. I hope to explain that in my next blog post.

Thursday, January 19, 2012

Data Migrations for Continuous Integration

Why we need Data Migrations for CI

In a typical software project, several developers work on a shared source code base, QA people use another version for testing, and there may be yet another version on the production server. So can we make sure that all of these people get the latest version that comes out of the developer's environment?
Yes: if we push the changes everywhere (build server, test server and production server) as soon as someone commits, the problem is solved. But this should happen automatically, since it is tedious for a human being to do the same thing again and again. To do this, we need to configure the build server to,
     ·         Pull the latest version from the source controller
     ·         Build the project.
     ·         Make deployable packages with relevant configuration files
     ·         Copy files into relevant server. 

      The diagram below depicts the workflow.

Here you can see a comprehensive explanation of doing this using the Jenkins build server.
Purpose of this post is to explain the importance and necessity of Data Migrations in this process.

Problems of doing this

When we use the Entity Framework Code First approach, we get a typical error: "the model backing the context has changed since the database was created". This occurs when the model in our code and the model in the DB are different. To prevent this error, people were using implementations of the "IDatabaseInitializer" interface. With this interface, we have to drop and recreate either the entire DB or a set of tables, which results in a complete data loss. We can preserve only a set of master data by executing some SQL scripts.


While a lot of people were concerned about this matter, Microsoft has come up with an intuitive solution: Migrations. They have now released its beta version. This package can apply most database changes while preserving your data.
It has two work flows.

      01)  Code-based migration – generates the code and lets the developer make the required changes. (sample)
      02)  Automatic migration – generates and applies the changes automatically. (sample)

  Migrations creates a table in the database named "__MigrationHistory". It contains all the migrations applied to the database.

Tips and Tricks for Data Migrations

In the samples given in the MSDN blog above, it explains how to apply migrations using the Package Manager Console (PowerShell). But that is not enough for a real project: we need to execute migration commands from our code. So I wrote to the owner of that post asking for the methods that are invoked by those PowerShell commands. Brice Lambson from Microsoft sent me the following map between the PowerShell commands and the Migrations API.

·         Update-Database = DbMigrator.Update
·         Update-Database -Script = MigratorScriptingDecorator.ScriptUpdate
·         Add-Migration = MigrationScaffolder.Scaffold
·         Get-Migrations = DbMigrator.GetDatabaseMigrations

Using these classes, I could then build many interesting features. Here, simply, is how we can apply automatic migrations using C#.

DbMigrationsConfiguration configuration = new DbMigrationsConfiguration
{
    MigrationsAssembly = typeof(ProductContext).Assembly,
    ContextType = typeof(ProductContext),
    AutomaticMigrationsEnabled = true,
};

DbMigrator dbMigrator = new DbMigrator(configuration);
dbMigrator.Update();

By examining the above classes further, you will be able to work with Migrations nicely.