Friday, June 28, 2013

baseapp: Other Grunt Goodies

baseapp provides all the boilerplate to get your JavaScript web application started off right - this is Part 5.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp provides some other grunt goodies as well beyond client-side unit tests, server-side unit tests, and integration tests! Let's take a look...

jshint

What JavaScript project would be complete without jslint/jshint? Here is the grunt configuration:

    jshint: {
        all: ['Gruntfile.js', 'public/javascripts/**/*.js', 'routes/**/*.js', 'spec/**/*.js', 'app.js']
        , options: {
            globals: {
                jQuery: true
            }
            , 'laxcomma': true
            , 'multistr': true
        }
    }

I like putting commas at the beginning of expressions and have occasionally been known to use multi-line strings - change these options to suit yourself. Also I don't want jshint to complain about the 'jQuery' global variable. Finally I want to run jshint on my Gruntfile, all of my client-side JavaScript, server-side JavaScript, test files, and app.js. To use it, simply run:

% grunt jshint

And jshint will complain loudly if it finds something it does not like. Note this task is the first task run as part of 'grunt test'.
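For a sense of what those two options allow, here is a tiny illustrative snippet (not from baseapp - the variable names are made up) that passes cleanly with 'laxcomma' and 'multistr' enabled:

    // 'laxcomma' permits comma-first style; 'multistr' permits backslash-continued strings
    var config = {
          host: '127.0.0.1'
        , port: 3000
    };
    var banner = 'Welcome to \
    baseapp';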

Template Compilation

I like to pre-compile my dustjs-linkedin templates - who doesn't? This makes all of my templates available to be rendered by the client/browser as appropriate. To the grunt configuration!

    dustjs: {
        compile: {
            files: {
                "public/javascripts/templates.js": ["views/**/*.dust"]
            }
        }
    }

Not much here, this task will simply compile all the dust templates found in the 'views' tree and put them into 'templates.js', suitable for framing or loading into your HTML as you see fit:

    <script src="/vendor/dust-core-1.2.3.min.js"></script>
    <script src="/javascripts/templates.js"></script>

Note that of course you also need to load up 'dust-core' to get the dust magic that actually fills out and renders a template from templates.js. Here is a snippet from login.js that does this:

    dust.render('index', { user: data }, function(err, out){
        var newDoc = document.open("text/html", "replace");
        newDoc.write(out);
        newDoc.close();
    });

This is slightly different from a typical app as I am replacing the entire page once a user has successfully logged in - typically you just replace a piece of the page. Here I tell dust I want to render the 'index' template (compiled from 'index.dust') and am passing in some data to the template (the username). Dust asynchronously returns me the rendered text, which is HTML, and then I merrily replace the entire current document with it.
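For contrast, here is a minimal sketch of the more typical pattern - rendering a template into just one piece of the page. The 'userList' template and '#content' element are made-up names for illustration:

    // render one compiled template into a single element instead of replacing the whole page
    dust.render('userList', { users: data }, function(err, out) {
        if (err) { return console.error(err); }
        document.getElementById('content').innerHTML = out;
    });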

Here is the magic in index.dust that handles the logged-in vs. logged-out rendering:

{?user}
    <div>Howdy {user.username}</div>
    <button type="button" class="btn btn-large btn-primary" id="logout">Logout</button>
{:else} 
    {>login type="login"/}
    <button type="button" id="lB" class="btn btn-large btn-primary" data-toggle="modal" data-target="#loginForm">Login</button>
    {>login type="register"/}
    <button type="button" id="rB" class="btn btn-large btn-primary" data-toggle="modal" data-target="#registerForm">Register</button>
{/user}

This simply says if the 'user' property is defined show them their name and the logout button, otherwise show the login and register buttons.

Note further that I include the 'login' template (compiled from login.dust), passing it a 'type' parameter - in the vernacular this is called a 'partial'. This one template creates both the 'login' and 'register' modals: since they are pretty much exactly the same, it is templated out - that is what templates are for! The id names and some of the text differ slightly, but the modals are 95% the same, hence the single template.

So running:

% grunt dustjs

Will compile all templates and create the 'templates.js' file in the right place. This task is run as part of 'grunt test'.

I heartily suggest you become good friends with your favorite templating engine and use it extensively!

Static Code Analysis

Beyond the style and syntax checker that is jshint, baseapp also includes plato for static code analysis. Plato outputs pretty HTML pages for application-level and file-level code statistics. Especially nice is that plato tracks the history of how our files change over time. Here is the config:

    plato: {
        dashr: {
            options: {
                jshint: false
            }
            , files: {
                'public/plato': ['public/javascripts/**/*.js', 'routes/**/*.js', 'spec/**/*.js', 'app.js']
            }
        }
    }

This tells plato NOT to use jshint (we have already done that ourselves) and where all of our application files are - including our test files. Plato's output is dumped to 'public/plato', so load up the file 'public/plato/index.html' to see static code analysis in all of its glory. From there you can drill down to specific files if you see any red flags. Remember to keep it simple, people!

This task is run as part of 'grunt test'.

Total Coverage

This task aggregates coverage information from all sources: client-side unit tests, server-side unit tests, and webdriver/integration tests. If you (or your boss) want the 'total total' coverage number from all tests, this is it! To the configuration, Batman:

    total_coverage: {
        options: {
            coverageDir: './build/reports'
            , outputDir: 'public/coverage/total'
        }
    }

Unbeknownst to you, we have been careful to put all 'coverage.json' files (generated by istanbul) under the single root 'options.coverageDir' for this very reason - to aggregate them all. This task recursively looks through that directory for files named 'coverage*.json' and mashes them all together into a single mongo report, which is put in the 'options.outputDir' directory. Pointing your browser there and loading up 'lcov-report/index.html' will give you the full monty.

Istanbul can also output coverage information using the 'cobertura' format (vs. the 'lcov' format we have been using). This is useful for CI (and other) tools like Jenkins that understand that format. This task also outputs the 'cobertura' format for the total coverage, which is likewise dumped into the 'options.outputDir' directory.
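For the curious, here is a rough sketch of how that aggregation could be done with istanbul's programmatic API - this is not the actual task implementation in the Gruntfile, and it assumes the 'glob' module is available for finding the files:

    var fs = require('fs')
        , glob = require('glob')
        , istanbul = require('istanbul')
        , collector = new istanbul.Collector()
    ;

    // find every coverage*.json under the coverage root and merge them into one collector
    glob.sync('./build/reports/**/coverage*.json').forEach(function(file) {
        collector.add(JSON.parse(fs.readFileSync(file, 'utf8')));
    });

    // write the combined lcov (HTML) and cobertura reports into the output directory
    istanbul.Report.create('lcov', { dir: 'public/coverage/total' }).writeReport(collector, true);
    istanbul.Report.create('cobertura', { dir: 'public/coverage/total' }).writeReport(collector, true);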

This task is run as part of 'grunt test'.

The Whole Enchilada

So now we can finally see and understand everything that 'grunt test' does - take a look!

grunt.registerTask('test', [
    'jshint', 
    'jasmine',
    'jasmine_node_coverage',
    'dustjs', 
    'webdriver_coverage',
    'total_coverage',
    'plato'
]); 

Run jshint, run client-side unit tests, run server-side unit tests, compile our templates, run integration tests with code coverage, total up all of the coverage, and run plato. Whew that was fun!

Thursday, June 27, 2013

baseapp: Integration Tests Using WebDriver

baseapp provides all the boilerplate to get your JavaScript web application started off right - this is Part 4.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp handles integration tests using jasmine-node running the camme/webdriverjs npm module for Selenium, with istanbul for code coverage. The grunt task to run the tests is 'grunt webdriver', or 'grunt webdriver_coverage' to run with code coverage. The only reason to run without code coverage is to debug the tests themselves, so don't get used to it.

The deal is this: you write jasmine-node style tests just like your server-side unit tests, but use the webdriverjs module to bring up a browser and interact with it. The grunt configuration options for the webdriver tests are within the 'webd' object:

    webd: {
        options: {
            tests: 'spec/webdriver/*.js'
            , junitDir: './build/reports/webdriver/'
            , coverDir: 'public/coverage/webdriver'
        }
    }

Nothing surprising here, simply a pointer to all of your webdriver tests and where you want the JUnit XML and coverage output to go. The grunt webdriver tasks will collect coverage output for BOTH the server-side AND client-side code that is touched during the tests. At the end of the entire test run all of that coverage information is aggregated together (from all webdriver tests) into a total, and that total is what is put into the 'options.coverDir' directory. Note that to aggregate coverage information from the client-side unit tests, server-side unit tests, AND the webdriver tests, use the 'grunt total_coverage' task.

WebDriver

Webdriver itself is an interesting beast, and an ugly one. The bulk of your tests should be unit tests. At most 20% of your total tests should be webdriver tests. Webdriver is fickle and you will get false negatives for a variety of reasons, some you can control and others you cannot. Prepare to be frustrated. Unfortunately it is the only tool we have to test 'real' code in 'real' browsers so we suck it up and deal with it. Running tests through phantomjs helps while developing but at some point you must run through 'real' browsers on 'real' hosts, so let's try to make this as painless as possible.

To use WebDriver you must start the Selenium daemon on the host that will run the browser(s). In a real testbed you will run the Selenium grid and farm out Selenium tests to hosts of various OSes with various browser versions, but for development we will run the Selenium server locally and have it fire off browsers on our development machine. If you are ssh'ed into your dev box don't despair! You can easily use phantomjs to run your webdriver tests with no display needed, as we shall soon see...

So first start the Selenium server and just let it run in the background forever:

% java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar &

Now you are ready to run webdriver tests!

The Tests

WebDriver tests are just jasmine-node tests that use webdriverjs to drive a browser over the WebDriver protocol, verify the DOM is in the state you expect, and use 'assert' for other general-purpose assertions. So check out loginSpec.js and I'll walk through it, since there is a lot going on - but once you get the hang of it, it's simple.

First we grab a handle to the redis database - the ONLY thing we use it for is to flush the DB after each test so we start with a clean slate. You can see that in the 'afterEach' function (also note we select DB '15' to not touch our production DB).

I will skip the 'saveCoverage' require statement for now - it is not important yet - and move on to the 'login tests' suite. For webdriver we need jasmine to NOT time out, as these tests can take an unknown amount of time (otherwise jasmine times out after 5 seconds).

To set up each test our 'beforeEach' reconnects to the Selenium process and requests a browser provided by the BROWSER environment variable (or 'phantomjs' if that is not set). Grunt will set this for us from a command line option:

% grunt webdriver --browser=firefox  # or any grunt task that will eventually call 'webdriver' like 'test'

Replace that with 'safari' or 'chrome' or 'iexplore' for other browsers. Now give Selenium the URL to connect to, from values placed in the environment by grunt. Finally we pause for half a second to give the browser time to load our page. Here begin the Selenium issues: having to wait unknown amounts of time to ensure the browser has loaded your page completely.

Our 'afterEach' function flushes our test database and gets code coverage for the test if the user requested coverage. Since the browser is refreshed after each test we need to grab coverage information after each test; at the end of all tests this incremental coverage information is aggregated to give total coverage. You do not need to worry about the internals of the 'saveCoverage' function - there be dragons. I will discuss it in detail at the end of this post for the curious.

Each test now just manipulates and queries the DOM for expected values. Each webdriver method accepts an optional second function parameter whose signature is 'function(err, val)' - within this function you can assert values and ensure 'err' is null (if you are not expecting an error, that is!). Regardless, you chain all of your actions together; each method is actually asynchronous but webdriverjs handles all of that for us behind the scenes. Finally, at the end of your test, '.call(done)' so webdriverjs knows this test is finished.

Let's follow the flow of one of the tests - the biggest, meatiest one: 'register & login user'

it("register & login user", function(done) {

The first thing to notice is the 'done' function passed into the test - we call this when the test is finished. At this point we have navigated to our page & should be ready to go...

    client
        .pause(1)

Ahh, another bit of WebDriver/Selenium weirdness - we need this pause here for the button clicks to work!

        .click('button[data-target="#registerForm"]')

Click the 'Register' button...

        .pause(500)

... and wait for the modal to come up...

        .isVisible('#registerForm', function(err, val) {
            assert(val);
        })

... and verify it is visible

Now set some form values...

        .setValue("#emailregister", 'testdummy')
        .setValue("#passwordregister", 'testdummy')
        .click('#registerSubmit')
        .pause(500)

... and submit the form & wait ...

        .isVisible('#registerForm', function(err, val) {
            // register form no longer visible
            assert(!val);
        })

... and now the register modal is gone (hopefully!)

        .click('button[data-target="#loginForm"]')

... now click the 'Login' button ...

        .pause(500)

... and wait ...

        .isVisible('#loginForm', function(err, val) {
            assert(val);

... and hopefully the login modal is now showing

        })

so set values on its form...

        .setValue("#emaillogin", 'testdummy')
        .setValue("#passwordlogin", 'testdummy')
        .click('#loginSubmit')
        .pause(200)

and submit it and wait...

        .isVisible('#loginForm', function(err, val) {
            // login form gone now
            assert(err);
        })

... modal should be gone now and hopefully the 'Logout' button is visible

        .isVisible('#logout', function(err, val) {
            // logout button now visible
            assert(val);
        })

... and the 'Login' button is gone (since we have just logged in)

        .isVisible('button[data-target="#loginForm"]', function(err, val) {
            // login button gone
            assert(err);
        })

... and the 'Register' button is gone

        .isVisible('button[data-target="#registerForm"]', function(err, val) {
            // register button gone
            assert(err);
        })

... Now click the 'Logout' button and wait

        .click('#logout')
        .pause(500)

... and hopefully the 'Logout' button is now gone

        .isVisible('#logout', function(err, val) {
            // logout button gone now
            assert(err);
        })

... and the 'Login' and 'Register' buttons are back...

        .isVisible('button[data-target="#loginForm"]', function(err, val) {
            // login button back
            assert(val);
        })
        .isVisible('button[data-target="#registerForm"]', function(err, val) {
            // register button back
            assert(val);
        })

... and tell webdriverjs this test is done

        .call(done);
    });

OK simple enough to follow that flow. All of the tests are executed and we are done.

Running The Server

To run these tests we need the Express server up and running. The grunt task 'express' handles this - here is the config from the Gruntfile:

    express: {
        server: {
            options: {
                server: path.resolve('./app.js')
                , debug: true
                , port: 3000
                , host: '127.0.0.1'
                , bases: 'public'
            }
        }
    }

The grunt-express plugin provides this functionality. We tell it where our app.js is, our static directory ('public') and a host/port number (which are placed in the environment by the grunt environment plugin using grunt templates).

Executing the '% grunt express' task will fire up our Express server - but note it ONLY lives for the duration of the grunt process itself - so once grunt quits our server does too (look at the 'express keepalive' task to have it run forever or even better just use 'npm start').

Now take a quick look at the bottom of app.js to see how this works:

// run from command line or loaded as a module (for testing)
if (require.main === module) {
    var server = http.createServer(app);
    server.listen(app.get('port'), app.get('host'), function() {
        console.log('Express server listening on http://' + app.get('host') + ':' + app.get('port'));
    });
} else {
    exports = module.exports = app;
}

This 'trick' checks if app.js was run from the command line (which we do when we run 'npm start') or loaded up as a module (which the 'grunt express' task does). If loaded as a module we export our Express app so grunt can work with it; if executed from the command line we start up the server ourselves.

Grunt Tasks

Look at our 'grunt webdriver' and 'grunt webdriver_coverage' tasks:

// webdriver tests with coverage
grunt.registerTask('webdriver_coverage', [
    'env:test'  // use test db
    , 'env:coverage' // server sends back coverage'd JS files
    , 'express'
    , 'webd:coverage'
]);

// webdriver tests without coverage
grunt.registerTask('webdriver', [
    'env:test'  // use test db
    , 'express'
    , 'webd'
]);

The only difference is how the environment is set up and how the base 'webd' task is executed. The environment is set up using grunt's env plugin - let's take a quick look:

    , env: {
        options : {
            //Shared Options Hash
        }
        , test: {
            NODE_ENV : 'test'
            , HOST: '<%= express.server.options.host %>'
            , PORT: '<%= express.server.options.port %>'
            , BROWSER: '<%= webd.options.browser %>'
        }
        , coverage: {
            COVERAGE: true
        }
    }

Both set the HOST and PORT environment variables which the webdriver tests use here:

client.init()
    .url("http://" + process.env.HOST + ':' + process.env.PORT)
    .pause(500, done);

and here to pass to getCoverage:

if (process.env.COVERAGE) {
    saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);
}

NOTE when our Express server is started by grunt these lines are NOT USED:

app.set('port', process.env.PORT || 3000);
app.set('host', process.env.HOST || '0.0.0.0');

Those are ONLY used when our Express server is started from the command line (via 'npm start') - see the 'Running The Server' section above. The 'express' grunt task will use the 'port' and 'host' configuration properties directly.

The 'coverage' target also sets COVERAGE to true, which both our Express server and webdriver tests check. Also the BROWSER environment variable is set here, to be picked up by our webdriver tests when initializing the webdriverjs client in the 'beforeEach' method:

client = webdriverjs.remote({ desiredCapabilities: { browserName: process.env.BROWSER }});

Running Our Webserver

Here is how app.js reacts to these environment variables (note these variables are only in the grunt process environment - as soon as the grunt process ends it takes its environment away with it):

isCoverageEnabled = (process.env.COVERAGE == "true")

First app.js determines if coverage is requested or not...

if (isCoverageEnabled) {
    im.hookLoader(__dirname);
}

If so app.js installs istanbul's hook loader which instruments all subsequent 'require'd modules for code coverage. That is why our server-side modules are required AFTER this statement, so if coverage IS requested those modules will be properly instrumented:

// these need to come AFTER the coverage hook loader to get coverage info for them
var routes = require('./routes')
    , user = require('./routes/user')
;

Now those two modules will have coverage information associated with them - sweet.

The next bit of magic is this:

if (isCoverageEnabled) {
    app.use(im.createClientHandler(path.join(__dirname, 'public'), { 
        matcher: function(req) { return req.url.match(/javascripts/); }
    }));
    app.use('/coverage', im.createHandler());
}

This does two things:

  1. Tells istanbul to check all files requested from the 'public' directory against the provided 'matcher' function. If that matcher function returns 'true' then those files will be dynamically instrumented before being sent back to the requesting client (the browser). In our case any requested file in the 'javascripts' directory will be dynamically instrumented with coverage information.
  2. Any request to '/coverage' will be handed off to istanbul - we will see this in use later while the webdriver tests are running. URLs under '/coverage' speak directly to istanbul which accepts and provides coverage output as we shall see.

Finally this:

if ('test' == app.get('env')) {
  app.use(express.errorHandler());
  redis.select(15);
}

This checks the value of the NODE_ENV environment variable which our Gruntfile set to 'test' - so this matches and therefore the 'test' database ('15') is selected for use so as not to interfere with any production data.

And with that our Express server is off and running, with potentially instrumented server-side modules and ready to potentially dynamically instrument our client-side JavaScript. Plus istanbul is handling all requests under the '/coverage' URL and we are using our test database - whew!

If we are running our webdriver tests without code coverage we are all ready to go, the server is running connected to a test database, our regular JavaScript files are served unchanged and our tests run and JUnit XML output is generated. Things are more interesting when code coverage information is requested.

Dealing With Code Coverage

So we have seen the server setup for running webdriver tests with code coverage - what about on the client side? Yes, client-side JavaScript is being dynamically instrumented by istanbul, but remember after each test the browser is completely refreshed, so we must grab and persist all coverage information after each test. This is where this magic comes to the fore:

saveCoverage = require('./GetCoverage')

Remember our 'afterEach' method:

afterEach(function(done) {
    if (process.env.COVERAGE) {
        saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);
    }

    // just blows out all of the session & user data
    redis.flushdb();

    client.end(done);
});

If we requested coverage we utilize this handy-dandy module to save off all current coverage information. The details of how it is done are not important, but since I know how curious you are, it works like this (you can follow along in GetCoverage.js):

  1. Execute some JavaScript in the browser via WebDriver to get the CLIENT-SIDE coverage info (which is stored in the global '__coverage__' variable)
  2. POST that stringified JSON of the '__coverage__' variable to istanbul at '/coverage/client' (remember istanbul is in charge of every URL under '/coverage'). This causes istanbul to aggregate the POST'ed client-side coverage with the server-side coverage.
  3. GET the entire coverage info from '/coverage/object' - this is the aggregated server + client-side coverage information.
  4. PERSIST (as in 'save to a file') the aggregated coverage information

Those 4 steps are done after every test.
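If you are curious what those four steps might look like in code, here is a rough, hypothetical sketch - the real GetCoverage.js differs in its details, and the exact webdriverjs 'execute' signature and callback shape are assumptions here:

    var fs = require('fs')
        , http = require('http')
    ;

    var snapshot = 0;

    exports.GetCoverage = function(client, host, port) {
        // 1. run JavaScript in the browser to pull out the client-side coverage object
        client.execute('return JSON.stringify(window.__coverage__);', [], function(err, result) {
            var clientCoverage = result && result.value; // assumed callback shape

            // 2. POST it to istanbul at /coverage/client to merge with the server-side coverage
            var post = http.request({
                host: host
                , port: port
                , path: '/coverage/client'
                , method: 'POST'
                , headers: { 'Content-Type': 'application/json' }
            }, function() {
                // 3. GET the aggregated client + server coverage object back out
                http.get({ host: host, port: port, path: '/coverage/object' }, function(res) {
                    var body = '';
                    res.on('data', function(chunk) { body += chunk; });
                    res.on('end', function() {
                        // 4. persist it so the 'webd' grunt task can aggregate all snapshots later
                        fs.writeFileSync('./build/reports/webdriver/coverage' + (snapshot++) + '.json', body);
                    });
                });
            });
            post.end(clientCoverage);
        });
    };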

After all tests are finished the 'webd' grunt task aggregates each of those individual coverage objects into one total and generates the final HTML report, which is available at 'public/coverage/webdriver/lcov-report/index.html' - and we are done.

SIMPLE - the point is this is all done for you and you do not need to know the gory details; just write webdriver tests following the ones I wrote and you are fine.

Wednesday, June 26, 2013

baseapp: Server-Side Unit Tests

baseapp provides all the boilerplate to get your JavaScript web application started off right - this is Part 3.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp handles server-side unit tests using jasmine-node and istanbul for code coverage. The grunt task to run the tests is 'grunt jasmine_node_coverage'. The tests always run with code coverage enabled.

Unlike the client-side unit tests I do not use any third party grunt plugin to execute this grunt task. Let's go to the configuration!

    jasmine_node_coverage: {
        options: {
            coverDir: 'public/coverage/server'
            , specDir: 'spec/server'
            , junitDir: './build/reports/jasmine_node/'
        }
    }

Not a lot here - 'options.coverDir' is the directory where coverage information will be dropped, 'options.specDir' is where all of the server-side unit tests live, and finally 'options.junitDir' is where the JUnit XML test output (one file per suite) is placed.

Jasmine-node works almost identically to client-side jasmine. The biggest (only?) difference is how asynchronous tests are handled. While both jasmines pair 'runs()' and 'waitsFor()' functions to handle asynchronous tests, jasmine-node also provides 'jasmine.asyncSpecDone();' matched with 'jasmine.asyncSpecWait();' to make async tests easier.
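A tiny hedged example of that second style - 'user.findById' is a made-up function here, the wait/done pairing is the point:

    it("looks up a user asynchronously", function() {
        user.findById('testdummy', function(err, record) {
            expect(err).toBeNull();
            expect(record.username).toBe('testdummy');
            jasmine.asyncSpecDone();   // tell jasmine-node the async work has finished
        });
        jasmine.asyncSpecWait();       // hold the spec open until asyncSpecDone() is called
    });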

Let's take a look at the userSpec.js suite. This file unit tests the authentication code in routes/user.js. Unlike the client-side tests these tests are not run in a browser and objects under test are pulled in via the normal 'require' method.

In this case I expect the redis database to be up; you could use jasmine spies to mock it all out, but instead I select redis database '15' so as not to interfere with production data (redis has 16 databases numbered 0-15; by default you get database 0 unless you change it, as I do in the 'select' call).

Also I have an 'afterEach' method that completely blows out the contents of the redis database after each test so I'm guaranteed each test gets a fresh database (redis.flushdb()).
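Boiled down, the database handling in the suite follows a pattern like this (a simplified sketch, not the verbatim userSpec.js):

    var redis = require('redis').createClient();

    // database 15 is the test database; production data stays in the default database 0
    redis.select(15);

    afterEach(function() {
        // blow out all session & user data so every test starts with a fresh database
        redis.flushdb();
    });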

The async tests themselves are very straightforward: just call methods and verify responses, easy peasy.

Application Architecture

A quick note about ease of testing: you will note in user.js that I was careful to separate HTTP handling from the actual functionality of the module. I specifically did not want to mix HTTP-protocol values up with the authentication routines, so that I do not have to mock or otherwise deal with HTTP when testing them. Be sure to keep separation of concerns in mind while writing your code - writing tests first will help keep your code clean.
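To make that concrete, here is a hypothetical sketch of the kind of split described above - this is not the real user.js, just an illustration of keeping HTTP out of the authentication routine:

    var redis = require('redis').createClient();

    // The authentication routine knows nothing about HTTP, so jasmine-node can test it
    // directly by passing in plain values and a callback.
    function authenticate(username, password, cb) {
        redis.hget('user:' + username, 'password', function(err, storedPassword) {
            if (err) { return cb(err); }
            cb(null, storedPassword === password); // the real code would hash & compare securely
        });
    }

    // The thin route handler only translates between HTTP and the routine above
    // (it assumes the usual Express body and session middleware are in place).
    exports.login = function(req, res) {
        authenticate(req.body.username, req.body.password, function(err, ok) {
            if (err || !ok) { return res.send(401); }
            req.session.user = req.body.username;
            res.send(200);
        });
    };

Unit tests can now exercise 'authenticate' with nothing more than a redis test database - no request mocking required.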

Code Coverage

Istanbul has great integration with jasmine-node; looking further down in the Gruntfile you can see the implementation of the jasmine_node_coverage task. Simply running jasmine-node under istanbul automatically generates code coverage, and the coverage output is written to 'public/coverage/server/lcov-report/index.html' in fancy HTML.
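Conceptually the task boils down to a command along these lines (the exact flags and paths the Gruntfile uses may differ):

% istanbul cover --dir public/coverage/server node_modules/jasmine-node/bin/jasmine-node -- spec/server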

Tuesday, June 25, 2013

baseapp: Client-Side Unit Tests

baseapp provides all the boilerplate to get your JavaScript web application started off right - this is Part 2.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp handles client-side unit tests using jasmine and istanbul for code coverage. The grunt task to run the tests is 'grunt jasmine'. They always run with code coverage enabled.

Test Configuration

Let's look at the configuration in the Gruntfile:

    jasmine : {
        test: {
            src : 'public/javascripts/**/*.js',
            options : {
                specs : 'spec/client/**/*.js'
                , keepRunner: true  // great for debugging tests
                , vendor: [ 
                      'public/vendor/jquery-2.0.2.min.js'
                    , 'public/vendor/jasmine-jquery.js' 
                    , 'public/vendor/dust-core-1.2.3.min.js' 
                    , 'vendor/bootstrap/js/bootstrap.min.js'
                ]
                , junit: {
                    path: "./build/reports/jasmine/"
                    , consolidate: true
                }
                , template: require('grunt-template-jasmine-istanbul')
                , templateOptions: {
                    coverage: 'public/coverage/client/coverage.json'
                    , report:   'public/coverage/client'
                }
            }
        }
    },

Here is what is happening - the grunt-contrib-jasmine plugin collects all of your client-side JavaScript (the 'src' property), all of your test files ('options.specs') and any other JavaScript files we need to execute our tests ('options.vendor') and creates a single HTML file '_SpecRunner.html' which is loaded into a browser (phantomjs). The grunt jasmine plugin automatically adds the jasmine client-side libraries to actually run the tests too. So when _SpecRunner.html is loaded into a browser (phantomjs) all of your tests are run.

The output of these tests (in JUnit XML format) is dumped into the options.junit.path directory - one XML file per test suite.

Finally the grunt-template-jasmine-istanbul package is leveraged to generate code coverage information, the HTML output of which is dumped into the 'options.templateOptions.report' directory. We also persist the 'coverage.json' file so it can be aggregated later with other tests (like server-side unit tests and webdriver tests).

Ok that's the setup - for now just know that any '*Spec.js' file placed into the 'spec/client' directory will get executed and any client-side JavaScript file you write in public/javascripts will get loaded to be tested.

Executing:

% grunt jasmine

Will run all of this stuff.

Test Files

How about actually writing the tests? First get basically familiar with jasmine if you are not already. Now let's take a look at logoutSpec.js - a suite called 'logout' is created with two tests.

The most interesting bits are the fixture setup and the jQuery AJAX spy.

Fixtures

Most client-side JavaScript manipulates the DOM, but we don't want to load in all of our application's HTML to test our code, so we use 'fixtures' instead. The 'setFixtures' call is provided by the jasmine-jquery JavaScript library we loaded in via the 'vendor' array in our Gruntfile.js. This lets us set some HTML for the following test that is automatically cleaned up after the test ends. So note I have to 'setFixtures' for each test. If your HTML is the same for each test then you should use a 'beforeEach' function so you DRY (Don't Repeat Yourself). You'll see the HTML is slightly different for each test so I wasn't able to do that here.
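Here is the shape of a fixture-based test as a small hypothetical sketch (the real logoutSpec.js fixtures and assertions differ):

    it("shows a logout button for a logged-in user", function() {
        // this HTML exists only for this test and is cleaned up automatically afterwards
        setFixtures('<button type="button" id="logout">Logout</button>');

        expect($('#logout')).toExist();          // jasmine-jquery matcher
        expect($('#logout')).toHaveText('Logout');
    });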

Jasmine-jquery can also load fixtures from a webserver, which is especially nice if you are using templates (which you are!), so you can test using your application's actual HTML. As these fixtures are very simple I did not do that here. Also beware of how fixture loading interacts with jQuery's AJAX object, which I will discuss in more detail next.

Spies

Jasmine is especially nice for providing spies for interacting with your code's dependencies. The logout component relies on an AJAX call to actually log the user out, but while unit testing we do not have a web server running so we need to mock out the AJAX call. We do that using the "spyOn($, 'ajax')" function call.

All subsequent calls that use jQuery's AJAX mechanism will instead get routed to this spy; the underlying AJAX call will NOT get executed (note jasmine does provide a way to call through to the actual implementation, but we do not want to do that here).

The spy allows us to verify the ajax method was called with the expected arguments. Spies can do more than just verify arguments or call through to the real underlying implementation - take a look at loginSpec.js to see more!

Search for the 'andCallFake' method - using it you can have the spied-on method execute any code you like and return whatever you like! Let's take a close look at the "should handle failure default response" test case.

First I set up the HTML fixture for this test, then I create the login component I will test along with some other canned values I will be using more than once so I DRM (Don't Repeat Myself). Now I create a spy for the $.ajax method and inform jasmine that whenever that method is called it should execute the given function instead. Note my function receives the argument list, and in this case I simply turn around and call the provided 'error' callback with some canned data to test that code path. I then set some form elements and 'submit' the form. As this is all synchronous, the error callback will get called via 'andCallFake', and finally I expect that the error message is shown properly. Note the 'toHaveText' matcher comes from jasmine-jquery, which is chock full of great jQuery-specific matchers for us to leverage. Be sure to check them out.
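Distilled down, the 'andCallFake' pattern looks something like this self-contained sketch - the real test drives the login component, but here $.ajax is called directly just to show the mechanics:

    it("routes $.ajax to the fake and exercises the error path", function() {
        var shownError = null;

        // every $.ajax call is now routed to this function; no real request happens
        spyOn($, 'ajax').andCallFake(function(options) {
            options.error({ status: 401 }, 'error', 'Unauthorized');
        });

        // the code under test would normally make this call
        $.ajax({
            url: '/login'
            , error: function(xhr, status, message) { shownError = message; }
        });

        expect($.ajax).toHaveBeenCalled();
        expect(shownError).toBe('Unauthorized');
    });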

Test Files

So all of our client-side unit test files must reside in the spec/client directory and be named with the word 'Spec' in them - follow that pattern for your own sanity! As you create more components keep adding tests; they can be at any depth in the spec/client tree, jasmine will find them!

_SpecRunner.html and Debugging Tests

Very rarely, things do not work the very first time. You'll note there is a grunt configuration property 'keepRunner'. I told you about the _SpecRunner.html file grunt-contrib-jasmine generates each time you run 'grunt jasmine' - by default that file is deleted after all the tests complete. But if there is a problem running the tests it is very hard to tell what is going on, especially as the tests are all run in phantomjs. By setting 'keepRunner' to 'true', grunt-contrib-jasmine will not delete _SpecRunner.html, so you can load it up into Chrome (or any other browser not named Chrome) and manually execute the unit tests yourself. They will run quickly! Most importantly you can open up a JavaScript console/debugger window to really tell what is going on when your tests are not running correctly.

Test Output

Test results are output in the JUnit XML format with pass/fail information and dumped to the build/reports/jasmine/ directory with one XML file per-suite. The most interesting thing about this format is its wide support.

Code Coverage

All JavaScript files under test are automatically instrumented for code coverage information. Regardless of tests passing/failing you will get code coverage information dumped to public/coverage/client/index.html - simply point your browser at that file and bask in the coverage'ness. You can drill down to each individual file and see line-by-line coverage output as provided by istanbul. Let this information help guide your next set of tests!

Monday, June 24, 2013

Intro To baseapp: Making JavaScript Best Practices Easy

baseapp provides all the boilerplate to get your JavaScript web application started off right - this is Part 1.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)
Getting started is always the hardest part.  Whether it's a web app, tests, or a blog post, staring at a blank sheet of metaphorical paper is tough.  You have a great idea of what it should do, what you want, but how to start?  You just want to write code that brings your idea to life, but how to get to that part?  You need a web server, you need a database, you need authentication, you want it to look nice, you want to use best practices, you want to use the latest technology, and you want it to be fun.  That is why you are doing all of this at some level, it has got to be fun!  That means no tedium, no slogging through boilerplate, how can you skip all of that and just jump straight to the fun part?  Writing code that will change the world, or at least a corner of it, how can you get to that part as quickly as possible?  There is so much cruft to fight through to get there, wouldn't it be nice if all of that stuff could just 'blow away...'?

My friends, I have been there: I have had lots of great ideas, started hacking, and then been quickly crushed under the weight of boilerplate and 'best practices'.  I want to 'test first'.  I want to have a solid foundation for my web app.  I want to do things the 'right way'.  But there always seems to be a horrible choice at the beginning of a new project:  either start trying to do things the 'right way' by setting up testing, automation, and foundation before any of the 'exciting' and 'interesting' work (because you cannot bake that stuff into your code later), OR dive right in and start coding the 'good' stuff and be left with a mess soon thereafter.  What a crappy set of choices!  I think we all agree we'd LIKE to start our new project off 'right' with all the test and automation infrastructure built in from the beginning, but doing that saps all of the joy out of coding, out of turning our great idea into an awesome product that the world loves.  And what's the fun in that?

So I give to you, stymied developer, the boilerplate.  I tend to use the same technology in most of my web apps: Express and Redis on the server-side, Bootstrap and LinkedIn-DustJS on the client side.  I use Jasmine and WebDriver for testing and Istanbul for code coverage.  Grunt and Travis-CI for automation and continuous integration.  I run everything through JSHint.  I like Plato's static code analysis.  And most of my apps need user registration and login/logout.  And I am sick of having to rewrite all of those pieces from scratch for each web app I dream up!  So, as of today, no more.  I have created 'baseapp', a bare application with all of that thrown in and ready to go.  The idea is you (and me!) fork this repo for every new web app you write so you can jump straight into the good parts and all the boilerplate test/automation/code coverage crap is already taken care of for us.  I'm not just talking the packages are downloaded for you.  I'm talking test code has already been written for all the base stuff.  Most people, me included, just really want to see how something is done and then we can go off and do it ourselves, 'see one, do one, teach one' kinda thing.  So I have pre-populated baseapp with client-side unit tests, server-side unit tests, and webdriver integration tests.  No need to try to re-figure out how to piece all of these things together yourself.  Just look at what I did and make more of them just like that.

Here are the technologies baseapp leverages to the hilt:

* github
* grunt
* jshint
* dustjs-linkedin
* express
* jasmine
* jasmine-node
* webdriver
* redis
* authentication
* istanbul
* plato
* karma
* phantomjs
* bootstrap
* travis-ci

This is all set up and baked into baseapp; the packages aren't just installed, they are all actually used, configured, and ready to go.  If this stack is up your alley, starting your web app with 'best practices' and 'test first' could not be easier, as all of the setup/boilerplate is already done.

Here is how to start your next web app:

1. Fork the baseapp repo
    - so you have your own copy of it - you will be hacking away adding your special sauce!
2. Clone into your local dev environment
3. Update 'package.json' with the name of your web app
    - This is your biggest decision - what to name your app!
4. % npm install
    - This will suck down all packages necessary for your awesome new dev environment
5. Install & start redis from redis.io
    - But you've already got it installed & running right?
6. To run WebDriver/Selenium tests you need to start the Selenium jar
    - % java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar
7. % grunt test

You will hopefully see tests pass!  If not you will get an informative error message as to why.

To see the app running yourself simply:

% npm start

And then in your browser go to:

http://127.0.0.1:3000

And you will see your beautiful application so far - buttons to login and register - that actually work - and that's it.

This article is the first in a series explaining exactly what is going on in 'baseapp' - I will explain everything that has already been set up for you and how to leverage it as you develop the world's next great webapp - stay tuned and have fun!