Thursday, November 21, 2013

ES6 - Really?

Everyone seems to be SO PUMPED UP about the upcoming ES6. All of these new features "everyone" has been begging for - they are finally here! JavaScript will finally be fixed (mostly)!

Let's start with the requirements for ES6.  There are only 3, so it sounds simple enough.  Who could argue with 'New features require concrete demonstrations'?  Except that wording does not capture what I think (hope) it actually means:  New features require concrete demonstrations of their usefulness.  I do not care if someone CAN implement a feature; it needs to be proved to me WHY ES6 needs this feature.  Unfortunately I have yet to see that 'why' answered for most of the new features.
On to the second requirement: 'Keep the language pleasant for casual developers'.  HOLY CRAP - as a professional JavaScript developer this is the LAST thing I want to hear.  I do not mind if the casual developer enjoys using JavaScript but please do not cater to that person at the expense of someone who must use the language daily.  This requirement will come back to bite ES6 in the ass with several of its new features, as we will see later in this post.  Is this really a requirement for ES6?  That casual developers find it pleasant?  Wow.  Is this so they don't get scared off and start using Go/Dart/TypeScript/CoffeeScript??  Does the ECMAScript crew get paid per developer or something?
Finally the third requirement: 'Preserve the “start small and iteratively prototype” nature of the language' - YES!  I'm all in on this one.
So of the 3 requirements, 1 is worded strangely, 1 scares the hell out of me, and 1 I completely agree with - not too bad I guess.

The goals for ES6 all sound reasonable except making the language easier for code generators targeting the new edition.  Sounds like a cop-out for Google & Microsoft to keep cooking up Go/Dart/TypeScript/CoffeeScript to run in browsers.  Or maybe ES6 is afraid that if this is not a goal Google will drop JavaScript support in Chrome for Dart or something and totally destroy JavaScript?  Who knows, but that goal sounds 'political' to me somehow, not directly useful to the language itself, and therefore should not be baked into the language specification.  Maybe that's just the cost of doing ECMAScript business.

Ok on to the proposed ES6 features!

Block scoped variables
This is the first casualty of 'keeping the language pleasant for casual developers'.  Function-scoped variables are 'confusing', so rather than have a professional user of the language actually LEARN THE LANGUAGE, let's try to make it easier for them. Now instead of only having to understand function scope, which of course they have to anyway, we must also throw in block scope to make matters MORE CONFUSING.  Block scope and function scope in the same language, in the same function, in the same block!  That won't be confusing at all.
Which is more prone to cause errors: only having to understand one concept, or having to understand and be on the lookout for two? But programmers already understand block scope?  Do they understand how it will play with function-scoped variables? Are people so lazy that they refuse to learn the actual language? I don't like Java because I don't want to learn about public/protected/private. Well TOUGH SHIT, you have to, because that is an integral part of the language. It's not like 'function scoped variables' are some hidden unknowable feature of JavaScript. There are TONS of books ready to explain in gory detail to you, dear professional programmer, what it is and how to use it. TONS of online material that will hold your hand and gently introduce you to the concept.
The canonical use-case for this is avoiding the creation of a new scope within a loop when creating functions that use the loop variable.  Fine, I get that; it makes that single case slightly simpler.  But I do not think this feature is worth it just for that one useful case, for which there are already several well-known patterns in JavaScript.
This is a completely unneeded 'feature' to fix something that is not broken. Merely a bone for lazy professional programmers to feel 'more at home' using JavaScript, professionally, in their job, for which they get paid (a lot of) money. Who just can't wrap their pretty little heads around a relatively simple, very integral feature of the language despite TONS of material to help them. You think beginners are confused about function scope? Wait until we also throw in block scope.  And surprise!  You are going to have to learn about function scope anyway!  Ok, need to cool down now.
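To be fair to both sides, here is the canonical loop case sketched out alongside the well-known ES5 IIFE pattern that already solves it (variable names are mine, and the 'let' behavior is as proposed in the drafts):

```javascript
// With 'var', every closure shares the single function-scoped 'i'.
var fnsVar = [];
for (var i = 0; i < 3; i++) {
    fnsVar.push(function () { return i; });
}
// fnsVar[0]() is 3, not 0 - the 'confusing' part.

// With the proposed block-scoped 'let', each iteration gets its own 'j'.
var fnsLet = [];
for (let j = 0; j < 3; j++) {
    fnsLet.push(function () { return j; });
}
// fnsLet[0]() is 0.

// The existing well-known pattern: an IIFE creates the new scope explicitly.
var fnsIife = [];
for (var k = 0; k < 3; k++) {
    fnsIife.push((function (n) { return function () { return n; }; })(k));
}
// fnsIife[0]() is 0 - no new language feature required.
```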

Destructuring
This perl feature does come in handy and of course is an answer to the age-old interview question about swapping variables without a temporary. As for array destructuring, I never liked how one must rely on the order of values in an array when objects are a better semantic fit. Not hideous for small arrays, but over 2 or 3 elements you probably want an object anyway for sanity.  We perl old-timers have fond memories of destructuring perl's 'date' method to pluck out the various pieces we needed.  Indeed most examples I've seen of this have employed 'date'-type operations.  It can be handy, it can be tricky.  It is, however, completely unnecessary.
Object destructuring makes the most sense when looping, but boy do all of the examples I've seen look confusing! This syntactic sugar may be pleasing to some eyes but to my old curmudgeonly ones it just looks more confusing, and I will probably only use it for simple cases if at all.  Again it looks more tricky than handy.  There is never anything wrong with being explicit; this feature will certainly not make the language any more pleasant for casual developers.
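For the record, here is roughly what the proposed destructuring forms look like (example values are mine, and the final syntax may of course change):

```javascript
// Swap without a temporary - the age-old interview question.
let a = 1, b = 2;
[a, b] = [b, a];
// a is now 2, b is now 1.

// Array destructuring: you depend on position, 'date'-style.
const [year, month, day] = [2013, 11, 21];

// Object destructuring: you depend on names, which reads more explicitly.
const { username, uid } = { username: 'fred', uid: 42, extra: 'ignored' };
// username is 'fred', uid is 42; 'extra' is simply not picked up.
```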

The Spread/Rest Operator
Related to array destructuring comes the 'spread' operator. This thing evaluates arrays in 'array context', as we'd say in perl ($array vs. @array). A perhaps useful but again completely unnecessary piece of syntactic sugar, perhaps only added to confuse you further.  If you have to deal with arrays then pass them around as arrays. I have no idea what 'problem' this operator is trying to solve. It does, however, add to the cognitive overload that ES6 provides. The same argument applies when using the spread operator as the rest operator.  The fix here may just be finally making the 'arguments' array-like object an actual array and being done with it.
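For reference, the proposed spread/rest behavior looks roughly like this (as I understand the drafts; the names below are mine):

```javascript
// Spread: expand an array in place - 'array context', as perl would say.
var parts = [3, 4];
var all = [1, 2].concat(parts).concat([5]);   // the explicit ES5 way
var spread = [1, 2, ...parts, 5];             // the proposed sugar
var max = Math.max(...spread);                // spread into arguments

// Rest: gather trailing arguments into a REAL array,
// unlike the array-like 'arguments' object.
function tail(first, ...rest) {
    return rest;                              // a true Array
}
var t = tail(1, 2, 3);                        // [2, 3]
```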

Proxies
Oh boy these are fun. Proxies are best used for the fabled 'cross-cutting concerns' of application-development land. Stuff like logging, debugging, security, &c are prime candidates for these constructs. Sounds reasonable and pretty modern at first blush - but we already have a pattern for this that works just fine: the Decorator. And the pattern is exceptionally easy to express in JavaScript (see here). Do we need actual syntax in the language to express this?

Method Definitions
The syntactic shortcut for defining methods in objects looks freakin weird. I know no one likes typing 'function' all the time (a job for IDEs) but is it really that bad that we need to introduce new syntax?

Object literal definition shorthand
More syntactic sugar, more confusion to save some keystrokes. Since when is being explicit so horrible?
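For the record, here are the two shorthands from the last two sections side by side with their explicit ES5 equivalents (example names are mine):

```javascript
const x = 1, y = 2;

// Property shorthand vs. being explicit:
const p1 = { x, y };          // proposed sugar
const p2 = { x: x, y: y };    // the ES5 way - two whole extra tokens!

// Method definition shorthand vs. being explicit:
const o1 = { greet() { return 'hi'; } };             // proposed sugar
const o2 = { greet: function () { return 'hi'; } };  // the ES5 way
```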

Arrow Functions
Hey these aren't so terrible! Functions without writing 'function'! Returns without writing 'return'! I'm sure that will work out great for all the newbies out there!  But it gets much worse: using an 'arrow' function also lexically scopes 'this'.  This random confluence of 'features' really scares me.  Please tell me this will not be true of ES6 final.
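Here is the confluence I mean, sketched with a hypothetical counter object - the terse body and the lexical 'this' happen at once:

```javascript
function Counter() {
    this.count = 0;
    this.incAll = function (items) {
        var self = this;                             // the classic ES5 workaround
        items.forEach(function () { self.count++; });
    };
    this.incAllArrow = function (items) {
        items.forEach(() => { this.count++; });      // arrow: 'this' is lexical
    };
}
var c = new Counter();
c.incAll([1, 2]);         // works, via the 'self' dance
c.incAllArrow([1, 2, 3]); // works, via arrow magic
// c.count is now 5
```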

Default Arguments
These are great - long live default arguments! Completely unnecessary of course but groovy.

Maps and Sets
Spectacular - love 'em - HOWEVER...

Iterators
Holy Confusion Batman! I do not see the point of these at all except to incite mass confusion. Why should this be baked into the language with specific (confusing) syntax? These should remain userland; JavaScript is NOT (thankfully) Java - and even in Java, Iterators are not a core part of the language.  Again, no way should there be dedicated syntax for this trivial-to-implement feature.

Generators
Oh boy don't get me started. Has ANYONE come up with ONE LEGITIMATE use case for generators? "They are sort of cool" (and very confusing) does not count.  This harkens back to my rewriting of ES6's first requirement: show me WHY this is necessary.  Plus, this will make casual developers sad and they will surely find generators unpleasant.

Class Syntax
This really makes me sad :'( sniff. Suddenly we are indeed turning JavaScript into Java after years of trying to tell everyone how different we are and what a bad choice of name 'JavaScript' was. 
Just like block-scoped variables this is a bone to lazy professional programmers who REFUSE to learn how the language they are using actually works. Prototypical inheritance is not that bad, I promise.  Another casualty of 'making things pleasant'.
That said there is a more fundamental problem here: object hierarchies. They are hard to construct and maintain in any language. You should be minimizing their use and I'm sad that this new syntax may 'encourage' developers to object-ize their world, which is the path to madness (haven't we all learned that yet?). It ain't 1995 anymore; interfaces, decorators, and factories are where we are in 2013.

Prototype for
Wow this one is a biggie.  I can't decide if this is the operator I love to hate or the operator I hate to love.  In certain cases this operator will be awesomely spectacular.  However it will be abused.  It will be confusing.  I cannot decide if the extra complexity and misuse of this operator outweighs the benefits it will provide for a relatively few use cases.  If you look at this construct:

  Array.create=function(proto,props) {return Object.defineProperties(proto <| [ ], props)};

And think 'Oh, this creates a subclass of Array' then you are a sick individual.  Also the operator seems backwards to me; I guess the '<' is supposed to be like an arrow but it just reads backward to me.  Maybe native Hebrew readers will have a better time of it.

Modules
Something needs to be done here; might as well bake it into the language. Go for it! I like CommonJS but we will never agree, so I beg of you, ES6 authors: pick something and stick to it and we'll all fall grumbling into line.  This is important and should be done in ES6.

API Improvements
Yes, Yes, and Yes!  We need more of this kinda stuff! Would love to see full perl regex support.

The End of JS?
The Sky Is Falling! Ok it's not. I'm sure all of the language nerds are enjoying themselves coming up with all of this stuff. And who am I to begrudge the biggest names in the JS industry, Mr. Eich and his dreams included?  It's his freaking language and if he wants to add generators then knock yourself out. But the JavaScript we have now actually works and is used by millions(?) of developers worldwide. Due to its unique place in the browser we do not really have a choice. Is JavaScript perfect the way it is? No. Does it need polishing? Absolutely. Does it need perl6-style treatment? Please no.
A stated goal of ES6 is making larger programs easier to write in JavaScript.  The new class syntax seems to be an anti-pattern for this goal for us minimal-object/SOLID types.  All of the added syntactic sugar also seems antithetical.  A module specification is really all that is required here.
Finally I'd like to add that the ES6 specification is more than twice as long as the previous ES5 specification.  That also makes me sad.

So Now What?
JavaScript can use some tweaking - instead of morphing into Java I want to double-down on JavaScript.  Here is what we should do:

Make 'arguments' a real array
Seems like the least we can do.

Get rid of the 'new' operator
Ya I said it.  Get rid of it; we don't need it, we don't want it.  It only encourages 'object hierarchy'-based thoughts, which are naughty.  Bad 'new', bad.  We've got Object.create and soon the 'prototype for' operator '<|'; 'new' can now be banished.  So while we're at it...

Dump Classes, Add Interfaces 
Classes are sad remnants of a once-powerful sect that ruled the universe; why would we want to pollute our language with them?  Prototypical inheritance + Interfaces + Objects es muy bueno.

Type Checking for 'use types'
Add a new 'type' pragma (a la 'strict') that enforces type checking.  Let's keep all rabid type checkers (looking at you Google and Microsoft) off our backs by throwing them a bone - give them an optional mode whereby types are checked.  Gotta keep them code generating transpilers happy.
I am surprised V8 and equivalent haven't gone off and defined their own pragmas already a la 'use types' to add non-standard functionality.  Maybe the specification itself should say 'you are not allowed to do this!'

Sum It Up
JavaScript is Good.  JavaScript is not Java.  Let's keep it that way!  ES6 is a mishmash of 'making the language pleasant for the casual user' (block scope & class syntax) and making functional programmers/CoffeeScript'ers happy (I'm not sure what else to call these features: iterators & generators & shorthand syntax) but unfortunately both attempts, yes, make me sad :'( sniff!

PS Thanks to Ariya Hidayat and his excellent blog, to which I've linked extensively in this post.  My complaints about new ES6 features in no way diminish the excellent work he has done explaining what these features are and how they work.  Thanks Ariya!  Now repeat those previous 3 sentences with Nicholas Zakas and his blog.

Wednesday, July 3, 2013

baseapp: Administration

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 8.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)


Logging

Bunyan provides a very flexible JSON-based logging solution. Here is its config:

var Bunyan = require('bunyan')
  , log = Bunyan.createLogger({
        name: "YOUR_APP"
        , serializers: Bunyan.stdSerializers
        , streams: [{
            type: 'rotating-file'
            , path: 'logs/http.log'
            , period: '1d'
            , count: 3
        }]
    });

All server logs are kept in the 'logs' directory, rolled over every day, and 3 previous logs are kept. Bunyan's standard serializers handle serializing the request and response objects.

The logging routine is here:

app.set('logger', log);

// bunyan logging
app.use(function(req, res, next) {
    var end = res.end;
    req._startTime = new Date();
    res.end = function(chunk, encoding){
        res.end = end;
        res.end(chunk, encoding);{ req: req, res: res, total_time: new Date() - req._startTime }, 'handled request/response');
    };
    next();
});


This is the second Express 'use' statement - make sure it's near or at the top (only below the 'favicon' use) to log everything! This is taken directly from the 'connect.Logger' middleware - it basically hooks itself into the response 'end' method to record the total time the request took and the request and response objects.

Note we also stash the 'log' object in Express so other middleware can access it.

Starting and Stopping the Server

baseapp uses the awesome forever module to handle the start, restart, and stop of our server. Take a look at our 'start' and 'stop' commands in package.json:

"start": "node_modules/.bin/forever -a -p logs --minUptime 2000 --spinSleepTime 1000 -l forever.log -o logs/out.log -e logs/err.log start app.js",
"stop": "node_modules/.bin/forever stop app.js",

They delegate the handling of starting and stopping our websever to forever - which once started will monitor the process forever (Hmmm) and restart it if it crashes. However our server must be up for at least 2 seconds before forever will try to restart it - remember our server will kill itself off if it cannot connect to 'redis' so in that case we do not want forever always trying to restart it.

The '--spinSleepTime' option just says wait 1 second before trying to start up the server if it died (the forever default). We use the 'append' option (-a) and put all of forever's logs into our 'logs' directory and that's it!


% npm start

Will keep our webserver up even when/if it crashes subject to sane constraints. Plus we have a clean way to stop it via:

% npm stop

The server gets its HOST and PORT values to listen on from the environment. Here are its defaults:

app.set('port', process.env.PORT || 3000);
app.set('host', process.env.HOST || '');

To change this:

% PORT=9999 HOST=localhost npm start

Now your server is listening on http://localhost:9999 (your shell may vary!). npm config variables are another way to go.

Forever has a couple more tricks up its sleeve - want to know what process forever has spawned off for you - 'forever list' to the rescue:

% node_modules/.bin/forever list
    info:    Forever processes running
    data:        uid  command             script forever pid   logfile          uptime
    data:    [0] OlRc /usr/local/bin/node app.js 54323   54324 logs/forever.log 0:0:0:3.727

You can edit the columns this command displays; read the forever documentation for that.

Finally note the index number '[0]' of our process - want to see the 'forever.log' for it but too lazy to load it up yourself? Here ya go:

% node_modules/.bin/forever logs 0
    data:    app.js:54324 - Express server listening on
    data:    app.js:54324 - Express server listening on
    data:    app.js:54324 - Express server listening on
    data:    app.js:54324 - Express server listening on http://localhost:9999
    data:    app.js:54324 - Express server listening on

Tuesday, July 2, 2013

baseapp: Continuous Integration

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 7.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp achieves Continuous Integration in 3 (yes 3) ways:

  1. grunt watch
  2. Karma
  3. Travis CI

They are all complementary and do similar things, so take your pick!

grunt watch

The grunt watch plugin monitors our files and runs grunt tasks as those files are modified - to the configuration!

    watch: {
        // If the main app changes (don't have any specific tests for this file :( YET! )
        mainApp: {
            files: [ 'app.js' ]
            , tasks: ['jshint', 'webdriver' ]
        }
        // If any server-side JS changes
        , serverSide: {
            files: [ 'routes/**/*.js' ]
            , tasks: ['jshint', 'jasmine_node_coverage', 'webdriver' ]
        }
        // If any server-side JS TEST changes
        , serverSideTests: {
            files: [ 'spec/server/**/*.js' ]
            , tasks: ['jshint', 'jasmine_node_coverage' ]
        }
        // If any client-side JS changes
        , clientSide: {
            files: [ 'public/javascripts/**/*.js' ]
            , tasks: ['jshint', 'jasmine', 'webdriver' ]
        }
        // If any client-side JS TEST changes
        , clientSideTests: {
            files: [ 'spec/client/**/*.js' ]
            , tasks: ['jshint', 'jasmine' ]
        }
        // If any integration/webdriver JS TEST changes
        , webDriverTests: {
            files: [ 'spec/webdriver/**/*.js' ]
            , tasks: ['jshint', 'webdriver' ]
        }
    }

The deal here is each file can only be in ONE stanza; if a file is represented in more than one grunt-watch stanza only the last one will 'win'. So we slice and dice our files to ensure they only show up in one place. The game, then, is to determine which grunt tasks should be run if a given file changes.

Starting with 'app.js': if it changes, run the 'jshint' and 'webdriver' tasks because those are the only two tasks that app.js could affect.

Similarly for editing a server-side JS file: if one of those changes, run 'jshint', 'jasmine_node_coverage', and 'webdriver' because all of those targets are potentially affected by a change to one of those files.

If any server-side unit test file changes we only run 'jshint' and 'jasmine_node_coverage' - we do NOT need to run any webdriver tests because changing a server-side unit test file does not affect those.

The same logic applies to changed client-side JavaScript files vs. client-side test files.

Finally if any webdriver tests change then we run the 'jshint' and 'webdriver' tasks because those are the only tasks potentially affected by a change to any of those files.

So in a terminal kick it all off by executing:

% grunt watch

Now in another terminal edit away and the 'grunt watch' terminal will run jshint and tests after you save changed files.


Karma

Karma functions similarly to 'grunt watch' - we tell it which files to watch, and if there is a change it kicks off some tests. Karma however ONLY runs our client-side jasmine unit tests. What is snazzy about it is support for multiple simultaneous browsers and built-in coverage generation.

All of its configuration is in karma.conf.js (yes this is a JavaScript file). Since it is going to run jasmine it needs the same information as our 'grunt jasmine' task: namely where all of our client-side JavaScript files are and what extra JavaScript to load into the browser to run our tests. That configuration is here:

// list of files / patterns to load in the browser
files = [
    JASMINE,
    JASMINE_ADAPTER,
    'public/javascripts/**/*.js',
    'spec/client/**/*.js'
];

'JASMINE' and 'JASMINE_ADAPTER' are karma built-ins for running jasmine tests - convenient! This section adds in coverage support:

preprocessors = {
    'public/javascripts/*.js': 'coverage'
};

This tells Karma to generate coverage information for all files in 'public/javascripts/*js' - where all of our client-side JavaScript resides.

coverageReporter = {
    type : 'lcov',
    dir : 'public/coverage/client'
};

Here is where we tell Karma where to put the coverage output. When the tests are all done running Karma will generate a new code coverage report.

browsers = ['ChromeCanary', 'PhantomJS', 'Safari', 'Firefox'];

Here is our array of browsers we want Karma to run - every time one of our watched files changes, Karma will execute all of the unit tests in each of those browsers. When Karma starts up it spawns off each of those and they sit around and wait to run tests.


singleRun = false;

Tells Karma to keep running in the background; you can run it in single-shot mode if it is being run in a QA environment.

So to start the whole thing up:

% node_modules/.bin/karma start

Karma will spawn off all the browsers you requested, you can now minimize all of those windows, Karma will print all relevant output to the terminal window. As you edit client-side JavaScript or the tests Karma will automatically re-run all the tests.


Travis CI

Travis-CI executes the 'script' property from our .travis.yml file each time we push to github; if that is not present (which it is not in our config), it runs 'npm test' by default for NodeJS jobs (which for us just turns around and runs 'grunt test'). Our travis-ci configuration is in the .travis.yml file. Here it is:

language: node_js
node_js:
  - "0.8"
  - "0.10"
services: redis-server
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - "java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar &"
  - "sleep 10"

This just says this application is a NodeJS application and we want to test it against versions 0.8 and 0.10 of node. You need to link your github account with travis-ci so it can watch your github commits and act on them accordingly.

Set up information for Travis CI is here. Basically you just need to create a Travis CI account and activate the GitHub service hook. Then visit your Travis CI profile and enable Travis CI for your repository. With our .travis.yml file in place, the next commit to our github repo will trigger Travis CI.

Travis CI conveniently has both redis and phantomjs (and firefox!) preinstalled for webdriver tests. Redis however is NOT automatically started so we need to tell Travis to start redis-server for us. We also have to start the selenium server and then sleep for 10 seconds to ensure it is up and ready to go. In case we want to use 'firefox' for our Selenium tests we fire up the virtual X framebuffer and set the DISPLAY environment variable accordingly.

You may notice that our travis tests will generate coverage information and total it all up and run plato - none of which we actually use as the result of the travis tests. Oh well.

If all goes well Travis will blow through all of our tests successfully. On a Travis result state change, like from success to failure or vice versa you will get an email with a link to the log so you can debug if one of your tests has failed.

You can see baseapp's Travis CI dashboard here. Yes it took several tries to get it right!! But lucky for you all of the 'hard' work has been done for you.

Finally we add the 'travis build status' badge to the top of our file to show all comers we use travis-ci to test our project and show build status:

[![build status](](

Don't freak out at the markdown syntax, we are just enclosing an image (baseapp.png) with a link to the baseapp travis-ci page.

Monday, July 1, 2013

baseapp: Authentication

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 6.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp provides robust registration/authentication services out of the box! Built on Connect sessions and redis all of the 'hard' work has been already done for you.

Before any other URL handler use:

user.setup(app); // this has to go first!

Which came from:

user = require('./routes/user')

If a user is logged in this middleware will load up the user object and put it in the session:

app.all('*', function(req, res, next) {
    exports.isLoggedIn(app.get('redis'), req.session, function(uid) {
        if (uid) {
            exports.loadUser(app.get('redis'), uid, function(userObj) {
                req.session.user = userObj;
                next();
            });
        } else {
            next();
        }
    });
});

Where 'loadUser' just returns an object with whatever you want to put in there:

exports.loadUser = function(redis, uid, cb) {
    redis.get('uid:' + uid + ':username', function(err, username) {
        cb({ username: username });
    });
};

In this case it only contains the username. As your app adds information to the 'user' in the redis DB you can stick that data on as well.

Any request to '/user/logout' will log the user out (imagine that!) and destroy the session:

app.all('/user/logout', function(req, res, next) {
    exports.userLogout(app.get('redis'), req.session, function() {
        req.session.destroy();
        res.redirect('/');
    });
});

You can read the details about how a user is authenticated, registered, and logged out here so I DRM (Don't Repeat Myself).

Separation Of Concerns

Most interesting to note is that I try to keep HTTP and user actions separate. Specifically, HTTP routing requests to the various URLs turn around and call more generic user methods ('userLogout', 'isLoggedIn', 'registerUser', 'loginUser', &c). I could have stuck the bodies of those functions directly into the Express middleware routing but that would have made testing those functions significantly more difficult. Also what if we needed another way to deal with users from another protocol - like from a CLI? Those functions are also separated from HTTP-specific stuff like how the username and password are retrieved from an HTTP request (req.body.username & req.body.password). Our user authentication stuff has no idea how/where those values came from - it does not want to be tied to HTTP and form handling.

It also works in reverse, I do not want my HTTP-specific code to know any details about the user authentication stuff.

Also note I do pass around a session object but that is NOT specific to HTTP; it is a simple object required for authentication from any protocol. How that session object is -stored- is specific to HTTP (in req.session) but the object itself is generic.

In fact in a perfect world the user authentication stuff would be in a completely separate module, I leave that as an exercise for the reader.

Also note the user functions expect a 'redis' client as an argument. This also allows for easier testing: I can pass these functions a mocked-out redis implementation or a connection to a test database, giving me full control when testing these functions.

To The Template

How does 'req.session.user' get funneled to the template? Glad you asked because it happens in two places! First look in routes/index.js:

res.render('index', { user: req.session.user });

When a user initially requests our app the server renders the index.dust template passing in the req.session.user object.

Once the user has already loaded our page and registers or logs in, AJAX is used to pass this object around. You'll see 'userLogin' does two things on successful login:

req.session.user = userObj;

It sets the user object into the session AND returns a JSON-ified version of it to the client. Peeking into login.js the response is handled here:

if (data.username) {
    // refresh page
    dust.render('index', { user: data }, function(err, out){
        var newDoc ="text/html", "replace");
        newDoc.write(out);
        newDoc.close();
    });
}

If 'data.username' is set they were successful in logging in, and now the client renders the index.dust template with the 'user' property set to the response (that was de-JSON-ified earlier). Here the entire page is replaced with the output from the filled-out template - typically just a piece of your page would be replaced!

A successful logout from logout.js does the opposite:

dust.render('index', {}, function(err, out){
    var newDoc ="text/html", "replace");
    newDoc.write(out);
    newDoc.close();
});

In this case there is no 'user' property passed to the template so you get the 'login' and 'register' buttons back.

Friday, June 28, 2013

baseapp: Other Grunt Goodies

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 5.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp provides some other grunt goodies as well beyond client-side unit tests, server-side unit tests, and integration tests! Let's take a look...


jshint

What JavaScript project would be complete without jslint/jshint? Here is the grunt configuration:

    jshint: {
        all: ['Gruntfile.js', 'public/javascripts/**/*.js', 'routes/**/*.js', 'spec/**/*.js', 'app.js']
        , options: {
            globals: {
                jQuery: true
            }
            , 'laxcomma': true
            , 'multistr': true
        }
    }

I like putting commas at the beginning of expressions and occasionally have been known to use multi-line strings - change these to suit you. Also I don't want jshint to complain about the 'jQuery' global variable. Finally I want to run jshint on my Gruntfile, all of my client-side JavaScript, server-side JavaScript, test files, and app.js. To use simply:

% grunt jshint

And jshint will complain loudly if it finds something it does not like. Note this task is the first task run as part of 'grunt test'.

Template Compilation

I like to pre-compile my dustjs-linkedin templates - who doesn't? This makes all of my templates available to be rendered by the client/browser as appropriate. To the grunt configuration!

    dustjs: {
        compile: {
            files: {
                "public/javascripts/templates.js": ["views/**/*.dust"]
            }
        }
    }

Not much here; this task will simply compile all the dust templates found in the 'views' tree and put them into 'templates.js', suitable for framing or loading into your HTML as you see fit:

    <script src="/vendor/dust-core-1.2.3.min.js"></script>
    <script src="/javascripts/templates.js"></script>

Note you need to of course also load up 'dust-core' to get the dust magic to actually fill out and render a template from templates.js. Here is a snippet from login.js that does this:

    dust.render('index', { user: data }, function(err, out){
        var newDoc = document.open("text/html", "replace");
        newDoc.write(out);
        newDoc.close();
    });

This is slightly different than a typical app as I am replacing the entire page once a user has successfully logged in - typically you just replace a piece of the page. Here I tell dust I want to render the 'index' template (compiled from 'index.dust') and am passing in some data to the template (the username). Dust asynchronously returns me the rendered text, which is HTML, and then I merrily replace the entire current document with it.

Here is the magic in index.dust that handles that:

    {#user}
    <div>Howdy {user.username}</div>
    <button type="button" class="btn btn-large btn-primary" id="logout">Logout</button>
    {:else}
    {>login type="login"/}
    <button type="button" id="lB" class="btn btn-large btn-primary" data-toggle="modal" data-target="#loginForm">Login</button>
    {>login type="register"/}
    <button type="button" id="rB" class="btn btn-large btn-primary" data-toggle="modal" data-target="#registerForm">Register</button>
    {/user}

This simply says if the 'user' property is defined show them their name and the logout button, otherwise show the login and register buttons.

Note further I include the 'login' template (compiled from login.dust) twice, with a 'type' parameter. Called a 'partial' in the vernacular, this template creates both the 'login' and 'register' modals; since they are pretty much exactly the same it is templated out - that is what templates are for! The id names and some text differ slightly but the modals are 95% the same, hence the template.
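
Though I won't reproduce login.dust here, the idea is that the 'type' inline parameter stamps out unique ids; a hypothetical sketch (mine, not the actual file) might look like:

```
<div class="modal hide" id="{type}Form">
    <form id="{type}form">
        <input type="text" id="email{type}" placeholder="Email">
        <input type="password" id="password{type}" placeholder="Password">
        <button type="submit" class="btn btn-primary">Submit</button>
    </form>
</div>
```

With type="login" this yields '#loginForm' and '#emaillogin'; with type="register", '#registerForm' and '#emailregister' - one template, two modals.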

So running:

% grunt dustjs

Will compile all templates and create the 'templates.js' file in the right place. This task is run as part of 'grunt test'.

I heartily suggest you become good friends with your favorite templating engine and use it extensively!

Static Code Analysis

Beyond the style and syntax-checker that is jshint, baseapp also includes plato for static code analysis. Plato outputs pretty HTML pages for application-level and file-level code statistics. Also especially nice is plato tracks the history of how our files change over time. Here is the config:

    plato: {
        dashr: {
            options : {
                jshint : false
            }
            , files: {
                'public/plato': ['public/javascripts/**/*.js', 'routes/**/*.js', 'spec/**/*.js', 'app.js']
            }
        }
    }

This tells plato NOT to use jshint (we have already done that ourselves) and where all of our application files are - including our test files. Plato's output is dumped to 'public/plato' so load up the file 'public/plato/index.html' to see static code analysis in all of its glory. From there you can drill down to specific files if you see any red flags. Remember to keep it simple people!

This task is run as part of 'grunt test'.

Total Coverage

This task aggregates coverage information from all sources: client-side unit tests, server-side unit tests, and webdriver/integration tests. If you (or your boss) want the 'total total' number of coverage from all tests, this is it! To the configuration, batman:

    total_coverage: {
        options: {
            coverageDir: './build/reports'
            , outputDir: 'public/coverage/total'
        }
    }

Unbeknownst to you we have been careful about putting all 'coverage.json' files (generated by istanbul) under the single root 'options.coverageDir' for this very reason - to aggregate them all. This command will recursively look through that directory for files named 'coverage*.json' and will mash them all together to generate a single mongo report which is put in the 'options.outputDir' directory. Pointing your browser there and loading up 'lcov-report/index.html' will give you the full monty.
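
Conceptually the aggregation boils down to summing hit counts for the same file across runs. Here is a simplified sketch of mine illustrating the idea (istanbul's real Collector also merges branch and function maps, so this is illustrative only):

```javascript
// Simplified sketch of aggregating istanbul coverage objects: sum
// per-statement hit counts for the same file across multiple runs.
function mergeCoverage(runs) {
    var total = {};
    runs.forEach(function(run) {
        Object.keys(run).forEach(function(file) {
            if (!total[file]) {
                // first time we have seen this file: copy its entry
                total[file] = JSON.parse(JSON.stringify(run[file]));
                return;
            }
            // same file covered in another run: sum the statement hits
            var s = run[file].s;
            Object.keys(s).forEach(function(id) {
                total[file].s[id] += s[id];
            });
        });
    });
    return total;
}
```

A statement hit in a unit test and a hit in a webdriver test both count toward the same total, which is exactly why the 'total total' number is usually higher than any single report.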

Istanbul can also output coverage information using the 'cobertura' format (vs. the 'lcov' format we have been using). This is useful for CI (and other) tools like Jenkins that understand that format. This task also outputs 'cobertura' format for the total coverage - which is likewise dumped into the 'options.outputDir' directory.

This task is run as part of 'grunt test'.

The Whole Enchilada

So now we can finally see and understand everything that 'grunt test' does - take a look!

grunt.registerTask('test', [
    'jshint'
    , 'jasmine'
    , 'jasmine_node_coverage'
    , 'dustjs'
    , 'webdriver_coverage'
    , 'total_coverage'
    , 'plato'
]);

Run jshint, run client-side unit tests, run server-side unit tests, compile our templates, run integration tests with code coverage, total up all of the coverage, and run plato. Whew that was fun!

Thursday, June 27, 2013

baseapp: Integration Tests Using WebDriver

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 4.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp handles integration tests using jasmine-node running the camme/webdriverjs npm module for Selenium and istanbul for code coverage. The grunt task to run the tests is 'grunt webdriver', or 'grunt webdriver_coverage' to run with code coverage. The only reason to run without code coverage is to debug the tests themselves, so don't get used to it.

The deal is this, you write jasmine-node style tests like your server-side unit tests but use the webdriverjs module to bring up a browser and interact with it. The grunt configuration options for the webdriver tests are within the 'webd' object:

    webd: {
        options: {
            tests: 'spec/webdriver/*.js'
            , junitDir: './build/reports/webdriver/'
            , coverDir: 'public/coverage/webdriver'
        }
    }

Nothing surprising here, simply a pointer to all of your webdriver tests and where you want the JUnit XML and coverage output to go. The grunt webdriver tasks will collect coverage output for BOTH the server side AND client side code that is touched during the tests. At the end of the entire test run all of that coverage information is aggregated together (from all webdriver tests) into a total and that total is what is put into the 'options.coverDir' directory. Note to aggregate coverage information from the client-side unit tests, server-side unit tests, and the webdriver tests use the 'grunt total_coverage' task.


Webdriver itself is an interesting beast, and an ugly one. The bulk of your tests should be unit tests. At most 20% of your total tests should be webdriver tests. Webdriver is fickle and you will get false negatives for a variety of reasons, some you can control and others you cannot. Prepare to be frustrated. Unfortunately it is the only tool we have to test 'real' code in 'real' browsers so we suck it up and deal with it. Running tests through phantomjs helps while developing but at some point you must run through 'real' browsers on 'real' hosts, so let's try to make this as painless as possible.

To use WebDriver you must start the Selenium daemon on the host that will run the brower(s). In a real testbed you will run the Selenium grid and farm out Selenium tests to hosts of various OSes with various browser versions, but for development we will run the selenium server locally and have it fire off browsers on our development machine. If you are ssh'ed into your dev box don't despair! You can easily use phantomjs to run your webdriver tests with no display needed as we shall soon see...

So first, start the Selenium server and just let it run in the background forever:

% java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar &

Now you are ready to run webdriver tests!

The Tests

WebDriver tests are just jasmine-node tests that use webdriverjs to drive the browser over the WebDriver protocol and verify the DOM is in the state you expect, plus 'assert' for other general-purpose assertions. So check out loginSpec.js and I'll walk through it, since there is a lot going on. But once you get the hang of it, it's simple.

First we grab a handle to the redis database - the ONLY thing we use it for is to flush the DB after each test so we start with a clean slate. You can see that in the 'afterEach' function (also note we select DB '15' to not touch our production DB).

I will skip the 'saveCoverage' require statement for now, it is not important yet, moving on to the 'login tests' suite. For webdriver we need jasmine to NOT time out as these tests can take an unknown amount of time (otherwise jasmine times out after 5 seconds).

To set up each test our 'beforeEach' reconnects to the Selenium process and requests a browser provided by the BROWSER environment variable (or 'phantomjs' if that is not set). Grunt will set this for us from a command line option:

% grunt webdriver --browser=firefox  # or any grunt task that will eventually call 'webdriver' like 'test'

Replace that with 'safari' or 'chrome' or 'iexplore' for other browsers. Now give Selenium the URL to connect to, from values placed in the environment by grunt. Finally we pause for half a second to give the browser time to load our page. Here begin the Selenium issues: having to wait unknown amounts of time to ensure the browser has loaded your page completely.

Our 'afterEach' function flushes our test database and grabs code coverage for the test if the user requested coverage. Since the browser is refreshed after each test we need to grab coverage information after each test; at the end of all tests this incremental coverage information is aggregated to give total coverage. You do not need to worry about the internals of the 'saveCoverage' function - there be dragons. I will discuss it in detail at the end of this post for the curious.

Each test now just manipulates and queries the DOM for expected values. Each webdriver method accepts an optional second function parameter whose signature is 'function(err, val)' - within this function you can assert values and ensure 'err' is null (if you are not expecting an error that is!). Regardless, you chain all of your actions together; each method is actually asynchronous but webdriverjs handles all of that for us behind the scenes. Finally at the end of your test '.call(done)' so webdriverjs knows this test is finished.
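
To see why the chaining works even though every step is asynchronous, here is a toy sketch of the queueing idea (a simplification of mine, not actual webdriverjs internals):

```javascript
// Toy sketch of how a webdriverjs-style chainable client queues
// asynchronous steps and runs them strictly one after another.
function makeClient() {
    var queue = [];      // pending steps
    var running = false;

    function runNext() {
        var step = queue.shift();
        if (!step) { running = false; return; }
        step(runNext); // each step calls runNext() when it finishes
    }

    var client = {
        // every chained method just queues work and returns the client
        pause: function(ms, cb) {
            queue.push(function(next) {
                setTimeout(function() { if (cb) { cb(); } next(); }, ms);
            });
            return client._kick();
        },
        // .call(fn) queues fn at the end - perfect for jasmine's 'done'
        call: function(fn) {
            queue.push(function(next) { fn(); next(); });
            return client._kick();
        },
        _kick: function() {
            if (!running) { running = true; runNext(); }
            return client;
        }
    };
    return client;
}
```

Every real webdriverjs method (click, setValue, isVisible, ...) follows this same "queue and return the client" pattern, which is why you can write the whole test as one synchronous-looking chain.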

Let's follow the flow of one of the tests - the biggest meatiest one: 'register & login user'

it("register & login user", function(done) {

The first thing to notice is the 'done' function passed into the test - we call this when the test is finished. At this point we have navigated to our page & should be ready to go...


Ahh another bit of WebDriver/Selenium weirdness - need this pause here for the click buttons to work!


Click the 'Register' button...


... and wait for the modal to come up...

        .isVisible('#registerForm', function(err, val) {

... and verify it is visible

Now set some form values...

        .setValue("#emailregister", 'testdummy')
        .setValue("#passwordregister", 'testdummy')

... and submit the form & wait ...

        .isVisible('#registerForm', function(err, val) {
            // register form no longer visible

... and now the register modal is gone (hopefully!)


... now click the 'Login' button ...


... and wait ...

        .isVisible('#loginForm', function(err, val) {

... and hopefully now the login modal is showing


so set values on its form...

        .setValue("#emaillogin", 'testdummy')
        .setValue("#passwordlogin", 'testdummy')

and submit it and wait...

        .isVisible('#loginForm', function(err, val) {
            // login form gone now

... modal should be gone now and hopefully the 'Logout' button is visible

        .isVisible('#logout', function(err, val) {
            // logout button now visible

... and the 'Login' button is gone (since we have just logged in)

        .isVisible('button[data-target="#loginForm"]', function(err, val) {
            // login button gone

... and the 'Register' button is gone

        .isVisible('button[data-target="#registerForm"]', function(err, val) {
            // register button gone

... Now click the 'Logout' button and wait


... and hopefully the 'Logout' button is now gone

        .isVisible('#logout', function(err, val) {
            // logout button gone now

... and the 'Login' and 'Register' buttons are back...

        .isVisible('button[data-target="#loginForm"]', function(err, val) {
            // login button back
        .isVisible('button[data-target="#registerForm"]', function(err, val) {
            // register button back

... and tell webdriverjs this test is done


OK simple enough to follow that flow. All of the tests are executed and we are done.

Running The Server

To run these tests we need the Express server up and running. The grunt task 'express' handles this - here is the config from the Gruntfile:

    express: {
        server: {
            options: {
                server: path.resolve('./app.js')
                , debug: true
                , port: 3000
                , host: ''
                , bases: 'public'
            }
        }
    }

The grunt-express plugin provides this functionality. We tell it where our app.js is, our static directory ('public') and a host/port number (which are placed in the environment by the grunt environment plugin using grunt templates).

Executing the '% grunt express' task will fire up our Express server - but note it ONLY lives for the duration of the grunt process itself - so once grunt quits our server does too (look at the 'express keepalive' task to have it run forever or even better just use 'npm start').

Now take a quick look at the bottom of app.js to see how this works:

// run from command line or loaded as a module (for testing)
if (require.main === module) {
    var server = http.createServer(app);
    server.listen(app.get('port'), app.get('host'), function() {
        console.log('Express server listening on http://' + app.get('host') + ':' + app.get('port'));
    });
} else {
    exports = module.exports = app;
}

This 'trick' checks if app.js was run from the command line (which we do when we run 'npm start') or loaded up as a module (which the 'grunt express' task does). If loaded as a module we export our Express app so grunt can work with it; if executed from the command line we start up the server ourselves.

Grunt Tasks

Look at our 'grunt webdriver' and 'grunt webdriver_coverage' tasks:

// webdriver tests with coverage
grunt.registerTask('webdriver_coverage', [
    'env:test'  // use test db
    , 'env:coverage' // server sends back coverage'd JS files
    , 'express'
    , 'webd:coverage'
]);

// webdriver tests without coverage
grunt.registerTask('webdriver', [
    'env:test'  // use test db
    , 'express'
    , 'webd'
]);

The only difference is how the environment is set up and how the base 'webd' task is executed. Both use grunt's env plugin to set up the environment - let's take a quick look:

    , env: {
        options : {
            //Shared Options Hash
        }
        , test: {
            NODE_ENV : 'test'
            , HOST: '<%= %>'
            , PORT: '<%= express.server.options.port %>'
            , BROWSER: '<%= webd.options.browser %>'
        }
        , coverage: {
            COVERAGE: true
        }
    }

Both set the HOST and PORT environment variables which the webdriver tests use here:

    .url("http://" + process.env.HOST + ':' + process.env.PORT)
    .pause(500, done);

and here to pass to getCoverage:

if (process.env.COVERAGE) {
    saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);

NOTE when our Express server is started by grunt these lines are NOT USED:

app.set('port', process.env.PORT || 3000);
app.set('host', process.env.HOST || '');

Those are ONLY used when our Express server is started from the command line (via 'npm start') - see the 'Running The Server' section above. The 'express' grunt task will use the 'port' and 'host' configuration properties directly.

The 'coverage' rule also sets COVERAGE to true, which both our Express server and webdriver tests check. The BROWSER environment variable is also set here, to be picked up by our webdriver tests when initializing the webdriverjs client in the 'beforeEach' method:

client = webdriverjs.remote({ desiredCapabilities: { browserName: process.env.BROWSER }});

Running Our Webserver

Here is how app.js reacts to these environment variables (note these variables are only in the grunt process environment - as soon as the grunt process ends it takes its environment away with it):

var isCoverageEnabled = (process.env.COVERAGE == "true");

First app.js determines if coverage is requested or not...

if (isCoverageEnabled) {

If so app.js installs istanbul's hook loader which instruments all subsequent 'require'd modules for code coverage. That is why our server-side modules are required AFTER this statement, so if coverage IS requested those modules will be properly instrumented:

// these need to come AFTER the coverage hook loader to get coverage info for them
var routes = require('./routes')
    , user = require('./routes/user')

Now those two modules will have coverage information associated with them - sweet.

The next bit of magic is this:

if (isCoverageEnabled) {
    app.use(im.createClientHandler(path.join(__dirname, 'public'), {
        matcher: function(req) { return req.url.match(/javascripts/); }
    }));
    app.use('/coverage', im.createHandler());
}

This does two things:

  1. Tells istanbul to check all files requested from the 'public' directory against the provided 'matcher' function. If that matcher function returns 'true' then those files will be dynamically instrumented before being sent back to the requesting client (the browser). In our case any requested file in the 'javascripts' directory will be dynamically instrumented with coverage information.
  2. Any request to '/coverage' will be handed off to istanbul - we will see this in use later while the webdriver tests are running. URLs under '/coverage' speak directly to istanbul which accepts and provides coverage output as we shall see.

Finally this:

if ('test' == app.get('env')) {

This checks the value of the NODE_ENV environment variable which our Gruntfile set to 'test' - so this matches and therefore the 'test' database ('15') is selected for use so as not to interfere with any production data.

And with that our Express server is off and running, with potentially instrumented server-side modules and ready to potentially dynamically instrument our client-side JavaScript. Plus istanbul is handling all requests under the '/coverage' URL and we are using our test database - whew!

If we are running our webdriver tests without code coverage we are all ready to go, the server is running connected to a test database, our regular JavaScript files are served unchanged and our tests run and JUnit XML output is generated. Things are more interesting when code coverage information is requested.

Dealing With Code Coverage

So we have seen the server setup for running webdriver tests with code coverage - what about on the client side? Yes, client-side JavaScript is being dynamically instrumented by istanbul, but remember after each test the browser is completely refreshed so we must grab and persist all coverage information after each test. This is where this magic comes to the fore:

saveCoverage = require('./GetCoverage')

Remember our 'afterEach' method:

afterEach(function(done) {
    if (process.env.COVERAGE) {
        saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);
    }

    // just blows out all of the session & user data
    db.flushdb(done);
});


If we requested coverage we utilize this handy-dandy module to save off all current coverage information - the details of how it is done are not important, but since I know how curious you are it works like this (you can follow along in GetCoverage.js):

  1. Execute some JavaScript in the browser via WebDriver to get the CLIENT-SIDE coverage info (which is stored in the global '__coverage__' variable)
  2. POST that stringified JSON of the '__coverage__' variable to istanbul at '/coverage/client' (remember istanbul is in charge of every URL under '/coverage'). This causes istanbul to aggregate the POST'ed client-side coverage with the server-side coverage.
  3. GET the entire coverage info from '/coverage/object' - this is the aggregated server + client-side coverage information.
  4. PERSIST (as in 'save to a file') the aggregated coverage information

Those 4 steps are done after every test.

After all tests are finished the 'webd' grunt task then aggregates each of those individual coverage objects into one total one and generates the final HTML report which is available at: 'public/coverage/webdriver/lcov-report/index.html' and we are done.

SIMPLE - the point is this is all done for you and you do not need to know the gory details; just write webdriver tests following the ones I wrote and you are fine.

Wednesday, June 26, 2013

baseapp: Server-Side Unit Tests

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 3.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp handles server-side unit tests using jasmine-node and istanbul for code coverage. The grunt task to run the tests is 'grunt jasmine_node_coverage'. The tests always run with code coverage enabled.

Unlike the client-side unit tests I do not use any third party grunt plugin to execute this grunt task. Let's go to the configuration!

    jasmine_node_coverage: {
        options: {
            coverDir: 'public/coverage/server'
            , specDir: 'spec/server'
            , junitDir: './build/reports/jasmine_node/'
        }
    }

Not a lot here - 'options.coverDir' is the directory where coverage information will be dropped, 'options.specDir' is where all of the server-side unit tests live, and finally 'options.junitDir' is where the JUnit XML test output (one file per suite) is placed.

Jasmine-node works almost identically to client-side jasmine. The biggest (only?) difference is how asynchronous tests are handled. While both jasmines match 'runs()' and 'waitsFor()' functions to handle asynchronous tests, jasmine-node also provides 'jasmine.asyncSpecWait();' matched with 'jasmine.asyncSpecDone();' to make handling async tests easier.

Let's take a look at the userSpec.js suite. This file unit tests the authentication code in routes/user.js. Unlike the client-side tests these tests are not run in a browser and objects under test are pulled in via the normal 'require' method.

In this case I expect the redis database to be up, you can use jasmine spies to mock it all out, but instead I select redis database '15' to not interfere with production data (redis has 16 databases named 0-15, by default you get database 0 unless you change it as I do in the 'select' call).

Also I have an 'afterEach' method that completely blows out the contents of the redis database after each test so I'm guaranteed each test gets a fresh database (redis.flushdb()).

The async tests themselves are very straightforward: just call methods and verify responses, easy peasy.

Application Architecture

A quick note about ease of testing: you will note in user.js I was careful to separate HTTP-protocol concerns from the actual authentication routines. This way I do not have to mock or deal with HTTP when testing the authentication logic. Be sure to keep separation of concerns in mind while writing your code - writing tests first will help keep your code clean.
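
As a hypothetical illustration of that separation (these names are mine, not the actual routes/user.js code): the authentication logic takes plain values and a callback, while a thin wrapper does all the req/res translation:

```javascript
// Pure authentication logic: no req/res in sight, trivially unit-testable
// with a fake db object - no HTTP mocking required.
function checkCredentials(db, username, password, cb) {
    db.get('user:' + username, function(err, stored) {
        if (err) { return cb(err); }
        cb(null, stored === password);
    });
}

// Thin HTTP wrapper: only translates between HTTP and the pure function.
// Testing this layer (if you bother) is the only place HTTP appears.
function loginRoute(db) {
    return function(req, res) {
        checkCredentials(db, req.body.username, req.body.password,
            function(err, ok) {
                if (err || !ok) { return res.send(401); }
                req.session.user = req.body.username;
                res.send(200);
            });
    };
}
```

Unit tests then poke checkCredentials directly with a stub db, and the HTTP layer stays so thin it is covered by the webdriver tests anyway.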

Code Coverage

Istanbul has great integration with jasmine-node, looking further down in the Gruntfile you can see the implementation of the jasmine_node_coverage task. Simply running istanbul with jasmine-node will automatically generate code coverage and the coverage output is written to 'public/coverage/server/lcov-report/index.html' in fancy HTML.

Tuesday, June 25, 2013

baseapp: Client-Side Unit Tests

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 2.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp handles client-side unit tests using jasmine and istanbul for code coverage. The grunt task to run the tests is 'grunt jasmine'. They always run with code coverage enabled.

Test Configuration

Let's look at the configuration in the Gruntfile:

    jasmine : {
        test: {
            src : 'public/javascripts/**/*.js',
            options : {
                specs : 'spec/client/**/*.js'
                , keepRunner: true  // great for debugging tests
                , vendor: [
                    'public/vendor/jasmine-jquery.js'
                    , 'public/vendor/dust-core-1.2.3.min.js'
                    , 'vendor/bootstrap/js/bootstrap.min.js'
                ]
                , junit: {
                    path: "./build/reports/jasmine/"
                    , consolidate: true
                }
                , template: require('grunt-template-jasmine-istanbul')
                , templateOptions: {
                    coverage: 'public/coverage/client/coverage.json'
                    , report: 'public/coverage/client'
                }
            }
        }
    }
Here is what is happening - the grunt-contrib-jasmine plugin collects all of your client-side JavaScript (the 'src' property), all of your test files ('options.specs'), and any other JavaScript files we need to execute our tests ('options.vendor') and creates a single HTML file '_SpecRunner.html' which is loaded into a browser (phantomjs). The grunt jasmine plugin automatically adds the jasmine client-side libraries to actually run the tests too. So when _SpecRunner.html is loaded into a browser (phantomjs) all of your tests are run.

The output of these tests (in JUnit XML format) is dumped into the options.junit.path directory - one XML file per test suite.

Finally the grunt-template-jasmine-istanbul package is leveraged to generate code coverage information, the HTML output of which is dumped into the 'public/coverage/client' directory. We also persist the 'coverage.json' file so it can be aggregated later with other tests (like server-side unit tests and webdriver tests).

Ok that's the setup - for now just know that any '*Spec.js' file placed into the 'spec/client' directory will get executed and any client-side JavaScript file you write in public/javascripts will get loaded to be tested.


% grunt jasmine

Will run all of this stuff.

Test Files

How about actually writing the tests? First get basically familiar with jasmine if you are not already. Now let's take a look at logoutSpec.js - a suite called 'logout' is created with two tests.

The most interesting bits are the fixture setup and the jQuery AJAX spy.


Most client-side JavaScript manipulates the DOM, but we don't want to load in all of our application's HTML to test our code, so we use 'fixtures' instead. The 'setFixtures' call is provided by the jasmine-jquery JavaScript library we loaded in via the 'vendors' array in our Gruntfile.js. This lets us set some HTML for the following test that is automatically cleaned up after the test ends. So note I have to 'setFixtures' for each test. If your HTML is the same for each test then you should use a 'beforeEach' function so you DRY (Don't Repeat Yourself). You'll see the HTML is slightly different for each test so I wasn't able to do that here.

Jasmine-jquery can also load fixtures from a webserver, which is especially nice if you are using templates (which you are!), so you can test using your application's actual HTML. As these fixtures are very simple I did not do that here. Also beware of how the fixture loading interacts with jQuery's AJAX object, which I will discuss in more detail next.


Jasmine is especially nice for providing spies for interacting with your code's dependencies. The logout component relies on an AJAX call to actually log the user out, but while unit testing we do not have a web server running so we need to mock out the AJAX call. We do that using the "spyOn($, 'ajax')" function call.

All subsequent calls that use jQuery's AJAX mechanism will instead get routed to this spy; the underlying AJAX call will NOT get executed (note jasmine does provide a way to also call through to the actual implementation but we do not want to do that here).

The spy allows us to verify the ajax method was called with the expected arguments. Spies let us do more than just verify arguments and call through to the real underlying implementation - take a look at loginSpec.js to see more!

Search for the 'andCallFake' method - using that method you can have the spied-on method execute any function you like! Let's take a close look at the "should handle failure default response" test case.

First I set up the HTML fixture for this test, then I create the login component I will test along with some other canned values I will be using more than once so I DRM (Don't Repeat Myself). Now I create a spy for the $.ajax method and inform jasmine that whenever that method is called it should execute the given function instead. Note my function receives the argument list, and in this case I simply turn around and call the provided 'error' callback with some canned data to test that code path. I then set some form elements and 'submit' the form. As this is all synchronous the error callback will get called via 'andCallFake' and finally I verify the error message is being shown properly. Note the 'toHaveText' matcher came from jasmine-jquery, which comes chock full of great jQuery-specific matchers for us to leverage. Be sure to check them out.
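
If you are curious what a spy with 'andCallFake' boils down to, here is a toy sketch (a deliberate simplification of mine, not jasmine's real implementation):

```javascript
// Toy spy: replaces obj[method], records every call, and - if a fake is
// installed via andCallFake - runs the fake instead of the original.
function spyOnToy(obj, method) {
    var original = obj[method];
    var spy = function() {
        spy.calls.push(Array.prototype.slice.call(arguments));
        if (spy.fake) { return spy.fake.apply(this, arguments); }
        // without a fake, record the call but do NOT call through
    };
    spy.calls = [];
    spy.andCallFake = function(fn) { spy.fake = fn; return spy; };
    spy.restore = function() { obj[method] = original; };
    obj[method] = spy;
    return spy;
}

// Usage mirroring the loginSpec.js pattern: fake $.ajax and immediately
// invoke the caller's 'error' callback with canned data.
var $ = { ajax: function() { throw new Error('no network in unit tests!'); } };
var spy = spyOnToy($, 'ajax').andCallFake(function(opts) {
    opts.error({ status: 500 }); // simulate a failed AJAX call
});
```

The real jasmine spy does much more (argument matchers, call-through, return stubbing), but the mechanism - swap the method, record, delegate to your fake - is exactly this.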

Test Files

So all of our client-side unit test files must reside in the spec/client directory and be named with the word 'Spec' in them - follow that pattern for your own sanity! As you create more components keep adding tests; they can be at any depth in the spec/client tree, jasmine will find them!

_SpecRunner.html and Debugging Tests

Very rarely things do not work the very first time. You'll note there is a grunt configuration property 'keepRunner'. I told you about the _SpecRunner.html file grunt-contrib-jasmine generates each time you run 'grunt jasmine' - by default that file is deleted after all the tests complete. But if there is a problem running the tests it is very hard to tell what is going on, especially as the tests are all run in phantomjs. By setting 'keepRunner' to 'true' grunt-contrib-jasmine will not delete _SpecRunner.html, so you can load it up into Chrome (or any other browser not named Chrome) and manually execute the unit tests yourself. They will run quickly! Most importantly you can open up a JavaScript console/debugger window to really tell what is going on when your tests are not running correctly.

Test Output

Test results are output in the JUnit XML format with pass/fail information and dumped to the build/reports/jasmine/ directory with one XML file per-suite. The most interesting thing about this format is its wide support.

Code Coverage

All JavaScript files under test are automatically instrumented for code coverage information. Regardless of tests passing/failing you will get code coverage information dumped to public/coverage/client/index.html - simply point your browser at that file and bask in the coverage'ness. You can drill down to each individual file and see line-by-line output of coverage as provided by istanbul. Let this information help guide your next set of tests!

Monday, June 24, 2013

Intro To baseapp: Making JavaScript Best Practices Easy

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 1.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)
Getting started is always the hardest part.  Whether it's a web app, a test suite, or a blog post, staring at a blank sheet of metaphorical paper is tough.  You have a great idea of what it should do, what you want, but how to start?  You just want to write code that brings your idea to life, but how to get to that part?  You need a web server, you need a database, you need authentication, you want it to look nice, you want to use best practices, you want to use the latest technology, and you want it to be fun.  That is why you are doing all of this at some level, it has got to be fun!  That means no tedium, no slogging through boilerplate.  How can you skip all of that and just jump straight to the fun part?  Writing code that will change the world, or at least a corner of it - how can you get to that part as quickly as possible?  There is so much cruft to fight through to get there, wouldn't it be nice if all of that stuff could just 'blow away...'?

My friends, I have been there.  I have had lots of great ideas, started hacking, and then gotten quickly crushed under the weight of boilerplate and 'best practices'.  I want to 'test first'.  I want to have a solid foundation for my web app.  I want to do things the 'right way'.  But there always seems to be a horrible choice at the beginning of a new project:  either start trying to do things the 'right way' by setting up testing, automation, and foundation before any of the 'exciting' and 'interesting' work (because you cannot bake that stuff into your code later), OR dive right in and start coding the 'good' stuff and be left with a mess soon thereafter.  What a crappy set of choices!  I think we all agree we'd LIKE to start our new project off 'right' with all the test and automation infrastructure built in from the beginning, but doing that saps all of the joy out of coding, out of turning our great idea into an awesome product that the world loves.  And what's the fun in that?

So I give to you, stymied developer, the boilerplate.  I tend to use the same technology in most of my web apps: Express and Redis on the server-side, Bootstrap and LinkedIn-DustJS on the client side.  I use Jasmine and WebDriver for testing and Istanbul for code coverage.  Grunt and Travis-CI for automation and continuous integration.  I run everything through JSHint. I like Plato's static code analysis.  And most of my apps need user registration and login/logout.  And I am sick of having to rewrite all of those pieces from scratch for each web app I dream up!  So, as of today, no more.  I have created 'baseapp', a bare application with all of that all thrown in and ready to go.  The idea is you (and me!) fork this repo for every new web app you write so you can jump straight into the good parts and all the boilerplate test/automation/code coverage crap is already taken care of for us.  I'm not just talking the packages are downloaded for you.  I'm talking test code has already been written for all the base stuff.  Most people, me included, just really want to see how something is done and then we can go off and do it ourselves, 'see one, do one, teach one' kinda thing.  So I have pre-populated baseapp with client-side unit tests, server-side unit tests, and webdriver integration tests.  No need to try to re-figure out how to piece all of these things together yourself.  Just look at what I did and make more of them just like that.

Here are the technologies baseapp leverages to the hilt:

* github
* grunt
* jshint
* dustjs-linkedin
* express
* jasmine
* jasmine-node
* webdriver
* redis
* authentication
* istanbul
* plato
* karma
* phantomjs
* bootstrap
* travis-ci

This is all set up and baked into baseapp - the packages aren't just installed, they are actually configured, wired together, and ready to go.  If this stack is up your alley, starting your web app with 'best practices' and 'test first' could not be easier, as all of the setup/boilerplate is already done.

Here is how to start your next web app:

1. Fork the baseapp repo
    - so you have your own copy of it - you will be hacking away adding your special sauce!
2. Clone into your local dev environment
3. Update 'package.json' with the name of your web app
    - This is your biggest decision - what to name your app!
4. % npm install
    - This will suck down all packages necessary for your awesome new dev environment
5. Install & start redis
    - But you've already got it installed & running, right?
6. To run WebDriver/Selenium tests you need to start the Selenium jar
    - % java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar
7. % grunt test

You will see tests pass, hopefully!  If not, you will get an informative error message as to why.

To see the app running yourself simply:

% npm start

And then in your browser go to:

And you will see your beautiful application so far - buttons to login and register - that actually work - and that's it.

This article is the first in a series explaining exactly what is going on in 'baseapp' - I will explain everything that has already been set up for you and how to leverage it as you develop the world's next great webapp - stay tuned and have fun!