Wednesday, July 3, 2013

baseapp: Administration

baseapp provides all the boilerplate to get your JavaScript web application started off right. This is Part 8.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)


Bunyan provides a very flexible JSON-based logging solution. Here is its config:

var Bunyan = require('bunyan')
    , log = Bunyan.createLogger({
    name: "YOUR_APP"
    , serializers : Bunyan.stdSerializers
    , streams: [{
        type: 'rotating-file'
        , path: 'logs/http.log'
        , period: '1d'
        , count: 3
    }]
});

All server logs are kept in the 'logs' directory, rolled over every day, and the 3 previous logs are retained. Bunyan's standard serializers handle serializing the request and response objects.

The logging routine is here:

app.set('logger', log);

// bunyan logging
app.use(function(req, res, next) {
    var end = res.end;
    req._startTime = new Date();
    res.end = function(chunk, encoding){
        res.end = end;
        res.end(chunk, encoding);{req: req, res: res, total_time: new Date() - req._startTime}, 'handled request/response');
    };
    next();
});


This is the second Express 'use' statement - make sure it's near or at the top (only below the 'favicon' use) so it logs everything! It is taken directly from the 'connect.Logger' middleware - it basically hooks itself into the response's 'end' method to record the total time the request took, along with the request and response objects.

Note we also stash the 'log' object in Express so other middleware can access it.

Starting and Stopping the Server

baseapp uses the awesome forever module to handle the start, restart, and stop of our server. Take a look at our 'start' and 'stop' commands in package.json:

"start": "node_modules/.bin/forever -a -p logs --minUptime 2000 --spinSleepTime 1000 -l forever.log -o logs/out.log -e logs/err.log start app.js",
"stop": "node_modules/.bin/forever stop app.js",

They delegate the handling of starting and stopping our webserver to forever - which, once started, will monitor the process forever (Hmmm) and restart it if it crashes. However, our server must be up for at least 2 seconds before forever will try to restart it - remember our server will kill itself off if it cannot connect to 'redis', and in that case we do not want forever endlessly trying to restart it.

The '--spinSleepTime' option says to wait 1 second before trying to start up the server after it dies (the forever default). We use the 'append' option (-a), put all of forever's logs into our 'logs' directory, and that's it!
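A toy model of how those two flags interact - purely illustrative, this is not forever's actual implementation:

```javascript
// Toy model of forever's restart decision under --minUptime and
// --spinSleepTime: a child that ran at least minUptime ms before
// crashing is restarted right away, while one that died sooner is
// treated as "spinning" (e.g. our server exiting at boot because
// redis is down) and waits spinSleepTime ms before the next attempt.
function restartDelay(uptimeMs, minUptime, spinSleepTime) {
    if (uptimeMs >= minUptime) {
        return 0;           // healthy crash: restart immediately
    }
    return spinSleepTime;   // spinning: back off before retrying
}
```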


% npm start

Will keep our webserver up even when/if it crashes subject to sane constraints. Plus we have a clean way to stop it via:

% npm stop

The server gets its HOST and PORT values to listen on from the environment. Here are its defaults:

app.set('port', process.env.PORT || 3000);
app.set('host', process.env.HOST || '');

To change this:

% PORT=9999 HOST=localhost npm start

Now your server is listening on http://localhost:9999 (your shell syntax may vary!). npm config variables are another way to go.
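If you go the npm config route, values under the "config" key in package.json are exposed to npm scripts as npm_package_config_* environment variables. A minimal sketch of how that could slot into the lookup chain (the resolvePort helper is made up for illustration):

```javascript
// Sketch: given "config": { "port": "8080" } in package.json, npm
// sets npm_package_config_port for any script it runs, so the port
// resolution order could look like this:
function resolvePort(env) {
    return env.PORT                         // explicit override wins
        || env.npm_package_config_port      // npm config default
        || 3000;                            // hard-coded fallback
}
```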

Forever has a couple more tricks up its sleeve - want to know what process forever has spawned off for you - 'forever list' to the rescue:

% node_modules/.bin/forever list
    info:    Forever processes running
    data:        uid  command             script forever pid   logfile          uptime
    data:    [0] OlRc /usr/local/bin/node app.js 54323   54324 logs/forever.log 0:0:0:3.727

You can customize the columns this command displays - see the forever documentation for that.

Finally note the index number '[0]' of our process - want to see the 'forever.log' for it but too lazy to load it up yourself? Here ya go:

% node_modules/.bin/forever logs 0
    data:    app.js:54324 - Express server listening on http://localhost:9999
    data:    app.js:54324 - Express server listening on http://localhost:9999
    data:    app.js:54324 - Express server listening on http://localhost:9999
    data:    app.js:54324 - Express server listening on http://localhost:9999
    data:    app.js:54324 - Express server listening on http://localhost:9999

Tuesday, July 2, 2013

baseapp: Continuous Integration

baseapp provides all the boilerplate to get your JavaScript web application started off right. This is Part 7.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp achieves Continuous Integration in 3 (yes 3) ways:

  1. grunt watch
  2. Karma
  3. Travis CI

They are all complementary and do similar things, so take your pick!

grunt watch

The grunt watch plugin monitors our files and runs grunt tasks as those files are modified - on to the configuration!

    watch: {
        // If the main app changes (don't have any specific tests for this file :( YET! )
        mainApp: {
            files: [ 'app.js' ]
            , tasks: ['jshint', 'webdriver' ]
        }
        // If any server-side JS changes
        , serverSide: {
            files: [ 'routes/**/*.js' ]
            , tasks: ['jshint', 'jasmine_node_coverage', 'webdriver' ]
        }
        // If any server-side JS TEST changes
        , serverSideTests: {
            files: [ 'spec/server/**/*.js' ]
            , tasks: ['jshint', 'jasmine_node_coverage' ]
        }
        // If any client-side JS changes
        , clientSide: {
            files: [ 'public/javascripts/**/*.js' ]
            , tasks: ['jshint', 'jasmine', 'webdriver' ]
        }
        // If any client-side JS TEST changes
        , clientSideTests: {
            files: [ 'spec/client/**/*.js' ]
            , tasks: ['jshint', 'jasmine' ]
        }
        // If any integration/webdriver JS TEST changes
        , webDriverTests: {
            files: [ 'spec/webdriver/**/*.js' ]
            , tasks: ['jshint', 'webdriver' ]
        }
    }

The deal here is each file can only be in ONE stanza - if a file is represented in more than one grunt-watch stanza only the last one will 'win'. So we slice and dice our files to ensure they only show up in one place. The game, then, is to determine which grunt tasks should be run when a given file changes.

Starting with 'app.js' - if it changes, run the 'jshint' and 'webdriver' tasks, because those are the only two tasks that app.js could affect.

Similarly for editing a server-side JS file: if one of those changes, run 'jshint', 'jasmine_node_coverage', and 'webdriver', because all of those targets are potentially affected by a change to one of those files.

If any server-side unit test file changes we only run 'jshint' and 'jasmine_node_coverage' - we do NOT need to run any webdriver tests because changing a server-side unit test file cannot affect those.

The same logic applies to changed client-side JavaScript files vs. client-side test files.

Finally, if any webdriver tests change then we run the 'jshint' and 'webdriver' tasks, because those are the only tasks potentially affected by a change to any of those files.

So in a terminal kick it all off by executing:

% grunt watch

Now in another terminal edit away and the 'grunt watch' terminal will run jshint and tests after you save changed files.


Karma

Karma functions similarly to 'grunt watch' - we tell it which files to watch, and if there is a change it kicks off some tests. Karma however ONLY runs our client-side jasmine unit tests. What is snazzy about it is support for multiple simultaneous browsers and built-in coverage generation.

All of its configuration is in karma.conf.js (yes this is a JavaScript file). Since it is going to run jasmine it needs the same information as our 'grunt jasmine' task: namely where all of our client-side JavaScript files are and what extra JavaScript to load into the browser to run our tests. That configuration is here:

// list of files / patterns to load in the browser
files = [
    JASMINE,
    JASMINE_ADAPTER,
    'public/javascripts/*.js',
    'spec/client/**/*.js'
];

'JASMINE' and 'JASMINE_ADAPTER' are karma built-ins for running jasmine tests - convenient! This section adds in coverage support:

preprocessors = {
    'public/javascripts/*.js': 'coverage'
};

This tells Karma to generate coverage information for all files in 'public/javascripts/*.js' - where all of our client-side JavaScript resides.

coverageReporter = {
    type : 'lcov',
    dir : 'public/coverage/client'
};

Here is where we tell Karma where to put the coverage output. When the tests are all done running Karma will generate a new code coverage report.

browsers = ['ChromeCanary', 'PhantomJS', 'Safari', 'Firefox'];

Here is the array of browsers we want Karma to run - every time one of our watched files changes Karma will execute all of the unit tests in each of those browsers. When Karma starts up it spawns off each of those browsers, and they sit around and wait to run tests.


singleRun = false;

Tells Karma to keep running in the background; set it to true to run the tests once and exit, which is handy in a CI/QA environment.

So to start the whole thing up:

% node_modules/.bin/karma start

Karma will spawn off all the browsers you requested; you can now minimize all of those windows, as Karma will print all relevant output to the terminal window. As you edit client-side JavaScript or the tests, Karma will automatically re-run all the tests.


Travis CI

Travis-CI executes the 'script' property from our .travis.yml file each time we push to github. If that property is not present (and it is not in our config), it runs 'npm test' by default for NodeJS jobs (which for us just turns around and runs 'grunt test'). Our travis-ci configuration lives in the .travis.yml file. Here it is:

language: node_js
node_js:
  - "0.8"
  - "0.10"
services: redis-server
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - "java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar &"
  - "sleep 10"

This just says our application is a NodeJS application and we want to test it against versions 0.8 and 0.10 of node. You need to link your github account with travis-ci so it can watch your github commits and act on them accordingly.

Setup information for Travis CI is here. Basically you just need to create a Travis CI account and activate the GitHub service hook. Then visit your Travis CI profile and enable Travis CI for your repository. With our .travis.yml file in place, the next commit to our github repo will trigger Travis CI.

Travis CI conveniently has both redis and phantomjs (and firefox!) preinstalled for webdriver tests. Redis however is NOT automatically started so we need to tell Travis to start redis-server for us. We also have to start the selenium server and then sleep for 10 seconds to ensure it is up and ready to go. In case we want to use 'firefox' for our Selenium tests we fire up the virtual X framebuffer and set the DISPLAY environment variable accordingly.

You may notice that our travis tests will generate coverage information and total it all up and run plato - none of which we actually use as the result of the travis tests. Oh well.

If all goes well Travis will blow through all of our tests successfully. On a Travis result state change - like from success to failure or vice versa - you will get an email with a link to the log so you can debug if one of your tests has failed.

You can see baseapp's Travis CI dashboard here. Yes it took several tries to get it right!! But lucky for you all of the 'hard' work has been done for you.

Finally we add the 'travis build status' badge to the top of our file to show all comers we use travis-ci to test our project and show build status:

[![build status](](

Don't freak out at the markdown syntax, we are just enclosing an image (baseapp.png) with a link to the baseapp travis-ci page.

Monday, July 1, 2013

baseapp: Authentication

baseapp provides all the boilerplate to get your JavaScript web application started off right. This is Part 6.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp provides robust registration/authentication services out of the box! Built on Connect sessions and redis, all of the 'hard' work has already been done for you.

Before any other URL handler use:

user.setup(app); // this has to go first!

Which came from:

user = require('./routes/user')

If a user is logged in this middleware will load up the user object and put it in the session:

app.all('*', function(req, res, next) {
    exports.isLoggedIn(app.get('redis'), req.session, function(uid) {
        if (uid) {
            exports.loadUser(app.get('redis'), uid, function(userObj) {
                req.session.user = userObj;
                next();
            });
        } else {
            next();
        }
    });
});

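The 'isLoggedIn' helper itself isn't shown in this post; here is a hypothetical sketch of its shape, assuming the session id maps to a uid in redis (the 'sid:<id>:uid' key scheme is made up for illustration - baseapp's real version lives in routes/user.js):

```javascript
// Hypothetical sketch: resolve a session to a uid, or null if nobody
// is logged in. The redis key scheme here is illustrative only.
var isLoggedIn = function(redis, session, cb) {
    // no session id at all means nobody is logged in
    if (!session || !session.sid) { return cb(null); }
    // otherwise see which uid (if any) this session id is bound to
    redis.get('sid:' + session.sid + ':uid', function(err, uid) {
        cb(err ? null : uid);
    });
};
```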
Where 'loadUser' just returns an object with whatever you want to put in there:

exports.loadUser = function(redis, uid, cb) {
    redis.get('uid:' + uid + ':username', function(err, username) {
        cb({ username: username });
    });
};

In this case it only contains the username. As your app adds information to the 'user' in the redis DB you can stick that data on as well.

Any request to '/user/logout' will log the user out (imagine that!) and destroy the session:

app.all('/user/logout', function(req, res, next) {
    exports.userLogout(app.get('redis'), req.session, function() {
        // session destroyed - respond to the client here
    });
});

You can read the details about how a user is authenticated, registered, and logged out here so I DRM (Don't Repeat Myself).

Separation Of Concerns

Most interesting to note is that I try to keep HTTP and user actions separate. Specifically, HTTP routing requests to the various URLs turn around and call more generic user methods ('userLogout', 'isLoggedIn', 'registerUser', 'loginUser', &c). I could have stuck the bodies of those functions directly into the Express middleware routing, but that would have made testing those functions significantly more difficult. Also, what if we needed another way to deal with users from another protocol - like from a CLI? Those functions are also separated from HTTP-specific details like how the username and password are retrieved from an HTTP request (req.body.username & req.body.password). Our user authentication code needs to have no idea how/where those values came from - it should not be tied to HTTP and form handling.

It also works in reverse, I do not want my HTTP-specific code to know any details about the user authentication stuff.

Also note I do pass around a session object, but that is NOT specific to HTTP - it is a simple object required for authentication from any protocol. How that session object is -stored- is specific to HTTP (in req.session) but the object itself is generic.

In fact, in a perfect world the user authentication code would be in a completely separate module; I leave that as an exercise for the reader.

Also note the user functions expect a 'redis' client as an argument. This also allows for easier testing: I can pass these functions a mocked-out redis implementation or a connection to a test database, giving me full control when testing these functions.

To The Template

How does 'req.session.user' get funneled to the template? Glad you asked because it happens in two places! First look in routes/index.js:

res.render('index', { user: req.session.user });

When a user initially requests our app the server renders the index.dust template, passing in the req.session.user object.

Once the user has already loaded our page and registers or tries to log in, AJAX is used to pass this object around. You'll see 'userLogin' does two things on successful login:

req.session.user = userObj;

It sets the user object into the session AND returns a JSON-ified version of it to the client. Peeking into login.js the response is handled here:

if (data.username) {
    // refresh page
    dust.render('index', { user: data }, function(err, out){
        var newDoc ="text/html", "replace");
        newDoc.write(out);
        newDoc.close();
    });
}

If 'data.username' is set the login was successful, and now the client renders the index.dust template with the 'user' property set from the response (which was de-JSON-ified earlier). Here the entire page is replaced with the output from the filled-out template - typically you would replace just a piece of your page!

A successful logout from logout.js does the opposite:

dust.render('index', {}, function(err, out){
    var newDoc ="text/html", "replace");
    newDoc.write(out);
    newDoc.close();
});

In this case there is no 'user' property passed to the template so you get the 'login' and 'register' buttons back.