
Monday, July 1, 2013

baseapp: Authentication

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 6.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp provides robust registration/authentication services out of the box! Built on Connect sessions and redis, all of the 'hard' work has already been done for you.

Before any other URL handler use:

user.setup(app); // this has to go first!

Which comes from:

user = require('./routes/user');

If a user is logged in this middleware will load up the user object and put it in the session:

app.all('*', function(req, res, next) {
    exports.isLoggedIn(app.get('redis'), req.session, function(uid) {
        if (uid) {
            exports.loadUser(app.get('redis'), uid, function(userObj) {
                req.session.user = userObj;
                next();
            });
        } else {
            next();
        }
    });
});

Where 'loadUser' just returns an object with whatever you want to put in there:

exports.loadUser = function(redis, uid, cb) {
    redis.get('uid:' + uid + ':username', function(err, username) {
        cb({ username: username });
    });
};

In this case it only contains the username. As your app adds information to the 'user' in the redis DB you can stick that data on as well.

Any request to '/user/logout' will log the user out (imagine that!) and destroy the session:

app.all('/user/logout', function(req, res, next) {
    exports.userLogout(app.get('redis'), req.session, function() {
        req.session.destroy();
        res.end('OK');
    });
});

You can read the details about how a user is authenticated, registered, and logged out here so I DRM (Don't Repeat Myself).

Separation Of Concerns

Most interesting to note is that I try to keep HTTP and user actions separate. Specifically, the HTTP routes for the various URLs turn around and call more generic user methods ('userLogout', 'isLoggedIn', 'registerUser', 'loginUser', &c). I could have stuck the bodies of those functions directly into the Express middleware routing, but that would have made testing those functions significantly more difficult. Also, what if we needed another way to deal with users from another protocol - like from a CLI? Those functions are also separated from HTTP-specific details like how the username and password are retrieved from an HTTP request (req.body.username & req.body.password). Our user authentication code should have no idea how or where those values came from - it should not be tied to HTTP and form handling.

It also works in reverse: I do not want my HTTP-specific code to know any details about the user authentication internals.

Also note I do pass around a session object, but that is NOT specific to HTTP; it is a simple object required for authentication from any protocol. How that session object is -stored- is specific to HTTP (in req.session) but the object itself is generic.

In fact, in a perfect world the user authentication code would live in a completely separate module; I leave that as an exercise for the reader.

Also note the user functions expect a 'redis' client as an argument. This also allows for easier testing: I can pass these functions a mocked-out redis implementation or a connection to a test database, giving me full control when testing these functions.
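For example, a unit test can hand 'loadUser' a hand-rolled redis mock - a sketch (the mock and its canned data are hypothetical):

```javascript
// Sketch: unit-testing loadUser with a hand-rolled redis mock -
// no real database needed. The canned uid/username mapping is
// invented for the example.
var mockRedis = {
    get: function(key, cb) {
        // pretend uid 42 maps to the username 'testdummy'
        cb(null, key === 'uid:42:username' ? 'testdummy' : null);
    }
};

// loadUser as shown above
var loadUser = function(redis, uid, cb) {
    redis.get('uid:' + uid + ':username', function(err, username) {
        cb({ username: username });
    });
};

loadUser(mockRedis, 42, function(user) {
    console.log(user.username); // 'testdummy'
});
```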

To The Template

How does 'req.session.user' get funneled to the template? Glad you asked because it happens in two places! First look in routes/index.js:

res.render('index', { user: req.session.user });

When a user initially requests our app the server renders the index.dust template, passing in the req.session.user object.

Once the user has already loaded our page and registers or tries to log in, AJAX is used to pass this object around. You'll see 'userLogin' does two things on successful login:

req.session.user = userObj;
res.json(JSON.stringify(userObj));

It sets the user object into the session AND returns a JSON-ified version of it to the client. Peeking into login.js the response is handled here:

if (data.username) {
    // refresh page
    dust.render('index', { user: data }, function(err, out){
        var newDoc = document.open("text/html", "replace");
        newDoc.write(out);
        newDoc.close();
    });
}

If 'data.username' is set the login was successful, and the client now renders the index.dust template with the 'user' property set to the response (which was de-JSON-ified earlier). Here the entire page is replaced with the output from the filled-out template - typically you would replace just a piece of your page!

A successful logout from logout.js does the opposite:

dust.render('index', {}, function(err, out){
    var newDoc = document.open("text/html", "replace");
    newDoc.write(out);
    newDoc.close();
});

In this case there is no 'user' property passed to the template so you get the 'login' and 'register' buttons back.

Thursday, June 27, 2013

baseapp: Integration Tests Using WebDriver

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 4.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)

baseapp handles integration tests using jasmine-node running the camme/webdriverjs npm module for Selenium and istanbul for code coverage. The grunt task to run the tests is 'grunt webdriver', or 'grunt webdriver_coverage' to run with code coverage. The only reason to run without code coverage is to debug the tests themselves, so don't get used to it.

The deal is this, you write jasmine-node style tests like your server-side unit tests but use the webdriverjs module to bring up a browser and interact with it. The grunt configuration options for the webdriver tests are within the 'webd' object:

    webd: {
        options: {
            tests: 'spec/webdriver/*.js'
            , junitDir: './build/reports/webdriver/'
            , coverDir: 'public/coverage/webdriver'
        }
    }

Nothing surprising here, simply a pointer to all of your webdriver tests and where you want the JUnit XML and coverage output to go. The grunt webdriver tasks will collect coverage output for BOTH the server-side AND client-side code that is touched during the tests. At the end of the entire test run all of that coverage information is aggregated together (from all webdriver tests) into a total, and that total is what is put into the 'options.coverDir' directory. Note that to aggregate coverage information from the client-side unit tests, server-side unit tests, and the webdriver tests, use the 'grunt total_coverage' task.

WebDriver

Webdriver itself is an interesting beast, and an ugly one. The bulk of your tests should be unit tests. At most 20% of your total tests should be webdriver tests. Webdriver is fickle and you will get false negatives for a variety of reasons, some you can control and others you cannot. Prepare to be frustrated. Unfortunately it is the only tool we have to test 'real' code in 'real' browsers so we suck it up and deal with it. Running tests through phantomjs helps while developing but at some point you must run through 'real' browsers on 'real' hosts, so let's try to make this as painless as possible.

To use WebDriver you must start the Selenium daemon on the host that will run the browser(s). In a real testbed you will run the Selenium grid and farm out Selenium tests to hosts of various OSes with various browser versions, but for development we will run the selenium server locally and have it fire off browsers on our development machine. If you are ssh'ed into your dev box, don't despair! You can easily use phantomjs to run your webdriver tests with no display needed, as we shall soon see...

So first, start the Selenium server and just let it run in the background forever:

% java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar &

Now you are ready to run webdriver tests!

The Tests

WebDriver tests are just jasmine-node tests that use webdriverjs to drive a browser over the WebDriver protocol, verify the DOM is in a state you expect, and 'assert' for other general-purpose assertions. So check out loginSpec.js and I'll walk through it, since there is a lot going on. But once you get the hang of it, it's simple.

First we grab a handle to the redis database - the ONLY thing we use it for is to flush the DB after each test so we start with a clean slate. You can see that in the 'afterEach' function (also note we select DB '15' to not touch our production DB).

I will skip the 'saveCoverage' require statement for now, it is not important yet; moving on to the 'login tests' suite. For webdriver we need jasmine NOT to time out, as these tests can take an unknown amount of time (otherwise jasmine times out after 5 seconds).

To set up each test our 'beforeEach' reconnects to the Selenium process and requests a browser provided by the BROWSER environment variable (or 'phantomjs' if that is not set). Grunt will set this for us from a command line option:

% grunt webdriver --browser=firefox  # or any grunt task that will eventually call 'webdriver' like 'test'

Replace 'firefox' with 'safari' or 'chrome' or 'iexplore' for other browsers. Now give Selenium the URL to connect to, from values placed in the environment by grunt. Finally we pause for half a second to give the browser time to load our page. Here begin the Selenium issues: having to wait unknown amounts of time to ensure the browser has loaded your page completely.

Our 'afterEach' function flushes our test database and gets code coverage for the test if the user requested coverage. Since the browser is refreshed after each test we need to grab coverage information after each test; at the end of all tests this incremental coverage information is aggregated to give total coverage. You do not need to worry about the internals of the 'saveCoverage' function, there be dragons. I will discuss it in detail at the end of this post for the curious.

Each test now just manipulates and queries the DOM for expected values. Each webdriver method accepts an optional second function parameter whose signature is 'function(err, val)' - within this function you can assert values and ensure 'err' is null (if you are not expecting an error, that is!). Regardless, you chain all of your actions together; each method is actually asynchronous but webdriverjs handles all of that for us behind the scenes. Finally, at the end of your test, '.call(done)' so webdriverjs knows this test is finished.

Let's follow the flow of one of the tests - the biggest meatiest one: 'register & login user'

it("register & login user", function(done) {

The first thing to notice is the 'done' function passed into the test - we call this when the test is finished. At this point we have navigated to our page & should be ready to go...

    client
        .pause(1)

Ahh another bit of WebDriver/Selenium weirdness - need this pause here for the click buttons to work!

        .click('button[data-target="#registerForm"]')

Click the 'Register' button...

        .pause(500)

... and wait for the modal to come up...

        .isVisible('#registerForm', function(err, val) {
            assert(val);
        })

... and verify it is visible

Now set some form values...

        .setValue("#emailregister", 'testdummy')
        .setValue("#passwordregister", 'testdummy')
        .click('#registerSubmit')
        .pause(500)

... and submit the form & wait ...

        .isVisible('#registerForm', function(err, val) {
            // register form no longer visible
            assert(!val);
        })

... and now the register modal is gone (hopefully!)

        .click('button[data-target="#loginForm"]')

... now click the 'Login' button ...

        .pause(500)

... and wait ...

        .isVisible('#loginForm', function(err, val) {
            assert(val);

... and hopefully the login modal is now showing

        })

so set values on its form...

        .setValue("#emaillogin", 'testdummy')
        .setValue("#passwordlogin", 'testdummy')
        .click('#loginSubmit')
        .pause(200)

and submit it and wait...

        .isVisible('#loginForm', function(err, val) {
            // login form gone now
            assert(err);
        })

... modal should be gone now and hopefully the 'Logout' button is visible

        .isVisible('#logout', function(err, val) {
            // logout button now visible
            assert(val);
        })

... and the 'Login' button is gone (since we have just logged in)

        .isVisible('button[data-target="#loginForm"]', function(err, val) {
            // login button gone
            assert(err);
        })

... and the 'Register' button is gone

        .isVisible('button[data-target="#registerForm"]', function(err, val) {
            // register button gone
            assert(err);
        })

... Now click the 'Logout' button and wait

        .click('#logout')
        .pause(500)

... and hopefully the 'Logout' button is now gone

        .isVisible('#logout', function(err, val) {
            // logout button gone now
            assert(err);
        })

... and the 'Login' and 'Register' buttons are back...

        .isVisible('button[data-target="#loginForm"]', function(err, val) {
            // login button back
            assert(val);
        })
        .isVisible('button[data-target="#registerForm"]', function(err, val) {
            // register button back
            assert(val);
        })

... and tell webdriverjs this test is done

        .call(done);
});

OK simple enough to follow that flow. All of the tests are executed and we are done.

Running The Server

To run these tests we need the Express server up and running. The grunt task 'express' handles this - here is the config from the Gruntfile:

    express: {
        server: {
            options: {
                server: path.resolve('./app.js')
                , debug: true
                , port: 3000
                , host: '127.0.0.1'
                , bases: 'public'
            }
        }
    }

The grunt-express plugin provides this functionality. We tell it where our app.js is, our static directory ('public') and a host/port number (which are placed in the environment by the grunt environment plugin using grunt templates).

Executing the '% grunt express' task will fire up our Express server - but note it ONLY lives for the duration of the grunt process itself - so once grunt quits our server does too (look at the 'express keepalive' task to have it run forever or even better just use 'npm start').

Now take a quick look at the bottom of app.js to see how this works:

// run from command line or loaded as a module (for testing)
if (require.main === module) {
    var server = http.createServer(app);
    server.listen(app.get('port'), app.get('host'), function() {
        console.log('Express server listening on http://' + app.get('host') + ':' + app.get('port'));
    });
} else {
    exports = module.exports = app;
}

This 'trick' checks whether app.js was run from the command line (which we do when we run 'npm start') or loaded up as a module (which the 'grunt express' task does). If loaded as a module we export our Express app so grunt can work with it; if executed from the command line we start up the server ourselves.

Grunt Tasks

Look at our 'grunt webdriver' and 'grunt webdriver_coverage' tasks:

// webdriver tests with coverage
grunt.registerTask('webdriver_coverage', [
    'env:test'  // use test db
    , 'env:coverage' // server serves coverage'd JS files
    , 'express'
    , 'webd:coverage'
]);

// webdriver tests without coverage
grunt.registerTask('webdriver', [
    'env:test'  // use test db
    , 'express'
    , 'webd'
]);

The only difference is how the environment is set up and how the base 'webd' task is executed. The environment is set up using grunt's env plugin - let's take a quick look:

    , env: {
        options : {
            //Shared Options Hash
        }
        , test: {
            NODE_ENV : 'test'
            , HOST: '<%= express.server.options.host %>'
            , PORT: '<%= express.server.options.port %>'
            , BROWSER: '<%= webd.options.browser %>'
        }
        , coverage: {
            COVERAGE: true
        }
    }

Both set the HOST and PORT environment variables which the webdriver tests use here:

client.init()
    .url("http://" + process.env.HOST + ':' + process.env.PORT)
    .pause(500, done);

and here to pass to getCoverage:

if (process.env.COVERAGE) {
    saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);
}

NOTE when our Express server is started by grunt these lines are NOT USED:

app.set('port', process.env.PORT || 3000);
app.set('host', process.env.HOST || '0.0.0.0');

Those are ONLY used when our Express server is started from the command line (via 'npm start'). See the 'Running The Server' section above. The 'express' grunt task will use the 'port' and 'host' configuration properties directly.

The 'coverage' target also sets COVERAGE to true, which both our Express server and webdriver tests check. Finally, the BROWSER environment variable is set here to be picked up by our webdriver tests when initializing the webdriverjs client in the 'beforeEach' method:

client = webdriverjs.remote({ desiredCapabilities: { browserName: process.env.BROWSER }});

Running Our Webserver

Here is how app.js reacts to these environment variables (note these variables exist only in the grunt process environment - as soon as the grunt process ends it takes its environment away with it):

isCoverageEnabled = (process.env.COVERAGE == "true")

First app.js determines if coverage is requested or not...

if (isCoverageEnabled) {
    im.hookLoader(__dirname);
}

If so app.js installs istanbul's hook loader which instruments all subsequent 'require'd modules for code coverage. That is why our server-side modules are required AFTER this statement, so if coverage IS requested those modules will be properly instrumented:

// these need to come AFTER the coverage hook loader to get coverage info for them
var routes = require('./routes')
    , user = require('./routes/user')
;

Now those two modules will have coverage information associated with them - sweet.

The next bit of magic is this:

if (isCoverageEnabled) {
    app.use(im.createClientHandler(path.join(__dirname, 'public'), { 
        matcher: function(req) { return req.url.match(/javascripts/); }
    }));
    app.use('/coverage', im.createHandler());
}

This does two things:

  1. Tells istanbul to check all files requested from the 'public' directory against the provided 'matcher' function. If that matcher function returns 'true' then those files will be dynamically instrumented before being sent back to the requesting client (the browser). In our case any requested file in the 'javascripts' directory will be dynamically instrumented with coverage information.
  2. Any request to '/coverage' will be handed off to istanbul - we will see this in use later while the webdriver tests are running. URLs under '/coverage' speak directly to istanbul which accepts and provides coverage output as we shall see.

Finally this:

if ('test' == app.get('env')) {
  app.use(express.errorHandler());
  redis.select(15);
}

This checks the value of the NODE_ENV environment variable which our Gruntfile set to 'test' - so this matches and therefore the 'test' database ('15') is selected for use so as not to interfere with any production data.

And with that our Express server is off and running, with potentially instrumented server-side modules, ready to potentially dynamically instrument our client-side JavaScript. Plus istanbul is handling all requests under the '/coverage' URL and we are using our test database - whew!

If we are running our webdriver tests without code coverage we are all ready to go, the server is running connected to a test database, our regular JavaScript files are served unchanged and our tests run and JUnit XML output is generated. Things are more interesting when code coverage information is requested.

Dealing With Code Coverage

So we have seen the server setup for running webdriver tests with code coverage - what about the client side? Yes, client-side JavaScript is being dynamically instrumented by istanbul, but remember after each test the browser is completely refreshed, so we must grab and persist all coverage information after each test. This is where this magic comes to the fore:

saveCoverage = require('./GetCoverage')

Remember our 'afterEach' method:

afterEach(function(done) {
    if (process.env.COVERAGE) {
        saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);
    }

    // just blows out all of the session & user data
    redis.flushdb();

    client.end(done);
});

If we requested coverage we utilize this handy-dandy module to save off all current coverage information. The details of how it is done are not important, but since I know how curious you are, it works like this - you can follow along in GetCoverage.js:

  1. Execute some JavaScript in the browser via WebDriver to get the CLIENT-SIDE coverage info (which is stored in the global '__coverage__' variable)
  2. POST the stringified JSON of the '__coverage__' variable to istanbul at '/coverage/client' (remember istanbul is in charge of every URL under '/coverage'). This causes istanbul to aggregate the POST'ed client-side coverage with the server-side coverage.
  3. GET the entire coverage info from '/coverage/object' - this is the aggregated server + client-side coverage information.
  4. PERSIST (as in 'save to a file') the aggregated coverage information

Those 4 steps are done after every test.
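For the curious, the aggregation boils down to summing istanbul's per-file execution counters. A simplified sketch - real istanbul coverage objects also carry 'b' (branch) and 'f' (function) counter maps, handled the same way:

```javascript
// Simplified sketch of merging two istanbul coverage objects by
// summing per-file statement counters ('s'). Not GetCoverage.js
// itself - just the idea behind the aggregation step.
function mergeCoverage(total, incoming) {
    Object.keys(incoming).forEach(function(file) {
        if (!total[file]) {
            // first time we've seen this file - take it as-is
            total[file] = incoming[file];
            return;
        }
        // otherwise add the new hit counts onto the running totals
        var s = incoming[file].s;
        Object.keys(s).forEach(function(stmt) {
            total[file].s[stmt] = (total[file].s[stmt] || 0) + s[stmt];
        });
    });
    return total;
}
```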

After all tests are finished the 'webd' grunt task aggregates each of those individual coverage objects into one total and generates the final HTML report, which is available at 'public/coverage/webdriver/lcov-report/index.html' - and we are done.

SIMPLE! The point is this is all done for you and you do not need to know the gory details; just write webdriver tests following the ones I wrote and you are fine.

Monday, June 24, 2013

Intro To baseapp: Making JavaScript Best Practices Easy

baseapp provides all the boilerplate to get your JavaScript web application started off right; this is Part 1.

  1. Intro to baseapp
  2. Client-Side Unit Tests
  3. Server-Side Unit Tests
  4. WebDriver Integration Tests
  5. Other Grunt Goodies
  6. Authentication
  7. Continuous Integration
  8. Administration
(or binge read them all on the baseapp wiki!)
Getting started is always the hardest part.  Whether a web app, testing, or a blog post, staring at a blank sheet of metaphorical paper is tough.  You have a great idea of what it should do, what you want, but how to start?  You just want to write code that brings your idea to life, but how to get to that part?  You need a web server, you need a database, you need authentication, you want it to look nice, you want to use best practices, you want to use the latest technology, and you want it to be fun.  That is why you are doing all of this at some level, it has got to be fun!  That means no tedium, no slogging through boilerplate, how can you skip all of that and just jump straight to the fun part?  Writing code that will change the world, or at least a corner of it, how can you get to that part as quickly as possible?  There is so much cruft to fight through to get there, wouldn't it be nice if all of that stuff could just 'blow away...'?

My friends I have been there, I have had lots of great ideas, start hacking, and then got quickly crushed under the weight of boilerplate and 'best practices'.  I want to 'test first'.  I want to have a solid foundation for my web app.  I want to do things the 'right way'.  But there always seems to be a horrible choice at the beginning of a new project:  either start trying to do things the 'right way' by setting up testing, automation, and foundation before any of the 'exciting' and 'interesting' work (because you cannot bake that stuff into your code later), OR dive right in and start coding the 'good' stuff and be left with a mess soon thereafter.  What a crappy set of choices!  I think we all agree we'd LIKE to start our new project off 'right' with all the test and automation infrastructure built in from the beginning, but doing that saps all of the joy out of coding, out of turning our great idea into an awesome product that the world loves.  And what's the fun in that?

So I give to you, stymied developer, the boilerplate.  I tend to use the same technology in most of my web apps: Express and Redis on the server-side, Bootstrap and LinkedIn-DustJS on the client side.  I use Jasmine and WebDriver for testing and Istanbul for code coverage.  Grunt and Travis-CI for automation and continuous integration.  I run everything through JSHint. I like Plato's static code analysis.  And most of my apps need user registration and login/logout.  And I am sick of having to rewrite all of those pieces from scratch for each web app I dream up!  So, as of today, no more.  I have created 'baseapp', a bare application with all of that thrown in and ready to go.  The idea is you (and me!) fork this repo for every new web app you write so you can jump straight into the good parts and all the boilerplate test/automation/code coverage crap is already taken care of for us.  I'm not just talking the packages are downloaded for you.  I'm talking test code has already been written for all the base stuff.  Most people, me included, just really want to see how something is done and then we can go off and do it ourselves, 'see one, do one, teach one' kinda thing.  So I have pre-populated baseapp with client-side unit tests, server-side unit tests, and webdriver integration tests.  No need to try to re-figure out how to piece all of these things together yourself.  Just look at what I did and make more of them just like that.

Here are the technologies baseapp leverages to the hilt:

* github
* grunt
* jshint
* dustjs-linkedin
* express
* jasmine
* jasmine-node
* webdriver
* redis
* authentication
* istanbul
* plato
* karma
* phantomjs
* bootstrap
* travis-ci

This is all set up and baked into baseapp; the packages aren't just installed, they are all actually used and set up and ready to go.  If this stack is up your alley, starting your web app with 'best practices' and 'test first' could not be easier, as all of the setup/boilerplate is already done.

Here is how to start your next web app:

1. Fork the baseapp repo
    - so you have your own copy of it - you will be hacking away adding your special sauce!
2. Clone into your local dev environment
3. Update 'package.json' with the name of your web app
    - This is your biggest decision - what to name your app!
4. % npm install
    - This will suck down all packages necessary for your awesome new dev environment
5. Install & start redis from redis.io
    - But you've already got it installed & running right?
6. To run WebDriver/Selenium tests you need to start the Selenium jar
    - % java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar
7. % grunt test

Hopefully you will see tests pass!  If not you will get an informative error message as to why.

To see the app running yourself simply:

% npm start

And then in your browser go to:

http://127.0.0.1:3000

And you will see your beautiful application so far - buttons to login and register - that actually work - and that's it.

This article is the beginning of a series explaining exactly what is going on in 'baseapp' - I will explain everything that has already been set up for you and how to leverage it as you develop the world's next great webapp - stay tuned and have fun!