baseapp provides all the boilerplate to get your JavaScript web application started off right. This is Part 4.
- Intro to baseapp
- Client-Side Unit Tests
- Server-Side Unit Tests
- WebDriver Integration Tests
- Other Grunt Goodies
- Authentication
- Continuous Integration
- Administration
(or binge read them all on the baseapp wiki!)
baseapp handles integration tests using jasmine-node running the camme/webdriverjs npm module for Selenium and istanbul for code coverage. The grunt task to run the tests is 'grunt webdriver', or 'grunt webdriver_coverage' to run with code coverage. The only reason to run without code coverage is to debug the tests themselves, so don't get used to it.
The deal is this: you write jasmine-node style tests just like your server-side unit tests, but use the webdriverjs module to bring up a browser and interact with it. The grunt configuration options for the webdriver tests are within the 'webd' object:
webd: {
options: {
tests: 'spec/webdriver/*.js'
, junitDir: './build/reports/webdriver/'
, coverDir: 'public/coverage/webdriver'
}
}
Nothing surprising here, simply a pointer to all of your webdriver tests and where you want the JUnit XML and coverage output to go. The grunt webdriver tasks will collect coverage output for BOTH the server-side AND client-side code that is touched during the tests. At the end of the entire test run all of that coverage information (from all webdriver tests) is aggregated into a total, and that total is what is put into the 'options.coverDir' directory. Note that to aggregate coverage information from the client-side unit tests, server-side unit tests, AND the webdriver tests, use the 'grunt total_coverage' task.
WebDriver
Webdriver itself is an interesting beast, and an ugly one. The bulk of your tests should be unit tests. At most 20% of your total tests should be webdriver tests. Webdriver is fickle and you will get false negatives for a variety of reasons, some you can control and others you cannot. Prepare to be frustrated. Unfortunately it is the only tool we have to test 'real' code in 'real' browsers so we suck it up and deal with it. Running tests through phantomjs helps while developing but at some point you must run through 'real' browsers on 'real' hosts, so let's try to make this as painless as possible.
To use WebDriver you must start the Selenium daemon on the host that will run the browser(s). In a real testbed you will run the Selenium grid and farm out Selenium tests to hosts of various OSes with various browser versions, but for development we will run the Selenium server locally and have it fire off browsers on our development machine. If you are ssh'ed into your dev box don't despair! You can easily use phantomjs to run your webdriver tests with no display needed, as we shall soon see...
So first, start the Selenium server and just let it run in the background forever:
% java -jar ./node_modules/webdriverjs/bin/selenium-server-standalone-2.31.0.jar &
Now you are ready to run webdriver tests!
The Tests
WebDriver tests are just jasmine-node tests that use webdriverjs to drive a browser via the WebDriver protocol, verify the DOM is in a state you expect, and use 'assert' for other general-purpose assertions. So check out loginSpec.js and I'll walk through it, since there is a lot going on. But once you get the hang of it, it's simple.
First we grab a handle to the redis database - the ONLY thing we use it for is to flush the DB after each test so we start with a clean slate. You can see that in the 'afterEach' function (also note we select DB '15' to not touch our production DB).
I will skip the 'saveCoverage' require statement for now - it is not important yet - and move on to the 'login tests' suite. For webdriver we need jasmine to NOT time out, as these tests can take an unknown amount of time (otherwise jasmine times out after 5 seconds).
To set up each test our 'beforeEach' reconnects to the Selenium process and requests a browser provided by the BROWSER environment variable (or 'phantomjs' if that is not set). Grunt will set this for us from a command line option:
% grunt webdriver --browser=firefox # or any grunt task that will eventually call 'webdriver' like 'test'
Replace it with 'safari' or 'chrome' or 'iexplore' for other browsers. Now give Selenium the URL to connect to, from values placed in the environment by grunt. Finally we pause for half a second to give the browser time to load our page. Here begin the Selenium issues: having to wait unknown amounts of time to ensure the browser has loaded your page completely.
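The browser selection the spec uses boils down to a one-liner (a sketch of what the 'beforeEach' reads, not a verbatim excerpt from loginSpec.js):

```javascript
// Use the BROWSER environment variable that grunt exported,
// falling back to 'phantomjs' when it is not set.
var browserName = process.env.BROWSER || 'phantomjs';
```

This is why running plain '% grunt webdriver' with no --browser flag quietly uses phantomjs.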
Our 'afterEach' function flushes our test database and grabs code coverage for the test if the user requested coverage. Since the browser is refreshed after each test we need to grab coverage information after each test; at the end of all tests this incremental coverage information is aggregated to give total coverage. You do not need to worry about the internals of the 'saveCoverage' function - there be dragons. I will discuss it in detail at the end of this post for the curious.
Each test now just manipulates and queries the DOM for expected values. Each webdriver method accepts an optional second function parameter whose signature is 'function(err, val)' - within this function you can assert values and ensure 'err' is null (if you are not expecting an error, that is!). Regardless, you chain all of your actions together; each method is actually asynchronous but webdriverjs handles all of that for us behind the scenes. Finally at the end of your test '.call(done)' so webdriverjs knows this test is finished.
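To see why the chain reads sequentially even though every step is asynchronous, here is a tiny stand-in that mimics the queueing pattern. This is an illustration only - 'Chain' and its methods are simplified stand-ins, NOT the actual webdriverjs implementation:

```javascript
// Each method queues an async step and returns `this`, so calls chain
// syntactically while executing one after another via callbacks.
function Chain() {
    this.queue = [];
    this.running = false;
}
Chain.prototype._enqueue = function(step) {
    this.queue.push(step);
    if (!this.running) {
        this.running = true;
        this._next();
    }
    return this; // returning `this` is what makes the chaining work
};
Chain.prototype._next = function() {
    var self = this;
    var step = this.queue.shift();
    if (!step) { this.running = false; return; }
    step(function() { self._next(); }); // run this step, then the next one
};
// pause(ms) queues a timed wait, analogous to webdriverjs's .pause()
Chain.prototype.pause = function(ms) {
    return this._enqueue(function(nextStep) { setTimeout(nextStep, ms); });
};
// call(fn) queues a plain callback - this is how '.call(done)' ends a test
Chain.prototype.call = function(fn) {
    return this._enqueue(function(nextStep) { fn(); nextStep(); });
};
```

The real library queues its DOM commands the same general way, which is why you never need to nest callbacks yourself.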
Let's follow the flow of one of the tests - the biggest meatiest one: 'register & login user'
it("register & login user", function(done) {
The first thing to notice is the 'done' function passed into the test - we call this when the test is finished.
At this point we have navigated to our page & should be ready to go...
client
.pause(1)
Ahh another bit of WebDriver/Selenium weirdness - need this pause here for the click buttons to work!
.click('button[data-target="#registerForm"]')
Click the 'Register' button...
.pause(500)
... and wait for the modal to come up...
.isVisible('#registerForm', function(err, val) {
assert(val);
})
... and verify it is visible
Now set some form values...
.setValue("#emailregister", 'testdummy')
.setValue("#passwordregister", 'testdummy')
.click('#registerSubmit')
.pause(500)
... and submit the form & wait ...
.isVisible('#registerForm', function(err, val) {
// register form no longer visible
assert(!val);
})
... and now the register modal is gone (hopefully!)
.click('button[data-target="#loginForm"]')
... now click the 'Login' button ...
.pause(500)
... and wait ...
.isVisible('#loginForm', function(err, val) {
assert(val);
})
... and hopefully now the login modal is showing
so set values on its form...
.setValue("#emaillogin", 'testdummy')
.setValue("#passwordlogin", 'testdummy')
.click('#loginSubmit')
.pause(200)
and submit it and wait...
.isVisible('#loginForm', function(err, val) {
// login form gone now
assert(err);
})
... modal should be gone now and hopefully the 'Logout' button is visible
.isVisible('#logout', function(err, val) {
// logout button now visible
assert(val);
})
... and the 'Login' button is gone (since we have just logged in)
.isVisible('button[data-target="#loginForm"]', function(err, val) {
// login button gone
assert(err);
})
... and the 'Register' button is gone
.isVisible('button[data-target="#registerForm"]', function(err, val) {
// register button gone
assert(err);
})
... Now click the 'Logout' button and wait
.click('#logout')
.pause(500)
... and hopefully the 'Logout' button is now gone
.isVisible('#logout', function(err, val) {
// logout button gone now
assert(err);
})
... and the 'Login' and 'Register' buttons are back...
.isVisible('button[data-target="#loginForm"]', function(err, val) {
// login button back
assert(val);
})
.isVisible('button[data-target="#registerForm"]', function(err, val) {
// register button back
assert(val);
})
... and tell webdriverjs this test is done
.call(done);
});
OK simple enough to follow that flow. All of the tests are executed and we are done.
Running The Server
To run these tests we need the Express server up and running. The grunt task 'express' handles this - here is the config from the Gruntfile:
express: {
server: {
options: {
server: path.resolve('./app.js')
, debug: true
, port: 3000
, host: '127.0.0.1'
, bases: 'public'
}
}
}
The grunt-express plugin provides this functionality. We tell it where our app.js is, our static directory ('public') and a host/port number (which are placed in the environment by the grunt environment plugin using grunt templates).
Executing the '% grunt express' task will fire up our Express server - but note it ONLY lives for the duration of the grunt process itself - so once grunt quits our server does too (look at the 'express keepalive' task to have it run forever or even better just use 'npm start').
Now take a quick look at the bottom of app.js to see how this works:
// run from command line or loaded as a module (for testing)
if (require.main === module) {
var server = http.createServer(app);
server.listen(app.get('port'), app.get('host'), function() {
console.log('Express server listening on http://' + app.get('host') + ':' + app.get('port'));
});
} else {
exports = module.exports = app;
}
This 'trick' checks whether app.js was run from the command line (which we do when we run 'npm start') or loaded as a module (which the 'grunt express' task does). If loaded as a module we export our Express app so grunt can work with it; if executed from the command line we start up the server ourselves.
Grunt Tasks
Look at our 'grunt webdriver' and 'grunt webdriver_coverage' tasks:
// webdriver tests with coverage
grunt.registerTask('webdriver_coverage', [
'env:test' // use test db
, 'env:coverage' // server sends back coverage'd JS files
, 'express'
, 'webd:coverage'
]);
// webdriver tests without coverage
grunt.registerTask('webdriver', [
'env:test' // use test db
, 'express'
, 'webd'
]);
The only difference is how the environment is set up and how the base 'webd' task is executed. The environment is set up using grunt's env plugin - let's take a quick look:
, env: {
options : {
//Shared Options Hash
}
, test: {
NODE_ENV : 'test'
, HOST: '<%= express.server.options.host %>'
, PORT: '<%= express.server.options.port %>'
, BROWSER: '<%= webd.options.browser %>'
}
, coverage: {
COVERAGE: true
}
}
Both set the HOST and PORT environment variables which the webdriver tests use here:
client.init()
.url("http://" + process.env.HOST + ':' + process.env.PORT)
.pause(500, done);
and here to pass to getCoverage:
if (process.env.COVERAGE) {
saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);
}
NOTE when our Express server is started by grunt these lines are NOT USED:
app.set('port', process.env.PORT || 3000);
app.set('host', process.env.HOST || '0.0.0.0');
Those are ONLY used when our Express server is started on the command line (via 'npm start'). See the 'Running The Server' section above. The 'express' grunt task will use the 'port' and 'host' configuration properties directly.
The 'coverage' rule also sets COVERAGE to true, which both our Express server and our webdriver tests check. The BROWSER environment variable is also set here, to be picked up by our webdriver tests when initializing the webdriverjs client in the 'beforeEach' method:
client = webdriverjs.remote({ desiredCapabilities: { browserName: process.env.BROWSER }});
Running Our Webserver
Here is how app.js reacts to these environment variables (note these variables exist only in the grunt process environment - as soon as the grunt process ends it takes its environment away with it):
isCoverageEnabled = (process.env.COVERAGE == "true")
First app.js determines if coverage is requested or not...
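One subtlety worth noting: environment variables are always strings, so even though the Gruntfile's env config sets COVERAGE to the boolean true, app.js receives the string "true" - hence the string comparison rather than a plain truthiness check:

```javascript
// A boolean true set by grunt's env task arrives in the child process
// environment as the string "true" (this line simulates that hand-off).
process.env.COVERAGE = String(true);
var isCoverageEnabled = (process.env.COVERAGE == "true");
// A bare `if (process.env.COVERAGE)` would also fire for COVERAGE="false",
// since any non-empty string is truthy.
```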
if (isCoverageEnabled) {
im.hookLoader(__dirname);
}
If so app.js installs istanbul's hook loader which instruments all subsequent 'require'd modules for code coverage. That is why our server-side modules are required AFTER this statement, so if coverage IS requested those modules will be properly instrumented:
// these need to come AFTER the coverage hook loader to get coverage info for them
var routes = require('./routes')
, user = require('./routes/user')
;
Now those two modules will have coverage information associated with them - sweet.
The next bit of magic is this:
if (isCoverageEnabled) {
app.use(im.createClientHandler(path.join(__dirname, 'public'), {
matcher: function(req) { return req.url.match(/javascripts/); }
}));
app.use('/coverage', im.createHandler());
}
This does two things:
- Tells istanbul to check all files requested from the 'public' directory against the provided 'matcher' function. If that matcher function returns 'true' then those files will be dynamically instrumented before being sent back to the requesting client (the browser). In our case any requested file in the 'javascripts' directory will be dynamically instrumented with coverage information.
- Any request to '/coverage' will be handed off to istanbul - we will see this in use later while the webdriver tests are running. URLs under '/coverage' speak directly to istanbul which accepts and provides coverage output as we shall see.
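The matcher itself is just a predicate over the incoming request object. Here it is in isolation, with a couple of illustrative URLs (the request objects below are minimal stand-ins, not real Express requests):

```javascript
// The matcher from app.js: instrument only URLs containing 'javascripts'.
// Anything else (CSS, images, files outside that directory) is served
// untouched.
var matcher = function(req) { return req.url.match(/javascripts/); };
```

So a request for '/javascripts/app.js' gets dynamically instrumented, while '/stylesheets/style.css' passes straight through.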
Finally this:
if ('test' == app.get('env')) {
app.use(express.errorHandler());
redis.select(15);
}
This checks the value of the NODE_ENV environment variable which our Gruntfile set to 'test' - so this matches and therefore the 'test' database ('15') is selected for use so as not to interfere with any production data.
And with that our Express server is off and running, with potentially instrumented server-side modules and ready to potentially dynamically instrument our client-side JavaScript. Plus istanbul is handling all requests under the '/coverage' URL and we are using our test database - whew!
If we are running our webdriver tests without code coverage we are all ready to go, the server is running connected to a test database, our regular JavaScript files are served unchanged and our tests run and JUnit XML output is generated. Things are more interesting when code coverage information is requested.
Dealing With Code Coverage
So we have seen the server setup for running webdriver tests with code coverage - what about the client side? Yes, client-side JavaScript is being dynamically instrumented by istanbul, but remember after each test the browser is completely refreshed, so we must grab and persist all coverage information after each test. This is where this magic comes to the fore:
saveCoverage = require('./GetCoverage')
Remember our 'afterEach' method:
afterEach(function(done) {
if (process.env.COVERAGE) {
saveCoverage.GetCoverage(client, process.env.HOST, process.env.PORT);
}
// just blows out all of the session & user data
redis.flushdb();
client.end(done);
});
If we requested coverage we utilize this handy-dandy module to save off all current coverage information. The details of how it is done are not important, but since I know how curious you are, it works like this - you can follow along in GetCoverage.js:
- Execute some JavaScript in the browser via WebDriver to get the CLIENT-SIDE coverage info (which is stored in the global '__coverage__' variable)
- POST that stringified JSON of the '__coverage__' variable to istanbul at '/coverage/client' (remember istanbul is in charge of every URL under '/coverage'). This causes istanbul to aggregate the POST'ed client-side coverage with the server-side coverage.
- GET the entire coverage info from '/coverage/object' - this is the aggregated server + client-side coverage information.
- PERSIST (as in 'save to a file') the aggregated coverage information
Those 4 steps are done after every test.
After all tests are finished the 'webd' grunt task then aggregates each of those individual coverage objects into one total one and generates the final HTML report which is available at: 'public/coverage/webdriver/lcov-report/index.html' and we are done.
SIMPLE - the point is that this is all done for you and you do not need to know the gory details; just write webdriver tests following the ones I wrote and you are fine.