For some reason, you have a Node.js app that’s built, but it has no tests or documentation. Maybe you inherited it. Maybe you were feeling overconfident and didn’t think you needed tests. Maybe you’re now having second thoughts. Thankfully, it’s not that difficult, and certainly worth the effort, to add tests after the fact, even though I think behavior-driven development from the start is a better option.
For a node app that I inherited, I started by building lower-level tests–I hesitate to call them unit tests–focusing on the parts of the code that I felt would be the most difficult for someone else to understand–or the most difficult for me to remember if I left it for some time. Starting this way also helped me to get my head around the code. Along the way, I discovered that certain parts of the system weren’t testable, so I refactored them. I ended up with over 120 tests, better code, and peace of mind.
I started with mocha and should, then added istanbul for code coverage, all run from within grunt. I chose these primarily because I could get them to work and found enough examples to feel comfortable using them. Mocha and should tests felt very natural to write; I’m still not sure what to think about cucumber’s Gherkin syntax for these lower-level tests. Istanbul was extremely useful in figuring out which tests I still needed to write, but note that its coverage metrics only pertain to the modules for which tests are written, not to the entire codebase.
These tests are pretty fast, taking about 10 seconds to run, including app initialization. That isn’t nearly fast enough for true test-driven development, but I’ve somehow inherited two other node apps for which I need to build tests, so this will have to do for now.
I did encounter a few issues. The biggest so far is that a few tests that run fine under mocha explode under istanbul. I tried a few things, but eventually gave in to my inner slacker: I moved these tests-of-death into tests/mocha-only and configured istanbul to run only the tests in the tests directory.
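The split looks roughly like this in the grunt config. Option names follow grunt-mocha-test and grunt-mocha-istanbul; treat the globs as a sketch rather than my exact configuration.

```js
// gruntfile.js (sketch): mochaTest runs everything, including the
// tests-of-death in tests/mocha-only; the istanbul coverage task is
// pointed only at tests/.
mochaTest: {
  test: {
    src: ['tests/*.js', 'tests/mocha-only/*.js']
  }
},
mocha_istanbul: {
  coverage: {
    src: 'tests',        // only tests/ — mocha-only stays out of coverage runs
    options: {
      mask: '*.js'
    }
  }
}
```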
Here are the packages:
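The original package list didn’t survive here, so this devDependencies sketch is reconstructed from the tools named in the post (mocha, should, istanbul, grunt plus its plugins, cucumber, selenium, webdriverio); the exact package names and version specifiers are my best guess, not a copy of my package.json.

```json
{
  "devDependencies": {
    "grunt": "*",
    "grunt-env": "*",
    "grunt-mocha-test": "*",
    "grunt-mocha-istanbul": "*",
    "grunt-cucumberjs": "*",
    "istanbul": "*",
    "mocha": "*",
    "should": "*",
    "cucumber": "*",
    "selenium-standalone": "*",
    "webdriverio": "*"
  }
}
```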
These are configured in gruntfile.js (partial file, below):
```js
// gruntfile.js (partial)

// cucumberjs task options
src: 'app/',      // a folder works nicely
format: 'pretty', // pretty:   prints the feature as is
                  // progress: prints one character per scenario
                  // json:     prints the feature as JSON
                  // summary:  prints a summary only, after all scenarios were executed

// istanbul_check_coverage options
coverageFolder: 'coverage*', // checks both coverage folders and merges the coverage results

// Test tasks
grunt.registerTask('test', ['env:test', 'mochaTest']);
grunt.registerTask('accept', ['env:test', 'cucumberjs']);
grunt.registerTask('check_coverage', ['env:test', 'istanbul_check_coverage']);
```
All of these tests are written against my service modules, which essentially back my mean.js controllers. Once I had good enough low-level test coverage, and wanting an excuse to try building high-level acceptance tests using cucumber, I moved on.
I’m using cucumber.js, selenium server, and webdriverio for my acceptance tests. Again, the primary factor in choosing these particular pieces was that I could get them installed and working, and found enough examples and documentation to feel comfortable going forward. Now, the BDD folks will tell you that the primary benefit of BDD is NOT that the specifications can be executed, but the conversations among team members that arise from following the method. This is a good thing, because at the moment I only have 7 tests working out of an initial 14, and the tests take practically forever to run. I clearly have something wrong, somewhere.
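For readers who haven’t seen Gherkin, an acceptance test starts life as a feature file like the one below; cucumber.js then matches each Given/When/Then line to a step definition, which in my setup drives the browser through webdriverio and selenium. The feature and wording here are illustrative, not one of my actual 14.

```gherkin
# An illustrative feature file; step definitions (not shown) map each
# line to webdriverio calls against the running app.
Feature: Sign in
  As a registered user
  I want to sign in
  So that I can see my dashboard

  Scenario: Successful sign-in
    Given I am on the sign-in page
    When I submit valid credentials
    Then I should see my dashboard
```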
I’ll post some details about the mocha tests, then I’ll turn to the cucumber tests…