Testing Mean.js Controllers – Part 2

You need to test that only authorized users can perform certain functions, to help prevent Ashley Madison-like embarrassments. Plus, doing so will boost your code coverage metrics by hitting all of those auth failure branches.

For Mean.js with Passport, you normally have something like this in each controller:

var users = require('app/controllers/users/authorization.server.controller');

You need to mock this controller, using the method I described in another post on mocking. Yes, there are lots of mocking libraries available, but I’m too lazy to learn them, and so far, my cheap method seems to work really well for me.

var users = require(config.requireModules.authorization.version);

In your normal config file, such as production.js, you would have:

requireModules: {
   authorization: {
      version: 'app/controllers/users/users.authorization.server.controller'
   }
}

Note how the authorization.version field points to the normal auth controller.
In test.js, your config file for testing, you would include the following:

requireModules: {
   authorization: {
      version: 'app/tests/mock_modules/mock-users.authorization.server.controller'
   }
}
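Under the hood, the swap works because the app loads a different config file depending on NODE_ENV. Here is a minimal, self-contained sketch of the idea, where an in-memory lookup table stands in for production.js and test.js:

```javascript
// Sketch of config-driven module selection; the table below stands in for
// production.js and test.js, which Mean.js picks based on NODE_ENV.
var configs = {
  production: {
    requireModules: {
      authorization: {
        version: 'app/controllers/users/users.authorization.server.controller'
      }
    }
  },
  test: {
    requireModules: {
      authorization: {
        version: 'app/tests/mock_modules/mock-users.authorization.server.controller'
      }
    }
  }
};

function authModulePath(env) {
  return configs[env].requireModules.authorization.version;
}

// The controller's require line never changes; only the config does:
// var users = require(authModulePath(process.env.NODE_ENV));
console.log(authModulePath('test'));
```

The controller code stays identical in production and test; the environment alone decides which module gets loaded.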

Now, you’ve successfully mocked your auth module. That module looks like this:

'use strict';

/**
 * Module dependencies.
 */
var config = require('../../config/config'); // adjust this path to wherever your config module lives

var passFailFlag = true;

// For test setup
exports.passFail = function (pr) {
	passFailFlag = pr;
};
/**
 * User middleware
 */

exports.userByID = function (req, res, next, id) {
	if (passFailFlag) {
		req.profile = config.anonymousUser;
		next();
	} else {
		return next(new Error('Failed to load User ' + id));
	}
};

/**
 * Require login routing middleware
 */
exports.requiresLogin = function (req, res, next) {
	if (passFailFlag) {
		next();
	} else {
		return res.status(401).send({
			message: 'User is not logged in'
		});
	}
};

exports.validLogin = function (req) {
	//console.log('***validLogin*** passFailFlag= ' + passFailFlag);
	return passFailFlag;
};

exports.isAuthorized = function (req, roles) {
	return passFailFlag;
};

/**
 * User authorizations routing middleware
 */
exports.hasAuthorization = function (roles) {
	return passFailFlag;
};

This auth mock lets you dynamically set, within an individual test, whether an auth check should pass or fail. It exports a passFail function that sets the flag every other exported function consults to decide whether authorization succeeds.
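In isolation, the pattern looks like this (the mock is inlined and simplified here so the sketch is self-contained; in the real setup it lives in its own module, as shown above):

```javascript
// Self-contained sketch of the pass/fail flag pattern: the test flips the
// flag, calls the middleware, then restores the flag.
var passFailFlag = true;

var users = {
  passFail: function (pr) { passFailFlag = pr; },
  requiresLogin: function (req, res, next) {
    if (passFailFlag) {
      next();
    } else {
      res.statusCode = 401; // stand-in for res.status(401).send(...)
    }
  }
};

// Simulate one auth-failure test:
var res = {};
users.passFail(false);
users.requiresLogin({}, res, function () { res.statusCode = 200; });
users.passFail(true); // restore the default for subsequent tests

console.log(res.statusCode); // 401
```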

Now, to test an authorization failure:

describe('taskJobs.Server.Controller.Test - Auth Fail Show Task Jobs', function() {
   it('should fail to return a list of jobs', function(done) {
      var response = buildResponse();
      var request  = http_mocks.createRequest({
         method: 'GET',
         url: '/taskJobs'
      });
      response.on('end', function() {
         if (response.statusCode !== 302) {
            done(new Error('Incorrect Status Code: ' + response.statusCode));
         } else if (response._getRedirectUrl() !== '/') {
            done(new Error('Incorrect Redirect: ' + response._getRedirectUrl()));
         } else {
            done();
         }
      });
      users.passFail(false);
      controller.list(request, response);
      users.passFail(true);
   });
})

Note how we set the passFail flag to false just before calling the controller, then restore it afterward.
An auth failure results in a 302 (redirect) status code, and the redirect URL will be '/' (in this case, anyway). Look in your controller to see what each method does when an authorization failure occurs. See my previous post on controller testing for the controller that fits this example.

Without the auth mock, you can use the bulk of this code to test any redirect: just set the expected redirect URL value and the URL in the request, and this code should just work.

Posted in Software | Tagged , | Leave a comment

Testing Mean.js Controllers – Part 1

It’s time to test controllers. In this post, I’ll test a controller that returns a JSON response.

Step 1 is to install node-mocks-http.

Step 2 is to download jsdValidator.

In our sample controller test file, include jsdValidator and create a schema. The schema will match what should get returned by the real controller.

The relevant portion of our route looks like this:

var taskJobs = require('app/controllers/taskJobs.server.controller');
app.route('/taskJobs')
   .get(taskJobs.list);

In our taskJobs.server.controller, we have:

// users, taskJobs, and lodash (_) are required at the top of this module
exports.list = function (req, res) {
	if (!users.isAuthorized(req, ['user'])) {
		res.redirect('/');
		return;
	}
	var output = [];
	_.forEach(taskJobs, function (job) {
		output.push({
			task: job.task.service,
			enabled: job.enabled,
			persist: job.persist,
			status: job.running ? 'running' : 'waiting',
			lastRun: job.startTimeStamp,
			processedMessages: job.processedMessages,
			errorMessages: job.errorMessages,
			commError: job.commError
		});
	});
	res.jsonp(output);
};

The controller is working if it responds with JSON output in a particular format, so the test validates the response against the schema. See jsdValidator for more info on how it works and how to build schemas.

The relevant portion of taskJobs.server.controller.test.js looks like this:

var jsd = require('app/services/jsdValidator/jsdValidator');
var streamSchema = {
	'Type': 'Array',
	'Optional': false,
	'Values': [
    {
      'Type': 'Object',
      'Optional': false,
      'Attributes': [
        {
          'Name': 'task',
          'Description': '',
          'Type': 'String',
          'Optional': false
        },{
          'Name': 'enabled',
          'Description': '',
          'Type': 'Boolean',
          'Optional': false
        },{
          'Name': 'persist',
          'Description': '',
          'Type': 'Boolean',
          'Optional': false
        },{
          'Name': 'status',
          'Description': '',
          'Type': 'String',
          'Values': ['running','waiting'],
          'Optional': false
        },{
          'Name': 'lastRun',
          'Description': '',
          'Type': 'String',
          'Optional': false,
          'CanBeNull': true
        },{
          'Name': 'processedMessages',
          'Description': '',
          'Type': 'Number',
          'Optional': false
        },{
          'Name': 'errorMessages',
          'Description': '',
          'Type': 'Number',
          'Optional': false
        },{
          'Name': 'commError',
          'Description': '',
          'Type': 'Number',
          'Optional': false,
          'CanBeNull': true
        }
      ]
    }
  ]
};
describe('taskJobs.Server.Controller.Test - Show Task Jobs', function() {
 it('should return a list of jobs', function(done) {
    var response = buildResponse();
    var request = http_mocks.createRequest({
       method: 'GET',
       url: '/taskJobs'
    });
    response.on('end', function() {
       if (response.statusCode !== 200) {
          done(new Error('Incorrect Status Code: ' + response.statusCode));
       } else {
          var jsdv = new JSDValidator({Schema: streamSchema});
          var isValidated = jsdv.Validate(JSON.parse(response._getData()));
          if (!isValidated) {
             done(new Error('Invalid Response: ' + jsdv.Error));
          } else {
             done();
          }
      }
    });
    controller.list(request, response);
 });
})

In the controller test, set up a few parameters:
First, the url needs to match the url in the route file for the action in the controller that you want to test.
Second, at the bottom, where it says controller.list(request, response), .list is the actual method you want to test in the controller. It has to match the method in the route file.
Third, modify the ‘describe’ and ‘it’ descriptions, and you’re done. It’s pretty easy to clone this code to test all of your controllers.


The fungibility fallacy

You no doubt have heard of Brooks’ Law (“adding manpower to a late software project makes it later”), popularized in his 1975 book The Mythical Man-Month. But more than 35 years later, the practice continues, with predictable results.
But where do these new resources come from? Hiring takes too long, so they’re likely to come from other teams within the same company or even the same department. Software engineers are software engineers, right? That’s the fungibility fallacy. The donor teams suddenly find themselves short-handed and must shift the lost member’s responsibilities onto the remaining team members, who now have to get up to speed on their new duties in exactly the same way the transferred member does. This perturbation reverberates throughout the team as everyone is disrupted, and it keeps reverberating long after the lost team member returns.
In many cases, it’s better to let the lost team member’s work go unfulfilled in order to reduce the overall disruption. After all, the assignment is only supposed to be temporary, and unless someone else on the team is already up to speed on the lost member’s work, they won’t be productive for a few weeks anyway. Multiply that productivity loss by everyone who is re-tasked and you can see how expensive re-tasking is. I’m all for having multiple people with expertise in an area, but that is difficult to achieve and not something you want to develop while under this kind of stress.
If you’re faced with losing part of your team to a late project, suggest the Bermuda strategy instead: send 90% of the late project’s team to Bermuda, and let the remaining 10% finish the project.

Use behavior-driven development, test-driven development, and lean development practices to avoid getting yourself into such a dilemma. And be a little more pessimistic when you plan.


Current state of reducing UAV piloting workload

Here are a few products that illustrate the current state of reducing the workload of piloting a UAV, at least for drones below a certain price point:

The Hexo+ dispenses with the traditional joystick-based controller; instead, the hexacopter is controlled entirely through an app that abstracts flight control as a set of “cinematic camera movements”. The company claims that customers will “soon” be able to customize and combine these movements. https://hexoplus.com/product/hexo_drone_3d

3DR has introduced a new feature that lets the operator set up any number of keyframes that the drone will follow along, moving the camera as needed. https://3drobotics.com/solo-drone/

Sensefly’s eMotion app lets you photograph an area by specifying the area to cover, the desired ground resolution, and the amount of image overlap. The drone flies the needed flight path without operator intervention. https://www.sensefly.com/drones/emotion.html

Sensefly’s eXom, which is purpose-built for inspection tasks, has 5 ultrasonic sensors and 5 low-res video cameras, and does offer some obstacle avoidance capabilities. https://www.sensefly.com/drones/exom.html


Recovering from a near-catastrophic npm update

I was tracking down a vexing problem in my node app. I was converging on my problem being a memory leak in a 3rd party node module, and decided to do an npm update.

I typed in npm update and, without thinking, pressed [Enter]. Yeah. Smart move, dumbass. Completely hosed my app. Serves me right. For those in our studio audience, what I did was blindly update every node_module (the 3rd-party components my app depends on) to its latest version. All. At. Once. As Gimli said: “Small chance of success. Certainty of death. What are we waitin’ for?”

To recover, I had to get rid of all of the node_modules. But (yes, I use Windows 8, on a touchscreen laptop) the paths to these modules exceeded the maximum length supported by the relevant commands, so I couldn’t delete the node_modules directory!

I luckily found a great tip on the web.
1) Create a new directory. Put nothing in it.
2) Open a cmd prompt, and run: Robocopy [new_directory] [node_modules directory to delete] /MIR
3) Be amazed. The /MIR switch is the key: it mirrors the empty source into the target, removing any files in the target directory that are not found in the source directory.

The result was that the contents of the target directory were gone, and the now-empty directory could be deleted normally.

I reinstalled nodejs (which also installs npm).

Then, in my project directory, I ran npm install and npm install -dev. This used the project’s package.json file and restored the versions listed there.

What a stunningly dumb move on my part. Fortunately, the above worked and my app is running again, but not without a tremendous amount of pain and suffering…a true self-inflicted head wound. I need to figure out some safeguards to not do this again–especially difficult since I don’t get to write code that often and have to come back to this after weeks away. But such is life.

Now. What the hell was I trying to fix?


Triage Changes to Manage Risk Using Code Reviews

Code reviews need to happen in-band with the main development & deployment process. To do otherwise limits their effectiveness and generates resistance–like every other out-of-band task that interrupts flow.

Tests and code coverage, linting, and complexity metrics are some of the tools we can use to help focus our code review efforts, and I believe there is a lot more these tools can do. For example, I would love to have rules for creating function names. Good function names should start with a verb–it’s not that difficult to enforce that rule, and you could easily establish a vocabulary of verbs to use across projects. Similarly, the nouns should reflect the problem domain, with the vocabulary building up as the tests and code evolve. This would be much better than the simple, naive autocomplete suggestions currently provided by editors–as useful as that feature is.
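The verb-first rule really is easy to prototype. Here is a toy sketch; the verb list and the camelCase boundary check are purely illustrative, not any existing linter's API:

```javascript
// Toy check that a function name starts with an approved verb, followed by
// a camelCase boundary (uppercase letter) or nothing at all.
var verbs = ['get', 'set', 'build', 'create', 'update', 'delete', 'is', 'has',
             'validate', 'parse', 'render', 'send', 'load', 'save'];

function startsWithVerb(name) {
  return verbs.some(function (verb) {
    if (name.indexOf(verb) !== 0) {
      return false; // name does not begin with this verb
    }
    if (name.length === verb.length) {
      return true; // name is exactly the verb, e.g. 'save'
    }
    // require a word boundary so 'settle' does not match 'set'
    var next = name[verb.length];
    return next === next.toUpperCase();
  });
}

console.log(startsWithVerb('buildResponse')); // true
console.log(startsWithVerb('response'));      // false
```

A shared verb list like this could live in a lint config and grow with the project’s vocabulary.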

Anyway…If it’s not possible to inspect 100% of code changes, something else is required to maximize the overall effectiveness of code reviews. The key terms here are triage and risk. Triage is fast and simple sorting, based on obvious indicators. Risk is the probability of loss or damage due to an adverse event.

To most effectively allocate your code reviewing time, you triage changes to manage risk:

  • Changes that impact security incur high risk.
  • Changes that impact more critical functions incur a higher risk than changes that impact less critical functions.
  • Changes that impact more frequently used functions incur a higher risk than changes that impact less frequently used functions.
  • Senior engineers will make fewer mistakes than junior engineers performing the same task. Consequently, you’d like to scrutinize junior engineers’ work a little more closely. Code reviews are also teaching opportunities.
  • An engineer will introduce more defects into a complex function than a simpler function. Consequently, you’d like to limit complexity and scrutinize changes to complex functions more than others.
  • A more intensive change incurs a higher risk than a simpler change, but keep in mind that even the smallest change can have major consequences.
  • All code, from everyone, should be reviewed every once in a while.

If you use BDD, then you should be able to identify your most critical, service-defining scenarios.
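The triage rules above could be sketched as a toy scoring function. All of the weights and field names below are invented for illustration; you would calibrate them against your own project’s history:

```javascript
// Toy risk score for triaging a change before code review.
// Higher score = review sooner and more carefully.
function riskScore(change) {
  var score = 0;
  if (change.touchesSecurity) score += 5;      // security changes always rank high
  score += change.criticality;                 // 0-3: how critical the touched functions are
  score += change.usageFrequency;              // 0-3: how often those functions run
  score += change.complexity;                  // 0-3: complexity bucket of the touched code
  score += change.authorIsJunior ? 2 : 0;      // extra scrutiny doubles as teaching
  score += Math.min(change.linesChanged / 100, 3); // size of the change, capped
  return score;
}

console.log(riskScore({
  touchesSecurity: true,
  criticality: 3,
  usageFrequency: 2,
  complexity: 1,
  authorIsJunior: false,
  linesChanged: 50
})); // 11.5
```

Even a crude score like this lets you sort a day’s changes and spend review time where an escaped defect would hurt most.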


Robert Saunders–the Real Father of Perpetual Beta

The concept of perpetual beta is attributed to modern web applications, but I first encountered the practice, and the phrase, in the early 1990s. Yes, the early 90s, when software shipped on floppy discs! At the time, I was working at Logic Works managing a product called BPwin, a business process modeling tool built on a shoestring budget, but which, against all expectations, captured a nice little niche and turned a nifty profit.

The team was minuscule: I had one developer, a part-time tech writer, and a part-time QA person. But the developer, Robert Saunders, could crank out code like nobody’s business, and he liked to work overnight. I would often have a new version ready for testing in the morning, so there was a lot of testing and retesting: constantly looking, constantly testing, wash, rinse, repeat. Sometimes the documentation came first and Bob would build to the documentation; sometimes we’d talk about a feature and the code came first. But the product was almost always 3 days from ready; that is, if we needed to ship a new build, we could generally do so on 3 days’ notice. Bob coined the term perpetual beta to describe our arrangement.

BPwin was often sold to corporations on the promise of some new feature, which we’d rapidly build in. Looking back, we employed a lot of what today would be called lean development practices, and I never wanted to build software the old-fashioned way again. Features followed the money, were delivered quickly, and were followed by rapid feedback cycles with the target customer to close any gaps between what we delivered and what they needed (which they couldn’t articulate until they had something to try).

So there you have it. Robert Saunders–the real father of perpetual beta.
