The fungibility fallacy

You have no doubt heard of Brooks’ Law–adding manpower to a late software project makes it later–popularized by Fred Brooks in his 1975 book The Mythical Man-Month. Yet more than 35 years later, the practice persists, with predictable results.
But where do these new resources come from? Hiring takes too long, so they’re likely to come from other teams within the same company or even the same department. Software engineers are software engineers–the fungibility fallacy. These teams suddenly find themselves short-handed and must shift the lost member’s responsibilities onto the remaining team members, who now have to get up to speed on their new responsibilities in exactly the same way that the transferred member does. This perturbation disrupts everyone on the team, and it reverberates long after the lost member returns.
In many cases, it’s better to let the lost team member’s work go unfulfilled in order to reduce the overall disruption. After all, the assignment is supposed to be temporary, and, unless someone else on the team is already up to speed on the lost member’s work, they’re not going to be productive for a few weeks anyway. Multiply that productivity loss by everyone who is re-tasked, and you can see how expensive re-tasking is. I’m all for multiple people having expertise in the same area, but that is difficult to achieve and not something you want to develop under this kind of stress.
If you’re faced with losing part of your team to a late project, suggest the Bermuda strategy instead: send 90% of that team to Bermuda, and let the remaining 10% finish the project.

Use behavior-driven development, test-driven development, and lean development practices to avoid getting yourself into such a dilemma. And be a little more pessimistic when you plan.

Posted in Software

Current state of reducing UAV piloting workload

Here are a few products that illustrate the current state of reducing the workload of piloting a UAV, at least for drones below a certain price point:

The Hexo+ dispenses with the traditional joystick-based controller; instead, the hexacopter is controlled entirely through an app that abstracts flight control as a set of “cinematic camera movements”. The company claims that customers will “soon” be able to customize and combine these movements.

3DR has introduced a new feature that lets the operator set up any number of keyframes for the drone to fly through in sequence, moving the camera as needed.

Sensefly’s eMotion app lets you photograph an area by specifying the area, the desired ground resolution, and the amount of image overlap. The drone then flies the needed flight path without operator intervention.

Sensefly’s eXom, which is purpose-built for inspection tasks, has 5 ultrasonic sensors and 5 low-res video cameras, and offers some obstacle-avoidance capability.

Posted in Software

Recovering from a near-catastrophic npm update

I was tracking down a vexing problem in my node app. I had narrowed it down to what looked like a memory leak in a 3rd party node module, and decided to do an npm update.

I typed in npm update and, without thinking, pressed [Enter]. Yeah. Smart move, dumbass. Completely hosed my app. Serves me right. For those in our studio audience, what I did was blindly update every node module–the 3rd party components my app depends on–to its latest version. All. At. Once. As Gimli said: “Certainty of death. Small chance of success. What are we waitin’ for?”

To recover, I had to get rid of all of the node_modules. But–yes, I use Windows 8 (on a touchscreen laptop)–the paths to these modules exceeded the maximum path length supported by the relevant commands, so I couldn’t delete the node_modules directory!

Luckily, I found a great tip on the web:
1) Create a new directory. Put nothing in it.
2) Open a cmd prompt, and run: Robocopy [new_directory] [node_modules directory to delete] /MIR
3) Be amazed. The /MIR switch is the key: it mirrors the (empty) source into the target, removing any files in the target directory that are not found in the source directory.

The result was that the contents of the target directory were gone, and the now-empty directory could be deleted normally.

I reinstalled nodejs (which also installs npm).

Then, in my project directory, I ran npm install, followed by npm install -dev. npm read the project’s package.json file and reinstalled the modules according to the versions listed there.

What a stunningly dumb move on my part. Fortunately, the above worked and my app is running again, though not without a tremendous amount of pain and suffering…a true self-inflicted head wound. I need to figure out some safeguards against doing this again–especially difficult since I don’t get to write code that often and come back to this after weeks away. But such is life.

Now. What the hell was I trying to fix?

Posted in Software

Triage Changes to Manage Risk Using Code Reviews

Code reviews need to happen in-band with the main development & deployment process. To do otherwise limits their effectiveness and generates resistance–like every other out-of-band task that interrupts flow.

Tests and code coverage, linting, and complexity metrics are some of the tools we can use to help focus our code review efforts, and I believe there is a lot more these tools can do. For example, I would love to have rules for creating function names. Good function names should start with a verb–it’s not that difficult to enforce that rule, and you could easily establish a vocabulary of verbs to use across projects. Similarly, the nouns should reflect the problem domain, with the vocabulary building up as the tests and code evolve. This would be much better than the simple, naive autocomplete suggestions currently provided by editors–as useful as that feature is.
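A minimal sketch of such a check (the verb list and function names here are my own illustration, not an existing jshint rule):

```javascript
// Hypothetical verb vocabulary, shared across projects.
var VERBS = ['get', 'set', 'create', 'update', 'delete', 'is', 'has', 'build', 'parse', 'validate'];

// Returns true if a camelCase function name begins with an approved verb.
function startsWithApprovedVerb(name) {
  return VERBS.some(function (verb) {
    if (name === verb) { return true; }
    if (name.indexOf(verb) !== 0) { return false; }
    var next = name.charAt(verb.length);
    // The verb must end at a camelCase boundary: 'getUser' passes, 'getter' fails.
    return next === next.toUpperCase() && next !== next.toLowerCase();
  });
}
```

A real implementation would walk the parse tree (with something like esprima) and apply this check to every function declaration it finds.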

Anyway…If it’s not possible to inspect 100% of code changes, something else is required to maximize the overall effectiveness of code reviews. The key terms here are triage and risk. Triage is fast and simple sorting, based on obvious indicators. Risk is the probability of loss or damage due to an adverse event.

To most effectively allocate your code reviewing time, you triage changes to manage risk:

  • Changes that impact security incur high risk.
  • Changes that impact more critical functions incur a higher risk than changes that impact less critical functions.
  • Changes that impact more frequently used functions incur a higher risk than changes that impact less frequently used functions.
  • Senior engineers will make fewer mistakes than junior engineers performing the same task. Consequently, you’d like to scrutinize the junior engineers’ work a little more closely. Code reviews are also teaching opportunities.
  • An engineer will introduce more defects into a complex function than a simpler function. Consequently, you’d like to limit complexity and scrutinize changes to complex functions more than others.
  • A more extensive change incurs a higher risk than a simpler change, but keep in mind that even the smallest change can have major consequences.
  • Everyone’s code needs to be reviewed once in a while.
  • All code should be reviewed once in a while.
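The triage rules above can be sketched as a single score. This is a hypothetical illustration–the field names and weights are mine, not from any real tool:

```javascript
// Compute an illustrative risk score for a change; review the
// highest-scoring changes first.
function riskScore(change) {
  var score = 0;
  if (change.touchesSecurity) { score += 5; }           // security changes are high risk
  score += change.criticality;                          // 1 (minor) .. 5 (service-defining)
  score += change.usageFrequency;                       // 1 (rarely used) .. 5 (hot path)
  if (change.authorIsJunior) { score += 2; }            // juniors warrant closer scrutiny
  if (change.cyclomaticComplexity > 10) { score += 2; } // complex functions breed defects
  score += Math.min(change.linesChanged / 100, 3);      // bigger changes, capped contribution
  return score;
}
```

Even a low score doesn’t mean zero risk–remember that the smallest change can have major consequences.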

If you use BDD, then you should be able to identify your most critical, service-defining scenarios.

Posted in Software

Robert Saunders–the Real Father of Perpetual Beta

The concept of perpetual beta is usually attributed to modern web applications, but I first encountered the practice–and the phrase–in the early 1990s. Yes, the early 90s, when software shipped on floppy disks! At the time, I was working at Logic Works managing a product called BPwin, a business process modeling tool built on a shoestring budget, but which, against all expectations, captured a nice little niche and turned a nifty profit. The team was minuscule: I had one developer, a part-time tech writer, and a part-time QA person. But the developer–Robert Saunders–could crank out code like nobody’s business, and he liked to work overnight. I would often find a new version ready for testing in the morning, so there was a lot of testing and retesting: constantly looking, constantly testing, wash, rinse, repeat. Sometimes the documentation would come first and Bob would build to the documentation; sometimes we’d talk about a feature and the code would come first. But the product was almost always 3 days from ready–that is, if we needed to ship a new build, we could generally do so on 3 days’ notice. Bob coined the term perpetual beta to describe our arrangement. BPwin was often sold to corporations on the promise of some new feature, which we’d rapidly build in. Looking back, we employed a lot of what today would be called lean development practices, and I never wanted to build software the old-fashioned way again. Features followed the money, were delivered quickly, and were followed by rapid feedback cycles with the target customer to close any gaps between what we delivered and what they needed (which they couldn’t articulate until they had something to try).

So there you have it. Robert Saunders–the real father of perpetual beta.

Posted in Software

Using Codepainter to Format your Node.js App

I previously posted about using jshint to catch style errors in your code. To be sure, it’s way, way, way better to have a style formatter automatically apply your chosen style right within your editor–or at least flag your errors as soon as possible after you make them. But neither of these will help if you have a bunch of existing code that you need to get into shape; then you need a tool that will actually transform your code in a batch. Codepainter will apply selected transforms to your code as a grunt task–giving you improved code as a result. There are a few caveats, but it worked for me. First, install codepainter and grunt-codepainter using npm. Codepainter gets configured in grunt like this:

codepainter: {
  static: {
    options: {
      editorConfig: false,
      style: {
        indent_style: 'tab',
        trim_trailing_whitespace: true,
        indent_size: 1,
        max_line_length: 100,
        quote_type: 'single',
        curly_bracket_next_line: false,
        spaces_around_operators: true,
        space_after_control_statements: true,
        space_after_anonymous_functions: true,
        spaces_in_brackets: false
      }
    },
    files: [{
      expand: true, // Enable dynamic expansion
      cwd: 'app/', // Src matches are relative to this path
      src: ['+(controllers|services)/**/*.js'], // Actual patterns to match
      dest: 'styled/' // Destination path prefix
    }]
  }
}

Add a line to register the task:

grunt.registerTask('styleme', 'codepainter');

That’s it. When you run grunt styleme, codepainter will apply the specified style transforms to the files it finds using the cwd: and src: entries, and put the transformed files in the directory you specify in dest:. It preserves the directory structure in the output, so I ended up with styled/controllers and styled/services, including subdirectories. Copy these directories back into the directory you specified in cwd:, and you now have the styled files in your project. You should, of course, back up those directories beforehand, copy the files over, and then run all of your tests afterwards. You can use grunt-contrib-copy and set up a chain of tasks to format, copy, and test.
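Such a chain might look like this (the copy and test task names are placeholders for whatever you have configured with grunt-contrib-copy and your test runner):

```javascript
// Hypothetical task chain: format into styled/, copy the styled files
// back over the originals, then run the test suite.
grunt.registerTask('restyle', ['codepainter', 'copy:styled_back', 'test']);
```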

But use extreme caution here. This is not something I can recommend that you do regularly–if you do, please set it up so that it’s automated and goof-proof! The real holy grail for me remains something that will correct my style as I write, and flag errors that it can’t correct. But codepainter will save you from having to make thousands of style corrections by hand, if you ever face such a situation!

Posted in Software

Complexity Analysis for Node.js Apps

In previous posts, I introduced style checking (linting) using jshint, integration testing using mocha and should, code test coverage using coverage, and vulnerability identification using retire. If you’re starting a new project, I strongly encourage you to integrate these packages into your routine–it will save you plenty in the long run.

I now want to turn to managing software complexity. Defect rates increase superlinearly with code complexity, so managing complexity is part of a broader strategy of risk reduction. Software complexity is not as cut & dried as style checking and unit testing, and refactoring requires effort and entails its own risks. The primary benefit of automated complexity measures is that they help focus your attention on high-risk sections of your code. A thorough treatment of the topic is beyond the scope of a blog post, but you can get started quite easily.

I use two node packages: ‘complexity-report’ and ‘grunt-complexity’. Install them using npm, and then set up your gruntfile:

complexity: {
  generic: {
    src: ['app/**/*.js', 'public/modules/*.js'],
    exclude: ['app/yyy/**', 'app/tests/**', 'app/services/xxx/**'],
    options: {
      breakOnErrors: true,
      //jsLintXML: 'report.xml', // create XML JSLint-like report
      //checkstyleXML: 'checkstyle.xml', // create checkstyle report
      //pmdXML: 'pmd.xml', // create pmd report
      errorsOnly: false, // show only maintainability errors
      cyclomatic: [10, 20, 50], // or optionally a single value, like 3
      halstead: [10, 20, 50], // or optionally a single value, like 8
      maintainability: 75,
      hideComplexFunctions: true, // only display maintainability; set to false for more detailed output
      broadcast: false // broadcast data over event-bus
    }
  }
}


grunt.registerTask('complex', 'complexity');

Note: you cannot use grunt.registerTask(‘complexity’, ‘complexity’)–an alias with the same name as the task it invokes shadows the real task and recurses. It’s like crossing the streams, if you remember your Ghostbusters analogies.

If you’re just starting your new application–great! If you’re following TDD/BDD, then complexity is just another tool to help you make choices as you refactor–you’re always trying to keep the board green! (But see my caveats at the end.) Using the settings above (particularly hideComplexFunctions: true), when you run grunt complex, you’ll get a nice summary of the maintainability metric for each module, with the output ordered from the lowest score to the highest. The maintainability index was introduced by Paul Oman and Jack Hagemeister in 1991, and combines 3 different metrics into an overall maintainability score. Don’t worry about the details for now. You can control the coloring of the output by changing the value of maintainability: 75 to something else. We’ll discuss this in a bit. Anyway, this is the output format I recommend you use–as long as everything is green, you’re good to go. In my opinion, BDD/TDD helps to prevent a lot of the issues that retrospective complexity analysis was designed to address. I use complexity-report as another safety check during development & deployment, and to help triage code for manual review.
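For reference, the commonly cited form of the maintainability index combines Halstead volume, cyclomatic complexity, and lines of code (variants differ in scaling, and complexity-report’s exact formula may differ):

```javascript
// One common formulation of the 1991 maintainability index:
// MI = 171 - 5.2*ln(Halstead volume) - 0.23*(cyclomatic complexity) - 16.2*ln(LOC)
function maintainabilityIndex(halsteadVolume, cyclomaticComplexity, linesOfCode) {
  return 171
    - 5.2 * Math.log(halsteadVolume)
    - 0.23 * cyclomaticComplexity
    - 16.2 * Math.log(linesOfCode);
}
```

Higher is better; as volume, complexity, or size grow, the score falls.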

If you’re dealing with existing code, then things are a bit more…err…complex. The first time you run grunt complex, there’s a good chance you’ll see a lot of red and yellow bars. Don’t panic! To understand where the trouble spots are, we have to dig a little deeper. Set hideComplexFunctions: false, and rerun grunt complex. You still get the maintainability graphic for each module, but nested underneath each entry are the complexity metrics for every function found in that module. I don’t recommend that you attribute any prescriptive power to these metrics. Clean Code: A Handbook of Agile Software Craftsmanship, by Robert C. Martin, is a great resource for improving the maintainability of your code. The examples are in Java, but the principles are universal.

A better understanding of code metrics can be found here and here.

Do not try to reduce complexity purely for the sake of lowering a metric below a specified number. Adjust the thresholds based on your experience and what you (and your peers) see in the code. If you’re OK with a function, then leave it alone. Phil Booth, the author of complexity-report, has accepted my suggestion to allow module and function-level overrides of the global threshold values for the individual metrics, similar to what jshint allows. This is an important feature, because you can then put complexity-report into your standard development & deployment practices, setting thresholds to sensible values in various places. Currently, you have to do complexity reporting out-of-band, or set ridiculously high thresholds that render the capability useless for deployment.

Posted in Software