
Preston's Business Razor: A Stakeholder Perspective On Pair Programming

Pair programming is an eXtreme Programming (XP) practice wherein two developers work in conjunction, often physically seated next to each other with a single keyboard and mouse, to solve the same development task as a single mind. Having developer pairs tackle complex tasks can go a long way towards…

  • Increasing personal productivity.
  • Reducing defects.
  • Minimizing misinterpretation of requirements.
  • Improving designs.
  • (Many other benefits.)

As a developer, pairing makes mountains of sense: most tasks in the development world can be improved by it in a direct, obvious way.

The business perspective, however, is somewhat different. While any given client or manager will say “yes” to seeing the above occur, a pragmatic developer presenting pro-pairing arguments must, more importantly, provide evidence that stakeholder, not developer, outcomes improve. Let’s look at a few different stakeholder perspectives individually…

Return on investment.

The common pro-pair argument of “increasing personal productivity” is, unfortunately, a deceptively irrelevant point when it comes to ROI. Business stakeholders will always want to increase productivity, but only if it improves project value per dollar, ceteris paribus. Individual productivity and overall ROI are not always proportional… but we’ll get back to this in a minute.

To play devil’s advocate, let’s create an extreme, cynical analogy by playing stakeholder to a small project that can be run in one of two ways…

  1. Two expert developers arduously working for 2 man-months to complete a small project for a total business cost of $20K.
  2. One hundred college interns divided into twenty 5-man teams, each trying to create a solution equal to or better than the above team could, estimated at 100 cumulative man-months (50x the development effort, but free because they get college credit), plus one man-month of project management and a half man-month of additional overhead simply to identify the best developed solution. Total cost: $15K.

Now, this latter case is clearly an extreme fabrication of how real-world projects run, but it does highlight an Occam’s Razor-like rule for business types…

If presented with two approaches with equal outcomes and equal risk, choose the cheapest. (Aside: This is not an argument for crowdsourcing.)

In the latter case, overall productivity, the code quality of any random line, the design quality, and other per-intern measures will be horrendous. We’ll probably end up throwing out at least 95% of the code. But here’s the kicker… it doesn’t matter. From a business perspective we don’t care about the 95 interns that can’t tell a hard drive from an iPhone. (It’s a problem for another day, at least.) We do care about the 5 brilliant interns that teamed up, overcame the mediocrity of their peers, created something truly magnificent, and saved the company $5K. Despite awful individual productivity, the overall outcome is positive and comes at a lower cost. If we apply Preston’s Business Razor to this scenario, there is a clear winner, and it’s not the “ideal” one.
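To make the razor concrete, here’s a minimal sketch in Python that applies it to the scenario above. Everything in it, the option names, the `business_razor` helper, and the dollar figures, comes from the made-up numbers in this post; it’s an illustration of the decision rule, not a real estimating tool.

```python
# A minimal sketch of Preston's Business Razor applied to the post's
# hypothetical options. All figures are the invented numbers from the
# scenario above, not real data.

options = {
    "two expert developers": {
        "labor_cost": 20_000,     # 2 man-months of expert time
        "overhead_cost": 0,
        "outcome": "working product",
        "risk": "equal",          # equal by the razor's premise
    },
    "twenty intern teams": {
        "labor_cost": 0,          # interns work for college credit
        "overhead_cost": 15_000,  # PM plus picking the best solution
        "outcome": "working product",
        "risk": "equal",
    },
}

def business_razor(options):
    """Given options with equal outcomes and equal risk, choose the cheapest."""
    assert len({(o["outcome"], o["risk"]) for o in options.values()}) == 1, \
        "the razor only applies when outcomes and risk are equal"
    return min(options, key=lambda name: options[name]["labor_cost"]
                                         + options[name]["overhead_cost"])

print(business_razor(options))  # -> twenty intern teams
```

Note the assert: the razor only fires when outcomes and risk genuinely are equal, and that premise is exactly what any pro-pairing (or anti-pairing) argument has to establish first.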

Why is “two” the ideal number?

It’s not… except when it is.

In economics, there is a series of concepts related to production possibilities, allocative efficiency, Pareto efficiency, etc. that can be applied to engineering: using a group of individuals to maximize the production of various outcomes with limited resources. Here’s a simple empirical experiment, touching on some of these concepts, that you can run with a group of 15 people and a good 30 minutes (a rough simulation sketch follows the list).

  1. Print out instructions on how to make an origami cube, give them to each person, and make sure everyone can make a box on their own. Instruct everyone to make as many fully-assembled, respectable boxes as possible in 5 minutes. Some will be great at it, others less so, and maybe a few just can’t do it. Don’t count the crappy-looking cubes. Figure out the average time to build a box across the group. This is our “baseline” number that we’re going to try to beat.
  2. Break the 15 people into 5 groups of 3. Each team of 3 will now produce boxes assembly-line style, requiring each member to master specific parts of the process. Measure the production capabilities of each team in 5 minutes and again find the average time to create a box across the entire group.
  3. Reform everyone into 3 groups of 5. The assembly lines will be longer, requiring everyone to become even more specialized in their responsibilities. Again let the groups run for 5 minutes and compute your output.
  4. Lastly, form the entire group into a single, massive assembly pipeline of 15 people. Time the group and compute your output.
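If you’d rather not wrangle 15 people and a stack of paper, here’s the promised rough simulation of the experiment in Python. Every number in it (folds per box, handoff cost, specialization speedup, per-person fold speeds) is an invented assumption, so treat it as a sketch of the tradeoff’s shape rather than a model of real teams.

```python
import random

STEPS = 12            # assumed folds per box
TRIAL_SECONDS = 300   # the 5-minute trial from the experiment above
HANDOFF = 2.0         # assumed seconds lost passing paper between stations
SPECIALIZATION = 0.6  # assumed per-fold speedup from mastering fewer steps

random.seed(42)

def boxes_per_trial(team):
    """Throughput of one assembly line, paced by its slowest station."""
    n = len(team)
    # Divide the folds as evenly as possible; with more people than steps,
    # some members end up doing nothing but passing paper along.
    loads = [STEPS // n + (1 if i < STEPS % n else 0) for i in range(n)]
    station_times = []
    for load, seconds_per_fold in zip(loads, team):
        per_fold = seconds_per_fold * (SPECIALIZATION if n > 1 else 1.0)
        handoff = HANDOFF if n > 1 else 0.0
        station_times.append(load * per_fold + handoff)
    # Steady state: one finished box leaves the line per slowest-station cycle.
    return TRIAL_SECONDS / max(station_times)

# 15 hypothetical people with varying skill (seconds per fold).
people = [random.uniform(2.0, 5.0) for _ in range(15)]

for size in (1, 3, 5, 15):
    groups = [people[i:i + size] for i in range(0, len(people), size)]
    total = sum(boxes_per_trial(g) for g in groups)
    print(f"groups of {size:2d}: ~{total:5.1f} boxes in 5 minutes")
```

With these made-up parameters, one of the middle configurations tends to come out ahead: solo folders pay no handoff cost but get no specialization, and the 15-person line drowns in handoffs and idle stations.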

We now have 4 data points on how to maximize the production of the group, as well as some interesting observations. First, in all likelihood, the best overall production came in one of the middle two trials: people were forced to specialize, but not to the point of awkwardness. Second, having 15 people do a task with fewer than 15 significant steps is really awkward: people specialized to the point of meaninglessness, issues in the pipeline blocked way too many people, and the sheer overhead of literally moving paper around defeated the point of specialization. Third, each group had its own characteristics. Some may have been so productive that they blew away the baseline quota, while others in similarly sized teams simply could not work together due to process issues, personality conflicts, etc. Each group probably also adapted within those five minutes to maximize the group’s output based on who was fastest/slowest and best/worst at folding. Some groups may have created a “manager” role to correct critical pipeline issues, pitch in a few folds when someone fell behind, or fix the “broken” boxes. And in the massive 15-person pipeline, some may have gotten frustrated and wanted to split back into smaller groups.

Let’s put this in a real-world perspective by taking an arbitrary task from an issue tracking system: “refactor foo to support bar.” This task has its own optimal number of concurrent developers that is unique to the team. For a group of interns, maybe it’s 5.5; for a team of superheroes, 1.2; for my team, maybe 2.4. This specific task and this specific team have their own distinct production characteristics (even though the task only needs to be done once), and only in very rare cases will the optimal number of people assigned to it be exactly 2.0. The point is this…

Asserting that a “pair” of people is always optimal is just as absurd as asserting groups of 1, 3 or 4 are always optimal.

The number is unique per task, per project, and per team, and it is well understood outside of computer science when looked at from a businessy economic perspective. So from a stakeholder viewpoint, the use of pair programming is absolutely acceptable (and even preferred) when it is optimal over the other options.
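One way to picture why the number floats is a toy throughput model: an Amdahl’s-law-style speedup for the parallelizable portion of a task, minus a penalty that grows with the number of pairwise communication channels. The parallel fractions and coordination costs below are invented knobs, not measurements; the only claim is that the optimum moves as they change.

```python
def effective_rate(n, parallel_fraction, coordination_cost):
    """Amdahl-style speedup for n people, minus a pairwise-communication penalty."""
    serial = 1.0 - parallel_fraction
    speedup = 1.0 / (serial + parallel_fraction / n)
    overhead = coordination_cost * n * (n - 1) / 2  # n people -> n(n-1)/2 channels
    return speedup - overhead

def best_head_count(parallel_fraction, coordination_cost, max_n=10):
    """The head count that maximizes effective progress on a single task."""
    return max(range(1, max_n + 1),
               key=lambda n: effective_rate(n, parallel_fraction, coordination_cost))

# Hypothetical task/team combinations; the knobs, not the labels, drive the answer.
for label, pf, cc in [
    ("very divisible task, intern-style team", 0.90, 0.10),
    ("typical task, typical team",             0.50, 0.09),
    ("gnarly serial problem, lone superhero",  0.30, 0.20),
]:
    print(f"{label}: best head count = {best_head_count(pf, cc)}")
```

Run it and the answers land on 5, 2, and 1 respectively; nudge the knobs and they move. Sometimes the optimum really is 2, but it’s 2 because the task and team made it so, not by decree.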

Experienced engineers inherently understand that some tasks require multiple minds to collectively discuss difficult challenges, debug complex code, etc., and they don’t hesitate to seek additional eyes when it feels right. What we should not do is cling to the notion that “2” is a magic number to be used without contextual consideration. Maybe it’s 3… or 1… or 7… there is no universal constant that can predict this number, and it’s OK that it varies per task.

So for now, let’s put aside this arbitrary “2” and instead rely on our experience, higher-level intuition, business strategy, basic metrics, and a strong understanding of our peers’ strengths and weaknesses when deciding when to pair.

Preston
