We Interrupt This Broadcast

Posted by Robert Merrill on December 17, 2012 under Change Leadership

Given the events of Friday, December 14, 2012 in Newtown, Connecticut, I’m choosing not to write about software project failure as though nothing has happened.

I’m ashamed to admit it, but when I first heard the news, the needle didn’t even move. Another wacko shot up a school. Huh. I know my emotions don’t always work right—sometimes I get either nothing or over-the-top—but have I become so numb to these recurring tragedies that it requires conscious effort to feel anything?

It was when I read the unfolding reactions as relayed by my friends on Facebook that I became grieved—and angry. Mostly there was just an outpouring of emotion—welcome, because it helped me to feel something appropriate. But there were also predictable people quoted as saying predictable things about how it was someone else’s fault and someone else needed to change. Too many calls for a “national conversation” contained dialogue-killing words like “political talking points,” “cowed by extremists,” “pry away our Constitutional rights,” and “fantasies that the gun lobby hides behind.” Read Crucial Conversations. Please. You’ll serve your own causes better if you do.

Four days later, it feels like “this time is different.” I hope that’s true. But I’ve seen this pattern before. For about six weeks after September 11, 2001, I lived in a different country. The desires and pressures of everyday life had been put in perspective. People were more polite on the roads and more careful in their speech. But it wore off, and we got back to “normal.”

When we draw our power to change from emotion alone, it can’t be any other way.

Cause-and-effect emotions have a half-life. If the change takes longer to become permanent than the emotion lasts, we revert to the status quo ante—“the way things were before.” In order for things to change, enough of us must choose to cash in our emotions for convictions, and with them a more lasting power and changed priorities.

Before Newtown, there was an established set of people and perspectives around the subject of school shootings. If that set of people and perspectives is not significantly different three months from now, we will probably continue to get the same results we’ve been getting.

Maybe some of the existing activists will, in their emotion, embrace new perspectives.

Maybe some new people, with new perspectives, will become part of the activist mix.

Or maybe the emotions will run their course and things will get back to…Normal.

Thoughts on “Not So Fast…10 Steps to Take Before Hiring a Web Developer”

Posted by Robert Merrill under Project Set-Up, Software teams

In the Society for Marketing Professional Services (SMPS) Wisconsin Chapter newsletter, Melissa Opad writes, “Picking a web developer is like picking a spouse…While that’s not exactly true (at all), it is a very important decision that will have great implications on the success of your website project.” Ms. Opad then goes on to list 10 Steps to Take Before Hiring a Web Developer.

Ms. Opad has a lot of good advice, but there are also a few things that I think are a lot more like choosing a spouse than writing a tight Hollywood-style pre-nup.

Based on my experience as a web application (primarily eCommerce) developer, web/software project definer and estimator, and consultant to firms hiring outside help for their web and software projects, here’s my take on Ms. Opad’s 10 steps.

Code Review for Solo Programmers

Posted by Robert Merrill on December 12, 2012 under Process Improvement

If you write most of your software alone, like I do, you have this problem. There’s no one to review your code.

Actually, there are places to get your code reviewed. I learned about them from a discussion on the Madison Area Software Developers’ Meetup mailing list.

You can also get code reviewed on Experts-Exchange.com; a subscription is $12.95/month. It’s not so good for browsing through code to learn from, because the reviews are organized by technology and listed under the specific question rather than gathered in one place. But EE has been a good place to get answers to technology questions in general, especially because it’s well moderated, and questioners have to say which solution actually worked.

A few thoughts on code reviews in general:

  1. For maximum effectiveness, use two or three reviewers. The second reviewer will spot half again as many issues; it’s amazing. The added impact of more than three isn’t worth the added time.
  2. Don’t try to review too much in one session. Sessions should go no longer than an hour—two at the outside. That includes both individuals reading code and the review read-out meeting itself.
  3. Require reviewers to go through the code by themselves, in advance. Use a consistently line-numbered version to make it easy to log issues and merge them up later.
  4. Over time, you’ll establish a metric of how much code can be reviewed in an hour or two, making it easier to chunk it.
  5. The meeting should be solely to read out issues. Reviewers should have already reviewed the code and made their lists of issues. Resolving issues is at the author’s discretion and should happen apart from the meeting. “Committee of the whole” problem-solving is highly tempting and wasteful.
  6. If the author feels “on the spot” leading the actual walk through the code, it’s OK for someone else to do it. Just make sure it’s clear that someone other than the author is recording issues and merging duplicates as they’re read out. The author needs to be free to listen and to ask and answer brief, clarifying questions about the issue itself—not the solution.
  7. Guard the review scope zealously. If the review surfaces broader design or architecture issues, set up another round of review.
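Steps 3 and 6 above are mechanical enough to sketch in code. This is just an illustration, not part of Gilb Inspection itself; the (reviewer, line, note) tuple format for logged issues is my own assumption about how a recorder might keep the list.

```python
# Illustrative sketch only -- not part of Gilb Inspection itself.
# Assumes each issue is logged as a (reviewer, line_number, note) tuple.

def numbered(source: str) -> str:
    """Return source text with stable line numbers for reviewers to cite."""
    return "\n".join(f"{i:4d}: {line}"
                     for i, line in enumerate(source.splitlines(), start=1))

def merge_issues(*logs):
    """Merge per-reviewer issue lists: sort by line, collapse duplicates."""
    seen = set()
    merged = []
    for reviewer, line, note in sorted(
            (issue for log in logs for issue in log),
            key=lambda issue: (issue[1], issue[2].lower())):
        key = (line, note.lower())
        if key not in seen:  # the same note on the same line counts once
            seen.add(key)
            merged.append((line, note, reviewer))
    return merged
```

For example, if two reviewers both flag a magic number on line 12, the merged log lists it once, in line order—which is exactly what the recorder in step 6 does by hand during the read-out.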

You can review any kind of document this way, not just code. It seems time-consuming, but in terms of defects detected and resolved per hour, it blows away any other QA method I’ve ever seen, especially manual-test-and-fix. I didn’t invent this. It’s called Gilb Inspection. I’m sure they’ve done more with it since I was trained on and used it a dozen years ago.

We had testers complaining of boredom.


Wedding Vows, Pinky Swears, On-Base Percentage, and Twelve-Step Programs (Part 2 of 2)

Posted by Robert Merrill on December 10, 2012 under 100 Words

Last week, Senators Carl Levin (D-MI) and John McCain (R-AZ), senior members of the Armed Services Committee, wrote an official letter to Defense Secretary Leon Panetta asking 10 questions about the Air Force’s recently cancelled Expeditionary Combat Support System (ECSS), which the Senators called “one of the most egregious examples of mismanagement in recent memory.”

According to DisputeSoft, a Washington, DC company that makes its living providing expert testimony in such cases, “Root cause(s) of software failure is almost always traced to misaligned expectations and/or failure to comply with contractual responsibilities and/or industry standards and practices.”

Last week, I wrote about expectations and promises (contractual responsibilities), and surmised that maybe failed software projects were like failed marriages, or kids making pinky-swears on behalf of other kids, or coaches designing football plays their team couldn’t execute.

This week, let’s look at that third reason, “Failure to comply with…industry standards and practices.” That sounds the most damning to me. This sounds like “not following directions.” Oracle and CSC do this stuff for a living—they’re professionals. Isn’t that the mark of professionals—doing things right?

Credentialed Professionals & Performance Professionals

Before even getting into the subject of “industry standards and practices,” let’s consider something more basic. Do professionals always follow directions? Do professionals always succeed? What’s a “professional?”

It’s a word whose meaning has changed.

Once it meant practitioners of an activity requiring rigorous training and licensing, e.g. law or medicine. A Wikipedia contributor makes this interesting observation. “Due to the personal and confidential nature of many professional services, and thus the necessity to place a great deal of trust in them, most professionals are subject to strict codes of conduct enshrining rigorous ethical and moral obligations.”

“Professional” has also come to mean, “Someone who earns their living from an activity,” or even “Someone who performs an activity at a level typical of professionals,” particularly if the activity is difficult enough that results clearly vary. There are professional guitar players but not (so far as I know) professional shoe-tiers, and it’s a movie cliché to characterize a crime scene as, “the work of professionals.”

So at the extremes, there are credentialed professionals by training, licensure, and adherence to a code of conduct (with consequences for breaches of same), and there are performance professionals who do something well enough that they are able to earn a living at it, regardless of how they got there.

Software developers are mostly performance professionals, except that their work is not all that public or easy to evaluate.

Professionalism and Absolute vs. Relative Standards of Success

Now let’s return to the subject at hand—failure, and following standards and practices.

Airline pilots are professionals, and they are nearly always successful, by absolute standards. They have very detailed and precise procedures, and they are rigorously trained and licensed.

Maybe that’s the problem—software hasn’t “grown up”—we don’t have one set of procedures, and we let people make software without enforcing proper training and practice through licensure.

Medical Doctors are professionals, and their training looks even more demanding than that of airline pilots. Doctors are successful most of the time, if the injury is not too severe, or the illness has a clear cause and a proven treatment. But that’s sort of like saying, “Doctors are usually successful when they’re usually successful,” because other diseases seem to be most often a losing battle. Even for those, doctors seem to have procedures that they follow, but following them doesn’t guarantee success. Other times, doctors seem to struggle with diagnosis—identifying the condition and knowing which procedure to apply.

Maybe software projects are like practicing medicine. For some projects, success is almost assured if you “do it right.” But for others, even the best execution of the proper procedures ends in failure, or it’s difficult to pick the proper treatment consistently.

Major League baseball players are professionals. The rules of baseball are precise and complicated, but they have a different feel than the landing procedure for even a pretend 747. The rules define a successful pitch or swing of the bat, but there are no standards and practices for success—that’s where talent, practice, competition, and coaching come in. And success as a batter in absolute terms—getting on base—is elusive. A baseball player who reaches base 40% of the time against professional pitchers is considered to be an excellent batter.

Maybe software projects are like baseball. As Cornelius McGillicuddy, better known as Connie Mack, said, “You can’t win them all.”

Acceptance & Courage

So, what do we know so far?

  • Some professionals have precise procedures and are nearly always successful, in absolute terms. That’s not software, at least not yet.
  • Other professions have defined procedures, but part of the profession is knowing which one to use, and even then, success in absolute terms is not assured. Patients die despite professional medical care.
  • Inability to achieve consistent success in absolute terms does not invalidate a profession. We just apply an empirical standard of excellence, “What the best are able to do.” An on-base percentage of 40% against Major League pitching and fielding will make you a very wealthy man (or maybe someday, woman).

According to Jeff Atwood in “The Long, Dismal History of Software Project Failure,” 5 to 15 percent of projects are “abandoned before or shortly after delivery as hopelessly inadequate.” That would be ECSS. Atwood cites the Standish Group’s CHAOS Reports on software failures as evidence that we’re getting better—the percentage of cancelled projects or scrapped systems in 1994 was two to three times higher, at 30 percent. The reason given is new project management techniques that break one big project down into multiple smaller ones.

Atwood also implies that there are different ways to define failure. Besides abandonment or cancellation, there is “over time, over budget and/or lacking critical features and requirements”—and that still happens 50 percent of the time!

The famous Serenity Prayer of Alcoholics Anonymous and other “twelve-step programs” says, “God grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.”

  • What if software development is just by nature a failure-prone activity, like hitting Major League pitching? Maybe Oracle, Computer Sciences Corporation, and the US Air Force just struck out this time, that’s all. Maybe there’s no blame and no scandal, despite what Senators Levin and McCain are implying. Maybe we’re just lacking in serenity and acceptance, although to the Senators’ point, it would be nice to stop a failure-in-the-making before spending $1B.
  • What if we really could get better as a profession (or trade, or whatever we are), but we lack courage? It would take courage to make results more public, so that we could establish an empirical standard of success. It might also, as the Serenity Prayer says, take courage to change how we do things.

I’ve seen my share of sunk costs, and broken families from too many stressful nights and weekends trying to keep a schedule, and I’ve seen my share of blame-fixing and finger-pointing, and I’ve yet to see any of these things make the failures stop.

It sure feels like we’re lacking in wisdom.

Wedding Vows, Pinky Swears, On-Base Percentage, and Twelve-Step Programs (1 of 2)

Posted by Robert Merrill on December 3, 2012 under 100 Words, Software Development

No one but the most Machiavellian office politician wants to be part of a failed project. My consulting practice exists in part to prevent them, and I’m assuming you want to prevent or avoid them, too. So I decided last week (“Dog Bites Man—Again”) to look into and write about a US Air Force ERP project called ECSS that was just canceled, less than half done, after 7 years of a 7-year schedule and $1B of a projected TCO (Total Cost of Ownership) of $3B.

This isn’t just for you, it’s for me. I’m trying to look at ECSS in the overall context of “software project failure,” as if I don’t know much of anything. Maybe I’ll get out of a rut I don’t even know I’m in. As I wrote in the accompanying newsletter, “You’ve Already Paid Your $3,” last week:

  1. I think I know about software project failure, which means it’s time to retrace my steps.
  2. We learn deep lessons best from our own pain, and second best from outsized stories, like parables.

Google the phrase “software project failure” and you get 22,000 hits. One in particular caught my eye—DisputeSoft, an expert-witness consultancy “located in the Washington, DC metro area.” They make a living off it!