Retrospectives don’t get the respect they deserve compared to the other elements of agile work: they’re right-brained and touchy-feely. And even though I consider them a load-bearing wall of agile, weak retros aren’t felt nearly as quickly as failures in commitments, quality & technical debt, or user feedback. But they are essential to long-term team health, and to maintaining and growing your way of working even in a changing context.
A Retrospective-Technique Library
Today I was coaching some ScrumMasters about the retrospectives they’re about to lead, and I ran across an excellent article by Ben Linders on InfoQ, “Why Do Teams Find It Difficult to Do Retrospectives?” I encourage you to read it, and follow the included links. There are a lot of good ideas to help your retrospectives fulfill the last principle in the agile manifesto:
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Another excellent source of retrospective techniques is Agile Retrospectives: Making Good Teams Great by Esther Derby and Diana Larsen. They offer a five-step retrospective plan that keeps facts, insights, and actions from running together, and activities to help the team with each step.
The Basic Formula, And Adjustments
But sometimes it’s good to get back to basics. We spent our prep time today on the minimalist retrospective formula:
- What went well?
- What went badly?
- What shall we change?
Here are some ways we came up with to fine-tune it.
Encouraging a forward, action-oriented perspective
If your team is prone to dwell on the past rather than improve for the future, rephrase the questions:
- What should we keep doing (or do more)?
- What should we stop doing (or do less)?
- What should we start doing?
We thought of several other alternate wordings—you can do the same.
Hearing all voices
If you aren’t sure you’re hearing from everyone because of more and less dominant team members, especially on a bigger team, start the process with, “I’d like everyone to take the next 15 minutes to write down three items for each of the three questions.” That will let the introverts work the way they work best, by gathering their thoughts first. Then go around the room and let each person read out their list in turn, keeping a merged list on the board.
Making it safe
If you think that people might withhold information out of fear, add some anonymity. For a small team, collect the lists and write them out on the board yourself. For larger teams, have them work in triplets to merge their individual lists, and then have one person from each triplet read out.
If there’s a particular subject that you believe deserves special emphasis, shine a light on it. Sometimes a flashlight is enough. “As you’re writing down your items, spend at least a little time thinking about how well we handled the Definition of Done in the user stories and the conversations with the product owner and subject-matter experts she recommended.”
If the particular subject needs a searchlight, separate it. “Make a separate list of ‘keep doing, stop doing, and start doing’ when it comes to the build environment. That seems like it’s been a source of frustration.”
Helping your team follow through
Finally, to keep retrospectives from reverting from Lessons Learned to Lessons Observed (and repeated), write the action items—not too many!—on special cards and display them where you hold your daily stand-ups. Keeping them in sight is a good way to help people keep them in mind.
Another thread on the IIBA® LinkedIn forum, this one called, “Disturbed by seeing a raft of costly projects with no projected ROI – thoughts?” (you might have to be a group member to see it) prompted me to write a reply that I didn’t want to bury in the LinkedIn forums.
The author, Lance Latham, reported that of 32 projects, only 6 had any cost-benefit analysis at all, and 4 of those only looked at the costs of doing the project in different ways. I’m not surprised. Here are some possible reasons why, and a thought on why it’s not a complete disaster.
Fear of being wrong
If I’ll be punished for being wrong, and I know that early estimates of cost and benefit are likely to be wrong, I won’t create them, or I won’t publish them. If I think the project is a good idea, I’ll create just enough supporting documentation to get it funded.
Ranges and probabilities instead of hard numbers
If I’ve never learned, or am not comfortable, working with numbers that I know are imprecise but not meaningless, I will struggle to come up with them, especially if the scope is vaguely defined. And the scope is always vaguely defined at first.
The very process of refining the scope requires decisions about whether a particular outcome is in scope or not, and that involves some kind of cost-benefit analysis of the thing in question. Or it at least means a guess at the cost of the thing, and a comparison of the uncertain total cost of all the things in my scope basket with a vaguely known budget. (The budget itself is subject to negotiation.) And at this point, the outcome or feature in question is itself only vaguely defined. It’s like a nested Russian doll of chicken-and-egg problems. How do I deal with that?
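One concrete way out of the hard-numbers trap is to work with ranges instead of point estimates. Here's a minimal sketch using a Monte Carlo simulation over three-point estimates; the task names, the numbers, and the choice of a triangular distribution are all illustrative assumptions, not a recommendation for any particular project.

```python
import random

# Illustrative three-point estimates: (low, most likely, high) in person-days.
# These tasks and numbers are made up for the sake of the example.
tasks = {
    "build":  (10, 15, 30),
    "test":   (5, 8, 20),
    "deploy": (2, 3, 10),
}

def simulate(tasks, trials=10_000, seed=1):
    """Return a (10th percentile, 90th percentile) range for total cost."""
    random.seed(seed)
    totals = []
    for _ in range(trials):
        # random.triangular takes (low, high, mode)
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks.values()))
    totals.sort()
    return totals[trials // 10], totals[trials * 9 // 10]

p10, p90 = simulate(tasks)
print(f"Total cost is likely between {p10:.0f} and {p90:.0f} person-days")
```

The point isn't the arithmetic; it's that publishing "probably between X and Y" is honest about the uncertainty in a way a single hard number can never be, and it gives you something defensible to compare against a vaguely known budget.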
Organizational natural selection
People who are good at working with nested chicken-and-egg problems using uncertain numbers don’t seem to climb the corporate ladder to the height where they’re approving major projects. Or they lack other traits which are needed to keep from falling (or being pushed) off the ladder. Or they don’t want to climb the ladder at all.
But maybe it’s not the end of the world
If we believe Jonah Lehrer (“How We Decide”), maybe we shouldn’t be surprised. When analyzing complex decisions, we reach the point of diminishing returns relatively quickly.
Of Latham’s 32 projects, how many were turning out to be not worth the cost (the Ultimate Non-Functional Requirement)? How many projects that were shelved in favor of the 32 would have been winners? How much would additional analysis have helped?
I sometimes advise, but I’ve never ascended to the heights where I approve major projects. But I’ve watched (and been downwind). Here are my impressions.
- We do one of two levels of initial cost-benefit analysis: a) none that I can see, or b) way more than the uncertainties in the numbers justify.
- We’re not quick enough to radically re-scope or even cancel projects based on their actual progress.
I’m writing for a typical client of mine—a software team of a half-dozen people, and the firm that pays them to do projects of a few months to a year in length. Let’s assume it’s you.
You want to get better at making software. You know that change is difficult and that there are no silver bullets. But you also know that you could be twice as good as you are now. (If you’ve never intentionally tried to get better at making software, I can almost guarantee it).
You’ve looked at “Software Process Improvement,” and your head is spinning. Where do you start? You may not even have a software development process. (Correction—you may not have an intentional software development process. The fact is, everybody has a process for everything they do, whether it’s making software or taking out the trash. It’s whatever they do now).
You want to make some simple (but not simplistic) changes, and you need to understand why they will probably work. (You will see below why this is important).
So you hire me and ask me what to do.
If you write most of your software alone, like I do, you have this problem. There’s no one to review your code.
Actually, there are options. I learned about some of them from a discussion on the Madison Area Software Developers’ Meetup mailing list.
You can also get code reviewed on Experts-Exchange.com. A subscription is $12.95/month. It’s not as good for browsing through code to learn from, because the reviews are organized by technology and listed under the specific question rather than gathered in one place. But EE has been a good place to get answers to technology questions in general, especially because it’s well moderated and questioners have to say which solution actually worked.
A few thoughts on code reviews in general:
- For maximum effectiveness, use two or three reviewers. The second reviewer will spot half again as many issues; it’s amazing. The added impact of more than three isn’t worth the added time.
- Don’t try to review too much in one session. Sessions should go no longer than an hour—two at the outside. That includes both individuals reading code and the review read-out meeting itself.
- Require reviewers to go through the code by themselves, in advance. Use a consistently line-numbered version to make it easy to log issues and merge them up later.
- Over time, you’ll establish a metric of how much code can be reviewed in an hour or two, making it easier to chunk it.
- The meeting should be solely to read out issues. Reviewers should have already reviewed the code and made their list of issues. How to resolve issues is up to the author’s discretion, but should happen apart from the meeting. “Committee of the whole” problem-solving is highly tempting & wasteful.
- If the author feels “on the spot” leading the actual walk through the code, it’s OK for someone else to do it. Just make sure it’s clear that there’s someone recording issues and merging duplicates as they’re read out, and that it isn’t the author. They need to be able to listen and ask & answer brief, clarifying questions related to the issue itself—not the solution.
- Guard the review scope zealously. If the review surfaces broader design or architecture issues, set up another round of review.
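The consistently line-numbered version mentioned above doesn't need any special tooling. Here's a minimal sketch of one way to produce a review copy (the function name and the output format are my own, not part of any inspection standard):

```python
def numbered_copy(src_path, dst_path):
    """Write a line-numbered copy of src_path to dst_path, so reviewers
    can log issues as "line 42: ..." and duplicates merge cleanly later."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for n, line in enumerate(src, start=1):
            dst.write(f"{n:4d}  {line}")
```

Freezing one numbered copy for the whole review round matters more than the format: if everyone reviews the same snapshot, two reviewers logging "line 42" are guaranteed to mean the same line.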
You can review any kind of document this way, not just code. It seems time-consuming, but in terms of defects detected and resolved per hour, it blows away any other QA method I’ve ever seen, especially manual-test-and-fix. I didn’t invent this. It’s called Gilb Inspection. I’m sure more has been done with it since I was trained in it and used it a dozen years ago.
We had testers complaining of boredom.
What do you do when you realize you’ve dug yourself a hole?
I’m talking about the kind of holes that we dig ourselves in business, or the kind of gigantic hole that we in the United States have dug ourselves.
- Stop digging (Will Rogers). Doing whatever you’ve been doing, only with more fervor, is probably going to result in more hole.
- Re-evaluate your thinking (Albert Einstein, Part I). We all have mental models of the world and the people who populate it. They’re essential approximations of reality because reality is too much for our little brains (or we’re just in denial).