Reading time: 90 seconds
In QualiMente’s Secure Cloud team retrospective on Friday, we identified a couple of improvement actions.

For context, we are several weeks into building a new service called k9
that helps people and access analysis tools understand “who has access to what” in AWS.
As product owner and team lead, I prioritized building something useful, so we could learn about the domain and get feedback from customers, over internal aesthetics I expected to change anyway. As with many new projects, we know a lot more about the problem domain, and even the underlying AWS security services, than we did a month ago. Also, ‘code review’ was the skill we focused on learning last week as part of our continuous improvement efforts.
Now we’re converging the product’s internal structure toward that newly learned reality before expanding the feature set. We started some of that refactoring in the middle of last week. Some of those refactoring efforts were glommed onto a significant feature addition, and everyone ended up agreeing that the change was too large.
In our retrospective, we decided to make a couple of behavioral changes related to refactoring and code reviews:
- Perform small refactorings within feature stories, but create a dedicated story for a large refactoring. (1)
- Reviewers should be stricter about improving code when they see issues, rather than letting those issues grow.
We’re also going to create a checklist to support k9
change authors and reviewers.
This story was entered right after our retrospective and filled out a bit this morning when I groomed the backlog.

This checklist will build on QM’s standard code review checklist by prompting for the behavioral changes we want to make in our code review process and for the failure-handling guidelines that are important in security work. Naturally, this checklist will evolve over time.
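
To make that concrete, here is a rough sketch of what a first version might look like, drawn only from the changes described above; the real items will come from the story and QM’s standard checklist, so treat these as illustrative placeholders:

```markdown
## k9 change checklist (illustrative sketch)

Author
- [ ] Is this change a feature, a small refactoring within a feature story,
      or a large refactoring that deserves its own story?
- [ ] Are failure-handling paths (errors, denied access, missing data)
      explicit and tested?

Reviewer
- [ ] Did I ask for improvements where I saw issues, instead of letting them grow?
- [ ] Does the change reflect our current understanding of the AWS security
      services it touches?
```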
Zzzt! Our feedback loop is closing.
I hope yours does, too.
#NoDrama
(1) Yep, there are a number of tradeoffs when modeling refactoring as a discrete story or task, particularly around visibility, measuring throughput, and predicting releases.