I was today years old when I learned that the 3 Cs of a User Story are not about the elements that make up a good User Story. I constantly run across this memory aid in blog posts, books, and training. I thought I understood its purpose, but as it turns out, the actual value of this technique is an open secret. We’ve all read about it, but I don’t think most of us understand it at a fundamental level.
The phrase “3 Cs of User Stories” is a memory aid that helps us remember a user story’s three progressive elaboration phases. The three Cs stand for Card, Conversation, and Confirmation.
Before researching this blog post, I would have told you that the expression helps us remember all of the components of a User Story—like a mnemonic checklist. In the interest of openness, I’m willing to admit I was wrong.
The ultimate role of a story is to arrive at specific, comprehensive confirmation statements. The process Ron Jeffries proposed in 2001 serves as a road map we can follow to ensure we reach that goal.
The 3 Cs demonstrate the evolution from idea to executable work.
Given my tendency toward verbosity, and the fact that many people have written entire books on User Stories, I’ll give only a brief synopsis here.
A user story is the de facto method for capturing customer needs on Agile projects.
The story is typically written from a user’s perspective, in simple language that users easily understand. The most common template is “As a <type of user>, I want <goal>, so that <benefit>,” though variations exist. The gist is to cover the who, the what, and the why.
When we follow the process as intended, we’ll resolve many of the pain points common to Agile teams:
Lack of requirements
Lack of context for requirements
Lack of understanding of business rules
Teams building the wrong thing
Given that the User Story is a staple on most agile teams, learning to maximize its usefulness will drastically improve the value your teams deliver to customers.
The card is just a placeholder that signifies there is a customer requirement. It’s like an item on a to-do list or a reminder taped to the fridge.
At a minimum, the card should cover the who, what, and why related to the request. Context can be helpful, but you don’t want to go overboard. One reason Agile teams popularized index cards for stories was the need to keep things simple. In practice, teams found the index card to be an optimal size for capturing a requirement. If you can’t fit the request on a card, it’s too complex, and you need to figure out how to simplify it, even if that means splitting the story.
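For illustration, a card that covers the who, what, and why in the common “As a…, I want…, so that…” template might read (the feature here is a hypothetical example, not from the original):

```
As a returning customer,
I want to reset my password,
so that I can regain access to my account.
```

That single sentence is enough to remind everyone a need exists; the details come later, in conversation.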
I’m always amazed at how many people skip this basic information, thinking they’ll remember what they meant by a five-word sentence fragment. Inevitably, when we come across the product backlog item again, even its creator has no idea what it meant.
One User Story anti-pattern treats cards as binding contracts between team members: the development team builds precisely what was officially documented on the card and cannot be held accountable for unspecified elements, and testers verify only what the card reports.
This behavior goes against the agile value of “Customer collaboration over contract negotiation.” Customer collaboration shouldn’t stop after documentation. There is a fundamental difference between a deconstructed business requirements document and User Story cards. The role of stories is to facilitate requirements gathering, not be the final requirement.
The card is a commitment to have conversations about the need. These conversations aim to reach a detailed understanding of what is expected for the change to be successful.
Trying to document the precise need of a product backlog item before bringing it to the entire team falls into the fallacy that we can know everything upfront. The goal at this point is simply to remember that there is a need. We’ll flesh out the details later.
At some point in the team’s process, we select a card from the product backlog, and conversations start to ensure everyone involved is clear on what needs to happen. These conversations help illuminate and clarify unknowns as early as possible. At this point, we’re striving to fill in any gaps in requirements.
Take note of the plural: “conversations.” This activity is not a single step in the process; we may need several conversations before we’re clear on exactly what changes the team should implement. The collaboration to reach a common understanding is continuous and incremental. For those who are pressed for time and looking to eliminate meetings (and thus conversations), I encourage you to consider how much time will be wasted on rework without them.
The need for these conversations doesn’t hold up development, as the team can start coding once they feel they have enough clarity around the ask. Also, we don’t stop conversations just because we started development. We’ll continue to have as many conversations as needed while learning more about the work to ensure that we maximize value by delivering the right thing.
Much important information will come out of these conversations, but we don’t need to document everything on the card. As a rule of thumb, try to keep documentation to the minimum required. I once worked with a fellow who wrote a magnum opus for each ticket he came across. As a result, the team wouldn’t read anything because they didn’t want to invest the time to differentiate the relevant bits.
At this point, you’re probably wondering: if we’re going to have conversations and learn important details but not update the card, won’t we forget what we discussed? This is where confirmation comes in to save the day.
It may be necessary to create limited documentation or pictures to facilitate conversations, but we should define the majority of the vital information as confirmations.
This phase aims to boil down everything we’ve learned into statements that reflect what is essential to fulfilling the need. We’re building a record of what we’ve agreed to develop: not a contract, but explicit documentation of what must happen to satisfy the requirement. A checklist, if you will, that ensures a shared understanding.
“How do we measure success?” is the driving question behind this activity.
We more commonly refer to confirmations as acceptance criteria, and there are standard ways of documenting this information. The behavior-driven development niche has made great strides on the documentation front; the closer we can get to executable code here, the better. Strive to specify everything in terms of the criteria the work must meet to provide value to the customer. Stakeholders explain what they want in terms of what it would take to satisfy the need, and developers use those same statements to verify that they’ve fulfilled the requirement.
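To make “the closer we can get to executable code, the better” concrete, here is a minimal sketch of a confirmation written as an executable Given/When/Then check, in the spirit of behavior-driven development. The password-reset story, the `reset_password` function, and the eight-character rule are all hypothetical, invented for illustration:

```python
# Hypothetical system under test: a password reset that requires
# new passwords to be at least 8 characters long.
def reset_password(user, new_password):
    if len(new_password) < 8:
        return {"ok": False, "error": "password too short"}
    user["password"] = new_password
    return {"ok": True}

# Confirmation: "A password shorter than 8 characters is rejected,
# and the old password remains in effect."
def confirm_short_password_is_rejected():
    # Given a registered user
    user = {"name": "ron", "password": "old-secret-123"}
    # When they choose a 5-character password
    result = reset_password(user, "short")
    # Then the reset is rejected and the old password is kept
    assert result == {"ok": False, "error": "password too short"}
    assert user["password"] == "old-secret-123"

confirm_short_password_is_rejected()
```

Stakeholders can read the Given/When/Then comments as plain acceptance criteria, while developers run the very same statements as a test, which is exactly the dual role confirmations are meant to play.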
These acceptance criteria will help facilitate testing the product backlog item—on both the technical and stakeholder sides. Stakeholders can use the acceptance criteria to validate that the new features meet the expectations that they set forth for success in those previous conversations.
These acceptance criteria and the definition of done will determine when the product backlog item is truly “done” and ready for release.
Following this process solves some common anti-patterns.
For instance, imagine Ron has asked us to make a change. We’ve met with Ron to discuss the details, asked any clarifying questions, and created any diagrams needed to ensure we all share an understanding of the need. We worked with Ron to boil that information down to a list of expectations he agrees would determine the success of the change. We’ll implement the functionality using this same list to make sure we meet all of Ron’s expectations.
Following this process doesn’t leave much room for the team to build the wrong thing. Suppose we did somehow manage to implement something different than Ron expected. In that case, we can go back to the acceptance criteria (generated collaboratively) and pretty quickly determine where we have any gaps in requirements.
We understand the level of effort better when we’ve documented precisely what needs to change at this level of detail. Our estimates will get more accurate because we’ve defined the scope better. A more detailed scope will lead to better predictability as we’ll know how to split user stories and only bring what we can reliably complete into the Sprint. As a result, mid-sprint scope creep will be less likely.
In-depth conversations with stakeholders are a prerequisite to reaching such a detailed list of acceptance criteria. To compose this list, the team will have had deep conversations about exactly what we’re building, why we’re building it, and how it affects other parts of our system. We’ve had ample opportunity to ask clarifying questions with the right people in the room, and we’ve boiled all of this information down to succinct statements of what will change in the system. It’s nearly impossible to reach this point and still lack context about what we’re building.
We know exactly how to test the solution because we’ve defined it in the acceptance criteria. We aren’t trying to read the story and guess how the developer chose to interpret the need; we know precisely how the system should function after the change and how to test to ensure quality.
I mentioned at the beginning that I’ve misunderstood the purpose of the 3 Cs until today. Initially, I thought of them as elements that needed to be present in a User Story. Next, I shifted to thinking of them as the phases that a story moves through. Now I think of it as a process where the second and third steps can be repeated and revisited.
A trainer exposed me to a diagram that added Consequences and Construction to the original 3 Cs. The word consequences was defined as “evaluate what you built first as a team then with business stakeholders and in tests with customers and users.” The word construction was described as “Teams create software referencing notes and pictures from conversations to help them remember details.”
I think these add-ons arose because their creator viewed the 3 Cs as a life cycle: the User Story comes into being, then we have a conversation, develop acceptance criteria, write the code, and see how we did. However, I don’t believe the actual value of the 3 Cs comes from seeing them as a life cycle. The better opportunity is to view them as a progressive elaboration process that allows everyone involved to reach a shared understanding. Framed this way, adding construction and consequences as additional phases doesn’t make sense.
This article contended that the 4th C should be context. Though I agree with much of the article, as I stated above, I don’t think you can follow this process as intended and still be missing context. Context is what Ron Jeffries designed the 3 Cs to ensure.
I agree that many teams out there are missing context, but I don’t think it’s a result of a missing C. I think it’s the result of not having effective conversations and not documenting the results of those conversations as confirmations.
The User Story isn’t a communication device that we throw over the wall to the developers. Nor is it a mini-contract that lets us point fingers when some aspect of the work goes unfinished. Cards are placeholders that commit us to conversations, which in turn let us document our shared understanding as confirmations. Following these 3 Cs, our teams will deliver more value to customers.