Outcomes over outputs

Short summary of Outcomes over outputs

Florian Sauter

4 minute read

The concept of outcomes over outputs caught my attention early in my career - it’s such an obvious and seemingly simple concept, yet still hard to implement properly. And at times it produces challenging situations when working with people who are used to a more “command and control”-like environment.

The book

The book I’m summarizing today is a short one titled “Outcomes over Output”. It starts by stating the obvious: delivering many “things” in terms of features - our “outputs” - does not necessarily deliver any value for the business in the form of outcomes or impact. It then aims to answer the question of how outcomes can help us get there instead.

The book defines outcomes as “the human behaviors that drive business results … happen when you deliver the right features” (and ideally by delivering as few features as possible).

One interesting point here is navigating the levels properly. The book points out that there’s a natural chain of causality between Resources, Activities, Outputs, Outcomes, and Impact. Managing by outputs is dangerous because there’s no guarantee of any outcomes. Managing by impact alone, however, is a common misconception and tends not to work either, because impact goals are usually too high-level to be useful (think of “we need to make more revenue”). Outcomes are the sweet spot: a focus on changing customer behavior in such a way that it drives a concrete business result.

One key problem here is that oftentimes it’s not entirely clear whether a certain activity leads to the behavioral change (outcome) we want. That’s where experiments come into play to learn quickly whether any given activity works as intended.

Finding the right outcomes

The key starting question is: “what are the customer behaviors that drive business results?”. So we are looking for things that people do, which have the nice property of being observable and measurable. And we are looking for leading indicators on the way to our intended business result. Normally, there will be a lot of uncertainty in the outcomes we come up with. That’s why we should call them hypotheses and phrase them in the form of a) what we believe, and b) the evidence we’re seeking to know whether we’re right or wrong.
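To make that two-part format concrete, here’s a minimal sketch of how such a hypothesis could be captured as a structured record. The field names and the example are my own illustration, not from the book:

```typescript
// A hypothesis pairs a belief about customer behavior with the
// evidence we will look for to learn whether we're right or wrong.
interface OutcomeHypothesis {
  belief: string;   // a) what we believe (the behavior change we expect)
  evidence: string; // b) the observable signal that confirms or refutes it
  metric: string;   // the leading indicator we will actually measure
}

// Hypothetical example for a sign-up flow:
const onboardingHypothesis: OutcomeHypothesis = {
  belief: "If we simplify onboarding, more trial users will create a first project within a day",
  evidence: "The share of new sign-ups that create a project on day one goes up",
  metric: "day-1 project creation rate",
};

console.log(`We believe: ${onboardingHypothesis.belief}`);
console.log(`We'll know we're right if: ${onboardingHypothesis.evidence}`);
```

Writing hypotheses down in a structure like this keeps the belief and the evidence explicit, which makes the later review with stakeholders much easier.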

Joshua Seiden puts forward three “magic questions” to ask when creating outcomes:

“What are the user and customer behaviors that drive business results? (This is the outcome we want to create.)

How can we get people to do more of these behaviors? (These are the features, policy changes, promotions, etc. that we’ll do to try to create the outcomes.)

How do we know that we’re right? (This uncovers the dynamics of the system, as well as the tests and metrics we’ll use to measure our progress.)”

A word on technology initiatives

The author points out something I can confirm from my own experience: internal tech initiatives are often very bad at defining outcomes. Refactorings and the like are often measured by how many subsystems have already been touched or completed. It would, however, be much better to measure them in terms of behavior, e.g. how many users are on the old vs. the new system, or how many developers now have access to new capabilities.
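As an illustration (my own sketch, not from the book), a migration could report the behavioral metric directly - the share of users actually served by the new system - instead of a count of migrated subsystems. The types and numbers here are hypothetical:

```typescript
// Sketch: measure a migration by user behavior, not by subsystems touched.
interface MigrationSnapshot {
  date: string;
  oldSystemUsers: number; // users still served by the legacy system
  newSystemUsers: number; // users already served by the new system
}

// Outcome-style metric: what share of users actually run on the new system?
function migrationProgress(s: MigrationSnapshot): number {
  const total = s.oldSystemUsers + s.newSystemUsers;
  return total === 0 ? 0 : s.newSystemUsers / total;
}

const today: MigrationSnapshot = {
  date: "2024-01-15", // hypothetical snapshot
  oldSystemUsers: 8200,
  newSystemUsers: 1800,
};

console.log(`Migration progress: ${(migrationProgress(today) * 100).toFixed(1)}%`); // 18.0%
```

A number like this tells us whether the refactoring changes anything for its users, while “12 of 20 subsystems done” tells us nothing about behavior.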

Output-based planning is the root of the problem

I already covered the broader idea of better planning and roadmap creation in my article on roadmaps (link). The book puts the problem nicely:

“Roadmaps fail when they present a picture of the future that is at odds with what we know about the future. If we were setting out to cross an uncharted desert - one we cannot see from the air, and that was of unknown size - it would be crazy to predict that we could cross it in a few hours. [..] you’d be reckless to make a prediction. Instead, you’d probably present your journey (if you chose to make it at all!) as an exploration.”

The parallel to product development is that there are often many unknowns and uncertainties involved there as well. So instead of outputs, we want to plan for themes of work, problems to solve, outcomes to deliver, or a certain customer story.

A big challenge to overcome here is stakeholder expectations. Many stakeholders want a fixed date and a fixed feature set. And it might feel frustrating that the best answer we can give is “we stop working on something when we’ve made enough progress to feel satisfied.” One way to deal with this is to establish clear hypotheses and measures of success upfront with stakeholders, and to regularly review performance against them. But it definitely also requires a cultural change towards trust, and a culture that accepts failure and learning and is ready to talk about it. The way to get skeptical stakeholders on board is to show them what’s in it for them: actual impact on the business, versus ticking off a list of outputs.

Rethinking bounded contexts

Learnings about DDD, bounded contexts and team autonomy

Florian Sauter

5 minute read

I was recently wrestling quite a bit with my colleagues about how we could benefit from DDD, especially from the notion of bounded contexts. We somehow felt that the concept is an important building block for how we split up our services, but we had some trouble really making the DDD ideas work in practice.

Right on time, I stumbled across a video by Nick Tune which I found very useful (the full video is linked at the end of the post). And despite the fact that his views seem to upset parts of the hardcore DDD community, I found his core idea really interesting: he suggests linking the idea of bounded contexts strongly to the idea of team autonomy, and he concludes that bounded contexts should primarily be reflected in teams that own “product and tech things” that change together for a business reason.

For Nick, bounded contexts are merely a tool to create a large number of different models of how things could be split up. Based on these different options, he suggests applying the Theory of Constraints to select (and justify!) a certain cut - always questioning whether this specific cut really resolves a bottleneck for quick software delivery. And once a cut is found, it has to be reflected in the organizational architecture (= the team setup) as well to be effective.

Here are a few takeaways on the approach in general:

Understand the business you’re in

Sounds pretty obvious, but it’s worth stating: thinking about bounded contexts doesn’t make sense from a purely technical point of view. The starting point has to be the business model, the business problems one wants to solve, the process steps this involves, as well as the customers’ needs.

Start with a broad funnel of different models

Starting with the definition of bounded contexts right away is difficult. There are, however, several clues worth exploring for what the bounded contexts could look like. These include linguistic boundaries (i.e. the same or similar phrases having different meanings in different parts of the business), data and data-flow considerations (who owns which data? how does data flow? are there dependencies between data?), domain expert boundaries (what area can a single expert cover, and where does another person’s expertise start?), as well as business process steps (which sometimes map directly to bounded contexts).
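To make the linguistic-boundary clue concrete, here’s a small sketch (my own illustration, not from the talk) of how the same word can denote different models in different contexts, with an explicit translation at the boundary:

```typescript
// Sketch: "Product" means different things in different bounded contexts.

// In a catalog context, a product is something customers browse and compare:
namespace Catalog {
  export interface Product {
    sku: string;
    name: string;
    description: string;
    priceCents: number;
  }
}

// In a shipping context, the same word refers to a physical item:
// weight and dimensions matter, marketing copy and price do not.
namespace Shipping {
  export interface Product {
    sku: string; // shared identity across contexts
    weightGrams: number;
    widthCm: number;
    heightCm: number;
    depthCm: number;
  }
}

// A translation at the boundary keeps each model small and honest,
// instead of forcing one bloated shared "Product" type on everyone.
function toShippingProduct(
  p: Catalog.Product,
  weightGrams: number,
  dims: { w: number; h: number; d: number }
): Shipping.Product {
  return { sku: p.sku, weightGrams, widthCm: dims.w, heightCm: dims.h, depthCm: dims.d };
}
```

If one team constantly has to negotiate changes to a shared “Product” model with another team, that’s exactly the kind of linguistic clue that suggests a context boundary.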

Another category of clues can be the existing organizational setup. However, I don’t like that too much as a clue, as - especially in a startup setting - the organizational design is often less deliberate and less stable than we would want it to be for this purpose.

Now, once we have enough clues for what different cuts of bounded contexts could look like, it’s time to compare them and select a winner.

Picking the right cut for your bounded contexts

Now, here’s the key learning: pick the right cut of your bounded contexts based on the bottlenecks you want to resolve. So, to the disappointment of my inner engineering soul, it’s not about picking a model that feels elegant, tidy, or smart by any other engineering measure. It’s about removing constraints and bottlenecks. And these bottlenecks need to be carefully evaluated, as they serve as the main justification for the cut. And, surprise, surprise, these bottlenecks can often be found in team organization, team communication issues, and team dependencies.

The bottlenecks are a function of what we want to optimize for. I personally like to work in environments that produce innovation, and for those kinds of environments I sense a broad consensus that the most important optimization goal is the speed of incremental software delivery, which allows for fast feedback cycles on ideas and rapid improvements. That aligns well with the notion that bounded contexts, together with the right team setup, can help solve coordination bottlenecks by cutting dependencies at the right spots.

Can you cut all dependencies?

I have never seen a real-world system that looks as nicely cut as any of the book examples. Reality tends to be messy, so the expectation of clean boundaries everywhere is, I would say, just not achievable in practice.

So the best outcome one might expect from the exercise is probably a cut of contexts and teams that are reasonably independent in delivering on their core purpose. But how do you then make sure that there is still good alignment between the remaining ugly dependencies and the teams responsible for them? Nick suggests a couple of activities that very much focus on the human side of this alignment, basically saying that everyone should have a good overall understanding of the different corners of the system - e.g. through show & tell events, occasional cross-team pairing, or event storming sessions as an artifact that fosters this common understanding.

On top of that, the bounded contexts should be allowed to evolve over time alongside the teams, the changes in business processes and needs, and the technical discoveries that occur as we go.

Interested in learning more firsthand from Nick? Then watch the video: