Background
On November 22nd, 2019, I gave the closing keynote at Scrum Deutschland, a talk called 'The Four Things You Do To Prevent Value Delivery'. In this keynote, I discussed the trends I found during my research at multiple organizations.
For background, I'm often called into organizations to investigate their ability to deliver value. Such a study is holistic; it involves looking at the organizational design, technical capabilities, culture and knowledge, and the type of control and metrics used to define success. I often get these kinds of requests, usually for one of three main reasons:
- Something went wrong, and the organization wishes to know what happened, where, and when.
- Work (or sometimes even a transformation) is ongoing, and the organization wants a second opinion on risk or improvement factors.
- The organization expects an inquiry or audit into its delivery capability and wishes to improve things before that time, or at least have a rebuttal.
As expected, repeated studies have allowed me to observe some patterns. Some of these have to do with 'the system': the whole of hierarchy, setups, technology, and so on that makes up an organization. What surprised me was the human element, not its presence, but how much it impacted everything. I didn't find any malicious behavior at any point. What I found were people trying their best, making an effort to be helpful, and yet this often resulted in issues in the delivery capability. It is these observations that I shared in my (happy to say well-received) keynote. And now in these four articles.
Let’s start with an example.
I was called in to assess an organization that was having difficulties. They were several months overdue on delivering a core system. Estimates were wildly unreliable, the Backlog had only grown, morale was low, and yet nothing seemed to be happening. On paper, the organization used a combination of Scrum and Project Management. When I studied where the work for each team (and team member) actually came from, I found a still-running Lean improvement program, several projects both inside and outside the scope of two programs, and Scrum, via a set of Product Owners. With no single clear source of truth, choice, and work, team members would switch scope depending on who had spoken to them last, and work kept being added to an already overburdened system. When I discussed this with the people in charge, it was not new information to them. The approach had been deemed sound when it was devised, and so it should work now; clearly, the issues lay somewhere else. When I pressed further to reconsider the organizational setup using the experience gathered thus far, I was met with the following quote:
'We will not be discussing the approach again.’
Elaboration
In retrospect, it was a predictable response. Most people, myself included, would probably point to internal politics as the culprit here. It's easy to launch initiatives, but very costly to cancel them. It is politically expedient to be successful all the time. This reasoning is a correct, well-fitted explanation of the quote and behavior in the example.
And yet, it nagged at me. Sure, internal politics are awful and cause a lot of damage to organizations, and those who play them are walking a tightrope of damage control. But is this the whole answer to that response? There was a stubbornness in their response. It was more than refusing to admit failure to prevent loss of face. They were more defensive than that. It puzzled me.
And a few months later, I started reading Annie Duke's book 'Thinking in Bets'. And suddenly, things started coming together. What if the politics came second? What if I was dealing with people who had so thoroughly based their identity on their ability to solve problems and manage complex things that the idea that they 'got it wrong' was inconceivable to them on a fundamental level? But then, that raised the question: why was being wrong about something such a big deal?
So, down the rabbit hole I went.
Explanation (Why this is wrong)
It turned out the rabbit hole wasn't that deep; the answer wasn't that difficult. People can't handle complexity. Cognitively, I mean. We have a bunch of built-in patterns to reduce the big, scary, chaotic world down into manageable chunks. Incredibly useful back when we climbed out of the trees and started living in small groups in a dangerous world full of things trying to kill us. Got bitten by a snake in the grass once? From now on, the grass is where you need to be wary.
Much research into this collection of biases and fallacies was done by Daniel Kahneman and Amos Tversky, resulting in a Nobel Prize and a book that should be mandatory reading for everyone with a functioning brain over 16: Thinking, Fast and Slow. While parts of it have since been critiqued as part of the larger replication crisis in science, it offers an invaluable start in understanding the human condition. This field, Behavioral Economics, has been expanding since and has a lot to offer for those of us trying to enact change.
So, back to dealing with complexity. It turns out our brain is making a considerable effort to reduce the complexity of the world for us through the use of these biases.
- You did something, and it went wrong? Clearly, you should've prepared better. The assumption here is that the outcome of a decision is always causally linked to the quality of the decision. It's not: many other factors are at play, so the outcome of a decision and the quality of that decision are merely correlated.
- Expecting something bad to happen? Something bad will happen. We hate being wrong on a biological level. So if we have a belief, we look for things that confirm that belief and ignore things that don’t. In other words, we have a confirmation bias.
- Learned something that didn’t fit what you already knew? Explain it in such a way that it does fit. Two truths cannot contradict each other. But sometimes they do, or it seems that way at least. And remember, we hate being wrong. So rather than admit we were wrong about something when encountering cognitive dissonance, we will instead twist this new contradicting information in such a way that it no longer contradicts.
When the biases stop working, or even threaten to be overwhelmed by evidence, is when we are confronted with the truth of the world. Namely, that it's much less like a game of chess, where you know almost all the variables in play and plans actually mean something, and much more like a game of poker, where there's a lot you don't know and not everything you think you know can be trusted. And that world is a terrifying place. So scary, in fact, that our brains will do whatever it takes to convince us the world IS predictable and straightforward. Thus leading to the behavior in the example.
It's not that none of those people wanted to admit fault to others (though they didn't want that either). Much more, it was that acknowledging that the result was different from what they had predicted implied there were things outside their control or knowledge, meaning they were far less in control of their fate than they wanted to be. And this simply could not be. If they could blame anything but complexity, or at least shift the conversation away from it, that would make things make sense again. And so the discussion was shut down.
'We will not be discussing the approach again.'
Solution
This is not a solution, of course. This is just sticking one's head in the sand. And yet this is what we do all too often, either by ignoring things (consciously or unconsciously) or by explaining them away. Until reality catches up, as it always does.
So what then? Well, in all honesty, that story about playing chess versus playing poker didn't come from me. I read it in that book by Annie Duke I mentioned earlier. I recommend it; there's a lot of valuable advice in there. But the core idea is one I'd like to share here: start thinking in terms of bets. What's the chance of being right? What's the risk of being wrong? What happens if I'm wrong? What should I see if I'm right? And because you're playing a game of imperfect information, a bad outcome once doesn't necessarily mean your decision was wrong. It could just be bad luck. So maybe try some things a few times, if the cost isn't that high, because a betting strategy can only be evaluated after multiple bets. So do small experiments, numerous times. And if you find something that works today, don't assume it'll work tomorrow. Think in bets.
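To make that last point concrete, here is a minimal sketch of why a single outcome says little about the quality of a bet. This is my own illustration, not something from Duke's book, and the 60/40 win probability, the one-unit payoffs, and the function name are all assumptions made purely for the example.

```python
import random

random.seed(42)

def run_bets(win_probability, payoff_win, payoff_loss, num_bets):
    """Simulate num_bets identical bets and return the cumulative result."""
    total = 0
    for _ in range(num_bets):
        if random.random() < win_probability:
            total += payoff_win
        else:
            total -= payoff_loss
    return total

# A "good" bet: 60% chance to win 1 unit, 40% chance to lose 1 unit.
# Its expected value per bet is 0.6 * 1 - 0.4 * 1 = +0.2 units.

# Judged one bet at a time, the good bet still loses plenty of times.
single_results = [run_bets(0.6, 1, 1, 1) for _ in range(10)]
print("Ten single bets:", single_results)

# Judged over many repetitions, the quality of the strategy becomes visible.
print("Total after 1,000 bets:", run_bets(0.6, 1, 1, 1000))
```

Any single attempt in this sketch still loses roughly 4 times out of 10, so judging the strategy on one result is misleading; only after many repetitions does its positive expected value show up in the total. That's the whole argument for small, repeated experiments.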
And try to prevent yourself from being certain of something. Because that's when you lose connection with reality. As always, the only certainties you have in life are that one day it's going to end, and that before (and after) that time, the government will take half your stuff. The rest you should be exploring.
Don’t play chess when you should be playing poker.
Sources & Inspiration
Daniel Kahneman - Thinking, Fast and Slow
Annie Duke - Thinking in Bets
Robert Burton - On Being Certain
Barry O’Reilly - Unlearn
Geoff Tuff & Steven Goldbach - Detonate