At the beginning of 2020, we did as we had always done. We had weekly Release Board Meetings run by Line Managers, who would double-check and sign off on the release and be overall accountable for the outcome. This approach would not work, for two reasons:
- to succeed in increasing speed we needed to let go of the top-down control approach: decision power had to be placed in the teams, and
- we didn’t have line managers anymore, so who was actually accountable? It felt like the most obvious example of an opportunity to let the teams decide “the how”.
To facilitate speed, we therefore increased the frequency of Release Board Meetings to twice a week and told the teams that they were now in charge of quality and coordination. It was a small change made with the best intentions, but it created a vacuum for decisions, with little to no structure to support the teams in filling that vacuum and taking responsibility.
As the teams were busy delivering value to the business, it never became a priority to spend time on indirectly value-adding activities such as finding new ways to ensure the quality of changes.
It is an example of process debt, and in Scrum@Scale we met it under “Delivery” (in the Scrum Master cycle), as it relates to delivering a consistent flow of valuable finished products to customers – at high quality.
With an increasing number of releases in an environment that was becoming ever more complex and growing in size, we started seeing a rise in the number of incidents that – due to not living up to quality standards – had an unintended and unexpected effect on business processes.
The impact on the business from these incidents was downtime – significant downtime, which matters when the daily business is a matter of millions of euros.
The impact was also visible within the infrastructure teams. The infrastructure engineers experienced hard backlashes from other parts of the digital organization, and with each major incident our credibility took a hit.
The engineers became risk averse, and the knee-jerk reaction from the management layers in the organization was to want to install top-down controls again.
We took a deep breath and tried to remember what we wanted to achieve: we wanted teams to have freedom to decide the how, because we wanted speed and agility. And we knew we could not achieve this by dictating improvements.
So instead we created a structure for the teams to act within – a clear outline of the playing field and the rules. We called this structure the “Release Maturity Model” (RMM), and it is illustrated in the figure below.
The RMM outlined 7 areas relevant to the release process (the playing field), including how source code is stored, how tests and validations are made, how risk is assessed, how change requests are documented and approved, how deployment is orchestrated, and how the team learns from the experience of each release. The rules were that the teams were requested to
- assess their current maturity (on a 4 level scale from “this is not something we do” to “this is fully automated in our setup”),
- decide on two areas where they would commit to improving their maturity in the next quarter, based on their evaluation of where it would be of highest value in their specific situation, and
- decide how they would improve.
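To make the mechanics of the rules concrete, here is a minimal sketch of a team self-assessment in Python. The area names and scores are hypothetical (the original lists six example areas of the seven), and sorting by lowest maturity is just one possible heuristic – in the actual model, teams chose the two areas based on where improvement would be of highest value in their specific situation.

```python
# Hypothetical RMM self-assessment: 7 areas scored on a 4-level scale,
# from 1 ("this is not something we do") to 4 ("this is fully automated
# in our setup"). All area names and scores are illustrative.
ASSESSMENT = {
    "source_code_storage": 3,
    "testing_and_validation": 2,
    "risk_assessment": 1,
    "change_request_documentation": 2,
    "change_request_approval": 3,
    "deployment_orchestration": 2,
    "release_learning": 1,
}

def pick_focus_areas(assessment: dict[str, int], count: int = 2) -> list[str]:
    """Suggest the lowest-maturity areas as candidates for next quarter.

    This is only a starting point for the team's discussion; the rules
    left the actual choice (and the "how") entirely to the team.
    """
    return sorted(assessment, key=assessment.get)[:count]

print(pick_focus_areas(ASSESSMENT))  # → ['risk_assessment', 'release_learning']
```

The point of keeping the structure this light is that it defines the playing field (the areas and the scale) without dictating how any team improves.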
It didn’t work.
The approach with the playing field and the rules was very well received by the teams we tested it on – they eagerly discussed where and how to improve. But the initiative failed quickly, as the teams did not move past the discussion.
Even though we had created a vacuum for them to make decisions AND provided a structure to support them, there was something missing. Our conclusion was that there were two unsolved issues:
1) We didn’t manage to make this a priority in the teams – the incentive to create customer value overruled considerations for improvements. 2) We didn’t make sure that the engineers had the time to focus on becoming great at releasing, or that they knew where to get inspiration, tools and competencies. We could perhaps have mitigated the lack of incentive if we had solved the second issue, but we didn’t.
You might add that the more “correct agile” approach would be continuous release. This is also a guiding star that we are working towards. However, we concluded that we are not mature enough as an organization to make this happen. Yet.
The model is “home-grown”, as we were not able to find a framework that would provide enough structure AND freedom to fit the purpose, but we do not claim to have made an exhaustive search.
Curious to learn more?
Want training, inspirational talks or consulting on the topic?