Discover, Decide, Deliver: Part Two, Decide

This is the second part of a three-part series in which I cover my methods of delivering successful business outcomes via software.

Part One: Discover

Part Two: Decide

Part Three: Deliver


We have now spent some time discovering potential problems or opportunities within our own or a partner’s business, ensured we have identified business goals that our delivery can contribute to, and spent a commensurate amount of time building a high-level solution scope.

If you haven’t read the first post I’d encourage you to do so, as it will be assumed knowledge for this post - if you find yourself wondering why something is missing here, it was likely covered during Discover, or will be covered in Deliver.

The question is now: should our business invest time and money in the proposed initiative?

I previously stated:

The two biggest mistakes that are made at this stage of a software initiative are:

  • Jumping straight into solutioning

  • Spending too long in up-front planning

The first problem, jumping straight into solutioning, we solved during Discover.

The second problem, spending too long in up-front planning, we solve now.

Decide

At this point we should not have invested a large amount of time and effort in discovery, and so will not be tempted to give in to the Sunk Cost Fallacy when deciding whether or not to proceed with a given initiative.

Making a decision should be relatively quick and, most importantly, based on empirical data.

Data

To make a decision, we need the right data on hand. We cannot decide whether to proceed with a software initiative without the requisite minimum amount of data present - any conversations that occur in the absence of this data are simply speculation and conjecture. We want to avoid the HiPPO-based decision, or any other gut-feel style of decision making, and proceed with eyes wide open.

The three pieces of data we need to gather at this stage are:

  • How much will it cost?
  • What are the perceived risks, assumptions, issues, and dependencies?
  • What is the expected ROI?

Or more succinctly: cost, risk, and return.

Remember that we should already have a clear picture of how the initiative contributes to the business’s vision, strategy, and goals, which were formed during Discover.

How Much Will It Cost

The first piece of data we need on hand to make the decision is How much will it cost? - and answering that will involve estimation.

The most important part of this step is to understand that we are not trying to estimate exactly how much effort it will take to produce precisely what we have defined to date - what we end up delivering in any given software initiative rarely resembles what we understood up-front.

Estimation as a tool to fix scope, drive “accountability”, and railroad software development initiatives is why the #NoEstimates movement gained some steam. In my world, though, unlike that of the #NoEstimates movement, businesses need some assurance around how much they will likely need to invest to see a desired return.

It is critical to understand that what we are attempting to do with our up-front estimation is to create a budget within which we are confident we can succeed. Or in other words - we need to create a budget within which success is likely.

I want to take a moment to belabour the above point even further. I’ve used the word budget instead of estimate. It is important to know that the ultimate output of our up-front estimation isn’t an “estimate”. An estimate implies an assumption that the scope of work will remain reasonably constant, and that accountability will be driven by tracking actuals against the up-front estimate. This is actively harmful behaviour in a software delivery, but one I still encounter far too often, usually at larger enterprises with more arduous, dogmatic governance processes. The ultimate output of our up-front estimation is a budget. It is an amount of money and time within which we can guide a software delivery, accommodate change and risk, and make decisions at the correct time to make them, all contributing to delivering a successful outcome before the budget is exhausted.

Will we be done by the time the budget is exhausted? Unlikely - most software initiatives are rarely ever ‘done’. There are always other things that could be done. But what we have delivered by the time the budget is exhausted will be enough for us to ship, realise a return, satisfy stakeholders, and contribute to the business goals identified in Discover.

My colleague Liam McLennan recently shared on Twitter an article on estimation, entitled Blink Estimation, by Dan North. I heartily recommend reading the whole thing - it is truly excellent. The part I particularly wish to draw attention to is the section entitled Estimation as an investment decision. If you set a budget within which you are confident of success, these two quotes from that section become the essence of how you then deliver to that budget:

You simply work towards the date and declare victory when you get there.

It’s like firing an arrow and then painting the target around it — you start hitting a lot of bulls-eyes!

So how do we go about crafting our estimate?

Experts estimate only

The best estimates come from people who have delivered a lot of software, on the tools, in a variety of environments and conditions. The context they have built over this time gives them an innate sense of how long things take, from simple forms to complex system-of-systems concerns, and of what can impact those timelines.

They also have a historical mental catalogue of ‘this shape of solution takes around this long to produce’ to sanity-check against.

Wrap your arms around it

We have our proposed solution scope from our Discover phase - this might be in the form of an Impact Map, or a set of wireframes, or some rough sketches on paper. Whatever format it is in, we need to be able to ‘wrap our arms around it’ to estimate how much effort it will take to produce.

The Dan North article referenced previously highlights an interesting fact regarding estimate sizing: the more resolution or detail we attempt to plan to up-front, the larger our estimates tend to become! This means that if we have a set of detailed wireframes, our estimate is likely to be larger than if we had estimated against our Impact Map’s high-level functional statements.

If we are working with a business that understands the constraints and realities of software delivery, we may want to harness the fact that scarcity drives successful innovation and estimate on less granular information - confident that we can work in close collaboration with that business to make appropriate decisions and deliver a successful outcome within a leaner project budget.

If we know we are working in an organisation that has less mature software delivery capabilities, where “everything is a priority”, we may want to go into more detail. We are likely to need more planning collateral to drive hard conversations regarding scope and budget as the delivery evolves and needs to adapt, and more budget to deal with the inflexibility of stakeholders when that adaptation must occur.

Often enough, businesses that have a mature software delivery function will still want some clarity and assurance on the shape of a solution up-front - a tangible ‘future-state’ view of what the solution may look like. It should be of no surprise to anyone that businesses are hesitant to commit time, money, and effort to pursuing initiatives they cannot visualise a potential end-state for. This is entirely rational, reasonable, and normal.

My personal preference when estimating a given initiative is to have a reasonable view on the high level Functions the system presents, the Fixed Overheads required to build the system (such as setting up a CI/CD pipeline), and any Cross-Cutting Concerns that are likely to impact any function delivered within the system (such as WCAG compliance).

A trivial example of a functional breakdown at what I’d consider to be “the ideal level” would be:

- Sortable, Filterable list of users
- Create, View, Update, Delete user details
- Assign Azure AD Groups to user 

as opposed to

- User Management

I’ve found this level of granularity produces good estimates that can inform a budget - budgets within which my teams and I have succeeded time and again, whilst adapting to change and dealing with risks that have arisen during those deliveries.

Having an understanding of how our budget was constructed can also help us drive conversations during Deliver. We may say, for example, “We estimated x days for User Management, but given new information discovered over the last week of delivery, we now anticipate it will take y days” - and drive a conversation around whether we trade scope, or increase budget, to accommodate the current reality of the delivery.

Use the right units

Nothing takes “just an hour” at this granularity. Almost nothing takes “just an hour” in software development anyway. Between discovering the scope of a particular story, refining it down to an executable state, developing it, testing it, deploying it, verifying it, potentially iterating on it, and ultimately shipping it, even the most innocuous piece of scope will take longer than you anticipate, as per Hofstadter’s Law.

As a rule of thumb, everything takes at least half a day. No exceptions.
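
To make this concrete, below is a minimal sketch of how such a budget might be assembled - the numbers, the contingency factor, and the treatment of cross-cutting concerns as a simple uplift on each function are all illustrative assumptions, not prescriptions:

```python
# A minimal budget sketch. All numbers are invented for illustration.

HALF_DAY = 0.5  # rule of thumb: nothing takes less than half a day

# High-level Functions and expert estimates, in days
functions = {
    "Sortable, filterable list of users": 3.0,
    "Create, view, update, delete user details": 5.0,
    "Assign Azure AD groups to user": 2.0,
}

# Fixed Overheads required to build the system, in days
fixed_overheads = {
    "CI/CD pipeline": 4.0,
    "Environments and infrastructure": 3.0,
}

# Cross-Cutting Concerns modelled here as an uplift on every function,
# e.g. WCAG compliance adding ~20% to each piece of functional work
cross_cutting_uplift = 1.2

def floored(days: float) -> float:
    """Apply the half-day minimum to a single estimate."""
    return max(days, HALF_DAY)

functional_days = sum(floored(d) for d in functions.values()) * cross_cutting_uplift
overhead_days = sum(floored(d) for d in fixed_overheads.values())

# Pad the raw estimate into a budget within which success is likely
contingency = 1.3  # headroom for change and risk; tune per initiative
budget_days = (functional_days + overhead_days) * contingency

print(f"Raw estimate: {functional_days + overhead_days:.1f} days")
print(f"Budget:       {budget_days:.1f} days")
```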

Understand the environment we are delivering within

Are we working within a medium-sized, privately owned company, with the CEO as our Product Owner, their credit card powering our cloud tenancies, and almost nothing standing between our delivery team, the customer, and our desired outcome?

Or are we working within state government, with an appointed government Project Manager, reporting to a Project Board, with large overheads to our ideal procurement, governance, and delivery processes?

Understand the team we are delivering with

Is our team comprised of already-aligned, senior people who are deeply skilled in the technologies that will be used to deliver the proposed solution? Or are we working to build capability, upskill juniors, and transfer knowledge? Will there be a significant amount of effort to go from Forming to Performing?

Are we all going to be co-located? Or will it be delivered remotely?

Understand the scope’s propensity to grow or change

Sometimes an initiative’s scope is both very obvious and quite limited by its nature - if we deliver X, then we will realise Y. At other times it can be very amorphous: it may not be immediately obvious how we are going to achieve the initiative’s goal, whatever we have assumed up-front may need a lot of validation, and it is likely we will need time to iterate on and pivot our solution to arrive at a successful outcome.

If we have a well-defined software initiative shaped within our Discover phase, understand the environment we will be delivering within, and have a good understanding of the scope’s propensity to grow, we should be well positioned to support the anticipated change during our delivery by accommodating it within our budget.

Create a high level solution architecture

We need to know roughly what technical components will comprise our solution. It may not matter if we use Angular or React for our web front end, but it will certainly impact us if we need to use Xamarin instead of Cordova for cross-platform mobile, or need to host in AWS EC2 auto scaling groups for our compute, instead of Azure Web Applications. Fundamental technical decisions will have an impact on time and budget.

The main rule here is to defer as many decisions as possible until it is the appropriate time to make them, during delivery. And if you aren’t the one on the ground delivering, remember: no one likes being hung by someone else’s rope.

Summary

The information suggested above should not take long to gather, create, and validate - ideally hours, not days or weeks, to put together. Once it is on hand, we have everything we need to create a budget within which we are confident we can succeed.

Spending too much time up-front discovering and qualifying scope to inform cost is costly, anchors stakeholders, and creates noise and risk as unvalidated assumptions pile up - make a call on your budget as early as practicable and move on.

For larger projects that involve multiple phases or milestones, I will always estimate the first or most well-understood milestone as above, and then use probabilistic forecasting techniques to predict a program-level budget - and communicate this accordingly. Troy Magennis has published some excellent free materials and tools for this that I have utilised with great success before.
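
As a toy illustration of the underlying idea (this is not Troy Magennis’s tooling, and the ranges are invented), a Monte Carlo forecast samples a plausible duration for each milestone many times over, and reads a program-level budget off the resulting distribution:

```python
import random

# Toy Monte Carlo forecast. The first milestone is estimated as above;
# later, less-understood milestones are expressed as (low, high) ranges
# in days. All numbers are invented for illustration.
milestones = {
    "Milestone 1 (well understood)": (40, 50),
    "Milestone 2": (30, 70),
    "Milestone 3": (20, 90),
}

def simulate_total() -> float:
    # Sample one plausible duration per milestone and sum them
    return sum(random.uniform(low, high) for low, high in milestones.values())

runs = sorted(simulate_total() for _ in range(10_000))

# Report percentiles rather than a single number - an 85th percentile
# figure is a budget we can be reasonably confident of landing within
for pct in (50, 85, 95):
    print(f"{pct}th percentile: {runs[int(len(runs) * pct / 100)]:.0f} days")
```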

What Are Our Risks, Assumptions, Issues, and Dependencies

Risks are things that have a likelihood of impacting us during a given software delivery. Assumptions are things we hold to be true up-front. Issues are risks that have eventuated, and are having an impact already. Dependencies are things our solution will require, but are outside of our direct control.

To make a decision on a given software initiative, we need to understand the risk involved in the delivery - what is the chance that the scope will explode for unforeseen reasons? What is the chance that our dependencies simply will not be delivered within the timeframe, or to the standard, required? What is the chance that issues already present will stop us from creating a successful outcome?

An effective way to discover and quantify the risks posed by a given software initiative is to create a RAID map. Whilst risk is only the first item in a RAID map, the RAID map as a whole is a risk management tool.

The best way to build a RAID map is collaboratively with our customer, involving technical and non-technical stakeholders who will either be impacted by the solution, or will impact it. An appropriate variety and range of input ensures previously unforeseen issues are highlighted, captured, discussed, and understood.

Some examples of things I have seen come up while building a RAID map with my customers, which have directly impacted the required budget, are:

  • Assumption: solution must be WCAG AA compliant
  • Risk: current automated build process is brittle
  • Issue: scope is currently volatile due to different parties’ conflicting interests
  • Assumption: we need to build the API for public consumption, although initially the only consumer will be the web client also being built
  • Risk: internal resourcing constraints
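
Captured as structured data, a RAID log along those lines might look something like the sketch below, with a budget affordance attached to each entry so they can be rolled up into the overall budget - the entries and numbers are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class RaidEntry:
    kind: str                     # "Risk", "Assumption", "Issue", or "Dependency"
    description: str
    affordance_days: float = 0.0  # budget set aside if this entry costs us

# Illustrative entries, echoing the examples above
raid_log = [
    RaidEntry("Assumption", "Solution must be WCAG AA compliant", 5.0),
    RaidEntry("Risk", "Current automated build process is brittle", 3.0),
    RaidEntry("Issue", "Scope volatile due to parties' conflicting interests", 8.0),
    RaidEntry("Dependency", "API must support future public consumers", 4.0),
]

# Roll the affordances up into the budget conversation
total_affordance = sum(e.affordance_days for e in raid_log)
print(f"Budget affordance for RAID entries: {total_affordance:.1f} days")
```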

Summary

Our RAID map will serve a few purposes:

  • It will further inform our budget
  • It will surface risk and allow us to take it into account when deciding
  • It will further align our stakeholders participating at this point of the process

Once we have it on hand, and have made appropriate affordances in our budget for applicable entries, we can move on from cost and risk, and start looking at return.

What Is Our Expected Return On Investment

The next piece of data we need to understand is How much will we benefit from the initiative?

Remember from Part One that success in any software endeavour is not simply having software at the end.

Our business may benefit from a given initiative either in a generative way, such as developing something that produces income by creating additional value in a product or service, or that reduces operating costs; or in a preventative way, such as avoiding costly fines that may arise if a given system isn’t compliant with legislation or commercial agreements by a certain time.

To calculate our return on investment, we need:

  • The estimated value we are looking to gain, defined against our goal or goals, over a set period
  • The allocated budget for the delivery
  • The estimated implementation, change management, and operational costs of the initiative over the same period

The more complicated you make this calculation, the more assumptions you are baking into it, and the less likely it is to be a good metric for determining success. Several articles, such as this one on CIO.com, suggest estimating ROI by applying many different factors and adjustments. Most of this is subjective complexity, and will ultimately make it much harder to determine whether a given software initiative has succeeded, as most of the inputs are largely unmeasurable.

Instead we should simply be comparing our SMART goal’s agreed measurable metrics over a period of time to the cost of the delivery and ongoing operational costs.
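
As a worked example of that simple comparison, with invented numbers over a twelve-month period:

```python
# Worked ROI example. All figures are invented for illustration.
delivery_budget = 200_000      # the budget we set during estimation
implementation_costs = 30_000  # change management, training, rollout
operational_costs = 20_000     # hosting, licences, support for the year

# Measurable value against our SMART goal over the same period,
# e.g. "reduce manual processing costs by $40k per month"
expected_value = 480_000

total_cost = delivery_budget + implementation_costs + operational_costs
roi_multiple = expected_value / total_cost

print(f"Total cost:   ${total_cost:,}")
print(f"ROI multiple: {roi_multiple:.1f}x")  # 1.9x in this example
```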

So what constitutes a favourable comparison? It is likely that it needs to be higher than you think.

The Standish Group’s annual CHAOS report, which surveys thousands of companies each year to analyse software project successes and failures, as of 2016 still shows a huge 70% of projects either outright failing or not meeting a bar that can be called successful. It also reports that over half of all undertaken software projects cost double their original estimates.

Whilst it would be easy to attribute these failures to poor software delivery maturity on the part of both vendors and customers, the reality is that delivering software is hard (we should always seek to be humble programmers), and even the best delivery team, working for an ideal customer, can and will face unforeseen challenges that threaten the success of the initiative from time to time.

We need to take risk into account when considering return. The higher the risk you will have to face to deliver an initiative, the higher you want your return to be.

If your initiative seems to be reasonably low-risk, a 1.5-2x return may be acceptable. If it is moderate risk, those risks may materialise and quickly eat into a 2x return - we may need a 3x anticipated return to justify pulling the trigger. If it is high risk, it is not unreasonable to want 5-10x your initial anticipated cost. There isn’t a hard-and-fast rule here, but once again, an acceptable ROI is likely higher than you think.
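
Expressed as a sketch using the rough tiers above - the risk levels and required multiples are judgement calls, not hard rules:

```python
# Go/no-go helper using the rough thresholds above. The tiers and
# required multiples are judgement calls, not hard rules.
required_multiple = {"low": 1.5, "moderate": 3.0, "high": 5.0}

def green_light(roi_multiple: float, risk: str) -> bool:
    """True if the expected return justifies the risk we must carry."""
    return roi_multiple >= required_multiple[risk]

print(green_light(1.9, "low"))       # True  - a 1.5-2x return may be acceptable
print(green_light(1.9, "moderate"))  # False - risks would eat into a ~2x return
```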

If the numbers don’t stack up, but the goal is important to the business, go back to the drawing board with your scope and estimates - there is almost always a way to deliver less software to achieve very similar goals, as per the Pareto Principle.


A slight aside on the topic of ROI is the idea of technical initiatives.

What do we do with technical initiatives that do not contribute directly to a business’s products or services, but either reduce latent risk in a platform or improve it in such a way that future initiatives will benefit? These initiatives likely aren’t products of our Discover phase, but are put forward by development teams who are the custodians of the platform.

The answer is to always strive to have an initiative deliver some sort of value for the business. You can almost always find a business initiative to deliver on when tackling a technical initiative - if you want to rewrite a component in a new technology, for example, find a significant business improvement to that component that could be delivered at the same time. If you can’t find anything, then why undertake the technical initiative at all? We will have software before, and will have software after - this is not success.

Another perspective on technical initiatives, in this post penned by Gojko Adzic, is that if we allowed technical teams a budget for appropriate technical improvements and refactorings, we largely wouldn’t need to devote large chunks of time to aggregated technical tasks.


Decision Time

Proceeding with a software initiative that is not likely to deliver an appropriate return on investment is not only costly for the business from a financial point of view; the knowledge that the initiative is borderline profitable or worse tends to permeate the delivery. This leads both to substandard product development due to overly constrained resources, and to general negative impacts on delivery teams and cultures due to the oppressive nature of cut-throat product development.

If the return on investment is acceptable, the initiative gets the green light! If it doesn’t, we either abandon the initiative, or find a way to reshape or redefine it until it represents a sound investment to the business.

It is likely that our business will have a set of initiatives being evaluated at the same time, all with different costs and returns, addressing different stakeholder needs, that we will need to prioritise somehow in order to best benefit the business. All initiatives will have an associated Opportunity Cost. One way to base our prioritisation on empirical data, just as we based our decision to proceed with a given initiative on empirical data, is to calculate the Cost of Delay for each opportunity, and prioritise accordingly.
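
One common formulation for turning Cost of Delay into a priority order is CD3 - Cost of Delay Divided by Duration - which favours short, high-urgency work. A minimal sketch with invented numbers:

```python
# Prioritise initiatives by CD3: Cost of Delay divided by duration.
# Initiatives and figures are invented for illustration.
initiatives = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("Self-service portal", 12_000, 10),
    ("Compliance reporting", 25_000, 8),
    ("Internal tooling refresh", 4_000, 6),
]

# Highest CD3 first: short, urgent work rises to the top of the stack
by_cd3 = sorted(initiatives, key=lambda i: i[1] / i[2], reverse=True)

for name, cod, weeks in by_cd3:
    print(f"{name}: CD3 = {cod / weeks:,.0f} per week of duration")
```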

It can be useful to capture all approved initiatives on a physical card wall, giving all stakeholders clear visibility of validated initiatives and their current priority.

Once an initiative rises to the top of the business’s priority stack, it is time to Deliver. Part 3 of this series will cover software delivery, and the behaviours and processes we can exhibit and embrace to improve our chances of success.

It is important to understand that evaluating the cost and return of a given initiative doesn’t stop at Decide - that just gives it the green light to proceed. Cost and return are two sides of the same coin, and will both be closely monitored during Deliver, as we Build, Measure, and Learn.