In times of uncertainty, we strive for predictability. The relentless pace of change driven by rapid technological advancement has made more and more companies paranoid.
They look over their shoulders trying to find their own blind spots. They try to anticipate the next wave of disruption, like a tourist who got off at the wrong subway stop and is no longer entirely confident that the four words of Mandarin they have learned will be of much help. (我要香蕉牛奶 Wǒ yào xiāngjiāo niúnǎi – "I want banana milk")
Corporations are reacting to the change in the status quo. They use the tools they know and are comfortable with. Corporations crave predictability and simultaneously desire the 50x returns they see some venture capitalists making. Ignoring the fact that risk and reward go hand in hand, they fall back on their basic solution: The Business Case.
These business cases have three distinct problems:
Accuracy – Blind Forecasting
Well-known accounting methods such as double-entry bookkeeping are extremely accurate. Of course! They only count things in the past, where everything is certain and, barring time travel, unlikely to change. Without lots of historical data, however, they have a poor track record of calculating the future.
Calculations like Return on Investment (ROI), Internal Rate of Return (IRR) and Net Present Value (NPV) use some of that historical accounting data to predict the outcome of a proposed business case many years into the future. They are highly useful in extremely predictable situations with little innovation and zero disruption. However, even in the relatively confident domain of a corporation’s core business, neither of us (Elijah nor Tristan) has ever met a corporation that claims their business cases are actually accurate.
The most confident claim either of us has heard is that they achieve 70% of the predicted revenue from a business case, and some are closer to 40%. As a result, CFOs simply discount any submitted proposal to account for that failure. Things get even worse for transformational innovation, where there can be no historical data. The input variables for these models are often pure speculation from the innovation teams. The entire process is dubious guesswork, shoved through a poor calculation, resulting in a wildly inaccurate prediction.
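To see just how sensitive these calculations are to speculative inputs, here is a minimal sketch of the standard NPV formula. All figures (the discount rate, the investment, and the yearly revenue guesses) are made up for illustration:

```python
def npv(rate, cashflows):
    """Net Present Value: discount each yearly cash flow back to today.
    cashflows[0] is the upfront cash flow (year 0, usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical 5-year business case: a $1M investment now,
# followed by the innovation team's speculative yearly revenues.
optimistic = [-1_000_000, 300_000, 400_000, 500_000, 600_000, 700_000]

# The exact same plan, with every revenue guess simply halved.
cautious = [-1_000_000] + [cf // 2 for cf in optimistic[1:]]

print(f"NPV (optimistic inputs): ${npv(0.10, optimistic):,.0f}")  # comfortably positive
print(f"NPV (cautious inputs):   ${npv(0.10, cautious):,.0f}")    # negative: kill the project
```

The formula itself is flawless; the verdict still flips from "fund it" to "kill it" just by trimming the guesses. When the inputs are speculation, the precision of the calculation adds nothing.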
Veracity – Lie to Me
Innovation project leaders are not stupid.
They know their careers depend on launching new products. And they also know that they will probably be reassigned to another project before the year is up. So they need to have a launch on their resume. Results, on the other hand, are often someone else’s problem.
This creates a peculiar incentive to justify a project’s existence, evidence be damned. So the question becomes, “How can I make this business case look brilliant?”
The project lead knows the project won’t be funded unless it shows the potential to earn $10M. So that is what the business case will show. (Actually, it would show a best case of $15M and a worst case of $5M.)
The CFO is no dummy either. They know that the entrepreneurs will inflate their numbers, so they apply a 50% (or greater!) discount rate to account for risk.
The next step is obvious: the innovation team lead now must project $20M ($30M best case, $10M worst case), and the project is approved.
This is the same thing startups do when pitching for funding. If a VC wants a one billion dollar market, the startup projects a billion dollar market. If the VC wants two billion, with some logical gymnastics, the Total Addressable Market (TAM) magically grows to reach that figure.
An innovation project leader’s job is to innovate. If they can’t get started without inflating the numbers, they will inflate the numbers, and they won’t even realize they are doing it. We first lie to ourselves, then to everyone else.
Testability – Too Little, Too Late
Lastly, standard business cases are not testable. At least not in the sense that the tests are useful. Typically, a business case is primarily composed of lagging indicators that only tell us if we are doing well when it is too late to do anything about it.
The most common metric in a business case is revenue. If the business case predicts one billion in four years, we could rightly say that this number is testable in the strictest sense of the word. However, we will only know if that number is accurate in four years.
Subdividing this number is not much better. We can check what our revenue number is after one year or even one month, but no matter how subdivided it is, revenue is still a lagging indicator.
A marathon runner’s time in the first mile does tell us something about whether or not they will reach the finish line and, based on historical patterns, can help us estimate the completion time. Based on that data, we can yell encouraging words, we can offer extra hydration stops, and we could trip the other runners. However, all of the most significant activities necessary to improve our runner’s race time happened before the race, during training.
The same is true of innovation. Most of the activities required to achieve great product/market fit happen before the launch during customer discovery and the rapid experimentation that we call validation. If we build our business case on lagging metrics like revenue, we are creating predictions that are fundamentally untestable until it is too late.
Towards a Solution
These three issues have combined to turn business cases from a useful tool into bureaucracy. We churn them out because they are required, but few see them as anything other than overly optimistic guesswork that does not really aid decision making. Even with what looks like quantitative data in front of us, we must make decisions based on our gut, because the data is not truly indicative and the business plan is fiction.
Business cases can be saved, but only by adopting new principles and better math.
“It is better to be vaguely right than exactly wrong.” – Carveth Read
The assumed accuracy of the business case must be thrown away in favour of ranges. Instead of saying, “This project will make one million dollars,” we must be prepared to say, “This project will make somewhere between two million dollars and absolutely nothing.” Giving a range of possible results is an expression of uncertainty which actually quantifies the risk involved.
And let’s fix the math while we are at it. A probabilistic approach using those ranges to perform a Monte Carlo analysis will allow us to give an estimated probability of a desired ROI instead of false confidence in an illusion.
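As a rough sketch of what that could look like (all figures are hypothetical, and a uniform draw across the range is the simplest possible assumption), a few lines of Python are enough to turn a revenue range into an estimated probability of hitting a target ROI:

```python
import random

def monte_carlo_roi(worst, best, cost, target_roi, trials=100_000, seed=42):
    """Estimate the probability that a project hits a target ROI,
    given only a worst-case and best-case revenue estimate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Draw one revenue outcome from the estimated range.
        # Uniform is the crudest assumption; rng.triangular(worst, best, mode)
        # would let you weight a "most likely" value instead.
        revenue = rng.uniform(worst, best)
        roi = (revenue - cost) / cost
        if roi >= target_roi:
            hits += 1
    return hits / trials

# Hypothetical project: revenue somewhere between $0 and $2M,
# a $500k investment, and a desired ROI of at least 100%.
p = monte_carlo_roi(worst=0, best=2_000_000, cost=500_000, target_roi=1.0)
print(f"Probability of >= 100% ROI: {p:.0%}")
```

For these assumed numbers, roughly half of the simulated outcomes clear the 100% ROI bar. “About a 50% chance of doubling our money” is a far more honest statement than a single-point forecast dressed up as a fact.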
Estimating in ranges reduces the pressure to lie.
Instead of being forced to create an unrealistic business case, all parties can communicate better and agree that there is a range of possible outcomes, from fantastic to pathetic. Instead of it being the innovation leader’s job to launch the project (then flee before the results come in), it should be the leader’s job to gather information that reduces uncertainty.
Bringing in valid information that tells us to kill the project is a great outcome and should be rewarded.
In data we trust. Any business case should be composed of variables that are testable.
Any plan that requires a full launch before determining its validity is a terrible plan. Business cases should have component variables that can be easily tested with known research methods such as fake doors, customer discovery interviews, concierge testing, Wizard of Oz testing, and others.
As Douglas Hubbard likes to say, “If you know almost nothing, almost anything will tell you something.” With a large range of uncertainty, even a few conversations with customers can sometimes eliminate risk. It doesn’t take a $200,000 Gartner report to answer basic questions such as, “Will anyone buy this thing?”
The next logical step to address the problem of the much misused business case is to replace it with something better. Innovation accounting principles such as accuracy, veracity, and testability will allow us to better predict outcomes from even early stage innovation projects and make better investment decisions. Innovation should not only be guesswork.
- Basic business cases produce bad decisions for innovation projects.
- Accuracy is more important than precision.
- Talking openly about uncertainty will increase veracity.
- If it’s not testable, it’s dangerous.
Co-authored by Tristan Kromer and Elijah Eilert.