September 2020

Using Narratives to Make Sense of High Uncertainty

This was originally published in June 2020 on Britten Coyne Partners' Strategic Risk Blog

The arrival of the COVID-19 pandemic has made the distinction between risk and uncertainty painfully clear. In the case of the former, the range of possible future outcomes is known, as are their probabilities and potential impacts.

In the case of uncertainty, some or all of these are unknown.

We have many tools available to help us make good decisions in the face of risk. While crises occasionally remind us that these tools aren’t perfect (e.g., Long Term Capital Management in 1998, the housing collapse in 2008, and COVID-19 in 2020), most of the time they serve us well.

But that is not true when we confront highly uncertain systems and situations.

To be sure, our first reaction is to try to adapt our risk tools to help us in these situations. In place of probabilities based on the historical frequencies at which common events (e.g., car accidents) occur, we take the Bayesian approach, and substitute probabilities signifying our degree of belief that historically rare or unprecedented events will occur in the future.

The four years I spent on the Good Judgment Project reminded me once again of the benefits (e.g., superior forecast accuracy) that arise from the disciplined application of Bayesian methodology.
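The mechanics of that methodology are simple, even if the discipline is hard. Here is a minimal sketch of a single Bayesian update (the probabilities are invented for illustration):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Degree-of-belief prior that a rare event will occur: 5%.
prior = 0.05
# The new evidence is three times as likely if the event is coming (0.6 vs 0.2).
posterior = bayes_update(prior, 0.6, 0.2)
print(f"posterior = {posterior:.3f}")  # belief rises from 0.050 to ~0.136
```

The discipline lies not in the arithmetic but in updating honestly, in small increments, every time relevant evidence arrives.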

Yet over a forty-year career of making decisions in the face of uncertainty, I’ve also seen that Bayesian methods can sometimes create a false – and dangerous – sense of security about the precision of our knowledge, leading to overconfidence and poor decisions.

What we need is a broader mix of methods for analyzing and making good decisions under conditions of uncertainty (and ignorance, or “unknown unknowns”), not just risk.

With this in mind, I was excited to read a new paper, “The Role of Narrative in Collaborative Reasoning and Intelligence Analysis: A Case Study”, by Saletta et al., which significantly adds to our understanding of the role of narratives and how we construct them to help us make sense of highly uncertain situations.

At Britten Coyne Partners, we have written a lot about the important, if too little understood, role that narrative plays in both individual and collective sensemaking and decision making under uncertainty.

Researchers have found that when uncertainty rises, evolution has primed human beings to become much more prone to conformity and to rely more on imitating what others are doing (so-called "social learning" or “social copying”).

Paradoxically, as uncertainty increases, people are more likely to become attracted to a smaller (not larger) number of competing narratives that explain the past and present and sometimes predict possible future outcomes.

In other words, as uncertainty increases the conventional wisdom grows stronger, even as it is becoming more fragile and downside risks are rising.

Today's hyperconnected socio-technical systems — including financial markets — are therefore more vulnerable than ever before to small changes in information that trigger feelings (especially fear) and behavior that spread quickly, and are then further amplified by algorithms of various types. The result, increasingly, is sudden, non-linear change.

In recent years, the role of narratives in economic cycles and financial markets has increasingly been a subject of academic inquiry (e.g., “Narrative Economics”, by Bob Shiller; “Constructing Conviction through Action and Narrative: How Money Managers Manage Uncertainty and the Consequences for Financial Market Functioning”, by Chong and Tuckett; and “News and Narratives in Financial Systems: Exploiting Big Data for Systemic Risk Assessment”, by Nyman et al).

While there are many definitions of “narrative” (and synonyms for it, like “analytical line”, and sometimes “mental models”), most have some common elements, including descriptions of context and key characters, actions and events that move the narrative forward through space and time, and causal links to outcomes and various types of consequences (e.g., cognitive, affective, and/or physical).

Saletta and his co-authors take our understanding of narrative from the macro to the micro level, and describe how, “individuals and teams use narrative to solve the kinds of complex problems organizations and intelligence agencies face daily.”

They “observed that team members generated “micro-narratives”, which provided a means for testing, assessing and weighing alternative hypotheses through mental simulation in the context of collaborative reasoning…

“Micro-narratives are not fully developed narratives; [instead] they are incomplete stories or scenarios…that emerge in an unstructured manner in the course of a team’s collaborative reasoning and problem solving...

[Micro-narratives] “serve as basic units that individuals and teams can debate, deliberate upon, and discuss in an iterative process in which micro-narratives are generated and weighed against each other for plausibility with regard to evidence, general knowledge about the world, and fit with other micro-narratives. They can then be organized and assembled into a larger, more developed narrative…

The authors document that the intelligence analysts they studied “ran mental simulations to reason about evidence in a complex problem... [and] test and evaluate many diverse, interacting and often competing micro-narratives that were generated collaboratively with other team members...They tested the plausibility of these micro-narratives against each other, what they knew, and their best estimates (or guesses) for what they didn’t know”…

“In a non-linear and iterative process, the analysts used the insights developed in the process of generating and evaluating micro-narratives to develop a macro-level narrative.”

The authors conclude that, “narrative thought processes play an important role in complex collaborative problem-solving and reasoning with evidence…This is contrary to a widespread perception that narrative thinking is fundamentally distinct from formal, logical reasoning.”

This is also a very accurate description of the team forecasting process that I experienced during my years on the Good Judgment Project.

At Britten Coyne Partners, we also stress that this basic process of collaborative sensemaking in highly uncertain systems and situations can be further enhanced through the use of three complementary processes.

The first is structuring sensemaking processes around the three critical questions first described by Mica Endsley twenty-five years ago:

(1) What are the key elements (e.g., characters, events, etc.) in the system or situation you are assessing, over the time horizon you are using?

(2) What are the most important ways in which these elements are related to each other (e.g., causal connections and positive feedback loops that give rise to non-linear effects)?

(3) Given the interaction of the critical trends and uncertainties you have identified, how could the system/situation evolve in the future, either on its own or in response to actions you and/or other players could take?

The second is using explicit processes to offset what Daniel Kahneman has called the WYSIATI phenomenon (“What You See Is All There Is”), or our natural tendency to reach conclusions only on the basis of the information that is readily at hand. As Sherlock Holmes highlighted in “The Hound of the Baskervilles”, it is often the dog that doesn’t bark that provides the most important clue.

In “Superforecasting”, Professor Phil Tetlock showed how the WYSIATI problem can be overcome (and forecast accuracy improved) by combining the information at hand with longer term, “base rate” or “reference case” data.
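A crude sketch of that idea (the weights and numbers below are invented, and real superforecasters adjust far more judiciously) is to anchor the forecast on the base rate, then shift toward the case-specific evidence:

```python
def anchored_forecast(base_rate, inside_view, weight_on_base=0.6):
    """Blend the historical base rate with the case-specific
    ('inside view') estimate, anchoring on the base rate to
    offset the WYSIATI tendency."""
    return weight_on_base * base_rate + (1 - weight_on_base) * inside_view

# Historically, ~20% of comparable projects finished on time (base rate),
# but the information at hand makes this one look like a sure thing (80%).
forecast = anchored_forecast(0.20, 0.80)
print(f"{forecast:.2f}")  # pulled well back toward the base rate
```

The point of the exercise is not the particular weighting, but forcing the forecaster to look up and consult the reference class before committing to the vivid inside view.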

Another approach is Gary Klein’s “pre-mortem” technique. Tell your team to assume it is some point in the future and their assessment or forecast has turned out to be wrong. Ask them to write down the information they missed or misinterpreted, including important information that was absent.

A final technique is to have your team assume that their original evaluation of the evidence was wrong, and to generate alternative assessments of what it could mean.

The third process is Marvin Cohen’s critiquing method, which focuses on finding and resolving three problems that reduce the reliability of macro narratives (e.g., more formal scenarios). The first problem is incompleteness: narratives that are missing key elements, or that represent them using assumptions rather than direct evidence. The second problem is the use of assumptions to explain away conflicts between available evidence. And the third problem is the use of doubtful assumptions that have weak evidential support.

In the post-COVID19 world, the ability to make sense of unprecedented uncertainty and maintain ongoing situation awareness under rapidly evolving conditions will be a hallmark of high performance teams.

Unfortunately, this is not a capability that has been developed in the course of many leaders’ previous training and experience. Mastering new tools and processes, like the use of narrative, is critical to an organization’s future success.

These and other processes and tools for making good decisions in the face of unprecedented uncertainty are covered in our new online course, leading to a Certified Competence in Strategic Risk Governance and Management. You can learn more about it at the Strategic Risk Institute LLC (an affiliate of Britten Coyne Partners and Index Investor LLC).

A 50% discount on the launch price is offered for the first 100 subscribers. Enter the code 50PERCENTPOUND (for payment in pounds) or 50PERCENTDOLLAR (for payment in dollars) on the payment page.

Why Did Markets Take So Long to React to COVID-19?

This article was originally published on the Index Investor Blog

On 31 December, 2019, the S&P 500 closed at 3,230.78. That same day, the government in Wuhan, China, confirmed that health authorities were treating multiple cases of pneumonia.

On 6 January 2020, the Wall Street Journal reported that, in China, “medical authorities are racing to identify the cause of a mystery viral pneumonia that has infected 59 people in central China, seven of whom are in critical condition, and triggered health alerts in Hong Kong and Singapore.” The next day, the Financial Times reported that “health authorities are working to identify the outbreak of viral pneumonia that has infected at least 59 people in Wuhan [China]. Officials have ruled out Severe Acute Respiratory Syndrome, Middle East Respiratory Syndrome and certain types of flu.”

On 8 January, the Financial Times (FT) reported that, “The world is already grappling with its first emerging disease of the decade. Dozens of people in Wuhan, a city in central China, have been hit by an unexplained pneumonia. There are no recorded deaths but, among 59 who have fallen sick, seven are reported to be in a critical condition with breathing difficulties. The authorities have ruled out seasonal flu, bird flu, severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS). Singapore and Hong Kong are now screening air passengers for fever. The outbreak, which began in December, has been traced back to a market selling seafood and live animals such as bats and marmots. It has since been closed and disinfected. The possibility that yet another malign microorganism has hurdled the species barrier to infect humans is likely to boost calls for a global catalogue of animal pathogens.”

On 9 January, the FT reported that, “A pneumonia outbreak that has infected more than 50 people in the Chinese city of Wuhan was caused by a coronavirus, which is the same kind of pathogen involved in the deadly SARS outbreak in 2003, Chinese state media said on Thursday. The outbreak, which comes ahead of the lunar new year holidays in late January when millions of Chinese will be travelling to see their families, has caused alarm in the region. The virus has prompted widespread concern on Chinese social media and triggered memories of the 2003 outbreak of severe acute respiratory syndrome, or SARS, that infected more than 8,000 people worldwide and killed more than 700, including almost 300 in Hong Kong. The World Health Organization said, in a statement issued on Thursday, the Chinese authorities believed the disease ‘does not transmit readily between people’, but noted that it could cause severe illness in some patients.”

On 15 January, the FT ran a story headlined, “How Dangerous is China’s Latest Viral Outbreak?” This was its first paragraph: “An outbreak of a new kind of viral disease in China has led to widespread concern about the risks involved and fears of an official cover-up. A 2002-03 outbreak of severe acute respiratory syndrome (SARS) killed more than 800 people after Chinese officials covered up new cases for months, greatly worsening its spread. That has raised questions over Beijing’s handling of the latest outbreak that began in Wuhan, capital of Hubei province.”

On 14 January, there was a Democratic Candidate Debate.

On 16 January, Donald Trump’s impeachment trial began.

On 18 January, in a story headlined “Scientists Warn Over China Virus Outbreak”, the FT wrote that, “Concerns are rising over an outbreak of a virus that originated in China as leading scientists suggested that more than 1,700 people may already have been infected, far more than had been thought. Chinese health authorities said this weekend that they had discovered 21 more suspected cases in the central city of Wuhan, bringing the total number of suspected cases of the pneumonia illness in the city up to 62. But experts warn that there is significant uncertainty about the severity and spread of the illness, which has already killed two people and evoked memories of the SARS outbreak that killed hundreds of people more than 15 years ago. A study by the respected MRC Centre for Global Infectious Disease Analysis concluded that a total of 1,723 people in Wuhan City would have had onset of symptoms by January 12, the last reported onset date of any case. Neil Ferguson, a public health expert from Imperial College London, who founded the centre, told the BBC he was “substantially more concerned than I was a week ago”.

Later in the same story, it was noted that, “on Sunday, Li Gang, director of the Wuhan Center for Disease Control and Prevention, told state broadcaster CCTV that the information available ‘does not rule out the possibility of limited human-to-human transmission.’ ‘The infectivity of the new coronavirus is not strong,’ he added, referring to how rapidly the virus may spread between individuals. ‘The risk of continuous human-to-human transmission is low’. Most patients have presented relatively mild symptoms, Mr. Li said, and no cases had been found in more than 700 people who came into close contact with infected patients.”

On 20 January, the FT reported that, “China Confirms Human-to-Human Transmission of SARS-like Virus”.

On 21 January, the World Health Organization issued its first Situation Report on the “Novel Coronavirus”. The same day, the FT reported that Asian stocks had fallen after Beijing confirmed human-to-human transmission.

On 22 January, the US confirmed its first case, a patient in Washington State who had returned from Wuhan. On the same day, the FT ran a story headlined, “How China’s Slow Response Aided Coronavirus Outbreak.”

On 23 January, Chinese authorities began their quarantine of Wuhan.

On 27 January, The Index Investor posted its first multipart tweet (Twitter: @indexllc) about the new coronavirus, including initial estimates of its Case Fatality Rate, and key uncertainties for investors to monitor. Between then and today (16 March) Index has posted 11 more (often multipart) tweets, covering new high value information about the virus focused on reducing the range of outcomes for critical uncertainties to improve forecast accuracy.

On 30 January, the WHO declared a global health emergency.

On 31 January, the US restricted travel to China. That evening, the UK officially left the European Union.

3 February: Iowa Democratic Caucuses. That same day, The Index Investor tweeted, "Until the uncertainty surrounding asymptomatic transmission of coronavirus is resolved with more evidence, expect travel bans and other isolation measures to continue, as a prudent policy reaction at this stage."

On 4 February, The Index Investor tweeted the key conclusion from a new Lancet article: "Independent and self-sustaining outbreaks in major cities globally could become inevitable because of substantial exportation of presymptomatic cases and the absence of large-scale public health interventions."

On 5 February, Donald Trump was acquitted at the end of his impeachment trial. That same day, the Diamond Princess cruise ship was quarantined in Japan with 3,600 passengers on board.

On 7 February, there was another Democratic Candidate Debate. Earlier that day, in China, Li Wenliang, the Wuhan doctor who tried to raise the alarm about the new coronavirus (and was accused by the police of “rumormongering”) died after contracting it.

11 February: New Hampshire Democratic Primary.

On 12 February, Dr. Nancy Messonnier, Director of the US Centers for Disease Control and Prevention’s Respiratory Disease program, noted on a press conference call that, “the goal of the measures we have taken to date are to slow the introduction and impact of this disease in the United States but at some point, we are likely to see community spread in the U.S.”

On 18 February, in our new issue of The Index Investor, we wrote, “The Wuhan coronavirus will almost certainly depress global economic growth, by an amount that is highly uncertain at this point. Global aggregate demand has already been weakening. A worsening slowdown (or growth turning negative) will very likely be reinforced by mounting debt servicing problems in our highly leveraged global economy.”

On 19 February, the S&P 500 closed at 3,386.15, thus far the 2020 high. That night there was a Democratic Candidate Debate, the first one to include Michael Bloomberg.

On 20 February, the FT’s Gillian Tett titled her column, “Share Prices Look Sky High Amid Coronavirus Fears.”

On 23 February Italian authorities limited travel to 10 towns in the Lombardy region after a sudden increase in coronavirus cases.

On 25 February, Larry Kudlow, Director of the National Economic Council, said, “We have contained this. I won’t say [it’s] airtight, but it’s pretty close to airtight”. He added that, while the outbreak is a “human tragedy,” it will likely not be an “economic tragedy.” That night, there was another Democratic Candidate Debate.

At a 26 February press conference, President Trump said that the current number of COVID-19 cases in the U.S. was “going very substantially down, not up.” He also claimed that the U.S. is “rapidly developing a vaccine” for COVID-19 and “will essentially have a flu shot for this in a fairly quick manner.”

3 March: Super Tuesday Democratic Primaries.

On 15 March, there was another Democratic Candidate Debate, this time just between Joe Biden and Bernie Sanders.

On 16 March, after multiple trading stops, the S&P 500 closed at 2,386.13, down 29.5% from its February peak.

In the future, many people will ask the same question the Queen asked in the aftermath of the 2008 global financial crisis: “Why didn’t anyone see this coming?”

Our preliminary answer to that question is that it is very likely that multiple interacting factors were at work, including the following:

• We naturally resist the cognitive dissonance that is produced by evidence that severely contradicts our current system of interrelated and mutually supporting beliefs. One aspect of this is our tendency to avoid information that challenges our existing beliefs in a negative way, and to give more attention to evidence that supports our existing views (see “How People Decide What They Want to Know” by Sharot and Sunstein). Another is the way we subconsciously try to fit new discordant evidence into our existing world view by subtly adjusting our beliefs to incorporate it. As Daniel Kahneman noted in his book, “Thinking Fast and Slow”, it is only when this automatic adjustment fails that we note our feeling of surprise at a new piece of information and consciously reason about its potential significance.

• But as Tali Sharot’s research has repeatedly found, even our conscious reasoning is flawed because we often fail to fully incorporate negative information into our beliefs and our memories. Sharot finds that this is the root cause of our natural over-optimism bias.

• Another factor is that as uncertainty increases, so too does our natural human desire to conform to the views and behavior of our group, and to engage in more social learning (i.e., copying). In our evolutionary past this was undoubtedly adaptive; today it is not. When uncertainty increases, the number and diversity of narratives about the future held by different members of a population tend to decline, making the system of beliefs more fragile, and susceptible to changes that are both sudden and large.

• As uncertainty increases, people are also more willing to discount conclusions based on their private information when those conclusions disagree with the dominant view in their group. This can allow an increasingly dangerous state to persist for long periods of time, until a strong public signal that is consistent with group members’ private information causes them to quickly and substantially update their beliefs all at once. However, if public signals are confusing or contradictory, the dominant narrative can remain in place, despite accumulating evidence that it is wrong.

• Another relevant shortcoming of human reasoning is our tendency to form beliefs by unconsciously matching the features of a current situation to similar ones that are stored in our memory (this is sometimes called “Retrieved Context Theory”). Our strongest memories are those formed by events that triggered high levels of emotional arousal and negative feelings (or “valence”). In this case, many people may have initially associated early reports about COVID-19 with vague memories of the relatively benign way the SARS epidemic played out in 2003. In the future, early reports of new respiratory viruses are almost certain to be associated with people’s much more negative COVID-19 memories.

• Another factor that may have contributed to assessment failure is that most people struggle to grasp the future implications of processes characterized by time delays and non-linearity, such as those that underlie infectious disease dynamics.

• It is also the case that when our mental energy has been depleted, we exhibit poor control over the allocation of our attention, and are therefore likely to miss important signals. In this regard, the early development of the Wuhan coronavirus crisis coincided with the Trump impeachment trial, multiple Democratic Candidate Debates, the Iowa Democratic primary caucuses, and the New Hampshire Democratic primary.

• Finally, in October 2019, the Economist estimated that 35% of public equities are now managed using quantitative processes. This presents a number of fundamental challenges when disruptive changes like the arrival of COVID-19 roil the financial markets. Most of these processes cannot detect, early on, changes that are not contained in the set of historical data on which they were trained, and/or which have not yet manifested in the signals they track (e.g., changes in sentiment or momentum). Because human beings are far better at causal and counterfactual reasoning (which quantitative techniques still can’t handle), they remain far more effective at making sense of the uncertainties that are inherent in highly complex, evolving systems like global macro.

Since 1997, the mission of The Index Investor has been to help investors, corporate, and government leaders to better anticipate, more accurately assess, and adapt in time to emerging macro threats. This mission provides a framework for answering the question, “Why didn’t anyone see this coming?”

It wasn’t due to a failure of anticipation. Over the years, multiple risk analyses and simulations have considered the potential impact of pandemics. For example, in its most recent analysis of alternative future scenarios (“Global Trends 2030: Alternative Worlds”), the US National Intelligence Council wrote this: “An easily transmissible novel respiratory pathogen that kills or incapacitates more than one percent of its victims is among the most disruptive events possible. Unlike other disruptive global events, such an outbreak would result in a global pandemic that directly causes suffering and death in every corner of the world, probably in less than six months.” We also included the possibility of a global pandemic in our January 2020 feature article, “Global Macro Risk Dynamics in the 2020s and Beyond.”

At The Index Investor, we have written about the risks posed by pandemic influenza many times since 1997, and have a substantial amount of information about this threat in the free research library on our website.

In sum, with respect to anticipation failure, COVID-19 was not a black swan.

Failure to accurately assess the threat posed by the Wuhan coronavirus after it appeared is a far better explanation for the human, economic, and financial pain and losses that many individuals and organizations have suffered.

Many of the factors that contributed to this failure are listed above. In the future, more will be discovered and discussed. As we noted, many of them are deeply rooted in human nature.

One of the most important lessons learned over the years at Britten Coyne Partners (The Index Investor’s strategic risk governance and management consulting affiliate) is that simply increasing awareness of them usually doesn’t reduce their impact. A far more effective approach is the consistent use of organizational processes that are designed to offset their potential negative impact. Such processes include, for example, the use of pre-mortem analyses, warning indicators, reference cases, and the systematic search for evidence that contradicts current beliefs.

In some cases, failure to adapt in time to a threat that was anticipated and accurately assessed must also have played a role, as it has so many times in history.

In some instances, this failure might have been rooted in system design (e.g., quantitative strategies and robo-advisors that were constrained to remain fully invested, and unable to shift into cash, or exceed certain allocation limits to less risky asset classes).

However, we suspect that the incentives faced by decision makers played a far more important role, as they always do in cases of strategic failure. When managers are evaluated and compensated on the basis of beating the performance of a benchmark index, there is a very strong incentive to avoid selling too soon, even as they see downside risks increasing. This was famously summed up in former Citibank CEO Chuck Prince’s quote from July 2007: “As long as the music is playing, you've got to get up and dance. We're still dancing.”

Moreover, as organizations grow larger and become more concerned with efficiently scaling a successful business model, their culture usually evolves to one that penalizes errors of commission (“false alarms”) more heavily than errors of omission (“missed alarms”). In the face of disruptive changes in their external environment, this can be deadly.

A final cause of failure to adapt to emerging threats lies in the frameworks we usually use to think about and discuss them. Modern risk assessment has its roots in quantitative tools used by actuaries to estimate the probability and potential impact of discrete events (e.g., prices falling below a put option’s strike price, or the number and severity of hurricane related insurance losses in 2020).

What this framework leaves out is often critical. In a world of evolving uncertainty and novel threats, time dynamics are critical. The key concept is the changing relationship between how much time remains before an emerging risk reaches a critical threshold (e.g., when does it become a dangerous threat), and how much time is still needed before an adequate response to it can be developed and implemented. Boards, management teams, and individuals that closely monitor changes in this gap (what Britten Coyne calls the “safety margin”) have a much better chance of adapting in time.
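The gap itself is just a subtraction, which makes it easy to track explicitly. A minimal sketch (the function and variable names are ours, and the month figures are invented for illustration):

```python
def safety_margin(time_to_threshold, time_to_respond):
    """Time remaining before an emerging risk becomes a dangerous
    threat, minus the time needed to develop and implement an
    adequate response. Negative means you are already too late."""
    return time_to_threshold - time_to_respond

# An emerging threat is ~9 months from reaching a critical threshold,
# and an adequate response would take ~6 months to put in place.
margin = safety_margin(time_to_threshold=9, time_to_respond=6)
status = "adapt now" if margin <= 0 else f"{margin} months of slack"
print(status)
```

The value of monitoring the margin over time, rather than a point estimate of probability, is that a shrinking margin is itself an early warning, even while the threat still looks improbable.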

Strategic failure is a complex phenomenon that is always rooted in some combination of interacting individual and organizational failures to anticipate, assess, and adapt to emerging threats. COVID-19 is just the latest example of this process.

Wuhan Coronavirus - Critical Uncertainties

Originally published on February 20, 2020, on Britten Coyne Partners' Strategic Risk Blog

Since the end of December, we have seen an expanding outbreak of a new variant of the coronavirus in China, spreading from its epicenter in Wuhan.

There are two critical uncertainties to resolve with more evidence: (1) the transmissibility of the Wuhan strain, which so far appears to be high, and (2) the pathogenicity (CFR), which at this point still appears to be relatively low. And when you hear an estimated CFR, always remember to check the denominator on which it is based (lab confirmed or just symptomatic cases).

When it comes to contagious viral diseases, there is usually a tradeoff between their transmissibility (how easily they spread) and their pathogenicity (how many people who become infected die). Viruses that quickly kill their infected hosts effectively limit their own spread.

The number of infected people who die is measured by the "Case Fatality Rate." However, this is a noisy estimate, because the denominator can be based on lab confirmed cases (which increases CFR) or just symptomatic cases (which lowers estimated CFR). Early estimates (based on very noisy reporting) have reported a preliminary CFR for the Wuhan strain of around 2%. However, this will likely change as more evidence becomes available.
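The denominator effect is easy to see with invented numbers (these are not actual case counts):

```python
def case_fatality_rate(deaths, cases):
    """CFR as a percentage of the chosen case denominator."""
    return 100 * deaths / cases

deaths = 20
lab_confirmed = 1_000    # only tested patients, who are typically sicker
all_symptomatic = 4_000  # also includes milder, untested cases

print(case_fatality_rate(deaths, lab_confirmed))    # 2.0% with the narrow denominator
print(case_fatality_rate(deaths, all_symptomatic))  # 0.5% for the same outbreak
```

The same outbreak, with the same number of deaths, yields a CFR four times higher when only lab-confirmed cases are counted, which is why the denominator always needs to be checked before comparing estimates.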

To put Wuhan in perspective, the CFRs for Ebola and highly pathogenic H5N1 influenza are >60%. The 1918 pandemic flu was estimated at 10% to 20% (this strain was also relatively transmissible which is why it killed so many). The 2009 H1N1 "swine" flu CFR was estimated at 5% to 9%. By comparison, typical seasonal influenza has a CFR of one tenth of one percent or less (0.1%).

For other coronaviruses, SARS' CFR was estimated to be around 10%, while MERS' was 35%.

Transmissibility is measured using the “Basic Reproduction Number” (known as “R0” or “R-naught”), which is the number of people who will become infected by contact with one contagious person. If R is less than 1 (e.g., because of a high CFR), an epidemic will quickly “burn itself out”. In contrast, when R is greater than 1, a virus will spread exponentially.

Initial estimates of R for the Wuhan Novel Coronavirus are very noisy at this point. The World Health Organization has published a range of 1.4 to 2.5.
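The difference between R above and below 1 compounds quickly. A toy simulation makes the point (it assumes a constant R, with no depletion of susceptible people and no interventions):

```python
def cumulative_infections(r, generations, seed=1):
    """Total infections after a number of transmission generations,
    assuming each case infects exactly `r` others (a toy model)."""
    new, total = seed, seed
    for _ in range(generations):
        new *= r
        total += new
    return total

# Ten generations at the low end of the WHO's early range for Wuhan (1.4)
# versus a sub-critical virus (0.9):
print(round(cumulative_infections(1.4, 10)))  # ~99 cases from one seed
print(round(cumulative_infections(0.9, 10)))  # ~7 cases -- burning out
```

Even at the low end of the WHO's range, one seed case produces roughly a hundred infections in ten generations, while a sub-critical virus stalls in single digits; at the high end of the range the gap is vastly larger.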

For comparison, here are some historic estimated Basic Reproduction Numbers:

  • 1918 Spanish Flu = 2.3 to 3.4 (95% confidence interval)
  • SARS Coronavirus = 1.9
  • 1968 Flu = 1.80
  • 2009 Swine Flu = 1.46
  • Seasonal Influenza = 1.28
  • MERS Coronavirus = <1.0
  • Highly Pathogenic H5N1 Influenza = .90
  • Ebola = .70

The most concerning finding about the Wuhan Coronavirus is the claim that it may be capable of infecting other people before a patient becomes symptomatic (i.e., shows signs that he/she has contracted the virus).

An article in the Lancet (“Nowcasting and Forecasting the Potential Domestic and International Spread of the 2019-nCoV Outbreak Originating in Wuhan, China”) found that, “Independent self-sustaining outbreaks in major cities globally could become inevitable because of substantial exportation of presymptomatic cases & the absence of large-scale public health interventions."

If it is supported by subsequent research, this initial finding will almost certainly lead to the imposition of more travel bans and quarantine measures in an attempt to limit transmission of the virus.

To end this post with a bit of good news, a very recent analysis has concluded that, because the number of new coronavirus cases in China is growing more slowly than the exponential rate implied by its Basic Reproduction Number, quarantine, travel bans, and "self-isolation" measures appear to be having a positive impact.

Sources of Forecast Errors and Uncertainties

Originally published in February 2020 on Britten Coyne Partners' Strategic Risk Blog

When seeking to improve forecast accuracy, it is critical to understand the major sources of forecast error. Unfortunately, this is not something that is typically taught in school. And learning it the hard way can be very expensive. Hence this note.

Broadly speaking, there are four sources of forecast uncertainty and error:

1. An incorrect underlying theory or theories;
2. Poor modeling of a theory to apply it to a problem;
3. Wrong parameter values for variables in a model;
4. Calculation mistakes.

Let’s take a closer look at each of these.


Incorrect Theories

When we make a forecast, we are usually basing it on a theory. The problem here is twofold.

First, we often fail to consciously acknowledge the theory that underlies our forecast.

Second, even when we do this, we usually fail to reflect on the limitations of that theory when it comes to accurately forecasting real world results. Here’s a case in point: How many economic forecasts have been based on rational expectations and/or efficient market theories, despite their demonstrated weaknesses as descriptions of reality? Or, to cite an even more painful example, in the years before the 2008 Global Financial Crisis, central bank policy was guided by equilibrium theories that failed to provide early warning of the impending disaster.

The forecasts we make are actually conditional on the accuracy of the theories that underlie them. In the case of high impact outcomes that we believe to have a low likelihood of occurring, failing to take into account the probability of the underlying theory’s accuracy can lead to substantial underestimates of the chance a disaster may occur (see, “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes”, by Ord et al).

There are three other situations where the role of theory is usually obscured.

The first is forecasts based on intuition. Research has found that accurate intuition develops through the combination of (a) repeated experience over time (b) in a system whose structure and dynamics don’t change, with (c) repeated feedback on the accuracy of one’s forecasts, (d) followed by explicit reflection on that feedback, which gradually sharpens intuition.

When we make a forecast based on intuition, we are (usually implicitly) making the assumption that this theory applies to the situation at hand. Yet in too many cases, it does not (e.g., because the underlying system is continually evolving). In these cases, our “intuition” very likely rests on a small number of cases that are easily recalled either because they are recent or still vivid in our memory.

The second is a forecast based on analogies. The implicit theory here is that those analogies have enough in common with the situation at hand to make them a valid basis for a forecast. In too many cases, this is only loosely true, and the resulting forecast has a higher degree of uncertainty than we acknowledge.

The third is a forecast based on the application of machine learning algorithms to a large set of data. It is often said that these forecasts are “theory free” because their predictions are based on the application of complex relationships that were found in the analysis of the training data set.

Yet theory is still very much present, including, for example, those that underlie various approaches to machine learning, and those that guide explanation of the extremely complex process that produced the forecast.

Another theoretical concern with machine learning-based forecasts is the often implicit assumption that either the system that generated the data used to train the ML algorithm will remain stable in the future (which is not the case for complex adaptive social or socio-technical systems like the economy, society, politics, and financial markets), or that it will be possible to continually update the training data and machine learning algorithm to match the speed at which the system is changing.
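A toy illustration of that stationarity assumption (all numbers and the "model" are hypothetical): the simplest possible learned model keeps issuing the same answer after the data-generating process shifts.

```python
def fit_mean_model(training_data):
    """The simplest possible 'learned' model: predict the training-set mean."""
    mean = sum(training_data) / len(training_data)
    return lambda: mean

# Regime the model was trained on (illustrative, stable data)
stable_history = [10.0, 10.5, 9.5, 10.0, 10.0]
model = fit_mean_model(stable_history)

# The underlying system adapts: the data-generating process has moved
shifted_reality = [15.0, 16.0, 17.0]

in_sample_error = max(abs(model() - x) for x in stable_history)       # small
out_of_regime_error = min(abs(model() - x) for x in shifted_reality)  # large
```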


Poor Modeling

While theories are generalized approaches to explaining and predicting observed effects, models (i.e., specifications of input and output variables and the relationships between them) apply these theories to specific real world forecasting problems.

This creates multiple sources of uncertainty. The first is the decision about which theory to include in a model, as more than one may apply. RAND’s Robert Lempert is a leading expert in this area, who advocates the construction of “ensemble” models that combine the results from applying multiple theories. Most national weather services do the same thing to guide their forecasts. However, ensemble modeling is still far from mainstream.

A second source of uncertainty is the extent to which the implications of a theory are fully captured in a model. A recent example of this was the BBC’s 24 February 2020 story, “Australia Fires Were Worse Than Any Prediction”, which noted they surpassed anything that existing fire models had simulated.

A third source of modeling uncertainty has been extensively researched by Dr. Francois Hemez, a scientist at the Los Alamos and Lawrence Livermore National Laboratories in the United States whose focus is the simulation of nuclear weapons detonations.

He has concluded that all models of complex phenomena face an inescapable tradeoff between their fidelity to historical data, robustness to lack of knowledge, and consistency of predictions.

In evolving systems, models which closely reproduce historical effects often do a poor job of predicting the future. In other words, the better a model reproduces the past, the less accurately it will predict the future, even if its forecasts are relatively consistent.

Hemez also notes that, “while unavoidable, modeling assumptions provide us with a false sense of confidence because they tend to hide our lack-of-knowledge, and the effect that this ignorance may have on predictions. The important question then becomes: ‘how vulnerable to this ignorance are our predictions?’”

“This is the reason why ‘predictability’ should not just be about accuracy, or the ability of predictions to reproduce [historical outcomes]. It is equally important that predictions be robust to the lack-of-knowledge embodied in our assumptions” (see Hemez in “Challenges in Computational Social Modeling and Simulation for National Security Decision Making” by McNamara et al).

However, making a model more robust to our lack of knowledge (e.g., by using the ensemble approach) will often reduce the consistency of its predictions about the future.

The good news is that forecast accuracy often can be increased by combining predictions made using different models and assumptions, either by simply averaging them or via a more sophisticated method (e.g., shrinkage, extremizing, etc.).
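A minimal sketch of the simplest combination methods mentioned above (the forecast values are hypothetical; extremizing and shrinkage would require additional machinery):

```python
import statistics

def combine_forecasts(probabilities, method="mean"):
    """Combine probability forecasts from different models or forecasters.
    The mean is the simplest pool; the median is more robust to outliers."""
    if method == "mean":
        return statistics.mean(probabilities)
    if method == "median":
        return statistics.median(probabilities)
    raise ValueError(f"unknown method: {method}")

model_forecasts = [0.10, 0.25, 0.30, 0.15]  # hypothetical P(event) from four models
combined = combine_forecasts(model_forecasts)          # simple average
robust = combine_forecasts(model_forecasts, "median")  # outlier-resistant pool
```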

Parameter Values

The values we place on model variables are the source of uncertainty with which people are most familiar.

As such, many approaches are used to address it, including scenarios and sensitivity analysis (e.g., best, worst, and most likely cases), Monte Carlo methods (i.e., specifying input variables and results as distributions of possible outcomes, rather than point estimates), and systematic Bayesian updating of estimated values as new information becomes available.

However, even when these methods are used, important sources of uncertainty can still remain. For example, in Monte Carlo modeling there is often uncertainty about the correct form of the distributions to use for different input variables. Typical defaults include the uniform distribution (where all values are equally likely), the normal (bell curve) distribution, and a triangular distribution based on the most likely value as well as those believed to lie at the 10th and 90th percentiles. Unfortunately, when variable values are produced by a complex adaptive system, they often follow a power law (Pareto) distribution, and the use of traditional distributions increases forecast error.
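The practical effect of the distribution choice can be sketched with a small Monte Carlo comparison (all parameters are illustrative): a normal and a Pareto input with roughly the same mean assign wildly different probabilities to extreme outcomes.

```python
import random

random.seed(42)
N = 100_000

# Two candidate input distributions, both with mean of roughly 3
normal_draws = [random.gauss(3.0, 1.0) for _ in range(N)]
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]  # power law, alpha = 1.5

def tail_share(draws, threshold):
    """Fraction of simulated outcomes beyond the threshold."""
    return sum(d > threshold for d in draws) / len(draws)

normal_tail = tail_share(normal_draws, 8.0)  # essentially zero
pareto_tail = tail_share(pareto_draws, 8.0)  # a few percent: extremes are routine
```

Note that this sketch also makes the independence assumption discussed below: each draw is generated separately, with no correlation between variables.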

Another common source of uncertainty is the relationship between different variables. In many models, the default decision is to assume variables are independent, which is often not true.

A final source of uncertainty is that the values of some model input variables may respond to changed conditions only after varying time lags, which are rarely taken into account.


Calculation Mistakes

Researchers have found that calculation errors are distressingly common, especially in spreadsheet models (e.g., “Revisiting the Panko-Halverson Taxonomy of Spreadsheet Errors” by Raymond Panko, “Comprehensive Review for Common Types of Errors Using Spreadsheets” by Ali Aburas, and “What We Don’t Know About Spreadsheet Errors Today: The Facts, Why We Don’t Believe Them, and What We Need to Do” by Raymond Panko).

While large enterprises that create and employ complex models increasingly have independent model verification and validation (V&V) groups, and while new automated error checking technologies are appearing (e.g., see the ExcelInt add-in), their use continues to be the exception not the rule.

As a result, a large number of model calculation errors probably go undetected, at least until they produce a catastrophic result (usually a large financial loss).


Cumulative Uncertainty

People frequently make forecasts that assign probabilities to one or more possible future outcomes. In some cases, these probabilities are based on historical frequencies – like the likelihood of being in a car accident.

But in far more cases, forecasts reflect our subjective belief about the likelihood of the outcome in question – e.g., “I believe the probability of X occurring before the end of 2030 is 25%.”

What few people realize is that these forecasts are actually conditional probabilities that contain multiple sources of cumulative uncertainty.

For example, the statement “the probability of X occurring before the end of 2030 is 25%” is actually conditional upon: (1) the probability that the theory underlying my estimate is valid; (2) the probability that my model has appropriately applied this theory to the forecasting question at hand; (3) the probability that my estimated values for the variables in my model are accurate; and (4) the probability that I have not made any calculation errors.
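A back-of-the-envelope sketch of how these conditions compound (the confidence figures are purely illustrative assumptions):

```python
# Hypothetical confidence in each conditioning factor (illustrative numbers)
p_theory_valid   = 0.90  # the underlying theory is valid
p_model_correct  = 0.85  # the model applies the theory appropriately
p_parameters_ok  = 0.80  # the parameter estimates are accurate
p_no_calc_errors = 0.98  # no calculation mistakes were made

# Probability that the stated forecast rests on sound foundations:
# even with individually high confidence, the product is much lower
p_sound_foundations = (p_theory_valid * p_model_correct
                       * p_parameters_ok * p_no_calc_errors)  # roughly 0.60
```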

Given what we know about these four conditioning factors, it is clear that many of the subjective forecasts we encounter are a good deal more uncertain than we usually realize.

In the absence of the opportunity to delve more deeply into the potential sources of error in a given probability forecast, the best way to improve predictive accuracy is to select and combine multiple forecasts that are made using different methodologies, and/or alternative sources of information.

The Critical Importance of Anticipatory Intelligence in Our Complex, Uncertain World

Originally Published in August, 2019 on Britten Coyne Partners' Strategic Risk Blog

The deceptive economic and geopolitical calm of the past decade has been an aberration, brought about by unprecedented global monetary stimulus to hold at bay the deflationary forces that have been building in the global economy. Thanks to central bankers’ efforts, volatility has remained low, and organizations have not had to worry too much about disruptive risks beyond those posed by rapid technological change. That is about to change: Brexit, the election of Donald Trump, the emergence of a new US-China Cold War, and nearly two trillion dollars of sovereign bonds bearing negative interest rates are early indications that we are entering a period of much higher uncertainty.

With this change will come much greater organizational focus on developing the processes, methods, tools, and skills needed to survive and thrive in a much more dangerous environment. Josh Kerbel, a faculty member at the United States’ National Intelligence University, recently published an article that we hope will have a substantial impact on these efforts, and closely reflects our views at Britten Coyne Partners.

In “Coming to Terms with Anticipatory Intelligence”, Kerbel notes that it is “a relatively new type of intelligence that is distinct from the ‘strategic intelligence’ that the intelligence community has traditionally focused on. It was born from recognition that the spiking global complexity (interconnectivity and interdependence, both virtual and physical) that characterizes the post–Cold War security environment, with its proclivity to generate emergent (non-additive or nonlinear) phenomena, is essentially new. And as such, it demands new approaches.”

“More precisely, this new strategic environment means that it is no longer enough for the intelligence community to just do traditional strategic intelligence: locking onto, drilling down on, and — less frequently — forecasting the future of issues once they’ve emerged. While still important, such an approach will increasingly be too late. Rather, the intelligence community should also learn to practice foresight (which is not the same as forecasting) and imagine or envision possibilities before they emerge. In other words, it should learn to anticipate.”

Kerbel echoes longstanding concerns among some members of the intelligence community. For example, a 1983 CIA analysis of failed intelligence estimates noted that, “each involved historical discontinuity, and, in the early stages...unlikely outcomes. The basic problem was...situations in which trend continuity and precedent were of marginal, if not counterproductive value."

This distinction was also brought home to me during the four years I spent on the Good Judgment Project, which demonstrated that forecasting skills could be significantly improved through the use of a mix of techniques. But hiding in the background was an equally important question: What was the source of the questions whose outcomes we were forecasting? One of my key takeaways was that anticipatory thinking – posing the right questions – is just as important to successful policy and action as accurately forecasting outcomes.

Kerbel notes that, “as clear and compelling as the case for anticipatory intelligence is, it remains poorly understood… Since the 1990s, increasing complexity has been an issue that many in the intelligence community have impulsively dismissed or discounted. Their refrain echoes: “But the world has always been complex.” That’s true. However, what they fail to understand is that the closed and discrete character of the Soviet Union and the bipolar nature of the Cold War — the intelligence community’s formative experience — eclipsed much of the world’s complexity and effectively rendered America’s strategic challenge merely complicated (no, they’re not the same). Consequently, the intelligence community’s prevailing habits, processes, mindsets, etc. — as exemplified in the traditional practice of strategic intelligence — are simply incompatible with the challenges posed by the exponentially more complex post-Cold War strategic environment.”

Kerbel’s view is that “Fundamentally, anticipatory intelligence is about the anticipation of emergence… Truly emergent issues are fundamentally new — nonlinear — behaviors that result unpredictably but not unforeseeably from micro-behaviors in highly complex (interconnected and interdependent) systems, such as the post–Cold War strategic environment. Although emergence can seemingly happen quite quickly (hence the need to anticipate), the conditions enabling it are often building for some time — just waiting for the “spark.” It is these conditions and what they are potentially “ripe” for — not the spark — that anticipatory intelligence should seek to understand… Foresight involves imagining how a broad set of possible conditions (trends, actors, developments, behaviors, etc.) might interact and generate emergent outcomes.”

This raises the question of which foresight methods and tools are most effective. We go into great detail about this in our Strategic Risk Governance and Management course. In this blog post we’ll highlight four key insights.

Traditional scenario methodologies often disappoint

As a general rule, when reasoning from the present to the future, we naturally minimize the extent of change that could occur (in order to maintain our sense of psychological safety).

In complex systems, it is almost always impossible to reduce the forces that could produce non-linear change to just two critical uncertainties, as is done in the familiar “2 x 2” scenario method. And in some cases, the uncertainties that most worry an organization’s senior leaders are either out of bounds for the scenario construction team, or the range of their possible outcomes is deliberately constrained.

I first studied the scenario methodology under Shell’s Pierre Wack back in 1983. In its early applications, this approach was often able to fulfill its goal of changing senior leaders’ perceptions. Over the years, however, I have seen what I call “scenario archetypes” become more common, which has weakened their ability to surprise leaders and change their perceptions. These archetypes result from one critical uncertainty being technological in nature, and the other being one whose negative outcome would be very bad indeed.

This gives rise to three archetypes: (1) Business pretty much as usual, with current trends linearly extrapolated (this is usually the scenario that explicitly or implicitly underlies the organization’s strategy); (2) The World Goes to Hell (slow technology change combined with the negative outcome for the other uncertainty); and (3) Technology Saves the Day (fast technology change overcomes the negative outcome of the other uncertainty). This leaves what is usually the least well defined but potentially most important scenario, in which technology rapidly develops and the other uncertainty does not have a negative outcome. Too many organizations fail to fully explore the implications of this scenario, usually because it poses the most realistic threat to the current strategy.

Historical analogies are limited by our knowledge of history

Whether the subject is political, economic, technological, business, or military history, most of us have studied too little of it to have a rich base of historical analogies on which to draw when trying to anticipate the future.

Consider some of the challenges we face at present, including the transition from an industrial to an information and knowledge-based economy; the rapid improvement in potential “general purpose” technologies like automation and artificial intelligence; and the potential transition of the global political economy from a period of growing disorder and conflict to a period of more ordered conflict due to a new Cold War between the US and China. In all these cases, the most relevant historical analogies may lie further in the past than many people realize.

Prospective hindsight – reasoning from the future to the present – is surprisingly effective

Research has shown that when we are given a future event, told that it is true, and asked to explain how it happened, our causal reasoning is much more detailed than if we are simply asked, in the present, how this future event might happen.

However, that still leaves the “creative” or “imaginative” challenge of conceiving of these potential future events. We have found that starting with broad future outcomes – e.g., our company has failed; China has successfully forced the US from East Asia – generates a richer set of alternative narratives than a narrower focus on specific future events.

Explicitly focusing on system interactions helps identify emergent effects and early warning indicators

Quantitatively, agent-based models, which enable complex interactions between different types of agents, can produce surprising emergent effects, and, critically, help you to understand why they occur (which can aid in either their prediction or in designing interventions to promote or avoid them).
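A minimal agent-based sketch of this idea (all parameters are arbitrary assumptions, not a calibrated model): simple micro-rules (random contacts with a fixed transmission probability) generate an emergent, nonlinear epidemic curve that no individual rule specifies.

```python
import random

def simulate(n_agents=200, contacts=4, p_transmit=0.5, max_steps=30, seed=1):
    """Toy agent-based contagion model. Each infected agent contacts a few
    random agents per step and may transmit; infected agents then recover.
    Returns the cumulative count of agents ever infected, step by step."""
    random.seed(seed)
    infected, recovered = {0}, set()  # agent 0 is patient zero
    history = [len(infected)]
    while infected and len(history) <= max_steps:
        newly_infected = set()
        for _ in infected:  # each infected agent makes `contacts` contacts
            for _ in range(contacts):
                other = random.randrange(n_agents)
                if (other not in infected and other not in recovered
                        and random.random() < p_transmit):
                    newly_infected.add(other)
        recovered |= infected
        infected = newly_infected
        history.append(len(infected) + len(recovered))
    return history

history = simulate()
# The cumulative curve rises quickly, then saturates as susceptibles run out
```

With these assumed parameters the effective early reproduction number is roughly contacts × p_transmit = 2, so the outbreak typically takes off; lowering p_transmit well below 0.25 usually makes it die out, mirroring the R0 threshold discussed earlier in this collection.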

Qualitatively, we have found it very useful to create traditional scenarios in narrower policy areas (e.g., technology, the economy, national security, etc.) and then explicitly trace and assess overall system dynamics and how different scenario outcomes could interact across time and across policy areas (e.g., technology change often precedes economic and national security change) to produce varying emergent effects.

Kerbel concludes by noting that, “Exponentially increasing global complexity is the defining characteristic of the age.” Because of this, effective anticipatory intelligence capabilities are more important than ever before to organizations’ future survival and success – and more challenging to develop.