When SMEs disagree on an estimate, ask about underlying factors, widen the uncertainty range, and run separate analyses.

Learn how to handle SME disagreement on estimates by asking about underlying factors, widening the uncertainty range, and running the analysis with each estimate. This multi-step approach clarifies assumptions, reveals information gaps, and strengthens risk decisions through collaboration.

When two subject matter experts (SMEs) don’t see eye to eye on an estimate, it can feel like you’ve run into a wall. In FAIR terms, a disagreement isn’t a dead end—it’s data pointing you toward a better understanding of risk. The good news? There are practical, sensible steps you can take that respect the science of risk modeling and still keep the process human. In practice, you can and should consider all of the following actions. They each address a different layer of the problem, and together they help you land on something that’s both credible and useful.

Let’s unpack the four moves that often work well when SMEs disagree

  1. Ask each SME about the underlying factors composing the variable

Here’s the thing: estimates aren’t just numbers. They’re built on a bedrock of assumptions, data inputs, and judgments about what could happen. When SMEs disagree, the first move is to have each of them spell out the factors that feed their estimate. What events are they counting as loss events? What likelihoods are they attaching to those events? What about exposure—asset value at risk, asset criticality, and potential consequences?

In FAIR terms, you’re trying to illuminate the components of the variable you’re estimating—whether it’s Loss Event Frequency, Loss Magnitude, or a subcomponent like contact probability or vulnerability. By pulling the factors into the open, you can see where the divergence originates. Is one SME weighting certain data sets more heavily? Do they interpret a control in place as more or less effective than the other does? Do they disagree about the time horizon or the speed of an attacker’s approach? These questions aren’t a trap; they’re breadcrumbs that guide you toward a shared understanding.

A practical tip: use a simple elicitation form or a quick workshop where each SME writes down the factors they’re using, with space for notes about data sources, confidence, and known uncertainties. If you document it well, you’ll soon spot the places where your model’s health depends on fragile assumptions, and you can address them directly.
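To make that concrete, here is a minimal sketch of what one row of such an elicitation form could look like if you capture it in Python. Everything here is hypothetical (the field names, the SME labels, and the numbers); the point is simply that each expert’s range, confidence, factors, and data sources end up in one comparable record.

    from dataclasses import dataclass, field

    @dataclass
    class ElicitationRecord:
        """One SME's input for a single FAIR variable (illustrative structure only)."""
        sme: str                     # who provided the estimate
        variable: str                # e.g. "Loss Event Frequency (events/year)"
        low: float                   # minimum plausible value
        most_likely: float           # the SME's best single guess
        high: float                  # maximum plausible value
        confidence: str              # e.g. "low", "medium", "high"
        factors: list = field(default_factory=list)       # assumptions and drivers behind the numbers
        data_sources: list = field(default_factory=list)  # where the numbers come from

    # Two divergent, hypothetical views of the same variable
    sme_a = ElicitationRecord("SME A", "Loss Event Frequency (events/year)",
                              low=0.5, most_likely=2.0, high=4.0, confidence="medium",
                              factors=["assumes current controls remain effective"],
                              data_sources=["three years of internal incident tickets"])
    sme_b = ElicitationRecord("SME B", "Loss Event Frequency (events/year)",
                              low=2.0, most_likely=6.0, high=12.0, confidence="low",
                              factors=["expects attacker interest in this asset to rise"],
                              data_sources=["industry breach reports"])

Comparing two records like these side by side is often enough to show exactly which assumption is doing the work.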

  2. Combine the estimates into a wider range to reflect uncertainty

Disagreement isn’t a failure of rigor; it’s a signal that there’s meaningful uncertainty to capture. When estimates differ, producing a single crisp number may mislead stakeholders about the level of risk. A wide, well-justified range can be more honest and more actionable.

Think of it like weather forecasting: one forecast might say there’s a 20% chance of a heavy storm, another says 35%. The helpful move isn’t to pretend one is wrong; it’s to acknowledge that both are credible under different assumptions, and to show what conditions would push the outcome in one direction or the other. In FAIR practice, this translates to presenting a probability distribution or a clearly bounded range for the variable, rather than a precise point estimate.

When you present a range, you should also narrate the drivers that push the range wider or narrower. Is the range broad because one SME is uncertain about data quality? Or because the consequences under different threat scenarios swing widely? This narrative part matters as much as the numbers, because it helps decision-makers gauge where to focus risk-reduction efforts or governance attention.
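One simple way to turn two divergent three-point estimates into a single, honest range is to sample an equal-weight mixture of the two views and report percentiles of the result. The sketch below assumes triangular distributions as a stand-in for whatever calibrated distribution you normally use, and all numbers are hypothetical.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 50_000

    def sample_estimate(low, mode, high):
        """Draw from a three-point estimate using a triangular distribution (a simple PERT stand-in)."""
        return rng.triangular(low, mode, high, size=n)

    # Hypothetical Loss Event Frequency estimates (events/year) from two SMEs
    lef_a = sample_estimate(0.5, 2.0, 4.0)
    lef_b = sample_estimate(2.0, 6.0, 12.0)

    # Equal-weight mixture: each simulated year reflects one SME's view, chosen at random
    combined = np.where(rng.random(n) < 0.5, lef_a, lef_b)

    p10, p50, p90 = np.percentile(combined, [10, 50, 90])
    print(f"Combined LEF: P10={p10:.1f}, P50={p50:.1f}, P90={p90:.1f} events/year")

The combined percentiles will usually be wider than either SME’s view on its own, which is exactly the honesty the range is meant to convey.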

  3. Run the analysis twice, with each SME’s estimate, to assess results

With two credible but differing inputs, you gain a valuable comparison. Running the analysis separately using each SME’s estimate lets you observe how the conclusions shift. Do the overall risk levels rise or fall with each input set? Do the relative rankings of risk, controls, or mitigations change in meaningful ways? This exercise isn’t about choosing a “winner”; it’s about showing the sensitivity of your results to key assumptions and about surfacing points of agreement or conflict that deserve further discussion.

A dual-run approach also creates a concrete basis for conversation between SMEs. If one run highlights a scenario where a particular control underperforms under one set of assumptions, you can explore that discrepancy together, revisit data quality, or consider refining parameter values. It’s a practical, iterative way to move from “I think this is accurate” to “Here’s how the model behaves under different lenses.”
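A dual run can be as simple as parameterizing the model once and feeding it each SME’s inputs in turn. The sketch below uses a deliberately simplified annualized-loss calculation (a frequency draw times a magnitude draw) with hypothetical parameters; the comparison pattern is the point, not the model.

    import numpy as np

    rng = np.random.default_rng(7)
    N = 50_000

    def annualized_loss(lef_params, lm_params):
        """Simplified Monte Carlo: annual loss = frequency draw x single-event magnitude draw."""
        lef = rng.triangular(*lef_params, size=N)                                  # events per year
        magnitude = rng.lognormal(mean=lm_params[0], sigma=lm_params[1], size=N)   # loss per event
        return lef * magnitude

    # Same loss-magnitude assumptions, different frequency views (all values hypothetical)
    run_a = annualized_loss(lef_params=(0.5, 2.0, 4.0), lm_params=(11.5, 0.6))
    run_b = annualized_loss(lef_params=(2.0, 6.0, 12.0), lm_params=(11.5, 0.6))

    for name, run in [("SME A inputs", run_a), ("SME B inputs", run_b)]:
        p50, p90 = np.percentile(run, [50, 90])
        print(f"{name}: median annual loss ~ ${p50:,.0f}, 90th percentile ~ ${p90:,.0f}")

If both runs land in the same decision band (say, both above the organization’s risk tolerance), the disagreement may not matter for the decision at hand; if they straddle a threshold, that is where the follow-up conversation should focus.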

  4. Do all of the above, because combined they’re more robust than any single step

Individually, these steps are valuable. Together, they form a robust workflow that addresses the root of disagreements while preserving transparency and trust. Engaging SMEs to reveal their reasoning, widening the uncertainty portrayal, and testing with alternative inputs creates a fuller picture of risk. It also signals to stakeholders that you’re not chasing a neat number; you’re pursuing a credible, defendable risk profile.

A practical way to weave these steps into a smooth process is to combine structured elicitation with transparent documentation. Create a risk notebook where you log each SME’s factors, the data sources, the confidence levels, and the assumptions you’re testing. Then run the model with both inputs and compare the outputs side by side. Finally, bring everyone to a joint review where you discuss the differences, adjust where you can, and agree on how to communicate what’s uncertain and what’s understood.

How to implement this in real life (without turning it into a soap opera)

  • Start with a clear definition of the variable. In FAIR, you’re often dealing with Loss Event Frequency and Loss Magnitude, or their components. Make sure everyone is aligned on what exactly is being estimated.

  • Use a lightweight elicitation template. A simple form that lists factors, data sources, assumptions, and confidence levels helps prevent drift during the discussion.

  • Name the data quality loudly and clearly. Is a data point derived from historical events, expert judgment, or external benchmarks? If confidence is low, flag it and treat it as a separate source of uncertainty.

  • Document the rationale for each SME’s estimate. A brief note on why they expect a certain rate or value helps future analysts understand the lens through which the input was formed.

  • Prefer probabilistic representations when possible. If you can express the estimate as a distribution or a range, you’ll convey uncertainty more faithfully than a single number.

  • Keep the process collaborative. A neutral facilitator or a joint review session can help keep discussions productive and focused on the model, not personal disagreement.

  • Communicate results with clarity. Use visualizations—range charts, shaded bands, or side-by-side comparisons—to make the uncertainty tangible for stakeholders who might not be steeped in risk jargon.
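For that last point, a range chart can be as simple as a shaded band per scenario with a marker at the median. The sketch below assumes matplotlib and uses hypothetical P10/P50/P90 figures, for example the outputs of the dual run above.

    import matplotlib.pyplot as plt

    # Hypothetical P10 / P50 / P90 annualized loss figures (in $ thousands)
    scenarios = ["SME A inputs", "SME B inputs", "Combined range"]
    p10 = [150, 600, 180]
    p50 = [400, 1500, 900]
    p90 = [900, 3400, 3000]

    fig, ax = plt.subplots(figsize=(7, 3))
    for i, _ in enumerate(scenarios):
        ax.plot([p10[i], p90[i]], [i, i], linewidth=6, alpha=0.4)  # shaded band from P10 to P90
        ax.plot(p50[i], i, "o", color="black")                     # marker at the median
    ax.set_yticks(range(len(scenarios)))
    ax.set_yticklabels(scenarios)
    ax.set_xlabel("Annualized loss exposure ($ thousands)")
    ax.set_title("Uncertainty ranges under each set of assumptions")
    plt.tight_layout()
    plt.show()

A chart like this makes it obvious at a glance where the two experts agree, where they diverge, and how wide the combined picture really is.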

A few pitfalls to watch for—and how to steer around them

  • Anchor bias: If one SME’s number is introduced first, others might anchor to it. Counter this by presenting the factors and ranges first, then the estimates, and consider anonymized inputs to minimize influence.

  • Overconfidence: It’s tempting to push for a single, “best” number. Resist the urge; risk naturally wears multiple faces. The strength lies in showing how outcomes shift with plausible variations.

  • Data quality gaps: When data is thin, the model leans on judgment. That’s okay, as long as you’re explicit about the limits and the added uncertainty. Use that as a trigger to gather better data or adjust the modeling approach.

  • Too many cooks: Involving too many SMEs can bog you down. Keep the set of key players tight, with a clear agenda, and use a structured process so everyone’s input is meaningful.

A quick analogy to keep this grounded

Imagine you’re planning a rescue mission in a storm. Two captains forecast different wind patterns. Rather than pick one forecast and go, you’d do three things: ask each captain to explain what wind data they’re counting on, plot a weather envelope that captures the possible conditions, and run a dry run with each forecast to see how the plan might change. Then you’d run a joint briefing to decide on the safest course. The same logic applies when SMEs disagree on risk estimates in a FAIR analysis. The goal isn’t to pick a favorite number; it’s to chart the risks clearly, so leadership can act confidently.

Putting it all together: why this approach pays off

  • It strengthens the credibility of your analysis. When you show how you’ve handled disagreement, you demonstrate rigor and transparency.

  • It improves decision quality. A broader view of uncertainty often reveals gaps, opportunities, and tradeoffs that a single point estimate would miss.

  • It builds trust among stakeholders. People value a process that listens to different perspectives and explains the reasoning behind decisions.

  • It creates a defensible trail. The documentation you’ve built—factors, data sources, and the rationale for each estimate—becomes a reusable asset for future analyses.

In the end, the meteorologist approach—understand the inputs, quantify the range, test alternative scenarios, and compare outcomes—keeps your FAIR work grounded and practical. It respects the expert judgment you’ve got in the room while acknowledging that risk, by its very nature, lives in the space between certainties.

A closing thought

Disagreements don’t have to be uncomfortable. They can be opportunities to sharpen your model, sharpen your communication, and sharpen your decision-making. When SMEs differ, you’re not picking sides—you’re refining the picture. And that’s exactly where sound risk analysis earns its keep: by painting a truer picture of what could happen, and guiding actions that protect people, data, and systems in the process. So, yes—ask the questions, widen the range, run the analyses, and bring those diverse voices into a single, clearer understanding of risk. Your future self—and your stakeholders—will thank you for it.
