When there's no past loss data, FAIR risk analysis steps down to Threat Event Frequency and Vulnerability. This approach blends known threat activity with control gaps to craft a credible risk estimate, keeping the process practical and adaptable when historical data isn't available.

No data on past losses? Let’s talk through a smart way to keep risk estimates honest when the history book is blank.

If you’ve ever tried to measure risk with a map that’s missing most of its landmarks, you know precision starts to feel like a wish. In information risk work, that missing map shows up as a lack of data about past loss events. It happens more often than you’d think, especially in newer systems, niche industries, or sensor-heavy environments where incidents aren’t always labeled or recorded consistently. The instinct might be to guess, but in FAIR thinking there’s a cleaner path: step down a level and work with Threat Event Frequency and Vulnerability.

Here’s the idea in plain terms. In the Factor Analysis of Information Risk (FAIR) framework, you don’t start by forcing a single loss number. You break risk down into parts you can reason about, even when you don’t have a tidy history. When past loss data is scarce or missing, you shift your focus downward in the model—from the level of actual losses to the likelihood of events happening and how vulnerable the organization is to those events. The specific move is to step down a level and work at the Threat Event Frequency (TEF) and Vulnerability levels. That’s the pragmatic workaround that keeps the picture useful without pretending data exists where it doesn’t.

Let me explain TEF and Vulnerability a bit, because a lot of the magic happens there.

  • Threat Event Frequency (TEF): This is the rate at which threats might occur in your environment. It’s not about whether a loss will happen, but about how often the relevant threats are likely to occur. Think of TEF as the tempo of risk activity—how often the door is potentially smashed in by a threat actor, a misconfigured service is exposed, or a phishing attempt lands in a user’s inbox. When historical loss data is missing, TEF becomes a forward-looking, scenario-based estimate. You can ground it in threat intelligence, industry patterns, and the ways your organization’s environment changes over time.

  • Vulnerability: This is the probability that a threat event exploits a weakness in your defenses. In other words, given a threat event might occur, how likely is it that your controls, processes, or human factors let that event lead to a loss? Vulnerability is shaped by controls in place, their effectiveness, how people actually use them, and how well systems are configured and monitored.

A quick mental picture: Threat Event Frequency tells you how often threats are trying to do something in your space; Vulnerability tells you how often those attempts that could cause loss actually succeed. The Risk you care about—the potential loss events per year—emerges from combining these pieces, even if you don’t have direct historical loss data to lean on.
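That combination can be sketched in a few lines of code. This is a minimal illustration of the LEF = TEF × Vulnerability relationship; the numbers are purely illustrative, not calibrated estimates:

```python
def loss_event_frequency(tef_per_year: float, vulnerability: float) -> float:
    """LEF = TEF * Vulnerability: how often threat events are expected
    to become actual loss events."""
    if not 0.0 <= vulnerability <= 1.0:
        raise ValueError("vulnerability is a probability in [0, 1]")
    return tef_per_year * vulnerability

# Hypothetical example: ~12 relevant threat events per year, and a 25%
# chance that any given event defeats the controls in place.
lef = loss_event_frequency(12.0, 0.25)
print(lef)  # 3.0 expected loss events per year
```

Nothing here requires historical loss records: both inputs are forward-looking estimates.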

A practical walk-through: what to do when past loss data is scarce

  1. Start with a defensible TEF. Rather than asking, “How often did losses happen in the past?” you ask, “How often could a threat event occur here, given the current environment?” Gather input from:
  • Threat intelligence that’s relevant to your industry or technology stack.

  • Internal incident indicators, like near-misses or security alerts, even if they didn’t lead to a loss.

  • External benchmarks or case studies that resemble your setup, used as rough guides rather than exact predictions.

  • Environment changes: new deployments, added integrations, or shifts in user behavior (for instance, more remote access or cloud adoption).
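One simple way to combine those inputs into a defensible TEF is a confidence-weighted average across sources. This is a hypothetical sketch; the source names, estimates, and weights are invented for illustration:

```python
def triangulate_tef(estimates: dict[str, tuple[float, float]]) -> float:
    """Weighted average of per-source TEF estimates.
    Each value is (tef_in_events_per_year, confidence_weight)."""
    total_weight = sum(w for _, w in estimates.values())
    return sum(tef * w for tef, w in estimates.values()) / total_weight

sources = {
    "threat_intel": (10.0, 0.5),       # industry-relevant intel, trusted most
    "internal_alerts": (6.0, 0.3),     # near-misses and security alerts
    "external_benchmark": (20.0, 0.2), # rough guide only, weighted least
}
print(round(triangulate_tef(sources), 1))  # 10.8 events/year
```

The weights force you to be explicit about which source you trust most, rather than silently anchoring on one number.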

  2. Assess Vulnerability with care. Look at your controls not as a checkbox count, but as a living capability:
  • How effective are your safeguards against the specific threat types you’re modeling?

  • Are patches current? Are configurations hardened? Do users have awareness training?

  • How do people interact with systems? A technically strong control can falter if people don’t follow procedures.

  • What assumptions are you making about attacker methods? If those assumptions are shaky, adjust the vulnerability estimate.

  3. Use a scenario-based approach. Without a solid historical loss line, scenarios become your best friend:
  • Build plausible threat scenarios tied to the asset and the environment.

  • For each scenario, estimate TEF and Vulnerability, then translate those into a Loss Event Frequency (LEF) proxy.

  • Don’t chase a single “correct” number. Instead, build a defensible range that reflects uncertainty.
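A per-scenario sketch might look like this, with TEF and Vulnerability ranges translated into an LEF proxy range. The scenario name and all numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    tef_low: float    # threat events/year, lower bound
    tef_high: float   # threat events/year, upper bound
    vuln_low: float   # probability a threat event succeeds, lower bound
    vuln_high: float  # upper bound

    def lef_range(self) -> tuple[float, float]:
        """A defensible LEF range rather than a single 'correct' number."""
        return (self.tef_low * self.vuln_low, self.tef_high * self.vuln_high)

phishing = Scenario("credential phishing", tef_low=24, tef_high=60,
                    vuln_low=0.02, vuln_high=0.10)
print(phishing.lef_range())  # roughly 0.5 to 6 loss events/year
```

Each scenario carries its own uncertainty, which keeps the analysis honest about what is and isn't known.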

  4. Document uncertainties clearly. This isn’t a weakness; it’s realism. When you can’t point to a precise past loss, you point to a credible range and the reasons behind it:
  • State the sources you used for TEF and Vulnerability estimates.

  • Note gaps in data and how they influence the numbers.

  • Include sensitivity checks: how would the results shift if TEF is doubled or Vulnerability is reduced by half?
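The sensitivity check in that last bullet is easy to automate. The baseline figures below are assumptions for illustration:

```python
def lef(tef: float, vuln: float) -> float:
    """Loss Event Frequency as TEF times Vulnerability."""
    return tef * vuln

baseline_tef, baseline_vuln = 12.0, 0.25

baseline = lef(baseline_tef, baseline_vuln)         # 3.0
tef_doubled = lef(baseline_tef * 2, baseline_vuln)  # 6.0
vuln_halved = lef(baseline_tef, baseline_vuln / 2)  # 1.5

for label, value in [("baseline", baseline),
                     ("TEF doubled", tef_doubled),
                     ("Vulnerability halved", vuln_halved)]:
    print(f"{label}: {value} loss events/year")
```

Seeing how much the output moves under each change tells stakeholders which estimate deserves the most scrutiny.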

  5. Move from LEF toward impact estimates with context. Once you have TEF and Vulnerability estimates, you can still talk about potential loss in meaningful terms:
  • Tie LEF to asset value and loss magnitude (how badly things would hurt if a loss occurs).

  • Use ranges to reflect uncertainty in both frequency and impact.

  • Communicate the overall risk in a way decision-makers can relate to—without pretending historical losses backstop the numbers.
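Tying an LEF range to a loss-magnitude range can be sketched as a bound-by-bound multiplication. The dollar figures here are hypothetical:

```python
def annualized_loss_range(lef_range: tuple[float, float],
                          magnitude_range: tuple[float, float]) -> tuple[float, float]:
    """Multiply matching bounds: (low LEF * low loss, high LEF * high loss)."""
    return (lef_range[0] * magnitude_range[0],
            lef_range[1] * magnitude_range[1])

lef = (0.5, 2.0)               # loss events per year
magnitude = (50_000, 400_000)  # dollars per loss event

low, high = annualized_loss_range(lef, magnitude)
print(f"${low:,.0f} to ${high:,.0f} per year")  # $25,000 to $800,000 per year
```

A range like this is something decision-makers can weigh against the cost of controls, without anyone pretending historical losses backstop the numbers.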

Why this approach is more robust than guessing

  • It respects what data you do have. Even when history is thin, there is often a trail of signals—threat patterns, control effectiveness, and human factors—that you can use to build credible TEF and Vulnerability estimates.

  • It aligns with how risk actually behaves. Frequency and vulnerability are not just abstract concepts; they map to real-world conditions like attacker behavior, control lapses, and operational changes.

  • It creates a forward-looking view. You’re not stuck in the past; you’re building a narrative about what could happen next, given the defenses in place and the threat landscape you’re seeing.

  • It keeps uncertainty honest. By openly stating ranges and assumptions, you reduce the risk of overconfidence and you give decision-makers a clearer picture of where the big unknowns lie.

A few practical tips to sharpen your TEF and Vulnerability estimates

  • Triangulate from multiple sources. If you don’t have your own loss history, combine threat intel, external reports, and internal indicators. Weave them into a coherent TEF estimate rather than choosing one source as gospel.

  • Ground threats in your context. Not all threats are equal for every organization. A cloud-centric shop worries about API abuse; a traditional enterprise might worry more about phishing and insider risk. Tailor TEF to what’s realistic for you.

  • Think in ranges, not points. A best practice is to present TEF and Vulnerability as a spectrum (low, moderate, high) or as a numeric range (e.g., 0.05 to 0.15 per year). This mirrors real uncertainty and keeps the analysis honest.

  • Revisit and revise as you learn more. If you later obtain data or observe changes in the threat landscape, update TEF and Vulnerability. The model should flex with reality, not stay frozen in a single estimate.

  • Include a storytelling element. People respond to narratives. Use short scenarios that illustrate how a threat could exploit a vulnerability and what a loss would look like. This helps stakeholders grasp risk without drowning in numbers.
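To make the “ranges, not points” tip concrete, here is one hypothetical mapping from qualitative labels to numeric ranges, so low/moderate/high answers can still feed a quantitative model. The bucket boundaries are invented for illustration:

```python
TEF_BUCKETS = {  # threat events per year
    "low": (0.05, 0.5),
    "moderate": (0.5, 5.0),
    "high": (5.0, 50.0),
}

VULN_BUCKETS = {  # probability a threat event succeeds
    "low": (0.01, 0.1),
    "moderate": (0.1, 0.4),
    "high": (0.4, 0.9),
}

def lef_range(tef_label: str, vuln_label: str) -> tuple[float, float]:
    """Translate qualitative labels into a numeric LEF range."""
    t_lo, t_hi = TEF_BUCKETS[tef_label]
    v_lo, v_hi = VULN_BUCKETS[vuln_label]
    return (t_lo * v_lo, t_hi * v_hi)

print(lef_range("moderate", "low"))  # (0.005, 0.5)
```

Whatever boundaries you choose, write them down: the mapping itself is one of the assumptions that should survive into your documentation.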

Common pitfalls to avoid

  • Treating TEF or Vulnerability as fixed numbers. They’re estimates that should reflect confidence levels and uncertainty.

  • Ignoring human factors. Controls aren’t only technical; training, culture, and procedures matter a lot in determining Vulnerability.

  • Overcomplicating the model. When data is thin, keep the model focused on TEF and Vulnerability first. You can layer in more detail later if needed.

  • Forcing a single “risk score” onto everyone. Not all stakeholders care about the same outputs. Provide concise, actionable views for executives and more detailed views for security teams.

A quick mental model you can carry around

  • TEF tells you how often a threat could reasonably show up in your environment.

  • Vulnerability tells you how often that threat would get through if it showed up.

  • LEF combines the two to suggest how often a loss event might occur.

  • Loss Magnitude gives you the potential impact if that loss event happens.

  • Risk is the combination of how often (LEF) and how bad (Loss Magnitude) it could be, with clear notes about uncertainty.
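That whole mental model can be sketched as a small Monte Carlo simulation. The uniform distributions and parameter ranges below are assumptions for illustration, not something FAIR prescribes:

```python
import random

random.seed(42)  # reproducible illustration

def simulate_annual_loss(trials: int = 10_000) -> list[float]:
    """Sample TEF, Vulnerability, and Loss Magnitude from assumed ranges
    and return a distribution of annualized loss exposure."""
    losses = []
    for _ in range(trials):
        tef = random.uniform(4, 20)                   # threat events/year
        vuln = random.uniform(0.05, 0.25)             # P(event becomes a loss)
        lef = tef * vuln                              # loss events/year
        magnitude = random.uniform(20_000, 150_000)   # dollars per loss event
        losses.append(lef * magnitude)                # annualized exposure
    return losses

losses = sorted(simulate_annual_loss())
p10, p50, p90 = (losses[int(len(losses) * q)] for q in (0.1, 0.5, 0.9))
print(f"P10 ${p10:,.0f}  P50 ${p50:,.0f}  P90 ${p90:,.0f}")
```

Reporting percentiles instead of a single number is exactly the kind of honest-uncertainty output the bullets above describe.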

The big takeaway is simple: when the data on past losses is missing, don’t fight the gap. Lean into the step down: analyze Threat Event Frequency and Vulnerability, build credible scenarios, and quantify what those scenarios imply in terms of possible loss. This approach keeps your risk assessment grounded, adaptable, and genuinely useful for making informed decisions.

If you’re exploring FAIR concepts, this is the kind of practical mindset that makes the model feel not just theoretical, but alive in real-world settings. You’re not pretending history exists where it doesn’t—you’re painting a plausible, defensible picture of risk based on how threats interact with your defenses right now. It’s a smarter way to think about risk, especially when you’re staring at a blank page and wondering what to do next.

A final thought to carry with you: risk work isn’t a rigid script. It’s a conversation with data, a negotiation with uncertainty, and a steady climb toward clarity. When data leaves you short, the best move is to recalibrate, collect what you can, and build forward-looking estimates that still feel trustworthy. That’s the heart of FAIR—finding a clear path through complexity, one carefully reasoned step at a time.
