When data on past loss events is scarce, analysts focus on Threat Event Frequency.

When loss history is thin, analysts pivot from guessing losses to measuring how often threats might occur. By focusing on Threat Event Frequency—using industry trends, models, and expert input—they build a reliable risk picture without overrelying on scarce past data.

Outline to guide the read

  • Open with a relatable problem: scarce data on past losses and why that stings.
  • Set the stage: the FAIR model basics, focusing on Threat Event Frequency (TEF) and how it fits with LEF and LM.

  • Core idea: when data is scarce, step down in analysis to TEF to keep the assessment credible.

  • Practical steps: how to estimate TEF using industry trends, expert judgment, and scenario thinking; how to combine with vulnerability to get LEF; how LM fits in when it’s hard to pin down.

  • A concrete example to illustrate the math and intuition.

  • How to communicate results and keep uncertainty honest.

  • Wrap with a takeaway and a quick nudge to useful resources.

Article: When data is scarce, a smarter path in FAIR risk analysis

Let’s face it: not every risk you’re called to assess comes with a neat pile of past loss data. Some threats are new, some losses are rare, and the really valuable data sits behind closed doors. In those moments, analysts can feel boxed in. Do you guess the losses and risk overstatement? Do you ignore the data gap and pretend you know more than you do? Neither choice feels right. Here’s the practical, grounded route that keeps FAIR honest: step down in the analysis and focus on Threat Event Frequency.

What FAIR actually is, in one breath

FAIR stands for Factor Analysis of Information Risk. It’s a way to translate intangible risk into numbers you can discuss with a board, a security team, or a product manager. At its core, risk in FAIR is about three ingredients: how often a threat could occur (Threat Event Frequency, TEF), how likely a threat event is to cause a loss (Vulnerability), and how big the loss would be if it happens (Loss Magnitude, LM). TEF and Vulnerability combine into a Loss Event Frequency (LEF), and multiplying LEF by the loss amount gives you a meaningful expected-loss metric. The math is clean, but the insight comes from choosing the right inputs.
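To make those relationships concrete, here is a minimal sketch in Python; the function names and the example numbers are illustrative choices, not part of the FAIR standard:

```python
# Minimal sketch of the core FAIR relationships (illustrative, not the official spec).
# tef: threat events per year; vulnerability: probability a threat event becomes a
# loss event; loss_magnitude: dollar impact of a single loss event.

def loss_event_frequency(tef: float, vulnerability: float) -> float:
    """LEF = TEF x Vulnerability (loss events per year)."""
    return tef * vulnerability

def expected_annual_loss(lef: float, loss_magnitude: float) -> float:
    """Risk expressed as expected annual loss: LEF x LM (dollars per year)."""
    return lef * loss_magnitude

# Placeholder numbers, purely for illustration:
lef = loss_event_frequency(tef=0.5, vulnerability=0.3)    # 0.15 loss events per year
print(expected_annual_loss(lef, loss_magnitude=250_000))  # 37500.0 dollars per year
```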

Here’s the thing about scarce data

When you don’t have a robust history of actual losses, estimating LM directly becomes risky. You could end up overconfidently pinning a dollar value on something you haven’t seen. The FAIR approach gives you a safer, more informative alternative: you reduce the focus on precise loss numbers and lean into TEF—how often a threat event might occur—paired with grounded estimates of vulnerability and LM where possible. By doing so, you still build a credible risk picture without pretending you know every past loss detail.

Why TEF as a starting point makes sense

Threat Event Frequency captures the cadence of threats. It answers questions like: How often could a phishing scam be attempted in a year? How frequently might a ransomware attempt land on us? This is especially powerful when historical losses are thin because you can lean on broader data sources: industry trends, known attacker behavior patterns, and expert judgment. TEF is more generalizable than “how much money did we lose last time?” and that makes it a sturdy foundation when data is sparse.

From TEF to a complete picture: the path forward

  1. Define the threat scenarios you care about

Start by listing plausible threat events relevant to your asset. For example, if you’re protecting customer data, scenarios might include phishing campaigns targeting credentials, malware that exfiltrates data, or misconfigurations leading to data exposure. If you’re protecting critical infrastructure, you might consider supply-chain compromises, insider threats, or third-party service disruptions. The goal is to map concrete scenarios that could lead to loss, not to chase every hypothetical risk.

  2. Estimate Threat Event Frequency (TEF) for each scenario

Think in terms of a time window (usually per year). Use a mix of sources:

  • Industry reports and trends: look for general threat frequencies and patterns across the sector.

  • Theoretical models: simple probabilistic reasoning about attacker opportunities and defenses can guide you.

  • Expert judgment: consult seasoned security professionals who’ve seen the landscape evolve.

  • Scenario-based assumptions: when nothing else is available, define a plausible frequency range (e.g., 0.05 to 0.2 events per year) and be explicit about why.

The aim isn’t a perfect number but a credible range that reflects what’s likely, given the constraints.
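If you are scripting the analysis, one simple way to keep the range and the reasoning together is a small record per scenario; the structure and the numbers below are just one plausible choice, not a FAIR requirement:

```python
from dataclasses import dataclass

@dataclass
class TefEstimate:
    """A TEF estimate expressed as a range, with its rationale attached."""
    scenario: str
    low: float          # minimum plausible events per year
    most_likely: float  # best single estimate, events per year
    high: float         # maximum plausible events per year
    rationale: str      # sources and reasoning, kept for later scrutiny

phishing_tef = TefEstimate(
    scenario="Phishing credential theft",
    low=0.05, most_likely=0.15, high=0.30,
    rationale="Sector threat reports plus expert judgment from the security team",
)
```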

  3. Bring vulnerability into the mix

In FAIR, vulnerability is the probability that a threat event, once it occurs, results in a loss. If you don’t have reliable loss data, you can still reason about vulnerability by asking practical questions:

  • How strong are our controls against this threat?

  • Do we have compensating safeguards (like two-factor authentication, anomaly detection, backups) that reduce the chance of a loss once a threat event happens?

  • How quickly could we detect and respond to the event?

Vulnerability acts as the bridge between TEF and LEF. If TEF is the frequency of events, vulnerability tempers how many of those events actually translate into a loss.

  4. Compute Loss Event Frequency (LEF)

LEF = TEF × Vulnerability. This gives you a frequency of loss events per time period. It’s the bridge from “threats” to “losses,” and it remains meaningful even when you don’t know the exact dollar impact of each event. You’ll often end up with a range here, reflecting the uncertainty in both TEF and vulnerability.
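Carrying the uncertainty through is straightforward interval arithmetic; here is a minimal sketch with placeholder ranges:

```python
# Propagate TEF and vulnerability ranges into an LEF range (placeholder numbers).
tef_range = (0.05, 0.20)          # threat events per year: low, high
vulnerability_range = (0.4, 0.7)  # probability a threat event becomes a loss

lef_low = tef_range[0] * vulnerability_range[0]   # 0.02 loss events per year
lef_high = tef_range[1] * vulnerability_range[1]  # 0.14 loss events per year
print(f"LEF range: {lef_low:.3f} to {lef_high:.3f} loss events per year")
```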

  5. Address Loss Magnitude (LM) where you can

LM is about how big the loss would be if a loss event occurs. When past losses are scarce, you can still reason about LM with careful ranges, not precise numbers:

  • Consider categories of impact (minor, moderate, severe) rather than a single dollar figure.

  • Use rough dollar bands derived from partial data, benchmarking, or expert opinion.

  • If you can, anchor LM to a known metric (e.g., cost of remediation, regulatory penalties, customer notification burdens) and expand with uncertainty bands.

  6. Synthesize risk and communicate clearly

Risk in FAIR is the combination of LEF and LM. If you have LEF as a frequency and LM as a plausible range, you can present an Expected Annual Loss (EAL) as a range. For example: LEF of 0.08 losses per year (one every 12.5 years on average) with LM between $0.5M and $2M yields an EAL between $40k and $160k per year. That kind of framing is honest, actionable, and far more trustworthy than a single, overconfident number.
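As a quick check on that arithmetic, the same example in a few lines of Python:

```python
# The example above: LEF of 0.08 loss events/year, LM between $0.5M and $2M.
lef = 0.08
lm_low, lm_high = 500_000, 2_000_000

eal_low = lef * lm_low    # 40000.0  -> about $40k per year
eal_high = lef * lm_high  # 160000.0 -> about $160k per year
print(f"Expected annual loss: ${eal_low:,.0f} to ${eal_high:,.0f}")
```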

A concrete, bite-sized example you can picture

Imagine an organization that stores health data. Past loss data is sparse because breaches are relatively recent and fines are irregular. The analyst outlines a few threat scenarios:

  • Phishing-driven credential theft leading to unauthorized access.

  • Misconfigured cloud storage leaking data.

  • Supply-chain compromise affecting third-party software.

For each scenario, TEF is estimated from industry trends and expert judgment:

  • Phishing credential theft: 0.15 per year

  • Misconfigurations: 0.08 per year

  • Supply-chain compromise: 0.05 per year

Next, vulnerability is assessed:

  • Phishing: 0.6 (significant risk if credentials are stolen)

  • Misconfigurations: 0.5 (moderate risk given decent controls)

  • Supply-chain: 0.7 (high risk due to external dependencies)

LEF for each scenario:

  • Phishing: 0.15 × 0.6 = 0.09 per year

  • Misconfigurations: 0.08 × 0.5 = 0.04 per year

  • Supply-chain: 0.05 × 0.7 = 0.035 per year

LM is kept as ranges because precise losses aren’t known:

  • Phishing: $0.5M–$2M per event

  • Misconfigurations: $0.2M–$1M

  • Supply-chain: $1M–$5M

Putting it together, the expected yearly risk looks like:

  • Phishing: 0.09 × $0.5M–$2M = $45k–$180k per year

  • Misconfigurations: 0.04 × $0.2M–$1M = $8k–$40k per year

  • Supply-chain: 0.035 × $1M–$5M = $35k–$175k per year

Add them up and you get a credible, range-based view of annual risk. It’s not a precise forecast, but it’s a solid, transparent picture you can discuss with leadership, risk committees, and the security team.
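If you would rather have a script do the bookkeeping, here is a short sketch that reproduces the numbers above; the data structure and names are my own choices for illustration:

```python
# Reproduce the scenario math above: LEF = TEF x Vulnerability, EAL = LEF x LM range.
scenarios = {
    "Phishing credential theft":   {"tef": 0.15, "vuln": 0.6, "lm": (500_000, 2_000_000)},
    "Misconfigured cloud storage": {"tef": 0.08, "vuln": 0.5, "lm": (200_000, 1_000_000)},
    "Supply-chain compromise":     {"tef": 0.05, "vuln": 0.7, "lm": (1_000_000, 5_000_000)},
}

total_low = total_high = 0.0
for name, s in scenarios.items():
    lef = s["tef"] * s["vuln"]
    eal_low, eal_high = lef * s["lm"][0], lef * s["lm"][1]
    total_low += eal_low
    total_high += eal_high
    print(f"{name}: LEF {lef:.3f}/yr, EAL ${eal_low:,.0f} to ${eal_high:,.0f}")

print(f"Total across scenarios: ${total_low:,.0f} to ${total_high:,.0f} per year")
```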

Why this approach doesn’t shortchange rigor

If you’re worried that TEF is wishful thinking, here’s the reassurance: TEF doesn’t replace data; it complements it. It uses what you can reasonably know—threat landscapes, defender capabilities, and educated judgments—without pretending you have a time machine for past losses. You’re still doing a quantitative risk assessment; you’re just choosing inputs that fit the data reality. That’s sensible, not evasive.

A few practical tips that keep the method honest

  • Document every assumption: when you estimate TEF, show the sources and reasoning behind the numbers. This isn’t vanity; it’s what makes the results defensible under scrutiny.

  • Use ranges, not single numbers: uncertainty is a feature, not a bug. Present best-case, most-likely, and worst-case bands.

  • Keep a feedback loop: as new incidents occur or new industry data becomes available, update TEF, vulnerability, and LM. The model should evolve, not stay stuck.

  • Don’t ignore dependencies: some threats co-occur or influence one another. Note where that might tilt TEF or vulnerability for a scenario.

  • Communicate in business terms: translate risk into potential financial impact, regulatory exposure, and operational consequences. People understand dollars and deadlines a lot better than abstract risk scores.

Where to look for credible inputs

  • Industry threat reports and analyses (seasoned publications that summarize attacker behavior and prevalence)

  • Expert panels or internal red-team findings to calibrate TEF and vulnerability

  • Benchmark data from vendors and peer organizations, treated as directional guidance rather than gospel

  • Regression logic or simple Poisson-type reasoning for frequency estimates if you want to formalize the math
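To ground the Poisson comment in that last bullet: if you treat threat arrivals as a Poisson process with rate λ events per year, the chance of at least one event in a year is 1 − e^(−λ). A minimal sketch, using the phishing TEF from the earlier example:

```python
import math

def prob_at_least_one_event(rate_per_year: float, years: float = 1.0) -> float:
    """Poisson arrivals: P(at least one event in the window) = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate_per_year * years)

# With a TEF of 0.15 events/year, roughly a 14% chance of one or more threat
# events occurring in a given year.
print(round(prob_at_least_one_event(0.15), 3))  # 0.139
```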

The broader takeaway

Scarcity of past-loss data doesn’t have to stall a FAIR assessment. In fact, it’s a timely reminder of why the framework’s structure matters. TEF gives you a disciplined way to reason about how often threats might happen, even when you can’t point to a long line of prior losses. When TEF is combined with thoughtful vulnerability estimates and defensible LM bands, you still produce a risk picture that’s rigorous, transparent, and useful for decision-making.

A small nudge for the curious

If you’re curious to deepen your understanding, you can explore how TEF interacts with LEF and LM in more formal terms—through the lens of probability models and uncertainty quantification. Tools like Monte Carlo simulations can be employed to propagate the ranges and show you a distribution of possible annual losses, not just a single “expected” figure. It’s not about turning risk into a crystal ball; it’s about making the best, clearest call you can given what you know.
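As one illustration of that idea, here is a small Monte Carlo sketch that samples TEF, vulnerability, and loss magnitude from triangular distributions and multiplies them per trial; the distributions and numbers are assumptions for demonstration, and a fuller model would also simulate discrete event counts:

```python
import random

random.seed(1)  # reproducible illustration

def simulate_annual_loss(n_trials: int = 100_000) -> list[float]:
    """Monte Carlo propagation of TEF, vulnerability, and LM ranges (illustrative)."""
    losses = []
    for _ in range(n_trials):
        tef = random.triangular(0.05, 0.30, 0.15)   # events/year: low, high, mode
        vuln = random.triangular(0.4, 0.8, 0.6)     # probability of loss given an event
        lm = random.triangular(500_000, 2_000_000, 1_000_000)  # dollars per loss event
        # Simplification: treat tef * vuln * lm as the annualized loss for this trial,
        # rather than drawing a discrete count of loss events.
        losses.append(tef * vuln * lm)
    return losses

losses = sorted(simulate_annual_loss())
print(f"Median annual loss: ${losses[len(losses) // 2]:,.0f}")
print(f"90th percentile:    ${losses[int(len(losses) * 0.9)]:,.0f}")
```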

Final takeaway: when data is thin, think frequency first

In FAIR analysis, data gaps aren’t a dead end. They’re a cue to shift focus to Threat Event Frequency, then layer in what you can know about vulnerability and loss magnitude. This approach keeps your assessment grounded, communicable, and actionable—precisely what you need when you’re steering risk discussions in real organizations.

If you’re exploring FAIR ideas and how to apply them in real-world risk work, keep TEF in your toolkit. It’s the steady compass when the data map is fuzzy, and it helps you tell a story about risk that others can actually act on.
