Why Resistance Strength at the lowest level of abstraction matters when evaluating a new control in FAIR

Understanding FAIR analysis means picking the right level of detail. Evaluating a new control at the lowest level of abstraction, Resistance Strength, reveals how well it can prevent or blunt loss events. Analyzing only at higher levels risks missing critical details, which makes granular insight essential for risk decisions.

Outline (quick skeleton)

  • Hook: When you’re evaluating a new security control, the instinct is to look at big-picture numbers. But FAIR asks you to zoom in first.
  • Core idea: The appropriate level of abstraction for testing a new control is the lowest level—Resistance Strength—so you can see the true capability of the control in concrete terms.

  • Section 1: Quick primer on levels of abstraction in the FAIR model and what Resistance Strength means.

  • Section 2: Why measuring at the granular level beats high-level summaries for a new control.

  • Section 3: Practical data to gather at the Resistance Strength level; examples and methods.

  • Section 4: How multiple analyses benefit from this level, with caveats and how to handle data gaps.

  • Section 5: Real-world-friendly guidance and a closing thought.

What follows is the full article.

Choosing the right lens: how granular you should be when assessing a new control

Let me explain something that often sounds dry but really matters in risk work. When you’re evaluating a new control, you don’t want to guess how it will affect risk by peering only at broad, high-level numbers. In FAIR—the Factor Analysis of Information Risk model—the right starting point is the lowest level of abstraction, the level that zeroes in on Resistance Strength. Why? Because that level shows you, in concrete terms, what the control can actually do to prevent or blunt a loss event.

Let’s lay out a quick roadmap of what these terms mean and why this particular layer matters.

Understanding levels of abstraction in FAIR

FAIR maps risk by linking threats, vulnerabilities, and controls to the likelihood and impact of loss events. At a high level, you might be tempted to look at Loss Event Frequency, or even the overall risk value. But for a new control, those big numbers can hide the real story. They’re influenced by many moving parts: how often threat events occur, how vulnerable the asset is, how well a control is implemented, and how effectively that control actually resists or mitigates a threat.

Resistance Strength is the measure of a control’s capability to resist a threat, expressed in granular, concrete terms. It’s the counterpart to Threat Capability: in FAIR, the two are compared to estimate Vulnerability, the probability that a threat event becomes a loss event. With strong Resistance Strength, a control doesn’t just exist on paper; it measurably reduces the chance that a threat event turns into a loss. In other words, Resistance Strength translates the design of a control into something you can observe, quantify, and compare.

Why the granular level wins for a new control

There’s a simple truth behind the math: when the data is fresh and the control is new, you want to see the direct effect. If you start from Loss Event Frequency or other high-level abstractions, you risk averaging out the signal you’re trying to detect. A subtle weakness in a control or a design flaw can get masked by the noise of broader metrics. It’s a bit like judging a car’s braking performance by fuel economy alone—you may miss how quickly or reliably the brakes actually respond in an emergency.

Think of Resistance Strength as the “brakes test” for your new control. It tells you: if a threat materializes, how well does the control hold its ground, what is the chance it fails, and how does that failure rate compare across different contexts? That level of clarity is gold for risk decisions, whether you’re deciding on deployment, tuning, or budget adjustments.

What to measure at the Resistance Strength level

If you’re centering your analyses on Resistance Strength, here are practical data points and methods you can use (a short estimation sketch follows the list):

  • Control failure rate under test conditions: How often does the control fail when subjected to a controlled threat or test scenario? This gives a direct read on resilience.

  • Time-to-detect and time-to-contain: For controls that are supposed to raise awareness or stop a process, measure how quickly they identify an anomaly and how fast containment begins.

  • Prevention rate against targeted threats: In a simulated or red-teaming exercise, what percentage of the specific, relevant threats does the control prevent from advancing?

  • Recovery robustness: When a threat slips through, how quickly does the control contribute to recovery or rollback? This tests the practical durability of the defense.

  • Coverage across contexts: Does the control maintain its strength across different systems, data classifications, or operating environments? Contextual testing keeps results honest.

  • Resistance to evasion: Can threat actors find a way around the control? If you test adversarial scenarios, you’re better equipped to quantify true resilience.

  • Interactions with other controls: Is there a compound effect when multiple controls work together? Sometimes strength compounds; other times, interdependencies reveal new weaknesses.
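
To make a few of these measures concrete, here is a minimal sketch of how raw test records might be turned into Resistance Strength signals. Everything in it is a hypothetical assumption: the record structure, field names, and numbers are invented, and each record stands for one simulated threat attempt from a red-team or controlled test.

from statistics import mean

# Hypothetical red-team results: one record per simulated threat attempt.
# Field names and values are illustrative only.
attempts = [
    {"blocked": True,  "detect_min": 4,  "contain_min": 12},
    {"blocked": True,  "detect_min": 7,  "contain_min": 20},
    {"blocked": False, "detect_min": 35, "contain_min": 90},
    {"blocked": True,  "detect_min": 3,  "contain_min": 9},
]

total = len(attempts)
failures = sum(1 for a in attempts if not a["blocked"])

# Control failure rate under test conditions.
failure_rate = failures / total

# Time-to-detect and time-to-contain, averaged across attempts.
mttd = mean(a["detect_min"] for a in attempts)
mttc = mean(a["contain_min"] for a in attempts)

print(f"Failure rate: {failure_rate:.0%} of {total} attempts")
print(f"Mean time to detect: {mttd:.1f} min; mean time to contain: {mttc:.1f} min")

With only a handful of attempts, treat these point estimates as starting ranges rather than precise values, and record the test conditions alongside the numbers so later comparisons stay honest.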

How to collect and interpret these signals

  • Realistic testing: Use red team exercises, tabletop simulations, or controlled experiments. The goal is to provoke the control’s response in a safe environment and observe outcomes.

  • Historical data where possible: If you have prior incidents or near-misses related to the same risk class, compare how the new control would’ve changed those outcomes. Be mindful of changing contexts.

  • Simulations and synthetic data: When live data is scarce, simulations help you estimate how Resistance Strength might translate into real risk reduction (a minimal simulation sketch follows this list).

  • Expert judgment with guardrails: When data is thin, bring in subject matter experts to estimate Resistance Strength, but document assumptions and uncertainties clearly.

  • Metrics that stick: Use clear units and definitions. For example, “probability of control failure per 1,000 threat attempts” or “mean time to detect in minutes.” Consistency across analyses makes comparison meaningful.
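
When live data is scarce, one common way to run the simulations mentioned above is to model Threat Capability and Resistance Strength as distributions and estimate Vulnerability as the probability that capability exceeds resistance. Here is a minimal Monte Carlo sketch of that idea; the triangular distributions, the 0–100 scale, and every parameter value are illustrative assumptions, not calibrated figures.

import random

random.seed(7)

def vulnerability(rs, tc, trials=100_000):
    """Estimate Vulnerability as P(Threat Capability > Resistance Strength).

    rs and tc are (low, mode, high) tuples on an assumed 0-100 capability scale,
    sampled as triangular distributions. All parameters are illustrative.
    """
    rs_low, rs_mode, rs_high = rs
    tc_low, tc_mode, tc_high = tc
    hits = 0
    for _ in range(trials):
        resistance = random.triangular(rs_low, rs_high, rs_mode)
        capability = random.triangular(tc_low, tc_high, tc_mode)
        if capability > resistance:
            hits += 1
    return hits / trials

threat = (40, 55, 75)                             # assumed threat-capability range
baseline = vulnerability((30, 45, 60), threat)    # before the new control
improved = vulnerability((55, 70, 85), threat)    # with the new control's estimated strength

print(f"Estimated vulnerability without the control: {baseline:.2f}")
print(f"Estimated vulnerability with the control:    {improved:.2f}")

The point of the sketch is not the exact numbers but the habit: express Resistance Strength as a range, compare it against an assumed threat-capability range, and keep the units and definitions identical across analyses so the comparison means something.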

A tangible example to anchor the idea

Imagine you’re deploying a new anomaly-detection system in a financial services environment. The question isn’t only “will losses go down in the abstract?” It’s: “How often does the detector miss a real fraud signal (false negatives) or raise false alarms (false positives)?” You’d want to measure Resistance Strength directly (a small worked sketch follows this list):

  • How many actual fraudulent events were detected in a given period versus total events attempted?

  • How long does it take for the system to flag a suspicious activity after the attempt begins?

  • In what percentage of cases does the detector miss a genuine fraud, and under which conditions does it perform best or struggle?

  • Do different transaction types require different thresholds to maintain strength?
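
Here is a rough sketch of what computing those signals could look like against a hypothetical labeled event log. The transaction types, field layout, and numbers are invented for illustration; a real analysis would pull from your fraud-case and alerting systems.

from collections import defaultdict

# Hypothetical labeled events: (transaction_type, was_fraud, was_flagged, minutes_to_flag).
events = [
    ("card_present", True,  True,  2.0),
    ("card_present", True,  False, None),   # missed fraud (false negative)
    ("online",       True,  True,  0.5),
    ("online",       False, True,  1.0),    # false alarm (false positive)
    ("wire",         True,  True,  5.0),
    ("wire",         False, False, None),
]

by_type = defaultdict(lambda: {"fraud": 0, "caught": 0, "flag_minutes": [], "false_pos": 0})

for tx_type, was_fraud, was_flagged, minutes in events:
    stats = by_type[tx_type]
    if was_fraud:
        stats["fraud"] += 1
        if was_flagged:
            stats["caught"] += 1
            stats["flag_minutes"].append(minutes)
    elif was_flagged:
        stats["false_pos"] += 1

for tx_type, s in sorted(by_type.items()):
    mean_flag = sum(s["flag_minutes"]) / len(s["flag_minutes"]) if s["flag_minutes"] else float("nan")
    print(f"{tx_type}: detected {s['caught']}/{s['fraud']} frauds, "
          f"mean time to flag {mean_flag:.1f} min, false positives {s['false_pos']}")

Run the same calculation per threshold setting or per transaction type and you have a direct, comparable read on where the detector’s Resistance Strength holds and where it needs tuning.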

These granular observations give you a clear map of where the control excels and where tweaks are needed. Only then can you judge, with confidence, how the control changes your risk profile.

From one analysis to many: how this level supports multiple views

The strength of focusing on Resistance Strength is that you can build multiple analyses that each tell a coherent part of the story, without losing the thread (a brief comparison sketch follows the list). For example:

  • Contextual comparison: Test the same control in retail banking, mid-market, and enterprise settings to see how its strength holds up across contexts.

  • Scenario diversity: Evaluate performance across different threat archetypes—phishing, credential stuffing, insider risk—to understand where the control’s resilience is strongest.

  • Time-based perspective: Monitor Resistance Strength over several quarters to detect whether the control’s robustness degrades as the environment changes (think new software, staff, or processes).

  • Cost-benefit linkage: Tie the measured Strength to cost data. If a control’s resilience improves, does the resulting risk reduction justify the investment?
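
One lightweight way to keep these views comparable is to compute the same Resistance Strength metric for each context (or each quarter) and lay the results side by side, as in the small sketch below. The contexts, counts, and intervals are purely illustrative, and the interval is a simple normal approximation rather than anything FAIR-specific.

import math

# Hypothetical test outcomes per context: (failed attempts, total attempts).
results = {
    "retail_banking": (6, 200),
    "mid_market":     (11, 150),
    "enterprise":     (3, 180),
}

for context, (failed, total) in results.items():
    rate = failed / total
    # Rough 95% interval via a normal approximation; good enough for a first-pass comparison.
    half_width = 1.96 * math.sqrt(rate * (1 - rate) / total)
    low, high = max(rate - half_width, 0.0), min(rate + half_width, 1.0)
    print(f"{context}: failure rate {rate:.1%} (~{low:.1%} to {high:.1%})")

The same pattern works for a time-based view: swap the context keys for quarters and watch whether the interval drifts upward as the environment changes.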

Of course, there are caveats. If data is scarce, you might begin with expert estimates or narrow-scoped tests, but you should still anchor those estimates in the same Strength framework and clearly mark where data is inferred. In the end, you want a consistent line of reasoning from the granular measure up to the big-picture risk picture.

A few practical pitfalls to watch for

  • Don’t overstate the generality of a single test. One scenario or one data point rarely tells the whole truth.

  • Keep definitions stable. If you redefine “failure” mid-project, comparisons become muddy.

  • Remember context matters. A control might show amazing Resistance Strength under test conditions but stumble in a live production environment due to a factor you hadn’t considered.

  • Balance speed and rigor. Granular analyses take more time upfront, but they pay off with sharper decisions later.

Bringing it all together: a practical mindset for evaluating new controls

Here’s a simple way to frame your work: start with the granular lens, then connect the dots upward. Ask yourself, “What does this control do, exactly, when faced with a real threat? What does ‘resistance’ look like in practice?” Gather the measured signals—failure rates, detection times, and context-specific performance—and translate them into risk-reducing capability. Only after you’ve established a solid Resistance Strength should you broaden the view to see how those strengths aggregate into overall risk reduction.

If you’re mapping or evaluating a new control, the principle to live by is straightforward: analyze at the lowest practical level. Resist the urge to rely on high-level summaries alone. The granular view reveals the true capability of the defense, the subtleties that matter for decision-making, and the leverage you’ll need to tune, invest, or reconfigure as the landscape shifts.

A closing thought, with a touch of realism

No one enjoys chasing perfect data. Data gaps happen. Teams sprint to get something tangible, and a neat, precise Resistance Strength figure can feel like a unicorn. Here’s the good news: you don’t need perfect data to start making smarter choices. Begin with deliberate, measurable signals at the Resistance Strength level. Build a thread of evidence, document assumptions, and keep refining. Over time, those granular insights converge into a trustworthy picture of how the new control reshapes risk.

If you’re curious to explore this approach further, consider looking into FAIR resources that illuminate how controls are modeled and measured. Tools—from open-source simulations to risk-management platforms—can help you structure Resistance Strength assessments and compare alternatives with rigor. The aim isn’t to chase numbers for their own sake, but to illuminate how a control actually performs when it’s put to the test.

So, the next time you’re faced with evaluating a new control, remember the guiding principle: start at the lowest practical level. Focus on Resistance Strength. Let the data tell you what the control can and cannot do. Then broaden the view to see how those strengths fit into the bigger risk picture. It’s a straightforward shift, but it makes all the difference when decisions rest on solid, actionable insight.
