Belfort Advisory · 17 March 2026 · 5 min read

A Pragmatic Approach to Insider Risk Management

Most organizations cannot tell you whether their insider risk program works. We built a structured methodology covering nine capability domains and nine quality axes to find out.


Most organizations that have an insider risk program cannot tell you whether it works. They know what tools are deployed and what policies exist. They cannot tell you whether any of it would hold up when an actual incident hits. We have spent the last year building a structured methodology for assessing and improving insider risk programs. This post shares the core of that approach.

Scope it right

Insider threat programs focus on catching malicious actors. That is too narrow. Insider risk management covers negligence, error, compromised credentials, third-party access, and organizational conditions that create exposure. Roughly 55% of insider incidents stem from carelessness, not malice. A program that only hunts for bad actors misses more than half the problem.

A solid program defines scope across malicious insiders (intentional harm), negligent insiders (carelessness, workarounds, shadow IT), and compromised insiders (legitimate accounts taken over by external actors). Each requires different detection logic, different response protocols, and different prevention strategies.

Map the capabilities: nine domains

An insider risk program that only lives in the security team will fail. The capability areas we assess cover the full cross-functional surface:

Effective insider risk programs span nine interconnected capability domains
  1. Strategy and governance: mandate, sponsorship, funding, RACI, program management
  2. Threat modeling and operations: scenario development, deterrence, threat hunting, intelligence loop
  3. Risk management: crown jewels identification, risk appetite, scenario-based treatment plans
  4. Legal, privacy, and ethics: lawful basis for monitoring, proportionality, DPIA coverage, works council alignment
  5. Human-centric culture: speak-up mechanisms, psychological safety, manager training, JML lifecycle management
  6. Technical controls: IAM, DLP, endpoint detection, PAM, logging integrity
  7. Behavioral analytics and detection: UEBA, risk scoring, signal fusion, alert triage quality
  8. Investigation and response: forensic readiness, multidisciplinary triage, case management, law enforcement liaison
  9. Performance and resilience: KPIs, tabletop exercises, control testing, lessons learned

Most gaps sit in domains 4 and 5. Organizations invest in technical controls but underinvest in the legal basis for using them and the cultural conditions that prevent incidents from occurring.

Quality axes: measuring how well, not just what

Having a capability and having an effective capability are different things. A DLP policy that generates 500 false positives per day is not a control. It is noise.

We measure each domain through a set of quality axes. Each axis asks a different question about the same capability:

  • Governance: Is there clear ownership, funding, and oversight cadence? Or does the capability exist without anyone being accountable for it?
  • Execution: Is this operationally consistent? Do people actually follow the process, or does it only work in theory?
  • Technical orchestration: Are detection, monitoring, and response automated and integrated? Or are they manual, fragmented, and dependent on individual knowledge?
  • Legal and privacy: Is this defensible under GDPR, NIS2, the EU AI Act, and local employment law? Could you use the evidence you collect in court?
  • Human sentiment: Do employees trust the program? Or does the monitoring create the resentment that drives insider risk in the first place?
  • Visibility: Do you know where your sensitive data sits, who has access to what, and where concentration risk exists?
  • Resilience: When an incident happens, can you investigate, contain, and recover? Have you tested this, or is the playbook theoretical?
  • Friction management: Do your controls create workarounds? Shadow IT and unapproved AI tools are symptoms of friction, not root causes.
  • Control lag: How quickly can you adapt when the organization changes? Mergers, restructurings, and technology migrations create windows of elevated risk. Programs that take months to adjust leave gaps.

The intersection of domains and axes is where the real picture emerges. An organization might have strong technical controls that are well governed and operationally consistent, but score poorly on legal defensibility and human sentiment. That means the tools work, but using them might violate GDPR, and employees see them as surveillance rather than protection.

You cannot see that in a one-dimensional maturity score.
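The point can be illustrated with a small sketch. The domain and axis names come from this post; the scores, and the idea of averaging them into a single number, are purely hypothetical, not the actual assessment methodology:

```python
# Hypothetical axis scores (1-5) for one domain, "technical controls".
# Axis names are from the post; the numbers are invented for illustration.
technical_controls = {
    "governance": 4,
    "execution": 4,
    "technical_orchestration": 5,
    "legal_privacy": 2,
    "human_sentiment": 2,
    "visibility": 4,
    "resilience": 3,
    "friction_management": 3,
    "control_lag": 3,
}

# A one-dimensional maturity score flattens the profile into a single number.
average = sum(technical_controls.values()) / len(technical_controls)

# Looking per axis exposes what the average hides.
weakest = min(technical_controls, key=technical_controls.get)

print(f"one-dimensional score: {average:.1f}")  # looks moderately healthy
print(f"weakest axis: {weakest}")               # the GDPR/trust gap
```

The average comes out around 3.3 and looks acceptable; only the per-axis view shows that legal defensibility and human sentiment are at 2, which is exactly the gap the prose describes.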

Foundations before sophistication

A pattern we see regularly: organizations deploy advanced behavioral analytics without having basic governance in place. UEBA is running but nobody owns the alert triage. Detection rules exist but there is no legal basis for the monitoring that feeds them.

We handle this through structural integrity checks. Foundational capabilities constrain the maximum score of advanced ones. Weak governance caps how much credit the program gets for sophisticated detection, regardless of how impressive the tooling looks. This prevents "paper maturity," where expensive tools mask the absence of governance, legal basis, and operational process.
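A structural integrity check of this kind can be sketched as a capping rule. The specific rule below (an advanced capability may score at most one point above its foundation) is an invented example, not the methodology's actual formula:

```python
def capped_score(advanced_score: float, foundation_score: float,
                 ceiling_margin: float = 1.0) -> float:
    """Cap an advanced capability's score by its foundation.

    Hypothetical rule: the advanced score may exceed the foundational
    score by at most `ceiling_margin` points. Expensive tooling cannot
    buy credit that weak governance does not support.
    """
    return min(advanced_score, foundation_score + ceiling_margin)

# Impressive UEBA tooling (4.5) on top of weak governance (2.0)
# only earns 3.0 under this rule.
print(capped_score(advanced_score=4.5, foundation_score=2.0))  # 3.0
```

With a solid foundation the cap never binds, so mature programs are unaffected; the rule only bites when sophistication outruns the basics.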

Adapt the depth

Not every organization needs 200 questions. A 150-person company with no formal insider risk program needs a focused assessment that tells them where to start. A 5,000-person NIS2-regulated utility needs depth across every domain.

Our assessment uses gatekeeper questions to route dynamically. Foundational questions unlock or suppress downstream areas based on actual maturity. Less mature organizations get a shorter, more actionable assessment. Mature organizations get the depth needed to find specific gaps.
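Gatekeeper routing can be sketched as a simple unlock table. The question text, IDs, and block names below are invented for illustration; the real question bank is not published here:

```python
# Hypothetical gatekeeper table: a foundational answer unlocks or
# suppresses downstream question blocks. IDs and blocks are invented.
QUESTION_BANK = {
    "gov_01": {
        "text": "Is there a formally mandated insider risk program?",
        "unlocks_if_yes": ["detection", "analytics", "response"],
    },
}

def route(answers: dict, base_blocks: list) -> list:
    """Return the assessment blocks to present, given gatekeeper answers."""
    blocks = list(base_blocks)
    for qid, meta in QUESTION_BANK.items():
        if answers.get(qid):
            blocks += meta["unlocks_if_yes"]
    return blocks

# No formal program: a short, focused assessment.
print(route({"gov_01": False}, ["strategy", "risk"]))
# Mature program: the deeper blocks are unlocked.
print(route({"gov_01": True}, ["strategy", "risk"]))
```

The first call returns only the two base blocks; the second appends the three downstream blocks, which is the shorter-for-less-mature, deeper-for-more-mature behavior described above.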

We validate without requiring artifact uploads. Targeted follow-up questions that test the depth and consistency of claims reveal operational reality more reliably than document review, without the NDA friction that blocks honest participation.

The regulatory context shapes everything

NIS2, DORA, EU AI Act, and GDPR create overlapping requirements for insider risk programs

For organizations operating in Europe, regulation is not a compliance checkbox. It shapes what an insider risk program can and cannot do.

NIS2 mandates insider risk measures for covered entities, with management personally liable (fines up to €10 million or 2% of global turnover). DORA extends similar requirements across financial services. The EU AI Act classifies employee behavioral monitoring systems as high-risk from August 2026. Belgium’s Private Investigations Act requires government licensing for structured internal investigations. GDPR enforcement against employee monitoring has produced over €355 million in fines.

These regulations pull in opposite directions. NIS2 says monitor. GDPR says justify every bit of monitoring. The EU AI Act says your monitoring AI needs human oversight and impact assessments. A program that does not account for this tension from the start will be either non-compliant or operationally toothless.

This is exactly why legal and privacy is an axis, not a domain. It applies to everything.

From assessment to program

The output is not a score for the sake of scoring. It is a prioritized program: what to fix first, what to build next, and what to defer. Weaknesses map to specific recommendations organized into phases that account for regulatory obligations, resource constraints, and actual risk exposure.

Insider risk is not a technology problem with a governance wrapper. It is a governance, legal, human, and operational challenge that includes technology. Getting that order right changes how you build everything else.

Want to discuss how this applies to your organization?