Belfort Advisory · 3 March 2026 · 4 min read

Insider Risk Is Not a Separate Discipline

The industry treats insider risk as its own domain with dedicated teams and tools. We think that model is wrong. Infrastructure, identities, and information are the same surfaces whether the risk is external or internal.

Server room with glowing cables and network infrastructure — the same systems that protect against external threats also govern insider risk

There is a growing market for standalone insider risk programs. Dedicated teams, dedicated tools, dedicated budgets. We think that model is wrong, and this post explains why Belfort is built differently.

The false separation

The industry decided that cybersecurity protects against external threats and insider risk management protects against internal ones. Clean categories. But the underlying infrastructure is the same. The identity layer that protects against credential stuffing is the same layer that governs whether a departing employee retains access to sensitive systems. The SOC that detects lateral movement by an external attacker sees identical telemetry when an insider stages data for exfiltration.

The logging architecture that supports incident response is the same architecture that determines whether an internal investigation produces admissible evidence.

Treating them as separate disciplines means parallel governance for the same systems, separate teams looking at the same data, and gaps where neither team owns the overlap. We have watched organizations discover those gaps during an actual incident.

Three surfaces, one problem

We organize our work around three surfaces: infrastructure, information, and identities. Every cybersecurity risk maps to at least one. So does every insider risk. The question is the same: what happens when the source of risk has legitimate access?

Illuminated circuit board representing the identity and access layer that underpins both cyber and insider risk
Identities, information, and infrastructure: three surfaces where cyber and insider risk converge

Identities

Identities are where the overlap is most obvious and most neglected. IAM programs invest heavily in authentication and privilege management for external attack scenarios. The insider scenarios get less attention: permission accumulation during lateral moves, service accounts with no owner, contractor access persisting months after engagement ends, shared credentials in OT environments that make individual accountability impossible.
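The failure modes above are detectable with ordinary inventory queries. A minimal sketch, assuming a hypothetical account inventory whose field names are illustrative and not any specific IAM product's schema:

```python
from datetime import date, timedelta

# Hypothetical inventory records; field names are assumptions for illustration.
accounts = [
    {"id": "svc-backup", "type": "service", "owner": None, "end_date": None},
    {"id": "j.doe-ext", "type": "contractor", "owner": "procurement",
     "end_date": date(2025, 9, 30)},
    {"id": "a.smith", "type": "employee", "owner": "hr", "end_date": None},
]

def access_findings(accounts, today, grace_days=0):
    """Flag two of the insider scenarios named above: service accounts
    with no owner, and contractor access persisting past engagement end."""
    findings = []
    for a in accounts:
        if a["type"] == "service" and a["owner"] is None:
            findings.append((a["id"], "service account with no owner"))
        if (a["type"] == "contractor" and a["end_date"]
                and today > a["end_date"] + timedelta(days=grace_days)):
            findings.append((a["id"], "contractor access past end date"))
    return findings

for acct_id, issue in access_findings(accounts, today=date(2026, 3, 1)):
    print(f"{acct_id}: {issue}")
```

The point of the sketch is that neither check requires behavioral analytics; it only requires that ownership and end dates are recorded at all.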

The hard part is not the technology. It is the joiner-mover-leaver (JML) lifecycle for difficult transitions. Onboarding is easy. Offboarding is usually adequate. What most organizations handle poorly: demotions, lateral moves into lower-sensitivity roles, garden leave, employees under investigation who still need access to maintain the appearance of normality. These require HR, legal, and security to coordinate in real time.

If your IAM program does not account for them, you do not have an insider risk control. You have a provisioning system.
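One way to express that coordination requirement is as a sign-off gate: the access change does not execute until every required function has approved it. A minimal sketch; the transition names come from the text, but the mapping of approver roles to transitions is an assumption for illustration:

```python
# Illustrative sketch: which functions must sign off per transition type.
# The role assignments below are assumptions, not a prescribed policy.
REQUIRED_SIGNOFFS = {
    "demotion": {"hr", "security"},
    "lateral_move": {"hr", "security"},
    "garden_leave": {"hr", "legal", "security"},
    "under_investigation": {"hr", "legal", "security"},
}

def can_execute(transition, signoffs):
    """An access change proceeds only once every required function has
    signed off; anything less is provisioning, not risk control."""
    missing = REQUIRED_SIGNOFFS[transition] - set(signoffs)
    return (not missing, sorted(missing))

ok, missing = can_execute("garden_leave", {"hr", "security"})
print(ok, missing)  # → False ['legal']
```

The design choice is deliberate: the gate fails closed, and it names which function is blocking, so the coordination problem is visible instead of silently skipped.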

Information

Information is where detection collides with legal reality. A SOC analyst sees a large data download at 2 AM. External compromise or insider exfiltration? The alert is identical. The response should not be. Insider cases need multidisciplinary triage from the start: HR context, legal assessment of the monitoring basis, and evidence handling that maintains chain of custody.
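The "identical alert, different response" point can be made concrete as a routing rule: once identity context is attached to the alert, the insider path pulls in HR and legal before anyone acts. A minimal sketch with illustrative field names, not any SOAR product's API:

```python
def triage_route(alert):
    """Route the same exfiltration alert differently based on actor
    context. Field names are assumptions for illustration."""
    if alert.get("actor_type") == "internal":
        # Insider path: multidisciplinary triage from the start, with
        # chain of custody preserved before any interview or seizure.
        return ["SOC", "HR", "legal", "forensics (chain of custody)"]
    # External path: standard incident response.
    return ["SOC", "incident response"]

alert = {"event": "bulk_download", "time": "02:00", "actor_type": "internal"}
print(triage_route(alert))
```

The telemetry that triggers the alert is unchanged; only the response path branches.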

A technically perfect investigation that violates GDPR proportionality or Belgium’s Private Investigations Act produces evidence that is legally void.

Organizations deploying UEBA or behavioral DLP for insider detection need a lawful basis, a proportionality assessment, and documented human oversight before they switch the tool on. GDPR enforcement against employee monitoring has produced over €355 million in fines. The EU AI Act classifies behavioral analytics in employment as high-risk from August 2026. These are not abstract compliance concerns. They determine whether your detection program is an asset or a liability.
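The three prerequisites named above can be enforced as a gating check that refuses to enable monitoring until each one is documented. A minimal sketch under that assumption; the function and field names are hypothetical:

```python
# Prerequisites named in the text; the gate fails closed if any is missing.
PREREQUISITES = ("lawful_basis", "proportionality_assessment", "human_oversight")

def may_enable(tool, evidence):
    """Refuse to switch on a monitoring capability until every
    prerequisite has documented evidence behind it."""
    missing = [p for p in PREREQUISITES if not evidence.get(p)]
    if missing:
        raise RuntimeError(f"{tool}: missing {', '.join(missing)}")
    return True

try:
    may_enable("UEBA", {"lawful_basis": True})
except RuntimeError as e:
    print(e)  # → UEBA: missing proportionality_assessment, human_oversight
```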

Infrastructure

Infrastructure is the dimension most insider risk programs miss entirely. OT environments, physical access to critical facilities, engineering workstations with direct process control. Weaker identity controls, less logging, shared access driven by operational necessity.

An insider with physical access to the right system can cause harm that no amount of IT-side monitoring will detect. NIS2 makes this explicit for covered entities. Yet most insider risk assessments stop at the network boundary.

Legal from the start

Lady Justice holding scales — legal defensibility must be built into insider risk programs from day one
Every monitoring capability needs a lawful basis; every investigation must comply with labor law and evidentiary standards

This is why Belfort Advisory operates alongside Belfort Law. Every monitoring capability needs a lawful basis. Every investigation must comply with labor law, works council requirements, and evidentiary standards. In Belgium, the intersection of the Private Investigations Act, GDPR, and employment law means a single procedural misstep can render all evidence automatically void.

Building legal defensibility into the program from the start is not conservative. It is the only way to ensure that when an insider incident happens, the organization can act on what it finds.

One program, one lens

When insider risk is treated as a separate discipline, it becomes a specialized function that borrows data from the cybersecurity program occasionally. When it is treated as a dimension of the cybersecurity program itself, every control gets evaluated for insider exposure, every detection capability gets insider use cases, and every governance decision accounts for the possibility that the risk comes from inside.

Want to discuss how this applies to your organisation?