Our Philosophy

Most security work still treats people as “the weakest link” or labels incidents as “human error”.

At Gamayun, behavioural security starts from a different assumption: incidents emerge from how people, power and pressure are arranged – from the organisation’s risk architecture – not from individual carelessness alone.

By risk architecture we mean the way roles, access, incentives, power and information flows are arranged around critical assets and decisions.
By behavioural security we mean the study and design of how security-relevant behaviour emerges from those arrangements, rather than from policies alone.

This page sets out the principles we work from and the limits we do not cross.

These principles exist to ensure behavioural work produces reliable risk reduction rather than coercive or symbolic controls.


What We Mean by Behavioural Security

Behavioural security is not a rebrand of “human factor” or awareness training.

For us, behavioural security means looking at security through the lens of behaviour in context:

  • Behaviour is the main medium through which risk is created, moved and contained.
  • Structures, incentives and strain shape what people actually do, not what policies say they should do.
  • Technology, controls and procedures are part of this behavioural system, not separate from it.

Instead of asking only “Who clicked?”, we ask questions like:

  • Who is under pressure, and from whom?
  • Who is undermined, blocked or caught between loyalties?
  • Which roles sit at the junction of access, ambiguity and weak oversight?
  • How do governance and leadership decisions make certain outcomes almost inevitable?

For example, a recurring data leak may say more about how a role is overloaded, isolated and pressured than about any single person’s attitude to policy.

Behavioural security is about understanding and redesigning these conditions, not just asking individuals to “be more careful”.


Core Principles

A small set of principles shapes all of our work.

Architecture and system responsibility before blame
We start by analysing how roles, power, incentives and information flows make certain behaviours likely. Responsibility for behavioural risk begins at system level, not only with the last individual in the chain.

Behaviour in context
Behaviour is interpreted against role, constraints, pressure and relationships – not as isolated “choices” detached from the environment in which they were made.

No social scoring
We do not build or support systems that assign hidden “risk”, “loyalty” or “trust” scores to staff. People are not reduced to generic ratings.

Profiling as an analytical tool, not an instrument of control
ABPM™ (Applied Behavioural Profiling Model) is used as one structured lens in clearly justified, high-risk contexts, with clear purpose and human oversight. It is not used to categorise whole populations, to automate decisions, or to legitimise actions that have already been decided for other reasons.

Ethics as constraint, not decoration
Ethical limits are part of the method. If a proposed use of behavioural work conflicts with those limits, we do not do it – even if it would be commercially attractive. Most organisations are imperfect and conflicted; that is normal. These boundaries are not a purity test; they are limits on the kinds of tools and programmes we will design.


Where We Draw the Line

There are uses of behavioural and profiling work we will not support, even if there is demand for them.

Covert social scoring and mass monitoring
We do not design or legitimise systems that quietly rate staff and keep those ratings hidden from the people being judged, or that monitor populations primarily to feed such scores.

Profiling as a shortcut to removal
We do not provide “scientific” labels simply to justify removing people or blocking careers while leaving the underlying architecture untouched.

Behavioural theatre
We are not interested in programmes whose main purpose is to generate reassuring dashboards or slide decks – behavioural theatre – without any serious change in how work is structured, supervised or incentivised.

False certainty
Behavioural work can reduce uncertainty and improve foresight. It cannot provide infallible predictions or “guarantees” that a particular individual will never become a risk. We refuse to pretend otherwise.

When we use ABPM™, it is in defined, high-risk contexts with explicit purpose, limited scope and human oversight – not as ongoing, automated scoring of an entire workforce.


Who This Is For

This approach is not aimed at everyone with a security budget.

It is for organisations that:

  • Are willing to look beyond “human error” and awareness slogans
  • Accept that governance, incentives and leadership can themselves be risk factors
  • Want structural options, not just another round of training or tools
  • Prefer clear, sometimes uncomfortable insight over comforting numbers that do not change outcomes

If you are prepared to ask “What in our architecture produces this behaviour?”, there is a basis for working together.


Who This Is Not For

Our work is unlikely to be a good fit for organisations that:

  • Primarily want a tool to “find bad apples”
  • Are looking for covert monitoring, social scoring or more intrusive surveillance as the main answer
  • Treat people mainly as compliance objects to be nudged or punished
  • Expect certainty and guarantees where none exist

In those situations, other services will meet your expectations better than behavioural security in this form.


How This Shapes Our Work in Practice

In practice, this philosophy has concrete consequences for how we design and deliver work.

  • We start from risk architectures, not lists of “risky individuals”. We map how roles, access, pressure and relationships create predictable trajectories.
  • We avoid interventions chosen for presentational clarity rather than demonstrable impact on risk-producing conditions.
  • We use behavioural profiling (ABPM™) selectively, in clearly justified, high-risk contexts, and always as one source of input alongside other evidence.
  • We focus our recommendations on governance, role design, incentives, supervision and controls – places where changing the architecture actually changes behaviour.
  • We treat incidents and near misses as signals about the system, not just as stories about individual failure.

In practice, organisations that work this way tend to see fewer repeat patterns and greater reductions in insider risk than those relying on awareness-only programmes, because this approach targets the conditions that repeatedly produce those behaviours, rather than only the last person who got caught.


© 2025 Gamayun Outsourcing Limited. All rights reserved.
Company No. 09754464 | Registered in England & Wales.