When Security Ignores People, It Quietly Fails Its Own Systems

Most organisations now run security awareness programmes. Yet phishing and basic social engineering remain stubbornly effective.

One factor is rarely examined: security designs its training around its own chart of “important topics”, while people filter information using their own chart of “things worth paying attention to”.

When those two charts misalign, awareness does not simply “underperform”. It becomes a form of governance theatre: a way to prove that topics were “covered”, without changing what people actually do.

Organisations build their training plans from institutional constraints: compliance obligations, audit requirements, and the list of topics that policies and frameworks say must be covered.

The implicit goal is coverage: every required topic appears somewhere in a programme; everyone is exposed to it; completion can be demonstrated. From the organisation’s standpoint, this is still a success: the topics have been “delivered”; the chart is complete. From a behavioural standpoint, coverage is just an artefact. Exposure is not learning, and it certainly isn’t control.

Employees do not carry the organisation’s chart in their heads. They carry their own: the risks they actually encounter in their role, and whatever seems worth their limited attention.

Adult learning research has been clear for decades: adults learn what they find useful for their own goals, not what they are merely exposed to. Knowles (1984) describes adult learning as problem-centred and relevance-driven; Wigfield and Eccles (2000) show that people invest effort when a task has perceived value and withdraw when it does not.

Information security studies echo this. Bulgurcu, Cavusoglu and Benbasat (2010) find that employees’ intentions to comply with information security policies are strongly shaped by their beliefs about personal benefits and costs of compliance, not just fear of sanctions. A broader literature on security behaviour and policy compliance repeatedly links perceived usefulness and role relevance to actual behaviour.

The mechanism is straightforward: attention is selective, perceived usefulness determines how deeply something is processed, and shallow processing leaves nothing behind.

Craik and Lockhart’s levels-of-processing framework is blunt on this point: information processed superficially does not form durable memory traces (Craik & Lockhart, 1972). So when an employee looks at their awareness module, the question in their head is not “Is this on the organisational risk register?” but “Is this worth my limited attention?”

If the answer is no, the cognitive system behaves rationally: click through, comply minimally, forget quickly.

In other words: the organisation’s chart is broad; employees’ charts are specific. Training is built to match the former, not the latter.


A recent large-scale mixed-methods study of embedded phishing training found that most of the impact came from the reminder effect – being periodically confronted with the risk – rather than from the educational pages people were shown after clicking. Many employees admitted they barely engaged with that content at all, citing lack of time and low perceived value (Lain et al., 2024).

Put differently: even when the training is targeted at a real failure event (a click), many employees still do not judge the content valuable enough to attend to.

A separate scoping review of 42 phishing-training studies by Marshall et al. (2024) finds that while many interventions report short-term improvements, results across studies are inconsistent, heavily dependent on context, and rarely demonstrate long-term behavioural change outside controlled settings. This is exactly what one would expect in a landscape where topic alignment and perceived usefulness are treated as optional extras rather than central design constraints.

The consequence of this misalignment is predictable:

  1. The organisation optimises for coverage.
    All mandated topics appear in the programme. Completion rates are high. Audit evidence is available.
  2. Employees optimise for usefulness.
    They attend to the few topics that intersect with their real risks and ignore or skim the rest.
  3. Metrics measure exposure, not encoding.
    Dashboards show that 95–100% of staff completed the training. Nothing in those metrics tells you how much was processed deeply enough to survive into real decision-making. Completion rates measure exposure. Incidents measure behaviour. The two are barely correlated.
  4. Incidents are still attributed to “human error”.
    Awareness campaigns can even generate a false sense of security, with employees performing well in tests while still failing in practice. The gap between what the system believes it has “fixed” and what people actually do is left unexplored.

At that point, awareness is no longer functioning as a control. It is functioning as a symbolic reassurance mechanism: proof that the organisation said the right things, irrespective of whether anyone internalised them.

A system that ignores how its human component actually selects, stores and uses information is not just incomplete. It is mis-specified.

Security often asserts that “people are the weakest link”. The picture emerging from the research is less flattering for systems than for people: employees attend to what they find useful, exactly as the learning research predicts, while programmes keep delivering content built for coverage.

The result is a quiet but serious failure: security designs awareness around the organisation’s topic list and then blames people for acting according to their own.

If organisations want awareness to be more than governance theatre, topic selection and scenario design cannot be driven solely by compliance and audit needs. They have to be anchored in the other chart: the risks employees actually face in their roles, and the topics they themselves judge worth their attention.

Ignoring that is not just “bad messaging”. It is a design error at the system level.

If adults learn only what they find useful, and security training continues to be built around what the organisation wants to say, whose chart is really shaping security behaviour – ours, or theirs?

References.