Security culture still clings to the fantasy that a trained observer can “just tell” who is lying or dangerous from a thin slice of behaviour. Behaviour detection programmes, micro-expression courses and “human lie detector” trainings all trade on that idea.
Call this snapshot profiling: the claim that a brief, observation-only sample of behaviour is enough to classify a person as deceptive, hostile or high-risk.
From a behavioural security perspective, snapshot profiling is not just imperfect. It is structurally misaligned with what we know about behaviour, personality and risk.
1. Why snapshot profiling fails: structure, not just error.
Snapshot profiling rests on three assumptions:
- Behaviour in security environments transparently reflects inner states.
- Deception and risk leave visible “tells” that trained observers can reliably decode.
- Real-world deployments have shown this to work.
All three assumptions collide with existing evidence.
Strong situations and compressed behaviour.
Airports, border crossings, corporate lobbies and formal interviews are strong situations: rules, scripts and surveillance constrain behaviour and narrow the range of admissible actions. Strong situation theory argues that in such environments, the influence of personality on momentary behaviour is dampened; everyone behaves more similarly because the situation is doing most of the work (Cooper and Withey, 2009).
Trying to infer deep traits or intent from micro-variations in posture or gaze in a strong situation is like trying to infer running style from someone walking on a narrow ledge.
Deception cues are weak.
A meta-analysis of 206 studies and 24,483 judges found that people detect lies from behaviour alone at about 54% accuracy – barely above chance (Bond and DePaulo, 2006). On average, judges correctly classified only 47% of lies as deceptive and 61% of truths as honest. Many of the cues that populate training slides – gaze aversion, fidgeting, posture changes – are either very weakly related to deception, or their meaning flips with context.
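The headline figures combine in a simple way: with the roughly even truth–lie split typical of these study designs, the per-class rates average out to the 54% overall figure. A quick sketch (the per-class rates are from the text; the 50/50 base rate is an assumption about study design):

```python
# Sketch of the accuracy arithmetic behind Bond & DePaulo (2006).
lie_hit_rate = 0.47    # lies correctly judged deceptive
truth_hit_rate = 0.61  # truths correctly judged honest
base_rate_lies = 0.50  # assumed: half of statements are lies, as in most designs

overall = lie_hit_rate * base_rate_lies + truth_hit_rate * (1 - base_rate_lies)
print(f"Overall accuracy: {overall:.0%}")  # 54%, barely above the 50% chance line
```

Note the asymmetry inside the average: judges lean towards believing people, so truths fare better than lies, which is exactly the wrong bias for a screening context.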
Training often increases confidence more than accuracy (Hauch et al., 2016).
Large-scale practice has already failed.
The US Transportation Security Administration’s Screening of Passengers by Observation Techniques (SPOT) is the most visible real-world test of snapshot profiling. Behaviour Detection Officers were trained to score passengers on clusters of “behavioural indicators” of stress, fear or deception.
Government Accountability Office reviews found that TSA deployed SPOT nationwide before establishing a scientifically valid basis for the indicators, and that no rigorous evidence showed the programme could reliably identify higher-risk passengers in the airport environment (GAO-10-763; GAO-11-461T). A National Research Council review likewise noted the lack of scientific consensus that behaviour detection principles could be reliably used for counter-terrorism screening.
In the Applied Behavioural Profiling Model (ABPM), this is treated as a hard design constraint: if a method claims to classify people reliably from a handful of behavioural “tells” in a strong situation, it is out of line with the data.
No amount of practitioner folklore compensates for that.
2. First impressions, halo, and why charm is a security bug.
If snapshot profiling were merely noisy, it would be a limited problem. The situation is worse: human judgment is systematically skewed by first impressions and the halo effect, and some high-risk individuals are very good at weaponising that.
Fast impressions, sticky judgments.
People form trait judgments from faces in under a second. Todorov and colleagues showed that competence ratings from one-second exposures to US political candidates’ faces predicted real election outcomes better than chance, and were linearly related to margin of victory (Todorov et al., 2005).
The halo effect means that one salient quality – attractiveness, confidence, likeability – bleeds into judgments of unrelated traits. Dion, Berscheid and Walster’s classic “what is beautiful is good” study showed that attractive individuals were assumed to be more socially and intellectually competent than unattractive ones, based purely on photographs (Dion et al., 1972).
These are not occasional glitches; they are default shortcuts in human social cognition.
Who benefits from the halo?
For behavioural security, the asymmetry matters more than the average error.
- Narcissistic individuals often make excellent first impressions. Brunell et al. (2008) found that narcissists are more likely to emerge as leaders in unstructured groups and are initially rated as charming and competent, even though later performance and interpersonal impact can be poor or exploitative.
- Corporate psychopaths and fraudsters depend on façades. Case literature on “snakes in suits” and white-collar crime documents systematic use of charm, technical fluency, symbols of success and shared identity to gain trust, suppress suspicion and access resources.
The result:
Snapshot profiling doesn’t just fail in general; it fails worst with charismatic predators, status-hungry narcissists and polished fraudsters (Babiak and Hare, 2006).
How ABPM treats halos:
ABPM builds this in as a design assumption. The model does not treat rapid likeability as evidence of safety; it treats it as a reason to delay judgment and watch how the person responds over time to status threat, boundaries, entitlement and loyalty conflicts.
A strong positive halo is reinterpreted as “interesting data point, monitor the pattern”, not “relax, this one is fine”.
Any system that teaches staff to “trust their read” after a few seconds is, in effect, institutionalising the halo effect. That is not behavioural security; it is a codified vulnerability.
3. Personality as time-series: where ABPM actually sits.
Rejecting five-second reads does not mean personality is irrelevant. It means taking personality research seriously.
Longitudinal work shows that personality traits have moderate rank-order stability across the life course. Roberts and DelVecchio’s meta-analysis of 152 longitudinal studies found that trait test-retest correlations increase from about .31 in childhood to around .64 by age 30 and plateau near .74 in later adulthood (Roberts and DelVecchio, 2000).
At the same time, Mischel’s critique in Personality and Assessment highlighted that single behaviours often vary substantially with context; traits predict patterns across situations, not specific acts in specific moments (Mischel, 1968). Later work on within-person variability reinforced this: personality is best understood as a distribution of states over time, shaped by person × situation interactions (Fleeson, 2001).
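Fleeson's point can be made concrete: represent each person's trait as a distribution of momentary states, not a single score. In the invented numbers below, two hypothetical people have nearly the same average, yet one is far more variable across situations, so any single snapshot of the second person is close to uninformative:

```python
# Minimal sketch of "traits as density distributions of states" (Fleeson, 2001).
# All numbers are invented: repeated momentary ratings (1-7) of some
# security-relevant state, sampled across different situations.
from statistics import mean, stdev

states = {
    "person_a": [2, 3, 2, 4, 3, 2, 3],  # stable: narrow distribution of states
    "person_b": [1, 5, 1, 6, 1, 5, 1],  # volatile: similar mean, much wider spread
}

for person, xs in states.items():
    # The pair (mean, sd) summarises the distribution a snapshot cannot see.
    print(f"{person}: mean={mean(xs):.2f}, sd={stdev(xs):.2f}")
```

The two means differ by a fraction of a point, but person_b's spread is roughly triple person_a's: the trait-level summary only emerges from repeated observation over time.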
Security-relevant questions are inherently temporal:
- Who tends to bend rules when humiliated or excluded?
- Who escalates grievance instead of resolving it?
- Who repeatedly rationalises dubious actions under pressure?
- Who is unusually responsive to flattery, status or moral disengagement?
These are pattern questions, not snapshot questions.
Trajectories, not types.
Behavioural security should therefore treat individuals as trajectories, not static labels. Because risk unfolds as a sequence of perceptions, decisions and feedback from the organisation, any model that ignores temporal structure is modelling something other than security behaviour. A serious assessment integrates:
- repeated behaviour across settings and roles;
- language and narratives (how people explain events, blame, loyalty);
- role and incident history;
- OSINT, grievances, identity claims;
- organisational context: justice, incentives, norms, tolerated deviance.
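One way to make the “trajectories, not types” framing concrete is a data structure in which a person is an ordered series of multi-channel observations rather than a single label. Everything below (class names, channel labels) is illustrative scaffolding, not part of ABPM:

```python
# Hypothetical sketch: a person as a time-ordered, multi-channel record.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    when: date
    channel: str  # e.g. "behaviour", "language", "incident", "osint", "context"
    setting: str  # the situation the behaviour occurred in
    note: str

@dataclass
class Trajectory:
    person_id: str
    observations: list[Observation] = field(default_factory=list)

    def add(self, obs: Observation) -> None:
        self.observations.append(obs)
        self.observations.sort(key=lambda o: o.when)  # keep temporal order

    def by_channel(self, channel: str) -> list[Observation]:
        # Pattern questions query a channel across time,
        # never a single moment in isolation.
        return [o for o in self.observations if o.channel == channel]
```

The design choice worth noting is that there is no field for a verdict: the record only accumulates evidence, and any classification lives outside it as a revisable hypothesis.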
ABPM is designed to sit inside this frame. Its profiles are theory-driven shorthand for recurring responses to particular psychological pressures (such as control, status, connection, care, anxiety) that can map onto specific risk pathways: insider betrayal, susceptibility to grooming, misuse of authority, corrosive withdrawal, paralysing over-caution.
Crucially: ABPM is a working framework, not a finished diagnostic instrument. It is built to obey the constraints described above (multi-source, longitudinal, adversarial to halo) and is intended to be testable and revisable as empirical data accumulate.
4. Four laws of behavioural profiling.
Even without endorsing any particular model, the evidence above implies a minimum set of rules. Call them four laws of behavioural profiling. Any method that violates them is snapshot profiling by another name.
Law 1 – No single cue, channel or moment is decisive.
A behavioural tic, one interview, or one OSINT hit may justify a hypothesis, not a verdict. Strong situations, weak deception cues and SPOT’s failure together make it clear that behaviour-only diagnosis in constrained environments is not reliable enough to stand alone.
If a training package promises confident classification from a short list of “tells”, it is performance, not protection.
Law 2 – First impressions are treated as risk, not reassurance.
Because first impressions and halo effects are fast, sticky and easily exploited by charismatic, narcissistic and psychopathic individuals, strong positive initial impressions (especially around charm and confidence) should decrease confidence in “safe” judgements, not increase it.
They justify slower assessment and more data, not accelerated trust.
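The asymmetry behind Law 2 can be shown with a toy Bayesian update. If charismatic predators deliberately produce charm more often than the benign population does, then observing strong charm should, if anything, raise the estimated risk rather than lower it. All the rates below are invented for illustration; the point is directional, not calibrated:

```python
# Toy Bayes sketch for Law 2. All probabilities are invented assumptions.
p_predator = 0.01              # prior: predatory individuals are rare
p_charm_given_predator = 0.8   # assumed: charm is part of the predator's toolkit
p_charm_given_benign = 0.4     # assumed: plenty of benign people are charming too

p_charm = (p_charm_given_predator * p_predator
           + p_charm_given_benign * (1 - p_predator))
posterior = p_charm_given_predator * p_predator / p_charm

print(f"P(predator | strong charm) = {posterior:.3f}")
```

Under these assumed rates, strong charm roughly doubles the (still small) probability of facing a predator; it certainly does not license “relax, this one is fine”.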
Law 3 – Profiles are time-series hypotheses, not identities.
Whether using ABPM or any other framework, a “profile” is a working model of how someone tends to respond to certain pressures, to be updated as new evidence arrives. It is not an essence to be pinned on a person and defended.
This is the personality-as-trajectory view forced by longitudinal research, not an optional preference.
Law 4 – Uncertainty is explicit.
Behavioural profiling works in probabilities and scenarios, not certainties. Assessments should sound like:
“Given these patterns and this context, here are the plausible risk pathways, and here is our confidence in each.”
rather than:
“I can tell this person is loyal / lying / dangerous.”
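An assessment in that register can even be given a minimal machine-checkable shape: named pathways with rough probabilities, an explicit confidence tag, and the data that would move the estimate. Everything here (field names, pathway labels, numbers) is invented for illustration:

```python
# Illustrative only: an assessment as scenarios plus explicit uncertainty,
# rather than a verdict. All names and numbers are invented.
assessment = {
    "pathways": {  # plausible risk pathways with rough probabilities
        "no current concern": 0.70,
        "grievance escalation": 0.20,
        "susceptibility to grooming": 0.10,
    },
    "confidence": "low",  # how much evidence currently backs these numbers
    "next_data": ["response to boundary-setting", "incident history review"],
}

# A coherence check: scenario probabilities must sum to one.
total = sum(assessment["pathways"].values())
assert abs(total - 1.0) < 1e-9, "scenario probabilities must be coherent"
```

The useful discipline is the "next_data" field: a snapshot verdict has nothing to put there, while a probabilistic assessment always names what would change its mind.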
Organisations that demand instant certainty about people will reliably reward confident error over disciplined doubt. They will also be drawn to snapshot profiling because it feels decisive.
ABPM is being built under these four laws. It assumes multiple channels of data, treats profiles as revisable time-series hypotheses, and foregrounds mechanisms (status, control, loyalty, resentment, anxiety) rather than surface “tells”. It does not promise to read anyone in five seconds. If it ever did, it would stop being behavioural security and become part of the problem it was designed to solve.
References.
Babiak, P., & Hare, R. D. (2006). Snakes in suits: When psychopaths go to work. Regan Books/HarperCollins.
Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234. https://doi.org/10.1207/s15327957pspr1003_2
Brunell, A. B., Gentry, W. A., Campbell, W. K., Hoffman, B. J., Kuhnert, K. W., & DeMarree, K. G. (2008). Leader emergence: The case of the narcissistic leader. Personality and Social Psychology Bulletin, 34(12), 1663–1676. https://doi.org/10.1177/0146167208324101
Cooper, W. H., & Withey, M. J. (2009). The strong situation hypothesis. Personality and Social Psychology Review, 13(1), 62–72. https://doi.org/10.1177/1088868308329378
Dion, K., Berscheid, E., & Walster, E. (1972). What is beautiful is good. Journal of Personality and Social Psychology, 24(3), 285–290. https://doi.org/10.1037/h0033731
Fleeson, W. (2001). Toward a structure- and process-integrated view of personality: Traits as density distributions of states. Journal of Personality and Social Psychology, 80(6), 1011–1027. https://doi.org/10.1037/0022-3514.80.6.1011
GAO (2010). Aviation security: Efforts to validate TSA's passenger screening behavior detection program underway, but opportunities exist to strengthen validation and address operational challenges (GAO-10-763). U.S. Government Accountability Office. https://www.gao.gov/products/gao-10-763
GAO (2011). Aviation security: TSA is taking steps to validate the science underlying its passenger behavior detection program, but efforts may not be comprehensive (GAO-11-461T). U.S. Government Accountability Office. https://www.gao.gov/products/gao-11-461t
Hauch, V., Sporer, S. L., Michael, S. W., & Meissner, C. A. (2016). Does training improve the detection of deception? A meta-analysis. Communication Research, 43(3), 283–343. https://doi.org/10.1177/0093650214534974
Mischel, W. (1968). Personality and assessment. Wiley.
Roberts, B. W., & DelVecchio, W. F. (2000). The rank-order consistency of personality traits from childhood to old age: A quantitative review of longitudinal studies. Psychological Bulletin, 126(1), 3–25. https://doi.org/10.1037/0033-2909.126.1.3
Todorov, A., Mandisodza, A. N., Goren, A., & Hall, C. C. (2005). Inferences of competence from faces predict election outcomes. Science, 308(5728), 1623–1626. https://doi.org/10.1126/science.1110589