Securing Remote ID Verification: Steps to Combat Video Injection & Session Attacks
Cryptomathic · modified on 27 April 2026
Remote identity verification is facing a more complex class of fraud attacks than many organisations are prepared for. Deepfakes receive most of the attention, and rightly so, but they are only part of the problem. In practice, some of the most effective attacks against remote onboarding now target the integrity of the verification session itself.
That distinction matters.
Modern identity verification (IDV) journeys rely on more than a document image and a selfie. They also depend on signals from the browser or mobile app, device context, runtime behaviour, session flow, and evidence that the interaction is genuine and unaltered. When attackers can manipulate those signals, organisations may still receive data that looks plausible even though the trust behind it has already been compromised.
For fraud, identity, and security leaders, the question is no longer only whether an image or video is fake. It is whether the entire verification flow can still be trusted.
Deepfakes Are Only One Part Of The Threat Landscape
Most discussions about remote IDV fraud still focus on synthetic faces, forged videos, or manipulated documents. Those risks are real, but the framing is too narrow.
Attackers increasingly combine media-based deception with application and session-level techniques designed to interfere with how evidence is captured, transmitted, or interpreted. Instead of trying only to fool a liveness model or document check, they also try to manipulate the environment in which identity verification takes place.
Common examples include:
- Video injection, where a manipulated or pre-recorded video feed is inserted into the verification flow instead of a genuine live camera stream
- Virtual cameras, which make altered or non-live media appear to the application as though it is coming from a legitimate camera source
- Code tampering and function hijacking, where attackers interfere with application logic or hook key functions to alter behaviour or outputs
- Data manipulation, where telemetry, session data, or control values are modified before they reach the server
- Browser session manipulation, where client-side scripts, flows, or execution paths are altered to weaken controls
- Mobile runtime attacks, including hooking, repackaging, or runtime tampering designed to bypass app protections
These techniques differ in method, but they share a common objective: to weaken the reliability of the signals that remote IDV systems often assume are trustworthy.
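One way to make data manipulation visible is for the server to sign any control values it issues to the client, then verify that signature when the values come back. The sketch below is a minimal illustration of that idea, not a description of any specific IDV product; the field names, the session flow, and the use of a single shared key are all illustrative assumptions (a real deployment would use per-session keys from a KMS or HSM).

```python
import hashlib
import hmac
import json

SERVER_KEY = b"demo-only-secret"  # illustrative; use a per-session key from a KMS/HSM in practice

def issue_control_values(session_id: str, step: str) -> dict:
    """Attach an HMAC so the server can detect client-side modification on return."""
    payload = {"session_id": session_id, "step": step, "challenge": "nonce-1234"}
    raw = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(SERVER_KEY, raw, hashlib.sha256).hexdigest()
    return payload

def verify_control_values(payload: dict) -> bool:
    """Recompute the MAC over everything except the signature itself."""
    received_sig = payload.get("sig", "")
    body = {k: v for k, v in payload.items() if k != "sig"}
    raw = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, raw, hashlib.sha256).hexdigest()
    return hmac.compare_digest(received_sig, expected)

token = issue_control_values("abc123", "liveness")
assert verify_control_values(token)       # untouched values pass
token["step"] = "document"                # simulated client-side tampering
assert not verify_control_values(token)   # tampering is detected
```

This does not stop a compromised client from lying about fresh observations, but it does prevent silent modification of server-issued state, which narrows the attacker's options.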
The Real Issue Is Trust In The Session
A remote IDV journey is not a single event. It is a chain of trust decisions.
A user is asked to capture an identity document, present a live face, respond to prompts, and complete steps through a browser or mobile app. Behind the scenes, the system may also rely on device indicators, session data, behavioural patterns and application state to assess risk and determine whether the interaction appears legitimate.
This creates a broader attack surface than many programmes account for.
If a fraudster can inject a video feed, substitute a virtual camera, tamper with application logic, manipulate session variables, or interfere with the runtime environment, the system may continue receiving data that appears internally consistent while no longer reflecting a genuine interaction. The evidence still arrives, but its integrity may not.
That is why deepfake detection on its own is not enough. Organisations also need controls that protect the authenticity and integrity of the capture environment, the application runtime, and the session data path.
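One concrete piece of session-path protection is server-side enforcement of step order, so that a manipulated client cannot skip, repeat, or reorder stages of the journey. The step names and flow below are hypothetical, chosen only to illustrate the pattern.

```python
# Minimal server-side session state machine: each step is only valid when it is
# the next expected step, so skipped or replayed steps are rejected.
EXPECTED_FLOW = ["start", "document_capture", "liveness_check", "decision"]

class VerificationSession:
    def __init__(self):
        self.position = 0

    def advance(self, step: str) -> bool:
        if self.position < len(EXPECTED_FLOW) and step == EXPECTED_FLOW[self.position]:
            self.position += 1
            return True
        # Out-of-order or repeated step: treat as a session-integrity signal,
        # not just a user error.
        return False

s = VerificationSession()
assert s.advance("start")
assert not s.advance("decision")        # attempt to skip straight to the outcome
assert s.advance("document_capture")
```

Kept server-side, this state cannot be rewritten by client-side tampering, so any mismatch between what the client claims and what the server expects becomes a detectable anomaly.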
Why Browser and Mobile Channels Both Matter
Many organisations still think about browser and mobile risk separately. Attackers do not.
In browser-based IDV flows, the attack surface can include script manipulation, injected overlays, substituted media sources, and instrumented sessions. In mobile environments, it can include reverse engineering, runtime hooking, tampered application logic, repackaged apps, and malware-assisted interference with sensitive flows.
From an attacker’s perspective, the objective is the same across both channels: gain control over what the verification service sees and trusts.
This is why remote IDV security cannot be reduced to backend checks or model accuracy alone. It must also address the integrity of the client environment producing the evidence in the first place.
What Effective Defence Looks Like
A stronger approach to remote IDV fraud prevention starts with one principle: protect the journey, not only the artefact.
That means combining fraud controls with application security, runtime protection, and integrity validation across the full onboarding flow.
1) Harden The Client-Side Environment
If the verification journey begins in a browser or mobile app, that environment needs active protection. Backend-only controls leave too much room for manipulation before data is even assessed.
In practice, this can include code hardening, anti-tampering measures, runtime detection of hooking or instrumentation, script protection, and controls designed to make unauthorised modification harder and more visible.
The objective is not to make attack impossible. It is to raise the cost of manipulation, reduce silent tampering, and improve the defender’s ability to detect abnormal execution states.
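Real runtime protection is platform-specific (native anti-hooking, code signing, integrity attestation), but the underlying idea of function-hijack detection can be sketched in a few lines: fingerprint a critical routine at a trusted point in time, then check whether it has since been swapped out. This Python analogy is purely illustrative; the function names and threshold are invented for the example.

```python
import hashlib

def critical_decision(score: float) -> bool:
    """Stand-in for logic an attacker might hook or replace."""
    return score >= 0.8

def fingerprint(fn) -> str:
    """Hash the function's compiled bytecode; this changes if it is patched or swapped."""
    return hashlib.sha256(fn.__code__.co_code).hexdigest()

BASELINE = fingerprint(critical_decision)

def is_tampered(fn) -> bool:
    return fingerprint(fn) != BASELINE

assert not is_tampered(critical_decision)

# Simulated function hijacking: an attacker substitutes permissive logic.
def always_pass(score: float) -> bool:
    return True

critical_decision = always_pass
assert is_tampered(critical_decision)   # the substitution is detectable
```

The point of the sketch matches the point of the section: such checks do not make hijacking impossible, but they make silent substitution harder and give defenders an observable signal.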
2) Treat Capture Integrity As A First-Class Control
A camera feed should not be assumed to be genuine simply because the application receives video data.
Organisations should consider how they validate the integrity of capture inputs, detect substituted media sources, and identify signs that the presented stream is not coming from a legitimate live interaction. This is where video injection and virtual camera defences become especially important.
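As one small layer of that defence, device labels (for example, those a browser exposes via media-device enumeration) can be screened for known virtual-camera products. The marker list below is illustrative and labels are trivially spoofable, so in practice this heuristic would only ever complement stream-level signals such as timing, sensor noise, and capture metadata.

```python
# Heuristic only: known virtual-camera product names that may appear in device labels.
# Illustrative list; real deployments maintain and update their own.
VIRTUAL_CAMERA_MARKERS = ("obs virtual camera", "manycam", "snap camera", "virtual cam")

def looks_virtual(device_label: str) -> bool:
    """True if the label matches a known virtual-camera marker (case-insensitive)."""
    label = device_label.lower()
    return any(marker in label for marker in VIRTUAL_CAMERA_MARKERS)

def flag_devices(labels: list[str]) -> list[str]:
    """Return only the labels that match a known virtual-camera marker."""
    return [label for label in labels if looks_virtual(label)]

assert flag_devices(["FaceTime HD Camera", "OBS Virtual Camera"]) == ["OBS Virtual Camera"]
assert flag_devices(["Integrated Webcam"]) == []
```

A match here should raise risk rather than hard-block on its own, since legitimate users occasionally run such software for unrelated reasons.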
3) Secure Browser & Mobile Journeys Consistently
Fraud prevention teams, digital identity teams, and application security teams often work in parallel rather than together. Attackers benefit from those gaps.
Where onboarding begins in one channel and continues in another, the control model should be joined up. The same core principles should apply across browser and mobile environments: protect runtime integrity, monitor for tampering, validate trusted signals, and design the session so manipulation is harder to conceal.
4) Align Technical Controls With Assurance Requirements
Organisations in regulated sectors often need to show not only that controls exist, but that they are proportionate to the assurance level and threat environment. Guidance from bodies such as NIST and OWASP, and relevant ETSI frameworks where applicable, can help structure control thinking and governance language.
That does not mean every deployment needs the same level of protection. It does mean fraud and security teams should be able to explain how their controls address manipulated media, tampered runtimes, session integrity, and the trustworthiness of digital evidence.
Remote IDV Security Checklist: 8 Questions To Ask Now
- Have we assessed exposure to video injection and virtual camera attacks in our IDV flow?
- Do we protect both browser and mobile verification journeys, not just backend decisioning?
- Can we detect signs of tampering, hooking, or runtime manipulation in the client environment?
- Are we validating whether supposedly trusted device, session, and runtime signals could be spoofed or altered?
- Do we treat capture integrity as a control in its own right, rather than relying only on liveness or document checks?
- Are fraud, identity, and application security teams aligned on ownership of remote onboarding risk?
- Have we tested the end-to-end journey against realistic attack scenarios, not only expected user behaviour?
- Can we explain how our controls support assurance and compliance expectations under frameworks such as OWASP, NIST, and ETSI?
The Strategic Takeaway
Remote IDV fraud is often discussed as a content problem. Increasingly, it is an integrity problem.
Yes, organisations need to detect manipulated faces, forged documents, and synthetic media. But they also need to protect the browser and mobile environments where identity evidence is captured, transmitted, and assessed. Otherwise, trusted signals can be turned into untrusted inputs without being recognised as such.
For organisations running high-value or regulated onboarding journeys, the next step is not simply to ask whether their deepfake controls are good enough. It is to ask whether the entire remote verification session can withstand manipulation.
That is the more important security question, and it is rapidly becoming the more commercially important one as well.
Want to go beyond deepfake detection and better protect your remote onboarding journeys?
Register for our upcoming webinar with Jscrambler, "Preventing Remote ID Verification Fraud: Video Injection, Virtual Cameras & Other Attacks". Discover how attackers exploit browser sessions, mobile apps, and trusted IDV signals, and what you can do to stop them. Register here.