Advanced · AI & Emerging Threats

Add video call verification for high-risk identity scenarios — deepfakes can pass audio-only checks

AI voice cloning passes telephone and audio-only verification, and deepfake video is advancing rapidly enough to fool video calls under some conditions. For the highest-risk identity scenarios — executive approval for large wire transfers, contractor onboarding with system access, MFA reset for privileged accounts — require a video call where the individual physically holds a company-issued badge next to their face. Current deepfake video technology struggles to render highly specific, real-time physical items accurately. Combine this with a live-action challenge: ask the person to perform a specific physical action (show a specific finger count, write a code word on paper) that a pre-recorded deepfake cannot anticipate.
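The challenge only works if it is unpredictable, so it should be generated at call time from a cryptographic random source rather than picked by the verifier from habit. A minimal sketch, assuming illustrative challenge wording and a hypothetical `make_challenge` helper (neither is a standard):

```python
import secrets

# Hypothetical live-action challenge generator. Templates and code words are
# illustrative only; the point is that secrets.choice/randbelow make the
# challenge unpredictable to an attacker replaying pre-recorded video.
ACTIONS = [
    "hold up {n} fingers on your left hand",
    "write the code word '{word}' on paper and hold it next to your face",
    "turn your head to the {side}, then face the camera again",
]
CODE_WORDS = ["harbor", "violet", "falcon", "quartz"]

def make_challenge() -> str:
    """Return one randomly chosen, randomly parameterised challenge string."""
    template = secrets.choice(ACTIONS)
    # str.format ignores keyword arguments a template does not use.
    return template.format(
        n=secrets.randbelow(5) + 1,                  # 1-5 fingers
        word=secrets.choice(CODE_WORDS),
        side=secrets.choice(["left", "right"]),
    )
```

The verifier reads the generated challenge aloud during the call and watches it performed live, which defeats any footage rendered in advance.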

Tags

deepfake video · identity verification · liveness detection · video call · AI fraud

More in AI & Emerging Threats

All guides
beginner · featured

Establish a company policy for what data employees may input into AI tools

Samsung engineers uploaded proprietary semiconductor source code and internal meeting notes to ChatGPT within weeks of the company lifting its AI tool ban. The data was sent to OpenAI's servers and potentially incorporated into training. AI tools that process user input are data processors — all data entered is shared with the vendor under their terms of service. Establish a clear policy before allowing AI tool use: define what data classification levels may be entered (typically public and internal only, never confidential or restricted), use enterprise AI contracts with data opt-out provisions, and implement DLP controls that block submission of certain data patterns to external AI services.
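The DLP control can be as simple as a pattern check on outbound prompts. A minimal sketch, assuming hypothetical pattern names and regexes chosen for illustration (a real DLP product uses far richer classifiers):

```python
import re

# Illustrative blocklist: label -> compiled pattern. Real deployments would
# cover many more data classes (PII, secrets, source code markers, etc.).
BLOCKED_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private key material": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal classification marking": re.compile(r"\b(?:CONFIDENTIAL|RESTRICTED)\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return labels of blocked patterns found in text; [] means allow."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]
```

A proxy or browser extension would call `check_prompt` before forwarding text to an external AI service and refuse submission when the result is non-empty.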

See: Samsung ChatGPT Leak · AI & Emerging Threats
intermediate

Do not rely on voice recognition for financial authorisation — any voice can be cloned

AI voice synthesis has surpassed the quality threshold for telephone fraud. The first documented AI voice clone fraud transferred €220,000 in 2019. Garmin's CEO, financial executives at multiple Fortune 500 companies, and even LastPass's CEO have had their voices cloned for fraud attempts. For any financial transaction, the voice heard on a phone call is no longer sufficient authorisation. Require independent channel verification (a separate text or email from a known internal system, a coded authorisation number from your financial controls platform) for any wire transfer, regardless of whether the requesting voice sounds exactly like your CEO.
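One way to implement a coded authorisation number is to derive it from the transaction details with a keyed hash, so the code is only obtainable from the financial controls platform and is bound to one specific transfer. A minimal sketch, assuming a hypothetical `auth_code` helper and an illustrative 8-character code length:

```python
import hashlib
import hmac

def auth_code(secret: bytes, payee: str, amount_cents: int) -> str:
    """Derive a short authorisation code bound to one specific transfer.

    The secret lives only in the financial controls platform; the approver
    reads the code from that system, never from the phone call itself.
    """
    message = f"{payee}|{amount_cents}".encode()
    digest = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return digest[:8].upper()  # short enough to compare over a separate channel
```

Because the code depends on the payee and amount, a cloned voice that changes either detail cannot reuse a previously issued code.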

See: AI Voice Clone CEO Fraud · AI & Emerging Threats
beginner

Treat grammatically perfect emails as potential AI-generated phishing

For decades, poor grammar and spelling were reliable indicators of phishing emails. WormGPT, FraudGPT, and general-purpose LLMs have eliminated this signal. AI-generated phishing emails are indistinguishable from legitimate emails by grammar alone. Train employees that the absence of spelling mistakes is no longer a safety signal. Shift phishing recognition training to focus on: unexpected requests (even in well-written emails), urgency combined with an unusual ask, mismatched sender domains (visible in email headers), and links that do not go to the expected domain. Technical controls — DMARC, email authentication, URL scanning — become more important as human detection degrades.
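The mismatched-sender check can be automated as a simple header comparison. A minimal sketch using Python's standard `email` module, assuming a hypothetical `sender_mismatch` helper; real filtering should rely on DMARC/SPF/DKIM verdicts, with this heuristic as a supplement only:

```python
from email import message_from_string
from email.utils import parseaddr

def sender_mismatch(raw_email: str) -> bool:
    """Flag messages whose From: domain differs from the Return-Path domain.

    A mismatch is a common (though not conclusive) sign of a spoofed sender.
    """
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    return_path_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    return bool(from_domain and return_path_domain
                and from_domain != return_path_domain)
```

This catches the case the training material describes — a display name and From: address that look legitimate while the message actually originated elsewhere — without depending on grammar-based cues at all.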

See: WormGPT · AI & Emerging Threats