AI & Emerging Threats · Beginner

Treat grammatically perfect emails as potential AI-generated phishing

For decades, poor grammar and spelling were reliable indicators of phishing emails. WormGPT, FraudGPT, and general-purpose LLMs have eliminated this signal. AI-generated phishing emails are indistinguishable from legitimate emails by grammar alone. Train employees that the absence of spelling mistakes is no longer a safety signal. Shift phishing recognition training to focus on: unexpected requests (even in well-written emails), urgency combined with an unusual ask, mismatched sender domains (visible in email headers), and links that do not go to the expected domain. Technical controls — DMARC, email authentication, URL scanning — become more important as human detection degrades.
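Two of the signals above — mismatched sender domains and links that do not go to the expected domain — can be checked mechanically. A minimal sketch of both checks follows; the helper names are assumptions for illustration, and a production filter should rely on the mail gateway's DMARC/SPF/DKIM verdicts rather than ad-hoc header parsing.

```python
from email import message_from_string
from email.utils import parseaddr
from urllib.parse import urlparse

def sender_domain_mismatch(raw_email: str) -> bool:
    """Flag emails whose From: domain differs from the Return-Path domain.

    Illustrative only: real deployments should consume the gateway's
    DMARC/SPF/DKIM authentication results instead of re-deriving them.
    """
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    return_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    return bool(from_domain and return_domain and from_domain != return_domain)

def link_text_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different domain than the target URL."""
    shown = urlparse(display_text if "//" in display_text else "//" + display_text).hostname or ""
    actual = urlparse(href).hostname or ""
    return bool(shown and actual and not actual.endswith(shown))
```

For example, a link displayed as `www.mybank.com` that actually points at `https://evil.example/login` would be flagged, while a genuine link to the same domain would pass.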

Tags

AI phishing, phishing indicators, security awareness, DMARC, grammar

More in AI & Emerging Threats


Establish a company policy for what data employees may input into AI tools

Samsung engineers uploaded proprietary semiconductor source code and internal meeting notes to ChatGPT within weeks of the company lifting its AI tool ban. The data was sent to OpenAI's servers and potentially incorporated into training. AI tools that process user input are data processors — all data entered is shared with the vendor under their terms of service. Establish a clear policy before allowing AI tool use: define what data classification levels may be entered (typically public and internal only, never confidential or restricted), use enterprise AI contracts with data opt-out provisions, and implement DLP controls that block submission of certain data patterns to external AI services.
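The DLP control described above can be sketched as a pre-submission filter that checks outgoing prompts against blocked patterns. This is a minimal illustration — the pattern names and rules here are assumptions, and a real deployment would use a maintained DLP ruleset tied to your data-classification labels, not a handful of hand-written regexes.

```python
import re

# Hypothetical pattern list for illustration; extend with your
# organisation's classification markings and secret formats.
BLOCKED_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Confidential marking": re.compile(r"\b(?:CONFIDENTIAL|RESTRICTED)\b", re.IGNORECASE),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of blocked patterns found in text bound for an external AI tool."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]
```

A prompt containing an AWS-style access key would be reported and can be blocked before it ever leaves the network; a prompt containing only public text returns an empty list and passes.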

See: Samsung ChatGPT Leak (AI & Emerging Threats) · Intermediate

Do not rely on voice recognition for financial authorisation — any voice can be cloned

AI voice synthesis has surpassed the quality threshold for telephone fraud. The first documented AI voice clone fraud transferred €220,000 in 2019. Garmin's CEO, financial executives at multiple Fortune 500 companies, and even LastPass's CEO have had their voices cloned for fraud attempts. For any financial transaction, the voice heard on a phone call is no longer sufficient authorisation. Require independent channel verification (a separate text or email from a known internal system, a coded authorisation number from your financial controls platform) for any wire transfer, regardless of whether the requesting voice sounds exactly like your CEO.
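The "coded authorisation number" idea above can be sketched as a short code the financial controls platform derives from the transaction details and delivers over a separate channel, so that a voice on a phone line can never authorise a transfer by itself. Everything here — the function names, the code format, the secret handling — is an assumption for illustration, not a real product's API.

```python
import hashlib
import hmac

# Placeholder secret: in practice this lives in a vault and is rotated,
# never embedded in source code.
SECRET = b"rotate-me-and-store-in-a-vault"

def auth_code(transaction_id: str, amount_cents: int, beneficiary: str) -> str:
    """Derive a short authorisation code bound to the exact transaction details."""
    msg = f"{transaction_id}|{amount_cents}|{beneficiary}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return digest[:8].upper()

def verify(transaction_id: str, amount_cents: int, beneficiary: str, code: str) -> bool:
    """Check a code received over the independent channel, in constant time."""
    return hmac.compare_digest(auth_code(transaction_id, amount_cents, beneficiary), code)
```

Because the code is bound to the transaction ID, amount, and beneficiary, a cloned voice reading back a code for a different transfer — or the same transfer with an altered amount — fails verification.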

See: AI Voice Clone CEO Fraud (AI & Emerging Threats) · Intermediate

Review all AI-suggested code for hardcoded credentials before committing

GitHub Copilot and other AI code assistants can suggest real credentials from their training data — API keys, passwords, and tokens from public repositories that were used to train the model. Security researchers demonstrated that Copilot would suggest valid AWS keys when writing code that declared an AWS client. Review every AI-generated code suggestion before committing. Use pre-commit hooks that scan for secrets patterns (git-secrets, truffleHog, Gitleaks) as a safety net. Never assume that because a credential appears in an AI suggestion, it is a placeholder — verify explicitly.
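The safety-net scan described above can be sketched as a pre-commit-style check, loosely in the spirit of git-secrets or Gitleaks. The patterns below are a minimal illustrative subset — real hooks should use those tools' curated, regularly updated rules.

```python
import re

# Illustrative subset of secret patterns; not a substitute for
# git-secrets, truffleHog, or Gitleaks rulesets.
SECRET_PATTERNS = [
    ("AWS access key ID", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    ("AWS secret key assignment", re.compile(r"(?i)aws_secret_access_key\s*[:=]\s*\S+")),
    ("Generic API key assignment",
     re.compile(r"(?i)\b(?:api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]")),
]

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for each suspicious line in a file."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Wired into a pre-commit hook, a non-empty result blocks the commit, forcing the developer to confirm whether an AI-suggested credential is a placeholder or a real leaked secret.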

See: GitHub Copilot Secrets Leak (AI & Emerging Threats)