Beginner · AI & Emerging Threats

Check your children's credit reports annually — their SSNs are prime synthetic identity targets

Children's Social Security numbers are particularly valuable for synthetic identity fraud because they have no credit history — meaning a fraudster can build a credit profile from scratch using the real SSN with a completely fabricated name and birthday. The mismatch between the SSN's issue date and the claimed age takes years to surface in credit bureau algorithms. AI-powered operations can manage thousands of synthetic identities simultaneously. Request a credit freeze for your children's SSNs at all three major bureaus (Experian, Equifax, TransUnion) — it is free, does not affect their future credit, and blocks the SSN from being used to open new credit accounts.

Tags

synthetic identity · credit freeze · children SSN · identity theft · AI fraud

More in AI & Emerging Threats

beginner · featured

Establish a company policy for what data employees may input into AI tools

Samsung engineers uploaded proprietary semiconductor source code and internal meeting notes to ChatGPT within weeks of the company lifting its AI tool ban. The data was sent to OpenAI's servers and potentially incorporated into training. AI tools that process user input are data processors — all data entered is shared with the vendor under their terms of service. Establish a clear policy before allowing AI tool use: define what data classification levels may be entered (typically public and internal only, never confidential or restricted), use enterprise AI contracts with data opt-out provisions, and implement DLP controls that block submission of certain data patterns to external AI services.
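The DLP control mentioned above can be sketched as a pattern filter applied to outbound prompts before they reach an external AI service. This is a minimal illustration, not any vendor's ruleset: the pattern names, regexes, and classification markers are assumptions for the example.

```python
import re

# Hypothetical blocklist of data patterns an organisation might refuse to
# send to external AI tools. Real DLP products use far richer detection;
# these regexes are illustrative assumptions only.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # US SSN format
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),    # common key prefixes
    "internal_marker": re.compile(r"(?i)\b(confidential|restricted)\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of blocked-data patterns found in an outbound prompt."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]

def allow_submission(text: str) -> bool:
    """Allow submission to an external AI service only if no pattern matches."""
    return not check_prompt(text)
```

In practice such a filter would sit in a proxy or browser extension in front of the AI endpoint, and would log blocked attempts for security review rather than silently dropping them.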

See: Samsung ChatGPT Leak
intermediate

Do not rely on voice recognition for financial authorisation — any voice can be cloned

AI voice synthesis has surpassed the quality threshold for telephone fraud. The first documented AI voice clone fraud transferred €220,000 in 2019. Garmin's CEO, financial executives at multiple Fortune 500 companies, and even LastPass's CEO have had their voices cloned for fraud attempts. For any financial transaction, the voice heard on a phone call is no longer sufficient authorisation. Require independent channel verification (a separate text or email from a known internal system, a coded authorisation number from your financial controls platform) for any wire transfer, regardless of whether the requesting voice sounds exactly like your CEO.
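The "coded authorisation number" idea can be sketched as an HMAC over the transaction details, computed with a secret shared only with the financial-controls platform and delivered over a channel independent of the phone call. The function names and transaction fields here are assumptions for illustration, not a specific platform's API.

```python
import hashlib
import hmac

# Sketch, assuming a shared secret held by the financial-controls platform.
# The caller on the phone cannot produce this code; it arrives via a separate
# channel (e.g. a message from a known internal system) and is verified
# against the exact transaction details before any transfer is released.

def authorisation_code(shared_secret: bytes, transaction_id: str, amount: str) -> str:
    """Derive a short authorisation code bound to one specific transaction."""
    msg = f"{transaction_id}|{amount}".encode()
    digest = hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()
    return digest[:8]  # truncated for human readability

def verify(shared_secret: bytes, transaction_id: str, amount: str, code: str) -> bool:
    """Constant-time check that the code matches these exact transaction details."""
    expected = authorisation_code(shared_secret, transaction_id, amount)
    return hmac.compare_digest(expected, code)
```

Because the code is bound to the amount and transaction ID, a cloned voice cannot redirect an approved code to a different transfer: changing any detail invalidates the code.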

See: AI Voice Clone CEO Fraud
beginner

Treat grammatically perfect emails as potential AI-generated phishing

For decades, poor grammar and spelling were reliable indicators of phishing emails. WormGPT, FraudGPT, and general-purpose LLMs have eliminated this signal. AI-generated phishing emails are indistinguishable from legitimate emails by grammar alone. Train employees that the absence of spelling mistakes is no longer a safety signal. Shift phishing recognition training to focus on: unexpected requests (even in well-written emails), urgency combined with an unusual ask, mismatched sender domains (visible in email headers), and links that do not go to the expected domain. Technical controls — DMARC, email authentication, URL scanning — become more important as human detection degrades.
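One of the signals above, links that do not go to the expected domain, can be sketched as a check that the domain shown in a link's visible text matches the domain in its actual href. The regex-based HTML parsing here is deliberately minimal and illustrative; a real scanner would use a proper HTML parser and a registered-domain comparison.

```python
import re
from urllib.parse import urlparse

# Minimal sketch: flag <a> tags whose displayed text names one domain while
# the href points somewhere else. Assumes simple, well-formed anchor tags.
LINK_RE = re.compile(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', re.IGNORECASE)

def mismatched_links(html: str) -> list[tuple[str, str]]:
    """Return (actual_href_domain, displayed_domain) pairs that disagree."""
    flagged = []
    for href, text in LINK_RE.findall(html):
        href_domain = urlparse(href).netloc.lower()
        shown = re.search(r"[\w.-]+\.[a-z]{2,}", text.lower())  # a domain-like string in the link text
        if shown and href_domain and shown.group() not in href_domain:
            flagged.append((href_domain, shown.group()))
    return flagged
```

This catches the classic lure where the visible text reads `bank.com` but the underlying link resolves to an attacker-controlled host; it complements, rather than replaces, DMARC and URL-scanning controls.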

See: WormGPT