AI LinkedIn Fake Profiles: North Korea Uses Generated Faces to Infiltrate Tech Companies
North Korean IT workers used AI-generated profile photos to build fake LinkedIn profiles and obtain remote jobs at Western technology companies, earning salaries that funded the regime's weapons programmes and, in some cases, stealing intellectual property.
Background
North Korea's IT worker programme (tracked by CISA as "DPRK IT Workers") has deployed thousands of workers under false identities to obtain remote employment. AI-generated headshot photos made fake profiles more convincing and harder to reverse-image-search.
The Attack
North Korean IT workers created LinkedIn, GitHub, and freelance-platform profiles that combined AI-generated facial photographs (produced with tools such as ThisPersonDoesNotExist.com or similar generators) with stolen or fabricated identities, educational credentials, and employment histories. Using these personas, they obtained employment as software developers, often paid in cryptocurrency. Once employed, they used their access to steal proprietary code and plant backdoors, while their salaries flowed back to North Korea. A 2022 DOJ indictment alleged the operation had generated over $300 million for North Korea's weapons programme.
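The reverse-image-search resistance mentioned above follows from how those engines work: they match a compact perceptual fingerprint of the query photo against fingerprints of previously indexed images, so a freshly generated face matches nothing. The sketch below illustrates the idea with a minimal pure-Python "average hash" (aHash) over an 8x8 grayscale image; real search engines use far more robust features, and the pixel data here is synthetic.

```python
def average_hash(pixels):
    """Compute a 64-bit aHash from an 8x8 grayscale image (list of 64 ints, 0-255)."""
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the image's average.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance means 'probably the same image'."""
    return bin(h1 ^ h2).count("1")

# A stolen photo reused across profiles hashes near-identically even after
# recompression noise...
original = [40 * ((i // 8 + i % 8) % 2) + 100 for i in range(64)]
recompressed = [p + 3 for p in original]
# ...while a brand-new generated face shares no fingerprint with anything indexed.
generated = [(i * 37) % 256 for i in range(64)]

print(hamming_distance(average_hash(original), average_hash(recompressed)))  # 0
print(hamming_distance(average_hash(original), average_hash(generated)))     # large
```

Because the generated face's hash is far from every indexed hash, the search returns no matches, which is exactly the behaviour a legitimate, never-before-published headshot would also produce: absence of results stops being evidence of anything.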
Response
The FBI, CISA, and State Department issued a joint advisory warning companies to screen for North Korean IT workers, and Google's Threat Analysis Group published analysis of the campaign. LinkedIn began applying AI detection to profile photos, while employers introduced video-interview verification and identity-document checks.
Outcome
The scale was remarkable: thousands of North Korean workers drew legitimate salaries at Western tech companies while simultaneously conducting espionage, and the programme earned an estimated $300+ million annually. Crucially, the use of AI-generated photos made the fake profiles resistant to reverse image search, previously a reliable detection method.
Key Takeaways
- Video interviews should include identity document verification — do not rely solely on LinkedIn profile photos
- AI-generated face photos are visually convincing but have detectable artefacts — use dedicated deepfake detection tools for hiring
- Remote employees with access to sensitive code repositories warrant enhanced background verification
- AI-generated profile photos have a distinctive aesthetic — security teams and HR should be trained to recognise them
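One trainable tell behind the last two takeaways: StyleGAN-family generators (including the model behind ThisPersonDoesNotExist.com) emit faces in a fixed alignment, so both eyes land at nearly the same pixel coordinates in every 1024x1024 output. Given eye positions from any face-landmark detector, a screening tool can flag photos whose eyes sit suspiciously close to those canonical spots. The coordinates and tolerance below are illustrative assumptions, not published constants; calibrate them against a sample of known-generated images before relying on this check.

```python
import math

# Assumed canonical eye positions for a 1024x1024 StyleGAN-aligned face
# (illustrative values, not published constants).
CANONICAL_EYES = {"left": (385, 480), "right": (640, 480)}
TOLERANCE_PX = 15  # assumed matching radius

def eyes_match_stylegan_alignment(left_eye, right_eye,
                                  canonical=CANONICAL_EYES,
                                  tolerance=TOLERANCE_PX):
    """Return True if both detected eye centres fall within `tolerance`
    pixels of the canonical alignment positions -- a red flag, not proof."""
    def close(p, q):
        return math.dist(p, q) <= tolerance
    return close(left_eye, canonical["left"]) and close(right_eye, canonical["right"])

# Eyes sitting almost exactly on the canonical grid warrant a closer look...
print(eyes_match_stylegan_alignment((386, 479), (641, 482)))  # True
# ...while an ordinary candid photo rarely lands there.
print(eyes_match_stylegan_alignment((300, 520), (700, 410)))  # False
```

A match here is a triage signal, not a verdict: a professionally cropped real headshot can coincidentally align, and attackers can defeat the check by re-cropping, so it belongs alongside, not instead of, the video-interview and identity-document verification above.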