    Social Engineering Beyond Phishing: Voice, SMS, Deepfakes, and the Human Layer

    Email phishing gets all the attention, but vishing, smishing, and AI-generated deepfakes are quietly driving more high-value fraud than email ever did. What modern social engineering looks like — and why training alone won't fix it.

    Why Email Isn't the Only Channel Anymore

    Email security got better. So attackers moved channels. SMS, WhatsApp, voice calls, LinkedIn DMs, Microsoft Teams chats — all of these have weaker filtering and stronger trust signals than email. Most companies' security awareness programs are still email-shaped.

    Vishing: Voice Phishing in 2026

    The textbook vishing call from 2015 ("This is Microsoft support, your computer has a virus") still exists, but it's the bottom tier. The high-value version uses voice cloning:

    • Three seconds of public audio (a webinar, a podcast snippet, a voicemail) is enough.
• Real-time cloning lets attackers hold full conversations in the voice of a victim's CEO.
    • Targets are usually finance staff, with urgency framing ("don't tell anyone, I'm in a meeting, just wire the funds").

    Several documented 2024–2026 fraud cases involved 7-figure wire transfers triggered by deepfaked voice calls — sometimes combined with a deepfaked Zoom call to add visual confirmation.

    Smishing & Messaging-App Attacks

SMS and messaging apps don't have spam filtering anywhere near as mature as email's. The most common patterns:

    • Package delivery scams — "Your shipment requires a customs payment."
    • Bank-fraud spoofs — "Confirm transaction or your account will be locked."
    • Job offer scams — "We saw your LinkedIn, here's a great role with a quick interview on Telegram."
    • Workforce-targeted — "This is the new IT helpdesk, please confirm your password reset."

    Deepfakes Beyond Voice

    Video deepfakes are now real-time-capable on consumer hardware. The most consequential applications so far:

    • Fake hiring interviews — adversaries impersonate candidates to gain employment access (DPRK actors have used this at scale).
    • Fake board members on video calls authorizing transactions.
    • Identity verification bypass — defeating selfie-based KYC at financial services.

    What Actually Works Against This

    Awareness training helps but doesn't scale to the deepfake era. The structural defenses:

    • Out-of-band verification for any unusual financial request — and a no-exceptions policy.
• Phishing-resistant MFA (FIDO2) so a tricked password disclosure doesn't lead to account takeover.
    • Liveness detection and document-bound credentials for identity verification.
    • Internal verification protocols — code words, callback procedures, written confirmation requirements.
    • A culture where slowing down is praised, not penalized, when something feels off.
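The out-of-band verification rule above can be made mechanical rather than cultural. This is a minimal sketch, not a real payments system: the class name, threshold, and fields are all hypothetical, and a production policy would live in the payment workflow itself.

```python
# Hypothetical sketch of a no-exceptions out-of-band verification rule
# for unusual financial requests. All names and thresholds are illustrative.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # illustrative policy threshold


@dataclass
class PaymentRequest:
    amount: float
    channel: str              # channel the request arrived on, e.g. "voice", "email"
    beneficiary_known: bool   # beneficiary already on the approved vendor list
    callback_verified: bool   # requester confirmed via a number from the internal
                              # directory, NOT a number supplied in the request itself


def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """Any new beneficiary or large amount triggers a mandatory callback."""
    return not req.beneficiary_known or req.amount >= APPROVAL_THRESHOLD


def approve(req: PaymentRequest) -> bool:
    # No-exceptions rule: urgency, seniority of the requester, or the
    # channel the request came in on never override the callback requirement.
    if requires_out_of_band_check(req) and not req.callback_verified:
        return False
    return True
```

The design point is that the check depends only on the request's properties, never on who is asking or how urgently: `approve(PaymentRequest(250_000, "voice", beneficiary_known=False, callback_verified=False))` is refused no matter whose voice was on the call.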