Mobile devices are now the frontline of organizational risk

Basking Ridge, NJ — Mobile devices stopped being optional conveniences years ago; today they’re frontline endpoints attackers treat as primary targets. The Verizon 2025 Mobile Security Index (MSI) isn’t subtle: 85% of organizations report mobile attacks rising, and 75% increased mobile security spending last year. But here’s the kicker — the fast, largely unguided adoption of generative AI (genAI) by employees has dramatically expanded the attack surface and introduced a whole new class of messy, operational risks.

Why genAI makes mobile threats different

There are two related problems, and both are worrying.

  • Low preparedness for AI-assisted attacks: Only about 17% of orgs report specific controls to block AI-assisted attacks. That’s a yawning gap. Attackers are using genAI to scale social engineering and refine malware; these aren’t the blunt phishing emails of a decade ago.
  • Widespread genAI use on mobile devices: Nearly 93% of orgs say employees use genAI on mobile for day-to-day work. More than 64% list data compromise via genAI as a top mobile risk. In short: people are feeding corporate context into models with few guardrails.

Put another way: adversaries now have smarter toolchains, and employees have access to those same toolchains — often with zero training. From what I’ve seen, it’s like handing a high-powered tool to someone who’s only skimmed the manual. Things go wrong. Fast.

Human behavior remains the weakest link

The MSI sketches a true “perfect storm”: AI-enabled threats intersecting with human fallibility. One line that stuck with me: among orgs that ran smishing (SMS phishing) tests, as many as 39% saw half their staff click a malicious link. That’s not an abstract stat; that’s the exact path an attacker needs for credential theft, ransomware, or supply-chain intrusion.

Picture this: a finance analyst gets a mobile voice-transcribed message asking to approve an invoice. It uses internal codenames, mimics a manager’s cadence, and references a recent Slack thread. With genAI, attackers produce that context-rich lure in minutes. One click and you’re facing exfiltration, outages, and regulatory headaches. I’ve watched teams go from a single misclick to weeks of containment. It’s brutal and painfully common.

SMBs vs. enterprises: who’s more exposed?

SMBs feel the squeeze. The MSI shows 57% of SMBs believe they lack the resources to respond as effectively as larger firms, and 54% feel they have more to lose from a breach. Larger orgs often lead on a few proactive defenses:

  • Employee mobile security training: 66% of enterprises vs. 56% of SMBs
  • AI risk training: 50% of enterprises vs. 39% of SMBs
  • Advanced multi-factor authentication: 57% of enterprises vs. 45% of SMBs

But size isn’t immunity. Across the board, 63% reported significant downtime and 50% reported data loss in the past year. Those are real, billable impacts — downtime, reputation damage, compliance fines — and they explain why mobile endpoint security for businesses needs to sit near the top of every risk register.

How to build resilience in an AI-driven threat landscape

Resilience isn’t a single product purchase. It’s a layered program blending people, policy, and technical controls. Below are practical, prioritized steps you can start this quarter — not someday.

  • Create explicit AI usage and data-handling policies. Spell out which genAI tools are approved, what corporate data may be submitted, and what’s forbidden. Be specific: examples, do’s and don’ts, and clear escalation paths. Vague policies don’t help when someone’s under pressure and improvising.
  • Expand mobile-focused training with scenario-based exercises. Smishing and genAI-assisted phishing simulations should be frequent and role-specific. Short, tactical coaching right after an exercise beats a quarterly slide deck. Real-world practice measurably lowers click rates; I’ve seen teams cut repeat-fail rates in half after three targeted simulations. For remote staff, start small, make the scenarios believable (finance, HR, IT), and iterate; the first sketch after this list shows one way to track the results.
  • Deploy AI-aware security controls. Invest in telemetry and detection tuned for generative patterns: content analysis that recognizes synthesized text, device-level anomaly detection for unusual API or model usage on phones, and heuristics for AI-driven social engineering. Plain signatures won’t cut it; the second sketch below illustrates the kind of signal richer detection builds on.
  • Enforce strong authentication and least privilege. Step-up authentication for risky actions, conditional access for mobile apps, and tight app permissions. Least privilege isn’t glamorous, but it’s the single most practical limiter of damage when accounts are phished; a minimal step-up policy is sketched below.
  • Integrate network and mobile security. Unified visibility across endpoints and the corporate network catches lateral movement sooner. If your SIEM and MDM aren’t talking, you’re blind to much of the attack choreography; the last sketch after this list shows the basic join.
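
To make the training metrics concrete, here’s a minimal Python sketch for computing role-level click rates from a simulation campaign export. The CSV columns (employee_id, role, campaign, clicked) are an assumption; adapt them to whatever your simulation tool actually emits.

```python
# Minimal sketch: compute role-level click rates from smishing-simulation
# results. Assumes a hypothetical CSV export with columns:
# employee_id, role, campaign, clicked (true/false).
import csv
from collections import defaultdict

def click_rates_by_role(path: str) -> dict[str, float]:
    clicks: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            role = row["role"]
            totals[role] += 1
            if row["clicked"].strip().lower() == "true":
                clicks[role] += 1
    return {role: clicks[role] / totals[role] for role in totals}

if __name__ == "__main__":
    for role, rate in sorted(click_rates_by_role("smishing_results.csv").items()):
        print(f"{role}: {rate:.0%} clicked")
```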
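
Next, a deliberately simple illustration of the kind of signal “AI-aware” detection starts from: a keyword-based lure scorer. This is a toy, not a vendor ruleset; real detection would layer ML classifiers and device telemetry on top, and the patterns below are my own assumptions.

```python
# Toy heuristic for flagging context-rich social-engineering lures on mobile.
# Illustration only; production "AI-aware" detection pairs ML classifiers
# with device telemetry rather than keyword lists.
import re

URGENCY = re.compile(r"\b(urgent|immediately|right away|before (?:EOD|end of day))\b", re.I)
PAYMENT = re.compile(r"\b(invoice|wire|payment|gift card|approve)\b", re.I)
CREDS = re.compile(r"\b(password|verification code|login|MFA)\b", re.I)

def lure_score(message: str) -> int:
    """Return 0-3: how many social-engineering signals the message trips."""
    return sum(bool(p.search(message)) for p in (URGENCY, PAYMENT, CREDS))

msg = "Urgent: please approve invoice #8831 before EOD, the manager is traveling."
if lure_score(msg) >= 2:
    print("Flag for step-up verification:", msg)
```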
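
The step-up idea reduces to a small policy function: each sensitive action demands a minimum authentication strength, and weaker sessions get challenged. The action names and factor rankings here are illustrative assumptions, not any product’s API.

```python
# Sketch of a step-up authentication policy: risky actions on mobile require
# a stronger factor than the session was opened with. Action risk levels and
# factor rankings are illustrative assumptions.
from enum import IntEnum

class Factor(IntEnum):
    PASSWORD = 1
    PUSH_APPROVAL = 2
    PHISHING_RESISTANT = 3  # e.g., FIDO2 / passkey

ACTION_MIN_FACTOR = {
    "read_dashboard": Factor.PASSWORD,
    "approve_invoice": Factor.PHISHING_RESISTANT,
    "change_payroll": Factor.PHISHING_RESISTANT,
}

def needs_step_up(action: str, session_factor: Factor) -> bool:
    # Unknown actions default to requiring at least a push approval.
    required = ACTION_MIN_FACTOR.get(action, Factor.PUSH_APPROVAL)
    return session_factor < required

print(needs_step_up("approve_invoice", Factor.PUSH_APPROVAL))  # True -> challenge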
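
Finally, a rough sketch of the MDM/SIEM join: enriching SIEM alerts with device ownership and compliance state pulled from an MDM inventory. The JSON shapes are hypothetical stand-ins for your own export formats.

```python
# Sketch: join an MDM device inventory with SIEM alerts so a mobile alert
# can be tied to a device's owner and compliance state. JSON shapes are
# hypothetical; substitute your MDM/SIEM export formats.
import json

def enrich_alerts(mdm_path: str, siem_path: str) -> list[dict]:
    with open(mdm_path) as f:
        devices = {d["device_id"]: d for d in json.load(f)}
    with open(siem_path) as f:
        alerts = json.load(f)
    enriched = []
    for alert in alerts:
        device = devices.get(alert.get("device_id"), {})
        enriched.append({
            **alert,
            "owner": device.get("owner", "unknown"),
            "compliant": device.get("compliant", False),
        })
    return enriched

for a in enrich_alerts("mdm_inventory.json", "siem_alerts.json"):
    if not a["compliant"]:
        print("Non-compliant device in alert:", a)
```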

To borrow a line from Chris Novak, VP of Global Cybersecurity Solutions at Verizon Business: mobile security is “a battle fought in the palm of every employee’s hand.” I’d add: train the hand, secure the tool, and assume attackers will weaponize AI — then build for that reality.

Short-term actions for immediate impact

  • Audit which genAI apps are used on mobile and immediately block or restrict risky integrations; a first-pass audit is sketched after this list.
  • Mandate privacy-preserving settings and data controls for any approved AI apps. No free-text uploads of customer PII, and no exceptions without review; write the restriction down as policy and back it with a technical check (see the second sketch below).
  • Prioritize protection for high-risk users (finance, HR, IT admins): step-up authentication, tighter session controls, and stricter data egress policies.
  • Draft an incident playbook that specifically covers AI-generated content and model abuse: who analyzes the model output, how to validate claimed provenance, and escalation timelines. Keep it practical and tabletop-tested.
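
For the audit item, a first pass can be as simple as intersecting an MDM app inventory with a reviewed list of genAI apps. The bundle identifiers and export format below are examples I’ve made up for illustration, not an authoritative list.

```python
# Sketch of a first-pass genAI app audit: flag devices whose installed-app
# list (a hypothetical MDM export) matches a deny/review list of AI apps.
# App identifiers are examples; maintain your own reviewed list.
import json

GENAI_APPS = {"com.openai.chat", "com.anthropic.claude", "ai.perplexity.app"}

def flag_genai_installs(inventory_path: str) -> dict[str, list[str]]:
    with open(inventory_path) as f:
        devices = json.load(f)  # [{"device_id": ..., "apps": [...]}, ...]
    return {
        d["device_id"]: sorted(GENAI_APPS.intersection(d["apps"]))
        for d in devices
        if GENAI_APPS.intersection(d["apps"])
    }

print(flag_genai_installs("mdm_app_inventory.json"))
```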
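
For the PII rule, a toy pre-upload check shows the shape of the control. In practice this belongs in your DLP or MDM layer, and the regexes here are illustrative assumptions that will miss plenty.

```python
# Toy pre-upload DLP check: block free text headed to a genAI app if it
# looks like it contains customer PII. Regexes are illustrative; a real
# control belongs in your DLP/MDM layer, not a script.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def pii_hits(text: str) -> list[str]:
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

prompt = "Summarize this complaint from jane.doe@example.com, card 4111 1111 1111 1111"
hits = pii_hits(prompt)
if hits:
    print("Blocked: prompt appears to contain PII:", hits)
```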

Long-term strategy: adapt, iterate, and measure

Security is continuous, never a checkbox. Track smishing click rates, measure incidents tied to AI-generated content, and feed those metrics back to the business. Use the data to justify targeted investments: better telemetry, more frequent tabletop exercises, or vendor tools that detect AI-specific threats.
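
Here’s a sketch of what “feed those metrics back” can look like in practice, using hypothetical quarterly records in place of your real simulation and incident-response data:

```python
# Sketch: track the two metrics the text calls out (smishing click rate and
# incidents tied to AI-generated content) quarter over quarter. Records are
# hypothetical stand-ins for your simulation and IR data.
quarters = [
    {"q": "Q1", "sim_targets": 400, "sim_clicks": 124, "ai_incidents": 6},
    {"q": "Q2", "sim_targets": 420, "sim_clicks": 88, "ai_incidents": 4},
    {"q": "Q3", "sim_targets": 410, "sim_clicks": 53, "ai_incidents": 5},
]

for row in quarters:
    rate = row["sim_clicks"] / row["sim_targets"]
    print(f'{row["q"]}: click rate {rate:.1%}, AI-linked incidents {row["ai_incidents"]}')
```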

There’s nuance. Not every org needs every shiny tool. The smart move: pilot narrow controls, measure impact, then scale what demonstrably reduces risk. I’ve seen programs succeed where leaders prioritized a few high-impact changes, then widened scope as confidence (and budget) grew.

For benchmarking, read the Verizon 2025 Mobile Security Index (MSI) — it’s a solid place to start for context and controls. Industry research increasingly supports a people-first posture coupled with AI-aware tooling when facing generative threats. If you want deeper reading on AI-specific browser and agent risks, see our guide to AI browser risks.

Key takeaways

  • Mobile + AI = elevated risk: generative AI amplifies attacker capabilities while mobile devices broaden the attack surface.
  • Most organizations are underprepared: only a small fraction have AI-specific controls in place.
  • Human behavior matters: targeted, realistic simulations and coaching reduce risk faster than generic training. (Yes, even for remote teams.)
  • Unify defenses: network and mobile security must be integrated, and detection needs to be AI-aware.

From what I’ve watched across market cycles, organizations that combine precise policy, ongoing role-based mobile security training, and adaptive tech gain two big advantages: they reduce successful attacks and shorten recovery time. It’s not about hoarding every new tool — it’s about changing how people interact with AI on their phones. That cultural and operational shift often separates a near miss from a headline-making breach.

Learn more about AI-specific browser and agent risks in our piece on OpenAI’s Atlas Browser: Powerful AI, Big Convenience — and Serious Security Risks, which explores how powerful AI tooling on endpoints can introduce new attack surfaces relevant to mobile and browser-integrated agents.
