A growing chorus of cybersecurity experts is sounding the alarm over the weaponization of artificial intelligence by cybercriminals, marking what many describe as a new era of digital warfare. According to the 2025 Threat Report by cybersecurity firm Deep Instinct, AI-powered attacks—including ransomware, phishing scams, and polymorphic malware—have surged dramatically, with ransomware incidents alone rising by 30% since 2023.
Yariv Fishman, Chief Product Officer at Deep Instinct, emphasized the urgency of proactive defense: “The era of reactive security is over. Preemptive defenses powered by deep learning will define cybersecurity’s future.”
The University of California, Berkeley’s Center for Long-Term Cybersecurity (CLTC) also highlighted the growing sophistication of AI-enabled cybercrime. In a recent tabletop exercise involving law enforcement and industry leaders, experts noted that large language models (LLMs) are lowering the barrier to entry for cybercriminals, enabling tailored phishing campaigns, deepfake impersonations, and automated reconnaissance.
Forbes contributor Emil Sayegh, a serial tech CEO, warned that the unchecked deployment of AI across critical infrastructure, often without adequate scrutiny, is creating irreversible vulnerabilities. He cited the case of DeepSeek, a Chinese AI chatbot found to transmit unencrypted user data and to contain hard-coded encryption keys, raising serious national security concerns.
As AI continues to evolve, experts urge governments, corporations, and developers to adopt stricter oversight, transparent data practices, and AI-specific cybersecurity protocols to mitigate the risks of this rapidly expanding threat landscape.
Sources:
SafetyDetectives – Deep Instinct 2025 Threat Report
Forbes – Emil Sayegh on AI Cybersecurity Risks
UC Berkeley CLTC – Rise of AI-enabled Cybercrime