

AI vs Hackers: An Arms Race With No Finish Line

Mar 1, 2026 (Updated: Apr 13, 2026) · 4 min read

The cybersecurity landscape of 2026 is defined by a paradox that would be funny if the stakes were not so catastrophically high: the same artificial intelligence technology that Security Operations Centre analysts use to detect network intrusions at 3 AM is the technology attackers used to craft the phishing email that caused the intrusion in the first place. AI has not merely entered the cybersecurity arena; it has armed both sides simultaneously, creating an accelerating offensive-defensive arms race whose trajectory nobody can confidently predict. Defenders have AI that can analyse millions of network events per second, detect anomalous patterns that no human analyst could identify, and respond to threats in milliseconds. Attackers have AI that can generate near-undetectable phishing emails, discover software vulnerabilities automatically, and adapt attack strategies in real time based on the defender's responses. Neither side holds a decisive advantage, and the equilibrium, such as it is, depends on which side deploys AI more effectively at any given moment.

This essay examines the current state of this arms race with the specificity that the topic demands—not the vague "AI is changing cybersecurity" generalisations that dominate the conversation, but the actual technical mechanisms through which AI is deployed offensively and defensively, the real-world incidents that illustrate these mechanisms, and the honest assessment of where the advantage currently lies (spoiler: it shifts constantly).

The Offensive Side: How Attackers Use AI


AI-Generated Phishing: Phishing—the practice of sending fraudulent communications that appear to come from legitimate sources to trick recipients into revealing sensitive information or installing malware—has been the most successful cyberattack vector for decades. Traditional phishing had a built-in quality ceiling: the emails were written by humans (often non-native English speakers) and frequently contained grammatical errors, formatting inconsistencies, and implausible scenarios that alert recipients could identify. Large language models have eliminated this ceiling. AI-generated phishing emails are grammatically perfect, tonally appropriate for the impersonated sender, personalised using publicly available information about the recipient (scraped from LinkedIn, social media, and corporate websites), and can be generated at scale—thousands of unique, personalised phishing emails per hour, each indistinguishable from legitimate correspondence.

The attack that demonstrated this capability most vividly occurred in early 2024, when a Hong Kong-based multinational's finance department received a video call from what appeared to be the company's CFO instructing them to transfer $25 million to a specified account. The "CFO" was a deepfake—a real-time AI-generated video and audio synthesis of the executive's appearance and voice, created using publicly available footage (earnings calls, conference presentations, media interviews). The finance team, convinced by the visual and auditory authenticity of the instruction, executed the transfer. The money was gone within minutes. This attack required no traditional hacking—no network penetration, no malware, no zero-day exploit. It required AI-generated video, AI-generated voice, and the human tendency to trust sensory evidence.

Automated Vulnerability Discovery: AI-powered fuzzing tools—programmes that automatically generate and submit millions of malformed inputs to software systems to identify inputs that cause crashes, errors, or unexpected behaviour—can discover software vulnerabilities at a speed that vastly exceeds human security research. Google's OSS-Fuzz project, which uses AI-enhanced fuzzing to test open-source software, has discovered over 10,000 vulnerabilities in widely used software projects. The same techniques are available to attackers, who can use AI fuzzing to discover zero-day vulnerabilities (previously unknown security flaws) in target software, develop exploits, and deploy attacks before the software vendor becomes aware of the vulnerability. The window between AI-powered vulnerability discovery and exploit deployment is shrinking toward zero.
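The core loop of mutation-based fuzzing is simple enough to sketch. The snippet below is a toy illustration only (OSS-Fuzz is coverage-guided and vastly more sophisticated): it flips random bytes in a seed input and records every mutant that crashes a deliberately buggy parser. Both toy_parser and its bug are invented for the example.

```python
import random

def toy_parser(data: bytes) -> int:
    # Invented target for the demo: "crashes" (raises) when a malformed
    # header declares an oversized length field.
    if len(data) >= 4 and data[:2] == b"HD" and data[2] > 0x7F:
        raise ValueError("header length overflow")
    return len(data)

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Simplest possible mutation strategy: overwrite one random byte.
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000, rng_seed: int = 0) -> list:
    # Hammer the target with mutated inputs, collecting every input
    # that triggers an unhandled exception (a "crash").
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            toy_parser(candidate)
        except ValueError:
            crashes.append(candidate)
    return crashes

crashes = fuzz(b"HD\x10payload")
```

Real fuzzers add the crucial ingredient this sketch lacks: coverage feedback, which keeps mutants that exercise new code paths and mutates them further, letting the search burrow into deeply nested parsing logic. AI augments this by generating structurally valid seed inputs and fuzz harnesses automatically.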

Adaptive Attack Strategies: Traditional cyberattacks follow predefined playbooks—if the initial exploit succeeds, execute step two; if it fails, try alternative exploit. AI-powered attacks can adapt in real time: modifying payloads to evade detection signatures, adjusting timing to avoid anomaly detection thresholds, impersonating different network protocols to bypass firewall rules, and pivoting to alternative attack vectors when primary approaches are blocked. This adaptive capability makes AI-powered attacks significantly harder to defend against using signature-based detection (which relies on recognizing known attack patterns) and significantly increases the cognitive load on human security analysts who must interpret and respond to attacks that change shape during the engagement.

The Defensive Side: How AI Protects Networks

Behavioural Analytics and Anomaly Detection: The most valuable defensive application of AI in cybersecurity is the establishment of baseline behavioural profiles for users, devices, and network traffic, and the detection of deviations from those baselines that may indicate compromise. When a user account that normally accesses the company's CRM from a Delhi office during business hours suddenly begins accessing financial databases from a Tor exit node at 2 AM, AI-powered UEBA (User and Entity Behaviour Analytics) systems flag this as anomalous within seconds—correlating the time, location, access pattern, and data volume into a risk score that triggers automated investigation and, if the score exceeds a threshold, automated containment (isolating the compromised account before the attacker can extract data). This analysis happens across millions of events per second—a monitoring density that no team of human analysts could achieve.
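The scoring idea behind UEBA can be sketched in a few lines. Everything below is illustrative: the weights, thresholds, and features are made up for the example, whereas a real system learns per-user baselines from months of telemetry rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    # Illustrative per-user profile; a real UEBA system learns far
    # richer baselines from historical telemetry.
    usual_hours: range   # local hours this account normally works
    known_locations: set # geolocations seen historically
    mean_bytes: float    # mean outbound data volume per session
    std_bytes: float     # std dev of outbound data volume per session

def risk_score(b: Baseline, hour: int, location: str, bytes_out: float) -> float:
    # Combine simple per-feature anomaly signals into one score in [0, 1].
    score = 0.0
    if hour not in b.usual_hours:
        score += 0.3  # off-hours access
    if location not in b.known_locations:
        score += 0.4  # never-seen location (e.g. a Tor exit node)
    if (bytes_out - b.mean_bytes) / b.std_bytes > 3:
        score += 0.3  # data volume 3+ std devs above normal
    return min(score, 1.0)

profile = Baseline(usual_hours=range(9, 19), known_locations={"Delhi"},
                   mean_bytes=50e6, std_bytes=10e6)

# Normal session: business hours, known office, typical volume.
low = risk_score(profile, hour=11, location="Delhi", bytes_out=48e6)
# Suspicious session: 2 AM, unknown exit node, 500 MB outbound.
high = risk_score(profile, hour=2, location="tor-exit-7", bytes_out=500e6)
```

The normal session scores 0.0 and is ignored; the 2 AM session from an unknown exit node with a massive outbound transfer maxes the score and would cross any reasonable containment threshold. Production systems replace these hand-set weights with learned models, but the shape of the decision is the same.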

Automated Incident Response: SOAR platforms (Security Orchestration, Automation, and Response), enhanced with AI decision-making, can perform initial incident response actions—isolating compromised systems, blocking malicious IP addresses, revoking compromised credentials, preserving forensic evidence—within seconds of detection, without waiting for human analyst approval. This automated response capability is critical because many modern attacks (ransomware in particular) operate on timelines measured in minutes: from initial access to data encryption, a well-designed ransomware attack can render an entire network inoperable in 15-30 minutes. A human-dependent response process—detect, alert, investigate, decide, act—requires 30-60 minutes at minimum. Automated response eliminates the decision latency that human-in-the-loop processes introduce.
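The decision logic of such a playbook can be sketched as follows. Everything here is hypothetical: the function names stand in for calls to real IAM, EDR, and firewall APIs, and the 0.8 threshold is an arbitrary example value.

```python
def contain(account: str, host: str, log: list) -> None:
    # Containment playbook; in production each step would call a real
    # IAM, EDR, or firewall API (all names here are hypothetical).
    log.append(f"revoke_sessions:{account}")  # kill active sessions
    log.append(f"disable_account:{account}")  # block re-authentication
    log.append(f"isolate_host:{host}")        # cut the host off the network
    log.append(f"snapshot_host:{host}")       # preserve forensic evidence

def handle_alert(risk: float, account: str, host: str,
                 threshold: float = 0.8) -> list:
    # The automated decision: contain immediately above the threshold,
    # otherwise hand the alert to a human analyst.
    log = []
    if risk >= threshold:
        contain(account, host, log)
    else:
        log.append(f"queue_for_analyst:{account}")
    return log
```

The point of the sketch is the absence of a human in the high-risk branch: containment begins the instant the score crosses the threshold, which is what collapses response time from tens of minutes to seconds.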

The Indian Cybersecurity Landscape

India's cybersecurity situation is particularly consequential because of the country's rapid digitalisation across banking (UPI processes over 12 billion transactions monthly), government services (Aadhaar-linked services impacting 1.3 billion citizens), and enterprise operations. CERT-In (India's Computer Emergency Response Team) reported over 1.39 million cybersecurity incidents in 2024—a number that almost certainly understates the actual volume due to underreporting. The Indian cybersecurity workforce is estimated at approximately 300,000 professionals against a requirement of approximately 1.2 million—a skills gap that AI tools must partially fill.

Indian organisations face specific threat patterns: UPI-related fraud (social engineering attacks targeting mobile payment users), Aadhaar data exploitation (attacks targeting the biometric and demographic database that underpins India's digital identity infrastructure), and ransomware targeting Indian healthcare and educational institutions (which typically have weaker security postures than financial services and large enterprises). AI defensive tools deployed by Indian financial institutions—particularly the major banks (SBI, HDFC, ICICI) and payment platforms (Paytm, PhonePe, Google Pay)—have significantly reduced fraud rates by detecting anomalous transaction patterns in real time, but the sophistication of attacks continues to escalate.

Frequently Asked Questions (FAQs)

Is AI making us more secure or less secure overall?
The honest answer is: both, simultaneously, with the net effect varying by organisation. Large organisations with sophisticated security operations and the resources to deploy AI defensive tools are measurably more secure than they were five years ago—AI has dramatically improved detection speed, response time, and the ability to process the volume of security events that modern networks generate. Small and medium organisations that lack dedicated security teams are arguably less secure, because they face AI-enhanced attacks without AI-enhanced defences. The security gap between well-resourced and under-resourced organisations is widening, and AI is accelerating that divergence. The net societal effect depends on which side of this gap contains more organisations—and currently, the under-resourced side is substantially larger.

What should individuals do to protect themselves from AI-powered attacks?
The most impactful individual protections against AI-enhanced attacks are:

1. Enable multi-factor authentication on every account that supports it—MFA defeats credential theft, which remains the most common attack vector.
2. Verify unusual requests through a separate communication channel—if you receive an email or call requesting a financial transaction or credentials, verify by calling the sender directly on a known phone number.
3. Be sceptical of urgency—AI-generated phishing emails and deepfake calls typically create artificial time pressure ("transfer this money within the hour") because urgency short-circuits critical thinking.
4. Keep software updated—automated vulnerability exploitation targets known vulnerabilities that have available patches.
5. Use a password manager—unique, complex passwords for every account eliminate the risk of credential stuffing attacks that exploit password reuse.

Can AI detect deepfakes?
AI deepfake detection tools exist and are improving, but they face a fundamental asymmetry: the detection system must identify subtle artefacts that distinguish generated media from authentic media, while the generation system can continuously improve to reduce those artefacts. Current deepfake detection achieves 85-95% accuracy on known generation methods but degrades significantly when confronted with generation techniques it has not been trained on. The practical reality is that deepfake detection is an ongoing arms race, not a solved problem. For high-stakes communications (financial transactions, executive instructions), the most reliable defence is not technical detection but process controls: requiring multi-person authorisation for large transactions, establishing verification protocols for unusual requests, and maintaining healthy scepticism toward any communication that requests urgent action.


About Naval Kishor

Naval is a technology enthusiast and the founder of Bytes & Beyond. With over 8 years of experience in the digital space, he breaks down complex subjects into engaging, everyday insights.

