TL;DR:
a16z crypto published a research paper that exposes a security problem in DeFi: artificial intelligence agents no longer merely assist in defending protocols; they can autonomously identify and reproduce price manipulation vulnerabilities.
Preliminary results indicate success rates close to 70% when agents have access to known exploit paths and structured knowledge, though they still struggle with complex multi-step attacks.

For years, security in DeFi followed a predictable pattern: protocols launched code, commissioned audits, patched detected issues, and trusted that the review was sufficient. That model already looked fragile when human attackers outpaced audit cycles. AI agents widened that gap substantially.
A system capable of continuously testing exploit paths does not wait for the next scheduled review. It keeps searching. That is why a16z argues that the DeFi ecosystem must abandon the “code is law” logic and move toward security based on formal specifications: proving what a protocol is allowed to do, rather than reacting only after an attack has already occurred.
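What specification-based security means in practice can be made concrete: a protocol states invariants that must always hold, checks them on every state transition, and rejects any transition that would break them, rather than trusting that audited code behaves. The sketch below is illustrative only; the `ToyLendingPool`, its 150% solvency rule, and the numbers are assumptions for this example, not part of the a16z paper.

```python
# Minimal sketch: a protocol that enforces a stated invariant ("what it is
# allowed to do") instead of reacting only after an exploit.
# ToyLendingPool and its solvency rule are hypothetical, for illustration.

class InvariantViolation(Exception):
    pass

class ToyLendingPool:
    """Collateral must always cover at least 150% of outstanding debt."""
    MIN_COLLATERAL_RATIO = 1.5

    def __init__(self):
        self.collateral = 0.0  # value of deposited collateral
        self.debt = 0.0        # value of outstanding loans

    def _check_invariant(self):
        # The specification, enforced on every state transition.
        if self.debt > 0 and self.collateral < self.MIN_COLLATERAL_RATIO * self.debt:
            raise InvariantViolation("pool would become undercollateralized")

    def deposit(self, amount):
        self.collateral += amount
        self._check_invariant()

    def borrow(self, amount):
        self.debt += amount
        try:
            self._check_invariant()
        except InvariantViolation:
            self.debt -= amount  # roll back the forbidden transition
            raise

pool = ToyLendingPool()
pool.deposit(150)
pool.borrow(100)           # allowed: 150 >= 1.5 * 100
try:
    pool.borrow(1)         # forbidden: would break the invariant
except InvariantViolation:
    print("blocked")       # prints "blocked"
```

The point of the sketch is the inversion of the security model: the forbidden transition is rejected before it happens, instead of being discovered in a post-mortem.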

What makes AI particularly dangerous is its scale. An agent does not need creativity in the human sense: it needs repetition and enough reasoning capacity to test assumptions faster than defenders can respond. If it can simulate thousands of exploit paths across lending pools, oracles, bridge logic, and liquidation mechanics, the attacker only needs one to work. The defender must protect all of them.
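That asymmetry can be illustrated with a toy search: an agent sweeping trade sizes against a constant-product AMM whose spot price is used as an oracle, stopping at the first trade that moves the price past a deviation threshold. The pool reserves, the 20% threshold, and the `find_manipulating_trade` helper are hypothetical, invented for this sketch; real price manipulation attacks are more elaborate, but the search structure is the same.

```python
# Toy illustration of the attacker's asymmetry: enumerate many candidate
# exploit paths, succeed if any single one works. The AMM reserves and the
# 20% price-deviation threshold are assumptions for this sketch.

def spot_price(x_reserve, y_reserve):
    # Constant-product AMM: spot price of X in units of Y.
    return y_reserve / x_reserve

def swap_x_for_y(x_reserve, y_reserve, dx):
    # x * y = k stays constant; selling dx of X pushes the price of X down.
    k = x_reserve * y_reserve
    new_x = x_reserve + dx
    new_y = k / new_x
    return new_x, new_y

def find_manipulating_trade(x_reserve, y_reserve, max_deviation=0.20):
    """Sweep trade sizes and return the first one that moves the oracle
    price by more than max_deviation. Only one path needs to work."""
    base = spot_price(x_reserve, y_reserve)
    for dx in range(1, 10_000):
        new_x, new_y = swap_x_for_y(x_reserve, y_reserve, dx)
        if abs(spot_price(new_x, new_y) - base) / base > max_deviation:
            return dx
    return None

dx = find_manipulating_trade(1_000.0, 1_000.0)
print(dx)  # → 119: the first trade size that moves the oracle past 20%
```

The defender has to make every `dx` safe; the agent only has to find one that is not, and it can rerun the sweep after every protocol change.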
According to a16z, composability also worsens the outlook. A vulnerability in an isolated contract is dangerous. In a bridge or a cross-chain collateral structure, it can become systemic. AI agents do not distinguish between “core” and “peripheral” failures: they evaluate whether the system’s assumptions break down, and they do so at machine speed.
The a16z research also notes that, historically, the attack arrives before the defense. Attackers experiment without needing governance approval or internal consensus; they only need one opening. According to initial reports, AI agents are more effective at exploiting vulnerabilities than at safely remediating them: detecting a flaw is simpler than fixing it without breaking anything else. That should unsettle every DeFi protocol operating today.