TL;DR
Ethereum developers are outlining a new privacy model for AI chatbots that shields user identities while still allowing providers to verify payments and punish abuse. Vitalik Buterin and Davide Crapis explain that today’s AI systems expose sensitive data because API calls can be logged, tracked, and linked to real individuals. Their proposal introduces a zero‑knowledge framework that lets users interact privately without sacrificing accountability.
Ethereum’s Buterin and Crapis argue that AI chatbots today rely on email logins or credit card payments, both of which tie every request to a real identity. This creates risks of profiling, tracking, and even legal exposure if logs are used in court. Blockchain payments are not a solution either: paying on‑chain for each request is slow, expensive, and leaves every transaction as a publicly traceable record, making privacy impossible. The developers say the industry can no longer ignore these issues as AI usage grows daily.
To solve this, Ethereum developers propose a system where users deposit funds into a smart contract once and then make thousands of private API calls. Providers can verify that each request is paid for without the user revealing their identity on every call. Zero‑knowledge cryptography ensures that honest users remain anonymous while still proving they are spending from their deposited funds. This model aims to keep people safe while allowing AI technology to scale responsibly.
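To make the deposit‑once, spend‑many idea concrete, here is a highly simplified Python sketch. The names `PaymentPool`, `fund`, and `ticket` are illustrative assumptions, not part of the proposal, and a plain hash commitment stands in for the ZK‑STARK proofs the real design would use.

```python
import hashlib
import secrets

class PaymentPool:
    """Stands in for the smart contract that holds a user's deposit."""
    def __init__(self):
        self.balances = {}  # commitment -> remaining prepaid balance

    def deposit(self, commitment: str, amount: int) -> None:
        # The contract records a commitment derived from the user's secret,
        # never the identity behind it.
        self.balances[commitment] = self.balances.get(commitment, 0) + amount


class User:
    def __init__(self):
        self.secret = secrets.token_hex(32)  # stays on the user's device
        self.commitment = hashlib.sha256(self.secret.encode()).hexdigest()

    def fund(self, pool: PaymentPool, amount: int) -> None:
        pool.deposit(self.commitment, amount)

    def ticket(self, index: int) -> dict:
        # Each API call uses a fresh ticket index; because the nullifier is
        # deterministic, reusing an index is detectable by the provider.
        nullifier = hashlib.sha256(f"{self.secret}:{index}".encode()).hexdigest()
        return {"index": index, "nullifier": nullifier}


pool = PaymentPool()
alice = User()
alice.fund(pool, 1_000)        # one visible on-chain deposit
request = alice.ticket(0)      # thousands of private API calls can follow
print(request["nullifier"][:16], pool.balances[alice.commitment])
```

The point of the sketch is the shape of the flow: one public deposit tied to a commitment, then many requests that never repeat the user's identity.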

The system uses Rate‑Limit Nullifiers, which allow anonymous requests while catching anyone who tries to cheat. Each request receives a ticket index, and the user must generate a ZK‑STARK proof showing they are spending deposited funds and accounting for any refunds owed. A unique nullifier prevents reuse of the same ticket, immediately exposing double‑spending attempts. Refund processing is built in because AI requests vary in cost.
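The nullifier check and refund step can be pictured with a few lines of Python. This is a toy model: the `Provider` class, `settle` method, and fixed `price_quote` are assumptions for illustration, and a real provider would verify a ZK‑STARK proof rather than trusting the ticket directly.

```python
class Provider:
    """Toy model of provider-side checks; a real system verifies ZK-STARKs."""
    def __init__(self, price_quote: int):
        self.price_quote = price_quote   # amount reserved per ticket
        self.seen_nullifiers = set()

    def settle(self, ticket: dict, actual_cost: int) -> int:
        # A repeated nullifier means the same ticket was spent twice.
        if ticket["nullifier"] in self.seen_nullifiers:
            raise ValueError("double-spend detected: ticket already used")
        self.seen_nullifiers.add(ticket["nullifier"])

        # Requests vary in cost, so the unused part of the quote is refunded.
        return max(self.price_quote - actual_cost, 0)


provider = Provider(price_quote=10)
ticket = {"index": 0, "nullifier": "abc123"}
print(provider.settle(ticket, actual_cost=7))    # refunds 3
try:
    provider.settle(ticket, actual_cost=7)       # same ticket reused
except ValueError as err:
    print(err)
```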
Buterin and Crapis note that abuse includes more than double‑spending: users may submit harmful prompts, attempt jailbreaks, or request illegal content. To address this, the protocol adds dual staking. One stake backs the protocol’s mathematical rules, while the other backs the provider’s policies, ensuring that malicious behavior can be punished without revealing user identities.
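One way to picture the dual‑staking idea is a single user stake that two different enforcers can slash for different reasons. The sketch below is a hypothetical toy, not the actual protocol: the class names and slashing amounts are invented, and in practice slashing would be triggered by on‑chain proofs or provider attestations rather than direct method calls.

```python
class StakedUser:
    """Toy model: one user stake, slashable by two different enforcers."""
    def __init__(self, stake: int):
        self.stake = stake

    def slash(self, amount: int, reason: str) -> None:
        self.stake = max(self.stake - amount, 0)
        print(f"slashed {amount} for {reason}; {self.stake} remaining")


class ProtocolEnforcer:
    """Enforces the mathematical rules, e.g. punishing a reused nullifier."""
    def punish_double_spend(self, user: StakedUser) -> None:
        user.slash(100, "double-spend (reused nullifier)")


class PolicyEnforcer:
    """Enforces the provider's content policies."""
    def punish_violation(self, user: StakedUser) -> None:
        user.slash(25, "policy violation (abusive prompt)")


user = StakedUser(stake=150)
ProtocolEnforcer().punish_double_spend(user)
PolicyEnforcer().punish_violation(user)
```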