From the primary market perspective: Crypto × AI — An Experiment in the Illusion of Tokenization

Author: Lao Bai

Two years on, Vitalik has posted another tweet — on the same date as last time, February 10th. (Related reading: ABCDE: Analyzing AI+Crypto from a Primary Market Perspective)

Two years ago, Vitalik subtly expressed that he wasn’t very optimistic about our then-popular concept of Crypto Helping AI. At that time, the three main trends in the community were the tokenization of computing power assets, data assets, and model assets. My report from two years ago mainly discussed some phenomena and doubts observed in these three areas from a primary market perspective. From Vitalik’s point of view, he still favors AI Helping Crypto more.

He gave several examples at that time:

AI as a participant in games;

AI as a game interface;

AI as game rules;

AI as game objectives;

Over the past two years, we’ve tried many approaches to Crypto Helping AI, but with limited results. Many projects and tracks are just issuing tokens—no real product-market fit (PMF). I call this the “Tokenization Illusion.”

  1. Tokenization of computing power assets — Most networks cannot meet enterprise-level SLAs: nodes are unstable and frequently disconnect. They can only handle simple small-to-medium model inference tasks, mostly serving edge markets, with income not tied to the token…

  2. Data assetization — Friction is high on the supply side (retail investors), willingness is low, and uncertainty is high. On the demand side (enterprises), what’s needed are structured, context-dependent, trustworthy, and legally responsible professional data providers. DAO-based Web3 project teams find it hard to provide such data.

  3. Model assetization — Models are inherently non-scarce, replicable, fine-tunable, and quickly depreciating process assets, not final assets. Hugging Face is more of a collaboration and dissemination platform—like GitHub for ML—not an app store for models. So, attempts to tokenize models via “Decentralized Hugging Face” generally end in failure.

Additionally, over these two years we have experimented with various "Verifiable Inference" methods — a typical case of a hammer looking for a nail: from ZKML to OPML to game-theoretic (optimistic) verification, and even EigenLayer pivoting its Restaking narrative toward Verifiable AI.

But the situation resembles the Restaking track itself — few AVSs are willing to pay for ongoing verifiable security.

Similarly, verifiable inference mostly verifies things that nobody actually needs verified. The threat models on the demand side are extremely vague — whom exactly are we defending against?

AI output errors (a model-capability problem) are far more common than malicious tampering with AI outputs (an adversarial problem). Recent security incidents involving OpenClaw and Moltbook show this. The real issues stem from:

Incorrect strategy design;

Excessive permissions;

Unclear boundaries;

Unexpected interactions between tool combinations;

There are almost no real concerns about “model tampering” or “malicious rewriting of inference processes.”

Last year, I shared this chart—maybe some old friends remember it.

The ideas Vitalik presented this time are clearly more mature than two years ago, thanks to progress in privacy tech, x402, ERC-8004, prediction markets, and other areas.

You can see that he divided the four quadrants into two halves — one for AI Helping Crypto, the other for Crypto Helping AI — no longer leaning heavily toward the former as he did two years ago.

Top-left and bottom-left — Using Ethereum’s decentralization and transparency to solve trust and economic collaboration issues in AI:

  1. Enabling trustless and private AI interaction (Infrastructure + Survival): Using ZK, FHE, and other technologies to ensure privacy and verifiability in AI interactions (not sure if what I mentioned earlier about verifiable inference counts here).

  2. Ethereum as an economic layer for AI (Infrastructure + Prosperity): Allowing AI agents to perform economic transactions, recruit other bots, pay deposits, or build reputation systems via Ethereum, thus creating a decentralized AI architecture beyond single giant platforms.
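The "economic layer" idea above — agents posting deposits, paying fees, and accumulating reputation — can be sketched as a toy model. Everything here (`Ledger`, `settle_job`, the agent names) is hypothetical and in-memory; a real system would use on-chain escrow contracts and ERC-8004-style identity, not this class.

```python
# Toy sketch of AI agents using a shared ledger for deposits and
# reputation. All names are illustrative assumptions, not a real
# protocol API.

class Ledger:
    def __init__(self):
        self.balances = {}    # agent -> spendable balance
        self.deposits = {}    # agent -> locked stake (slashable)
        self.reputation = {}  # agent -> count of completed jobs

    def fund(self, agent, amount):
        self.balances[agent] = self.balances.get(agent, 0) + amount

    def stake(self, agent, amount):
        # Lock part of an agent's balance as a deposit.
        if self.balances.get(agent, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[agent] -= amount
        self.deposits[agent] = self.deposits.get(agent, 0) + amount

    def settle_job(self, client, worker, fee, success):
        # Client pays the worker on success; on failure, the worker's
        # deposit is slashed instead.
        if success:
            self.balances[client] -= fee
            self.balances[worker] = self.balances.get(worker, 0) + fee
            self.reputation[worker] = self.reputation.get(worker, 0) + 1
        else:
            self.deposits[worker] = max(0, self.deposits.get(worker, 0) - fee)

ledger = Ledger()
ledger.fund("agent_a", 100)            # hiring agent
ledger.fund("agent_b", 50)             # worker agent
ledger.stake("agent_b", 20)            # worker posts a deposit
ledger.settle_job("agent_a", "agent_b", fee=10, success=True)
print(ledger.balances["agent_b"])      # 40: 50 funded - 20 staked + 10 fee
print(ledger.reputation["agent_b"])    # 1
```

The point of the sketch is the incentive structure — capital at risk plus a persistent reputation record — rather than any particular implementation.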

Top-right and bottom-right — Using AI’s intelligence to optimize user experience, efficiency, and governance in crypto ecosystems:

  1. Cypherpunk mountain man vision with local LLMs (Impact + Survival): AI as a “shield” and interface for users. For example, local LLMs (large language models) can automatically audit smart contracts, verify transactions, reduce reliance on centralized frontends, and protect personal digital sovereignty.

  2. Making better markets and governance a reality (Impact + Prosperity): Deep AI involvement in prediction markets and DAO governance. AI can act as an efficient participant, processing large amounts of information to amplify human judgment, solving issues like limited human attention, high decision costs, information overload, and voter apathy.
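To make the "AI as an efficient market participant" idea above concrete, here is a deliberately minimal aggregation sketch: each agent reports a probability backed by a stake, and the market combines them into one price. This is an assumption-laden toy — real prediction markets use order books, AMMs, or scoring rules, not a direct weighted average.

```python
# Toy sketch: aggregating AI agents' probabilistic forecasts,
# weighted by the capital each puts at risk. Illustrative only.

def aggregate_forecasts(forecasts):
    """Stake-weighted average of agent probabilities.

    forecasts: list of (probability, stake) pairs.
    """
    total_stake = sum(stake for _, stake in forecasts)
    if total_stake == 0:
        return 0.5  # no information: fall back to an uninformative prior
    return sum(p * stake for p, stake in forecasts) / total_stake

# Three AI agents with different confidence and capital at risk:
agents = [(0.9, 100), (0.6, 50), (0.2, 50)]
price = aggregate_forecasts(agents)
print(round(price, 2))  # 0.65
```

Weighting by stake is what lets such a market absorb large volumes of machine-generated opinion without drowning in noise: an agent's influence is bounded by what it is willing to lose.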

Previously, we were eager to make Crypto Helping AI happen, while Vitalik stood on the other side. Now we have finally met in the middle — though this seems unrelated to the various tokenization efforts or AI Layer1 projects. Hopefully, looking back at this post in two years, there will be new directions and surprises.
