Jensen Huang: More tokens and engineers are needed. This is the kickoff ceremony of the AI revolution.
Feixiang Network News (by Sun Yingxin): The same black leather jacket, the same spirit. It's still the old Huang, full of energy and talking up a storm.
On March 17, at the GTC 2026 conference in San Jose, California, Nvidia founder and CEO Jensen Huang delivered a much-anticipated keynote address. Attendees had to line up to hear him speak, and crowds filled the venue. The speech included not only Huang's review of two decades of technical groundwork, but also his fervent declaration of a blueprint for AI's future, as if he were preparing to place the crown of the Token era on someone's head.
“Welcome to GTC. I just want to remind you that this is a technology conference… We’re going to talk about technology. We’re going to talk about platforms.” Next, Huang launched into a two-hour uninterrupted address, declaring that the AI inference era has arrived in full. Huang pointed out that artificial intelligence has moved beyond the training stage into a new era of inference and action. In this stage, AI is no longer just about understanding the world—it begins to think, plan, and execute tasks.
Trillion-Dollar Engine: Token Factories and the Compute Cost Revolution
Whenever something big is coming, out comes the leather jacket. At this conference, Huang dove deep into the concepts of trillion-dollar compute and Token factories. He boldly predicted that global AI compute demand will reach at least $1 trillion in 2027, far exceeding the earlier forecast of $500 billion, marking the industry's formal entry into a phase of million-fold growth. Coupled with Huang's earlier claim that compute demand is growing at a pace comparable to Moore's Law, it is reasonable to believe this figure is merely the beginning.
Huang put forward a disruptive viewpoint: data centers are shifting from traditional storage facilities to Token production bases. “Tokens are the new main commodity; inference performance determines revenue.” He believes that under this new paradigm, data center output will no longer be static data, but dynamically generated Tokens. The number of Tokens each watt of electricity can generate directly determines a company’s revenue potential.
To illustrate the point, Huang shared an example from his own lineup, revealing the striking performance of the new-generation Rubin platform: roughly a tenfold increase in top-tier performance per Token, plus a leap in efficiency. At the scale of a 1GW data center, the Token generation rate will increase 350-fold within two years.
Huang then took the opportunity to advertise, emphasizing the importance of architectural efficiency: "If you pick the wrong architecture, even if it's free, it still isn't worth it." He explained that building a gigawatt-class data center requires about $40 billion, and only by choosing the best architecture (whose architecture, you can guess) can a company ensure that this massive investment translates into the most competitive Token production cost.
Huang can say this with confidence: back in the day, he personally boxed up the first DGX-1, worth $300,000, and shipped it to the then little-known OpenAI office, delivering it to Ilya Sutskever's team. That machine later became the compute cradle of the GPT series.
And now, nine years later, the same 1 PFLOPS of compute has fallen in cost from hundreds of thousands of dollars to a few thousand, and has shrunk from server-sized to book-sized. Even Huang himself couldn't help remarking: "The rate at which compute costs are dropping is something no technology in human history can compare to."
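As a back-of-the-envelope check on that claim, a minimal sketch follows. It assumes "a few thousand dollars" means roughly $3,000; both figures are the article's round numbers, not official pricing:

```python
# Implied rate of decline in the cost of 1 PFLOPS of compute,
# using the article's rough figures (assumed, not official pricing).
start_cost = 300_000  # USD for ~1 PFLOPS (DGX-1 era, per the article)
end_cost = 3_000      # USD, assuming "a few thousand dollars" ~ $3,000
years = 9

total_drop = start_cost / end_cost        # overall cost-reduction factor
annual_drop = total_drop ** (1 / years)   # implied yearly reduction factor

print(f"~{total_drop:.0f}x cheaper overall, ~{annual_drop:.2f}x cheaper per year")
# prints: ~100x cheaper overall, ~1.67x cheaper per year
```

Under these assumptions, costs fell about 100-fold in nine years, i.e. compute got roughly 1.7 times cheaper every year, a pace that indeed outruns classic Moore's Law cost curves.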
Agent Storm: The Rise of OpenClaw and a New Workplace Paradigm
Earlier we mentioned Huang's fervent declaration, as if he wanted to place the crown of the Token era on someone's head. Here that scene became concrete. Faced with today's hottest AI topics, he didn't hold back his praise for OpenClaw, throwing out a string of revolutionary takes on AI agents and other developments shaking the industry.
Huang is highly optimistic about OpenClaw, calling it “the operating system for Agent computing,” and also saying “OpenClaw has surpassed the achievements of Linux over 30 years in just a few weeks.” He noted that OpenClaw isn’t only an open-source project—it’s also the core hub connecting large models, tools, the file system, and scheduling capabilities. It enables every AI agent to have independent resource management and task-execution abilities, just like a personal computer running Windows.
After that, Huang turned to SaaS. Building on the adoption of OpenClaw, he put forward a bold "SaaS extinction theory": "All SaaS will become AaaS (agent as a service)." In the future, he said, software companies will no longer just provide tools for people to use; they will provide agents that can autonomously execute tasks. Every company must set its own OpenClaw strategy, one as crucial as a Linux or HTTP strategy was back then.
To stop there would be to underestimate Huang's knack for striking a nerve. Next, he threw out an even more forward-looking view on changes to engineers' compensation, describing the future workplace like this: "Engineers will get an annual salary plus a Token budget; Tokens become the core efficiency resource." In other words, engineers' income will consist of two parts: base pay and a Token budget. Engineers who use their Token budget efficiently will see their productivity multiplied by ten. Token budgets have even become a new hiring tool in Silicon Valley: "How many Tokens will my tasks come with?" will become the most important question for job seekers.
He spoke from personal experience here as well, recounting an amusing earlier incident: an AI-generated fake Jensen Huang talked up fake cryptocurrencies during the GTC livestream, and its view count astonishingly hit 100,000, five times that of the official livestream. He joked afterward: "Even my AI alter ego is fighting me for compute buzz."
Ecosystem Grand Plan: The CUDA Flywheel and "Optics In, Copper Out"
As always, Huang was dressed in his leather coat, which also shows he’s a nostalgic person. At the conference, Huang spent more than 20 minutes reviewing Nvidia’s ecosystem strategy.
“We need to create markets for the future, not serve existing markets.” Huang traced the journey of CUDA all the way—from being mocked as a scientist’s toy to becoming a foundation for AI today.
On the hardware roadmap, Huang clearly pointed to the "optics in, copper out" trend: "Copper is already dead; light is the future of compute." In the latest Rubin platform, CPO (co-packaged optics), a technology that sharply reduces per-port power consumption, solves the energy bottleneck of high-speed interconnects. Through the COUPE process co-developed with TSMC, optical interfaces are embedded directly into the chip, delivering a revolutionary boost in energy efficiency.
All these achievements point toward a single goal—an AI factory. Huang reiterated his core definition: “Every company must build an AI factory, and competitiveness should be measured by AI-factory efficiency.” He believes that in the future, competition among enterprises will no longer be simply about technology or talent, but about who can generate Tokens at lower cost and higher efficiency.
"Inference is already reality; robots are being born for AI. And right now, shout out what you need: more Tokens! AI engineers! All hands, stand by!" "This is not just a conference; it is the kickoff ceremony of an industrial revolution in artificial intelligence."