How much will large models' intelligence improve through iteration over the next three years?
Breaking it down, there are actually two variables driving this: the exponential growth of hardware computing power, combined with the scaling laws of the models themselves.
On the chip side, compute capacity roughly quadruples every 18 months, which is the pace the industry generally assumes. On the training-data side, current scaling laws suggest the available high-quality data can support about 2-3 more such iteration cycles.
Compounding that, two cycles of quadrupling works out to 4 × 4 = 16, so by 2029 the intelligence level of large models would be roughly 16 times today's. That number sounds a bit crazy.
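To make the back-of-the-envelope math explicit, here is a minimal sketch (my own illustration, not part of the original post) that just compounds a per-cycle compute multiplier; the 4x-per-18-months figure and the 2-3 cycle horizon are the post's assumptions, not measured data.

```python
# Back-of-the-envelope sketch of the compounding argument.
# Assumptions (from the post, not hard data): compute quadruples every
# 18 months, and high-quality data supports roughly 2-3 such cycles.

def compute_multiple(per_cycle_gain: float, cycles: int) -> float:
    """Total compute growth after `cycles` iterations of `per_cycle_gain`."""
    return per_cycle_gain ** cycles

if __name__ == "__main__":
    per_cycle_gain = 4.0        # ~4x per 18-month cycle (post's assumption)
    for cycles in (2, 3):       # 2 cycles = 3 years, 3 cycles = 4.5 years
        years = cycles * 1.5
        print(f"{cycles} cycles (~{years:.1f} years): "
              f"{compute_multiple(per_cycle_gain, cycles):.0f}x compute")
    # 2 cycles -> 16x (the "16x by 2029" figure); 3 cycles -> 64x
```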
For emerging fields like Web3 and AI infrastructure, this kind of capability boost could be key to accelerating application deployment. But at the same time, it also means we need to think earlier about ethical issues and risk prevention.
What’s your view? Are you excited about these possibilities, or a bit worried?
AirdropHunter9000
· 6h ago
16x? Nonsense, the data bottleneck has been holding things back for a while. The real number will be much more conservative than you think, haha.
People brag about 16x every day, but what's the result? Training data is already scraping the bottom. Do they really think internet text is endless?
Web3 might have some potential. If distributed model inference is done well, the possibilities are still there.
But as for ethics, we've been talking about it for years. How many actually act on it?
FloorSweeper
· 12h ago
16x? Will we all be unemployed then, haha
AirdropLicker
· 12h ago
16x? Data bottleneck is the real killer; high-quality data is simply not enough.
SillyWhale
· 13h ago
16x? That's nonsense. If the data is all used up, how can we continue iterating?
Ser_Liquidated
· 13h ago
16x? What about the data bottleneck? It doesn't seem to have been factored in.