Tongyi brings Vibe Coding to all modalities; Qwen3.5-Omni claims 215 SOTA results
According to monitoring by 1M AI News, Tongyi Laboratory has released its multimodal model Qwen3.5-Omni, which accepts text, image, audio, and audio-video inputs and can generate fine-grained audio-video captions with timestamps. Tongyi claims that Qwen3.5-Omni-Plus has achieved 215 SOTA results on tasks including audio and audio-video analysis, reasoning, dialogue, and translation, and that its capabilities surpass Gemini-3.1-Pro.
The most notable addition this time isn't the leaderboard numbers but the "naturally emerging audio-visual Vibe Coding capability." Tongyi says the model was not specifically trained for this, yet it can already generate runnable code directly from audio-video instructions. Tongyi also claims the model supports a 256K context window, recognizes 113 languages, can process up to 10 hours of audio or 1 hour of video, and natively supports WebSearch and complex Function Calls.
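The article does not show what a function call against the model looks like. The sketch below is a hypothetical illustration, assuming the model is served through Bailian's existing OpenAI-compatible endpoint the way earlier Qwen models are; the model id qwen3.5-omni-plus and the get_weather tool are placeholders, not confirmed identifiers.

```python
# Hypothetical sketch: defining a tool for function calling via Bailian's
# OpenAI-compatible endpoint. Model id and tool schema are illustrative
# placeholders, not confirmed by the article.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",  # Bailian/DashScope API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen3.5-omni-plus",  # placeholder id based on the article
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# If the model decides to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```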
Qwen3.5-Omni retains the Thinker-Talker split architecture, with both components upgraded to a Hybrid-Attention MoE design. Tongyi offers three sizes, Plus, Flash, and Light, through Alibaba Cloud's Bailian platform, and has launched a real-time variant, Qwen3.5-Omni-Plus-Realtime.
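Since the article says the models are available through Bailian, a developer request for the advertised audio-video captioning might look like the following. This is a minimal sketch assuming the same OpenAI-compatible, streaming interface that existing Qwen-Omni models use on Bailian; the model id and video URL are placeholders.

```python
# Minimal sketch of an audio-video captioning request, assuming Bailian's
# OpenAI-compatible endpoint accepts video_url content parts as its earlier
# Qwen-Omni models do. Model id and URL are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

completion = client.chat.completions.create(
    model="qwen3.5-omni-plus",  # placeholder id based on the article
    messages=[{
        "role": "user",
        "content": [
            {"type": "video_url",
             "video_url": {"url": "https://example.com/demo.mp4"}},
            {"type": "text",
             "text": "Generate fine-grained captions with timestamps."},
        ],
    }],
    stream=True,  # earlier Qwen-Omni endpoints return output as a stream
)

# Print the caption text as it streams back.
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```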