A16z Unveils AI Workstation Powered by NVIDIA Blackwell GPUs
In an era dominated by foundation models and massive datasets, developers and researchers increasingly face bottlenecks in compute access. While the cloud provides scalability, concerns around privacy, latency, and cost have pushed many builders to seek high-performance local alternatives. Venture capital firm A16z has now revealed its custom-built AI workstation, designed to deliver enterprise-grade performance directly on-premise—powered by NVIDIA’s latest Blackwell GPUs.
Full Bandwidth for Demanding AI Workloads
At the core of the system are four NVIDIA RTX 6000 Pro Blackwell Max-Q GPUs, each equipped with 96GB of VRAM for a combined 384GB. Unlike typical multi-GPU workstations that split bandwidth across cards, every GPU here connects through its own dedicated PCIe 5.0 x16 link. This gives each card full CPU-to-GPU bandwidth, so host-to-device transfers don't become the bottleneck during model training and inference.
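As a rough consistency check on those figures, the numbers work out as follows. The per-lane rate and encoding overhead below are standard PCIe 5.0 characteristics, not from the article:

```python
# Back-of-envelope check of the quoted GPU specs.
# PCIe 5.0 signals at 32 GT/s per lane with 128b/130b encoding,
# i.e. roughly 3.94 GB/s of usable bandwidth per lane, per direction.
GPUS = 4
VRAM_PER_GPU_GB = 96
LANES_PER_GPU = 16
GBPS_PER_LANE = 32 * (128 / 130) / 8  # ~3.94 GB/s, one direction

total_vram_gb = GPUS * VRAM_PER_GPU_GB          # 384 GB of pooled VRAM
per_gpu_bw_gbps = LANES_PER_GPU * GBPS_PER_LANE  # ~63 GB/s per card

print(f"Total VRAM: {total_vram_gb} GB")
print(f"Per-GPU PCIe 5.0 x16 bandwidth: {per_gpu_bw_gbps:.1f} GB/s each way")
```

Because each card gets its own x16 link, that ~63 GB/s is not shared: all four GPUs can stream data from the host simultaneously at full rate.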
The system is paired with AMD’s Ryzen Threadripper PRO 7975WX, a 32-core, 64-thread CPU optimized for heavy parallel workloads. Together, the CPU and GPUs create a balanced environment for training large language models, fine-tuning specialized systems, and running multimodal AI applications.
Storage and Memory Built for Scale
To keep up with the GPU throughput, A16z’s workstation includes four 2TB PCIe 5.0 NVMe SSDs, delivering nearly 60GB/s aggregate bandwidth in RAID 0. The machine also comes with 256GB of ECC DDR5 memory, scalable up to 2TB across eight channels.
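The storage math is consistent with high-end PCIe 5.0 NVMe drives. A quick sketch, assuming roughly 15 GB/s sequential read per drive (a typical figure for this drive class, not stated in the article):

```python
# Sanity-check the quoted RAID 0 storage figures.
# In RAID 0, drives are striped: capacity and sequential throughput
# simply add together, with no redundancy.
DRIVES = 4
DRIVE_TB = 2
PER_DRIVE_GBPS = 15  # assumed sequential read for a PCIe 5.0 NVMe SSD

capacity_tb = DRIVES * DRIVE_TB            # 8 TB raw, no fault tolerance
aggregate_gbps = DRIVES * PER_DRIVE_GBPS   # ~60 GB/s, matching the article

print(f"RAID 0 capacity: {capacity_tb} TB")
print(f"Aggregate sequential read: ~{aggregate_gbps} GB/s")
```

Note the trade-off: RAID 0 maximizes throughput for scratch datasets and checkpoints, but a single drive failure loses the whole array, so it suits working storage rather than archival use.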
This combination accelerates large-dataset processing, and the workstation also integrates NVIDIA GPUDirect Storage, which lets data flow directly from the SSDs into GPU memory. By skipping the usual bounce buffer in CPU system memory, this path dramatically reduces latency for workloads that depend on rapid data movement.
Energy Efficiency and Mobility
Despite its raw power, the workstation is surprisingly energy-conscious. It draws a maximum of 1650W and can run from a standard 15-amp outlet. A liquid cooling system stabilizes the CPU during extended training sessions, while the case includes built-in wheels, making the unit transportable across labs and offices.
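The power claim checks out arithmetically. A minimal sketch, assuming a North American 120V/15A circuit (the voltage is an assumption; both the 1650W figure and the 15-amp outlet come from the article):

```python
# Does a 1650 W maximum draw fit a standard 15 A outlet?
VOLTS = 120          # assumed North American mains voltage
AMPS = 15            # standard residential/office circuit breaker
MAX_DRAW_W = 1650    # workstation's stated maximum draw

outlet_capacity_w = VOLTS * AMPS          # 1800 W peak circuit capacity
headroom_w = outlet_capacity_w - MAX_DRAW_W

print(f"Circuit capacity: {outlet_capacity_w} W, headroom: {headroom_w} W")
```

The margin is thin (150W of headroom at peak draw), which suggests the machine should get a circuit to itself rather than share one with other equipment.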
Unlocking Local AI Potential
The A16z workstation is positioned as a flexible tool for both researchers and startups. It enables teams to train, fine-tune, and deploy models locally without handing sensitive data to the cloud. Beyond large language models, the machine supports multimodal AI, enabling simultaneous workloads across text, video, and image data.
By combining enterprise-class GPU power, vast storage bandwidth, and practical design, A16z’s AI workstation offers a compelling vision of local compute infrastructure at a time when demand for AI resources has never been greater.