Google paper that triggered a collapse in global storage stocks sparks academic controversy. Chinese scholars: severely inaccurate and unrepentant, "using our method while deliberately obscuring the similarity."
Reporter|Yue Chupeng
On March 26, a paper from Google Research shook the global storage-chip market, wiping more than $90 billion of market value off major U.S. and South Korean players.
Google’s paper claims that a new algorithm called TurboQuant can compress the memory footprint of the KV cache in large AI models to one-sixth of its original size without sacrificing accuracy.
Just one day later, Gao Jianyang, a postdoctoral researcher at ETH Zurich, posted on a social platform, directly pointing out serious academic problems with the Google paper.
Gao Jianyang said that Google avoided the similarity between the TurboQuant algorithm and RaBitQ—the RaBitQ method he published during his PhD studies at Nanyang Technological University (NTU) in Singapore in 2024—and incorrectly described the theoretical results of RaBitQ, while also deliberately creating an unfair experimental environment.
RaBitQ is a vector quantization algorithm that can ensure reliable search performance even under highly compressed vector data.
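The rotation-then-quantize idea behind this family of methods can be illustrated with a small sketch. This is not RaBitQ itself (RaBitQ's estimator and its error bounds are considerably more refined); it is a generic SimHash-style toy in which vectors share one random rotation, each coordinate is kept as a single sign bit, and similarity is estimated from sign agreement. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256

# One random orthogonal rotation, shared by every vector (QR of a Gaussian).
R, _ = np.linalg.qr(rng.standard_normal((d, d)))

def quantize(x):
    """Rotate, then keep only the sign of each coordinate: 1 bit per dim."""
    return np.sign(R @ (x / np.linalg.norm(x)))

def est_cosine(bx, by):
    """SimHash-style estimate: sign agreement maps back to an angle."""
    agree = np.mean(bx == by)          # fraction of coordinates with equal sign
    return np.cos(np.pi * (1.0 - agree))

x = rng.standard_normal(d)
y = x + 0.3 * rng.standard_normal(d)   # a nearby vector

true_cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
approx_cos = est_cosine(quantize(x), quantize(y))
print(round(true_cos, 2), round(approx_cos, 2))
```

Even with 32x compression (one bit per float32 coordinate), the sign-bit estimate tracks the true cosine similarity closely; the research question in both papers is how to make such estimates provably accurate at a given bit budget.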
Gao Jianyang also said that the TurboQuant team at Google “knows it’s wrong but won’t change.” Before Google’s TurboQuant paper was officially published in April 2025, he had already pointed out the above issues by email, but after being informed, Google still failed to make thorough corrections in the final version.
On March 29, a reporter from National Business Daily (hereinafter NBD) interviewed the authors of the RaBitQ paper, Gao Jianyang and Long Cheng.
RaBitQ is Gao Jianyang’s main work during his PhD studies at Nanyang Technological University in Singapore, while Long Cheng is his PhD supervisor.
At the same time, the reporter also sent an interview email to Google, but as of the time of publication, no response had been received. It is understood that Google Research will present its TurboQuant paper at ICLR 2026 (the 2026 International Conference on Learning Representations) to be held in April.
Gao Jianyang. Image source: provided by the interviewee
NBD: When did you first notice problems with the Google TurboQuant paper?
Gao Jianyang: As early as January 2025, the second author of the TurboQuant paper, Majid Daliri, proactively contacted us, asking for help debugging a Python version he had translated from our RaBitQ C++ code, describing detailed reproduction steps and the error messages. This shows that the TurboQuant team had a thorough understanding of RaBitQ's technical details.
After the TurboQuant paper was released in April 2025, we noticed that its description of RaBitQ contains serious misrepresentations: it describes RaBitQ as grid-based PQ (product quantization on a grid), completely ignoring its core random-rotation step, and labels RaBitQ's theoretical guarantees as "suboptimal" without any derivation or evidence, while the experimental comparisons also feature a clearly unfair design.
Our first reaction was confusion and regret: the similarity between TurboQuant and RaBitQ is clearly identifiable from a technical standpoint, and the other side’s level of understanding of RaBitQ far exceeds that of an ordinary reader. In such circumstances, it’s hard to explain such systematic misrepresentations as mere negligence.
NBD: Before making the issue public, what communication took place between the two teams?
Gao Jianyang: We had multiple rounds of communication, spanning more than a year.
In May 2025, we held detailed technical discussions with Majid Daliri by email about differences in experimental conditions and the optimality of theoretical results. We clarified, point by point, the TurboQuant team’s incorrect interpretation. Majid Daliri clearly stated that he had informed all co-authors of the discussion results.
However, after we requested corrections to the factual errors in the paper, he stopped replying.
In November 2025, we discovered that TurboQuant had submitted to ICLR 2026, and the incorrect content remained unchanged. We then contacted the ICLR 2026 PC Chairs (program committee chairs), but received no response.
After the paper was massively promoted through Google’s official channels in March 2026, we formally emailed all authors again.
The response we received was this: first author Amir Zandieh promised to revise the theoretical description and the experimental conditions, but explicitly refused to revise the discussion of methodological similarity, and said he would only make changes after ICLR 2026 officially ends. The response disappointed us but did not surprise us: clearly, the other side understood the problem, yet chose to make only the smallest possible concession.
NBD: What are the key similarities between TurboQuant and RaBitQ?
Gao Jianyang: The core similarity is that both adopt the crucial design of applying a random rotation (a Johnson-Lindenstrauss transform) to vectors before quantization, and both use the statistical properties of the coordinate distributions after rotation to build distance estimators.
It’s worth noting that in the TurboQuant paper authors’ response on ICLR OpenReview (a commonly used public paper review platform in academia), they described their method this way: “Our implementation is that we first normalize the vectors using their L2 norm, then apply a single random rotation so that the components of these vectors after rotation follow a Beta distribution.” This is highly consistent with RaBitQ’s core mechanism, yet the paper body never clearly explains this connection.
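The distributional claim in that quoted response can be checked numerically. In the following sketch (illustrative, not code from either paper), an L2-normalized vector is passed through independent random rotations; a standard result is that each squared coordinate of the rotated unit vector then follows Beta(1/2, (d-1)/2), whose mean is exactly 1/d, no matter how skewed the input was.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 64, 2000

# A fixed unit vector with all of its mass on one axis -- maximally skewed.
v = np.zeros(d)
v[0] = 1.0

# Apply an independent random rotation each time and record the squared
# first coordinate of the rotated vector.
samples = np.empty(n)
for i in range(n):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix
    samples[i] = (Q @ v)[0] ** 2

# After rotation the vector is uniform on the unit sphere, so each squared
# coordinate follows Beta(1/2, (d-1)/2); its mean should be close to 1/d.
print(abs(samples.mean() - 1.0 / d))
```

The mean check here stands in for a full distributional test; the point is only that random rotation makes coordinate statistics predictable regardless of the input, which is the property both methods' estimators rely on.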
A metaphor may help: suppose one chef is the first to publish a complete recipe for a dish. Later, another chef publishes a dish that uses almost the same core steps, but describes the first chef's dish as "a different method with worse results," without mentioning the link between the two at all.
Without knowing this, readers naturally cannot make a fair judgment.
Long Cheng. Image source: provided by the interviewee
NBD: According to academic norms, how should this kind of relationship be handled?
Long Cheng: Academic norms require that when a new work has a substantial methodological connection to an existing work, it should explicitly cite and positively discuss that connection—including explaining in what respects the new work has advanced and which aspects have been carried over from existing frameworks.
This point is especially important in this case because an ICLR reviewer also independently pointed out in their review comments that “the similarity between RaBitQ and its variants and TurboQuant is that they both use random projection,” and explicitly required more thorough discussion and comparison.
Even the reviewer noticed this connection, yet in the final version, the paper authors not only failed to add discussion, but instead moved the previously incomplete description of RaBitQ from the main text into the appendix. This approach runs counter to the basic requirements of academic norms.
NBD: Why choose to make it public now rather than continue resolving it internally through academic channels?
Long Cheng: We didn’t skip academic channels. We chose to go public when the academic channel process had essentially run its course.
We contacted the paper authors and the ICLR PC Chairs (program committee chairs), and also submitted formal complaints with a complete evidence package to the ICLR General Chairs (conference chairs) and the Code and Ethics Chairs, while also posting public comments on the ICLR OpenReview platform.
But we also have to acknowledge a reality: we are a small university research team, while the other side is Google Research. In terms of resources, influence, and discourse power, the two sides are simply not equal.
Posts about the TurboQuant paper drew tens of millions of views on social media within a short time; no university lab could possibly match that level of dissemination capability.
In such an imbalance, if we continue to stay silent and wait for the internal process, the incorrect narrative will only harden into consensus faster. Going public is one of the few means available to a weaker side to maintain basic academic facts when formal channels respond slowly.
NBD: If the relevant issues are not corrected, what impacts could result?
Long Cheng: First, it will systematically distort the record of academic history, causing later researchers to misjudge the source of methodological evolution, and then build new work on top of a wrong foundation.
Second, it will undermine the incentive mechanism for original research. If a method that has undergone rigorous theoretical derivation and reaches asymptotically optimal error bounds can be repackaged and pushed to the public with tens of millions of exposures, while the original authors do not receive the recognition they deserve, the damage to the academic ecosystem will be long-lasting and far-reaching.
Third, for the field of vector quantization—which is currently in a phase of rapid development and drawing high attention from industry—incorrect attribution of methods will directly affect practitioners’ and researchers’ judgments about technical routes, leading to the wrong allocation of resources.
NBD: Do you think this constitutes an academic disagreement?
Long Cheng: This is beyond the scope of academic disagreement. Academic disagreements usually occur when the two sides have genuine differences in understanding of the technical content.
But in this case, the TurboQuant team’s understanding of RaBitQ’s technical details has been documented extensively. As early as May 2025, we clarified the optimality of the theoretical guarantees point by point via email, and Majid Daliri explicitly stated he had informed all authors. The unequal experimental conditions were also acknowledged by the authors themselves in the emails.
Under the circumstances described above, the relevant errors were never corrected throughout the entire process—from submission, review, acceptance, publication, to large-scale promotion. We are not inclined to make casual qualitative judgments, but we believe that these actions are supported by sufficient facts for the academic community and relevant institutions to independently assess.
Image source: Gao Jianyang’s social media account
NBD: Where do you think the responsibilities lie for large research institutions like Google Research?
Long Cheng: Endorsement from large institutions itself creates an amplification effect. When a paper is promoted through Google’s official channels, its dissemination speed and coverage are simply incomparable to those of ordinary academic papers.
At this scale, once an incorrect narrative in the paper spreads, the cost of correction increases many times. I believe large institutions have the responsibility to ensure that descriptions involving other people’s work undergo basic factual verification before the paper is massively promoted externally, rather than shifting this responsibility entirely onto peer review.
At the same time, when external researchers raise evidence-based objections, large institutions should also have formal internal mechanisms to handle them rather than remain silent. This is both a responsibility to the academic community and a form of protecting their own credibility.
NBD: Will you take further actions next?
Long Cheng: Next, we plan to publish a detailed technical report on arXiv, systematically laying out the methodological relationship between RaBitQ and TurboQuant and giving point-by-point technical explanations of the three issues, for the academic community's reference.
We are also considering raising the matter through further channels with relevant bodies such as the Google Research Escalation Council. Our goal throughout is to ensure that the public academic record accurately reflects the true relationships among the methods, not to manufacture confrontation.
Responsible editor: Chang Fuqiang