Gonka strengthens the decentralized AI network with node updates to v0.2.9
The Gonka decentralized computing network has undergone a key transformation in recent days. Participating nodes have implemented a new verification standard that fundamentally changes the operation of the entire infrastructure. The mainnet update to version v0.2.9, confirmed through on-chain governance voting and deployed at block height 2451000, marks a breakthrough for decentralized AI processing.
Breakthrough Change: PoC v2 as the New Node Standard
The most significant change in this update is the complete transition to the PoC v2 mechanism for weight allocation. The previous PoC logic has been entirely phased out, and network nodes now operate under a unified verification model. This dramatically reduces the heterogeneous computational noise that previously hindered the validation of contributions.
PoC confirmation has become the authoritative source of network results. ML nodes running the Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 model with images compatible with the new standard can now participate in weight calculations under uniform conditions. This unification strengthens both the verifiability of each computational contribution and the confidence placed in it, yielding a far more stable infrastructure for decentralized inference and AI model training.
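The article does not describe the PoC v2 algorithm itself, but the underlying idea of weighting participants by verified compute can be illustrated with a minimal sketch. Everything below (the names, the data structure, and the simple proportional rule) is a hypothetical simplification for illustration, not Gonka's actual implementation.

```python
# Illustrative sketch of proportional weight allocation from verified compute.
# All names and the allocation rule are hypothetical; PoC v2 itself is not
# specified in the article.
from dataclasses import dataclass


@dataclass
class NodeContribution:
    node_id: str
    verified_compute: float   # compute units confirmed by PoC-style verification
    passed_verification: bool


def allocate_weights(contributions: list[NodeContribution]) -> dict[str, float]:
    """Give each node a weight proportional to its verified compute.

    Nodes that fail verification get zero weight, so only confirmed
    contributions influence the epoch's weight distribution.
    """
    valid = [c for c in contributions if c.passed_verification]
    total = sum(c.verified_compute for c in valid)
    weights = {c.node_id: 0.0 for c in contributions}
    if total == 0:
        return weights
    for c in valid:
        weights[c.node_id] = c.verified_compute / total
    return weights


# Example: three nodes, one of which fails verification and is excluded.
epoch = [
    NodeContribution("node-a", 120.0, True),
    NodeContribution("node-b", 80.0, True),
    NodeContribution("node-c", 50.0, False),
]
print(allocate_weights(epoch))  # {'node-a': 0.6, 'node-b': 0.4, 'node-c': 0.0}
```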
The transition from Epoch 158 to 159 marks the symbolic start of the first full operational phase after PoC v2 activation, the point at which the entire network of nodes runs under the new regime.
Explosive Growth in Infrastructure Performance
Network performance has reached impressive levels in recent weeks. According to GonkaScan data, the total computational power of nodes is approaching the equivalent of 14,000 H100 units — a key industry benchmark for AI calculations. This represents a dramatic increase compared to December 2025, when Bitfury announced a $50 million investment in infrastructure with only 6,000 H100 equivalents. The monthly growth rate of computational power is about 52%, placing Gonka at the forefront of similar decentralized computing networks.
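As a rough consistency check on these figures (an illustrative calculation only; the exact interval between the two data points is an assumption), about two months of 52% month-over-month growth compound 6,000 H100 equivalents to roughly 13,900, in line with the reported figure of nearly 14,000.

```python
# Back-of-the-envelope check of the growth figures cited above.
# Assumption (not stated in the article): roughly two monthly compounding
# periods separate the 6,000-equivalent baseline from the current figure.
baseline = 6_000        # H100 equivalents at the December 2025 announcement
monthly_growth = 0.52   # ~52% month-over-month growth cited above
months = 2              # hypothetical number of elapsed periods

projected = baseline * (1 + monthly_growth) ** months
print(f"{projected:,.0f} H100 equivalents")  # ~13,862, close to the ~14,000 reported
```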
The resource structure of the nodes shows a clear focus on high-end hardware. NVIDIA H100, H200, and A100 cards account for over 80% of total network computational capacity. This dominance of top-tier GPUs underscores Gonka’s advantage in aggregating and scheduling high-performance computing resources and provides the throughput that advanced AI models require.
Global Expansion of Nodes Creates a New Era of Computing
The geographic reach of the node infrastructure offers a new perspective on the security and resilience of decentralized processing. Nodes operating within the Gonka network span approximately 20 countries and regions across Europe, Asia, the Middle East, and North America. This geographic diversification is a fundamental advantage of the decentralized model: no single point of failure, mitigation of geopolitical risks, and natural load balancing.
Distributing nodes on this scale means Gonka’s infrastructure is evolving toward a compute cluster comparable in scale to national data centers. Unlike traditional deployments, however, the nodes are spread globally, ensuring service continuity and resilience against local disruptions. This approach addresses a future challenge: providing reliable access to computational power for decentralized AI training and inference without dependence on any single technology operator.
The Gonka node update to v0.2.9 with the new PoC v2 mechanism marks a turning point for the entire decentralized processing ecosystem. By standardizing node operational protocols, dramatically increasing performance, and expanding infrastructure globally, the network moves closer to the vision of a permanently decentralized alternative to traditional AI data centers.