Visa's Head of Crypto: Eight Evolutionary Trends of Crypto and AI by 2026

Author: Cuy Sheffield, Vice President and Head of Cryptocurrency at Visa
Compiled by: Saoirse, Foresight News
As cryptocurrencies and AI mature, the most significant shift in both fields is no longer what is theoretically feasible but what can be relied on in practice. Both technologies have crossed critical thresholds: performance has improved substantially, yet adoption remains uneven. The core trends heading into 2026 stem from this gap between performance and adoption.
Below are several themes I have followed for a long time, along with early thoughts on where these technologies are heading, where value will accumulate, and why the ultimate winners may look very different from the industry's pioneers.
Theme 1: Cryptocurrencies are transitioning from speculative assets to high-quality technology
The first decade of cryptocurrency was defined by a speculative advantage: the market is global, always on, and highly open, and its intense volatility made crypto trading more dynamic and engaging than traditional financial markets.
However, the underlying technology was not ready for mainstream use: early blockchains were slow, expensive, and unstable. Outside of speculative scenarios, crypto rarely beat existing systems on cost, speed, or convenience.
Today, this imbalance is beginning to shift. Blockchain technology has become faster, more economical, and more reliable. The most attractive use cases for cryptocurrencies are no longer speculation but infrastructure—especially settlement and payment processes. As cryptocurrencies become more mature, the core role of speculation will gradually weaken: it will not disappear entirely but will no longer be the primary source of value.
Theme 2: Stablecoins are crypto's clearest proof of practical utility
Unlike earlier crypto narratives, the success of stablecoins rests on concrete, objective criteria: in specific scenarios they are faster, cheaper, and more widely accessible than traditional payment rails, and they integrate cleanly into modern software systems.
Stablecoins do not require users to buy into crypto as an ideology; they are often adopted invisibly inside existing products and workflows. That has let institutions and companies that once dismissed the crypto ecosystem as too volatile and opaque finally see its value clearly.
In short, stablecoins have re-anchored crypto on utility rather than speculation, setting a clear benchmark for how the technology can succeed in real-world applications.
Theme 3: As crypto becomes infrastructure, distribution matters more than technological novelty
In the past, when cryptocurrencies served mainly as speculative instruments, distribution was endogenous: a new token could attract liquidity and attention simply by existing.
But as crypto becomes infrastructure, its use is shifting from the market level to the product level: embedded in payment flows, platforms, and enterprise systems, it is often invisible to end users.
This shift favors two kinds of players: companies with existing distribution and trusted customer relationships, and institutions with regulatory licenses, compliance programs, and risk-management infrastructure. An innovative protocol alone is no longer enough to drive large-scale adoption.
Theme 4: AI agents have real utility, and their influence is spreading beyond coding
The practical value of AI agents is increasingly clear, but their role is often misunderstood: the most successful agents are not autonomous decision-makers but tools that reduce coordination costs within workflows.
So far this has been most visible in software development, where AI tools accelerate coding, debugging, refactoring, and environment setup. Recently, that tool value has been expanding rapidly into other fields.
Take tools like Claude Code: although positioned as developer tools, their rapid adoption reflects a deeper trend: agent systems are becoming interfaces for knowledge work, not just programming. Users are applying agent-driven workflows to research, analysis, writing, planning, data processing, and operations, tasks closer to general professional work than to traditional coding.
The real key is not coding itself but the underlying pattern (sketched in code after this list):
- Users delegate goal intent, not specific steps;
- Agents manage context across files, tools, and tasks;
- Work shifts from linear progression to an iterative, dialog-based loop.
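To make that pattern concrete, here is a minimal Python sketch. Every name in it (`llm_call`, `search`, `run_agent`) is hypothetical, chosen to illustrate goal delegation, context management, and bounded iteration, not any particular product's API.

```python
# A minimal, hypothetical sketch of the core agent pattern: the user delegates
# a goal; the agent manages context and iterates with tools until finished.

def llm_call(context: str) -> dict:
    """Placeholder for a model call; assumed to return a structured action such as
    {"type": "tool", "tool": "search", "input": "..."} or {"type": "finish", "answer": "..."}."""
    raise NotImplementedError  # swap in a real model client

def search(query: str) -> str:
    return f"(stub results for {query!r})"  # stand-in tool for illustration

TOOLS = {"search": search}

def run_agent(goal: str, max_steps: int = 10) -> str:
    context = [f"GOAL: {goal}"]                 # the user states intent, not steps
    for _ in range(max_steps):                  # bounded loop: scoped, supervised autonomy
        action = llm_call("\n".join(context))   # the model proposes the next step
        if action["type"] == "finish":
            return action["answer"]
        result = TOOLS[action["tool"]](action["input"])  # the agent chooses and runs tools
        context.append(f"{action['tool']} -> {result}")  # fold the result back into context
    return "stopped: step budget exhausted"     # fail closed rather than run forever
```

The shape of the loop is the point: intent goes in, iteration happens inside, and a hard step budget stands in for open-ended autonomy.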
Across knowledge work, agents excel at gathering context, executing bounded tasks, reducing handoffs, and accelerating iteration; they still fall short on open-ended judgment, accountability, and error correction.
That is why most agents in production today still need scoped permissions, supervision, and embedding within larger systems rather than full autonomy. Their real value lies in restructuring knowledge workflows, not in replacing labor or achieving complete autonomy.
Theme 5: AI's bottleneck has shifted from intelligence to trustworthiness
AI models have grown rapidly more capable, but the limiting factor is no longer fluency or reasoning ability alone; it is reliability inside real systems.
Production environments cannot tolerate three problems: hallucinations (fabricated information), inconsistent outputs, and opaque failure modes. Once AI touches customer service, financial transactions, or compliance, “roughly correct” is no longer acceptable.
Building trust requires four foundational capabilities: traceable results, durable memory, verifiability, and the ability to surface uncertainty proactively. Until these mature, AI autonomy must stay limited; the sketch below shows one way to picture them.
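As one way to picture these capabilities in code (an illustration rather than the author's design; every field name here is an assumption), they map naturally onto a structured answer type plus an autonomy gate:

```python
# Hypothetical sketch: an agent answer that carries the four trust capabilities,
# plus a gate that limits autonomy until all of them are present.
from dataclasses import dataclass, field

@dataclass
class TrustedAnswer:
    answer: str
    sources: list[str] = field(default_factory=list)        # traceability: where each claim came from
    memory_keys: list[str] = field(default_factory=list)    # memory: prior facts the answer relied on
    checks_passed: list[str] = field(default_factory=list)  # verifiability: validations that actually ran
    confidence: float = 0.0                                 # uncertainty: exposed, never hidden

def gate(result: TrustedAnswer, threshold: float = 0.9) -> TrustedAnswer | None:
    """Act automatically only when the answer is traceable, verified, and confident;
    otherwise return None so the caller escalates to human review."""
    if result.sources and result.checks_passed and result.confidence >= threshold:
        return result
    return None
```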
Theme 6: Systems engineering determines whether AI reaches production
Successful AI products treat models as components, not finished products: their reliability comes from architectural design, not prompt engineering.
That architecture spans state management, control flow, evaluation and monitoring, and fault handling and recovery, which is why AI development increasingly resembles traditional software engineering rather than frontier research. The sketch below shows the idea.
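A minimal sketch of the model-as-component idea, with hypothetical names (`call_model`, `is_valid`): the model call sits behind validation, bounded retries, logging, and a deterministic fallback, like any other unreliable dependency.

```python
# Hypothetical sketch: ordinary systems engineering wrapped around a model call—
# an evaluation gate, bounded retries, monitoring via logs, and a recovery path.
import logging

logger = logging.getLogger("ai_component")

def call_model(prompt: str) -> str:
    """Placeholder for any model client; assumed to sometimes fail or return junk."""
    raise NotImplementedError

def is_valid(output: str) -> bool:
    """Domain-specific evaluation, e.g. schema or policy checks."""
    return bool(output.strip())

def reliable_generate(prompt: str, retries: int = 3) -> str:
    for attempt in range(1, retries + 1):
        try:
            output = call_model(prompt)
            if is_valid(output):                 # evaluation gate, not blind trust
                return output
            logger.warning("invalid output on attempt %d", attempt)
        except Exception:                        # fault handling: the model is a flaky dependency
            logger.exception("model call failed on attempt %d", attempt)
    return "FALLBACK: route to human review"     # recovery path keeps the system predictable
```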
Long-term value will favor two groups: system builders and platform owners controlling workflows and distribution channels.
As agent tools expand from coding into research, writing, analysis, and operations, systems engineering will matter even more: knowledge work is complex, stateful, and context-rich, so agents that can reliably manage memory, tools, and iteration are worth more than agents that can merely generate output.
Theme 7: The tension between open models and centralized control raises unresolved governance questions
As AI systems grow more capable and more deeply integrated into the economy, the question of who owns and controls the most powerful models is becoming a central conflict.
On one hand, frontier AI research remains capital-intensive and is shaped by access to compute, regulation, and geopolitics, pushing it toward concentration; on the other, open-source models and tools keep advancing through broad experimentation and easy deployment.
This coexistence of centralization and openness leaves a series of questions unresolved: dependency risk, auditability, transparency, long-term bargaining power, and control over critical infrastructure. The most likely outcome is a hybrid: frontier models push capability forward, while open or semi-open systems embed those capabilities into widely distributed software.
Theme 8: Programmable money enables new agent payment flows
As AI systems take on roles within workflows, their need for economic interaction grows: paying for services, calling APIs, compensating other agents, or settling usage-based fees.
That need has renewed attention on stablecoins as machine-native money: programmable, auditable, and transferable without human intervention.
Take protocols like x402: still early and experimental, but the direction is clear: payments will flow through APIs rather than checkout pages, letting software agents make continuous, fine-grained transactions, as in the sketch below.
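For illustration, here is the general shape of an API-native payment round trip built on HTTP 402 (“Payment Required”). It is a sketch of the pattern only: the header name, the quote format, and the `pay_stablecoin` helper are assumptions, not the actual x402 specification.

```python
# Hypothetical sketch of API-native payments: the client requests a paid endpoint,
# receives HTTP 402 with a price quote, pays in a stablecoin, and retries with proof.
import requests  # third-party: pip install requests

def pay_stablecoin(recipient: str, amount: str) -> str:
    """Placeholder for an on-chain stablecoin transfer; returns a payment proof."""
    raise NotImplementedError

def fetch_paid_resource(url: str) -> bytes:
    resp = requests.get(url)
    if resp.status_code == 402:                  # the server quotes a price first
        quote = resp.json()                      # assumed shape: {"recipient": ..., "amount": ...}
        proof = pay_stablecoin(quote["recipient"], quote["amount"])
        resp = requests.get(url, headers={"X-Payment-Proof": proof})  # hypothetical header
    resp.raise_for_status()
    return resp.content
```

Because the entire exchange is machine-readable, an agent can pay per request at arbitrarily fine granularity, with no checkout page required.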
Today the field is still immature: transaction sizes are small, the user experience is rough, and security and permission systems are still being built. But infrastructure innovation usually starts with exactly this kind of early exploration.
The significance is not autonomy for its own sake: when software can execute transactions programmatically, new economic behaviors become possible.
Conclusion
In both crypto and AI, the early stage rewarded eye-catching concepts and technological novelty; in the next phase, reliability, governance, and distribution will be the decisive competitive dimensions.
Today the technology itself is no longer the main constraint; embedding it into real systems is.
In my view, 2026 will be defined not by a breakthrough in any single technology but by steady infrastructure accumulation: infrastructure that operates quietly while reshaping how value flows and how work gets done.