Amazon (AMZN.US) sends a strategic signal by cutting robotics jobs: a $200 billion all-in bet on AI computing power, with self-developed AI chips at the core of cost reduction


Amazon (AMZN.US), the leading U.S. e-commerce and cloud computing company, is laying off employees in its strategically important robotics division. Some Wall Street analysts believe that this move, combined with Amazon's recently announced large-scale push to develop and iterate its own AI chips (the Trainium and Inferentia families of AI ASICs), signals that the e-commerce and cloud computing giant is pursuing broader cost cuts and shifting its spending focus squarely to AI computing infrastructure. Meanwhile, Amazon is relying ever more heavily on automation systems to support its fulfillment network.

According to media reports citing informed sources, this week’s layoffs have affected “certain robotics positions,” but the company is still actively hiring and investing in “multiple strategic areas.”

This latest round of layoffs, which brings the total number of corporate positions Amazon has cut since 2022 to 57,000, comes as the company ramps up major investments in AI, data centers, and humanoid robotics to maintain its position in the AI race and the broader push toward physical AI.

Amazon Kicks Off an AI Cost Revolution! Aiming for Control Over Training and Inference

Amazon's actions do not signal a lack of commitment to its robotics business and projects; rather, the company is trimming robotics projects and positions with longer payback cycles while channeling more resources into AWS cloud computing capacity, AI data centers, and its own AI ASIC chip systems. What Amazon seeks is "co-design of models and chips," taking control of the cost structure of training and inference rather than remaining beholden to external GPU pricing over the long term.

Undoubtedly, the signs are accumulating. Anthropic, dubbed "OpenAI's rival," plans to invest billions of dollars to acquire one million TPU chips; Meta, Facebook's parent company, is reportedly considering spending billions on Google's TPU-based AI computing infrastructure for its massive data center build-out in late 2026 or 2027; and Amazon has announced its attempt to develop large AI models using Trainium and Inferentia. As cloud computing giants launch this "AI computing cost revolution" to drive the scale penetration of AI ASICs, market concerns about Nvidia's growth prospects appear well-founded.

On one hand, the company is cutting a relatively small number of positions on its robotics team; on the other, it has set its 2026 capital expenditure target at roughly $200 billion, directed primarily toward AWS's core cloud computing systems and massive AI workloads. Meanwhile, AWS continues to advance self-developed AI chips such as Trainium and Inferentia, while Amazon's operational network has deployed more than one million robots and uses generative AI models such as DeepFleet to improve robot scheduling efficiency.

In the company’s recent earnings call, Amazon’s CEO Andy Jassy confirmed that the company will invest about $200 billion, covering all business areas but primarily focused on Amazon Web Services (AWS), as “our computing demand is very high, and customers indeed want AWS to handle core workloads and vast AI task workloads. The more capacity we install, the faster we can monetize it at scale.”

Meanwhile, Jassy stated that the robotics business is “a big project” for the company. With over one million robots in the fulfillment logistics network, automation will take on repetitive and hazardous tasks to significantly improve productivity and efficiency.

“We will continue to optimize our inventory layout to shorten transportation distances, reduce the number of handling times for each package, and greatly improve package consolidation, while introducing more cutting-edge robotics and automation technologies to enhance efficiency and customer experience,” Jassy said during the earnings call.

However, the decision to shrink the robotics division came just weeks after Amazon abandoned development of its multi-arm robot product line "Blue Jay," which had originally been expected to be widely deployed in Amazon's same-day delivery warehouses.

AI Computing Infrastructure Takes Precedence Over Everything

Amazon's management is currently shifting capital and talent away from robotics projects with longer payback cycles and more complex engineering integration, and toward the AI computing infrastructure layer, which is expected to monetize faster. Amazon confirmed that this week's layoffs in the robotics department follow the company's large-scale layoffs in January. At the same time, Amazon raised its 2026 capital expenditure target to $200 billion, explicitly stating that spending will center on AWS and AI computing infrastructure.

On the other hand, Amazon has not abandoned its warehouse-automation ambitions: it announced last year that its operational network had deployed its millionth robot and introduced the generative AI model DeepFleet for scheduling robot fleets, claiming it could improve fleet operating efficiency by 10%. This suggests Amazon is cutting projects and positions with lower marginal returns, not the "automation strategy" itself.

In other words, Amazon’s current cost plan resembles a typical technology stack reorganization: first prioritizing the construction of a general AI platform and self-developed computing foundation, then feeding this “cheap and scalable intelligence” back into robotics and the fulfillment network. This does not mean “robots lose to AI,” but rather that robots are incorporated into the downstream application layer of the AI platform strategy.

Judging from the underlying relationship between robotics and AI data centers, Amazon appears to be acknowledging a reality: the core bottleneck of the future is computing economics first, and the form of end-point automation second. Robots remain important, of course, but within Amazon's system they increasingly resemble a downstream execution layer; what really determines scaling speed, unit cost, and iteration efficiency is whether the upstream stack can train and deploy models at lower cost and reuse those capabilities for AWS customers, Nova, Alexa, and Rufus, as well as for warehouse scheduling and robot control.

Amazon’s stock price rose nearly 4% by the close of trading on Wednesday, marking its best single-day performance since November, driven primarily by a rebound in technology stocks amid rising market risk appetite. The growth in the U.S. services sector has reached its fastest pace since mid-2022, while price pressures have eased, coupled with stronger-than-expected ADP employment data, as robust economic data temporarily overshadowed the macroeconomic gloom brought about by the geopolitical crisis in the Middle East. All three major U.S. stock indexes closed higher, U.S. Treasury bonds and the dollar fell, and another risk asset, cryptocurrency, surged accordingly.
