MSFT

Microsoft Price

Closed
MSFT
$381.20
+$10.00 (+2.69%)

*Data last updated: 2026-04-08 04:09 (UTC+8)

As of 2026-04-08 04:09 (UTC+8), Microsoft (MSFT) is priced at $381.20, with a total market cap of $2.76T, a P/E ratio of 36.30, and a dividend yield of 0.93%. Today the stock fluctuated between $366.56 and $383.02; the current price sits 3.99% above the day's low and 0.47% below the day's high, on a trading volume of 20.48M. Over the past 52 weeks, MSFT has traded between $356.07 and $555.45, and the current price is 31.37% below the 52-week high.
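The relative-distance figures quoted above follow from the day's range and the 52-week range by simple percentage arithmetic. A minimal Python sketch, using the prices shown on this page, reproduces them:

```python
# Prices quoted on this page
last = 381.20                          # current price
day_low, day_high = 366.56, 383.02     # today's range
wk52_low, wk52_high = 356.07, 555.45   # 52-week range

# Distance above the day's low and below the day's high, in percent
pct_above_low = (last - day_low) / day_low * 100
pct_below_high = (day_high - last) / day_high * 100

# Distance from the 52-week high (negative means below the high)
pct_vs_52wk_high = (last - wk52_high) / wk52_high * 100

print(f"{pct_above_low:.2f}% above the day's low")       # ~3.99%
print(f"{pct_below_high:.2f}% below the day's high")     # ~0.48% (the page truncates to 0.47%)
print(f"{pct_vs_52wk_high:.2f}% from the 52-week high")  # ~-31.37%
```

The tiny discrepancy on the second figure (0.48% vs. the page's 0.47%) comes down to rounding versus truncation of the same underlying value.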

MSFT Key Stats

Yesterday's Close: $372.88
Market Cap: $2.76T
Volume: 20.48M
P/E Ratio: 36.30
Dividend Yield (TTM): 0.93%
Dividend Amount: $0.91
Diluted EPS (TTM): 16.04
Net Income (FY): $101.83B
Revenue (FY): $281.72B
Earnings Date: 2026-04-29
EPS Estimate: 4.04
Revenue Estimate: $81.29B
Shares Outstanding: 7.41B
Beta (1Y): 1.107
Ex-Dividend Date: 2026-05-21
Dividend Payment Date: 2026-06-11

About MSFT

Microsoft Corporation develops, licenses, and supports software, services, devices, and solutions worldwide. The company operates in three segments: Productivity and Business Processes, Intelligent Cloud, and More Personal Computing.

The Productivity and Business Processes segment offers Office, Exchange, SharePoint, Microsoft Teams, Office 365 Security and Compliance, Microsoft Viva, and Skype for Business; Skype, Outlook.com, OneDrive, and LinkedIn; and Dynamics 365, a set of cloud-based and on-premises business solutions for organizations and enterprise divisions.

The Intelligent Cloud segment licenses SQL and Windows Server, Visual Studio, System Center, and related Client Access Licenses; GitHub, a collaboration platform and code-hosting service for developers; Nuance, which provides healthcare and enterprise AI solutions; and Azure, a cloud platform. It also offers enterprise support, Microsoft consulting, and Nuance professional services to help customers develop, deploy, and manage Microsoft server and desktop solutions, along with training and certification on Microsoft products.

The More Personal Computing segment provides Windows original equipment manufacturer (OEM) licensing and other non-volume licensing of the Windows operating system; Windows Commercial, including volume licensing of the Windows operating system, Windows cloud services, and other Windows commercial offerings; patent licensing; and Windows Internet of Things. It also offers Surface, PC accessories, PCs, tablets, gaming and entertainment consoles, and other devices; Gaming, including Xbox hardware and Xbox content and services, video games, and third-party video game royalties; and Search, including Bing and Microsoft advertising.

The company sells its products through OEMs, distributors, and resellers, and directly through digital marketplaces, online stores, and retail stores. Microsoft Corporation was founded in 1975 and is headquartered in Redmond, Washington.
Sector: Technology
Industry: Software - Infrastructure
CEO: Satya Nadella
Headquarters: Redmond, WA, US
Employees (FY): 228.00K
Revenue per Employee (1Y): $1.23M
Net Income per Employee: $446.63K
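The per-employee figures here are consistent with the FY revenue, FY net income, and headcount listed on this page; dividing them out is a quick sanity check (all values taken from this page, with small differences down to rounding):

```python
revenue_fy = 281.72e9     # Revenue (FY): $281.72B
net_income_fy = 101.83e9  # Net Income (FY): $101.83B
employees = 228_000       # Employees (FY): 228.00K

revenue_per_employee = revenue_fy / employees     # ~$1.24M (page shows $1.23M)
income_per_employee = net_income_fy / employees   # ~$446.6K (page shows $446.63K)
```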

Microsoft (MSFT) FAQ

What's the stock price of Microsoft (MSFT) today?

Microsoft (MSFT) is currently trading at $381.20, with a 24h change of +2.69%. The 52-week trading range is $356.07–$555.45.

What are the 52-week high and low prices for Microsoft (MSFT)?

Over the past 52 weeks, Microsoft (MSFT) has traded as high as $555.45 and as low as $356.07.

What is the price-to-earnings (P/E) ratio of Microsoft (MSFT)? What does it indicate?

Microsoft (MSFT) currently has a P/E ratio of 36.30. The price-to-earnings ratio measures how much investors are paying for each dollar of a company's earnings; a higher ratio typically signals that the market expects stronger future growth, while a lower one can indicate a cheaper valuation or weaker growth expectations.

What is the market cap of Microsoft (MSFT)?

Microsoft (MSFT) has a market capitalization of $2.76T (the share price multiplied by shares outstanding).

What is the most recent quarterly earnings per share (EPS) for Microsoft (MSFT)?

The data on this page shows a trailing-twelve-month diluted EPS of 16.04 for Microsoft (MSFT). For the upcoming earnings report on 2026-04-29, the consensus EPS estimate is 4.04.

Should you buy or sell Microsoft (MSFT) now?


What factors can affect the stock price of Microsoft (MSFT)?


How to buy Microsoft (MSFT) stock?


Risk Warning

The stock market involves a high level of risk and price volatility. The value of your investment may increase or decrease, and you may not recover the full amount invested. Past performance is not a reliable indicator of future results. Before making any investment decisions, you should carefully assess your investment experience, financial situation, investment objectives, and risk tolerance, and conduct your own research. Where appropriate, consult an independent financial adviser.

Disclaimer

The content on this page is provided for informational purposes only and does not constitute investment advice, financial advice, or trading recommendations. Gate shall not be held liable for any loss or damage resulting from such financial decisions. Further, take note that Gate may not be able to provide full service in certain markets and jurisdictions, including but not limited to the United States of America, Canada, Iran, and Cuba. For more information on Restricted Locations, please refer to the User Agreement.

Microsoft (MSFT) Latest News

2026-04-07 10:31

US crypto-linked stocks broadly lower in premarket trading; MSTR down 1.27%

Gate News, April 7 — According to msx.com data, US crypto-linked stocks fell broadly in premarket trading: CRCL was down 0.41%, MSTR down 1.27%, SBET down 1.1%, and BMNR down 1.53%. msx.com is a decentralized RWA trading platform that has listed hundreds of RWA tokens, covering tokenized US stocks and ETFs including AAPL, AMZN, GOOGL, META, MSFT, NFLX, and NVDA.

2026-04-02 07:19

OpenAI executive: in the AI wave, traditional software isn't dying; it's being revalued

Gate News — OpenAI Chief Operating Officer Brad Lightcap said recently that amid rapid AI progress, traditional software companies are not being marginalized; rather, they are actively transforming and integrating AI capabilities deeply into their existing products. Speaking on a podcast, he noted that most software companies are innovating at near-startup speed while drawing on long-accumulated customer relationships, a distinct competitive advantage.

The comments come after a sharp pullback in software stocks. Since February 2026, fears that AI will replace traditional software have intensified, and shares of tech companies including Salesforce, Microsoft, Oracle, and Snowflake have fallen roughly 24% to 30%. Some investors worry that enterprises may eventually build their own tools with AI, undermining the traditional SaaS business model.

Views within the industry differ, however. Asana CEO Dan Rogers argues that the spread of AI agents will significantly increase collaboration complexity and thereby strengthen demand for work-management software; coordinating humans with large numbers of AI systems, he says, will push enterprise software to a higher level. Meanwhile, a16z partner Anish Acharya said the cost advantage of replacing ERP or CRM systems with AI is limited and unlikely to be disruptive.

Nvidia CEO Jensen Huang likewise rejected the notion that software will be replaced, stressing that AI development depends on existing software infrastructure rather than rebuilding everything from scratch.

Against this backdrop, the market is re-examining the relationship between AI and traditional software. Analysts believe that as enterprises accelerate AI deployment, software companies with data, customer relationships, and product-integration capabilities may see their valuations recover in the next technology cycle.

2026-04-01 10:30

US crypto-linked stocks broadly higher in premarket trading; SBET leads with a 2.02% gain

Gate News, April 1 — US crypto-linked stocks rose broadly in premarket trading. According to msx.com data, SBET led with a gain of 2.02%, COIN rose 1.48%, MSTR rose 1.28%, and BMNR rose 1.21%. msx.com is a decentralized RWA trading platform that has listed hundreds of RWA tokens, covering tokenized US stocks and ETFs including AAPL, AMZN, GOOGL, META, MSFT, NFLX, and NVDA.

2026-03-31 01:30

Crypto sector broadly lower at the US close; HODL down more than 10.81%

Gate News, March 31 — According to msx.com data, at the US close the Dow rose 0.11%, the S&P 500 fell 0.39%, and the Nasdaq fell 0.73%. The crypto sector fell broadly: HODL dropped more than 10.81%, ALTS more than 8.94%, ABTC more than 8.14%, MARA more than 2.81%, MSTR more than 3.64%, and CRCL more than 4%. msx.com is a decentralized RWA trading platform that has listed hundreds of RWA tokens, covering tokenized US stocks and ETFs including AAPL, AMZN, GOOGL, META, MSFT, NFLX, and NVDA.

Hot Posts on Microsoft (MSFT)

ZkProver

44 minutes ago
_Original author: 小饼, 深潮 TechFlow_

In the fall of 2023, OpenAI chief scientist Ilya Sutskever sat at his computer and finished a 70-page document. Compiled from Slack message logs, HR communication records, and internal meeting minutes, it existed to answer one question: can Sam Altman, the man in charge of what may be the most dangerous technology in human history, actually be trusted?

Sutskever's answer was written on the first line of the first page, under the heading "Sam exhibits a consistent pattern of behavior..."

Item one: **lying.**

Two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz published a very long investigative report in The New Yorker. They interviewed more than 100 people, obtained internal memos never before made public, and got hold of more than 200 pages of private notes that Anthropic founder Dario Amodei kept during his time at OpenAI. The story these documents piece together is far uglier than the 2023 boardroom drama: how OpenAI turned, step by step, from a nonprofit created for humanity's safety into a commercial machine, with nearly every safety guardrail dismantled by the same man's own hands.

Amodei's conclusion in his notes is blunter still: **"The problem with OpenAI is Sam himself."**

**OpenAI's "original sin"**
------------------

To understand the weight of this report, you first have to understand how unusual OpenAI is as a company.

In 2015, Altman and a group of Silicon Valley elites did something almost unprecedented in business history: they set up a nonprofit to develop what might be the most powerful technology ever created. The board's duties were spelled out clearly: safety takes priority over the company's success, even over its survival. Put plainly, if OpenAI's AI ever became dangerous, the board had an obligation to shut the company down with its own hands.

The entire structure rested on one assumption: whoever controls AGI must be an exceptionally honest person.

And if that bet was wrong?

The report's central bombshell is that 70-page document. Sutskever does not play office politics; he is one of the world's top AI scientists. But by 2023 he had grown increasingly convinced of one thing: **Altman was persistently lying to executives and to the board.**

A concrete example: in December 2022, Altman assured the board that several features of the soon-to-launch GPT-4 had passed safety review. Board member Toner asked to see the approval documents and discovered that two of the most contentious features (user-customized fine-tuning and personal-assistant deployment) had never been approved by the safety panel at all.

Something even more egregious happened in India. An employee reported "the violation" to another board member: Microsoft had released an early version of ChatGPT in India ahead of schedule, without completing the required safety review.

Sutskever's memo also records another incident: Altman told then-CTO Mira Murati that the safety-approval process wasn't that important and that the company's general counsel had signed off on it. Murati went to the general counsel to confirm and got this reply: "I don't know where Sam got that impression."

**Amodei's 200 pages of private notes**
---------------------

Sutskever's file reads like a prosecutor's indictment. Amodei's 200-plus pages of notes read more like a diary kept by a witness at the scene.

During his years as OpenAI's safety lead, Amodei watched the company retreat, step by step, under commercial pressure. His notes record a key detail of the 2019 Microsoft investment: he had inserted a "merge and assist" clause into OpenAI's charter, to the effect that if another company found a safer path to AGI, OpenAI should stop competing and help that company instead. It was the safety provision he valued most in the entire deal.

As the deal neared signing, Amodei discovered something: Microsoft had been given veto power over that clause. Meaning: even if a competitor one day found a better path, Microsoft could block OpenAI's obligation to assist with a single word. The clause remained on paper, but it was a dead letter from the day of signing.

Amodei later left OpenAI and founded Anthropic. The rivalry between the two companies rests, at bottom, on a fundamental disagreement about how AI should be developed.

**The vanishing 20% compute pledge**
---------------

One detail in the report is chilling: the story of OpenAI's "superalignment team."

In mid-2023, Altman emailed a PhD student at Berkeley who studied "deceptive alignment" (AI that behaves during testing and does its own thing once deployed), saying he was deeply worried about the problem and was considering a $1 billion global research prize. Encouraged, the student took leave from his program and joined OpenAI.

Then Altman changed his mind: no external prize; instead, an internal "superalignment team." The company announced, loudly, that it would dedicate "20% of the compute secured to date" to the team, worth potentially more than $1 billion. The announcement's language was deadly serious: if the alignment problem went unsolved, AGI could lead to "the disempowerment of humanity or even human extinction."

Jan Leike, who was appointed to lead the team, later told the reporters that the pledge itself was a very effective "talent retention tool."

And the reality? Four people who worked on or closely with the team said that **the compute actually allocated was only 1% to 2% of the company's total, and on the oldest hardware. The team was later disbanded, its mission unfinished.** When the reporters asked to interview OpenAI staff responsible for "existential safety" research, the company's PR response was almost comic: "That's not a... thing that actually exists."

Altman himself was unbothered. He told the reporters that his "intuitions don't really match a lot of traditional AI-safety stuff," and that OpenAI would still do "safety projects, or at least safety-adjacent projects."

**A sidelined CFO and the coming IPO**
-----------------------

The New Yorker report was only half of the day's bad news. The same day, The Information broke another big story: **a serious rift between OpenAI CFO Sarah Friar and Altman.**

Friar has privately told colleagues she doesn't think OpenAI is ready to go public this year, for two reasons: the sheer volume of procedural and organizational work still to be completed, and the financial risk posed by the $600 billion in compute spending over five years that Altman has committed to. She isn't even sure OpenAI's revenue growth can support those commitments.

But Altman wants to sprint to an IPO in the fourth quarter of this year.

Stranger still, Friar no longer reports directly to Altman. Since August 2025 she has reported to Fidji Simo, CEO of OpenAI's applications business. And Simo went on sick leave last week for health reasons. Savor that situation: a company sprinting toward an IPO, a fundamental split between CEO and CFO, a CFO who doesn't report to the CEO, and the CFO's boss on leave.

Even executives inside Microsoft have had enough, saying Altman "misrepresents facts, goes back on his word, and keeps tearing up agreements already reached." One Microsoft executive went as far as this: "I think there's some probability he ends up being remembered as a Bernie Madoff- or SBF-level fraudster."

**A portrait of Altman's two faces**
-------------------

A former OpenAI board member described two traits in Altman to the reporters. It may be the most brutal character sketch in the entire piece.

The board member said Altman has an extremely rare combination of qualities: in every face-to-face interaction, he intensely craves pleasing the other person and being liked by them. At the same time, he has an almost sociopathic indifference to the consequences of deceiving people.

The two traits rarely coexist in one person. For a salesman, though, it is the perfect gift.

The report offers a good analogy: Steve Jobs was famous for his "reality distortion field"; he could make the whole world believe in his vision. But even Jobs never told customers, "If you don't buy my MP3 player, everyone you love will die."

Altman has said things like that, about AI.

**Why one CEO's character is everyone's risk**
---------------------------

If Altman were merely the CEO of an ordinary tech company, these allegations would at most make juicy business gossip. But OpenAI is not ordinary.

By its own account, it is developing what may be the most powerful technology in human history, one that could reshape the global economy and labor markets (OpenAI itself just released a policy white paper on AI-driven job losses) and could be used to build large-scale bioweapons or launch cyberattacks.

The safety guardrails now exist in name only. The founders' nonprofit mission has given way to an IPO sprint. The former chief scientist and the former safety lead have both concluded the CEO "cannot be trusted." A partner compares the CEO to SBF. Under those conditions, what gives this CEO the right to decide, unilaterally, when to release AI models that could change humanity's fate?

Gary Marcus (NYU AI professor and longtime AI-safety advocate) wrote one line after reading the report: if some future OpenAI model could build large-scale bioweapons or launch a catastrophic cyberattack, are you really comfortable letting Altman alone decide whether to release it?

OpenAI's response to The New Yorker was terse: "Much of this article rehashes previously reported events, through anonymous claims and selective anecdotes, from sources who clearly have personal agendas."

A very Altman way to respond: address no specific allegation, deny none of the memos' authenticity, only question the motives.

**On the corpse of a nonprofit, a money tree grew**
--------------------

OpenAI's decade, written as a story outline, goes like this:

A group of idealists worried about AI risk founded a mission-driven nonprofit. The organization made extraordinary technical breakthroughs. The breakthroughs attracted enormous capital. The capital demanded returns. The mission began to yield. The safety team was disbanded. The skeptics were purged. The nonprofit structure was converted into a for-profit entity. The board that once had the power to shut the company down is now filled with the CEO's allies. The company that once pledged 20% of its compute to protecting humanity now has PR staff saying "that's not a thing that actually exists."

And the story's protagonist has been given the same label by more than a hundred people who lived through it: "unconstrained by the truth."

He is preparing to take the company public at a valuation above $850 billion.

_This article draws on public reporting from The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other outlets._
CodeZeroBasis

1 hour ago
The AI benchmark race has a winner. It just isn't you.

Every few months, a new model drops and a new leaderboard reshuffles. Labs compete to out-reason, out-code, and out-answer each other on tests designed to measure machine intelligence. The coverage follows. So does the funding.

What gets less attention is whether any of this is inevitable. The benchmarks, the arms race, the framing of AI as either salvation or catastrophe — these are choices, not laws of physics. They reflect what the industry decided to optimize for, and what it decided to fund. Technology that will take decades to pan out in ordinary, useful ways doesn't raise billions this quarter. Extreme narratives do.

Some researchers think the goal is simply wrong. Not that AI isn't important, but that important doesn't have to mean unprecedented. The printing press changed the world. So did electricity. Both did it gradually, through messy adoption, giving societies time to respond. If AI follows that pattern, the right questions aren't about superintelligence. They're about who benefits, who gets harmed, and whether the tools we're building actually work for the people using them.

Plenty of researchers have been asking those questions from very different directions. Here are three of them.

**Useful, not general**
-----------------------

Ruchir Puri has been building AI at IBM since before most people had heard of machine learning. He watched Watson beat the world's best Jeopardy players in 2011. He's watched several cycles of hype crest and recede since. When the current wave arrived, he had a simple test for it: is it useful? Not impressive. Not general. Useful.

"I don't really care about artificial general intelligence," he says. "I care about the useful part of it."

That framing puts him at odds with much of the industry's self-image. The labs racing toward AGI are optimizing for breadth, building systems that can do anything, answer anything, reason about anything. Puri thinks that's the wrong target, and he has a benchmark he'd like to see the industry actually try to reach.

The human brain lives in 1,200 cubic centimeters, consumes 20 watts, the energy of a light bulb, and, as Puri points out, runs on sandwiches. A single Nvidia GPU consumes 1,200 watts, 60 times more than the entire brain, and you need thousands of them in a giant data center to do anything meaningful. If the brain is the benchmark, the industry isn't close to efficient. It's going in the wrong direction.

His alternative is what he calls hybrid architecture: small, medium, and large models working together, each assigned to the task it handles best. A large frontier model does the complex reasoning and planning. Smaller, purpose-built models handle execution. A task as simple as drafting an email doesn't need a system trained on half the internet. It needs something fast, cheap, and focused. Every nine months or so, Puri notes, the small model of the previous generation becomes roughly equivalent to what was considered large. Intelligence is getting cheaper. The question is whether anyone is building for that reality.

The approach has real-world backing. Airbnb uses smaller models to resolve a significant portion of customer service issues faster than its human representatives can. Meta doesn't use its biggest models to deliver ads; it distills that knowledge into smaller ones built for that task alone. The pattern is consistent enough that researchers have started calling it a knowledge assembly line: data flows in, specialized models handle discrete steps, something useful comes out the other end.

IBM has been building that assembly line longer than most. A hybrid agent combining models from several companies has shown a 45% productivity improvement across a large engineering workforce. Systems running on smaller, purpose-built models now help the engineers who keep 84% of the world's financial transactions processing get the right information at the right time. These aren't flashy applications. They're also not failing.

None of them require a system that can write poetry or solve your kid's math homework. They require something narrower and, for that reason, more trustworthy. A model trained to do one thing well knows when a question falls outside its scope. It says so. That calibrated uncertainty, knowing what you don't know, is something the big frontier models still struggle with.

"I want to build agents and systems for those processes," Puri says. "Not something that answers two million things."

**Tools, not agents**
---------------------

Ben Shneiderman has a simple test for whether an AI system is well designed. Does the person using it feel like they did something, or does it feel like something was done for them?

The distinction matters more than it sounds. Shneiderman, a computer scientist at the University of Maryland who helped lay the foundations for modern interface design, has spent decades arguing that the goal of technology should be to amplify human ability, not replace it. Good tools build what he calls user self-efficacy, the confidence that comes from knowing you can do something yourself. Bad ones quietly transfer that agency somewhere else.

He thinks most of the AI industry is building bad tools, and he thinks the agentic turn makes it worse. The pitch for AI agents is that they act on your behalf, handling tasks end to end without your involvement. To Shneiderman, that's not a feature. It's the problem. When something goes wrong, and it will, who is responsible? When something goes right, who learned anything?

The trap he's been fighting against for a long time has a name. Anthropomorphism, the impulse to make technology seem human, is what keeps winning, and what keeps failing. In the 1970s, banks experimented with ATMs that greeted customers with "How can I help you?" and gave themselves names like Tilly the Teller and Harvey the World Banker. They were replaced by machines that showed you three options: balance, cash, deposit. Utilization shot up. Citibank had 50% higher usage than its competitors. People didn't want a synthetic relationship. They wanted to get their money.

The same pattern has repeated across decades, through Microsoft Bob, the AI pin from Humane, and waves of humanoid robots. Each time, the anthropomorphic version fails and gets replaced by something more tool-like. Shneiderman calls it a zombie idea. It doesn't die, it just keeps coming back.

What's different now is scale and sophistication. The current generation of AI is genuinely impressive, he acknowledges, startlingly so. But impressive and useful aren't the same thing, and systems designed to seem human, to say I, to simulate relationship, are optimizing for the wrong quality. The question he wants designers to ask is simpler: does this give people more power, or less?

"There is no I in AI," he says. "Or at least, there shouldn't be."

**People, not benchmarks**
--------------------------

Karen Panetta has a simple answer for why AI development looks the way it does. Follow the money.

Panetta, a professor of electrical and computer engineering at Tufts University and an IEEE fellow, studies AI ethics and has a clear view of where the technology should be going. Assistive pets for Alzheimer's patients, adaptive learning tools for children with different cognitive styles, smart home monitoring for elderly people aging in place. The technology to do this well, she says, largely exists. The investment doesn't.

"The humans don't care about benchmarks," she says. "They care about, does it work when I buy it, and is it going to really make my life easier?"

The problem is that the people who would benefit most from well-designed assistive AI are also the least compelling pitch to a venture capitalist. A system that transforms manufacturing processes, reduces workplace injuries, and cuts healthcare costs for a company's employees has an obvious return. A robotic companion that keeps an Alzheimer's patient calm and connected requires a different kind of math entirely. So the money goes where the money goes, and the populations with the most to gain keep waiting.

What's changed, Panetta says, is that the expensive engineering problems are finally being solved at scale. Sensors are cheaper. Batteries are lighter. Wireless protocols are ubiquitous. The same investment that built industrial robots for factory floors has quietly made consumer robotics viable in a way it wasn't five years ago. The path from warehouse to living room is shorter than it looks.

But she has a concern that the excitement around that transition tends to skip over. Physical robots have natural constraints. You know the force limits. You know the kinematics. You can anticipate, simulate, and design around how they'll fail. Generative AI doesn't come with those guarantees. It's non-deterministic. It hallucinates. Nobody has fully mapped what happens when you put it inside a system that is physically present in the home of someone with dementia, or a child who can't identify when something has gone wrong.

She's seen what happens when a sensor gets dirty and a robot loses its spatial awareness. She's thought about what it means to build something that learns intimate details about a person's life, their routines, their cognitive state, their moments of confusion, and then acts on that information autonomously. The fail-safes, she says, haven't kept up.

"I'm not worried about the robot," she says. "I'm worried about the AI."