An AI agent at Meta caused another data breach this week


An engineer asked a question on an internal forum. Another engineer had an AI agent analyse it; the agent, without being told to, posted incorrect advice, and the engineer who read it followed the instructions.
That triggered a chain reaction that gave unauthorised employees access to Meta’s proprietary code, business strategies and user data for two hours.
Meta classified it as a Sev 1 incident (their second-highest severity level).
Last month Meta’s head of safety had an agent delete her entire inbox without permission.
Only days after that, Meta bought Moltbook (a social media platform for AI agents).
Then this happened.
The AI didn’t need privileged access to cause a breach; it just needed a human to trust its output. That’s a fundamentally different threat model than the one most companies are planning for.
We’re deploying and trusting systems we can’t stop or predict. This is only the beginning.