I recently came across an AI tool called Nuwa with a striking feature: take a casual portrait of someone, and the system uses facial recognition to compile that person's publicly available online information.

At first glance, it does seem convenient for people who frequently attend industry summits and other offline events: you can quickly look up an attendee's background and save yourself the social legwork.

But on closer reflection, it feels uncomfortable. Isn't this essentially just automated aggregation of publicly available information? On the surface it's branded an "AI recognition tool," but how is it fundamentally different from doxxing or a social engineering database? If a single photo can surface all of your online traces, how much privacy is left?

Where are the boundaries for tools like this, and who should regulate them? The answer is probably more complicated than it looks.