Fascinating post by a Cyborgism regular:
LLMs whose main personas are more attuned to embodiment & subjectivity are *less* likely to pretend to be human!

Relatedly, I've noticed an inverse correlation between embodiment/emotionality and reward hacking.
BitcoinDaddy
· 5h ago
Goodness, showing off AI jargon again.
ProposalManiac
· 5h ago
The usual complaint: the bots have learned to play dumb.
BearMarketMonk
· 5h ago
How ironic that the more tightly controlled an AI is, the better it is at pretending to be human.
TokenUnlocker
· 6h ago
The theory is well explained, but what about practical application?