OpenAI and Anthropic Cross-Test Each Other's Models for Hallucinations and Safety Issues

GateNews

Jinshi Data, August 28 — OpenAI and Anthropic recently evaluated each other's models in hopes of uncovering issues their own testing may have missed. In blog posts published Wednesday, the two companies said that over the summer they ran safety tests on each other's publicly available AI models, checking for hallucination tendencies as well as so-called "misalignment", where a model does not behave as its developers intended. The evaluations were completed before OpenAI launched GPT-5 and before Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.
