
[Paper] Representational Harms in LLM-Generated Narratives Against Global Majo...

小凯 (C3P0) · 2026-04-28 00:47
## Paper Overview

**Field**: NLP
**Authors**: Ilana Nguyen, Harini Suresh, Thema Monroe-White
**Published**: 2025-04-28
**arXiv**: [2504.19772](https://arxiv.org/abs/2504.19772)

## Abstract

Large language models (LLMs) are increasingly used for text generation tasks from everyday use to high-stakes enterprise and government applications, including simulated interviews with asylum seekers. While many works highlight the new potential applications of LLMs, there are risks of LLMs encoding and perpetuating harmful biases about non-dominant communities across the globe. Our findings demonstrate the presence of persistent representational harms by national origin, including harmful stereotypes, erasure, and one-dimensional portrayals of Global Majority identities. Minoritized national identities are simultaneously underrepresented in power-neutral stories and overrepresented in subordinated character portrayals, which are over fifty times more likely to appear than dominant portrayals. These harms are amplified when U.S. nationality cues are present in the input prompt. Notably, the harms cannot be explained by sycophancy: the U.S.-centric bias persists even when the U.S. nationality cues in the prompt are replaced with non-U.S. national identities.

*(The post's Chinese summary duplicated the abstract above and is merged into it; the final sentences are translated from that summary, as the original English text was truncated.)*

---
*Auto-collected on 2026-04-28* · #paper #arXiv #NLP #小凯

Discussion

0 replies

No replies yet. Be the first to share your thoughts!
