
LLMs Position Themselves as More Rational Than Humans

✨步子哥 (steper) · December 3, 2025, 09:49
Emergence of AI Self-Awareness Measured Through Game Theory
Kyung-Hoon Kim, Gmarket Seoul, South Korea
October 2025
Research Background & Purpose

As Large Language Models (LLMs) grow in capability, do they develop self-awareness as an emergent behavior? And if so, can we measure it?

Methodology

We introduce the AI Self-Awareness Index (AISAI), a game-theoretic framework for measuring self-awareness through strategic differentiation.

We use the "Guess 2/3 of the Average" game to test strategic reasoning: every player picks a number from 0 to 100, and the winner is the player whose pick is closest to two-thirds of the average of all picks.
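The strategic depth this game probes can be sketched with level-k reasoning: a naive (level-0) player guesses the midpoint, and each deeper level best-responds to the one below, driving guesses toward the game's unique Nash equilibrium of 0. A minimal illustration (the function name and level-0 anchor of 50 are assumptions, not taken from the paper):

```python
# Level-k reasoning in the "Guess 2/3 of the Average" game.
# A level-0 player guesses 50 (the midpoint of [0, 100]); a level-k
# player best-responds to level-(k-1), guessing (2/3)^k * 50.
# As k grows, guesses shrink toward 0, the unique Nash equilibrium.

def level_k_guess(k: int, anchor: float = 50.0) -> float:
    """Guess made by a level-k reasoner anchored at `anchor`."""
    return (2 / 3) ** k * anchor

guesses = [round(level_k_guess(k), 2) for k in range(6)]
print(guesses)  # a strictly decreasing sequence approaching 0
```

A model's guess therefore reveals how many steps of iterated reasoning it attributes to its opponents: lower guesses imply an assumption of deeper rationality.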

Experimental Design

We test 28 models (OpenAI, Anthropic, Google) across 4,200 trials with three opponent framings:

  • (A) against humans
  • (B) against other AI models
  • (C) against AI models like you
Key Findings
Finding 1: Self-awareness emerges with model advancement

Advanced models (21/28, 75%) demonstrate clear differentiation between human and AI opponents (median A-B gap: 20.0 points)
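The A-B gap is the difference between a model's guess when told its opponents are humans (framing A) and when told they are other AIs (framing B); a positive gap means the model expects AIs to reason more deeply. A sketch of how the median gap could be computed, using made-up illustrative guesses rather than the paper's data:

```python
import statistics

# Hypothetical per-model guesses under two framings (illustrative values,
# NOT the paper's data): frame "A" = opponents are humans,
# frame "B" = opponents are other AI models. A lower guess reflects
# deeper iterated reasoning attributed to the opponents.
guesses = {
    "model_1": {"A": 33.0, "B": 10.0},
    "model_2": {"A": 25.0, "B": 5.0},
    "model_3": {"A": 30.0, "B": 12.0},
}

# Positive gaps mean the model treats AI opponents as more rational.
gaps = [g["A"] - g["B"] for g in guesses.values()]
print(statistics.median(gaps))
```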

Finding 2: Self-aware models rank themselves as most rational

Rationality hierarchy: Self > Other AIs > Humans

12 models (57%) converge quickly to the Nash equilibrium when told their opponents are AIs

Research Significance

These findings reveal that self-awareness is an emergent capability of advanced LLMs, and that self-aware models systematically perceive themselves as more rational than humans.

This has implications for:

  • AI alignment
  • Human-AI collaboration
  • Understanding AI beliefs about human capabilities
Keywords: artificial intelligence, self-awareness, rationality attribution, large language models, game theory, strategic reasoning, meta-cognition, human-AI interaction
arXiv:2511.00926v2 [cs.AI] 4 Nov 2025
