<!DOCTYPE html><html lang="zh-CN"><head>
<meta charset="UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<title>AI范式革命:从Transformer困局到CTM新纪元</title>
<script src="https://cdn.tailwindcss.com"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/js/all.min.js"></script>
<link href="https://fonts.googleapis.com/css2?family=Noto+Serif+SC:wght@400;500;600;700;900&family=Inter:wght@300;400;500;600;700&display=swap" rel="stylesheet"/>
<script src="https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.min.js"></script>
<style>
:root {
--primary: #1a1a1a;
--secondary: #f5f5f0;
--accent: #e67e22;
--accent-secondary: #3498db;
--text-primary: #2c3e50;
--text-secondary: #5d6d7e;
--border: #e0e0e0;
}
body {
font-family: 'Inter', sans-serif;
background: linear-gradient(135deg, var(--secondary) 0%, #fafaf8 100%);
color: var(--text-primary);
overflow-x: hidden;
}
.serif {
font-family: 'Noto Serif SC', serif;
}
.hero-gradient {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
opacity: 0.9;
}
.toc-fixed {
position: fixed;
top: 0;
left: 0;
width: 280px;
height: 100vh;
background: rgba(255, 255, 255, 0.95);
backdrop-filter: blur(10px);
border-right: 1px solid var(--border);
z-index: 1000;
overflow-y: auto;
padding: 2rem 1.5rem;
}
.main-content {
margin-left: 280px;
min-height: 100vh;
}
.citation {
color: var(--accent);
text-decoration: none;
font-weight: 500;
transition: all 0.2s ease;
}
.citation:hover {
color: var(--accent-secondary);
text-decoration: underline;
}
.section-divider {
height: 2px;
background: linear-gradient(90deg, var(--accent), var(--accent-secondary));
margin: 3rem 0;
border-radius: 1px;
}
.highlight-box {
background: linear-gradient(135deg, rgba(230, 126, 34, 0.05) 0%, rgba(52, 152, 219, 0.05) 100%);
border-left: 4px solid var(--accent);
padding: 1.5rem;
margin: 2rem 0;
border-radius: 0 8px 8px 0;
}
.chart-container {
background: white;
border-radius: 12px;
box-shadow: 0 4px 20px rgba(0,0,0,0.08);
padding: 2rem;
margin: 2rem 0;
}
.bento-grid {
display: grid;
grid-template-columns: 2fr 1fr;
grid-template-rows: auto auto;
gap: 1.5rem;
margin: 2rem 0;
}
.bento-item {
background: white;
border-radius: 12px;
padding: 2rem;
box-shadow: 0 4px 20px rgba(0,0,0,0.08);
}
.bento-hero {
grid-row: 1 / 3;
position: relative;
overflow: hidden;
}
.hero-overlay {
position: absolute;
inset: 0;
background: linear-gradient(135deg, rgba(26, 26, 26, 0.8) 0%, rgba(52, 152, 219, 0.6) 100%);
display: flex;
align-items: center;
justify-content: center;
color: white;
text-align: center;
padding: 3rem;
}
/* Mermaid chart styling */
.mermaid-container {
display: flex;
justify-content: center;
min-height: 300px;
max-height: 800px;
background: white;
border: 2px solid #e5e7eb;
border-radius: 12px;
padding: 30px;
margin: 30px 0;
box-shadow: 0 8px 25px rgba(0, 0, 0, 0.08);
position: relative;
overflow: hidden;
}
.mermaid-container .mermaid {
width: 100%;
max-width: 100%;
height: 100%;
cursor: grab;
transition: transform 0.3s ease;
transform-origin: center center;
display: flex;
justify-content: center;
align-items: center;
touch-action: none;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.mermaid-container .mermaid svg {
max-width: 100%;
height: 100%;
display: block;
margin: 0 auto;
}
.mermaid-container .mermaid:active {
cursor: grabbing;
}
.mermaid-container.zoomed .mermaid {
height: 100%;
width: 100%;
cursor: grab;
}
.mermaid-controls {
position: absolute;
top: 15px;
right: 15px;
display: flex;
gap: 10px;
z-index: 20;
background: rgba(255, 255, 255, 0.95);
padding: 8px;
border-radius: 8px;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
}
.mermaid-control-btn {
background: #ffffff;
border: 1px solid #d1d5db;
border-radius: 6px;
padding: 10px;
cursor: pointer;
transition: all 0.2s ease;
color: #374151;
font-size: 14px;
min-width: 36px;
height: 36px;
text-align: center;
display: flex;
align-items: center;
justify-content: center;
}
.mermaid-control-btn:hover {
background: #f8fafc;
border-color: #3b82f6;
color: #3b82f6;
transform: translateY(-1px);
}
.mermaid-control-btn:active {
transform: scale(0.95);
}
@media (max-width: 1024px) {
.toc-fixed {
transform: translateX(-100%);
transition: transform 0.3s ease;
}
.toc-fixed.open {
transform: translateX(0);
}
.main-content {
margin-left: 0;
}
.bento-grid {
grid-template-columns: 1fr;
grid-template-rows: auto auto auto;
}
.bento-hero {
grid-row: 1;
}
}
@media (max-width: 768px) {
.hero-overlay {
padding: 1.5rem;
}
.hero-overlay h1 {
font-size: 1.875rem; /* 30px */
line-height: 2.25rem; /* 36px */
}
.hero-overlay p {
font-size: 1rem;
}
section {
padding-left: 1rem !important;
padding-right: 1rem !important;
}
}
@media (max-width: 640px) {
.hero-overlay h1 {
font-size: 1.5rem; /* 24px */
line-height: 2rem; /* 32px */
}
.hero-overlay p {
font-size: 0.875rem;
}
}
@media (max-width: 390px) {
.hero-overlay {
padding: 1rem;
}
.hero-overlay h1 {
font-size: 1.25rem; /* 20px */
line-height: 1.75rem; /* 28px */
}
}
</style>
<base target="_blank">
</head>
<body>
<!-- Table of Contents -->
<nav class="toc-fixed">
<div class="mb-8">
<h2 class="text-xl font-bold text-gray-800 mb-4 serif">目录导航</h2>
<div class="w-12 h-0.5 bg-gradient-to-r from-orange-500 to-blue-500"></div>
</div>
<ul class="space-y-3 text-sm">
<li>
<a href="#section1" class="block py-2 px-3 rounded-lg hover:bg-gray-100 transition-colors">1. 核心命题:AGI之路的方向性危机</a>
</li>
<li>
<a href="#section2" class="block py-2 px-3 rounded-lg hover:bg-gray-100 transition-colors">2. Transformer架构的深层困境</a>
</li>
<li>
<a href="#section3" class="block py-2 px-3 rounded-lg hover:bg-gray-100 transition-colors">3. CTM架构:大脑启发的范式跃迁</a>
</li>
<li>
<a href="#section4" class="block py-2 px-3 rounded-lg hover:bg-gray-100 transition-colors">4. Transformer与CTM的深度技术对比</a>
</li>
<li>
<a href="#section5" class="block py-2 px-3 rounded-lg hover:bg-gray-100 transition-colors">5. 行业生态与创新发展重构</a>
</li>
<li>
<a href="#section6" class="block py-2 px-3 rounded-lg hover:bg-gray-100 transition-colors">6. 社会文明层面的深远影响</a>
</li>
<li>
<a href="#section7" class="block py-2 px-3 rounded-lg hover:bg-gray-100 transition-colors">7. 未来展望与战略启示</a>
</li>
</ul>
<div class="mt-8 pt-6 border-t border-gray-200">
<h3 class="text-sm font-semibold text-gray-600 mb-3">关键概念</h3>
<div class="flex flex-wrap gap-2">
<span class="px-2 py-1 bg-orange-100 text-orange-800 text-xs rounded">Transformer</span>
<span class="px-2 py-1 bg-blue-100 text-blue-800 text-xs rounded">CTM</span>
<span class="px-2 py-1 bg-green-100 text-green-800 text-xs rounded">Scaling Law</span>
</div>
</div>
</nav>
<!-- Main Content -->
<main class="main-content">
<!-- Hero Section -->
<section class="relative">
<div class="bento-grid max-w-7xl mx-auto px-6 py-12">
<!-- Hero Content -->
<div class="bento-item bento-hero">
<img src="https://kimi-web-img.moonshot.cn/img/img.huxiucdn.com/8adcb82f0471f6a9184b4bac29427ef39d278f1e.jpg" alt="人工智能神经网络抽象概念图" class="w-full h-full object-cover" referrerpolicy="no-referrer"/>
<div class="hero-overlay">
<div>
<h1 class="text-4xl md:text-6xl font-black serif mb-6 italic leading-tight">
AI范式革命
</h1>
<p class="text-xl md:text-2xl font-light mb-8 opacity-90">
从Transformer困局到CTM新纪元
</p>
<div class="flex items-center justify-center space-x-4 text-sm opacity-75">
<span><i class="fas fa-brain mr-2"></i>技术解构</span>
<span><i class="fas fa-lightbulb mr-2"></i>文明启示</span>
<span><i class="fas fa-rocket mr-2"></i>范式跃迁</span>
</div>
</div>
</div>
</div>
<!-- Key Highlights -->
<div class="bento-item">
<h3 class="text-xl font-bold mb-4 serif">核心洞察</h3>
<ul class="space-y-3 text-sm">
<li class="flex items-start">
<i class="fas fa-exclamation-triangle text-orange-500 mt-1 mr-3"></i>
<span>Transformer发明者Llion Jones发出"死胡同"警告</span>
</li>
<li class="flex items-start">
<i class="fas fa-sync-alt text-blue-500 mt-1 mr-3"></i>
<span>CTM架构通过时间动态实现真正推理</span>
</li>
<li class="flex items-start">
<i class="fas fa-chart-line text-green-500 mt-1 mr-3"></i>
<span>Scaling Law正在扼杀创新氧气</span>
</li>
</ul>
</div>
<!-- Critical Questions -->
<div class="bento-item">
<h3 class="text-xl font-bold mb-4 serif">关键问题</h3>
<div class="space-y-3 text-sm">
<div class="p-3 bg-gray-50 rounded-lg">
<strong>技术层面:</strong> Transformer架构的根本局限是什么?
</div>
<div class="p-3 bg-gray-50 rounded-lg">
<strong>文明层面:</strong> 我们是否在错误道路上狂奔?
</div>
<div class="p-3 bg-gray-50 rounded-lg">
<strong>进化层面:</strong> AGI的终局博弈如何展开?
</div>
</div>
</div>
</div>
</section>
<!-- Section 1: Core Proposition -->
<section id="section1" class="max-w-6xl mx-auto px-6 py-16">
<div class="mb-12">
<h2 class="text-4xl font-black serif mb-6">核心命题:AGI之路的方向性危机</h2>
<div class="w-24 h-1 bg-gradient-to-r from-orange-500 to-blue-500 mb-8"></div>
</div>
<!-- Identity Paradox -->
<div class="highlight-box">
<h3 class="text-2xl font-bold mb-4 serif">1.1.1 Transformer发明者的身份悖论</h3>
<p class="text-lg leading-relaxed mb-4">
<strong>Llion Jones的身份构成了当代AI发展史上最富戏剧性的悖论</strong>。作为2017年里程碑论文《Attention Is All You Need》的八位共同作者之一,Jones不仅是Transformer架构的命名者,更是这一技术革命的核心缔造者——该论文已被引用超过10万次,成为21世纪最具影响力的计算机科学出版物之一<a href="https://venturebeat.com/technology/sakana-ais-cto-says-hes-absolutely-sick-of-transformers-the-tech-that-powers" class="citation">[1]</a>。
</p>
<p class="leading-relaxed">
然而,正是这位最深谙Transformer架构的研究者,在2025年AI行业最鼎盛的时刻发出了震撼行业的自我批判:他宣布自己已对这项发明"绝对厌倦"(absolutely sick),并透露自2024年初起便"大幅减少在Transformer上的研究时间"<a href="https://venturebeat.com/technology/sakana-ais-cto-says-hes-absolutely-sick-of-transformers-the-tech-that-powers" class="citation">[1]</a>
<a href="https://algustionesa.com/why-a-transformer-co-creator-is-sick-of-his-own-ai/" class="citation">[2]</a>。
</p>
</div>
<!-- Dead End Analysis -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">1.1.2 AI鼎盛期的"死胡同"论断</h3>
<div class="grid md:grid-cols-2 gap-8 mb-8">
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-orange-600">现状指标</h4>
<ul class="space-y-2 text-sm">
<li><i class="fas fa-dollar-sign text-green-500 mr-2"></i>全球AI投资超过1500亿美元</li>
<li><i class="fas fa-chart-line text-blue-500 mr-2"></i>OpenAI估值逼近千亿美元</li>
<li><i class="fas fa-trophy text-purple-500 mr-2"></i>GPT-4达到人类专家水平</li>
</ul>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-red-600">Jones的警告</h4>
<ul class="space-y-2 text-sm">
<li><i class="fas fa-exclamation-triangle text-red-500 mr-2"></i>AI已"钙化"在单一架构</li>
<li><i class="fas fa-eye-slash text-orange-500 mr-2"></i>研究人员对突破视而不见</li>
<li><i class="fas fa-balance-scale text-gray-500 mr-2"></i>"利用-探索"严重失衡</li>
</ul>
</div>
</div>
<p class="leading-relaxed mb-4">
Jones的"死胡同"论断发布于一个极具讽刺意味的时间节点。2024-2025年间,AI行业达到历史巅峰,但他却在此刻发出了刺耳的警告:<strong>当前AI已经"钙化"(calcified)在单一架构方法周围,可能使研究人员对下一个重大突破视而不见</strong>
<a href="https://venturebeat.com/technology/sakana-ais-cto-says-hes-absolutely-sick-of-transformers-the-tech-that-powers" class="citation">[1]</a>。
</p>
</div>
<!-- Jagged Intelligence -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">1.1.3 "锯齿状智能"现象的本质揭示</h3>
<div class="bg-gradient-to-r from-orange-50 to-blue-50 p-6 rounded-lg mb-6">
<h4 class="text-lg font-semibold mb-4">什么是"锯齿状智能"?</h4>
<div class="grid md:grid-cols-2 gap-6">
<div>
<h5 class="font-semibold text-green-600 mb-2">天才表现</h5>
<ul class="text-sm space-y-1">
<li>• 撰写学术论文</li>
<li>• 生成复杂代码</li>
<li>• 专业领域问题解决</li>
</ul>
</div>
<div>
<h5 class="font-semibold text-red-600 mb-2">白痴错误</h5>
<ul class="text-sm space-y-1">
<li>• 多步算术失败</li>
<li>• 基础逻辑谜题错误</li>
<li>• 简单推理任务失误</li>
</ul>
</div>
</div>
</div>
<p class="leading-relaxed">
<strong>GPT-4所展现的"天才与白痴并存"的锯齿状智能(jagged intelligence),成为Jones批判的经验锚点</strong>
<a href="https://www.xiaoyuzhoufm.com/episode/69742925ef1cf272a7246aa7" class="citation">[3]</a>
<a href="https://eu.36kr.com/en/p/3643193251516297" class="citation">[4]</a>。这种现象暴露了Transformer架构的根本性局限:当任务恰好落在训练数据的密集覆盖区时,模型表现出"天才";当任务需要组合泛化或多步推理时,"白痴"行为暴露了其缺乏真正的理解能力。
</p>
</div>
</section>
<div class="section-divider max-w-6xl mx-auto"></div>
<!-- Section 2: Transformer Dilemma -->
<section id="section2" class="max-w-6xl mx-auto px-6 py-16">
<div class="mb-12">
<h2 class="text-4xl font-black serif mb-6">Transformer架构的深层困境</h2>
<div class="w-24 h-1 bg-gradient-to-r from-orange-500 to-blue-500 mb-8"></div>
</div>
<!-- Design Philosophy -->
<div class="mb-12">
<h3 class="text-2xl font-bold mb-6 serif">2.1 设计哲学与核心机制</h3>
<div class="grid md:grid-cols-2 gap-8 mb-8">
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-blue-600">并行化优势</h4>
<div class="space-y-3 text-sm">
<div class="flex items-center">
<i class="fas fa-clock text-green-500 mr-3"></i>
<span>训练时间从数周缩短至数天</span>
</div>
<div class="flex items-center">
<i class="fas fa-expand-arrows-alt text-blue-500 mr-3"></i>
<span>模型规模扩展不受序列长度限制</span>
</div>
<div class="flex items-center">
<i class="fas fa-microchip text-purple-500 mr-3"></i>
<span>GPU利用率达到80%以上</span>
</div>
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-red-600">静态性局限</h4>
<div class="space-y-3 text-sm">
<div class="flex items-center">
<i class="fas fa-lock text-red-500 mr-3"></i>
<span>"一次性"处理模式</span>
</div>
<div class="flex items-center">
<i class="fas fa-ban text-orange-500 mr-3"></i>
<span>无法暂停、反思或回溯</span>
</div>
<div class="flex items-center">
<i class="fas fa-equals text-gray-500 mr-3"></i>
<span>所有问题接受同等深度计算</span>
</div>
</div>
</div>
</div>
<div class="highlight-box">
<p class="text-lg font-medium mb-4">
<strong>标准Transformer作为"massive, pre-calculated mathematical function"(巨大的预计算数学函数),其"推理深度"精确受限于模型层数</strong>
<a href="https://www.linkedin.com/pulse/beyond-transformer-why-ai-needs-time-think-murat-durmus-oipve" class="citation">[5]</a>。
</p>
<p class="leading-relaxed">
这种"one-size-fits-all"的计算模式与生物智能的动态适应性形成尖锐对比——Transformer"don't actually 'think'. They match patterns"(实际上并不"思考",而是匹配模式)<a href="https://www.linkedin.com/pulse/beyond-transformer-why-ai-needs-time-think-murat-durmus-oipve" class="citation">[5]</a>。
</p>
</div>
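为直观说明"推理深度受限于层数",下面给出一个极简的Python草图(纯属示意的玩具模型,与任何真实Transformer实现无关):无论输入多复杂,前向传播都恰好执行固定次数的变换。

```python
import numpy as np

# 玩具示意:层数固定的前向网络(假设性草图,非真实Transformer)
# 无论输入难易,计算深度都恰好等于 len(layers),运行时无法增减
rng = np.random.default_rng(0)
d = 8
layers = [rng.standard_normal((d, d)) * 0.1 for _ in range(4)]  # 4 层固定权重

def forward(x):
    """对输入做固定深度的逐层变换,返回输出与实际执行的层数。"""
    steps = 0
    for W in layers:           # 深度 = 层数,在构建模型时即已固定
        x = np.tanh(W @ x)     # 每层执行一次同等开销的变换
        steps += 1
    return x, steps

_, easy_steps = forward(np.ones(d))               # "简单"输入
_, hard_steps = forward(rng.standard_normal(d))   # "复杂"输入
# 二者消耗完全相同的计算深度:即正文所说的"one-size-fits-all"
```

无论输入难易,`forward`都执行同样多的变换步骤,这正是静态计算模式与生物智能动态适应性的差异所在。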
</div>
<!-- Technical Roots -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">2.2 "锯齿状智能"的技术根源</h3>
<!-- Technical Analysis Chart -->
<div class="chart-container">
<h4 class="text-xl font-semibold mb-6">Transformer能力分布分析</h4>
<div class="mermaid-container">
<div class="mermaid-controls">
<button class="mermaid-control-btn zoom-in" title="放大">
<i class="fas fa-search-plus"></i>
</button>
<button class="mermaid-control-btn zoom-out" title="缩小">
<i class="fas fa-search-minus"></i>
</button>
<button class="mermaid-control-btn reset-zoom" title="重置">
<i class="fas fa-expand-arrows-alt"></i>
</button>
<button class="mermaid-control-btn fullscreen" title="全屏查看">
<i class="fas fa-expand"></i>
</button>
</div>
<div class="mermaid">
graph TD
A["Transformer Architecture"] --> B["Pattern Matching Ability"]
A --> C["Lack of True Reasoning"]
B --> D["High Performance on Training Distribution"]
B --> E["Superhuman Parroting"]
C --> F["Compositionality Failure"]
C --> G["No Planning Capability"]
D --> H["Jagged Intelligence"]
E --> H
F --> H
G --> H
H --> I["Genius + Idiot Behavior"]
</div>
</div>
</div>
<div class="grid md:grid-cols-3 gap-6 mt-8">
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="font-semibold text-red-600 mb-3">缺乏规划能力</h4>
<p class="text-sm">无法分解复杂目标为子目标序列,导致多步推理任务失败</p>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="font-semibold text-orange-600 mb-3">缺乏一致性检查</h4>
<p class="text-sm">无法识别自身输出的逻辑矛盾,产生自相矛盾的答案</p>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="font-semibold text-yellow-600 mb-3">缺乏因果理解</h4>
<p class="text-sm">混淆相关性与因果性,无法进行反事实思考</p>
</div>
</div>
</div>
<!-- Scaling Law Effects -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">2.3 Scaling Law的双刃剑效应</h3>
<div class="bg-gradient-to-r from-green-50 to-red-50 p-8 rounded-lg">
<div class="grid md:grid-cols-2 gap-8">
<div>
<h4 class="text-lg font-semibold text-green-600 mb-4">可预测性红利</h4>
<ul class="space-y-2 text-sm">
<li><i class="fas fa-chart-line text-green-500 mr-2"></i>性能与计算量的幂律关系</li>
<li><i class="fas fa-bullseye text-green-500 mr-2"></i>精确规划资源投入</li>
<li><i class="fas fa-dollar-sign text-green-500 mr-2"></i>降低创新风险,吸引资本</li>
</ul>
</div>
<div>
<h4 class="text-lg font-semibold text-red-600 mb-4">创新氧气耗竭</h4>
<ul class="space-y-2 text-sm">
<li><i class="fas fa-skull text-red-500 mr-2"></i>"扩展吸干了房间里所有氧气"</li>
<li><i class="fas fa-eye-slash text-red-500 mr-2"></i>架构创新研究边缘化</li>
<li><i class="fas fa-lock text-red-500 mr-2"></i>人才锁定,探索意愿降低</li>
</ul>
</div>
</div>
</div>
<p class="leading-relaxed mt-6">
Jones与Ilya Sutskever等核心研究者共同指出,<strong>"扩展时代的一个后果是,扩展吸干了房间里的所有氧气"</strong>
<a href="https://cloud.tencent.com/developer/article/2623933" class="citation">[6]</a>。这一隐喻揭示了创新生态的系统性危机:当70%的顶会论文集中于Transformer微调时,架构创新研究被严重边缘化<a href="https://www.xiaoyuzhoufm.com/episode/69742925ef1cf272a7246aa7" class="citation">[3]</a>。
</p>
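幂律形式的Scaling Law可以直接算一遍来体会其"可预测但收益递减"的双重性(下列系数为虚构示意值,并非任何实测拟合):

```python
# Scaling Law 幂律示意:L(C) = a * C**(-b),a、b 为虚构的示意系数
a, b = 10.0, 0.1

def loss(compute):
    """损失随计算量按幂律平滑下降,因而回报可被提前规划。"""
    return a * compute ** (-b)

# 计算量每翻 10 倍,损失的绝对降幅越来越小:边际收益递减
l1, l2, l3 = loss(1e3), loss(1e4), loss(1e5)
gain_first = l1 - l2    # 第一次 10 倍扩展带来的降幅
gain_second = l2 - l3   # 第二次 10 倍扩展带来的降幅
```

平滑曲线既是吸引资本的"可预测性红利",也意味着同样的投入换来的改进逐步缩水。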
</div>
</section>
<div class="section-divider max-w-6xl mx-auto"></div>
<!-- Section 3: CTM Architecture -->
<section id="section3" class="max-w-6xl mx-auto px-6 py-16">
<div class="mb-12">
<h2 class="text-4xl font-black serif mb-6">CTM架构:大脑启发的范式跃迁</h2>
<div class="w-24 h-1 bg-gradient-to-r from-orange-500 to-blue-500 mb-8"></div>
</div>
<!-- Design Principles -->
<div class="mb-12">
<h3 class="text-2xl font-bold mb-6 serif">3.1 设计原理与生物合理性</h3>
<div class="highlight-box">
<h4 class="text-xl font-semibold mb-4">核心创新:时间动态作为计算元素</h4>
<p class="text-lg leading-relaxed mb-4">
<strong>Continuous Thought Machine(CTM)的核心创新在于将时间动态重新确立为计算的基础维度,而非需要消除的序列障碍</strong>。与Transformer将时间空间化(转化为位置编码)不同,CTM引入"内部tick"(internal ticks)概念——模型拥有与数据输入解耦的内部时间维度,可在接收静态输入(如图像)或序列输入时以相同方式"思考"<a href="https://pub.sakana.ai/ctm/" class="citation">[7]</a>
<a href="https://blog.csdn.net/cf2suds8x8f0v/article/details/147967273" class="citation">[8]</a>。
</p>
</div>
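"内部tick与数据输入解耦"的含义可以用下面的玩具循环勾勒(假设性草图,变量与动态均为虚构,仅示意机制):静态输入只读入一次,内部状态却可迭代任意多步。

```python
import numpy as np

# 内部 tick 示意:输入一次性读入,内部状态可演化任意步数(虚构玩具模型)
rng = np.random.default_rng(1)
d = 6
W_in = rng.standard_normal((d, d)) * 0.1    # 输入到状态的映射
W_rec = rng.standard_normal((d, d)) * 0.1   # 状态自身的递归动态

def think(x_static, num_ticks):
    """对同一静态输入执行 num_ticks 个内部 tick,返回状态轨迹。"""
    state = np.zeros(d)
    trajectory = []
    for _ in range(num_ticks):
        state = np.tanh(W_in @ x_static + W_rec @ state)  # 输入不变,状态在演化
        trajectory.append(state.copy())
    return trajectory

x = np.ones(d)
short = think(x, 3)    # 少量思考
long = think(x, 12)    # 同一输入,更长的思考
```

同一张"图像"可以被思考3步也可以被思考12步——思考时长是运行时量,而非架构常量。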
<!-- CTM vs Transformer Comparison -->
<div class="chart-container">
<h4 class="text-xl font-semibold mb-6">CTM vs Transformer 架构对比</h4>
<div class="mermaid-container">
<div class="mermaid-controls">
<button class="mermaid-control-btn zoom-in" title="放大">
<i class="fas fa-search-plus"></i>
</button>
<button class="mermaid-control-btn zoom-out" title="缩小">
<i class="fas fa-search-minus"></i>
</button>
<button class="mermaid-control-btn reset-zoom" title="重置">
<i class="fas fa-expand-arrows-alt"></i>
</button>
<button class="mermaid-control-btn fullscreen" title="全屏查看">
<i class="fas fa-expand"></i>
</button>
</div>
<div class="mermaid">
graph LR
subgraph "Transformer"
T1["Input"] --> T2["Positional Encoding"]
T2 --> T3["Multi-Head Attention"]
T3 --> T4["Feed Forward"]
T4 --> T5["Output"]
end
subgraph "CTM"
C1["Input"] --> C2["Internal Tick"]
C2 --> C3["Neuron-Level Models"]
C3 --> C4["Synapse Model"]
C4 --> C5["Neural Synchronization"]
C5 --> C6["Adaptive Output"]
C3 -.-> C3
C4 -.-> C4
end
style T1 fill:#e8f4fd
style T2 fill:#e8f4fd
style T3 fill:#e8f4fd
style T4 fill:#e8f4fd
style T5 fill:#e8f4fd
style C1 fill:#fff2e8
style C2 fill:#fff2e8
style C3 fill:#fff2e8
style C4 fill:#fff2e8
style C5 fill:#fff2e8
style C6 fill:#fff2e8
</div>
</div>
</div>
<!-- NLM Mechanism -->
<div class="grid md:grid-cols-2 gap-8 mt-8">
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-blue-600">神经元级模型(NLM)</h4>
<div class="space-y-3">
<div class="flex items-start">
<i class="fas fa-key text-blue-500 mt-1 mr-3"></i>
<div>
<strong>私有权重</strong>
<p class="text-sm text-gray-600">每个NLM拥有独特的参数用于响应刺激</p>
</div>
</div>
<div class="flex items-start">
<i class="fas fa-history text-green-500 mt-1 mr-3"></i>
<div>
<strong>历史上下文</strong>
<p class="text-sm text-gray-600">记忆缓冲区存储近期tick的活动</p>
</div>
</div>
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-purple-600">神经同步机制</h4>
<div class="space-y-3">
<div class="flex items-start">
<i class="fas fa-wave-square text-purple-500 mt-1 mr-3"></i>
<div>
<strong>振荡模式</strong>
<p class="text-sm text-gray-600">γ波段同步与特征绑定相关</p>
</div>
</div>
<div class="flex items-start">
<i class="fas fa-network-wired text-orange-500 mt-1 mr-3"></i>
<div>
<strong>群体表征</strong>
<p class="text-sm text-gray-600">同步化模式作为核心表征</p>
</div>
</div>
</div>
</div>
</div>
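"私有权重 + 历史上下文"的组合可粗略勾勒如下(假设性草图,与官方实现细节无关):每个神经元用自己的一套权重处理自身最近 M 个 tick 的激活历史。

```python
import numpy as np

# NLM 示意:每个神经元持有私有权重,作用于长度为 M 的自身历史(虚构草图)
rng = np.random.default_rng(2)
num_neurons, M = 5, 4

private_w = rng.standard_normal((num_neurons, M)) * 0.5  # 私有权重:互不共享
history = np.zeros((num_neurons, M))                     # 每个神经元的记忆缓冲区

def nlm_step(pre_activations):
    """最新前激活入队,各神经元以私有权重汇总自身历史,产生后激活。"""
    global history
    history = np.roll(history, -1, axis=1)
    history[:, -1] = pre_activations
    return np.tanh((private_w * history).sum(axis=1))  # 逐神经元的私有计算

post = nlm_step(np.ones(num_neurons))  # 一个 tick 的后激活,形状为 (num_neurons,)
```

与共享权重的传统激活函数不同,这里每个神经元对相同刺激可给出不同的、依赖自身历史的响应。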
</div>
<!-- Core Innovations -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">3.2 核心创新组件</h3>
<div class="space-y-8">
<div class="bg-gradient-to-r from-blue-50 to-purple-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">分离的内部维度:Tick机制</h4>
<p class="leading-relaxed mb-4">
CTM的"Continuous"(连续)之名源于其<strong>完全在内部"思考维度"上操作的本质</strong>。模型异步处理数据:可在接收输入后执行任意数量的内部tick,每个tick更新所有NLM的状态,而输出仅在模型决定"思考完成"后产生<a href="https://pub.sakana.ai/ctm/" class="citation">[7]</a>
<a href="https://blog.csdn.net/cf2suds8x8f0v/article/details/147967273" class="citation">[8]</a>。
</p>
<div class="bg-white p-4 rounded border-l-4 border-blue-500">
<p class="text-sm italic">
"当CTM被限制在少于完整迷宫追踪所需的思考时间时,它发展出一种策略——跳到可能的未来位置,向后追踪填补间隙,然后再向前跳"<a href="https://www.theneuron.ai/explainer-articles/continuous-thought-machine-explained/" class="citation">[9]</a>
</p>
</div>
</div>
<div class="bg-gradient-to-r from-green-50 to-blue-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">突触模型与U-Net通信骨干</h4>
<p class="leading-relaxed">
CTM的架构包含两个核心可学习组件:<strong>突触模型(synapse model)</strong>和<strong>U-Net通信骨干</strong>。突触模型定义了神经元之间的连接动态,包括信号传递的时间特性(延迟、衰减、易化/压抑)。与Transformer的注意力权重不同,CTM的突触参数是跨tick持续存在的,支持长期依赖的形成和消退<a href="https://pub.sakana.ai/ctm/" class="citation">[7]</a>。
</p>
</div>
</div>
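"同步化模式作为核心表征"的一个可操作近似(此处为示意性简化,并非官方定义的完整形式)是:取各神经元激活轨迹两两之间的内积,得到一个对称的同步矩阵。

```python
import numpy as np

# 神经同步示意:以激活历史的两两内积近似"同步矩阵"(示意性简化)
rng = np.random.default_rng(3)
num_neurons, num_ticks = 4, 10

# 行 = 神经元,列 = 内部 tick:每行是一条随内部时间展开的激活轨迹
activations = np.tanh(rng.standard_normal((num_neurons, num_ticks)))

sync = activations @ activations.T  # sync[i, j] = 神经元 i、j 轨迹的内积
# 对称的同步矩阵可作为读出层(产生输出或注意力查询)的群体表征
```

表征因此落在"神经元如何随时间共同振荡"上,而非某一瞬间的激活快照。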
</div>
<!-- Dynamic Reasoning -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">3.3 动态推理的实现路径</h3>
<div class="grid md:grid-cols-3 gap-6">
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-green-500">
<h4 class="font-semibold text-green-600 mb-3">
<i class="fas fa-tachometer-alt mr-2"></i>自适应计算深度
</h4>
<p class="text-sm mb-3">简单任务快速响应,复杂任务自动延长思考过程</p>
<div class="text-xs text-gray-600">
<strong>优势:</strong>能效优化、响应速度提升
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-blue-500">
<h4 class="font-semibold text-blue-600 mb-3">
<i class="fas fa-route mr-2"></i>多步展开推理
</h4>
<p class="text-sm mb-3">迷宫求解可达150步,展现强大组合泛化能力</p>
<div class="text-xs text-gray-600">
<strong>突破:</strong>6倍规模泛化,远超Transformer
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-purple-500">
<h4 class="font-semibold text-purple-600 mb-3">
<i class="fas fa-brain mr-2"></i>内部状态驱动
</h4>
<p class="text-sm mb-3">思考的中断与恢复,支持长时程推理</p>
<div class="text-xs text-gray-600">
<strong>特性:</strong>内在思考,不依赖语言生成
</div>
</div>
</div>
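"简单任务快速响应、复杂任务延长思考"可以用一个基于置信度的停机草图说明(阈值、增益与动态均为虚构示意):每个 tick 累积一次证据,输出分布足够确定时即停止。

```python
import numpy as np

# 自适应停机示意:tick 循环直到输出"足够确定"或用满预算(虚构玩具)
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def think_until_confident(logits, gain=0.5, threshold=0.9, max_ticks=50):
    """每个 tick 将证据放大一次;最大类别概率过阈值即提前停机。"""
    z = np.array(logits, dtype=float)
    for tick in range(1, max_ticks + 1):
        z = z + gain * z                   # 示意:内部状态逐步累积证据
        if softmax(z).max() >= threshold:
            return tick                    # 证据清晰的问题想得少
    return max_ticks                       # 证据模糊的问题用满思考预算

easy_ticks = think_until_confident([2.0, 0.1, 0.1])   # 类别区分明显
hard_ticks = think_until_confident([1.0, 0.9, 0.8])   # 类别区分模糊
```

计算量随问题难度动态分配,而非像固定层数的网络那样对所有输入一视同仁。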
<!-- Maze Solving Visualization -->
<div class="chart-container mt-8">
<h4 class="text-xl font-semibold mb-6">CTM迷宫求解过程可视化</h4>
<div class="mermaid-container">
<div class="mermaid-controls">
<button class="mermaid-control-btn zoom-in" title="放大">
<i class="fas fa-search-plus"></i>
</button>
<button class="mermaid-control-btn zoom-out" title="缩小">
<i class="fas fa-search-minus"></i>
</button>
<button class="mermaid-control-btn reset-zoom" title="重置">
<i class="fas fa-expand-arrows-alt"></i>
</button>
<button class="mermaid-control-btn fullscreen" title="全屏查看">
<i class="fas fa-expand"></i>
</button>
</div>
<div class="mermaid">
graph LR
A["39×39 Maze<br/>Training"] --> B["99×99 Maze<br/>Testing"]
B --> C["6x Size Generalization"]
A --> D["100 Steps<br/>Training"]
D --> E["600 Steps<br/>Testing"]
E --> F["6x Length Generalization"]
style A fill:#e8f4fd
style B fill:#fff2e8
style C fill:#e8f5e8
style D fill:#e8f4fd
style E fill:#fff2e8
style F fill:#e8f5e8
</div>
</div>
</div>
</div>
</section>
<div class="section-divider max-w-6xl mx-auto"></div>
<!-- Section 4: Technical Comparison -->
<section id="section4" class="max-w-6xl mx-auto px-6 py-16">
<div class="mb-12">
<h2 class="text-4xl font-black serif mb-6">Transformer与CTM的深度技术对比</h2>
<div class="w-24 h-1 bg-gradient-to-r from-orange-500 to-blue-500 mb-8"></div>
</div>
<!-- Architecture Comparison Table -->
<div class="chart-container mb-12">
<h3 class="text-2xl font-bold mb-6 serif">4.1 架构设计范式差异</h3>
<div class="overflow-x-auto">
<table class="w-full text-sm">
<thead>
<tr class="bg-gray-100">
<th class="p-3 text-left font-semibold">维度</th>
<th class="p-3 text-left font-semibold text-blue-600">Transformer</th>
<th class="p-3 text-left font-semibold text-orange-600">CTM</th>
</tr>
</thead>
<tbody>
<tr class="border-b">
<td class="p-3 font-medium">核心计算模式</td>
<td class="p-3">层间并行、层内并行</td>
<td class="p-3">tick间串行、神经元间部分并行</td>
</tr>
<tr class="border-b bg-gray-50">
<td class="p-3 font-medium">时间处理</td>
<td class="p-3">空间化(位置编码)</td>
<td class="p-3">内在化(tick序列)</td>
</tr>
<tr class="border-b">
<td class="p-3 font-medium">深度固定性</td>
<td class="p-3">架构参数(层数)决定</td>
<td class="p-3">运行时自适应</td>
</tr>
<tr class="border-b bg-gray-50">
<td class="p-3 font-medium">批处理友好性</td>
<td class="p-3">极高(相同长度输入可完美批处理)</td>
<td class="p-3">受限(不同输入可能需要不同tick数)</td>
</tr>
<tr class="border-b">
<td class="p-3 font-medium">硬件优化</td>
<td class="p-3">矩阵乘法密集,GPU/TPU高度优化</td>
<td class="p-3">动态稀疏计算,需专用硬件支持</td>
</tr>
</tbody>
</table>
</div>
</div>
<!-- Computational Characteristics -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">4.2 计算特性与效率权衡</h3>
<div class="grid md:grid-cols-2 gap-8">
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-blue-600">训练并行性的丧失</h4>
<div class="space-y-3 text-sm">
<div class="flex items-center">
<i class="fas fa-minus text-red-500 mr-3"></i>
<span>CTM的tick序列依赖迫使顺序计算</span>
</div>
<div class="flex items-center">
<i class="fas fa-minus text-red-500 mr-3"></i>
<span>大规模分布式训练效率降低</span>
</div>
<div class="flex items-center">
<i class="fas fa-plus text-green-500 mr-3"></i>
<span>推理阶段可根据复杂度动态分配计算</span>
</div>
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-orange-600">推理灵活性的获取</h4>
<div class="space-y-3 text-sm">
<div class="flex items-center">
<i class="fas fa-plus text-green-500 mr-3"></i>
<span>自适应计算深度,按需分配资源</span>
</div>
<div class="flex items-center">
<i class="fas fa-plus text-green-500 mr-3"></i>
<span>简单任务快速响应,复杂任务深入思考</span>
</div>
<div class="flex items-center">
<i class="fas fa-plus text-green-500 mr-3"></i>
<span>边缘部署和实时应用优势明显</span>
</div>
</div>
</div>
</div>
</div>
<!-- Capability Boundaries -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">4.3 能力边界与性能表现</h3>
<div class="space-y-8">
<div class="bg-gradient-to-r from-blue-50 to-purple-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">图像分类任务的人类相似性优势</h4>
<p class="leading-relaxed mb-4">
据报告,CTM在ImageNet-1K上取得72.47%的top-1准确率与89.89%的top-5准确率<a href="https://pub.sakana.ai/ctm/" class="citation">[7]</a>,但比原始准确率更值得关注的是其行为特征。与Transformer系的视觉模型(如ViT)相比,CTM展现出"仔细移动其注视点,选择聚焦于最显著特征"的类人视觉策略<a href="https://pub.sakana.ai/ctm/" class="citation">[7]</a>。
</p>
<div class="bg-white p-4 rounded border-l-4 border-blue-500">
<p class="text-sm">
<strong>关键优势:</strong>无需温度缩放或事后调整,展现"近乎完美的校准"——预测概率与实际准确率高度一致<a href="https://www.infoq.cn/news/VpQfr4EHzUu3cMOVRsNY" class="citation">[10]</a>
</p>
</div>
</div>
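"预测概率与实际准确率高度一致"可以用期望校准误差(ECE)来量化;下面是一个极简计算草图(数据为虚构示例,仅演示指标本身):

```python
import numpy as np

# 期望校准误差(ECE)示意:按置信度分桶,对比桶内平均置信度与准确率
def ece(confidences, correct, num_bins=5):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    total, err = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            err += mask.sum() / total * gap   # 按桶内样本占比加权
    return err

# 虚构示例:置信度 0.8 的样本恰好 80% 正确 → 校准近乎完美,ECE 为 0
well_calibrated = ece([0.8] * 5, [1, 1, 1, 1, 0])
# 虚构示例:置信度 0.9 却只有 60% 正确 → 过度自信,ECE 升高
over_confident = ece([0.9] * 5, [1, 1, 1, 0, 0])
```

ECE越接近零,"说有九成把握时确实九成正确"的性质就越成立——这正是上文"近乎完美的校准"所指。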
<div class="bg-gradient-to-r from-green-50 to-blue-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">迷宫导航的序列推理突破</h4>
<p class="leading-relaxed mb-4">
迷宫求解是CTM的旗舰演示任务,在39×39迷宫、路径长度100的训练条件下,CTM成功处理99×99迷宫、路径长度约600的测试案例<a href="https://pub.sakana.ai/ctm/" class="citation">[7]</a>。这种6×的规模泛化远超Transformer的典型表现。
</p>
<div class="grid md:grid-cols-2 gap-4">
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-green-600 mb-2">训练条件</h5>
<ul class="text-sm space-y-1">
<li>• 迷宫尺寸:39×39</li>
<li>• 路径长度:100步</li>
<li>• 直接预测路径步骤</li>
</ul>
</div>
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-blue-600 mb-2">测试表现</h5>
<ul class="text-sm space-y-1">
<li>• 迷宫尺寸:99×99</li>
<li>• 路径长度:约600步</li>
<li>• 6倍规模泛化</li>
</ul>
</div>
</div>
</div>
</div>
</div>
</section>
<div class="section-divider max-w-6xl mx-auto"></div>
<!-- Section 5: Industry Ecosystem -->
<section id="section5" class="max-w-6xl mx-auto px-6 py-16">
<div class="mb-12">
<h2 class="text-4xl font-black serif mb-6">行业生态与创新发展重构</h2>
<div class="w-24 h-1 bg-gradient-to-r from-orange-500 to-blue-500 mb-8"></div>
</div>
<!-- Research Paradigm Transformation -->
<div class="mb-12">
<h3 class="text-2xl font-bold mb-6 serif">5.1 研究范式的转型压力</h3>
<div class="highlight-box">
<h4 class="text-xl font-semibold mb-4">从规模竞赛到架构创新的资源再分配</h4>
<p class="leading-relaxed mb-4">
Jones的警告与行业动态共同指向资源再分配的紧迫性。当前AI研发的资源分布高度失衡:据Jones披露,70%的顶会论文集中于Transformer微调<a href="https://www.xiaoyuzhoufm.com/episode/69742925ef1cf272a7246aa7" class="citation">[3]</a>,架构创新研究被边缘化为"非主流"项目。
</p>
</div>
<!-- Resource Distribution Chart -->
<div class="chart-container mt-8">
<h4 class="text-xl font-semibold mb-6">AI研究资源分布现状</h4>
<div class="mermaid-container">
<div class="mermaid-controls">
<button class="mermaid-control-btn zoom-in" title="放大">
<i class="fas fa-search-plus"></i>
</button>
<button class="mermaid-control-btn zoom-out" title="缩小">
<i class="fas fa-search-minus"></i>
</button>
<button class="mermaid-control-btn reset-zoom" title="重置">
<i class="fas fa-expand-arrows-alt"></i>
</button>
<button class="mermaid-control-btn fullscreen" title="全屏查看">
<i class="fas fa-expand"></i>
</button>
</div>
<div class="mermaid">
graph TB
A["AI Research Resources"] --> B["Transformer Scaling"]
A --> C["Transformer Fine-tuning"]
A --> D["Architecture Innovation"]
A --> E["New Paradigm Exploration"]
B --> B1["70% Resources"]
C --> C1["20% Resources"]
D --> D1["8% Resources"]
E --> E1["2% Resources"]
style B1 fill:#ff6b6b
style C1 fill:#4ecdc4
style D1 fill:#45b7d1
style E1 fill:#96ceb4
style B fill:#ffe0e0
style C fill:#e0f2f1
style D fill:#e3f2fd
style E fill:#e8f5e8
style A fill:#f8f9fa
</div>
</div>
</div>
<!-- Open Source Ecosystem -->
<div class="grid md:grid-cols-3 gap-6 mt-8">
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-blue-500">
<h4 class="font-semibold text-blue-600 mb-3">
<i class="fas fa-code-branch mr-2"></i>开源生态催化
</h4>
<p class="text-sm mb-3">Sakana AI开源发布CTM代码库和模型检查点</p>
<div class="text-xs text-gray-600">
<strong>效应:</strong>降低研究门槛,加速迭代改进
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-green-500">
<h4 class="font-semibold text-green-600 mb-3">
<i class="fas fa-flask mr-2"></i>跨学科融合
</h4>
<p class="text-sm mb-3">神经科学与AI的深度融合新路径</p>
<div class="text-xs text-gray-600">
<strong>价值:</strong>亿万年进化验证的设计原则
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-purple-500">
<h4 class="font-semibold text-purple-600 mb-3">
<i class="fas fa-graduation-cap mr-2"></i>人才培养
</h4>
<p class="text-sm mb-3">新一代研究者在动态神经网络范式下成长</p>
<div class="text-xs text-gray-600">
<strong>目标:</strong>形成范式转换的临界质量
</div>
</div>
</div>
</div>
<!-- Industry Competition -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">5.2 产业竞争格局的潜在演变</h3>
<div class="space-y-8">
<div class="bg-gradient-to-r from-red-50 to-orange-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">现有巨头的路径依赖风险</h4>
<p class="leading-relaxed mb-4">
OpenAI、Google DeepMind、Anthropic等前沿实验室面临严峻的路径依赖困境。其技术栈、人才结构、商业模式都围绕Transformer扩展构建,向新架构的转型成本高昂。更微妙的是认知锁定:组织文化、领导层信念、投资者预期共同强化了"扩展即正途"的叙事。
</p>
<div class="bg-white p-4 rounded border-l-4 border-red-500">
<p class="text-sm">
<strong>2024年末信号:</strong>据报道,Orion、Gemini 2.0、Opus 3.5均面临性能瓶颈,原始Scaling Law可能已触及"收益递减"拐点<a href="https://zhuanlan.zhihu.com/p/6520287813" class="citation">[11]</a>
</p>
</div>
</div>
<div class="bg-gradient-to-r from-blue-50 to-green-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">新兴力量的颠覆性窗口</h4>
<p class="leading-relaxed mb-4">
CTM为新兴AI企业提供了潜在的颠覆性窗口。历史模式表明,架构代际转换是行业格局重塑的关键时机:Google凭借Transformer超越了RNN时代的先驱,OpenAI凭借扩展策略超越了学术机构。
</p>
<div class="grid md:grid-cols-2 gap-4">
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-blue-600 mb-2">Sakana AI优势</h5>
<ul class="text-sm space-y-1">
<li>• Transformer发明者技术权威性</li>
<li>• 小型实验室组织灵活性</li>
<li>• 东京基地的认知距离</li>
</ul>
</div>
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-green-600 mb-2">开源策略</h5>
<ul class="text-sm space-y-1">
<li>• 与封闭巨头形成对比</li>
<li>• 吸引全球贡献者</li>
<li>• 培养早期采用者生态</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<!-- Innovation Oxygen -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">5.3 创新氧气的再供给机制</h3>
<div class="highlight-box">
<h4 class="text-xl font-semibold mb-4">多元化架构探索的激励重建</h4>
<p class="leading-relaxed mb-4">
重建创新氧气需要系统性的激励机制改革。当前学术评价体系的"发表或灭亡"(publish or perish)压力,与高风险、长周期的架构创新存在根本张力。CTM的开发时间线——从概念到公开成果约两年——在AI领域已属"长期"<a href="https://m.thepaper.cn/newsDetail_forward_32408948" class="citation">[12]</a>。
</p>
<p class="leading-relaxed">
Jones希望CTM成为"示范案例",鼓励研究者尝试"看似风险高、但更可能通向下一个大突破的研究方向"<a href="https://news.qq.com/rain/a/20260117A03F1C00" class="citation">[13]</a>——这一愿景需要制度层面的配套改革。
</p>
</div>
<!-- Innovation Culture Transformation -->
<div class="grid md:grid-cols-2 gap-8 mt-8">
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-blue-600">长期主义研究价值</h4>
<div class="space-y-3 text-sm">
<div class="flex items-center">
<i class="fas fa-shield-alt text-blue-500 mr-3"></i>
<span>机构层面:创建"AI贝尔实验室"模式</span>
</div>
<div class="flex items-center">
<i class="fas fa-user-graduate text-green-500 mr-3"></i>
<span>个人层面:抵制"热点追逐"诱惑</span>
</div>
<div class="flex items-center">
<i class="fas fa-trophy text-purple-500 mr-3"></i>
<span>文化层面:重新定义"成功"标准</span>
</div>
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-orange-600">失败容忍度提升</h4>
<div class="space-y-3 text-sm">
<div class="flex items-center">
<i class="fas fa-award text-orange-500 mr-3"></i>
<span>"智能失败"奖励机制</span>
</div>
<div class="flex items-center">
<i class="fas fa-book-open text-red-500 mr-3"></i>
<span>鼓励"负面结果"发表</span>
</div>
<div class="flex items-center">
<i class="fas fa-comments text-blue-500 mr-3"></i>
<span>诚实传达AI发展真实状态</span>
</div>
</div>
</div>
</div>
</div>
</section>
<div class="section-divider max-w-6xl mx-auto"></div>
<!-- Section 6: Social Impact -->
<section id="section6" class="max-w-6xl mx-auto px-6 py-16">
<div class="mb-12">
<h2 class="text-4xl font-black serif mb-6">社会文明层面的深远影响</h2>
<div class="w-24 h-1 bg-gradient-to-r from-orange-500 to-blue-500 mb-8"></div>
</div>
<!-- Cognitive Revolution -->
<div class="mb-12">
<h3 class="text-2xl font-bold mb-6 serif">6.1 智能本质的认知革命</h3>
<div class="highlight-box">
<h4 class="text-xl font-semibold mb-4">从"大数据拟合"到"动态认知"的范式转换</h4>
<p class="text-lg leading-relaxed mb-4">
CTM所代表的架构转向,触及了关于智能本质的深层哲学问题。当前主流AI(以Transformer为核心)可被理解为"压缩即智能":大模型通过预测下一个token,隐式压缩了训练数据的统计规律。
</p>
<p class="leading-relaxed">
<strong>CTM的"动态认知"范式则将智能重新定位于过程而非结果:关键不在于存储多少模式,而在于如何动态构建、操作和修正内部表征</strong>。这与认知科学中的"建构主义"传统——Piaget、Vygotsky等——形成呼应,强调智能作为主动的意义建构过程。
</p>
</div>
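<p class="leading-relaxed mt-4">"压缩即智能"的最小化示意可以用一个字符级bigram统计模型表达:通过统计"下一个字符"的频次来做预测,本质上就是压缩语料的统计规律。以下是一个玩具草图,仅为说明性质,与真实大模型的神经网络机制和规模不可同日而语:</p>

```python
# 玩具示意:"压缩即智能"——字符级bigram统计模型。
# 真实大模型以神经网络隐式完成同类压缩,规模与机制都远超于此。
from collections import Counter, defaultdict

def train_bigram(text):
    """统计每个字符之后出现各字符的频次,即"压缩"语料的统计规律。"""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """预测下一个最可能的字符:查表返回频次最高者,未见过则返回None。"""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

corpus = "abab abab abab"
model = train_bigram(corpus)
print(predict_next(model, "a"))  # 语料中'a'之后最常见的是'b'
```

<p class="text-sm text-gray-600 mt-2">预测能力完全来自对训练数据统计结构的存储——这正是"大数据拟合"范式的缩影,也是CTM试图超越的起点。</p>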
<!-- Paradigm Shift Visualization -->
<div class="chart-container mt-8">
<h4 class="text-xl font-semibold mb-6">AI智能范式演进</h4>
<div class="mermaid-container">
<div class="mermaid-controls">
<button class="mermaid-control-btn zoom-in" title="放大">
<i class="fas fa-search-plus"></i>
</button>
<button class="mermaid-control-btn zoom-out" title="缩小">
<i class="fas fa-search-minus"></i>
</button>
<button class="mermaid-control-btn reset-zoom" title="重置">
<i class="fas fa-expand-arrows-alt"></i>
</button>
<button class="mermaid-control-btn fullscreen" title="全屏查看">
<i class="fas fa-expand"></i>
</button>
</div>
<div class="mermaid">
graph LR
subgraph "Traditional AI"
A1["Symbolic AI"] --> A2["Expert Systems"]
A2 --> A3["Machine Learning"]
A3 --> A4["Deep Learning"]
end
subgraph "Current Paradigm"
B1["Big Data Fitting"] --> B2["Transformer Scaling"]
B2 --> B3["Pattern Compression"]
end
subgraph "Emerging Paradigm"
C1["Dynamic Cognition"] --> C2["CTM Architecture"]
C2 --> C3["Constructive Process"]
end
A4 --> B1
B3 --> C1
style A1 fill:#e8f4fd
style A2 fill:#e8f4fd
style A3 fill:#e8f4fd
style A4 fill:#e8f4fd
style B1 fill:#fff2e8
style B2 fill:#fff2e8
style B3 fill:#fff2e8
style C1 fill:#e8f5e8
style C2 fill:#e8f5e8
style C3 fill:#e8f5e8
</div>
</div>
</div>
<!-- Time Dimension Philosophy -->
<div class="grid md:grid-cols-2 gap-8 mt-8">
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-blue-600">时间维度的本体论地位</h4>
<p class="leading-relaxed mb-4">
CTM将时间从实现细节提升为本体论要素,这一立场与哲学传统中的多种时间理论形成对话。伯格森的"绵延"(durée)概念强调意识的时间性不可还原为空间化测量。
</p>
<div class="bg-blue-50 p-3 rounded text-sm">
<strong>工程实现:</strong>CTM的tick机制可被解读为"主观时间"的人工形式——与物理时间解耦,由系统自身的动力学定义。
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md">
<h4 class="text-lg font-semibold mb-4 text-green-600">生物智能边界重构</h4>
<p class="leading-relaxed mb-4">
CTM的生物启发性引发了关于"生物相似性"与"智能"关系的深层问题。生物智能的某些特征(时间动态、神经同步)可能是智能的必要条件,而非可随意取舍的实现选择。
</p>
<div class="bg-green-50 p-3 rounded text-sm">
<strong>评价标准:</strong>需要开发"架构中性"的评估框架,不假设特定计算模式,捕捉扩展性之外的维度。
</div>
</div>
</div>
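<p class="leading-relaxed mt-4">上文所述tick机制与物理时间的解耦,可以用一个假设性的极简循环示意:内部"思考步"的数量由系统自身动力学(是否收敛)决定,而非由外部输入步数决定。以下代码纯属本文虚构的玩具模型,并非Sakana AI的真实实现:</p>

```python
# 假设性示意:内部tick与外部输入解耦——不同输入消耗不同数量的内部思考步。
# 函数、更新规则与阈值均为本文虚构,仅说明"主观时间由系统自身动力学定义"。

def think(x, max_ticks=50, eps=1e-3):
    """对单个输入x反复做内部状态更新,直到状态近似收敛或达到tick上限。"""
    state = 0.0
    for tick in range(1, max_ticks + 1):
        new_state = 0.5 * state + 0.5 * x  # 玩具式更新:状态向输入"吸附"
        if abs(new_state - state) < eps:   # 动力学收敛 => 该输入的"主观时间"结束
            return new_state, tick
        state = new_state
    return state, max_ticks

# 量值小的输入很快收敛,量值大的输入(此处仅以此模拟"难度")消耗更多tick
_, t_easy = think(0.01)
_, t_hard = think(100.0)
print(t_easy, t_hard)  # 收敛所需tick数不同:内部时间与外部步数解耦
```

<p class="text-sm text-gray-600 mt-2">同一个前向接口,对不同输入花费不同的内部计算量——这与Transformer固定深度的单次前向形成对照。</p>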
</div>
<!-- AGI Path Recalibration -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">6.2 AGI发展路径的重新校准</h3>
<div class="space-y-8">
<div class="bg-gradient-to-r from-purple-50 to-blue-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">技术乐观主义与方向怀疑主义的平衡</h4>
<p class="leading-relaxed mb-4">
Jones的立场代表了AI研究中的"方向怀疑主义"声音——对当前主流路径的根本质疑。当前讨论被技术乐观主义主导:Sam Altman预测2026年AGI,Dario Amodei预测五年内半数入门级白领工作自动化<a href="https://zhuanlan.zhihu.com/p/6520287813" class="citation">[11]</a>。
</p>
<div class="bg-white p-4 rounded border-l-4 border-purple-500">
<p class="text-sm">
<strong>平衡关键:</strong>区分"能力扩展"与"范式转换"。承认当前路径的局部有效性,同时为其终极局限保持开放,是负责任的创新态度。
</p>
</div>
</div>
<div class="bg-gradient-to-r from-orange-50 to-yellow-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">多路径探索的冗余价值</h4>
<p class="leading-relaxed mb-4">
从投资组合的角度,当未来高度不确定时,分散投资比集中押注更优。AGI的实现路径存在深刻的不确定性:我们不知道Scaling Law的极限、不知道架构创新的潜力、不知道生物启发的价值。
</p>
<div class="grid md:grid-cols-2 gap-4">
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-orange-600 mb-2">集中化压力</h5>
<ul class="text-sm space-y-1">
<li>• 网络效应</li>
<li>• 人才聚集</li>
<li>• 规模经济</li>
</ul>
</div>
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-green-600 mb-2">多元化价值</h5>
<ul class="text-sm space-y-1">
<li>• 风险分散</li>
<li>• 系统性对冲</li>
<li>• 创新冗余</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<!-- Human Agency Challenge -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">6.3 人类主体性的存续挑战</h3>
<div class="grid md:grid-cols-3 gap-6">
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-red-500">
<h4 class="font-semibold text-red-600 mb-3">
<i class="fas fa-brain mr-2"></i>认知外包深化
</h4>
<p class="text-sm mb-3">将原本由人类执行的认知任务委托给AI系统</p>
<div class="text-xs text-gray-600">
<strong>挑战:</strong>守护批判性思维,防止过度信任
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-orange-500">
<h4 class="font-semibold text-orange-600 mb-3">
<i class="fas fa-briefcase mr-2"></i>劳动价值冲击
</h4>
<p class="text-sm mb-3">经济价值创造与人类劳动投入脱钩</p>
<div class="text-xs text-gray-600">
<strong>影响:</strong>"创造性"和"分析性"工作价值被侵蚀
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-green-500">
<h4 class="font-semibold text-green-600 mb-3">
<i class="fas fa-handshake mr-2"></i>人机协作新范式
</h4>
<p class="text-sm mb-3">CTM的可解释性支持真正的"混合智能"</p>
<div class="text-xs text-gray-600">
<strong>伦理:</strong>明确责任分配,公平贡献认可
</div>
</div>
</div>
<!-- Future of Human-AI Interaction -->
<div class="chart-container mt-8">
<h4 class="text-xl font-semibold mb-6">人机协作演进路径</h4>
<div class="mermaid-container">
<div class="mermaid-controls">
<button class="mermaid-control-btn zoom-in" title="放大">
<i class="fas fa-search-plus"></i>
</button>
<button class="mermaid-control-btn zoom-out" title="缩小">
<i class="fas fa-search-minus"></i>
</button>
<button class="mermaid-control-btn reset-zoom" title="重置">
<i class="fas fa-expand-arrows-alt"></i>
</button>
<button class="mermaid-control-btn fullscreen" title="全屏查看">
<i class="fas fa-expand"></i>
</button>
</div>
<div class="mermaid">
graph TD
A["Current State"] --> B["Tool AI"]
B --> C["Assistant AI"]
C --> D["Collaborative AI"]
D --> E["Hybrid Intelligence"]
A1["Human performs task"] --> B1["AI provides tools"]
B1 --> C1["AI assists in task"]
C1 --> D1["AI collaborates on task"]
D1 --> E1["Human-AI joint cognition"]
style A fill:#ffe0e0
style B fill:#fff2e8
style C fill:#e8f4fd
style D fill:#e8f5e8
style E fill:#f3e5f5
style A1 fill:#ffe0e0
style B1 fill:#fff2e8
style C1 fill:#e8f4fd
style D1 fill:#e8f5e8
style E1 fill:#f3e5f5
</div>
</div>
</div>
</div>
</section>
<div class="section-divider max-w-6xl mx-auto"></div>
<!-- Section 7: Future Outlook -->
<section id="section7" class="max-w-6xl mx-auto px-6 py-16">
<div class="mb-12">
<h2 class="text-4xl font-black serif mb-6">未来展望与战略启示</h2>
<div class="w-24 h-1 bg-gradient-to-r from-orange-500 to-blue-500 mb-8"></div>
</div>
<!-- Key Variables -->
<div class="mb-12">
<h3 class="text-2xl font-bold mb-6 serif">7.1 技术演进的关键变量</h3>
<div class="space-y-8">
<div class="bg-gradient-to-r from-blue-50 to-green-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">CTM在语言任务上的验证节点</h4>
<p class="leading-relaxed mb-4">
CTM发展的最关键近期变量是<strong>语言任务上的表现验证</strong>。当前公开评估集中于视觉和强化学习领域;语言——Transformer的统治领域——将是真正的试金石。
</p>
<div class="grid md:grid-cols-2 gap-4">
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-blue-600 mb-2">关键问题</h5>
<ul class="text-sm space-y-1">
<li>• 语言建模困惑度竞争力</li>
<li>• 文本连贯性和长程一致性</li>
<li>• 交互式对话效率</li>
</ul>
</div>
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-green-600 mb-2">时间线影响</h5>
<ul class="text-sm space-y-1">
<li>• 积极结果:快速吸引关注</li>
<li>• 负面结果:边缘化风险</li>
<li>• 开放策略:加速验证过程</li>
</ul>
</div>
</div>
</div>
<div class="bg-gradient-to-r from-purple-50 to-blue-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">神经形态硬件的协同进化</h4>
<p class="leading-relaxed mb-4">
CTM的效率挑战可能通过硬件创新得到缓解。<strong>神经形态芯片</strong>——如Intel Loihi、IBM TrueNorth、以及各种研究原型——专为脉冲神经网络和时序动态设计,其特性与CTM的计算模式更匹配。
</p>
<div class="bg-white p-4 rounded border-l-4 border-purple-500">
<p class="text-sm">
<strong>协同进化模式:</strong>GPU推动深度学习爆发 → Transformer优化GPU利用 → CTM需要新一代硬件 → 神经形态技术商业化
</p>
</div>
</div>
<div class="bg-gradient-to-r from-orange-50 to-yellow-50 p-6 rounded-lg">
<h4 class="text-xl font-semibold mb-4">混合架构的可能性空间</h4>
<p class="leading-relaxed mb-4">
最可能的近期发展并非CTM完全替代Transformer,而是<strong>混合架构的探索</strong>。Transformer在并行训练和广泛知识压缩上的优势,与CTM的动态推理和可解释性,可能通过某种形式的整合实现互补。
</p>
<div class="grid md:grid-cols-2 gap-4">
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-orange-600 mb-2">混合模式</h5>
<ul class="text-sm space-y-1">
<li>• Transformer编码器 + CTM解码器</li>
<li>• CTM作为深度扩展模块</li>
<li>• 任务自适应架构选择</li>
</ul>
</div>
<div class="bg-white p-4 rounded">
<h5 class="font-semibold text-yellow-600 mb-2">技术挑战</h5>
<ul class="text-sm space-y-1">
<li>• 计算范式接口设计</li>
<li>• 梯度传播稳定性</li>
<li>• 训练目标协调</li>
</ul>
</div>
</div>
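<p class="leading-relaxed mt-4">上表中"Transformer编码器 + CTM解码器"的组合,可以用一个高度简化的草图表达:编码侧并行地压缩输入,解码侧以可变次数的内部tick迭代"思考"。以下纯Python示意中,`encode`以词向量均值冒充自注意力编码器,`decode_with_ticks`以简单迭代冒充CTM式推理,所有接口均为本文假设,并非任何现有代码库的API:</p>

```python
# 假设性草图:"并行编码 + 迭代思考解码"的混合模式。
# encode()以均值向量代替Transformer编码器的上下文表示;
# decode_with_ticks()以简单迭代代替CTM式的内部tick推理。均为示意,非真实实现。

def encode(tokens, embed):
    """并行地把整段输入压缩为一个上下文向量(以词向量均值代替自注意力)。"""
    vecs = [embed[t] for t in tokens]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def decode_with_ticks(context, n_ticks=5):
    """对上下文向量做n_ticks次内部更新,模拟解码侧的动态思考过程。"""
    state = [0.0] * len(context)
    for _ in range(n_ticks):
        state = [0.5 * s + 0.5 * c for s, c in zip(state, context)]
    return state

embed = {"ai": [1.0, 0.0], "paradigm": [0.0, 1.0]}
ctx = encode(["ai", "paradigm"], embed)
out = decode_with_ticks(ctx)
print(ctx, out)  # 解码状态逐tick逼近上下文表示
```

<p class="text-sm text-gray-600 mt-2">这种划分让编码侧保留并行训练优势,解码侧获得可变深度的推理空间——但两侧的梯度传播与训练目标如何协调,正是上表所列的核心技术挑战。</p>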
</div>
</div>
</div>
<!-- Governance Framework -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">7.2 治理框架的前瞻构建</h3>
<div class="grid md:grid-cols-3 gap-6">
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-blue-500">
<h4 class="font-semibold text-blue-600 mb-3">
<i class="fas fa-shield-alt mr-2"></i>技术多样性保护
</h4>
<div class="space-y-2 text-sm">
<div class="flex items-start">
<i class="fas fa-dollar-sign text-green-500 mr-2 mt-1"></i>
<span>公共资助的架构探索项目</span>
</div>
<div class="flex items-start">
<i class="fas fa-balance-scale text-blue-500 mr-2 mt-1"></i>
<span>反垄断审查更新</span>
</div>
<div class="flex items-start">
<i class="fas fa-code text-purple-500 mr-2 mt-1"></i>
<span>开源基础设施投资</span>
</div>
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-green-500">
<h4 class="font-semibold text-green-600 mb-3">
<i class="fas fa-umbrella mr-2"></i>风险分布式承担
</h4>
<div class="space-y-2 text-sm">
<div class="flex items-start">
<i class="fas fa-shield text-green-500 mr-2 mt-1"></i>
<span>研究保险的公共提供</span>
</div>
<div class="flex items-start">
<i class="fas fa-share-alt text-blue-500 mr-2 mt-1"></i>
<span>成功收益分享机制</span>
</div>
<div class="flex items-start">
<i class="fas fa-network-wired text-purple-500 mr-2 mt-1"></i>
<span>职业保护网络</span>
</div>
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-purple-500">
<h4 class="font-semibold text-purple-600 mb-3">
<i class="fas fa-globe mr-2"></i>全球协作调节
</h4>
<div class="space-y-2 text-sm">
<div class="flex items-start">
<i class="fas fa-handshake text-blue-500 mr-2 mt-1"></i>
<span>协作-竞争平衡</span>
</div>
<div class="flex items-start">
<i class="fas fa-code-branch text-green-500 mr-2 mt-1"></i>
<span>"开放核心"模式</span>
</div>
<div class="flex items-start">
<i class="fas fa-flag text-orange-500 mr-2 mt-1"></i>
<span>多边合作机制</span>
</div>
</div>
</div>
</div>
</div>
<!-- Civilization-level Decisions -->
<div class="mt-12">
<h3 class="text-2xl font-bold mb-6 serif">7.3 文明级决策的紧迫性</h3>
<div class="highlight-box">
<h4 class="text-xl font-semibold mb-4">"错误道路狂奔"的止损时点判断</h4>
<p class="text-lg leading-relaxed mb-4">
Jones的警告最终指向一个文明级的决策问题:<strong>何时承认当前路径的局限性,并承担转向的成本?</strong>这一判断的困难在于:我们永远无法确定替代路径是否更优,直到它被充分验证;但等到验证完成,路径锁定可能已无法打破。
</p>
</div>
<!-- Decision Framework -->
<div class="chart-container mt-8">
<h4 class="text-xl font-semibold mb-6">止损决策信号框架</h4>
<div class="mermaid-container">
<div class="mermaid-controls">
<button class="mermaid-control-btn zoom-in" title="放大">
<i class="fas fa-search-plus"></i>
</button>
<button class="mermaid-control-btn zoom-out" title="缩小">
<i class="fas fa-search-minus"></i>
</button>
<button class="mermaid-control-btn reset-zoom" title="重置">
<i class="fas fa-expand-arrows-alt"></i>
</button>
<button class="mermaid-control-btn fullscreen" title="全屏查看">
<i class="fas fa-expand"></i>
</button>
</div>
<div class="mermaid">
graph TD
A["Current Path Assessment"] --> B["Signal Detection"]
B --> C["Decision Framework"]
B --> D["Marginal Returns Decline"]
B --> E["Alternative Validation"]
B --> F["Social Cost Accumulation"]
D --> D1["Performance plateau"]
D --> D2["Cost-benefit ratio worsening"]
E --> E1["New architecture shows promise"]
E --> E2["Critical benchmarks achieved"]
F --> F1["Energy consumption concerns"]
F --> F2["Innovation ecosystem damage"]
C --> G["Continue Current Path"]
C --> H["Explore Alternatives"]
C --> I["Dual-track Strategy"]
style A fill:#e8f4fd
style B fill:#fff2e8
style C fill:#e8f5e8
style D fill:#f3e5f5
style E fill:#e8f4fd
style F fill:#fff2e8
style G fill:#ffe0e0
style H fill:#e0f2f1
style I fill:#e3f2fd
</div>
</div>
</div>
<!-- Strategic Implications -->
<div class="grid md:grid-cols-2 gap-8 mt-8">
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-red-500">
<h4 class="font-semibold text-red-600 mb-3">范式转换成本评估</h4>
<p class="text-sm mb-3">既有投资的沉没、技能的过时、组织的重组</p>
<div class="text-xs text-gray-600">
<strong>挑战:</strong>转换期间的性能下降、社会适应成本
</div>
</div>
<div class="bg-white p-6 rounded-lg shadow-md border-l-4 border-green-500">
<h4 class="font-semibold text-green-600 mb-3">长期收益潜力</h4>
<p class="text-sm mb-3">新架构的能力上限、效率优势、可解释性改善</p>
<div class="text-xs text-gray-600">
<strong>价值:</strong>创新生态健康、技术发展多样性
</div>
</div>
</div>
<!-- Existential Question -->
<div class="bg-gradient-to-r from-purple-50 to-blue-50 p-8 rounded-lg mt-8">
<h4 class="text-xl font-semibold mb-4">人类在智能进化中的角色定位</h4>
<p class="text-lg leading-relaxed mb-4">
最终,CTM与Transformer的范式之争,折射出更深层的存在性问题:<strong>人类希望在智能进化中扮演什么角色?</strong>是被动接受技术演化的结果,还是主动塑造其方向?是将智能视为可工程化的目标函数优化问题,还是承认其内在的不可还原性?
</p>
<div class="bg-white p-4 rounded border-l-4 border-purple-500">
<p class="text-sm italic">
Jones的CTM项目代表了一种主动塑造的尝试——通过生物启发的架构设计,将人类的认知特性(时间性、过程性、适应性)嵌入AI系统。这一选择,或许比任何具体的技术决策都更为根本。
</p>
</div>
</div>
</div>
</section>
<!-- Footer -->
<footer class="bg-gray-900 text-white py-12 mt-16">
<div class="max-w-6xl mx-auto px-6">
<div class="grid md:grid-cols-3 gap-8">
<div>
<h3 class="text-xl font-bold mb-4 serif">核心洞察</h3>
<p class="text-sm text-gray-300 leading-relaxed">
从Transformer的发明者到其最严厉的批评者,Llion Jones的转变揭示了AI发展深层的方向性危机,也为我们提供了重新审视智能本质的契机。
</p>
</div>
<div>
<h3 class="text-xl font-bold mb-4 serif">关键参考文献</h3>
<div class="space-y-2 text-sm text-gray-300">
<div>
<a href="https://venturebeat.com/technology/sakana-ais-cto-says-hes-absolutely-sick-of-transformers-the-tech-that-powers" class="citation hover:text-white">[1] VentureBeat: Sakana AI CTO on Transformers</a>
</div>
<div>
<a href="https://pub.sakana.ai/ctm/" class="citation hover:text-white">[2] Sakana AI: Continuous Thought Machine</a>
</div>
<div>
<a href="https://www.xiaoyuzhoufm.com/episode/69742925ef1cf272a7246aa7" class="citation hover:text-white">[3] 小宇宙播客: AI范式革命</a>
</div>
</div>
</div>
<div>
<h3 class="text-xl font-bold mb-4 serif">关于本文</h3>
<p class="text-sm text-gray-300 leading-relaxed">
本文基于公开资料和技术分析,探讨了AI发展中的范式转换问题。所有数据和引用均来自可信的学术和行业来源。
</p>
</div>
</div>
<div class="border-t border-gray-700 mt-8 pt-8 text-center text-sm text-gray-400">
<p>© 2025 AI范式革命研究报告. 基于公开资料整理分析.</p>
</div>
</div>
</footer>
</main>
<!-- Mobile TOC Toggle -->
<button id="tocToggle" class="lg:hidden fixed top-4 left-4 z-50 bg-white p-3 rounded-full shadow-lg">
<i class="fas fa-bars"></i>
</button>
<script>
// Initialize Mermaid with enhanced styling and contrast
mermaid.initialize({
startOnLoad: true,
theme: 'base',
themeVariables: {
// Primary colors with good contrast
primaryColor: '#ffffff',
primaryTextColor: '#1a1a1a',
primaryBorderColor: '#3498db',
// Secondary colors
secondaryColor: '#f5f5f0',
secondaryTextColor: '#2c3e50',
secondaryBorderColor: '#e67e22',
// Tertiary colors
tertiaryColor: '#e8f4fd',
tertiaryTextColor: '#1a1a1a',
tertiaryBorderColor: '#3498db',
// Background and text
background: '#ffffff',
mainBkg: '#ffffff',
secondaryBkg: '#f5f5f0',
tertiaryBkg: '#e8f4fd',
// Node styling with high contrast
nodeBkg: '#ffffff',
nodeTextColor: '#1a1a1a',
nodeBorder: '#3498db',
// Line and edge colors
lineColor: '#5d6d7e',
edgeLabelBackground: '#ffffff',
// Cluster styling
clusterBkg: '#f5f5f0',
clusterBorder: '#e67e22',
// Title and labels
titleColor: '#1a1a1a',
textColor: '#1a1a1a',
// Specific node type colors for better contrast
cScale0: '#ffffff',
cScale1: '#f5f5f0',
cScale2: '#e8f4fd',
cScale3: '#fff2e8',
cScale4: '#e8f5e8',
cScale5: '#f3e5f5',
// Ensure text is always dark for readability
c0: '#1a1a1a',
c1: '#1a1a1a',
c2: '#1a1a1a',
c3: '#1a1a1a',
c4: '#1a1a1a',
c5: '#1a1a1a'
},
flowchart: {
useMaxWidth: false,
htmlLabels: true,
curve: 'basis',
padding: 20
},
sequence: {
useMaxWidth: false,
wrap: true
},
gantt: {
useMaxWidth: false
}
});
// Initialize Mermaid Controls for zoom and pan
function initializeMermaidControls() {
const containers = document.querySelectorAll('.mermaid-container');
containers.forEach(container => {
const mermaidElement = container.querySelector('.mermaid');
let scale = 1;
let isDragging = false;
let startX, startY, translateX = 0, translateY = 0;
// 触摸相关状态
let isTouch = false;
let touchStartTime = 0;
let initialDistance = 0;
let initialScale = 1;
let isPinching = false;
// Zoom controls
const zoomInBtn = container.querySelector('.zoom-in');
const zoomOutBtn = container.querySelector('.zoom-out');
const resetBtn = container.querySelector('.reset-zoom');
const fullscreenBtn = container.querySelector('.fullscreen');
function updateTransform() {
mermaidElement.style.transform = `translate(${translateX}px, ${translateY}px) scale(${scale})`;
if (scale > 1) {
container.classList.add('zoomed');
} else {
container.classList.remove('zoomed');
}
mermaidElement.style.cursor = isDragging ? 'grabbing' : 'grab';
}
if (zoomInBtn) {
zoomInBtn.addEventListener('click', () => {
scale = Math.min(scale * 1.25, 4);
updateTransform();
});
}
if (zoomOutBtn) {
zoomOutBtn.addEventListener('click', () => {
scale = Math.max(scale / 1.25, 0.3);
if (scale <= 1) {
translateX = 0;
translateY = 0;
}
updateTransform();
});
}
if (resetBtn) {
resetBtn.addEventListener('click', () => {
scale = 1;
translateX = 0;
translateY = 0;
updateTransform();
});
}
if (fullscreenBtn) {
fullscreenBtn.addEventListener('click', () => {
if (container.requestFullscreen) {
container.requestFullscreen();
} else if (container.webkitRequestFullscreen) {
container.webkitRequestFullscreen();
} else if (container.msRequestFullscreen) {
container.msRequestFullscreen();
}
});
}
// Mouse Events
mermaidElement.addEventListener('mousedown', (e) => {
if (isTouch) return; // 如果是触摸设备,忽略鼠标事件
isDragging = true;
startX = e.clientX - translateX;
startY = e.clientY - translateY;
mermaidElement.style.cursor = 'grabbing';
updateTransform();
e.preventDefault();
});
document.addEventListener('mousemove', (e) => {
if (isDragging && !isTouch) {
translateX = e.clientX - startX;
translateY = e.clientY - startY;
updateTransform();
}
});
document.addEventListener('mouseup', () => {
if (isDragging && !isTouch) {
isDragging = false;
mermaidElement.style.cursor = 'grab';
updateTransform();
}
});
document.addEventListener('mouseleave', () => {
if (isDragging && !isTouch) {
isDragging = false;
mermaidElement.style.cursor = 'grab';
updateTransform();
}
});
// 获取两点之间的距离
function getTouchDistance(touch1, touch2) {
return Math.hypot(
touch2.clientX - touch1.clientX,
touch2.clientY - touch1.clientY
);
}
// Touch Events - 触摸事件处理
mermaidElement.addEventListener('touchstart', (e) => {
isTouch = true;
touchStartTime = Date.now();
if (e.touches.length === 1) {
// 单指拖动
isPinching = false;
isDragging = true;
const touch = e.touches[0];
startX = touch.clientX - translateX;
startY = touch.clientY - translateY;
} else if (e.touches.length === 2) {
// 双指缩放
isPinching = true;
isDragging = false;
const touch1 = e.touches[0];
const touch2 = e.touches[1];
initialDistance = getTouchDistance(touch1, touch2);
initialScale = scale;
}
e.preventDefault();
}, { passive: false });
mermaidElement.addEventListener('touchmove', (e) => {
if (e.touches.length === 1 && isDragging && !isPinching) {
// 单指拖动
const touch = e.touches[0];
translateX = touch.clientX - startX;
translateY = touch.clientY - startY;
updateTransform();
} else if (e.touches.length === 2 && isPinching) {
// 双指缩放
const touch1 = e.touches[0];
const touch2 = e.touches[1];
const currentDistance = getTouchDistance(touch1, touch2);
if (initialDistance > 0) {
const newScale = Math.min(Math.max(
initialScale * (currentDistance / initialDistance),
0.3
), 4);
scale = newScale;
updateTransform();
}
}
e.preventDefault();
}, { passive: false });
mermaidElement.addEventListener('touchend', (e) => {
// 重置状态
if (e.touches.length === 0) {
isDragging = false;
isPinching = false;
initialDistance = 0;
// 延迟重置isTouch,避免鼠标事件立即触发
setTimeout(() => {
isTouch = false;
}, 100);
} else if (e.touches.length === 1 && isPinching) {
// 从双指变为单指,切换为拖动模式
isPinching = false;
isDragging = true;
const touch = e.touches[0];
startX = touch.clientX - translateX;
startY = touch.clientY - translateY;
}
updateTransform();
});
mermaidElement.addEventListener('touchcancel', (e) => {
isDragging = false;
isPinching = false;
initialDistance = 0;
setTimeout(() => {
isTouch = false;
}, 100);
updateTransform();
});
// Enhanced wheel zoom with better center point handling
container.addEventListener('wheel', (e) => {
e.preventDefault();
const delta = e.deltaY > 0 ? 0.9 : 1.1;
const newScale = Math.min(Math.max(scale * delta, 0.3), 4);
// Adjust translation to zoom towards center
if (newScale !== scale) {
const scaleDiff = newScale / scale;
translateX = translateX * scaleDiff;
translateY = translateY * scaleDiff;
scale = newScale;
if (scale <= 1) {
translateX = 0;
translateY = 0;
}
updateTransform();
}
});
// Initialize display
updateTransform();
});
}
// Call the function to initialize mermaid controls
document.addEventListener('DOMContentLoaded', function() {
initializeMermaidControls();
});
// Mobile TOC Toggle
document.getElementById('tocToggle').addEventListener('click', function() {
const toc = document.querySelector('.toc-fixed');
toc.classList.toggle('open');
});
// Smooth scrolling for anchor links
document.querySelectorAll('a[href^="#"]').forEach(anchor => {
anchor.addEventListener('click', function (e) {
e.preventDefault();
const target = document.querySelector(this.getAttribute('href'));
if (target) {
target.scrollIntoView({
behavior: 'smooth',
block: 'start'
});
}
// Close mobile TOC if open
document.querySelector('.toc-fixed').classList.remove('open');
});
});
// Highlight active section in TOC
const observerOptions = {
rootMargin: '-20% 0px -80% 0px'
};
const observer = new IntersectionObserver((entries) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
// Remove active class from all TOC links
document.querySelectorAll('.toc-fixed a').forEach(link => {
link.classList.remove('bg-blue-100', 'text-blue-800', 'font-semibold');
});
// Add active class to current section
const activeLink = document.querySelector(`.toc-fixed a[href="#${entry.target.id}"]`);
if (activeLink) {
activeLink.classList.add('bg-blue-100', 'text-blue-800', 'font-semibold');
}
}
});
}, observerOptions);
// Observe all sections
document.querySelectorAll('section[id]').forEach(section => {
observer.observe(section);
});
// Close mobile TOC when clicking outside
document.addEventListener('click', function(e) {
const toc = document.querySelector('.toc-fixed');
const toggle = document.getElementById('tocToggle');
if (!toc.contains(e.target) && !toggle.contains(e.target)) {
toc.classList.remove('open');
}
});
</script>
</body></html>