Large Language Models (LLMs) have shown promise in automating code generation, yet they struggle with complex, multi-file projects due to context limitations. We propose a novel context engineering workflow combining multiple AI components: an Intent Translator (GPT-5), Elicit-powered semantic literature retrieval, NotebookLM-based document synthesis, and a Claude Code multi-agent system. Our approach leverages intent clarification, retrieval-augmented generation, and specialized sub-agents to significantly improve the accuracy and reliability of code assistants in real-world repositories.
- **Intent Translator (GPT-5):** Clarifies user requirements and translates them into structured task specifications for the multi-agent system.
- **Elicit-powered retrieval:** Performs semantic search over academic papers, documentation, and Q&A resources to inject domain knowledge.
- **NotebookLM synthesis:** Creates concise summaries of retrieved materials and answers follow-up questions for deeper understanding.
- **Claude Code multi-agent system:** Orchestrates specialized sub-agents (planner, coder, tester, reviewer) backed by a vector database of code context.
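The four stages above can be sketched as a simple pipeline. This is a minimal illustrative mock-up, not the system's actual API: all class and function names are hypothetical, keyword overlap stands in for semantic (embedding-based) retrieval, and string concatenation stands in for LLM-driven synthesis and agent calls.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """Structured task specification produced by the Intent Translator."""
    goal: str
    context_docs: list[str] = field(default_factory=list)

def translate_intent(user_request: str) -> TaskSpec:
    """Stage 1 (Intent Translator): clarify the request into a TaskSpec.
    In the real system this step would call an LLM such as GPT-5."""
    return TaskSpec(goal=user_request.strip())

def retrieve_knowledge(spec: TaskSpec, corpus: dict[str, str]) -> list[str]:
    """Stage 2 (Elicit-style semantic retrieval): rank corpus documents.
    Naive keyword overlap is a stand-in for embedding similarity."""
    goal_terms = set(spec.goal.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(goal_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:2]]  # top-k, here k=2

def synthesize(doc_ids: list[str], corpus: dict[str, str]) -> str:
    """Stage 3 (NotebookLM-style synthesis): condense retrieved
    documents into one context summary injected downstream."""
    return " | ".join(corpus[d] for d in doc_ids)

def run_agents(spec: TaskSpec, summary: str) -> dict[str, str]:
    """Stage 4 (multi-agent orchestration): each sub-agent receives
    the task spec plus the synthesized context summary."""
    roles = ["planner", "coder", "tester", "reviewer"]
    return {role: f"{role} handles '{spec.goal}' with context: {summary}"
            for role in roles}

def pipeline(user_request: str, corpus: dict[str, str]) -> dict[str, str]:
    """End-to-end: intent -> retrieval -> synthesis -> multi-agent."""
    spec = translate_intent(user_request)
    spec.context_docs = retrieve_knowledge(spec, corpus)
    summary = synthesize(spec.context_docs, corpus)
    return run_agents(spec, summary)
```

The key design point the sketch preserves is that context flows forward: retrieval and synthesis happen *before* any sub-agent runs, so every agent sees the same targeted context injection rather than querying the corpus independently.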
| Framework | Approach | Success Rate | Key Advantage |
|---|---|---|---|
| Our System | Context engineering + multi-agent | 68.2% | Targeted context injection |
| CodePlan | Multi-step planning | 45.3% | Structured approach |
| MASAI | Modular architecture | 28.3% | Specialized sub-agents |
| HyperAgent | Team of agents | 52.7% | Human-like workflow |