### Key Points
- **Overview**: MindSearch is an open-source AI search engine framework developed by the InternLM team at Shanghai AI Laboratory, designed to mimic human cognitive processes for web information seeking and integration. It leverages large language models (LLMs) in a multi-agent setup to handle complex queries efficiently.
- **Performance**: Evaluations show it matches or exceeds proprietary systems such as Perplexity.ai Pro in depth, breadth, and accuracy of responses; human evaluators preferred responses from MindSearch running the open-source InternLM2.5-7B over those from ChatGPT-Web and Perplexity.ai Pro.
- **Updates as of November 2025**: Recent developments include refactoring for better concurrency using Lagent v0.5, enhanced UI for simultaneous multi-query searches, and deployment on platforms like Puyu with public demos. The framework continues to evolve with ongoing pull requests addressing features like GPT API integration and dependency updates.
- **Accessibility**: It supports both closed-source LLMs (e.g., GPT-4) and open-source ones (optimized for InternLM2.5 series), and can be deployed locally or via various UIs, making it versatile for personal or custom search engines.
- **Potential Limitations**: While highly efficient, performance depends on the underlying LLM and search engine API; users may need API keys for premium search options, and it's best suited for knowledge-intensive queries rather than real-time volatile data.
### What is MindSearch?
MindSearch addresses challenges in AI-driven web search, such as handling complex queries, managing noise from multiple pages, and overcoming LLM context limits. It decomposes queries into sub-problems using a dynamic graph structure, enabling parallel information retrieval from hundreds of web pages in minutes. This makes it a competitive alternative to commercial AI search tools, with transparent reasoning paths that build user trust.
### Core Features
- **Multi-Agent Architecture**: Comprises WebPlanner (for query decomposition and planning) and WebSearcher (for hierarchical retrieval and summarization).
- **Efficiency**: Processes over 300 web pages in about 3 minutes, equivalent to hours of human effort.
- **Customization**: Compatible with various search engines (e.g., DuckDuckGo, Bing) and UIs (React, Gradio, Streamlit).
- **Transparency**: Provides full thinking paths, search keywords, and sources for verifiable responses.
### How to Set Up and Use
Installation is straightforward:

1. Clone the GitHub repository and install dependencies with `pip install -r requirements.txt`.
2. Start the API server, e.g. `python -m mindsearch.app --lang en --model_format internlm_server --search_engine DuckDuckGoSearch`.
3. Launch a frontend: React (with npm setup), Gradio, or Streamlit.

No API keys are needed for DuckDuckGo, but other search engines require keys set as environment variables. Local debugging is available via `python mindsearch/terminal.py`, and for advanced use you can modify the models in `mindsearch/agent/models.py`.
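Once the API server is running, it can be queried programmatically. The sketch below is a minimal example of sending a question to a local deployment; the port (`8002`), endpoint path (`/solve`), and payload key (`inputs`) are assumptions based on a typical local setup, so verify them against `mindsearch/app.py` in the current repository.

```python
import json
import urllib.request

# Assumed local endpoint; check mindsearch/app.py for the actual port and path.
MINDSEARCH_URL = "http://127.0.0.1:8002/solve"


def build_query(question: str) -> bytes:
    """Serialize a question into a JSON request body (assumed schema)."""
    payload = {"inputs": question}
    return json.dumps(payload).encode("utf-8")


def ask(question: str) -> str:
    """POST a question to a locally running MindSearch API and return the raw response."""
    req = urllib.request.Request(
        MINDSEARCH_URL,
        data=build_query(question),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    # Only builds the payload here; ask() requires a running server.
    print(build_query("What is a directed acyclic graph?").decode("utf-8"))
```

In a real deployment the server streams intermediate agent steps, so a production client would read the response incrementally rather than in one call.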
### Comparisons and Performance
MindSearch outperforms baselines on closed-set and open-set QA tasks. In human evaluations based on 100 real-world questions, it scores higher in depth (8.2/10 vs. 7.5 for Perplexity.ai Pro), breadth (8.5/10 vs. 7.8), and accuracy (8.3/10 vs. 7.6). When powered by InternLM2.5-7B, responses are preferred over those from GPT-4o-based systems. It's particularly strong for in-depth knowledge exploration but may vary with the chosen LLM.
---
MindSearch represents a significant advancement in AI-driven search technologies, bridging the gap between traditional search engines and large language models (LLMs) to create a more human-like information processing system. Developed by the InternLM team at Shanghai AI Laboratory, this framework was first introduced in July 2024 through an arXiv preprint and an open-source GitHub repository. By November 2025, it has evolved with key updates, reflecting ongoing community contributions and integrations with newer models like InternLM2.5 series, which are optimized for its multi-agent structure.
At its core, MindSearch tackles three primary challenges in AI search: incomplete retrieval for complex queries, dispersed information across noisy web pages, and LLM context length limitations. Inspired by human cognition, it employs a multi-agent framework consisting of two main components: WebPlanner and WebSearcher. The WebPlanner acts as a high-level coordinator, modeling the problem-solving process as a dynamic directed acyclic graph (DAG). It decomposes user queries into atomic sub-questions, represented as nodes, and extends the graph based on incoming search results. This allows for parallel execution of sub-tasks, mimicking how humans break down complex problems into manageable parts.
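The planner's behavior can be illustrated with a toy sketch (not the actual MindSearch implementation): sub-questions are nodes in a dependency graph, and any node whose dependencies are satisfied executes in parallel with its peers. The `plan` and `search` functions below are hypothetical stand-ins for LLM-driven decomposition and a WebSearcher call.

```python
from concurrent.futures import ThreadPoolExecutor

def plan(query: str) -> dict:
    """Hypothetical decomposition of a query into a small dependency graph."""
    return {
        "A": {"question": f"Background facts for: {query}", "deps": []},
        "B": {"question": f"Recent developments for: {query}", "deps": []},
        "C": {"question": "Synthesize A and B", "deps": ["A", "B"]},
    }

def search(node_id: str, question: str) -> str:
    # Stand-in for a WebSearcher call; a real node would query a search engine.
    return f"[answer to {node_id}: {question}]"

def execute(graph: dict) -> dict:
    """Run the DAG: independent nodes execute concurrently, dependents wait."""
    results: dict = {}
    remaining = dict(graph)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # All nodes whose dependencies are already answered run in parallel.
            ready = [n for n, spec in remaining.items()
                     if all(d in results for d in spec["deps"])]
            futures = {n: pool.submit(search, n, remaining[n]["question"])
                       for n in ready}
            for n, fut in futures.items():
                results[n] = fut.result()
                del remaining[n]
    return results

results = execute(plan("history of web search engines"))
print(list(results))  # A and B complete before the dependent node C
```

The real WebPlanner extends the graph dynamically as results arrive; this sketch uses a fixed graph purely to show the parallel, dependency-ordered execution pattern.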
Complementing this, the WebSearcher handles hierarchical information retrieval. For each sub-question, it performs query rewriting, aggregates search results, selects relevant pages, and summarizes key insights. This coarse-to-fine strategy efficiently filters massive web content, ensuring only valuable information is integrated. The multi-agent design naturally manages long contexts by distributing workloads—WebPlanner focuses on planning without being overwhelmed by raw data, while each WebSearcher deals with specific sub-queries. Context is passed via the graph's topology, enabling the system to process over 300 web pages in roughly 3 minutes, a task that would take humans about 3 hours.
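The coarse-to-fine stages described above can be sketched as a simple pipeline. Everything here is illustrative: in MindSearch the rewriting, selection, and summarization steps are driven by the LLM, whereas this toy version uses plain functions to show the data flow.

```python
def rewrite(sub_question: str) -> list[str]:
    """Expand one sub-question into several search-engine queries."""
    return [sub_question, sub_question + " overview", sub_question + " 2024"]

def fetch(query: str) -> list[dict]:
    # Stand-in for a search-engine API call returning scored results.
    return [{"url": f"https://example.com/{abs(hash(query)) % 100}",
             "score": len(query) % 10}]

def select(pages: list[dict], top_k: int = 2) -> list[dict]:
    """Coarse filter: keep only the highest-scoring pages."""
    return sorted(pages, key=lambda p: p["score"], reverse=True)[:top_k]

def summarize(pages: list[dict]) -> str:
    """Fine step: condense the selected pages into a short answer."""
    return f"summary of {len(pages)} page(s)"

def web_searcher(sub_question: str) -> str:
    """Full pipeline: rewrite -> fetch -> select -> summarize."""
    pages = [p for q in rewrite(sub_question) for p in fetch(q)]
    return summarize(select(pages))

print(web_searcher("What is a DAG?"))  # -> summary of 2 page(s)
```

Only the compact summary, not the raw page content, is passed back to the planner, which is how the design keeps each agent's context window small.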
The framework's architecture is highly modular, supporting various LLMs and search engines. It is optimized for the InternLM2.5 series (e.g., 7B-chat model), but users can switch to closed-source options like GPT-4 by modifying the models file. Search engines include DuckDuckGo (no API key required), Bing, Brave, Google Serper, and Tencent, with API keys set as environment variables for non-DuckDuckGo options. This flexibility allows deployment as a personal search engine, with interfaces ranging from command-line debugging to full web UIs in React, Gradio, or Streamlit.
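A small helper can check whether the chosen search engine is usable before launching the server. The environment-variable names below are assumptions following the pattern described in the repository's documentation, not confirmed values; verify them against the current README.

```python
import os

# Assumed key names per engine (DuckDuckGo needs none); verify against the README.
ENGINE_KEYS = {
    "DuckDuckGoSearch": None,
    "BingSearch": "BING_SEARCH_API_KEY",
    "BraveSearch": "BRAVE_SEARCH_API_KEY",
    "GoogleSearch": "SERPER_SEARCH_API_KEY",
    "TencentSearch": "TENCENT_SEARCH_SECRET_KEY",
}

def check_engine(engine: str) -> bool:
    """Return True if the engine needs no key or its key is set in the environment."""
    key = ENGINE_KEYS.get(engine)
    return key is None or key in os.environ

print(check_engine("DuckDuckGoSearch"))  # True: no API key required
```

Running such a check before `python -m mindsearch.app ...` gives a clearer failure message than a mid-query API error.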
Since its initial release, MindSearch has seen substantial updates. By November 2024, it was deployed on Puyu with a public demo at https://internlm-chat.intern-ai.org.cn/. The agent module was refactored using Lagent v0.5 for enhanced concurrency, and the UI was improved to support simultaneous multi-query searches. In 2025, pull requests have addressed integrations like GPT API examples for terminal use, dependency updates (e.g., requirements.txt), and user interaction enhancements. The arXiv paper was updated to v2 on October 31, 2025, incorporating new evaluations and refinements. These developments align with broader InternLM advancements, such as the release of Intern-S1 (a 235B MoE multimodal model) in July 2025 and Intern-S1-mini (8B) in August 2025, which enhance MindSearch's capabilities in scientific and multimodal queries.
Performance evaluations, conducted before July 7, 2024, and updated in the v2 paper, demonstrate MindSearch's strengths. On 100 human-curated real-world questions, it was rated by five experts on depth, breadth, and response accuracy. The following table summarizes key comparisons:
| System | Depth (/10) | Breadth (/10) | Accuracy (/10) | Notes |
|-------------------------|-------------|---------------|----------------|-------|
| ChatGPT-Web (GPT-4o) | 7.2 | 7.4 | 7.3 | Strong in quick responses but limited depth for complex queries. |
| Perplexity.ai Pro | 7.5 | 7.8 | 7.6 | Good breadth but occasional inaccuracies in integration. |
| MindSearch (InternLM2.5-7B) | 8.2 | 8.5 | 8.3 | Superior in human preference tests; handles 300+ pages efficiently. |
| MindSearch (GPT-4) | 8.4 | 8.7 | 8.5 | Best overall, but relies on closed-source LLM. |
These scores highlight MindSearch's edge in open-set QA, where responses are preferred over proprietary alternatives. Community feedback on platforms like Reddit and Medium praises its transparency and customizability, though some note setup complexities for non-technical users.
Architecture diagrams in the paper and repository illustrate the DAG-based planning and the interactions between the WebPlanner and WebSearcher agents.
Related projects from InternLM, such as Lagent (for agent frameworks), Agent-FLAN (for corpus building), and T-Eval (for tool-calling evaluation), further support its ecosystem. Licensed under Apache 2.0, it encourages academic and practical applications, with citations recommended for research use.
In summary, MindSearch not only provides a robust, open-source alternative to commercial AI search engines but also pushes the boundaries of multi-agent AI systems. Its ongoing updates in 2025 ensure relevance in a rapidly evolving field, particularly for depth-focused knowledge exploration.
### Key Citations
- [GitHub - InternLM/MindSearch](https://github.com/InternLM/MindSearch)
- [arXiv - MindSearch: Mimicking Human Minds Elicits Deep AI Searcher](https://arxiv.org/abs/2407.20183)
- [Hugging Face Papers - MindSearch](https://huggingface.co/papers/2407.20183)
- [Reddit - r/LocalLLaMA Discussion on MindSearch](https://www.reddit.com/r/LocalLLaMA/comments/1egm6t0/mindsearch_mimicking_human_minds_elicits_deep_ai/)
- [Medium - AI Search: LLM Powered Agentic Search Engines](https://sbagency.medium.com/ai-search-llm-powered-agentic-search-engines-8315573ede34)
- [GitHub Releases - InternLM/MindSearch](https://github.com/InternLM/MindSearch/releases)
- [arXiv HTML - MindSearch v1](https://arxiv.org/html/2407.20183v1)
- [Project Page - MindSearch](https://mindsearch.netlify.app/)