# AutoAgent

**AutoAgent: Fully-Automated & Zero-Code LLM Agent Framework**

Welcome to AutoAgent! AutoAgent is a **Fully-Automated** and highly **Self-Developing** framework that enables users to create and deploy LLM agents through **Natural Language Alone**.

## ✨ Key Features

* πŸ† **Top Performer on the GAIA Benchmark**: AutoAgent delivers performance comparable to many **Deep Research Agents**.
* ✨ **Agent and Workflow Creation with Ease**: AutoAgent leverages natural language to effortlessly build ready-to-use **tools**, **agents**, and **workflows** - no coding required.
* πŸ“š **Agentic-RAG with a Native Self-Managing Vector Database**: AutoAgent is equipped with a native self-managing vector database that outperforms industry-leading solutions like **LangChain**.
* 🌐 **Universal LLM Support**: AutoAgent seamlessly integrates with **A Wide Range** of LLMs (e.g., OpenAI, Anthropic, DeepSeek, vLLM, Grok, Hugging Face, ...).
* πŸ”€ **Flexible Interaction**: Benefit from support for both **function-calling** and **ReAct** interaction modes.
* πŸ€– **Dynamic, Extensible, Lightweight**: AutoAgent is your **Personal AI Assistant**, designed to be dynamic, extensible, customizable, and lightweight.

πŸš€ Unlock the future of LLM agents. Try πŸ”₯AutoAgentπŸ”₯ now!
*Quick Overview of AutoAgent.*
## πŸ”₯ News
## πŸ“‘ Table of Contents

* ✨ Features
* πŸ”₯ News
* πŸ” How to Use AutoAgent
  * 1. `user mode` (SOTA πŸ† Open Deep Research)
  * 2. `agent editor` (Agent Creation without Workflow)
  * 3. `workflow editor` (Agent Creation with Workflow)
* ⚑ Quick Start
  * Installation
  * API Keys Setup
  * Start with CLI Mode
* β˜‘οΈ Todo List
* πŸ”¬ How To Reproduce the Results in the Paper
* πŸ“– Documentation
* 🀝 Join the Community
* πŸ™ Acknowledgements
* 🌟 Cite

## πŸ” How to Use AutoAgent

### 1. `user mode` (SOTA πŸ† Open Deep Research)

AutoAgent ships with an out-of-the-box multi-agent system, which you can use by choosing `user mode` on the start page. This multi-agent system is a general AI assistant with the same functionality as **OpenAI's Deep Research** and comparable performance on the [GAIA](https://gaia-benchmark-leaderboard.hf.space/) benchmark.

- πŸš€ **High Performance**: Matches Deep Research using Claude 3.5 rather than OpenAI's o3 model.
- πŸ”„ **Model Flexibility**: Compatible with any LLM (including DeepSeek-R1, Grok, Gemini, etc.).
- πŸ’° **Cost-Effective**: Open-source alternative to Deep Research's $200/month subscription.
- 🎯 **User-Friendly**: Easy-to-deploy CLI interface for seamless interaction.
- πŸ“ **File Support**: Handles file uploads for enhanced data interaction.

*πŸŽ₯ Deep Research (aka User Mode)*

### 2. `agent editor` (Agent Creation without Workflow)

The most distinctive feature of AutoAgent is its natural-language customization capability. Unlike other agent frameworks, AutoAgent allows you to create tools, agents, and workflows using natural language alone. Simply choose `agent editor` or `workflow editor` mode to start building agents through conversation. The `agent editor` walks through the following steps.
1. **Requirement**: Input what kind of agent you want to create.
2. **Profiling**: Automated agent profiling.
3. **Profiles**: Output the agent profiles.
4. **Tools**: Create the desired tools.
5. **Task**: Input what you want to complete with the agent. (Optional)
6. **Output**: Create the desired agent(s) and go to the next step.
### 3. `workflow editor` (Agent Creation with Workflow)

You can also create agent workflows from natural-language descriptions with the `workflow editor` mode, following the steps below. (Tip: this mode does not yet support tool creation.)
1. **Requirement**: Input what kind of workflow you want to create.
2. **Profiling**: Automated workflow profiling.
3. **Profiles**: Output the workflow profiles.
4. **Task**: Input what you want to complete with the workflow. (Optional)
5. **Output**: Create the desired workflow(s) and go to the next step.
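Conceptually, the resulting workflow is an ordered chain of agents, each consuming the previous step's output. A minimal sketch of that idea (the `run_workflow` helper and the stub agents below are ours, not AutoAgent's implementation):

```python
# Hypothetical sketch: a workflow as an ordered list of agent steps,
# where each step transforms the running result of the previous one.
from typing import Callable, List

def run_workflow(steps: List[Callable[[str], str]], task: str) -> str:
    result = task
    for step in steps:
        result = step(result)  # each agent consumes the previous output
    return result

# Two stub "agents" standing in for LLM-backed steps.
research = lambda query: f"notes({query})"
summarize = lambda notes: f"summary({notes})"

print(run_workflow([research, summarize], "market trends"))
# -> summary(notes(market trends))
```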
## ⚑ Quick Start

### Installation

#### AutoAgent Installation

```bash
git clone https://github.com/HKUDS/AutoAgent.git
cd AutoAgent
pip install -e .
```

#### Docker Installation

We use Docker to containerize the agent-interactive environment, so please install [Docker](https://www.docker.com/) first. You don't need to pull the pre-built image manually, because Auto-Deep-Research **automatically pulls the pre-built image that matches your machine's architecture**.

### API Keys Setup

Create an environment variable file modeled on `.env.template` and set the API keys for the LLMs you want to use. Not every LLM API key is required; set only the ones you need.

```bash
# Required: your own GitHub token
GITHUB_AI_TOKEN=

# Optional API Keys
OPENAI_API_KEY=
DEEPSEEK_API_KEY=
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
HUGGINGFACE_API_KEY=
GROQ_API_KEY=
XAI_API_KEY=
```

### Start with CLI Mode

> [🚨 **News**] We have added an easier command to start the CLI mode and fixed provider-specific bugs reported in issues. Follow the steps below to start the CLI mode with different LLM providers with much less configuration.

#### Command Options

Run `auto main` to start the full AutoAgent, including `user mode`, `agent editor`, and `workflow editor`. Alternatively, run `auto deep-research` to start the more lightweight `user mode` only, just like the [Auto-Deep-Research](https://github.com/HKUDS/Auto-Deep-Research) project. The main configuration options are:

- `--container_name`: Name of the Docker container (default: `deepresearch`)
- `--port`: Port for the container (default: `12346`)
- `COMPLETION_MODEL`: The LLM model to use; follow [LiteLLM](https://github.com/BerriAI/litellm)'s model naming.
  (Default: `claude-3-5-sonnet-20241022`)
- `DEBUG`: Enable debug mode for detailed logs (default: `False`)
- `API_BASE_URL`: The base URL for the LLM provider (default: `None`)
- `FN_CALL`: Enable function calling (default: `None`). Most of the time you can ignore this option, because the default is already set based on the model name.
- `git_clone`: Clone the AutoAgent repository into the local environment (only supported with the `auto main` command; default: `True`)
- `test_pull_name`: The name of the branch to pull (only supported with the `auto main` command; default: `autoagent_mirror`)

#### More details about `git_clone` and `test_pull_name`

In the `agent editor` and `workflow editor` modes, AutoAgent clones a mirror of its own repository into the local agent-interactive environment so that **AutoAgent** can update itself automatically, for example by creating new tools, agents, and workflows. So if you want to use the `agent editor` or `workflow editor` mode, set `git_clone` to `True` and set `test_pull_name` to `autoagent_mirror` or another branch.

#### `auto main` with different LLM Providers

The following shows how to use the full AutoAgent via the `auto main` command with different LLM providers. If you want to use the `auto deep-research` command instead, refer to the [Auto-Deep-Research](https://github.com/HKUDS/Auto-Deep-Research) project for more details.

##### Anthropic

* Set `ANTHROPIC_API_KEY` in the `.env` file.

```bash
ANTHROPIC_API_KEY=your_anthropic_api_key
```

* Run the following command to start AutoAgent.

```bash
auto main # default model is claude-3-5-sonnet-20241022
```

##### OpenAI

* Set `OPENAI_API_KEY` in the `.env` file.

```bash
OPENAI_API_KEY=your_openai_api_key
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=gpt-4o auto main
```

##### Mistral

* Set `MISTRAL_API_KEY` in the `.env` file.
```bash
MISTRAL_API_KEY=your_mistral_api_key
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=mistral/mistral-large-2407 auto main
```

##### Gemini - Google AI Studio

* Set `GEMINI_API_KEY` in the `.env` file.

```bash
GEMINI_API_KEY=your_gemini_api_key
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=gemini/gemini-2.0-flash auto main
```

##### Huggingface

* Set `HUGGINGFACE_API_KEY` in the `.env` file.

```bash
HUGGINGFACE_API_KEY=your_huggingface_api_key
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=huggingface/meta-llama/Llama-3.3-70B-Instruct auto main
```

##### Groq

* Set `GROQ_API_KEY` in the `.env` file.

```bash
GROQ_API_KEY=your_groq_api_key
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=groq/deepseek-r1-distill-llama-70b auto main
```

##### OpenAI-Compatible Endpoints (e.g., Grok)

* Set `OPENAI_API_KEY` in the `.env` file.

```bash
OPENAI_API_KEY=your_api_key_for_openai_compatible_endpoints
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=openai/grok-2-latest API_BASE_URL=https://api.x.ai/v1 auto main
```

##### OpenRouter (e.g., DeepSeek-R1)

For now, we recommend using OpenRouter as the LLM provider for DeepSeek-R1, because the official DeepSeek-R1 API cannot yet be used efficiently.

* Set `OPENROUTER_API_KEY` in the `.env` file.

```bash
OPENROUTER_API_KEY=your_openrouter_api_key
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=openrouter/deepseek/deepseek-r1 auto main
```

##### DeepSeek

* Set `DEEPSEEK_API_KEY` in the `.env` file.

```bash
DEEPSEEK_API_KEY=your_deepseek_api_key
```

* Run the following command to start AutoAgent.

```bash
COMPLETION_MODEL=deepseek/deepseek-chat auto main
```

After the CLI mode starts, you will see the start page of AutoAgent:
*Start Page of AutoAgent.*
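Before launching, it can help to sanity-check which provider keys your `.env` actually sets. The tiny parser below is our own sketch (only `KEY=value` lines are handled; real projects often use a library such as python-dotenv instead):

```python
# Minimal .env parser: report which API keys are set and which are blank.
# Comments, blank lines, and lines without "=" are skipped.

def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# Required
GITHUB_AI_TOKEN=ghp_example
# Optional
OPENAI_API_KEY=
ANTHROPIC_API_KEY=sk-ant-example
"""

env = parse_env(sample)
configured = [key for key, value in env.items() if value]
print(configured)  # -> ['GITHUB_AI_TOKEN', 'ANTHROPIC_API_KEY']
```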
### Tips

#### Import browser cookies into the browser environment

You can import browser cookies into the browser environment so the agent can better access certain websites. For more details, please refer to the [cookies](./AutoAgent/environment/cookie_json/README.md) folder.

#### Add your own API keys for third-party tool platforms

If you want to create tools from third-party tool platforms such as RapidAPI, subscribe to the tools on the platform and add your own API keys by running [process_tool_docs.py](./process_tool_docs.py).

```bash
python process_tool_docs.py
```

More features coming soon! πŸš€ A **Web GUI interface** is under development.

## β˜‘οΈ Todo List

AutoAgent is continuously evolving! Here's what's coming:

- πŸ“Š **More Benchmarks**: Expanding evaluations to **SWE-bench**, **WebArena**, and more
- πŸ–₯️ **GUI Agent**: Supporting *Computer-Use* agents with GUI interaction
- πŸ”§ **Tool Platforms**: Integration with more platforms like **Composio**
- πŸ—οΈ **Code Sandboxes**: Supporting additional environments like **E2B**
- 🎨 **Web Interface**: Developing a comprehensive GUI for a better user experience

Have ideas or suggestions? Feel free to open an issue! Stay tuned for more exciting updates! πŸš€

## πŸ”¬ How To Reproduce the Results in the Paper

### GAIA Benchmark

For the GAIA benchmark, run the following command for inference.

```bash
cd path/to/AutoAgent && sh evaluation/gaia/scripts/run_infer.sh
```

For evaluation, run the following command.

```bash
cd path/to/AutoAgent && python evaluation/gaia/get_score.py
```

### Agentic-RAG

For the Agentic-RAG task, follow these steps.

Step 1. Download the dataset from [this page](https://huggingface.co/datasets/yixuantt/MultiHopRAG) and save it to your data path.

Step 2. Run the following command for inference.

```bash
cd path/to/AutoAgent && sh evaluation/multihoprag/scripts/run_rag.sh
```

Step 3.
The result will be saved in `evaluation/multihoprag/result.json`.

## πŸ“– Documentation

More detailed documentation is coming soon πŸš€ and will be posted on the [Documentation](https://AutoAgent-ai.github.io/docs) page.

## 🀝 Join the Community

We want to build a community around AutoAgent, and we welcome everyone to join us. You can join our community by:

- [Join our Slack workspace](https://join.slack.com/t/AutoAgent-workspace/shared_invite/zt-2zibtmutw-v7xOJObBf9jE2w3x7nctFQ) - Here we talk about research, architecture, and future development.
- [Join our Discord server](https://discord.gg/z68KRvwB) - This is a community-run server for general discussion, questions, and feedback.
- [Read or post GitHub Issues](https://github.com/HKUDS/AutoAgent/issues) - Check out the issues we're working on, or add your own ideas.

## Misc
[![Stargazers repo roster for @HKUDS/AutoAgent](https://reporoster.com/stars/HKUDS/AutoAgent)](https://github.com/HKUDS/AutoAgent/stargazers) [![Forkers repo roster for @HKUDS/AutoAgent](https://reporoster.com/forks/HKUDS/AutoAgent)](https://github.com/HKUDS/AutoAgent/network/members) [![Star History Chart](https://api.star-history.com/svg?repos=HKUDS/AutoAgent&type=Date)](https://star-history.com/#HKUDS/AutoAgent&Date)
## πŸ™ Acknowledgements Rome wasn't built in a day. AutoAgent stands on the shoulders of giants, and we are deeply grateful for the outstanding work that came before us. Our framework architecture draws inspiration from [OpenAI Swarm](https://github.com/openai/swarm), while our user mode's three-agent design benefits from [Magentic-one](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one)'s insights. We've also learned from [OpenHands](https://github.com/All-Hands-AI/OpenHands) for documentation structure and many other excellent projects for agent-environment interaction design, among others. We express our sincere gratitude and respect to all these pioneering works that have been instrumental in shaping AutoAgent. ## 🌟 Cite ```tex @misc{AutoAgent, title={{AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents}}, author={Jiabin Tang, Tianyu Fan, Chao Huang}, year={2025}, eprint={202502.05957}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2502.05957}, } ```