# agentscope-runtime

**Repository Path**: gait_admin/agentscope-runtime

## Basic Information

- **Project Name**: agentscope-runtime
- **Description**: AgentScope Runtime addresses two key challenges in agent development: secure sandboxed tool execution and scalable deployment of agents as services
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: https://www.oschina.net/p/agentscope-runtime
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2025-09-07
- **Last Updated**: 2025-09-25

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README
---
## 📋 Table of Contents
- [🚀 Quick Start](#-quick-start)
- [📚 Guides](#-guides)
- [🔌 Agent Framework Integrations](#-agent-framework-integrations)
- [🏗️ Deployment](#️-deployment)
- [🤝 Contributing](#-contributing)
- [📄 License](#-license)
---
## 🚀 Quick Start
### Prerequisites
- Python 3.10 or higher
- pip or uv package manager
### Installation
Install from PyPI:
```bash
# Install the core package
pip install agentscope-runtime
# Install the sandbox extras
pip install "agentscope-runtime[sandbox]"
```
(Optional) Install from source:
```bash
# Clone the source from GitHub
git clone -b main https://github.com/agentscope-ai/agentscope-runtime.git
cd agentscope-runtime
# Install the core package
pip install -e .
# Install the sandbox extras
pip install -e ".[sandbox]"
```
### Basic Agent Usage Example
This example shows how to create a simple LLM agent with AgentScope Runtime and stream responses from a Qwen model.
```python
import asyncio
import os

from agentscope_runtime.engine import Runner
from agentscope_runtime.engine.agents.llm_agent import LLMAgent
from agentscope_runtime.engine.llms import QwenLLM
from agentscope_runtime.engine.schemas.agent_schemas import AgentRequest
from agentscope_runtime.engine.services.context_manager import ContextManager


async def main():
    # Set up the language model and the agent
    model = QwenLLM(
        model_name="qwen-turbo",
        api_key=os.getenv("DASHSCOPE_API_KEY"),
    )
    llm_agent = LLMAgent(model=model, name="llm_agent")

    async with ContextManager() as context_manager:
        runner = Runner(agent=llm_agent, context_manager=context_manager)

        # Create a request and stream the response
        request = AgentRequest(
            input=[
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": "What is the capital of France?",
                        },
                    ],
                },
            ],
        )

        async for message in runner.stream_query(request=request):
            if hasattr(message, "text"):
                print(f"Streamed answer: {message.text}")


asyncio.run(main())
```
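The quick-start example reads the Qwen API key from the `DASHSCOPE_API_KEY` environment variable; set it before running. The key value and script name below are placeholders:

```bash
# Placeholder key: substitute your own DashScope API key
export DASHSCOPE_API_KEY="your-api-key"
# Then run the quick-start script, e.g.:
# python quickstart.py
echo "key set: ${DASHSCOPE_API_KEY:+yes}"
```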
### Basic Sandbox Usage Example
This example shows how to create a sandbox and execute tools inside it.
```python
from agentscope_runtime.sandbox import BaseSandbox

with BaseSandbox() as box:
    print(box.run_ipython_cell(code="print('hello')"))
    print(box.run_shell_command(command="echo hello"))
```
> [!NOTE]
>
> The current version requires Docker or Kubernetes to be installed and running; more public-cloud deployment options will be provided in the future. See [this tutorial](https://runtime.agentscope.io/zh/sandbox.html) for details.
---
## 📚 Guides
- **[📖 Cookbook](https://runtime.agentscope.io/zh/intro.html)**: Comprehensive tutorials
- **[💡 Concepts](https://runtime.agentscope.io/zh/concept.html)**: Core concepts and architecture overview
- **[🚀 Quick Start](https://runtime.agentscope.io/zh/quickstart.html)**: Getting-started tutorial
- **[🏠 Demo House](https://runtime.agentscope.io/zh/demohouse.html)**: A rich set of example projects
- **[📋 API Reference](https://runtime.agentscope.io/zh/api/index.html)**: Complete API documentation
---
## 🔌 Agent Framework Integrations
### AgentScope Integration
```python
# pip install "agentscope-runtime[agentscope]"
import os

from agentscope.agent import ReActAgent
from agentscope.model import OpenAIChatModel

from agentscope_runtime.engine.agents.agentscope_agent import AgentScopeAgent

agent = AgentScopeAgent(
    name="Friday",
    model=OpenAIChatModel(
        "gpt-4",
        api_key=os.getenv("OPENAI_API_KEY"),
    ),
    agent_config={
        "sys_prompt": "You're a helpful assistant named {name}.",
    },
    agent_builder=ReActAgent,
)
```
### Agno Integration
```python
# pip install "agentscope-runtime[agno]"
from agno.agent import Agent
from agno.models.openai import OpenAIChat

from agentscope_runtime.engine.agents.agno_agent import AgnoAgent

agent = AgnoAgent(
    name="Friday",
    model=OpenAIChat(
        id="gpt-4",
    ),
    agent_config={
        "instructions": "You're a helpful assistant.",
    },
    agent_builder=Agent,
)
```
### AutoGen Integration
```python
# pip install "agentscope-runtime[autogen]"
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

from agentscope_runtime.engine.agents.autogen_agent import AutogenAgent

agent = AutogenAgent(
    name="Friday",
    model=OpenAIChatCompletionClient(
        model="gpt-4",
    ),
    agent_config={
        "system_message": "You're a helpful assistant",
    },
    agent_builder=AssistantAgent,
)
```
### LangGraph Integration
```python
# pip install "agentscope-runtime[langgraph]"
from typing import TypedDict

from langgraph import graph, types

from agentscope_runtime.engine.agents.langgraph_agent import LangGraphAgent


# Define the state
class State(TypedDict, total=False):
    id: str


# Define the node functions
async def set_id(state: State):
    new_id = state.get("id")
    assert new_id is not None, "must set ID"
    return types.Command(update=State(id=new_id), goto="REVERSE_ID")


async def reverse_id(state: State):
    new_id = state.get("id")
    assert new_id is not None, "ID must be set before reversing"
    return types.Command(update=State(id=new_id[::-1]))


# Build and compile the graph
state_graph = graph.StateGraph(state_schema=State)
state_graph.add_node("SET_ID", set_id)
state_graph.add_node("REVERSE_ID", reverse_id)
state_graph.set_entry_point("SET_ID")
compiled_graph = state_graph.compile(name="ID Reversal")

agent = LangGraphAgent(graph=compiled_graph)
```
> [!NOTE]
>
> More agent framework integrations are coming soon!
---
## 🏗️ Deployment
The agent runner exposes a `deploy` method, which takes a `DeployManager` instance and deploys the agent. The service port is set with the `port` parameter when creating the `LocalDeployManager`, and the endpoint path with the `endpoint_path` parameter when deploying. In this example the endpoint path is `/process`, so after deployment the service is reachable at [http://localhost:8090/process](http://localhost:8090/process).
```python
from agentscope_runtime.engine.deployers import LocalDeployManager

# Create the deployment manager
deploy_manager = LocalDeployManager(
    host="localhost",
    port=8090,
)

# Deploy the agent as a streaming service
# (`runner` is the Runner instance from the quick-start example)
deploy_result = await runner.deploy(
    deploy_manager=deploy_manager,
    endpoint_path="/process",
    stream=True,  # Enable streaming responses
)
```
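Once the service is running, it can be called over HTTP. The sketch below assumes the request body mirrors the `AgentRequest` shape from the quick-start example; the exact wire format and response schema are assumptions, so consult the API reference for the authoritative contract:

```python
import json
import urllib.request


def build_payload(text: str) -> dict:
    """Build a request body mirroring the AgentRequest from the quick start."""
    return {
        "input": [
            {
                "role": "user",
                "content": [{"type": "text", "text": text}],
            },
        ],
    }


def query(url: str, text: str) -> str:
    """POST the payload to the deployed endpoint and return the raw body."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


# Usage (with the service deployed above running locally):
# print(query("http://localhost:8090/process", "What is the capital of France?"))
```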
---
## 🤝 Contributing
We welcome contributions from the community! Here is how you can help:
### 🐛 Bug Reports
- Report bugs via GitHub Issues
- Include detailed reproduction steps
- Provide system information and logs
### 💡 Feature Requests
- Discuss new ideas in GitHub Discussions
- Follow the feature request template
- Consider implementation feasibility
### 🔧 Code Contributions
1. Fork this repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

For a detailed contribution guide, see [How to Contribute](cookbook/zh/contribute.md).
---
## 📄 License
AgentScope Runtime is released under the [Apache License 2.0](LICENSE).
```
Copyright 2025 Tongyi Lab
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## Contributors ✨
Thanks goes to these wonderful contributors ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
- Weirui Kuang 💻 👀 🚧 📆
- Bruce Luo 💻 👀 💡
- Zhicheng Zhang 💻 👀 📖
- ericczq 💻 📖
- qbc 👀
- Ran Chen 💻
- jinliyl 💻 📖
- Osier-Yi 💻