Merge branch 'rc/new-plugin' into refactor/new-plugin-system

This commit is contained in:
Junyan Qin
2025-08-24 21:40:02 +08:00
232 changed files with 11998 additions and 1440 deletions


@@ -9,7 +9,7 @@
 *请在方括号间写`x`以打勾 / Please tick the box with `x`*
-- [ ] 阅读仓库[贡献指引](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)了吗? / Have you read the [contribution guide](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)?
+- [ ] 阅读仓库[贡献指引](https://github.com/langbot-app/LangBot/blob/master/CONTRIBUTING.md)了吗? / Have you read the [contribution guide](https://github.com/langbot-app/LangBot/blob/master/CONTRIBUTING.md)?
 - [ ] 与项目所有者沟通过了吗? / Have you communicated with the project maintainer?
 - [ ] 我确定已自行测试所作的更改,确保功能符合预期。 / I have tested the changes and ensured they work as expected.

.gitignore (vendored, 3 changes)

@@ -42,4 +42,5 @@ botpy.log*
 test.py
 /web_ui
 .venv/
 uv.lock
+/test


README.md

@@ -1,50 +1,40 @@
 <p align="center">
 <a href="https://langbot.app">
-<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
+<img src="https://docs.langbot.app/social_zh.png" alt="LangBot"/>
 </a>
 <div align="center">
-<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
+<a href="https://hellogithub.com/repository/langbot-app/LangBot" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=5ce8ae2aa4f74316bf393b57b952433c&claim_uid=gtmc6YWjMZkT21R" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
+[English](README_EN.md) / 简体中文 / [繁體中文](README_TW.md) / [日本語](README_JP.md) / (PR for your language)
+[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
+[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
+[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
+[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
+<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
+[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
 <a href="https://langbot.app">项目主页</a>
 <a href="https://docs.langbot.app/zh/insight/guide.html">部署文档</a>
 <a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">插件介绍</a>
-<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
+<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
-<div align="center">
-😎高稳定、🧩支持扩展、🦄多模态 - 大模型原生即时通信机器人平台🤖
-</div>
-<br/>
-[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
-[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
-[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
-[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
-<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
-[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
-简体中文 / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
 </div>
 </p>
-## ✨ 特性
+LangBot 是一个开源的大语言模型原生即时通信机器人开发平台,旨在提供开箱即用的 IM 机器人开发体验,具有 Agent、RAG、MCP 等多种 LLM 应用功能,适配全球主流即时通信平台,并提供丰富的 API 接口,支持自定义开发。
-- 💬 大模型对话、Agent:支持多种大模型,适配群聊和私聊;具有多轮对话、工具调用、多模态能力,并深度适配 [Dify](https://dify.ai)。目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
-- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
-- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
-- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
 ## 📦 开始使用
 #### Docker Compose 部署
 ```bash
-git clone https://github.com/RockChinQ/LangBot
+git clone https://github.com/langbot-app/LangBot
 cd LangBot
 docker compose up -d
 ```
@@ -71,23 +61,25 @@ docker compose up -d
 直接使用发行版运行,查看文档[手动部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
-## 📸 效果展示
-<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="450px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="450px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="450px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="450px"/>
-<img alt="回复效果(带有联网插件)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
-- WebUI Demo: https://demo.langbot.dev/
+## 😎 保持更新
+点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
+![star gif](https://docs.langbot.app/star.gif)
+## ✨ 特性
+- 💬 大模型对话、Agent:支持多种大模型,适配群聊和私聊;具有多轮对话、工具调用、多模态、流式输出能力,自带 RAG(知识库)实现,并深度适配 [Dify](https://dify.ai)。
+- 🤖 多平台支持:目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
+- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
+- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
+- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
+详细规格特性请访问[文档](https://docs.langbot.app/zh/insight/features.html)。
+或访问 demo 环境:https://demo.langbot.dev/
 - 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
-- 注意:仅展示webui效果,公开环境,请不要在其中填入您的任何敏感信息。
+- 注意:仅展示 WebUI 效果,公开环境,请不要在其中填入您的任何敏感信息。
+## 🔌 组件兼容性
 ### 消息平台
@@ -104,10 +96,6 @@ docker compose up -d
 | Discord | ✅ | |
 | Telegram | ✅ | |
 | Slack | ✅ | |
-| LINE | 🚧 | |
-| WhatsApp | 🚧 | |
-🚧: 正在开发中
 ### 大模型能力
@@ -119,8 +107,10 @@ docker compose up -d
 | [Anthropic](https://www.anthropic.com/) | ✅ | |
 | [xAI](https://x.ai/) | ✅ | |
 | [智谱AI](https://open.bigmodel.cn/) | ✅ | |
+| [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 资源平台 |
 | [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
-| [302 AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
+| [胜算云](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 大模型和 GPU 资源平台 |
+| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
 | [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
 | [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
 | [Ollama](https://ollama.com/) | ✅ | 本地大模型运行平台 |
@@ -148,14 +138,8 @@ docker compose up -d
 ## 😘 社区贡献
-感谢以下[代码贡献者](https://github.com/RockChinQ/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
+感谢以下[代码贡献者](https://github.com/langbot-app/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
-<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
+<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
-<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
+<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
 </a>
-## 😎 保持更新
-点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
-![star gif](https://docs.langbot.app/star.gif)


README_EN.md

@@ -1,48 +1,34 @@
 <p align="center">
 <a href="https://langbot.app">
-<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
+<img src="https://docs.langbot.app/social_en.png" alt="LangBot"/>
 </a>
 <div align="center">
-<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
+English / [简体中文](README.md) / [繁體中文](README_TW.md) / [日本語](README_JP.md) / (PR for your language)
+[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
+[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
+[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
+<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
 <a href="https://langbot.app">Home</a>
 <a href="https://docs.langbot.app/en/insight/guide.html">Deployment</a>
 <a href="https://docs.langbot.app/en/plugin/plugin-intro.html">Plugin</a>
-<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
+<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
-<div align="center">
-😎High Stability, 🧩Extension Supported, 🦄Multi-modal - LLM Native Instant Messaging Bot Platform🤖
-</div>
-<br/>
-[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
-[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
-[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
-<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
-[简体中文](README.md) / English / [日本語](README_JP.md) / (PR for your language)
 </div>
 </p>
-## ✨ Features
+LangBot is an open-source LLM native instant messaging robot development platform, aiming to provide out-of-the-box IM robot development experience, with Agent, RAG, MCP and other LLM application functions, adapting to global instant messaging platforms, and providing rich API interfaces, supporting custom development.
-- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Deeply integrates with [Dify](https://dify.ai). Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
-- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
-- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
-- 😻 [New] Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
 ## 📦 Getting Started
 #### Docker Compose Deployment
 ```bash
-git clone https://github.com/RockChinQ/LangBot
+git clone https://github.com/langbot-app/LangBot
 cd LangBot
 docker compose up -d
 ```
@@ -69,23 +55,25 @@ Community contributed Zeabur template.
 Directly use the released version to run, see the [Manual Deployment](https://docs.langbot.app/en/deploy/langbot/manual.html) documentation.
-## 📸 Demo
-<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
-<img alt="Reply Effect (with Internet Plugin)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
-- WebUI Demo: https://demo.langbot.dev/
+## 😎 Stay Ahead
+Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
+![star gif](https://docs.langbot.app/star.gif)
+## ✨ Features
+- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, multi-modal, and streaming output capabilities. Built-in RAG (knowledge base) implementation, and deeply integrates with [Dify](https://dify.ai).
+- 🤖 Multi-platform Support: Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
+- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
+- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
+- 😻 Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
+For more detailed specifications, please refer to the [documentation](https://docs.langbot.app/en/insight/features.html).
+Or visit the demo environment: https://demo.langbot.dev/
 - Login information: Email: `demo@langbot.app` Password: `langbot123456`
-- Note: Only the WebUI effect is shown, please do not fill in any sensitive information in the public environment.
+- Note: For WebUI demo only, please do not fill in any sensitive information in the public environment.
+## 🔌 Component Compatibility
 ### Message Platform
@@ -101,10 +89,6 @@ Directly use the released version to run, see the [Manual Deployment](https://do
 | Discord | ✅ | |
 | Telegram | ✅ | |
 | Slack | ✅ | |
-| LINE | 🚧 | |
-| WhatsApp | 🚧 | |
-🚧: In development
 ### LLMs
@@ -116,9 +100,11 @@ Directly use the released version to run, see the [Manual Deployment](https://do
 | [Anthropic](https://www.anthropic.com/) | ✅ | |
 | [xAI](https://x.ai/) | ✅ | |
 | [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
+| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | LLM and GPU resource platform |
 | [Dify](https://dify.ai) | ✅ | LLMOps platform |
 | [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
-| [302 AI](https://share.302.ai/SuTG99) | ✅ | LLM gateway(MaaS) |
+| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLM and GPU resource platform |
+| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLM gateway(MaaS) |
 | [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
 | [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |
 | [LMStudio](https://lmstudio.ai/) | ✅ | Local LLM running platform |
@@ -131,14 +117,8 @@ Directly use the released version to run, see the [Manual Deployment](https://do
 ## 🤝 Community Contribution
-Thank you for the following [code contributors](https://github.com/RockChinQ/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
+Thank you for the following [code contributors](https://github.com/langbot-app/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
-<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
+<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
-<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
+<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
 </a>
-## 😎 Stay Ahead
-Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
-![star gif](https://docs.langbot.app/star.gif)


README_JP.md

@@ -1,47 +1,34 @@
 <p align="center">
 <a href="https://langbot.app">
-<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
+<img src="https://docs.langbot.app/social_en.png" alt="LangBot"/>
 </a>
 <div align="center">
-<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
+[English](README_EN.md) / [简体中文](README.md) / [繁體中文](README_TW.md) / 日本語 / (PR for your language)
+[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
+[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
+[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
+<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
 <a href="https://langbot.app">ホーム</a>
 <a href="https://docs.langbot.app/en/insight/guide.html">デプロイ</a>
 <a href="https://docs.langbot.app/en/plugin/plugin-intro.html">プラグイン</a>
-<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">プラグインの提出</a>
+<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">プラグインの提出</a>
-<div align="center">
-😎高い安定性、🧩拡張サポート、🦄マルチモーダル - LLMネイティブインスタントメッセージングボットプラットフォーム🤖
-</div>
-<br/>
-[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
-[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
-[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
-<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
-[简体中文](README_CN.md) / [English](README.md) / [日本語](README_JP.md) / (PR for your language)
 </div>
 </p>
-## ✨ 機能
+LangBot は、エージェント、RAG、MCP などの LLM アプリケーション機能を備えた、オープンソースの LLM ネイティブのインスタントメッセージングロボット開発プラットフォームです。世界中のインスタントメッセージングプラットフォームに適応し、豊富な API インターフェースを提供し、カスタム開発をサポートします。
-- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル機能をサポート。[Dify](https://dify.ai) と深く統合。現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
-- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
-- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。Anthropic [MCP プロトコル](https://modelcontextprotocol.io/) に対応。豊富なエコシステム、現在数百のプラグインが存在。
-- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
 ## 📦 始め方
 #### Docker Compose デプロイ
 ```bash
-git clone https://github.com/RockChinQ/LangBot
+git clone https://github.com/langbot-app/LangBot
 cd LangBot
 docker compose up -d
 ```
@@ -50,7 +37,7 @@ http://localhost:5300 にアクセスして使用を開始します。
 詳細なドキュメントは[Dockerデプロイ](https://docs.langbot.app/en/deploy/langbot/docker.html)を参照してください。
-#### BTPanelでのワンクリックデプロイ
+#### Panelでのワンクリックデプロイ
 LangBotはBTPanelにリストされています。BTPanelをインストールしている場合は、[ドキュメント](https://docs.langbot.app/en/deploy/langbot/one-click/bt.html)を使用して使用できます。
@@ -68,23 +55,25 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 リリースバージョンを直接使用して実行します。[手動デプロイ](https://docs.langbot.app/en/deploy/langbot/manual.html)のドキュメントを参照してください。
-## 📸 デモ
-<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
-<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
-<img alt="返信効果(インターネットプラグイン付き)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
-- WebUIデモ: https://demo.langbot.dev/
+## 😎 最新情報を入手
+リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
+![star gif](https://docs.langbot.app/star.gif)
+## ✨ 機能
+- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル、ストリーミング出力機能をサポート。RAG(知識ベース)を組み込み、[Dify](https://dify.ai) と深く統合。
+- 🤖 多プラットフォーム対応: 現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
+- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
+- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。Anthropic [MCP プロトコル](https://modelcontextprotocol.io/) に対応。豊富なエコシステム、現在数百のプラグインが存在。
+- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
+詳細な仕様については、[ドキュメント](https://docs.langbot.app/en/insight/features.html)を参照してください。
+または、デモ環境にアクセスしてください: https://demo.langbot.dev/
 - ログイン情報: メール: `demo@langbot.app` パスワード: `langbot123456`
-- 注意: WebUIの効果のみを示しています。公開環境では機密情報を入力しないでください。
+- 注意: WebUI のデモンストレーション専用です。公開環境のため、機密情報を入力しないでください。
+## 🔌 コンポーネントの互換性
 ### メッセージプラットフォーム
@@ -100,10 +89,6 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 | Discord | ✅ | |
 | Telegram | ✅ | |
 | Slack | ✅ | |
-| LINE | 🚧 | |
-| WhatsApp | 🚧 | |
-🚧: 開発中
 ### LLMs
@@ -115,8 +100,10 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 | [Anthropic](https://www.anthropic.com/) | ✅ | |
 | [xAI](https://x.ai/) | ✅ | |
 | [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
+| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | LLMとGPUリソースプラットフォーム |
 | [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLMとGPUリソースプラットフォーム |
-| [302 AI](https://share.302.ai/SuTG99) | ✅ | LLMゲートウェイ(MaaS) |
+| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLMとGPUリソースプラットフォーム |
+| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLMゲートウェイ(MaaS) |
 | [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
 | [Dify](https://dify.ai) | ✅ | LLMOpsプラットフォーム |
 | [Ollama](https://ollama.com/) | ✅ | ローカルLLM実行プラットフォーム |
@@ -130,14 +117,8 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 ## 🤝 コミュニティ貢献
-LangBot への貢献に対して、以下の [コード貢献者](https://github.com/RockChinQ/LangBot/graphs/contributors) とコミュニティの他のメンバーに感謝します。
+LangBot への貢献に対して、以下の [コード貢献者](https://github.com/langbot-app/LangBot/graphs/contributors) とコミュニティの他のメンバーに感謝します。
-<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
+<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
-<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
+<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
 </a>
-## 😎 最新情報を入手
-リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
-![star gif](https://docs.langbot.app/star.gif)

README_TW.md (new file, 140 lines)

@@ -0,0 +1,140 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social_zh.png" alt="LangBot"/>
</a>
<div align="center"><a href="https://hellogithub.com/repository/langbot-app/LangBot" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=5ce8ae2aa4f74316bf393b57b952433c&claim_uid=gtmc6YWjMZkT21R" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
[English](README_EN.md) / [简体中文](README.md) / 繁體中文 / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
<a href="https://langbot.app">主頁</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文件</a>
<a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">外掛介紹</a>
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交外掛</a>
</div>
</p>
LangBot 是一個開源的大語言模型原生即時通訊機器人開發平台,旨在提供開箱即用的 IM 機器人開發體驗,具有 Agent、RAG、MCP 等多種 LLM 應用功能,適配全球主流即時通訊平台,並提供豐富的 API 介面,支援自定義開發。
## 📦 開始使用
#### Docker Compose 部署
```bash
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
訪問 http://localhost:5300 即可開始使用。
詳細文件[Docker 部署](https://docs.langbot.app/zh/deploy/langbot/docker.html)。
#### 寶塔面板部署
已上架寶塔面板,若您已安裝寶塔面板,可以根據[文件](https://docs.langbot.app/zh/deploy/langbot/one-click/bt.html)使用。
#### Zeabur 雲端部署
社群貢獻的 Zeabur 模板。
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/zh-CN/templates/ZKTBDH)
#### Railway 雲端部署
[![Deploy on Railway](https://railway.com/button.svg)](https://railway.app/template/yRrAyL?referralCode=vogKPF)
#### 手動部署
直接使用發行版運行,查看文件[手動部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
## 😎 保持更新
點擊倉庫右上角 Star 和 Watch 按鈕,獲取最新動態。
![star gif](https://docs.langbot.app/star.gif)
## ✨ 特性
- 💬 大模型對話、Agent:支援多種大模型,適配群聊和私聊;具有多輪對話、工具調用、多模態、流式輸出能力,自帶 RAG(知識庫)實現,並深度適配 [Dify](https://dify.ai)。
- 🤖 多平台支援:目前支援 QQ、QQ頻道、企業微信、個人微信、飛書、Discord、Telegram 等平台。
- 🛠️ 高穩定性、功能完備:原生支援訪問控制、限速、敏感詞過濾等機制;配置簡單,支援多種部署方式。支援多流水線配置,不同機器人用於不同應用場景。
- 🧩 外掛擴展、活躍社群:支援事件驅動、組件擴展等外掛機制;適配 Anthropic [MCP 協議](https://modelcontextprotocol.io/);目前已有數百個外掛。
- 😻 Web 管理面板:支援通過瀏覽器管理 LangBot 實例,不再需要手動編寫配置文件。
詳細規格特性請訪問[文件](https://docs.langbot.app/zh/insight/features.html)。
或訪問 demo 環境:https://demo.langbot.dev/
- 登入資訊:郵箱:`demo@langbot.app` 密碼:`langbot123456`
- 注意:僅展示 WebUI 效果,公開環境,請不要在其中填入您的任何敏感資訊。
### 訊息平台
| 平台 | 狀態 | 備註 |
| --- | --- | --- |
| QQ 個人號 | ✅ | QQ 個人號私聊、群聊 |
| QQ 官方機器人 | ✅ | QQ 官方機器人,支援頻道、私聊、群聊 |
| 微信 | ✅ | |
| 企微對外客服 | ✅ | |
| 微信公眾號 | ✅ | |
| Lark | ✅ | |
| DingTalk | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
### 大模型能力
| 模型 | 狀態 | 備註 |
| --- | --- | --- |
| [OpenAI](https://platform.openai.com/) | ✅ | 可接入任何 OpenAI 介面格式模型 |
| [DeepSeek](https://www.deepseek.com/) | ✅ | |
| [Moonshot](https://www.moonshot.cn/) | ✅ | |
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智譜AI](https://open.bigmodel.cn/) | ✅ | |
| [勝算雲](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 大模型和 GPU 資源平台 |
| [優雲智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
| [Ollama](https://ollama.com/) | ✅ | 本地大模型運行平台 |
| [LMStudio](https://lmstudio.ai/) | ✅ | 本地大模型運行平台 |
| [GiteeAI](https://ai.gitee.com/) | ✅ | 大模型介面聚合平台 |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
| [阿里雲百煉](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
| [MCP](https://modelcontextprotocol.io/) | ✅ | 支援通過 MCP 協議獲取工具 |
### TTS
| 平台/模型 | 備註 |
| --- | --- |
| [FishAudio](https://fish.audio/zh-CN/discovery/) | [外掛](https://github.com/the-lazy-me/NewChatVoice) |
| [海豚 AI](https://www.ttson.cn/?source=thelazy) | [外掛](https://github.com/the-lazy-me/NewChatVoice) |
| [AzureTTS](https://portal.azure.com/) | [外掛](https://github.com/Ingnaryk/LangBot_AzureTTS) |
### 文生圖
| 平台/模型 | 備註 |
| --- | --- |
| 阿里雲百煉 | [外掛](https://github.com/Thetail001/LangBot_BailianTextToImagePlugin) |
## 😘 社群貢獻
感謝以下[程式碼貢獻者](https://github.com/langbot-app/LangBot/graphs/contributors)和社群裡其他成員對 LangBot 的貢獻:
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>

View File

@@ -253,6 +253,43 @@ class DingTalkClient:
await self.logger.error(f'failed to send proactive message to group: {traceback.format_exc()}') await self.logger.error(f'failed to send proactive message to group: {traceback.format_exc()}')
raise Exception(f'failed to send proactive message to group: {traceback.format_exc()}') raise Exception(f'failed to send proactive message to group: {traceback.format_exc()}')
async def create_and_card(
self, temp_card_id: str, incoming_message: dingtalk_stream.ChatbotMessage, quote_origin: bool = False
):
content_key = 'content'
card_data = {content_key: ''}
card_instance = dingtalk_stream.AICardReplier(self.client, incoming_message)
# print(card_instance)
# deliver the card first: https://open.dingtalk.com/document/orgapp/create-and-deliver-cards
card_instance_id = await card_instance.async_create_and_deliver_card(
temp_card_id,
card_data,
)
return card_instance, card_instance_id
async def send_card_message(self, card_instance, card_instance_id: str, content: str, is_final: bool):
content_key = 'content'
try:
await card_instance.async_streaming(
card_instance_id,
content_key=content_key,
content_value=content,
append=False,
finished=is_final,
failed=False,
)
except Exception as e:
self.logger.exception(e)
await card_instance.async_streaming(
card_instance_id,
content_key=content_key,
content_value='',
append=False,
finished=is_final,
failed=True,
)
async def start(self): async def start(self):
"""启动 WebSocket 连接,监听消息""" """启动 WebSocket 连接,监听消息"""
await self.client.start() await self.client.start()
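The error fallback in `send_card_message` above can be exercised without the DingTalk SDK; a minimal sketch, where `stream_fn` is a hypothetical stand-in for `AICardReplier.async_streaming`:

```python
import asyncio

async def send_card_update(stream_fn, card_id: str, content: str, is_final: bool):
    # Mirrors send_card_message above: try to push the streamed content;
    # if that raises, push an empty frame with failed=True so the card
    # is not left hanging in a loading state.
    try:
        await stream_fn(card_id, content_value=content, finished=is_final, failed=False)
    except Exception:
        await stream_fn(card_id, content_value='', finished=is_final, failed=True)

calls = []

async def flaky(card_id, **kw):
    # Hypothetical transport that rejects the first (non-failed) attempt.
    if not kw['failed']:
        raise RuntimeError('transport error')
    calls.append((card_id, kw))

asyncio.run(send_card_update(flaky, 'card-1', 'partial text', False))
```

After the failed attempt, exactly one `failed=True` frame with empty content is delivered.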

View File

@@ -1,4 +1 @@
from .client import WeChatPadClient from .client import WeChatPadClient as WeChatPadClient
__all__ = ['WeChatPadClient']

View File

@@ -11,7 +11,7 @@ asciiart = r"""
|____\__,_|_||_\__, |___/\___/\__| |____\__,_|_||_\__, |___/\___/\__|
|___/ |___/
⭐️ Open Source 开源地址: https://github.com/RockChinQ/LangBot ⭐️ Open Source 开源地址: https://github.com/langbot-app/LangBot
📖 Documentation 文档地址: https://docs.langbot.app 📖 Documentation 文档地址: https://docs.langbot.app
""" """

View File

@@ -11,10 +11,10 @@ from ....core import app
preregistered_groups: list[type[RouterGroup]] = [] preregistered_groups: list[type[RouterGroup]] = []
"""RouterGroup 的预注册列表""" """Pre-registered list of RouterGroup"""
def group_class(name: str, path: str) -> None: def group_class(name: str, path: str) -> typing.Callable[[typing.Type[RouterGroup]], typing.Type[RouterGroup]]:
"""注册一个 RouterGroup""" """注册一个 RouterGroup"""
def decorator(cls: typing.Type[RouterGroup]) -> typing.Type[RouterGroup]: def decorator(cls: typing.Type[RouterGroup]) -> typing.Type[RouterGroup]:
@@ -27,7 +27,7 @@ def group_class(name: str, path: str) -> None:
class AuthType(enum.Enum): class AuthType(enum.Enum):
"""认证类型""" """Authentication type"""
NONE = 'none' NONE = 'none'
USER_TOKEN = 'user-token' USER_TOKEN = 'user-token'
@@ -56,7 +56,7 @@ class RouterGroup(abc.ABC):
auth_type: AuthType = AuthType.USER_TOKEN, auth_type: AuthType = AuthType.USER_TOKEN,
**options: typing.Any, **options: typing.Any,
) -> typing.Callable[[RouteCallable], RouteCallable]: # decorator ) -> typing.Callable[[RouteCallable], RouteCallable]: # decorator
"""注册一个路由""" """Register a route"""
def decorator(f: RouteCallable) -> RouteCallable: def decorator(f: RouteCallable) -> RouteCallable:
nonlocal rule nonlocal rule
@@ -64,11 +64,11 @@ class RouterGroup(abc.ABC):
async def handler_error(*args, **kwargs): async def handler_error(*args, **kwargs):
if auth_type == AuthType.USER_TOKEN: if auth_type == AuthType.USER_TOKEN:
# Authorization头中获取token # get token from Authorization header
token = quart.request.headers.get('Authorization', '').replace('Bearer ', '') token = quart.request.headers.get('Authorization', '').replace('Bearer ', '')
if not token: if not token:
return self.http_status(401, -1, '未提供有效的用户令牌') return self.http_status(401, -1, 'No valid user token provided')
try: try:
user_email = await self.ap.user_service.verify_jwt_token(token) user_email = await self.ap.user_service.verify_jwt_token(token)
@@ -76,9 +76,9 @@ class RouterGroup(abc.ABC):
# check if this account exists # check if this account exists
user = await self.ap.user_service.get_user_by_email(user_email) user = await self.ap.user_service.get_user_by_email(user_email)
if not user: if not user:
return self.http_status(401, -1, '用户不存在') return self.http_status(401, -1, 'User not found')
# 检查f是否接受user_email参数 # check if f accepts user_email parameter
if 'user_email' in f.__code__.co_varnames: if 'user_email' in f.__code__.co_varnames:
kwargs['user_email'] = user_email kwargs['user_email'] = user_email
except Exception as e: except Exception as e:
@@ -86,10 +86,11 @@ class RouterGroup(abc.ABC):
try: try:
return await f(*args, **kwargs) return await f(*args, **kwargs)
except Exception: # automatic 500
except Exception as e: # automatic 500
traceback.print_exc() traceback.print_exc()
# return self.http_status(500, -2, str(e)) # return self.http_status(500, -2, str(e))
return self.http_status(500, -2, 'internal server error') return self.http_status(500, -2, str(e))
new_f = handler_error new_f = handler_error
new_f.__name__ = (self.name + rule).replace('/', '__') new_f.__name__ = (self.name + rule).replace('/', '__')
@@ -101,7 +102,7 @@ class RouterGroup(abc.ABC):
return decorator return decorator
def success(self, data: typing.Any = None) -> quart.Response: def success(self, data: typing.Any = None) -> quart.Response:
"""返回一个 200 响应""" """Return a 200 response"""
return quart.jsonify( return quart.jsonify(
{ {
'code': 0, 'code': 0,
@@ -111,7 +112,7 @@ class RouterGroup(abc.ABC):
) )
def fail(self, code: int, msg: str) -> quart.Response: def fail(self, code: int, msg: str) -> quart.Response:
"""返回一个异常响应""" """Return an error response"""
return quart.jsonify( return quart.jsonify(
{ {
@@ -120,6 +121,6 @@ class RouterGroup(abc.ABC):
} }
) )
def http_status(self, status: int, code: int, msg: str) -> quart.Response: def http_status(self, status: int, code: int, msg: str) -> typing.Tuple[quart.Response, int]:
"""返回一个指定状态码的响应""" """返回一个指定状态码的响应"""
return self.fail(code, msg), status return (self.fail(code, msg), status)
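The `(body, status)` tuple that `http_status` now returns is the shape Quart interprets as a response with an explicit status code; a framework-free sketch of the same envelope, with a JSON string standing in for `quart.Response`:

```python
import json

def fail(code: int, msg: str) -> str:
    # Same envelope as RouterGroup.fail above: {'code': ..., 'msg': ...}.
    return json.dumps({'code': code, 'msg': msg})

def http_status(status: int, code: int, msg: str):
    # Body plus an explicit HTTP status code, as a (body, status) tuple.
    return fail(code, msg), status

body, status = http_status(404, -1, 'model not found')
```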

View File

@@ -2,6 +2,10 @@ from __future__ import annotations
import quart import quart
import mimetypes import mimetypes
import uuid
import asyncio
import quart.datastructures
from .. import group from .. import group
@@ -20,3 +24,23 @@ class FilesRouterGroup(group.RouterGroup):
mime_type = 'image/jpeg' mime_type = 'image/jpeg'
return quart.Response(image_bytes, mimetype=mime_type) return quart.Response(image_bytes, mimetype=mime_type)
@self.route('/documents', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> quart.Response:
request = quart.request
# get file bytes from 'file'
file = (await request.files)['file']
assert isinstance(file, quart.datastructures.FileStorage)
file_bytes = await asyncio.to_thread(file.stream.read)
extension = file.filename.split('.')[-1]
file_name = file.filename.split('.')[0]
file_key = file_name + '_' + str(uuid.uuid4())[:8] + '.' + extension
# save file to storage
await self.ap.storage_mgr.storage_provider.save(file_key, file_bytes)
return self.success(
data={
'file_id': file_key,
}
)
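The upload handler above derives a storage key from the original filename; the naming scheme in isolation (note that `split('.')[0]` keeps only the part before the first dot, so multi-dot names lose their middle segments):

```python
import uuid

def make_file_key(filename: str) -> str:
    # Mirrors the /documents handler above: "<name>_<8 uuid chars>.<ext>".
    # 'report.final.pdf' would be stored as 'report_xxxxxxxx.pdf'.
    extension = filename.split('.')[-1]
    file_name = filename.split('.')[0]
    return file_name + '_' + str(uuid.uuid4())[:8] + '.' + extension

key = make_file_key('notes.md')
```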

View File

@@ -0,0 +1,90 @@
import quart
from ... import group
@group.group_class('knowledge_base', '/api/v1/knowledge/bases')
class KnowledgeBaseRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['POST', 'GET'])
async def handle_knowledge_bases() -> quart.Response:
if quart.request.method == 'GET':
knowledge_bases = await self.ap.knowledge_service.get_knowledge_bases()
return self.success(data={'bases': knowledge_bases})
elif quart.request.method == 'POST':
json_data = await quart.request.json
knowledge_base_uuid = await self.ap.knowledge_service.create_knowledge_base(json_data)
return self.success(data={'uuid': knowledge_base_uuid})
return self.http_status(405, -1, 'Method not allowed')
@self.route(
'/<knowledge_base_uuid>',
methods=['GET', 'DELETE', 'PUT'],
)
async def handle_specific_knowledge_base(knowledge_base_uuid: str) -> quart.Response:
if quart.request.method == 'GET':
knowledge_base = await self.ap.knowledge_service.get_knowledge_base(knowledge_base_uuid)
if knowledge_base is None:
return self.http_status(404, -1, 'knowledge base not found')
return self.success(
data={
'base': knowledge_base,
}
)
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.knowledge_service.update_knowledge_base(knowledge_base_uuid, json_data)
return self.success({})
elif quart.request.method == 'DELETE':
await self.ap.knowledge_service.delete_knowledge_base(knowledge_base_uuid)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/files',
methods=['GET', 'POST'],
)
async def get_knowledge_base_files(knowledge_base_uuid: str) -> str:
if quart.request.method == 'GET':
files = await self.ap.knowledge_service.get_files_by_knowledge_base(knowledge_base_uuid)
return self.success(
data={
'files': files,
}
)
elif quart.request.method == 'POST':
json_data = await quart.request.json
file_id = json_data.get('file_id')
if not file_id:
return self.http_status(400, -1, 'File ID is required')
# associate the file with the knowledge base via the service layer
task_id = await self.ap.knowledge_service.store_file(knowledge_base_uuid, file_id)
return self.success(
{
'task_id': task_id,
}
)
@self.route(
'/<knowledge_base_uuid>/files/<file_id>',
methods=['DELETE'],
)
async def delete_specific_file_in_kb(file_id: str, knowledge_base_uuid: str) -> str:
await self.ap.knowledge_service.delete_file(knowledge_base_uuid, file_id)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/retrieve',
methods=['POST'],
)
async def retrieve_knowledge_base(knowledge_base_uuid: str) -> str:
json_data = await quart.request.json
query = json_data.get('query')
results = await self.ap.knowledge_service.retrieve_knowledge_base(knowledge_base_uuid, query)
return self.success(data={'results': results})

View File

@@ -11,7 +11,11 @@ class PipelinesRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST']) @self.route('', methods=['GET', 'POST'])
async def _() -> str: async def _() -> str:
if quart.request.method == 'GET': if quart.request.method == 'GET':
return self.success(data={'pipelines': await self.ap.pipeline_service.get_pipelines()}) sort_by = quart.request.args.get('sort_by', 'created_at')
sort_order = quart.request.args.get('sort_order', 'DESC')
return self.success(
data={'pipelines': await self.ap.pipeline_service.get_pipelines(sort_by, sort_order)}
)
elif quart.request.method == 'POST': elif quart.request.method == 'POST':
json_data = await quart.request.json json_data = await quart.request.json

View File

@@ -1,3 +1,5 @@
import json
import quart import quart
from ... import group from ... import group
@@ -8,11 +10,19 @@ class WebChatDebugRouterGroup(group.RouterGroup):
async def initialize(self) -> None: async def initialize(self) -> None:
@self.route('/send', methods=['POST']) @self.route('/send', methods=['POST'])
async def send_message(pipeline_uuid: str) -> str: async def send_message(pipeline_uuid: str) -> str:
"""发送调试消息到流水线""" """Send a message to the pipeline for debugging"""
async def stream_generator(generator):
yield 'data: {"type": "start"}\n\n'
async for message in generator:
yield f'data: {json.dumps({"message": message})}\n\n'
yield 'data: {"type": "end"}\n\n'
try: try:
data = await quart.request.get_json() data = await quart.request.get_json()
session_type = data.get('session_type', 'person') session_type = data.get('session_type', 'person')
message_chain_obj = data.get('message', []) message_chain_obj = data.get('message', [])
is_stream = data.get('is_stream', False)
if not message_chain_obj: if not message_chain_obj:
return self.http_status(400, -1, 'message is required') return self.http_status(400, -1, 'message is required')
@@ -25,20 +35,40 @@ class WebChatDebugRouterGroup(group.RouterGroup):
if not webchat_adapter: if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found') return self.http_status(404, -1, 'WebChat adapter not found')
result = await webchat_adapter.send_webchat_message(pipeline_uuid, session_type, message_chain_obj) if is_stream:
generator = webchat_adapter.send_webchat_message(
return self.success( pipeline_uuid, session_type, message_chain_obj, is_stream
data={ )
'message': result, # set the correct response headers
headers = {
'Content-Type': 'text/event-stream',
'Transfer-Encoding': 'chunked',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
} }
) return quart.Response(stream_generator(generator), mimetype='text/event-stream', headers=headers)
else: # non-stream
result = None
async for message in webchat_adapter.send_webchat_message(
pipeline_uuid, session_type, message_chain_obj
):
result = message
if result is not None:
return self.success(
data={
'message': result,
}
)
else:
return self.http_status(400, -1, 'message is required')
except Exception as e: except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}') return self.http_status(500, -1, f'Internal server error: {str(e)}')
@self.route('/messages/<session_type>', methods=['GET']) @self.route('/messages/<session_type>', methods=['GET'])
async def get_messages(pipeline_uuid: str, session_type: str) -> str: async def get_messages(pipeline_uuid: str, session_type: str) -> str:
"""获取调试消息历史""" """Get the message history of the pipeline for debugging"""
try: try:
if session_type not in ['person', 'group']: if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group') return self.http_status(400, -1, 'session_type must be person or group')
@@ -57,7 +87,7 @@ class WebChatDebugRouterGroup(group.RouterGroup):
@self.route('/reset/<session_type>', methods=['POST']) @self.route('/reset/<session_type>', methods=['POST'])
async def reset_session(session_type: str) -> str: async def reset_session(session_type: str) -> str:
"""重置调试会话""" """Reset the debug session"""
try: try:
if session_type not in ['person', 'group']: if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group') return self.http_status(400, -1, 'session_type must be person or group')
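The `stream_generator` added in this file wraps pipeline output in server-sent-event frames; it can be run standalone with a fake message source in place of the WebChat adapter:

```python
import asyncio
import json

async def stream_generator(generator):
    # Same SSE framing as the debug route above: a start event,
    # one data frame per message, then an end event.
    yield 'data: {"type": "start"}\n\n'
    async for message in generator:
        yield f'data: {json.dumps({"message": message})}\n\n'
    yield 'data: {"type": "end"}\n\n'

async def collect():
    async def fake_pipeline():
        # Hypothetical message source standing in for send_webchat_message.
        for chunk in ('Hel', 'Hello, world'):
            yield chunk
    return [frame async for frame in stream_generator(fake_pipeline())]

frames = asyncio.run(collect())
```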

View File

@@ -9,18 +9,18 @@ class LLMModelsRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST']) @self.route('', methods=['GET', 'POST'])
async def _() -> str: async def _() -> str:
if quart.request.method == 'GET': if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.model_service.get_llm_models()}) return self.success(data={'models': await self.ap.llm_model_service.get_llm_models()})
elif quart.request.method == 'POST': elif quart.request.method == 'POST':
json_data = await quart.request.json json_data = await quart.request.json
model_uuid = await self.ap.model_service.create_llm_model(json_data) model_uuid = await self.ap.llm_model_service.create_llm_model(json_data)
return self.success(data={'uuid': model_uuid}) return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE']) @self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str: async def _(model_uuid: str) -> str:
if quart.request.method == 'GET': if quart.request.method == 'GET':
model = await self.ap.model_service.get_llm_model(model_uuid) model = await self.ap.llm_model_service.get_llm_model(model_uuid)
if model is None: if model is None:
return self.http_status(404, -1, 'model not found') return self.http_status(404, -1, 'model not found')
@@ -29,11 +29,11 @@ class LLMModelsRouterGroup(group.RouterGroup):
elif quart.request.method == 'PUT': elif quart.request.method == 'PUT':
json_data = await quart.request.json json_data = await quart.request.json
await self.ap.model_service.update_llm_model(model_uuid, json_data) await self.ap.llm_model_service.update_llm_model(model_uuid, json_data)
return self.success() return self.success()
elif quart.request.method == 'DELETE': elif quart.request.method == 'DELETE':
await self.ap.model_service.delete_llm_model(model_uuid) await self.ap.llm_model_service.delete_llm_model(model_uuid)
return self.success() return self.success()
@@ -41,6 +41,49 @@ class LLMModelsRouterGroup(group.RouterGroup):
async def _(model_uuid: str) -> str: async def _(model_uuid: str) -> str:
json_data = await quart.request.json json_data = await quart.request.json
await self.ap.model_service.test_llm_model(model_uuid, json_data) await self.ap.llm_model_service.test_llm_model(model_uuid, json_data)
return self.success()
@group.group_class('models/embedding', '/api/v1/provider/models/embedding')
class EmbeddingModelsRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.embedding_models_service.get_embedding_models()})
elif quart.request.method == 'POST':
json_data = await quart.request.json
model_uuid = await self.ap.embedding_models_service.create_embedding_model(json_data)
return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str:
if quart.request.method == 'GET':
model = await self.ap.embedding_models_service.get_embedding_model(model_uuid)
if model is None:
return self.http_status(404, -1, 'model not found')
return self.success(data={'model': model})
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.embedding_models_service.update_embedding_model(model_uuid, json_data)
return self.success()
elif quart.request.method == 'DELETE':
await self.ap.embedding_models_service.delete_embedding_model(model_uuid)
return self.success()
@self.route('/<model_uuid>/test', methods=['POST'])
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.embedding_models_service.test_embedding_model(model_uuid, json_data)
return self.success() return self.success()

View File

@@ -8,7 +8,8 @@ class RequestersRouterGroup(group.RouterGroup):
async def initialize(self) -> None: async def initialize(self) -> None:
@self.route('', methods=['GET']) @self.route('', methods=['GET'])
async def _() -> quart.Response: async def _() -> quart.Response:
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info()}) model_type = quart.request.args.get('type', '')
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info(model_type)})
@self.route('/<requester_name>', methods=['GET']) @self.route('/<requester_name>', methods=['GET'])
async def _(requester_name: str) -> quart.Response: async def _(requester_name: str) -> quart.Response:

View File

@@ -1,5 +1,6 @@
import quart import quart
import argon2 import argon2
import asyncio
from .. import group from .. import group
@@ -13,7 +14,7 @@ class UserRouterGroup(group.RouterGroup):
return self.success(data={'initialized': await self.ap.user_service.is_initialized()}) return self.success(data={'initialized': await self.ap.user_service.is_initialized()})
if await self.ap.user_service.is_initialized(): if await self.ap.user_service.is_initialized():
return self.fail(1, '系统已初始化') return self.fail(1, 'System already initialized')
json_data = await quart.request.json json_data = await quart.request.json
@@ -31,7 +32,7 @@ class UserRouterGroup(group.RouterGroup):
try: try:
token = await self.ap.user_service.authenticate(json_data['user'], json_data['password']) token = await self.ap.user_service.authenticate(json_data['user'], json_data['password'])
except argon2.exceptions.VerifyMismatchError: except argon2.exceptions.VerifyMismatchError:
return self.fail(1, '用户名或密码错误') return self.fail(1, 'Invalid username or password')
return self.success(data={'token': token}) return self.success(data={'token': token})
@@ -40,3 +41,45 @@ class UserRouterGroup(group.RouterGroup):
token = await self.ap.user_service.generate_jwt_token(user_email) token = await self.ap.user_service.generate_jwt_token(user_email)
return self.success(data={'token': token}) return self.success(data={'token': token})
@self.route('/reset-password', methods=['POST'], auth_type=group.AuthType.NONE)
async def _() -> str:
json_data = await quart.request.json
user_email = json_data['user']
recovery_key = json_data['recovery_key']
new_password = json_data['new_password']
# hard sleep 3s for security
await asyncio.sleep(3)
if not await self.ap.user_service.is_initialized():
return self.http_status(400, -1, 'System not initialized')
user_obj = await self.ap.user_service.get_user_by_email(user_email)
if user_obj is None:
return self.http_status(400, -1, 'User not found')
if recovery_key != self.ap.instance_config.data['system']['recovery_key']:
return self.http_status(403, -1, 'Invalid recovery key')
await self.ap.user_service.reset_password(user_email, new_password)
return self.success(data={'user': user_email})
@self.route('/change-password', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _(user_email: str) -> str:
json_data = await quart.request.json
current_password = json_data['current_password']
new_password = json_data['new_password']
try:
await self.ap.user_service.change_password(user_email, current_password, new_password)
except argon2.exceptions.VerifyMismatchError:
return self.http_status(400, -1, 'Current password is incorrect')
except ValueError as e:
return self.http_status(400, -1, str(e))
return self.success(data={'user': user_email})
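The reset-password route above compares the submitted recovery key against `system.recovery_key` with a plain `!=`. A constant-time comparison is a suggested hardening (not in the diff) against timing side channels:

```python
import hmac

def recovery_key_ok(provided: str, configured: str) -> bool:
    # hmac.compare_digest takes the same time regardless of where the
    # strings first differ, unlike `!=`.
    return hmac.compare_digest(provided.encode(), configured.encode())

ok = recovery_key_ok('abc123', 'abc123')
bad = recovery_key_ok('abc124', 'abc123')
```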

View File

@@ -14,11 +14,13 @@ from . import group
from .groups import provider as groups_provider from .groups import provider as groups_provider
from .groups import platform as groups_platform from .groups import platform as groups_platform
from .groups import pipelines as groups_pipelines from .groups import pipelines as groups_pipelines
from .groups import knowledge as groups_knowledge
importutil.import_modules_in_pkg(groups) importutil.import_modules_in_pkg(groups)
importutil.import_modules_in_pkg(groups_provider) importutil.import_modules_in_pkg(groups_provider)
importutil.import_modules_in_pkg(groups_platform) importutil.import_modules_in_pkg(groups_platform)
importutil.import_modules_in_pkg(groups_pipelines) importutil.import_modules_in_pkg(groups_pipelines)
importutil.import_modules_in_pkg(groups_knowledge)
class HTTPController: class HTTPController:
@@ -45,7 +47,7 @@ class HTTPController:
try: try:
await self.quart_app.run_task(*args, **kwargs) await self.quart_app.run_task(*args, **kwargs)
except Exception as e: except Exception as e:
self.ap.logger.error(f'启动 HTTP 服务失败: {e}') self.ap.logger.error(f'Failed to start HTTP service: {e}')
self.ap.task_mgr.create_task( self.ap.task_mgr.create_task(
exception_handler( exception_handler(

View File

@@ -10,7 +10,7 @@ from ....entity.persistence import pipeline as persistence_pipeline
class BotService: class BotService:
"""机器人服务""" """Bot service"""
ap: app.Application ap: app.Application
@@ -63,7 +63,7 @@ class BotService:
return persistence_bot return persistence_bot
async def create_bot(self, bot_data: dict) -> str: async def create_bot(self, bot_data: dict) -> str:
"""创建机器人""" """Create bot"""
# TODO: 检查配置信息格式 # TODO: 检查配置信息格式
bot_data['uuid'] = str(uuid.uuid4()) bot_data['uuid'] = str(uuid.uuid4())
@@ -87,7 +87,7 @@ class BotService:
return bot_data['uuid'] return bot_data['uuid']
async def update_bot(self, bot_uuid: str, bot_data: dict) -> None: async def update_bot(self, bot_uuid: str, bot_data: dict) -> None:
"""更新机器人""" """Update bot"""
if 'uuid' in bot_data: if 'uuid' in bot_data:
del bot_data['uuid'] del bot_data['uuid']
@@ -123,7 +123,7 @@ class BotService:
session.using_conversation = None session.using_conversation = None
async def delete_bot(self, bot_uuid: str) -> None: async def delete_bot(self, bot_uuid: str) -> None:
"""删除机器人""" """Delete bot"""
await self.ap.platform_mgr.remove_bot(bot_uuid) await self.ap.platform_mgr.remove_bot(bot_uuid)
await self.ap.persistence_mgr.execute_async( await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid) sqlalchemy.delete(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid)

View File

@@ -0,0 +1,120 @@
from __future__ import annotations
import uuid
import sqlalchemy
from ....core import app
from ....entity.persistence import rag as persistence_rag
class KnowledgeService:
"""知识库服务"""
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_knowledge_bases(self) -> list[dict]:
"""获取所有知识库"""
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_rag.KnowledgeBase))
knowledge_bases = result.all()
return [
self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
for knowledge_base in knowledge_bases
]
async def get_knowledge_base(self, kb_uuid: str) -> dict | None:
"""获取知识库"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
knowledge_base = result.first()
if knowledge_base is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
async def create_knowledge_base(self, kb_data: dict) -> str:
"""创建知识库"""
kb_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.KnowledgeBase).values(kb_data))
kb = await self.get_knowledge_base(kb_data['uuid'])
await self.ap.rag_mgr.load_knowledge_base(kb)
return kb_data['uuid']
async def update_knowledge_base(self, kb_uuid: str, kb_data: dict) -> None:
"""更新知识库"""
if 'uuid' in kb_data:
del kb_data['uuid']
if 'embedding_model_uuid' in kb_data:
del kb_data['embedding_model_uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_rag.KnowledgeBase)
.values(kb_data)
.where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
await self.ap.rag_mgr.remove_knowledge_base_from_runtime(kb_uuid)
kb = await self.get_knowledge_base(kb_uuid)
await self.ap.rag_mgr.load_knowledge_base(kb)
async def store_file(self, kb_uuid: str, file_id: str) -> int:
"""存储文件"""
# await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.File).values(kb_id=kb_uuid, file_id=file_id))
# await self.ap.rag_mgr.store_file(file_id)
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return await runtime_kb.store_file(file_id)
async def retrieve_knowledge_base(self, kb_uuid: str, query: str) -> list[dict]:
"""检索知识库"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return [
result.model_dump() for result in await runtime_kb.retrieve(query, runtime_kb.knowledge_base_entity.top_k)
]
async def get_files_by_knowledge_base(self, kb_uuid: str) -> list[dict]:
"""获取知识库文件"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
files = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_rag.File, file) for file in files]
async def delete_file(self, kb_uuid: str, file_id: str) -> None:
"""删除文件"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
await runtime_kb.delete_file(file_id)
async def delete_knowledge_base(self, kb_uuid: str) -> None:
"""删除知识库"""
await self.ap.rag_mgr.delete_knowledge_base(kb_uuid)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
# delete files
files = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
for file in files:
# delete chunks
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.Chunk).where(persistence_rag.Chunk.file_id == file.uuid)
)
# delete file
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.File).where(persistence_rag.File.uuid == file.uuid)
)

View File

@@ -10,7 +10,7 @@ from ....provider.modelmgr import requester as model_requester
from langbot_plugin.api.entities.builtin.provider import message as provider_message from langbot_plugin.api.entities.builtin.provider import message as provider_message
class ModelsService: class LLMModelsService:
ap: app.Application ap: app.Application
def __init__(self, ap: app.Application) -> None: def __init__(self, ap: app.Application) -> None:
@@ -109,5 +109,91 @@ class ModelsService:
model=runtime_llm_model, model=runtime_llm_model,
messages=[provider_message.Message(role='user', content='Hello, world!')], messages=[provider_message.Message(role='user', content='Hello, world!')],
funcs=[], funcs=[],
extra_args=model_data.get('extra_args', {}),
)
class EmbeddingModelsService:
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_embedding_models(self) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.EmbeddingModel))
models = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model) for model in models]
async def create_embedding_model(self, model_data: dict) -> str:
model_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(
sqlalchemy.insert(persistence_model.EmbeddingModel).values(**model_data)
)
embedding_model = await self.get_embedding_model(model_data['uuid'])
await self.ap.model_mgr.load_embedding_model(embedding_model)
return model_data['uuid']
async def get_embedding_model(self, model_uuid: str) -> dict | None:
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
model = result.first()
if model is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model)
async def update_embedding_model(self, model_uuid: str, model_data: dict) -> None:
if 'uuid' in model_data:
del model_data['uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_model.EmbeddingModel)
.where(persistence_model.EmbeddingModel.uuid == model_uuid)
.values(**model_data)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
embedding_model = await self.get_embedding_model(model_uuid)
await self.ap.model_mgr.load_embedding_model(embedding_model)
async def delete_embedding_model(self, model_uuid: str) -> None:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
async def test_embedding_model(self, model_uuid: str, model_data: dict) -> None:
runtime_embedding_model: model_requester.RuntimeEmbeddingModel | None = None
if model_uuid != '_':
for model in self.ap.model_mgr.embedding_models:
if model.model_entity.uuid == model_uuid:
runtime_embedding_model = model
break
if runtime_embedding_model is None:
raise Exception('model not found')
else:
runtime_embedding_model = await self.ap.model_mgr.init_runtime_embedding_model(model_data)
await runtime_embedding_model.requester.invoke_embedding(
model=runtime_embedding_model,
input_text=['Hello, world!'],
extra_args={}, extra_args={},
) )
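`test_embedding_model` above uses the sentinel uuid `'_'` to mean "test the submitted config directly" rather than a loaded model; the dispatch in isolation, with `build` as a stand-in for `init_runtime_embedding_model`:

```python
def resolve_test_model(loaded, model_uuid, model_data, build):
    # Any uuid other than '_' must match an already-loaded model;
    # '_' builds an ad-hoc runtime model from the request body.
    if model_uuid != '_':
        for model in loaded:
            if model['uuid'] == model_uuid:
                return model
        raise Exception('model not found')
    return build(model_data)

loaded = [{'uuid': 'm1', 'name': 'text-embedding'}]
picked = resolve_test_model(loaded, 'm1', {}, dict)
adhoc = resolve_test_model(loaded, '_', {'name': 'new-model'}, dict)
```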

View File

@@ -38,9 +38,21 @@ class PipelineService:
self.ap.pipeline_config_meta_output.data, self.ap.pipeline_config_meta_output.data,
] ]
async def get_pipelines(self) -> list[dict]: async def get_pipelines(self, sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline)) query = sqlalchemy.select(persistence_pipeline.LegacyPipeline)
if sort_by == 'created_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.asc())
elif sort_by == 'updated_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.asc())
result = await self.ap.persistence_mgr.execute_async(query)
pipelines = result.all() pipelines = result.all()
return [ return [
self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline) self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
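The four branches in the new `get_pipelines` reduce to picking a column and a direction. A plain-Python sketch of the same ordering logic over dicts (field names assumed to mirror the ORM columns):

```python
from datetime import datetime


def sort_pipelines(pipelines: list[dict], sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]:
    # Mirror the service's branches: only these two columns are sortable,
    # anything else falls back to created_at.
    if sort_by not in ('created_at', 'updated_at'):
        sort_by = 'created_at'
    return sorted(pipelines, key=lambda p: p[sort_by], reverse=(sort_order == 'DESC'))


rows = [
    {'name': 'a', 'created_at': datetime(2024, 1, 1)},
    {'name': 'b', 'created_at': datetime(2024, 6, 1)},
]
print([p['name'] for p in sort_pipelines(rows)])  # ['b', 'a']
```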

View File

@@ -73,3 +73,27 @@ class UserService:
jwt_secret = self.ap.instance_config.data['system']['jwt']['secret'] jwt_secret = self.ap.instance_config.data['system']['jwt']['secret']
return jwt.decode(token, jwt_secret, algorithms=['HS256'])['user'] return jwt.decode(token, jwt_secret, algorithms=['HS256'])['user']
async def reset_password(self, user_email: str, new_password: str) -> None:
ph = argon2.PasswordHasher()
hashed_password = ph.hash(new_password)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(user.User).where(user.User.user == user_email).values(password=hashed_password)
)
async def change_password(self, user_email: str, current_password: str, new_password: str) -> None:
ph = argon2.PasswordHasher()
user_obj = await self.get_user_by_email(user_email)
if user_obj is None:
raise ValueError('User not found')
ph.verify(user_obj.password, current_password)
hashed_password = ph.hash(new_password)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(user.User).where(user.User.user == user_email).values(password=hashed_password)
)

View File

@@ -6,7 +6,7 @@ from .. import model as file_model
class JSONConfigFile(file_model.ConfigFile): class JSONConfigFile(file_model.ConfigFile):
"""JSON配置文件""" """JSON config file"""
def __init__( def __init__(
self, self,
@@ -42,7 +42,7 @@ class JSONConfigFile(file_model.ConfigFile):
try: try:
cfg = json.load(f) cfg = json.load(f)
except json.JSONDecodeError as e: except json.JSONDecodeError as e:
raise Exception(f'配置文件 {self.config_file_name} 语法错误: {e}') raise Exception(f'Syntax error in config file {self.config_file_name}: {e}')
if completion: if completion:
for key in self.template_data: for key in self.template_data:
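The load-with-completion pattern seen in this hunk (parse, then fill keys missing relative to the template) can be sketched as a standalone function; this is a minimal top-level-keys-only illustration, not the full `ConfigFile` behavior:

```python
import json


def load_with_completion(raw: str, template: dict) -> dict:
    """Parse a JSON config string and fill any keys missing from the template."""
    try:
        cfg = json.loads(raw)
    except json.JSONDecodeError as e:
        raise Exception(f'Syntax error in config file: {e}')
    # Completion: copy template defaults for keys the user's config lacks.
    for key in template:
        if key not in cfg:
            cfg[key] = template[key]
    return cfg


print(load_with_completion('{"host": "0.0.0.0"}', {'host': '127.0.0.1', 'port': 5300}))
# {'host': '0.0.0.0', 'port': 5300}
```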

View File

@@ -7,13 +7,13 @@ from .. import model as file_model
class PythonModuleConfigFile(file_model.ConfigFile): class PythonModuleConfigFile(file_model.ConfigFile):
"""Python模块配置文件""" """Python module config file"""
config_file_name: str = None config_file_name: str = None
"""配置文件名""" """Config file name"""
template_file_name: str = None template_file_name: str = None
"""模板文件名""" """Template file name"""
def __init__(self, config_file_name: str, template_file_name: str) -> None: def __init__(self, config_file_name: str, template_file_name: str) -> None:
self.config_file_name = config_file_name self.config_file_name = config_file_name
@@ -42,7 +42,7 @@ class PythonModuleConfigFile(file_model.ConfigFile):
cfg[key] = getattr(module, key) cfg[key] = getattr(module, key)
# 从模板模块文件中进行补全 # complete from template module file
if completion: if completion:
module_name = os.path.splitext(os.path.basename(self.template_file_name))[0] module_name = os.path.splitext(os.path.basename(self.template_file_name))[0]
module = importlib.import_module(module_name) module = importlib.import_module(module_name)
@@ -60,7 +60,7 @@ class PythonModuleConfigFile(file_model.ConfigFile):
return cfg return cfg
async def save(self, data: dict): async def save(self, data: dict):
logging.warning('Python模块配置文件不支持保存') logging.warning('Python module config file does not support saving')
def save_sync(self, data: dict): def save_sync(self, data: dict):
logging.warning('Python模块配置文件不支持保存') logging.warning('Python module config file does not support saving')

View File

@@ -6,7 +6,7 @@ from .. import model as file_model
class YAMLConfigFile(file_model.ConfigFile): class YAMLConfigFile(file_model.ConfigFile):
"""YAML配置文件""" """YAML config file"""
def __init__( def __init__(
self, self,
@@ -42,7 +42,7 @@ class YAMLConfigFile(file_model.ConfigFile):
try: try:
cfg = yaml.load(f, Loader=yaml.FullLoader) cfg = yaml.load(f, Loader=yaml.FullLoader)
except yaml.YAMLError as e: except yaml.YAMLError as e:
raise Exception(f'配置文件 {self.config_file_name} 语法错误: {e}') raise Exception(f'Syntax error in config file {self.config_file_name}: {e}')
if completion: if completion:
for key in self.template_data: for key in self.template_data:

View File

@@ -5,27 +5,27 @@ from .impls import pymodule, json as json_file, yaml as yaml_file
class ConfigManager: class ConfigManager:
"""配置文件管理器""" """Config file manager"""
name: str = None name: str = None
"""配置管理器名""" """Config manager name"""
description: str = None description: str = None
"""配置管理器描述""" """Config manager description"""
schema: dict = None schema: dict = None
"""配置文件 schema """Config file schema
需要符合 JSON Schema Draft 7 规范 Must conform to JSON Schema Draft 7 specification
""" """
file: file_model.ConfigFile = None file: file_model.ConfigFile = None
"""配置文件实例""" """Config file instance"""
data: dict = None data: dict = None
"""配置数据""" """Config data"""
doc_link: str = None doc_link: str = None
"""配置文件文档链接""" """Config file documentation link"""
def __init__(self, cfg_file: file_model.ConfigFile) -> None: def __init__(self, cfg_file: file_model.ConfigFile) -> None:
self.file = cfg_file self.file = cfg_file
@@ -42,15 +42,15 @@ class ConfigManager:
async def load_python_module_config(config_name: str, template_name: str, completion: bool = True) -> ConfigManager: async def load_python_module_config(config_name: str, template_name: str, completion: bool = True) -> ConfigManager:
"""加载Python模块配置文件 """Load Python module config file
Args: Args:
config_name (str): 配置文件名 config_name (str): Config file name
template_name (str): 模板文件名 template_name (str): Template file name
completion (bool): 是否自动补全内存中的配置文件 completion (bool): Whether to automatically complete the config file in memory
Returns: Returns:
ConfigManager: 配置文件管理器 ConfigManager: Config file manager
""" """
cfg_inst = pymodule.PythonModuleConfigFile(config_name, template_name) cfg_inst = pymodule.PythonModuleConfigFile(config_name, template_name)
@@ -66,13 +66,13 @@ async def load_json_config(
template_data: dict = None, template_data: dict = None,
completion: bool = True, completion: bool = True,
) -> ConfigManager: ) -> ConfigManager:
"""加载JSON配置文件 """Load JSON config file
Args: Args:
config_name (str): 配置文件名 config_name (str): Config file name
template_name (str): 模板文件名 template_name (str): Template file name
template_data (dict): 模板数据 template_data (dict): Template data
completion (bool): 是否自动补全内存中的配置文件 completion (bool): Whether to automatically complete the config file in memory
""" """
cfg_inst = json_file.JSONConfigFile(config_name, template_name, template_data) cfg_inst = json_file.JSONConfigFile(config_name, template_name, template_data)
@@ -88,16 +88,16 @@ async def load_yaml_config(
template_data: dict = None, template_data: dict = None,
completion: bool = True, completion: bool = True,
) -> ConfigManager: ) -> ConfigManager:
"""加载YAML配置文件 """Load YAML config file
Args: Args:
config_name (str): 配置文件名 config_name (str): Config file name
template_name (str): 模板文件名 template_name (str): Template file name
template_data (dict): 模板数据 template_data (dict): Template data
completion (bool): 是否自动补全内存中的配置文件 completion (bool): Whether to automatically complete the config file in memory
Returns: Returns:
ConfigManager: 配置文件管理器 ConfigManager: Config file manager
""" """
cfg_inst = yaml_file.YAMLConfigFile(config_name, template_name, template_data) cfg_inst = yaml_file.YAMLConfigFile(config_name, template_name, template_data)

View File

@@ -2,16 +2,16 @@ import abc
class ConfigFile(metaclass=abc.ABCMeta): class ConfigFile(metaclass=abc.ABCMeta):
"""配置文件抽象类""" """Config file abstract class"""
config_file_name: str = None config_file_name: str = None
"""配置文件名""" """Config file name"""
template_file_name: str = None template_file_name: str = None
"""模板文件名""" """Template file name"""
template_data: dict = None template_data: dict = None
"""模板数据""" """Template data"""
@abc.abstractmethod @abc.abstractmethod
def exists(self) -> bool: def exists(self) -> bool:

View File

@@ -21,15 +21,18 @@ from ..api.http.service import user as user_service
from ..api.http.service import model as model_service from ..api.http.service import model as model_service
from ..api.http.service import pipeline as pipeline_service from ..api.http.service import pipeline as pipeline_service
from ..api.http.service import bot as bot_service from ..api.http.service import bot as bot_service
from ..api.http.service import knowledge as knowledge_service
from ..discover import engine as discover_engine from ..discover import engine as discover_engine
from ..storage import mgr as storagemgr from ..storage import mgr as storagemgr
from ..utils import logcache from ..utils import logcache
from . import taskmgr from . import taskmgr
from . import entities as core_entities from . import entities as core_entities
from ..rag.knowledge import kbmgr as rag_mgr
from ..vector import mgr as vectordb_mgr
class Application: class Application:
"""运行时应用对象和上下文""" """Runtime application object and context"""
event_loop: asyncio.AbstractEventLoop = None event_loop: asyncio.AbstractEventLoop = None
@@ -46,10 +49,12 @@ class Application:
model_mgr: llm_model_mgr.ModelManager = None model_mgr: llm_model_mgr.ModelManager = None
# TODO 移动到 pipeline 里 rag_mgr: rag_mgr.RAGManager = None
# TODO move to pipeline
tool_mgr: llm_tool_mgr.ToolManager = None tool_mgr: llm_tool_mgr.ToolManager = None
# ======= 配置管理器 ======= # ======= Config manager =======
command_cfg: config_mgr.ConfigManager = None # deprecated command_cfg: config_mgr.ConfigManager = None # deprecated
@@ -63,7 +68,7 @@ class Application:
instance_config: config_mgr.ConfigManager = None instance_config: config_mgr.ConfigManager = None
# ======= 元数据配置管理器 ======= # ======= Metadata config manager =======
sensitive_meta: config_mgr.ConfigManager = None sensitive_meta: config_mgr.ConfigManager = None
@@ -92,6 +97,8 @@ class Application:
persistence_mgr: persistencemgr.PersistenceManager = None persistence_mgr: persistencemgr.PersistenceManager = None
vector_db_mgr: vectordb_mgr.VectorDBManager = None
http_ctrl: http_controller.HTTPController = None http_ctrl: http_controller.HTTPController = None
log_cache: logcache.LogCache = None log_cache: logcache.LogCache = None
@@ -102,12 +109,16 @@ class Application:
user_service: user_service.UserService = None user_service: user_service.UserService = None
model_service: model_service.ModelsService = None llm_model_service: model_service.LLMModelsService = None
embedding_models_service: model_service.EmbeddingModelsService = None
pipeline_service: pipeline_service.PipelineService = None pipeline_service: pipeline_service.PipelineService = None
bot_service: bot_service.BotService = None bot_service: bot_service.BotService = None
knowledge_service: knowledge_service.KnowledgeService = None
def __init__(self): def __init__(self):
pass pass
@@ -142,6 +153,7 @@ class Application:
name='http-api-controller', name='http-api-controller',
scopes=[core_entities.LifecycleControlScope.APPLICATION], scopes=[core_entities.LifecycleControlScope.APPLICATION],
) )
self.task_mgr.create_task( self.task_mgr.create_task(
never_ending(), never_ending(),
name='never-ending-task', name='never-ending-task',
@@ -153,14 +165,14 @@ class Application:
except asyncio.CancelledError: except asyncio.CancelledError:
pass pass
except Exception as e: except Exception as e:
self.logger.error(f'应用运行致命异常: {e}') self.logger.error(f'Application runtime fatal exception: {e}')
self.logger.debug(f'Traceback: {traceback.format_exc()}') self.logger.debug(f'Traceback: {traceback.format_exc()}')
def dispose(self): def dispose(self):
self.plugin_connector.dispose() self.plugin_connector.dispose()
async def print_web_access_info(self): async def print_web_access_info(self):
"""打印访问 webui 的提示""" """Print access webui tips"""
if not os.path.exists(os.path.join('.', 'web/out')): if not os.path.exists(os.path.join('.', 'web/out')):
self.logger.warning('WebUI 文件缺失请根据文档部署https://docs.langbot.app/zh') self.logger.warning('WebUI 文件缺失请根据文档部署https://docs.langbot.app/zh')

View File

@@ -8,7 +8,7 @@ from . import app
from . import stage from . import stage
from ..utils import constants, importutil from ..utils import constants, importutil
# 引入启动阶段实现以便注册 # Import startup stage implementations so they get registered
from . import stages from . import stages
importutil.import_modules_in_pkg(stages) importutil.import_modules_in_pkg(stages)
@@ -25,7 +25,7 @@ stage_order = [
async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application: async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
# 确定是否为调试模式 # Determine if it is debug mode
if 'DEBUG' in os.environ and os.environ['DEBUG'] in ['true', '1']: if 'DEBUG' in os.environ and os.environ['DEBUG'] in ['true', '1']:
constants.debug_mode = True constants.debug_mode = True
@@ -33,7 +33,7 @@ async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
ap.event_loop = loop ap.event_loop = loop
# 执行启动阶段 # Execute startup stage
for stage_name in stage_order: for stage_name in stage_order:
stage_cls = stage.preregistered_stages[stage_name] stage_cls = stage.preregistered_stages[stage_name]
stage_inst = stage_cls() stage_inst = stage_cls()
@@ -47,12 +47,12 @@ async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
async def main(loop: asyncio.AbstractEventLoop): async def main(loop: asyncio.AbstractEventLoop):
try: try:
# 挂系统信号处理 # Register system signal handlers
import signal import signal
def signal_handler(sig, frame): def signal_handler(sig, frame):
app_inst.dispose() app_inst.dispose()
print('[Signal] 程序退出.') print('[Signal] Program exit.')
os._exit(0) os._exit(0)
signal.signal(signal.SIGINT, signal_handler) signal.signal(signal.SIGINT, signal_handler)

View File

@@ -2,8 +2,8 @@ import pip
import os import os
from ...utils import pkgmgr from ...utils import pkgmgr
# 检查依赖,防止用户未安装 # Check dependencies to prevent users from not installing
# 左边为引入名称,右边为依赖名称 # Left is the import name, right is the dependency name
required_deps = { required_deps = {
'requests': 'requests', 'requests': 'requests',
'openai': 'openai', 'openai': 'openai',
@@ -65,7 +65,7 @@ async def install_deps(deps: list[str]):
async def precheck_plugin_deps(): async def precheck_plugin_deps():
print('[Startup] Prechecking plugin dependencies...') print('[Startup] Prechecking plugin dependencies...')
# 只有在plugins目录存在时才执行插件依赖安装 # Only install plugin dependencies when the plugins directory exists
if os.path.exists('plugins'): if os.path.exists('plugins'):
for dir in os.listdir('plugins'): for dir in os.listdir('plugins'):
subdir = os.path.join('plugins', dir) subdir = os.path.join('plugins', dir)

View File

@@ -17,7 +17,7 @@ log_colors_config = {
async def init_logging(extra_handlers: list[logging.Handler] = None) -> logging.Logger: async def init_logging(extra_handlers: list[logging.Handler] = None) -> logging.Logger:
# 删除所有现有的logger # Remove all existing loggers
for handler in logging.root.handlers[:]: for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler) logging.root.removeHandler(handler)
@@ -54,13 +54,13 @@ async def init_logging(extra_handlers: list[logging.Handler] = None) -> logging.
handler.setFormatter(color_formatter) handler.setFormatter(color_formatter)
qcg_logger.addHandler(handler) qcg_logger.addHandler(handler)
qcg_logger.debug('日志初始化完成,日志级别:%s' % level) qcg_logger.debug('Logging initialized, log level: %s' % level)
logging.basicConfig( logging.basicConfig(
level=logging.CRITICAL, # 设置日志输出格式 level=logging.CRITICAL, # Set log output format
format='[DEPR][%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s', format='[DEPR][%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s',
# 日志输出的格式 # Log output format
# -8表示占位符让输出左对齐输出长度都为8位 # -8 is a width specifier: left-align the field and pad it to 8 characters
datefmt='%Y-%m-%d %H:%M:%S', # 时间输出的格式 datefmt='%Y-%m-%d %H:%M:%S', # Time output format
handlers=[logging.NullHandler()], handlers=[logging.NullHandler()],
) )

View File

@@ -7,11 +7,11 @@ from . import app
preregistered_migrations: list[typing.Type[Migration]] = [] preregistered_migrations: list[typing.Type[Migration]] = []
"""当前阶段暂不支持扩展""" """Currently not supported for extension"""
def migration_class(name: str, number: int): def migration_class(name: str, number: int):
"""注册一个迁移""" """Register a migration"""
def decorator(cls: typing.Type[Migration]) -> typing.Type[Migration]: def decorator(cls: typing.Type[Migration]) -> typing.Type[Migration]:
cls.name = name cls.name = name
@@ -23,7 +23,7 @@ def migration_class(name: str, number: int):
class Migration(abc.ABC): class Migration(abc.ABC):
"""一个版本的迁移""" """A version migration"""
name: str name: str
@@ -36,10 +36,10 @@ class Migration(abc.ABC):
@abc.abstractmethod @abc.abstractmethod
async def need_migrate(self) -> bool: async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移""" """Determine if the current environment needs to run this migration"""
pass pass
@abc.abstractmethod @abc.abstractmethod
async def run(self): async def run(self):
"""执行迁移""" """Run migration"""
pass pass
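The `migration_class` decorator above is a registration-decorator pattern: it stamps metadata onto the class and appends it to a module-level list, which `MigrationStage` later sorts by number. A self-contained sketch of that mechanism:

```python
import abc

preregistered: list[type] = []


def migration_class(name: str, number: int):
    """Attach metadata to a Migration subclass and register it."""
    def decorator(cls):
        cls.name = name
        cls.number = number
        preregistered.append(cls)
        return cls
    return decorator


class Migration(abc.ABC):
    name: str
    number: int

    @abc.abstractmethod
    async def need_migrate(self) -> bool: ...

    @abc.abstractmethod
    async def run(self): ...


@migration_class('second', 2)
class Second(Migration):
    async def need_migrate(self) -> bool:
        return False

    async def run(self):
        pass


@migration_class('first', 1)
class First(Migration):
    async def need_migrate(self) -> bool:
        return True

    async def run(self):
        pass


# Sort by migration number to get the run order, as MigrationStage does.
preregistered.sort(key=lambda c: c.number)
print([c.name for c in preregistered])  # ['first', 'second']
```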

View File

@@ -9,7 +9,7 @@ preregistered_notes: list[typing.Type[LaunchNote]] = []
def note_class(name: str, number: int): def note_class(name: str, number: int):
"""注册一个启动信息""" """Register a launch information"""
def decorator(cls: typing.Type[LaunchNote]) -> typing.Type[LaunchNote]: def decorator(cls: typing.Type[LaunchNote]) -> typing.Type[LaunchNote]:
cls.name = name cls.name = name
@@ -21,7 +21,7 @@ def note_class(name: str, number: int):
class LaunchNote(abc.ABC): class LaunchNote(abc.ABC):
"""启动信息""" """Launch information"""
name: str name: str
@@ -34,10 +34,10 @@ class LaunchNote(abc.ABC):
@abc.abstractmethod @abc.abstractmethod
async def need_show(self) -> bool: async def need_show(self) -> bool:
"""判断当前环境是否需要显示此启动信息""" """Determine if the current environment needs to display this launch information"""
pass pass
@abc.abstractmethod @abc.abstractmethod
async def yield_note(self) -> typing.AsyncGenerator[typing.Tuple[str, int], None]: async def yield_note(self) -> typing.AsyncGenerator[typing.Tuple[str, int], None]:
"""生成启动信息""" """Generate launch information"""
pass pass

View File

@@ -7,7 +7,7 @@ from .. import note
@note.note_class('ClassicNotes', 1) @note.note_class('ClassicNotes', 1)
class ClassicNotes(note.LaunchNote): class ClassicNotes(note.LaunchNote):
"""经典启动信息""" """Classic launch information"""
async def need_show(self) -> bool: async def need_show(self) -> bool:
return True return True

View File

@@ -9,7 +9,7 @@ from .. import note
@note.note_class('SelectionModeOnWindows', 2) @note.note_class('SelectionModeOnWindows', 2)
class SelectionModeOnWindows(note.LaunchNote): class SelectionModeOnWindows(note.LaunchNote):
"""Windows 上的选择模式提示信息""" """Selection mode prompt information on Windows"""
async def need_show(self) -> bool: async def need_show(self) -> bool:
return os.name == 'nt' return os.name == 'nt'
@@ -19,3 +19,8 @@ class SelectionModeOnWindows(note.LaunchNote):
"""您正在使用 Windows 系统,若窗口左上角显示处于”选择“模式,程序将被暂停运行,此时请右键窗口中空白区域退出选择模式。""", """您正在使用 Windows 系统,若窗口左上角显示处于”选择“模式,程序将被暂停运行,此时请右键窗口中空白区域退出选择模式。""",
logging.INFO, logging.INFO,
) )
yield (
"""You are using Windows system, if the top left corner of the window displays "Selection" mode, the program will be paused running, please right-click on the blank area in the window to exit the selection mode.""",
logging.INFO,
)
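`yield_note` is an async generator producing `(message, level)` tuples, which the caller consumes with `async for`. A minimal sketch of that contract (the note texts here are placeholders):

```python
import asyncio
import logging


async def yield_note():
    """A LaunchNote-style async generator yielding (message, level) tuples."""
    yield ('first note', logging.INFO)
    yield ('second note', logging.WARNING)


async def show_notes() -> list[tuple[str, int]]:
    messages = []
    async for text, level in yield_note():
        messages.append((text, level))
    return messages


print(asyncio.run(show_notes()))
```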

View File

@@ -7,9 +7,9 @@ from . import app
preregistered_stages: dict[str, typing.Type[BootingStage]] = {} preregistered_stages: dict[str, typing.Type[BootingStage]] = {}
"""预注册的请求处理阶段。在初始化时,所有请求处理阶段类会被注册到此字典中。 """Pre-registered request processing stages. All request processing stage classes are registered in this dictionary during initialization.
当前阶段暂不支持扩展 Extension is currently not supported
""" """
@@ -22,11 +22,11 @@ def stage_class(name: str):
class BootingStage(abc.ABC): class BootingStage(abc.ABC):
"""启动阶段""" """Booting stage"""
name: str = None name: str = None
@abc.abstractmethod @abc.abstractmethod
async def run(self, ap: app.Application): async def run(self, ap: app.Application):
"""启动""" """Run"""
pass pass

View File

@@ -10,6 +10,7 @@ from ...command import cmdmgr
from ...provider.session import sessionmgr as llm_session_mgr from ...provider.session import sessionmgr as llm_session_mgr
from ...provider.modelmgr import modelmgr as llm_model_mgr from ...provider.modelmgr import modelmgr as llm_model_mgr
from ...provider.tools import toolmgr as llm_tool_mgr from ...provider.tools import toolmgr as llm_tool_mgr
from ...rag.knowledge import kbmgr as rag_mgr
from ...platform import botmgr as im_mgr from ...platform import botmgr as im_mgr
from ...persistence import mgr as persistencemgr from ...persistence import mgr as persistencemgr
from ...api.http.controller import main as http_controller from ...api.http.controller import main as http_controller
@@ -17,18 +18,20 @@ from ...api.http.service import user as user_service
from ...api.http.service import model as model_service from ...api.http.service import model as model_service
from ...api.http.service import pipeline as pipeline_service from ...api.http.service import pipeline as pipeline_service
from ...api.http.service import bot as bot_service from ...api.http.service import bot as bot_service
from ...api.http.service import knowledge as knowledge_service
from ...discover import engine as discover_engine from ...discover import engine as discover_engine
from ...storage import mgr as storagemgr from ...storage import mgr as storagemgr
from ...utils import logcache from ...utils import logcache
from ...vector import mgr as vectordb_mgr
from .. import taskmgr from .. import taskmgr
@stage.stage_class('BuildAppStage') @stage.stage_class('BuildAppStage')
class BuildAppStage(stage.BootingStage): class BuildAppStage(stage.BootingStage):
"""构建应用阶段""" """Build LangBot application"""
async def run(self, ap: app.Application): async def run(self, ap: app.Application):
"""构建app对象的各个组件对象并初始化""" """Build LangBot application"""
ap.task_mgr = taskmgr.AsyncTaskManager(ap) ap.task_mgr = taskmgr.AsyncTaskManager(ap)
discover = discover_engine.ComponentDiscoveryEngine(ap) discover = discover_engine.ComponentDiscoveryEngine(ap)
@@ -43,7 +46,7 @@ class BuildAppStage(stage.BootingStage):
await ver_mgr.initialize() await ver_mgr.initialize()
ap.ver_mgr = ver_mgr ap.ver_mgr = ver_mgr
# 发送公告 # Send announcement
ann_mgr = announce.AnnouncementManager(ap) ann_mgr = announce.AnnouncementManager(ap)
ap.ann_mgr = ann_mgr ap.ann_mgr = ann_mgr
@@ -92,6 +95,15 @@ class BuildAppStage(stage.BootingStage):
await pipeline_mgr.initialize() await pipeline_mgr.initialize()
ap.pipeline_mgr = pipeline_mgr ap.pipeline_mgr = pipeline_mgr
rag_mgr_inst = rag_mgr.RAGManager(ap)
await rag_mgr_inst.initialize()
ap.rag_mgr = rag_mgr_inst
# Initialize vector database manager
vectordb_mgr_inst = vectordb_mgr.VectorDBManager(ap)
await vectordb_mgr_inst.initialize()
ap.vector_db_mgr = vectordb_mgr_inst
http_ctrl = http_controller.HTTPController(ap) http_ctrl = http_controller.HTTPController(ap)
await http_ctrl.initialize() await http_ctrl.initialize()
ap.http_ctrl = http_ctrl ap.http_ctrl = http_ctrl
@@ -99,8 +111,11 @@ class BuildAppStage(stage.BootingStage):
user_service_inst = user_service.UserService(ap) user_service_inst = user_service.UserService(ap)
ap.user_service = user_service_inst ap.user_service = user_service_inst
model_service_inst = model_service.ModelsService(ap) llm_model_service_inst = model_service.LLMModelsService(ap)
ap.model_service = model_service_inst ap.llm_model_service = llm_model_service_inst
embedding_models_service_inst = model_service.EmbeddingModelsService(ap)
ap.embedding_models_service = embedding_models_service_inst
pipeline_service_inst = pipeline_service.PipelineService(ap) pipeline_service_inst = pipeline_service.PipelineService(ap)
ap.pipeline_service = pipeline_service_inst ap.pipeline_service = pipeline_service_inst
@@ -108,5 +123,8 @@ class BuildAppStage(stage.BootingStage):
bot_service_inst = bot_service.BotService(ap) bot_service_inst = bot_service.BotService(ap)
ap.bot_service = bot_service_inst ap.bot_service = bot_service_inst
knowledge_service_inst = knowledge_service.KnowledgeService(ap)
ap.knowledge_service = knowledge_service_inst
ctrl = controller.Controller(ap) ctrl = controller.Controller(ap)
ap.ctrl = ctrl ap.ctrl = ctrl

View File

@@ -7,11 +7,18 @@ from .. import stage, app
@stage.stage_class('GenKeysStage') @stage.stage_class('GenKeysStage')
class GenKeysStage(stage.BootingStage): class GenKeysStage(stage.BootingStage):
"""生成密钥阶段""" """Generate keys stage"""
async def run(self, ap: app.Application): async def run(self, ap: app.Application):
"""启动""" """Generate keys"""
if not ap.instance_config.data['system']['jwt']['secret']: if not ap.instance_config.data['system']['jwt']['secret']:
ap.instance_config.data['system']['jwt']['secret'] = secrets.token_hex(16) ap.instance_config.data['system']['jwt']['secret'] = secrets.token_hex(16)
await ap.instance_config.dump_config() await ap.instance_config.dump_config()
if 'recovery_key' not in ap.instance_config.data['system']:
ap.instance_config.data['system']['recovery_key'] = ''
if not ap.instance_config.data['system']['recovery_key']:
ap.instance_config.data['system']['recovery_key'] = secrets.token_hex(3).upper()
await ap.instance_config.dump_config()
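The key-generation logic added here can be sketched against a plain dict standing in for `instance_config.data`: fill the JWT secret only when empty, and add the `recovery_key` field first (older configs predate it) before filling it:

```python
import secrets

config = {'system': {'jwt': {'secret': ''}}}

# Fill the JWT secret only when it is empty, as GenKeysStage does.
if not config['system']['jwt']['secret']:
    config['system']['jwt']['secret'] = secrets.token_hex(16)  # 32 hex chars

# Older configs may lack the field entirely; add it, then fill it.
if 'recovery_key' not in config['system']:
    config['system']['recovery_key'] = ''
if not config['system']['recovery_key']:
    config['system']['recovery_key'] = secrets.token_hex(3).upper()  # 6 hex chars

print(len(config['system']['jwt']['secret']), len(config['system']['recovery_key']))  # 32 6
```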

View File

@@ -8,10 +8,10 @@ from ..bootutils import config
@stage.stage_class('LoadConfigStage') @stage.stage_class('LoadConfigStage')
class LoadConfigStage(stage.BootingStage): class LoadConfigStage(stage.BootingStage):
"""加载配置文件阶段""" """Load config file stage"""
async def run(self, ap: app.Application): async def run(self, ap: app.Application):
"""启动""" """Load config file"""
# ======= deprecated ======= # ======= deprecated =======
if os.path.exists('data/config/command.json'): if os.path.exists('data/config/command.json'):

View File

@@ -11,10 +11,13 @@ importutil.import_modules_in_pkg(migrations)
@stage.stage_class('MigrationStage') @stage.stage_class('MigrationStage')
class MigrationStage(stage.BootingStage): class MigrationStage(stage.BootingStage):
"""迁移阶段""" """Migration stage
These migrations are legacy, only performed in version 3.x
"""
async def run(self, ap: app.Application): async def run(self, ap: app.Application):
"""启动""" """Run migration"""
if any( if any(
[ [
@@ -29,7 +32,7 @@ class MigrationStage(stage.BootingStage):
migrations = migration.preregistered_migrations migrations = migration.preregistered_migrations
# 按照迁移号排序 # Sort by migration number
migrations.sort(key=lambda x: x.number) migrations.sort(key=lambda x: x.number)
for migration_cls in migrations: for migration_cls in migrations:
@@ -37,4 +40,4 @@ class MigrationStage(stage.BootingStage):
if await migration_instance.need_migrate(): if await migration_instance.need_migrate():
await migration_instance.run() await migration_instance.run()
print(f'已执行迁移 {migration_instance.name}') print(f'Migration {migration_instance.name} executed')

View File

@@ -8,7 +8,7 @@ from ..bootutils import log
class PersistenceHandler(logging.Handler, object): class PersistenceHandler(logging.Handler, object):
""" """
保存日志到数据库 Save logs to database
""" """
ap: app.Application ap: app.Application
@@ -19,9 +19,9 @@ class PersistenceHandler(logging.Handler, object):
def emit(self, record): def emit(self, record):
""" """
emit函数为自定义handler类时必重写的函数这里可以根据需要对日志消息做一些处理比如发送日志到服务器 emit must be overridden in custom handler classes; process the log message here as needed, e.g. send it to a server
发出记录(Emit a record) Emit a record
""" """
try: try:
msg = self.format(record) msg = self.format(record)
@@ -34,10 +34,10 @@ class PersistenceHandler(logging.Handler, object):
@stage.stage_class('SetupLoggerStage') @stage.stage_class('SetupLoggerStage')
class SetupLoggerStage(stage.BootingStage): class SetupLoggerStage(stage.BootingStage):
"""设置日志器阶段""" """Setup logger stage"""
async def run(self, ap: app.Application): async def run(self, ap: app.Application):
"""启动""" """Setup logger"""
persistence_handler = PersistenceHandler('LoggerHandler', ap) persistence_handler = PersistenceHandler('LoggerHandler', ap)
extra_handlers = [] extra_handlers = []
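`PersistenceHandler` follows the standard `logging.Handler` extension point: subclass, override `emit`, and route each formatted record wherever needed. A minimal sketch that stores records in memory instead of a database:

```python
import logging


class ListHandler(logging.Handler):
    """Minimal emit() override: keep formatted records in memory."""

    def __init__(self):
        super().__init__()
        self.records: list[str] = []

    def emit(self, record: logging.LogRecord) -> None:
        try:
            self.records.append(self.format(record))
        except Exception:
            self.handleError(record)


logger = logging.getLogger('sketch')
logger.setLevel(logging.INFO)
handler = ListHandler()
handler.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))
logger.addHandler(handler)
logger.info('hello')
print(handler.records)  # ['[INFO] hello']
```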

View File

@@ -12,10 +12,10 @@ importutil.import_modules_in_pkg(notes)
@stage.stage_class('ShowNotesStage') @stage.stage_class('ShowNotesStage')
class ShowNotesStage(stage.BootingStage): class ShowNotesStage(stage.BootingStage):
"""显示启动信息阶段""" """Show notes stage"""
async def run(self, ap: app.Application): async def run(self, ap: app.Application):
# 排序 # Sort
note.preregistered_notes.sort(key=lambda x: x.number) note.preregistered_notes.sort(key=lambda x: x.number)
for note_cls in note.preregistered_notes: for note_cls in note.preregistered_notes:

View File

@@ -9,13 +9,13 @@ from . import entities as core_entities
class TaskContext: class TaskContext:
"""任务跟踪上下文""" """Task tracking context"""
current_action: str current_action: str
"""当前正在执行的动作""" """Current action being executed"""
log: str log: str
"""记录日志""" """Log"""
def __init__(self): def __init__(self):
self.current_action = 'default' self.current_action = 'default'
@@ -58,40 +58,40 @@ placeholder_context: TaskContext | None = None
class TaskWrapper: class TaskWrapper:
"""任务包装器""" """Task wrapper"""
_id_index: int = 0 _id_index: int = 0
"""任务ID索引""" """Task ID index"""
id: int id: int
"""任务ID""" """Task ID"""
task_type: str = 'system' # 任务类型: system user task_type: str = 'system' # Task type: system or user
"""任务类型""" """Task type"""
kind: str = 'system_task' # 由发起者确定任务种类,通常同质化的任务种类相同 kind: str = 'system_task' # Task kind determined by the initiator; homogeneous tasks usually share the same kind
"""任务种类""" """Task kind"""
name: str = '' name: str = ''
"""任务唯一名称""" """Task unique name"""
label: str = '' label: str = ''
"""任务显示名称""" """Task display name"""
task_context: TaskContext task_context: TaskContext
"""任务上下文""" """Task context"""
task: asyncio.Task task: asyncio.Task
"""任务""" """Task"""
task_stack: list = None task_stack: list = None
"""任务堆栈""" """Task stack"""
ap: app.Application ap: app.Application
"""应用实例""" """Application instance"""
scopes: list[core_entities.LifecycleControlScope] scopes: list[core_entities.LifecycleControlScope]
"""任务所属生命周期控制范围""" """Task scope"""
def __init__( def __init__(
self, self,
@@ -165,13 +165,13 @@ class TaskWrapper:
class AsyncTaskManager: class AsyncTaskManager:
"""保存app中的所有异步任务 """Save all asynchronous tasks in the app
包含系统级的和用户级(插件安装、更新等由用户直接发起的)的""" Include system-level and user-level (plugin installation, update, etc. initiated by users directly)"""
ap: app.Application ap: app.Application
tasks: list[TaskWrapper] tasks: list[TaskWrapper]
"""所有任务""" """All tasks"""
def __init__(self, ap: app.Application): def __init__(self, ap: app.Application):
self.ap = ap self.ap = ap

View File

@@ -4,7 +4,7 @@ from .base import Base
class Bot(Base): class Bot(Base):
"""机器人""" """Bot"""
__tablename__ = 'bots' __tablename__ = 'bots'

View File

@@ -12,7 +12,7 @@ initial_metadata = [
class Metadata(Base): class Metadata(Base):
"""数据库元数据""" """Database metadata"""
__tablename__ = 'metadata' __tablename__ = 'metadata'

View File

@@ -4,7 +4,7 @@ from .base import Base
class LLMModel(Base): class LLMModel(Base):
"""LLM 模型""" """LLM model"""
__tablename__ = 'llm_models' __tablename__ = 'llm_models'
@@ -23,3 +23,24 @@ class LLMModel(Base):
server_default=sqlalchemy.func.now(), server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(), onupdate=sqlalchemy.func.now(),
) )
class EmbeddingModel(Base):
"""Embedding 模型"""
__tablename__ = 'embedding_models'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
description = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester_config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
api_keys = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
extra_args = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
created_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.func.now())
updated_at = sqlalchemy.Column(
sqlalchemy.DateTime,
nullable=False,
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)

View File

@@ -4,7 +4,7 @@ from .base import Base
class LegacyPipeline(Base): class LegacyPipeline(Base):
"""旧版流水线""" """Legacy pipeline"""
__tablename__ = 'legacy_pipelines' __tablename__ = 'legacy_pipelines'
@@ -20,13 +20,12 @@ class LegacyPipeline(Base):
) )
for_version = sqlalchemy.Column(sqlalchemy.String(255), nullable=False) for_version = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
is_default = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False, default=False) is_default = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False, default=False)
stages = sqlalchemy.Column(sqlalchemy.JSON, nullable=False) stages = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False) config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
class PipelineRunRecord(Base): class PipelineRunRecord(Base):
"""流水线运行记录""" """Pipeline run record"""
__tablename__ = 'pipeline_run_records' __tablename__ = 'pipeline_run_records'
@@ -43,3 +42,4 @@ class PipelineRunRecord(Base):
started_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False) started_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
finished_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False) finished_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
result = sqlalchemy.Column(sqlalchemy.JSON, nullable=False) result = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
knowledge_base_uuid = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)

View File

@@ -4,7 +4,7 @@ from .base import Base
class PluginSetting(Base): class PluginSetting(Base):
"""插件配置""" """Plugin setting"""
__tablename__ = 'plugin_settings' __tablename__ = 'plugin_settings'

View File

@@ -0,0 +1,50 @@
import sqlalchemy
from .base import Base
# Base = declarative_base()
# DATABASE_URL = os.getenv('DATABASE_URL', 'sqlite:///./rag_knowledge.db')
# print("Using database URL:", DATABASE_URL)
# engine = create_engine(DATABASE_URL, connect_args={'check_same_thread': False})
# SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# def create_db_and_tables():
# """Creates all database tables defined in the Base."""
# Base.metadata.create_all(bind=engine)
# print('Database tables created or already exist.')
class KnowledgeBase(Base):
__tablename__ = 'knowledge_bases'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String, index=True)
description = sqlalchemy.Column(sqlalchemy.Text)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
embedding_model_uuid = sqlalchemy.Column(sqlalchemy.String, default='')
top_k = sqlalchemy.Column(sqlalchemy.Integer, default=5)
class File(Base):
__tablename__ = 'knowledge_base_files'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
kb_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
file_name = sqlalchemy.Column(sqlalchemy.String)
extension = sqlalchemy.Column(sqlalchemy.String)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
status = sqlalchemy.Column(sqlalchemy.String, default='pending') # pending, processing, completed, failed
class Chunk(Base):
__tablename__ = 'knowledge_base_chunks'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
file_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
text = sqlalchemy.Column(sqlalchemy.Text)
# class Vector(Base):
# __tablename__ = 'knowledge_base_vectors'
# uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
# chunk_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)
# embedding = sqlalchemy.Column(sqlalchemy.LargeBinary)
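The knowledge-base models above follow the usual SQLAlchemy declarative pattern. A runnable sketch against an in-memory SQLite database, using a local `declarative_base()` in place of the project's shared `Base` from `.base` (that import is assumed), with only a subset of the columns:

```python
import sqlalchemy
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()  # stand-in for the project's shared Base


class KnowledgeBase(Base):
    __tablename__ = 'knowledge_bases'

    uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
    name = sqlalchemy.Column(sqlalchemy.String, index=True)
    top_k = sqlalchemy.Column(sqlalchemy.Integer, default=5)  # default applied on insert


engine = sqlalchemy.create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(KnowledgeBase(uuid='kb-1', name='docs'))
    session.commit()
    kb = session.scalar(
        sqlalchemy.select(KnowledgeBase).where(KnowledgeBase.uuid == 'kb-1')
    )
    row = (kb.name, kb.top_k)

print(row)  # ('docs', 5)
```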

View File

@@ -0,0 +1,13 @@
from sqlalchemy import Column, Integer, ForeignKey, LargeBinary
from sqlalchemy.orm import declarative_base, relationship
Base = declarative_base()
class Vector(Base):
__tablename__ = 'vectors'
id = Column(Integer, primary_key=True, index=True)
chunk_id = Column(Integer, ForeignKey('chunks.id'), unique=True)
embedding = Column(LargeBinary) # Store embeddings as binary
chunk = relationship('Chunk', back_populates='vector')

View File

View File

@@ -0,0 +1,13 @@
from __future__ import annotations
import pydantic
from typing import Any
class RetrieveResultEntry(pydantic.BaseModel):
id: str
metadata: dict[str, Any]
distance: float
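`RetrieveResultEntry` is a plain Pydantic model, so retrieval results serialize directly; a small sketch, assuming Pydantic v2 (`model_dump`):

```python
from typing import Any

import pydantic


class RetrieveResultEntry(pydantic.BaseModel):
    id: str
    metadata: dict[str, Any]
    distance: float


# hypothetical values, e.g. one vector-store hit for a chunk
entry = RetrieveResultEntry(id='chunk-1', metadata={'file': 'a.md'}, distance=0.12)
print(entry.model_dump())
```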

View File

@@ -11,7 +11,7 @@ preregistered_managers: list[type[BaseDatabaseManager]] = []
def manager_class(name: str) -> None: def manager_class(name: str) -> None:
"""注册一个数据库管理类""" """Register a database manager class"""
def decorator(cls: type[BaseDatabaseManager]) -> type[BaseDatabaseManager]: def decorator(cls: type[BaseDatabaseManager]) -> type[BaseDatabaseManager]:
cls.name = name cls.name = name
@@ -22,7 +22,7 @@ def manager_class(name: str) -> None:
class BaseDatabaseManager(abc.ABC): class BaseDatabaseManager(abc.ABC):
"""基础数据库管理类""" """Base database manager class"""
name: str name: str

View File

@@ -7,7 +7,7 @@ from .. import database
@database.manager_class('sqlite') @database.manager_class('sqlite')
class SQLiteDatabaseManager(database.BaseDatabaseManager): class SQLiteDatabaseManager(database.BaseDatabaseManager):
"""SQLite 数据库管理类""" """SQLite database manager"""
async def initialize(self) -> None: async def initialize(self) -> None:
sqlite_path = 'data/langbot.db' sqlite_path = 'data/langbot.db'

View File

@@ -22,12 +22,12 @@ importutil.import_modules_in_pkg(persistence)
class PersistenceManager: class PersistenceManager:
"""持久化模块管理器""" """Persistence module manager"""
ap: app.Application ap: app.Application
db: database.BaseDatabaseManager db: database.BaseDatabaseManager
"""数据库管理器""" """Database manager"""
meta: sqlalchemy.MetaData meta: sqlalchemy.MetaData
@@ -111,7 +111,7 @@ class PersistenceManager:
'stages': pipeline_service.default_stage_order, 'stages': pipeline_service.default_stage_order,
'is_default': True, 'is_default': True,
'name': 'ChatPipeline', 'name': 'ChatPipeline',
'description': '默认提供的流水线,您配置的机器人、第一个模型将自动绑定到此流水线', 'description': 'Default pipeline, new bots will be bound to this pipeline | 默认提供的流水线,您配置的机器人将自动绑定到此流水线',
'config': pipeline_config, 'config': pipeline_config,
} }

View File

@@ -10,7 +10,7 @@ preregistered_db_migrations: list[typing.Type[DBMigration]] = []
def migration_class(number: int): def migration_class(number: int):
"""迁移类装饰器""" """Migration class decorator"""
def wrapper(cls: typing.Type[DBMigration]) -> typing.Type[DBMigration]: def wrapper(cls: typing.Type[DBMigration]) -> typing.Type[DBMigration]:
cls.number = number cls.number = number
@@ -21,20 +21,20 @@ def migration_class(number: int):
class DBMigration(abc.ABC): class DBMigration(abc.ABC):
"""数据库迁移""" """Database migration"""
number: int number: int
"""迁移号""" """Migration number"""
def __init__(self, ap: app.Application): def __init__(self, ap: app.Application):
self.ap = ap self.ap = ap
@abc.abstractmethod @abc.abstractmethod
async def upgrade(self): async def upgrade(self):
"""升级""" """Upgrade"""
pass pass
@abc.abstractmethod @abc.abstractmethod
async def downgrade(self): async def downgrade(self):
"""降级""" """Downgrade"""
pass pass
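The `migration_class` decorator above implements a simple registry pattern: it stamps the migration number on the class and appends it to a module-level list, so pending migrations can later be sorted and applied in order. A self-contained sketch (names are illustrative, not the project's):

```python
import abc

preregistered: list[type['MiniMigration']] = []


def migration_class(number: int):
    """Decorator: stamp the migration number and register the class."""

    def wrapper(cls):
        cls.number = number
        preregistered.append(cls)
        return cls

    return wrapper


class MiniMigration(abc.ABC):
    number: int

    @abc.abstractmethod
    async def upgrade(self): ...

    @abc.abstractmethod
    async def downgrade(self): ...


@migration_class(1)
class AddColumn(MiniMigration):
    async def upgrade(self):
        return 'up'

    async def downgrade(self):
        return 'down'


# pending migrations are applied in ascending number order
print([m.number for m in sorted(preregistered, key=lambda c: c.number)])  # [1]
```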

View File

@@ -15,21 +15,21 @@ from ...entity.persistence import (
@migration.migration_class(1) @migration.migration_class(1)
class DBMigrateV3Config(migration.DBMigration): class DBMigrateV3Config(migration.DBMigration):
"""从 v3 的配置迁移到 v4 的数据库""" """Migrate v3 config to v4 database"""
async def upgrade(self): async def upgrade(self):
"""升级""" """Upgrade"""
""" """
将 data/config 下的所有配置文件进行迁移。 Migrate all config files under data/config.
迁移后,之前的配置文件都保存到 data/legacy/config 下。 After migration, all previous config files are saved under data/legacy/config.
迁移后data/metadata/ 下的所有配置文件都保存到 data/legacy/metadata 下。 After migration, all config files under data/metadata/ are saved under data/legacy/metadata.
""" """
if self.ap.provider_cfg is None: if self.ap.provider_cfg is None:
return return
# ======= 迁移模型 ======= # ======= Migrate model =======
# 只迁移当前选中的模型 # Only migrate the currently selected model
model_name = self.ap.provider_cfg.data.get('model', 'gpt-4o') model_name = self.ap.provider_cfg.data.get('model', 'gpt-4o')
model_requester = 'openai-chat-completions' model_requester = 'openai-chat-completions'
@@ -91,8 +91,8 @@ class DBMigrateV3Config(migration.DBMigration):
sqlalchemy.insert(persistence_model.LLMModel).values(**llm_model_data) sqlalchemy.insert(persistence_model.LLMModel).values(**llm_model_data)
) )
# ======= 迁移流水线配置 ======= # ======= Migrate pipeline config =======
# 修改到默认流水线 # Modify to default pipeline
default_pipeline = [ default_pipeline = [
self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline) self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
for pipeline in ( for pipeline in (
@@ -184,8 +184,8 @@ class DBMigrateV3Config(migration.DBMigration):
.where(persistence_pipeline.LegacyPipeline.uuid == default_pipeline['uuid']) .where(persistence_pipeline.LegacyPipeline.uuid == default_pipeline['uuid'])
) )
# ======= 迁移机器人 ======= # ======= Migrate bot =======
# 只迁移启用的机器人 # Only migrate enabled bots
for adapter in self.ap.platform_cfg.data.get('platform-adapters', []): for adapter in self.ap.platform_cfg.data.get('platform-adapters', []):
if not adapter.get('enable'): if not adapter.get('enable'):
continue continue
@@ -207,7 +207,7 @@ class DBMigrateV3Config(migration.DBMigration):
await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_bot.Bot).values(**bot_data)) await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_bot.Bot).values(**bot_data))
# ======= 迁移系统设置 ======= # ======= Migrate system settings =======
self.ap.instance_config.data['admins'] = self.ap.system_cfg.data['admin-sessions'] self.ap.instance_config.data['admins'] = self.ap.system_cfg.data['admin-sessions']
self.ap.instance_config.data['api']['port'] = self.ap.system_cfg.data['http-api']['port'] self.ap.instance_config.data['api']['port'] = self.ap.system_cfg.data['http-api']['port']
self.ap.instance_config.data['command'] = { self.ap.instance_config.data['command'] = {
@@ -223,7 +223,7 @@ class DBMigrateV3Config(migration.DBMigration):
await self.ap.instance_config.dump_config() await self.ap.instance_config.dump_config()
# ======= move files ======= # ======= move files =======
# 迁移 data/config 下的所有配置文件 # Migrate all config files under data/config
all_legacy_dir_name = [ all_legacy_dir_name = [
'config', 'config',
# 'metadata', # 'metadata',
@@ -246,4 +246,4 @@ class DBMigrateV3Config(migration.DBMigration):
move_legacy_files(dir_name) move_legacy_files(dir_name)
async def downgrade(self): async def downgrade(self):
"""降级""" """Downgrade"""

View File

@@ -7,10 +7,10 @@ from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(2) @migration.migration_class(2)
class DBMigrateCombineQuoteMsgConfig(migration.DBMigration): class DBMigrateCombineQuoteMsgConfig(migration.DBMigration):
"""引用消息合并配置""" """Combine quote message config"""
async def upgrade(self): async def upgrade(self):
"""升级""" """Upgrade"""
# read all pipelines # read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline)) pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
@@ -37,5 +37,5 @@ class DBMigrateCombineQuoteMsgConfig(migration.DBMigration):
) )
async def downgrade(self): async def downgrade(self):
"""降级""" """Downgrade"""
pass pass

View File

@@ -7,10 +7,10 @@ from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(3) @migration.migration_class(3)
class DBMigrateN8nConfig(migration.DBMigration): class DBMigrateN8nConfig(migration.DBMigration):
"""N8n配置""" """N8n config"""
async def upgrade(self): async def upgrade(self):
"""升级""" """Upgrade"""
# read all pipelines # read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline)) pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
@@ -45,5 +45,5 @@ class DBMigrateN8nConfig(migration.DBMigration):
) )
async def downgrade(self): async def downgrade(self):
"""降级""" """Downgrade"""
pass pass

View File

@@ -0,0 +1,38 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(4)
class DBMigrateRAGKBUUID(migration.DBMigration):
"""RAG知识库UUID"""
async def upgrade(self):
"""升级"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'knowledge-base' not in config['ai']['local-agent']:
config['ai']['local-agent']['knowledge-base'] = ''
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""降级"""
pass
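Both migrations 4 and 5 follow the same shape: read each pipeline's config dict, backfill a missing key with a default, and write the config back. The key insight is that the update is idempotent, so re-running the migration is harmless. A sketch of the backfill step in isolation (`backfill_kb_key` is a hypothetical helper, not project code):

```python
def backfill_kb_key(config: dict) -> dict:
    """Idempotently ensure the local-agent section has a knowledge-base key."""
    local_agent = config.setdefault('ai', {}).setdefault('local-agent', {})
    local_agent.setdefault('knowledge-base', '')  # no-op if the key already exists
    return config


cfg = {'ai': {'local-agent': {'model': 'm-1'}}}
backfill_kb_key(cfg)
print(cfg['ai']['local-agent']['knowledge-base'])  # ''
```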

View File

@@ -0,0 +1,38 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(5)
class DBMigratePipelineRemoveCotConfig(migration.DBMigration):
"""Pipeline remove cot config"""
async def upgrade(self):
"""Upgrade"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'remove-think' not in config['output']['misc']:
config['output']['misc']['remove-think'] = False
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""Downgrade"""
pass

View File

@@ -6,9 +6,9 @@ import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@stage.stage_class('BanSessionCheckStage') @stage.stage_class('BanSessionCheckStage')
class BanSessionCheckStage(stage.PipelineStage): class BanSessionCheckStage(stage.PipelineStage):
"""访问控制处理阶段 """Access control processing stage
仅检查query中群号或个人号是否在访问控制列表中。 Only check if the group or personal number in the query is in the access control list.
""" """
async def initialize(self, pipeline_config: dict): async def initialize(self, pipeline_config: dict):
@@ -41,5 +41,7 @@ class BanSessionCheckStage(stage.PipelineStage):
return entities.StageProcessResult( return entities.StageProcessResult(
result_type=entities.ResultType.CONTINUE if ctn else entities.ResultType.INTERRUPT, result_type=entities.ResultType.CONTINUE if ctn else entities.ResultType.INTERRUPT,
new_query=query, new_query=query,
console_notice=f'根据访问控制忽略消息: {query.launcher_type.value}_{query.launcher_id}' if not ctn else '', console_notice=f'Ignore message according to access control: {query.launcher_type.value}_{query.launcher_id}'
if not ctn
else '',
) )

View File

@@ -13,13 +13,13 @@ preregistered_filters: list[typing.Type[ContentFilter]] = []
def filter_class( def filter_class(
name: str, name: str,
) -> typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]: ) -> typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]:
"""内容过滤器类装饰器 """Content filter class decorator
Args: Args:
name (str): 过滤器名称 name (str): Filter name
Returns: Returns:
typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]: 装饰器 typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]: Decorator
""" """
def decorator(cls: typing.Type[ContentFilter]) -> typing.Type[ContentFilter]: def decorator(cls: typing.Type[ContentFilter]) -> typing.Type[ContentFilter]:
@@ -35,7 +35,7 @@ def filter_class(
class ContentFilter(metaclass=abc.ABCMeta): class ContentFilter(metaclass=abc.ABCMeta):
"""内容过滤器抽象类""" """Content filter abstract class"""
name: str name: str
@@ -46,31 +46,31 @@ class ContentFilter(metaclass=abc.ABCMeta):
@property @property
def enable_stages(self): def enable_stages(self):
"""启用的阶段 """Enabled stages
默认为消息请求AI前后的两个阶段。 Default is the two stages before and after the message request to AI.
entity.EnableStage.PRE: 消息请求AI前此时需要检查的内容是用户的输入消息。 entity.EnableStage.PRE: Before message request to AI, the content to check is the user's input message.
entity.EnableStage.POST: 消息请求AI后此时需要检查的内容是AI的回复消息。 entity.EnableStage.POST: After message request to AI, the content to check is the AI's reply message.
""" """
return [entities.EnableStage.PRE, entities.EnableStage.POST] return [entities.EnableStage.PRE, entities.EnableStage.POST]
async def initialize(self): async def initialize(self):
"""初始化过滤器""" """Initialize filter"""
pass pass
@abc.abstractmethod @abc.abstractmethod
async def process(self, query: pipeline_query.Query, message: str = None, image_url=None) -> entities.FilterResult: async def process(self, query: pipeline_query.Query, message: str = None, image_url=None) -> entities.FilterResult:
"""处理消息 """处理消息
分为前后阶段,具体取决于 enable_stages 的值。 It is divided into two stages, depending on the value of enable_stages.
对于内容过滤器来说,不需要考虑消息所处的阶段,只需要检查消息内容即可。 For content filters, you do not need to consider the stage of the message, you only need to check the message content.
Args: Args:
message (str): 需要检查的内容 message (str): Content to check
image_url (str): 要检查的图片的 URL image_url (str): URL of the image to check
Returns: Returns:
entities.FilterResult: 过滤结果,具体内容请查看 entities.FilterResult 类的文档 entities.FilterResult: Filter result, please refer to the documentation of entities.FilterResult class
""" """
raise NotImplementedError raise NotImplementedError

View File

@@ -8,7 +8,7 @@ import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@filter_model.filter_class('ban-word-filter') @filter_model.filter_class('ban-word-filter')
class BanWordFilter(filter_model.ContentFilter): class BanWordFilter(filter_model.ContentFilter):
"""根据内容过滤""" """Filter content"""
async def initialize(self): async def initialize(self):
pass pass

View File

@@ -8,7 +8,7 @@ import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@filter_model.filter_class('content-ignore') @filter_model.filter_class('content-ignore')
class ContentIgnore(filter_model.ContentFilter): class ContentIgnore(filter_model.ContentFilter):
"""根据内容忽略消息""" """Ignore message according to content"""
@property @property
def enable_stages(self): def enable_stages(self):
@@ -24,7 +24,7 @@ class ContentIgnore(filter_model.ContentFilter):
level=entities.ResultLevel.BLOCK, level=entities.ResultLevel.BLOCK,
replacement='', replacement='',
user_notice='', user_notice='',
console_notice='根据 ignore_rules 中的 prefix 规则,忽略消息', console_notice='Ignore message according to prefix rule in ignore_rules',
) )
if 'regexp' in query.pipeline_config['trigger']['ignore-rules']: if 'regexp' in query.pipeline_config['trigger']['ignore-rules']:
@@ -34,7 +34,7 @@ class ContentIgnore(filter_model.ContentFilter):
level=entities.ResultLevel.BLOCK, level=entities.ResultLevel.BLOCK,
replacement='', replacement='',
user_notice='', user_notice='',
console_notice='根据 ignore_rules 中的 regexp 规则,忽略消息', console_notice='Ignore message according to regexp rule in ignore_rules',
) )
return entities.FilterResult( return entities.FilterResult(
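The `ContentIgnore` filter above blocks a message when it matches either a `prefix` or a `regexp` rule from the pipeline's ignore-rules. The check itself reduces to a few lines; `should_ignore` is a hypothetical helper sketching that logic, not the filter's actual interface:

```python
import re


def should_ignore(text: str, prefixes: list[str], regexps: list[str]) -> bool:
    """Block on any matching prefix, then on any matching regexp (sketch of ignore-rules)."""
    if any(text.startswith(p) for p in prefixes):
        return True
    return any(re.search(r, text) for r in regexps)


print(should_ignore('#debug hello', ['#'], []))   # True  (prefix rule)
print(should_ignore('hello', ['#'], [r'^\d+$']))  # False (no rule matches)
```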

View File

@@ -15,9 +15,9 @@ importutil.import_modules_in_pkg(strategies)
@stage.stage_class('LongTextProcessStage') @stage.stage_class('LongTextProcessStage')
class LongTextProcessStage(stage.PipelineStage): class LongTextProcessStage(stage.PipelineStage):
"""长消息处理阶段 """Long message processing stage
改写: Rewrite:
- resp_message_chain - resp_message_chain
""" """
@@ -35,22 +35,22 @@ class LongTextProcessStage(stage.PipelineStage):
use_font = 'C:/Windows/Fonts/msyh.ttc' use_font = 'C:/Windows/Fonts/msyh.ttc'
if not os.path.exists(use_font): if not os.path.exists(use_font):
self.ap.logger.warn( self.ap.logger.warn(
'未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在配置文件中调整相关设置。' 'Font file not found, and Windows system font cannot be used, switch to forward message component to send long messages, you can adjust the related settings in the configuration file.'
) )
config['blob_message_strategy'] = 'forward' config['blob_message_strategy'] = 'forward'
else: else:
self.ap.logger.info('使用Windows自带字体:' + use_font) self.ap.logger.info('Using Windows system font: ' + use_font)
config['font-path'] = use_font config['font-path'] = use_font
else: else:
self.ap.logger.warn( self.ap.logger.warn(
'未找到字体文件,且无法使用系统自带字体,更换为转发消息组件以发送长消息,您可以在配置文件中调整相关设置。' 'Font file not found, and system font cannot be used, switch to forward message component to send long messages, you can adjust the related settings in the configuration file.'
) )
pipeline_config['output']['long-text-processing']['strategy'] = 'forward' pipeline_config['output']['long-text-processing']['strategy'] = 'forward'
except Exception: except Exception:
traceback.print_exc() traceback.print_exc()
self.ap.logger.error( self.ap.logger.error(
'加载字体文件失败({}),更换为转发消息组件以发送长消息,您可以在配置文件中调整相关设置。'.format( 'Failed to load font file ({}), switch to forward message component to send long messages, you can adjust the related settings in the configuration file.'.format(
use_font use_font
) )
) )
@@ -62,7 +62,7 @@ class LongTextProcessStage(stage.PipelineStage):
self.strategy_impl = strategy_cls(self.ap) self.strategy_impl = strategy_cls(self.ap)
break break
else: else:
raise ValueError(f'未找到名为 {config["strategy"]} 的长消息处理策略') raise ValueError(f'Long message processing strategy not found: {config["strategy"]}')
await self.strategy_impl.initialize() await self.strategy_impl.initialize()
@@ -76,7 +76,7 @@ class LongTextProcessStage(stage.PipelineStage):
break break
if contains_non_plain: if contains_non_plain:
self.ap.logger.debug('消息中包含非 Plain 组件,跳过长消息处理。') self.ap.logger.debug('Message contains non-Plain components, skip long message processing.')
elif ( elif (
len(str(query.resp_message_chain[-1])) len(str(query.resp_message_chain[-1]))
> query.pipeline_config['output']['long-text-processing']['threshold'] > query.pipeline_config['output']['long-text-processing']['threshold']

View File

@@ -15,17 +15,17 @@ Forward = platform_message.Forward
class ForwardComponentStrategy(strategy_model.LongTextStrategy): class ForwardComponentStrategy(strategy_model.LongTextStrategy):
async def process(self, message: str, query: pipeline_query.Query) -> list[platform_message.MessageComponent]: async def process(self, message: str, query: pipeline_query.Query) -> list[platform_message.MessageComponent]:
display = ForwardMessageDiaplay( display = ForwardMessageDiaplay(
title='群聊的聊天记录', title='Group chat history',
brief='[聊天记录]', brief='[Chat history]',
source='聊天记录', source='Chat history',
preview=['QQ用户: ' + message], preview=['User: ' + message],
summary='查看1条转发消息', summary='View 1 forwarded message',
) )
node_list = [ node_list = [
platform_message.ForwardMessageNode( platform_message.ForwardMessageNode(
sender_id=query.adapter.bot_account_id, sender_id=query.adapter.bot_account_id,
sender_name='QQ用户', sender_name='User',
message_chain=platform_message.MessageChain([message]), message_chain=platform_message.MessageChain([message]),
) )
] ]

View File

@@ -15,13 +15,13 @@ preregistered_strategies: list[typing.Type[LongTextStrategy]] = []
def strategy_class( def strategy_class(
name: str, name: str,
) -> typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]: ) -> typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]:
"""长文本处理策略类装饰器 """Long text processing strategy class decorator
Args: Args:
name (str): 策略名称 name (str): Strategy name
Returns: Returns:
typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]: 装饰器 typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]: Decorator
""" """
def decorator(cls: typing.Type[LongTextStrategy]) -> typing.Type[LongTextStrategy]: def decorator(cls: typing.Type[LongTextStrategy]) -> typing.Type[LongTextStrategy]:
@@ -37,7 +37,7 @@ def strategy_class(
class LongTextStrategy(metaclass=abc.ABCMeta): class LongTextStrategy(metaclass=abc.ABCMeta):
"""长文本处理策略抽象类""" """Long text processing strategy abstract class"""
name: str name: str
@@ -53,13 +53,13 @@ class LongTextStrategy(metaclass=abc.ABCMeta):
async def process(self, message: str, query: pipeline_query.Query) -> list[platform_message.MessageComponent]: async def process(self, message: str, query: pipeline_query.Query) -> list[platform_message.MessageComponent]:
"""处理长文本 """处理长文本
在 platform.json 中配置 long-text-process 字段,只要 文本长度超过了 threshold 就会调用此方法 If the text length exceeds the threshold, this method will be called.
Args: Args:
message (str): 消息 message (str): Message
query (core_entities.Query): 此次请求的上下文对象 query (core_entities.Query): Query object
Returns: Returns:
list[platform_message.MessageComponent]: 转换后的 平台 消息组件列表 list[platform_message.MessageComponent]: Converted platform message components
""" """
return [] return []

View File

@@ -11,9 +11,9 @@ importutil.import_modules_in_pkg(truncators)
@stage.stage_class('ConversationMessageTruncator') @stage.stage_class('ConversationMessageTruncator')
class ConversationMessageTruncator(stage.PipelineStage): class ConversationMessageTruncator(stage.PipelineStage):
"""会话消息截断器 """Conversation message truncator
用于截断会话消息链,以适应平台消息长度限制。 Used to truncate the conversation message chain to adapt to the LLM message length limit.
""" """
trun: truncator.Truncator trun: truncator.Truncator
@@ -26,7 +26,7 @@ class ConversationMessageTruncator(stage.PipelineStage):
self.trun = trun(self.ap) self.trun = trun(self.ap)
break break
else: else:
raise ValueError(f'未知的截断器: {use_method}') raise ValueError(f'Unknown truncator: {use_method}')
async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult: async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
"""处理""" """处理"""

View File

@@ -6,7 +6,7 @@ import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@truncator.truncator_class('round') @truncator.truncator_class('round')
class RoundTruncator(truncator.Truncator): class RoundTruncator(truncator.Truncator):
"""前文回合数阶段器""" """Truncate the conversation message chain to adapt to the LLM message length limit."""
async def truncate(self, query: pipeline_query.Query) -> pipeline_query.Query: async def truncate(self, query: pipeline_query.Query) -> pipeline_query.Query:
"""截断""" """截断"""
@@ -16,7 +16,7 @@ class RoundTruncator(truncator.Truncator):
current_round = 0 current_round = 0
# 从后往前遍历 # Traverse from back to front
for msg in query.messages[::-1]: for msg in query.messages[::-1]:
if current_round < max_round: if current_round < max_round:
temp_messages.append(msg) temp_messages.append(msg)
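The round truncation above walks the history back to front, counting rounds until `max_round` is reached. A self-contained sketch, assuming each user message marks one round boundary (`truncate_rounds` and the message-dict shape are illustrative, not the truncator's actual interface):

```python
def truncate_rounds(messages: list[dict], max_round: int) -> list[dict]:
    """Keep only the most recent `max_round` rounds of a chat history."""
    kept = []
    rounds = 0
    for msg in reversed(messages):  # traverse from back to front
        if rounds >= max_round:
            break
        kept.append(msg)
        if msg['role'] == 'user':
            rounds += 1  # a user message marks the start of a round
    return list(reversed(kept))


history = [
    {'role': 'user', 'content': 'hi'}, {'role': 'assistant', 'content': 'hello'},
    {'role': 'user', 'content': 'bye'}, {'role': 'assistant', 'content': 'see you'},
]
print(truncate_rounds(history, 1))  # keeps only the last user/assistant round
```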

View File

@@ -97,12 +97,20 @@ class RuntimePipeline:
query.message_event, platform_events.GroupMessage query.message_event, platform_events.GroupMessage
): ):
result.user_notice.insert(0, platform_message.At(query.message_event.sender.id)) result.user_notice.insert(0, platform_message.At(query.message_event.sender.id))
if await query.adapter.is_stream_output_supported():
await query.adapter.reply_message( await query.adapter.reply_message_chunk(
message_source=query.message_event, message_source=query.message_event,
message=result.user_notice, bot_message=query.resp_messages[-1],
quote_origin=query.pipeline_config['output']['misc']['quote-origin'], message=result.user_notice,
) quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
is_final=[msg.is_final for msg in query.resp_messages][0],
)
else:
await query.adapter.reply_message(
message_source=query.message_event,
message=result.user_notice,
quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
)
if result.debug_notice: if result.debug_notice:
self.ap.logger.debug(result.debug_notice) self.ap.logger.debug(result.debug_notice)
if result.console_notice: if result.console_notice:
@@ -148,23 +156,27 @@ class RuntimePipeline:
result = await result result = await result
if isinstance(result, pipeline_entities.StageProcessResult): # return the result directly self.ap.logger.debug(
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {result}') self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {result.result_type}'
)
await self._check_output(query, result) await self._check_output(query, result)
if result.result_type == pipeline_entities.ResultType.INTERRUPT: if result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}') self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break break
elif result.result_type == pipeline_entities.ResultType.CONTINUE: elif result.result_type == pipeline_entities.ResultType.CONTINUE:
query = result.new_query query = result.new_query
elif isinstance(result, typing.AsyncGenerator): # generator elif isinstance(result, typing.AsyncGenerator): # generator
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} gen') self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query.query_id} gen')
async for sub_result in result: async for sub_result in result:
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {sub_result}') self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {sub_result.result_type}'
)
await self._check_output(query, sub_result) await self._check_output(query, sub_result)
if sub_result.result_type == pipeline_entities.ResultType.INTERRUPT: if sub_result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}') self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break break
elif sub_result.result_type == pipeline_entities.ResultType.CONTINUE: elif sub_result.result_type == pipeline_entities.ResultType.CONTINUE:
query = sub_result.new_query query = sub_result.new_query
@@ -196,7 +208,7 @@ class RuntimePipeline:
if event_ctx.is_prevented_default(): if event_ctx.is_prevented_default():
return return
self.ap.logger.debug(f'Processing query {query}') self.ap.logger.debug(f'Processing query {query.query_id}')
await self._execute_from_stage(0, query) await self._execute_from_stage(0, query)
except Exception as e: except Exception as e:
@@ -204,7 +216,7 @@ class RuntimePipeline:
self.ap.logger.error(f'处理请求时出错 query_id={query.query_id} stage={inst_name} : {e}') self.ap.logger.error(f'处理请求时出错 query_id={query.query_id} stage={inst_name} : {e}')
self.ap.logger.error(f'Traceback: {traceback.format_exc()}') self.ap.logger.error(f'Traceback: {traceback.format_exc()}')
finally: finally:
self.ap.logger.debug(f'Query {query} processed') self.ap.logger.debug(f'Query {query.query_id} processed')
del self.ap.query_pool.cached_queries[query.query_id] del self.ap.query_pool.cached_queries[query.query_id]

View File

@@ -11,11 +11,11 @@ import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@stage.stage_class('PreProcessor') @stage.stage_class('PreProcessor')
class PreProcessor(stage.PipelineStage): class PreProcessor(stage.PipelineStage):
"""请求预处理阶段 """Request pre-processing stage
签出会话、prompt、上文、模型、内容函数。 Check out session, prompt, context, model, and content functions.
改写: Rewrite:
- session - session
- prompt - prompt
- messages - messages
@@ -29,12 +29,12 @@ class PreProcessor(stage.PipelineStage):
query: pipeline_query.Query, query: pipeline_query.Query,
stage_inst_name: str, stage_inst_name: str,
) -> entities.StageProcessResult: ) -> entities.StageProcessResult:
"""处理""" """Process"""
selected_runner = query.pipeline_config['ai']['runner']['runner'] selected_runner = query.pipeline_config['ai']['runner']['runner']
session = await self.ap.sess_mgr.get_session(query) session = await self.ap.sess_mgr.get_session(query)
# local-agent 时,llm_model None # When not local-agent, llm_model is None
llm_model = ( llm_model = (
await self.ap.model_mgr.get_model_by_uuid(query.pipeline_config['ai']['local-agent']['model']) await self.ap.model_mgr.get_model_by_uuid(query.pipeline_config['ai']['local-agent']['model'])
if selected_runner == 'local-agent' if selected_runner == 'local-agent'
@@ -80,7 +80,7 @@ class PreProcessor(stage.PipelineStage):
if me.type == 'image_url': if me.type == 'image_url':
msg.content.remove(me) msg.content.remove(me)
content_list = [] content_list: list[provider_message.ContentElement] = []
plain_text = '' plain_text = ''
qoute_msg = query.pipeline_config['trigger'].get('misc', '').get('combine-quote-message') qoute_msg = query.pipeline_config['trigger'].get('misc', '').get('combine-quote-message')

View File

@@ -25,7 +25,7 @@ class MessageHandler(metaclass=abc.ABCMeta):
def cut_str(self, s: str) -> str: def cut_str(self, s: str) -> str:
""" """
取字符串第一行最多20个字符若有多行或超过20个字符则加省略号 Take the first line of the string (at most 20 characters); if there are multiple lines or more than 20 characters, append an ellipsis
""" """
s0 = s.split('\n')[0] s0 = s.split('\n')[0]
if len(s0) > 20 or '\n' in s: if len(s0) > 20 or '\n' in s:
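The `cut_str` hunk is truncated here; based on the docstring, the body plausibly completes as below (the `[:20]`-plus-ellipsis form is an assumption inferred from the docstring, not shown in the diff):

```python
def cut_str(s: str) -> str:
    """Log preview: first line, at most 20 chars, '...' if truncated."""
    s0 = s.split('\n')[0]
    if len(s0) > 20 or '\n' in s:
        s0 = s0[:20] + '...'
    return s0


print(cut_str('hello'))         # -> hello
print(cut_str('line1\nline2'))  # -> line1...
```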

View File

@@ -1,5 +1,6 @@
from __future__ import annotations from __future__ import annotations
import uuid
import typing import typing
import traceback import traceback
@@ -48,7 +49,6 @@ class ChatMessageHandler(handler.MessageHandler):
if event_ctx.is_prevented_default(): if event_ctx.is_prevented_default():
if event_ctx.event.reply is not None: if event_ctx.event.reply is not None:
mc = platform_message.MessageChain(event_ctx.event.reply) mc = platform_message.MessageChain(event_ctx.event.reply)
query.resp_messages.append(mc) query.resp_messages.append(mc)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query) yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
@@ -60,6 +60,10 @@ class ChatMessageHandler(handler.MessageHandler):
query.user_message.content = event_ctx.event.alter query.user_message.content = event_ctx.event.alter
text_length = 0 text_length = 0
try:
is_stream = await query.adapter.is_stream_output_supported()
except AttributeError:
is_stream = False
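The `try/except AttributeError` probe lets older adapters that never implemented `is_stream_output_supported()` fall back to non-streaming output. A small sketch of the same feature-detection pattern (the dummy adapter classes are illustrative):

```python
import asyncio


class LegacyAdapter:
    """An adapter predating the streaming probe method."""
    pass


class StreamingAdapter:
    async def is_stream_output_supported(self) -> bool:
        return True


async def stream_supported(adapter) -> bool:
    # Missing method -> AttributeError -> treat as non-streaming.
    try:
        return await adapter.is_stream_output_supported()
    except AttributeError:
        return False


old_ok = asyncio.run(stream_supported(LegacyAdapter()))
new_ok = asyncio.run(stream_supported(StreamingAdapter()))
```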
try: try:
for r in runner_module.preregistered_runners: for r in runner_module.preregistered_runners:
@@ -68,21 +72,41 @@ class ChatMessageHandler(handler.MessageHandler):
break break
else: else:
raise ValueError(f'未找到请求运行器: {query.pipeline_config["ai"]["runner"]["runner"]}') raise ValueError(f'未找到请求运行器: {query.pipeline_config["ai"]["runner"]["runner"]}')
if is_stream:
resp_message_id = uuid.uuid4()
await query.adapter.create_message_card(str(resp_message_id), query.message_event)
async for result in runner.run(query):
result.resp_message_id = str(resp_message_id)
if query.resp_messages:
query.resp_messages.pop()
if query.resp_message_chain:
query.resp_message_chain.pop()
query.resp_messages.append(result)
self.ap.logger.info(f'对话({query.query_id})流式响应: {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:
async for result in runner.run(query):
query.resp_messages.append(result)
self.ap.logger.info(f'对话({query.query_id})响应: {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
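In the streaming branch, each new chunk replaces the previous one (`pop` then `append`), so `resp_messages` ends up holding only the latest chunk state. A reduced model of that bookkeeping (assuming chunks are cumulative snapshots, as the pops suggest; names are illustrative):

```python
def consume_chunks(chunks):
    """Model of the streaming loop: keep only the newest chunk."""
    resp_messages = []
    text_length = 0
    for chunk in chunks:
        if resp_messages:
            resp_messages.pop()      # drop the superseded partial chunk
        resp_messages.append(chunk)  # keep only the newest state
        text_length += len(chunk)
    return resp_messages, text_length


msgs, n = consume_chunks(['He', 'Hello', 'Hello!'])
```

The real handler also stamps every chunk with a shared `resp_message_id` so the adapter can keep updating one message card.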
query.session.using_conversation.messages.append(query.user_message) query.session.using_conversation.messages.append(query.user_message)
query.session.using_conversation.messages.extend(query.resp_messages) query.session.using_conversation.messages.extend(query.resp_messages)
except Exception as e: except Exception as e:
self.ap.logger.error(f'对话({query.query_id})请求失败: {type(e).__name__} {str(e)}') self.ap.logger.error(f'对话({query.query_id})请求失败: {type(e).__name__} {str(e)}')
traceback.print_exc()
hide_exception_info = query.pipeline_config['output']['misc']['hide-exception'] hide_exception_info = query.pipeline_config['output']['misc']['hide-exception']

View File

@@ -16,7 +16,7 @@ class CommandHandler(handler.MessageHandler):
self, self,
query: pipeline_query.Query, query: pipeline_query.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]: ) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""处理""" """Process"""
command_text = str(query.message_chain).strip()[1:] command_text = str(query.message_chain).strip()[1:]
@@ -71,7 +71,7 @@ class CommandHandler(handler.MessageHandler):
) )
) )
self.ap.logger.info(f'命令({query.query_id})报错: {self.cut_str(str(ret.error))}') self.ap.logger.info(f'Command({query.query_id}) error: {self.cut_str(str(ret.error))}')
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query) yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
elif ret.text is not None or ret.image_url is not None: elif ret.text is not None or ret.image_url is not None:
@@ -90,7 +90,7 @@ class CommandHandler(handler.MessageHandler):
) )
) )
self.ap.logger.info(f'命令返回: {self.cut_str(str(content[0]))}') self.ap.logger.info(f'Command returned: {self.cut_str(str(content[0]))}')
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query) yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else: else:

View File

@@ -33,11 +33,11 @@ class Processor(stage.PipelineStage):
query: pipeline_query.Query, query: pipeline_query.Query,
stage_inst_name: str, stage_inst_name: str,
) -> entities.StageProcessResult: ) -> entities.StageProcessResult:
"""处理""" """Process"""
message_text = str(query.message_chain).strip() message_text = str(query.message_chain).strip()
self.ap.logger.info( self.ap.logger.info(
f'处理 {query.launcher_type.value}_{query.launcher_id} 的请求({query.query_id}): {message_text}' f'Processing request from {query.launcher_type.value}_{query.launcher_id} ({query.query_id}): {message_text}'
) )
async def generator(): async def generator():

View File

@@ -6,6 +6,7 @@ import asyncio
import langbot_plugin.api.entities.builtin.platform.events as platform_events import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.message as platform_message import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.provider.message as provider_message
from .. import stage, entities from .. import stage, entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@@ -36,10 +37,22 @@ class SendResponseBackStage(stage.PipelineStage):
quote_origin = query.pipeline_config['output']['misc']['quote-origin'] quote_origin = query.pipeline_config['output']['misc']['quote-origin']
has_chunks = any(isinstance(msg, provider_message.MessageChunk) for msg in query.resp_messages)
# TODO: compatibility between commands and streaming output
if await query.adapter.is_stream_output_supported() and has_chunks:
is_final = [msg.is_final for msg in query.resp_messages][0]
await query.adapter.reply_message_chunk(
message_source=query.message_event,
bot_message=query.resp_messages[-1],
message=query.resp_message_chain[-1],
quote_origin=quote_origin,
is_final=is_final,
)
else:
await query.adapter.reply_message(
message_source=query.message_event,
message=query.resp_message_chain[-1],
quote_origin=quote_origin,
)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query) return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
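The send stage only takes the streaming path when both conditions hold: the adapter reports stream support and the runner actually produced `MessageChunk` objects. A toy version of that dispatch rule (class and function names are illustrative, not the adapter API):

```python
class Chunk:
    """Stand-in for provider_message.MessageChunk."""
    def __init__(self, is_final: bool):
        self.is_final = is_final


def pick_reply_mode(stream_supported: bool, resp_messages: list) -> str:
    # Both conditions must hold; a command reply, for example, yields
    # plain messages and falls back to the non-streaming path.
    has_chunks = any(isinstance(m, Chunk) for m in resp_messages)
    if stream_supported and has_chunks:
        return 'reply_message_chunk'
    return 'reply_message'


mode = pick_reply_mode(True, [Chunk(is_final=False)])
```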

View File

@@ -115,8 +115,10 @@ class RuntimeBot:
if isinstance(e, asyncio.CancelledError): if isinstance(e, asyncio.CancelledError):
self.task_context.set_current_action('Exited.') self.task_context.set_current_action('Exited.')
return return
traceback_str = traceback.format_exc()
self.task_context.set_current_action('Exited with error.') self.task_context.set_current_action('Exited with error.')
await self.logger.error(f'平台适配器运行出错:\n{e}\n{traceback.format_exc()}') await self.logger.error(f'平台适配器运行出错:\n{e}\n{traceback_str}')
self.task_wrapper = self.ap.task_mgr.create_task( self.task_wrapper = self.ap.task_mgr.create_task(
exception_wrapper(), exception_wrapper(),
@@ -169,8 +171,8 @@ class PlatformManager:
{}, {},
webchat_logger, webchat_logger,
ap=self.ap, ap=self.ap,
is_stream=False,
) )
webchat_adapter_inst.ap = self.ap
self.webchat_proxy_bot = RuntimeBot( self.webchat_proxy_bot = RuntimeBot(
ap=self.ap, ap=self.ap,

View File

@@ -120,7 +120,7 @@ class EventLogger(abstract_platform_event_logger.AbstractEventLogger):
async def _truncate_logs(self): async def _truncate_logs(self):
if len(self.logs) > MAX_LOG_COUNT: if len(self.logs) > MAX_LOG_COUNT:
for i in range(DELETE_COUNT_PER_TIME): for i in range(DELETE_COUNT_PER_TIME):
for image_key in self.logs[i].images: for image_key in self.logs[i].images: # type: ignore
await self.ap.storage_mgr.storage_provider.delete(image_key) await self.ap.storage_mgr.storage_provider.delete(image_key)
self.logs = self.logs[DELETE_COUNT_PER_TIME:] self.logs = self.logs[DELETE_COUNT_PER_TIME:]

View File

@@ -61,13 +61,13 @@ class AiocqhttpMessageConverter(abstract_platform_adapter.AbstractMessageConvert
for node in msg.node_list: for node in msg.node_list:
msg_list.extend((await AiocqhttpMessageConverter.yiri2target(node.message_chain))[0]) msg_list.extend((await AiocqhttpMessageConverter.yiri2target(node.message_chain))[0])
elif isinstance(msg, platform_message.File): elif isinstance(msg, platform_message.File):
msg_list.append({"type":"file", "data":{'file': msg.url, "name": msg.name}}) msg_list.append({'type': 'file', 'data': {'file': msg.url, 'name': msg.name}})
elif isinstance(msg, platform_message.Face): elif isinstance(msg, platform_message.Face):
if msg.face_type=='face': if msg.face_type == 'face':
msg_list.append(aiocqhttp.MessageSegment.face(msg.face_id)) msg_list.append(aiocqhttp.MessageSegment.face(msg.face_id))
elif msg.face_type=='rps': elif msg.face_type == 'rps':
msg_list.append(aiocqhttp.MessageSegment.rps()) msg_list.append(aiocqhttp.MessageSegment.rps())
elif msg.face_type=='dice': elif msg.face_type == 'dice':
msg_list.append(aiocqhttp.MessageSegment.dice()) msg_list.append(aiocqhttp.MessageSegment.dice())
else: else:
@@ -76,44 +76,130 @@ class AiocqhttpMessageConverter(abstract_platform_adapter.AbstractMessageConvert
return msg_list, msg_id, msg_time return msg_list, msg_id, msg_time
@staticmethod @staticmethod
async def target2yiri(message: str, message_id: int = -1, bot=None): async def target2yiri(message: str, message_id: int = -1, bot: aiocqhttp.CQHttp = None):
message = aiocqhttp.Message(message) message = aiocqhttp.Message(message)
def get_face_name(face_id): def get_face_name(face_id):
face_code_dict = { face_code_dict = {
"2": '好色', '2': '好色',
"4": "得意", "5": "流泪", "8": "", "9": "大哭", "10": "尴尬", "12": "调皮", "14": "微笑", "16": "", '4': '得意',
"21": "可爱", '5': '流泪',
"23": "傲慢", "24": "饥饿", "25": "", "26": "惊恐", "27": "流汗", "28": "憨笑", "29": "悠闲", '8': '',
"30": "奋斗", '9': '大哭',
"32": "疑问", "33": "", "34": "", "38": "敲打", "39": "再见", "41": "发抖", "42": "爱情", '10': '尴尬',
"43": "跳跳", '12': '调皮',
"49": "拥抱", "53": "蛋糕", "60": "咖啡", "63": "玫瑰", "66": "爱心", "74": "太阳", "75": "月亮", '14': '微笑',
"76": "", '16': '',
"78": "握手", "79": "胜利", "85": "飞吻", "89": "西瓜", "96": "冷汗", "97": "擦汗", "98": "抠鼻", '21': '可爱',
"99": "鼓掌", '23': '傲慢',
"100": "糗大了", "101": "坏笑", "102": "左哼哼", "103": "右哼哼", "104": "哈欠", "106": "委屈", '24': '饥饿',
"109": "左亲亲", '25': '',
"111": "可怜", "116": "示爱", "118": "抱拳", "120": "拳头", "122": "爱你", "123": "NO", "124": "OK", '26': '惊恐',
"125": "转圈", '27': '流汗',
"129": "挥手", "144": "喝彩", "147": "棒棒糖", "171": "", "173": "泪奔", "174": "无奈", "175": "卖萌", '28': '憨笑',
"176": "小纠结", "179": "doge", "180": "惊喜", "181": "骚扰", "182": "笑哭", "183": "我最美", '29': '悠闲',
"201": "点赞", '30': '奋斗',
"203": "托脸", "212": "托腮", "214": "啵啵", "219": "蹭一蹭", "222": "抱抱", "227": "拍手", '32': '疑问',
"232": "佛系", '33': '',
"240": "喷脸", "243": "甩头", "246": "加油抱抱", "262": "脑阔疼", "264": "捂脸", "265": "辣眼睛", '34': '',
"266": "哦哟", '38': '敲打',
"267": "头秃", "268": "问号脸", "269": "暗中观察", "270": "emm", "271": "吃瓜", "272": "呵呵哒", '39': '再见',
"273": "我酸了", '41': '发抖',
"277": "汪汪", "278": "", "281": "无眼笑", "282": "敬礼", "284": "面无表情", "285": "摸鱼", '42': '爱情',
"287": "", '43': '跳跳',
"289": "睁眼", "290": "敲开心", "293": "摸锦鲤", "294": "期待", "297": "拜谢", "298": "元宝", '49': '拥抱',
"299": "牛啊", '53': '蛋糕',
"305": "右亲亲", "306": "牛气冲天", "307": "喵喵", "314": "仔细分析", "315": "加油", "318": "崇拜", '60': '咖啡',
"319": "比心", '63': '玫瑰',
"320": "庆祝", "322": "拒绝", "324": "吃糖", "326": "生气" '66': '爱心',
'74': '太阳',
'75': '月亮',
'76': '',
'78': '握手',
'79': '胜利',
'85': '飞吻',
'89': '西瓜',
'96': '冷汗',
'97': '擦汗',
'98': '抠鼻',
'99': '鼓掌',
'100': '糗大了',
'101': '坏笑',
'102': '左哼哼',
'103': '右哼哼',
'104': '哈欠',
'106': '委屈',
'109': '左亲亲',
'111': '可怜',
'116': '示爱',
'118': '抱拳',
'120': '拳头',
'122': '爱你',
'123': 'NO',
'124': 'OK',
'125': '转圈',
'129': '挥手',
'144': '喝彩',
'147': '棒棒糖',
'171': '',
'173': '泪奔',
'174': '无奈',
'175': '卖萌',
'176': '小纠结',
'179': 'doge',
'180': '惊喜',
'181': '骚扰',
'182': '笑哭',
'183': '我最美',
'201': '点赞',
'203': '托脸',
'212': '托腮',
'214': '啵啵',
'219': '蹭一蹭',
'222': '抱抱',
'227': '拍手',
'232': '佛系',
'240': '喷脸',
'243': '甩头',
'246': '加油抱抱',
'262': '脑阔疼',
'264': '捂脸',
'265': '辣眼睛',
'266': '哦哟',
'267': '头秃',
'268': '问号脸',
'269': '暗中观察',
'270': 'emm',
'271': '吃瓜',
'272': '呵呵哒',
'273': '我酸了',
'277': '汪汪',
'278': '',
'281': '无眼笑',
'282': '敬礼',
'284': '面无表情',
'285': '摸鱼',
'287': '',
'289': '睁眼',
'290': '敲开心',
'293': '摸锦鲤',
'294': '期待',
'297': '拜谢',
'298': '元宝',
'299': '牛啊',
'305': '右亲亲',
'306': '牛气冲天',
'307': '喵喵',
'314': '仔细分析',
'315': '加油',
'318': '崇拜',
'319': '比心',
'320': '庆祝',
'322': '拒绝',
'324': '吃糖',
'326': '生气',
} }
return face_code_dict.get(face_id,'') return face_code_dict.get(face_id, '')
async def process_message_data(msg_data, reply_list): async def process_message_data(msg_data, reply_list):
if msg_data['type'] == 'image': if msg_data['type'] == 'image':
@@ -156,10 +242,10 @@ class AiocqhttpMessageConverter(abstract_platform_adapter.AbstractMessageConvert
elif msg.type == 'text': elif msg.type == 'text':
yiri_msg_list.append(platform_message.Plain(text=msg.data['text'])) yiri_msg_list.append(platform_message.Plain(text=msg.data['text']))
elif msg.type == 'image': elif msg.type == 'image':
emoji_id = msg.data.get("emoji_package_id", None) emoji_id = msg.data.get('emoji_package_id', None)
if emoji_id: if emoji_id:
face_id = emoji_id face_id = emoji_id
face_name = msg.data.get("summary", '') face_name = msg.data.get('summary', '')
image_msg = platform_message.Face(face_id=face_id, face_name=face_name) image_msg = platform_message.Face(face_id=face_id, face_name=face_name)
else: else:
image_base64, image_format = await image.qq_image_url_to_base64(msg.data['url']) image_base64, image_format = await image.qq_image_url_to_base64(msg.data['url'])
@@ -185,27 +271,28 @@ class AiocqhttpMessageConverter(abstract_platform_adapter.AbstractMessageConvert
yiri_msg_list.append(reply_msg) yiri_msg_list.append(reply_msg)
elif msg.type == 'file': elif msg.type == 'file':
pass
# file_name = msg.data['file'] # file_name = msg.data['file']
file_id = msg.data['file_id'] # file_id = msg.data['file_id']
file_data = await bot.get_file(file_id=file_id) # file_data = await bot.get_file(file_id=file_id)
file_name = file_data.get('file_name') # file_name = file_data.get('file_name')
file_path = file_data.get('file') # file_path = file_data.get('file')
_ = file_path # _ = file_path
file_url = file_data.get('file_url') # file_url = file_data.get('file_url')
file_size = file_data.get('file_size') # file_size = file_data.get('file_size')
yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size)) # yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size))
elif msg.type == 'face': elif msg.type == 'face':
face_id = msg.data['id'] face_id = msg.data['id']
face_name = msg.data['raw']['faceText'] face_name = msg.data['raw']['faceText']
if not face_name: if not face_name:
face_name = get_face_name(face_id) face_name = get_face_name(face_id)
yiri_msg_list.append(platform_message.Face(face_id=int(face_id),face_name=face_name.replace('/',''))) yiri_msg_list.append(platform_message.Face(face_id=int(face_id), face_name=face_name.replace('/', '')))
elif msg.type == 'rps': elif msg.type == 'rps':
face_id = msg.data['result'] face_id = msg.data['result']
yiri_msg_list.append(platform_message.Face(face_type="rps",face_id=int(face_id),face_name='猜拳')) yiri_msg_list.append(platform_message.Face(face_type='rps', face_id=int(face_id), face_name='猜拳'))
elif msg.type == 'dice': elif msg.type == 'dice':
face_id = msg.data['result'] face_id = msg.data['result']
yiri_msg_list.append(platform_message.Face(face_type='dice',face_id=int(face_id),face_name='骰子')) yiri_msg_list.append(platform_message.Face(face_type='dice', face_id=int(face_id), face_name='骰子'))
chain = platform_message.MessageChain(yiri_msg_list) chain = platform_message.MessageChain(yiri_msg_list)
@@ -221,7 +308,6 @@ class AiocqhttpEventConverter(abstract_platform_adapter.AbstractEventConverter):
async def target2yiri(event: aiocqhttp.Event, bot=None): async def target2yiri(event: aiocqhttp.Event, bot=None):
yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id, bot) yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id, bot)
if event.message_type == 'group': if event.message_type == 'group':
permission = 'MEMBER' permission = 'MEMBER'

View File

@@ -96,10 +96,16 @@ class DingTalkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
message_converter: DingTalkMessageConverter = DingTalkMessageConverter() message_converter: DingTalkMessageConverter = DingTalkMessageConverter()
event_converter: DingTalkEventConverter = DingTalkEventConverter() event_converter: DingTalkEventConverter = DingTalkEventConverter()
config: dict config: dict
card_instance_id_dict: (
dict  # reply-card registry: key is the message id, value is the card instance id; used during streaming to route chunks to the right card
)
seq: int  # message ordering; the seq value itself serves as the identifier
def __init__(self, config: dict, logger: EventLogger): def __init__(self, config: dict, logger: EventLogger):
self.config = config self.config = config
self.logger = logger self.logger = logger
self.card_instance_id_dict = {}
# self.seq = 1
required_keys = [ required_keys = [
'client_id', 'client_id',
'client_secret', 'client_secret',
@@ -112,6 +118,15 @@ class DingTalkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
self.bot_account_id = self.config['robot_name'] self.bot_account_id = self.config['robot_name']
self.bot = DingTalkClient(
client_id=config['client_id'],
client_secret=config['client_secret'],
robot_name=config['robot_name'],
robot_code=config['robot_code'],
markdown_card=config['markdown_card'],
logger=self.logger,
)
async def reply_message( async def reply_message(
self, self,
message_source: platform_events.MessageEvent, message_source: platform_events.MessageEvent,
@@ -126,6 +141,33 @@ class DingTalkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
content, at = await DingTalkMessageConverter.yiri2target(message) content, at = await DingTalkMessageConverter.yiri2target(message)
await self.bot.send_message(content, incoming_message, at) await self.bot.send_message(content, incoming_message, at)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
# event = await DingTalkEventConverter.yiri2target(
# message_source,
# )
# incoming_message = event.incoming_message
# msg_id = incoming_message.message_id
message_id = bot_message.resp_message_id
msg_seq = bot_message.msg_sequence
if (msg_seq - 1) % 8 == 0 or is_final:
content, at = await DingTalkMessageConverter.yiri2target(message)
card_instance, card_instance_id = self.card_instance_id_dict[message_id]
# print(card_instance_id)
await self.bot.send_card_message(card_instance, card_instance_id, content, is_final)
if is_final and bot_message.tool_calls is None:
# self.seq = 1  # reset seq once the reply has finished
self.card_instance_id_dict.pop(message_id)  # drop the card instance id once the reply has finished
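The `(msg_seq - 1) % 8 == 0 or is_final` guard throttles DingTalk card updates to every eighth chunk plus the final one, so the card is not rewritten on every token. Isolated as a helper (hypothetical name, for illustration only):

```python
def should_flush(msg_seq: int, is_final: bool) -> bool:
    """Update the card on seq 1, 9, 17, ... or on the final chunk."""
    return (msg_seq - 1) % 8 == 0 or is_final


# With 19 intermediate chunks, only three trigger a card update.
flushed = [seq for seq in range(1, 20) if should_flush(seq, False)]
```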
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain): async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
content = await DingTalkMessageConverter.yiri2target(message) content = await DingTalkMessageConverter.yiri2target(message)
if target_type == 'person': if target_type == 'person':
@@ -133,6 +175,20 @@ class DingTalkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
if target_type == 'group': if target_type == 'group':
await self.bot.send_proactive_message_to_group(target_id, content) await self.bot.send_proactive_message_to_group(target_id, content)
async def is_stream_output_supported(self) -> bool:
is_stream = False
if self.config.get('enable-stream-reply', None):
is_stream = True
return is_stream
async def create_message_card(self, message_id, event):
card_template_id = self.config['card_template_id']
incoming_message = event.source_platform_object.incoming_message
# message_id = incoming_message.message_id
card_instance, card_instance_id = await self.bot.create_and_card(card_template_id, incoming_message)
self.card_instance_id_dict[message_id] = (card_instance, card_instance_id)
return True
def register_listener( def register_listener(
self, self,
event_type: typing.Type[platform_events.Event], event_type: typing.Type[platform_events.Event],
@@ -155,15 +211,6 @@ class DingTalkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
self.bot.on_message('GroupMessage')(on_message) self.bot.on_message('GroupMessage')(on_message)
async def run_async(self): async def run_async(self):
config = self.config
self.bot = DingTalkClient(
client_id=config['client_id'],
client_secret=config['client_secret'],
robot_name=config['robot_name'],
robot_code=config['robot_code'],
markdown_card=config['markdown_card'],
logger=self.logger,
)
await self.bot.start() await self.bot.start()
async def kill(self) -> bool: async def kill(self) -> bool:

View File

@@ -46,6 +46,23 @@ spec:
type: boolean type: boolean
required: false required: false
default: true default: true
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用钉钉卡片流式回复模式
description:
en_US: If enabled, the bot will reply via DingTalk card streaming mode
zh_Hans: 如果启用,将使用钉钉卡片流式方式来回复内容
type: boolean
required: true
default: false
- name: card_template_id
label:
en_US: Card Template ID
zh_Hans: 卡片模板ID
type: string
required: true
default: "填写你的卡片template_id"
execution: execution:
python: python:
path: ./dingtalk.py path: ./dingtalk.py

View File

@@ -8,6 +8,8 @@ import base64
import uuid import uuid
import os import os
import datetime import datetime
import asyncio
from enum import Enum
import aiohttp import aiohttp
import pydantic import pydantic
@@ -17,6 +19,568 @@ import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.entities as platform_entities import langbot_plugin.api.entities.builtin.platform.entities as platform_entities
import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_logger import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_logger
from ..logger import EventLogger
# Exception definitions for the voice feature
class VoiceConnectionError(Exception):
"""语音连接基础异常"""
def __init__(self, message: str, error_code: str = None, guild_id: int = None):
super().__init__(message)
self.error_code = error_code
self.guild_id = guild_id
self.timestamp = datetime.datetime.now()
class VoicePermissionError(VoiceConnectionError):
"""语音权限异常"""
def __init__(self, message: str, missing_permissions: list = None, user_id: int = None, channel_id: int = None):
super().__init__(message, 'PERMISSION_ERROR')
self.missing_permissions = missing_permissions or []
self.user_id = user_id
self.channel_id = channel_id
class VoiceNetworkError(VoiceConnectionError):
"""语音网络异常"""
def __init__(self, message: str, retry_count: int = 0):
super().__init__(message, 'NETWORK_ERROR')
self.retry_count = retry_count
self.last_attempt = datetime.datetime.now()
class VoiceConnectionStatus(Enum):
"""语音连接状态枚举"""
IDLE = 'idle'
CONNECTING = 'connecting'
CONNECTED = 'connected'
PLAYING = 'playing'
RECONNECTING = 'reconnecting'
FAILED = 'failed'
class VoiceConnectionInfo:
"""
Voice connection info class
Stores and manages the details of a single voice connection, including
connection status, timestamps, and channel info, as a standardized data structure.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
"""
def __init__(self, guild_id: int, channel_id: int, channel_name: str = None):
"""
Initialize voice connection info
@author: @ydzat
Args:
guild_id (int): guild ID
channel_id (int): voice channel ID
channel_name (str, optional): voice channel name
"""
self.guild_id = guild_id
self.channel_id = channel_id
self.channel_name = channel_name or f'Channel-{channel_id}'
self.connected = False
self.connection_time: datetime.datetime = None
self.last_activity = datetime.datetime.now()
self.status = VoiceConnectionStatus.IDLE
self.user_count = 0
self.latency = 0.0
self.connection_health = 'unknown'
self.voice_client = None
def update_status(self, status: VoiceConnectionStatus):
"""
Update the connection status
@author: @ydzat
Args:
status (VoiceConnectionStatus): the new connection status
"""
self.status = status
self.last_activity = datetime.datetime.now()
if status == VoiceConnectionStatus.CONNECTED:
self.connected = True
if self.connection_time is None:
self.connection_time = datetime.datetime.now()
elif status in [VoiceConnectionStatus.IDLE, VoiceConnectionStatus.FAILED]:
self.connected = False
self.connection_time = None
self.voice_client = None
def to_dict(self) -> dict:
"""
Convert to a dict
@author: @ydzat
Returns:
dict: dict representation of the connection info
"""
return {
'guild_id': self.guild_id,
'channel_id': self.channel_id,
'channel_name': self.channel_name,
'connected': self.connected,
'connection_time': self.connection_time.isoformat() if self.connection_time else None,
'last_activity': self.last_activity.isoformat(),
'status': self.status.value,
'user_count': self.user_count,
'latency': self.latency,
'connection_health': self.connection_health,
}
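`update_status` centralizes the state bookkeeping: entering `CONNECTED` sets `connected` and stamps `connection_time` once, while `IDLE`/`FAILED` reset both. A reduced sketch of just that transition logic (enum trimmed to the states it touches):

```python
import datetime
import enum


class Status(enum.Enum):
    IDLE = 'idle'
    CONNECTED = 'connected'
    FAILED = 'failed'


class ConnInfo:
    """Reduced model of VoiceConnectionInfo's status bookkeeping."""
    def __init__(self):
        self.connected = False
        self.connection_time = None
        self.status = Status.IDLE

    def update_status(self, status: Status):
        self.status = status
        if status == Status.CONNECTED:
            self.connected = True
            if self.connection_time is None:
                # Stamp only on the first transition to CONNECTED.
                self.connection_time = datetime.datetime.now()
        elif status in (Status.IDLE, Status.FAILED):
            self.connected = False
            self.connection_time = None


info = ConnInfo()
info.update_status(Status.CONNECTED)

info2 = ConnInfo()
info2.update_status(Status.CONNECTED)
info2.update_status(Status.FAILED)  # failure resets the connection fields
```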
class VoiceConnectionManager:
"""
Voice connection manager
Manages voice connections across multiple guilds: establishing and dropping
connections and querying their status. A singleton ensures one global manager instance.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
"""
def __init__(self, bot: discord.Client, logger: EventLogger):
"""
Initialize the voice connection manager
@author: @ydzat
Args:
bot (discord.Client): Discord client instance
logger (EventLogger): event logger
"""
self.bot = bot
self.logger = logger
self.connections: typing.Dict[int, VoiceConnectionInfo] = {}
self._connection_lock = asyncio.Lock()
self._cleanup_task = None
self._monitoring_enabled = True
async def join_voice_channel(self, guild_id: int, channel_id: int, user_id: int = None) -> discord.VoiceClient:
"""
Join a voice channel
After validating the user's permissions and the channel state, establish a
connection to the given voice channel. Supports connection reuse and automatic reconnection.
@author: @ydzat
Args:
guild_id (int): guild ID
channel_id (int): voice channel ID
user_id (int, optional): requesting user ID, used for permission checks
Returns:
discord.VoiceClient: voice client instance
Raises:
VoicePermissionError: raised on insufficient permissions
VoiceNetworkError: raised on network failure
VoiceConnectionError: raised on other connection errors
"""
async with self._connection_lock:
try:
# Fetch the guild and channel objects
guild = self.bot.get_guild(guild_id)
if not guild:
raise VoiceConnectionError(f'无法找到服务器 {guild_id}', 'GUILD_NOT_FOUND', guild_id)
channel = guild.get_channel(channel_id)
if not channel or not isinstance(channel, discord.VoiceChannel):
raise VoiceConnectionError(f'无法找到语音频道 {channel_id}', 'CHANNEL_NOT_FOUND', guild_id)
# Verify the user is in the voice channel (when a user ID is provided)
if user_id:
await self._validate_user_in_channel(guild, channel, user_id)
# Verify the bot's permissions
await self._validate_bot_permissions(channel)
# Check for an existing connection
if guild_id in self.connections:
existing_conn = self.connections[guild_id]
if existing_conn.connected and existing_conn.voice_client:
if existing_conn.channel_id == channel_id:
# Already connected to the same channel; return the existing connection
await self.logger.info(f'复用现有语音连接: {guild.name} -> {channel.name}')
return existing_conn.voice_client
else:
# Connected to a different channel; disconnect the old one first
await self._disconnect_internal(guild_id)
# Establish the new connection
voice_client = await channel.connect()
# Update connection info
conn_info = VoiceConnectionInfo(guild_id, channel_id, channel.name)
conn_info.voice_client = voice_client
conn_info.update_status(VoiceConnectionStatus.CONNECTED)
conn_info.user_count = len(channel.members)
self.connections[guild_id] = conn_info
await self.logger.info(f'成功连接到语音频道: {guild.name} -> {channel.name}')
return voice_client
except discord.ClientException as e:
raise VoiceNetworkError(f'Discord 客户端错误: {str(e)}')
except discord.opus.OpusNotLoaded as e:
raise VoiceConnectionError(f'Opus 编码器未加载: {str(e)}', 'OPUS_NOT_LOADED', guild_id)
except Exception as e:
await self.logger.error(f'连接语音频道时发生未知错误: {str(e)}')
raise VoiceConnectionError(f'连接失败: {str(e)}', 'UNKNOWN_ERROR', guild_id)
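`join_voice_channel` reuses an existing connection when the target channel matches, tears the old one down first when it differs, and connects fresh otherwise. That decision table can be sketched as (names are illustrative, not the adapter's API; the real method also validates permissions and holds a lock):

```python
import typing


def connection_action(existing: typing.Optional[dict], channel_id: int) -> str:
    """Decide what join_voice_channel does with any existing connection."""
    if existing and existing.get('connected'):
        if existing['channel_id'] == channel_id:
            return 'reuse'       # same channel: hand back the live client
        return 'reconnect'       # different channel: disconnect, then connect
    return 'connect'             # no live connection: connect fresh


a = connection_action({'connected': True, 'channel_id': 5}, 5)
```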
async def leave_voice_channel(self, guild_id: int) -> bool:
"""
Leave the voice channel
Disconnect the guild's voice connection and clean up related resources and
state, making sure audio playback has stopped before disconnecting.
@author: @ydzat
Args:
guild_id (int): guild ID
Returns:
bool: whether the disconnect succeeded
"""
async with self._connection_lock:
return await self._disconnect_internal(guild_id)
async def _disconnect_internal(self, guild_id: int) -> bool:
"""
Internal disconnect helper
@author: @ydzat
Args:
guild_id (int): guild ID
Returns:
bool: whether the disconnect succeeded
"""
if guild_id not in self.connections:
return True
conn_info = self.connections[guild_id]
try:
if conn_info.voice_client and conn_info.voice_client.is_connected():
# 停止当前播放
if conn_info.voice_client.is_playing():
conn_info.voice_client.stop()
# 等待播放完全停止
await asyncio.sleep(0.1)
# 断开连接
await conn_info.voice_client.disconnect()
conn_info.update_status(VoiceConnectionStatus.IDLE)
del self.connections[guild_id]
await self.logger.info(f'已断开语音连接: Guild {guild_id}')
return True
except Exception as e:
await self.logger.error(f'断开语音连接时发生错误: {str(e)}')
# 即使出错也要清理连接记录
conn_info.update_status(VoiceConnectionStatus.FAILED)
if guild_id in self.connections:
del self.connections[guild_id]
return False
async def get_voice_client(self, guild_id: int) -> typing.Optional[discord.VoiceClient]:
"""
Get the voice client.
Returns the guild's voice client instance, or None if not connected.
Validates the connection and automatically cleans up stale ones.
@author: @ydzat
Args:
guild_id (int): Guild ID
Returns:
Optional[discord.VoiceClient]: Voice client instance or None
"""
if guild_id not in self.connections:
return None
conn_info = self.connections[guild_id]
# Verify the connection is still valid
if conn_info.voice_client and not conn_info.voice_client.is_connected():
# Connection is stale; clean up its state
await self._disconnect_internal(guild_id)
return None
return conn_info.voice_client if conn_info.connected else None
async def is_connected_to_voice(self, guild_id: int) -> bool:
"""
Check whether the bot is connected to a voice channel.
@author: @ydzat
Args:
guild_id (int): Guild ID
Returns:
bool: Whether connected
"""
if guild_id not in self.connections:
return False
conn_info = self.connections[guild_id]
# Check the actual connection state
if conn_info.voice_client and not conn_info.voice_client.is_connected():
# Connection is stale; clean up its state
await self._disconnect_internal(guild_id)
return False
return conn_info.connected
async def get_connection_status(self, guild_id: int) -> typing.Optional[dict]:
"""
Get connection status information.
@author: @ydzat
Args:
guild_id (int): Guild ID
Returns:
Optional[dict]: Connection status dict or None
"""
if guild_id not in self.connections:
return None
conn_info = self.connections[guild_id]
# Refresh live metrics
if conn_info.voice_client and conn_info.voice_client.is_connected():
conn_info.latency = conn_info.voice_client.latency * 1000  # convert to milliseconds
conn_info.connection_health = 'good' if conn_info.latency < 100 else 'poor'
# Refresh the channel's user count
guild = self.bot.get_guild(guild_id)
if guild:
channel = guild.get_channel(conn_info.channel_id)
if channel and isinstance(channel, discord.VoiceChannel):
conn_info.user_count = len(channel.members)
return conn_info.to_dict()
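The status refresh above converts `discord.VoiceClient.latency` (reported in seconds) to milliseconds and labels anything under 100 ms as healthy. The same classification as a standalone sketch; the `threshold_ms` parameter is an added generalization, not part of the adapter:

```python
def classify_connection_health(latency_seconds: float, threshold_ms: float = 100.0) -> tuple[float, str]:
    """Convert a voice client latency in seconds to milliseconds and label it.

    Mirrors the adapter's check: below the threshold is 'good', otherwise 'poor'.
    """
    latency_ms = latency_seconds * 1000
    return latency_ms, 'good' if latency_ms < threshold_ms else 'poor'
```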
async def list_active_connections(self) -> typing.List[dict]:
"""
List all active connections.
@author: @ydzat
Returns:
List[dict]: List of active connections
"""
active_connections = []
for guild_id, conn_info in self.connections.items():
if conn_info.connected:
status = await self.get_connection_status(guild_id)
if status:
active_connections.append(status)
return active_connections
async def get_voice_channel_info(self, guild_id: int, channel_id: int) -> typing.Optional[dict]:
"""
Get voice channel information.
@author: @ydzat
Args:
guild_id (int): Guild ID
channel_id (int): Channel ID
Returns:
Optional[dict]: Channel info dict or None
"""
guild = self.bot.get_guild(guild_id)
if not guild:
return None
channel = guild.get_channel(channel_id)
if not channel or not isinstance(channel, discord.VoiceChannel):
return None
# Collect user information
users = []
for member in channel.members:
users.append(
{'id': member.id, 'name': member.display_name, 'status': str(member.status), 'is_bot': member.bot}
)
# Collect permission information
bot_member = guild.me
permissions = channel.permissions_for(bot_member)
return {
'channel_id': channel_id,
'channel_name': channel.name,
'guild_id': guild_id,
'guild_name': guild.name,
'user_limit': channel.user_limit,
'current_users': users,
'user_count': len(users),
'bitrate': channel.bitrate,
'permissions': {
'connect': permissions.connect,
'speak': permissions.speak,
'use_voice_activation': permissions.use_voice_activation,
'priority_speaker': permissions.priority_speaker,
},
}
async def _validate_user_in_channel(self, guild: discord.Guild, channel: discord.VoiceChannel, user_id: int):
"""
Verify that the user is in the voice channel.
@author: @ydzat
Args:
guild: Discord guild object
channel: Voice channel object
user_id: User ID
Raises:
VoicePermissionError: Raised if the user is not in the channel
"""
member = guild.get_member(user_id)
if not member:
raise VoicePermissionError(f'User {user_id} not found', ['member_not_found'], user_id, channel.id)
if not member.voice or member.voice.channel != channel:
raise VoicePermissionError(
f'User {member.display_name} is not in voice channel {channel.name}',
['user_not_in_channel'],
user_id,
channel.id,
)
async def _validate_bot_permissions(self, channel: discord.VoiceChannel):
"""
Verify the bot's permissions.
@author: @ydzat
Args:
channel: Voice channel object
Raises:
VoicePermissionError: Raised when permissions are insufficient
"""
bot_member = channel.guild.me
permissions = channel.permissions_for(bot_member)
missing_permissions = []
if not permissions.connect:
missing_permissions.append('connect')
if not permissions.speak:
missing_permissions.append('speak')
if missing_permissions:
raise VoicePermissionError(
f'Bot lacks permissions in channel {channel.name}: {", ".join(missing_permissions)}',
missing_permissions,
channel_id=channel.id,
)
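`_validate_bot_permissions` only checks two flags; the accumulation logic can be isolated as a pure function. The `VoicePerms` dataclass below is an illustrative stand-in for `discord.Permissions`, carrying only the flags checked above:

```python
from dataclasses import dataclass


@dataclass
class VoicePerms:
    """Stand-in for discord.Permissions with only the two flags checked above."""
    connect: bool
    speak: bool


def missing_voice_permissions(perms: VoicePerms) -> list[str]:
    # Accumulate the names of the required permissions that are absent
    missing = []
    if not perms.connect:
        missing.append('connect')
    if not perms.speak:
        missing.append('speak')
    return missing
```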
async def cleanup_inactive_connections(self):
"""
Clean up stale connections.
Periodically checks for and removes disconnected or invalid voice
connections, releasing their resources.
@author: @ydzat
"""
cleanup_guilds = []
for guild_id, conn_info in self.connections.items():
if not conn_info.voice_client or not conn_info.voice_client.is_connected():
cleanup_guilds.append(guild_id)
for guild_id in cleanup_guilds:
await self._disconnect_internal(guild_id)
if cleanup_guilds:
await self.logger.info(f'Cleaned up {len(cleanup_guilds)} stale voice connections')
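The sweep above collects stale guild IDs first and disconnects afterwards, so the dict is never mutated while being iterated. A minimal sketch of that selection step, with plain dicts standing in for `VoiceConnectionInfo`:

```python
def find_stale_connections(connections: dict) -> list:
    """Return guild ids whose voice client is gone or reports disconnected.

    `connections` maps guild_id -> {'voice_client': object | None, 'connected': bool},
    a simplified stand-in for the manager's VoiceConnectionInfo entries.
    """
    return [
        guild_id
        for guild_id, conn in connections.items()
        if conn['voice_client'] is None or not conn['connected']
    ]
```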
async def start_monitoring(self):
"""
Start connection monitoring.
@author: @ydzat
"""
if self._cleanup_task is None and self._monitoring_enabled:
self._cleanup_task = asyncio.create_task(self._monitoring_loop())
async def stop_monitoring(self):
"""
Stop connection monitoring.
@author: @ydzat
"""
self._monitoring_enabled = False
if self._cleanup_task:
self._cleanup_task.cancel()
try:
await self._cleanup_task
except asyncio.CancelledError:
pass
self._cleanup_task = None
async def _monitoring_loop(self):
"""
Monitoring loop.
@author: @ydzat
"""
try:
while self._monitoring_enabled:
await asyncio.sleep(60)  # check once per minute
await self.cleanup_inactive_connections()
except asyncio.CancelledError:
pass
async def disconnect_all(self):
"""
Disconnect all connections.
@author: @ydzat
"""
async with self._connection_lock:
guild_ids = list(self.connections.keys())
for guild_id in guild_ids:
await self._disconnect_internal(guild_id)
await self.stop_monitoring()
class DiscordMessageConverter(abstract_platform_adapter.AbstractMessageConverter):
@@ -35,28 +599,89 @@ class DiscordMessageConverter(abstract_platform_adapter.AbstractMessageConverter
for ele in message_chain:
if isinstance(ele, platform_message.Image):
image_bytes = None
filename = f'{uuid.uuid4()}.png'  # default filename
if ele.base64:
# Handle base64-encoded images
if ele.base64.startswith('data:'):
# Extract the file type from the data URL
data_header = ele.base64.split(',')[0]
if 'jpeg' in data_header or 'jpg' in data_header:
filename = f'{uuid.uuid4()}.jpg'
elif 'gif' in data_header:
filename = f'{uuid.uuid4()}.gif'
elif 'webp' in data_header:
filename = f'{uuid.uuid4()}.webp'
# Strip the data:image/xxx;base64, prefix
base64_data = ele.base64.split(',')[1]
else:
base64_data = ele.base64
image_bytes = base64.b64decode(base64_data)
elif ele.url:
# Download the image from the URL
async with aiohttp.ClientSession() as session:
async with session.get(ele.url) as response:
image_bytes = await response.read()
# Infer the file type from the URL or the Content-Type header
content_type = response.headers.get('Content-Type', '')
if 'jpeg' in content_type or 'jpg' in content_type:
filename = f'{uuid.uuid4()}.jpg'
elif 'gif' in content_type:
filename = f'{uuid.uuid4()}.gif'
elif 'webp' in content_type:
filename = f'{uuid.uuid4()}.webp'
elif ele.url.lower().endswith(('.jpg', '.jpeg')):
filename = f'{uuid.uuid4()}.jpg'
elif ele.url.lower().endswith('.gif'):
filename = f'{uuid.uuid4()}.gif'
elif ele.url.lower().endswith('.webp'):
filename = f'{uuid.uuid4()}.webp'
elif ele.path:
# Read the image from a file path
# Make sure the path contains no null bytes
clean_path = ele.path.replace('\x00', '')
clean_path = os.path.abspath(clean_path)
if not os.path.exists(clean_path):
continue  # skip files that do not exist
try:
with open(clean_path, 'rb') as f:
image_bytes = f.read()
# Derive the filename from the path, keeping the original extension
original_filename = os.path.basename(clean_path)
if original_filename and '.' in original_filename:
# Keep the original file extension
ext = original_filename.split('.')[-1].lower()
filename = f'{uuid.uuid4()}.{ext}'
else:
# No extension; try to detect the type from the file content
if image_bytes.startswith(b'\xff\xd8\xff'):
filename = f'{uuid.uuid4()}.jpg'
elif image_bytes.startswith(b'GIF'):
filename = f'{uuid.uuid4()}.gif'
elif image_bytes.startswith(b'RIFF') and b'WEBP' in image_bytes[:20]:
filename = f'{uuid.uuid4()}.webp'
# otherwise keep the PNG default
except Exception as e:
print(f'Error reading image file {clean_path}: {e}')
continue  # skip files that fail to read
if image_bytes:
# Wrap the bytes in BytesIO to avoid filesystem path issues
import io
image_files.append(discord.File(fp=io.BytesIO(image_bytes), filename=filename))
elif isinstance(ele, platform_message.Plain):
text_string += ele.text
elif isinstance(ele, platform_message.Forward):
for node in ele.node_list:
(
node_text,
node_images,
) = await DiscordMessageConverter.yiri2target(node.message_chain)
text_string += node_text
image_files.extend(node_images)
return text_string, image_files
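The converter infers an image extension three different ways (data-URL header, Content-Type, magic bytes). The magic-byte branch can be factored into a single helper; this sketch reproduces the same checks and falls back to `.png` as the code above does:

```python
import uuid


def infer_image_filename(image_bytes: bytes) -> str:
    """Pick a random filename whose extension matches the image's magic bytes."""
    if image_bytes.startswith(b'\xff\xd8\xff'):  # JPEG SOI marker
        ext = 'jpg'
    elif image_bytes.startswith(b'GIF'):  # GIF87a / GIF89a
        ext = 'gif'
    elif image_bytes.startswith(b'RIFF') and b'WEBP' in image_bytes[:20]:  # RIFF/WEBP container
        ext = 'webp'
    else:
        ext = 'png'  # default, matching the converter's fallback
    return f'{uuid.uuid4()}.{ext}'
```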
@@ -165,11 +790,16 @@ class DiscordAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
typing.Callable[[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None],
] = {}
voice_manager: VoiceConnectionManager | None = pydantic.Field(exclude=True, default=None)
def __init__(self, config: dict, logger: abstract_platform_logger.AbstractEventLogger, **kwargs):
bot_account_id = config['client_id']
listeners = {}
# The voice connection manager is initialized later in run_async()
adapter_self = self
class MyClient(discord.Client):
@@ -196,10 +826,194 @@ class DiscordAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
bot_account_id=bot_account_id,
listeners=listeners,
bot=bot,
voice_manager=None,
**kwargs,
)
# Voice functionality methods
async def join_voice_channel(self, guild_id: int, channel_id: int, user_id: int = None) -> discord.VoiceClient:
"""
Join a voice channel.
Establishes a connection to the guild's voice channel, with optional user
permission validation and connection reuse.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
Args:
guild_id (int): Discord guild ID
channel_id (int): Voice channel ID
user_id (int, optional): Requesting user's ID, used for permission validation
Returns:
discord.VoiceClient: Voice client instance
Raises:
VoicePermissionError: Insufficient permissions
VoiceNetworkError: Network connection failure
VoiceConnectionError: Other connection errors
"""
if not self.voice_manager:
raise VoiceConnectionError('Voice manager not initialized', 'MANAGER_NOT_READY')
return await self.voice_manager.join_voice_channel(guild_id, channel_id, user_id)
async def leave_voice_channel(self, guild_id: int) -> bool:
"""
Leave a voice channel.
Disconnects the guild's voice connection and cleans up related resources.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
Args:
guild_id (int): Discord guild ID
Returns:
bool: Whether the disconnect succeeded
"""
if not self.voice_manager:
return False
return await self.voice_manager.leave_voice_channel(guild_id)
async def get_voice_client(self, guild_id: int) -> typing.Optional[discord.VoiceClient]:
"""
Get the voice client.
Returns the guild's voice client instance, used for audio playback control.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
Args:
guild_id (int): Discord guild ID
Returns:
Optional[discord.VoiceClient]: Voice client instance or None
"""
if not self.voice_manager:
return None
return await self.voice_manager.get_voice_client(guild_id)
async def is_connected_to_voice(self, guild_id: int) -> bool:
"""
Check the voice connection state.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
Args:
guild_id (int): Discord guild ID
Returns:
bool: Whether connected to a voice channel
"""
if not self.voice_manager:
return False
return await self.voice_manager.is_connected_to_voice(guild_id)
async def get_voice_connection_status(self, guild_id: int) -> typing.Optional[dict]:
"""
Get detailed voice connection status.
Returns a status dict including connection time, latency, user count, etc.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
Args:
guild_id (int): Discord guild ID
Returns:
Optional[dict]: Connection status info or None
"""
if not self.voice_manager:
return None
return await self.voice_manager.get_connection_status(guild_id)
async def list_active_voice_connections(self) -> typing.List[dict]:
"""
List all active voice connections.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
Returns:
List[dict]: List of active voice connections
"""
if not self.voice_manager:
return []
return await self.voice_manager.list_active_connections()
async def get_voice_channel_info(self, guild_id: int, channel_id: int) -> typing.Optional[dict]:
"""
Get detailed voice channel information.
Includes the channel name, user list, permission info, etc.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
Args:
guild_id (int): Discord guild ID
channel_id (int): Voice channel ID
Returns:
Optional[dict]: Channel info dict or None
"""
if not self.voice_manager:
return None
return await self.voice_manager.get_voice_channel_info(guild_id, channel_id)
async def cleanup_voice_connections(self):
"""
Clean up stale voice connections.
Manually triggers voice connection cleanup, removing disconnected or
invalid connections.
@author: @ydzat
@version: 1.0
@since: 2025-07-04
"""
if self.voice_manager:
await self.voice_manager.cleanup_inactive_connections()
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
msg_to_send, image_files = await self.message_converter.yiri2target(message)
try:
# Get the channel object
channel = self.bot.get_channel(int(target_id))
if channel is None:
# Not in the local cache; try fetching it from the API
channel = await self.bot.fetch_channel(int(target_id))
args = {
'content': msg_to_send,
}
if len(image_files) > 0:
args['files'] = image_files
await channel.send(**args)
except Exception as e:
await self.logger.error(f'Discord send_message failed: {e}')
raise e
async def reply_message(
self,
@@ -247,9 +1061,31 @@ class DiscordAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
self.listeners.pop(event_type)
async def run_async(self):
"""
Start the Discord adapter.
Initializes the voice manager and starts the Discord client connection.
@author: @ydzat (modified)
"""
async with self.bot:
# Initialize the voice manager
self.voice_manager = VoiceConnectionManager(self.bot, self.logger)
await self.voice_manager.start_monitoring()
await self.logger.info('Discord adapter voice support enabled')
await self.bot.start(self.config['token'], reconnect=True)
async def kill(self) -> bool:
"""
Shut down the Discord adapter.
Cleans up voice connections and closes the Discord client.
@author: @ydzat (modified)
"""
if self.voice_manager:
await self.voice_manager.disconnect_all()
await self.bot.close()
return True


@@ -18,6 +18,7 @@ import lark_oapi.ws.exception
import quart
from lark_oapi.api.im.v1 import *
import pydantic
from lark_oapi.api.cardkit.v1 import *
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
import langbot_plugin.api.entities.builtin.platform.message as platform_message
@@ -320,10 +321,17 @@ class LarkEventConverter(abstract_platform_adapter.AbstractEventConverter):
)
CARD_ID_CACHE_SIZE = 500
CARD_ID_CACHE_MAX_LIFETIME = 20 * 60  # 20 minutes
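`CARD_ID_CACHE_SIZE` and `CARD_ID_CACHE_MAX_LIFETIME` suggest the message-id→card-id map is meant to be bounded, though the diff stores entries in a plain dict. A sketch of a size- and age-bounded cache these constants could drive; the class name and API below are illustrative, not part of the adapter:

```python
import time
from collections import OrderedDict


class CardIdCache:
    """Size- and age-bounded mapping from message id to card id."""

    def __init__(self, max_size: int = 500, max_lifetime: float = 20 * 60):
        self._data: 'OrderedDict[str, tuple[str, float]]' = OrderedDict()
        self.max_size = max_size
        self.max_lifetime = max_lifetime

    def put(self, message_id: str, card_id: str) -> None:
        self._expire()
        # Trim to capacity before inserting, dropping the oldest entries first
        while len(self._data) >= self.max_size:
            self._data.popitem(last=False)
        self._data[message_id] = (card_id, time.monotonic())

    def get(self, message_id: str):
        self._expire()
        entry = self._data.get(message_id)
        return entry[0] if entry else None

    def _expire(self) -> None:
        # Drop entries older than max_lifetime, oldest first
        now = time.monotonic()
        while self._data:
            oldest_key = next(iter(self._data))
            if now - self._data[oldest_key][1] > self.max_lifetime:
                del self._data[oldest_key]
            else:
                break
```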
class LarkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
bot: lark_oapi.ws.Client = pydantic.Field(exclude=True)
api_client: lark_oapi.Client = pydantic.Field(exclude=True)
bot_account_id: str  # used in the pipeline to tell whether an @ targets this bot; identified directly by bot_name
lark_tenant_key: str = pydantic.Field(exclude=True, default='')  # Lark tenant key
message_converter: LarkMessageConverter = LarkMessageConverter()
event_converter: LarkEventConverter = LarkEventConverter()
@@ -334,7 +342,11 @@ class LarkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
quart_app: quart.Quart = pydantic.Field(exclude=True)
card_id_dict: dict[str, str]  # maps message id to card id, so later messages can be routed to the right card
seq: int  # identifies message order within card messages, using seq directly as the marker
def __init__(self, config: dict, logger: abstract_platform_logger.AbstractEventLogger, **kwargs):
quart_app = quart.Quart(__name__)
@quart_app.route('/lark/callback', methods=['POST'])
@@ -343,7 +355,7 @@ class LarkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
data = await quart.request.json
if 'encrypt' in data:
cipher = AESCipher(config['encrypt-key'])
data = cipher.decrypt_string(data['encrypt'])
data = json.loads(data)
@@ -398,16 +410,256 @@ class LarkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
super().__init__(
config=config,
logger=logger,
lark_tenant_key=config.get('lark_tenant_key', ''),
card_id_dict={},
seq=1,
listeners={},
quart_app=quart_app,
bot=bot,
api_client=api_client,
bot_account_id=bot_account_id,
**kwargs,
)
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
pass
async def is_stream_output_supported(self) -> bool:
return bool(self.config.get('enable-stream-reply', False))
async def create_card_id(self, message_id):
try:
# self.logger.debug('Lark supports streaming output; creating a card...')
card_data = {
'schema': '2.0',
'config': {
'update_multi': True,
'streaming_mode': True,
'streaming_config': {
'print_step': {'default': 1},
'print_frequency_ms': {'default': 70},
'print_strategy': 'fast',
},
},
'body': {
'direction': 'vertical',
'padding': '12px 12px 12px 12px',
'elements': [
{
'tag': 'div',
'text': {
'tag': 'plain_text',
'content': 'LangBot',
'text_size': 'normal',
'text_align': 'left',
'text_color': 'default',
},
'icon': {
'tag': 'custom_icon',
'img_key': 'img_v3_02p3_05c65d5d-9bad-440a-a2fb-c89571bfd5bg',
},
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
'element_id': 'streaming_txt',
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'column_set',
'horizontal_spacing': '8px',
'horizontal_align': 'left',
'columns': [
{
'tag': 'column',
'width': 'weighted',
'elements': [
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
],
'padding': '0px 0px 0px 0px',
'direction': 'vertical',
'horizontal_spacing': '8px',
'vertical_spacing': '2px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
'weight': 1,
}
],
'margin': '0px 0px 0px 0px',
},
{'tag': 'hr', 'margin': '0px 0px 0px 0px'},
{
'tag': 'column_set',
'horizontal_spacing': '12px',
'horizontal_align': 'right',
'columns': [
{
'tag': 'column',
'width': 'weighted',
'elements': [
{
'tag': 'markdown',
'content': '<font color="grey-600">以上内容由 AI 生成,仅供参考。更多详细、准确信息可点击引用链接查看</font>',
'text_align': 'left',
'text_size': 'notation',
'margin': '4px 0px 0px 0px',
'icon': {
'tag': 'standard_icon',
'token': 'robot_outlined',
'color': 'grey',
},
}
],
'padding': '0px 0px 0px 0px',
'direction': 'vertical',
'horizontal_spacing': '8px',
'vertical_spacing': '8px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
'weight': 1,
},
{
'tag': 'column',
'width': '20px',
'elements': [
{
'tag': 'button',
'text': {'tag': 'plain_text', 'content': ''},
'type': 'text',
'width': 'fill',
'size': 'medium',
'icon': {'tag': 'standard_icon', 'token': 'thumbsup_outlined'},
'hover_tips': {'tag': 'plain_text', 'content': '有帮助'},
'margin': '0px 0px 0px 0px',
}
],
'padding': '0px 0px 0px 0px',
'direction': 'vertical',
'horizontal_spacing': '8px',
'vertical_spacing': '8px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'column',
'width': '30px',
'elements': [
{
'tag': 'button',
'text': {'tag': 'plain_text', 'content': ''},
'type': 'text',
'width': 'default',
'size': 'medium',
'icon': {'tag': 'standard_icon', 'token': 'thumbdown_outlined'},
'hover_tips': {'tag': 'plain_text', 'content': '无帮助'},
'margin': '0px 0px 0px 0px',
}
],
'padding': '0px 0px 0px 0px',
'vertical_spacing': '8px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
},
],
'margin': '0px 0px 4px 0px',
},
],
},
}
# Create the card template with 'delay' (deferred printing) or 'fast' (real-time printing); a nicer message template can be customized here
request: CreateCardRequest = (
CreateCardRequest.builder()
.request_body(CreateCardRequestBody.builder().type('card_json').data(json.dumps(card_data)).build())
.build()
)
# Send the request
response: CreateCardResponse = self.api_client.cardkit.v1.card.create(request)
# Handle a failed response
if not response.success():
raise Exception(
f'client.cardkit.v1.card.create failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
await self.logger.debug(f'Lark card created, card ID: {response.data.card_id}')
self.card_id_dict[message_id] = response.data.card_id
card_id = response.data.card_id
return card_id
except Exception as e:
await self.logger.error(f'Failed to create Lark card: {e}')
async def create_message_card(self, message_id, event) -> bool:
"""
Create a card message.
Card messages are used because regular messages can only be updated a
limited number of times, which a streaming LLM reply may exceed; Lark cards
have no such limit (the free API quota still applies).
"""
# message_id = event.message_chain.message_id
card_id = await self.create_card_id(message_id)
content = {
'type': 'card',
'data': {'card_id': card_id, 'template_variable': {'content': 'Thinking...'}},
}  # the message template sent when a message arrives; template variables can be added, see the Lark API docs
request: ReplyMessageRequest = (
ReplyMessageRequest.builder()
.message_id(event.message_chain.message_id)
.request_body(
ReplyMessageRequestBody.builder().content(json.dumps(content)).msg_type('interactive').build()
)
.build()
)
# Send the request
response: ReplyMessageResponse = await self.api_client.im.v1.message.areply(request)
# Handle a failed response
if not response.success():
raise Exception(
f'client.im.v1.message.reply failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
return True
async def reply_message(
self,
message_source: platform_events.MessageEvent,
@@ -446,6 +698,62 @@ class LarkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
f'client.im.v1.message.reply failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
"""
Reply by updating the card message instead of sending a new one.
"""
message_id = bot_message.resp_message_id
msg_seq = bot_message.msg_sequence
if msg_seq % 8 == 0 or is_final:
lark_message = await self.message_converter.yiri2target(message, self.api_client)
text_message = ''
for ele in lark_message[0]:
if ele['tag'] == 'text':
text_message += ele['text']
elif ele['tag'] == 'md':
text_message += ele['text']
request: ContentCardElementRequest = (
ContentCardElementRequest.builder()
.card_id(self.card_id_dict[message_id])
.element_id('streaming_txt')
.request_body(
ContentCardElementRequestBody.builder()
.content(text_message)
.sequence(msg_seq)
.build()
)
.build()
)
if is_final and bot_message.tool_calls is None:
self.card_id_dict.pop(message_id)  # clean up the card once the reply is finished
# Send the request
response: ContentCardElementResponse = self.api_client.cardkit.v1.card_element.content(request)
# Handle a failed response
if not response.success():
raise Exception(
f'client.cardkit.v1.card_element.content failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
return
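The `msg_seq % 8 == 0 or is_final` guard throttles card updates so only every eighth chunk, plus the final one, hits the CardKit API. The decision in isolation; the `every` parameter is an added generalization for illustration:

```python
def should_flush(msg_seq: int, is_final: bool, every: int = 8) -> bool:
    """Return True when a streaming chunk should be pushed to the card.

    Flushes every `every`-th chunk and always on the final chunk, keeping
    the update rate under the platform's limits.
    """
    return is_final or msg_seq % every == 0
```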
async def is_muted(self, group_id: int) -> bool:
return False
@@ -495,4 +803,9 @@ class LarkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
)
async def kill(self) -> bool:
# We must disconnect here; otherwise the old connection keeps running and
# incoming Lark messages get routed to a randomly chosen connection.
# On disconnect, lark.ws.Client's _receive_message_loop logs an error
# ("receive message loop exit") and then reconnects, so set
# _auto_reconnect=False to prevent the reconnect.
self.bot._auto_reconnect = False
await self.bot._disconnect()
return False


@@ -65,6 +65,16 @@ spec:
type: string
required: true
default: ""
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用飞书流式回复模式
description:
en_US: If enabled, the bot will reply in Lark's streaming (card) mode
zh_Hans: 如果启用,将使用飞书流式方式来回复内容
type: boolean
required: true
default: false
execution:
python:
path: ./lark.py


@@ -1,5 +1,6 @@
from __future__ import annotations
import telegram
import telegram.ext
from telegram import Update
@@ -136,6 +137,12 @@ class TelegramAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
message_converter: TelegramMessageConverter = TelegramMessageConverter()
event_converter: TelegramEventConverter = TelegramEventConverter()
config: dict
msg_stream_id: dict  # maps a streaming reply's source message id to the first sent message id, used to decide which message to edit
seq: int  # identifies message order, using seq directly as the marker
listeners: typing.Dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None],
@@ -149,6 +156,7 @@ class TelegramAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
try:
lb_event = await self.event_converter.target2yiri(update, self.bot, self.bot_account_id)
await self.listeners[type(lb_event)](lb_event, self)
await self.is_stream_output_supported()
except Exception:
await self.logger.error(f'Error in telegram callback: {traceback.format_exc()}')
@@ -158,6 +166,8 @@ class TelegramAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
super().__init__(
config=config,
logger=logger,
msg_stream_id={},
seq=1,
bot=bot,
application=application,
bot_account_id='',
@@ -195,6 +205,70 @@ class TelegramAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
await self.bot.send_message(**args)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
msg_seq = bot_message.msg_sequence
if (msg_seq - 1) % 8 == 0 or is_final:
assert isinstance(message_source.source_platform_object, Update)
components = await TelegramMessageConverter.yiri2target(message, self.bot)
args = {}
message_id = message_source.source_platform_object.message.id
if quote_origin:
args['reply_to_message_id'] = message_source.source_platform_object.message.id
component = components[0]
if component['type'] == 'text':
if self.config['markdown_card'] is True:
content = telegramify_markdown.markdownify(
content=component['text'],
)
else:
content = component['text']
if message_id not in self.msg_stream_id:
# First chunk of this reply: send a new message and remember its id
args = {
'chat_id': message_source.source_platform_object.effective_chat.id,
'text': content,
}
if self.config['markdown_card'] is True:
args['parse_mode'] = 'MarkdownV2'
send_msg = await self.bot.send_message(**args)
self.msg_stream_id[message_id] = send_msg.message_id
else:
# A message already exists: edit it in place
args = {
'message_id': self.msg_stream_id[message_id],
'chat_id': message_source.source_platform_object.effective_chat.id,
'text': content,
}
if self.config['markdown_card'] is True:
args['parse_mode'] = 'MarkdownV2'
await self.bot.edit_message_text(**args)
if is_final and bot_message.tool_calls is None:
self.msg_stream_id.pop(message_id)  # drop the mapping once the reply is finished
async def is_stream_output_supported(self) -> bool:
return bool(self.config.get('enable-stream-reply', False))
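The `msg_stream_id` bookkeeping above has three states per incoming message: the first chunk sends a new Telegram message and records its id, later chunks edit that message, and the final chunk drops the mapping. The same state machine with the Bot API calls abstracted into callables (names are illustrative):

```python
class StreamEditTracker:
    """Track which previously sent message each streaming reply should edit."""

    def __init__(self):
        self.msg_stream_id: dict = {}

    def on_chunk(self, source_id, send, edit, text: str, is_final: bool):
        if source_id not in self.msg_stream_id:
            # First chunk: send a new message and remember its id
            self.msg_stream_id[source_id] = send(text)
        else:
            # Later chunks: edit the previously sent message in place
            edit(self.msg_stream_id[source_id], text)
        sent_id = self.msg_stream_id[source_id]
        if is_final:
            # Reply finished: forget the mapping
            self.msg_stream_id.pop(source_id)
        return sent_id
```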
async def is_muted(self, group_id: int) -> bool:
return False
@@ -221,8 +295,12 @@ class TelegramAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
self.bot_account_id = (await self.bot.get_me()).username
await self.application.updater.start_polling(allowed_updates=Update.ALL_TYPES)
await self.application.start()
await self.logger.info('Telegram adapter running')
async def kill(self) -> bool:
if self.application.running:
await self.application.stop()
if self.application.updater:
await self.application.updater.stop()
await self.logger.info('Telegram adapter stopped')
return True


@@ -25,6 +25,16 @@ spec:
type: boolean
required: false
default: true
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用电报流式回复模式
description:
en_US: If enabled, the bot will reply in Telegram streaming mode
zh_Hans: 如果启用,将使用电报流式方式来回复内容
type: boolean
required: true
default: false
execution:
python:
path: ./telegram.py
View File
@@ -9,8 +9,8 @@ import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.entities as platform_entities
import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_logger
from ...core import app

logger = logging.getLogger(__name__)
@@ -21,17 +21,20 @@ class WebChatMessage(pydantic.BaseModel):
content: str
message_chain: list[dict]
timestamp: str
is_final: bool = False

class WebChatSession:
id: str
message_lists: dict[str, list[WebChatMessage]] = {}
resp_waiters: dict[int, asyncio.Future[WebChatMessage]]
resp_queues: dict[int, asyncio.Queue[WebChatMessage]]

def __init__(self, id: str):
self.id = id
self.message_lists = {}
self.resp_waiters = {}
self.resp_queues = {}

def get_message_list(self, pipeline_uuid: str) -> list[WebChatMessage]:
if pipeline_uuid not in self.message_lists:
@@ -46,20 +49,21 @@ class WebChatAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
webchat_person_session: WebChatSession = pydantic.Field(exclude=True, default_factory=WebChatSession)
webchat_group_session: WebChatSession = pydantic.Field(exclude=True, default_factory=WebChatSession)

listeners: dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None],
] = pydantic.Field(default_factory=dict, exclude=True)

is_stream: bool = pydantic.Field(exclude=True)

debug_messages: dict[str, list[dict]] = pydantic.Field(default_factory=dict, exclude=True)

ap: app.Application = pydantic.Field(exclude=True)

def __init__(self, config: dict, logger: abstract_platform_logger.AbstractEventLogger, **kwargs):
super().__init__(
config=config,
logger=logger,
**kwargs,
)
self.webchat_person_session = WebChatSession(id='webchatperson')
@@ -112,12 +116,53 @@ class WebChatAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
# notify waiter
if isinstance(message_source, platform_events.FriendMessage):
await self.webchat_person_session.resp_queues[message_source.message_chain.message_id].put(message_data)
elif isinstance(message_source, platform_events.GroupMessage):
await self.webchat_group_session.resp_queues[message_source.message_chain.message_id].put(message_data)

return message_data.model_dump()
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
) -> dict:
"""回复消息块(流式)"""
message_data = WebChatMessage(
id=-1,
role='assistant',
content=str(message),
message_chain=[component.__dict__ for component in message],
timestamp=datetime.now().isoformat(),
)

# 将消息块推入 send_webchat_message 中创建的队列
session = (
self.webchat_group_session
if isinstance(message_source, platform_events.GroupMessage)
else self.webchat_person_session
)
queue = session.resp_queues[message_source.message_chain.message_id]
if is_final and bot_message.tool_calls is None:
message_data.is_final = True
await queue.put(message_data)
return message_data.model_dump()
async def is_stream_output_supported(self) -> bool:
return self.is_stream
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
@@ -157,8 +202,13 @@ class WebChatAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
await self.logger.info('WebChat调试适配器正在停止')

async def send_webchat_message(
self,
pipeline_uuid: str,
session_type: str,
message_chain_obj: typing.List[dict],
is_stream: bool = False,
) -> dict:
"""发送调试消息到流水线"""
self.is_stream = is_stream
if session_type == 'person':
use_session = self.webchat_person_session
@@ -169,6 +219,9 @@ class WebChatAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
message_id = len(use_session.get_message_list(pipeline_uuid)) + 1

use_session.resp_queues[message_id] = asyncio.Queue()
logger.debug(f'Initialized queue for message_id: {message_id}')

use_session.get_message_list(pipeline_uuid).append(
WebChatMessage(
id=message_id,
@@ -202,21 +255,46 @@ class WebChatAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
self.ap.platform_mgr.webchat_proxy_bot.bot_entity.use_pipeline_uuid = pipeline_uuid

# trigger pipeline
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)

if is_stream:
queue = use_session.resp_queues[message_id]
msg_id = len(use_session.get_message_list(pipeline_uuid)) + 1
while True:
resp_message = await queue.get()
resp_message.id = msg_id
if resp_message.is_final:
use_session.get_message_list(pipeline_uuid).append(resp_message)
yield resp_message.model_dump()
break
yield resp_message.model_dump()
use_session.resp_queues.pop(message_id)
else:  # non-stream
msg_id = len(use_session.get_message_list(pipeline_uuid)) + 1
queue = use_session.resp_queues[message_id]
resp_message = await queue.get()
use_session.get_message_list(pipeline_uuid).append(resp_message)
resp_message.id = msg_id
resp_message.is_final = True
yield resp_message.model_dump()
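The stream branch above drains a per-message `asyncio.Queue` until a chunk arrives with `is_final` set, while `reply_message_chunk` pushes chunks in from the pipeline side. A reduced sketch of that producer/consumer handshake — `Chunk`, `producer`, and `consume_stream` are illustrative names, not the adapter's API:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Chunk:
    content: str
    is_final: bool = False


async def producer(queue: asyncio.Queue):
    # pipeline side: push partial chunks, then a final marker chunk
    for text in ('He', 'Hello', 'Hello!'):
        await queue.put(Chunk(text))
    await queue.put(Chunk('Hello!', is_final=True))


async def consume_stream(queue: asyncio.Queue) -> list[str]:
    # adapter side: drain the queue until is_final, as send_webchat_message does
    received = []
    while True:
        chunk = await queue.get()
        received.append(chunk.content)
        if chunk.is_final:
            break
    return received


async def demo() -> list[str]:
    queue = asyncio.Queue()
    asyncio.create_task(producer(queue))
    return await consume_stream(queue)


chunks = asyncio.run(demo())
```

The final chunk carries the complete text, so consumers can treat the last yield as the authoritative message and discard the partials.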
def get_webchat_messages(self, pipeline_uuid: str, session_type: str) -> list[dict]:
"""获取调试消息历史"""
View File
@@ -9,7 +9,8 @@ metadata:
en_US: "WebChat adapter for pipeline debugging"
zh_Hans: "用于流水线调试的网页聊天适配器"
icon: ""
spec:
config: []
execution:
python:
path: "webchat.py"
View File
@@ -11,13 +11,11 @@ import asyncio
import traceback
import re
import base64
import copy
import threading

import quart

from ..logger import EventLogger
import xml.etree.ElementTree as ET
from typing import Optional, Tuple

@@ -27,21 +25,23 @@ import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.entities as platform_entities
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_logger
class WeChatPadMessageConverter(abstract_platform_adapter.AbstractMessageConverter):
def __init__(self, config: dict, logger: abstract_platform_logger.AbstractEventLogger):
self.config = config
self.bot = WeChatPadClient(self.config['wechatpad_url'], self.config['token'])
self.logger = logger

@staticmethod
async def yiri2target(message_chain: platform_message.MessageChain) -> list[dict]:
content_list = []

for component in message_chain:
if isinstance(component, platform_message.AtAll):
content_list.append({'type': 'at', 'target': 'all'})
elif isinstance(component, platform_message.At):
content_list.append({'type': 'at', 'target': component.target})
elif isinstance(component, platform_message.Plain):
content_list.append({'type': 'text', 'content': component.text})
@@ -75,20 +75,34 @@ class WeChatPadMessageConverter(abstract_platform_adapter.AbstractMessageConvert
return content_list

async def target2yiri(
self,
message: dict,
bot_account_id: str,
) -> platform_message.MessageChain:
"""外部消息转平台消息"""
# 数据预处理
message_list = []
bot_wxid = self.config['wxid']
ats_bot = False  # 是否被@
content = message['content']['str']
content_no_preifx = content  # 群消息则去掉前缀
is_group_message = self._is_group_message(message)
if is_group_message:
ats_bot = self._ats_bot(message, bot_account_id)
self.logger.info(f'ats_bot: {ats_bot}; bot_account_id: {bot_account_id}; bot_wxid: {bot_wxid}')
if '@所有人' in content:
message_list.append(platform_message.AtAll())
if ats_bot:
message_list.append(platform_message.At(target=bot_account_id))
# 解析@信息并生成At组件
at_targets = self._extract_at_targets(message)
for target_id in at_targets:
if target_id != bot_wxid:  # 避免重复添加机器人的At
message_list.append(platform_message.At(target=target_id))
content_no_preifx, _ = self._extract_content_and_sender(content)

msg_type = message['msg_type']
@@ -226,8 +240,8 @@ class WeChatPadMessageConverter(abstract_platform_adapter.AbstractMessageConvert
# self.logger.info("_handler_compound_quote", ET.tostring(xml_data, encoding='unicode'))
appmsg_data = xml_data.find('.//appmsg')
quote_data = ''  # 引用原文
user_data = ''  # 用户消息
sender_id = xml_data.findtext('.//fromusername')  # 发送方:单聊用户/群member

@@ -235,13 +249,10 @@ class WeChatPadMessageConverter(abstract_platform_adapter.AbstractMessageConvert
if appmsg_data:
user_data = appmsg_data.findtext('.//title') or ''
quote_data = appmsg_data.find('.//refermsg').findtext('.//content')
message_list.append(platform_message.WeChatAppMsg(app_msg=ET.tostring(appmsg_data, encoding='unicode')))

if quote_data:
quote_data_message_list = platform_message.MessageChain()
@@ -397,6 +408,23 @@ class WeChatPadMessageConverter(abstract_platform_adapter.AbstractMessageConvert
finally: finally:
return ats_bot return ats_bot
# 提取一下at的wxid列表
def _extract_at_targets(self, message: dict) -> list[str]:
"""从消息中提取被@用户的ID列表"""
at_targets = []
try:
# 从msg_source中解析atuserlist
msg_source = message.get('msg_source', '') or ''
if len(msg_source) > 0:
msg_source_data = ET.fromstring(msg_source)
at_user_list = msg_source_data.findtext('atuserlist') or ''
if at_user_list:
# atuserlist格式通常是逗号分隔的用户ID列表
at_targets = [user_id.strip() for user_id in at_user_list.split(',') if user_id.strip()]
except Exception as e:
self.logger.error(f'_extract_at_targets got except: {e}')
return at_targets
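`_extract_at_targets` relies on `msg_source` being an XML fragment whose `atuserlist` element holds a comma-separated list of wxids. A standalone check of that parsing, outside the converter class — the sample XML below is made up for illustration:

```python
import xml.etree.ElementTree as ET


def extract_at_targets(msg_source: str) -> list[str]:
    """Parse the comma-separated <atuserlist> out of a msg_source XML fragment."""
    if not msg_source:
        return []
    root = ET.fromstring(msg_source)
    at_user_list = root.findtext('atuserlist') or ''
    # tolerate stray spaces and trailing commas
    return [uid.strip() for uid in at_user_list.split(',') if uid.strip()]


sample = '<msgsource><atuserlist>wxid_abc, wxid_def,</atuserlist></msgsource>'
targets = extract_at_targets(sample)
```

An empty or missing `atuserlist` yields an empty list, which matches the adapter's fallback behavior.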
# 提取一下content前面的sender_id, 和去掉前缀的内容
def _extract_content_and_sender(self, raw_content: str) -> Tuple[str, Optional[str]]:
try:
@@ -420,16 +448,20 @@ class WeChatPadMessageConverter(abstract_platform_adapter.AbstractMessageConvert
class WeChatPadEventConverter(abstract_platform_adapter.AbstractEventConverter): class WeChatPadEventConverter(abstract_platform_adapter.AbstractEventConverter):
def __init__(self, config: dict): def __init__(self, config: dict, logger: logging.Logger):
self.config = config self.config = config
self.message_converter = WeChatPadMessageConverter(config) self.message_converter = WeChatPadMessageConverter(config, logger)
self.logger = logging.getLogger('WeChatPadEventConverter') self.logger = logger
@staticmethod @staticmethod
async def yiri2target(event: platform_events.MessageEvent) -> dict: async def yiri2target(event: platform_events.MessageEvent) -> dict:
pass pass
async def target2yiri(self, event: dict, bot_account_id: str) -> platform_events.MessageEvent: async def target2yiri(
self,
event: dict,
bot_account_id: str,
) -> platform_events.MessageEvent:
# 排除公众号以及微信团队消息 # 排除公众号以及微信团队消息
if ( if (
event['from_user_name']['str'].startswith('gh_') event['from_user_name']['str'].startswith('gh_')
@@ -489,8 +521,6 @@ class WeChatPadAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter)
config: dict
logger: EventLogger

message_converter: WeChatPadMessageConverter

@@ -501,14 +531,13 @@ class WeChatPadAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter)
typing.Callable[[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None],
] = {}

def __init__(self, config: dict, logger: EventLogger):
self.config = config
self.logger = logger
self.quart_app = quart.Quart(__name__)
self.message_converter = WeChatPadMessageConverter(config, logger)
self.event_converter = WeChatPadEventConverter(config, logger)

async def ws_message(self, data):
"""处理接收到的消息"""
@@ -541,19 +570,22 @@ class WeChatPadAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter)
for msg in content_list:
# 文本消息处理@
if msg['type'] == 'text' and at_targets:
if 'all' in at_targets:
msg['content'] = f'@所有人 {msg["content"]}'
else:
at_nick_name_list = []
for member in member_info:
if member['user_name'] in at_targets:
at_nick_name_list.append(f'@{member["nick_name"]}')
msg['content'] = f'{" ".join(at_nick_name_list)} {msg["content"]}'

# 统一消息派发
handler_map = {
'text': lambda msg: self.bot.send_text_message(
to_wxid=target_id, message=msg['content'], ats=['notify@all'] if 'all' in at_targets else at_targets
),
'image': lambda msg: self.bot.send_image_message(
to_wxid=target_id, img_url=msg['image'], ats=['notify@all'] if 'all' in at_targets else at_targets
),
'WeChatEmoji': lambda msg: self.bot.send_emoji_message(
to_wxid=target_id, emoji_md5=msg['emoji_md5'], emoji_size=msg['emoji_size']
@@ -575,7 +607,7 @@ class WeChatPadAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter)
if handler := handler_map.get(msg['type']):
handler(msg)
else:
self.logger.warning(f'未处理的消息类型: {msg["type"]}')
continue
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):

@@ -650,7 +682,7 @@ class WeChatPadAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter)
# url = login_data['Data']["QrCodeUrl"]
profile = self.bot.get_profile()
self.bot_account_id = profile['Data']['userInfo']['nickName']['str']
self.config['wxid'] = profile['Data']['userInfo']['userName']['str']
@@ -670,18 +702,18 @@ class WeChatPadAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter)
# 这里需要确保ws_message是同步的或者使用asyncio.run调用异步方法
asyncio.run(self.ws_message(data))
except json.JSONDecodeError:
self.logger.error(f'Non-JSON message: {message[:100]}...')

def on_error(ws, error):
self.logger.error(f'WebSocket error: {str(error)[:200]}')

def on_close(ws, close_status_code, close_msg):
self.logger.info('WebSocket closed, reconnecting...')
time.sleep(5)
connect_websocket_sync()  # 自动重连

def on_open(ws):
self.logger.info('WebSocket connected successfully!')

ws = websocket.WebSocketApp(
uri, on_message=on_message, on_error=on_error, on_close=on_close, on_open=on_open
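The `on_close` callback above reconnects by sleeping five seconds and calling the connector again, which recurses on every drop. An iterative variant of the same retry pattern — generic, not WeChatPad-specific; `run_with_reconnect` and `flaky` are invented names for illustration:

```python
import time


def run_with_reconnect(connect_once, delay: float = 5.0, max_attempts: int = 3):
    """Call connect_once until it succeeds, sleeping `delay` between failures."""
    attempts = 0
    while attempts < max_attempts:
        attempts += 1
        try:
            return connect_once()
        except ConnectionError:
            time.sleep(delay)
    raise ConnectionError('gave up reconnecting')


calls = []


def flaky():
    # fails twice, then connects
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError
    return 'connected'


result = run_with_reconnect(flaky, delay=0)
```

Unlike the recursive callback, the loop keeps a bounded stack and makes the retry budget explicit.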
View File
@@ -144,9 +144,9 @@ class WecomCSAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
super().__init__(
config=config,
logger=logger,
bot_account_id='',
listeners={},
bot=bot,
)

async def reply_message(
async def reply_message( async def reply_message(
View File
@@ -17,7 +17,7 @@ class LLMModelInfo(pydantic.BaseModel):
token_mgr: token.TokenManager
requester: requester.ProviderAPIRequester
tool_call_supported: typing.Optional[bool] = False
View File
@@ -20,13 +20,16 @@ class ModelManager:
llm_models: list[requester.RuntimeLLMModel]
embedding_models: list[requester.RuntimeEmbeddingModel]

requester_components: list[engine.Component]
requester_dict: dict[str, type[requester.ProviderAPIRequester]]  # cache

def __init__(self, ap: app.Application):
self.ap = ap
self.llm_models = []
self.embedding_models = []
self.requester_components = []
self.requester_dict = {}

@@ -34,7 +37,7 @@ class ModelManager:
self.requester_components = self.ap.discover.get_components_by_kind('LLMAPIRequester')

# forge requester class dict
requester_dict: dict[str, type[requester.ProviderAPIRequester]] = {}
for component in self.requester_components:
requester_dict[component.metadata.name] = component.get_python_component_class()
@@ -47,13 +50,11 @@ class ModelManager:
self.ap.logger.info('Loading models from db...')

self.llm_models = []
self.embedding_models = []

# llm models
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.LLMModel))
llm_models = result.all()
for llm_model in llm_models:
try:
await self.load_llm_model(llm_model)

@@ -62,11 +63,17 @@ class ModelManager:
except Exception as e:
self.ap.logger.error(f'Failed to load model {llm_model.uuid}: {e}\n{traceback.format_exc()}')

# embedding models
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.EmbeddingModel))
embedding_models = result.all()
for embedding_model in embedding_models:
await self.load_embedding_model(embedding_model)
async def init_runtime_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""初始化运行时 LLM 模型"""
if isinstance(model_info, sqlalchemy.Row):
model_info = persistence_model.LLMModel(**model_info._mapping)
elif isinstance(model_info, dict):

@@ -90,31 +97,85 @@ class ModelManager:
return runtime_llm_model
async def init_runtime_embedding_model(
self,
model_info: persistence_model.EmbeddingModel | sqlalchemy.Row[persistence_model.EmbeddingModel] | dict,
):
"""初始化运行时 Embedding 模型"""
if isinstance(model_info, sqlalchemy.Row):
model_info = persistence_model.EmbeddingModel(**model_info._mapping)
elif isinstance(model_info, dict):
model_info = persistence_model.EmbeddingModel(**model_info)
requester_inst = self.requester_dict[model_info.requester](ap=self.ap, config=model_info.requester_config)
await requester_inst.initialize()
runtime_embedding_model = requester.RuntimeEmbeddingModel(
model_entity=model_info,
token_mgr=token.TokenManager(
name=model_info.uuid,
tokens=model_info.api_keys,
),
requester=requester_inst,
)
return runtime_embedding_model
async def load_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""加载 LLM 模型"""
runtime_llm_model = await self.init_runtime_llm_model(model_info)
self.llm_models.append(runtime_llm_model)
async def load_embedding_model(
self,
model_info: persistence_model.EmbeddingModel | sqlalchemy.Row[persistence_model.EmbeddingModel] | dict,
):
"""加载 Embedding 模型"""
runtime_embedding_model = await self.init_runtime_embedding_model(model_info)
self.embedding_models.append(runtime_embedding_model)
async def get_model_by_uuid(self, uuid: str) -> requester.RuntimeLLMModel:
"""通过uuid获取 LLM 模型"""
for model in self.llm_models:
if model.model_entity.uuid == uuid:
return model
raise ValueError(f'LLM model {uuid} not found')
async def get_embedding_model_by_uuid(self, uuid: str) -> requester.RuntimeEmbeddingModel:
"""通过uuid获取 Embedding 模型"""
for model in self.embedding_models:
if model.model_entity.uuid == uuid:
return model
raise ValueError(f'Embedding model {uuid} not found')
async def remove_llm_model(self, model_uuid: str):
"""移除 LLM 模型"""
for model in self.llm_models:
if model.model_entity.uuid == model_uuid:
self.llm_models.remove(model)
return

async def remove_embedding_model(self, model_uuid: str):
"""移除 Embedding 模型"""
for model in self.embedding_models:
if model.model_entity.uuid == model_uuid:
self.embedding_models.remove(model)
return

def get_available_requesters_info(self, model_type: str) -> list[dict]:
"""获取所有可用的请求器"""
if model_type != '':
return [
component.to_plain_dict()
for component in self.requester_components
if model_type in component.spec['support_type']
]
else:
return [component.to_plain_dict() for component in self.requester_components]
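`get_available_requesters_info` filters requester components by the `support_type` list in each component's spec, with an empty `model_type` meaning no filter. With components reduced to plain dicts, the same filter looks like this — the sample component specs are invented for illustration:

```python
def available_requesters(components: list[dict], model_type: str) -> list[dict]:
    """Return components supporting model_type; empty string means no filter."""
    if model_type != '':
        return [c for c in components if model_type in c['spec']['support_type']]
    return list(components)


components = [
    {'name': 'openai-chat-completions', 'spec': {'support_type': ['llm', 'text-embedding']}},
    {'name': 'ollama-chat', 'spec': {'support_type': ['llm']}},
]
embedding_only = available_requesters(components, 'text-embedding')
```

This is why the manager can serve both the LLM and embedding model pickers from one component registry.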
def get_available_requester_info_by_name(self, name: str) -> dict | None:
"""通过名称获取请求器信息"""
View File
@@ -20,22 +20,45 @@ class RuntimeLLMModel:
token_mgr: token.TokenManager
"""api key管理器"""

requester: ProviderAPIRequester
"""请求器实例"""

def __init__(
self,
model_entity: persistence_model.LLMModel,
token_mgr: token.TokenManager,
requester: ProviderAPIRequester,
):
self.model_entity = model_entity
self.token_mgr = token_mgr
self.requester = requester


class RuntimeEmbeddingModel:
"""运行时 Embedding 模型"""
model_entity: persistence_model.EmbeddingModel
"""模型数据"""
token_mgr: token.TokenManager
"""api key管理器"""
requester: ProviderAPIRequester
"""请求器实例"""
def __init__(
self,
model_entity: persistence_model.EmbeddingModel,
token_mgr: token.TokenManager,
requester: ProviderAPIRequester,
):
self.model_entity = model_entity
self.token_mgr = token_mgr
self.requester = requester
class ProviderAPIRequester(metaclass=abc.ABCMeta):
"""Provider API请求器"""
name: str = None
@@ -61,6 +84,7 @@ class LLMAPIRequester(metaclass=abc.ABCMeta):
messages: typing.List[provider_message.Message],
funcs: typing.List[resource_tool.LLMTool] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> provider_message.Message:
"""调用API

@@ -69,8 +93,50 @@ class LLMAPIRequester(metaclass=abc.ABCMeta):
messages (typing.List[llm_entities.Message]): 消息对象列表
funcs (typing.List[tools_entities.LLMFunction], optional): 使用的工具函数列表. Defaults to None.
extra_args (dict[str, typing.Any], optional): 额外的参数. Defaults to {}.
remove_think (bool, optional): 是否移除思考中的消息. Defaults to False.

Returns:
llm_entities.Message: 返回消息对象
"""
pass
+    async def invoke_llm_stream(
+        self,
+        query: pipeline_query.Query,
+        model: RuntimeLLMModel,
+        messages: typing.List[provider_message.Message],
+        funcs: typing.List[resource_tool.LLMTool] = None,
+        extra_args: dict[str, typing.Any] = {},
+        remove_think: bool = False,
+    ) -> provider_message.MessageChunk:
+        """Call the API
+
+        Args:
+            model (RuntimeLLMModel): Model information to use
+            messages (typing.List[provider_message.Message]): List of message objects
+            funcs (typing.List[resource_tool.LLMTool], optional): Tool functions to use. Defaults to None.
+            extra_args (dict[str, typing.Any], optional): Extra arguments. Defaults to {}.
+            remove_think (bool, optional): Whether to strip thinking content from the reply. Defaults to False.
+
+        Returns:
+            typing.AsyncGenerator[provider_message.MessageChunk]: Yielded message chunks
+        """
+        pass
+
+    async def invoke_embedding(
+        self,
+        model: RuntimeEmbeddingModel,
+        input_text: typing.List[str],
+        extra_args: dict[str, typing.Any] = {},
+    ) -> typing.List[typing.List[float]]:
+        """Call the Embedding API
+
+        Args:
+            model (RuntimeEmbeddingModel): Model information to use
+            input_text (typing.List[str]): Input texts
+            extra_args (dict[str, typing.Any], optional): Extra arguments. Defaults to {}.
+
+        Returns:
+            typing.List[typing.List[float]]: The returned embedding vectors
+        """
+        pass
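A condensed sketch of the contract these additions establish: one `ProviderAPIRequester` base class now serves both chat and embedding calls. `EchoRequester` is a made-up provider used purely for illustration, and the real interface also takes `query`, `model`, and `funcs` arguments that are omitted here:

```python
import abc
import asyncio


class ProviderAPIRequester(metaclass=abc.ABCMeta):
    """Simplified stand-in for the abstract requester above."""

    @abc.abstractmethod
    async def invoke_llm(self, messages: list[dict], remove_think: bool = False) -> dict: ...

    @abc.abstractmethod
    async def invoke_embedding(self, input_text: list[str]) -> list[list[float]]: ...


class EchoRequester(ProviderAPIRequester):
    """Hypothetical provider: echoes input instead of calling a real API."""

    async def invoke_llm(self, messages, remove_think=False):
        # Echo the last user message back as the assistant reply.
        return {'role': 'assistant', 'content': messages[-1]['content']}

    async def invoke_embedding(self, input_text):
        # One dummy vector per input string (its length, as a float).
        return [[float(len(t))] for t in input_text]


reply = asyncio.run(EchoRequester().invoke_llm([{'role': 'user', 'content': 'hi'}]))
vectors = asyncio.run(EchoRequester().invoke_embedding(['a', 'abc']))
```

The point of the design is that a concrete provider overrides only the invoke methods it supports, and its manifest's `support_type` list advertises which ones those are.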


@@ -7,7 +7,7 @@ from . import chatcmpl
 class AI302ChatCompletions(chatcmpl.OpenAIChatCompletions):
-    """302 AI ChatCompletion API requester"""
+    """302.AI ChatCompletion API requester"""
 
     client: openai.AsyncClient


@@ -3,8 +3,8 @@ kind: LLMAPIRequester
 metadata:
   name: 302-ai-chat-completions
   label:
-    en_US: 302 AI
-    zh_Hans: 302 AI
+    en_US: 302.AI
+    zh_Hans: 302.AI
   icon: 302ai.png
 spec:
   config:
@@ -22,6 +22,9 @@ spec:
       type: integer
       required: true
       default: 120
+  support_type:
+    - llm
+    - text-embedding
 execution:
   python:
     path: ./302aichatcmpl.py


@@ -15,13 +15,13 @@ import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
 import langbot_plugin.api.entities.builtin.provider.message as provider_message
 
 
-class AnthropicMessages(requester.LLMAPIRequester):
+class AnthropicMessages(requester.ProviderAPIRequester):
     """Anthropic Messages API requester"""
 
     client: anthropic.AsyncAnthropic
 
     default_config: dict[str, typing.Any] = {
-        'base_url': 'https://api.anthropic.com/v1',
+        'base_url': 'https://api.anthropic.com',
         'timeout': 120,
     }
 
@@ -44,6 +44,7 @@ class AnthropicMessages(requester.LLMAPIRequester):
         self.client = anthropic.AsyncAnthropic(
             api_key='',
             http_client=httpx_client,
+            base_url=self.requester_cfg['base_url'],
         )
     async def invoke_llm(
@@ -53,6 +54,7 @@ class AnthropicMessages(requester.LLMAPIRequester):
         messages: typing.List[provider_message.Message],
         funcs: typing.List[resource_tool.LLMTool] = None,
         extra_args: dict[str, typing.Any] = {},
+        remove_think: bool = False,
     ) -> provider_message.Message:
         self.client.api_key = model.token_mgr.get_token()
 
@@ -89,7 +91,8 @@ class AnthropicMessages(requester.LLMAPIRequester):
                         {
                             'type': 'tool_result',
                             'tool_use_id': tool_call_id,
-                            'content': m.content,
+                            'is_error': False,
+                            'content': [{'type': 'text', 'text': m.content}],
                         }
                     ],
                 }
@@ -133,6 +136,9 @@ class AnthropicMessages(requester.LLMAPIRequester):
         args['messages'] = req_messages
 
+        if 'thinking' in args:
+            args['thinking'] = {'type': 'enabled', 'budget_tokens': 10000}
+
         if funcs:
             tools = await self.ap.tool_mgr.generate_tools_for_anthropic(funcs)
 
@@ -140,19 +146,17 @@ class AnthropicMessages(requester.LLMAPIRequester):
                 args['tools'] = tools
 
         try:
-            # print(json.dumps(args, indent=4, ensure_ascii=False))
             resp = await self.client.messages.create(**args)
 
             args = {
                 'content': '',
                 'role': resp.role,
             }
 
             assert type(resp) is anthropic.types.message.Message
 
             for block in resp.content:
-                if block.type == 'thinking':
-                    args['content'] = '<think>' + block.thinking + '</think>\n' + args['content']
+                if not remove_think and block.type == 'thinking':
+                    args['content'] = '<think>\n' + block.thinking + '\n</think>\n' + args['content']
                 elif block.type == 'text':
                     args['content'] += block.text
                 elif block.type == 'tool_use':
@@ -176,3 +180,191 @@ class AnthropicMessages(requester.LLMAPIRequester):
                 raise errors.RequesterError(f'模型无效: {e.message}')
             else:
                 raise errors.RequesterError(f'请求地址无效: {e.message}')
+    async def invoke_llm_stream(
+        self,
+        query: pipeline_query.Query,
+        model: requester.RuntimeLLMModel,
+        messages: typing.List[provider_message.Message],
+        funcs: typing.List[resource_tool.LLMTool] = None,
+        extra_args: dict[str, typing.Any] = {},
+        remove_think: bool = False,
+    ) -> provider_message.Message:
+        self.client.api_key = model.token_mgr.get_token()
+
+        args = extra_args.copy()
+        args['model'] = model.model_entity.name
+        args['stream'] = True
+
+        # process messages
+
+        # system
+        system_role_message = None
+
+        for i, m in enumerate(messages):
+            if m.role == 'system':
+                system_role_message = m
+                break
+
+        if system_role_message:
+            messages.pop(i)
+
+        if isinstance(system_role_message, provider_message.Message) and isinstance(system_role_message.content, str):
+            args['system'] = system_role_message.content
+
+        req_messages = []
+        for m in messages:
+            if m.role == 'tool':
+                tool_call_id = m.tool_call_id
+
+                req_messages.append(
+                    {
+                        'role': 'user',
+                        'content': [
+                            {
+                                'type': 'tool_result',
+                                'tool_use_id': tool_call_id,
+                                'is_error': False,  # hard-coded to False for now
+                                'content': [
+                                    {'type': 'text', 'text': m.content}
+                                ],  # wrapped in a list for the multi-result case; other content types are possible, only text is handled for now
+                            }
+                        ],
+                    }
+                )
+                continue
+
+            msg_dict = m.dict(exclude_none=True)
+
+            if isinstance(m.content, str) and m.content.strip() != '':
+                msg_dict['content'] = [{'type': 'text', 'text': m.content}]
+            elif isinstance(m.content, list):
+                for i, ce in enumerate(m.content):
+                    if ce.type == 'image_base64':
+                        image_b64, image_format = await image.extract_b64_and_format(ce.image_base64)
+
+                        alter_image_ele = {
+                            'type': 'image',
+                            'source': {
+                                'type': 'base64',
+                                'media_type': f'image/{image_format}',
+                                'data': image_b64,
+                            },
+                        }
+                        msg_dict['content'][i] = alter_image_ele
+
+            if isinstance(msg_dict['content'], str) and msg_dict['content'] == '':
+                msg_dict['content'] = []  # an empty string sometimes sneaks in here and leaves content as a str
+
+            if m.tool_calls:
+                for tool_call in m.tool_calls:
+                    msg_dict['content'].append(
+                        {
+                            'type': 'tool_use',
+                            'id': tool_call.id,
+                            'name': tool_call.function.name,
+                            'input': json.loads(tool_call.function.arguments),
+                        }
+                    )
+
+                del msg_dict['tool_calls']
+
+            req_messages.append(msg_dict)
+
+        if 'thinking' in args:
+            args['thinking'] = {'type': 'enabled', 'budget_tokens': 10000}
+
+        args['messages'] = req_messages
+
+        if funcs:
+            tools = await self.ap.tool_mgr.generate_tools_for_anthropic(funcs)
+
+            if tools:
+                args['tools'] = tools
+
+        try:
+            role = 'assistant'  # default role
+            # chunk_idx = 0
+            think_started = False
+            think_ended = False
+            finish_reason = False
+            content = ''
+            tool_name = ''
+            tool_id = ''
+            async for chunk in await self.client.messages.create(**args):
+                tool_call = {'id': None, 'function': {'name': None, 'arguments': None}, 'type': 'function'}
+                if isinstance(
+                    chunk, anthropic.types.raw_content_block_start_event.RawContentBlockStartEvent
+                ):  # marks the start of a content block
+                    if chunk.content_block.type == 'tool_use':
+                        if chunk.content_block.name is not None:
+                            tool_name = chunk.content_block.name
+                        if chunk.content_block.id is not None:
+                            tool_id = chunk.content_block.id
+                        tool_call['function']['name'] = tool_name
+                        tool_call['function']['arguments'] = ''
+                        tool_call['id'] = tool_id
+                    if not remove_think:
+                        if chunk.content_block.type == 'thinking' and not remove_think:
+                            think_started = True
+                        elif chunk.content_block.type == 'text' and chunk.index != 0 and not remove_think:
+                            think_ended = True
+                    continue
+                elif isinstance(chunk, anthropic.types.raw_content_block_delta_event.RawContentBlockDeltaEvent):
+                    if chunk.delta.type == 'thinking_delta':
+                        if think_started:
+                            think_started = False
+                            content = '<think>\n' + chunk.delta.thinking
+                        elif remove_think:
+                            continue
+                        else:
+                            content = chunk.delta.thinking
+                    elif chunk.delta.type == 'text_delta':
+                        if think_ended:
+                            think_ended = False
+                            content = '\n</think>\n' + chunk.delta.text
+                        else:
+                            content = chunk.delta.text
+                    elif chunk.delta.type == 'input_json_delta':
+                        tool_call['function']['arguments'] = chunk.delta.partial_json
+                        tool_call['function']['name'] = tool_name
+                        tool_call['id'] = tool_id
+                elif isinstance(chunk, anthropic.types.raw_content_block_stop_event.RawContentBlockStopEvent):
+                    continue  # marks the end of a raw_content_block
+                elif isinstance(chunk, anthropic.types.raw_message_delta_event.RawMessageDeltaEvent):
+                    if chunk.delta.stop_reason == 'end_turn':
+                        finish_reason = True
+                elif isinstance(chunk, anthropic.types.raw_message_stop_event.RawMessageStopEvent):
+                    continue  # appears to mark the very end of the stream
+                else:
+                    # print(chunk)
+                    self.ap.logger.debug(f'anthropic chunk: {chunk}')
+                    continue
+
+                args = {
+                    'content': content,
+                    'role': role,
+                    'is_final': finish_reason,
+                    'tool_calls': None if tool_call['id'] is None else [tool_call],
+                }
+                # if chunk_idx == 0:
+                #     chunk_idx += 1
+                #     continue
+
+                # assert type(chunk) is anthropic.types.message.Chunk
+                yield provider_message.MessageChunk(**args)
+            # return llm_entities.Message(**args)
+        except anthropic.AuthenticationError as e:
+            raise errors.RequesterError(f'api-key 无效: {e.message}')
+        except anthropic.BadRequestError as e:
+            raise errors.RequesterError(str(e.message))
+        except anthropic.NotFoundError as e:
+            if 'model: ' in str(e):
+                raise errors.RequesterError(f'模型无效: {e.message}')
+            else:
+                raise errors.RequesterError(f'请求地址无效: {e.message}')
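To show the intent of the streaming path and the `remove_think` flag, a small self-contained consumer sketch — the chunk dicts and `fake_stream` are simplifications for illustration, not the real `MessageChunk` type or Anthropic event stream:

```python
import asyncio
import re


async def fake_stream():
    # Simulated chunks: a thinking block followed by the visible reply.
    for piece in ['<think>\n', 'pondering', '\n</think>\n', 'Hello', ', world']:
        yield {'content': piece, 'is_final': False}
    yield {'content': '', 'is_final': True}


async def collect(stream, remove_think: bool = False) -> str:
    """Concatenate chunk contents, optionally dropping <think> blocks."""
    text = ''
    async for chunk in stream:
        text += chunk['content']
    if remove_think:
        # Mirror remove_think=True: strip any <think>...</think> span.
        text = re.sub(r'<think>.*?</think>\n?', '', text, flags=re.DOTALL)
    return text


full = asyncio.run(collect(fake_stream()))
clean = asyncio.run(collect(fake_stream(), remove_think=True))
```

Note that the real implementation above strips thinking content chunk-by-chunk as it streams; this sketch strips it after the fact only to keep the example short.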


@@ -14,7 +14,7 @@ spec:
         zh_Hans: 基础 URL
       type: string
       required: true
-      default: "https://api.anthropic.com/v1"
+      default: "https://api.anthropic.com"
     - name: timeout
       label:
         en_US: Timeout
@@ -22,6 +22,8 @@ spec:
       type: integer
       required: true
       default: 120
+  support_type:
+    - llm
 execution:
   python:
     path: ./anthropicmsgs.py
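The new `support_type` key in these manifests is what the requester filtering keys on. A hypothetical helper showing the check, with a manifest-shaped dict mirroring the Anthropic YAML above (illustrative only, not LangBot's actual loader):

```python
# Dict form of the manifest above; the real file is YAML loaded at startup.
anthropic_manifest = {
    'kind': 'LLMAPIRequester',
    'metadata': {'name': 'anthropic-messages'},
    'spec': {'support_type': ['llm']},
}


def supports(manifest: dict, model_type: str) -> bool:
    """Return True if the requester declares support for the given model type."""
    return model_type in manifest['spec'].get('support_type', [])
```

So Anthropic appears only in the chat-model picker, while a manifest listing both `llm` and `text-embedding` (like the 302.AI one) appears in both.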

Some files were not shown because too many files have changed in this diff.