Compare commits

82 Commits

Author SHA1 Message Date
Junyan Qin
28ce986a8c chore: release v4.1.0 2025-07-20 12:32:06 +08:00
Junyan Qin
489b145606 doc: update README 2025-07-20 12:30:41 +08:00
Junyan Qin (Chin)
5e92bffaa6 Merge pull request #1581 from langbot-app/RockChinQ-patch-1
Update README.md
2025-07-19 23:09:53 +08:00
Junyan Qin (Chin)
277d1b0e30 feat: rag engine (#1492)
* feat: add embeddings model management (#1461)

* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding moels

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add knowledge page

* feat: add api for uploading files

* kb

* delete ap

* feat: add functions

* fix: modify rag database

* feat: add embeddings model management (#1461)

* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding moels

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add knowledge page

* feat: add api for uploading files

* feat: add sidebar for rag and related i18n

* feat: add knowledge base page

* feat: basic entities of kb

* feat: complete support_type for 302ai and compshare requester

* perf: format

* perf: ruff check --fix

* feat: basic definition

* feat: rag fe framework

* perf: en comments

* feat: modify the rag.py

* perf: definitions

* fix: success method bad params

* fix: bugs

* fix: bug

* feat: kb dialog action

* fix: create knwoledge base issue

* fix: kb get api format

* fix: kb get api not contains model uuid

* fix: api bug

* fix: the fucking logger

* feat(fe): component for available apis

* fix: embbeding and chunking

* fix: ensure File.status is set correctly after storing data to avoid null values

* fix: add functions for deleting files

* feat(fe): file uploading

* perf: adjust ui

* fix: file be deleted twice

* feat(fe): complete kb ui

* fix: ui bugs

* fix: no longer require Query for invoking embedding

* feat: add embedder

* fix: delete embedding models file

* chore: stash

* chore: stash

* feat(rag): make embedding and retrieving available

* feat(rag): all APIs ok

* fix: delete utils

* feat: rag pipeline backend

* feat: combine kb with pipeline

* fix: .md file parse failed

* perf: debug output

* feat: add functions for frontend of kb

* perf(rag): ui and related apis

* perf(rag): use badge show doc status

* perf: open kb detail dialog after creating

* fix: linter error

* deps: remove sentence-transformers

* perf: description of default pipeline

* feat: add html and epub

* chore: no longer supports epub

---------

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: WangCham <651122857@qq.com>
2025-07-19 22:06:11 +08:00
Junyan Qin
13f4ed8d2c chore: no longer supports epub 2025-07-19 21:56:50 +08:00
WangCham
91cb5ca36c feat: add html and epub 2025-07-19 19:57:57 +08:00
TwperBody
c34d54a6cb Fixed a bug where some Windows systems failed to recognize spaces. (#1577)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx
2025-07-19 16:48:15 +08:00
TwperBody
2d1737da1f Optimize plugin display (#1578)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update package.json
2025-07-19 16:47:34 +08:00
Junyan Qin
a1b8b9d47b perf: description of default pipeline 2025-07-18 18:57:42 +08:00
Junyan Qin
8df14bf9d9 deps: remove sentence-transformers 2025-07-18 18:46:07 +08:00
Junyan Qin
c98d265a1e fix: linter error 2025-07-18 17:52:24 +08:00
Junyan Qin
4e6782a6b7 perf: open kb detail dialog after creating 2025-07-18 16:52:54 +08:00
Junyan Qin
5541e9e6d0 perf(rag): use badge show doc status 2025-07-18 16:38:55 +08:00
gaord
878ab0ef6b fix(wechatpad): @所有人的情况下,修复@机器人消息未被正确解析的问题 (#1575) 2025-07-18 12:52:30 +08:00
Junyan Qin
b61bd36b14 perf(rag): ui and related apis 2025-07-18 00:45:38 +08:00
Junyan Qin (Chin)
bb672d8f46 Merge branch 'master' into feat/rag 2025-07-18 00:45:24 +08:00
WangCham
ba1a26543b Merge branch 'feat/rag' of github.com:RockChinQ/LangBot into feat/rag 2025-07-17 23:57:52 +08:00
WangCham
cb868ee7b2 feat: add functions for frontend of kb 2025-07-17 23:52:46 +08:00
Junyan Qin
5dd5cb12ad perf: debug output 2025-07-17 23:34:35 +08:00
Junyan Qin
2dfa83ff22 fix: .md file parse failed 2025-07-17 23:22:20 +08:00
Junyan Qin
27bb4e1253 feat: combine kb with pipeline 2025-07-17 23:15:13 +08:00
WangCham
45afdbdfbb feat: rag pipeline backend 2025-07-17 15:05:11 +08:00
WangCham
4cbbe9e000 fix: delete utils 2025-07-16 23:25:12 +08:00
Junyan Qin
333ec346ef feat(rag): all APIs ok 2025-07-16 22:15:03 +08:00
Junyan Qin
2f2db4d445 feat(rag): make embedding and retrieving available 2025-07-16 21:17:18 +08:00
Junyan Qin
fdc79b8d77 chore: release v4.0.9 2025-07-16 11:39:15 +08:00
Junyan Qin
f244795e57 fix: rename to '302.AI' 2025-07-16 11:36:57 +08:00
Junyan Qin
5a2aa19d0f feat(aiocqhttp): no longer download files for now 2025-07-16 11:36:01 +08:00
Junyan Qin
f731115805 chore: stash 2025-07-16 11:31:55 +08:00
Junyan Qin
67bc065ccd chore: stash 2025-07-15 22:09:10 +08:00
Junyan Qin
81eb92646f doc: perf README_JP 2025-07-14 11:22:59 +08:00
Junyan Qin
019a9317e9 doc: perf README 2025-07-14 11:17:58 +08:00
WangCham
199164fc4b fix: delete embedding models file 2025-07-13 23:12:08 +08:00
WangCham
c9c26213df Merge branch 'feat/rag' of github.com:RockChinQ/LangBot into feat/rag 2025-07-13 23:09:41 +08:00
WangCham
b7c57104c4 feat: add embedder 2025-07-13 23:04:03 +08:00
TwperBody
858cfd8d5a Update package.json (#1570)
Compatible with the creation of environment variables in the Windows environment
2025-07-12 22:31:30 +08:00
Junyan Qin
cbe297dc59 fix: no longer require Query for invoking embedding 2025-07-12 21:23:19 +08:00
Junyan Qin
de76fed25a fix: ui bugs 2025-07-12 18:12:53 +08:00
Junyan Qin
a10e61735d feat(fe): complete kb ui 2025-07-12 18:00:54 +08:00
Junyan Qin
1ef0193028 fix: file be deleted twice 2025-07-12 17:47:53 +08:00
Junyan Qin
1e85d02ae4 perf: adjust ui 2025-07-12 17:29:39 +08:00
Junyan Qin
d78a329aa9 feat(fe): file uploading 2025-07-12 17:15:07 +08:00
Junyan Qin
bfdf238db5 chore: use new social image 2025-07-12 11:44:08 +08:00
WangCham
234b61e2f8 fix: add functions for deleting files 2025-07-12 01:37:44 +08:00
WangCham
9f43097361 fix: ensure File.status is set correctly after storing data to avoid null values 2025-07-12 01:21:02 +08:00
WangCham
f395cac893 fix: embbeding and chunking 2025-07-12 01:07:49 +08:00
Junyan Qin
fe122281fd feat(fe): component for available apis 2025-07-11 21:40:42 +08:00
Junyan Qin
6d788cadbc fix: the fucking logger 2025-07-11 21:37:31 +08:00
Junyan Qin
a79a22a74d fix: api bug 2025-07-11 21:30:47 +08:00
Junyan Qin
2ed3b68790 fix: kb get api not contains model uuid 2025-07-11 20:58:51 +08:00
Junyan Qin
bd9331ce62 fix: kb get api format 2025-07-11 20:57:09 +08:00
WangCham
14c161b733 fix: create knwoledge base issue 2025-07-11 18:14:03 +08:00
Junyan Qin
815cdf8b4a feat: kb dialog action 2025-07-11 17:22:43 +08:00
Junyan Qin
7d5503dab2 fix: bug 2025-07-11 16:49:55 +08:00
Junyan Qin
9ba1ad5bd3 fix: bugs 2025-07-11 16:38:08 +08:00
Junyan Qin
367d04d0f0 fix: success method bad params 2025-07-11 11:28:43 +08:00
Junyan Qin
75c3ddde19 perf: definitions 2025-07-10 16:45:59 +08:00
Junyan Qin
c6e77e42be chore: switch some comments to en 2025-07-10 11:09:33 +08:00
Junyan Qin
4d0a39eb65 chore: switch comments to en 2025-07-10 11:01:16 +08:00
WangCham
ac03a2dceb feat: modify the rag.py 2025-07-09 22:09:46 +08:00
Junyan Qin
56248c350f chore: repo transferred 2025-07-07 19:00:55 +08:00
gaord
244aaf6e20 feat: 聊天的@用户id内容需要保留 (#1564)
* converters could use the application logger

* keep @targets in message for some plugins may need it to their functionality

* fix:form wxid in config

fix:传参问题,可以直接从config中拿到wxid

---------

Co-authored-by: fdc310 <82008029+fdc310@users.noreply.github.com>
2025-07-07 10:28:12 +08:00
Junyan Qin
cd25340826 perf: en comments 2025-07-06 16:08:02 +08:00
Junyan Qin
ebd8e014c6 feat: rag fe framework 2025-07-06 15:52:53 +08:00
Junyan Qin
bef0d73e83 feat: basic definition 2025-07-06 10:25:28 +08:00
Junyan Qin
8d28ace252 perf: ruff check --fix 2025-07-05 21:56:54 +08:00
Junyan Qin
39c062f73e perf: format 2025-07-05 21:56:17 +08:00
Junyan Qin
0e5c9e19e1 feat: complete support_type for 302ai and compshare requester 2025-07-05 21:03:14 +08:00
Junyan Qin
c5b62b6ba3 Merge remote-tracking branch 'wangcham/feat/rag' into feat/rag 2025-07-05 20:16:37 +08:00
Junyan Qin
bbf583ddb5 feat: basic entities of kb 2025-07-05 20:07:27 +08:00
Junyan Qin
22ef1a399e feat: add knowledge base page 2025-07-05 20:07:27 +08:00
Junyan Qin
0733f8878f feat: add sidebar for rag and related i18n 2025-07-05 20:07:27 +08:00
Junyan Qin
f36a61dbb2 feat: add api for uploading files 2025-07-05 20:07:15 +08:00
Junyan Qin
6d8936bd74 feat: add knowledge page 2025-07-05 20:07:15 +08:00
devin-ai-integration[bot]
d2b93b3296 feat: add embeddings model management (#1461)
* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding moels

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>
2025-07-05 20:07:15 +08:00
WangCham
552fee9bac fix: modify rag database 2025-07-05 18:58:17 +08:00
WangCham
34fe8b324d feat: add functions 2025-07-05 18:58:16 +08:00
WangCham
c4671fbf1c delete ap 2025-07-05 18:58:16 +08:00
WangCham
4bcc06c955 kb 2025-07-05 18:58:16 +08:00
Junyan Qin
348f6d9eaa feat: add api for uploading files 2025-07-05 18:57:24 +08:00
Junyan Qin
157ffdc34c feat: add knowledge page 2025-07-05 18:57:24 +08:00
devin-ai-integration[bot]
c81d5a1a49 feat: add embeddings model management (#1461)
* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding moels

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>
2025-07-05 18:57:23 +08:00
175 changed files with 5730 additions and 1065 deletions

@@ -9,7 +9,7 @@
*请在方括号间写`x`以打勾 / Please tick the box with `x`*
- [ ] 阅读仓库[贡献指引](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)了吗? / Have you read the [contribution guide](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)?
- [ ] 阅读仓库[贡献指引](https://github.com/langbot-app/LangBot/blob/master/CONTRIBUTING.md)了吗? / Have you read the [contribution guide](https://github.com/langbot-app/LangBot/blob/master/CONTRIBUTING.md)?
- [ ] 与项目所有者沟通过了吗? / Have you communicated with the project maintainer?
- [ ] 我确定已自行测试所作的更改,确保功能符合预期。 / I have tested the changes and ensured they work as expected.

.gitignore vendored
@@ -42,4 +42,5 @@ botpy.log*
test.py
/web_ui
.venv/
uv.lock
uv.lock
/test

@@ -1,32 +1,25 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
<img src="https://docs.langbot.app/social_zh.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
简体中文 / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
<a href="https://langbot.app">项目主页</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文档</a>
<a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">插件介绍</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
<div align="center">
😎高稳定、🧩支持扩展、🦄多模态 - 大模型原生即时通信机器人平台🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
简体中文 / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
</div>
@@ -34,7 +27,8 @@
## ✨ 特性
- 💬 大模型对话、Agent支持多种大模型适配群聊和私聊具有多轮对话、工具调用、多模态能力并深度适配 [Dify](https://dify.ai)。目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 💬 大模型对话、Agent支持多种大模型适配群聊和私聊具有多轮对话、工具调用、多模态能力自带 RAG知识库实现并深度适配 [Dify](https://dify.ai)。
- 🤖 多平台支持:目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
@@ -44,7 +38,7 @@
#### Docker Compose 部署
```bash
git clone https://github.com/RockChinQ/LangBot
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
@@ -149,10 +143,10 @@ docker compose up -d
## 😘 社区贡献
感谢以下[代码贡献者](https://github.com/RockChinQ/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
感谢以下[代码贡献者](https://github.com/langbot-app/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 保持更新

@@ -1,30 +1,21 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
<img src="https://docs.langbot.app/social_en.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[简体中文](README.md) / English / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
<a href="https://langbot.app">Home</a>
<a href="https://docs.langbot.app/en/insight/guide.html">Deployment</a>
<a href="https://docs.langbot.app/en/plugin/plugin-intro.html">Plugin</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
<div align="center">
😎High Stability, 🧩Extension Supported, 🦄Multi-modal - LLM Native Instant Messaging Bot Platform🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[简体中文](README.md) / English / [日本語](README_JP.md) / (PR for your language)
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
</div>
@@ -32,7 +23,8 @@
## ✨ Features
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Deeply integrates with [Dify](https://dify.ai). Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Built-in RAG (knowledge base) implementation, and deeply integrates with [Dify](https://dify.ai).
- 🤖 Multi-platform Support: Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 [New] Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
@@ -42,7 +34,7 @@
#### Docker Compose Deployment
```bash
git clone https://github.com/RockChinQ/LangBot
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
@@ -132,10 +124,10 @@ Directly use the released version to run, see the [Manual Deployment](https://do
## 🤝 Community Contribution
Thank you for the following [code contributors](https://github.com/RockChinQ/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
Thank you for the following [code contributors](https://github.com/langbot-app/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 Stay Ahead

@@ -1,29 +1,21 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
<img src="https://docs.langbot.app/social_en.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[简体中文](README.md) / [English](README_EN.md) / 日本語 / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
<a href="https://langbot.app">ホーム</a>
<a href="https://docs.langbot.app/en/insight/guide.html">デプロイ</a>
<a href="https://docs.langbot.app/en/plugin/plugin-intro.html">プラグイン</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">プラグインの提出</a>
<div align="center">
😎高い安定性、🧩拡張サポート、🦄マルチモーダル - LLMネイティブインスタントメッセージングボットプラットフォーム🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[简体中文](README_CN.md) / [English](README.md) / [日本語](README_JP.md) / (PR for your language)
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">プラグインの提出</a>
</div>
@@ -31,7 +23,8 @@
## ✨ 機能
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル機能をサポート[Dify](https://dify.ai) と深く統合。現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル機能をサポート、RAG知識ベースを組み込み、[Dify](https://dify.ai) と深く統合。
- 🤖 多プラットフォーム対応: 現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。適配 Anthropic [MCP プロトコル](https://modelcontextprotocol.io/);豊富なエコシステム、現在数百のプラグインが存在。
- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
@@ -41,7 +34,7 @@
#### Docker Compose デプロイ
```bash
git clone https://github.com/RockChinQ/LangBot
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
@@ -131,10 +124,10 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
## 🤝 コミュニティ貢献
LangBot への貢献に対して、以下の [コード貢献者](https://github.com/RockChinQ/LangBot/graphs/contributors) とコミュニティの他のメンバーに感謝します。
LangBot への貢献に対して、以下の [コード貢献者](https://github.com/langbot-app/LangBot/graphs/contributors) とコミュニティの他のメンバーに感謝します。
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 最新情報を入手

@@ -1 +1 @@
from .client import WeChatPadClient
from .client import WeChatPadClient as WeChatPadClient

@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
class ChatRoomApi:
@@ -7,8 +7,6 @@ class ChatRoomApi:
self.token = token
def get_chatroom_member_detail(self, chatroom_name):
params = {
"ChatRoomName": chatroom_name
}
params = {'ChatRoomName': chatroom_name}
url = self.base_url + '/group/GetChatroomMemberDetail'
return post_json(url, token=self.token, data=params)

@@ -1,32 +1,23 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
import httpx
import base64
class DownloadApi:
def __init__(self, base_url, token):
self.base_url = base_url
self.token = token
def send_download(self, aeskey, file_type, file_url):
json_data = {
"AesKey": aeskey,
"FileType": file_type,
"FileURL": file_url
}
url = self.base_url + "/message/SendCdnDownload"
json_data = {'AesKey': aeskey, 'FileType': file_type, 'FileURL': file_url}
url = self.base_url + '/message/SendCdnDownload'
return post_json(url, token=self.token, data=json_data)
def get_msg_voice(self,buf_id, length, new_msgid):
json_data = {
"Bufid": buf_id,
"Length": length,
"NewMsgId": new_msgid,
"ToUserName": ""
}
url = self.base_url + "/message/GetMsgVoice"
def get_msg_voice(self, buf_id, length, new_msgid):
json_data = {'Bufid': buf_id, 'Length': length, 'NewMsgId': new_msgid, 'ToUserName': ''}
url = self.base_url + '/message/GetMsgVoice'
return post_json(url, token=self.token, data=json_data)
async def download_url_to_base64(self, download_url):
async with httpx.AsyncClient() as client:
response = await client.get(download_url)
@@ -36,4 +27,4 @@ class DownloadApi:
base64_str = base64.b64encode(file_bytes).decode('utf-8') # 返回字符串格式
return base64_str
else:
raise Exception('获取文件失败')
raise Exception('获取文件失败')

@@ -1,11 +1,6 @@
from libs.wechatpad_api.util.http_util import post_json,async_request
from typing import List, Dict, Any, Optional
class FriendApi:
"""联系人API类处理所有与联系人相关的操作"""
def __init__(self, base_url: str, token: str):
self.base_url = base_url
self.token = token

@@ -1,37 +1,34 @@
from libs.wechatpad_api.util.http_util import async_request,post_json,get_json
from libs.wechatpad_api.util.http_util import post_json, get_json
class LoginApi:
def __init__(self, base_url: str, token: str = None, admin_key: str = None):
'''
"""
Args:
base_url: 原始路径
token: token
admin_key: 管理员key
'''
"""
self.base_url = base_url
self.token = token
# self.admin_key = admin_key
def get_token(self, admin_key, day: int=365):
def get_token(self, admin_key, day: int = 365):
# 获取普通token
url = f"{self.base_url}/admin/GenAuthKey1"
json_data = {
"Count": 1,
"Days": day
}
url = f'{self.base_url}/admin/GenAuthKey1'
json_data = {'Count': 1, 'Days': day}
return post_json(base_url=url, token=admin_key, data=json_data)
def get_login_qr(self, Proxy: str = ""):
'''
def get_login_qr(self, Proxy: str = ''):
"""
Args:
Proxy:异地使用时代理
Returns:json数据
'''
"""
"""
{
@@ -49,54 +46,37 @@ class LoginApi:
}
"""
#获取登录二维码
url = f"{self.base_url}/login/GetLoginQrCodeNew"
# 获取登录二维码
url = f'{self.base_url}/login/GetLoginQrCodeNew'
check = False
if Proxy != "":
if Proxy != '':
check = True
json_data = {
"Check": check,
"Proxy": Proxy
}
json_data = {'Check': check, 'Proxy': Proxy}
return post_json(base_url=url, token=self.token, data=json_data)
def get_login_status(self):
# 获取登录状态
url = f'{self.base_url}/login/GetLoginStatus'
return get_json(base_url=url, token=self.token)
def logout(self):
# 退出登录
url = f'{self.base_url}/login/LogOut'
return post_json(base_url=url, token=self.token)
def wake_up_login(self, Proxy: str = ""):
def wake_up_login(self, Proxy: str = ''):
# 唤醒登录
url = f'{self.base_url}/login/WakeUpLogin'
check = False
if Proxy != "":
if Proxy != '':
check = True
json_data = {
"Check": check,
"Proxy": ""
}
json_data = {'Check': check, 'Proxy': ''}
return post_json(base_url=url, token=self.token, data=json_data)
def login(self,admin_key):
def login(self, admin_key):
login_status = self.get_login_status()
if login_status["Code"] == 300 and login_status["Text"] == "你已退出微信":
print("token已经失效重新获取")
if login_status['Code'] == 300 and login_status['Text'] == '你已退出微信':
print('token已经失效重新获取')
token_data = self.get_token(admin_key)
self.token = token_data["Data"][0]
self.token = token_data['Data'][0]

@@ -1,5 +1,4 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
class MessageApi:
@@ -7,8 +6,8 @@ class MessageApi:
self.base_url = base_url
self.token = token
def post_text(self, to_wxid, content, ats: list= []):
'''
def post_text(self, to_wxid, content, ats: list = []):
"""
Args:
app_id: 微信id
@@ -18,106 +17,64 @@ class MessageApi:
Returns:
'''
url = self.base_url + "/message/SendTextMessage"
"""
url = self.base_url + '/message/SendTextMessage'
"""发送文字消息"""
json_data = {
"MsgItem": [
{
"AtWxIDList": ats,
"ImageContent": "",
"MsgType": 0,
"TextContent": content,
"ToUserName": to_wxid
}
]
}
return post_json(base_url=url, token=self.token, data=json_data)
'MsgItem': [
{'AtWxIDList': ats, 'ImageContent': '', 'MsgType': 0, 'TextContent': content, 'ToUserName': to_wxid}
]
}
return post_json(base_url=url, token=self.token, data=json_data)
def post_image(self, to_wxid, img_url, ats: list= []):
def post_image(self, to_wxid, img_url, ats: list = []):
"""发送图片消息"""
# 这里好像可以尝试发送多个暂时未测试
json_data = {
"MsgItem": [
{
"AtWxIDList": ats,
"ImageContent": img_url,
"MsgType": 0,
"TextContent": '',
"ToUserName": to_wxid
}
'MsgItem': [
{'AtWxIDList': ats, 'ImageContent': img_url, 'MsgType': 0, 'TextContent': '', 'ToUserName': to_wxid}
]
}
url = self.base_url + "/message/SendImageMessage"
url = self.base_url + '/message/SendImageMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def post_voice(self, to_wxid, voice_data, voice_forma, voice_duration):
"""发送语音消息"""
json_data = {
"ToUserName": to_wxid,
"VoiceData": voice_data,
"VoiceFormat": voice_forma,
"VoiceSecond": voice_duration
'ToUserName': to_wxid,
'VoiceData': voice_data,
'VoiceFormat': voice_forma,
'VoiceSecond': voice_duration,
}
url = self.base_url + "/message/SendVoice"
url = self.base_url + '/message/SendVoice'
return post_json(base_url=url, token=self.token, data=json_data)
def post_name_card(self, alias, to_wxid, nick_name, name_card_wxid, flag):
"""发送名片消息"""
param = {
"CardAlias": alias,
"CardFlag": flag,
"CardNickName": nick_name,
"CardWxId": name_card_wxid,
"ToUserName": to_wxid
'CardAlias': alias,
'CardFlag': flag,
'CardNickName': nick_name,
'CardWxId': name_card_wxid,
'ToUserName': to_wxid,
}
url = f"{self.base_url}/message/ShareCardMessage"
url = f'{self.base_url}/message/ShareCardMessage'
return post_json(base_url=url, token=self.token, data=param)
def post_emoji(self, to_wxid, emoji_md5, emoji_size:int=0):
def post_emoji(self, to_wxid, emoji_md5, emoji_size: int = 0):
"""发送emoji消息"""
json_data = {
"EmojiList": [
{
"EmojiMd5": emoji_md5,
"EmojiSize": emoji_size,
"ToUserName": to_wxid
}
]
}
url = f"{self.base_url}/message/SendEmojiMessage"
json_data = {'EmojiList': [{'EmojiMd5': emoji_md5, 'EmojiSize': emoji_size, 'ToUserName': to_wxid}]}
url = f'{self.base_url}/message/SendEmojiMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def post_app_msg(self, to_wxid,xml_data, contenttype:int=0):
def post_app_msg(self, to_wxid, xml_data, contenttype: int = 0):
"""发送appmsg消息"""
json_data = {
"AppList": [
{
"ContentType": contenttype,
"ContentXML": xml_data,
"ToUserName": to_wxid
}
]
}
url = f"{self.base_url}/message/SendAppMessage"
json_data = {'AppList': [{'ContentType': contenttype, 'ContentXML': xml_data, 'ToUserName': to_wxid}]}
url = f'{self.base_url}/message/SendAppMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def revoke_msg(self, to_wxid, msg_id, new_msg_id, create_time):
"""撤回消息"""
param = {
"ClientMsgId": msg_id,
"CreateTime": create_time,
"NewMsgId": new_msg_id,
"ToUserName": to_wxid
}
url = f"{self.base_url}/message/RevokeMsg"
return post_json(base_url=url, token=self.token, data=param)
param = {'ClientMsgId': msg_id, 'CreateTime': create_time, 'NewMsgId': new_msg_id, 'ToUserName': to_wxid}
url = f'{self.base_url}/message/RevokeMsg'
return post_json(base_url=url, token=self.token, data=param)

@@ -1,10 +1,9 @@
import requests
import aiohttp
def post_json(base_url, token, data=None):
headers = {
'Content-Type': 'application/json'
}
headers = {'Content-Type': 'application/json'}
url = base_url + f'?key={token}'
@@ -18,14 +17,12 @@ def post_json(base_url, token, data=None):
else:
raise RuntimeError(response.text)
except Exception as e:
print(f"http请求失败, url={url}, exception={e}")
print(f'http请求失败, url={url}, exception={e}')
raise RuntimeError(str(e))
def get_json(base_url, token):
headers = {
'Content-Type': 'application/json'
}
def get_json(base_url, token):
headers = {'Content-Type': 'application/json'}
url = base_url + f'?key={token}'
@@ -39,21 +36,18 @@ def get_json(base_url, token):
else:
raise RuntimeError(response.text)
except Exception as e:
print(f"http请求失败, url={url}, exception={e}")
print(f'http请求失败, url={url}, exception={e}')
raise RuntimeError(str(e))
import aiohttp
import asyncio
async def async_request(
base_url: str,
token_key: str,
method: str = 'POST',
params: dict = None,
# headers: dict = None,
data: dict = None,
json: dict = None
base_url: str,
token_key: str,
method: str = 'POST',
params: dict = None,
# headers: dict = None,
data: dict = None,
json: dict = None,
):
"""
通用异步请求函数
@@ -67,18 +61,11 @@ async def async_request(
:param json: JSON数据
:return: 响应文本
"""
headers = {
'Content-Type': 'application/json'
}
url = f"{base_url}?key={token_key}"
headers = {'Content-Type': 'application/json'}
url = f'{base_url}?key={token_key}'
async with aiohttp.ClientSession() as session:
async with session.request(
method=method,
url=url,
params=params,
headers=headers,
data=data,
json=json
method=method, url=url, params=params, headers=headers, data=data, json=json
) as response:
response.raise_for_status() # 如果状态码不是200抛出异常
result = await response.json()
@@ -89,4 +76,3 @@ async def async_request(
# return await result
# else:
# raise RuntimeError("请求失败",response.text)
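For orientation, here is a minimal sketch of how the reformatted helpers in `libs/wechatpad_api/util/http_util.py` can be called after this change. The gateway URL, auth key, and wxid below are placeholders, and the text-message payload mirrors the one built by `MessageApi.post_text` above.

```python
import asyncio

from libs.wechatpad_api.util.http_util import post_json, async_request

BASE_URL = 'http://127.0.0.1:8080'   # placeholder WeChatPad gateway
TOKEN = 'your-auth-key'              # placeholder auth key


def send_text(to_wxid: str, content: str) -> dict:
    # post_json() appends '?key=<token>' to the URL and POSTs the JSON body synchronously.
    payload = {
        'MsgItem': [
            {'AtWxIDList': [], 'ImageContent': '', 'MsgType': 0, 'TextContent': content, 'ToUserName': to_wxid}
        ]
    }
    return post_json(base_url=f'{BASE_URL}/message/SendTextMessage', token=TOKEN, data=payload)


async def login_status() -> dict:
    # async_request() wraps aiohttp and returns the parsed JSON response.
    return await async_request(base_url=f'{BASE_URL}/login/GetLoginStatus', token_key=TOKEN, method='GET')


if __name__ == '__main__':
    print(send_text('wxid_placeholder', 'hello'))
    print(asyncio.run(login_status()))
```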

@@ -11,7 +11,7 @@ asciiart = r"""
|____\__,_|_||_\__, |___/\___/\__|
|___/
⭐️ Open Source 开源地址: https://github.com/RockChinQ/LangBot
⭐️ Open Source 开源地址: https://github.com/langbot-app/LangBot
📖 Documentation 文档地址: https://docs.langbot.app
"""

@@ -11,10 +11,10 @@ from ....core import app
preregistered_groups: list[type[RouterGroup]] = []
"""RouterGroup 的预注册列表"""
"""Pre-registered list of RouterGroup"""
def group_class(name: str, path: str) -> None:
def group_class(name: str, path: str) -> typing.Callable[[typing.Type[RouterGroup]], typing.Type[RouterGroup]]:
"""注册一个 RouterGroup"""
def decorator(cls: typing.Type[RouterGroup]) -> typing.Type[RouterGroup]:
@@ -27,7 +27,7 @@ def group_class(name: str, path: str) -> None:
class AuthType(enum.Enum):
"""认证类型"""
"""Authentication type"""
NONE = 'none'
USER_TOKEN = 'user-token'
@@ -56,7 +56,7 @@ class RouterGroup(abc.ABC):
auth_type: AuthType = AuthType.USER_TOKEN,
**options: typing.Any,
) -> typing.Callable[[RouteCallable], RouteCallable]: # decorator
"""注册一个路由"""
"""Register a route"""
def decorator(f: RouteCallable) -> RouteCallable:
nonlocal rule
@@ -64,11 +64,11 @@ class RouterGroup(abc.ABC):
async def handler_error(*args, **kwargs):
if auth_type == AuthType.USER_TOKEN:
# Authorization头中获取token
# get token from Authorization header
token = quart.request.headers.get('Authorization', '').replace('Bearer ', '')
if not token:
return self.http_status(401, -1, '未提供有效的用户令牌')
return self.http_status(401, -1, 'No valid user token provided')
try:
user_email = await self.ap.user_service.verify_jwt_token(token)
@@ -76,9 +76,9 @@ class RouterGroup(abc.ABC):
# check if this account exists
user = await self.ap.user_service.get_user_by_email(user_email)
if not user:
return self.http_status(401, -1, '用户不存在')
return self.http_status(401, -1, 'User not found')
# 检查f是否接受user_email参数
# check if f accepts user_email parameter
if 'user_email' in f.__code__.co_varnames:
kwargs['user_email'] = user_email
except Exception as e:
@@ -86,10 +86,11 @@ class RouterGroup(abc.ABC):
try:
return await f(*args, **kwargs)
except Exception: # 自动 500
except Exception as e: # 自动 500
traceback.print_exc()
# return self.http_status(500, -2, str(e))
return self.http_status(500, -2, 'internal server error')
return self.http_status(500, -2, str(e))
new_f = handler_error
new_f.__name__ = (self.name + rule).replace('/', '__')
@@ -101,7 +102,7 @@ class RouterGroup(abc.ABC):
return decorator
def success(self, data: typing.Any = None) -> quart.Response:
"""返回一个 200 响应"""
"""Return a 200 response"""
return quart.jsonify(
{
'code': 0,
@@ -111,7 +112,7 @@ class RouterGroup(abc.ABC):
)
def fail(self, code: int, msg: str) -> quart.Response:
"""返回一个异常响应"""
"""Return an error response"""
return quart.jsonify(
{
@@ -120,6 +121,6 @@ class RouterGroup(abc.ABC):
}
)
def http_status(self, status: int, code: int, msg: str) -> quart.Response:
def http_status(self, status: int, code: int, msg: str) -> typing.Tuple[quart.Response, int]:
"""返回一个指定状态码的响应"""
return self.fail(code, msg), status
return (self.fail(code, msg), status)

@@ -2,6 +2,10 @@ from __future__ import annotations
import quart
import mimetypes
import uuid
import asyncio
import quart.datastructures
from .. import group
@@ -20,3 +24,23 @@ class FilesRouterGroup(group.RouterGroup):
mime_type = 'image/jpeg'
return quart.Response(image_bytes, mimetype=mime_type)
@self.route('/documents', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> quart.Response:
request = quart.request
# get file bytes from 'file'
file = (await request.files)['file']
assert isinstance(file, quart.datastructures.FileStorage)
file_bytes = await asyncio.to_thread(file.stream.read)
extension = file.filename.split('.')[-1]
file_name = file.filename.split('.')[0]
file_key = file_name + '_' + str(uuid.uuid4())[:8] + '.' + extension
# save file to storage
await self.ap.storage_mgr.storage_provider.save(file_key, file_bytes)
return self.success(
data={
'file_id': file_key,
}
)
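As a rough client-side sketch of the new document upload route: the host, port, and user token are placeholders, and the `/api/v1/files` prefix of this router group is assumed since it is not visible in the hunk; the multipart field name `file` and the returned `file_id` come from the handler above.

```python
import requests

BASE = 'http://127.0.0.1:5300'                            # placeholder LangBot host/port
HEADERS = {'Authorization': 'Bearer <user-jwt-token>'}    # placeholder user token

# The handler reads the uploaded bytes from the multipart field named 'file',
# stores them via the storage provider, and returns a generated file_id
# (original name + short uuid suffix + extension).
with open('manual.md', 'rb') as f:
    resp = requests.post(
        f'{BASE}/api/v1/files/documents',   # '/api/v1/files' prefix is an assumption
        headers=HEADERS,
        files={'file': f},
    )
resp.raise_for_status()
file_id = resp.json()['data']['file_id']    # assuming the standard {'code': 0, 'data': ...} envelope
print('uploaded as', file_id)
```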

@@ -0,0 +1,90 @@
import quart
from ... import group
@group.group_class('knowledge_base', '/api/v1/knowledge/bases')
class KnowledgeBaseRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['POST', 'GET'])
async def handle_knowledge_bases() -> quart.Response:
if quart.request.method == 'GET':
knowledge_bases = await self.ap.knowledge_service.get_knowledge_bases()
return self.success(data={'bases': knowledge_bases})
elif quart.request.method == 'POST':
json_data = await quart.request.json
knowledge_base_uuid = await self.ap.knowledge_service.create_knowledge_base(json_data)
return self.success(data={'uuid': knowledge_base_uuid})
return self.http_status(405, -1, 'Method not allowed')
@self.route(
'/<knowledge_base_uuid>',
methods=['GET', 'DELETE', 'PUT'],
)
async def handle_specific_knowledge_base(knowledge_base_uuid: str) -> quart.Response:
if quart.request.method == 'GET':
knowledge_base = await self.ap.knowledge_service.get_knowledge_base(knowledge_base_uuid)
if knowledge_base is None:
return self.http_status(404, -1, 'knowledge base not found')
return self.success(
data={
'base': knowledge_base,
}
)
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.knowledge_service.update_knowledge_base(knowledge_base_uuid, json_data)
return self.success({})
elif quart.request.method == 'DELETE':
await self.ap.knowledge_service.delete_knowledge_base(knowledge_base_uuid)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/files',
methods=['GET', 'POST'],
)
async def get_knowledge_base_files(knowledge_base_uuid: str) -> str:
if quart.request.method == 'GET':
files = await self.ap.knowledge_service.get_files_by_knowledge_base(knowledge_base_uuid)
return self.success(
data={
'files': files,
}
)
elif quart.request.method == 'POST':
json_data = await quart.request.json
file_id = json_data.get('file_id')
if not file_id:
return self.http_status(400, -1, 'File ID is required')
# 调用服务层方法将文件与知识库关联
task_id = await self.ap.knowledge_service.store_file(knowledge_base_uuid, file_id)
return self.success(
{
'task_id': task_id,
}
)
@self.route(
'/<knowledge_base_uuid>/files/<file_id>',
methods=['DELETE'],
)
async def delete_specific_file_in_kb(file_id: str, knowledge_base_uuid: str) -> str:
await self.ap.knowledge_service.delete_file(knowledge_base_uuid, file_id)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/retrieve',
methods=['POST'],
)
async def retrieve_knowledge_base(knowledge_base_uuid: str) -> str:
json_data = await quart.request.json
query = json_data.get('query')
results = await self.ap.knowledge_service.retrieve_knowledge_base(knowledge_base_uuid, query)
return self.success(data={'results': results})
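A hedged end-to-end sketch of the knowledge-base API defined above (`/api/v1/knowledge/bases` is the registered prefix): create a base, attach an uploaded file by its `file_id`, and retrieve chunks for a query. The host, token, embedding model uuid, and the field names used when creating the base (`name`, `description`, `embedding_model_uuid`) are assumptions for illustration, not a confirmed schema.

```python
import requests

BASE = 'http://127.0.0.1:5300'                            # placeholder host/port
HEADERS = {'Authorization': 'Bearer <user-jwt-token>'}    # placeholder user token


def api(method: str, path: str, **kwargs) -> dict:
    # Small helper: call the API and unwrap the 'data' field of the success envelope.
    resp = requests.request(method, f'{BASE}{path}', headers=HEADERS, **kwargs)
    resp.raise_for_status()
    return resp.json().get('data', {})


# 1. Create a knowledge base; the body fields are illustrative assumptions.
kb = api('POST', '/api/v1/knowledge/bases', json={
    'name': 'Product docs',
    'description': 'User manual and FAQ',
    'embedding_model_uuid': '<embedding-model-uuid>',
})
kb_uuid = kb['uuid']

# 2. Associate a previously uploaded document (see the upload sketch above);
#    the route returns a task_id for the background embedding job.
task = api('POST', f'/api/v1/knowledge/bases/{kb_uuid}/files', json={'file_id': '<file_id>'})
print('embedding task:', task['task_id'])

# 3. Retrieve relevant chunks once the file has been embedded.
hits = api('POST', f'/api/v1/knowledge/bases/{kb_uuid}/retrieve', json={'query': 'How do I reset my password?'})
print(hits['results'])
```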

@@ -8,7 +8,7 @@ class WebChatDebugRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('/send', methods=['POST'])
async def send_message(pipeline_uuid: str) -> str:
"""发送调试消息到流水线"""
"""Send a message to the pipeline for debugging"""
try:
data = await quart.request.get_json()
session_type = data.get('session_type', 'person')
@@ -38,7 +38,7 @@ class WebChatDebugRouterGroup(group.RouterGroup):
@self.route('/messages/<session_type>', methods=['GET'])
async def get_messages(pipeline_uuid: str, session_type: str) -> str:
"""获取调试消息历史"""
"""Get the message history of the pipeline for debugging"""
try:
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')
@@ -57,7 +57,7 @@ class WebChatDebugRouterGroup(group.RouterGroup):
@self.route('/reset/<session_type>', methods=['POST'])
async def reset_session(session_type: str) -> str:
"""重置调试会话"""
"""Reset the debug session"""
try:
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')

@@ -40,7 +40,7 @@ class PluginsRouterGroup(group.RouterGroup):
self.ap.plugin_mgr.update_plugin(plugin_name, task_context=ctx),
kind='plugin-operation',
name=f'plugin-update-{plugin_name}',
label=f'更新插件 {plugin_name}',
label=f'Updating plugin {plugin_name}',
context=ctx,
)
return self.success(data={'task_id': wrapper.id})
@@ -62,7 +62,7 @@ class PluginsRouterGroup(group.RouterGroup):
self.ap.plugin_mgr.uninstall_plugin(plugin_name, task_context=ctx),
kind='plugin-operation',
name=f'plugin-remove-{plugin_name}',
label=f'删除插件 {plugin_name}',
label=f'Removing plugin {plugin_name}',
context=ctx,
)
@@ -102,7 +102,7 @@ class PluginsRouterGroup(group.RouterGroup):
self.ap.plugin_mgr.install_plugin(data['source'], task_context=ctx),
kind='plugin-operation',
name='plugin-install-github',
label=f'安装插件 ...{short_source_str}',
label=f'Installing plugin ...{short_source_str}',
context=ctx,
)

@@ -9,18 +9,18 @@ class LLMModelsRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.model_service.get_llm_models()})
return self.success(data={'models': await self.ap.llm_model_service.get_llm_models()})
elif quart.request.method == 'POST':
json_data = await quart.request.json
model_uuid = await self.ap.model_service.create_llm_model(json_data)
model_uuid = await self.ap.llm_model_service.create_llm_model(json_data)
return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str:
if quart.request.method == 'GET':
model = await self.ap.model_service.get_llm_model(model_uuid)
model = await self.ap.llm_model_service.get_llm_model(model_uuid)
if model is None:
return self.http_status(404, -1, 'model not found')
@@ -29,11 +29,11 @@ class LLMModelsRouterGroup(group.RouterGroup):
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.model_service.update_llm_model(model_uuid, json_data)
await self.ap.llm_model_service.update_llm_model(model_uuid, json_data)
return self.success()
elif quart.request.method == 'DELETE':
await self.ap.model_service.delete_llm_model(model_uuid)
await self.ap.llm_model_service.delete_llm_model(model_uuid)
return self.success()
@@ -41,6 +41,49 @@ class LLMModelsRouterGroup(group.RouterGroup):
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.model_service.test_llm_model(model_uuid, json_data)
await self.ap.llm_model_service.test_llm_model(model_uuid, json_data)
return self.success()
@group.group_class('models/embedding', '/api/v1/provider/models/embedding')
class EmbeddingModelsRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.embedding_models_service.get_embedding_models()})
elif quart.request.method == 'POST':
json_data = await quart.request.json
model_uuid = await self.ap.embedding_models_service.create_embedding_model(json_data)
return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str:
if quart.request.method == 'GET':
model = await self.ap.embedding_models_service.get_embedding_model(model_uuid)
if model is None:
return self.http_status(404, -1, 'model not found')
return self.success(data={'model': model})
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.embedding_models_service.update_embedding_model(model_uuid, json_data)
return self.success()
elif quart.request.method == 'DELETE':
await self.ap.embedding_models_service.delete_embedding_model(model_uuid)
return self.success()
@self.route('/<model_uuid>/test', methods=['POST'])
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.embedding_models_service.test_embedding_model(model_uuid, json_data)
return self.success()
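A short sketch for the embedding model endpoints registered above (`/api/v1/provider/models/embedding`): the `_` placeholder uuid on the test route builds a temporary runtime model from the request body, which is how a configuration can be validated before it is saved. The host, token, and model fields are illustrative assumptions; the requesters call at the end uses the new `?type=` filter, but its `/api/v1/provider/requesters` prefix is assumed, as it is not shown in this hunk.

```python
import requests

BASE = 'http://127.0.0.1:5300'                            # placeholder host/port
HEADERS = {'Authorization': 'Bearer <user-jwt-token>'}    # placeholder user token

# Illustrative payload; the real EmbeddingModel fields live in the persistence
# entity and may differ (extra_args support was added in this PR).
model_spec = {
    'name': 'openai-embedding-small',
    'model': 'text-embedding-3-small',
    'requester': 'openai',
    'extra_args': {},
}

# Validate the configuration without persisting it ('_' = temporary runtime model).
requests.post(f'{BASE}/api/v1/provider/models/embedding/_/test', headers=HEADERS, json=model_spec).raise_for_status()

# Persist the model once the test passes.
created = requests.post(f'{BASE}/api/v1/provider/models/embedding', headers=HEADERS, json=model_spec)
created.raise_for_status()
model_uuid = created.json()['data']['uuid']

# List requesters that declare embedding support (prefix assumed).
requesters = requests.get(f'{BASE}/api/v1/provider/requesters', headers=HEADERS, params={'type': 'embedding'})
print(model_uuid, requesters.json())
```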

@@ -8,7 +8,8 @@ class RequestersRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET'])
async def _() -> quart.Response:
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info()})
model_type = quart.request.args.get('type', '')
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info(model_type)})
@self.route('/<requester_name>', methods=['GET'])
async def _(requester_name: str) -> quart.Response:

@@ -14,7 +14,7 @@ class UserRouterGroup(group.RouterGroup):
return self.success(data={'initialized': await self.ap.user_service.is_initialized()})
if await self.ap.user_service.is_initialized():
return self.fail(1, '系统已初始化')
return self.fail(1, 'System already initialized')
json_data = await quart.request.json
@@ -32,7 +32,7 @@ class UserRouterGroup(group.RouterGroup):
try:
token = await self.ap.user_service.authenticate(json_data['user'], json_data['password'])
except argon2.exceptions.VerifyMismatchError:
return self.fail(1, '用户名或密码错误')
return self.fail(1, 'Invalid username or password')
return self.success(data={'token': token})
@@ -54,15 +54,15 @@ class UserRouterGroup(group.RouterGroup):
await asyncio.sleep(3)
if not await self.ap.user_service.is_initialized():
return self.http_status(400, -1, 'system not initialized')
return self.http_status(400, -1, 'System not initialized')
user_obj = await self.ap.user_service.get_user_by_email(user_email)
if user_obj is None:
return self.http_status(400, -1, 'user not found')
return self.http_status(400, -1, 'User not found')
if recovery_key != self.ap.instance_config.data['system']['recovery_key']:
return self.http_status(403, -1, 'invalid recovery key')
return self.http_status(403, -1, 'Invalid recovery key')
await self.ap.user_service.reset_password(user_email, new_password)

@@ -14,11 +14,13 @@ from . import group
from .groups import provider as groups_provider
from .groups import platform as groups_platform
from .groups import pipelines as groups_pipelines
from .groups import knowledge as groups_knowledge
importutil.import_modules_in_pkg(groups)
importutil.import_modules_in_pkg(groups_provider)
importutil.import_modules_in_pkg(groups_platform)
importutil.import_modules_in_pkg(groups_pipelines)
importutil.import_modules_in_pkg(groups_knowledge)
class HTTPController:
@@ -45,7 +47,7 @@ class HTTPController:
try:
await self.quart_app.run_task(*args, **kwargs)
except Exception as e:
self.ap.logger.error(f'启动 HTTP 服务失败: {e}')
self.ap.logger.error(f'Failed to start HTTP service: {e}')
self.ap.task_mgr.create_task(
exception_handler(

@@ -10,7 +10,7 @@ from ....entity.persistence import pipeline as persistence_pipeline
class BotService:
"""机器人服务"""
"""Bot service"""
ap: app.Application
@@ -18,7 +18,7 @@ class BotService:
self.ap = ap
async def get_bots(self) -> list[dict]:
"""获取所有机器人"""
"""Get all bots"""
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_bot.Bot))
bots = result.all()
@@ -26,7 +26,7 @@ class BotService:
return [self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot) for bot in bots]
async def get_bot(self, bot_uuid: str) -> dict | None:
"""获取机器人"""
"""Get bot"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid)
)
@@ -39,7 +39,7 @@ class BotService:
return self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot)
async def create_bot(self, bot_data: dict) -> str:
"""创建机器人"""
"""Create bot"""
# TODO: 检查配置信息格式
bot_data['uuid'] = str(uuid.uuid4())
@@ -63,7 +63,7 @@ class BotService:
return bot_data['uuid']
async def update_bot(self, bot_uuid: str, bot_data: dict) -> None:
"""更新机器人"""
"""Update bot"""
if 'uuid' in bot_data:
del bot_data['uuid']
@@ -99,7 +99,7 @@ class BotService:
session.using_conversation = None
async def delete_bot(self, bot_uuid: str) -> None:
"""删除机器人"""
"""Delete bot"""
await self.ap.platform_mgr.remove_bot(bot_uuid)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid)

@@ -0,0 +1,118 @@
from __future__ import annotations
import uuid
import sqlalchemy
from ....core import app
from ....entity.persistence import rag as persistence_rag
class KnowledgeService:
"""知识库服务"""
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_knowledge_bases(self) -> list[dict]:
"""获取所有知识库"""
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_rag.KnowledgeBase))
knowledge_bases = result.all()
return [
self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
for knowledge_base in knowledge_bases
]
async def get_knowledge_base(self, kb_uuid: str) -> dict | None:
"""获取知识库"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
knowledge_base = result.first()
if knowledge_base is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
async def create_knowledge_base(self, kb_data: dict) -> str:
"""创建知识库"""
kb_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.KnowledgeBase).values(kb_data))
kb = await self.get_knowledge_base(kb_data['uuid'])
await self.ap.rag_mgr.load_knowledge_base(kb)
return kb_data['uuid']
async def update_knowledge_base(self, kb_uuid: str, kb_data: dict) -> None:
"""更新知识库"""
if 'uuid' in kb_data:
del kb_data['uuid']
if 'embedding_model_uuid' in kb_data:
del kb_data['embedding_model_uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_rag.KnowledgeBase)
.values(kb_data)
.where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
await self.ap.rag_mgr.remove_knowledge_base_from_runtime(kb_uuid)
kb = await self.get_knowledge_base(kb_uuid)
await self.ap.rag_mgr.load_knowledge_base(kb)
async def store_file(self, kb_uuid: str, file_id: str) -> int:
"""存储文件"""
# await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.File).values(kb_id=kb_uuid, file_id=file_id))
# await self.ap.rag_mgr.store_file(file_id)
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return await runtime_kb.store_file(file_id)
async def retrieve_knowledge_base(self, kb_uuid: str, query: str) -> list[dict]:
"""检索知识库"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return [result.model_dump() for result in await runtime_kb.retrieve(query)]
async def get_files_by_knowledge_base(self, kb_uuid: str) -> list[dict]:
"""获取知识库文件"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
files = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_rag.File, file) for file in files]
async def delete_file(self, kb_uuid: str, file_id: str) -> None:
"""删除文件"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
await runtime_kb.delete_file(file_id)
async def delete_knowledge_base(self, kb_uuid: str) -> None:
"""删除知识库"""
await self.ap.rag_mgr.delete_knowledge_base(kb_uuid)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
# delete files
files = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
for file in files:
# delete chunks
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.Chunk).where(persistence_rag.Chunk.file_id == file.uuid)
)
# delete file
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.File).where(persistence_rag.File.uuid == file.uuid)
)

@@ -10,7 +10,7 @@ from ....provider.modelmgr import requester as model_requester
from ....provider import entities as llm_entities
class ModelsService:
class LLMModelsService:
ap: app.Application
def __init__(self, ap: app.Application) -> None:
@@ -103,3 +103,89 @@ class ModelsService:
funcs=[],
extra_args={},
)
class EmbeddingModelsService:
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_embedding_models(self) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.EmbeddingModel))
models = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model) for model in models]
async def create_embedding_model(self, model_data: dict) -> str:
model_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(
sqlalchemy.insert(persistence_model.EmbeddingModel).values(**model_data)
)
embedding_model = await self.get_embedding_model(model_data['uuid'])
await self.ap.model_mgr.load_embedding_model(embedding_model)
return model_data['uuid']
async def get_embedding_model(self, model_uuid: str) -> dict | None:
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
model = result.first()
if model is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model)
async def update_embedding_model(self, model_uuid: str, model_data: dict) -> None:
if 'uuid' in model_data:
del model_data['uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_model.EmbeddingModel)
.where(persistence_model.EmbeddingModel.uuid == model_uuid)
.values(**model_data)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
embedding_model = await self.get_embedding_model(model_uuid)
await self.ap.model_mgr.load_embedding_model(embedding_model)
async def delete_embedding_model(self, model_uuid: str) -> None:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
async def test_embedding_model(self, model_uuid: str, model_data: dict) -> None:
runtime_embedding_model: model_requester.RuntimeEmbeddingModel | None = None
if model_uuid != '_':
for model in self.ap.model_mgr.embedding_models:
if model.model_entity.uuid == model_uuid:
runtime_embedding_model = model
break
if runtime_embedding_model is None:
raise Exception('model not found')
else:
runtime_embedding_model = await self.ap.model_mgr.init_runtime_embedding_model(model_data)
await runtime_embedding_model.requester.invoke_embedding(
model=runtime_embedding_model,
input_text=['Hello, world!'],
extra_args={},
)
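A hedged usage sketch of the EmbeddingModelsService above; the requester name, base URL and API key are placeholders and must correspond to a requester actually registered with the model manager:

async def demo_embedding_models(ap):
    model_uuid = await ap.embedding_models_service.create_embedding_model(
        {
            'name': 'text-embedding-3-small',
            'description': 'example embedding model',
            'requester': '<requester-name>',
            'requester_config': {'base_url': '<base-url>'},
            'api_keys': ['<api-key>'],
            'extra_args': {},
        }
    )
    # Passing a real uuid tests the already-loaded runtime model;
    # passing '_' would instead build a transient model from the second argument.
    await ap.embedding_models_service.test_embedding_model(model_uuid, {})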

View File

@@ -6,7 +6,7 @@ from .. import model as file_model
class JSONConfigFile(file_model.ConfigFile):
"""JSON配置文件"""
"""JSON config file"""
def __init__(
self,
@@ -42,7 +42,7 @@ class JSONConfigFile(file_model.ConfigFile):
try:
cfg = json.load(f)
except json.JSONDecodeError as e:
raise Exception(f'配置文件 {self.config_file_name} 语法错误: {e}')
raise Exception(f'Syntax error in config file {self.config_file_name}: {e}')
if completion:
for key in self.template_data:

View File

@@ -7,13 +7,13 @@ from .. import model as file_model
class PythonModuleConfigFile(file_model.ConfigFile):
"""Python模块配置文件"""
"""Python module config file"""
config_file_name: str = None
"""配置文件名"""
"""Config file name"""
template_file_name: str = None
"""模板文件名"""
"""Template file name"""
def __init__(self, config_file_name: str, template_file_name: str) -> None:
self.config_file_name = config_file_name
@@ -42,7 +42,7 @@ class PythonModuleConfigFile(file_model.ConfigFile):
cfg[key] = getattr(module, key)
# 从模板模块文件中进行补全
# complete from template module file
if completion:
module_name = os.path.splitext(os.path.basename(self.template_file_name))[0]
module = importlib.import_module(module_name)
@@ -60,7 +60,7 @@ class PythonModuleConfigFile(file_model.ConfigFile):
return cfg
async def save(self, data: dict):
logging.warning('Python模块配置文件不支持保存')
logging.warning('Python module config file does not support saving')
def save_sync(self, data: dict):
logging.warning('Python模块配置文件不支持保存')
logging.warning('Python module config file does not support saving')

View File

@@ -6,7 +6,7 @@ from .. import model as file_model
class YAMLConfigFile(file_model.ConfigFile):
"""YAML配置文件"""
"""YAML config file"""
def __init__(
self,
@@ -42,7 +42,7 @@ class YAMLConfigFile(file_model.ConfigFile):
try:
cfg = yaml.load(f, Loader=yaml.FullLoader)
except yaml.YAMLError as e:
raise Exception(f'配置文件 {self.config_file_name} 语法错误: {e}')
raise Exception(f'Syntax error in config file {self.config_file_name}: {e}')
if completion:
for key in self.template_data:

View File

@@ -5,27 +5,27 @@ from .impls import pymodule, json as json_file, yaml as yaml_file
class ConfigManager:
"""配置文件管理器"""
"""Config file manager"""
name: str = None
"""配置管理器名"""
"""Config manager name"""
description: str = None
"""配置管理器描述"""
"""Config manager description"""
schema: dict = None
"""配置文件 schema
需要符合 JSON Schema Draft 7 规范
"""Config file schema
Must conform to JSON Schema Draft 7 specification
"""
file: file_model.ConfigFile = None
"""配置文件实例"""
"""Config file instance"""
data: dict = None
"""配置数据"""
"""Config data"""
doc_link: str = None
"""配置文件文档链接"""
"""Config file documentation link"""
def __init__(self, cfg_file: file_model.ConfigFile) -> None:
self.file = cfg_file
@@ -42,15 +42,15 @@ class ConfigManager:
async def load_python_module_config(config_name: str, template_name: str, completion: bool = True) -> ConfigManager:
"""加载Python模块配置文件
"""Load Python module config file
Args:
config_name (str): 配置文件名
template_name (str): 模板文件名
completion (bool): 是否自动补全内存中的配置文件
config_name (str): Config file name
template_name (str): Template file name
completion (bool): Whether to automatically complete the config file in memory
Returns:
ConfigManager: 配置文件管理器
ConfigManager: Config file manager
"""
cfg_inst = pymodule.PythonModuleConfigFile(config_name, template_name)
@@ -66,13 +66,13 @@ async def load_json_config(
template_data: dict = None,
completion: bool = True,
) -> ConfigManager:
"""加载JSON配置文件
"""Load JSON config file
Args:
config_name (str): 配置文件名
template_name (str): 模板文件名
template_data (dict): 模板数据
completion (bool): 是否自动补全内存中的配置文件
config_name (str): Config file name
template_name (str): Template file name
template_data (dict): Template data
completion (bool): Whether to automatically complete the config file in memory
"""
cfg_inst = json_file.JSONConfigFile(config_name, template_name, template_data)
@@ -88,16 +88,16 @@ async def load_yaml_config(
template_data: dict = None,
completion: bool = True,
) -> ConfigManager:
"""加载YAML配置文件
"""Load YAML config file
Args:
config_name (str): 配置文件名
template_name (str): 模板文件名
template_data (dict): 模板数据
completion (bool): 是否自动补全内存中的配置文件
config_name (str): Config file name
template_name (str): Template file name
template_data (dict): Template data
completion (bool): Whether to automatically complete the config file in memory
Returns:
ConfigManager: 配置文件管理器
ConfigManager: Config file manager
"""
cfg_inst = yaml_file.YAMLConfigFile(config_name, template_name, template_data)
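A small sketch of loading a config through the helpers above; the file paths are illustrative, and it assumes the returned manager is already loaded so that `data` can be read and `dump_config()` (used elsewhere in this changeset) persists edits:

async def demo_load_config():
    mgr = await load_json_config(
        'data/config/example.json',   # illustrative config path
        'templates/example.json',     # illustrative template path
        completion=True,
    )
    print(mgr.data)                   # merged config data
    mgr.data['enabled'] = True
    await mgr.dump_config()           # write the data back to disk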

View File

@@ -2,16 +2,16 @@ import abc
class ConfigFile(metaclass=abc.ABCMeta):
"""配置文件抽象类"""
"""Config file abstract class"""
config_file_name: str = None
"""配置文件名"""
"""Config file name"""
template_file_name: str = None
"""模板文件名"""
"""Template file name"""
template_data: dict = None
"""模板数据"""
"""Template data"""
@abc.abstractmethod
def exists(self) -> bool:

View File

@@ -22,15 +22,18 @@ from ..api.http.service import user as user_service
from ..api.http.service import model as model_service
from ..api.http.service import pipeline as pipeline_service
from ..api.http.service import bot as bot_service
from ..api.http.service import knowledge as knowledge_service
from ..discover import engine as discover_engine
from ..storage import mgr as storagemgr
from ..utils import logcache
from . import taskmgr
from . import entities as core_entities
from ..rag.knowledge import kbmgr as rag_mgr
from ..vector import mgr as vectordb_mgr
class Application:
"""运行时应用对象和上下文"""
"""Runtime application object and context"""
event_loop: asyncio.AbstractEventLoop = None
@@ -47,10 +50,12 @@ class Application:
model_mgr: llm_model_mgr.ModelManager = None
# TODO 移动到 pipeline 里
rag_mgr: rag_mgr.RAGManager = None
# TODO move to pipeline
tool_mgr: llm_tool_mgr.ToolManager = None
# ======= 配置管理器 =======
# ======= Config manager =======
command_cfg: config_mgr.ConfigManager = None # deprecated
@@ -64,7 +69,7 @@ class Application:
instance_config: config_mgr.ConfigManager = None
# ======= 元数据配置管理器 =======
# ======= Metadata config manager =======
sensitive_meta: config_mgr.ConfigManager = None
@@ -93,6 +98,8 @@ class Application:
persistence_mgr: persistencemgr.PersistenceManager = None
vector_db_mgr: vectordb_mgr.VectorDBManager = None
http_ctrl: http_controller.HTTPController = None
log_cache: logcache.LogCache = None
@@ -103,12 +110,16 @@ class Application:
user_service: user_service.UserService = None
model_service: model_service.ModelsService = None
llm_model_service: model_service.LLMModelsService = None
embedding_models_service: model_service.EmbeddingModelsService = None
pipeline_service: pipeline_service.PipelineService = None
bot_service: bot_service.BotService = None
knowledge_service: knowledge_service.KnowledgeService = None
def __init__(self):
pass
@@ -143,6 +154,7 @@ class Application:
name='http-api-controller',
scopes=[core_entities.LifecycleControlScope.APPLICATION],
)
self.task_mgr.create_task(
never_ending(),
name='never-ending-task',
@@ -154,11 +166,11 @@ class Application:
except asyncio.CancelledError:
pass
except Exception as e:
self.logger.error(f'应用运行致命异常: {e}')
self.logger.error(f'Application runtime fatal exception: {e}')
self.logger.debug(f'Traceback: {traceback.format_exc()}')
async def print_web_access_info(self):
"""打印访问 webui 的提示"""
"""Print access webui tips"""
if not os.path.exists(os.path.join('.', 'web/out')):
self.logger.warning('WebUI 文件缺失,请根据文档部署:https://docs.langbot.app/zh')
@@ -190,7 +202,7 @@ class Application:
):
match scope:
case core_entities.LifecycleControlScope.PLATFORM.value:
self.logger.info('执行热重载 scope=' + scope)
self.logger.info('Hot reload scope=' + scope)
await self.platform_mgr.shutdown()
self.platform_mgr = im_mgr.PlatformManager(self)
@@ -206,7 +218,7 @@ class Application:
],
)
case core_entities.LifecycleControlScope.PLUGIN.value:
self.logger.info('执行热重载 scope=' + scope)
self.logger.info('Hot reload scope=' + scope)
await self.plugin_mgr.destroy_plugins()
# 删除 sys.module 中所有的 plugins/* 下的模块
@@ -222,7 +234,7 @@ class Application:
await self.plugin_mgr.load_plugins()
await self.plugin_mgr.initialize_plugins()
case core_entities.LifecycleControlScope.PROVIDER.value:
self.logger.info('执行热重载 scope=' + scope)
self.logger.info('Hot reload scope=' + scope)
await self.tool_mgr.shutdown()

View File

@@ -8,7 +8,7 @@ from . import app
from . import stage
from ..utils import constants, importutil
# 引入启动阶段实现以便注册
# Import startup stage implementation to register
from . import stages
importutil.import_modules_in_pkg(stages)
@@ -25,7 +25,7 @@ stage_order = [
async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
# 确定是否为调试模式
# Determine if it is debug mode
if 'DEBUG' in os.environ and os.environ['DEBUG'] in ['true', '1']:
constants.debug_mode = True
@@ -33,7 +33,7 @@ async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
ap.event_loop = loop
# 执行启动阶段
# Execute startup stage
for stage_name in stage_order:
stage_cls = stage.preregistered_stages[stage_name]
stage_inst = stage_cls()
@@ -47,11 +47,11 @@ async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
async def main(loop: asyncio.AbstractEventLoop):
try:
# 挂系统信号处理
# Hang system signal processing
import signal
def signal_handler(sig, frame):
print('[Signal] 程序退出.')
print('[Signal] Program exit.')
# ap.shutdown()
os._exit(0)

View File

@@ -2,8 +2,8 @@ import pip
import os
from ...utils import pkgmgr
# 检查依赖,防止用户未安装
# 左边为引入名称,右边为依赖名称
# Check dependencies in case the user has not installed them
# Left is the import name, right is the dependency name
required_deps = {
'requests': 'requests',
'openai': 'openai',
@@ -65,7 +65,7 @@ async def install_deps(deps: list[str]):
async def precheck_plugin_deps():
print('[Startup] Prechecking plugin dependencies...')
# 只有在plugins目录存在时才执行插件依赖安装
# Only execute plugin dependency installation when the plugins directory exists
if os.path.exists('plugins'):
for dir in os.listdir('plugins'):
subdir = os.path.join('plugins', dir)
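A stdlib-only sketch of how an import-name → package-name mapping like `required_deps` can be checked at startup; this mirrors the intent of the precheck above but is not the project's actual implementation:

import importlib.util

def find_missing(deps: dict[str, str]) -> list[str]:
    # Return the pip package names whose import name cannot be resolved locally.
    return [pkg for mod, pkg in deps.items() if importlib.util.find_spec(mod) is None]

# Example: find_missing({'requests': 'requests', 'openai': 'openai'})
# returns the packages that still need to be installed.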

View File

@@ -17,7 +17,7 @@ log_colors_config = {
async def init_logging(extra_handlers: list[logging.Handler] = None) -> logging.Logger:
# 删除所有现有的logger
# Remove all existing loggers
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
@@ -54,13 +54,13 @@ async def init_logging(extra_handlers: list[logging.Handler] = None) -> logging.
handler.setFormatter(color_formatter)
qcg_logger.addHandler(handler)
qcg_logger.debug('日志初始化完成,日志级别:%s' % level)
qcg_logger.debug('Logging initialized, log level: %s' % level)
logging.basicConfig(
level=logging.CRITICAL, # 设置日志输出格式
level=logging.CRITICAL, # Set log output format
format='[DEPR][%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s',
# 日志输出的格式
# -8表示占位符让输出左对齐输出长度都为8位
datefmt='%Y-%m-%d %H:%M:%S', # 时间输出的格式
# Log output format
# -8 is a placeholder: left-align the output with a fixed width of 8 characters
datefmt='%Y-%m-%d %H:%M:%S', # Time output format
handlers=[logging.NullHandler()],
)

View File

@@ -19,7 +19,7 @@ class LifecycleControlScope(enum.Enum):
APPLICATION = 'application'
PLATFORM = 'platform'
PLUGIN = 'plugin'
PROVIDER = 'provider'
PROVIDER = 'provider'
class LauncherTypes(enum.Enum):

View File

@@ -7,11 +7,11 @@ from . import app
preregistered_migrations: list[typing.Type[Migration]] = []
"""当前阶段暂不支持扩展"""
"""Currently not supported for extension"""
def migration_class(name: str, number: int):
"""注册一个迁移"""
"""Register a migration"""
def decorator(cls: typing.Type[Migration]) -> typing.Type[Migration]:
cls.name = name
@@ -23,7 +23,7 @@ def migration_class(name: str, number: int):
class Migration(abc.ABC):
"""一个版本的迁移"""
"""A version migration"""
name: str
@@ -36,10 +36,10 @@ class Migration(abc.ABC):
@abc.abstractmethod
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移"""
"""Determine if the current environment needs to run this migration"""
pass
@abc.abstractmethod
async def run(self):
"""执行迁移"""
"""Run migration"""
pass
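A minimal sketch of the registration pattern above; the migration name and number are made up and the bodies are placeholders:

@migration_class('example-migration', 99)  # illustrative name and number
class ExampleMigration(Migration):
    async def need_migrate(self) -> bool:
        # Inspect the current environment; this sketch never triggers.
        return False

    async def run(self):
        # Perform the actual migration work here.
        pass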

View File

@@ -9,7 +9,7 @@ preregistered_notes: list[typing.Type[LaunchNote]] = []
def note_class(name: str, number: int):
"""注册一个启动信息"""
"""Register a launch information"""
def decorator(cls: typing.Type[LaunchNote]) -> typing.Type[LaunchNote]:
cls.name = name
@@ -21,7 +21,7 @@ def note_class(name: str, number: int):
class LaunchNote(abc.ABC):
"""启动信息"""
"""Launch information"""
name: str
@@ -34,10 +34,10 @@ class LaunchNote(abc.ABC):
@abc.abstractmethod
async def need_show(self) -> bool:
"""判断当前环境是否需要显示此启动信息"""
"""Determine if the current environment needs to display this launch information"""
pass
@abc.abstractmethod
async def yield_note(self) -> typing.AsyncGenerator[typing.Tuple[str, int], None]:
"""生成启动信息"""
"""Generate launch information"""
pass
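A minimal launch note sketch following the contract above: yield_note is an async generator of (message, log level) tuples; the name and number are illustrative:

import logging

@note_class('ExampleNote', 99)  # illustrative name and number
class ExampleNote(LaunchNote):
    async def need_show(self) -> bool:
        return True

    async def yield_note(self):
        yield ('Hello from an example launch note.', logging.INFO)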

View File

@@ -7,7 +7,7 @@ from .. import note
@note.note_class('ClassicNotes', 1)
class ClassicNotes(note.LaunchNote):
"""经典启动信息"""
"""Classic launch information"""
async def need_show(self) -> bool:
return True

View File

@@ -9,7 +9,7 @@ from .. import note
@note.note_class('SelectionModeOnWindows', 2)
class SelectionModeOnWindows(note.LaunchNote):
"""Windows 上的选择模式提示信息"""
"""Selection mode prompt information on Windows"""
async def need_show(self) -> bool:
return os.name == 'nt'
@@ -19,3 +19,8 @@ class SelectionModeOnWindows(note.LaunchNote):
"""您正在使用 Windows 系统,若窗口左上角显示处于”选择“模式,程序将被暂停运行,此时请右键窗口中空白区域退出选择模式。""",
logging.INFO,
)
yield (
"""You are using Windows system, if the top left corner of the window displays "Selection" mode, the program will be paused running, please right-click on the blank area in the window to exit the selection mode.""",
logging.INFO,
)

View File

@@ -7,9 +7,9 @@ from . import app
preregistered_stages: dict[str, typing.Type[BootingStage]] = {}
"""预注册的请求处理阶段。在初始化时,所有请求处理阶段类会被注册到此字典中。
"""Pre-registered request processing stages. All request processing stage classes are registered in this dictionary during initialization.
当前阶段暂不支持扩展
Currently not supported for extension
"""
@@ -22,11 +22,11 @@ def stage_class(name: str):
class BootingStage(abc.ABC):
"""启动阶段"""
"""Booting stage"""
name: str = None
@abc.abstractmethod
async def run(self, ap: app.Application):
"""启动"""
"""Run"""
pass

View File

@@ -9,6 +9,7 @@ from ...command import cmdmgr
from ...provider.session import sessionmgr as llm_session_mgr
from ...provider.modelmgr import modelmgr as llm_model_mgr
from ...provider.tools import toolmgr as llm_tool_mgr
from ...rag.knowledge import kbmgr as rag_mgr
from ...platform import botmgr as im_mgr
from ...persistence import mgr as persistencemgr
from ...api.http.controller import main as http_controller
@@ -16,18 +17,20 @@ from ...api.http.service import user as user_service
from ...api.http.service import model as model_service
from ...api.http.service import pipeline as pipeline_service
from ...api.http.service import bot as bot_service
from ...api.http.service import knowledge as knowledge_service
from ...discover import engine as discover_engine
from ...storage import mgr as storagemgr
from ...utils import logcache
from ...vector import mgr as vectordb_mgr
from .. import taskmgr
@stage.stage_class('BuildAppStage')
class BuildAppStage(stage.BootingStage):
"""构建应用阶段"""
"""Build LangBot application"""
async def run(self, ap: app.Application):
"""构建app对象的各个组件对象并初始化"""
"""Build LangBot application"""
ap.task_mgr = taskmgr.AsyncTaskManager(ap)
discover = discover_engine.ComponentDiscoveryEngine(ap)
@@ -42,7 +45,7 @@ class BuildAppStage(stage.BootingStage):
await ver_mgr.initialize()
ap.ver_mgr = ver_mgr
# 发送公告
# Send announcement
ann_mgr = announce.AnnouncementManager(ap)
ap.ann_mgr = ann_mgr
@@ -88,6 +91,15 @@ class BuildAppStage(stage.BootingStage):
await pipeline_mgr.initialize()
ap.pipeline_mgr = pipeline_mgr
rag_mgr_inst = rag_mgr.RAGManager(ap)
await rag_mgr_inst.initialize()
ap.rag_mgr = rag_mgr_inst
# 初始化向量数据库管理器
vectordb_mgr_inst = vectordb_mgr.VectorDBManager(ap)
await vectordb_mgr_inst.initialize()
ap.vector_db_mgr = vectordb_mgr_inst
http_ctrl = http_controller.HTTPController(ap)
await http_ctrl.initialize()
ap.http_ctrl = http_ctrl
@@ -95,8 +107,11 @@ class BuildAppStage(stage.BootingStage):
user_service_inst = user_service.UserService(ap)
ap.user_service = user_service_inst
model_service_inst = model_service.ModelsService(ap)
ap.model_service = model_service_inst
llm_model_service_inst = model_service.LLMModelsService(ap)
ap.llm_model_service = llm_model_service_inst
embedding_models_service_inst = model_service.EmbeddingModelsService(ap)
ap.embedding_models_service = embedding_models_service_inst
pipeline_service_inst = pipeline_service.PipelineService(ap)
ap.pipeline_service = pipeline_service_inst
@@ -104,5 +119,8 @@ class BuildAppStage(stage.BootingStage):
bot_service_inst = bot_service.BotService(ap)
ap.bot_service = bot_service_inst
knowledge_service_inst = knowledge_service.KnowledgeService(ap)
ap.knowledge_service = knowledge_service_inst
ctrl = controller.Controller(ap)
ap.ctrl = ctrl

View File

@@ -7,10 +7,10 @@ from .. import stage, app
@stage.stage_class('GenKeysStage')
class GenKeysStage(stage.BootingStage):
"""生成密钥阶段"""
"""Generate keys stage"""
async def run(self, ap: app.Application):
"""启动"""
"""Generate keys"""
if not ap.instance_config.data['system']['jwt']['secret']:
ap.instance_config.data['system']['jwt']['secret'] = secrets.token_hex(16)

View File

@@ -8,10 +8,10 @@ from ..bootutils import config
@stage.stage_class('LoadConfigStage')
class LoadConfigStage(stage.BootingStage):
"""加载配置文件阶段"""
"""Load config file stage"""
async def run(self, ap: app.Application):
"""启动"""
"""Load config file"""
# ======= deprecated =======
if os.path.exists('data/config/command.json'):

View File

@@ -11,10 +11,13 @@ importutil.import_modules_in_pkg(migrations)
@stage.stage_class('MigrationStage')
class MigrationStage(stage.BootingStage):
"""迁移阶段"""
"""Migration stage
These migrations are legacy, only performed in version 3.x
"""
async def run(self, ap: app.Application):
"""启动"""
"""Run migration"""
if any(
[
@@ -29,7 +32,7 @@ class MigrationStage(stage.BootingStage):
migrations = migration.preregistered_migrations
# 按照迁移号排序
# Sort by migration number
migrations.sort(key=lambda x: x.number)
for migration_cls in migrations:
@@ -37,4 +40,4 @@ class MigrationStage(stage.BootingStage):
if await migration_instance.need_migrate():
await migration_instance.run()
print(f'已执行迁移 {migration_instance.name}')
print(f'Migration {migration_instance.name} executed')

View File

@@ -8,7 +8,7 @@ from ..bootutils import log
class PersistenceHandler(logging.Handler, object):
"""
保存日志到数据库
Save logs to database
"""
ap: app.Application
@@ -19,9 +19,9 @@ class PersistenceHandler(logging.Handler, object):
def emit(self, record):
"""
emit函数为自定义handler类时必重写的函数这里可以根据需要对日志消息做一些处理比如发送日志到服务器
The emit function must be overridden when defining a custom handler class; here the log messages can be processed as needed, for example sent to a server
发出记录(Emit a record)
Emit a record
"""
try:
msg = self.format(record)
@@ -34,10 +34,10 @@ class PersistenceHandler(logging.Handler, object):
@stage.stage_class('SetupLoggerStage')
class SetupLoggerStage(stage.BootingStage):
"""设置日志器阶段"""
"""Setup logger stage"""
async def run(self, ap: app.Application):
"""启动"""
"""Setup logger"""
persistence_handler = PersistenceHandler('LoggerHandler', ap)
extra_handlers = []

View File

@@ -12,10 +12,10 @@ importutil.import_modules_in_pkg(notes)
@stage.stage_class('ShowNotesStage')
class ShowNotesStage(stage.BootingStage):
"""显示启动信息阶段"""
"""Show notes stage"""
async def run(self, ap: app.Application):
# 排序
# Sort
note.preregistered_notes.sort(key=lambda x: x.number)
for note_cls in note.preregistered_notes:

View File

@@ -9,13 +9,13 @@ from . import entities as core_entities
class TaskContext:
"""任务跟踪上下文"""
"""Task tracking context"""
current_action: str
"""当前正在执行的动作"""
"""Current action being executed"""
log: str
"""记录日志"""
"""Log"""
def __init__(self):
self.current_action = 'default'
@@ -58,40 +58,40 @@ placeholder_context: TaskContext | None = None
class TaskWrapper:
"""任务包装器"""
"""Task wrapper"""
_id_index: int = 0
"""任务ID索引"""
"""Task ID index"""
id: int
"""任务ID"""
"""Task ID"""
task_type: str = 'system' # 任务类型: system user
"""任务类型"""
task_type: str = 'system' # Task type: system or user
"""Task type"""
kind: str = 'system_task' # 由发起者确定任务种类,通常同质化的任务种类相同
"""任务种类"""
kind: str = 'system_task'  # Task kind, determined by the initiator; homogeneous tasks usually share the same kind
"""Task kind"""
name: str = ''
"""任务唯一名称"""
"""Task unique name"""
label: str = ''
"""任务显示名称"""
"""Task display name"""
task_context: TaskContext
"""任务上下文"""
"""Task context"""
task: asyncio.Task
"""任务"""
"""Task"""
task_stack: list = None
"""任务堆栈"""
"""Task stack"""
ap: app.Application
"""应用实例"""
"""Application instance"""
scopes: list[core_entities.LifecycleControlScope]
"""任务所属生命周期控制范围"""
"""Task scope"""
def __init__(
self,
@@ -165,13 +165,13 @@ class TaskWrapper:
class AsyncTaskManager:
"""保存app中的所有异步任务
包含系统级的和用户级(插件安装、更新等由用户直接发起的)的"""
"""Save all asynchronous tasks in the app
Includes system-level and user-level tasks (plugin installation, updates, etc. initiated directly by users)"""
ap: app.Application
tasks: list[TaskWrapper]
"""所有任务"""
"""All tasks"""
def __init__(self, ap: app.Application):
self.ap = ap

View File

@@ -4,7 +4,7 @@ from .base import Base
class Bot(Base):
"""机器人"""
"""Bot"""
__tablename__ = 'bots'

View File

@@ -12,7 +12,7 @@ initial_metadata = [
class Metadata(Base):
"""数据库元数据"""
"""Database metadata"""
__tablename__ = 'metadata'

View File

@@ -4,7 +4,7 @@ from .base import Base
class LLMModel(Base):
"""LLM 模型"""
"""LLM model"""
__tablename__ = 'llm_models'
@@ -23,3 +23,24 @@ class LLMModel(Base):
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)
class EmbeddingModel(Base):
"""Embedding 模型"""
__tablename__ = 'embedding_models'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
description = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester_config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
api_keys = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
extra_args = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
created_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.func.now())
updated_at = sqlalchemy.Column(
sqlalchemy.DateTime,
nullable=False,
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)

View File

@@ -4,7 +4,7 @@ from .base import Base
class LegacyPipeline(Base):
"""旧版流水线"""
"""Legacy pipeline"""
__tablename__ = 'legacy_pipelines'
@@ -20,13 +20,12 @@ class LegacyPipeline(Base):
)
for_version = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
is_default = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False, default=False)
stages = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
class PipelineRunRecord(Base):
"""流水线运行记录"""
"""Pipeline run record"""
__tablename__ = 'pipeline_run_records'
@@ -43,3 +42,4 @@ class PipelineRunRecord(Base):
started_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
finished_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
result = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
knowledge_base_uuid = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)

View File

@@ -4,7 +4,7 @@ from .base import Base
class PluginSetting(Base):
"""插件配置"""
"""Plugin setting"""
__tablename__ = 'plugin_settings'

View File

@@ -0,0 +1,50 @@
import sqlalchemy
from .base import Base
# Base = declarative_base()
# DATABASE_URL = os.getenv('DATABASE_URL', 'sqlite:///./rag_knowledge.db')
# print("Using database URL:", DATABASE_URL)
# engine = create_engine(DATABASE_URL, connect_args={'check_same_thread': False})
# SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# def create_db_and_tables():
# """Creates all database tables defined in the Base."""
# Base.metadata.create_all(bind=engine)
# print('Database tables created or already exist.')
class KnowledgeBase(Base):
__tablename__ = 'knowledge_bases'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String, index=True)
description = sqlalchemy.Column(sqlalchemy.Text)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
embedding_model_uuid = sqlalchemy.Column(sqlalchemy.String, default='')
top_k = sqlalchemy.Column(sqlalchemy.Integer, default=5)
class File(Base):
__tablename__ = 'knowledge_base_files'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
kb_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
file_name = sqlalchemy.Column(sqlalchemy.String)
extension = sqlalchemy.Column(sqlalchemy.String)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
status = sqlalchemy.Column(sqlalchemy.String, default='pending') # pending, processing, completed, failed
class Chunk(Base):
__tablename__ = 'knowledge_base_chunks'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
file_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
text = sqlalchemy.Column(sqlalchemy.Text)
# class Vector(Base):
# __tablename__ = 'knowledge_base_vectors'
# uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
# chunk_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)
# embedding = sqlalchemy.Column(sqlalchemy.LargeBinary)
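A sketch of how rows in the three tables relate (File.kb_id points at KnowledgeBase.uuid, Chunk.file_id at File.uuid), using the persistence manager's execute_async helper seen earlier in this changeset; all values are placeholders:

import uuid as uuid_lib
import sqlalchemy

async def demo_insert_kb_rows(ap):
    kb_id = str(uuid_lib.uuid4())
    file_id = str(uuid_lib.uuid4())
    await ap.persistence_mgr.execute_async(
        sqlalchemy.insert(KnowledgeBase).values(
            uuid=kb_id, name='docs', description='example', embedding_model_uuid='', top_k=5
        )
    )
    await ap.persistence_mgr.execute_async(
        sqlalchemy.insert(File).values(
            uuid=file_id, kb_id=kb_id, file_name='intro.md', extension='md', status='pending'
        )
    )
    await ap.persistence_mgr.execute_async(
        sqlalchemy.insert(Chunk).values(uuid=str(uuid_lib.uuid4()), file_id=file_id, text='first chunk of text')
    )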

View File

@@ -0,0 +1,13 @@
from sqlalchemy import Column, Integer, ForeignKey, LargeBinary
from sqlalchemy.orm import declarative_base, relationship
Base = declarative_base()
class Vector(Base):
__tablename__ = 'vectors'
id = Column(Integer, primary_key=True, index=True)
chunk_id = Column(Integer, ForeignKey('chunks.id'), unique=True)
embedding = Column(LargeBinary) # Store embeddings as binary
chunk = relationship('Chunk', back_populates='vector')

View File

View File

@@ -0,0 +1,13 @@
from __future__ import annotations
import pydantic
from typing import Any
class RetrieveResultEntry(pydantic.BaseModel):
id: str
metadata: dict[str, Any]
distance: float
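A small illustration of the entity above; `model_dump()` is what `retrieve_knowledge_base` returns to the HTTP layer earlier in this changeset, and the field values here are made up:

entry = RetrieveResultEntry(
    id='<chunk-uuid>',
    metadata={'file_id': '<file-uuid>', 'text': 'first chunk of text'},
    distance=0.12,
)
print(entry.model_dump())  # {'id': '<chunk-uuid>', 'metadata': {...}, 'distance': 0.12}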

View File

@@ -11,7 +11,7 @@ preregistered_managers: list[type[BaseDatabaseManager]] = []
def manager_class(name: str) -> None:
"""注册一个数据库管理类"""
"""Register a database manager class"""
def decorator(cls: type[BaseDatabaseManager]) -> type[BaseDatabaseManager]:
cls.name = name
@@ -22,7 +22,7 @@ def manager_class(name: str) -> None:
class BaseDatabaseManager(abc.ABC):
"""基础数据库管理类"""
"""Base database manager class"""
name: str

View File

@@ -7,7 +7,7 @@ from .. import database
@database.manager_class('sqlite')
class SQLiteDatabaseManager(database.BaseDatabaseManager):
"""SQLite 数据库管理类"""
"""SQLite database manager"""
async def initialize(self) -> None:
sqlite_path = 'data/langbot.db'

View File

@@ -22,12 +22,12 @@ importutil.import_modules_in_pkg(persistence)
class PersistenceManager:
"""持久化模块管理器"""
"""Persistence module manager"""
ap: app.Application
db: database.BaseDatabaseManager
"""数据库管理器"""
"""Database manager"""
meta: sqlalchemy.MetaData
@@ -79,7 +79,7 @@ class PersistenceManager:
'stages': pipeline_service.default_stage_order,
'is_default': True,
'name': 'ChatPipeline',
'description': '默认提供的流水线,您配置的机器人、第一个模型将自动绑定到此流水线',
'description': 'Default pipeline, new bots will be bound to this pipeline | 默认提供的流水线,您配置的机器人将自动绑定到此流水线',
'config': pipeline_config,
}

View File

@@ -10,7 +10,7 @@ preregistered_db_migrations: list[typing.Type[DBMigration]] = []
def migration_class(number: int):
"""迁移类装饰器"""
"""Migration class decorator"""
def wrapper(cls: typing.Type[DBMigration]) -> typing.Type[DBMigration]:
cls.number = number
@@ -21,20 +21,20 @@ def migration_class(number: int):
class DBMigration(abc.ABC):
"""数据库迁移"""
"""Database migration"""
number: int
"""迁移号"""
"""Migration number"""
def __init__(self, ap: app.Application):
self.ap = ap
@abc.abstractmethod
async def upgrade(self):
"""升级"""
"""Upgrade"""
pass
@abc.abstractmethod
async def downgrade(self):
"""降级"""
"""Downgrade"""
pass
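A minimal sketch of a database migration using the decorator above; the number is illustrative and the bodies only indicate where persistence calls would go:

@migration_class(99)  # illustrative migration number
class ExampleDBMigration(DBMigration):
    async def upgrade(self):
        # Apply schema or data changes here, typically via
        # self.ap.persistence_mgr.execute_async(...).
        pass

    async def downgrade(self):
        pass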

View File

@@ -15,21 +15,21 @@ from ...entity.persistence import (
@migration.migration_class(1)
class DBMigrateV3Config(migration.DBMigration):
"""从 v3 的配置迁移到 v4 的数据库"""
"""Migrate v3 config to v4 database"""
async def upgrade(self):
"""升级"""
"""Upgrade"""
"""
将 data/config 下的所有配置文件进行迁移。
迁移后,之前的配置文件都保存到 data/legacy/config 下。
迁移后data/metadata/ 下的所有配置文件都保存到 data/legacy/metadata 下。
Migrate all config files under data/config.
After migration, all previous config files are saved under data/legacy/config.
After migration, all config files under data/metadata/ are saved under data/legacy/metadata.
"""
if self.ap.provider_cfg is None:
return
# ======= 迁移模型 =======
# 只迁移当前选中的模型
# ======= Migrate model =======
# Only migrate the currently selected model
model_name = self.ap.provider_cfg.data.get('model', 'gpt-4o')
model_requester = 'openai-chat-completions'
@@ -91,8 +91,8 @@ class DBMigrateV3Config(migration.DBMigration):
sqlalchemy.insert(persistence_model.LLMModel).values(**llm_model_data)
)
# ======= 迁移流水线配置 =======
# 修改到默认流水线
# ======= Migrate pipeline config =======
# Modify to default pipeline
default_pipeline = [
self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
for pipeline in (
@@ -184,8 +184,8 @@ class DBMigrateV3Config(migration.DBMigration):
.where(persistence_pipeline.LegacyPipeline.uuid == default_pipeline['uuid'])
)
# ======= 迁移机器人 =======
# 只迁移启用的机器人
# ======= Migrate bot =======
# Only migrate enabled bots
for adapter in self.ap.platform_cfg.data.get('platform-adapters', []):
if not adapter.get('enable'):
continue
@@ -207,7 +207,7 @@ class DBMigrateV3Config(migration.DBMigration):
await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_bot.Bot).values(**bot_data))
# ======= 迁移系统设置 =======
# ======= Migrate system settings =======
self.ap.instance_config.data['admins'] = self.ap.system_cfg.data['admin-sessions']
self.ap.instance_config.data['api']['port'] = self.ap.system_cfg.data['http-api']['port']
self.ap.instance_config.data['command'] = {
@@ -223,7 +223,7 @@ class DBMigrateV3Config(migration.DBMigration):
await self.ap.instance_config.dump_config()
# ======= move files =======
# 迁移 data/config 下的所有配置文件
# Migrate all config files under data/config
all_legacy_dir_name = [
'config',
# 'metadata',
@@ -246,4 +246,4 @@ class DBMigrateV3Config(migration.DBMigration):
move_legacy_files(dir_name)
async def downgrade(self):
"""降级"""
"""Downgrade"""

View File

@@ -7,10 +7,10 @@ from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(2)
class DBMigrateCombineQuoteMsgConfig(migration.DBMigration):
"""引用消息合并配置"""
"""Combine quote message config"""
async def upgrade(self):
"""升级"""
"""Upgrade"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
@@ -37,5 +37,5 @@ class DBMigrateCombineQuoteMsgConfig(migration.DBMigration):
)
async def downgrade(self):
"""降级"""
"""Downgrade"""
pass

View File

@@ -7,10 +7,10 @@ from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(3)
class DBMigrateN8nConfig(migration.DBMigration):
"""N8n配置"""
"""N8n config"""
async def upgrade(self):
"""升级"""
"""Upgrade"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
@@ -45,5 +45,5 @@ class DBMigrateN8nConfig(migration.DBMigration):
)
async def downgrade(self):
"""降级"""
"""Downgrade"""
pass

View File

@@ -0,0 +1,38 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(4)
class DBMigrateRAGKBUUID(migration.DBMigration):
"""RAG知识库UUID"""
async def upgrade(self):
"""升级"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'knowledge-base' not in config['ai']['local-agent']:
config['ai']['local-agent']['knowledge-base'] = ''
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""降级"""
pass

View File

@@ -6,9 +6,9 @@ from ...core import entities as core_entities
@stage.stage_class('BanSessionCheckStage')
class BanSessionCheckStage(stage.PipelineStage):
"""访问控制处理阶段
"""Access control processing stage
仅检查query中群号或个人号是否在访问控制列表中。
Only checks whether the group id or personal account id in the query is in the access control list.
"""
async def initialize(self, pipeline_config: dict):
@@ -41,5 +41,7 @@ class BanSessionCheckStage(stage.PipelineStage):
return entities.StageProcessResult(
result_type=entities.ResultType.CONTINUE if ctn else entities.ResultType.INTERRUPT,
new_query=query,
console_notice=f'根据访问控制忽略消息: {query.launcher_type.value}_{query.launcher_id}' if not ctn else '',
console_notice=f'Ignore message according to access control: {query.launcher_type.value}_{query.launcher_id}'
if not ctn
else '',
)

View File

@@ -13,13 +13,13 @@ preregistered_filters: list[typing.Type[ContentFilter]] = []
def filter_class(
name: str,
) -> typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]:
"""内容过滤器类装饰器
"""Content filter class decorator
Args:
name (str): 过滤器名称
name (str): Filter name
Returns:
typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]: 装饰器
typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]: Decorator
"""
def decorator(cls: typing.Type[ContentFilter]) -> typing.Type[ContentFilter]:
@@ -35,7 +35,7 @@ def filter_class(
class ContentFilter(metaclass=abc.ABCMeta):
"""内容过滤器抽象类"""
"""Content filter abstract class"""
name: str
@@ -46,31 +46,31 @@ class ContentFilter(metaclass=abc.ABCMeta):
@property
def enable_stages(self):
"""启用的阶段
"""Enabled stages
默认为消息请求AI前后的两个阶段。
Default is the two stages before and after the message request to AI.
entity.EnableStage.PRE: 消息请求AI前此时需要检查的内容是用户的输入消息。
entity.EnableStage.POST: 消息请求AI后此时需要检查的内容是AI的回复消息。
entity.EnableStage.PRE: Before message request to AI, the content to check is the user's input message.
entity.EnableStage.POST: After message request to AI, the content to check is the AI's reply message.
"""
return [entities.EnableStage.PRE, entities.EnableStage.POST]
async def initialize(self):
"""初始化过滤器"""
"""Initialize filter"""
pass
@abc.abstractmethod
async def process(self, query: core_entities.Query, message: str = None, image_url=None) -> entities.FilterResult:
"""处理消息
"""Process message
分为前后阶段,具体取决于 enable_stages 的值。
对于内容过滤器来说,不需要考虑消息所处的阶段,只需要检查消息内容即可。
It is divided into two stages, depending on the value of enable_stages.
For content filters, the stage of the message does not matter; only the message content needs to be checked.
Args:
message (str): 需要检查的内容
image_url (str): 要检查的图片的 URL
message (str): Content to check
image_url (str): URL of the image to check
Returns:
entities.FilterResult: 过滤结果,具体内容请查看 entities.FilterResult 类的文档
entities.FilterResult: Filter result, please refer to the documentation of entities.FilterResult class
"""
raise NotImplementedError
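A hedged sketch of a concrete filter following the decorator and process() contract above. It assumes the entities module exposes a pass-through level (e.g. ResultLevel.PASS) alongside the ResultLevel.BLOCK used later in this changeset; the keyword and filter name are illustrative:

@filter_model.filter_class('example-keyword-filter')  # illustrative name
class ExampleKeywordFilter(filter_model.ContentFilter):
    async def process(self, query, message: str = None, image_url=None) -> entities.FilterResult:
        if message and 'forbidden' in message:
            return entities.FilterResult(
                level=entities.ResultLevel.BLOCK,
                replacement='',
                user_notice='',
                console_notice='Blocked by example keyword filter',
            )
        return entities.FilterResult(
            level=entities.ResultLevel.PASS,  # assumed pass-through level
            replacement=message,
            user_notice='',
            console_notice='',
        )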

View File

@@ -8,7 +8,7 @@ from ....core import entities as core_entities
@filter_model.filter_class('ban-word-filter')
class BanWordFilter(filter_model.ContentFilter):
"""根据内容过滤"""
"""Filter content"""
async def initialize(self):
pass

View File

@@ -8,7 +8,7 @@ from ....core import entities as core_entities
@filter_model.filter_class('content-ignore')
class ContentIgnore(filter_model.ContentFilter):
"""根据内容忽略消息"""
"""Ignore message according to content"""
@property
def enable_stages(self):
@@ -24,7 +24,7 @@ class ContentIgnore(filter_model.ContentFilter):
level=entities.ResultLevel.BLOCK,
replacement='',
user_notice='',
console_notice='根据 ignore_rules 中的 prefix 规则,忽略消息',
console_notice='Ignore message according to prefix rule in ignore_rules',
)
if 'regexp' in query.pipeline_config['trigger']['ignore-rules']:
@@ -34,7 +34,7 @@ class ContentIgnore(filter_model.ContentFilter):
level=entities.ResultLevel.BLOCK,
replacement='',
user_notice='',
console_notice='根据 ignore_rules 中的 regexp 规则,忽略消息',
console_notice='Ignore message according to regexp rule in ignore_rules',
)
return entities.FilterResult(

View File

@@ -16,9 +16,9 @@ importutil.import_modules_in_pkg(strategies)
@stage.stage_class('LongTextProcessStage')
class LongTextProcessStage(stage.PipelineStage):
"""长消息处理阶段
"""Long message processing stage
改写:
Rewrite:
- resp_message_chain
"""
@@ -36,22 +36,22 @@ class LongTextProcessStage(stage.PipelineStage):
use_font = 'C:/Windows/Fonts/msyh.ttc'
if not os.path.exists(use_font):
self.ap.logger.warn(
'未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在配置文件中调整相关设置。'
'Font file not found, and Windows system font cannot be used, switch to forward message component to send long messages, you can adjust the related settings in the configuration file.'
)
config['blob_message_strategy'] = 'forward'
else:
self.ap.logger.info('使用Windows自带字体:' + use_font)
self.ap.logger.info('Using Windows system font: ' + use_font)
config['font-path'] = use_font
else:
self.ap.logger.warn(
'未找到字体文件,且无法使用系统自带字体,更换为转发消息组件以发送长消息,您可以在配置文件中调整相关设置。'
'Font file not found, and system font cannot be used, switch to forward message component to send long messages, you can adjust the related settings in the configuration file.'
)
pipeline_config['output']['long-text-processing']['strategy'] = 'forward'
except Exception:
traceback.print_exc()
self.ap.logger.error(
'加载字体文件失败({}),更换为转发消息组件以发送长消息,您可以在配置文件中调整相关设置。'.format(
'Failed to load font file ({}), switch to forward message component to send long messages, you can adjust the related settings in the configuration file.'.format(
use_font
)
)
@@ -63,12 +63,12 @@ class LongTextProcessStage(stage.PipelineStage):
self.strategy_impl = strategy_cls(self.ap)
break
else:
raise ValueError(f'未找到名为 {config["strategy"]} 的长消息处理策略')
raise ValueError(f'Long message processing strategy not found: {config["strategy"]}')
await self.strategy_impl.initialize()
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
# 检查是否包含非 Plain 组件
# Check if it contains non-Plain components
contains_non_plain = False
for msg in query.resp_message_chain[-1]:
@@ -77,7 +77,7 @@ class LongTextProcessStage(stage.PipelineStage):
break
if contains_non_plain:
self.ap.logger.debug('消息中包含非 Plain 组件,跳过长消息处理。')
self.ap.logger.debug('Message contains non-Plain components, skip long message processing.')
elif (
len(str(query.resp_message_chain[-1]))
> query.pipeline_config['output']['long-text-processing']['threshold']

View File

@@ -15,17 +15,17 @@ Forward = platform_message.Forward
class ForwardComponentStrategy(strategy_model.LongTextStrategy):
async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
display = ForwardMessageDiaplay(
title='群聊的聊天记录',
brief='[聊天记录]',
source='聊天记录',
preview=['QQ用户: ' + message],
summary='查看1条转发消息',
title='Group chat history',
brief='[Chat history]',
source='Chat history',
preview=['User: ' + message],
summary='View 1 forwarded message',
)
node_list = [
platform_message.ForwardMessageNode(
sender_id=query.adapter.bot_account_id,
sender_name='QQ用户',
sender_name='User',
message_chain=platform_message.MessageChain([message]),
)
]

View File

@@ -14,13 +14,13 @@ preregistered_strategies: list[typing.Type[LongTextStrategy]] = []
def strategy_class(
name: str,
) -> typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]:
"""长文本处理策略类装饰器
"""Long text processing strategy class decorator
Args:
name (str): 策略名称
name (str): Strategy name
Returns:
typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]: 装饰器
typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]: Decorator
"""
def decorator(cls: typing.Type[LongTextStrategy]) -> typing.Type[LongTextStrategy]:
@@ -36,7 +36,7 @@ def strategy_class(
class LongTextStrategy(metaclass=abc.ABCMeta):
"""长文本处理策略抽象类"""
"""Long text processing strategy abstract class"""
name: str
@@ -50,15 +50,15 @@ class LongTextStrategy(metaclass=abc.ABCMeta):
@abc.abstractmethod
async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
"""处理长文本
"""Process long text
在 platform.json 中配置 long-text-process 字段,只要 文本长度超过了 threshold 就会调用此方法
If the text length exceeds the threshold, this method will be called.
Args:
message (str): 消息
query (core_entities.Query): 此次请求的上下文对象
message (str): Message
query (core_entities.Query): Query object
Returns:
list[platform_message.MessageComponent]: 转换后的 平台 消息组件列表
list[platform_message.MessageComponent]: Converted platform message components
"""
return []
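A minimal strategy sketch following the contract above: it simply splits the long text into fixed-size Plain components; the registration name and chunk size are illustrative:

@strategy_class('plain-split')  # illustrative name
class PlainSplitStrategy(LongTextStrategy):
    async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
        size = 1000  # illustrative chunk size
        return [platform_message.Plain(text=message[i : i + size]) for i in range(0, len(message), size)]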

View File

@@ -12,9 +12,9 @@ importutil.import_modules_in_pkg(truncators)
@stage.stage_class('ConversationMessageTruncator')
class ConversationMessageTruncator(stage.PipelineStage):
"""会话消息截断器
"""Conversation message truncator
用于截断会话消息链,以适应平台消息长度限制。
Used to truncate the conversation message chain to adapt to the LLM message length limit.
"""
trun: truncator.Truncator
@@ -27,10 +27,10 @@ class ConversationMessageTruncator(stage.PipelineStage):
self.trun = trun(self.ap)
break
else:
raise ValueError(f'未知的截断器: {use_method}')
raise ValueError(f'Unknown truncator: {use_method}')
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
"""处理"""
"""Process"""
query = await self.trun.truncate(query)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)

View File

@@ -6,17 +6,17 @@ from ....core import entities as core_entities
@truncator.truncator_class('round')
class RoundTruncator(truncator.Truncator):
"""前文回合数阶段器"""
"""Truncate the conversation message chain to adapt to the LLM message length limit."""
async def truncate(self, query: core_entities.Query) -> core_entities.Query:
"""截断"""
"""Truncate"""
max_round = query.pipeline_config['ai']['local-agent']['max-round']
temp_messages = []
current_round = 0
# 从后往前遍历
# Traverse from back to front
for msg in query.messages[::-1]:
if current_round < max_round:
temp_messages.append(msg)

View File

@@ -144,23 +144,27 @@ class RuntimePipeline:
result = await result
if isinstance(result, pipeline_entities.StageProcessResult): # 直接返回结果
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {result}')
self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {result.result_type}'
)
await self._check_output(query, result)
if result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}')
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break
elif result.result_type == pipeline_entities.ResultType.CONTINUE:
query = result.new_query
elif isinstance(result, typing.AsyncGenerator): # 生成器
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} gen')
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query.query_id} gen')
async for sub_result in result:
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {sub_result}')
self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {sub_result.result_type}'
)
await self._check_output(query, sub_result)
if sub_result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}')
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break
elif sub_result.result_type == pipeline_entities.ResultType.CONTINUE:
query = sub_result.new_query
@@ -192,7 +196,7 @@ class RuntimePipeline:
if event_ctx.is_prevented_default():
return
self.ap.logger.debug(f'Processing query {query}')
self.ap.logger.debug(f'Processing query {query.query_id}')
await self._execute_from_stage(0, query)
except Exception as e:
@@ -200,7 +204,7 @@ class RuntimePipeline:
self.ap.logger.error(f'处理请求时出错 query_id={query.query_id} stage={inst_name} : {e}')
self.ap.logger.error(f'Traceback: {traceback.format_exc()}')
finally:
self.ap.logger.debug(f'Query {query} processed')
self.ap.logger.debug(f'Query {query.query_id} processed')
class PipelineManager:

View File

@@ -11,11 +11,11 @@ from ...platform.types import message as platform_message
@stage.stage_class('PreProcessor')
class PreProcessor(stage.PipelineStage):
"""请求预处理阶段
"""Request pre-processing stage
签出会话、prompt、上文、模型、内容函数。
Check out session, prompt, context, model, and content functions.
改写:
Rewrite:
- session
- prompt
- messages
@@ -29,12 +29,12 @@ class PreProcessor(stage.PipelineStage):
query: core_entities.Query,
stage_inst_name: str,
) -> entities.StageProcessResult:
"""处理"""
"""Process"""
selected_runner = query.pipeline_config['ai']['runner']['runner']
session = await self.ap.sess_mgr.get_session(query)
# local-agent 时,llm_model None
# When not local-agent, llm_model is None
llm_model = (
await self.ap.model_mgr.get_model_by_uuid(query.pipeline_config['ai']['local-agent']['model'])
if selected_runner == 'local-agent'
@@ -51,7 +51,7 @@ class PreProcessor(stage.PipelineStage):
conversation.use_llm_model = llm_model
# 设置query
# Set query
query.session = session
query.prompt = conversation.prompt.copy()
query.messages = conversation.messages.copy()
@@ -80,14 +80,15 @@ class PreProcessor(stage.PipelineStage):
if me.type == 'image_url':
msg.content.remove(me)
content_list = []
content_list: list[llm_entities.ContentElement] = []
plain_text = ''
qoute_msg = query.pipeline_config['trigger'].get('misc', '').get('combine-quote-message')
# tidy the content_list
# combine all text content into one, and put it in the first position
for me in query.message_chain:
if isinstance(me, platform_message.Plain):
content_list.append(llm_entities.ContentElement.from_text(me.text))
plain_text += me.text
elif isinstance(me, platform_message.Image):
if selected_runner != 'local-agent' or query.use_llm_model.model_entity.abilities.__contains__(
@@ -106,10 +107,12 @@ class PreProcessor(stage.PipelineStage):
if msg.base64 is not None:
content_list.append(llm_entities.ContentElement.from_image_base64(msg.base64))
content_list.insert(0, llm_entities.ContentElement.from_text(plain_text))
query.variables['user_message_text'] = plain_text
query.user_message = llm_entities.Message(role='user', content=content_list)
# =========== 触发事件 PromptPreProcessing
# =========== Trigger event PromptPreProcessing
event_ctx = await self.ap.plugin_mgr.emit_event(
event=events.PromptPreProcessing(

View File

@@ -25,7 +25,7 @@ class MessageHandler(metaclass=abc.ABCMeta):
def cut_str(self, s: str) -> str:
"""
取字符串第一行最多20个字符若有多行或超过20个字符则加省略号
Take the first line of the string, up to 20 characters; if there are multiple lines or more than 20 characters, append an ellipsis
"""
s0 = s.split('\n')[0]
if len(s0) > 20 or '\n' in s:

View File

@@ -22,11 +22,11 @@ class ChatMessageHandler(handler.MessageHandler):
self,
query: core_entities.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""处理"""
# API
# 生成器
"""Process"""
# Call API
# generator
# 触发插件事件
# Trigger plugin event
event_class = (
events.PersonNormalMessageReceived
if query.launcher_type == core_entities.LauncherTypes.PERSON
@@ -54,7 +54,7 @@ class ChatMessageHandler(handler.MessageHandler):
yield entities.StageProcessResult(result_type=entities.ResultType.INTERRUPT, new_query=query)
else:
if event_ctx.event.alter is not None:
# if isinstance(event_ctx.event, str): # 现在暂时不考虑多模态alter
# if isinstance(event_ctx.event, str): # Currently not considering multi-modal alter
query.user_message.content = event_ctx.event.alter
text_length = 0
@@ -65,12 +65,12 @@ class ChatMessageHandler(handler.MessageHandler):
runner = r(self.ap, query.pipeline_config)
break
else:
raise ValueError(f'未找到请求运行器: {query.pipeline_config["ai"]["runner"]["runner"]}')
raise ValueError(f'Request runner not found: {query.pipeline_config["ai"]["runner"]["runner"]}')
async for result in runner.run(query):
query.resp_messages.append(result)
self.ap.logger.info(f'对话({query.query_id})响应: {self.cut_str(result.readable_str())}')
self.ap.logger.info(f'Response({query.query_id}): {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
@@ -80,7 +80,7 @@ class ChatMessageHandler(handler.MessageHandler):
query.session.using_conversation.messages.append(query.user_message)
query.session.using_conversation.messages.extend(query.resp_messages)
except Exception as e:
self.ap.logger.error(f'对话({query.query_id})请求失败: {type(e).__name__} {str(e)}')
self.ap.logger.error(f'Request failed({query.query_id}): {type(e).__name__} {str(e)}')
hide_exception_info = query.pipeline_config['output']['misc']['hide-exception']

View File

@@ -15,7 +15,7 @@ class CommandHandler(handler.MessageHandler):
self,
query: core_entities.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""处理"""
"""Process"""
command_text = str(query.message_chain).strip()[1:]
@@ -70,7 +70,7 @@ class CommandHandler(handler.MessageHandler):
)
)
self.ap.logger.info(f'命令({query.query_id})报错: {self.cut_str(str(ret.error))}')
self.ap.logger.info(f'Command({query.query_id}) error: {self.cut_str(str(ret.error))}')
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
elif ret.text is not None or ret.image_url is not None:
@@ -89,7 +89,7 @@ class CommandHandler(handler.MessageHandler):
)
)
self.ap.logger.info(f'命令返回: {self.cut_str(str(content[0]))}')
self.ap.logger.info(f'Command returned: {self.cut_str(str(content[0]))}')
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:

View File

@@ -33,11 +33,11 @@ class Processor(stage.PipelineStage):
query: core_entities.Query,
stage_inst_name: str,
) -> entities.StageProcessResult:
"""处理"""
"""Process"""
message_text = str(query.message_chain).strip()
self.ap.logger.info(
f'处理 {query.launcher_type.value}_{query.launcher_id} 的请求({query.query_id}): {message_text}'
f'Processing request from {query.launcher_type.value}_{query.launcher_id} ({query.query_id}): {message_text}'
)
async def generator():

View File

@@ -119,7 +119,7 @@ class EventLogger:
async def _truncate_logs(self):
if len(self.logs) > MAX_LOG_COUNT:
for i in range(DELETE_COUNT_PER_TIME):
for image_key in self.logs[i].images:
for image_key in self.logs[i].images: # type: ignore
await self.ap.storage_mgr.storage_provider.delete(image_key)
self.logs = self.logs[DELETE_COUNT_PER_TIME:]

View File

@@ -16,7 +16,6 @@ from ..logger import EventLogger
class AiocqhttpMessageConverter(adapter.MessageConverter):
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain,
@@ -62,87 +61,169 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
for node in msg.node_list:
msg_list.extend((await AiocqhttpMessageConverter.yiri2target(node.message_chain))[0])
elif isinstance(msg, platform_message.File):
msg_list.append({"type":"file", "data":{'file': msg.url, "name": msg.name}})
msg_list.append({'type': 'file', 'data': {'file': msg.url, 'name': msg.name}})
elif isinstance(msg, platform_message.Face):
if msg.face_type=='face':
if msg.face_type == 'face':
msg_list.append(aiocqhttp.MessageSegment.face(msg.face_id))
elif msg.face_type=='rps':
elif msg.face_type == 'rps':
msg_list.append(aiocqhttp.MessageSegment.rps())
elif msg.face_type=='dice':
elif msg.face_type == 'dice':
msg_list.append(aiocqhttp.MessageSegment.dice())
else:
msg_list.append(aiocqhttp.MessageSegment.text(str(msg)))
return msg_list, msg_id, msg_time
@staticmethod
async def target2yiri(message: str, message_id: int = -1,bot=None):
print(message)
async def target2yiri(message: str, message_id: int = -1, bot: aiocqhttp.CQHttp = None):
message = aiocqhttp.Message(message)
def get_face_name(face_id):
face_code_dict = {
"2": '好色',
"4": "得意", "5": "流泪", "8": "", "9": "大哭", "10": "尴尬", "12": "调皮", "14": "微笑", "16": "",
"21": "可爱",
"23": "傲慢", "24": "饥饿", "25": "", "26": "惊恐", "27": "流汗", "28": "憨笑", "29": "悠闲",
"30": "奋斗",
"32": "疑问", "33": "", "34": "", "38": "敲打", "39": "再见", "41": "发抖", "42": "爱情",
"43": "跳跳",
"49": "拥抱", "53": "蛋糕", "60": "咖啡", "63": "玫瑰", "66": "爱心", "74": "太阳", "75": "月亮",
"76": "",
"78": "握手", "79": "胜利", "85": "飞吻", "89": "西瓜", "96": "冷汗", "97": "擦汗", "98": "抠鼻",
"99": "鼓掌",
"100": "糗大了", "101": "坏笑", "102": "左哼哼", "103": "右哼哼", "104": "哈欠", "106": "委屈",
"109": "左亲亲",
"111": "可怜", "116": "示爱", "118": "抱拳", "120": "拳头", "122": "爱你", "123": "NO", "124": "OK",
"125": "转圈",
"129": "挥手", "144": "喝彩", "147": "棒棒糖", "171": "", "173": "泪奔", "174": "无奈", "175": "卖萌",
"176": "小纠结", "179": "doge", "180": "惊喜", "181": "骚扰", "182": "笑哭", "183": "我最美",
"201": "点赞",
"203": "托脸", "212": "托腮", "214": "啵啵", "219": "蹭一蹭", "222": "抱抱", "227": "拍手",
"232": "佛系",
"240": "喷脸", "243": "甩头", "246": "加油抱抱", "262": "脑阔疼", "264": "捂脸", "265": "辣眼睛",
"266": "哦哟",
"267": "头秃", "268": "问号脸", "269": "暗中观察", "270": "emm", "271": "吃瓜", "272": "呵呵哒",
"273": "我酸了",
"277": "汪汪", "278": "", "281": "无眼笑", "282": "敬礼", "284": "面无表情", "285": "摸鱼",
"287": "",
"289": "睁眼", "290": "敲开心", "293": "摸锦鲤", "294": "期待", "297": "拜谢", "298": "元宝",
"299": "牛啊",
"305": "右亲亲", "306": "牛气冲天", "307": "喵喵", "314": "仔细分析", "315": "加油", "318": "崇拜",
"319": "比心",
"320": "庆祝", "322": "拒绝", "324": "吃糖", "326": "生气"
'2': '好色',
'4': '得意',
'5': '流泪',
'8': '',
'9': '大哭',
'10': '尴尬',
'12': '调皮',
'14': '微笑',
'16': '',
'21': '可爱',
'23': '傲慢',
'24': '饥饿',
'25': '',
'26': '惊恐',
'27': '流汗',
'28': '憨笑',
'29': '悠闲',
'30': '奋斗',
'32': '疑问',
'33': '',
'34': '',
'38': '敲打',
'39': '再见',
'41': '发抖',
'42': '爱情',
'43': '跳跳',
'49': '拥抱',
'53': '蛋糕',
'60': '咖啡',
'63': '玫瑰',
'66': '爱心',
'74': '太阳',
'75': '月亮',
'76': '',
'78': '握手',
'79': '胜利',
'85': '飞吻',
'89': '西瓜',
'96': '冷汗',
'97': '擦汗',
'98': '抠鼻',
'99': '鼓掌',
'100': '糗大了',
'101': '坏笑',
'102': '左哼哼',
'103': '右哼哼',
'104': '哈欠',
'106': '委屈',
'109': '左亲亲',
'111': '可怜',
'116': '示爱',
'118': '抱拳',
'120': '拳头',
'122': '爱你',
'123': 'NO',
'124': 'OK',
'125': '转圈',
'129': '挥手',
'144': '喝彩',
'147': '棒棒糖',
'171': '',
'173': '泪奔',
'174': '无奈',
'175': '卖萌',
'176': '小纠结',
'179': 'doge',
'180': '惊喜',
'181': '骚扰',
'182': '笑哭',
'183': '我最美',
'201': '点赞',
'203': '托脸',
'212': '托腮',
'214': '啵啵',
'219': '蹭一蹭',
'222': '抱抱',
'227': '拍手',
'232': '佛系',
'240': '喷脸',
'243': '甩头',
'246': '加油抱抱',
'262': '脑阔疼',
'264': '捂脸',
'265': '辣眼睛',
'266': '哦哟',
'267': '头秃',
'268': '问号脸',
'269': '暗中观察',
'270': 'emm',
'271': '吃瓜',
'272': '呵呵哒',
'273': '我酸了',
'277': '汪汪',
'278': '',
'281': '无眼笑',
'282': '敬礼',
'284': '面无表情',
'285': '摸鱼',
'287': '',
'289': '睁眼',
'290': '敲开心',
'293': '摸锦鲤',
'294': '期待',
'297': '拜谢',
'298': '元宝',
'299': '牛啊',
'305': '右亲亲',
'306': '牛气冲天',
'307': '喵喵',
'314': '仔细分析',
'315': '加油',
'318': '崇拜',
'319': '比心',
'320': '庆祝',
'322': '拒绝',
'324': '吃糖',
'326': '生气',
}
return face_code_dict.get(face_id,'')
return face_code_dict.get(face_id, '')
async def process_message_data(msg_data, reply_list):
if msg_data["type"] == "image":
image_base64, image_format = await image.qq_image_url_to_base64(msg_data["data"]['url'])
reply_list.append(
platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}'))
if msg_data['type'] == 'image':
image_base64, image_format = await image.qq_image_url_to_base64(msg_data['data']['url'])
reply_list.append(platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}'))
elif msg_data["type"] == "text":
reply_list.append(platform_message.Plain(text=msg_data["data"]["text"]))
elif msg_data['type'] == 'text':
reply_list.append(platform_message.Plain(text=msg_data['data']['text']))
elif msg_data["type"] == "forward": # 这里来应该传入转发消息组暂时传入qoute
for forward_msg_datas in msg_data["data"]["content"]:
for forward_msg_data in forward_msg_datas["message"]:
elif msg_data['type'] == 'forward': # 这里来应该传入转发消息组暂时传入qoute
for forward_msg_datas in msg_data['data']['content']:
for forward_msg_data in forward_msg_datas['message']:
await process_message_data(forward_msg_data, reply_list)
elif msg_data["type"] == "at":
if msg_data["data"]['qq'] == 'all':
elif msg_data['type'] == 'at':
if msg_data['data']['qq'] == 'all':
reply_list.append(platform_message.AtAll())
else:
reply_list.append(
platform_message.At(
target=msg_data["data"]['qq'],
target=msg_data['data']['qq'],
)
)
yiri_msg_list = []
yiri_msg_list.append(platform_message.Source(id=message_id, time=datetime.datetime.now()))
@@ -161,10 +242,10 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
elif msg.type == 'text':
yiri_msg_list.append(platform_message.Plain(text=msg.data['text']))
elif msg.type == 'image':
emoji_id = msg.data.get("emoji_package_id", None)
emoji_id = msg.data.get('emoji_package_id', None)
if emoji_id:
face_id = emoji_id
face_name = msg.data.get("summary", '')
face_name = msg.data.get('summary', '')
image_msg = platform_message.Face(face_id=face_id, face_name=face_name)
else:
image_base64, image_format = await image.qq_image_url_to_base64(msg.data['url'])
@@ -178,65 +259,53 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
# await process_message_data(msg_data, yiri_msg_list)
pass
elif msg.type == 'reply': # 此处处理引用消息传入Qoute
msg_datas = await bot.get_msg(message_id=msg.data["id"])
msg_datas = await bot.get_msg(message_id=msg.data['id'])
for msg_data in msg_datas["message"]:
for msg_data in msg_datas['message']:
await process_message_data(msg_data, reply_list)
reply_msg = platform_message.Quote(message_id=msg.data["id"],sender_id=msg_datas["user_id"],origin=reply_list)
reply_msg = platform_message.Quote(
message_id=msg.data['id'], sender_id=msg_datas['user_id'], origin=reply_list
)
yiri_msg_list.append(reply_msg)
elif msg.type == 'file':
# file_name = msg.data['file']
file_id = msg.data['file_id']
file_data = await bot.get_file(file_id=file_id)
file_name = file_data.get('file_name')
file_path = file_data.get('file')
file_url = file_data.get('file_url')
file_size = file_data.get('file_size')
yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size))
# 这里下载所有文件会导致下载文件过多,暂时不下载
# elif msg.type == 'file':
# # file_name = msg.data['file']
# file_id = msg.data['file_id']
# file_data = await bot.get_file(file_id=file_id)
# file_name = file_data.get('file_name')
# file_path = file_data.get('file')
# file_url = file_data.get('file_url')
# file_size = file_data.get('file_size')
# yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size))
elif msg.type == 'face':
face_id = msg.data['id']
face_name = msg.data['raw']['faceText']
if not face_name:
face_name = get_face_name(face_id)
yiri_msg_list.append(platform_message.Face(face_id=int(face_id),face_name=face_name.replace('/','')))
yiri_msg_list.append(platform_message.Face(face_id=int(face_id), face_name=face_name.replace('/', '')))
elif msg.type == 'rps':
face_id = msg.data['result']
yiri_msg_list.append(platform_message.Face(face_type="rps",face_id=int(face_id),face_name='猜拳'))
yiri_msg_list.append(platform_message.Face(face_type='rps', face_id=int(face_id), face_name='猜拳'))
elif msg.type == 'dice':
face_id = msg.data['result']
yiri_msg_list.append(platform_message.Face(face_type='dice',face_id=int(face_id),face_name='骰子'))
yiri_msg_list.append(platform_message.Face(face_type='dice', face_id=int(face_id), face_name='骰子'))
chain = platform_message.MessageChain(yiri_msg_list)
return chain
class AiocqhttpEventConverter(adapter.EventConverter):
@staticmethod
async def yiri2target(event: platform_events.MessageEvent, bot_account_id: int):
return event.source_platform_object
@staticmethod
async def target2yiri(event: aiocqhttp.Event,bot=None):
yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id,bot)
async def target2yiri(event: aiocqhttp.Event, bot=None):
yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id, bot)
if event.message_type == 'group':
permission = 'MEMBER'
@@ -316,7 +385,6 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
aiocq_msg = (await AiocqhttpMessageConverter.yiri2target(message))[0]
if target_type == 'group':
await self.bot.send_group_msg(group_id=int(target_id), message=aiocq_msg)
elif target_type == 'person':
await self.bot.send_private_msg(user_id=int(target_id), message=aiocq_msg)
@@ -345,7 +413,7 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
async def on_message(event: aiocqhttp.Event):
self.bot_account_id = event.self_id
try:
return await callback(await self.event_converter.target2yiri(event,self.bot), self)
return await callback(await self.event_converter.target2yiri(event, self.bot), self)
except Exception:
await self.logger.error(f'Error in on_message: {traceback.format_exc()}')
traceback.print_exc()

View File
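In the aiocqhttp converter above, a `reply` segment is expanded by fetching the quoted message through `bot.get_msg(message_id=...)`, converting its segments, and wrapping the result in a `platform_message.Quote`. The sketch below mirrors that flow with simplified stand-in types; the dataclasses and the helper name are illustrative assumptions, while the `get_msg` payload shape (`message`, `user_id`) follows the OneBot call used in the diff.

```python
from dataclasses import dataclass


@dataclass
class Plain:
    text: str


@dataclass
class Quote:
    message_id: int
    sender_id: int
    origin: list


async def resolve_reply_segment(bot, segment_data: dict) -> Quote:
    """Fetch the quoted message and convert its segments into platform components."""
    original = await bot.get_msg(message_id=segment_data['id'])
    origin_components = []
    for seg in original['message']:
        if seg['type'] == 'text':
            origin_components.append(Plain(text=seg['data']['text']))
        # image / at / forward segments would be handled analogously,
        # mirroring process_message_data() in the diff above.
    return Quote(
        message_id=segment_data['id'],
        sender_id=original['user_id'],
        origin=origin_components,
    )
```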

@@ -654,10 +654,10 @@ class DiscordMessageConverter(adapter.MessageConverter):
# 确保路径没有空字节
clean_path = ele.path.replace('\x00', '')
clean_path = os.path.abspath(clean_path)
if not os.path.exists(clean_path):
continue # 跳过不存在的文件
try:
with open(clean_path, 'rb') as f:
image_bytes = f.read()
@@ -677,12 +677,13 @@ class DiscordMessageConverter(adapter.MessageConverter):
filename = f'{uuid.uuid4()}.webp'
# 默认保持PNG
except Exception as e:
print(f"Error reading image file {clean_path}: {e}")
print(f'Error reading image file {clean_path}: {e}')
continue # 跳过读取失败的文件
if image_bytes:
# 使用BytesIO创建文件对象避免路径问题
import io
image_files.append(discord.File(fp=io.BytesIO(image_bytes), filename=filename))
elif isinstance(ele, platform_message.Plain):
text_string += ele.text
@@ -1003,25 +1004,25 @@ class DiscordAdapter(adapter.MessagePlatformAdapter):
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
msg_to_send, image_files = await self.message_converter.yiri2target(message)
try:
# 获取频道对象
channel = self.bot.get_channel(int(target_id))
if channel is None:
# 如果本地缓存中没有尝试从API获取
channel = await self.bot.fetch_channel(int(target_id))
args = {
'content': msg_to_send,
}
if len(image_files) > 0:
args['files'] = image_files
await channel.send(**args)
except Exception as e:
await self.logger.error(f"Discord send_message failed: {e}")
await self.logger.error(f'Discord send_message failed: {e}')
raise e
async def reply_message(

View File
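The Discord hunk above sanitizes the local path, reads the image into memory, and hands discord.py a `BytesIO` via `discord.File`, so the library never touches the original filesystem path. A hedged sketch of that pattern as a standalone helper; the function name and the `.webp` fallback filename are illustrative, taken from the diff rather than a public adapter API.

```python
import io
import os
import uuid

import discord


def image_path_to_discord_file(path: str) -> discord.File | None:
    clean_path = os.path.abspath(path.replace('\x00', ''))  # strip stray NUL bytes
    if not os.path.exists(clean_path):
        return None  # skip missing files, as the adapter does
    try:
        with open(clean_path, 'rb') as f:
            image_bytes = f.read()
    except Exception as e:
        print(f'Error reading image file {clean_path}: {e}')
        return None
    # BytesIO avoids passing the raw path to discord.py
    return discord.File(fp=io.BytesIO(image_bytes), filename=f'{uuid.uuid4()}.webp')
```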

@@ -378,15 +378,15 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
if 'im.message.receive_v1' == type:
try:
event = await self.event_converter.target2yiri(p2v1, self.api_client)
except Exception as e:
await self.logger.error(f"Error in lark callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in lark callback: {traceback.format_exc()}')
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
return {'code': 200, 'message': 'ok'}
except Exception as e:
await self.logger.error(f"Error in lark callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in lark callback: {traceback.format_exc()}')
return {'code': 500, 'message': 'error'}
async def on_message(event: lark_oapi.im.v1.P2ImMessageReceiveV1):

View File

@@ -72,8 +72,9 @@ class NakuruProjectMessageConverter(adapter_model.MessageConverter):
content=content_list,
)
nakuru_forward_node_list.append(nakuru_forward_node)
except Exception as e:
except Exception:
import traceback
traceback.print_exc()
nakuru_msg_list.append(nakuru_forward_node_list)
@@ -276,7 +277,7 @@ class NakuruAdapter(adapter_model.MessagePlatformAdapter):
# 注册监听器
self.bot.receiver(source_cls.__name__)(listener_wrapper)
except Exception as e:
self.logger.error(f"Error in nakuru register_listener: {traceback.format_exc()}")
self.logger.error(f'Error in nakuru register_listener: {traceback.format_exc()}')
raise e
def unregister_listener(

View File

@@ -125,8 +125,8 @@ class OfficialAccountAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception as e:
await self.logger.error(f"Error in officialaccount callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in officialaccount callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -154,10 +154,7 @@ class QQOfficialAdapter(adapter.MessagePlatformAdapter):
raise ParamNotEnoughError('QQ官方机器人缺少相关配置项请查看文档或联系管理员')
self.bot = QQOfficialClient(
app_id=config['appid'],
secret=config['secret'],
token=config['token'],
logger=self.logger
app_id=config['appid'], secret=config['secret'], token=config['token'], logger=self.logger
)
async def reply_message(
@@ -224,8 +221,8 @@ class QQOfficialAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = 'justbot'
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception as e:
await self.logger.error(f"Error in qqofficial callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in qqofficial callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('DIRECT_MESSAGE_CREATE')(on_message)

View File

@@ -104,7 +104,9 @@ class SlackAdapter(adapter.MessagePlatformAdapter):
if missing_keys:
raise ParamNotEnoughError('Slack机器人缺少相关配置项请查看文档或联系管理员')
self.bot = SlackClient(bot_token=self.config['bot_token'], signing_secret=self.config['signing_secret'], logger=self.logger)
self.bot = SlackClient(
bot_token=self.config['bot_token'], signing_secret=self.config['signing_secret'], logger=self.logger
)
async def reply_message(
self,
@@ -139,8 +141,8 @@ class SlackAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = 'SlackBot'
try:
return await callback(await self.event_converter.target2yiri(event, self.bot), self)
except Exception as e:
await self.logger.error(f"Error in slack callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in slack callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('im')(on_message)

View File

@@ -160,8 +160,8 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
try:
lb_event = await self.event_converter.target2yiri(update, self.bot, self.bot_account_id)
await self.listeners[type(lb_event)](lb_event, self)
except Exception as e:
await self.logger.error(f"Error in telegram callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in telegram callback: {traceback.format_exc()}')
self.application = ApplicationBuilder().token(self.config['token']).build()
self.bot = self.application.bot

View File

@@ -1,5 +1,4 @@
import requests
import websockets
import websocket
import json
import time
@@ -10,53 +9,41 @@ from libs.wechatpad_api.client import WeChatPadClient
import typing
import asyncio
import traceback
import time
import re
import base64
import uuid
import json
import os
import copy
import datetime
import threading
import quart
import aiohttp
from .. import adapter
from ...pipeline.longtext.strategies import forward
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...utils import image
from ..logger import EventLogger
import xml.etree.ElementTree as ET
from typing import Optional, List, Tuple
from typing import Optional, Tuple
from functools import partial
import logging
class WeChatPadMessageConverter(adapter.MessageConverter):
def __init__(self, config: dict):
def __init__(self, config: dict, logger: logging.Logger):
self.config = config
self.bot = WeChatPadClient(self.config["wechatpad_url"],self.config["token"])
self.logger = logging.getLogger("WeChatPadMessageConverter")
self.logger = logger
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain
) -> list[dict]:
async def yiri2target(message_chain: platform_message.MessageChain) -> list[dict]:
content_list = []
current_file_path = os.path.abspath(__file__)
for component in message_chain:
if isinstance(component, platform_message.At):
content_list.append({"type": "at", "target": component.target})
content_list.append({'type': 'at', 'target': component.target})
elif isinstance(component, platform_message.Plain):
content_list.append({"type": "text", "content": component.text})
content_list.append({'type': 'text', 'content': component.text})
elif isinstance(component, platform_message.Image):
if component.url:
async with httpx.AsyncClient() as client:
@@ -68,15 +55,16 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
else:
raise Exception('获取文件失败')
# pass
content_list.append({"type": "image", "image": base64_str})
content_list.append({'type': 'image', 'image': base64_str})
elif component.base64:
content_list.append({"type": "image", "image": component.base64})
content_list.append({'type': 'image', 'image': component.base64})
elif isinstance(component, platform_message.WeChatEmoji):
content_list.append(
{'type': 'WeChatEmoji', 'emoji_md5': component.emoji_md5, 'emoji_size': component.emoji_size})
{'type': 'WeChatEmoji', 'emoji_md5': component.emoji_md5, 'emoji_size': component.emoji_size}
)
elif isinstance(component, platform_message.Voice):
content_list.append({"type": "voice", "data": component.url, "duration": component.length, "forma": 0})
content_list.append({'type': 'voice', 'data': component.url, 'duration': component.length, 'forma': 0})
elif isinstance(component, platform_message.WeChatAppMsg):
content_list.append({'type': 'WeChatAppMsg', 'app_msg': component.app_msg})
elif isinstance(component, platform_message.Forward):
@@ -86,28 +74,37 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return content_list
async def target2yiri(
self,
message: dict,
bot_account_id: str
bot_account_id: str,
) -> platform_message.MessageChain:
"""外部消息转平台消息"""
# 数据预处理
message_list = []
bot_wxid = self.config['wxid']
ats_bot = False # 是否被@
content = message["content"]["str"]
content = message['content']['str']
content_no_preifx = content # 群消息则去掉前缀
is_group_message = self._is_group_message(message)
if is_group_message:
ats_bot = self._ats_bot(message, bot_account_id)
self.logger.info(f"ats_bot: {ats_bot}; bot_account_id: {bot_account_id}; bot_wxid: {bot_wxid}")
if "@所有人" in content:
message_list.append(platform_message.AtAll())
elif ats_bot:
if ats_bot:
message_list.append(platform_message.At(target=bot_account_id))
# 解析@信息并生成At组件
at_targets = self._extract_at_targets(message)
for target_id in at_targets:
if target_id != bot_wxid: # 避免重复添加机器人的At
message_list.append(platform_message.At(target=target_id))
content_no_preifx, _ = self._extract_content_and_sender(content)
msg_type = message["msg_type"]
msg_type = message['msg_type']
# 映射消息类型到处理器方法
handler_map = {
@@ -129,11 +126,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return platform_message.MessageChain(message_list)
async def _handler_text(
self,
message: Optional[dict],
content_no_preifx: str
) -> platform_message.MessageChain:
async def _handler_text(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理文本消息 (msg_type=1)"""
if message and self._is_group_message(message):
pattern = r'@\S{1,20}'
@@ -141,16 +134,12 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return platform_message.MessageChain([platform_message.Plain(content_no_preifx)])
async def _handler_image(
self,
message: Optional[dict],
content_no_preifx: str
) -> platform_message.MessageChain:
async def _handler_image(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理图像消息 (msg_type=3)"""
try:
image_xml = content_no_preifx
if not image_xml:
return platform_message.MessageChain([platform_message.Unknown("[图片内容为空]")])
return platform_message.MessageChain([platform_message.Unknown('[图片内容为空]')])
root = ET.fromstring(image_xml)
# 提取img标签的属性
@@ -160,28 +149,22 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
cdnthumburl = img_tag.get('cdnthumburl')
# cdnmidimgurl = img_tag.get('cdnmidimgurl')
image_data = self.bot.cdn_download(aeskey=aeskey, file_type=1, file_url=cdnthumburl)
if image_data["Data"]['FileData'] == '':
if image_data['Data']['FileData'] == '':
image_data = self.bot.cdn_download(aeskey=aeskey, file_type=2, file_url=cdnthumburl)
base64_str = image_data["Data"]['FileData']
base64_str = image_data['Data']['FileData']
# self.logger.info(f"data:image/png;base64,{base64_str}")
elements = [
platform_message.Image(base64=f"data:image/png;base64,{base64_str}"),
platform_message.Image(base64=f'data:image/png;base64,{base64_str}'),
# platform_message.WeChatForwardImage(xml_data=image_xml) # 微信消息转发
]
return platform_message.MessageChain(elements)
except Exception as e:
self.logger.error(f"处理图片失败: {str(e)}")
return platform_message.MessageChain([platform_message.Unknown("[图片处理失败]")])
self.logger.error(f'处理图片失败: {str(e)}')
return platform_message.MessageChain([platform_message.Unknown('[图片处理失败]')])
async def _handler_voice(
self,
message: Optional[dict],
content_no_preifx: str
) -> platform_message.MessageChain:
async def _handler_voice(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理语音消息 (msg_type=34)"""
message_List = []
try:
@@ -197,39 +180,33 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
bufid = voicemsg.get('bufid')
length = voicemsg.get('voicelength')
voice_data = self.bot.get_msg_voice(buf_id=str(bufid), length=int(length), msgid=str(new_msg_id))
audio_base64 = voice_data["Data"]['Base64']
audio_base64 = voice_data['Data']['Base64']
# 验证语音数据有效性
if not audio_base64:
message_List.append(platform_message.Unknown(text="[语音内容为空]"))
message_List.append(platform_message.Unknown(text='[语音内容为空]'))
return platform_message.MessageChain(message_List)
# 转换为平台支持的语音格式(如 Silk 格式)
voice_element = platform_message.Voice(
base64=f"data:audio/silk;base64,{audio_base64}"
)
voice_element = platform_message.Voice(base64=f'data:audio/silk;base64,{audio_base64}')
message_List.append(voice_element)
except KeyError as e:
self.logger.error(f"语音数据字段缺失: {str(e)}")
message_List.append(platform_message.Unknown(text="[语音数据解析失败]"))
self.logger.error(f'语音数据字段缺失: {str(e)}')
message_List.append(platform_message.Unknown(text='[语音数据解析失败]'))
except Exception as e:
self.logger.error(f"处理语音消息异常: {str(e)}")
message_List.append(platform_message.Unknown(text="[语音处理失败]"))
self.logger.error(f'处理语音消息异常: {str(e)}')
message_List.append(platform_message.Unknown(text='[语音处理失败]'))
return platform_message.MessageChain(message_List)
async def _handler_compound(
self,
message: Optional[dict],
content_no_preifx: str
) -> platform_message.MessageChain:
async def _handler_compound(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理复合消息 (msg_type=49),根据子类型分派"""
try:
xml_data = ET.fromstring(content_no_preifx)
appmsg_data = xml_data.find('.//appmsg')
if appmsg_data:
data_type = appmsg_data.findtext('.//type', "")
data_type = appmsg_data.findtext('.//type', '')
# 二次分派处理器
sub_handler_map = {
'57': self._handler_compound_quote,
@@ -238,9 +215,9 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
'74': self._handler_compound_file,
'33': self._handler_compound_mini_program,
'36': self._handler_compound_mini_program,
'2000': partial(self._handler_compound_unsupported, text="[转账消息]"),
'2001': partial(self._handler_compound_unsupported, text="[红包消息]"),
'51': partial(self._handler_compound_unsupported, text="[视频号消息]"),
'2000': partial(self._handler_compound_unsupported, text='[转账消息]'),
'2001': partial(self._handler_compound_unsupported, text='[红包消息]'),
'51': partial(self._handler_compound_unsupported, text='[视频号消息]'),
}
handler = sub_handler_map.get(data_type, self._handler_compound_unsupported)
@@ -251,56 +228,54 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
else:
return platform_message.MessageChain([platform_message.Unknown(text=content_no_preifx)])
except Exception as e:
self.logger.error(f"解析复合消息失败: {str(e)}")
self.logger.error(f'解析复合消息失败: {str(e)}')
return platform_message.MessageChain([platform_message.Unknown(text=content_no_preifx)])
async def _handler_compound_quote(
self,
message: Optional[dict],
xml_data: ET.Element
self, message: Optional[dict], xml_data: ET.Element
) -> platform_message.MessageChain:
"""处理引用消息 (data_type=57)"""
message_list = []
# self.logger.info("_handler_compound_quote", ET.tostring(xml_data, encoding='unicode'))
# self.logger.info("_handler_compound_quote", ET.tostring(xml_data, encoding='unicode'))
appmsg_data = xml_data.find('.//appmsg')
quote_data = "" # 引用原文
quote_data = '' # 引用原文
quote_id = None # 引用消息的原发送者
tousername = None # 接收方: 所属微信的wxid
user_data = "" # 用户消息
user_data = '' # 用户消息
sender_id = xml_data.findtext('.//fromusername') # 发送方:单聊用户/群member
# 引用消息转发
if appmsg_data:
user_data = appmsg_data.findtext('.//title') or ""
user_data = appmsg_data.findtext('.//title') or ''
quote_data = appmsg_data.find('.//refermsg').findtext('.//content')
quote_id = appmsg_data.find('.//refermsg').findtext('.//chatusr')
message_list.append(
platform_message.WeChatAppMsg(
app_msg=ET.tostring(appmsg_data, encoding='unicode'))
)
message_list.append(platform_message.WeChatAppMsg(app_msg=ET.tostring(appmsg_data, encoding='unicode')))
if message:
tousername = message['to_user_name']["str"]
tousername = message['to_user_name']['str']
_ = quote_id
_ = tousername
if quote_data:
quote_data_message_list = platform_message.MessageChain()
# 文本消息
try:
if "<msg>" not in quote_data:
if '<msg>' not in quote_data:
quote_data_message_list.append(platform_message.Plain(quote_data))
else:
# 引用消息展开
quote_data_xml = ET.fromstring(quote_data)
if quote_data_xml.find("img"):
if quote_data_xml.find('img'):
quote_data_message_list.extend(await self._handler_image(None, quote_data))
elif quote_data_xml.find("voicemsg"):
elif quote_data_xml.find('voicemsg'):
quote_data_message_list.extend(await self._handler_voice(None, quote_data))
elif quote_data_xml.find("videomsg"):
elif quote_data_xml.find('videomsg'):
quote_data_message_list.extend(await self._handler_default(None, quote_data)) # 先不处理
else:
# appmsg
quote_data_message_list.extend(await self._handler_compound(None, quote_data))
except Exception as e:
self.logger.error(f"处理引用消息异常 expcetion:{e}")
self.logger.error(f'处理引用消息异常 expcetion:{e}')
quote_data_message_list.append(platform_message.Plain(quote_data))
message_list.append(
platform_message.Quote(
@@ -315,15 +290,11 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return platform_message.MessageChain(message_list)
async def _handler_compound_file(
self,
message: dict,
xml_data: ET.Element
) -> platform_message.MessageChain:
async def _handler_compound_file(self, message: dict, xml_data: ET.Element) -> platform_message.MessageChain:
"""处理文件消息 (data_type=6)"""
file_data = xml_data.find('.//appmsg')
if file_data.findtext('.//type', "") == "74":
if file_data.findtext('.//type', '') == '74':
return None
else:
@@ -346,22 +317,21 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
file_data = self.bot.cdn_download(aeskey=aeskey, file_type=5, file_url=cdnthumburl)
file_base64 = file_data["Data"]['FileData']
file_base64 = file_data['Data']['FileData']
# print(file_data)
file_size = file_data["Data"]['TotalSize']
file_size = file_data['Data']['TotalSize']
# print(file_base64)
return platform_message.MessageChain([
platform_message.WeChatFile(file_id=file_id, file_name=file_name, file_size=file_size,
file_base64=file_base64),
platform_message.WeChatForwardFile(xml_data=xml_data_str)
])
return platform_message.MessageChain(
[
platform_message.WeChatFile(
file_id=file_id, file_name=file_name, file_size=file_size, file_base64=file_base64
),
platform_message.WeChatForwardFile(xml_data=xml_data_str),
]
)
async def _handler_compound_link(
self,
message: dict,
xml_data: ET.Element
) -> platform_message.MessageChain:
async def _handler_compound_link(self, message: dict, xml_data: ET.Element) -> platform_message.MessageChain:
"""处理链接消息(如公众号文章、外部网页)"""
message_list = []
try:
@@ -374,56 +344,38 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
link_title=appmsg.findtext('title', ''),
link_desc=appmsg.findtext('des', ''),
link_url=appmsg.findtext('url', ''),
link_thumb_url=appmsg.findtext("thumburl", '') # 这个字段拿不到
link_thumb_url=appmsg.findtext('thumburl', ''), # 这个字段拿不到
)
)
# 还没有发链接的接口, 暂时还需要自己构造appmsg, 先用WeChatAppMsg。
message_list.append(
platform_message.WeChatAppMsg(
app_msg=ET.tostring(appmsg, encoding='unicode')
)
)
message_list.append(platform_message.WeChatAppMsg(app_msg=ET.tostring(appmsg, encoding='unicode')))
except Exception as e:
self.logger.error(f"解析链接消息失败: {str(e)}")
self.logger.error(f'解析链接消息失败: {str(e)}')
return platform_message.MessageChain(message_list)
async def _handler_compound_mini_program(
self,
message: dict,
xml_data: ET.Element
self, message: dict, xml_data: ET.Element
) -> platform_message.MessageChain:
"""处理小程序消息(如小程序卡片、服务通知)"""
xml_data_str = ET.tostring(xml_data, encoding='unicode')
return platform_message.MessageChain([
platform_message.WeChatForwardMiniPrograms(xml_data=xml_data_str)
])
return platform_message.MessageChain([platform_message.WeChatForwardMiniPrograms(xml_data=xml_data_str)])
async def _handler_default(
self,
message: Optional[dict],
content_no_preifx: str
) -> platform_message.MessageChain:
async def _handler_default(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理未知消息类型"""
if message:
msg_type = message["msg_type"]
msg_type = message['msg_type']
else:
msg_type = ""
return platform_message.MessageChain([
platform_message.Unknown(text=f"[未知消息类型 msg_type:{msg_type}]")
])
msg_type = ''
return platform_message.MessageChain([platform_message.Unknown(text=f'[未知消息类型 msg_type:{msg_type}]')])
def _handler_compound_unsupported(
self,
message: dict,
xml_data: str,
text: Optional[str] = None
self, message: dict, xml_data: str, text: Optional[str] = None
) -> platform_message.MessageChain:
"""处理未支持复合消息类型(msg_type=49)子类型"""
if not text:
text = f"[xml_data={xml_data}]"
text = f'[xml_data={xml_data}]'
content_list = []
content_list.append(
platform_message.Unknown(text=f"[处理未支持复合消息类型[msg_type=49]|{text}"))
content_list.append(platform_message.Unknown(text=f'[处理未支持复合消息类型[msg_type=49]|{text}'))
return platform_message.MessageChain(content_list)
@@ -432,7 +384,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
ats_bot = False
try:
to_user_name = message['to_user_name']['str'] # 接收方: 所属微信的wxid
raw_content = message["content"]["str"] # 原始消息内容
raw_content = message['content']['str'] # 原始消息内容
content_no_prefix, _ = self._extract_content_and_sender(raw_content)
# 直接艾特机器人这个有bug当被引用的消息里面有@bot,会套娃
# ats_bot = ats_bot or (f"@{bot_account_id}" in content_no_prefix)
@@ -443,7 +395,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
msg_source = message.get('msg_source', '') or ''
if len(msg_source) > 0:
msg_source_data = ET.fromstring(msg_source)
at_user_list = msg_source_data.findtext("atuserlist") or ""
at_user_list = msg_source_data.findtext('atuserlist') or ''
ats_bot = ats_bot or (to_user_name in at_user_list)
# 引用bot
if message.get('msg_type', 0) == 49:
@@ -454,56 +406,73 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
quote_id = appmsg_data.find('.//refermsg').findtext('.//chatusr') # 引用消息的原发送者
ats_bot = ats_bot or (quote_id == tousername)
except Exception as e:
self.logger.error(f"_ats_bot got except: {e}")
self.logger.error(f'_ats_bot got except: {e}')
finally:
return ats_bot
# 提取一下at的wxid列表
def _extract_at_targets(self, message: dict) -> list[str]:
"""从消息中提取被@用户的ID列表"""
at_targets = []
try:
# 从msg_source中解析atuserlist
msg_source = message.get('msg_source', '') or ''
if len(msg_source) > 0:
msg_source_data = ET.fromstring(msg_source)
at_user_list = msg_source_data.findtext("atuserlist") or ""
if at_user_list:
# atuserlist格式通常是逗号分隔的用户ID列表
at_targets = [user_id.strip() for user_id in at_user_list.split(',') if user_id.strip()]
except Exception as e:
self.logger.error(f"_extract_at_targets got except: {e}")
return at_targets
# 提取一下content前面的sender_id, 和去掉前缀的内容
def _extract_content_and_sender(self, raw_content: str) -> Tuple[str, Optional[str]]:
try:
# 检查消息开头,如果有 wxid_sbitaz0mt65n22:\n 则删掉
# add: 有些用户的wxid不是上述格式。换成user_name:
regex = re.compile(r"^[a-zA-Z0-9_\-]{5,20}:")
line_split = raw_content.split("\n")
regex = re.compile(r'^[a-zA-Z0-9_\-]{5,20}:')
line_split = raw_content.split('\n')
if len(line_split) > 0 and regex.match(line_split[0]):
raw_content = "\n".join(line_split[1:])
sender_id = line_split[0].strip(":")
raw_content = '\n'.join(line_split[1:])
sender_id = line_split[0].strip(':')
return raw_content, sender_id
except Exception as e:
self.logger.error(f"_extract_content_and_sender got except: {e}")
self.logger.error(f'_extract_content_and_sender got except: {e}')
finally:
return raw_content, None
# 是否是群消息
def _is_group_message(self, message: dict) -> bool:
from_user_name = message['from_user_name']['str']
return from_user_name.endswith("@chatroom")
return from_user_name.endswith('@chatroom')
class WeChatPadEventConverter(adapter.EventConverter):
def __init__(self, config: dict):
def __init__(self, config: dict, logger: logging.Logger):
self.config = config
self.message_converter = WeChatPadMessageConverter(config)
self.logger = logging.getLogger("WeChatPadEventConverter")
self.message_converter = WeChatPadMessageConverter(config, logger)
self.logger = logger
@staticmethod
async def yiri2target(
event: platform_events.MessageEvent
) -> dict:
async def yiri2target(event: platform_events.MessageEvent) -> dict:
pass
async def target2yiri(
self,
event: dict,
bot_account_id: str
bot_account_id: str,
) -> platform_events.MessageEvent:
# 排除公众号以及微信团队消息
if event['from_user_name']['str'].startswith('gh_') \
or event['from_user_name']['str']=='weixin'\
or event['from_user_name']['str'] == "newsapp"\
or event['from_user_name']['str'] == self.config["wxid"]:
if (
event['from_user_name']['str'].startswith('gh_')
or event['from_user_name']['str'] == 'weixin'
or event['from_user_name']['str'] == 'newsapp'
or event['from_user_name']['str'] == self.config['wxid']
):
return None
message_chain = await self.message_converter.target2yiri(copy.deepcopy(event), bot_account_id)
@@ -512,7 +481,7 @@ class WeChatPadEventConverter(adapter.EventConverter):
if '@chatroom' in event['from_user_name']['str']:
# 找出开头的 wxid_ 字符串,以:结尾
sender_wxid = event['content']['str'].split(":")[0]
sender_wxid = event['content']['str'].split(':')[0]
return platform_events.GroupMessage(
sender=platform_entities.GroupMember(
@@ -524,13 +493,13 @@ class WeChatPadEventConverter(adapter.EventConverter):
name=event['from_user_name']['str'],
permission=platform_entities.Permission.Member,
),
special_title="",
special_title='',
join_timestamp=0,
last_speak_timestamp=0,
mute_time_remaining=0,
),
message_chain=message_chain,
time=event["create_time"],
time=event['create_time'],
source_platform_object=event,
)
else:
@@ -541,13 +510,13 @@ class WeChatPadEventConverter(adapter.EventConverter):
remark='',
),
message_chain=message_chain,
time=event["create_time"],
time=event['create_time'],
source_platform_object=event,
)
class WeChatPadAdapter(adapter.MessagePlatformAdapter):
name: str = "WeChatPad" # 定义适配器名称
name: str = 'WeChatPad' # 定义适配器名称
bot: WeChatPadClient
quart_app: quart.Quart
@@ -572,35 +541,29 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
self.logger = logger
self.quart_app = quart.Quart(__name__)
self.message_converter = WeChatPadMessageConverter(config)
self.event_converter = WeChatPadEventConverter(config)
self.message_converter = WeChatPadMessageConverter(config, ap.logger)
self.event_converter = WeChatPadEventConverter(config, ap.logger)
async def ws_message(self, data):
"""处理接收到的消息"""
# self.ap.logger.debug(f"Gewechat callback event: {data}")
# print(data)
try:
event = await self.event_converter.target2yiri(data.copy(), self.bot_account_id)
except Exception as e:
await self.logger.error(f"Error in wechatpad callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in wechatpad callback: {traceback.format_exc()}')
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
return 'ok'
async def _handle_message(
self,
message: platform_message.MessageChain,
target_id: str
):
async def _handle_message(self, message: platform_message.MessageChain, target_id: str):
"""统一消息处理核心逻辑"""
content_list = await self.message_converter.yiri2target(message)
# print(content_list)
at_targets = [item["target"] for item in content_list if item["type"] == "at"]
at_targets = [item['target'] for item in content_list if item['type'] == 'at']
# print(at_targets)
# 处理@逻辑
at_targets = at_targets or []
@@ -608,7 +571,7 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
if at_targets:
member_info = self.bot.get_chatroom_member_detail(
target_id,
)["Data"]["member_data"]["chatroom_member_list"]
)['Data']['member_data']['chatroom_member_list']
# 处理消息组件
for msg in content_list:
@@ -616,63 +579,51 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
if msg['type'] == 'text' and at_targets:
at_nick_name_list = []
for member in member_info:
if member["user_name"] in at_targets:
if member['user_name'] in at_targets:
at_nick_name_list.append(f'@{member["nick_name"]}')
msg['content'] = f'{" ".join(at_nick_name_list)} {msg["content"]}'
# 统一消息派发
handler_map = {
'text': lambda msg: self.bot.send_text_message(
to_wxid=target_id,
message=msg['content'],
ats=at_targets
to_wxid=target_id, message=msg['content'], ats=at_targets
),
'image': lambda msg: self.bot.send_image_message(
to_wxid=target_id,
img_url=msg["image"],
ats = at_targets
to_wxid=target_id, img_url=msg['image'], ats=at_targets
),
'WeChatEmoji': lambda msg: self.bot.send_emoji_message(
to_wxid=target_id,
emoji_md5=msg['emoji_md5'],
emoji_size=msg['emoji_size']
to_wxid=target_id, emoji_md5=msg['emoji_md5'], emoji_size=msg['emoji_size']
),
'voice': lambda msg: self.bot.send_voice_message(
to_wxid=target_id,
voice_data=msg['data'],
voice_duration=msg["duration"],
voice_forma=msg["forma"],
voice_duration=msg['duration'],
voice_forma=msg['forma'],
),
'WeChatAppMsg': lambda msg: self.bot.send_app_message(
to_wxid=target_id,
app_message=msg['app_msg'],
type=0,
),
'at': lambda msg: None
'at': lambda msg: None,
}
if handler := handler_map.get(msg['type']):
handler(msg)
# self.ap.logger.warning(f"未处理的消息类型: {ret}")
else:
self.ap.logger.warning(f"未处理的消息类型: {msg['type']}")
self.ap.logger.warning(f'未处理的消息类型: {msg["type"]}')
continue
async def send_message(
self,
target_type: str,
target_id: str,
message: platform_message.MessageChain
):
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
"""主动发送消息"""
return await self._handle_message(message, target_id)
async def reply_message(
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False,
):
"""回复消息"""
if message_source.source_platform_object:
@@ -683,58 +634,49 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
pass
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None]
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
):
self.listeners[event_type] = callback
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None]
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
):
pass
async def run_async(self):
if not self.config["admin_key"] and not self.config["token"]:
raise RuntimeError("无wechatpad管理密匙请填入配置文件后重启")
if not self.config['admin_key'] and not self.config['token']:
raise RuntimeError('无wechatpad管理密匙请填入配置文件后重启')
else:
if self.config["token"]:
self.bot = WeChatPadClient(
self.config['wechatpad_url'],
self.config["token"]
)
if self.config['token']:
self.bot = WeChatPadClient(self.config['wechatpad_url'], self.config['token'])
data = self.bot.get_login_status()
self.ap.logger.info(data)
if data["Code"] == 300 and data["Text"] == "你已退出微信":
if data['Code'] == 300 and data['Text'] == '你已退出微信':
response = requests.post(
f"{self.config['wechatpad_url']}/admin/GenAuthKey1?key={self.config['admin_key']}",
json={"Count": 1, "Days": 365}
f'{self.config["wechatpad_url"]}/admin/GenAuthKey1?key={self.config["admin_key"]}',
json={'Count': 1, 'Days': 365},
)
if response.status_code != 200:
raise Exception(f"获取token失败: {response.text}")
self.config["token"] = response.json()["Data"][0]
raise Exception(f'获取token失败: {response.text}')
self.config['token'] = response.json()['Data'][0]
elif not self.config["token"]:
elif not self.config['token']:
response = requests.post(
f"{self.config['wechatpad_url']}/admin/GenAuthKey1?key={self.config['admin_key']}",
json={"Count": 1, "Days": 365}
f'{self.config["wechatpad_url"]}/admin/GenAuthKey1?key={self.config["admin_key"]}',
json={'Count': 1, 'Days': 365},
)
if response.status_code != 200:
raise Exception(f"获取token失败: {response.text}")
self.config["token"] = response.json()["Data"][0]
raise Exception(f'获取token失败: {response.text}')
self.config['token'] = response.json()['Data'][0]
self.bot = WeChatPadClient(
self.config['wechatpad_url'],
self.config["token"],
logger=self.logger
)
self.ap.logger.info(self.config["token"])
self.bot = WeChatPadClient(self.config['wechatpad_url'], self.config['token'], logger=self.logger)
self.ap.logger.info(self.config['token'])
thread_1 = threading.Event()
def wechat_login_process():
# 不登录这些先注释掉避免登陆态尝试拉qrcode。
# login_data =self.bot.get_login_qr()
@@ -742,67 +684,54 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
# url = login_data['Data']["QrCodeUrl"]
# self.ap.logger.info(login_data)
profile =self.bot.get_profile()
profile = self.bot.get_profile()
self.ap.logger.info(profile)
self.bot_account_id = profile["Data"]["userInfo"]["nickName"]["str"]
self.config["wxid"] = profile["Data"]["userInfo"]["userName"]["str"]
self.bot_account_id = profile['Data']['userInfo']['nickName']['str']
self.config['wxid'] = profile['Data']['userInfo']['userName']['str']
thread_1.set()
# asyncio.create_task(wechat_login_process)
threading.Thread(target=wechat_login_process).start()
def connect_websocket_sync() -> None:
thread_1.wait()
uri = f"{self.config['wechatpad_ws']}/GetSyncMsg?key={self.config['token']}"
self.ap.logger.info(f"Connecting to WebSocket: {uri}")
uri = f'{self.config["wechatpad_ws"]}/GetSyncMsg?key={self.config["token"]}'
self.ap.logger.info(f'Connecting to WebSocket: {uri}')
def on_message(ws, message):
try:
data = json.loads(message)
self.ap.logger.debug(f"Received message: {data}")
self.ap.logger.debug(f'Received message: {data}')
# 这里需要确保ws_message是同步的或者使用asyncio.run调用异步方法
asyncio.run(self.ws_message(data))
except json.JSONDecodeError:
self.ap.logger.error(f"Non-JSON message: {message[:100]}...")
self.ap.logger.error(f'Non-JSON message: {message[:100]}...')
def on_error(ws, error):
self.ap.logger.error(f"WebSocket error: {str(error)[:200]}")
self.ap.logger.error(f'WebSocket error: {str(error)[:200]}')
def on_close(ws, close_status_code, close_msg):
self.ap.logger.info("WebSocket closed, reconnecting...")
self.ap.logger.info('WebSocket closed, reconnecting...')
time.sleep(5)
connect_websocket_sync() # 自动重连
def on_open(ws):
self.ap.logger.info("WebSocket connected successfully!")
self.ap.logger.info('WebSocket connected successfully!')
ws = websocket.WebSocketApp(
uri,
on_message=on_message,
on_error=on_error,
on_close=on_close,
on_open=on_open
)
ws.run_forever(
ping_interval=60,
ping_timeout=20
uri, on_message=on_message, on_error=on_error, on_close=on_close, on_open=on_open
)
ws.run_forever(ping_interval=60, ping_timeout=20)
# 直接调用同步版本(会阻塞)
# connect_websocket_sync()
# 这行代码会在WebSocket连接断开后才会执行
# self.ap.logger.info("WebSocket client thread started")
thread = threading.Thread(
target=connect_websocket_sync,
name="WebSocketClientThread",
daemon=True
)
thread = threading.Thread(target=connect_websocket_sync, name='WebSocketClientThread', daemon=True)
thread.start()
self.ap.logger.info("WebSocket client thread started")
self.ap.logger.info('WebSocket client thread started')
async def kill(self) -> bool:
pass

View File
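The WeChatPad adapter above keeps a synchronous websocket-client connection alive on a background thread and reconnects from `on_close`. A minimal sketch of that loop, assuming a plain callable for payload handling instead of the adapter's async `ws_message`; the 5-second back-off and the ping settings mirror the diff, everything else is illustrative.

```python
import json
import time

import websocket


def run_sync_ws(uri: str, handle_payload):
    def on_message(ws, message):
        try:
            handle_payload(json.loads(message))
        except json.JSONDecodeError:
            print(f'Non-JSON message: {message[:100]}...')

    def on_close(ws, close_status_code, close_msg):
        # Reconnect after a short pause, mirroring the adapter's behaviour.
        print('WebSocket closed, reconnecting...')
        time.sleep(5)
        run_sync_ws(uri, handle_payload)

    ws = websocket.WebSocketApp(
        uri,
        on_message=on_message,
        on_error=lambda ws, err: print(f'WebSocket error: {str(err)[:200]}'),
        on_close=on_close,
        on_open=lambda ws: print('WebSocket connected successfully!'),
    )
    ws.run_forever(ping_interval=60, ping_timeout=20)
```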

@@ -157,7 +157,7 @@ class WecomAdapter(adapter.MessagePlatformAdapter):
token=config['token'],
EncodingAESKey=config['EncodingAESKey'],
contacts_secret=config['contacts_secret'],
logger=self.logger
logger=self.logger,
)
async def reply_message(
@@ -201,8 +201,8 @@ class WecomAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception as e:
await self.logger.error(f"Error in wecom callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in wecom callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -145,7 +145,7 @@ class WecomCSAdapter(adapter.MessagePlatformAdapter):
secret=config['secret'],
token=config['token'],
EncodingAESKey=config['EncodingAESKey'],
logger=self.logger
logger=self.logger,
)
async def reply_message(
@@ -178,8 +178,8 @@ class WecomCSAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception as e:
await self.logger.error(f"Error in wecomcs callback: {traceback.format_exc()}")
except Exception:
await self.logger.error(f'Error in wecomcs callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -17,7 +17,7 @@ class LLMModelInfo(pydantic.BaseModel):
token_mgr: token.TokenManager
requester: requester.LLMAPIRequester
requester: requester.ProviderAPIRequester
tool_call_supported: typing.Optional[bool] = False

View File

@@ -18,7 +18,7 @@ class ModelManager:
model_list: list[entities.LLMModelInfo] # deprecated
requesters: dict[str, requester.LLMAPIRequester] # deprecated
requesters: dict[str, requester.ProviderAPIRequester] # deprecated
token_mgrs: dict[str, token.TokenManager] # deprecated
@@ -28,9 +28,11 @@ class ModelManager:
llm_models: list[requester.RuntimeLLMModel]
embedding_models: list[requester.RuntimeEmbeddingModel]
requester_components: list[engine.Component]
requester_dict: dict[str, type[requester.LLMAPIRequester]] # cache
requester_dict: dict[str, type[requester.ProviderAPIRequester]] # cache
def __init__(self, ap: app.Application):
self.ap = ap
@@ -38,6 +40,7 @@ class ModelManager:
self.requesters = {}
self.token_mgrs = {}
self.llm_models = []
self.embedding_models = []
self.requester_components = []
self.requester_dict = {}
@@ -45,7 +48,7 @@ class ModelManager:
self.requester_components = self.ap.discover.get_components_by_kind('LLMAPIRequester')
# forge requester class dict
requester_dict: dict[str, type[requester.LLMAPIRequester]] = {}
requester_dict: dict[str, type[requester.ProviderAPIRequester]] = {}
for component in self.requester_components:
requester_dict[component.metadata.name] = component.get_python_component_class()
@@ -58,13 +61,11 @@ class ModelManager:
self.ap.logger.info('Loading models from db...')
self.llm_models = []
self.embedding_models = []
# llm models
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.LLMModel))
llm_models = result.all()
# load models
for llm_model in llm_models:
try:
await self.load_llm_model(llm_model)
@@ -73,11 +74,17 @@ class ModelManager:
except Exception as e:
self.ap.logger.error(f'Failed to load model {llm_model.uuid}: {e}\n{traceback.format_exc()}')
# embedding models
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.EmbeddingModel))
embedding_models = result.all()
for embedding_model in embedding_models:
await self.load_embedding_model(embedding_model)
async def init_runtime_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""初始化运行时模型"""
"""初始化运行时 LLM 模型"""
if isinstance(model_info, sqlalchemy.Row):
model_info = persistence_model.LLMModel(**model_info._mapping)
elif isinstance(model_info, dict):
@@ -101,14 +108,47 @@ class ModelManager:
return runtime_llm_model
async def init_runtime_embedding_model(
self,
model_info: persistence_model.EmbeddingModel | sqlalchemy.Row[persistence_model.EmbeddingModel] | dict,
):
"""初始化运行时 Embedding 模型"""
if isinstance(model_info, sqlalchemy.Row):
model_info = persistence_model.EmbeddingModel(**model_info._mapping)
elif isinstance(model_info, dict):
model_info = persistence_model.EmbeddingModel(**model_info)
requester_inst = self.requester_dict[model_info.requester](ap=self.ap, config=model_info.requester_config)
await requester_inst.initialize()
runtime_embedding_model = requester.RuntimeEmbeddingModel(
model_entity=model_info,
token_mgr=token.TokenManager(
name=model_info.uuid,
tokens=model_info.api_keys,
),
requester=requester_inst,
)
return runtime_embedding_model
async def load_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""加载模型"""
"""加载 LLM 模型"""
runtime_llm_model = await self.init_runtime_llm_model(model_info)
self.llm_models.append(runtime_llm_model)
async def load_embedding_model(
self,
model_info: persistence_model.EmbeddingModel | sqlalchemy.Row[persistence_model.EmbeddingModel] | dict,
):
"""加载 Embedding 模型"""
runtime_embedding_model = await self.init_runtime_embedding_model(model_info)
self.embedding_models.append(runtime_embedding_model)
async def get_model_by_name(self, name: str) -> entities.LLMModelInfo: # deprecated
"""通过名称获取模型"""
for model in self.model_list:
@@ -116,23 +156,44 @@ class ModelManager:
return model
raise ValueError(f'无法确定模型 {name} 的信息')
async def get_model_by_uuid(self, uuid: str) -> entities.LLMModelInfo:
"""通过uuid获取模型"""
async def get_model_by_uuid(self, uuid: str) -> requester.RuntimeLLMModel:
"""通过uuid获取 LLM 模型"""
for model in self.llm_models:
if model.model_entity.uuid == uuid:
return model
raise ValueError(f'model {uuid} not found')
raise ValueError(f'LLM model {uuid} not found')
async def get_embedding_model_by_uuid(self, uuid: str) -> requester.RuntimeEmbeddingModel:
"""通过uuid获取 Embedding 模型"""
for model in self.embedding_models:
if model.model_entity.uuid == uuid:
return model
raise ValueError(f'Embedding model {uuid} not found')
async def remove_llm_model(self, model_uuid: str):
"""移除模型"""
"""移除 LLM 模型"""
for model in self.llm_models:
if model.model_entity.uuid == model_uuid:
self.llm_models.remove(model)
return
def get_available_requesters_info(self) -> list[dict]:
async def remove_embedding_model(self, model_uuid: str):
"""移除 Embedding 模型"""
for model in self.embedding_models:
if model.model_entity.uuid == model_uuid:
self.embedding_models.remove(model)
return
def get_available_requesters_info(self, model_type: str) -> list[dict]:
"""获取所有可用的请求器"""
return [component.to_plain_dict() for component in self.requester_components]
if model_type != '':
return [
component.to_plain_dict()
for component in self.requester_components
if model_type in component.spec['support_type']
]
else:
return [component.to_plain_dict() for component in self.requester_components]
def get_available_requester_info_by_name(self, name: str) -> dict | None:
"""通过名称获取请求器信息"""

View File
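The ModelManager change above makes `get_available_requesters_info` filter requester components by the `support_type` list declared in their manifests. A simplified sketch of that filtering over plain dicts; the `'text-embedding'` label in the example is a placeholder (only `llm` appears in the manifests shown in this diff), and the dict layout stands in for the project's `engine.Component` objects.

```python
def filter_requesters(components: list[dict], model_type: str) -> list[dict]:
    """Return requester manifests that declare support for the given model type.

    An empty model_type returns everything, matching the behaviour in the diff.
    """
    if model_type == '':
        return list(components)
    return [c for c in components if model_type in c['spec'].get('support_type', [])]


# Example: only the first (hypothetical) manifest supports embeddings.
components = [
    {'name': 'openai', 'spec': {'support_type': ['llm', 'text-embedding']}},
    {'name': 'anthropicmsgs', 'spec': {'support_type': ['llm']}},
]
print([c['name'] for c in filter_requesters(components, 'text-embedding')])  # ['openai']
```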

@@ -20,22 +20,45 @@ class RuntimeLLMModel:
token_mgr: token.TokenManager
"""api key管理器"""
requester: LLMAPIRequester
requester: ProviderAPIRequester
"""请求器实例"""
def __init__(
self,
model_entity: persistence_model.LLMModel,
token_mgr: token.TokenManager,
requester: LLMAPIRequester,
requester: ProviderAPIRequester,
):
self.model_entity = model_entity
self.token_mgr = token_mgr
self.requester = requester
class LLMAPIRequester(metaclass=abc.ABCMeta):
"""LLM API请求器"""
class RuntimeEmbeddingModel:
"""运行时 Embedding 模型"""
model_entity: persistence_model.EmbeddingModel
"""模型数据"""
token_mgr: token.TokenManager
"""api key管理器"""
requester: ProviderAPIRequester
"""请求器实例"""
def __init__(
self,
model_entity: persistence_model.EmbeddingModel,
token_mgr: token.TokenManager,
requester: ProviderAPIRequester,
):
self.model_entity = model_entity
self.token_mgr = token_mgr
self.requester = requester
class ProviderAPIRequester(metaclass=abc.ABCMeta):
"""Provider API请求器"""
name: str = None
@@ -74,3 +97,22 @@ class LLMAPIRequester(metaclass=abc.ABCMeta):
llm_entities.Message: 返回消息对象
"""
pass
async def invoke_embedding(
self,
model: RuntimeEmbeddingModel,
input_text: list[str],
extra_args: dict[str, typing.Any] = {},
) -> list[list[float]]:
"""调用 Embedding API
Args:
model (RuntimeEmbeddingModel): 使用的模型信息
input_text (list[str]): 输入文本
extra_args (dict[str, typing.Any], optional): 额外的参数. Defaults to {}.
Returns:
list[list[float]]: 返回的 embedding 向量
"""
pass

View File
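`ProviderAPIRequester` above gains an `invoke_embedding` hook that returns one vector per input string. The sketch below shows how a concrete requester could satisfy that contract with the OpenAI embeddings endpoint; the class, constructor, and parameter names are assumptions for illustration, not the project's actual chatcmpl requester.

```python
import typing

import openai


class OpenAIEmbeddingsSketch:
    def __init__(self, api_key: str, base_url: str | None = None):
        self.client = openai.AsyncClient(api_key=api_key, base_url=base_url)

    async def invoke_embedding(
        self,
        model_name: str,
        input_text: list[str],
        extra_args: dict[str, typing.Any] = {},
    ) -> list[list[float]]:
        resp = await self.client.embeddings.create(
            model=model_name,
            input=input_text,
            **extra_args,
        )
        # The API returns one embedding per input, in order.
        return [item.embedding for item in resp.data]
```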

@@ -7,7 +7,7 @@ from . import chatcmpl
class AI302ChatCompletions(chatcmpl.OpenAIChatCompletions):
"""302 AI ChatCompletion API 请求器"""
"""302.AI ChatCompletion API 请求器"""
client: openai.AsyncClient

View File

@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./302aichatcmpl.py

View File

@@ -15,7 +15,7 @@ from ...tools import entities as tools_entities
from ....utils import image
class AnthropicMessages(requester.LLMAPIRequester):
class AnthropicMessages(requester.ProviderAPIRequester):
"""Anthropic Messages API 请求器"""
client: anthropic.AsyncAnthropic

View File

@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./anthropicmsgs.py

View File

@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./bailianchatcmpl.py

Some files were not shown because too many files have changed in this diff.