Compare commits

..

141 Commits

Author SHA1 Message Date
Junyan Qin
d3b31f7027 chore: release v4.1.1 2025-07-26 19:28:34 +08:00
How-Sean Xin
c00f05fca4 Add GitHub link redirection for front-end plugin cards (#1579)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update package.json

* Update PluginCardComponent.tsx

* perf: do not display GitHub button when plugin has no GitHub URL

---------

Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-07-26 19:22:00 +08:00
Junyan Qin
92c3a86356 feat: add qhaigc 2025-07-24 22:42:26 +08:00
Junyan Qin
341fdc409d perf: embedding model ui 2025-07-24 22:29:25 +08:00
Junyan Qin
ebd542f592 feat: 302.AI embeddings 2025-07-24 22:05:15 +08:00
Junyan Qin
194b2d9814 feat: supports more embedding providers 2025-07-24 22:03:20 +08:00
Junyan Qin
7aed5cf1ed feat: ollama embeddings models 2025-07-24 10:36:32 +08:00
Junyan Qin
abc88c4979 doc: update README 2025-07-23 18:53:15 +08:00
gaord
6754666845 feat(wechatpad): add support for @everyone and unify message dispatch handling (#1588)
Add support for the AtAll component in the message converter, converting @everyone mentions into the expected format. Also handle the @everyone case uniformly during message dispatch so notifications are delivered correctly.
2025-07-23 15:22:04 +08:00
Junyan Qin
08e6f46b19 fix(deps): react-focus-scope pkg bug 2025-07-22 11:05:16 +08:00
Junyan Qin
1497fdae56 doc(README): adjust structure 2025-07-20 22:10:32 +08:00
Junyan Qin
10a3cb40e1 perf(retrieve): ui 2025-07-20 17:57:33 +08:00
devin-ai-integration[bot]
dd1ec15a39 feat: add knowledge base retrieve test tab with Card-based UI (#1583)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-07-20 17:56:46 +08:00
devin-ai-integration[bot]
ea51cec57e feat: add pipeline sorting functionality with three sort options (#1582)
* feat: add pipeline sorting functionality with three sort options

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: ui

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-07-20 17:23:30 +08:00
Junyan Qin
28ce986a8c chore: release v4.1.0 2025-07-20 12:32:06 +08:00
Junyan Qin
489b145606 doc: update README 2025-07-20 12:30:41 +08:00
Junyan Qin (Chin)
5e92bffaa6 Merge pull request #1581 from langbot-app/RockChinQ-patch-1
Update README.md
2025-07-19 23:09:53 +08:00
Junyan Qin (Chin)
277d1b0e30 feat: rag engine (#1492)
* feat: add embeddings model management (#1461)

* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add knowledge page

* feat: add api for uploading files

* kb

* delete ap

* feat: add functions

* fix: modify rag database

* feat: add embeddings model management (#1461)

* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add knowledge page

* feat: add api for uploading files

* feat: add sidebar for rag and related i18n

* feat: add knowledge base page

* feat: basic entities of kb

* feat: complete support_type for 302ai and compshare requester

* perf: format

* perf: ruff check --fix

* feat: basic definition

* feat: rag fe framework

* perf: en comments

* feat: modify the rag.py

* perf: definitions

* fix: success method bad params

* fix: bugs

* fix: bug

* feat: kb dialog action

* fix: create knowledge base issue

* fix: kb get api format

* fix: kb get api not contains model uuid

* fix: api bug

* fix: the fucking logger

* feat(fe): component for available apis

* fix: embedding and chunking

* fix: ensure File.status is set correctly after storing data to avoid null values

* fix: add functions for deleting files

* feat(fe): file uploading

* perf: adjust ui

* fix: file being deleted twice

* feat(fe): complete kb ui

* fix: ui bugs

* fix: no longer require Query for invoking embedding

* feat: add embedder

* fix: delete embedding models file

* chore: stash

* chore: stash

* feat(rag): make embedding and retrieving available

* feat(rag): all APIs ok

* fix: delete utils

* feat: rag pipeline backend

* feat: combine kb with pipeline

* fix: .md file parse failed

* perf: debug output

* feat: add functions for frontend of kb

* perf(rag): ui and related apis

* perf(rag): use badge show doc status

* perf: open kb detail dialog after creating

* fix: linter error

* deps: remove sentence-transformers

* perf: description of default pipeline

* feat: add html and epub

* chore: no longer supports epub

---------

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: WangCham <651122857@qq.com>
2025-07-19 22:06:11 +08:00
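The RAG engine this PR introduces (embed documents, embed the query, rank chunks by similarity) can be sketched minimally as follows. All names here are illustrative assumptions, not LangBot's actual API; a real engine would call an embedding model instead of using precomputed vectors.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, chunks, top_k=2):
    # chunks: list of (text, embedding) pairs; return the top_k most similar texts.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

chunks = [
    ("about cats", [1.0, 0.0]),
    ("about dogs", [0.0, 1.0]),
    ("cats and dogs", [0.7, 0.7]),
]
print(retrieve([1.0, 0.1], chunks, top_k=2))  # → ['about cats', 'cats and dogs']
```

In the PR itself the retrieved chunks are then fed into the pipeline as context for the model; this sketch covers only the ranking step.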
Junyan Qin
13f4ed8d2c chore: no longer supports epub 2025-07-19 21:56:50 +08:00
WangCham
91cb5ca36c feat: add html and epub 2025-07-19 19:57:57 +08:00
TwperBody
c34d54a6cb Fixed a bug where some Windows systems failed to recognize spaces. (#1577)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx
2025-07-19 16:48:15 +08:00
TwperBody
2d1737da1f Optimize plugin display (#1578)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update package.json
2025-07-19 16:47:34 +08:00
Junyan Qin
a1b8b9d47b perf: description of default pipeline 2025-07-18 18:57:42 +08:00
Junyan Qin
8df14bf9d9 deps: remove sentence-transformers 2025-07-18 18:46:07 +08:00
Junyan Qin
c98d265a1e fix: linter error 2025-07-18 17:52:24 +08:00
Junyan Qin
4e6782a6b7 perf: open kb detail dialog after creating 2025-07-18 16:52:54 +08:00
Junyan Qin
5541e9e6d0 perf(rag): use badge show doc status 2025-07-18 16:38:55 +08:00
gaord
878ab0ef6b fix(wechatpad): fix @bot messages not being parsed correctly when @everyone is present (#1575) 2025-07-18 12:52:30 +08:00
Junyan Qin
b61bd36b14 perf(rag): ui and related apis 2025-07-18 00:45:38 +08:00
Junyan Qin (Chin)
bb672d8f46 Merge branch 'master' into feat/rag 2025-07-18 00:45:24 +08:00
WangCham
ba1a26543b Merge branch 'feat/rag' of github.com:RockChinQ/LangBot into feat/rag 2025-07-17 23:57:52 +08:00
WangCham
cb868ee7b2 feat: add functions for frontend of kb 2025-07-17 23:52:46 +08:00
Junyan Qin
5dd5cb12ad perf: debug output 2025-07-17 23:34:35 +08:00
Junyan Qin
2dfa83ff22 fix: .md file parse failed 2025-07-17 23:22:20 +08:00
Junyan Qin
27bb4e1253 feat: combine kb with pipeline 2025-07-17 23:15:13 +08:00
WangCham
45afdbdfbb feat: rag pipeline backend 2025-07-17 15:05:11 +08:00
WangCham
4cbbe9e000 fix: delete utils 2025-07-16 23:25:12 +08:00
Junyan Qin
333ec346ef feat(rag): all APIs ok 2025-07-16 22:15:03 +08:00
Junyan Qin
2f2db4d445 feat(rag): make embedding and retrieving available 2025-07-16 21:17:18 +08:00
Junyan Qin
fdc79b8d77 chore: release v4.0.9 2025-07-16 11:39:15 +08:00
Junyan Qin
f244795e57 fix: rename to '302.AI' 2025-07-16 11:36:57 +08:00
Junyan Qin
5a2aa19d0f feat(aiocqhttp): no longer download files for now 2025-07-16 11:36:01 +08:00
Junyan Qin
f731115805 chore: stash 2025-07-16 11:31:55 +08:00
Junyan Qin
67bc065ccd chore: stash 2025-07-15 22:09:10 +08:00
Junyan Qin
81eb92646f doc: perf README_JP 2025-07-14 11:22:59 +08:00
Junyan Qin
019a9317e9 doc: perf README 2025-07-14 11:17:58 +08:00
WangCham
199164fc4b fix: delete embedding models file 2025-07-13 23:12:08 +08:00
WangCham
c9c26213df Merge branch 'feat/rag' of github.com:RockChinQ/LangBot into feat/rag 2025-07-13 23:09:41 +08:00
WangCham
b7c57104c4 feat: add embedder 2025-07-13 23:04:03 +08:00
TwperBody
858cfd8d5a Update package.json (#1570)
Makes environment-variable creation compatible with Windows environments
2025-07-12 22:31:30 +08:00
Junyan Qin
cbe297dc59 fix: no longer require Query for invoking embedding 2025-07-12 21:23:19 +08:00
Junyan Qin
de76fed25a fix: ui bugs 2025-07-12 18:12:53 +08:00
Junyan Qin
a10e61735d feat(fe): complete kb ui 2025-07-12 18:00:54 +08:00
Junyan Qin
1ef0193028 fix: file being deleted twice 2025-07-12 17:47:53 +08:00
Junyan Qin
1e85d02ae4 perf: adjust ui 2025-07-12 17:29:39 +08:00
Junyan Qin
d78a329aa9 feat(fe): file uploading 2025-07-12 17:15:07 +08:00
Junyan Qin
bfdf238db5 chore: use new social image 2025-07-12 11:44:08 +08:00
WangCham
234b61e2f8 fix: add functions for deleting files 2025-07-12 01:37:44 +08:00
WangCham
9f43097361 fix: ensure File.status is set correctly after storing data to avoid null values 2025-07-12 01:21:02 +08:00
WangCham
f395cac893 fix: embedding and chunking 2025-07-12 01:07:49 +08:00
Junyan Qin
fe122281fd feat(fe): component for available apis 2025-07-11 21:40:42 +08:00
Junyan Qin
6d788cadbc fix: the fucking logger 2025-07-11 21:37:31 +08:00
Junyan Qin
a79a22a74d fix: api bug 2025-07-11 21:30:47 +08:00
Junyan Qin
2ed3b68790 fix: kb get api not contains model uuid 2025-07-11 20:58:51 +08:00
Junyan Qin
bd9331ce62 fix: kb get api format 2025-07-11 20:57:09 +08:00
WangCham
14c161b733 fix: create knowledge base issue 2025-07-11 18:14:03 +08:00
Junyan Qin
815cdf8b4a feat: kb dialog action 2025-07-11 17:22:43 +08:00
Junyan Qin
7d5503dab2 fix: bug 2025-07-11 16:49:55 +08:00
Junyan Qin
9ba1ad5bd3 fix: bugs 2025-07-11 16:38:08 +08:00
Junyan Qin
367d04d0f0 fix: success method bad params 2025-07-11 11:28:43 +08:00
Junyan Qin
75c3ddde19 perf: definitions 2025-07-10 16:45:59 +08:00
Junyan Qin
c6e77e42be chore: switch some comments to en 2025-07-10 11:09:33 +08:00
Junyan Qin
4d0a39eb65 chore: switch comments to en 2025-07-10 11:01:16 +08:00
WangCham
ac03a2dceb feat: modify the rag.py 2025-07-09 22:09:46 +08:00
Junyan Qin
56248c350f chore: repo transferred 2025-07-07 19:00:55 +08:00
gaord
244aaf6e20 feat: keep @user-id content in chat messages (#1564)
* converters could use the application logger

* keep @targets in message for some plugins may need it to their functionality

* fix: take wxid from config

fix: parameter-passing issue; wxid can now be taken directly from config

---------

Co-authored-by: fdc310 <82008029+fdc310@users.noreply.github.com>
2025-07-07 10:28:12 +08:00
Junyan Qin
cd25340826 perf: en comments 2025-07-06 16:08:02 +08:00
Junyan Qin
ebd8e014c6 feat: rag fe framework 2025-07-06 15:52:53 +08:00
Junyan Qin
a0b7d759ac chore: release v4.0.8.1 2025-07-06 10:46:32 +08:00
Junyan Qin
09884d3152 revert: 0203faa 2025-07-06 10:34:24 +08:00
Junyan Qin
bef0d73e83 feat: basic definition 2025-07-06 10:25:28 +08:00
Junyan Qin
8d28ace252 perf: ruff check --fix 2025-07-05 21:56:54 +08:00
Junyan Qin
39c062f73e perf: format 2025-07-05 21:56:17 +08:00
Junyan Qin
0e5c9e19e1 feat: complete support_type for 302ai and compshare requester 2025-07-05 21:03:14 +08:00
Matthew_Astral
01f2ef5694 feat: new discord adapter (#1563) 2025-07-05 20:51:04 +08:00
Junyan Qin
c5b62b6ba3 Merge remote-tracking branch 'wangcham/feat/rag' into feat/rag 2025-07-05 20:16:37 +08:00
Junyan Qin
bbf583ddb5 feat: basic entities of kb 2025-07-05 20:07:27 +08:00
Junyan Qin
22ef1a399e feat: add knowledge base page 2025-07-05 20:07:27 +08:00
Junyan Qin
0733f8878f feat: add sidebar for rag and related i18n 2025-07-05 20:07:27 +08:00
Junyan Qin
f36a61dbb2 feat: add api for uploading files 2025-07-05 20:07:15 +08:00
Junyan Qin
6d8936bd74 feat: add knowledge page 2025-07-05 20:07:15 +08:00
devin-ai-integration[bot]
d2b93b3296 feat: add embeddings model management (#1461)
* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>
2025-07-05 20:07:15 +08:00
WangCham
552fee9bac fix: modify rag database 2025-07-05 18:58:17 +08:00
WangCham
34fe8b324d feat: add functions 2025-07-05 18:58:16 +08:00
WangCham
c4671fbf1c delete ap 2025-07-05 18:58:16 +08:00
WangCham
4bcc06c955 kb 2025-07-05 18:58:16 +08:00
Junyan Qin
348f6d9eaa feat: add api for uploading files 2025-07-05 18:57:24 +08:00
Junyan Qin
157ffdc34c feat: add knowledge page 2025-07-05 18:57:24 +08:00
devin-ai-integration[bot]
c81d5a1a49 feat: add embeddings model management (#1461)
* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>
2025-07-05 18:57:23 +08:00
Junyan Qin (Chin)
a01706d163 Feat/reset password (#1566)
* feat: reset password with recovery key

* perf: formatting and multi language
2025-07-05 17:36:35 +08:00
Junyan Qin
a8d03c98dc doc: replace comshare link 2025-07-04 11:37:31 +08:00
Junyan Qin
3f0153ea4d doc: fix incorrect 302.AI name 2025-07-03 17:26:17 +08:00
Junyan Qin
60b50a35f1 chore: release v4.0.8 2025-07-03 15:07:19 +08:00
Junyan Qin (Chin)
abd02f04af Feat/compshare requester (#1561)
* feat: add compshare requester

* doc: add compshare to README
2025-07-03 15:04:02 +08:00
Matthew_Astral
14411a8af6 Add Discord platform adapter implementation (#1560)
- Implement DiscordMessageConverter for message conversion
- Support image handling from base64, URL, and file paths
- Add DiscordEventConverter for event conversion
- Implement DiscordAdapter for Discord bot integration
- Support DM and TextChannel message handling
2025-07-02 09:48:49 +08:00
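The Discord converter above handles images from three sources (base64, URL, and file paths), which implies a normalization step like the following. This is an illustrative sketch, not the adapter's actual code; the function name and the data-URI convention are assumptions.

```python
import base64

def image_to_bytes(source: str) -> bytes:
    """Normalize an image reference (data URI, URL, or file path) into raw bytes."""
    if source.startswith("data:"):
        # Base64 payload embedded in a data URI, e.g. "data:image/png;base64,...".
        _, payload = source.split(",", 1)
        return base64.b64decode(payload)
    if source.startswith(("http://", "https://")):
        # A real adapter would download the image here; elided in this sketch.
        raise NotImplementedError("fetch with an HTTP client")
    # Anything else is treated as a local file path.
    with open(source, "rb") as f:
        return f.read()
```

Once every source is reduced to bytes, the same upload path can serve DMs and TextChannels alike.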
Junyan Qin
896fef8cce perf: make launch notes show async 2025-06-30 21:34:02 +08:00
Junyan Qin
89c1972abe perf: skip broken models and bots in bootstrap 2025-06-30 21:29:38 +08:00
Junyan Qin
1627d04958 fix: bad import 2025-06-30 21:13:14 +08:00
Junyan Qin (Chin)
c959c99e45 Feat/302 ai (#1558)
* feat: add 302.AI requester

* doc: add 302.AI to README
2025-06-30 21:05:32 +08:00
Junyan Qin
0203faa8c1 fix: dingtalk adapter initializer blocks boot (#1544) 2025-06-28 22:06:12 +08:00
Junyan Qin (Chin)
35f76cb7ae Perf/combine entity dialogs (#1555)
* feat: combine bot settings and bot log dialogs

* perf: dialog style when creating bot

* perf: bot creation dialog

* feat: combine pipeline dialogs

* perf: ui

* perf: move buttons

* perf: ui layout in pipeline detail dialog

* perf: remove debug button from pipeline card

* perf: open pipeline dialog after creating

* perf: placeholder in send input

* perf: no close dialog when save done

* fix: linter errors
2025-06-28 21:50:51 +08:00
fdc310
c34232a26c fix: add wechatpad image (#1551)
* add wechatpad image

* add wechatpad image

---------

Co-authored-by: fdc <you@example.com>
2025-06-27 15:41:21 +08:00
简律纯
b43dd95dc6 chore(python): Delete .python-version (#1549) 2025-06-25 22:47:02 +08:00
Junyan Qin
5331ba83d7 chore: update description of lark bot name field 2025-06-25 10:57:44 +08:00
fdc310
a2038b86f1 feat: add onebotv11 face send and receive, though some faces have no name (#1543)
* feat: add onebotv11 face send and receive, though some faces have no name

* add face annotation

* add face_code_dict

* some face images can't be downloaded, so skip those faces

* fix: pass the face_id to face
2025-06-19 10:38:02 +08:00
Junyan Qin
eb066f3485 revert: 3cbc823 2025-06-18 15:16:55 +08:00
Junyan Qin
bf98b82cf2 chore: release v4.0.7 2025-06-18 13:10:20 +08:00
Junyan Qin (Chin)
edd70b943d Update bug-report_en.yml 2025-06-18 09:48:42 +08:00
Junyan Qin
3cbc823085 doc: make en README as default 2025-06-17 22:51:51 +08:00
Sheldon.li
48becf2c51 refactor(ContentFilterStage): Add logic for handling empty messages (#1525)
- In the ContentFilterStage, add logic for handling empty messages so the pipeline continues processing even when the message is empty.
- This change makes content filtering more robust, preventing issues caused by empty messages.
- It addresses a problem where, when someone is @-mentioned in a group chat with no other message content, the Source-type messages in the message chain were lost.
2025-06-17 22:12:55 +08:00
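The fix this commit describes (pass empty messages through the filter instead of dropping them, so the rest of the chain survives) amounts to an early pass-through. The stage and result names below are illustrative, not LangBot's actual classes.

```python
class StageResult:
    CONTINUE = "continue"
    INTERRUPT = "interrupt"

def content_filter_stage(message: str, banned_words=("spam",)) -> str:
    # An empty message (e.g. a bare @mention in a group chat) carries nothing
    # to filter, so let the pipeline continue instead of dropping the chain.
    if not message.strip():
        return StageResult.CONTINUE
    if any(word in message for word in banned_words):
        return StageResult.INTERRUPT
    return StageResult.CONTINUE
```

Without the early return, an empty string could fall into the filtering path and cause the whole message chain, including its Source elements, to be discarded.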
devin-ai-integration[bot]
56c686cd5a feat: add Japanese (ja-JP) language support (#1537)
* feat: add Japanese (ja-JP) language support

- Add comprehensive Japanese translation file (ja-JP.ts)
- Update i18n configuration to include Japanese locale
- Add Japanese language option to login and register page dropdowns
- Implement Japanese language detection and switching logic
- Maintain fallback to en-US for missing translations in flexible components

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: ui for ja-JP

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-16 21:30:57 +08:00
Junyan Qin (Chin)
208273c0dd Update README.md 2025-06-16 21:01:11 +08:00
fdc310
2ff7ca3025 feat: add file URL support and onebotv11 (napcat) file sending, saving files locally (#1533)
* feat: add file URL support and onebotv11 (napcat) file sending, saving files locally

* del print
2025-06-15 17:22:35 +08:00
fdc310
61a2361730 feat: add new message type WeChatFile; WeChat files are accepted and transmitted in base64 format (#1531) 2025-06-15 17:17:08 +08:00
Junyan Qin
f80f997a89 chore: update version field in pyproject.toml 2025-06-11 10:24:18 +08:00
Junyan Qin
18529a42c1 chore: release v4.0.6 2025-06-11 10:23:46 +08:00
Junyan Qin (Chin)
3e707b4b6e feat: reset all associated session after bot and pipeline modified (#1517) 2025-06-09 21:50:08 +08:00
Junyan Qin
62f0a938a8 chore: remove legacy test in fe 2025-06-09 17:56:37 +08:00
Junyan Qin
ad3a163d82 fix: ruff linter error in libs 2025-06-09 17:56:21 +08:00
Junyan Qin
f5a4503610 perf: add text comment on bot log button 2025-06-09 15:27:17 +08:00
Junyan Qin
ec012cf5ed doc: update README 2025-06-09 10:20:11 +08:00
Junyan Qin
d70eceb72c fix(DebugDialog): \n not supported 2025-06-08 21:41:44 +08:00
devin-ai-integration[bot]
f271608114 feat: add dynamic base URL configuration using environment variables (#1511)
- Replace hardcoded base URL in HttpClient.ts with environment variable support
- Add NEXT_PUBLIC_API_BASE_URL environment variable for dynamic configuration
- Add dev:local script for development with localhost:5300 backend
- Development: uses localhost:5300, Production: uses / (relative path)
- Eliminates need for manual code changes when switching environments

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-08 17:44:40 +08:00
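The pattern in this commit, an environment variable with a relative-path fallback, looks roughly like this. The original lives in TypeScript (`HttpClient.ts`); this is a Python sketch of the same idea, and the function name is an assumption.

```python
import os

def api_base_url() -> str:
    # Prefer an explicit environment override (dev points at a local backend);
    # fall back to a relative path so production serves from the same origin.
    return os.environ.get("NEXT_PUBLIC_API_BASE_URL", "/")

os.environ["NEXT_PUBLIC_API_BASE_URL"] = "http://localhost:5300"
print(api_base_url())  # → http://localhost:5300
```

The benefit is exactly what the bullet list states: switching between local development and production requires no code change, only a different environment.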
Junyan Qin
793f0a9c10 fix: base url 2025-06-08 17:34:32 +08:00
devin-ai-integration[bot]
4f2ec195fc feat: add WebChat adapter for pipeline debugging (#1510)
* feat: add WebChat adapter for pipeline debugging

- Create WebChatAdapter for handling debug messages in pipeline testing
- Add HTTP API endpoints for debug message sending and retrieval
- Implement frontend debug dialog with session switching (private/group chat)
- Add Chinese i18n translations for debug interface
- Auto-create default WebChat bot during database initialization
- Support fixed session IDs: webchatperson and webchatgroup for testing

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: ui for webchat

* feat: complete webchat backend

* feat: core chat apis

* perf: button style in pipeline card

* perf: log btn in bot card

* perf: webchat entities definition

* fix: bugs

* perf: web chat

* perf: dialog styles

* perf: styles

* perf: styles

* fix: group invalid in webchat

* perf: simulate real im message

* perf: group timeout toast

* feat(webchat): add supports for mentioning bot in group

* perf(webchat): at component styles

* perf: at badge display in message

* fix: linter errors

* fix: webchat was listed on adapter list

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-08 15:34:26 +08:00
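The fixed debug session IDs this PR mentions (webchatperson and webchatgroup) imply a simple routing rule from session ID to chat kind, sketched here with illustrative names:

```python
def debug_session_kind(session_id: str) -> str:
    # Map the two fixed WebChat debug session IDs to a chat kind.
    kinds = {"webchatperson": "private", "webchatgroup": "group"}
    if session_id not in kinds:
        raise ValueError(f"unknown debug session: {session_id}")
    return kinds[session_id]
```

Fixing the IDs means the debug dialog can switch between private and group simulation without creating real sessions.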
Junyan Qin (Chin)
e6bc009414 feat: add i18n support for initialization page and fix plugin loading text (#1505)
* feat: add i18n support for initialization page and fix plugin loading text

- Add language selector to register/initialization page with Chinese and English options
- Add register section translations to both zh-Hans.ts and en-US.ts
- Replace hardcoded Chinese texts in register page with i18n translation calls
- Fix hardcoded '加载中...' text in plugin configuration dialog to use t('plugins.loading')
- Follow existing login page pattern for language selector implementation
- Maintain consistent UI/UX design with proper language switching functionality

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: language selecting logic

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-06 21:29:36 +08:00
Junyan Qin
20dc8fb5ab perf: language selecting logic 2025-06-06 21:27:08 +08:00
Devin AI
9a71edfeb0 feat: add i18n support for initialization page and fix plugin loading text
- Add language selector to register/initialization page with Chinese and English options
- Add register section translations to both zh-Hans.ts and en-US.ts
- Replace hardcoded Chinese texts in register page with i18n translation calls
- Fix hardcoded '加载中...' text in plugin configuration dialog to use t('plugins.loading')
- Follow existing login page pattern for language selector implementation
- Maintain consistent UI/UX design with proper language switching functionality

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-06 10:50:31 +00:00
Guanchao Wang
fe3fd664af Fix/slack image (#1501)
* fix: dingtalk adapters couldn't handle images

* fix: slack adapter couldn't put the image in logger
2025-06-06 10:04:00 +08:00
Guanchao Wang
6402755ac6 fix: dingtalk adapters couldn't handle images (#1500) 2025-06-05 23:37:58 +08:00
Junyan Qin
ac8fe049de fix: uv removes itself 2025-06-05 11:12:04 +08:00
242 changed files with 11454 additions and 1506 deletions


@@ -19,7 +19,7 @@ body:
- type: textarea
attributes:
label: Reproduction steps
description: How to reproduce this problem, the more detailed the better; the more information you provide, the faster we will solve the problem.
description: How to reproduce this problem, the more detailed the better; the more information you provide, the faster we will solve the problem. 【注意】请务必认真填写此部分,若不提供完整信息(如只有一两句话的概括),我们将不会回复!
validations:
required: false
- type: textarea


@@ -9,7 +9,7 @@
*请在方括号间写`x`以打勾 / Please tick the box with `x`*
- [ ] 阅读仓库[贡献指引](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)了吗? / Have you read the [contribution guide](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)?
- [ ] 阅读仓库[贡献指引](https://github.com/langbot-app/LangBot/blob/master/CONTRIBUTING.md)了吗? / Have you read the [contribution guide](https://github.com/langbot-app/LangBot/blob/master/CONTRIBUTING.md)?
- [ ] 与项目所有者沟通过了吗? / Have you communicated with the project maintainer?
- [ ] 我确定已自行测试所作的更改,确保功能符合预期。 / I have tested the changes and ensured they work as expected.

.gitignore (vendored): 3 lines changed

@@ -42,4 +42,5 @@ botpy.log*
test.py
/web_ui
.venv/
uv.lock
uv.lock
/test


@@ -1 +0,0 @@
3.12


@@ -1,52 +1,38 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
<img src="https://docs.langbot.app/social_zh.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
简体中文 / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
<a href="https://langbot.app">项目主页</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文档</a>
<a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">插件介绍</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
<div align="center">
😎高稳定、🧩支持扩展、🦄多模态 - 大模型原生即时通信机器人平台🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
[简体中文](README.md) / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
</div>
</p>
> 近期 GeWeChat 项目归档,我们已经适配 WeChatPad 协议端,个微恢复正常使用,详情请查看文档
## ✨ 特性
- 💬 大模型对话、Agent支持多种大模型适配群聊和私聊具有多轮对话、工具调用、多模态能力并深度适配 [Dify](https://dify.ai)。目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
LangBot 是一个开源的大语言模型原生即时通信机器人开发平台,旨在提供开箱即用的 IM 机器人开发体验,具有 Agent、RAG、MCP 等多种 LLM 应用功能,适配全球主流即时通信平台,并提供丰富的 API 接口,支持自定义开发
## 📦 开始使用
#### Docker Compose 部署
```bash
git clone https://github.com/RockChinQ/LangBot
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
@@ -73,23 +59,25 @@ docker compose up -d
直接使用发行版运行,查看文档[手动部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
## 📸 效果展示
## 😎 保持更新
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="450px"/>
点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="450px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="450px"/>
## ✨ 特性
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="450px"/>
- 💬 大模型对话、Agent支持多种大模型适配群聊和私聊具有多轮对话、工具调用、多模态能力自带 RAG知识库实现并深度适配 [Dify](https://dify.ai)。
- 🤖 多平台支持:目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
<img alt="回复效果(带有联网插件)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
详细规格特性请访问[文档](https://docs.langbot.app/zh/insight/features.html)。
- WebUI Demo: https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意:仅展示webui效果,公开环境,请不要在其中填入您的任何敏感信息。
## 🔌 组件兼容性
或访问 demo 环境:https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意:仅展示 WebUI 效果,公开环境,请不要在其中填入您的任何敏感信息。
### 消息平台
@@ -97,19 +85,14 @@ docker compose up -d
| --- | --- | --- |
| QQ 个人号 | ✅ | QQ 个人号私聊、群聊 |
| QQ 官方机器人 | ✅ | QQ 官方机器人,支持频道、私聊、群聊 |
| 企业微信 | ✅ | |
| 微信 | ✅ | |
| 企微对外客服 | ✅ | |
| 个人微信 | ✅ | |
| 微信公众号 | ✅ | |
| 飞书 | ✅ | |
| 钉钉 | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 正在开发中
### 大模型能力
@@ -121,7 +104,9 @@ docker compose up -d
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智谱AI](https://open.bigmodel.cn/) | ✅ | |
| [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
| [Ollama](https://ollama.com/) | ✅ | 本地大模型运行平台 |
@@ -149,8 +134,8 @@ docker compose up -d
## 😘 社区贡献
感谢以下[代码贡献者](https://github.com/RockChinQ/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
感谢以下[代码贡献者](https://github.com/langbot-app/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>


@@ -1,48 +1,34 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
<img src="https://docs.langbot.app/social_en.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[简体中文](README.md) / English / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
<a href="https://langbot.app">Home</a>
<a href="https://docs.langbot.app/en/insight/guide.html">Deployment</a>
<a href="https://docs.langbot.app/en/plugin/plugin-intro.html">Plugin</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
<div align="center">
😎High Stability, 🧩Extension Supported, 🦄Multi-modal - LLM Native Instant Messaging Bot Platform🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[简体中文](README.md) / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
</div>
</p>
## ✨ Features
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Deeply integrates with [Dify](https://dify.ai). Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 [New] Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
LangBot is an open-source LLM native instant messaging robot development platform, aiming to provide out-of-the-box IM robot development experience, with Agent, RAG, MCP and other LLM application functions, adapting to global instant messaging platforms, and providing rich API interfaces, supporting custom development.
## 📦 Getting Started
#### Docker Compose Deployment
```bash
git clone https://github.com/RockChinQ/LangBot
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
@@ -69,23 +55,25 @@ Community contributed Zeabur template.
Directly use the released version to run, see the [Manual Deployment](https://docs.langbot.app/en/deploy/langbot/manual.html) documentation.
## 📸 Demo
## 😎 Stay Ahead
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
## ✨ Features
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Built-in RAG (knowledge base) implementation, and deeply integrates with [Dify](https://dify.ai).
- 🤖 Multi-platform Support: Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
<img alt="Reply Effect (with Internet Plugin)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
For more detailed specifications, please refer to the [documentation](https://docs.langbot.app/en/insight/features.html).
- WebUI Demo: https://demo.langbot.dev/
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: Only the WebUI effect is shown, please do not fill in any sensitive information in the public environment.
## 🔌 Component Compatibility
Or visit the demo environment: https://demo.langbot.dev/
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: For WebUI demo only, please do not fill in any sensitive information in the public environment.
### Message Platform
@@ -101,10 +89,6 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: In development
### LLMs
@@ -116,8 +100,10 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | LLM and GPU resource platform |
| [Dify](https://dify.ai) | ✅ | LLMOps platform |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLM gateway(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |
| [LMStudio](https://lmstudio.ai/) | ✅ | Local LLM running platform |
@@ -130,8 +116,8 @@ Directly use the released version to run, see the [Manual Deployment](https://do
## 🤝 Community Contribution
Thank you for the following [code contributors](https://github.com/RockChinQ/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
Thank you for the following [code contributors](https://github.com/langbot-app/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>


@@ -1,47 +1,34 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
<img src="https://docs.langbot.app/social_en.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[简体中文](README.md) / [English](README_EN.md) / 日本語 / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
<a href="https://langbot.app">ホーム</a>
<a href="https://docs.langbot.app/en/insight/guide.html">デプロイ</a>
<a href="https://docs.langbot.app/en/plugin/plugin-intro.html">プラグイン</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">プラグインの提出</a>
<div align="center">
😎高い安定性、🧩拡張サポート、🦄マルチモーダル - LLMネイティブインスタントメッセージングボットプラットフォーム🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[简体中文](README.md) / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">プラグインの提出</a>
</div>
</p>
## ✨ 機能
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル機能をサポート。 [Dify](https://dify.ai) と深く統合。現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。適配 Anthropic [MCP プロトコル](https://modelcontextprotocol.io/);豊富なエコシステム、現在数百のプラグインが存在。
- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
LangBot は、エージェント、RAG、MCP などの LLM アプリケーション機能を備えた、オープンソースの LLM ネイティブのインスタントメッセージングロボット開発プラットフォームです。世界中のインスタントメッセージングプラットフォームに適応し、豊富な API インターフェースを提供し、カスタム開発をサポートします。
## 📦 始め方
#### Docker Compose デプロイ
```bash
git clone https://github.com/RockChinQ/LangBot
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
@@ -50,7 +37,7 @@ http://localhost:5300 にアクセスして使用を開始します。
詳細なドキュメントは[Dockerデプロイ](https://docs.langbot.app/en/deploy/langbot/docker.html)を参照してください。
#### BTPanelでのワンクリックデプロイ
#### Panelでのワンクリックデプロイ
LangBotはBTPanelにリストされています。BTPanelをインストールしている場合は、[ドキュメント](https://docs.langbot.app/en/deploy/langbot/one-click/bt.html)を使用して使用できます。
@@ -68,23 +55,25 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
リリースバージョンを直接使用して実行します。[手動デプロイ](https://docs.langbot.app/en/deploy/langbot/manual.html)のドキュメントを参照してください。
## 📸 デモ
## 😎 最新情報を入手
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
## ✨ 機能
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル機能をサポート、RAG知識ベースを組み込み、[Dify](https://dify.ai) と深く統合。
- 🤖 多プラットフォーム対応: 現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。適配 Anthropic [MCP プロトコル](https://modelcontextprotocol.io/);豊富なエコシステム、現在数百のプラグインが存在。
- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
<img alt="返信効果(インターネットプラグイン付き)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
詳細な仕様については、[ドキュメント](https://docs.langbot.app/en/insight/features.html)を参照してください。
- WebUIデモ: https://demo.langbot.dev/
- ログイン情報: メール: `demo@langbot.app` パスワード: `langbot123456`
- 注意: WebUIの効果のみを示しています。公開環境では機密情報を入力しないでください。
## 🔌 コンポーネントの互換性
または、デモ環境にアクセスしてください: https://demo.langbot.dev/
- ログイン情報: メール: `demo@langbot.app` パスワード: `langbot123456`
- 注意: WebUI のデモンストレーションのみの場合、公開環境では機密情報を入力しないでください。
### メッセージプラットフォーム
@@ -100,10 +89,6 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 開発中
### LLMs
@@ -115,7 +100,9 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLMゲートウェイ(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOpsプラットフォーム |
| [Ollama](https://ollama.com/) | ✅ | ローカルLLM実行プラットフォーム |
@@ -129,8 +116,8 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
## 🤝 コミュニティ貢献
LangBot への貢献に対して、以下の [コード貢献者](https://github.com/RockChinQ/LangBot/graphs/contributors) とコミュニティの他のメンバーに感謝します。
LangBot への貢献に対して、以下の [コード貢献者](https://github.com/langbot-app/LangBot/graphs/contributors) とコミュニティの他のメンバーに感謝します。
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>


@@ -1,4 +1,4 @@
from v1 import client
from v1 import client # type: ignore
import asyncio
@@ -8,19 +8,13 @@ import json
class TestDifyClient:
async def test_chat_messages(self):
cln = client.AsyncDifyServiceClient(
api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL')
)
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
async for chunk in cln.chat_messages(
inputs={}, query='调用工具查看现在几点?', user='test'
):
async for chunk in cln.chat_messages(inputs={}, query='调用工具查看现在几点?', user='test'):
print(json.dumps(chunk, ensure_ascii=False, indent=4))
async def test_upload_file(self):
cln = client.AsyncDifyServiceClient(
api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL')
)
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
file_bytes = open('img.png', 'rb').read()
@@ -32,9 +26,7 @@ class TestDifyClient:
print(json.dumps(resp, ensure_ascii=False, indent=4))
async def test_workflow_run(self):
cln = client.AsyncDifyServiceClient(
api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL')
)
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
# resp = await cln.workflow_run(inputs={}, user="test")
# # print(json.dumps(resp, ensure_ascii=False, indent=4))


@@ -1,5 +1,5 @@
import asyncio
import dingtalk_stream
import dingtalk_stream # type: ignore
from dingtalk_stream import AckMessage
@@ -27,9 +27,3 @@ class EchoTextHandler(dingtalk_stream.ChatbotHandler):
await asyncio.sleep(0.1) # 异步等待,避免阻塞
return self.incoming_message
async def get_dingtalk_client(client_id, client_secret):
from api import DingTalkClient # 延迟导入,避免循环导入
return DingTalkClient(client_id, client_secret)


@@ -2,7 +2,7 @@ import base64
import json
import time
from typing import Callable
import dingtalk_stream
import dingtalk_stream # type: ignore
from .EchoHandler import EchoTextHandler
from .dingtalkevent import DingTalkEvent
import httpx
@@ -49,8 +49,8 @@ class DingTalkClient:
self.access_token = response_data.get('accessToken')
expires_in = int(response_data.get('expireIn', 7200))
self.access_token_expiry_time = time.time() + expires_in - 60
except Exception as e:
await self.logger.error("failed to get access token in dingtalk")
except Exception:
await self.logger.error('failed to get access token in dingtalk')
async def is_token_expired(self):
"""检查token是否过期"""
@@ -75,7 +75,7 @@ class DingTalkClient:
result = response.json()
download_url = result.get('downloadUrl')
else:
await self.logger.error(f"failed to get download url: {response.json()}")
await self.logger.error(f'failed to get download url: {response.json()}')
if download_url:
return await self.download_url_to_base64(download_url)
@@ -86,10 +86,11 @@ class DingTalkClient:
if response.status_code == 200:
file_bytes = response.content
base64_str = base64.b64encode(file_bytes).decode('utf-8') # 返回字符串格式
return base64_str
mime_type = response.headers.get('Content-Type', 'application/octet-stream')
base64_str = base64.b64encode(file_bytes).decode('utf-8')
return f'data:{mime_type};base64,{base64_str}'
else:
await self.logger.error(f"failed to get files: {response.json()}")
await self.logger.error(f'failed to get files: {response.json()}')
async def get_audio_url(self, download_code: str):
if not await self.check_access_token():
@@ -105,7 +106,7 @@ class DingTalkClient:
if download_url:
return await self.download_url_to_base64(download_url)
else:
await self.logger.error(f"failed to get audio: {response.json()}")
await self.logger.error(f'failed to get audio: {response.json()}')
else:
raise Exception(f'Error: {response.status_code}, {response.text}')
@@ -117,12 +118,12 @@ class DingTalkClient:
if event:
await self._handle_message(event)
async def send_message(self, content: str, incoming_message,at:bool):
async def send_message(self, content: str, incoming_message, at: bool):
if self.markdown_card:
if at:
self.EchoTextHandler.reply_markdown(
title='@'+incoming_message.sender_nick+' '+content,
text='@'+incoming_message.sender_nick+' '+content,
title='@' + incoming_message.sender_nick + ' ' + content,
text='@' + incoming_message.sender_nick + ' ' + content,
incoming_message=incoming_message,
)
else:
@@ -192,9 +193,9 @@ class DingTalkClient:
copy_message_data = message_data.copy()
del copy_message_data['IncomingMessage']
# print("message_data:", json.dumps(copy_message_data, indent=4, ensure_ascii=False))
except Exception as e:
except Exception:
if self.logger:
await self.logger.error(f"Error in get_message: {traceback.format_exc()}")
await self.logger.error(f'Error in get_message: {traceback.format_exc()}')
else:
traceback.print_exc()
@@ -223,8 +224,8 @@ class DingTalkClient:
if response.status_code == 200:
return
except Exception:
await self.logger.error(f"failed to send proactive massage to person: {traceback.format_exc()}")
raise Exception(f"failed to send proactive massage to person: {traceback.format_exc()}")
await self.logger.error(f'failed to send proactive massage to person: {traceback.format_exc()}')
raise Exception(f'failed to send proactive massage to person: {traceback.format_exc()}')
async def send_proactive_message_to_group(self, target_id: str, content: str):
if not await self.check_access_token():
@@ -249,8 +250,8 @@ class DingTalkClient:
if response.status_code == 200:
return
except Exception:
await self.logger.error(f"failed to send proactive massage to group: {traceback.format_exc()}")
raise Exception(f"failed to send proactive massage to group: {traceback.format_exc()}")
await self.logger.error(f'failed to send proactive massage to group: {traceback.format_exc()}')
raise Exception(f'failed to send proactive massage to group: {traceback.format_exc()}')
async def start(self):
"""启动 WebSocket 连接,监听消息"""


@@ -1,5 +1,5 @@
from typing import Dict, Any, Optional
import dingtalk_stream
import dingtalk_stream # type: ignore
class DingTalkEvent(dict):


@@ -1,7 +1,7 @@
# 微信公众号的加解密算法与企业微信一样,所以直接使用企业微信的加解密算法文件
import time
import traceback
from ..wecom_api.WXBizMsgCrypt3 import WXBizMsgCrypt
from libs.wecom_api.WXBizMsgCrypt3 import WXBizMsgCrypt
import xml.etree.ElementTree as ET
from quart import Quart, request
import hashlib
@@ -55,7 +55,7 @@ class OAClient:
echostr = request.args.get('echostr', '')
msg_signature = request.args.get('msg_signature', '')
if msg_signature is None:
await self.logger.error(f'msg_signature不在请求体中')
await self.logger.error('msg_signature不在请求体中')
raise Exception('msg_signature不在请求体中')
if request.method == 'GET':
@@ -66,7 +66,7 @@ class OAClient:
if check_signature == signature:
return echostr # 验证成功返回echostr
else:
await self.logger.error(f'拒绝请求')
await self.logger.error('拒绝请求')
raise Exception('拒绝请求')
elif request.method == 'POST':
encryt_msg = await request.data
@@ -75,9 +75,9 @@ class OAClient:
xml_msg = xml_msg.decode('utf-8')
if ret != 0:
await self.logger.error(f'消息解密失败')
await self.logger.error('消息解密失败')
raise Exception('消息解密失败')
message_data = await self.get_message(xml_msg)
if message_data:
event = OAEvent.from_payload(message_data)
@@ -214,7 +214,7 @@ class OAClientForLongerResponse:
msg_signature = request.args.get('msg_signature', '')
if msg_signature is None:
await self.logger.error(f'msg_signature不在请求体中')
await self.logger.error('msg_signature不在请求体中')
raise Exception('msg_signature不在请求体中')
if request.method == 'GET':
@@ -229,9 +229,8 @@ class OAClientForLongerResponse:
xml_msg = xml_msg.decode('utf-8')
if ret != 0:
await self.logger.error(f'消息解密失败')
await self.logger.error('消息解密失败')
raise Exception('消息解密失败')
# 解析 XML
root = ET.fromstring(xml_msg)
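The GET-verification branch above compares a computed `check_signature` with the `signature` query parameter before echoing `echostr`. Assuming the standard WeChat scheme (sort the token, timestamp, and nonce lexicographically, concatenate, then SHA-1), a minimal sketch:

```python
import hashlib


def check_signature(token: str, timestamp: str, nonce: str, signature: str) -> bool:
    # Sort the three strings, join them, and SHA-1 the result; the request
    # is accepted only when the digest matches the `signature` parameter.
    raw = ''.join(sorted([token, timestamp, nonce]))
    return hashlib.sha1(raw.encode('utf-8')).hexdigest() == signature
```

Any mismatch should be rejected, as the handler above does by raising on failed verification.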


@@ -1 +1 @@
from .client import WeChatPadClient
from .client import WeChatPadClient as WeChatPadClient


@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
class ChatRoomApi:
@@ -7,8 +7,6 @@ class ChatRoomApi:
self.token = token
def get_chatroom_member_detail(self, chatroom_name):
params = {
"ChatRoomName": chatroom_name
}
params = {'ChatRoomName': chatroom_name}
url = self.base_url + '/group/GetChatroomMemberDetail'
return post_json(url, token=self.token, data=params)


@@ -1,32 +1,23 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
import httpx
import base64
class DownloadApi:
def __init__(self, base_url, token):
self.base_url = base_url
self.token = token
def send_download(self, aeskey, file_type, file_url):
json_data = {
"AesKey": aeskey,
"FileType": file_type,
"FileURL": file_url
}
url = self.base_url + "/message/SendCdnDownload"
json_data = {'AesKey': aeskey, 'FileType': file_type, 'FileURL': file_url}
url = self.base_url + '/message/SendCdnDownload'
return post_json(url, token=self.token, data=json_data)
def get_msg_voice(self,buf_id, length, new_msgid):
json_data = {
"Bufid": buf_id,
"Length": length,
"NewMsgId": new_msgid,
"ToUserName": ""
}
url = self.base_url + "/message/GetMsgVoice"
def get_msg_voice(self, buf_id, length, new_msgid):
json_data = {'Bufid': buf_id, 'Length': length, 'NewMsgId': new_msgid, 'ToUserName': ''}
url = self.base_url + '/message/GetMsgVoice'
return post_json(url, token=self.token, data=json_data)
async def download_url_to_base64(self, download_url):
async with httpx.AsyncClient() as client:
response = await client.get(download_url)
@@ -36,4 +27,4 @@ class DownloadApi:
base64_str = base64.b64encode(file_bytes).decode('utf-8') # 返回字符串格式
return base64_str
else:
raise Exception('获取文件失败')
raise Exception('获取文件失败')


@@ -1,11 +1,6 @@
from libs.wechatpad_api.util.http_util import post_json,async_request
from typing import List, Dict, Any, Optional
class FriendApi:
"""联系人API类处理所有与联系人相关的操作"""
def __init__(self, base_url: str, token: str):
self.base_url = base_url
self.token = token


@@ -1,37 +1,34 @@
from libs.wechatpad_api.util.http_util import async_request,post_json,get_json
from libs.wechatpad_api.util.http_util import post_json, get_json
class LoginApi:
def __init__(self, base_url: str, token: str = None, admin_key: str = None):
'''
"""
Args:
base_url: 原始路径
token: token
admin_key: 管理员key
'''
"""
self.base_url = base_url
self.token = token
# self.admin_key = admin_key
def get_token(self, admin_key, day: int=365):
def get_token(self, admin_key, day: int = 365):
# 获取普通token
url = f"{self.base_url}/admin/GenAuthKey1"
json_data = {
"Count": 1,
"Days": day
}
url = f'{self.base_url}/admin/GenAuthKey1'
json_data = {'Count': 1, 'Days': day}
return post_json(base_url=url, token=admin_key, data=json_data)
def get_login_qr(self, Proxy: str = ""):
'''
def get_login_qr(self, Proxy: str = ''):
"""
Args:
Proxy:异地使用时代理
Returns:json数据
'''
"""
"""
{
@@ -49,54 +46,37 @@ class LoginApi:
}
"""
#获取登录二维码
url = f"{self.base_url}/login/GetLoginQrCodeNew"
# 获取登录二维码
url = f'{self.base_url}/login/GetLoginQrCodeNew'
check = False
if Proxy != "":
if Proxy != '':
check = True
json_data = {
"Check": check,
"Proxy": Proxy
}
json_data = {'Check': check, 'Proxy': Proxy}
return post_json(base_url=url, token=self.token, data=json_data)
def get_login_status(self):
# 获取登录状态
url = f'{self.base_url}/login/GetLoginStatus'
return get_json(base_url=url, token=self.token)
def logout(self):
# 退出登录
url = f'{self.base_url}/login/LogOut'
return post_json(base_url=url, token=self.token)
def wake_up_login(self, Proxy: str = ""):
def wake_up_login(self, Proxy: str = ''):
# 唤醒登录
url = f'{self.base_url}/login/WakeUpLogin'
check = False
if Proxy != "":
if Proxy != '':
check = True
json_data = {
"Check": check,
"Proxy": ""
}
json_data = {'Check': check, 'Proxy': ''}
return post_json(base_url=url, token=self.token, data=json_data)
def login(self,admin_key):
def login(self, admin_key):
login_status = self.get_login_status()
if login_status["Code"] == 300 and login_status["Text"] == "你已退出微信":
print("token已经失效重新获取")
if login_status['Code'] == 300 and login_status['Text'] == '你已退出微信':
print('token已经失效重新获取')
token_data = self.get_token(admin_key)
self.token = token_data["Data"][0]
self.token = token_data['Data'][0]
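The `login` flow above re-fetches a token only when the status endpoint reports code 300 ("you have logged out of WeChat"). The control flow can be sketched with the service calls stubbed out; `ensure_token` and its callable parameters are hypothetical names for illustration.

```python
def ensure_token(get_status, get_token):
    # Re-fetch the auth token only when the status check reports code 300,
    # mirroring the `login_status['Code'] == 300` branch above; the fresh
    # token lives at `['Data'][0]` in the response, as in the diff.
    status = get_status()
    if status.get('Code') == 300:
        return get_token()['Data'][0]
    return None
```

Returning `None` when the session is still valid lets the caller keep its existing token untouched.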


@@ -1,5 +1,4 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
class MessageApi:
@@ -7,8 +6,8 @@ class MessageApi:
self.base_url = base_url
self.token = token
def post_text(self, to_wxid, content, ats: list= []):
'''
def post_text(self, to_wxid, content, ats: list = []):
"""
Args:
app_id: 微信id
@@ -18,106 +17,64 @@ class MessageApi:
Returns:
'''
url = self.base_url + "/message/SendTextMessage"
"""
url = self.base_url + '/message/SendTextMessage'
"""发送文字消息"""
json_data = {
"MsgItem": [
{
"AtWxIDList": ats,
"ImageContent": "",
"MsgType": 0,
"TextContent": content,
"ToUserName": to_wxid
}
]
}
return post_json(base_url=url, token=self.token, data=json_data)
'MsgItem': [
{'AtWxIDList': ats, 'ImageContent': '', 'MsgType': 0, 'TextContent': content, 'ToUserName': to_wxid}
]
}
return post_json(base_url=url, token=self.token, data=json_data)
def post_image(self, to_wxid, img_url, ats: list= []):
def post_image(self, to_wxid, img_url, ats: list = []):
"""发送图片消息"""
# 这里好像可以尝试发送多个暂时未测试
json_data = {
"MsgItem": [
{
"AtWxIDList": ats,
"ImageContent": img_url,
"MsgType": 0,
"TextContent": '',
"ToUserName": to_wxid
}
'MsgItem': [
{'AtWxIDList': ats, 'ImageContent': img_url, 'MsgType': 0, 'TextContent': '', 'ToUserName': to_wxid}
]
}
url = self.base_url + "/message/SendImageMessage"
url = self.base_url + '/message/SendImageMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def post_voice(self, to_wxid, voice_data, voice_forma, voice_duration):
"""发送语音消息"""
json_data = {
"ToUserName": to_wxid,
"VoiceData": voice_data,
"VoiceFormat": voice_forma,
"VoiceSecond": voice_duration
'ToUserName': to_wxid,
'VoiceData': voice_data,
'VoiceFormat': voice_forma,
'VoiceSecond': voice_duration,
}
url = self.base_url + "/message/SendVoice"
url = self.base_url + '/message/SendVoice'
return post_json(base_url=url, token=self.token, data=json_data)
def post_name_card(self, alias, to_wxid, nick_name, name_card_wxid, flag):
"""发送名片消息"""
param = {
"CardAlias": alias,
"CardFlag": flag,
"CardNickName": nick_name,
"CardWxId": name_card_wxid,
"ToUserName": to_wxid
'CardAlias': alias,
'CardFlag': flag,
'CardNickName': nick_name,
'CardWxId': name_card_wxid,
'ToUserName': to_wxid,
}
url = f"{self.base_url}/message/ShareCardMessage"
url = f'{self.base_url}/message/ShareCardMessage'
return post_json(base_url=url, token=self.token, data=param)
def post_emoji(self, to_wxid, emoji_md5, emoji_size:int=0):
def post_emoji(self, to_wxid, emoji_md5, emoji_size: int = 0):
"""发送emoji消息"""
json_data = {
"EmojiList": [
{
"EmojiMd5": emoji_md5,
"EmojiSize": emoji_size,
"ToUserName": to_wxid
}
]
}
url = f"{self.base_url}/message/SendEmojiMessage"
json_data = {'EmojiList': [{'EmojiMd5': emoji_md5, 'EmojiSize': emoji_size, 'ToUserName': to_wxid}]}
url = f'{self.base_url}/message/SendEmojiMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def post_app_msg(self, to_wxid,xml_data, contenttype:int=0):
def post_app_msg(self, to_wxid, xml_data, contenttype: int = 0):
"""发送appmsg消息"""
json_data = {
"AppList": [
{
"ContentType": contenttype,
"ContentXML": xml_data,
"ToUserName": to_wxid
}
]
}
url = f"{self.base_url}/message/SendAppMessage"
json_data = {'AppList': [{'ContentType': contenttype, 'ContentXML': xml_data, 'ToUserName': to_wxid}]}
url = f'{self.base_url}/message/SendAppMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def revoke_msg(self, to_wxid, msg_id, new_msg_id, create_time):
"""撤回消息"""
param = {
"ClientMsgId": msg_id,
"CreateTime": create_time,
"NewMsgId": new_msg_id,
"ToUserName": to_wxid
}
url = f"{self.base_url}/message/RevokeMsg"
return post_json(base_url=url, token=self.token, data=param)
param = {'ClientMsgId': msg_id, 'CreateTime': create_time, 'NewMsgId': new_msg_id, 'ToUserName': to_wxid}
url = f'{self.base_url}/message/RevokeMsg'
return post_json(base_url=url, token=self.token, data=param)


@@ -1,10 +1,9 @@
import requests
import aiohttp
def post_json(base_url, token, data=None):
headers = {
'Content-Type': 'application/json'
}
headers = {'Content-Type': 'application/json'}
url = base_url + f'?key={token}'
@@ -18,14 +17,12 @@ def post_json(base_url, token, data=None):
else:
raise RuntimeError(response.text)
except Exception as e:
print(f"http请求失败, url={url}, exception={e}")
print(f'http请求失败, url={url}, exception={e}')
raise RuntimeError(str(e))
def get_json(base_url, token):
headers = {
'Content-Type': 'application/json'
}
def get_json(base_url, token):
headers = {'Content-Type': 'application/json'}
url = base_url + f'?key={token}'
@@ -39,21 +36,18 @@ def get_json(base_url, token):
else:
raise RuntimeError(response.text)
except Exception as e:
print(f"http请求失败, url={url}, exception={e}")
print(f'http请求失败, url={url}, exception={e}')
raise RuntimeError(str(e))
import aiohttp
import asyncio
async def async_request(
base_url: str,
token_key: str,
method: str = 'POST',
params: dict = None,
# headers: dict = None,
data: dict = None,
json: dict = None
base_url: str,
token_key: str,
method: str = 'POST',
params: dict = None,
# headers: dict = None,
data: dict = None,
json: dict = None,
):
"""
Generic async request helper
@@ -67,18 +61,11 @@ async def async_request(
:param json: JSON payload
:return: 响应文本
"""
headers = {
'Content-Type': 'application/json'
}
url = f"{base_url}?key={token_key}"
headers = {'Content-Type': 'application/json'}
url = f'{base_url}?key={token_key}'
async with aiohttp.ClientSession() as session:
async with session.request(
method=method,
url=url,
params=params,
headers=headers,
data=data,
json=json
method=method, url=url, params=params, headers=headers, data=data, json=json
) as response:
response.raise_for_status()  # raises for 4xx/5xx status codes
result = await response.json()
@@ -89,4 +76,3 @@ async def async_request(
# return await result
# else:
# raise RuntimeError("请求失败",response.text)
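All of the helpers above share one URL convention — the token travels as a `key` query parameter — and the message endpoints wrap their payloads in single-element lists. A minimal sketch of both conventions (the helper names here are illustrative, not part of the codebase):

```python
def build_url(base_url: str, token: str) -> str:
    # Every WeChatPad helper appends the token as a `key` query parameter.
    return f'{base_url}?key={token}'


def build_emoji_payload(to_wxid: str, emoji_md5: str, emoji_size: int = 0) -> dict:
    # Payload shape matches the SendEmojiMessage body shown in the diff.
    return {'EmojiList': [{'EmojiMd5': emoji_md5, 'EmojiSize': emoji_size, 'ToUserName': to_wxid}]}
```

Keeping the URL assembly in one place avoids the duplicated f-strings that the diff above is cleaning up.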


@@ -11,7 +11,7 @@ asciiart = r"""
|____\__,_|_||_\__, |___/\___/\__|
|___/
⭐️ Open Source 开源地址: https://github.com/RockChinQ/LangBot
⭐️ Open Source 开源地址: https://github.com/langbot-app/LangBot
📖 Documentation 文档地址: https://docs.langbot.app
"""


@@ -11,10 +11,10 @@ from ....core import app
preregistered_groups: list[type[RouterGroup]] = []
"""RouterGroup 的预注册列表"""
"""Pre-registered list of RouterGroup"""
def group_class(name: str, path: str) -> None:
def group_class(name: str, path: str) -> typing.Callable[[typing.Type[RouterGroup]], typing.Type[RouterGroup]]:
"""注册一个 RouterGroup"""
def decorator(cls: typing.Type[RouterGroup]) -> typing.Type[RouterGroup]:
@@ -27,7 +27,7 @@ def group_class(name: str, path: str) -> None:
class AuthType(enum.Enum):
"""认证类型"""
"""Authentication type"""
NONE = 'none'
USER_TOKEN = 'user-token'
@@ -56,7 +56,7 @@ class RouterGroup(abc.ABC):
auth_type: AuthType = AuthType.USER_TOKEN,
**options: typing.Any,
) -> typing.Callable[[RouteCallable], RouteCallable]: # decorator
"""注册一个路由"""
"""Register a route"""
def decorator(f: RouteCallable) -> RouteCallable:
nonlocal rule
@@ -64,11 +64,11 @@ class RouterGroup(abc.ABC):
async def handler_error(*args, **kwargs):
if auth_type == AuthType.USER_TOKEN:
# Authorization头中获取token
# get token from Authorization header
token = quart.request.headers.get('Authorization', '').replace('Bearer ', '')
if not token:
return self.http_status(401, -1, '未提供有效的用户令牌')
return self.http_status(401, -1, 'No valid user token provided')
try:
user_email = await self.ap.user_service.verify_jwt_token(token)
@@ -76,9 +76,9 @@ class RouterGroup(abc.ABC):
# check if this account exists
user = await self.ap.user_service.get_user_by_email(user_email)
if not user:
return self.http_status(401, -1, '用户不存在')
return self.http_status(401, -1, 'User not found')
# 检查f是否接受user_email参数
# check if f accepts user_email parameter
if 'user_email' in f.__code__.co_varnames:
kwargs['user_email'] = user_email
except Exception as e:
@@ -86,10 +86,11 @@ class RouterGroup(abc.ABC):
try:
return await f(*args, **kwargs)
except Exception: # 自动 500
except Exception as e: # 自动 500
traceback.print_exc()
# return self.http_status(500, -2, str(e))
return self.http_status(500, -2, 'internal server error')
return self.http_status(500, -2, str(e))
new_f = handler_error
new_f.__name__ = (self.name + rule).replace('/', '__')
@@ -101,7 +102,7 @@ class RouterGroup(abc.ABC):
return decorator
def success(self, data: typing.Any = None) -> quart.Response:
"""返回一个 200 响应"""
"""Return a 200 response"""
return quart.jsonify(
{
'code': 0,
@@ -111,7 +112,7 @@ class RouterGroup(abc.ABC):
)
def fail(self, code: int, msg: str) -> quart.Response:
"""返回一个异常响应"""
"""Return an error response"""
return quart.jsonify(
{
@@ -120,6 +121,6 @@ class RouterGroup(abc.ABC):
}
)
def http_status(self, status: int, code: int, msg: str) -> quart.Response:
def http_status(self, status: int, code: int, msg: str) -> typing.Tuple[quart.Response, int]:
"""返回一个指定状态码的响应"""
return self.fail(code, msg), status
return (self.fail(code, msg), status)
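The `group_class` diff above also tightens the decorator's return annotation, because the decorator returns the class it registers. The registration pattern reduces to a few lines; this sketch assumes nothing beyond what the diff shows (the example class name is illustrative):

```python
import typing

preregistered_groups: list[type] = []
"""Pre-registered list of router groups, as in the diff"""


def group_class(name: str, path: str) -> typing.Callable[[type], type]:
    def decorator(cls: type) -> type:
        cls.name = name
        cls.path = path
        preregistered_groups.append(cls)
        return cls  # returning cls is why the annotation is not `-> None`
    return decorator


@group_class('pipelines', '/api/v1/pipelines')
class PipelinesRouterGroup:
    pass
```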


@@ -2,6 +2,10 @@ from __future__ import annotations
import quart
import mimetypes
import uuid
import asyncio
import quart.datastructures
from .. import group
@@ -20,3 +24,23 @@ class FilesRouterGroup(group.RouterGroup):
mime_type = 'image/jpeg'
return quart.Response(image_bytes, mimetype=mime_type)
@self.route('/documents', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> quart.Response:
request = quart.request
# get file bytes from 'file'
file = (await request.files)['file']
assert isinstance(file, quart.datastructures.FileStorage)
file_bytes = await asyncio.to_thread(file.stream.read)
extension = file.filename.split('.')[-1]
file_name = file.filename.split('.')[0]
file_key = file_name + '_' + str(uuid.uuid4())[:8] + '.' + extension
# save file to storage
await self.ap.storage_mgr.storage_provider.save(file_key, file_bytes)
return self.success(
data={
'file_id': file_key,
}
)
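The upload route derives a storage key from the original filename plus a short random suffix. The scheme in isolation (note the sketch reproduces the diff's `split('.')` behavior as-is, which keeps only the text before the first dot for names containing multiple dots):

```python
import uuid


def make_file_key(filename: str) -> str:
    # As in the /documents route: '<stem>_<8-char uuid prefix>.<extension>'
    extension = filename.split('.')[-1]
    file_name = filename.split('.')[0]
    return file_name + '_' + str(uuid.uuid4())[:8] + '.' + extension
```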


@@ -0,0 +1,90 @@
import quart
from ... import group
@group.group_class('knowledge_base', '/api/v1/knowledge/bases')
class KnowledgeBaseRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['POST', 'GET'])
async def handle_knowledge_bases() -> quart.Response:
if quart.request.method == 'GET':
knowledge_bases = await self.ap.knowledge_service.get_knowledge_bases()
return self.success(data={'bases': knowledge_bases})
elif quart.request.method == 'POST':
json_data = await quart.request.json
knowledge_base_uuid = await self.ap.knowledge_service.create_knowledge_base(json_data)
return self.success(data={'uuid': knowledge_base_uuid})
return self.http_status(405, -1, 'Method not allowed')
@self.route(
'/<knowledge_base_uuid>',
methods=['GET', 'DELETE', 'PUT'],
)
async def handle_specific_knowledge_base(knowledge_base_uuid: str) -> quart.Response:
if quart.request.method == 'GET':
knowledge_base = await self.ap.knowledge_service.get_knowledge_base(knowledge_base_uuid)
if knowledge_base is None:
return self.http_status(404, -1, 'knowledge base not found')
return self.success(
data={
'base': knowledge_base,
}
)
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.knowledge_service.update_knowledge_base(knowledge_base_uuid, json_data)
return self.success({})
elif quart.request.method == 'DELETE':
await self.ap.knowledge_service.delete_knowledge_base(knowledge_base_uuid)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/files',
methods=['GET', 'POST'],
)
async def get_knowledge_base_files(knowledge_base_uuid: str) -> str:
if quart.request.method == 'GET':
files = await self.ap.knowledge_service.get_files_by_knowledge_base(knowledge_base_uuid)
return self.success(
data={
'files': files,
}
)
elif quart.request.method == 'POST':
json_data = await quart.request.json
file_id = json_data.get('file_id')
if not file_id:
return self.http_status(400, -1, 'File ID is required')
# associate the file with the knowledge base via the service layer
task_id = await self.ap.knowledge_service.store_file(knowledge_base_uuid, file_id)
return self.success(
{
'task_id': task_id,
}
)
@self.route(
'/<knowledge_base_uuid>/files/<file_id>',
methods=['DELETE'],
)
async def delete_specific_file_in_kb(file_id: str, knowledge_base_uuid: str) -> str:
await self.ap.knowledge_service.delete_file(knowledge_base_uuid, file_id)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/retrieve',
methods=['POST'],
)
async def retrieve_knowledge_base(knowledge_base_uuid: str) -> str:
json_data = await quart.request.json
query = json_data.get('query')
results = await self.ap.knowledge_service.retrieve_knowledge_base(knowledge_base_uuid, query)
return self.success(data={'results': results})
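The retrieve route accepts a JSON body with a `query` field. A small request-building helper shows the path and body a client would send to this endpoint (the helper itself is illustrative, not part of the codebase):

```python
import json

API_BASE = '/api/v1/knowledge/bases'  # route prefix registered above


def retrieve_request(kb_uuid: str, query: str) -> tuple[str, bytes]:
    # Builds the (path, body) pair for POST /<kb_uuid>/retrieve.
    path = f'{API_BASE}/{kb_uuid}/retrieve'
    body = json.dumps({'query': query}).encode('utf-8')
    return path, body
```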


@@ -2,7 +2,7 @@ from __future__ import annotations
import quart
from .. import group
from ... import group
@group.group_class('pipelines', '/api/v1/pipelines')
@@ -11,7 +11,9 @@ class PipelinesRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'pipelines': await self.ap.pipeline_service.get_pipelines()})
sort_by = quart.request.args.get('sort_by', 'created_at')
sort_order = quart.request.args.get('sort_order', 'DESC')
return self.success(data={'pipelines': await self.ap.pipeline_service.get_pipelines(sort_by, sort_order)})
elif quart.request.method == 'POST':
json_data = await quart.request.json


@@ -0,0 +1,79 @@
import quart
from ... import group
@group.group_class('webchat', '/api/v1/pipelines/<pipeline_uuid>/chat')
class WebChatDebugRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('/send', methods=['POST'])
async def send_message(pipeline_uuid: str) -> str:
"""Send a message to the pipeline for debugging"""
try:
data = await quart.request.get_json()
session_type = data.get('session_type', 'person')
message_chain_obj = data.get('message', [])
if not message_chain_obj:
return self.http_status(400, -1, 'message is required')
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')
webchat_adapter = self.ap.platform_mgr.webchat_proxy_bot.adapter
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
result = await webchat_adapter.send_webchat_message(pipeline_uuid, session_type, message_chain_obj)
return self.success(
data={
'message': result,
}
)
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
@self.route('/messages/<session_type>', methods=['GET'])
async def get_messages(pipeline_uuid: str, session_type: str) -> str:
"""Get the message history of the pipeline for debugging"""
try:
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')
webchat_adapter = self.ap.platform_mgr.webchat_proxy_bot.adapter
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
messages = webchat_adapter.get_webchat_messages(pipeline_uuid, session_type)
return self.success(data={'messages': messages})
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
@self.route('/reset/<session_type>', methods=['POST'])
async def reset_session(session_type: str) -> str:
"""Reset the debug session"""
try:
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')
webchat_adapter = None
for bot in self.ap.platform_mgr.bots:
if hasattr(bot.adapter, '__class__') and bot.adapter.__class__.__name__ == 'WebChatAdapter':
webchat_adapter = bot.adapter
break
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
webchat_adapter.reset_debug_session(session_type)
return self.success(data={'message': 'Session reset successfully'})
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
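`reset_session` above locates the WebChat adapter by scanning every bot and matching on the adapter's class name string. That lookup in isolation (the tiny classes in the example are stand-ins, not the real adapter types):

```python
def find_adapter(bots: list, class_name: str = 'WebChatAdapter'):
    # Mirrors the loop in reset_session: first adapter whose class name matches wins.
    for bot in bots:
        if bot.adapter.__class__.__name__ == class_name:
            return bot.adapter
    return None
```

Matching on `__class__.__name__` avoids importing the adapter class here, at the cost of breaking silently if the class is ever renamed.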


@@ -40,7 +40,7 @@ class PluginsRouterGroup(group.RouterGroup):
self.ap.plugin_mgr.update_plugin(plugin_name, task_context=ctx),
kind='plugin-operation',
name=f'plugin-update-{plugin_name}',
label=f'更新插件 {plugin_name}',
label=f'Updating plugin {plugin_name}',
context=ctx,
)
return self.success(data={'task_id': wrapper.id})
@@ -62,7 +62,7 @@ class PluginsRouterGroup(group.RouterGroup):
self.ap.plugin_mgr.uninstall_plugin(plugin_name, task_context=ctx),
kind='plugin-operation',
name=f'plugin-remove-{plugin_name}',
label=f'删除插件 {plugin_name}',
label=f'Removing plugin {plugin_name}',
context=ctx,
)
@@ -102,7 +102,7 @@ class PluginsRouterGroup(group.RouterGroup):
self.ap.plugin_mgr.install_plugin(data['source'], task_context=ctx),
kind='plugin-operation',
name='plugin-install-github',
label=f'安装插件 ...{short_source_str}',
label=f'Installing plugin ...{short_source_str}',
context=ctx,
)


@@ -9,18 +9,18 @@ class LLMModelsRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.model_service.get_llm_models()})
return self.success(data={'models': await self.ap.llm_model_service.get_llm_models()})
elif quart.request.method == 'POST':
json_data = await quart.request.json
model_uuid = await self.ap.model_service.create_llm_model(json_data)
model_uuid = await self.ap.llm_model_service.create_llm_model(json_data)
return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str:
if quart.request.method == 'GET':
model = await self.ap.model_service.get_llm_model(model_uuid)
model = await self.ap.llm_model_service.get_llm_model(model_uuid)
if model is None:
return self.http_status(404, -1, 'model not found')
@@ -29,11 +29,11 @@ class LLMModelsRouterGroup(group.RouterGroup):
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.model_service.update_llm_model(model_uuid, json_data)
await self.ap.llm_model_service.update_llm_model(model_uuid, json_data)
return self.success()
elif quart.request.method == 'DELETE':
await self.ap.model_service.delete_llm_model(model_uuid)
await self.ap.llm_model_service.delete_llm_model(model_uuid)
return self.success()
@@ -41,6 +41,49 @@ class LLMModelsRouterGroup(group.RouterGroup):
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.model_service.test_llm_model(model_uuid, json_data)
await self.ap.llm_model_service.test_llm_model(model_uuid, json_data)
return self.success()
@group.group_class('models/embedding', '/api/v1/provider/models/embedding')
class EmbeddingModelsRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.embedding_models_service.get_embedding_models()})
elif quart.request.method == 'POST':
json_data = await quart.request.json
model_uuid = await self.ap.embedding_models_service.create_embedding_model(json_data)
return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str:
if quart.request.method == 'GET':
model = await self.ap.embedding_models_service.get_embedding_model(model_uuid)
if model is None:
return self.http_status(404, -1, 'model not found')
return self.success(data={'model': model})
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.embedding_models_service.update_embedding_model(model_uuid, json_data)
return self.success()
elif quart.request.method == 'DELETE':
await self.ap.embedding_models_service.delete_embedding_model(model_uuid)
return self.success()
@self.route('/<model_uuid>/test', methods=['POST'])
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.embedding_models_service.test_embedding_model(model_uuid, json_data)
return self.success()


@@ -8,7 +8,8 @@ class RequestersRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET'])
async def _() -> quart.Response:
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info()})
model_type = quart.request.args.get('type', '')
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info(model_type)})
@self.route('/<requester_name>', methods=['GET'])
async def _(requester_name: str) -> quart.Response:


@@ -1,5 +1,6 @@
import quart
import argon2
import asyncio
from .. import group
@@ -13,7 +14,7 @@ class UserRouterGroup(group.RouterGroup):
return self.success(data={'initialized': await self.ap.user_service.is_initialized()})
if await self.ap.user_service.is_initialized():
return self.fail(1, '系统已初始化')
return self.fail(1, 'System already initialized')
json_data = await quart.request.json
@@ -31,7 +32,7 @@ class UserRouterGroup(group.RouterGroup):
try:
token = await self.ap.user_service.authenticate(json_data['user'], json_data['password'])
except argon2.exceptions.VerifyMismatchError:
return self.fail(1, '用户名或密码错误')
return self.fail(1, 'Invalid username or password')
return self.success(data={'token': token})
@@ -40,3 +41,29 @@ class UserRouterGroup(group.RouterGroup):
token = await self.ap.user_service.generate_jwt_token(user_email)
return self.success(data={'token': token})
@self.route('/reset-password', methods=['POST'], auth_type=group.AuthType.NONE)
async def _() -> str:
json_data = await quart.request.json
user_email = json_data['user']
recovery_key = json_data['recovery_key']
new_password = json_data['new_password']
# hard sleep 3s for security
await asyncio.sleep(3)
if not await self.ap.user_service.is_initialized():
return self.http_status(400, -1, 'System not initialized')
user_obj = await self.ap.user_service.get_user_by_email(user_email)
if user_obj is None:
return self.http_status(400, -1, 'User not found')
if recovery_key != self.ap.instance_config.data['system']['recovery_key']:
return self.http_status(403, -1, 'Invalid recovery key')
await self.ap.user_service.reset_password(user_email, new_password)
return self.success(data={'user': user_email})


@@ -13,10 +13,14 @@ from . import groups
from . import group
from .groups import provider as groups_provider
from .groups import platform as groups_platform
from .groups import pipelines as groups_pipelines
from .groups import knowledge as groups_knowledge
importutil.import_modules_in_pkg(groups)
importutil.import_modules_in_pkg(groups_provider)
importutil.import_modules_in_pkg(groups_platform)
importutil.import_modules_in_pkg(groups_pipelines)
importutil.import_modules_in_pkg(groups_knowledge)
class HTTPController:
@@ -43,7 +47,7 @@ class HTTPController:
try:
await self.quart_app.run_task(*args, **kwargs)
except Exception as e:
self.ap.logger.error(f'启动 HTTP 服务失败: {e}')
self.ap.logger.error(f'Failed to start HTTP service: {e}')
self.ap.task_mgr.create_task(
exception_handler(


@@ -10,7 +10,7 @@ from ....entity.persistence import pipeline as persistence_pipeline
class BotService:
"""机器人服务"""
"""Bot service"""
ap: app.Application
@@ -18,7 +18,7 @@ class BotService:
self.ap = ap
async def get_bots(self) -> list[dict]:
"""获取所有机器人"""
"""Get all bots"""
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_bot.Bot))
bots = result.all()
@@ -26,7 +26,7 @@ class BotService:
return [self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot) for bot in bots]
async def get_bot(self, bot_uuid: str) -> dict | None:
"""获取机器人"""
"""Get bot"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid)
)
@@ -39,7 +39,7 @@ class BotService:
return self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot)
async def create_bot(self, bot_data: dict) -> str:
"""创建机器人"""
"""Create bot"""
# TODO: validate config data format
bot_data['uuid'] = str(uuid.uuid4())
@@ -63,7 +63,7 @@ class BotService:
return bot_data['uuid']
async def update_bot(self, bot_uuid: str, bot_data: dict) -> None:
"""更新机器人"""
"""Update bot"""
if 'uuid' in bot_data:
del bot_data['uuid']
@@ -93,8 +93,13 @@ class BotService:
if runtime_bot.enable:
await runtime_bot.run()
# update all conversation that use this bot
for session in self.ap.sess_mgr.session_list:
if session.using_conversation is not None and session.using_conversation.bot_uuid == bot_uuid:
session.using_conversation = None
async def delete_bot(self, bot_uuid: str) -> None:
"""删除机器人"""
"""Delete bot"""
await self.ap.platform_mgr.remove_bot(bot_uuid)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid)


@@ -0,0 +1,118 @@
from __future__ import annotations
import uuid
import sqlalchemy
from ....core import app
from ....entity.persistence import rag as persistence_rag
class KnowledgeService:
"""知识库服务"""
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_knowledge_bases(self) -> list[dict]:
"""获取所有知识库"""
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_rag.KnowledgeBase))
knowledge_bases = result.all()
return [
self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
for knowledge_base in knowledge_bases
]
async def get_knowledge_base(self, kb_uuid: str) -> dict | None:
"""获取知识库"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
knowledge_base = result.first()
if knowledge_base is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
async def create_knowledge_base(self, kb_data: dict) -> str:
"""创建知识库"""
kb_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.KnowledgeBase).values(kb_data))
kb = await self.get_knowledge_base(kb_data['uuid'])
await self.ap.rag_mgr.load_knowledge_base(kb)
return kb_data['uuid']
async def update_knowledge_base(self, kb_uuid: str, kb_data: dict) -> None:
"""更新知识库"""
if 'uuid' in kb_data:
del kb_data['uuid']
if 'embedding_model_uuid' in kb_data:
del kb_data['embedding_model_uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_rag.KnowledgeBase)
.values(kb_data)
.where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
await self.ap.rag_mgr.remove_knowledge_base_from_runtime(kb_uuid)
kb = await self.get_knowledge_base(kb_uuid)
await self.ap.rag_mgr.load_knowledge_base(kb)
async def store_file(self, kb_uuid: str, file_id: str) -> int:
"""存储文件"""
# await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.File).values(kb_id=kb_uuid, file_id=file_id))
# await self.ap.rag_mgr.store_file(file_id)
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return await runtime_kb.store_file(file_id)
async def retrieve_knowledge_base(self, kb_uuid: str, query: str) -> list[dict]:
"""检索知识库"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return [result.model_dump() for result in await runtime_kb.retrieve(query)]
async def get_files_by_knowledge_base(self, kb_uuid: str) -> list[dict]:
"""获取知识库文件"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
files = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_rag.File, file) for file in files]
async def delete_file(self, kb_uuid: str, file_id: str) -> None:
"""删除文件"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
await runtime_kb.delete_file(file_id)
async def delete_knowledge_base(self, kb_uuid: str) -> None:
"""删除知识库"""
await self.ap.rag_mgr.delete_knowledge_base(kb_uuid)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
# delete files
files = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
for file in files:
# delete chunks
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.Chunk).where(persistence_rag.Chunk.file_id == file.uuid)
)
# delete file
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.File).where(persistence_rag.File.uuid == file.uuid)
)


@@ -10,7 +10,7 @@ from ....provider.modelmgr import requester as model_requester
from ....provider import entities as llm_entities
class ModelsService:
class LLMModelsService:
ap: app.Application
def __init__(self, ap: app.Application) -> None:
@@ -103,3 +103,89 @@ class ModelsService:
funcs=[],
extra_args={},
)
class EmbeddingModelsService:
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_embedding_models(self) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.EmbeddingModel))
models = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model) for model in models]
async def create_embedding_model(self, model_data: dict) -> str:
model_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(
sqlalchemy.insert(persistence_model.EmbeddingModel).values(**model_data)
)
embedding_model = await self.get_embedding_model(model_data['uuid'])
await self.ap.model_mgr.load_embedding_model(embedding_model)
return model_data['uuid']
async def get_embedding_model(self, model_uuid: str) -> dict | None:
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
model = result.first()
if model is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model)
async def update_embedding_model(self, model_uuid: str, model_data: dict) -> None:
if 'uuid' in model_data:
del model_data['uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_model.EmbeddingModel)
.where(persistence_model.EmbeddingModel.uuid == model_uuid)
.values(**model_data)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
embedding_model = await self.get_embedding_model(model_uuid)
await self.ap.model_mgr.load_embedding_model(embedding_model)
async def delete_embedding_model(self, model_uuid: str) -> None:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
async def test_embedding_model(self, model_uuid: str, model_data: dict) -> None:
runtime_embedding_model: model_requester.RuntimeEmbeddingModel | None = None
if model_uuid != '_':
for model in self.ap.model_mgr.embedding_models:
if model.model_entity.uuid == model_uuid:
runtime_embedding_model = model
break
if runtime_embedding_model is None:
raise Exception('model not found')
else:
runtime_embedding_model = await self.ap.model_mgr.init_runtime_embedding_model(model_data)
await runtime_embedding_model.requester.invoke_embedding(
model=runtime_embedding_model,
input_text=['Hello, world!'],
extra_args={},
)
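`test_embedding_model` treats the uuid `'_'` as "test an unsaved configuration": any other uuid must resolve to an already-loaded runtime model, and `'_'` builds a transient one from the posted data. The resolution branch in isolation, with plain dicts standing in for runtime models:

```python
SENTINEL_UNSAVED = '_'  # uuid placeholder used by the test endpoint


def resolve_test_model(model_uuid: str, loaded_models: list[dict], model_data: dict) -> dict:
    # Mirrors the branch in EmbeddingModelsService.test_embedding_model.
    if model_uuid != SENTINEL_UNSAVED:
        for model in loaded_models:
            if model['uuid'] == model_uuid:
                return model
        raise Exception('model not found')
    # '_' means: build a transient model from the posted config.
    return {'uuid': None, **model_data}
```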


@@ -38,9 +38,21 @@ class PipelineService:
self.ap.pipeline_config_meta_output.data,
]
async def get_pipelines(self) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
async def get_pipelines(self, sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]:
query = sqlalchemy.select(persistence_pipeline.LegacyPipeline)
if sort_by == 'created_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.asc())
elif sort_by == 'updated_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.asc())
result = await self.ap.persistence_mgr.execute_async(query)
pipelines = result.all()
return [
self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
@@ -112,6 +124,11 @@ class PipelineService:
await self.ap.pipeline_mgr.remove_pipeline(pipeline_uuid)
await self.ap.pipeline_mgr.load_pipeline(pipeline)
# update all conversation that use this pipeline
for session in self.ap.sess_mgr.session_list:
if session.using_conversation is not None and session.using_conversation.pipeline_uuid == pipeline_uuid:
session.using_conversation = None
async def delete_pipeline(self, pipeline_uuid: str) -> None:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_pipeline.LegacyPipeline).where(
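The sorting added to `get_pipelines` whitelists two columns (`created_at`, `updated_at`) and two directions; anything else falls through unsorted. The same logic over in-memory rows, with dicts standing in for ORM rows:

```python
def sort_pipelines(rows: list[dict], sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]:
    # Only created_at / updated_at are honored, matching the service method;
    # an unknown sort_by leaves the rows in their original order.
    if sort_by not in ('created_at', 'updated_at'):
        return list(rows)
    return sorted(rows, key=lambda r: r[sort_by], reverse=(sort_order == 'DESC'))
```

Whitelisting the column names is also what keeps the user-supplied `sort_by` query parameter out of the SQL itself.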


@@ -73,3 +73,12 @@ class UserService:
jwt_secret = self.ap.instance_config.data['system']['jwt']['secret']
return jwt.decode(token, jwt_secret, algorithms=['HS256'])['user']
async def reset_password(self, user_email: str, new_password: str) -> None:
ph = argon2.PasswordHasher()
hashed_password = ph.hash(new_password)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(user.User).where(user.User.user == user_email).values(password=hashed_password)
)
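`reset_password` re-hashes the new password with Argon2 before persisting it. The `argon2-cffi` calls are not reproduced here; a stdlib stand-in using PBKDF2 shows the same hash-then-verify shape (the parameters are illustrative, not the project's):

```python
import hashlib
import os


def hash_password(new_password: str) -> str:
    # Stand-in for argon2.PasswordHasher().hash(): random salt + slow key derivation.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', new_password.encode(), salt, 100_000)
    return salt.hex() + '$' + digest.hex()


def verify_password(stored: str, candidate: str) -> bool:
    salt_hex, digest_hex = stored.split('$')
    digest = hashlib.pbkdf2_hmac('sha256', candidate.encode(), bytes.fromhex(salt_hex), 100_000)
    return digest.hex() == digest_hex
```

The fixed `asyncio.sleep(3)` in the route above is a blunt rate-limit on recovery-key guessing; the hashing itself is what protects the stored credential.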


@@ -6,7 +6,7 @@ from .. import model as file_model
class JSONConfigFile(file_model.ConfigFile):
"""JSON配置文件"""
"""JSON config file"""
def __init__(
self,
@@ -42,7 +42,7 @@ class JSONConfigFile(file_model.ConfigFile):
try:
cfg = json.load(f)
except json.JSONDecodeError as e:
raise Exception(f'配置文件 {self.config_file_name} 语法错误: {e}')
raise Exception(f'Syntax error in config file {self.config_file_name}: {e}')
if completion:
for key in self.template_data:


@@ -7,13 +7,13 @@ from .. import model as file_model
class PythonModuleConfigFile(file_model.ConfigFile):
"""Python模块配置文件"""
"""Python module config file"""
config_file_name: str = None
"""配置文件名"""
"""Config file name"""
template_file_name: str = None
"""模板文件名"""
"""Template file name"""
def __init__(self, config_file_name: str, template_file_name: str) -> None:
self.config_file_name = config_file_name
@@ -42,7 +42,7 @@ class PythonModuleConfigFile(file_model.ConfigFile):
cfg[key] = getattr(module, key)
# 从模板模块文件中进行补全
# complete from template module file
if completion:
module_name = os.path.splitext(os.path.basename(self.template_file_name))[0]
module = importlib.import_module(module_name)
@@ -60,7 +60,7 @@ class PythonModuleConfigFile(file_model.ConfigFile):
return cfg
async def save(self, data: dict):
logging.warning('Python模块配置文件不支持保存')
logging.warning('Python module config file does not support saving')
def save_sync(self, data: dict):
logging.warning('Python模块配置文件不支持保存')
logging.warning('Python module config file does not support saving')


@@ -6,7 +6,7 @@ from .. import model as file_model
class YAMLConfigFile(file_model.ConfigFile):
"""YAML配置文件"""
"""YAML config file"""
def __init__(
self,
@@ -42,7 +42,7 @@ class YAMLConfigFile(file_model.ConfigFile):
try:
cfg = yaml.load(f, Loader=yaml.FullLoader)
except yaml.YAMLError as e:
raise Exception(f'配置文件 {self.config_file_name} 语法错误: {e}')
raise Exception(f'Syntax error in config file {self.config_file_name}: {e}')
if completion:
for key in self.template_data:


@@ -5,27 +5,27 @@ from .impls import pymodule, json as json_file, yaml as yaml_file
class ConfigManager:
"""配置文件管理器"""
"""Config file manager"""
name: str = None
"""配置管理器名"""
"""Config manager name"""
description: str = None
"""配置管理器描述"""
"""Config manager description"""
schema: dict = None
"""配置文件 schema
需要符合 JSON Schema Draft 7 规范
"""Config file schema
Must conform to JSON Schema Draft 7 specification
"""
file: file_model.ConfigFile = None
"""配置文件实例"""
"""Config file instance"""
data: dict = None
"""配置数据"""
"""Config data"""
doc_link: str = None
"""配置文件文档链接"""
"""Config file documentation link"""
def __init__(self, cfg_file: file_model.ConfigFile) -> None:
self.file = cfg_file
@@ -42,15 +42,15 @@ class ConfigManager:
async def load_python_module_config(config_name: str, template_name: str, completion: bool = True) -> ConfigManager:
"""加载Python模块配置文件
"""Load Python module config file
Args:
config_name (str): 配置文件名
template_name (str): 模板文件名
completion (bool): 是否自动补全内存中的配置文件
config_name (str): Config file name
template_name (str): Template file name
completion (bool): Whether to automatically complete the config file in memory
Returns:
ConfigManager: 配置文件管理器
ConfigManager: Config file manager
"""
cfg_inst = pymodule.PythonModuleConfigFile(config_name, template_name)
@@ -66,13 +66,13 @@ async def load_json_config(
template_data: dict = None,
completion: bool = True,
) -> ConfigManager:
"""加载JSON配置文件
"""Load JSON config file
Args:
config_name (str): 配置文件名
template_name (str): 模板文件名
template_data (dict): 模板数据
completion (bool): 是否自动补全内存中的配置文件
config_name (str): Config file name
template_name (str): Template file name
template_data (dict): Template data
completion (bool): Whether to automatically complete the config file in memory
"""
cfg_inst = json_file.JSONConfigFile(config_name, template_name, template_data)
@@ -88,16 +88,16 @@ async def load_yaml_config(
template_data: dict = None,
completion: bool = True,
) -> ConfigManager:
"""加载YAML配置文件
"""Load YAML config file
Args:
config_name (str): 配置文件名
template_name (str): 模板文件名
template_data (dict): 模板数据
completion (bool): 是否自动补全内存中的配置文件
config_name (str): Config file name
template_name (str): Template file name
template_data (dict): Template data
completion (bool): Whether to automatically complete the config file in memory
Returns:
ConfigManager: 配置文件管理器
ConfigManager: Config file manager
"""
cfg_inst = yaml_file.YAMLConfigFile(config_name, template_name, template_data)
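The three loader functions above share one pattern: construct a `ConfigFile` implementation, wrap it in a `ConfigManager`, and optionally complete missing keys from the template. A minimal sketch of that completion step (hypothetical standalone helper, not LangBot's actual implementation, which also handles nested keys):

```python
def complete_config(cfg: dict, template: dict) -> dict:
    """Fill keys missing from cfg using template defaults (top level only)."""
    for key, value in template.items():
        if key not in cfg:
            cfg[key] = value
    return cfg

# Existing values win; only absent keys are backfilled from the template.
merged = complete_config({'port': 8080}, {'port': 5300, 'debug': False})
```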

View File

@@ -2,16 +2,16 @@ import abc
class ConfigFile(metaclass=abc.ABCMeta):
"""配置文件抽象类"""
"""Config file abstract class"""
config_file_name: str = None
"""配置文件名"""
"""Config file name"""
template_file_name: str = None
"""模板文件名"""
"""Template file name"""
template_data: dict = None
"""模板数据"""
"""Template data"""
@abc.abstractmethod
def exists(self) -> bool:

View File

@@ -22,15 +22,18 @@ from ..api.http.service import user as user_service
from ..api.http.service import model as model_service
from ..api.http.service import pipeline as pipeline_service
from ..api.http.service import bot as bot_service
from ..api.http.service import knowledge as knowledge_service
from ..discover import engine as discover_engine
from ..storage import mgr as storagemgr
from ..utils import logcache
from . import taskmgr
from . import entities as core_entities
from ..rag.knowledge import kbmgr as rag_mgr
from ..vector import mgr as vectordb_mgr
class Application:
"""运行时应用对象和上下文"""
"""Runtime application object and context"""
event_loop: asyncio.AbstractEventLoop = None
@@ -47,10 +50,12 @@ class Application:
model_mgr: llm_model_mgr.ModelManager = None
# TODO 移动到 pipeline 里
rag_mgr: rag_mgr.RAGManager = None
# TODO move to pipeline
tool_mgr: llm_tool_mgr.ToolManager = None
# ======= 配置管理器 =======
# ======= Config manager =======
command_cfg: config_mgr.ConfigManager = None # deprecated
@@ -64,7 +69,7 @@ class Application:
instance_config: config_mgr.ConfigManager = None
# ======= 元数据配置管理器 =======
# ======= Metadata config manager =======
sensitive_meta: config_mgr.ConfigManager = None
@@ -93,6 +98,8 @@ class Application:
persistence_mgr: persistencemgr.PersistenceManager = None
vector_db_mgr: vectordb_mgr.VectorDBManager = None
http_ctrl: http_controller.HTTPController = None
log_cache: logcache.LogCache = None
@@ -103,12 +110,16 @@ class Application:
user_service: user_service.UserService = None
model_service: model_service.ModelsService = None
llm_model_service: model_service.LLMModelsService = None
embedding_models_service: model_service.EmbeddingModelsService = None
pipeline_service: pipeline_service.PipelineService = None
bot_service: bot_service.BotService = None
knowledge_service: knowledge_service.KnowledgeService = None
def __init__(self):
pass
@@ -143,6 +154,7 @@ class Application:
name='http-api-controller',
scopes=[core_entities.LifecycleControlScope.APPLICATION],
)
self.task_mgr.create_task(
never_ending(),
name='never-ending-task',
@@ -154,11 +166,11 @@ class Application:
except asyncio.CancelledError:
pass
except Exception as e:
self.logger.error(f'应用运行致命异常: {e}')
self.logger.error(f'Application runtime fatal exception: {e}')
self.logger.debug(f'Traceback: {traceback.format_exc()}')
async def print_web_access_info(self):
"""打印访问 webui 的提示"""
"""Print access webui tips"""
if not os.path.exists(os.path.join('.', 'web/out')):
self.logger.warning('WebUI files are missing, please deploy them per the documentation: https://docs.langbot.app/zh')
@@ -190,7 +202,7 @@ class Application:
):
match scope:
case core_entities.LifecycleControlScope.PLATFORM.value:
self.logger.info('执行热重载 scope=' + scope)
self.logger.info('Hot reload scope=' + scope)
await self.platform_mgr.shutdown()
self.platform_mgr = im_mgr.PlatformManager(self)
@@ -206,7 +218,7 @@ class Application:
],
)
case core_entities.LifecycleControlScope.PLUGIN.value:
self.logger.info('执行热重载 scope=' + scope)
self.logger.info('Hot reload scope=' + scope)
await self.plugin_mgr.destroy_plugins()
# Remove all modules under plugins/* from sys.modules
@@ -222,7 +234,7 @@ class Application:
await self.plugin_mgr.load_plugins()
await self.plugin_mgr.initialize_plugins()
case core_entities.LifecycleControlScope.PROVIDER.value:
self.logger.info('执行热重载 scope=' + scope)
self.logger.info('Hot reload scope=' + scope)
await self.tool_mgr.shutdown()

View File

@@ -1,4 +1,4 @@
from __future__ import print_function
from __future__ import annotations
import traceback
import asyncio
@@ -8,7 +8,7 @@ from . import app
from . import stage
from ..utils import constants, importutil
# 引入启动阶段实现以便注册
# Import startup stage implementation to register
from . import stages
importutil.import_modules_in_pkg(stages)
@@ -25,7 +25,7 @@ stage_order = [
async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
# 确定是否为调试模式
# Determine if it is debug mode
if 'DEBUG' in os.environ and os.environ['DEBUG'] in ['true', '1']:
constants.debug_mode = True
@@ -33,7 +33,7 @@ async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
ap.event_loop = loop
# 执行启动阶段
# Execute startup stage
for stage_name in stage_order:
stage_cls = stage.preregistered_stages[stage_name]
stage_inst = stage_cls()
@@ -47,11 +47,11 @@ async def make_app(loop: asyncio.AbstractEventLoop) -> app.Application:
async def main(loop: asyncio.AbstractEventLoop):
try:
# 挂系统信号处理
# Hang system signal processing
import signal
def signal_handler(sig, frame):
print('[Signal] 程序退出.')
print('[Signal] Program exit.')
# ap.shutdown()
os._exit(0)

View File

@@ -2,8 +2,8 @@ import pip
import os
from ...utils import pkgmgr
# 检查依赖,防止用户未安装
# 左边为引入名称,右边为依赖名称
# Check dependencies to prevent users from not installing
# Left is the import name, right is the dependency name
required_deps = {
'requests': 'requests',
'openai': 'openai',
@@ -65,7 +65,7 @@ async def install_deps(deps: list[str]):
async def precheck_plugin_deps():
print('[Startup] Prechecking plugin dependencies...')
# 只有在plugins目录存在时才执行插件依赖安装
# Only execute plugin dependency installation when the plugins directory exists
if os.path.exists('plugins'):
for dir in os.listdir('plugins'):
subdir = os.path.join('plugins', dir)

View File

@@ -17,7 +17,7 @@ log_colors_config = {
async def init_logging(extra_handlers: list[logging.Handler] = None) -> logging.Logger:
# 删除所有现有的logger
# Remove all existing loggers
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
@@ -54,13 +54,13 @@ async def init_logging(extra_handlers: list[logging.Handler] = None) -> logging.
handler.setFormatter(color_formatter)
qcg_logger.addHandler(handler)
qcg_logger.debug('日志初始化完成,日志级别:%s' % level)
qcg_logger.debug('Logging initialized, log level: %s' % level)
logging.basicConfig(
level=logging.CRITICAL, # 设置日志输出格式
level=logging.CRITICAL, # Set log output format
format='[DEPR][%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s',
# 日志输出的格式
# -8表示占位符让输出左对齐输出长度都为8位
datefmt='%Y-%m-%d %H:%M:%S', # 时间输出的格式
# Log output format
# -8 is a placeholder, left-align the output, and output length is 8
datefmt='%Y-%m-%d %H:%M:%S', # Time output format
handlers=[logging.NullHandler()],
)

View File

@@ -19,7 +19,7 @@ class LifecycleControlScope(enum.Enum):
APPLICATION = 'application'
PLATFORM = 'platform'
PLUGIN = 'plugin'
PROVIDER = 'provider'
class LauncherTypes(enum.Enum):
@@ -137,6 +137,12 @@ class Conversation(pydantic.BaseModel):
use_funcs: typing.Optional[list[tools_entities.LLMFunction]]
pipeline_uuid: str
"""流水线UUID。"""
bot_uuid: str
"""机器人UUID。"""
uuid: typing.Optional[str] = None
"""该对话的 uuid在创建时不会自动生成。而是当使用 Dify API 等由外部管理对话信息的服务时,用于绑定外部的会话。具体如何使用,取决于 Runner。"""

View File

@@ -7,11 +7,11 @@ from . import app
preregistered_migrations: list[typing.Type[Migration]] = []
"""当前阶段暂不支持扩展"""
"""Currently not supported for extension"""
def migration_class(name: str, number: int):
"""注册一个迁移"""
"""Register a migration"""
def decorator(cls: typing.Type[Migration]) -> typing.Type[Migration]:
cls.name = name
@@ -23,7 +23,7 @@ def migration_class(name: str, number: int):
class Migration(abc.ABC):
"""一个版本的迁移"""
"""A version migration"""
name: str
@@ -36,10 +36,10 @@ class Migration(abc.ABC):
@abc.abstractmethod
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移"""
"""Determine if the current environment needs to run this migration"""
pass
@abc.abstractmethod
async def run(self):
"""执行迁移"""
"""Run migration"""
pass
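`migration_class` is a registration decorator: it stamps metadata onto the class and appends it to a module-level registry, which the migration stage later sorts by number. A self-contained sketch of the pattern (names are illustrative):

```python
preregistered = []

def migration_class(name: str, number: int):
    """Class decorator that tags a migration and adds it to the registry."""
    def decorator(cls):
        cls.name = name
        cls.number = number
        preregistered.append(cls)
        return cls
    return decorator

@migration_class('rename-field', 2)
class RenameField:
    pass

@migration_class('init-schema', 1)
class InitSchema:
    pass

# Run order is decided by migration number, not registration order.
preregistered.sort(key=lambda c: c.number)
```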

View File

@@ -9,7 +9,7 @@ preregistered_notes: list[typing.Type[LaunchNote]] = []
def note_class(name: str, number: int):
"""注册一个启动信息"""
"""Register a launch information"""
def decorator(cls: typing.Type[LaunchNote]) -> typing.Type[LaunchNote]:
cls.name = name
@@ -21,7 +21,7 @@ def note_class(name: str, number: int):
class LaunchNote(abc.ABC):
"""启动信息"""
"""Launch information"""
name: str
@@ -34,10 +34,10 @@ class LaunchNote(abc.ABC):
@abc.abstractmethod
async def need_show(self) -> bool:
"""判断当前环境是否需要显示此启动信息"""
"""Determine if the current environment needs to display this launch information"""
pass
@abc.abstractmethod
async def yield_note(self) -> typing.AsyncGenerator[typing.Tuple[str, int], None]:
"""生成启动信息"""
"""Generate launch information"""
pass

View File

@@ -7,7 +7,7 @@ from .. import note
@note.note_class('ClassicNotes', 1)
class ClassicNotes(note.LaunchNote):
"""经典启动信息"""
"""Classic launch information"""
async def need_show(self) -> bool:
return True

View File

@@ -9,7 +9,7 @@ from .. import note
@note.note_class('SelectionModeOnWindows', 2)
class SelectionModeOnWindows(note.LaunchNote):
"""Windows 上的选择模式提示信息"""
"""Selection mode prompt information on Windows"""
async def need_show(self) -> bool:
return os.name == 'nt'
@@ -19,3 +19,8 @@ class SelectionModeOnWindows(note.LaunchNote):
"""您正在使用 Windows 系统,若窗口左上角显示处于”选择“模式,程序将被暂停运行,此时请右键窗口中空白区域退出选择模式。""",
logging.INFO,
)
yield (
"""You are using Windows system, if the top left corner of the window displays "Selection" mode, the program will be paused running, please right-click on the blank area in the window to exit the selection mode.""",
logging.INFO,
)

View File

@@ -7,9 +7,9 @@ from . import app
preregistered_stages: dict[str, typing.Type[BootingStage]] = {}
"""预注册的请求处理阶段。在初始化时,所有请求处理阶段类会被注册到此字典中。
"""Pre-registered request processing stages. All request processing stage classes are registered in this dictionary during initialization.
当前阶段暂不支持扩展
Currently not supported for extension
"""
@@ -22,11 +22,11 @@ def stage_class(name: str):
class BootingStage(abc.ABC):
"""启动阶段"""
"""Booting stage"""
name: str = None
@abc.abstractmethod
async def run(self, ap: app.Application):
"""启动"""
"""Run"""
pass

View File

@@ -9,6 +9,7 @@ from ...command import cmdmgr
from ...provider.session import sessionmgr as llm_session_mgr
from ...provider.modelmgr import modelmgr as llm_model_mgr
from ...provider.tools import toolmgr as llm_tool_mgr
from ...rag.knowledge import kbmgr as rag_mgr
from ...platform import botmgr as im_mgr
from ...persistence import mgr as persistencemgr
from ...api.http.controller import main as http_controller
@@ -16,18 +17,20 @@ from ...api.http.service import user as user_service
from ...api.http.service import model as model_service
from ...api.http.service import pipeline as pipeline_service
from ...api.http.service import bot as bot_service
from ...api.http.service import knowledge as knowledge_service
from ...discover import engine as discover_engine
from ...storage import mgr as storagemgr
from ...utils import logcache
from ...vector import mgr as vectordb_mgr
from .. import taskmgr
@stage.stage_class('BuildAppStage')
class BuildAppStage(stage.BootingStage):
"""构建应用阶段"""
"""Build LangBot application"""
async def run(self, ap: app.Application):
"""构建app对象的各个组件对象并初始化"""
"""Build LangBot application"""
ap.task_mgr = taskmgr.AsyncTaskManager(ap)
discover = discover_engine.ComponentDiscoveryEngine(ap)
@@ -42,7 +45,7 @@ class BuildAppStage(stage.BootingStage):
await ver_mgr.initialize()
ap.ver_mgr = ver_mgr
# 发送公告
# Send announcement
ann_mgr = announce.AnnouncementManager(ap)
ap.ann_mgr = ann_mgr
@@ -88,6 +91,15 @@ class BuildAppStage(stage.BootingStage):
await pipeline_mgr.initialize()
ap.pipeline_mgr = pipeline_mgr
rag_mgr_inst = rag_mgr.RAGManager(ap)
await rag_mgr_inst.initialize()
ap.rag_mgr = rag_mgr_inst
# Initialize vector database manager
vectordb_mgr_inst = vectordb_mgr.VectorDBManager(ap)
await vectordb_mgr_inst.initialize()
ap.vector_db_mgr = vectordb_mgr_inst
http_ctrl = http_controller.HTTPController(ap)
await http_ctrl.initialize()
ap.http_ctrl = http_ctrl
@@ -95,8 +107,11 @@ class BuildAppStage(stage.BootingStage):
user_service_inst = user_service.UserService(ap)
ap.user_service = user_service_inst
model_service_inst = model_service.ModelsService(ap)
ap.model_service = model_service_inst
llm_model_service_inst = model_service.LLMModelsService(ap)
ap.llm_model_service = llm_model_service_inst
embedding_models_service_inst = model_service.EmbeddingModelsService(ap)
ap.embedding_models_service = embedding_models_service_inst
pipeline_service_inst = pipeline_service.PipelineService(ap)
ap.pipeline_service = pipeline_service_inst
@@ -104,5 +119,8 @@ class BuildAppStage(stage.BootingStage):
bot_service_inst = bot_service.BotService(ap)
ap.bot_service = bot_service_inst
knowledge_service_inst = knowledge_service.KnowledgeService(ap)
ap.knowledge_service = knowledge_service_inst
ctrl = controller.Controller(ap)
ap.ctrl = ctrl

View File

@@ -7,11 +7,18 @@ from .. import stage, app
@stage.stage_class('GenKeysStage')
class GenKeysStage(stage.BootingStage):
"""生成密钥阶段"""
"""Generate keys stage"""
async def run(self, ap: app.Application):
"""启动"""
"""Generate keys"""
if not ap.instance_config.data['system']['jwt']['secret']:
ap.instance_config.data['system']['jwt']['secret'] = secrets.token_hex(16)
await ap.instance_config.dump_config()
if 'recovery_key' not in ap.instance_config.data['system']:
ap.instance_config.data['system']['recovery_key'] = ''
if not ap.instance_config.data['system']['recovery_key']:
ap.instance_config.data['system']['recovery_key'] = secrets.token_hex(3).upper()
await ap.instance_config.dump_config()
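`GenKeysStage` uses the standard-library `secrets` module for both values: `token_hex(16)` yields a 32-character JWT secret, while `token_hex(3).upper()` yields a short 6-character recovery key that is easy to read aloud. A quick illustration:

```python
import secrets

jwt_secret = secrets.token_hex(16)           # 32 hex characters
recovery_key = secrets.token_hex(3).upper()  # 6 hex characters, uppercased
```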

View File

@@ -8,10 +8,10 @@ from ..bootutils import config
@stage.stage_class('LoadConfigStage')
class LoadConfigStage(stage.BootingStage):
"""加载配置文件阶段"""
"""Load config file stage"""
async def run(self, ap: app.Application):
"""启动"""
"""Load config file"""
# ======= deprecated =======
if os.path.exists('data/config/command.json'):

View File

@@ -11,10 +11,13 @@ importutil.import_modules_in_pkg(migrations)
@stage.stage_class('MigrationStage')
class MigrationStage(stage.BootingStage):
"""迁移阶段"""
"""Migration stage
These migrations are legacy, only performed in version 3.x
"""
async def run(self, ap: app.Application):
"""启动"""
"""Run migration"""
if any(
[
@@ -29,7 +32,7 @@ class MigrationStage(stage.BootingStage):
migrations = migration.preregistered_migrations
# 按照迁移号排序
# Sort by migration number
migrations.sort(key=lambda x: x.number)
for migration_cls in migrations:
@@ -37,4 +40,4 @@ class MigrationStage(stage.BootingStage):
if await migration_instance.need_migrate():
await migration_instance.run()
print(f'已执行迁移 {migration_instance.name}')
print(f'Migration {migration_instance.name} executed')

View File

@@ -8,7 +8,7 @@ from ..bootutils import log
class PersistenceHandler(logging.Handler, object):
"""
保存日志到数据库
Save logs to database
"""
ap: app.Application
@@ -19,9 +19,9 @@ class PersistenceHandler(logging.Handler, object):
def emit(self, record):
"""
emit函数为自定义handler类时必重写的函数这里可以根据需要对日志消息做一些处理比如发送日志到服务器
emit function is a required function for custom handler classes, here you can process the log messages as needed, such as sending logs to the server
发出记录(Emit a record)
Emit a record
"""
try:
msg = self.format(record)
@@ -34,10 +34,10 @@ class PersistenceHandler(logging.Handler, object):
@stage.stage_class('SetupLoggerStage')
class SetupLoggerStage(stage.BootingStage):
"""设置日志器阶段"""
"""Setup logger stage"""
async def run(self, ap: app.Application):
"""启动"""
"""Setup logger"""
persistence_handler = PersistenceHandler('LoggerHandler', ap)
extra_handlers = []

View File

@@ -1,5 +1,7 @@
from __future__ import annotations
import asyncio
from .. import stage, app, note
from ...utils import importutil
@@ -10,21 +12,25 @@ importutil.import_modules_in_pkg(notes)
@stage.stage_class('ShowNotesStage')
class ShowNotesStage(stage.BootingStage):
"""显示启动信息阶段"""
"""Show notes stage"""
async def run(self, ap: app.Application):
# 排序
# Sort
note.preregistered_notes.sort(key=lambda x: x.number)
for note_cls in note.preregistered_notes:
try:
note_inst = note_cls(ap)
if await note_inst.need_show():
async for ret in note_inst.yield_note():
if not ret:
continue
msg, level = ret
if msg:
ap.logger.log(level, msg)
async def ayield_note(note_inst: note.LaunchNote):
async for ret in note_inst.yield_note():
if not ret:
continue
msg, level = ret
if msg:
ap.logger.log(level, msg)
asyncio.create_task(ayield_note(note_inst))
except Exception:
continue
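The change above moves note consumption into `asyncio.create_task`, so a slow async generator no longer blocks the remaining startup stages. A minimal sketch of consuming an async generator in a background task (the demo awaits the task at the end only so the result is deterministic; the real stage fires and forgets):

```python
import asyncio

async def yield_notes():
    for msg in ('first note', 'second note'):
        yield msg

collected = []

async def main():
    async def consume(gen):
        async for msg in gen:
            collected.append(msg)

    # Schedule consumption in the background instead of awaiting inline.
    task = asyncio.create_task(consume(yield_notes()))
    await task  # for the demo only; startup code would continue immediately

asyncio.run(main())
```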

View File

@@ -9,13 +9,13 @@ from . import entities as core_entities
class TaskContext:
"""任务跟踪上下文"""
"""Task tracking context"""
current_action: str
"""当前正在执行的动作"""
"""Current action being executed"""
log: str
"""记录日志"""
"""Log"""
def __init__(self):
self.current_action = 'default'
@@ -58,40 +58,40 @@ placeholder_context: TaskContext | None = None
class TaskWrapper:
"""任务包装器"""
"""Task wrapper"""
_id_index: int = 0
"""任务ID索引"""
"""Task ID index"""
id: int
"""任务ID"""
"""Task ID"""
task_type: str = 'system' # 任务类型: system user
"""任务类型"""
task_type: str = 'system' # Task type: system or user
"""Task type"""
kind: str = 'system_task' # 由发起者确定任务种类,通常同质化的任务种类相同
"""任务种类"""
kind: str = 'system_task' # Task type determined by the initiator, usually the same task type
"""Task type"""
name: str = ''
"""任务唯一名称"""
"""Task unique name"""
label: str = ''
"""任务显示名称"""
"""Task display name"""
task_context: TaskContext
"""任务上下文"""
"""Task context"""
task: asyncio.Task
"""任务"""
"""Task"""
task_stack: list = None
"""任务堆栈"""
"""Task stack"""
ap: app.Application
"""应用实例"""
"""Application instance"""
scopes: list[core_entities.LifecycleControlScope]
"""任务所属生命周期控制范围"""
"""Task scope"""
def __init__(
self,
@@ -165,13 +165,13 @@ class TaskWrapper:
class AsyncTaskManager:
"""保存app中的所有异步任务
包含系统级的和用户级(插件安装、更新等由用户直接发起的)的"""
"""Save all asynchronous tasks in the app
Include system-level and user-level (plugin installation, update, etc. initiated by users directly)"""
ap: app.Application
tasks: list[TaskWrapper]
"""所有任务"""
"""All tasks"""
def __init__(self, ap: app.Application):
self.ap = ap

View File

@@ -0,0 +1,9 @@
from __future__ import annotations
class AdapterNotFoundError(Exception):
def __init__(self, adapter_name: str):
self.adapter_name = adapter_name
def __str__(self):
return f'Adapter {self.adapter_name} not found'

View File

@@ -0,0 +1,9 @@
from __future__ import annotations
class RequesterNotFoundError(Exception):
def __init__(self, requester_name: str):
self.requester_name = requester_name
def __str__(self):
return f'Requester {self.requester_name} not found'

View File

@@ -4,7 +4,7 @@ from .base import Base
class Bot(Base):
"""机器人"""
"""Bot"""
__tablename__ = 'bots'

View File

@@ -12,7 +12,7 @@ initial_metadata = [
class Metadata(Base):
"""数据库元数据"""
"""Database metadata"""
__tablename__ = 'metadata'

View File

@@ -4,7 +4,7 @@ from .base import Base
class LLMModel(Base):
"""LLM 模型"""
"""LLM model"""
__tablename__ = 'llm_models'
@@ -23,3 +23,24 @@ class LLMModel(Base):
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)
class EmbeddingModel(Base):
"""Embedding 模型"""
__tablename__ = 'embedding_models'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
description = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester_config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
api_keys = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
extra_args = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
created_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.func.now())
updated_at = sqlalchemy.Column(
sqlalchemy.DateTime,
nullable=False,
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)
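The new `EmbeddingModel` table follows the same declarative-mapping shape as `LLMModel`: a string-UUID primary key plus `server_default`/`onupdate` timestamp columns. A reduced sketch of that shape against an in-memory SQLite database (table and column names here are illustrative, not LangBot's):

```python
import sqlalchemy
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class EmbeddingModelDemo(Base):
    """Minimal stand-in for an embedding_models-style table."""
    __tablename__ = 'embedding_models_demo'
    uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True)
    name = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
    created_at = sqlalchemy.Column(
        sqlalchemy.DateTime, nullable=False,
        server_default=sqlalchemy.func.now(),  # timestamp set by the database
    )

engine = sqlalchemy.create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
tables = sqlalchemy.inspect(engine).get_table_names()
```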

View File

@@ -4,7 +4,7 @@ from .base import Base
class LegacyPipeline(Base):
"""旧版流水线"""
"""Legacy pipeline"""
__tablename__ = 'legacy_pipelines'
@@ -20,13 +20,12 @@ class LegacyPipeline(Base):
)
for_version = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
is_default = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False, default=False)
stages = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
class PipelineRunRecord(Base):
"""流水线运行记录"""
"""Pipeline run record"""
__tablename__ = 'pipeline_run_records'
@@ -43,3 +42,4 @@ class PipelineRunRecord(Base):
started_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
finished_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
result = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
knowledge_base_uuid = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)

View File

@@ -4,7 +4,7 @@ from .base import Base
class PluginSetting(Base):
"""插件配置"""
"""Plugin setting"""
__tablename__ = 'plugin_settings'

View File

@@ -0,0 +1,50 @@
import sqlalchemy
from .base import Base
# Base = declarative_base()
# DATABASE_URL = os.getenv('DATABASE_URL', 'sqlite:///./rag_knowledge.db')
# print("Using database URL:", DATABASE_URL)
# engine = create_engine(DATABASE_URL, connect_args={'check_same_thread': False})
# SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# def create_db_and_tables():
# """Creates all database tables defined in the Base."""
# Base.metadata.create_all(bind=engine)
# print('Database tables created or already exist.')
class KnowledgeBase(Base):
__tablename__ = 'knowledge_bases'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String, index=True)
description = sqlalchemy.Column(sqlalchemy.Text)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
embedding_model_uuid = sqlalchemy.Column(sqlalchemy.String, default='')
top_k = sqlalchemy.Column(sqlalchemy.Integer, default=5)
class File(Base):
__tablename__ = 'knowledge_base_files'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
kb_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
file_name = sqlalchemy.Column(sqlalchemy.String)
extension = sqlalchemy.Column(sqlalchemy.String)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
status = sqlalchemy.Column(sqlalchemy.String, default='pending') # pending, processing, completed, failed
class Chunk(Base):
__tablename__ = 'knowledge_base_chunks'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
file_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
text = sqlalchemy.Column(sqlalchemy.Text)
# class Vector(Base):
# __tablename__ = 'knowledge_base_vectors'
# uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
# chunk_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)
# embedding = sqlalchemy.Column(sqlalchemy.LargeBinary)

View File

@@ -0,0 +1,13 @@
from sqlalchemy import Column, Integer, ForeignKey, LargeBinary
from sqlalchemy.orm import declarative_base, relationship
Base = declarative_base()
class Vector(Base):
__tablename__ = 'vectors'
id = Column(Integer, primary_key=True, index=True)
chunk_id = Column(Integer, ForeignKey('chunks.id'), unique=True)
embedding = Column(LargeBinary) # Store embeddings as binary
chunk = relationship('Chunk', back_populates='vector')

View File

@@ -0,0 +1,13 @@
from __future__ import annotations
import pydantic
from typing import Any
class RetrieveResultEntry(pydantic.BaseModel):
id: str
metadata: dict[str, Any]
distance: float

View File

@@ -11,7 +11,7 @@ preregistered_managers: list[type[BaseDatabaseManager]] = []
def manager_class(name: str) -> None:
"""注册一个数据库管理类"""
"""Register a database manager class"""
def decorator(cls: type[BaseDatabaseManager]) -> type[BaseDatabaseManager]:
cls.name = name
@@ -22,7 +22,7 @@ def manager_class(name: str) -> None:
class BaseDatabaseManager(abc.ABC):
"""基础数据库管理类"""
"""Base database manager class"""
name: str

View File

@@ -7,7 +7,7 @@ from .. import database
@database.manager_class('sqlite')
class SQLiteDatabaseManager(database.BaseDatabaseManager):
"""SQLite 数据库管理类"""
"""SQLite database manager"""
async def initialize(self) -> None:
sqlite_path = 'data/langbot.db'

View File

@@ -22,12 +22,12 @@ importutil.import_modules_in_pkg(persistence)
class PersistenceManager:
"""持久化模块管理器"""
"""Persistence module manager"""
ap: app.Application
db: database.BaseDatabaseManager
"""数据库管理器"""
"""Database manager"""
meta: sqlalchemy.MetaData
@@ -66,22 +66,25 @@ class PersistenceManager:
# write default pipeline
result = await self.execute_async(sqlalchemy.select(pipeline.LegacyPipeline))
default_pipeline_uuid = None
if result.first() is None:
self.ap.logger.info('Creating default pipeline...')
pipeline_config = json.load(open('templates/default-pipeline-config.json', 'r', encoding='utf-8'))
default_pipeline_uuid = str(uuid.uuid4())
pipeline_data = {
'uuid': str(uuid.uuid4()),
'uuid': default_pipeline_uuid,
'for_version': self.ap.ver_mgr.get_current_version(),
'stages': pipeline_service.default_stage_order,
'is_default': True,
'name': 'ChatPipeline',
'description': '默认提供的流水线,您配置的机器人、第一个模型将自动绑定到此流水线',
'description': 'Default pipeline, new bots will be bound to this pipeline | 默认提供的流水线,您配置的机器人将自动绑定到此流水线',
'config': pipeline_config,
}
await self.execute_async(sqlalchemy.insert(pipeline.LegacyPipeline).values(pipeline_data))
# =================================
# run migrations

View File

@@ -10,7 +10,7 @@ preregistered_db_migrations: list[typing.Type[DBMigration]] = []
def migration_class(number: int):
"""迁移类装饰器"""
"""Migration class decorator"""
def wrapper(cls: typing.Type[DBMigration]) -> typing.Type[DBMigration]:
cls.number = number
@@ -21,20 +21,20 @@ def migration_class(number: int):
class DBMigration(abc.ABC):
"""数据库迁移"""
"""Database migration"""
number: int
"""迁移号"""
"""Migration number"""
def __init__(self, ap: app.Application):
self.ap = ap
@abc.abstractmethod
async def upgrade(self):
"""升级"""
"""Upgrade"""
pass
@abc.abstractmethod
async def downgrade(self):
"""降级"""
"""Downgrade"""
pass

View File

@@ -15,21 +15,21 @@ from ...entity.persistence import (
@migration.migration_class(1)
class DBMigrateV3Config(migration.DBMigration):
"""从 v3 的配置迁移到 v4 的数据库"""
"""Migrate v3 config to v4 database"""
async def upgrade(self):
"""升级"""
"""Upgrade"""
"""
将 data/config 下的所有配置文件进行迁移。
迁移后,之前的配置文件都保存到 data/legacy/config 下。
迁移后data/metadata/ 下的所有配置文件都保存到 data/legacy/metadata 下。
Migrate all config files under data/config.
After migration, all previous config files are saved under data/legacy/config.
After migration, all config files under data/metadata/ are saved under data/legacy/metadata.
"""
if self.ap.provider_cfg is None:
return
# ======= 迁移模型 =======
# 只迁移当前选中的模型
# ======= Migrate model =======
# Only migrate the currently selected model
model_name = self.ap.provider_cfg.data.get('model', 'gpt-4o')
model_requester = 'openai-chat-completions'
@@ -91,8 +91,8 @@ class DBMigrateV3Config(migration.DBMigration):
sqlalchemy.insert(persistence_model.LLMModel).values(**llm_model_data)
)
# ======= 迁移流水线配置 =======
# 修改到默认流水线
# ======= Migrate pipeline config =======
# Modify to default pipeline
default_pipeline = [
self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
for pipeline in (
@@ -184,8 +184,8 @@ class DBMigrateV3Config(migration.DBMigration):
.where(persistence_pipeline.LegacyPipeline.uuid == default_pipeline['uuid'])
)
# ======= 迁移机器人 =======
# 只迁移启用的机器人
# ======= Migrate bot =======
# Only migrate enabled bots
for adapter in self.ap.platform_cfg.data.get('platform-adapters', []):
if not adapter.get('enable'):
continue
@@ -207,7 +207,7 @@ class DBMigrateV3Config(migration.DBMigration):
await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_bot.Bot).values(**bot_data))
# ======= 迁移系统设置 =======
# ======= Migrate system settings =======
self.ap.instance_config.data['admins'] = self.ap.system_cfg.data['admin-sessions']
self.ap.instance_config.data['api']['port'] = self.ap.system_cfg.data['http-api']['port']
self.ap.instance_config.data['command'] = {
@@ -223,7 +223,7 @@ class DBMigrateV3Config(migration.DBMigration):
await self.ap.instance_config.dump_config()
# ======= move files =======
# 迁移 data/config 下的所有配置文件
# Migrate all config files under data/config
all_legacy_dir_name = [
'config',
# 'metadata',
@@ -246,4 +246,4 @@ class DBMigrateV3Config(migration.DBMigration):
move_legacy_files(dir_name)
async def downgrade(self):
"""降级"""
"""Downgrade"""

View File

@@ -7,10 +7,10 @@ from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(2)
class DBMigrateCombineQuoteMsgConfig(migration.DBMigration):
"""引用消息合并配置"""
"""Combine quote message config"""
async def upgrade(self):
"""升级"""
"""Upgrade"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
@@ -37,5 +37,5 @@ class DBMigrateCombineQuoteMsgConfig(migration.DBMigration):
)
async def downgrade(self):
"""降级"""
"""Downgrade"""
pass

View File

@@ -7,10 +7,10 @@ from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(3)
class DBMigrateN8nConfig(migration.DBMigration):
"""N8n配置"""
"""N8n config"""
async def upgrade(self):
"""升级"""
"""Upgrade"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
@@ -45,5 +45,5 @@ class DBMigrateN8nConfig(migration.DBMigration):
)
async def downgrade(self):
"""降级"""
"""Downgrade"""
pass

View File

@@ -0,0 +1,38 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(4)
class DBMigrateRAGKBUUID(migration.DBMigration):
"""RAG知识库UUID"""
async def upgrade(self):
"""升级"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'knowledge-base' not in config['ai']['local-agent']:
config['ai']['local-agent']['knowledge-base'] = ''
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""降级"""
pass
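The upgrade above follows the same config-backfill pattern as the other migrations in this diff. A minimal sketch of that pattern, with plain dicts standing in for the persisted SQLAlchemy pipeline rows (the helper name is illustrative, not part of the codebase):

```python
# Hedged sketch of DBMigrateRAGKBUUID's backfill: ensure every pipeline
# config has a 'knowledge-base' key, defaulting to an empty string.
# Plain dicts stand in for the SQLAlchemy rows used by the real migration.

def backfill_knowledge_base(config: dict) -> dict:
    local_agent = config.setdefault('ai', {}).setdefault('local-agent', {})
    if 'knowledge-base' not in local_agent:
        local_agent['knowledge-base'] = ''
    return config
```

Existing values are left untouched, so re-running the migration is idempotent.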


@@ -6,9 +6,9 @@ from ...core import entities as core_entities
@stage.stage_class('BanSessionCheckStage')
class BanSessionCheckStage(stage.PipelineStage):
"""访问控制处理阶段
"""Access control processing stage
仅检查query中群号或个人号是否在访问控制列表中。
Only check if the group or personal number in the query is in the access control list.
"""
async def initialize(self, pipeline_config: dict):
@@ -41,5 +41,7 @@ class BanSessionCheckStage(stage.PipelineStage):
return entities.StageProcessResult(
result_type=entities.ResultType.CONTINUE if ctn else entities.ResultType.INTERRUPT,
new_query=query,
console_notice=f'根据访问控制忽略消息: {query.launcher_type.value}_{query.launcher_id}' if not ctn else '',
console_notice=f'Ignore message according to access control: {query.launcher_type.value}_{query.launcher_id}'
if not ctn
else '',
)


@@ -66,6 +66,8 @@ class ContentFilterStage(stage.PipelineStage):
if query.pipeline_config['safety']['content-filter']['scope'] == 'output-msg':
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
if not message.strip():
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:
for filter in self.filter_chain:
if filter_entities.EnableStage.PRE in filter.enable_stages:


@@ -13,13 +13,13 @@ preregistered_filters: list[typing.Type[ContentFilter]] = []
def filter_class(
name: str,
) -> typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]:
"""内容过滤器类装饰器
"""Content filter class decorator
Args:
name (str): 过滤器名称
name (str): Filter name
Returns:
typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]: 装饰器
typing.Callable[[typing.Type[ContentFilter]], typing.Type[ContentFilter]]: Decorator
"""
def decorator(cls: typing.Type[ContentFilter]) -> typing.Type[ContentFilter]:
@@ -35,7 +35,7 @@ def filter_class(
class ContentFilter(metaclass=abc.ABCMeta):
"""内容过滤器抽象类"""
"""Content filter abstract class"""
name: str
@@ -46,31 +46,31 @@ class ContentFilter(metaclass=abc.ABCMeta):
@property
def enable_stages(self):
"""启用的阶段
"""Enabled stages
默认为消息请求AI前后的两个阶段。
Default is the two stages before and after the message request to AI.
entity.EnableStage.PRE: 消息请求AI前此时需要检查的内容是用户的输入消息。
entity.EnableStage.POST: 消息请求AI后此时需要检查的内容是AI的回复消息。
entity.EnableStage.PRE: Before message request to AI, the content to check is the user's input message.
entity.EnableStage.POST: After message request to AI, the content to check is the AI's reply message.
"""
return [entities.EnableStage.PRE, entities.EnableStage.POST]
async def initialize(self):
"""初始化过滤器"""
"""Initialize filter"""
pass
@abc.abstractmethod
async def process(self, query: core_entities.Query, message: str = None, image_url=None) -> entities.FilterResult:
"""处理消息
"""Process message
分为前后阶段,具体取决于 enable_stages 的值。
对于内容过滤器来说,不需要考虑消息所处的阶段,只需要检查消息内容即可。
It is divided into two stages, depending on the value of enable_stages.
For content filters, you do not need to consider the stage of the message, you only need to check the message content.
Args:
message (str): 需要检查的内容
image_url (str): 要检查的图片的 URL
message (str): Content to check
image_url (str): URL of the image to check
Returns:
entities.FilterResult: 过滤结果,具体内容请查看 entities.FilterResult 类的文档
entities.FilterResult: Filter result, please refer to the documentation of entities.FilterResult class
"""
raise NotImplementedError
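The `filter_class` decorator above is a plain registry pattern: it stamps a name on the class and appends it to a module-level list. A self-contained sketch of the same idea (simplified; the real `ContentFilter` carries more state and abstract methods):

```python
# Hedged sketch of the registration pattern behind @filter_class:
# the decorator records the class in a module-level registry, mirroring
# preregistered_filters in the real module.
import typing

preregistered: list[type] = []


def filter_class(name: str) -> typing.Callable[[type], type]:
    def decorator(cls: type) -> type:
        cls.name = name          # stamp the registry name on the class
        preregistered.append(cls)
        return cls
    return decorator


@filter_class('ban-word-filter')
class BanWordFilter:
    pass
```

Importing the module is enough to populate the registry, which is why the codebase imports all filter modules at startup.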


@@ -8,7 +8,7 @@ from ....core import entities as core_entities
@filter_model.filter_class('ban-word-filter')
class BanWordFilter(filter_model.ContentFilter):
"""根据内容过滤"""
"""Filter content"""
async def initialize(self):
pass


@@ -8,7 +8,7 @@ from ....core import entities as core_entities
@filter_model.filter_class('content-ignore')
class ContentIgnore(filter_model.ContentFilter):
"""根据内容忽略消息"""
"""Ignore message according to content"""
@property
def enable_stages(self):
@@ -24,7 +24,7 @@ class ContentIgnore(filter_model.ContentFilter):
level=entities.ResultLevel.BLOCK,
replacement='',
user_notice='',
console_notice='根据 ignore_rules 中的 prefix 规则,忽略消息',
console_notice='Ignore message according to prefix rule in ignore_rules',
)
if 'regexp' in query.pipeline_config['trigger']['ignore-rules']:
@@ -34,7 +34,7 @@ class ContentIgnore(filter_model.ContentFilter):
level=entities.ResultLevel.BLOCK,
replacement='',
user_notice='',
console_notice='根据 ignore_rules 中的 regexp 规则,忽略消息',
console_notice='Ignore message according to regexp rule in ignore_rules',
)
return entities.FilterResult(


@@ -51,11 +51,10 @@ class Controller:
# find pipeline
# First find the bot, then find the pipeline, in case the bot adapter's config is not the latest one.
# Like aiocqhttp: once a client is connected, the existing client connection is not affected even if the adapter is updated and restarted.
bot = await self.ap.platform_mgr.get_bot_by_uuid(selected_query.bot_uuid)
if bot:
pipeline = await self.ap.pipeline_mgr.get_pipeline_by_uuid(
bot.bot_entity.use_pipeline_uuid
)
pipeline_uuid = selected_query.pipeline_uuid
if pipeline_uuid:
pipeline = await self.ap.pipeline_mgr.get_pipeline_by_uuid(pipeline_uuid)
if pipeline:
await pipeline.run(selected_query)


@@ -16,9 +16,9 @@ importutil.import_modules_in_pkg(strategies)
@stage.stage_class('LongTextProcessStage')
class LongTextProcessStage(stage.PipelineStage):
"""长消息处理阶段
"""Long message processing stage
改写:
Rewrite:
- resp_message_chain
"""
@@ -36,22 +36,22 @@ class LongTextProcessStage(stage.PipelineStage):
use_font = 'C:/Windows/Fonts/msyh.ttc'
if not os.path.exists(use_font):
self.ap.logger.warn(
'未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在配置文件中调整相关设置。'
'Font file not found and the Windows system font cannot be used; switching to the forward message component to send long messages. You can adjust the related settings in the configuration file.'
)
config['blob_message_strategy'] = 'forward'
else:
self.ap.logger.info('使用Windows自带字体:' + use_font)
self.ap.logger.info('Using Windows system font: ' + use_font)
config['font-path'] = use_font
else:
self.ap.logger.warn(
'未找到字体文件,且无法使用系统自带字体,更换为转发消息组件以发送长消息,您可以在配置文件中调整相关设置。'
'Font file not found and the system font cannot be used; switching to the forward message component to send long messages. You can adjust the related settings in the configuration file.'
)
pipeline_config['output']['long-text-processing']['strategy'] = 'forward'
except Exception:
traceback.print_exc()
self.ap.logger.error(
'加载字体文件失败({}),更换为转发消息组件以发送长消息,您可以在配置文件中调整相关设置。'.format(
'Failed to load font file ({}); switching to the forward message component to send long messages. You can adjust the related settings in the configuration file.'.format(
use_font
)
)
@@ -63,12 +63,12 @@ class LongTextProcessStage(stage.PipelineStage):
self.strategy_impl = strategy_cls(self.ap)
break
else:
raise ValueError(f'未找到名为 {config["strategy"]} 的长消息处理策略')
raise ValueError(f'Long message processing strategy not found: {config["strategy"]}')
await self.strategy_impl.initialize()
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
# 检查是否包含非 Plain 组件
# Check if it contains non-Plain components
contains_non_plain = False
for msg in query.resp_message_chain[-1]:
@@ -77,7 +77,7 @@ class LongTextProcessStage(stage.PipelineStage):
break
if contains_non_plain:
self.ap.logger.debug('消息中包含非 Plain 组件,跳过长消息处理。')
self.ap.logger.debug('Message contains non-Plain components, skip long message processing.')
elif (
len(str(query.resp_message_chain[-1]))
> query.pipeline_config['output']['long-text-processing']['threshold']
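The dispatch decision above can be sketched roughly as follows, under the assumption that only all-Plain messages longer than the configured threshold are handed to the long-text strategy (plain strings stand in for `Plain` components here; anything else counts as non-Plain):

```python
# Hedged sketch of the long-text decision: skip processing when the
# chain contains non-Plain components, otherwise compare the rendered
# length against the configured threshold.

def needs_long_text_processing(components: list, threshold: int) -> bool:
    contains_non_plain = any(not isinstance(c, str) for c in components)
    text = ''.join(str(c) for c in components)
    return (not contains_non_plain) and len(text) > threshold
```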


@@ -15,17 +15,17 @@ Forward = platform_message.Forward
class ForwardComponentStrategy(strategy_model.LongTextStrategy):
async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
display = ForwardMessageDiaplay(
title='群聊的聊天记录',
brief='[聊天记录]',
source='聊天记录',
preview=['QQ用户: ' + message],
summary='查看1条转发消息',
title='Group chat history',
brief='[Chat history]',
source='Chat history',
preview=['User: ' + message],
summary='View 1 forwarded message',
)
node_list = [
platform_message.ForwardMessageNode(
sender_id=query.adapter.bot_account_id,
sender_name='QQ用户',
sender_name='User',
message_chain=platform_message.MessageChain([message]),
)
]


@@ -14,13 +14,13 @@ preregistered_strategies: list[typing.Type[LongTextStrategy]] = []
def strategy_class(
name: str,
) -> typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]:
"""长文本处理策略类装饰器
"""Long text processing strategy class decorator
Args:
name (str): 策略名称
name (str): Strategy name
Returns:
typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]: 装饰器
typing.Callable[[typing.Type[LongTextStrategy]], typing.Type[LongTextStrategy]]: Decorator
"""
def decorator(cls: typing.Type[LongTextStrategy]) -> typing.Type[LongTextStrategy]:
@@ -36,7 +36,7 @@ def strategy_class(
class LongTextStrategy(metaclass=abc.ABCMeta):
"""长文本处理策略抽象类"""
"""Long text processing strategy abstract class"""
name: str
@@ -50,15 +50,15 @@ class LongTextStrategy(metaclass=abc.ABCMeta):
@abc.abstractmethod
async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
"""处理长文本
"""Process long text
在 platform.json 中配置 long-text-process 字段,只要 文本长度超过了 threshold 就会调用此方法
Configure the long-text-process field in platform.json; this method is called whenever the text length exceeds the threshold.
Args:
message (str): 消息
query (core_entities.Query): 此次请求的上下文对象
message (str): Message
query (core_entities.Query): Query object
Returns:
list[platform_message.MessageComponent]: 转换后的 平台 消息组件列表
list[platform_message.MessageComponent]: Converted platform message components
"""
return []


@@ -12,9 +12,9 @@ importutil.import_modules_in_pkg(truncators)
@stage.stage_class('ConversationMessageTruncator')
class ConversationMessageTruncator(stage.PipelineStage):
"""会话消息截断器
"""Conversation message truncator
用于截断会话消息链,以适应平台消息长度限制。
Used to truncate the conversation message chain to adapt to the LLM message length limit.
"""
trun: truncator.Truncator
@@ -27,10 +27,10 @@ class ConversationMessageTruncator(stage.PipelineStage):
self.trun = trun(self.ap)
break
else:
raise ValueError(f'未知的截断器: {use_method}')
raise ValueError(f'Unknown truncator: {use_method}')
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
"""处理"""
"""Process"""
query = await self.trun.truncate(query)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)


@@ -6,17 +6,17 @@ from ....core import entities as core_entities
@truncator.truncator_class('round')
class RoundTruncator(truncator.Truncator):
"""前文回合数阶段器"""
"""Truncate the conversation message chain to adapt to the LLM message length limit."""
async def truncate(self, query: core_entities.Query) -> core_entities.Query:
"""截断"""
"""Truncate"""
max_round = query.pipeline_config['ai']['local-agent']['max-round']
temp_messages = []
current_round = 0
# 从后往前遍历
# Traverse from back to front
for msg in query.messages[::-1]:
if current_round < max_round:
temp_messages.append(msg)


@@ -144,23 +144,27 @@ class RuntimePipeline:
result = await result
if isinstance(result, pipeline_entities.StageProcessResult):  # return result directly
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {result}')
self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {result.result_type}'
)
await self._check_output(query, result)
if result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}')
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break
elif result.result_type == pipeline_entities.ResultType.CONTINUE:
query = result.new_query
elif isinstance(result, typing.AsyncGenerator):  # generator
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} gen')
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query.query_id} gen')
async for sub_result in result:
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {sub_result}')
self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {sub_result.result_type}'
)
await self._check_output(query, sub_result)
if sub_result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}')
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break
elif sub_result.result_type == pipeline_entities.ResultType.CONTINUE:
query = sub_result.new_query
@@ -192,7 +196,7 @@ class RuntimePipeline:
if event_ctx.is_prevented_default():
return
self.ap.logger.debug(f'Processing query {query}')
self.ap.logger.debug(f'Processing query {query.query_id}')
await self._execute_from_stage(0, query)
except Exception as e:
@@ -200,7 +204,7 @@ class RuntimePipeline:
self.ap.logger.error(f'Error while processing query query_id={query.query_id} stage={inst_name} : {e}')
self.ap.logger.error(f'Traceback: {traceback.format_exc()}')
finally:
self.ap.logger.debug(f'Query {query} processed')
self.ap.logger.debug(f'Query {query.query_id} processed')
class PipelineManager:


@@ -35,6 +35,7 @@ class QueryPool:
message_event: platform_events.MessageEvent,
message_chain: platform_message.MessageChain,
adapter: msadapter.MessagePlatformAdapter,
pipeline_uuid: typing.Optional[str] = None,
) -> entities.Query:
async with self.condition:
query = entities.Query(
@@ -48,6 +49,7 @@ class QueryPool:
resp_messages=[],
resp_message_chain=[],
adapter=adapter,
pipeline_uuid=pipeline_uuid,
)
self.queries.append(query)
self.query_id_counter += 1
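The new `pipeline_uuid` parameter simply defaults to `None` and is stored on the query so the controller can resolve the pipeline later. A minimal stand-in sketch (dataclasses replace the real pydantic entities; the async locking is omitted):

```python
# Hedged sketch of the pipeline_uuid threading: the bot passes its
# configured pipeline UUID into the pool when a query is created.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Query:
    query_id: int
    pipeline_uuid: Optional[str] = None


class QueryPool:
    def __init__(self) -> None:
        self.queries: list[Query] = []
        self.query_id_counter = 0

    def add_query(self, pipeline_uuid: Optional[str] = None) -> Query:
        query = Query(query_id=self.query_id_counter, pipeline_uuid=pipeline_uuid)
        self.queries.append(query)
        self.query_id_counter += 1
        return query
```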


@@ -11,11 +11,11 @@ from ...platform.types import message as platform_message
@stage.stage_class('PreProcessor')
class PreProcessor(stage.PipelineStage):
"""请求预处理阶段
"""Request pre-processing stage
签出会话、prompt、上文、模型、内容函数。
Check out session, prompt, context, model, and content functions.
改写:
Rewrite:
- session
- prompt
- messages
@@ -29,12 +29,12 @@ class PreProcessor(stage.PipelineStage):
query: core_entities.Query,
stage_inst_name: str,
) -> entities.StageProcessResult:
"""处理"""
"""Process"""
selected_runner = query.pipeline_config['ai']['runner']['runner']
session = await self.ap.sess_mgr.get_session(query)
# local-agent 时,llm_model None
# When not local-agent, llm_model is None
llm_model = (
await self.ap.model_mgr.get_model_by_uuid(query.pipeline_config['ai']['local-agent']['model'])
if selected_runner == 'local-agent'
@@ -45,11 +45,13 @@ class PreProcessor(stage.PipelineStage):
query,
session,
query.pipeline_config['ai']['local-agent']['prompt'],
query.pipeline_uuid,
query.bot_uuid,
)
conversation.use_llm_model = llm_model
# 设置query
# Set query
query.session = session
query.prompt = conversation.prompt.copy()
query.messages = conversation.messages.copy()
@@ -78,14 +80,15 @@ class PreProcessor(stage.PipelineStage):
if me.type == 'image_url':
msg.content.remove(me)
content_list = []
content_list: list[llm_entities.ContentElement] = []
plain_text = ''
qoute_msg = query.pipeline_config['trigger'].get('misc', {}).get('combine-quote-message')
# tidy the content_list
# combine all text content into one, and put it in the first position
for me in query.message_chain:
if isinstance(me, platform_message.Plain):
content_list.append(llm_entities.ContentElement.from_text(me.text))
plain_text += me.text
elif isinstance(me, platform_message.Image):
if selected_runner != 'local-agent' or query.use_llm_model.model_entity.abilities.__contains__(
@@ -104,10 +107,12 @@ class PreProcessor(stage.PipelineStage):
if msg.base64 is not None:
content_list.append(llm_entities.ContentElement.from_image_base64(msg.base64))
content_list.insert(0, llm_entities.ContentElement.from_text(plain_text))
query.variables['user_message_text'] = plain_text
query.user_message = llm_entities.Message(role='user', content=content_list)
# =========== 触发事件 PromptPreProcessing
# =========== Trigger event PromptPreProcessing
event_ctx = await self.ap.plugin_mgr.emit_event(
event=events.PromptPreProcessing(


@@ -25,7 +25,7 @@ class MessageHandler(metaclass=abc.ABCMeta):
def cut_str(self, s: str) -> str:
"""
取字符串第一行最多20个字符若有多行或超过20个字符则加省略号
Take the first line of the string (at most 20 characters); append an ellipsis if the string has multiple lines or exceeds 20 characters
"""
s0 = s.split('\n')[0]
if len(s0) > 20 or '\n' in s:
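The hunk above is truncated by the diff view; a complete, runnable version matching the docstring would look like this (the exact ellipsis literal is an assumption):

```python
# Hedged completion of cut_str: keep only the first line, capped at 20
# characters, and append '...' whenever anything was dropped.

def cut_str(s: str) -> str:
    s0 = s.split('\n')[0]
    if len(s0) > 20 or '\n' in s:
        s0 = s0[:20] + '...'
    return s0
```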


@@ -22,11 +22,11 @@ class ChatMessageHandler(handler.MessageHandler):
self,
query: core_entities.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""处理"""
# API
# 生成器
"""Process"""
# Call API
# generator
# 触发插件事件
# Trigger plugin event
event_class = (
events.PersonNormalMessageReceived
if query.launcher_type == core_entities.LauncherTypes.PERSON
@@ -54,7 +54,7 @@ class ChatMessageHandler(handler.MessageHandler):
yield entities.StageProcessResult(result_type=entities.ResultType.INTERRUPT, new_query=query)
else:
if event_ctx.event.alter is not None:
# if isinstance(event_ctx.event, str): # 现在暂时不考虑多模态alter
# if isinstance(event_ctx.event, str): # Currently not considering multi-modal alter
query.user_message.content = event_ctx.event.alter
text_length = 0
@@ -65,12 +65,12 @@ class ChatMessageHandler(handler.MessageHandler):
runner = r(self.ap, query.pipeline_config)
break
else:
raise ValueError(f'未找到请求运行器: {query.pipeline_config["ai"]["runner"]["runner"]}')
raise ValueError(f'Request runner not found: {query.pipeline_config["ai"]["runner"]["runner"]}')
async for result in runner.run(query):
query.resp_messages.append(result)
self.ap.logger.info(f'对话({query.query_id})响应: {self.cut_str(result.readable_str())}')
self.ap.logger.info(f'Response({query.query_id}): {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
@@ -80,7 +80,7 @@ class ChatMessageHandler(handler.MessageHandler):
query.session.using_conversation.messages.append(query.user_message)
query.session.using_conversation.messages.extend(query.resp_messages)
except Exception as e:
self.ap.logger.error(f'对话({query.query_id})请求失败: {type(e).__name__} {str(e)}')
self.ap.logger.error(f'Request failed({query.query_id}): {type(e).__name__} {str(e)}')
hide_exception_info = query.pipeline_config['output']['misc']['hide-exception']


@@ -15,7 +15,7 @@ class CommandHandler(handler.MessageHandler):
self,
query: core_entities.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""处理"""
"""Process"""
command_text = str(query.message_chain).strip()[1:]
@@ -70,7 +70,7 @@ class CommandHandler(handler.MessageHandler):
)
)
self.ap.logger.info(f'命令({query.query_id})报错: {self.cut_str(str(ret.error))}')
self.ap.logger.info(f'Command({query.query_id}) error: {self.cut_str(str(ret.error))}')
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
elif ret.text is not None or ret.image_url is not None:
@@ -89,7 +89,7 @@ class CommandHandler(handler.MessageHandler):
)
)
self.ap.logger.info(f'命令返回: {self.cut_str(str(content[0]))}')
self.ap.logger.info(f'Command returned: {self.cut_str(str(content[0]))}')
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:


@@ -33,11 +33,11 @@ class Processor(stage.PipelineStage):
query: core_entities.Query,
stage_inst_name: str,
) -> entities.StageProcessResult:
"""处理"""
"""Process"""
message_text = str(query.message_chain).strip()
self.ap.logger.info(
f'处理 {query.launcher_type.value}_{query.launcher_id} 的请求({query.query_id}): {message_text}'
f'Processing request from {query.launcher_type.value}_{query.launcher_id} ({query.query_id}): {message_text}'
)
async def generator():


@@ -5,7 +5,6 @@ import asyncio
import traceback
import sqlalchemy
# FriendMessage, Image, MessageChain, Plain
from . import adapter as msadapter
@@ -16,6 +15,8 @@ from ..discover import engine
from ..entity.persistence import bot as persistence_bot
from ..entity.errors import platform as platform_errors
from .logger import EventLogger
# Handle plugin compatibility issues after YiriMirai was removed in 3.4
@@ -78,6 +79,7 @@ class RuntimeBot:
message_event=event,
message_chain=event.message_chain,
adapter=adapter,
pipeline_uuid=self.bot_entity.use_pipeline_uuid,
)
async def on_group_message(
@@ -102,6 +104,7 @@ class RuntimeBot:
message_event=event,
message_chain=event.message_chain,
adapter=adapter,
pipeline_uuid=self.bot_entity.use_pipeline_uuid,
)
self.adapter.register_listener(platform_events.FriendMessage, on_friend_message)
@@ -144,6 +147,8 @@ class PlatformManager:
bots: list[RuntimeBot]
webchat_proxy_bot: RuntimeBot
adapter_components: list[engine.Component]
adapter_dict: dict[str, type[msadapter.MessagePlatformAdapter]]
@@ -161,6 +166,31 @@ class PlatformManager:
adapter_dict[component.metadata.name] = component.get_python_component_class()
self.adapter_dict = adapter_dict
webchat_adapter_class = self.adapter_dict['webchat']
# initialize webchat adapter
webchat_logger = EventLogger(name='webchat-adapter', ap=self.ap)
webchat_adapter_inst = webchat_adapter_class(
{},
self.ap,
webchat_logger,
)
self.webchat_proxy_bot = RuntimeBot(
ap=self.ap,
bot_entity=persistence_bot.Bot(
uuid='webchat-proxy-bot',
name='WebChat',
description='',
adapter='webchat',
adapter_config={},
enable=True,
),
adapter=webchat_adapter_inst,
logger=webchat_logger,
)
await self.webchat_proxy_bot.initialize()
await self.load_bots_from_db()
def get_running_adapters(self) -> list[msadapter.MessagePlatformAdapter]:
@@ -177,7 +207,12 @@ class PlatformManager:
for bot in bots:
# load all bots here, enable or disable will be handled in runtime
await self.load_bot(bot)
try:
await self.load_bot(bot)
except platform_errors.AdapterNotFoundError as e:
self.ap.logger.warning(f'Adapter {e.adapter_name} not found, skipping bot {bot.uuid}')
except Exception as e:
self.ap.logger.error(f'Failed to load bot {bot.uuid}: {e}\n{traceback.format_exc()}')
async def load_bot(
self,
@@ -191,6 +226,9 @@ class PlatformManager:
logger = EventLogger(name=f'platform-adapter-{bot_entity.name}', ap=self.ap)
if bot_entity.adapter not in self.adapter_dict:
raise platform_errors.AdapterNotFoundError(bot_entity.adapter)
adapter_inst = self.adapter_dict[bot_entity.adapter](
bot_entity.adapter_config,
self.ap,
@@ -220,7 +258,9 @@ class PlatformManager:
return
def get_available_adapters_info(self) -> list[dict]:
return [component.to_plain_dict() for component in self.adapter_components]
return [
component.to_plain_dict() for component in self.adapter_components if component.metadata.name != 'webchat'
]
def get_available_adapter_info_by_name(self, name: str) -> dict | None:
for component in self.adapter_components:
@@ -273,6 +313,8 @@ class PlatformManager:
async def run(self):
# This method will only be called when the application launching
await self.webchat_proxy_bot.run()
for bot in self.bots:
if bot.enable:
await bot.run()


@@ -119,7 +119,7 @@ class EventLogger:
async def _truncate_logs(self):
if len(self.logs) > MAX_LOG_COUNT:
for i in range(DELETE_COUNT_PER_TIME):
for image_key in self.logs[i].images:
for image_key in self.logs[i].images: # type: ignore
await self.ap.storage_mgr.storage_provider.delete(image_key)
self.logs = self.logs[DELETE_COUNT_PER_TIME:]
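The trimming logic above can be sketched synchronously as follows (the constant values are assumptions; the real code deletes the image blobs through the async storage provider):

```python
# Hedged sketch of _truncate_logs: once the buffer exceeds
# MAX_LOG_COUNT, drop the oldest DELETE_COUNT_PER_TIME entries and
# collect their attached image keys for deletion.

MAX_LOG_COUNT = 5            # assumed value for illustration
DELETE_COUNT_PER_TIME = 2    # assumed value for illustration


def truncate_logs(logs: list[dict], deleted_images: list[str]) -> list[dict]:
    if len(logs) > MAX_LOG_COUNT:
        for entry in logs[:DELETE_COUNT_PER_TIME]:
            deleted_images.extend(entry.get('images', []))
        logs = logs[DELETE_COUNT_PER_TIME:]
    return logs
```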


@@ -16,7 +16,6 @@ from ..logger import EventLogger
class AiocqhttpMessageConverter(adapter.MessageConverter):
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain,
@@ -61,6 +60,15 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
elif type(msg) is platform_message.Forward:
for node in msg.node_list:
msg_list.extend((await AiocqhttpMessageConverter.yiri2target(node.message_chain))[0])
elif isinstance(msg, platform_message.File):
msg_list.append({'type': 'file', 'data': {'file': msg.url, 'name': msg.name}})
elif isinstance(msg, platform_message.Face):
if msg.face_type == 'face':
msg_list.append(aiocqhttp.MessageSegment.face(msg.face_id))
elif msg.face_type == 'rps':
msg_list.append(aiocqhttp.MessageSegment.rps())
elif msg.face_type == 'dice':
msg_list.append(aiocqhttp.MessageSegment.dice())
else:
msg_list.append(aiocqhttp.MessageSegment.text(str(msg)))
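The new `Face` branch above dispatches on `face_type`. A hedged sketch of that dispatch, with tagged dicts standing in for `aiocqhttp.MessageSegment` (the helper name is illustrative):

```python
# Hedged sketch of the Face-component dispatch: ordinary faces carry an
# id, while rock-paper-scissors and dice segments carry none.

def face_to_segment(face_type: str, face_id: int = 0) -> dict:
    if face_type == 'face':
        return {'type': 'face', 'id': face_id}
    if face_type == 'rps':
        return {'type': 'rps'}
    if face_type == 'dice':
        return {'type': 'dice'}
    raise ValueError(f'unknown face type: {face_type}')
```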
@@ -68,34 +76,154 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
return msg_list, msg_id, msg_time
@staticmethod
async def target2yiri(message: str, message_id: int = -1,bot=None):
async def target2yiri(message: str, message_id: int = -1, bot: aiocqhttp.CQHttp = None):
message = aiocqhttp.Message(message)
def get_face_name(face_id):
face_code_dict = {
'2': '好色',
'4': '得意',
'5': '流泪',
'8': '',
'9': '大哭',
'10': '尴尬',
'12': '调皮',
'14': '微笑',
'16': '',
'21': '可爱',
'23': '傲慢',
'24': '饥饿',
'25': '',
'26': '惊恐',
'27': '流汗',
'28': '憨笑',
'29': '悠闲',
'30': '奋斗',
'32': '疑问',
'33': '',
'34': '',
'38': '敲打',
'39': '再见',
'41': '发抖',
'42': '爱情',
'43': '跳跳',
'49': '拥抱',
'53': '蛋糕',
'60': '咖啡',
'63': '玫瑰',
'66': '爱心',
'74': '太阳',
'75': '月亮',
'76': '',
'78': '握手',
'79': '胜利',
'85': '飞吻',
'89': '西瓜',
'96': '冷汗',
'97': '擦汗',
'98': '抠鼻',
'99': '鼓掌',
'100': '糗大了',
'101': '坏笑',
'102': '左哼哼',
'103': '右哼哼',
'104': '哈欠',
'106': '委屈',
'109': '左亲亲',
'111': '可怜',
'116': '示爱',
'118': '抱拳',
'120': '拳头',
'122': '爱你',
'123': 'NO',
'124': 'OK',
'125': '转圈',
'129': '挥手',
'144': '喝彩',
'147': '棒棒糖',
'171': '',
'173': '泪奔',
'174': '无奈',
'175': '卖萌',
'176': '小纠结',
'179': 'doge',
'180': '惊喜',
'181': '骚扰',
'182': '笑哭',
'183': '我最美',
'201': '点赞',
'203': '托脸',
'212': '托腮',
'214': '啵啵',
'219': '蹭一蹭',
'222': '抱抱',
'227': '拍手',
'232': '佛系',
'240': '喷脸',
'243': '甩头',
'246': '加油抱抱',
'262': '脑阔疼',
'264': '捂脸',
'265': '辣眼睛',
'266': '哦哟',
'267': '头秃',
'268': '问号脸',
'269': '暗中观察',
'270': 'emm',
'271': '吃瓜',
'272': '呵呵哒',
'273': '我酸了',
'277': '汪汪',
'278': '',
'281': '无眼笑',
'282': '敬礼',
'284': '面无表情',
'285': '摸鱼',
'287': '',
'289': '睁眼',
'290': '敲开心',
'293': '摸锦鲤',
'294': '期待',
'297': '拜谢',
'298': '元宝',
'299': '牛啊',
'305': '右亲亲',
'306': '牛气冲天',
'307': '喵喵',
'314': '仔细分析',
'315': '加油',
'318': '崇拜',
'319': '比心',
'320': '庆祝',
'322': '拒绝',
'324': '吃糖',
'326': '生气',
}
return face_code_dict.get(face_id, '')
async def process_message_data(msg_data, reply_list):
if msg_data["type"] == "image":
image_base64, image_format = await image.qq_image_url_to_base64(msg_data["data"]['url'])
reply_list.append(
platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}'))
if msg_data['type'] == 'image':
image_base64, image_format = await image.qq_image_url_to_base64(msg_data['data']['url'])
reply_list.append(platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}'))
elif msg_data["type"] == "text":
reply_list.append(platform_message.Plain(text=msg_data["data"]["text"]))
elif msg_data['type'] == 'text':
reply_list.append(platform_message.Plain(text=msg_data['data']['text']))
elif msg_data["type"] == "forward": # 这里来应该传入转发消息组暂时传入qoute
for forward_msg_datas in msg_data["data"]["content"]:
for forward_msg_data in forward_msg_datas["message"]:
elif msg_data['type'] == 'forward':  # should receive a forward message group here; temporarily treated as a quote
for forward_msg_datas in msg_data['data']['content']:
for forward_msg_data in forward_msg_datas['message']:
await process_message_data(forward_msg_data, reply_list)
elif msg_data["type"] == "at":
if msg_data["data"]['qq'] == 'all':
elif msg_data['type'] == 'at':
if msg_data['data']['qq'] == 'all':
reply_list.append(platform_message.AtAll())
else:
reply_list.append(
platform_message.At(
target=msg_data["data"]['qq'],
target=msg_data['data']['qq'],
)
)
yiri_msg_list = []
yiri_msg_list.append(platform_message.Source(id=message_id, time=datetime.datetime.now()))
@@ -114,8 +242,15 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
elif msg.type == 'text':
yiri_msg_list.append(platform_message.Plain(text=msg.data['text']))
elif msg.type == 'image':
image_base64, image_format = await image.qq_image_url_to_base64(msg.data['url'])
yiri_msg_list.append(platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}'))
emoji_id = msg.data.get('emoji_package_id', None)
if emoji_id:
face_id = emoji_id
face_name = msg.data.get('summary', '')
image_msg = platform_message.Face(face_id=face_id, face_name=face_name)
else:
image_base64, image_format = await image.qq_image_url_to_base64(msg.data['url'])
image_msg = platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}')
yiri_msg_list.append(image_msg)
elif msg.type == 'forward':
# Not quite reasonable for now
# msg_datas = await bot.get_msg(message_id=message_id)
@@ -124,38 +259,53 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
# await process_message_data(msg_data, yiri_msg_list)
pass
elif msg.type == 'reply':  # Handle quoted messages here, passed in as Quote
msg_datas = await bot.get_msg(message_id=msg.data["id"])
msg_datas = await bot.get_msg(message_id=msg.data['id'])
for msg_data in msg_datas["message"]:
for msg_data in msg_datas['message']:
await process_message_data(msg_data, reply_list)
reply_msg = platform_message.Quote(message_id=msg.data["id"],sender_id=msg_datas["user_id"],origin=reply_list)
reply_msg = platform_message.Quote(
message_id=msg.data['id'], sender_id=msg_datas['user_id'], origin=reply_list
)
yiri_msg_list.append(reply_msg)
# Downloading every file here would fetch too many files, so downloads are skipped for now
# elif msg.type == 'file':
# # file_name = msg.data['file']
# file_id = msg.data['file_id']
# file_data = await bot.get_file(file_id=file_id)
# file_name = file_data.get('file_name')
# file_path = file_data.get('file')
# file_url = file_data.get('file_url')
# file_size = file_data.get('file_size')
# yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size))
elif msg.type == 'face':
face_id = msg.data['id']
face_name = msg.data['raw']['faceText']
if not face_name:
face_name = get_face_name(face_id)
yiri_msg_list.append(platform_message.Face(face_id=int(face_id), face_name=face_name.replace('/', '')))
elif msg.type == 'rps':
face_id = msg.data['result']
yiri_msg_list.append(platform_message.Face(face_type='rps', face_id=int(face_id), face_name='猜拳'))
elif msg.type == 'dice':
face_id = msg.data['result']
yiri_msg_list.append(platform_message.Face(face_type='dice', face_id=int(face_id), face_name='骰子'))
chain = platform_message.MessageChain(yiri_msg_list)
return chain
class AiocqhttpEventConverter(adapter.EventConverter):
@staticmethod
async def yiri2target(event: platform_events.MessageEvent, bot_account_id: int):
return event.source_platform_object
@staticmethod
async def target2yiri(event: aiocqhttp.Event,bot=None):
yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id,bot)
async def target2yiri(event: aiocqhttp.Event, bot=None):
yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id, bot)
if event.message_type == 'group':
permission = 'MEMBER'
@@ -263,15 +413,18 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
async def on_message(event: aiocqhttp.Event):
self.bot_account_id = event.self_id
try:
return await callback(await self.event_converter.target2yiri(event,self.bot), self)
return await callback(await self.event_converter.target2yiri(event, self.bot), self)
except Exception:
await self.logger.error(f'Error in on_message: {traceback.format_exc()}')
traceback.print_exc()
if event_type == platform_events.GroupMessage:
self.bot.on_message('group')(on_message)
# self.bot.on_notice()(on_message)
elif event_type == platform_events.FriendMessage:
self.bot.on_message('private')(on_message)
# self.bot.on_notice()(on_message)
# print(event_type)
async def on_websocket_connection(event: aiocqhttp.Event):
for event in self.on_websocket_connection_event_cache:

Some files were not shown because too many files have changed in this diff.