Compare commits

...

73 Commits

Author SHA1 Message Date
Junyan Qin
bf98b82cf2 chore: release v4.0.7 2025-06-18 13:10:20 +08:00
Junyan Qin (Chin)
edd70b943d Update bug-report_en.yml 2025-06-18 09:48:42 +08:00
Junyan Qin
3cbc823085 doc: make en README as default 2025-06-17 22:51:51 +08:00
Sheldon.li
48becf2c51 refactor(ContentFilterStage): Add logic for handling empty messages (#1525)
- In the ContentFilterStage, logic for handling empty messages has been added so that the pipeline continues processing even when the message is empty.
- This change makes content filtering more robust and prevents potential issues caused by empty messages.
- This optimization addresses the case where someone is @-mentioned in a group chat and the message carries no other content, which previously caused the Source-type messages in the message chain to be lost (a hedged sketch of the idea follows this commit entry).
2025-06-17 22:12:55 +08:00
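For illustration only, a minimal sketch of the idea described above, assuming a hypothetical stage interface and result enum (not LangBot's actual ContentFilterStage code):

```python
from dataclasses import dataclass, field
from enum import Enum


class ResultType(Enum):
    CONTINUE = 'continue'    # keep processing later pipeline stages
    INTERRUPT = 'interrupt'  # stop the pipeline for this query


@dataclass
class Query:
    # Text parts of the message chain; Source-type metadata lives elsewhere.
    message_parts: list[str] = field(default_factory=list)


class ContentFilterStage:
    """Hypothetical content filter stage: skip filtering for empty messages."""

    def process(self, query: Query) -> ResultType:
        text = ''.join(query.message_parts).strip()
        if not text:
            # Nothing to filter (e.g. a bare @-mention): let the pipeline
            # continue so Source-type messages are not dropped downstream.
            return ResultType.CONTINUE
        if self._contains_banned_words(text):
            return ResultType.INTERRUPT
        return ResultType.CONTINUE

    def _contains_banned_words(self, text: str) -> bool:
        banned = {'badword'}
        return any(word in text for word in banned)
```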
devin-ai-integration[bot]
56c686cd5a feat: add Japanese (ja-JP) language support (#1537)
* feat: add Japanese (ja-JP) language support

- Add comprehensive Japanese translation file (ja-JP.ts)
- Update i18n configuration to include Japanese locale
- Add Japanese language option to login and register page dropdowns
- Implement Japanese language detection and switching logic
- Maintain fallback to en-US for missing translations in flexible components

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: ui for ja-JP

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-16 21:30:57 +08:00
Junyan Qin (Chin)
208273c0dd Update README.md 2025-06-16 21:01:11 +08:00
fdc310
2ff7ca3025 feat: add file URL support and OneBot v11 (NapCat) send file and save file locally (#1533)
* feat: add file URL support and OneBot v11 (NapCat) send file and save file locally

* del print
2025-06-15 17:22:35 +08:00
fdc310
61a2361730 feat: add new message type WeChatFile; WeChat files are accepted and transmitted in base64 format (#1531) 2025-06-15 17:17:08 +08:00
Junyan Qin
f80f997a89 chore: update version field in pyproject.toml 2025-06-11 10:24:18 +08:00
Junyan Qin
18529a42c1 chore: release v4.0.6 2025-06-11 10:23:46 +08:00
Junyan Qin (Chin)
3e707b4b6e feat: reset all associated sessions after bot or pipeline is modified (#1517) 2025-06-09 21:50:08 +08:00
Junyan Qin
62f0a938a8 chore: remove legacy test in fe 2025-06-09 17:56:37 +08:00
Junyan Qin
ad3a163d82 fix: ruff linter error in libs 2025-06-09 17:56:21 +08:00
Junyan Qin
f5a4503610 perf: add text comment on bot log button 2025-06-09 15:27:17 +08:00
Junyan Qin
ec012cf5ed doc: update README 2025-06-09 10:20:11 +08:00
Junyan Qin
d70eceb72c fix(DebugDialog): \n not supported 2025-06-08 21:41:44 +08:00
devin-ai-integration[bot]
f271608114 feat: add dynamic base URL configuration using environment variables (#1511)
- Replace hardcoded base URL in HttpClient.ts with environment variable support
- Add NEXT_PUBLIC_API_BASE_URL environment variable for dynamic configuration
- Add dev:local script for development with localhost:5300 backend
- Development: uses localhost:5300, Production: uses / (relative path)
- Eliminates need for manual code changes when switching environments

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-08 17:44:40 +08:00
Junyan Qin
793f0a9c10 fix: base url 2025-06-08 17:34:32 +08:00
devin-ai-integration[bot]
4f2ec195fc feat: add WebChat adapter for pipeline debugging (#1510)
* feat: add WebChat adapter for pipeline debugging

- Create WebChatAdapter for handling debug messages in pipeline testing
- Add HTTP API endpoints for debug message sending and retrieval
- Implement frontend debug dialog with session switching (private/group chat)
- Add Chinese i18n translations for debug interface
- Auto-create default WebChat bot during database initialization
- Support fixed session IDs: webchatperson and webchatgroup for testing (a hedged sketch of this idea follows this commit entry)

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: ui for webchat

* feat: complete webchat backend

* feat: core chat apis

* perf: button style in pipeline card

* perf: log btn in bot card

* perf: webchat entities definition

* fix: bugs

* perf: web chat

* perf: dialog styles

* perf: styles

* perf: styles

* fix: group invalid in webchat

* perf: simulate real im message

* perf: group timeout toast

* feat(webchat): add support for mentioning the bot in a group

* perf(webchat): at component styles

* perf: at badge display in message

* fix: linter errors

* fix: webchat was listed on adapter list

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-08 15:34:26 +08:00
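As an illustration of the fixed debug session IDs mentioned above — the endpoint paths, storage, and handlers below are hypothetical and not LangBot's actual WebChat adapter; only Quart (already used elsewhere in the project) and the two session names come from the commit message:

```python
from quart import Quart, jsonify, request

app = Quart(__name__)

# Fixed session IDs for pipeline debugging (names taken from the commit message).
DEBUG_SESSIONS = {'webchatperson', 'webchatgroup'}

# In-memory store: session_id -> list of messages. A real adapter would feed
# incoming messages into the pipeline and collect the bot's replies here.
messages: dict[str, list[dict]] = {sid: [] for sid in DEBUG_SESSIONS}


@app.post('/debug/<session_id>/send')
async def send_debug_message(session_id: str):
    if session_id not in DEBUG_SESSIONS:
        return jsonify({'error': 'unknown debug session'}), 404
    payload = await request.get_json()
    content = (payload or {}).get('content', '')
    messages[session_id].append({'role': 'user', 'content': content})
    return jsonify({'status': 'ok'})


@app.get('/debug/<session_id>/messages')
async def get_debug_messages(session_id: str):
    if session_id not in DEBUG_SESSIONS:
        return jsonify({'error': 'unknown debug session'}), 404
    return jsonify(messages[session_id])
```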
Junyan Qin (Chin)
e6bc009414 feat: add i18n support for initialization page and fix plugin loading text (#1505)
* feat: add i18n support for initialization page and fix plugin loading text

- Add language selector to register/initialization page with Chinese and English options
- Add register section translations to both zh-Hans.ts and en-US.ts
- Replace hardcoded Chinese texts in register page with i18n translation calls
- Fix hardcoded '加载中...' text in plugin configuration dialog to use t('plugins.loading')
- Follow existing login page pattern for language selector implementation
- Maintain consistent UI/UX design with proper language switching functionality

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: language selecting logic

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-06 21:29:36 +08:00
Junyan Qin
20dc8fb5ab perf: language selecting logic 2025-06-06 21:27:08 +08:00
Devin AI
9a71edfeb0 feat: add i18n support for initialization page and fix plugin loading text
- Add language selector to register/initialization page with Chinese and English options
- Add register section translations to both zh-Hans.ts and en-US.ts
- Replace hardcoded Chinese texts in register page with i18n translation calls
- Fix hardcoded '加载中...' text in plugin configuration dialog to use t('plugins.loading')
- Follow existing login page pattern for language selector implementation
- Maintain consistent UI/UX design with proper language switching functionality

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-06-06 10:50:31 +00:00
Guanchao Wang
fe3fd664af Fix/slack image (#1501)
* fix: dingtalk adapters couldn't handle images

* fix: slack adapter couldn't put the image in logger
2025-06-06 10:04:00 +08:00
Guanchao Wang
6402755ac6 fix: dingtalk adapters couldn't handle images (#1500) 2025-06-05 23:37:58 +08:00
Junyan Qin
ac8fe049de fix: uv removes itself 2025-06-05 11:12:04 +08:00
Junyan Qin
955b391253 chore: release v4.0.5 2025-06-03 16:28:55 +08:00
Junyan Qin
08c6672841 feat: allow skip plugin deps checking 2025-06-02 21:43:27 +08:00
Junyan Qin
8917050fae chore: add ppio icon 2025-05-31 20:00:18 +08:00
Junyan Qin
21daef46f7 chore: remove gemini related deps 2025-05-31 19:27:08 +08:00
Junyan Qin (Chin)
8ad60b5b64 refactor: gemini requester (#1490)
* refactor: use openai compatible api for gemini

* chore: remove codes
2025-05-31 13:11:53 +08:00
Junyan Qin
7e17c96c30 fix: linter error 2025-05-30 22:29:16 +08:00
whw174660897
f17b06767e Feature: add n8n (#1468)
* feat(n8n): add n8n workflow API support

Adds the n8n workflow API as a new runner type, supporting calls to n8n workflows via webhook with several authentication methods (Basic, JWT, Header); a hedged sketch follows this commit entry. Adds an N8nAuthFormComponent to handle the linked n8n authentication form, and updates the related configuration files and test cases.

* chore: remove pip mirror url

* perf: simplify ret def of pipeline metadata

* feat(n8n): raise exc instead of ret as normal msg

* perf: add var `user_message_text`

* chore(n8n): migration and default config

* chore: required database version

---------

Co-authored-by: hengwei.wang <@>
Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-05-30 22:23:57 +08:00
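As a rough illustration of calling an n8n workflow webhook with the authentication modes listed above — the function, parameter names, and URL are assumptions for this sketch, not LangBot's actual n8n runner:

```python
import httpx


def call_n8n_webhook(
    webhook_url: str,
    payload: dict,
    auth_type: str = 'none',  # 'basic' | 'jwt' | 'header' | 'none'
    username: str | None = None,
    password: str | None = None,
    jwt_token: str | None = None,
    header_name: str | None = None,
    header_value: str | None = None,
) -> dict:
    """Call an n8n workflow webhook with one of several auth schemes (illustrative)."""
    headers: dict[str, str] = {}
    auth = None
    if auth_type == 'basic':
        auth = httpx.BasicAuth(username or '', password or '')
    elif auth_type == 'jwt':
        headers['Authorization'] = f'Bearer {jwt_token}'
    elif auth_type == 'header':
        headers[header_name or 'X-API-Key'] = header_value or ''

    resp = httpx.post(webhook_url, json=payload, headers=headers, auth=auth, timeout=60)
    resp.raise_for_status()
    return resp.json()


# Example usage (hypothetical webhook URL):
# result = call_n8n_webhook(
#     'https://n8n.example.com/webhook/chat',
#     {'user_message_text': 'hello'},
#     auth_type='header', header_name='X-API-Key', header_value='secret',
# )
```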
Junyan Qin
70a29fc623 chore: f u if you dont provide enough info in issue 2025-05-29 16:51:47 +08:00
Junyan Qin
239223be3f chore: release v4.0.4 2025-05-28 12:55:15 +08:00
Junyan Qin
b112cb320c fix: bad ability name in preproc check 2025-05-28 12:54:30 +08:00
Junyan Qin
5aaf2ba3ef fix: base url 2025-05-27 22:58:31 +08:00
Junyan Qin (Chin)
f1e9f46af1 feat: event log of bots (#1441)
* feat: basic arch of event log

* feat: complete event log framework

* fix: bad struct in bot log api

* feat: add event logging to all platform adapters

Co-Authored-By: wangcham233@gmail.com <651122857@qq.com>

* feat: add event logging to client classes

Co-Authored-By: wangcham233@gmail.com <651122857@qq.com>

* refactor: bot log getting api

* perf: logger for aiocqhttp and gewechat

* fix: add ignored logger in dingtalk

* fix: seq id bug in log getting

* feat: add logger in dingtalk, QQ official, Slack, wxoa

* feat: add logger for wecom

* feat: add logger for wecomcs

* perf(event logger): image processing

* Complete the frontend part of the bot log (#1479)

* feat: webui  bot log framework done

* feat: bot log complete

* perf(bot-log): style

* chore: fix incomplete i18n

* feat: support message session copy

* fix: filter and badge text

* perf: styles

* feat: add bot toggle switch in bot card

* fix: linter errors

---------

Co-authored-by: Junyan Qin <rockchinq@gmail.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: wangcham233@gmail.com <651122857@qq.com>
Co-authored-by: HYana <65863826+KaedeSAMA@users.noreply.github.com>
2025-05-27 22:36:50 +08:00
aberry
8dfef1d118 Bugfix (#1482)
* Update modelscopechatcmpl.py

The final arguments chunk of a streamed tool_call is None and needs to be checked.

* Update mcp.py

Problem: the closure captures the loop variable tool, so every func finally registered in self.functions ends up referencing the same (i.e. the last) tool.

Fix: "freeze" the current tool at the time func is defined by binding it as a default argument (see the sketch after this commit entry).

* Update mcp.py
2025-05-27 15:09:09 +08:00
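A minimal, generic sketch of the closure pitfall and the default-argument fix described above (the tool names and registry are placeholders, not the actual MCP code):

```python
def register_tools(tools: list[str]) -> dict:
    functions = {}

    for tool in tools:
        # Wrong: `tool` would be looked up when func runs, so every entry
        # would end up calling the last tool of the loop.
        # def func():
        #     return f'calling {tool}'

        # Right: bind the current value as a default argument; defaults are
        # evaluated at definition time, which "freezes" this iteration's tool.
        def func(tool=tool):
            return f'calling {tool}'

        functions[tool] = func

    return functions


funcs = register_tools(['time', 'weather', 'search'])
assert funcs['time']() == 'calling time'      # with the fix, each func keeps its own tool
assert funcs['search']() == 'calling search'
```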
Junyan Qin (Chin)
919a621bf8 fix: lru bug in t2i (#1445) (#1481) 2025-05-27 09:58:22 +08:00
Junyan Qin
3ac96f464d perf: show description in bot form 2025-05-23 10:31:11 +08:00
Junyan Qin
f9f03b81d1 chore: release v4.0.3.3 2025-05-22 10:49:24 +08:00
Junyan Qin
42171a9c07 fix: combine quote message not in default pipeline config 2025-05-22 10:44:33 +08:00
Junyan Qin
f1f00115c9 chore: update issue template 2025-05-22 10:42:59 +08:00
Junyan Qin
59bff61409 chore: release v4.0.3.2 2025-05-21 19:46:42 +08:00
Junyan Qin
778693a804 perf: desc of random 2025-05-21 19:45:45 +08:00
Junyan Qin
e5b2da225c perf: no longer get host ip 2025-05-21 19:42:04 +08:00
Steven Lynn
4a988b89a2 fix: update auto-reply probability description in trigger.yaml (#1463) 2025-05-21 17:50:23 +08:00
Junyan Qin
e5e8807312 perf: no longer ask for apikeys for ollama and lm studio 2025-05-20 16:01:20 +08:00
Junyan Qin
1376530c2e fix: conversation is null 2025-05-20 15:32:04 +08:00
Junyan Qin
7d34a2154b perf: unify i18n text class in frontend 2025-05-20 11:32:55 +08:00
Junyan Qin
ff335130ae chore: update CONTRIBUTING 2025-05-20 09:39:46 +08:00
Junyan Qin
0afef0ac0f chore: update pr template 2025-05-20 09:21:59 +08:00
Junyan Qin (Chin)
6447f270ea Update bug-report_en.yml 2025-05-20 09:16:30 +08:00
Junyan Qin (Chin)
81be62e1a4 Update bug-report_en.yml 2025-05-20 09:15:52 +08:00
Junyan Qin (Chin)
409909ccb1 Update bug-report_en.yml (#1456) 2025-05-20 09:14:52 +08:00
Junyan Qin
b821b69dbb chore: perf issue templates 2025-05-20 09:13:13 +08:00
Junyan Qin
7e2448655e chore: add english issue templates 2025-05-20 09:11:47 +08:00
Junyan Qin (Chin)
a7d2a68639 feat: add support for testing LLM models (#1454)
* feat: add support for testing LLM models

* fix: linter error
2025-05-19 23:10:04 +08:00
fdc310
aba51409a7 feat: add quote message processing and a setting for whether to enable it (#1446)
* Updated the wechatpad API and adapter

* Updated the wechatpad API and adapter

* Fixed some detail issues, such as @-mention replies and thread synchronization between startup login and starting the WebSocket long connection

* In importutil, fixed the slash-replacement issue when starting on Windows; added a login step in login (not useful for now); made some detail changes in wechatpad

* Updated the wechatpad API and adapter

* Added handling to convert image links to image_base64 for sending

* feat(wechatpad): adjust logging + bugfix

* feat(wechatpad): fix typo

* Fixed wrong parameters in the send-voice API; added handling to convert sent links to base64 data (apparently only some links work)

* Fixed some careless typos

* chore: remove manager.py

* feat: add quote message processing and a setting for whether to enable it

* chore: add db migration for this change

* chore: add db migration for this change

---------

Co-authored-by: shinelin <shinelinxx@gmail.com>
Co-authored-by: Junyan Qin (Chin) <rockchinq@gmail.com>
2025-05-19 22:24:18 +08:00
sheetung
5e5d37cbf1 St/webui (#1452)
* Fix card overflow on the webUI model configuration page

* fix: text overflow in webUI cards
2025-05-19 18:11:50 +08:00
sheetung
e5a99a0fe4 Fix card overflow on the webUI model configuration page (#1451) 2025-05-19 13:14:39 +08:00
Junyan Qin
a594cc07f6 chore: release v4.0.3.1 2025-05-19 10:31:11 +08:00
Junyan Qin
0a9714fbe7 perf: no cache for frontend page 2025-05-17 19:30:26 +08:00
Junyan Qin (Chin)
1992934dce fix: user_funcs typo in ollama chat requester (#1431) 2025-05-15 20:51:58 +08:00
zejiewang
bb930aec14 fix: lark adapter listeners init problem (#1426)
Co-authored-by: wangzejie <wangzejie@meicai.cn>
2025-05-15 11:25:38 +08:00
Junyan Qin
1d7f2ab701 fix: wrong ref in HomeTitleBar 2025-05-15 10:54:22 +08:00
Junyan Qin
347da6142e perf: multi language 2025-05-15 10:40:36 +08:00
Junyan Qin
a9f4dc517a perf: remove -q params in plugin deps prechecking 2025-05-15 10:24:53 +08:00
Junyan Qin (Chin)
9d45f3f3a7 update README.md 2025-05-15 09:04:38 +08:00
Guanchao Wang
256d24718b fix: dingtalk & wecom problems (#1424) 2025-05-14 22:55:16 +08:00
Junyan Qin
1272b8ef16 ci: update Dockerfile python version 2025-05-14 22:22:17 +08:00
Junyan Qin
696162ee52 chore: release v4.0.3 2025-05-14 22:05:03 +08:00
Junyan Qin
533f993e3a fix: bad Dockerfile CMD 2025-05-14 22:04:08 +08:00
138 changed files with 4452 additions and 796 deletions

View File

@@ -1,5 +1,5 @@
name: 漏洞反馈
description: 报错或漏洞请使用这个模板创建不使用此模板创建的异常、漏洞相关issue将被直接关闭。由于自己操作不当/不甚了解所用技术栈引起的网络连接问题恕无法解决,请勿提 issue。容器间网络连接问题参考文档 https://docs.langbot.app/deploy/network-details.html
description: 【供中文用户】报错或漏洞请使用这个模板创建不使用此模板创建的异常、漏洞相关issue将被直接关闭。由于自己操作不当/不甚了解所用技术栈引起的网络连接问题恕无法解决,请勿提 issue。容器间网络连接问题参考文档 https://docs.langbot.app/zh/workshop/network-details.html
title: "[Bug]: "
labels: ["bug?"]
body:
@@ -7,7 +7,7 @@ body:
attributes:
label: 运行环境
description: LangBot 版本、操作系统、系统架构、**Python版本**、**主机地理位置**
placeholder: 例如v3.3.0、CentOS x64 Python 3.10.3、Docker 的系统直接写 Docker 就行
placeholder: 例如v3.3.0、CentOS x64 Python 3.10.3、Docker
validations:
required: true
- type: textarea
@@ -19,12 +19,12 @@ body:
- type: textarea
attributes:
label: 复现步骤
description: 如何重现这个问题,越详细越好;请贴上所有相关的配置文件和元数据文件(注意隐去敏感信息)
description: 提供越多信息,我们会越快解决问题,建议多提供配置截图;**如果你不认真填写(只一两句话概括),我们会很生气并且立即关闭 issue 或两年后才回复你**
validations:
required: true
required: false
- type: textarea
attributes:
label: 启用的插件
description: 有些情况可能和插件功能有关,建议提供插件启用情况。可以使用`!plugin`命令查看已启用的插件
description: 有些情况可能和插件功能有关,建议提供插件启用情况。
validations:
required: false

View File

@@ -0,0 +1,30 @@
name: Bug report
description: Report bugs or vulnerabilities using this template. For container network connection issues, refer to the documentation https://docs.langbot.app/en/workshop/network-details.html
title: "[Bug]: "
labels: ["bug?"]
body:
- type: input
attributes:
label: Runtime environment
description: LangBot version, operating system, system architecture, **Python version**, **host location**
placeholder: "For example: v3.3.0, CentOS x64 Python 3.10.3, Docker"
validations:
required: true
- type: textarea
attributes:
label: Exception
description: Describe the exception in detail, what happened and when it happened. **Please include log information.**
validations:
required: true
- type: textarea
attributes:
label: Reproduction steps
description: How to reproduce this problem, the more detailed the better; the more information you provide, the faster we will solve the problem. 【注意】请务必认真填写此部分,若不提供完整信息(如只有一两句话的概括),我们将不会回复!
validations:
required: false
- type: textarea
attributes:
label: Enabled plugins
description: Some cases may be related to plugin functionality, so please provide the plugin enablement status.
validations:
required: false

View File

@@ -1,7 +1,7 @@
name: 需求建议
title: "[Feature]: "
labels: ["改进"]
description: "新功能或现有功能优化请使用这个模板不符合类别的issue将被直接关闭"
labels: []
description: "【供中文用户】新功能或现有功能优化请使用这个模板不符合类别的issue将被直接关闭"
body:
- type: dropdown
attributes:

View File

@@ -0,0 +1,21 @@
name: Feature request
title: "[Feature]: "
labels: []
description: "New features or existing feature improvements should use this template; issues that do not match will be closed directly"
body:
- type: dropdown
attributes:
label: This is a?
description: New feature request or existing feature improvement
options:
- New feature
- Existing feature improvement
validations:
required: true
- type: textarea
attributes:
label: Detailed description
description: Detailed description, the more detailed the better
validations:
required: true

View File

@@ -1,7 +1,7 @@
name: 提交新插件
title: "[Plugin]: 请求登记新插件"
labels: ["独立插件"]
description: "本模板供且仅供提交新插件使用"
description: "【供中文用户】本模板供且仅供提交新插件使用"
body:
- type: input
attributes:

View File

@@ -0,0 +1,24 @@
name: Submit a new plugin
title: "[Plugin]: Request to register a new plugin"
labels: ["Independent Plugin"]
description: "This template is only for submitting new plugins"
body:
- type: input
attributes:
label: Plugin name
description: Fill in the name of the plugin
validations:
required: true
- type: textarea
attributes:
label: Plugin code repository address
description: Only support Github
validations:
required: true
- type: textarea
attributes:
label: Plugin description
description: The description of the plugin
validations:
required: true

View File

@@ -1,20 +1,21 @@
## 概述
## 概述 / Overview
实现/解决/优化的内容:
> 请在此部分填写你实现/解决/优化的内容:
> Summary of what you implemented/solved/optimized:
## 检查清单
## 检查清单 / Checklist
### PR 作者完成
### PR 作者完成 / For PR author
*请在方括号间写`x`以打勾
*请在方括号间写`x`以打勾 / Please tick the box with `x`*
- [ ] 阅读仓库[贡献指引](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)了吗?
- [ ] 与项目所有者沟通过了吗?
- [ ] 我确定已自行测试所作的更改,确保功能符合预期。
- [ ] 阅读仓库[贡献指引](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)了吗? / Have you read the [contribution guide](https://github.com/RockChinQ/LangBot/blob/master/CONTRIBUTING.md)?
- [ ] 与项目所有者沟通过了吗? / Have you communicated with the project maintainer?
- [ ] 我确定已自行测试所作的更改,确保功能符合预期。 / I have tested the changes and ensured they work as expected.
### 项目所有者完成
### 项目维护者完成 / For project maintainer
- [ ] 相关 issues 链接了吗?
- [ ] 配置项写好了吗?迁移写好了吗?生效了吗?
- [ ] 依赖加到 pyproject.toml 和 core/bootutils/deps.py 了吗
- [ ] 文档编写了吗?
- [ ] 相关 issues 链接了吗? / Have you linked the related issues?
- [ ] 配置项写好了吗?迁移写好了吗?生效了吗? / Have you written the configuration items? Have you written the migration? Has it taken effect?
- [ ] 依赖加到 pyproject.toml 和 core/bootutils/deps.py 了吗 / Have you added the dependencies to pyproject.toml and core/bootutils/deps.py?
- [ ] 文档编写了吗? / Have you written the documentation?

View File

@@ -5,22 +5,27 @@
### 贡献形式
- 提交PR解决issues中提到的bug或期待的功能
- 提交PR实现您设想的功能请先提出issue与者沟通)
- 优化代码架构,使各个模块的组织更加整洁优雅
- 在issues中提出发现的bug或者期待的功能
- 提交PR实现您设想的功能请先提出issue与项目维护者沟通)
- 为本项目在其他社交平台撰写文章、制作视频等
- 为本项目的衍生项目作出贡献,或开发插件增加功能
### 如何开始
### 沟通语言规范
- 加入本项目交流群,一同探讨项目相关事务
- 解决本项目或衍生项目的issues中亟待解决的问题
- 阅读并完善本项目文档
- 在各个社交媒体撰写本项目教程等
- 在 PR 和 Commit Message 中请使用全英文
- 对于中文用户issue 中可以使用中文
### 代码规范
<hr/>
- 代码中的注解`务必`符合Google风格的规范
- 模块顶部的引入代码请遵循`系统模块``第三方库模块``自定义模块`的顺序进行引入
- `不要`直接引入模块的特定属性,而是引入这个模块,再通过`xxx.yyy`的形式使用属性
- 任何作用域的字段`必须`先声明后使用,并在声明处注明类型提示
## Guidelines
### Contribution
- Submit PRs to solve bugs or features in the issues
- Submit PRs to implement your ideas (Please create an issue first and communicate with the project maintainer)
- Write articles or make videos about this project on other social platforms
- Contribute to the development of derivative projects, or develop plugins to add features
### Spoken Language
- Use English in PRs and Commit Messages
- For English users, you can use English in issues

View File

@@ -6,7 +6,7 @@ COPY web ./web
RUN cd web && npm install && npm run build
FROM python:3.10.13-slim
FROM python:3.12.7-slim
WORKDIR /app
@@ -20,4 +20,4 @@ RUN apt update \
&& uv sync \
&& touch /.dockerenv
CMD [ "python", "main.py" ]
CMD [ "uv", "run", "main.py" ]

141
README.md
View File

@@ -1,4 +1,3 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
@@ -8,40 +7,39 @@
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
<a href="https://langbot.app">项目主页</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文档</a>
<a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">插件介绍</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
<a href="https://langbot.app">Home</a>
<a href="https://docs.langbot.app/en/insight/guide.html">Deployment</a>
<a href="https://docs.langbot.app/en/plugin/plugin-intro.html">Plugin</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
<div align="center">
😎高稳定、🧩支持扩展、🦄多模态 - 大模型原生即时通信机器人平台🤖
😎High Stability, 🧩Extension Supported, 🦄Multi-modal - LLM Native Instant Messaging Bot Platform🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
[简体中文](README.md) / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
[简体中文](README_CN.md) / [English](README.md) / [日本語](README_JP.md) / (PR for your language)
</div>
</p>
## ✨ 特性
## ✨ Features
- 💬 大模型对话、Agent支持多种大模型适配群聊和私聊具有多轮对话、工具调用、多模态能力并深度适配 [Dify](https://dify.ai)。目前支持 QQ、QQ频道、企业微信、个人微信、飞书、DiscordTelegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Deeply integrates with [Dify](https://dify.ai). Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 [New] Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
## 📦 开始使用
## 📦 Getting Started
#### Docker Compose 部署
#### Docker Compose Deployment
```bash
git clone https://github.com/RockChinQ/LangBot
@@ -49,106 +47,97 @@ cd LangBot
docker compose up -d
```
访问 http://localhost:5300 即可开始使用。
Visit http://localhost:5300 to start using it.
详细文档[Docker 部署](https://docs.langbot.app/zh/deploy/langbot/docker.html)
Detailed documentation [Docker Deployment](https://docs.langbot.app/en/deploy/langbot/docker.html).
#### 宝塔面板部署
#### One-click Deployment on BTPanel
已上架宝塔面板,若您已安装宝塔面板,可以根据[文档](https://docs.langbot.app/zh/deploy/langbot/one-click/bt.html)使用。
LangBot has been listed on the BTPanel, if you have installed the BTPanel, you can use the [document](https://docs.langbot.app/en/deploy/langbot/one-click/bt.html) to use it.
#### Zeabur 云部署
#### Zeabur Cloud Deployment
社区贡献的 Zeabur 模板。
Community contributed Zeabur template.
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/zh-CN/templates/ZKTBDH)
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/en-US/templates/ZKTBDH)
#### Railway 云部署
#### Railway Cloud Deployment
[![Deploy on Railway](https://railway.com/button.svg)](https://railway.app/template/yRrAyL?referralCode=vogKPF)
#### 手动部署
#### Other Deployment Methods
直接使用发行版运行,查看文档[手动部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)
Directly use the released version to run, see the [Manual Deployment](https://docs.langbot.app/en/deploy/langbot/manual.html) documentation.
## 📸 效果展示
## 📸 Demo
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="450px"/>
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="450px"/>
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="450px"/>
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="450px"/>
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
<img alt="回复效果(带有联网插件)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
<img alt="Reply Effect (with Internet Plugin)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
- WebUI Demo: https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意仅展示webui效果公开环境请不要在其中填入您的任何敏感信息。
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: Only the WebUI effect is shown, please do not fill in any sensitive information in the public environment.
## 🔌 组件兼容性
## 🔌 Component Compatibility
### 消息平台
### Message Platform
| 平台 | 状态 | 备注 |
| Platform | Status | Remarks |
| --- | --- | --- |
| QQ 个人号 | ✅ | QQ 个人号私聊、群聊 |
| QQ 官方机器人 | ✅ | QQ 官方机器人,支持频道、私聊、群聊 |
| 企业微信 | ✅ | |
| 企微对外客服 | ✅ | |
| 个人微信 | ✅ | |
| 微信公众号 | ✅ | |
| 飞书 | ✅ | |
| 钉钉 | ✅ | |
| Personal QQ | ✅ | |
| QQ Official API | ✅ | |
| WeCom | ✅ | |
| WeComCS | ✅ | |
| Personal WeChat | ✅ | |
| Lark | ✅ | |
| DingTalk | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 正在开发中
🚧: In development
### 大模型能力
### LLMs
| 模型 | 状态 | 备注 |
| LLM | Status | Remarks |
| --- | --- | --- |
| [OpenAI](https://platform.openai.com/) | ✅ | 可接入任何 OpenAI 接口格式模型 |
| [OpenAI](https://platform.openai.com/) | ✅ | Available for any OpenAI interface format model |
| [DeepSeek](https://www.deepseek.com/) | ✅ | |
| [Moonshot](https://www.moonshot.cn/) | ✅ | |
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智谱AI](https://open.bigmodel.cn/) | ✅ | |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps platform |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
| [Ollama](https://ollama.com/) | ✅ | 本地大模型运行平台 |
| [LMStudio](https://lmstudio.ai/) | ✅ | 本地大模型运行平台 |
| [GiteeAI](https://ai.gitee.com/) | ✅ | 大模型接口聚合平台 |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
| [阿里云百炼](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
| [MCP](https://modelcontextprotocol.io/) | ✅ | 支持通过 MCP 协议获取工具 |
| [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |
| [LMStudio](https://lmstudio.ai/) | ✅ | Local LLM running platform |
| [GiteeAI](https://ai.gitee.com/) | ✅ | LLM interface gateway(MaaS) |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | LLM gateway(MaaS) |
| [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLM gateway(MaaS), LLMOps platform |
| [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLM gateway(MaaS), LLMOps platform |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLM gateway(MaaS) |
| [MCP](https://modelcontextprotocol.io/) | ✅ | Support tool access through MCP protocol |
### TTS
## 🤝 Community Contribution
| 平台/模型 | 备注 |
| --- | --- |
| [FishAudio](https://fish.audio/zh-CN/discovery/) | [插件](https://github.com/the-lazy-me/NewChatVoice) |
| [海豚 AI](https://www.ttson.cn/?source=thelazy) | [插件](https://github.com/the-lazy-me/NewChatVoice) |
| [AzureTTS](https://portal.azure.com/) | [插件](https://github.com/Ingnaryk/LangBot_AzureTTS) |
### 文生图
| 平台/模型 | 备注 |
| --- | --- |
| 阿里云百炼 | [插件](https://github.com/Thetail001/LangBot_BailianTextToImagePlugin)
## 😘 社区贡献
感谢以下[代码贡献者](https://github.com/RockChinQ/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
Thank you for the following [code contributors](https://github.com/RockChinQ/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
</a>
## 😎 Stay Ahead
Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
![star gif](https://docs.langbot.app/star.gif)

160
README_CN.md Normal file
View File

@@ -0,0 +1,160 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
<a href="https://langbot.app">项目主页</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文档</a>
<a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">插件介绍</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
<div align="center">
😎高稳定、🧩支持扩展、🦄多模态 - 大模型原生即时通信机器人平台🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
[简体中文](README_CN.md) / [English](README.md) / [日本語](README_JP.md) / (PR for your language)
</div>
</p>
## ✨ 特性
- 💬 大模型对话、Agent支持多种大模型适配群聊和私聊具有多轮对话、工具调用、多模态能力并深度适配 [Dify](https://dify.ai)。目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
## 📦 开始使用
#### Docker Compose 部署
```bash
git clone https://github.com/RockChinQ/LangBot
cd LangBot
docker compose up -d
```
访问 http://localhost:5300 即可开始使用。
详细文档[Docker 部署](https://docs.langbot.app/zh/deploy/langbot/docker.html)。
#### 宝塔面板部署
已上架宝塔面板,若您已安装宝塔面板,可以根据[文档](https://docs.langbot.app/zh/deploy/langbot/one-click/bt.html)使用。
#### Zeabur 云部署
社区贡献的 Zeabur 模板。
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/zh-CN/templates/ZKTBDH)
#### Railway 云部署
[![Deploy on Railway](https://railway.com/button.svg)](https://railway.app/template/yRrAyL?referralCode=vogKPF)
#### 手动部署
直接使用发行版运行,查看文档[手动部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
## 📸 效果展示
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="450px"/>
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="450px"/>
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="450px"/>
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="450px"/>
<img alt="回复效果(带有联网插件)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
- WebUI Demo: https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意仅展示webui效果公开环境请不要在其中填入您的任何敏感信息。
## 🔌 组件兼容性
### 消息平台
| 平台 | 状态 | 备注 |
| --- | --- | --- |
| QQ 个人号 | ✅ | QQ 个人号私聊、群聊 |
| QQ 官方机器人 | ✅ | QQ 官方机器人,支持频道、私聊、群聊 |
| 企业微信 | ✅ | |
| 企微对外客服 | ✅ | |
| 个人微信 | ✅ | |
| 微信公众号 | ✅ | |
| 飞书 | ✅ | |
| 钉钉 | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 正在开发中
### 大模型能力
| 模型 | 状态 | 备注 |
| --- | --- | --- |
| [OpenAI](https://platform.openai.com/) | ✅ | 可接入任何 OpenAI 接口格式模型 |
| [DeepSeek](https://www.deepseek.com/) | ✅ | |
| [Moonshot](https://www.moonshot.cn/) | ✅ | |
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智谱AI](https://open.bigmodel.cn/) | ✅ | |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
| [Ollama](https://ollama.com/) | ✅ | 本地大模型运行平台 |
| [LMStudio](https://lmstudio.ai/) | ✅ | 本地大模型运行平台 |
| [GiteeAI](https://ai.gitee.com/) | ✅ | 大模型接口聚合平台 |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
| [阿里云百炼](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
| [MCP](https://modelcontextprotocol.io/) | ✅ | 支持通过 MCP 协议获取工具 |
### TTS
| 平台/模型 | 备注 |
| --- | --- |
| [FishAudio](https://fish.audio/zh-CN/discovery/) | [插件](https://github.com/the-lazy-me/NewChatVoice) |
| [海豚 AI](https://www.ttson.cn/?source=thelazy) | [插件](https://github.com/the-lazy-me/NewChatVoice) |
| [AzureTTS](https://portal.azure.com/) | [插件](https://github.com/Ingnaryk/LangBot_AzureTTS) |
### 文生图
| 平台/模型 | 备注 |
| --- | --- |
| 阿里云百炼 | [插件](https://github.com/Thetail001/LangBot_BailianTextToImagePlugin)
## 😘 社区贡献
感谢以下[代码贡献者](https://github.com/RockChinQ/LangBot/graphs/contributors)和社区里其他成员对 LangBot 的贡献:
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
</a>
## 😎 保持更新
点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
![star gif](https://docs.langbot.app/star.gif)

View File

@@ -1,137 +0,0 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social.png" alt="LangBot"/>
</a>
<div align="center">
<a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
<a href="https://langbot.app">Home</a>
<a href="https://docs.langbot.app/en/insight/guide.html">Deployment</a>
<a href="https://docs.langbot.app/en/plugin/plugin-intro.html">Plugin</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit Plugin</a>
<div align="center">
😎High Stability, 🧩Extension Supported, 🦄Multi-modal - LLM Native Instant Messaging Bot Platform🤖
</div>
<br/>
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/RockChinQ/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[简体中文](README.md) / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
</div>
</p>
## ✨ Features
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Deeply integrates with [Dify](https://dify.ai). Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 [New] Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
## 📦 Getting Started
#### Docker Compose Deployment
```bash
git clone https://github.com/RockChinQ/LangBot
cd LangBot
docker compose up -d
```
Visit http://localhost:5300 to start using it.
Detailed documentation [Docker Deployment](https://docs.langbot.app/en/deploy/langbot/docker.html).
#### One-click Deployment on BTPanel
LangBot has been listed on the BTPanel, if you have installed the BTPanel, you can use the [document](https://docs.langbot.app/en/deploy/langbot/one-click/bt.html) to use it.
#### Zeabur Cloud Deployment
Community contributed Zeabur template.
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/en-US/templates/ZKTBDH)
#### Railway Cloud Deployment
[![Deploy on Railway](https://railway.com/button.svg)](https://railway.app/template/yRrAyL?referralCode=vogKPF)
#### Other Deployment Methods
Directly use the released version to run, see the [Manual Deployment](https://docs.langbot.app/en/deploy/langbot/manual.html) documentation.
## 📸 Demo
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
<img alt="Reply Effect (with Internet Plugin)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
- WebUI Demo: https://demo.langbot.dev/
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: Only the WebUI effect is shown, please do not fill in any sensitive information in the public environment.
## 🔌 Component Compatibility
### Message Platform
| Platform | Status | Remarks |
| --- | --- | --- |
| Personal QQ | ✅ | |
| QQ Official API | ✅ | |
| WeCom | ✅ | |
| WeComCS | ✅ | |
| Personal WeChat | ✅ | |
| Lark | ✅ | |
| DingTalk | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: In development
### LLMs
| LLM | Status | Remarks |
| --- | --- | --- |
| [OpenAI](https://platform.openai.com/) | ✅ | Available for any OpenAI interface format model |
| [DeepSeek](https://www.deepseek.com/) | ✅ | |
| [Moonshot](https://www.moonshot.cn/) | ✅ | |
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps platform |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |
| [LMStudio](https://lmstudio.ai/) | ✅ | Local LLM running platform |
| [GiteeAI](https://ai.gitee.com/) | ✅ | LLM interface gateway(MaaS) |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | LLM gateway(MaaS) |
| [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLM gateway(MaaS), LLMOps platform |
| [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLM gateway(MaaS), LLMOps platform |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLM gateway(MaaS) |
| [MCP](https://modelcontextprotocol.io/) | ✅ | Support tool access through MCP protocol |
## 🤝 Community Contribution
Thank you for the following [code contributors](https://github.com/RockChinQ/LangBot/graphs/contributors) and other members in the community for their contributions to LangBot:
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
</a>

View File

@@ -23,7 +23,7 @@
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[简体中文](README.md) / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
[简体中文](README_CN.md) / [English](README.md) / [日本語](README_JP.md) / (PR for your language)
</div>
@@ -134,3 +134,9 @@ LangBot への貢献に対して、以下の [コード貢献者](https://github
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
</a>
## 😎 最新情報を入手
リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
![star gif](https://docs.langbot.app/star.gif)

View File

@@ -1,4 +1,4 @@
from v1 import client
from v1 import client # type: ignore
import asyncio
@@ -8,19 +8,13 @@ import json
class TestDifyClient:
async def test_chat_messages(self):
cln = client.AsyncDifyServiceClient(
api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL')
)
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
async for chunk in cln.chat_messages(
inputs={}, query='调用工具查看现在几点?', user='test'
):
async for chunk in cln.chat_messages(inputs={}, query='调用工具查看现在几点?', user='test'):
print(json.dumps(chunk, ensure_ascii=False, indent=4))
async def test_upload_file(self):
cln = client.AsyncDifyServiceClient(
api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL')
)
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
file_bytes = open('img.png', 'rb').read()
@@ -32,9 +26,7 @@ class TestDifyClient:
print(json.dumps(resp, ensure_ascii=False, indent=4))
async def test_workflow_run(self):
cln = client.AsyncDifyServiceClient(
api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL')
)
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
# resp = await cln.workflow_run(inputs={}, user="test")
# # print(json.dumps(resp, ensure_ascii=False, indent=4))

View File

@@ -1,5 +1,5 @@
import asyncio
import dingtalk_stream
import dingtalk_stream # type: ignore
from dingtalk_stream import AckMessage
@@ -27,9 +27,3 @@ class EchoTextHandler(dingtalk_stream.ChatbotHandler):
await asyncio.sleep(0.1) # 异步等待,避免阻塞
return self.incoming_message
async def get_dingtalk_client(client_id, client_secret):
from api import DingTalkClient # 延迟导入,避免循环导入
return DingTalkClient(client_id, client_secret)

View File

@@ -2,7 +2,7 @@ import base64
import json
import time
from typing import Callable
import dingtalk_stream
import dingtalk_stream # type: ignore
from .EchoHandler import EchoTextHandler
from .dingtalkevent import DingTalkEvent
import httpx
@@ -17,6 +17,7 @@ class DingTalkClient:
robot_name: str,
robot_code: str,
markdown_card: bool,
logger: None,
):
"""初始化 WebSocket 连接并自动启动"""
self.credential = dingtalk_stream.Credential(client_id, client_secret)
@@ -34,6 +35,7 @@ class DingTalkClient:
self.robot_code = robot_code
self.access_token_expiry_time = ''
self.markdown_card = markdown_card
self.logger = logger
async def get_access_token(self):
url = 'https://api.dingtalk.com/v1.0/oauth2/accessToken'
@@ -47,8 +49,8 @@ class DingTalkClient:
self.access_token = response_data.get('accessToken')
expires_in = int(response_data.get('expireIn', 7200))
self.access_token_expiry_time = time.time() + expires_in - 60
except Exception as e:
raise Exception(e)
except Exception:
await self.logger.error('failed to get access token in dingtalk')
async def is_token_expired(self):
"""检查token是否过期"""
@@ -73,7 +75,7 @@ class DingTalkClient:
result = response.json()
download_url = result.get('downloadUrl')
else:
raise Exception(f'Error: {response.status_code}, {response.text}')
await self.logger.error(f'failed to get download url: {response.json()}')
if download_url:
return await self.download_url_to_base64(download_url)
@@ -84,10 +86,11 @@ class DingTalkClient:
if response.status_code == 200:
file_bytes = response.content
base64_str = base64.b64encode(file_bytes).decode('utf-8') # 返回字符串格式
return base64_str
mime_type = response.headers.get('Content-Type', 'application/octet-stream')
base64_str = base64.b64encode(file_bytes).decode('utf-8')
return f'data:{mime_type};base64,{base64_str}'
else:
raise Exception('获取文件失败')
await self.logger.error(f'failed to get files: {response.json()}')
async def get_audio_url(self, download_code: str):
if not await self.check_access_token():
@@ -103,7 +106,7 @@ class DingTalkClient:
if download_url:
return await self.download_url_to_base64(download_url)
else:
raise Exception('获取音频失败')
await self.logger.error(f'failed to get audio: {response.json()}')
else:
raise Exception(f'Error: {response.status_code}, {response.text}')
@@ -115,13 +118,20 @@ class DingTalkClient:
if event:
await self._handle_message(event)
async def send_message(self, content: str, incoming_message):
async def send_message(self, content: str, incoming_message, at: bool):
if self.markdown_card:
self.EchoTextHandler.reply_markdown(
title=self.robot_name + '的回答',
text=content,
incoming_message=incoming_message,
)
if at:
self.EchoTextHandler.reply_markdown(
title='@' + incoming_message.sender_nick + ' ' + content,
text='@' + incoming_message.sender_nick + ' ' + content,
incoming_message=incoming_message,
)
else:
self.EchoTextHandler.reply_markdown(
title=content,
text=content,
incoming_message=incoming_message,
)
else:
self.EchoTextHandler.reply_text(content, incoming_message)
@@ -184,7 +194,10 @@ class DingTalkClient:
del copy_message_data['IncomingMessage']
# print("message_data:", json.dumps(copy_message_data, indent=4, ensure_ascii=False))
except Exception:
traceback.print_exc()
if self.logger:
await self.logger.error(f'Error in get_message: {traceback.format_exc()}')
else:
traceback.print_exc()
return message_data
@@ -207,9 +220,12 @@ class DingTalkClient:
}
try:
async with httpx.AsyncClient() as client:
await client.post(url, headers=headers, json=data)
response = await client.post(url, headers=headers, json=data)
if response.status_code == 200:
return
except Exception:
traceback.print_exc()
await self.logger.error(f'failed to send proactive massage to person: {traceback.format_exc()}')
raise Exception(f'failed to send proactive massage to person: {traceback.format_exc()}')
async def send_proactive_message_to_group(self, target_id: str, content: str):
if not await self.check_access_token():
@@ -230,9 +246,12 @@ class DingTalkClient:
}
try:
async with httpx.AsyncClient() as client:
await client.post(url, headers=headers, json=data)
response = await client.post(url, headers=headers, json=data)
if response.status_code == 200:
return
except Exception:
traceback.print_exc()
await self.logger.error(f'failed to send proactive massage to group: {traceback.format_exc()}')
raise Exception(f'failed to send proactive massage to group: {traceback.format_exc()}')
async def start(self):
"""启动 WebSocket 连接,监听消息"""

View File

@@ -1,5 +1,5 @@
from typing import Dict, Any, Optional
import dingtalk_stream
import dingtalk_stream # type: ignore
class DingTalkEvent(dict):

View File

@@ -1,7 +1,7 @@
# 微信公众号的加解密算法与企业微信一样,所以直接使用企业微信的加解密算法文件
import time
import traceback
from ..wecom_api.WXBizMsgCrypt3 import WXBizMsgCrypt
from libs.wecom_api.WXBizMsgCrypt3 import WXBizMsgCrypt
import xml.etree.ElementTree as ET
from quart import Quart, request
import hashlib
@@ -23,7 +23,7 @@ xml_template = """
class OAClient:
def __init__(self, token: str, EncodingAESKey: str, AppID: str, Appsecret: str):
def __init__(self, token: str, EncodingAESKey: str, AppID: str, Appsecret: str, logger: None):
self.token = token
self.aes = EncodingAESKey
self.appid = AppID
@@ -43,6 +43,7 @@ class OAClient:
self.access_token_expiry_time = None
self.msg_id_map = {}
self.generated_content = {}
self.logger = logger
async def handle_callback_request(self):
try:
@@ -54,6 +55,7 @@ class OAClient:
echostr = request.args.get('echostr', '')
msg_signature = request.args.get('msg_signature', '')
if msg_signature is None:
await self.logger.error('msg_signature不在请求体中')
raise Exception('msg_signature不在请求体中')
if request.method == 'GET':
@@ -64,6 +66,7 @@ class OAClient:
if check_signature == signature:
return echostr # 验证成功返回echostr
else:
await self.logger.error('拒绝请求')
raise Exception('拒绝请求')
elif request.method == 'POST':
encryt_msg = await request.data
@@ -72,6 +75,7 @@ class OAClient:
xml_msg = xml_msg.decode('utf-8')
if ret != 0:
await self.logger.error('消息解密失败')
raise Exception('消息解密失败')
message_data = await self.get_message(xml_msg)
@@ -114,6 +118,7 @@ class OAClient:
return ''
except Exception:
await self.logger.error(f'handle_callback_request失败: {traceback.format_exc()}')
traceback.print_exc()
async def get_message(self, xml_msg: str):
@@ -176,6 +181,7 @@ class OAClientForLongerResponse:
AppID: str,
Appsecret: str,
LoadingMessage: str,
logger: None,
):
self.token = token
self.aes = EncodingAESKey
@@ -197,6 +203,7 @@ class OAClientForLongerResponse:
self.loading_message = LoadingMessage
self.msg_queue = {}
self.user_msg_queue = {}
self.logger = logger
async def handle_callback_request(self):
try:
@@ -207,6 +214,7 @@ class OAClientForLongerResponse:
msg_signature = request.args.get('msg_signature', '')
if msg_signature is None:
await self.logger.error('msg_signature不在请求体中')
raise Exception('msg_signature不在请求体中')
if request.method == 'GET':
@@ -221,6 +229,7 @@ class OAClientForLongerResponse:
xml_msg = xml_msg.decode('utf-8')
if ret != 0:
await self.logger.error('消息解密失败')
raise Exception('消息解密失败')
# 解析 XML
@@ -270,6 +279,7 @@ class OAClientForLongerResponse:
return response_xml
except Exception:
await self.logger.error(f'handle_callback_request失败: {traceback.format_exc()}')
traceback.print_exc()
async def get_message(self, xml_msg: str):

View File

@@ -34,7 +34,7 @@ def handle_validation(body: dict, bot_secret: str):
class QQOfficialClient:
def __init__(self, secret: str, token: str, app_id: str):
def __init__(self, secret: str, token: str, app_id: str, logger: None):
self.app = Quart(__name__)
self.app.add_url_rule(
'/callback/command',
@@ -49,6 +49,7 @@ class QQOfficialClient:
self.base_url = 'https://api.sgroup.qq.com'
self.access_token = ''
self.access_token_expiry_time = None
self.logger = logger
async def check_access_token(self):
"""检查access_token是否存在"""
@@ -77,6 +78,7 @@ class QQOfficialClient:
if access_token:
self.access_token = access_token
except Exception as e:
await self.logger.error(f'获取access_token失败: {response_data}')
raise Exception(f'获取access_token失败: {e}')
async def handle_callback_request(self):
@@ -102,7 +104,7 @@ class QQOfficialClient:
return {'code': 0, 'message': 'success'}
except Exception as e:
traceback.print_exc()
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
return {'error': str(e)}, 400
async def run_task(self, host: str, port: int, *args, **kwargs):
@@ -166,6 +168,7 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/v2/users/' + user_openid + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -178,9 +181,11 @@ class QQOfficialClient:
'msg_id': msg_id,
}
response = await client.post(url, headers=headers, json=data)
response_data = response.json()
if response.status_code == 200:
return
else:
await self.logger.error(f'发送私聊消息失败: {response_data}')
raise ValueError(response)
async def send_group_text_msg(self, group_openid: str, content: str, msg_id: str):
@@ -188,6 +193,7 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/v2/groups/' + group_openid + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -203,6 +209,7 @@ class QQOfficialClient:
if response.status_code == 200:
return
else:
await self.logger.error(f"发送群聊消息失败:{response.json()}")
raise Exception(response.read().decode())
async def send_channle_group_text_msg(self, channel_id: str, content: str, msg_id: str):
@@ -210,6 +217,7 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/channels/' + channel_id + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -225,12 +233,14 @@ class QQOfficialClient:
if response.status_code == 200:
return True
else:
await self.logger.error(f'发送频道群聊消息失败: {response.json()}')
raise Exception(response)
async def send_channle_private_text_msg(self, guild_id: str, content: str, msg_id: str):
"""发送频道私聊消息"""
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/dms/' + guild_id + '/messages'
async with httpx.AsyncClient() as client:
@@ -247,6 +257,7 @@ class QQOfficialClient:
if response.status_code == 200:
return True
else:
await self.logger.error(f'发送频道私聊消息失败: {response.json()}')
raise Exception(response)
async def is_token_expired(self):

View File

@@ -1,4 +1,5 @@
import json
import traceback
from quart import Quart, jsonify, request
from slack_sdk.web.async_client import AsyncWebClient
from .slackevent import SlackEvent
@@ -7,7 +8,7 @@ from pkg.platform.types import events as platform_events
class SlackClient:
def __init__(self, bot_token: str, signing_secret: str):
def __init__(self, bot_token: str, signing_secret: str, logger: None):
self.bot_token = bot_token
self.signing_secret = signing_secret
self.app = Quart(__name__)
@@ -19,6 +20,7 @@ class SlackClient:
'example': [],
}
self.bot_user_id = None # 避免机器人回复自己的消息
self.logger = logger
async def handle_callback_request(self):
try:
@@ -32,6 +34,7 @@ class SlackClient:
if self.bot_user_id and bot_user_id == self.bot_user_id:
return jsonify({'status': 'ok'})
# 处理私信
if data and data.get('event', {}).get('channel_type') in ['im']:
@@ -49,6 +52,7 @@ class SlackClient:
return jsonify({'status': 'ok'})
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
raise (e)
async def _handle_message(self, event: SlackEvent):
@@ -78,6 +82,7 @@ class SlackClient:
self.bot_user_id = response['message']['bot_id']
return
except Exception as e:
await self.logger.error(f"Error in send_message: {e}")
raise e
async def send_message_to_one(self, text: str, user_id: str):
@@ -88,6 +93,7 @@ class SlackClient:
return
except Exception as e:
await self.logger.error(f"Error in send_message: {traceback.format_exc()}")
raise e
async def run_task(self, host: str, port: int, *args, **kwargs):


@@ -11,13 +11,14 @@ from libs.wechatpad_api.api.chatroom import ChatRoomApi
class WeChatPadClient:
def __init__(self,base_url, token):
def __init__(self, base_url, token, logger=None):
self._login_api = LoginApi(base_url, token)
self._friend_api = FriendApi(base_url, token)
self._message_api = MessageApi(base_url, token)
self._user_api = UserApi(base_url, token)
self._download_api = DownloadApi(base_url, token)
self._chatroom_api = ChatRoomApi(base_url, token)
self.logger = logger
def get_token(self,admin_key, day: int):
'''获取token'''


@@ -3,6 +3,7 @@ from .WXBizMsgCrypt3 import WXBizMsgCrypt
import base64
import binascii
import httpx
import traceback
from quart import Quart
import xml.etree.ElementTree as ET
from typing import Callable, Dict, Any
@@ -19,6 +20,7 @@ class WecomClient:
token: str,
EncodingAESKey: str,
contacts_secret: str,
logger: None,
):
self.corpid = corpid
self.secret = secret
@@ -28,8 +30,8 @@ class WecomClient:
self.base_url = 'https://qyapi.weixin.qq.com/cgi-bin'
self.access_token = ''
self.secret_for_contacts = contacts_secret
self.logger = logger
self.app = Quart(__name__)
self.wxcpt = WXBizMsgCrypt(self.token, self.aes, self.corpid)
self.app.add_url_rule(
'/callback/command',
'handle_callback',
@@ -55,6 +57,7 @@ class WecomClient:
if 'access_token' in data:
return data['access_token']
else:
await self.logger.error(f"获取accesstoken失败:{response.json()}")
raise Exception(f'未获取access token: {data}')
async def get_users(self):
@@ -126,6 +129,7 @@ class WecomClient:
response = await client.post(url, json=params)
data = response.json()
except Exception as e:
await self.logger.error(f"发送图片失败:{data}")
raise Exception('Failed to send image: ' + str(e))
# 企业微信错误码40014和42001代表accesstoken问题
@@ -160,6 +164,7 @@ class WecomClient:
self.access_token = await self.get_access_token(self.secret)
return await self.send_private_msg(user_id, agent_id, content)
if data['errcode'] != 0:
await self.logger.error(f"发送消息失败:{data}")
raise Exception('Failed to send message: ' + str(data))
async def handle_callback_request(self):
@@ -171,18 +176,22 @@ class WecomClient:
timestamp = request.args.get('timestamp')
nonce = request.args.get('nonce')
wxcpt = WXBizMsgCrypt(self.token, self.aes, self.corpid)
if request.method == 'GET':
echostr = request.args.get('echostr')
ret, reply_echo_str = self.wxcpt.VerifyURL(msg_signature, timestamp, nonce, echostr)
ret, reply_echo_str = wxcpt.VerifyURL(msg_signature, timestamp, nonce, echostr)
if ret != 0:
await self.logger.error("验证失败")
raise Exception(f'验证失败,错误码: {ret}')
return reply_echo_str
elif request.method == 'POST':
encrypt_msg = await request.data
ret, xml_msg = self.wxcpt.DecryptMsg(encrypt_msg, msg_signature, timestamp, nonce)
ret, xml_msg = wxcpt.DecryptMsg(encrypt_msg, msg_signature, timestamp, nonce)
if ret != 0:
await self.logger.error("消息解密失败")
raise Exception(f'消息解密失败,错误码: {ret}')
# 解析消息并处理
message_data = await self.get_message(xml_msg)
@@ -193,6 +202,7 @@ class WecomClient:
return 'success'
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
return f'Error processing request: {str(e)}', 400
async def run_task(self, host: str, port: int, *args, **kwargs):
@@ -291,6 +301,7 @@ class WecomClient:
except binascii.Error as e:
raise ValueError(f'Invalid base64 string: {str(e)}')
else:
await self.logger.error("Image对象出错")
raise ValueError('image对象出错')
# 设置 multipart/form-data 格式的文件
@@ -314,6 +325,7 @@ class WecomClient:
self.access_token = await self.get_access_token(self.secret)
media_id = await self.upload_to_work(image)
if data.get('errcode', 0) != 0:
await self.logger.error(f"上传图片失败:{data}")
raise Exception('failed to upload file')
media_id = data.get('media_id')


@@ -13,7 +13,7 @@ import aiofiles
class WecomCSClient:
def __init__(self, corpid: str, secret: str, token: str, EncodingAESKey: str):
def __init__(self, corpid: str, secret: str, token: str, EncodingAESKey: str, logger: None):
self.corpid = corpid
self.secret = secret
self.access_token_for_contacts = ''
@@ -21,6 +21,7 @@ class WecomCSClient:
self.aes = EncodingAESKey
self.base_url = 'https://qyapi.weixin.qq.com/cgi-bin'
self.access_token = ''
self.logger = logger
self.app = Quart(__name__)
self.app.add_url_rule(
'/callback/command', 'handle_callback', self.handle_callback_request, methods=['GET', 'POST']
@@ -186,6 +187,7 @@ class WecomCSClient:
self.access_token = await self.get_access_token(self.secret)
return await self.send_text_msg(open_kfid, external_userid, msgid, content)
if data['errcode'] != 0:
await self.logger.error(f"发送消息失败:{data}")
raise Exception('Failed to send message')
return data
@@ -224,7 +226,10 @@ class WecomCSClient:
return 'success'
except Exception as e:
traceback.print_exc()
if self.logger:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
else:
traceback.print_exc()
return f'Error processing request: {str(e)}', 400
async def run_task(self, host: str, port: int, *args, **kwargs):

main.py

@@ -1,4 +1,5 @@
import asyncio
import argparse
# LangBot 终端启动入口
# 在此层级解决依赖项检查。
# LangBot/main.py
@@ -10,12 +11,16 @@ asciiart = r"""
|____\__,_|_||_\__, |___/\___/\__|
|___/
⭐️开源地址: https://github.com/RockChinQ/LangBot
📖文档地址: https://docs.langbot.app
⭐️ Open Source 开源地址: https://github.com/RockChinQ/LangBot
📖 Documentation 文档地址: https://docs.langbot.app
"""
async def main_entry(loop: asyncio.AbstractEventLoop):
parser = argparse.ArgumentParser(description='LangBot')
parser.add_argument('--skip-plugin-deps-check', action='store_true', help='跳过插件依赖项检查', default=False)
args = parser.parse_args()
print(asciiart)
import sys
@@ -28,14 +33,19 @@ async def main_entry(loop: asyncio.AbstractEventLoop):
if missing_deps:
print('以下依赖包未安装,将自动安装,请完成后重启程序:')
print(
'These dependencies are missing, they will be installed automatically, please restart the program after completion:'
)
for dep in missing_deps:
print('-', dep)
await deps.install_deps(missing_deps)
print('已自动安装缺失的依赖包,请重启程序。')
print('The missing dependencies have been installed automatically, please restart the program.')
sys.exit(0)
# check plugin deps
await deps.precheck_plugin_deps()
if not args.skip_plugin_deps_check:
await deps.precheck_plugin_deps()
# 检查pydantic版本如果没有 pydantic.v1则把 pydantic 映射为 v1
import pydantic.version
@@ -53,6 +63,7 @@ async def main_entry(loop: asyncio.AbstractEventLoop):
if generated_files:
print('以下文件不存在,已自动生成:')
print('Following files do not exist and have been automatically generated:')
for file in generated_files:
print('-', file)
@@ -69,9 +80,10 @@ if __name__ == '__main__':
if sys.version_info < (3, 10, 1):
print('需要 Python 3.10.1 及以上版本,当前 Python 版本为:', sys.version)
input('按任意键退出...')
print('Your Python version is not supported. Please exit the program by pressing any key.')
exit(1)
# 检查本目录是否有main.py且包含LangBot字符串
# Check if the current directory is the LangBot project root directory
invalid_pwd = False
if not os.path.exists('main.py'):
@@ -84,6 +96,8 @@ if __name__ == '__main__':
if invalid_pwd:
print('请在 LangBot 项目根目录下以命令形式运行此程序。')
input('按任意键退出...')
print('Please run this program in the LangBot project root directory in command form.')
print('Press any key to exit...')
exit(1)
loop = asyncio.new_event_loop()


@@ -0,0 +1,22 @@
from __future__ import annotations
import quart
import mimetypes
from .. import group
@group.group_class('files', '/api/v1/files')
class FilesRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('/image/<image_key>', methods=['GET'], auth_type=group.AuthType.NONE)
async def _(image_key: str) -> quart.Response:
if not await self.ap.storage_mgr.storage_provider.exists(image_key):
return quart.Response(status=404)
image_bytes = await self.ap.storage_mgr.storage_provider.load(image_key)
mime_type = mimetypes.guess_type(image_key)[0]
if mime_type is None:
mime_type = 'image/jpeg'
return quart.Response(image_bytes, mimetype=mime_type)
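A minimal client-side sketch of the new image route, assuming the API listens on localhost:5300; the image key below is a placeholder, and no auth header is needed because the route is registered with AuthType.NONE.
import httpx
async def fetch_logged_image(image_key: str) -> bytes | None:
    # placeholder base URL; adjust to wherever the LangBot API is served
    async with httpx.AsyncClient(base_url='http://localhost:5300') as client:
        resp = await client.get(f'/api/v1/files/image/{image_key}')
        if resp.status_code == 404:
            return None  # key not found in the storage provider
        return resp.content  # raw bytes; the content type is guessed from the key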


@@ -2,7 +2,7 @@ from __future__ import annotations
import quart
from .. import group
from ... import group
@group.group_class('pipelines', '/api/v1/pipelines')


@@ -0,0 +1,79 @@
import quart
from ... import group
@group.group_class('webchat', '/api/v1/pipelines/<pipeline_uuid>/chat')
class WebChatDebugRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('/send', methods=['POST'])
async def send_message(pipeline_uuid: str) -> str:
"""发送调试消息到流水线"""
try:
data = await quart.request.get_json()
session_type = data.get('session_type', 'person')
message_chain_obj = data.get('message', [])
if not message_chain_obj:
return self.http_status(400, -1, 'message is required')
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')
webchat_adapter = self.ap.platform_mgr.webchat_proxy_bot.adapter
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
result = await webchat_adapter.send_webchat_message(pipeline_uuid, session_type, message_chain_obj)
return self.success(
data={
'message': result,
}
)
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
@self.route('/messages/<session_type>', methods=['GET'])
async def get_messages(pipeline_uuid: str, session_type: str) -> str:
"""获取调试消息历史"""
try:
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')
webchat_adapter = self.ap.platform_mgr.webchat_proxy_bot.adapter
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
messages = webchat_adapter.get_webchat_messages(pipeline_uuid, session_type)
return self.success(data={'messages': messages})
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
@self.route('/reset/<session_type>', methods=['POST'])
async def reset_session(session_type: str) -> str:
"""重置调试会话"""
try:
if session_type not in ['person', 'group']:
return self.http_status(400, -1, 'session_type must be person or group')
webchat_adapter = None
for bot in self.ap.platform_mgr.bots:
if hasattr(bot.adapter, '__class__') and bot.adapter.__class__.__name__ == 'WebChatAdapter':
webchat_adapter = bot.adapter
break
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
webchat_adapter.reset_debug_session(session_type)
return self.success(data={'message': 'Session reset successfully'})
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
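A hedged walk-through of the three debug-chat endpoints added above; the base URL, the pipeline UUID and the {'type': 'Plain', ...} component shape are assumptions for illustration, and any auth headers required by the RouterGroup defaults are omitted.
import httpx
async def debug_chat(pipeline_uuid: str):
    base = f'http://localhost:5300/api/v1/pipelines/{pipeline_uuid}/chat'
    async with httpx.AsyncClient() as client:
        # push one message into the pipeline as a 'person' session
        await client.post(f'{base}/send', json={
            'session_type': 'person',
            'message': [{'type': 'Plain', 'text': 'hello'}],  # assumed component shape
        })
        # read back the history for that session type
        history = (await client.get(f'{base}/messages/person')).json()
        # clear the debug session when finished
        await client.post(f'{base}/reset/person')
        return history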


@@ -29,3 +29,16 @@ class BotsRouterGroup(group.RouterGroup):
elif quart.request.method == 'DELETE':
await self.ap.bot_service.delete_bot(bot_uuid)
return self.success()
@self.route('/<bot_uuid>/logs', methods=['POST'])
async def _(bot_uuid: str) -> str:
json_data = await quart.request.json
from_index = json_data.get('from_index', -1)
max_count = json_data.get('max_count', 10)
logs, total_count = await self.ap.bot_service.list_event_logs(bot_uuid, from_index, max_count)
return self.success(
data={
'logs': logs,
'total_count': total_count,
}
)
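A hypothetical call against the new logs route. The URL prefix of BotsRouterGroup is not shown in this diff, so the path below is a guess, and the 'data' envelope is assumed to follow the self.success(...) wrapper used above; from_index=-1 asks the EventLogger for the newest entries.
import httpx
async def tail_bot_logs(bot_uuid: str):
    async with httpx.AsyncClient(base_url='http://localhost:5300') as client:
        resp = await client.post(
            f'/api/v1/platform/bots/{bot_uuid}/logs',  # assumed prefix
            json={'from_index': -1, 'max_count': 10},
        )
        payload = resp.json()['data']  # assumed response envelope
        return payload['logs'], payload['total_count']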


@@ -36,3 +36,11 @@ class LLMModelsRouterGroup(group.RouterGroup):
await self.ap.model_service.delete_llm_model(model_uuid)
return self.success()
@self.route('/<model_uuid>/test', methods=['POST'])
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.model_service.test_llm_model(model_uuid, json_data)
return self.success()


@@ -13,10 +13,12 @@ from . import groups
from . import group
from .groups import provider as groups_provider
from .groups import platform as groups_platform
from .groups import pipelines as groups_pipelines
importutil.import_modules_in_pkg(groups)
importutil.import_modules_in_pkg(groups_provider)
importutil.import_modules_in_pkg(groups_platform)
importutil.import_modules_in_pkg(groups_pipelines)
class HTTPController:
@@ -107,4 +109,8 @@ class HTTPController:
elif path.endswith('.txt'):
mimetype = 'text/plain'
return await quart.send_from_directory(frontend_path, path, mimetype=mimetype)
response = await quart.send_from_directory(frontend_path, path, mimetype=mimetype)
response.headers['Cache-Control'] = 'no-cache, no-store, must-revalidate'
response.headers['Pragma'] = 'no-cache'
response.headers['Expires'] = '0'
return response


@@ -2,6 +2,7 @@ from __future__ import annotations
import uuid
import sqlalchemy
import typing
from ....core import app
from ....entity.persistence import bot as persistence_bot
@@ -92,9 +93,25 @@ class BotService:
if runtime_bot.enable:
await runtime_bot.run()
# update all conversation that use this bot
for session in self.ap.sess_mgr.session_list:
if session.using_conversation is not None and session.using_conversation.bot_uuid == bot_uuid:
session.using_conversation = None
async def delete_bot(self, bot_uuid: str) -> None:
"""删除机器人"""
await self.ap.platform_mgr.remove_bot(bot_uuid)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid)
)
async def list_event_logs(
self, bot_uuid: str, from_index: int, max_count: int
) -> typing.Tuple[list[dict], int, int, int]:
runtime_bot = await self.ap.platform_mgr.get_bot_by_uuid(bot_uuid)
if runtime_bot is None:
raise Exception('Bot not found')
logs, total_count = await runtime_bot.logger.get_logs(from_index, max_count)
return [log.to_json() for log in logs], total_count


@@ -6,6 +6,8 @@ import sqlalchemy
from ....core import app
from ....entity.persistence import model as persistence_model
from ....entity.persistence import pipeline as persistence_pipeline
from ....provider.modelmgr import requester as model_requester
from ....provider import entities as llm_entities
class ModelsService:
@@ -78,3 +80,26 @@ class ModelsService:
)
await self.ap.model_mgr.remove_llm_model(model_uuid)
async def test_llm_model(self, model_uuid: str, model_data: dict) -> None:
runtime_llm_model: model_requester.RuntimeLLMModel | None = None
if model_uuid != '_':
for model in self.ap.model_mgr.llm_models:
if model.model_entity.uuid == model_uuid:
runtime_llm_model = model
break
if runtime_llm_model is None:
raise Exception('model not found')
else:
runtime_llm_model = await self.ap.model_mgr.init_runtime_llm_model(model_data)
await runtime_llm_model.requester.invoke_llm(
query=None,
model=runtime_llm_model,
messages=[llm_entities.Message(role='user', content='Hello, world!')],
funcs=[],
extra_args={},
)
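A sketch of how this service method could be exercised, assuming 'ap' is an initialized Application; the model_data fields are placeholders, not a schema taken from this changeset. Passing '_' as the uuid tests an unsaved configuration, while any other value is looked up in ap.model_mgr.llm_models.
async def smoke_test_models(ap):
    # test a model that is already registered, by its uuid
    await ap.model_service.test_llm_model('existing-model-uuid', {})
    # '_' builds a temporary runtime model from model_data instead of a lookup
    await ap.model_service.test_llm_model('_', {
        'model': 'gpt-4o-mini',                  # placeholder fields
        'requester': 'openai-chat-completions',  # placeholder
        'api_key': 'sk-placeholder',
    })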


@@ -112,6 +112,11 @@ class PipelineService:
await self.ap.pipeline_mgr.remove_pipeline(pipeline_uuid)
await self.ap.pipeline_mgr.load_pipeline(pipeline)
# update all conversation that use this pipeline
for session in self.ap.sess_mgr.session_list:
if session.using_conversation is not None and session.using_conversation.pipeline_uuid == pipeline_uuid:
session.using_conversation = None
async def delete_pipeline(self, pipeline_uuid: str) -> None:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_pipeline.LegacyPipeline).where(


@@ -23,7 +23,8 @@ from ..api.http.service import model as model_service
from ..api.http.service import pipeline as pipeline_service
from ..api.http.service import bot as bot_service
from ..discover import engine as discover_engine
from ..utils import logcache, ip
from ..storage import mgr as storagemgr
from ..utils import logcache
from . import taskmgr
from . import entities as core_entities
@@ -96,6 +97,8 @@ class Application:
log_cache: logcache.LogCache = None
storage_mgr: storagemgr.StorageMgr = None
# ========= HTTP Services =========
user_service: user_service.UserService = None
@@ -158,28 +161,24 @@ class Application:
"""打印访问 webui 的提示"""
if not os.path.exists(os.path.join('.', 'web/out')):
self.logger.warning('WebUI 文件缺失,请根据文档获取https://docs.langbot.app/webui/intro.html')
self.logger.warning('WebUI 文件缺失,请根据文档部署https://docs.langbot.app/zh')
self.logger.warning(
'WebUI files are missing, please deploy according to the documentation: https://docs.langbot.app/en'
)
return
host_ip = '127.0.0.1'
public_ip = await ip.get_myip()
port = self.instance_config.data['api']['port']
tips = f"""
=======================================
您可通过以下方式访问管理面板
Access WebUI / 访问管理面板
🏠 本地地址:http://{host_ip}:{port}/
🌐 公网地址http://{public_ip}:{port}/
🏠 Local Address: http://{host_ip}:{port}/
🌐 Public Address: http://<Your Public IP>:{port}/
📌 如果您在容器中运行此程序,请确保容器的 {port} 端口已对外暴露
🔗 若要使用公网地址访问,请阅读以下须知
1. 公网地址仅供参考,请以您的主机公网 IP 为准;
2. 要使用公网地址访问,请确保您的主机具有公网 IP并且系统防火墙已放行 {port} 端口;
🤯 WebUI 仍处于 Beta 测试阶段,如有问题或建议请反馈到 https://github.com/RockChinQ/LangBot/issues
📌 Running this program in a container? Please ensure that the {port} port is exposed
=======================================
""".strip()
for line in tips.split('\n'):


@@ -74,5 +74,5 @@ async def precheck_plugin_deps():
if 'requirements.txt' in os.listdir(subdir):
pkgmgr.install_requirements(
os.path.join(subdir, 'requirements.txt'),
extra_params=['-q', '-q', '-q'],
extra_params=[],
)


@@ -137,6 +137,12 @@ class Conversation(pydantic.BaseModel):
use_funcs: typing.Optional[list[tools_entities.LLMFunction]]
pipeline_uuid: str
"""流水线UUID。"""
bot_uuid: str
"""机器人UUID。"""
uuid: typing.Optional[str] = None
"""该对话的 uuid在创建时不会自动生成。而是当使用 Dify API 等由外部管理对话信息的服务时,用于绑定外部的会话。具体如何使用,取决于 Runner。"""


@@ -17,6 +17,7 @@ from ...api.http.service import model as model_service
from ...api.http.service import pipeline as pipeline_service
from ...api.http.service import bot as bot_service
from ...discover import engine as discover_engine
from ...storage import mgr as storagemgr
from ...utils import logcache
from .. import taskmgr
@@ -50,6 +51,10 @@ class BuildAppStage(stage.BootingStage):
log_cache = logcache.LogCache()
ap.log_cache = log_cache
storage_mgr_inst = storagemgr.StorageMgr(ap)
await storage_mgr_inst.initialize()
ap.storage_mgr = storage_mgr_inst
persistence_mgr_inst = persistencemgr.PersistenceManager(ap)
ap.persistence_mgr = persistence_mgr_inst
await persistence_mgr_inst.initialize()


@@ -66,13 +66,15 @@ class PersistenceManager:
# write default pipeline
result = await self.execute_async(sqlalchemy.select(pipeline.LegacyPipeline))
default_pipeline_uuid = None
if result.first() is None:
self.ap.logger.info('Creating default pipeline...')
pipeline_config = json.load(open('templates/default-pipeline-config.json', 'r', encoding='utf-8'))
default_pipeline_uuid = str(uuid.uuid4())
pipeline_data = {
'uuid': str(uuid.uuid4()),
'uuid': default_pipeline_uuid,
'for_version': self.ap.ver_mgr.get_current_version(),
'stages': pipeline_service.default_stage_order,
'is_default': True,
@@ -82,6 +84,7 @@ class PersistenceManager:
}
await self.execute_async(sqlalchemy.insert(pipeline.LegacyPipeline).values(pipeline_data))
# =================================
# run migrations


@@ -0,0 +1,41 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(2)
class DBMigrateCombineQuoteMsgConfig(migration.DBMigration):
"""引用消息合并配置"""
async def upgrade(self):
"""升级"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'misc' not in config['trigger']:
config['trigger']['misc'] = {}
if 'combine-quote-message' not in config['trigger']['misc']:
config['trigger']['misc']['combine-quote-message'] = False
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""降级"""
pass


@@ -0,0 +1,49 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(3)
class DBMigrateN8nConfig(migration.DBMigration):
"""N8n配置"""
async def upgrade(self):
"""升级"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'n8n-service-api' not in config['ai']:
config['ai']['n8n-service-api'] = {
'webhook-url': 'http://your-n8n-webhook-url',
'auth-type': 'none',
'basic-username': '',
'basic-password': '',
'jwt-secret': '',
'jwt-algorithm': 'HS256',
'header-name': '',
'header-value': '',
'timeout': 120,
'output-key': 'response',
}
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""降级"""
pass
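Both migrations above follow the same pattern: a numbered migration class whose upgrade() reads every LegacyPipeline row, fills in a missing config key with a default, and writes the row back stamped with the current version. Below is a hedged sketch of a hypothetical follow-up migration in that style; migration number 4 and the 'example-feature' key do not exist in this changeset.
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(4)
class DBMigrateExampleFeatureConfig(migration.DBMigration):
    """Example feature config (illustrative only)"""
    async def upgrade(self):
        pipelines = await self.ap.persistence_mgr.execute_async(
            sqlalchemy.select(persistence_pipeline.LegacyPipeline)
        )
        for pipeline in pipelines:
            serialized = self.ap.persistence_mgr.serialize_model(
                persistence_pipeline.LegacyPipeline, pipeline
            )
            config = serialized['config']
            # only add the key when it is missing, then write the row back
            config.setdefault('output', {}).setdefault('example-feature', False)
            await self.ap.persistence_mgr.execute_async(
                sqlalchemy.update(persistence_pipeline.LegacyPipeline)
                .where(persistence_pipeline.LegacyPipeline.uuid == serialized['uuid'])
                .values({
                    'config': config,
                    'for_version': self.ap.ver_mgr.get_current_version(),
                })
            )
    async def downgrade(self):
        pass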


@@ -66,6 +66,8 @@ class ContentFilterStage(stage.PipelineStage):
if query.pipeline_config['safety']['content-filter']['scope'] == 'output-msg':
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
if not message.strip():
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:
for filter in self.filter_chain:
if filter_entities.EnableStage.PRE in filter.enable_stages:


@@ -51,11 +51,10 @@ class Controller:
# find pipeline
# Here firstly find the bot, then find the pipeline, in case the bot adapter's config is not the latest one.
# Like aiocqhttp, once a client is connected, even the adapter was updated and restarted, the existing client connection will not be affected.
bot = await self.ap.platform_mgr.get_bot_by_uuid(selected_query.bot_uuid)
if bot:
pipeline = await self.ap.pipeline_mgr.get_pipeline_by_uuid(
bot.bot_entity.use_pipeline_uuid
)
pipeline_uuid = selected_query.pipeline_uuid
if pipeline_uuid:
pipeline = await self.ap.pipeline_mgr.get_pipeline_by_uuid(pipeline_uuid)
if pipeline:
await pipeline.run(selected_query)


@@ -20,9 +20,9 @@ class Text2ImageStrategy(strategy_model.LongTextStrategy):
pass
@functools.lru_cache(maxsize=16)
def get_font(self, query: core_entities.Query):
def get_font(self, font_path: str):
return ImageFont.truetype(
query.pipeline_config['output']['long-text-processing']['font-path'],
font_path,
32,
encoding='utf-8',
)
@@ -146,7 +146,9 @@ class Text2ImageStrategy(strategy_model.LongTextStrategy):
self.ap.logger.debug('lines: {}, text_width: {}'.format(lines, text_width))
for line in lines:
# 如果长了就分割
line_width = self.get_font(query).getlength(line)
line_width = self.get_font(query.pipeline_config['output']['long-text-processing']['font-path']).getlength(
line
)
self.ap.logger.debug('line_width: {}'.format(line_width))
if line_width < text_width:
final_lines.append(line)
@@ -167,7 +169,9 @@ class Text2ImageStrategy(strategy_model.LongTextStrategy):
final_lines.append(rest_text[:point])
rest_text = rest_text[point:]
line_width = self.text_render_font.getlength(rest_text)
line_width = self.get_font(
query.pipeline_config['output']['long-text-processing']['font-path']
).getlength(rest_text)
if line_width < text_width:
final_lines.append(rest_text)
break
@@ -187,7 +191,7 @@ class Text2ImageStrategy(strategy_model.LongTextStrategy):
(offset_x, offset_y + 35 * line_number),
final_line,
fill=(0, 0, 0),
font=self.text_render_font,
font=self.get_font(query.pipeline_config['output']['long-text-processing']['font-path']),
)
# 遍历此行,检查是否有emoji
idx_in_line = 0
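The caching change above works because functools.lru_cache keys on its arguments: a font-path string is hashable and stable, so each path maps to one cached font, whereas keying on the whole Query object (as before) makes a poor cache key. A minimal standalone sketch of the idea:
import functools
from PIL import ImageFont
@functools.lru_cache(maxsize=16)
def load_font(font_path: str) -> ImageFont.FreeTypeFont:
    # one FreeTypeFont instance per distinct path string
    return ImageFont.truetype(font_path, 32, encoding='utf-8')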


@@ -35,6 +35,7 @@ class QueryPool:
message_event: platform_events.MessageEvent,
message_chain: platform_message.MessageChain,
adapter: msadapter.MessagePlatformAdapter,
pipeline_uuid: typing.Optional[str] = None,
) -> entities.Query:
async with self.condition:
query = entities.Query(
@@ -48,6 +49,7 @@ class QueryPool:
resp_messages=[],
resp_message_chain=[],
adapter=adapter,
pipeline_uuid=pipeline_uuid,
)
self.queries.append(query)
self.query_id_counter += 1


@@ -45,6 +45,8 @@ class PreProcessor(stage.PipelineStage):
query,
session,
query.pipeline_config['ai']['local-agent']['prompt'],
query.pipeline_uuid,
query.bot_uuid,
)
conversation.use_llm_model = llm_model
@@ -58,7 +60,7 @@ class PreProcessor(stage.PipelineStage):
if selected_runner == 'local-agent':
query.use_funcs = (
conversation.use_funcs if query.use_llm_model.model_entity.abilities.__contains__('tool_call') else None
conversation.use_funcs if query.use_llm_model.model_entity.abilities.__contains__('func_call') else None
)
query.variables = {
@@ -81,6 +83,7 @@ class PreProcessor(stage.PipelineStage):
content_list = []
plain_text = ''
qoute_msg = query.pipeline_config['trigger'].get('misc', '').get('combine-quote-message')
for me in query.message_chain:
if isinstance(me, platform_message.Plain):
@@ -92,6 +95,16 @@ class PreProcessor(stage.PipelineStage):
):
if me.base64 is not None:
content_list.append(llm_entities.ContentElement.from_image_base64(me.base64))
elif isinstance(me, platform_message.Quote) and qoute_msg:
for msg in me.origin:
if isinstance(msg, platform_message.Plain):
content_list.append(llm_entities.ContentElement.from_text(msg.text))
elif isinstance(msg, platform_message.Image):
if selected_runner != 'local-agent' or query.use_llm_model.model_entity.abilities.__contains__(
'vision'
):
if msg.base64 is not None:
content_list.append(llm_entities.ContentElement.from_image_base64(msg.base64))
query.variables['user_message_text'] = plain_text


@@ -7,11 +7,11 @@ from ..core import app, entities as core_entities
from . import entities
preregistered_stages: dict[str, PipelineStage] = {}
preregistered_stages: dict[str, type[PipelineStage]] = {}
def stage_class(name: str):
def decorator(cls):
def stage_class(name: str) -> typing.Callable[[type[PipelineStage]], type[PipelineStage]]:
def decorator(cls: type[PipelineStage]) -> type[PipelineStage]:
preregistered_stages[name] = cls
return cls


@@ -8,6 +8,7 @@ import abc
from ..core import app
from .types import message as platform_message
from .types import events as platform_events
from .logger import EventLogger
class MessagePlatformAdapter(metaclass=abc.ABCMeta):
@@ -22,7 +23,9 @@ class MessagePlatformAdapter(metaclass=abc.ABCMeta):
ap: app.Application
def __init__(self, config: dict, ap: app.Application):
logger: EventLogger
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
"""初始化适配器
Args:
@@ -31,6 +34,7 @@ class MessagePlatformAdapter(metaclass=abc.ABCMeta):
"""
self.config = config
self.ap = ap
self.logger = logger
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
"""主动发送消息


@@ -5,17 +5,18 @@ import asyncio
import traceback
import sqlalchemy
# FriendMessage, Image, MessageChain, Plain
from . import adapter as msadapter
from ..core import app, entities as core_entities, taskmgr
from .types import events as platform_events
from .types import events as platform_events, message as platform_message
from ..discover import engine
from ..entity.persistence import bot as persistence_bot
from .logger import EventLogger
# 处理 3.4 移除了 YiriMirai 之后,插件的兼容性问题
from . import types as mirai
@@ -37,23 +38,37 @@ class RuntimeBot:
task_context: taskmgr.TaskContext
logger: EventLogger
def __init__(
self,
ap: app.Application,
bot_entity: persistence_bot.Bot,
adapter: msadapter.MessagePlatformAdapter,
logger: EventLogger,
):
self.ap = ap
self.bot_entity = bot_entity
self.enable = bot_entity.enable
self.adapter = adapter
self.task_context = taskmgr.TaskContext()
self.logger = logger
async def initialize(self):
async def on_friend_message(
event: platform_events.FriendMessage,
adapter: msadapter.MessagePlatformAdapter,
):
image_components = [
component for component in event.message_chain if isinstance(component, platform_message.Image)
]
await self.logger.info(
f'{event.message_chain}',
images=image_components,
message_session_id=f'person_{event.sender.id}',
)
await self.ap.query_pool.add_query(
bot_uuid=self.bot_entity.uuid,
launcher_type=core_entities.LauncherTypes.PERSON,
@@ -62,12 +77,23 @@ class RuntimeBot:
message_event=event,
message_chain=event.message_chain,
adapter=adapter,
pipeline_uuid=self.bot_entity.use_pipeline_uuid,
)
async def on_group_message(
event: platform_events.GroupMessage,
adapter: msadapter.MessagePlatformAdapter,
):
image_components = [
component for component in event.message_chain if isinstance(component, platform_message.Image)
]
await self.logger.info(
f'{event.message_chain}',
images=image_components,
message_session_id=f'group_{event.group.id}',
)
await self.ap.query_pool.add_query(
bot_uuid=self.bot_entity.uuid,
launcher_type=core_entities.LauncherTypes.GROUP,
@@ -76,6 +102,7 @@ class RuntimeBot:
message_event=event,
message_chain=event.message_chain,
adapter=adapter,
pipeline_uuid=self.bot_entity.use_pipeline_uuid,
)
self.adapter.register_listener(platform_events.FriendMessage, on_friend_message)
@@ -92,10 +119,7 @@ class RuntimeBot:
self.task_context.set_current_action('Exited.')
return
self.task_context.set_current_action('Exited with error.')
self.task_context.log(f'平台适配器运行出错: {e}')
self.task_context.log(f'Traceback: {traceback.format_exc()}')
self.ap.logger.error(f'平台适配器运行出错: {e}')
self.ap.logger.debug(f'Traceback: {traceback.format_exc()}')
await self.logger.error(f'平台适配器运行出错:\n{e}\n{traceback.format_exc()}')
self.task_wrapper = self.ap.task_mgr.create_task(
exception_wrapper(),
@@ -121,6 +145,8 @@ class PlatformManager:
bots: list[RuntimeBot]
webchat_proxy_bot: RuntimeBot
adapter_components: list[engine.Component]
adapter_dict: dict[str, type[msadapter.MessagePlatformAdapter]]
@@ -138,6 +164,31 @@ class PlatformManager:
adapter_dict[component.metadata.name] = component.get_python_component_class()
self.adapter_dict = adapter_dict
webchat_adapter_class = self.adapter_dict['webchat']
# initialize webchat adapter
webchat_logger = EventLogger(name='webchat-adapter', ap=self.ap)
webchat_adapter_inst = webchat_adapter_class(
{},
self.ap,
webchat_logger,
)
self.webchat_proxy_bot = RuntimeBot(
ap=self.ap,
bot_entity=persistence_bot.Bot(
uuid='webchat-proxy-bot',
name='WebChat',
description='',
adapter='webchat',
adapter_config={},
enable=True,
),
adapter=webchat_adapter_inst,
logger=webchat_logger,
)
await self.webchat_proxy_bot.initialize()
await self.load_bots_from_db()
def get_running_adapters(self) -> list[msadapter.MessagePlatformAdapter]:
@@ -166,9 +217,15 @@ class PlatformManager:
elif isinstance(bot_entity, dict):
bot_entity = persistence_bot.Bot(**bot_entity)
adapter_inst = self.adapter_dict[bot_entity.adapter](bot_entity.adapter_config, self.ap)
logger = EventLogger(name=f'platform-adapter-{bot_entity.name}', ap=self.ap)
runtime_bot = RuntimeBot(ap=self.ap, bot_entity=bot_entity, adapter=adapter_inst)
adapter_inst = self.adapter_dict[bot_entity.adapter](
bot_entity.adapter_config,
self.ap,
logger,
)
runtime_bot = RuntimeBot(ap=self.ap, bot_entity=bot_entity, adapter=adapter_inst, logger=logger)
await runtime_bot.initialize()
@@ -191,7 +248,9 @@ class PlatformManager:
return
def get_available_adapters_info(self) -> list[dict]:
return [component.to_plain_dict() for component in self.adapter_components]
return [
component.to_plain_dict() for component in self.adapter_components if component.metadata.name != 'webchat'
]
def get_available_adapter_info_by_name(self, name: str) -> dict | None:
for component in self.adapter_components:
@@ -244,6 +303,8 @@ class PlatformManager:
async def run(self):
# This method will only be called when the application launching
await self.webchat_proxy_bot.run()
for bot in self.bots:
if bot.enable:
await bot.run()

pkg/platform/logger.py

@@ -0,0 +1,233 @@
from __future__ import annotations
import typing
import mimetypes
import time
import enum
import pydantic
import traceback
import uuid
from ..core import app
from .types import message as platform_message
class EventLogLevel(enum.Enum):
"""日志级别"""
DEBUG = 'debug'
INFO = 'info'
WARNING = 'warning'
ERROR = 'error'
class EventLog(pydantic.BaseModel):
seq_id: int
"""日志序号"""
timestamp: int
"""日志时间戳"""
level: EventLogLevel
"""日志级别"""
text: str
"""日志文本"""
images: typing.Optional[list[str]] = None
"""日志图片 URL 列表,需要通过 /api/v1/image/{uuid} 获取图片"""
message_session_id: typing.Optional[str] = None
"""消息会话ID仅收发消息事件有值"""
def to_json(self) -> dict:
return {
'seq_id': self.seq_id,
'timestamp': self.timestamp,
'level': self.level.value,
'text': self.text,
'images': self.images,
'message_session_id': self.message_session_id,
}
MAX_LOG_COUNT = 200
DELETE_COUNT_PER_TIME = 50
class EventLogger:
"""used for logging bot events"""
ap: app.Application
seq_id_inc: int
logs: list[EventLog]
def __init__(
self,
name: str,
ap: app.Application,
):
self.name = name
self.ap = ap
self.logs = []
self.seq_id_inc = 0
async def get_logs(self, from_seq_id: int, max_count: int) -> typing.Tuple[list[EventLog], int]:
"""
获取日志,从 from_seq_id 开始获取 max_count 条历史日志
Args:
from_seq_id: 起始序号,-1 表示末尾
max_count: 最大数量
Returns:
Tuple[list[EventLog], int]: 日志列表,日志总数
"""
if len(self.logs) == 0:
return [], 0
if from_seq_id <= -1:
from_seq_id = self.logs[-1].seq_id
min_seq_id_in_logs = self.logs[0].seq_id
max_seq_id_in_logs = self.logs[-1].seq_id
if from_seq_id < min_seq_id_in_logs: # 需要的整个范围都已经被删除
return [], len(self.logs)
if (
from_seq_id > max_seq_id_in_logs and from_seq_id - max_count > max_seq_id_in_logs
): # 需要的整个范围都还没生成
return [], len(self.logs)
end_index = 1
for i, log in enumerate(self.logs):
if log.seq_id >= from_seq_id:
end_index = i + 1
break
start_index = max(0, end_index - max_count)
if max_count > 0:
return self.logs[start_index:end_index], len(self.logs)
else:
return [], len(self.logs)
async def _truncate_logs(self):
if len(self.logs) > MAX_LOG_COUNT:
for i in range(DELETE_COUNT_PER_TIME):
for image_key in self.logs[i].images:
await self.ap.storage_mgr.storage_provider.delete(image_key)
self.logs = self.logs[DELETE_COUNT_PER_TIME:]
async def _add_log(
self,
level: EventLogLevel,
text: str,
images: typing.Optional[list[platform_message.Image]] = None,
message_session_id: typing.Optional[str] = None,
no_throw: bool = True,
):
try:
image_keys = []
if images is None:
images = []
if message_session_id is None:
message_session_id = ''
if not isinstance(message_session_id, str):
message_session_id = str(message_session_id)
for img in images:
img_bytes, mime_type = await img.get_bytes()
extension = mimetypes.guess_extension(mime_type)
if extension is None:
extension = '.jpg'
image_key = f'{message_session_id}-{uuid.uuid4()}{extension}'
await self.ap.storage_mgr.storage_provider.save(image_key, img_bytes)
image_keys.append(image_key)
self.logs.append(
EventLog(
seq_id=self.seq_id_inc,
timestamp=int(time.time()),
level=level,
text=text,
images=image_keys,
message_session_id=message_session_id,
)
)
self.seq_id_inc += 1
await self._truncate_logs()
except Exception as e:
if not no_throw:
raise e
else:
traceback.print_exc()
async def info(
self,
text: str,
images: typing.Optional[list[platform_message.Image]] = None,
message_session_id: typing.Optional[str] = None,
no_throw: bool = True,
):
await self._add_log(
level=EventLogLevel.INFO,
text=text,
images=images,
message_session_id=message_session_id,
no_throw=no_throw,
)
async def debug(
self,
text: str,
images: typing.Optional[list[platform_message.Image]] = None,
message_session_id: typing.Optional[str] = None,
no_throw: bool = True,
):
await self._add_log(
level=EventLogLevel.DEBUG,
text=text,
images=images,
message_session_id=message_session_id,
no_throw=no_throw,
)
async def warning(
self,
text: str,
images: typing.Optional[list[platform_message.Image]] = None,
message_session_id: typing.Optional[str] = None,
no_throw: bool = True,
):
await self._add_log(
level=EventLogLevel.WARNING,
text=text,
images=images,
message_session_id=message_session_id,
no_throw=no_throw,
)
async def error(
self,
text: str,
images: typing.Optional[list[platform_message.Image]] = None,
message_session_id: typing.Optional[str] = None,
no_throw: bool = True,
):
await self._add_log(
level=EventLogLevel.ERROR,
text=text,
images=images,
message_session_id=message_session_id,
no_throw=no_throw,
)
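A hypothetical usage sketch of the new EventLogger; 'ap' must be an initialized Application whose storage_mgr is set up, since attached images are persisted through the storage provider.
from pkg.platform.logger import EventLogger
async def demo_event_logger(ap):
    logger = EventLogger(name='demo-adapter', ap=ap)
    await logger.info('bot connected')
    await logger.error('failed to deliver message', message_session_id='person_12345')
    # from_seq_id=-1 starts from the newest entry; at most 10 logs are returned
    logs, total = await logger.get_logs(-1, 10)
    return [log.to_json() for log in logs], total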


@@ -12,9 +12,11 @@ from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...utils import image
from ..logger import EventLogger
class AiocqhttpMessageConverter(adapter.MessageConverter):
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain,
@@ -59,6 +61,9 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
elif type(msg) is platform_message.Forward:
for node in msg.node_list:
msg_list.extend((await AiocqhttpMessageConverter.yiri2target(node.message_chain))[0])
elif isinstance(msg, platform_message.File):
msg_list.append({"type":"file", "data":{'file': msg.url, "name": msg.name}})
else:
msg_list.append(aiocqhttp.MessageSegment.text(str(msg)))
@@ -66,14 +71,41 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
return msg_list, msg_id, msg_time
@staticmethod
async def target2yiri(message: str, message_id: int = -1):
async def target2yiri(message: str, message_id: int = -1,bot=None):
# print(message)
message = aiocqhttp.Message(message)
async def process_message_data(msg_data, reply_list):
if msg_data["type"] == "image":
image_base64, image_format = await image.qq_image_url_to_base64(msg_data["data"]['url'])
reply_list.append(
platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}'))
elif msg_data["type"] == "text":
reply_list.append(platform_message.Plain(text=msg_data["data"]["text"]))
elif msg_data["type"] == "forward": # 这里来应该传入转发消息组暂时传入qoute
for forward_msg_datas in msg_data["data"]["content"]:
for forward_msg_data in forward_msg_datas["message"]:
await process_message_data(forward_msg_data, reply_list)
elif msg_data["type"] == "at":
if msg_data["data"]['qq'] == 'all':
reply_list.append(platform_message.AtAll())
else:
reply_list.append(
platform_message.At(
target=msg_data["data"]['qq'],
)
)
yiri_msg_list = []
yiri_msg_list.append(platform_message.Source(id=message_id, time=datetime.datetime.now()))
for msg in message:
reply_list = []
if msg.type == 'at':
if msg.data['qq'] == 'all':
yiri_msg_list.append(platform_message.AtAll())
@@ -88,20 +120,60 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
elif msg.type == 'image':
image_base64, image_format = await image.qq_image_url_to_base64(msg.data['url'])
yiri_msg_list.append(platform_message.Image(base64=f'data:image/{image_format};base64,{image_base64}'))
elif msg.type == 'forward':
# 暂时不太合理
# msg_datas = await bot.get_msg(message_id=message_id)
# print(msg_datas)
# for msg_data in msg_datas["message"]:
# await process_message_data(msg_data, yiri_msg_list)
pass
elif msg.type == 'reply': # 此处处理引用消息传入Qoute
msg_datas = await bot.get_msg(message_id=msg.data["id"])
for msg_data in msg_datas["message"]:
await process_message_data(msg_data, reply_list)
reply_msg = platform_message.Quote(message_id=msg.data["id"],sender_id=msg_datas["user_id"],origin=reply_list)
yiri_msg_list.append(reply_msg)
elif msg.type == 'file':
# file_name = msg.data['file']
file_id = msg.data['file_id']
file_data = await bot.get_file(file_id=file_id)
file_name = file_data.get('file_name')
file_path = file_data.get('file')
file_url = file_data.get('file_url')
file_size = file_data.get('file_size')
yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size))
chain = platform_message.MessageChain(yiri_msg_list)
return chain
class AiocqhttpEventConverter(adapter.EventConverter):
@staticmethod
async def yiri2target(event: platform_events.MessageEvent, bot_account_id: int):
return event.source_platform_object
@staticmethod
async def target2yiri(event: aiocqhttp.Event):
yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id)
async def target2yiri(event: aiocqhttp.Event,bot=None):
yiri_chain = await AiocqhttpMessageConverter.target2yiri(event.message, event.message_id,bot)
if event.message_type == 'group':
permission = 'MEMBER'
@@ -156,8 +228,11 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
ap: app.Application
def __init__(self, config: dict, ap: app.Application):
on_websocket_connection_event_cache: typing.List[typing.Callable[[aiocqhttp.Event], None]] = []
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.logger = logger
async def shutdown_trigger_placeholder():
while True:
@@ -166,6 +241,7 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
self.config['shutdown_trigger'] = shutdown_trigger_placeholder
self.ap = ap
self.on_websocket_connection_event_cache = []
if 'access-token' in config:
self.bot = aiocqhttp.CQHttp(access_token=config['access-token'])
@@ -177,6 +253,7 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
aiocq_msg = (await AiocqhttpMessageConverter.yiri2target(message))[0]
if target_type == 'group':
await self.bot.send_group_msg(group_id=int(target_id), message=aiocq_msg)
elif target_type == 'person':
await self.bot.send_private_msg(user_id=int(target_id), message=aiocq_msg)
@@ -205,8 +282,9 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
async def on_message(event: aiocqhttp.Event):
self.bot_account_id = event.self_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
return await callback(await self.event_converter.target2yiri(event,self.bot), self)
except Exception:
await self.logger.error(f'Error in on_message: {traceback.format_exc()}')
traceback.print_exc()
if event_type == platform_events.GroupMessage:
@@ -214,6 +292,16 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
elif event_type == platform_events.FriendMessage:
self.bot.on_message('private')(on_message)
async def on_websocket_connection(event: aiocqhttp.Event):
for event in self.on_websocket_connection_event_cache:
if event.self_id == event.self_id and event.time == event.time:
return
self.on_websocket_connection_event_cache.append(event)
await self.logger.info(f'WebSocket connection established, bot id: {event.self_id}')
self.bot.on_websocket_connection(on_websocket_connection)
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],


@@ -9,14 +9,20 @@ from ..types import events as platform_events
from ..types import entities as platform_entities
from libs.dingtalk_api.api import DingTalkClient
import datetime
from ..logger import EventLogger
class DingTalkMessageConverter(adapter.MessageConverter):
@staticmethod
async def yiri2target(message_chain: platform_message.MessageChain):
content = ''
at = False
for msg in message_chain:
if type(msg) is platform_message.At:
at = True
if type(msg) is platform_message.Plain:
return msg.text
content += msg.text
return content,at
@staticmethod
async def target2yiri(event: DingTalkEvent, bot_name: str):
@@ -94,9 +100,10 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
event_converter: DingTalkEventConverter = DingTalkEventConverter()
config: dict
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
required_keys = [
'client_id',
'client_secret',
@@ -115,6 +122,7 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
robot_name=config['robot_name'],
robot_code=config['robot_code'],
markdown_card=config['markdown_card'],
logger=self.logger,
)
async def reply_message(
@@ -128,8 +136,8 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
)
incoming_message = event.incoming_message
content = await DingTalkMessageConverter.yiri2target(message)
await self.bot.send_message(content, incoming_message)
content,at = await DingTalkMessageConverter.yiri2target(message)
await self.bot.send_message(content, incoming_message,at)
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
content = await DingTalkMessageConverter.yiri2target(message)
@@ -149,8 +157,8 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
await self.event_converter.target2yiri(event, self.config['robot_name']),
self,
)
except Exception:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in dingtalk callback: {traceback.format_exc()}")
if event_type == platform_events.FriendMessage:
self.bot.on_message('FriendMessage')(on_message)


@@ -16,6 +16,7 @@ from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ..logger import EventLogger
class DiscordMessageConverter(adapter.MessageConverter):
@@ -170,9 +171,10 @@ class DiscordAdapter(adapter.MessagePlatformAdapter):
typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
] = {}
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.bot_account_id = self.config['client_id']


@@ -20,6 +20,7 @@ from ...utils import image
import xml.etree.ElementTree as ET
from typing import Optional, Tuple
from functools import partial
from ..logger import EventLogger
class GewechatMessageConverter(adapter.MessageConverter):
@@ -371,7 +372,7 @@ class GewechatMessageConverter(adapter.MessageConverter):
quote_id = appmsg_data.find('.//refermsg').findtext('.//chatusr') # 引用消息的原发送者
ats_bot = ats_bot or (quote_id == tousername)
except Exception as e:
print(f'_ats_bot got except: {e}')
print(f'Error in gewechat _ats_bot: {e}')
finally:
return ats_bot
@@ -477,9 +478,10 @@ class GeWeChatAdapter(adapter.MessagePlatformAdapter):
typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
] = {}
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.quart_app = quart.Quart(__name__)
self.message_converter = GewechatMessageConverter(config)
@@ -503,7 +505,7 @@ class GeWeChatAdapter(adapter.MessagePlatformAdapter):
try:
event = await self.event_converter.target2yiri(data.copy(), self.bot_account_id)
except Exception:
traceback.print_exc()
await self.logger.error(f'Error in gewechat callback: {traceback.format_exc()}')
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)


@@ -23,6 +23,7 @@ from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ..logger import EventLogger
class AESCipher(object):
@@ -332,16 +333,18 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
listeners: typing.Dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
] = {}
]
config: dict
quart_app: quart.Quart
ap: app.Application
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.quart_app = quart.Quart(__name__)
self.listeners = {}
@self.quart_app.route('/lark/callback', methods=['POST'])
async def lark_callback():
@@ -375,15 +378,15 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
if 'im.message.receive_v1' == type:
try:
event = await self.event_converter.target2yiri(p2v1, self.api_client)
except Exception:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in lark callback: {traceback.format_exc()}")
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
return {'code': 200, 'message': 'ok'}
except Exception:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in lark callback: {traceback.format_exc()}")
return {'code': 500, 'message': 'error'}
async def on_message(event: lark_oapi.im.v1.P2ImMessageReceiveV1):


@@ -14,6 +14,7 @@ from ...pipeline.longtext.strategies import forward
from ...platform.types import message as platform_message
from ...platform.types import entities as platform_entities
from ...platform.types import events as platform_events
from ..logger import EventLogger
class NakuruProjectMessageConverter(adapter_model.MessageConverter):
@@ -71,9 +72,8 @@ class NakuruProjectMessageConverter(adapter_model.MessageConverter):
content=content_list,
)
nakuru_forward_node_list.append(nakuru_forward_node)
except Exception:
except Exception as e:
import traceback
traceback.print_exc()
nakuru_msg_list.append(nakuru_forward_node_list)
@@ -178,12 +178,13 @@ class NakuruAdapter(adapter_model.MessagePlatformAdapter):
cfg: dict
def __init__(self, cfg: dict, ap):
def __init__(self, cfg: dict, ap, logger: EventLogger):
"""初始化nakuru-project的对象"""
cfg['port'] = cfg['ws_port']
del cfg['ws_port']
self.cfg = cfg
self.ap = ap
self.logger = logger
self.listener_list = []
self.bot = nakuru.CQHTTP(**self.cfg)
@@ -275,7 +276,7 @@ class NakuruAdapter(adapter_model.MessagePlatformAdapter):
# 注册监听器
self.bot.receiver(source_cls.__name__)(listener_wrapper)
except Exception as e:
traceback.print_exc()
self.logger.error(f"Error in nakuru register_listener: {traceback.format_exc()}")
raise e
def unregister_listener(

View File

@@ -13,6 +13,7 @@ from .. import adapter
from ...core import app
from ..types import entities as platform_entities
from ...command.errors import ParamNotEnoughError
from ..logger import EventLogger
class OAMessageConverter(adapter.MessageConverter):
@@ -63,10 +64,10 @@ class OfficialAccountAdapter(adapter.MessagePlatformAdapter):
event_converter: OAEventConverter = OAEventConverter()
config: dict
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
required_keys = [
'token',
@@ -85,6 +86,7 @@ class OfficialAccountAdapter(adapter.MessagePlatformAdapter):
EncodingAESKey=config['EncodingAESKey'],
Appsecret=config['AppSecret'],
AppID=config['AppID'],
logger=self.logger,
)
elif self.config['Mode'] == 'passive':
self.bot = OAClientForLongerResponse(
@@ -93,6 +95,7 @@ class OfficialAccountAdapter(adapter.MessagePlatformAdapter):
Appsecret=config['AppSecret'],
AppID=config['AppID'],
LoadingMessage=config['LoadingMessage'],
logger=self.logger,
)
else:
raise KeyError('请设置微信公众号通信模式')
@@ -122,8 +125,8 @@ class OfficialAccountAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in officialaccount callback: {traceback.format_exc()}")
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)


@@ -57,6 +57,9 @@ spec:
label:
en_US: Host
zh_Hans: 监听主机
description:
en_US: The host that Official Account listens on for Webhook connections.
zh_Hans: 微信公众号监听的主机,除非你知道自己在做什么,否则请写 0.0.0.0
type: string
required: true
default: 0.0.0.0


@@ -17,6 +17,7 @@ from ...config import manager as cfg_mgr
from ...platform.types import entities as platform_entities
from ...platform.types import events as platform_events
from ...platform.types import message as platform_message
from ..logger import EventLogger
class OfficialGroupMessage(platform_events.GroupMessage):
@@ -357,10 +358,11 @@ class OfficialAdapter(adapter_model.MessagePlatformAdapter):
group_msg_seq = None
c2c_msg_seq = None
def __init__(self, cfg: dict, ap: app.Application):
def __init__(self, cfg: dict, ap: app.Application, logger: EventLogger):
"""初始化适配器"""
self.cfg = cfg
self.ap = ap
self.logger = logger
self.group_msg_seq = 1
self.c2c_msg_seq = 1
@@ -499,7 +501,7 @@ class OfficialAdapter(adapter_model.MessagePlatformAdapter):
for event_handler in event_handler_mapping[event_type]:
setattr(self.bot, event_handler, wrapper)
except Exception as e:
traceback.print_exc()
self.logger.error(f"Error in qqbotpy callback: {traceback.format_exc()}")
raise e
def unregister_listener(

View File

@@ -14,6 +14,7 @@ from ...command.errors import ParamNotEnoughError
from libs.qq_official_api.api import QQOfficialClient
from libs.qq_official_api.qqofficialevent import QQOfficialEvent
from ...utils import image
from ..logger import EventLogger
class QQOfficialMessageConverter(adapter.MessageConverter):
@@ -139,9 +140,10 @@ class QQOfficialAdapter(adapter.MessagePlatformAdapter):
message_converter: QQOfficialMessageConverter = QQOfficialMessageConverter()
event_converter: QQOfficialEventConverter = QQOfficialEventConverter()
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
required_keys = [
'appid',
@@ -155,6 +157,7 @@ class QQOfficialAdapter(adapter.MessagePlatformAdapter):
app_id=config['appid'],
secret=config['secret'],
token=config['token'],
logger=self.logger
)
async def reply_message(
@@ -221,8 +224,8 @@ class QQOfficialAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = 'justbot'
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in qqofficial callback: {traceback.format_exc()}")
if event_type == platform_events.FriendMessage:
self.bot.on_message('DIRECT_MESSAGE_CREATE')(on_message)


@@ -14,6 +14,7 @@ from .. import adapter
from ..types import entities as platform_entities
from ...command.errors import ParamNotEnoughError
from ...utils import image
from ..logger import EventLogger
class SlackMessageConverter(adapter.MessageConverter):
@@ -91,9 +92,10 @@ class SlackAdapter(adapter.MessagePlatformAdapter):
event_converter: SlackEventConverter = SlackEventConverter()
config: dict
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
required_keys = [
'bot_token',
'signing_secret',
@@ -102,7 +104,7 @@ class SlackAdapter(adapter.MessagePlatformAdapter):
if missing_keys:
raise ParamNotEnoughError('Slack机器人缺少相关配置项请查看文档或联系管理员')
self.bot = SlackClient(bot_token=self.config['bot_token'], signing_secret=self.config['signing_secret'])
self.bot = SlackClient(bot_token=self.config['bot_token'], signing_secret=self.config['signing_secret'], logger=self.logger)
async def reply_message(
self,
@@ -137,8 +139,8 @@ class SlackAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = 'SlackBot'
try:
return await callback(await self.event_converter.target2yiri(event, self.bot), self)
except:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in slack callback: {traceback.format_exc()}")
if event_type == platform_events.FriendMessage:
self.bot.on_message('im')(on_message)


@@ -17,6 +17,7 @@ from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ..logger import EventLogger
class TelegramMessageConverter(adapter.MessageConverter):
@@ -147,9 +148,10 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
] = {}
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
async def telegram_callback(update: Update, context: ContextTypes.DEFAULT_TYPE):
if update.message.from_user.is_bot:
@@ -158,8 +160,8 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
try:
lb_event = await self.event_converter.target2yiri(update, self.bot, self.bot_account_id)
await self.listeners[type(lb_event)](lb_event, self)
except Exception:
print(traceback.format_exc())
except Exception as e:
await self.logger.error(f"Error in telegram callback: {traceback.format_exc()}")
self.application = ApplicationBuilder().token(self.config['token']).build()
self.bot = self.application.bot

View File

@@ -0,0 +1,209 @@
import asyncio
import logging
import typing
from datetime import datetime
from pydantic import BaseModel
from .. import adapter as msadapter
from ..types import events as platform_events, message as platform_message, entities as platform_entities
from ...core import app
from ..logger import EventLogger
logger = logging.getLogger(__name__)
class WebChatMessage(BaseModel):
id: int
role: str
content: str
message_chain: list[dict]
timestamp: str
class WebChatSession:
id: str
message_lists: dict[str, list[WebChatMessage]] = {}
resp_waiters: dict[int, asyncio.Future[WebChatMessage]]
def __init__(self, id: str):
self.id = id
self.message_lists = {}
self.resp_waiters = {}
def get_message_list(self, pipeline_uuid: str) -> list[WebChatMessage]:
if pipeline_uuid not in self.message_lists:
self.message_lists[pipeline_uuid] = []
return self.message_lists[pipeline_uuid]
class WebChatAdapter(msadapter.MessagePlatformAdapter):
"""WebChat调试适配器用于流水线调试"""
webchat_person_session: WebChatSession
webchat_group_session: WebChatSession
listeners: typing.Dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, msadapter.MessagePlatformAdapter], None],
] = {}
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.ap = ap
self.logger = logger
self.config = config
self.webchat_person_session = WebChatSession(id='webchatperson')
self.webchat_group_session = WebChatSession(id='webchatgroup')
self.bot_account_id = 'webchatbot'
async def send_message(
self,
target_type: str,
target_id: str,
message: platform_message.MessageChain,
) -> dict:
"""发送消息到调试会话"""
session_key = target_id
if session_key not in self.debug_messages:
self.debug_messages[session_key] = []
message_data = {
'id': len(self.debug_messages[session_key]) + 1,
'type': 'bot',
'content': str(message),
'timestamp': datetime.now().isoformat(),
'message_chain': [component.__dict__ for component in message],
}
self.debug_messages[session_key].append(message_data)
await self.logger.info(f'Send message to {session_key}: {message}')
return message_data
async def reply_message(
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False,
) -> dict:
"""回复消息"""
message_data = WebChatMessage(
id=-1,
role='assistant',
content=str(message),
message_chain=[component.__dict__ for component in message],
timestamp=datetime.now().isoformat(),
)
# notify waiter
if isinstance(message_source, platform_events.FriendMessage):
self.webchat_person_session.resp_waiters[message_source.message_chain.message_id].set_result(message_data)
elif isinstance(message_source, platform_events.GroupMessage):
self.webchat_group_session.resp_waiters[message_source.message_chain.message_id].set_result(message_data)
return message_data.model_dump()
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
func: typing.Callable[[platform_events.Event, msadapter.MessagePlatformAdapter], typing.Awaitable[None]],
):
"""注册事件监听器"""
self.listeners[event_type] = func
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
func: typing.Callable[[platform_events.Event, msadapter.MessagePlatformAdapter], typing.Awaitable[None]],
):
"""取消注册事件监听器"""
del self.listeners[event_type]
async def run_async(self):
"""运行适配器"""
await self.logger.info('WebChat调试适配器已启动')
try:
while True:
await asyncio.sleep(1)
except asyncio.CancelledError:
await self.logger.info('WebChat调试适配器已停止')
raise
async def kill(self):
"""停止适配器"""
await self.logger.info('WebChat调试适配器正在停止')
async def send_webchat_message(
self, pipeline_uuid: str, session_type: str, message_chain_obj: typing.List[dict]
) -> dict:
"""发送调试消息到流水线"""
if session_type == 'person':
use_session = self.webchat_person_session
else:
use_session = self.webchat_group_session
message_chain = platform_message.MessageChain.parse_obj(message_chain_obj)
message_id = len(use_session.get_message_list(pipeline_uuid)) + 1
use_session.get_message_list(pipeline_uuid).append(
WebChatMessage(
id=message_id,
role='user',
content=str(message_chain),
message_chain=message_chain_obj,
timestamp=datetime.now().isoformat(),
)
)
message_chain.insert(0, platform_message.Source(id=message_id, time=datetime.now().timestamp()))
if session_type == 'person':
sender = platform_entities.Friend(id='webchatperson', nickname='User')
event = platform_events.FriendMessage(
sender=sender, message_chain=message_chain, time=datetime.now().timestamp()
)
else:
group = platform_entities.Group(
id='webchatgroup', name='Group', permission=platform_entities.Permission.Member
)
sender = platform_entities.GroupMember(
id='webchatperson',
member_name='User',
group=group,
permission=platform_entities.Permission.Member,
)
event = platform_events.GroupMessage(
sender=sender, message_chain=message_chain, time=datetime.now().timestamp()
)
self.ap.platform_mgr.webchat_proxy_bot.bot_entity.use_pipeline_uuid = pipeline_uuid
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
# set waiter
waiter = asyncio.Future[WebChatMessage]()
use_session.resp_waiters[message_id] = waiter
waiter.add_done_callback(lambda future: use_session.resp_waiters.pop(message_id))
resp_message = await waiter
resp_message.id = len(use_session.get_message_list(pipeline_uuid)) + 1
use_session.get_message_list(pipeline_uuid).append(resp_message)
return resp_message.model_dump()
def get_webchat_messages(self, pipeline_uuid: str, session_type: str) -> list[dict]:
"""获取调试消息历史"""
if session_type == 'person':
return [message.model_dump() for message in self.webchat_person_session.get_message_list(pipeline_uuid)]
else:
return [message.model_dump() for message in self.webchat_group_session.get_message_list(pipeline_uuid)]
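send_webchat_message above hands the injected message to the pipeline, then awaits an asyncio.Future keyed by the message id; reply_message resolves that Future when the pipeline answers. A self-contained sketch of the same request/reply pairing, with hypothetical names (ReplyBroker, submit, deliver) that are not part of the adapter:

import asyncio

class ReplyBroker:
    """Pair each submitted message id with the reply produced later."""

    def __init__(self):
        self.waiters: dict[int, asyncio.Future] = {}

    async def submit(self, message_id: int) -> str:
        waiter = asyncio.get_running_loop().create_future()
        self.waiters[message_id] = waiter
        # Drop the waiter once it is resolved, as the adapter does.
        waiter.add_done_callback(lambda _: self.waiters.pop(message_id, None))
        return await waiter

    def deliver(self, message_id: int, reply: str):
        if message_id in self.waiters:
            self.waiters[message_id].set_result(reply)

async def main():
    broker = ReplyBroker()
    pending = asyncio.create_task(broker.submit(1))
    await asyncio.sleep(0)                 # let submit() register its waiter
    broker.deliver(1, 'hello from the pipeline')
    print(await pending)                   # hello from the pipeline

asyncio.run(main())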

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: MessagePlatformAdapter
metadata:
name: webchat
label:
en_US: "WebChat Debug"
zh_Hans: "网页聊天调试"
description:
en_US: "WebChat adapter for pipeline debugging"
zh_Hans: "用于流水线调试的网页聊天适配器"
icon: ""
spec: {}
execution:
python:
path: "webchat.py"
attr: "WebChatAdapter"

View File

@@ -30,6 +30,7 @@ from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...utils import image
from ..logger import EventLogger
import xml.etree.ElementTree as ET
from typing import Optional, List, Tuple
from functools import partial
@@ -234,6 +235,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
'57': self._handler_compound_quote,
'5': self._handler_compound_link,
'6': self._handler_compound_file,
'74': self._handler_compound_file,
'33': self._handler_compound_mini_program,
'36': self._handler_compound_mini_program,
'2000': partial(self._handler_compound_unsupported, text="[转账消息]"),
@@ -319,10 +321,41 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
xml_data: ET.Element
) -> platform_message.MessageChain:
"""处理文件消息 (data_type=6)"""
xml_data_str = ET.tostring(xml_data, encoding='unicode')
return platform_message.MessageChain([
platform_message.WeChatForwardFile(xml_data=xml_data_str)
])
file_data = xml_data.find('.//appmsg')
if file_data.findtext('.//type', "") == "74":
return None
else:
xml_data_str = ET.tostring(xml_data, encoding='unicode')
# pull the file name and md5 from appmsg, then the CDN aeskey and url from appattach
file_name = file_data.find('title').text
file_id = file_data.find('md5').text
if file_data is not None:
aeskey = xml_data.findtext('.//appattach/aeskey')
cdnthumburl = xml_data.findtext('.//appattach/cdnattachurl')
file_data = self.bot.cdn_download(aeskey=aeskey, file_type=5, file_url=cdnthumburl)
file_base64 = file_data["Data"]['FileData']
file_size = file_data["Data"]['TotalSize']
return platform_message.MessageChain([
platform_message.WeChatFile(file_id=file_id, file_name=file_name, file_size=file_size,
file_base64=file_base64),
platform_message.WeChatForwardFile(xml_data=xml_data_str)
])
async def _handler_compound_link(
self,
@@ -533,9 +566,10 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
] = {}
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.quart_app = quart.Quart(__name__)
self.message_converter = WeChatPadMessageConverter(config)
@@ -550,7 +584,7 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
try:
event = await self.event_converter.target2yiri(data.copy(), self.bot_account_id)
except Exception as e:
traceback.print_exc()
await self.logger.error(f"Error in wechatpad callback: {traceback.format_exc()}")
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
@@ -694,7 +728,8 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
self.bot = WeChatPadClient(
self.config['wechatpad_url'],
self.config["token"]
self.config["token"],
logger=self.logger
)
self.ap.logger.info(self.config["token"])
thread_1 = threading.Event()
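The new data_type 6/74 handler above reads the file name and md5 out of the appmsg node and the CDN key and URL out of appattach before calling the pad client's cdn_download. A small standalone sketch of just the XML extraction with the standard library, over a made-up sample payload (real messages carry many more fields):

import xml.etree.ElementTree as ET

sample = '''
<msg>
  <appmsg>
    <title>report.pdf</title>
    <md5>0123456789abcdef</md5>
    <appattach>
      <aeskey>deadbeef</aeskey>
      <cdnattachurl>3057020100044b3049</cdnattachurl>
    </appattach>
  </appmsg>
</msg>
'''

root = ET.fromstring(sample)
appmsg = root.find('.//appmsg')
info = {
    'file_name': appmsg.findtext('title'),
    'file_id': appmsg.findtext('md5'),
    'aeskey': root.findtext('.//appattach/aeskey'),
    'cdnattachurl': root.findtext('.//appattach/cdnattachurl'),
}
print(info)   # all four fields populated from the sample above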

View File

@@ -14,6 +14,7 @@ from ...core import app
from ..types import entities as platform_entities
from ...command.errors import ParamNotEnoughError
from ...utils import image
from ..logger import EventLogger
class WecomMessageConverter(adapter.MessageConverter):
@@ -134,10 +135,10 @@ class WecomAdapter(adapter.MessagePlatformAdapter):
event_converter: WecomEventConverter = WecomEventConverter()
config: dict
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
required_keys = [
'corpid',
@@ -156,6 +157,7 @@ class WecomAdapter(adapter.MessagePlatformAdapter):
token=config['token'],
EncodingAESKey=config['EncodingAESKey'],
contacts_secret=config['contacts_secret'],
logger=self.logger
)
async def reply_message(
@@ -199,8 +201,8 @@ class WecomAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in wecom callback: {traceback.format_exc()}")
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -13,6 +13,7 @@ from pkg.core import app
from .. import adapter
from ..types import entities as platform_entities
from ...command.errors import ParamNotEnoughError
from ..logger import EventLogger
class WecomMessageConverter(adapter.MessageConverter):
@@ -124,10 +125,10 @@ class WecomCSAdapter(adapter.MessagePlatformAdapter):
event_converter: WecomEventConverter = WecomEventConverter()
config: dict
def __init__(self, config: dict, ap: app.Application):
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
required_keys = [
'corpid',
@@ -144,6 +145,7 @@ class WecomCSAdapter(adapter.MessagePlatformAdapter):
secret=config['secret'],
token=config['token'],
EncodingAESKey=config['EncodingAESKey'],
logger=self.logger
)
async def reply_message(
@@ -176,8 +178,8 @@ class WecomCSAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except:
traceback.print_exc()
except Exception as e:
await self.logger.error(f"Error in wecomcs callback: {traceback.format_exc()}")
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -3,7 +3,10 @@ import logging
import typing
from datetime import datetime
from pathlib import Path
import base64
import aiofiles
import httpx
import pydantic.v1 as pydantic
from . import entities as platform_entities
@@ -552,52 +555,29 @@ class Image(MessageComponent):
image_id = image_id[1:]
return image_id
async def download(
self,
filename: typing.Union[str, Path, None] = None,
directory: typing.Union[str, Path, None] = None,
determine_type: bool = True,
):
"""下载图片到本地。
async def get_bytes(self) -> typing.Tuple[bytes, str]:
"""获取图片的 bytes 和 mime type"""
if self.url:
async with httpx.AsyncClient() as client:
response = await client.get(self.url)
response.raise_for_status()
return response.content, response.headers.get('Content-Type')
elif self.base64:
mime_type = 'image/jpeg'
Args:
filename: 下载到本地的文件路径。与 `directory` 二选一。
directory: 下载到本地的文件夹路径。与 `filename` 二选一。
determine_type: 是否自动根据图片类型确定拓展名,默认为 True。
"""
if not self.url:
logger.warning(f'图片 `{self.uuid}` 无 url 参数,下载失败。')
return
split_index = self.base64.find(';base64,')
if split_index == -1:
raise ValueError('Invalid base64 string')
import httpx
mime_type = self.base64[5:split_index]
base64_data = self.base64[split_index + 8 :]
async with httpx.AsyncClient() as client:
response = await client.get(self.url)
response.raise_for_status()
content = response.content
if filename:
path = Path(filename)
if determine_type:
import imghdr
path = path.with_suffix('.' + str(imghdr.what(None, content)))
path.parent.mkdir(parents=True, exist_ok=True)
elif directory:
import imghdr
path = Path(directory)
path.mkdir(parents=True, exist_ok=True)
path = path / f'{self.uuid}.{imghdr.what(None, content)}'
else:
raise ValueError('请指定文件路径或文件夹路径!')
import aiofiles
async with aiofiles.open(path, 'wb') as f:
await f.write(content)
return path
return base64.b64decode(base64_data), mime_type
elif self.path:
async with aiofiles.open(self.path, 'rb') as f:
return await f.read(), 'image/jpeg'
else:
raise ValueError('Can not get bytes from image')
@classmethod
async def from_local(
@@ -820,12 +800,14 @@ class File(MessageComponent):
type: str = 'File'
"""消息组件类型。"""
id: str
id: str = ''
"""文件识别 ID。"""
name: str
"""文件名称。"""
size: int
size: int = 0
"""文件大小。"""
url: str
"""文件路径"""
def __str__(self):
return f'[文件]{self.name}'
@@ -942,3 +924,22 @@ class WeChatForwardQuote(MessageComponent):
def __str__(self):
return self.app_msg
class WeChatFile(MessageComponent):
"""文件。"""
type: str = 'File'
"""消息组件类型。"""
file_id: str = ''
"""文件识别 ID。"""
file_name: str = ''
"""文件名称。"""
file_size: int = 0
"""文件大小。"""
file_path: str = ''
"""文件地址"""
file_base64: str = ''
"""base64"""
def __str__(self):
return f'[文件]{self.file_name}'
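Image.get_bytes above returns a (bytes, mime_type) pair from whichever of url, base64 or path is set; the base64 branch expects a data URI of the form data:<mime>;base64,<payload>. A sketch of that parsing step in isolation, assuming the same data-URI convention:

import base64

def parse_data_uri(data_uri: str) -> tuple[bytes, str]:
    # Split 'data:<mime>;base64,<payload>' into raw bytes and mime type.
    split_index = data_uri.find(';base64,')
    if split_index == -1:
        raise ValueError('Invalid base64 string')
    mime_type = data_uri[5:split_index]      # skip the leading 'data:'
    payload = data_uri[split_index + 8:]     # skip ';base64,' (8 characters)
    return base64.b64decode(payload), mime_type

raw, mime = parse_data_uri('data:image/png;base64,' + base64.b64encode(b'\x89PNG').decode())
print(mime, len(raw))   # image/png 4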

View File

@@ -4,9 +4,6 @@ import sqlalchemy
from . import entities, requester
from ...core import app
from ...core import entities as core_entities
from .. import entities as llm_entities
from ..tools import entities as tools_entities
from ...discover import engine
from . import token
from ...entity.persistence import model as persistence_model
@@ -69,12 +66,11 @@ class ModelManager:
for llm_model in llm_models:
await self.load_llm_model(llm_model)
async def load_llm_model(
async def init_runtime_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""加载模型"""
"""初始化运行时模型"""
if isinstance(model_info, sqlalchemy.Row):
model_info = persistence_model.LLMModel(**model_info._mapping)
elif isinstance(model_info, dict):
@@ -92,6 +88,15 @@ class ModelManager:
),
requester=requester_inst,
)
return runtime_llm_model
async def load_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""加载模型"""
runtime_llm_model = await self.init_runtime_llm_model(model_info)
self.llm_models.append(runtime_llm_model)
async def get_model_by_name(self, name: str) -> entities.LLMModelInfo: # deprecated
@@ -132,12 +137,3 @@ class ModelManager:
if component.metadata.name == name:
return component
return None
async def invoke_llm(
self,
query: core_entities.Query,
model_uuid: str,
messages: list[llm_entities.Message],
funcs: list[tools_entities.LLMFunction] = None,
) -> llm_entities.Message:
pass

View File

@@ -1,87 +1,14 @@
from __future__ import annotations
import typing
import google.genai
from google.genai import types
from .. import errors, requester
from ....core import entities as core_entities
from ... import entities as llm_entities
from ...tools import entities as tools_entities
from . import chatcmpl
class GeminiChatCompletions(requester.LLMAPIRequester):
class GeminiChatCompletions(chatcmpl.OpenAIChatCompletions):
"""Google Gemini API 请求器"""
default_config: dict[str, typing.Any] = {
'base_url': 'https://generativelanguage.googleapis.com',
'base_url': 'https://generativelanguage.googleapis.com/v1beta/openai',
'timeout': 120,
}
async def initialize(self):
"""初始化 Gemini API 客户端"""
pass
async def invoke_llm(
self,
query: core_entities.Query,
model: requester.RuntimeLLMModel,
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
) -> llm_entities.Message:
"""调用 Gemini API 生成回复"""
try:
self.client = google.genai.Client(
api_key=model.token_mgr.get_token(),
http_options=types.HttpOptions(api_version='v1alpha'),
)
contents = []
system_content = None
for message in messages:
role = message.role
parts = []
if isinstance(message.content, str):
parts.append(types.Part.from_text(text=message.content))
elif isinstance(message.content, list):
for content in message.content:
if content.type == 'text':
parts.append(types.Part.from_text(text=content.text))
# elif content.type == 'image_url':
# parts.append(types.Part.from_image_url(url=content.image_url))
if role == 'system':
system_content = parts
else:
content = types.Content(role=role, parts=parts)
contents.append(content)
response = self.client.models.generate_content(
model=model.model_entity.name,
contents=contents,
config=types.GenerateContentConfig(
system_instruction=system_content,
**extra_args,
),
)
return llm_entities.Message(
role='assistant',
content=response.candidates[0].content.parts[0].text,
)
except Exception as e:
error_message = str(e).lower()
if 'invalid api key' in error_message:
raise errors.RequesterError(f'无效的 API 密钥: {str(e)}')
elif 'not found' in error_message:
raise errors.RequesterError(f'请求路径错误或模型无效: {str(e)}')
elif any(keyword in error_message for keyword in ['rate limit', 'quota', 'permission denied']):
raise errors.RequesterError(f'请求过于频繁或余额不足: {str(e)}')
elif 'timeout' in error_message:
raise errors.RequesterError(f'请求超时: {str(e)}')
else:
raise errors.RequesterError(f'Gemini API 请求错误: {str(e)}')
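With this change the dedicated google.genai client is removed: GeminiChatCompletions now subclasses the OpenAI chat-completions requester and only swaps the default base URL for Google's OpenAI-compatible endpoint. A minimal sketch of a direct call against that endpoint, assuming the official openai Python package and a GEMINI_API_KEY environment variable (neither is part of this diff), with a placeholder model name:

import os
import asyncio
import openai

async def main():
    client = openai.AsyncOpenAI(
        base_url='https://generativelanguage.googleapis.com/v1beta/openai',
        api_key=os.environ['GEMINI_API_KEY'],
    )
    resp = await client.chat.completions.create(
        model='gemini-2.0-flash',   # assumed model name, substitute your own
        messages=[{'role': 'user', 'content': 'Say hi in one word.'}],
    )
    print(resp.choices[0].message.content)

asyncio.run(main())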

View File

@@ -14,7 +14,7 @@ spec:
zh_Hans: 基础 URL
type: string
required: true
default: "https://generativelanguage.googleapis.com"
default: "https://generativelanguage.googleapis.com/v1beta/openai"
- name: timeout
label:
en_US: Timeout

View File

@@ -57,6 +57,8 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
if chunk.choices[0].delta.tool_calls is not None:
for tool_call in chunk.choices[0].delta.tool_calls:
if tool_call.function.arguments is None:
continue
for tc in tool_calls:
if tc.index == tool_call.index:
tc.function.arguments += tool_call.function.arguments

View File

@@ -42,7 +42,7 @@ class OllamaChatCompletions(requester.LLMAPIRequester):
query: core_entities.Query,
req_messages: list[dict],
use_model: requester.RuntimeLLMModel,
user_funcs: list[tools_entities.LLMFunction] = None,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
) -> llm_entities.Message:
args = extra_args.copy()
@@ -67,8 +67,8 @@ class OllamaChatCompletions(requester.LLMAPIRequester):
args['messages'] = messages
args['tools'] = []
if user_funcs:
tools = await self.ap.tool_mgr.generate_tools_for_openai(user_funcs)
if use_funcs:
tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)
if tools:
args['tools'] = tools

View File

@@ -0,0 +1,3 @@
<svg width="60" height="60" viewBox="0 0 60 60" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M29.7888 0.215881C13.3449 0.215881 0 13.5422 0 29.986C0 38.0916 3.24782 45.4527 8.51506 50.8223V30.0139C8.51506 24.3372 10.7299 18.9769 14.7408 14.966C18.7704 10.9365 24.112 8.74025 29.7981 8.74025H29.9749L29.7888 8.75886C41.5423 8.75886 51.0718 18.2883 51.0718 30.0326C51.0718 31.0562 50.9973 32.0613 50.8577 33.057L38.8343 20.9964C36.4333 18.5954 33.2134 17.2646 29.8074 17.2646C26.4013 17.2646 23.1907 18.5954 20.7805 20.9964C18.3609 23.4159 17.0394 26.6172 17.0394 30.0326C17.0394 33.4479 18.3702 36.6492 20.7805 39.0688C23.1814 41.4697 26.4013 42.8005 29.8074 42.8005C33.2134 42.8005 36.424 41.4697 38.8343 39.0688C41.077 36.826 42.3706 33.8946 42.5474 30.7584L49.6014 37.8403C46.4839 45.7319 38.797 51.3249 29.7981 51.3249C25.1357 51.3249 20.6874 49.8359 17.0301 47.072V56.9178C20.9014 58.7604 25.2195 59.7841 29.7794 59.7841C46.2233 59.7841 59.5682 46.4578 59.5682 30.0139C59.5868 13.5515 46.2512 0.225187 29.7981 0.225187L29.7888 0.215881Z" fill="#0062E2"/>
</svg>


View File

@@ -5,6 +5,7 @@ metadata:
label:
en_US: ppio
zh_Hans: 派欧云
icon: ppio.svg
spec:
config:
- name: base_url

View File

@@ -93,6 +93,7 @@ class DifyServiceAPIRunner(runner.RequestRunner):
async def _chat_messages(self, query: core_entities.Query) -> typing.AsyncGenerator[llm_entities.Message, None]:
"""调用聊天助手"""
cov_id = query.session.using_conversation.uuid or ''
query.variables['conversation_id'] = cov_id
plain_text, image_ids = await self._preprocess_user_message(query)
@@ -155,6 +156,7 @@ class DifyServiceAPIRunner(runner.RequestRunner):
) -> typing.AsyncGenerator[llm_entities.Message, None]:
"""调用聊天助手"""
cov_id = query.session.using_conversation.uuid or ''
query.variables['conversation_id'] = cov_id
plain_text, image_ids = await self._preprocess_user_message(query)

View File

@@ -0,0 +1,159 @@
from __future__ import annotations
import typing
import json
import uuid
import aiohttp
from .. import runner
from ...core import app, entities as core_entities
from .. import entities as llm_entities
class N8nAPIError(Exception):
"""N8n API 请求失败"""
def __init__(self, message: str):
self.message = message
super().__init__(self.message)
@runner.runner_class('n8n-service-api')
class N8nServiceAPIRunner(runner.RequestRunner):
"""N8n Service API 工作流请求器"""
def __init__(self, ap: app.Application, pipeline_config: dict):
self.ap = ap
self.pipeline_config = pipeline_config
# 获取webhook URL
self.webhook_url = self.pipeline_config['ai']['n8n-service-api']['webhook-url']
# 获取超时设置默认为120秒
self.timeout = self.pipeline_config['ai']['n8n-service-api'].get('timeout', 120)
# 获取输出键名默认为response
self.output_key = self.pipeline_config['ai']['n8n-service-api'].get('output-key', 'response')
# 获取认证类型默认为none
self.auth_type = self.pipeline_config['ai']['n8n-service-api'].get('auth-type', 'none')
# 根据认证类型获取相应的认证信息
if self.auth_type == 'basic':
self.basic_username = self.pipeline_config['ai']['n8n-service-api'].get('basic-username', '')
self.basic_password = self.pipeline_config['ai']['n8n-service-api'].get('basic-password', '')
elif self.auth_type == 'jwt':
self.jwt_secret = self.pipeline_config['ai']['n8n-service-api'].get('jwt-secret', '')
self.jwt_algorithm = self.pipeline_config['ai']['n8n-service-api'].get('jwt-algorithm', 'HS256')
elif self.auth_type == 'header':
self.header_name = self.pipeline_config['ai']['n8n-service-api'].get('header-name', '')
self.header_value = self.pipeline_config['ai']['n8n-service-api'].get('header-value', '')
async def _preprocess_user_message(self, query: core_entities.Query) -> str:
"""预处理用户消息,提取纯文本
Returns:
str: 纯文本消息
"""
plain_text = ''
if isinstance(query.user_message.content, list):
for ce in query.user_message.content:
if ce.type == 'text':
plain_text += ce.text
# 注意n8n webhook目前不支持直接处理图片如需支持可在此扩展
elif isinstance(query.user_message.content, str):
plain_text = query.user_message.content
return plain_text
async def _call_webhook(self, query: core_entities.Query) -> typing.AsyncGenerator[llm_entities.Message, None]:
"""调用n8n webhook"""
# 生成会话ID如果不存在
if not query.session.using_conversation.uuid:
query.session.using_conversation.uuid = str(uuid.uuid4())
# 预处理用户消息
plain_text = await self._preprocess_user_message(query)
# 准备请求数据
payload = {
# 基本消息内容
'message': plain_text,
'user_message_text': plain_text,
'conversation_id': query.session.using_conversation.uuid,
'session_id': query.variables.get('session_id', ''),
'user_id': f'{query.session.launcher_type.value}_{query.session.launcher_id}',
'msg_create_time': query.variables.get('msg_create_time', ''),
}
# 添加所有变量到payload
payload.update(query.variables)
try:
# 准备请求头和认证信息
headers = {}
auth = None
# 根据认证类型设置相应的认证信息
if self.auth_type == 'basic':
# 使用Basic认证
auth = aiohttp.BasicAuth(self.basic_username, self.basic_password)
self.ap.logger.debug(f'using basic auth: {self.basic_username}')
elif self.auth_type == 'jwt':
# 使用JWT认证
import jwt
import time
# 创建JWT令牌
payload_jwt = {
'exp': int(time.time()) + 3600, # 1小时过期
'iat': int(time.time()),
'sub': 'n8n-webhook',
}
token = jwt.encode(payload_jwt, self.jwt_secret, algorithm=self.jwt_algorithm)
# 添加到Authorization头
headers['Authorization'] = f'Bearer {token}'
self.ap.logger.debug('using jwt auth')
elif self.auth_type == 'header':
# 使用自定义请求头认证
headers[self.header_name] = self.header_value
self.ap.logger.debug(f'using header auth: {self.header_name}')
else:
self.ap.logger.debug('no auth')
# 调用webhook
async with aiohttp.ClientSession() as session:
async with session.post(
self.webhook_url, json=payload, headers=headers, auth=auth, timeout=self.timeout
) as response:
if response.status != 200:
error_text = await response.text()
self.ap.logger.error(f'n8n webhook call failed: {response.status}, {error_text}')
raise Exception(f'n8n webhook call failed: {response.status}, {error_text}')
# 解析响应
response_data = await response.json()
self.ap.logger.debug(f'n8n webhook response: {response_data}')
# 从响应中提取输出
if self.output_key in response_data:
output_content = response_data[self.output_key]
else:
# 如果没有指定的输出键,则使用整个响应
output_content = json.dumps(response_data, ensure_ascii=False)
# 返回消息
yield llm_entities.Message(
role='assistant',
content=output_content,
)
except Exception as e:
self.ap.logger.error(f'n8n webhook call exception: {str(e)}')
raise N8nAPIError(f'n8n webhook call exception: {str(e)}')
async def run(self, query: core_entities.Query) -> typing.AsyncGenerator[llm_entities.Message, None]:
"""运行请求"""
async for msg in self._call_webhook(query):
yield msg
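The runner above posts the user's text plus session variables to the configured webhook and picks one key out of the JSON reply. A trimmed standalone sketch of the same call with aiohttp, without the auth variants and using a placeholder URL:

import asyncio
import aiohttp

async def call_n8n(webhook_url: str, text: str, output_key: str = 'response') -> str:
    payload = {'message': text, 'user_message_text': text}
    timeout = aiohttp.ClientTimeout(total=120)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.post(webhook_url, json=payload) as resp:
            if resp.status != 200:
                raise RuntimeError(f'n8n webhook call failed: {resp.status}')
            data = await resp.json()
    # Fall back to the whole body if the configured output key is missing.
    return data.get(output_key, str(data))

# asyncio.run(call_n8n('http://your-n8n-webhook-url', 'hello'))  # placeholder URL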

View File

@@ -41,6 +41,8 @@ class SessionManager:
query: core_entities.Query,
session: core_entities.Session,
prompt_config: list[dict],
pipeline_uuid: str,
bot_uuid: str,
) -> core_entities.Conversation:
"""获取对话或创建对话"""
@@ -58,13 +60,15 @@ class SessionManager:
messages=prompt_messages,
)
if session.using_conversation is None:
if session.using_conversation is None or session.using_conversation.pipeline_uuid != pipeline_uuid:
conversation = core_entities.Conversation(
prompt=prompt,
messages=[],
use_funcs=await self.ap.tool_mgr.get_all_functions(
plugin_enabled=True,
),
pipeline_uuid=pipeline_uuid,
bot_uuid=bot_uuid,
)
session.conversations.append(conversation)
session.using_conversation = conversation

View File

@@ -82,8 +82,8 @@ class RuntimeMCPSession:
for tool in tools.tools:
async def func(query: core_entities.Query, **kwargs):
result = await self.session.call_tool(tool.name, kwargs)
async def func(query: core_entities.Query, *, _tool=tool, **kwargs):
result = await self.session.call_tool(_tool.name, kwargs)
if result.isError:
raise Exception(result.content[0].text)
return result.content[0].text
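The fix above addresses Python's late-binding closures: every func generated in the loop previously closed over the loop variable tool, so all registered tools ended up calling the last one; binding it as a keyword-only default (_tool=tool) captures the value per iteration. A minimal reproduction of the pitfall and the fix, independent of MCP:

# Late binding: all three closures read `name` only when called, after the loop ended.
broken = [lambda: name for name in ('a', 'b', 'c')]
print([f() for f in broken])          # ['c', 'c', 'c']

# Default-argument binding captures the value at definition time, as in the fix.
fixed = [lambda _name=name: _name for name in ('a', 'b', 'c')]
print([f() for f in fixed])           # ['a', 'b', 'c']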

pkg/storage/__init__.py (new file)
View File

pkg/storage/mgr.py (new file)
View File

@@ -0,0 +1,21 @@
from __future__ import annotations
from ..core import app
from . import provider
from .providers import localstorage
class StorageMgr:
"""存储管理器"""
ap: app.Application
storage_provider: provider.StorageProvider
def __init__(self, ap: app.Application):
self.ap = ap
self.storage_provider = localstorage.LocalStorageProvider(ap)
async def initialize(self):
await self.storage_provider.initialize()

pkg/storage/provider.py (new file)
View File

@@ -0,0 +1,44 @@
from __future__ import annotations
import abc
from ..core import app
class StorageProvider(abc.ABC):
ap: app.Application
def __init__(self, ap: app.Application):
self.ap = ap
async def initialize(self):
pass
@abc.abstractmethod
async def save(
self,
key: str,
value: bytes,
):
pass
@abc.abstractmethod
async def load(
self,
key: str,
) -> bytes:
pass
@abc.abstractmethod
async def exists(
self,
key: str,
) -> bool:
pass
@abc.abstractmethod
async def delete(
self,
key: str,
):
pass

View File

@@ -0,0 +1,45 @@
from __future__ import annotations
import os
import aiofiles
from ...core import app
from .. import provider
LOCAL_STORAGE_PATH = os.path.join('data', 'storage')
class LocalStorageProvider(provider.StorageProvider):
def __init__(self, ap: app.Application):
super().__init__(ap)
if not os.path.exists(LOCAL_STORAGE_PATH):
os.makedirs(LOCAL_STORAGE_PATH)
async def save(
self,
key: str,
value: bytes,
):
async with aiofiles.open(os.path.join(LOCAL_STORAGE_PATH, f'{key}'), 'wb') as f:
await f.write(value)
async def load(
self,
key: str,
) -> bytes:
async with aiofiles.open(os.path.join(LOCAL_STORAGE_PATH, f'{key}'), 'rb') as f:
return await f.read()
async def exists(
self,
key: str,
) -> bool:
return os.path.exists(os.path.join(LOCAL_STORAGE_PATH, f'{key}'))
async def delete(
self,
key: str,
):
os.remove(os.path.join(LOCAL_STORAGE_PATH, f'{key}'))
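The storage layer added here is a small async key/value interface with a local-filesystem default that writes under data/storage relative to the working directory. A usage sketch of LocalStorageProvider on its own, run from the repository root; since the provider only stores the Application reference, None is passed in place of a real app for this demo:

import asyncio
from pkg.storage.providers import localstorage

async def main():
    storage = localstorage.LocalStorageProvider(None)   # real code passes the Application
    await storage.initialize()
    await storage.save('greeting.txt', b'hello')
    print(await storage.exists('greeting.txt'))   # True
    print(await storage.load('greeting.txt'))     # b'hello'
    await storage.delete('greeting.txt')

asyncio.run(main())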

View File

@@ -1,6 +1,6 @@
semantic_version = 'v4.0.2'
semantic_version = 'v4.0.7'
required_database_version = 1
required_database_version = 3
"""标记本版本所需要的数据库结构版本,用于判断数据库迁移"""
debug_mode = False

View File

@@ -1,8 +1,9 @@
import re
import inspect
import typing
def get_func_schema(function: callable) -> dict:
def get_func_schema(function: typing.Callable) -> dict:
"""
Return the data schema of a function.
{

View File

@@ -204,7 +204,9 @@ async def get_slack_image_to_base64(pic_url: str, bot_token: str):
try:
async with aiohttp.ClientSession() as session:
async with session.get(pic_url, headers=headers) as resp:
image_data = await resp.read()
return base64.b64encode(image_data).decode('utf-8')
mime_type = resp.headers.get("Content-Type", "application/octet-stream")
file_bytes = await resp.read()
base64_str = base64.b64encode(file_bytes).decode("utf-8")
return f"data:{mime_type};base64,{base64_str}"
except Exception as e:
raise (e)
raise (e)

View File

@@ -1,10 +0,0 @@
import aiohttp
async def get_myip() -> str:
try:
async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=10)) as session:
async with session.get('https://ip.useragentinfo.com/myip') as response:
return await response.text()
except Exception:
return '0.0.0.0'

View File

@@ -199,9 +199,9 @@ class VersionManager:
try:
if await self.ap.ver_mgr.is_new_version_available():
return (
'有新版本可用,根据文档更新https://docs.langbot.app/deploy/update.html',
'New version available:\n有新版本可用,根据文档更新: \nhttps://docs.langbot.app/zh/deploy/update.html',
logging.INFO,
)
except Exception as e:
return f'检查版本更新时出错: {e}', logging.WARNING
return f'Error checking version update: {e}', logging.WARNING

View File

@@ -1,6 +1,6 @@
[project]
name = "langbot"
version = "4.0.2"
version = "4.0.7"
description = "高稳定、支持扩展、多模态 - 大模型原生即时通信机器人平台"
readme = "README.md"
requires-python = ">=3.10.1"
@@ -45,11 +45,10 @@ dependencies = [
"websockets>=15.0.1",
"python-socks>=2.7.1", # dingtalk missing dependency
"taskgroup==0.0.0a4", # graingert/taskgroup#20
"pip>=25.1.1", # pkg.core.bootutils.deps
"google-genai>=1.15.0",
"google-generativeai>=0.8.5",
"pip>=25.1.1",
"ruff>=0.11.9",
"pre-commit>=4.2.0",
"uv>=0.7.11",
]
keywords = [
"bot",
@@ -181,3 +180,4 @@ skip-magic-trailing-comma = false
# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"

View File

@@ -16,6 +16,9 @@
"ignore-rules": {
"prefix": [],
"regexp": []
},
"misc": {
"combine-quote-message": true
}
},
"safety": {
@@ -55,6 +58,18 @@
"api-key": "your-api-key",
"app-id": "your-app-id",
"references-quote": "参考资料来自:"
},
"n8n-service-api": {
"webhook-url": "http://your-n8n-webhook-url",
"auth-type": "none",
"basic-username": "",
"basic-password": "",
"jwt-secret": "",
"jwt-algorithm": "HS256",
"header-name": "",
"header-value": "",
"timeout": 120,
"output-key": "response"
}
},
"output": {

View File

@@ -31,6 +31,10 @@ stages:
label:
en_US: Aliyun Dashscope App API
zh_Hans: 阿里云百炼平台 API
- name: n8n-service-api
label:
en_US: n8n Workflow API
zh_Hans: n8n 工作流 API
- name: local-agent
label:
en_US: Local Agent
@@ -170,3 +174,127 @@ stages:
type: string
required: false
default: '参考资料来自:'
- name: n8n-service-api
label:
en_US: n8n Workflow API
zh_Hans: n8n 工作流 API
description:
en_US: Configure the n8n workflow API of the pipeline
zh_Hans: 配置 n8n 工作流 API
config:
- name: webhook-url
label:
en_US: Webhook URL
zh_Hans: Webhook URL
description:
en_US: The webhook URL of the n8n workflow
zh_Hans: n8n 工作流的 webhook URL
type: string
required: true
- name: auth-type
label:
en_US: Authentication Type
zh_Hans: 认证类型
description:
en_US: The authentication type for the webhook call
zh_Hans: webhook 调用的认证类型
type: select
required: true
default: 'none'
options:
- name: 'none'
label:
en_US: None
zh_Hans: 无认证
- name: 'basic'
label:
en_US: Basic Auth
zh_Hans: 基本认证
- name: 'jwt'
label:
en_US: JWT
zh_Hans: JWT认证
- name: 'header'
label:
en_US: Header Auth
zh_Hans: 请求头认证
- name: basic-username
label:
en_US: Username
zh_Hans: 用户名
description:
en_US: The username for Basic Auth
zh_Hans: 基本认证的用户名
type: string
required: false
default: ''
- name: basic-password
label:
en_US: Password
zh_Hans: 密码
description:
en_US: The password for Basic Auth
zh_Hans: 基本认证的密码
type: string
required: false
default: ''
- name: jwt-secret
label:
en_US: Secret
zh_Hans: 密钥
description:
en_US: The secret for JWT authentication
zh_Hans: JWT认证的密钥
type: string
required: false
default: ''
- name: jwt-algorithm
label:
en_US: Algorithm
zh_Hans: 算法
description:
en_US: The algorithm for JWT authentication
zh_Hans: JWT认证的算法
type: string
required: false
default: 'HS256'
- name: header-name
label:
en_US: Header Name
zh_Hans: 请求头名称
description:
en_US: The header name for Header Auth
zh_Hans: 请求头认证的名称
type: string
required: false
default: ''
- name: header-value
label:
en_US: Header Value
zh_Hans: 请求头值
description:
en_US: The header value for Header Auth
zh_Hans: 请求头认证的值
type: string
required: false
default: ''
- name: timeout
label:
en_US: Timeout
zh_Hans: 超时时间
description:
en_US: The timeout in seconds for the webhook call
zh_Hans: webhook 调用的超时时间(秒)
type: integer
required: false
default: 120
- name: output-key
label:
en_US: Output Key
zh_Hans: 输出键名
description:
en_US: The key name of the output in the webhook response
zh_Hans: webhook 响应中输出内容的键名
type: string
required: false
default: 'response'

View File

@@ -46,8 +46,8 @@ stages:
en_US: Random
zh_Hans: 随机
description:
en_US: The probability of the random response, range from 0.0 to 1.0
zh_Hans: 随机响应概率范围0.0-1.0,对应 0% 到 100%
en_US: 'Probability of automatically responding to messages that are not matched by other rules. Range: 0.0-1.0 (0%-100%).'
zh_Hans: '自动响应其他规则未匹配的消息的概率范围0.0-1.0 (0%-100%)。'
type: float
required: false
default: 0
@@ -117,3 +117,18 @@ stages:
type: array[string]
required: true
default: []
- name: misc
label:
en_US: Misc
zh_Hans: 杂项
config:
- name: combine-quote-message
label:
en_US: Combine Quote Message
zh_Hans: 合并引用消息
description:
en_US: If enabled, the bot will combine the quote message with the user's message
zh_Hans: 如果启用,将合并引用消息与用户发送的消息
type: boolean
required: true
default: true

web/package-lock.json (generated)
View File

@@ -15,6 +15,8 @@
"@radix-ui/react-dialog": "^1.1.13",
"@radix-ui/react-hover-card": "^1.1.13",
"@radix-ui/react-label": "^2.1.6",
"@radix-ui/react-popover": "^1.1.14",
"@radix-ui/react-scroll-area": "^1.2.9",
"@radix-ui/react-select": "^2.2.4",
"@radix-ui/react-slot": "^1.2.2",
"@radix-ui/react-switch": "^1.2.4",
@@ -36,6 +38,7 @@
"react-dom": "^19.0.0",
"react-hook-form": "^7.56.3",
"react-i18next": "^15.5.1",
"react-photo-view": "^1.2.7",
"sonner": "^2.0.3",
"tailwind-merge": "^3.2.0",
"tailwindcss": "^4.1.5",
@@ -1254,6 +1257,215 @@
}
}
},
"node_modules/@radix-ui/react-popover": {
"version": "1.1.14",
"resolved": "https://registry.npmjs.org/@radix-ui/react-popover/-/react-popover-1.1.14.tgz",
"integrity": "sha512-ODz16+1iIbGUfFEfKx2HTPKizg2MN39uIOV8MXeHnmdd3i/N9Wt7vU46wbHsqA0xoaQyXVcs0KIlBdOA2Y95bw==",
"license": "MIT",
"dependencies": {
"@radix-ui/primitive": "1.1.2",
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-context": "1.1.2",
"@radix-ui/react-dismissable-layer": "1.1.10",
"@radix-ui/react-focus-guards": "1.1.2",
"@radix-ui/react-focus-scope": "1.1.7",
"@radix-ui/react-id": "1.1.1",
"@radix-ui/react-popper": "1.2.7",
"@radix-ui/react-portal": "1.1.9",
"@radix-ui/react-presence": "1.1.4",
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-slot": "1.2.3",
"@radix-ui/react-use-controllable-state": "1.2.2",
"aria-hidden": "^1.2.4",
"react-remove-scroll": "^2.6.3"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-arrow": {
"version": "1.1.7",
"resolved": "https://registry.npmjs.org/@radix-ui/react-arrow/-/react-arrow-1.1.7.tgz",
"integrity": "sha512-F+M1tLhO+mlQaOWspE8Wstg+z6PwxwRd8oQ8IXceWz92kfAmalTRf0EjrouQeo7QssEPfCn05B4Ihs1K9WQ/7w==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-primitive": "2.1.3"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-dismissable-layer": {
"version": "1.1.10",
"resolved": "https://registry.npmjs.org/@radix-ui/react-dismissable-layer/-/react-dismissable-layer-1.1.10.tgz",
"integrity": "sha512-IM1zzRV4W3HtVgftdQiiOmA0AdJlCtMLe00FXaHwgt3rAnNsIyDqshvkIW3hj/iu5hu8ERP7KIYki6NkqDxAwQ==",
"license": "MIT",
"dependencies": {
"@radix-ui/primitive": "1.1.2",
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-use-callback-ref": "1.1.1",
"@radix-ui/react-use-escape-keydown": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-focus-scope": {
"version": "1.1.7",
"resolved": "https://registry.npmjs.org/@radix-ui/react-focus-scope/-/react-focus-scope-1.1.7.tgz",
"integrity": "sha512-t2ODlkXBQyn7jkl6TNaw/MtVEVvIGelJDCG41Okq/KwUsJBwQ4XVZsHAVUkK4mBv3ewiAS3PGuUWuY2BoK4ZUw==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-use-callback-ref": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-popper": {
"version": "1.2.7",
"resolved": "https://registry.npmjs.org/@radix-ui/react-popper/-/react-popper-1.2.7.tgz",
"integrity": "sha512-IUFAccz1JyKcf/RjB552PlWwxjeCJB8/4KxT7EhBHOJM+mN7LdW+B3kacJXILm32xawcMMjb2i0cIZpo+f9kiQ==",
"license": "MIT",
"dependencies": {
"@floating-ui/react-dom": "^2.0.0",
"@radix-ui/react-arrow": "1.1.7",
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-context": "1.1.2",
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-use-callback-ref": "1.1.1",
"@radix-ui/react-use-layout-effect": "1.1.1",
"@radix-ui/react-use-rect": "1.1.1",
"@radix-ui/react-use-size": "1.1.1",
"@radix-ui/rect": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-portal": {
"version": "1.1.9",
"resolved": "https://registry.npmjs.org/@radix-ui/react-portal/-/react-portal-1.1.9.tgz",
"integrity": "sha512-bpIxvq03if6UNwXZ+HTK71JLh4APvnXntDc6XOX8UVq4XQOVl7lwok0AvIl+b8zgCw3fSaVTZMpAPPagXbKmHQ==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-use-layout-effect": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-primitive": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.3.tgz",
"integrity": "sha512-m9gTwRkhy2lvCPe6QJp4d3G1TYEUHn/FzJUtq9MjH46an1wJU+GdoGC5VLof8RX8Ft/DlpshApkhswDLZzHIcQ==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-slot": "1.2.3"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-slot": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
"integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-compose-refs": "1.1.2"
},
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-popper": {
"version": "1.2.6",
"resolved": "https://registry.npmjs.org/@radix-ui/react-popper/-/react-popper-1.2.6.tgz",
@@ -1388,6 +1600,78 @@
}
}
},
"node_modules/@radix-ui/react-scroll-area": {
"version": "1.2.9",
"resolved": "https://registry.npmjs.org/@radix-ui/react-scroll-area/-/react-scroll-area-1.2.9.tgz",
"integrity": "sha512-YSjEfBXnhUELsO2VzjdtYYD4CfQjvao+lhhrX5XsHD7/cyUNzljF1FHEbgTPN7LH2MClfwRMIsYlqTYpKTTe2A==",
"license": "MIT",
"dependencies": {
"@radix-ui/number": "1.1.1",
"@radix-ui/primitive": "1.1.2",
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-context": "1.1.2",
"@radix-ui/react-direction": "1.1.1",
"@radix-ui/react-presence": "1.1.4",
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-use-callback-ref": "1.1.1",
"@radix-ui/react-use-layout-effect": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-scroll-area/node_modules/@radix-ui/react-primitive": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.3.tgz",
"integrity": "sha512-m9gTwRkhy2lvCPe6QJp4d3G1TYEUHn/FzJUtq9MjH46an1wJU+GdoGC5VLof8RX8Ft/DlpshApkhswDLZzHIcQ==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-slot": "1.2.3"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-scroll-area/node_modules/@radix-ui/react-slot": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
"integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-compose-refs": "1.1.2"
},
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-select": {
"version": "2.2.4",
"resolved": "https://registry.npmjs.org/@radix-ui/react-select/-/react-select-2.2.4.tgz",
@@ -6126,6 +6410,15 @@
"integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==",
"dev": true
},
"node_modules/react-photo-view": {
"version": "1.2.7",
"resolved": "https://registry.npmmirror.com/react-photo-view/-/react-photo-view-1.2.7.tgz",
"integrity": "sha512-MfOWVPxuibncRLaycZUNxqYU8D9IA+rbGDDaq6GM8RIoGJal592hEJoRAyRSI7ZxyyJNJTLMUWWL3UIXHJJOpw==",
"peerDependencies": {
"react": ">=16.8.0",
"react-dom": ">=16.8.0"
}
},
"node_modules/react-remove-scroll": {
"version": "2.6.3",
"resolved": "https://registry.npmjs.org/react-remove-scroll/-/react-remove-scroll-2.6.3.tgz",

View File

@@ -4,6 +4,7 @@
"private": true,
"scripts": {
"dev": "next dev --turbopack",
"dev:local": "NEXT_PUBLIC_API_BASE_URL=http://localhost:5300 next dev --turbopack",
"build": "next build",
"start": "next start",
"lint": "next lint",
@@ -23,6 +24,8 @@
"@radix-ui/react-dialog": "^1.1.13",
"@radix-ui/react-hover-card": "^1.1.13",
"@radix-ui/react-label": "^2.1.6",
"@radix-ui/react-popover": "^1.1.14",
"@radix-ui/react-scroll-area": "^1.2.9",
"@radix-ui/react-select": "^2.2.4",
"@radix-ui/react-slot": "^1.2.2",
"@radix-ui/react-switch": "^1.2.4",
@@ -44,6 +47,7 @@
"react-dom": "^19.0.0",
"react-hook-form": "^7.56.3",
"react-i18next": "^15.5.1",
"react-photo-view": "^1.2.7",
"sonner": "^2.0.3",
"tailwind-merge": "^3.2.0",
"tailwindcss": "^4.1.5",

View File

@@ -0,0 +1,63 @@
import { httpClient } from '@/app/infra/http/HttpClient';
import {
BotLog,
GetBotLogsResponse,
} from '@/app/infra/http/requestParam/bots/GetBotLogsResponse';
export class BotLogManager {
private botId: string;
private callbacks: ((_: BotLog[]) => void)[] = [];
private intervalIds: number[] = [];
constructor(botId: string) {
this.botId = botId;
}
startListenServerPush() {
const timerNumber = setInterval(() => {
this.getLogList(-1, 50).then((response) => {
this.callbacks.forEach((callback) =>
callback(this.parseResponse(response)),
);
});
}, 3000);
this.intervalIds.push(Number(timerNumber));
}
stopServerPush() {
this.intervalIds.forEach((id) => clearInterval(id));
this.intervalIds = [];
}
subscribeLogPush(callback: (_: BotLog[]) => void) {
if (!this.callbacks.includes(callback)) {
this.callbacks.push(callback);
}
}
dispose() {
this.callbacks = [];
}
/**
* 获取日志页的基本信息
*/
private getLogList(next: number, count: number = 20) {
return httpClient.getBotLogs(this.botId, {
from_index: next,
max_count: count,
});
}
async loadFirstPage() {
return this.parseResponse(await this.getLogList(-1, 10));
}
async loadMore(position: number, total: number) {
return this.parseResponse(await this.getLogList(position, total));
}
private parseResponse(httpResponse: GetBotLogsResponse): BotLog[] {
return httpResponse.logs;
}
}

View File

@@ -0,0 +1,116 @@
'use client';
import { BotLog } from '@/app/infra/http/requestParam/bots/GetBotLogsResponse';
import styles from './botLog.module.css';
import { httpClient } from '@/app/infra/http/HttpClient';
import { PhotoProvider } from 'react-photo-view';
import { useTranslation } from 'react-i18next';
import { toast } from 'sonner';
export function BotLogCard({ botLog }: { botLog: BotLog }) {
const { t } = useTranslation();
const baseURL = httpClient.getBaseUrl();
function formatTime(timestamp: number) {
const now = new Date();
const date = new Date(timestamp * 1000);
// 获取各个时间部分
const year = date.getFullYear();
const month = date.getMonth() + 1; // 月份从0开始需要+1
const day = date.getDate();
const hours = date.getHours().toString().padStart(2, '0');
const minutes = date.getMinutes().toString().padStart(2, '0');
// 判断时间范围
const isToday = now.toDateString() === date.toDateString();
const isYesterday =
new Date(now.setDate(now.getDate() - 1)).toDateString() ===
date.toDateString();
const isThisYear = now.getFullYear() === year;
if (isToday) {
return `${hours}:${minutes}`; // 今天的消息:小时:分钟
} else if (isYesterday) {
return `${t('bots.yesterday')} ${hours}:${minutes}`; // 昨天的消息:昨天 小时:分钟
} else if (isThisYear) {
return t('bots.dateFormat', { month, day }); // 本年消息x月x日
} else {
return t('bots.earlier'); // 更早的消息:更久之前
}
}
function getSubChatId(str: string) {
const strArr = str.split('');
return strArr;
}
return (
<div className={`${styles.botLogCardContainer}`}>
{/* 头部标签,时间 */}
<div className={`${styles.cardTitleContainer}`}>
<div className={`flex flex-row gap-4`}>
<div className={`${styles.tag}`}>{botLog.level}</div>
{botLog.message_session_id && (
<div
className={`${styles.tag} ${styles.chatTag}`}
onClick={() => {
navigator.clipboard
.writeText(botLog.message_session_id)
.then(() => {
toast.success(t('common.copySuccess'));
});
}}
>
<svg
className="icon"
viewBox="0 0 1024 1024"
version="1.1"
xmlns="http://www.w3.org/2000/svg"
p-id="1664"
width="20"
height="20"
fill="currentColor"
>
<path
d="M96.1 575.7a32.2 32.1 0 1 0 64.4 0 32.2 32.1 0 1 0-64.4 0Z"
p-id="1665"
fill="currentColor"
></path>
<path
d="M742.1 450.7l-269.5-2.1c-14.3-0.1-26 13.8-26 31s11.7 31.3 26 31.4l269.5 2.1c14.3 0.1 26-13.8 26-31s-11.7-31.3-26-31.4zM742.1 577.7l-269.5-2.1c-14.3-0.1-26 13.8-26 31s11.7 31.3 26 31.4l269.5 2.1c14.3 0.2 26-13.8 26-31s-11.7-31.3-26-31.4z"
p-id="1666"
fill="currentColor"
></path>
<path
d="M736.1 63.9H417c-70.4 0-128 57.6-128 128h-64.9c-70.4 0-128 57.6-128 128v128c-0.1 17.7 14.4 32 32.2 32 17.8 0 32.2-14.4 32.2-32.1V320c0-35.2 28.8-64 64-64H289v447.8c0 70.4 57.6 128 128 128h255.1c-0.1 35.2-28.8 63.8-64 63.8H224.5c-35.2 0-64-28.8-64-64V703.5c0-17.7-14.4-32.1-32.2-32.1-17.8 0-32.3 14.4-32.3 32.1v128.3c0 70.4 57.6 128 128 128h384.1c70.4 0 128-57.6 128-128h65c70.4 0 128-57.6 128-128V255.9l-193-192z m0.1 63.4l127.7 128.3H800c-35.2 0-64-28.8-64-64v-64.3h0.2z m64 641H416.1c-35.2 0-64-28.8-64-64v-513c0-35.2 28.8-64 64-64H671V191c0 70.4 57.6 128 128 128h65.2v385.3c0 35.2-28.8 64-64 64z"
p-id="1667"
fill="currentColor"
></path>
</svg>
{/* 会话ID */}
<span className={`${styles.chatId}`}>
{getSubChatId(botLog.message_session_id)}
</span>
</div>
)}
</div>
<div>{formatTime(botLog.timestamp)}</div>
</div>
<div className={`${styles.cardTitleContainer} ${styles.cardText}`}>
{botLog.text}
</div>
<PhotoProvider className={``}>
<div className={`w-50 mt-2`}>
{botLog.images.map((item) => (
<img
key={item}
src={`${baseURL}/api/v1/files/image/${item}`}
alt=""
/>
))}
</div>
</PhotoProvider>
</div>
);
}

View File

@@ -0,0 +1,129 @@
'use client';
import { BotLogManager } from '@/app/home/bots/bot-log/BotLogManager';
import { useCallback, useEffect, useRef, useState } from 'react';
import { BotLog } from '@/app/infra/http/requestParam/bots/GetBotLogsResponse';
import { BotLogCard } from '@/app/home/bots/bot-log/view/BotLogCard';
import styles from './botLog.module.css';
import { Switch } from '@/components/ui/switch';
import { debounce } from 'lodash';
import { useTranslation } from 'react-i18next';
export function BotLogListComponent({ botId }: { botId: string }) {
const { t } = useTranslation();
const manager = useRef(new BotLogManager(botId)).current;
const [botLogList, setBotLogList] = useState<BotLog[]>([]);
const [autoFlush, setAutoFlush] = useState(true);
const listContainerRef = useRef<HTMLDivElement>(null);
const botLogListRef = useRef<BotLog[]>(botLogList);
useEffect(() => {
initComponent();
return () => {
onDestroy();
};
}, []);
useEffect(() => {
botLogListRef.current = botLogList;
}, [botLogList]);
// 观测自动刷新状态
useEffect(() => {
if (autoFlush) {
manager.startListenServerPush();
} else {
manager.stopServerPush();
}
return () => {
manager.stopServerPush();
};
}, [autoFlush]);
function initComponent() {
// 订阅日志推送
manager.subscribeLogPush(handleBotLogPush);
// 加载第一页日志
manager.loadFirstPage().then((response) => {
setBotLogList(response.reverse());
});
// 监听滚动
listenScroll();
}
function onDestroy() {
manager.dispose();
removeScrollListener();
}
function listenScroll() {
if (!listContainerRef.current) {
return;
}
const list = listContainerRef.current;
list.addEventListener('scroll', handleScroll);
}
function removeScrollListener() {
if (!listContainerRef.current) {
return;
}
const list = listContainerRef.current;
list.removeEventListener('scroll', handleScroll);
}
function loadMore() {
// 加载更多日志
const list = botLogListRef.current;
const lastSeq = list[list.length - 1].seq_id;
if (lastSeq === 0) {
return;
}
manager.loadMore(lastSeq - 1, 10).then((response) => {
setBotLogList([...list, ...response.reverse()]);
});
}
function handleBotLogPush(response: BotLog[]) {
setBotLogList(response.reverse());
}
const handleScroll = useCallback(
debounce(() => {
if (!listContainerRef.current) return;
const { scrollTop, scrollHeight, clientHeight } =
listContainerRef.current;
const isBottom = scrollTop + clientHeight >= scrollHeight - 5;
const isTop = scrollTop === 0;
if (isBottom) {
setAutoFlush(false);
loadMore();
}
if (isTop) {
setAutoFlush(true);
}
if (!isTop && !isBottom) {
setAutoFlush(false);
}
}, 300), // 防抖延迟 300ms
[botLogList], // recreate the debounced handler when botLogList changes
);
return (
<div
className={`${styles.botLogListContainer} px-6`}
ref={listContainerRef}
>
<div className={`${styles.listHeader}`}>
<div className={'mr-2'}>{t('bots.enableAutoRefresh')}</div>
<Switch checked={autoFlush} onCheckedChange={(e) => setAutoFlush(e)} />
</div>
{botLogList.map((botLog) => {
return <BotLogCard botLog={botLog} key={botLog.seq_id} />;
})}
</div>
);
}

View File

@@ -0,0 +1,68 @@
.botLogListContainer {
width: 100%;
min-height: 10rem;
display: flex;
flex-direction: column;
align-items: center;
justify-content: flex-start;
overflow-y: scroll;
}
.botLogCardContainer {
width: 100%;
background-color: #fff;
border-radius: 10px;
border: 1px solid #cbd5e1;
padding: 1.2rem;
margin-bottom: 1rem;
cursor: pointer;
}
.listHeader {
width: 100%;
height: 2.5rem;
display: flex;
flex-direction: row;
align-items: center;
}
.tag {
display: flex;
flex-direction: row;
align-items: center;
justify-content: flex-start;
gap: 0.2rem;
height: 1.5rem;
padding: 0.5rem;
border-radius: 0.4rem;
background-color: #a5d8ff;
color: #ffffff;
max-width: 16rem;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.chatTag {
color: #626262;
background-color: #d1d1d1;
}
.chatId {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.cardTitleContainer {
width: 100%;
display: flex;
flex-direction: row;
align-items: center;
justify-content: space-between;
}
.cardText {
margin-top: 0.4rem;
color: #64748b;
}

View File

@@ -1,7 +1,35 @@
import { BotCardVO } from '@/app/home/bots/components/bot-card/BotCardVO';
import styles from './botCard.module.css';
import { httpClient } from '@/app/infra/http/HttpClient';
import { Switch } from '@/components/ui/switch';
import { useTranslation } from 'react-i18next';
import { toast } from 'sonner';
import { Button } from '@/components/ui/button';
export default function BotCard({
botCardVO,
clickLogIconCallback,
setBotEnableCallback,
}: {
botCardVO: BotCardVO;
clickLogIconCallback: (id: string) => void;
setBotEnableCallback: (id: string, enable: boolean) => void;
}) {
const { t } = useTranslation();
function onClickLogIcon() {
clickLogIconCallback(botCardVO.id);
}
function setBotEnable(enable: boolean) {
return httpClient.updateBot(botCardVO.id, {
name: botCardVO.name,
description: botCardVO.description,
adapter: botCardVO.adapter,
adapter_config: botCardVO.adapterConfig,
enable: enable,
});
}
export default function BotCard({ botCardVO }: { botCardVO: BotCardVO }) {
return (
<div className={`${styles.cardContainer}`}>
<div className={`${styles.iconBasicInfoContainer}`}>
@@ -47,6 +75,44 @@ export default function BotCard({ botCardVO }: { botCardVO: BotCardVO }) {
</span>
</div>
</div>
<div className={`${styles.botOperationContainer}`}>
<Switch
checked={botCardVO.enable}
onCheckedChange={(e) => {
setBotEnable(e)
.then(() => {
setBotEnableCallback(botCardVO.id, e);
})
.catch((err) => {
console.error(err);
toast.error(t('bots.setBotEnableError'));
});
}}
onClick={(e) => {
e.stopPropagation();
}}
/>
<Button
variant="outline"
className="w-auto h-[40px]"
onClick={(e) => {
onClickLogIcon();
e.stopPropagation();
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
width="48"
height="48"
fill="currentColor"
>
<path d="M21 8V20.9932C21 21.5501 20.5552 22 20.0066 22H3.9934C3.44495 22 3 21.556 3 21.0082V2.9918C3 2.45531 3.4487 2 4.00221 2H14.9968L21 8ZM19 9H14V4H5V20H19V9ZM8 7H11V9H8V7ZM8 11H16V13H8V11ZM8 15H16V17H8V15Z"></path>
</svg>
{t('bots.log')}
</Button>
</div>
</div>
</div>
);

Some files were not shown because too many files have changed in this diff.