Compare commits

...

250 Commits

Author SHA1 Message Date
Junyan Qin
d7fc5283f7 chore: bump version 4.3.0b4 2025-08-28 14:43:45 +08:00
Junyan Qin
4bdd8a021c fix: tool call params in localagent 2025-08-28 14:38:10 +08:00
Junyan Qin
c0ccdaf91a fix: minor bugs 2025-08-28 14:02:56 +08:00
Junyan Qin
e7fe41810e fix: dark mode for plugins management page 2025-08-26 22:40:32 +08:00
Junyan Qin
56183867a7 fix: replace message_chain.has usage 2025-08-25 23:22:36 +08:00
Junyan Qin
ea6ce2f552 fix: set plugin enabled=true as default 2025-08-25 20:56:39 +08:00
Junyan Qin
55df728471 fix: wrong migration target version 2025-08-24 21:47:54 +08:00
Junyan Qin
8a370a260e fix: syntax errors 2025-08-24 21:46:20 +08:00
Junyan Qin
64764c412b Merge branch 'rc/new-plugin' into refactor/new-plugin-system 2025-08-24 21:40:02 +08:00
Junyan Qin
f2d5c21712 perf: inline code display style in markdown 2025-08-24 19:59:33 +08:00
Junyan Qin
6113c42014 fix: change tag of image to latest 2025-08-24 11:15:28 +08:00
Junyan Qin
fd9d1c4acc feat: update docker launch method 2025-08-24 11:10:05 +08:00
Junyan Qin
118ebddae6 fix: use --standalone-runtime 2025-08-23 23:03:32 +08:00
Junyan Qin
2742144e12 feat: change standalone runtime tag env 2025-08-23 22:57:46 +08:00
devin-ai-integration[bot]
83ff64698b feat: add ZIP file upload support for knowledge base (#1626)
* feat: add ZIP file upload support for knowledge base

- Add _parse_zip method to FileParser class using zipfile library
- Support extraction and processing of TXT, PDF, DOCX, MD, HTML files from ZIP
- Update FileUploadZone to accept .zip files
- Add ZIP format to supported formats in internationalization files
- Implement error handling for invalid ZIP files and unsupported content
- Follow existing async parsing patterns and error handling conventions

Co-Authored-By: Rock <rockchinq@gmail.com>

* refactor: modify ZIP processing to store each document as separate file

- Remove _parse_zip method from FileParser as ZIP handling now occurs at knowledge base level
- Add _store_zip_file method to RuntimeKnowledgeBase to extract and store each document separately
- Each document in ZIP is now stored as individual file entry in knowledge base
- Process ZIP files in memory using io.BytesIO to avoid filesystem writes
- Generate unique file names for extracted documents to prevent conflicts

Co-Authored-By: Rock <rockchinq@gmail.com>

* perf: delete temp files

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Rock <rockchinq@gmail.com>
2025-08-23 21:18:13 +08:00
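The in-memory extraction flow described in this commit can be sketched roughly as below. This is a hedged illustration: the helper name `extract_zip_documents` and the suffix allow-list are assumptions, not the actual `_store_zip_file` code in `RuntimeKnowledgeBase`.

```python
import io
import uuid
import zipfile
from pathlib import PurePosixPath

# Suffixes listed in the commit message; the real allow-list may differ.
SUPPORTED_SUFFIXES = {'.txt', '.pdf', '.docx', '.md', '.html'}


def extract_zip_documents(zip_bytes: bytes) -> list[tuple[str, bytes]]:
    """Extract supported documents from a ZIP held entirely in memory (no temp files).

    Returns (unique_name, content) pairs; a short UUID prefix keeps two archives
    that both contain e.g. 'notes.txt' from colliding in the knowledge base.
    Raises zipfile.BadZipFile for invalid archives, which the caller can report.
    """
    documents: list[tuple[str, bytes]] = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for info in archive.infolist():
            if info.is_dir():
                continue
            name = PurePosixPath(info.filename).name
            if PurePosixPath(name).suffix.lower() not in SUPPORTED_SUFFIXES:
                continue  # unsupported content is skipped rather than failing the upload
            documents.append((f'{uuid.uuid4().hex[:8]}_{name}', archive.read(info)))
    return documents
```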
How-Sean Xin
b5e22c6db8 Update package.json (#1627) 2025-08-23 20:22:25 +08:00
Junyan Qin
d3a147bbdd chore: bump version 4.3.0b3 2025-08-23 20:08:29 +08:00
Junyan Qin
8eb1b8759b chore: bump version to '4.3.0b2' 2025-08-23 20:06:19 +08:00
Junyan Qin
0155d3b0b9 fix: conflict in table plugin_settings 2025-08-23 20:05:24 +08:00
Junyan Qin
e47a5b4e0d chore: bump langbot_plugin version 2025-08-23 17:12:29 +08:00
Junyan Qin
87ecb4e519 feat: add note for remove_think & remove dify remove cot code 2025-08-21 21:38:58 +08:00
Ljzd_PRO
df524b8a7a fix: incorrect extraction of the sender ID when converting aiocqhttp reply messages (#1624)
* fix: update invoke_embedding to return only embeddings from client.embed

* fix: Fixed the incorrect extraction method of sender ID when converting aiocqhttp reply messages
2025-08-21 20:46:26 +08:00
Junyan Qin
8a7df423ab chore: update shengsuanyun url 2025-08-21 14:14:25 +08:00
Junyan Qin
cafd623c92 chore: update shengsuanyun 2025-08-21 12:03:04 +08:00
Junyan Qin
4df11ef064 chore: update for shengsuanyun 2025-08-21 11:47:40 +08:00
Junyan Qin
4012310d99 chore: bump version 4.3.0b1 2025-08-21 10:49:51 +08:00
Junyan Qin
9e9bc88473 chore: remove plugin reorder functionality 2025-08-21 10:47:53 +08:00
Junyan Qin
aa7c08ee00 chore: release v4.2.1 2025-08-21 10:15:05 +08:00
Junyan Qin
b98de29b07 feat: add shengsuanyun requester 2025-08-20 23:33:35 +08:00
Junyan Qin
53ade384eb feat: bump version of langbot-plugin 2025-08-20 23:26:32 +08:00
fdc310
c7c2eb4518 fix: call failure caused by missing id in Gemini tool_calls (#1622)
* fix: Dify agent LLM return messages could not be joined

* fix: call failure caused by missing id in Gemini tool_calls
2025-08-20 22:39:16 +08:00
Ljzd_PRO
37fa318258 fix: update invoke_embedding to return only embeddings from client.embed (#1619) 2025-08-20 10:25:33 +08:00
fdc310
ff7bebb782 fix: Dify agent LLM return messages could not be joined (#1617) 2025-08-19 23:23:00 +08:00
Junyan Qin
30bb26f898 doc(README): streaming output 2025-08-18 21:21:50 +08:00
Junyan Qin
9c1f4e1690 chore: release v4.2.0 2025-08-18 18:43:20 +08:00
dependabot[bot]
865ee2ca01 Merge pull request #1612 from langbot-app/dependabot/npm_and_yarn/web/form-data-4.0.4
chore(deps): bump form-data from 4.0.2 to 4.0.4 in /web
2025-08-18 16:10:56 +08:00
Junyan Qin (Chin)
c2264080bd Merge pull request #1442 from langbot-app/feat/streaming
feat: streaming output
2025-08-17 23:36:30 +08:00
Dong_master
67b622d5a6 fix:Some adjustments to the return types 2025-08-17 23:34:19 +08:00
Dong_master
a534c02d75 fix:remove print 2025-08-17 23:34:01 +08:00
Junyan Qin
da890d3074 chore: remove fix.MD 2025-08-17 21:20:32 +08:00
Junyan Qin
3049aa7a96 feat: add migration for pipeline remove-think 2025-08-17 21:18:41 +08:00
Junyan Qin
8b2480ad3b feat: setting plugin config 2025-08-17 21:01:43 +08:00
Junyan Qin
b176959836 feat: plugin deletion and upgrade 2025-08-17 18:07:51 +08:00
Junyan Qin
a0c42a5f6e feat: plugin operations 2025-08-17 16:51:44 +08:00
Junyan Qin (Chin)
e66f674968 Merge branch 'master' into feat/streaming 2025-08-17 14:30:22 +08:00
Junyan Qin (Chin)
dd0e0abdc4 Merge pull request #1571 from fdc310/streaming_feature
feat:add streaming output and pipeline stream
2025-08-17 14:27:39 +08:00
Junyan Qin (Chin)
13f6396eb4 Merge pull request #1610 from langbot-app/devin/1755399221-add-password-change-feature
feat: add password change functionality
2025-08-17 14:25:24 +08:00
Junyan Qin
7bbaa4fcad feat: perf ui & complete i18n 2025-08-17 14:09:28 +08:00
Junyan Qin
e931d5eb88 chore: remove print 2025-08-17 13:52:40 +08:00
Junyan Qin
4bbfa2f1d7 fix: telegram adapter gracefully stop 2025-08-17 13:52:02 +08:00
Junyan Qin
17d997c88e fix: i18n fallback 2025-08-17 11:43:38 +08:00
Devin AI
dd30d08c68 feat: add password change functionality
- Add password change button to sidebar account menu
- Create PasswordChangeDialog component with shadcn UI components
- Implement backend API endpoint /api/v1/user/change-password
- Add form validation with current password verification
- Include internationalization support for Chinese and English
- Add proper error handling and success notifications

Co-Authored-By: Rock <rockchinq@gmail.com>
2025-08-17 03:03:36 +00:00
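A minimal sketch of such an endpoint, assuming a Quart app (Quart is used elsewhere in this codebase) and a simple in-memory user record; the handler body, field names, and hashing are illustrative assumptions, not LangBot's actual `/api/v1/user/change-password` implementation.

```python
import hashlib

from quart import Quart, jsonify, request

app = Quart(__name__)

# Hypothetical single-user store; real code would use the database layer and a proper KDF.
_user = {'email': 'demo@langbot.app', 'password_hash': hashlib.sha256(b'langbot123456').hexdigest()}


def _hash(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()


@app.route('/api/v1/user/change-password', methods=['POST'])
async def change_password():
    data = await request.get_json() or {}
    current, new = data.get('current_password', ''), data.get('new_password', '')
    if not current or not new:
        return jsonify({'code': 400, 'msg': 'missing fields'}), 400
    if _hash(current) != _user['password_hash']:  # verify the current password before changing
        return jsonify({'code': 403, 'msg': 'current password incorrect'}), 403
    _user['password_hash'] = _hash(new)
    return jsonify({'code': 0, 'msg': 'ok'})
```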
Junyan Qin
0ea7609ff1 perf: frontend 2025-08-16 23:23:24 +08:00
Junyan Qin
28d4b1dd61 feat: marketplace page 2025-08-16 18:05:33 +08:00
Junyan Qin
5179b3e53a feat: trace plugin installation 2025-08-16 15:42:49 +08:00
Dong_master
8ccda10045 fix: workflow stream bug in dashscopeapi.py 2025-08-16 12:11:00 +08:00
Dong_master
46fbfbefea fix: stream and non-stream remove_think logic in dashscopeapi.py 2025-08-16 02:13:45 +08:00
Junyan Qin
288b294148 feat: plugin installation webui 2025-08-15 22:05:39 +08:00
Junyan Qin
b464d238c5 feat: plugin installation 2025-08-15 21:30:26 +08:00
Junyan Qin
e1a78e8ff9 feat: tag debugging plugins in webui 2025-08-15 19:11:49 +08:00
Junyan Qin
2b8eb5f01c fix: bot switching 2025-08-15 17:02:00 +08:00
Dong_master
8f863cf530 fix: remove_think bug 2025-08-15 00:55:39 +08:00
Dong_master
2351193c51 fix: add streaming in difysvapi.py, and remove_think on each chunk 2025-08-15 00:50:32 +08:00
Junyan Qin
bf2bc70794 feat: refactor webui httpclient 2025-08-14 23:55:14 +08:00
Junyan Qin
ebe0b68e8f feat: set cloud_service_url 2025-08-14 23:42:57 +08:00
Dong_master
8c87a47f5a fix: add remove_think arg to the _closure func in ollamachat.py 2025-08-14 22:35:30 +08:00
Dong_master
b8b9a37825 fix: Dify non-stream remove_think logic 2025-08-14 22:32:22 +08:00
Dong_master
13dd6fcee3 fix: webchat non-stream did not save resp_message in message_lists 2025-08-14 22:29:42 +08:00
Junyan Qin
39c50d3c12 feat: get_bot_info api 2025-08-13 20:54:43 +08:00
Junyan Qin
29f0075bd8 perf: zh-Hant specs 2025-08-13 17:49:54 +08:00
Junyan Qin
8a96ffbcc0 chore: complete zh-Hant specs for top_k 2025-08-13 17:33:47 +08:00
Junyan Qin (Chin)
67f68d8101 Merge pull request #1606 from langbot-app/feat/topk_splitter
Feat/topk splitter
2025-08-13 17:31:11 +08:00
Junyan Qin
ad59d92cef perf: i18n 2025-08-13 17:28:00 +08:00
Dong_master
85f97860c5 fix: errors in modelscopechatcmpl.py in pseudo-non-streaming mode regarding the display of main content and tool calls 2025-08-13 01:55:06 +08:00
Dong_master
8fd21e76f2 fix: assign msg_sequence to subsequent tool calls only when a MessageChunk is present 2025-08-13 00:00:10 +08:00
Dong_master
cc83ddbe21 fix: del print 2025-08-12 23:29:32 +08:00
Dong_master
99fcde1586 fix: add msg_sequence to MessageChunk, and obtain usage in the adapter 2025-08-12 23:20:41 +08:00
WangCham
eab08dfbf3 fix: format the code 2025-08-12 23:13:00 +08:00
Dong_master
dbf0200cca feat: add more attractive card templates 2025-08-12 22:36:42 +08:00
Junyan Qin
ac44f35299 chore: remove comments 2025-08-12 21:07:23 +08:00
Junyan Qin
d6a5fdd911 perf: complete sidebar menu 2025-08-12 21:02:40 +08:00
Dong_master
4668db716a fix: command reply_message error bug; remove some prints 2025-08-12 20:54:47 +08:00
Junyan Qin
f7cd6b76f2 feat: refactor account menu 2025-08-12 20:13:18 +08:00
Junyan Qin
b6d47187f5 perf: prettier 2025-08-12 19:39:41 +08:00
Junyan Qin
051fffd41e fix: stash 2025-08-12 19:18:49 +08:00
Junyan Qin
c5480078b3 perf: make prompt editor textarea 2025-08-12 11:30:42 +08:00
Dong_master
e744e9c4ef fix: add tool_calls arg to the MessageChunk yielded in localagent.py; after tool_calls are invoked, the first returned body data is concatenated 2025-08-12 11:25:37 +08:00
Dong_master
9f22b8b585 fix: rename the reply_message_chunk arg message_id to bot_message in adapter.py, and adjust the corresponding args in pipelinemgr.py and respback.py 2025-08-12 11:21:08 +08:00
Dong_master
27cee0a4e1 fix: rename the reply_message_chunk arg message_id to bot_message in adapter.py, and adjust the corresponding args in dingtalk.py, lark.py, telegram.py, and webchat.py 2025-08-12 11:19:27 +08:00
Dong_master
6d35fc408c fix: in anthropicmsgs.py, mesg_dcit["content"] is sometimes a str and cannot be appended to 2025-08-12 11:15:17 +08:00
Dong_master
0607a0fa5c fix: stream tool_calls arguments bug in modelscopechatcmpl.py 2025-08-12 00:04:21 +08:00
Dong_master
ed57d2fafa del localagent.py print 2025-08-11 23:49:19 +08:00
Junyan Qin
39ef92676b doc: add back wechat 2025-08-11 23:38:41 +08:00
Dong_master
7301476228 fix: the message_id was popped out, so the tool could not find it after being invoked 2025-08-11 23:36:01 +08:00
WangCham
457cc3eecd fix: wrong definition of topk 2025-08-11 23:22:36 +08:00
Dong_master
a381069bcc fix: tool_result argument bug 2025-08-11 23:05:47 +08:00
WangCham
146c38e64c fix: wrong positions 2025-08-11 22:58:48 +08:00
Junyan Qin (Chin)
763c41729e Merge pull request #1605 from TwperBody/master
feat: dark mode supports for webui
2025-08-11 20:51:58 +08:00
Junyan Qin
0021efebd7 perf: minor fix 2025-08-11 20:50:39 +08:00
Junyan Qin
5f18a1b13a chore: prettier 2025-08-11 20:46:08 +08:00
Junyan Qin
0124448479 perf: card shadowbox 2025-08-11 20:41:57 +08:00
Junyan Qin
621f1301b3 fix: message chain init 2025-08-11 17:24:57 +08:00
WangCham
e76bc80e51 Merge branch 'feat/topk_splitter' of github.com:RockChinQ/LangBot into feat/topk_splitter 2025-08-11 00:20:13 +08:00
WangCham
a27560e804 fix: page bug 2025-08-11 00:12:06 +08:00
Dong_master
46452de7b5 fix: handling of streaming tool calls; reply messages from reasoning (thinking) models still have bugs 2025-08-10 23:14:57 +08:00
TwperBody
2aef139577 dark mode 2025-08-10 22:17:06 +08:00
Dong_master
03b11481ed fix: remove_think logic, and fix handling of the closing </think> tag 2025-08-10 00:28:55 +08:00
Dong_master
8c5cb71812 fix: remove useless logic from chatcmpl.py; add and remove <think> handling for non-streaming in modelscopechatcmpl.py; fix the ppiochatcmpl.py stream logic and make giteeaichatcmpl.py inherit from ppiochatcmpl.py 2025-08-10 00:16:13 +08:00
Dong_master
7c59bc1ce5 feat: add Anthropic stream output 2025-08-10 00:09:19 +08:00
Junyan Qin
0b60ef0d06 chore: bump langbot-plugin version to 0.1.1a1 2025-08-09 21:06:31 +08:00
Dong_master
eede354d3b fix: strip <think> content in chatcmpl.py; stream logic in the _closure_stream func of ppiochatcmpl.py and modelscopechatcmpl.py 2025-08-09 02:46:13 +08:00
Junyan Qin
eb7b5dcc25 chore: rename zh_Hans label of deepseek requester 2025-08-08 17:31:24 +08:00
WangCham
47e9ce96fc feat: add topk 2025-08-07 18:10:33 +08:00
WangCham
4e95bc542c fix: kb form 2025-08-07 18:10:33 +08:00
WangCham
e4f321ea7a feat: add description for topk 2025-08-07 18:10:33 +08:00
WangCham
246eb71b75 feat: add topk 2025-08-07 18:10:33 +08:00
Junyan Qin
261f50b8ec feat: refactor with cursor max mode claude 4.1 opus 2025-08-07 15:47:57 +08:00
Junyan Qin
9736d0708a fix: missing deps 2025-08-07 10:15:09 +08:00
Junyan Qin
02dbe80d2f perf: model testing 2025-08-07 10:01:04 +08:00
Dong_master
0f239ace17 Merge remote-tracking branch 'origin/streaming_feature' into streaming_feature 2025-08-06 23:02:35 +08:00
Dong_master
3a82ae8da5 fix: the bug in the "remove_think" function. 2025-08-06 23:00:57 +08:00
Junyan Qin
c33c9eaab0 chore: remove remove_think param in trigger.yaml 2025-08-06 15:45:35 +08:00
Junyan Qin
87f626f3cc doc(README): add HelloGitHub badge 2025-08-05 17:40:27 +08:00
Dong_master
e88302f1b4 fix: remove_think handling logic in the connector; temporarily disable processing of streaming tool calls in the runner 2025-08-05 04:24:03 +08:00
Dong_master
5597dffaeb Merge remote-tracking branch 'origin/streaming_feature' into streaming_feature
# Conflicts:
#	pkg/api/http/controller/groups/pipelines/webchat.py
#	pkg/pipeline/process/handlers/chat.py
#	pkg/platform/sources/aiocqhttp.py
#	pkg/platform/sources/dingtalk.py
#	pkg/platform/sources/discord.py
#	pkg/platform/sources/lark.py
#	pkg/platform/sources/telegram.py
#	pkg/platform/sources/wechatpad.py
#	pkg/provider/modelmgr/requester.py
#	pkg/provider/modelmgr/requesters/chatcmpl.py
#	pkg/provider/modelmgr/requesters/deepseekchatcmpl.py
#	pkg/provider/modelmgr/requesters/giteeaichatcmpl.py
#	pkg/provider/modelmgr/requesters/modelscopechatcmpl.py
#	pkg/provider/modelmgr/requesters/ppiochatcmpl.py
#	pkg/provider/runners/dashscopeapi.py
#	pkg/provider/runners/difysvapi.py
#	pkg/provider/runners/localagent.py
2025-08-04 23:17:36 +08:00
Junyan Qin
7f25d61531 fix: minor fix 2025-08-04 23:00:54 +08:00
Junyan Qin
15e524c6e6 perf: move remove-think to output tab 2025-08-04 19:26:19 +08:00
fdc
4a1d033ee9 fix: reduce chunk returns in the Dify and Bailian runners to every 8 chunks 2025-08-04 19:26:19 +08:00
fdc
8adc88a8c0 fix: remove_think no longer reads the config directly in the requester; it is retrieved in the runner and passed to the functions that need it 2025-08-04 19:26:18 +08:00
fdc
a62b38eda7 fix: in the adapter's reply_message_chunk, the message card is only created or edited on every 8th chunk or at the end of the stream 2025-08-04 19:26:18 +08:00
Dong_master
fcef784180 fix: yield every 8 tokens in the runner 2025-08-04 19:26:18 +08:00
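The "every 8 chunks" batching described in the last few commits amounts to something like the following generator; `batch_chunks` is a hypothetical helper shown only to illustrate the idea, not the actual runner or adapter code.

```python
from typing import AsyncIterator


async def batch_chunks(chunks: AsyncIterator[str], every: int = 8) -> AsyncIterator[str]:
    """Yield the accumulated text only on every `every`-th chunk, plus a final
    flush, so the platform message card is edited far less often than the model streams."""
    parts: list[str] = []
    count = 0
    async for piece in chunks:
        parts.append(piece)
        count += 1
        if count % every == 0:
            yield ''.join(parts)  # adapter edits the message card here
    if count % every != 0:
        yield ''.join(parts)      # flush whatever remains when the stream ends
```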
Junyan Qin
c3ed4ef6a1 feat: no longer use typewriter in debug dialog 2025-08-04 19:26:18 +08:00
Junyan Qin
b9f768af25 perf: minor fixes 2025-08-04 19:26:18 +08:00
Junyan Qin
47ff883fc7 perf: ruff format & remove stream params in requester 2025-08-04 19:26:18 +08:00
Dong_master
68906c43ff feat: add webchat word-by-word output
fix: webchat on-message stream bug
2025-08-04 19:26:18 +08:00
Dong_master
c6deed4e6e feat: webchat stream is ok 2025-08-04 19:26:18 +08:00
Dong_master
b45cc59322 fix: webchat stream detection bug and frontend bug 2025-08-04 19:26:17 +08:00
fdc
c33a96823b fix: frontend bug 2025-08-04 19:26:17 +08:00
fdc
d3ab16761d fix: some bugs 2025-08-04 19:26:17 +08:00
fdc
70f23f24b0 fix: is_stream_output_supported return value in webchat 2025-08-04 19:26:17 +08:00
fdc
00a8410c94 feat:webchat frontend stream 2025-08-04 19:26:17 +08:00
fdc
2a17e89a99 feat: add partial webchat stream support 2025-08-04 19:26:17 +08:00
fdc
8fe0992c15 fix: check create_message_card in chat; Telegram reply_message_chunk had no message 2025-08-04 19:26:17 +08:00
Dong_master
a9776b7b53 fix: remove some prints, amend the stream check in respback, and remove the is_stream_output_supported() call in dingtalk 2025-08-04 19:26:16 +08:00
Dong_master
074d359c8e feat: add dashscopeapi stream
fix: Dify 64-chunk yield
2025-08-04 19:26:16 +08:00
Dong_master
7728b4262b fix:lark message_id and dingtalk incoming_message 2025-08-04 19:26:16 +08:00
Dong_master
4905b5a738 feat: add dingtalk stream
fix: adapter is_stream_output_supported bug
fix: message_id in stream message reply chunk
2025-08-04 19:26:16 +08:00
Dong_master
43a259a1ae feat:add dingtalk stream 2025-08-04 19:26:16 +08:00
Dong_master
cffe493db0 feat:add telegram stream 2025-08-04 19:26:16 +08:00
Dong_master
0042629bf0 feat: add PPIO and OpenRouter LLM streaming, and remove_think for PPIO think content.
fix: GiteeAI stream handling of the "<think>" marker in content when remove_think is off
2025-08-04 19:26:16 +08:00
Dong_master
a7d638cc9a feat: add DeepSeek and ModelScope LLM streaming, and remove_think for GiteeAI think content 2025-08-04 19:26:16 +08:00
Dong_master
f84a79bf74 perf: remove the Dify stream option from the ai.yaml config and enable stream in lark.yaml
fix: localagent remove_think bug
2025-08-04 19:26:15 +08:00
Dong_master
f5a0cb9175 feat:add dify _agent_chat_message streaming 2025-08-04 19:26:15 +08:00
Dong_master
f9a5507029 fix: pushing iterated data only into resq_messages and resq_message_chain caused huge amounts of data to be cached in memory and written to the log; also handle the <think> content of reasoning models
feat: add think removal for models with deep-reasoning <think> output
feat: streaming support for Dify chatbot and chatflow
2025-08-04 19:26:15 +08:00
Dong_master
5ce32d2f04 fix: pushing iterated data only into resq_messages and resq_message_chain caused huge amounts of data to be cached in memory and written to the log; also add <think> handling for deep-reasoning models 2025-08-04 19:26:15 +08:00
Dong_master
4908996cac The basic streaming flow now passes; fixed the issue caused by the conflict between yield and return 2025-08-04 19:26:15 +08:00
fdc
ee545a163f Added streaming in Lark (Feishu), but there still seem to be issues 2025-08-04 19:26:15 +08:00
fdc
6e0e5802cc fix: typo where message_id was written into reply_message 2025-08-04 19:26:15 +08:00
fdc
0d53843230 Streaming changes in chat 2025-08-04 19:26:14 +08:00
fdc
b65670cd1a feat: implement streaming message handling support 2025-08-04 19:26:14 +08:00
zejiewang
ba4b5255a2 feat:support dify message streaming output (#1437)
* fix:lark adapter listeners init problem

* feat:support dify streaming mode

* feat:remove some log

* fix(bot form): field desc missing

* fix: not compatible with chatflow

---------

Co-authored-by: wangzejie <wangzejie@meicai.cn>
Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-08-04 18:45:52 +08:00
Junyan Qin (Chin)
d60af2b451 fix(pipeline dialog): config reset between tabs switching (#1597) 2025-08-04 00:05:55 +08:00
Dong_master
44ac8b2b63 fix: yield every 8 tokens in the runner 2025-08-03 23:23:51 +08:00
Junyan Qin
b70001c579 chore: release v4.1.2 2025-08-03 22:52:47 +08:00
Junyan Qin (Chin)
4a8f5516f6 feat: add new api requester (#1596) 2025-08-03 22:30:52 +08:00
Junyan Qin
48d11540ae feat: no longer use typewriter in debug dialog 2025-08-03 17:18:44 +08:00
Junyan Qin
84129e3339 perf: minor fixes 2025-08-03 15:30:11 +08:00
Junyan Qin
377d455ec1 perf: ruff format & remove stream params in requester 2025-08-03 13:08:51 +08:00
Junyan Qin
41650b585a perf: dispose process 2025-08-02 23:54:06 +08:00
Dong_master
52280d7a05 feat: add webchat word-by-word output
fix: webchat on-message stream bug
2025-08-02 01:42:22 +08:00
Dong_master
0ce81a2df2 feat: webchat stream is ok 2025-08-01 11:33:16 +08:00
Dong_master
d9a2bb9a06 fix: webchat stream detection bug and frontend bug 2025-07-31 14:49:12 +08:00
fdc
cb88da7f02 fix: frontend bug 2025-07-31 10:34:36 +08:00
fdc
5560a4f52d fix: some bugs 2025-07-31 10:28:43 +08:00
fdc
e4d951b174 fix: is_stream_output_supported return value in webchat 2025-07-31 10:01:47 +08:00
fdc
6e08bf71c9 feat:webchat frontend stream 2025-07-31 09:51:25 +08:00
fdc
daaf4b54ef feat: add partial webchat stream support 2025-07-30 17:06:14 +08:00
fdc
3291266f5d fix: check create_message_card in chat; Telegram reply_message_chunk had no message 2025-07-30 15:21:59 +08:00
Dong_master
307f6acd8c fix: remove some prints, amend the stream check in respback, and remove the is_stream_output_supported() call in dingtalk 2025-07-29 23:09:02 +08:00
Junyan Qin
f1ac9c77e6 doc: update README_TW 2025-07-28 15:50:00 +08:00
Junyan Qin
b434a4e3d7 doc: add README_TW 2025-07-28 15:47:50 +08:00
Junyan Qin
2f209cd59f chore(i18n): add zh-Hant 2025-07-28 15:11:41 +08:00
Junyan Qin
0f585fd5ef fix(moonshot): make api.moonshot.ai the default api base url 2025-07-26 22:23:33 +08:00
Junyan Qin
a152dece9a chore: switch to pnpm 2025-07-26 19:45:38 +08:00
Junyan Qin
d3b31f7027 chore: release v4.1.1 2025-07-26 19:28:34 +08:00
How-Sean Xin
c00f05fca4 Add GitHub link redirection for front-end plugin cards (#1579)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update package.json

* Update PluginCardComponent.tsx

* perf: no display github button when plugin has no github url

---------

Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-07-26 19:22:00 +08:00
Junyan Qin
92c3a86356 feat: add qhaigc 2025-07-24 22:42:26 +08:00
Junyan Qin
341fdc409d perf: embedding model ui 2025-07-24 22:29:25 +08:00
Junyan Qin
ebd542f592 feat: 302.AI embeddings 2025-07-24 22:05:15 +08:00
Junyan Qin
194b2d9814 feat: supports more embedding providers 2025-07-24 22:03:20 +08:00
Junyan Qin
7aed5cf1ed feat: ollama embeddings models 2025-07-24 10:36:32 +08:00
Junyan Qin
abc88c4979 doc: update README 2025-07-23 18:53:15 +08:00
WangCham
3fa38f71f1 feat: add topk 2025-07-23 17:29:36 +08:00
WangCham
d651d956d6 Merge branch 'master' into feat/topk_splitter 2025-07-23 16:37:27 +08:00
gaord
6754666845 feat(wechatpad): add support for @everyone and unify message dispatching (#1588)
Add support for the AtAll component in the message converter, converting @everyone mentions into the required format; also handle @everyone uniformly during message dispatch so notifications are delivered correctly.
2025-07-23 15:22:04 +08:00
Junyan Qin
08e6f46b19 fix(deps): react-focus-scope pkg bug 2025-07-22 11:05:16 +08:00
Dong_master
8f8c8ff367 feat: add dashscopeapi stream
fix: Dify 64-chunk yield
2025-07-21 18:45:45 +08:00
Dong_master
63ec2a8c34 fix:lark message_id and dingtalk incoming_message 2025-07-21 17:28:11 +08:00
Dong_master
f58c8497c3 feat: add dingtalk stream
fix: adapter is_stream_output_supported bug
fix: message_id in stream message reply chunk
2025-07-20 23:53:20 +08:00
Junyan Qin
1497fdae56 doc(README): adjust structure 2025-07-20 22:10:32 +08:00
Junyan Qin
10a3cb40e1 perf(retrieve): ui 2025-07-20 17:57:33 +08:00
devin-ai-integration[bot]
dd1ec15a39 feat: add knowledge base retrieve test tab with Card-based UI (#1583)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-07-20 17:56:46 +08:00
devin-ai-integration[bot]
ea51cec57e feat: add pipeline sorting functionality with three sort options (#1582)
* feat: add pipeline sorting functionality with three sort options

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: ui

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-07-20 17:23:30 +08:00
Dong_master
adb0bf2473 feat:add dingtalk stream 2025-07-19 01:05:44 +08:00
Dong_master
11e52a3ade feat:add telegram stream 2025-07-17 14:29:30 +08:00
WangCham
e986a0acaf fix: kb form 2025-07-16 22:50:17 +08:00
Junyan Qin
f5b893cfe0 feat: kill runtime process when exit in stdio mode 2025-07-16 22:43:39 +08:00
WangCham
e31883547d feat: add description for topk 2025-07-16 18:15:27 +08:00
WangCham
88c0066b06 feat: add topk 2025-07-16 17:20:13 +08:00
Dong_master
d15df3338f feat: add PPIO and OpenRouter LLM streaming, and remove_think for PPIO think content.
fix: GiteeAI stream handling of the "<think>" marker in content when remove_think is off
2025-07-15 00:50:42 +08:00
Dong_master
c74cf38e9f feat: add DeepSeek and ModelScope LLM streaming, and remove_think for GiteeAI think content 2025-07-14 23:53:55 +08:00
Dong_master
0e68a922bd perf: remove the Dify stream option from the ai.yaml config and enable stream in lark.yaml
fix: localagent remove_think bug
2025-07-14 01:42:42 +08:00
Dong_master
4e1d81c9f8 feat:add dify _agent_chat_message streaming 2025-07-14 00:40:02 +08:00
Dong_master
0be08d8882 fix: pushing iterated data only into resq_messages and resq_message_chain caused huge amounts of data to be cached in memory and written to the log; also handle the <think> content of reasoning models
feat: add think removal for models with deep-reasoning <think> output
feat: streaming support for Dify chatbot and chatflow
2025-07-13 22:41:39 +08:00
Junyan Qin
e0abd19636 feat: get plugin info 2025-07-13 22:14:22 +08:00
Junyan Qin
4380041c7f feat(ui): list plugins 2025-07-13 22:03:47 +08:00
Junyan Qin
65814a4644 feat: binary storage api 2025-07-13 21:39:33 +08:00
Junyan Qin
7237294008 perf: longer timeout for emit_event 2025-07-13 20:48:15 +08:00
Junyan Qin
214bc8ada9 feat: backward call apis 2025-07-13 20:45:45 +08:00
Junyan Qin
6a1de889b4 refactor: switch llm_entities to plugin sdk 2025-07-13 20:30:17 +08:00
Junyan Qin
4a319b2b20 feat: query-based apis 2025-07-13 18:41:04 +08:00
Junyan Qin
9f269d1614 feat: get bot uuid api 2025-07-13 17:44:20 +08:00
Junyan Qin
4b57771eb1 feat: reply_message api 2025-07-13 16:31:25 +08:00
Junyan Qin
5922be7e15 feat: command execution via plugin 2025-07-13 10:26:48 +08:00
Dong_master
301509b1db fix: pushing iterated data only into resq_messages and resq_message_chain caused huge amounts of data to be cached in memory and written to the log; also add <think> handling for deep-reasoning models 2025-07-12 18:09:24 +08:00
Junyan Qin
10a44c70b6 feat: switch command entities to sdk 2025-07-10 10:51:36 +08:00
Junyan Qin
5b044a1917 feat: add Tool component 2025-07-06 21:03:33 +08:00
Dong_master
68cdd163d3 The basic streaming flow now passes; fixed the issue caused by the conflict between yield and return 2025-07-04 03:26:44 +08:00
fdc
4005a8a3e2 Added streaming in Lark (Feishu), but there still seem to be issues 2025-07-03 22:58:17 +08:00
Junyan Qin
a60aa6f644 feat: runtime reconnecting 2025-07-02 22:20:20 +08:00
fdc
542409d48d Merge branch 'feat/streaming' of github.com:fdc310/LangBot into streaming_feature 2025-07-02 14:09:01 +08:00
Junyan Qin
1a10b40b17 refactor: use emit_event from connector 2025-07-02 12:46:30 +08:00
Junyan Qin
e2124054bf feat: switch all event emitting logic to new method 2025-07-02 11:58:10 +08:00
zejiewang
3c6e858c35 feat:support dify message streaming output (#1437)
* fix:lark adapter listeners init problem

* feat:support dify streaming mode

* feat:remove some log

* fix(bot form): field desc missing

* fix: not compatible with chatflow

---------

Co-authored-by: wangzejie <wangzejie@meicai.cn>
Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-07-02 11:07:31 +08:00
Junyan Qin
ee3da8aa17 feat: adapt more events 2025-07-02 11:04:03 +08:00
fdc
8670ae82a3 fix: typo where message_id was written into reply_message 2025-07-02 10:49:50 +08:00
Junyan Qin
c246470b37 feat: minor changes adapt to event emitting 2025-07-01 22:44:46 +08:00
fdc
48c9d66ab8 Streaming changes in chat 2025-07-01 18:03:05 +08:00
Junyan Qin
f474e42b79 fix: serialization bug in events emitting 2025-06-30 21:49:59 +08:00
Junyan Qin
5553a86ac8 feat: preliminary migration of events entities 2025-06-30 21:49:59 +08:00
Junyan Qin
01613b2f0d chore: remove adapter meta manifest from components.yaml 2025-06-30 21:49:59 +08:00
Junyan Qin
a177786063 feat: switch message platform adapters to sdk 2025-06-30 21:49:59 +08:00
Junyan Qin
62b2884011 chore: delete Query class 2025-06-30 21:47:40 +08:00
Junyan Qin
6b782f8761 feat: switch Query to langbot-plugin definition 2025-06-30 21:47:40 +08:00
Junyan Qin
0c2560cafb feat: switch tool entities and format 2025-06-30 21:47:40 +08:00
Junyan Qin
c5eeab2fd0 feat: listing plugins 2025-06-30 21:43:43 +08:00
Junyan Qin
6f2fd72af6 feat(plugin): basic communication 2025-06-30 21:43:43 +08:00
Junyan Qin
2d06f1cadb feat: connector for plugin runtime 2025-06-30 21:43:43 +08:00
Junyan Qin
af493c117c deps: add langbot-plugin 2025-06-30 21:43:42 +08:00
fdc
0eac9135c0 feat: implement streaming message handling support 2025-06-30 17:58:18 +08:00
277 changed files with 11027 additions and 7261 deletions

View File

@@ -6,7 +6,9 @@
<div align="center">
简体中文 / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
<a href="https://hellogithub.com/repository/langbot-app/LangBot" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=5ce8ae2aa4f74316bf393b57b952433c&claim_uid=gtmc6YWjMZkT21R" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
[English](README_EN.md) / 简体中文 / [繁體中文](README_TW.md) / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
@@ -25,13 +27,7 @@
</p>
## ✨ 特性
- 💬 大模型对话、Agent:支持多种大模型,适配群聊和私聊;具有多轮对话、工具调用、多模态能力,自带 RAG(知识库)实现,并深度适配 [Dify](https://dify.ai)。
- 🤖 多平台支持:目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
LangBot 是一个开源的大语言模型原生即时通信机器人开发平台,旨在提供开箱即用的 IM 机器人开发体验,具有 Agent、RAG、MCP 等多种 LLM 应用功能,适配全球主流即时通信平台,并提供丰富的 API 接口,支持自定义开发。
## 📦 开始使用
@@ -65,23 +61,25 @@ docker compose up -d
直接使用发行版运行,查看文档[手动部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
## 📸 效果展示
## 😎 保持更新
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="450px"/>
点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="450px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="450px"/>
## ✨ 特性
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="450px"/>
- 💬 大模型对话、Agent:支持多种大模型,适配群聊和私聊;具有多轮对话、工具调用、多模态、流式输出能力,自带 RAG(知识库)实现,并深度适配 [Dify](https://dify.ai)。
- 🤖 多平台支持:目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
<img alt="回复效果(带有联网插件)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
详细规格特性请访问[文档](https://docs.langbot.app/zh/insight/features.html)。
- WebUI Demo: https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意:仅展示webui效果,公开环境,请不要在其中填入您的任何敏感信息。
## 🔌 组件兼容性
或访问 demo 环境:https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意:仅展示 WebUI 效果,公开环境,请不要在其中填入您的任何敏感信息。
### 消息平台
@@ -98,10 +96,6 @@ docker compose up -d
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 正在开发中
### 大模型能力
@@ -115,6 +109,7 @@ docker compose up -d
| [智谱AI](https://open.bigmodel.cn/) | ✅ | |
| [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [胜算云](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 大模型和 GPU 资源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
@@ -148,9 +143,3 @@ docker compose up -d
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 保持更新
点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
![star gif](https://docs.langbot.app/star.gif)

View File

@@ -5,7 +5,7 @@
<div align="center">
[简体中文](README.md) / English / [日本語](README_JP.md) / (PR for your language)
English / [简体中文](README.md) / [繁體中文](README_TW.md) / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
@@ -21,13 +21,7 @@
</p>
## ✨ Features
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Built-in RAG (knowledge base) implementation, and deeply integrates with [Dify](https://dify.ai).
- 🤖 Multi-platform Support: Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 [New] Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
LangBot is an open-source LLM native instant messaging robot development platform, aiming to provide out-of-the-box IM robot development experience, with Agent, RAG, MCP and other LLM application functions, adapting to global instant messaging platforms, and providing rich API interfaces, supporting custom development.
## 📦 Getting Started
@@ -61,23 +55,25 @@ Community contributed Zeabur template.
Directly use the released version to run, see the [Manual Deployment](https://docs.langbot.app/en/deploy/langbot/manual.html) documentation.
## 📸 Demo
## 😎 Stay Ahead
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
## ✨ Features
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, multi-modal, and streaming output capabilities. Built-in RAG (knowledge base) implementation, and deeply integrates with [Dify](https://dify.ai).
- 🤖 Multi-platform Support: Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
<img alt="Reply Effect (with Internet Plugin)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
For more detailed specifications, please refer to the [documentation](https://docs.langbot.app/en/insight/features.html).
- WebUI Demo: https://demo.langbot.dev/
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: Only the WebUI effect is shown, please do not fill in any sensitive information in the public environment.
## 🔌 Component Compatibility
Or visit the demo environment: https://demo.langbot.dev/
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: For WebUI demo only, please do not fill in any sensitive information in the public environment.
### Message Platform
@@ -93,10 +89,6 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: In development
### LLMs
@@ -111,6 +103,7 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | LLM and GPU resource platform |
| [Dify](https://dify.ai) | ✅ | LLMOps platform |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLM and GPU resource platform |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLM gateway(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |
@@ -129,9 +122,3 @@ Thank you for the following [code contributors](https://github.com/langbot-app/L
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 Stay Ahead
Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
![star gif](https://docs.langbot.app/star.gif)

View File

@@ -5,7 +5,7 @@
<div align="center">
[简体中文](README.md) / [English](README_EN.md) / 日本語 / (PR for your language)
[English](README_EN.md) / [简体中文](README.md) / [繁體中文](README_TW.md) / 日本語 / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
@@ -21,13 +21,7 @@
</p>
## ✨ 機能
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル機能をサポート、RAG知識ベースを組み込み、[Dify](https://dify.ai) と深く統合。
- 🤖 多プラットフォーム対応: 現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。適配 Anthropic [MCP プロトコル](https://modelcontextprotocol.io/);豊富なエコシステム、現在数百のプラグインが存在。
- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
LangBot は、エージェント、RAG、MCP などの LLM アプリケーション機能を備えた、オープンソースの LLM ネイティブのインスタントメッセージングロボット開発プラットフォームです。世界中のインスタントメッセージングプラットフォームに適応し、豊富な API インターフェースを提供し、カスタム開発をサポートします。
## 📦 始め方
@@ -43,7 +37,7 @@ http://localhost:5300 にアクセスして使用を開始します。
詳細なドキュメントは[Dockerデプロイ](https://docs.langbot.app/en/deploy/langbot/docker.html)を参照してください。
#### BTPanelでのワンクリックデプロイ
#### Panelでのワンクリックデプロイ
LangBotはBTPanelにリストされています。BTPanelをインストールしている場合は、[ドキュメント](https://docs.langbot.app/en/deploy/langbot/one-click/bt.html)を使用して使用できます。
@@ -61,23 +55,25 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
リリースバージョンを直接使用して実行します。[手動デプロイ](https://docs.langbot.app/en/deploy/langbot/manual.html)のドキュメントを参照してください。
## 📸 デモ
## 😎 最新情報を入手
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
## ✨ 機能
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル、ストリーミング出力機能をサポート、RAG知識ベースを組み込み、[Dify](https://dify.ai) と深く統合。
- 🤖 多プラットフォーム対応: 現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。適配 Anthropic [MCP プロトコル](https://modelcontextprotocol.io/);豊富なエコシステム、現在数百のプラグインが存在。
- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
<img alt="返信効果(インターネットプラグイン付き)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
詳細な仕様については、[ドキュメント](https://docs.langbot.app/en/insight/features.html)を参照してください。
- WebUIデモ: https://demo.langbot.dev/
- ログイン情報: メール: `demo@langbot.app` パスワード: `langbot123456`
- 注意: WebUIの効果のみを示しています。公開環境では機密情報を入力しないでください。
## 🔌 コンポーネントの互換性
または、デモ環境にアクセスしてください: https://demo.langbot.dev/
- ログイン情報: メール: `demo@langbot.app` パスワード: `langbot123456`
- 注意: WebUI のデモンストレーションのみの場合、公開環境では機密情報を入力しないでください。
### メッセージプラットフォーム
@@ -93,10 +89,6 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 開発中
### LLMs
@@ -110,6 +102,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLMとGPUリソースプラットフォーム |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLMゲートウェイ(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOpsプラットフォーム |
@@ -129,9 +122,3 @@ LangBot への貢献に対して、以下の [コード貢献者](https://github
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 最新情報を入手
リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
![star gif](https://docs.langbot.app/star.gif)

README_TW.md (new file, 140 lines)
View File

@@ -0,0 +1,140 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social_zh.png" alt="LangBot"/>
</a>
<div align="center"><a href="https://hellogithub.com/repository/langbot-app/LangBot" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=5ce8ae2aa4f74316bf393b57b952433c&claim_uid=gtmc6YWjMZkT21R" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
[English](README_EN.md) / [简体中文](README.md) / 繁體中文 / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
<a href="https://langbot.app">主頁</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文件</a>
<a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">外掛介紹</a>
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交外掛</a>
</div>
</p>
LangBot 是一個開源的大語言模型原生即時通訊機器人開發平台,旨在提供開箱即用的 IM 機器人開發體驗,具有 Agent、RAG、MCP 等多種 LLM 應用功能,適配全球主流即時通訊平台,並提供豐富的 API 介面,支援自定義開發。
## 📦 開始使用
#### Docker Compose 部署
```bash
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
訪問 http://localhost:5300 即可開始使用。
詳細文件[Docker 部署](https://docs.langbot.app/zh/deploy/langbot/docker.html)。
#### 寶塔面板部署
已上架寶塔面板,若您已安裝寶塔面板,可以根據[文件](https://docs.langbot.app/zh/deploy/langbot/one-click/bt.html)使用。
#### Zeabur 雲端部署
社群貢獻的 Zeabur 模板。
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/zh-CN/templates/ZKTBDH)
#### Railway 雲端部署
[![Deploy on Railway](https://railway.com/button.svg)](https://railway.app/template/yRrAyL?referralCode=vogKPF)
#### 手動部署
直接使用發行版運行,查看文件[手動部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
## 😎 保持更新
點擊倉庫右上角 Star 和 Watch 按鈕,獲取最新動態。
![star gif](https://docs.langbot.app/star.gif)
## ✨ 特性
- 💬 大模型對話、Agent:支援多種大模型,適配群聊和私聊;具有多輪對話、工具調用、多模態、流式輸出能力,自帶 RAG(知識庫)實現,並深度適配 [Dify](https://dify.ai)。
- 🤖 多平台支援:目前支援 QQ、QQ頻道、企業微信、個人微信、飛書、Discord、Telegram 等平台。
- 🛠️ 高穩定性、功能完備:原生支援訪問控制、限速、敏感詞過濾等機制;配置簡單,支援多種部署方式。支援多流水線配置,不同機器人用於不同應用場景。
- 🧩 外掛擴展、活躍社群:支援事件驅動、組件擴展等外掛機制;適配 Anthropic [MCP 協議](https://modelcontextprotocol.io/);目前已有數百個外掛。
- 😻 Web 管理面板:支援通過瀏覽器管理 LangBot 實例,不再需要手動編寫配置文件。
詳細規格特性請訪問[文件](https://docs.langbot.app/zh/insight/features.html)。
或訪問 demo 環境:https://demo.langbot.dev/
- 登入資訊:郵箱:`demo@langbot.app` 密碼:`langbot123456`
- 注意:僅展示 WebUI 效果,公開環境,請不要在其中填入您的任何敏感資訊。
### 訊息平台
| 平台 | 狀態 | 備註 |
| --- | --- | --- |
| QQ 個人號 | ✅ | QQ 個人號私聊、群聊 |
| QQ 官方機器人 | ✅ | QQ 官方機器人,支援頻道、私聊、群聊 |
| 微信 | ✅ | |
| 企微對外客服 | ✅ | |
| 微信公眾號 | ✅ | |
| Lark | ✅ | |
| DingTalk | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
### 大模型能力
| 模型 | 狀態 | 備註 |
| --- | --- | --- |
| [OpenAI](https://platform.openai.com/) | ✅ | 可接入任何 OpenAI 介面格式模型 |
| [DeepSeek](https://www.deepseek.com/) | ✅ | |
| [Moonshot](https://www.moonshot.cn/) | ✅ | |
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智譜AI](https://open.bigmodel.cn/) | ✅ | |
| [勝算雲](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 大模型和 GPU 資源平台 |
| [優雲智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
| [Ollama](https://ollama.com/) | ✅ | 本地大模型運行平台 |
| [LMStudio](https://lmstudio.ai/) | ✅ | 本地大模型運行平台 |
| [GiteeAI](https://ai.gitee.com/) | ✅ | 大模型介面聚合平台 |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
| [阿里雲百煉](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
| [MCP](https://modelcontextprotocol.io/) | ✅ | 支援通過 MCP 協議獲取工具 |
### TTS
| 平台/模型 | 備註 |
| --- | --- |
| [FishAudio](https://fish.audio/zh-CN/discovery/) | [外掛](https://github.com/the-lazy-me/NewChatVoice) |
| [海豚 AI](https://www.ttson.cn/?source=thelazy) | [外掛](https://github.com/the-lazy-me/NewChatVoice) |
| [AzureTTS](https://portal.azure.com/) | [外掛](https://github.com/Ingnaryk/LangBot_AzureTTS) |
### 文生圖
| 平台/模型 | 備註 |
| --- | --- |
| 阿里雲百煉 | [外掛](https://github.com/Thetail001/LangBot_BailianTextToImagePlugin)
## 😘 社群貢獻
感謝以下[程式碼貢獻者](https://github.com/langbot-app/LangBot/graphs/contributors)和社群裡其他成員對 LangBot 的貢獻:
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>

View File

@@ -9,7 +9,6 @@ spec:
components:
ComponentTemplate:
fromFiles:
- pkg/platform/adapter.yaml
- pkg/provider/modelmgr/requester.yaml
MessagePlatformAdapter:
fromDirs:

View File

@@ -1,6 +1,21 @@
version: "3"
services:
langbot_plugin_runtime:
image: rockchin/langbot:latest
container_name: langbot_plugin_runtime
volumes:
- ./data/plugins:/app/data/plugins
ports:
- 5401:5401
restart: on-failure
environment:
- TZ=Asia/Shanghai
command: ["uv", "run", "-m", "langbot_plugin.cli.__init__", "rt"]
networks:
- langbot_network
langbot:
image: rockchin/langbot:latest
container_name: langbot
@@ -11,6 +26,11 @@ services:
environment:
- TZ=Asia/Shanghai
ports:
- 5300:5300 # 供 WebUI 使用
- 2280-2290:2280-2290 # 供消息平台适配器方向连接
# 根据具体环境配置网络
- 5300:5300 # For web ui
- 2280-2290:2280-2290 # For platform webhook
networks:
- langbot_network
networks:
langbot_network:
driver: bridge

View File

@@ -253,6 +253,43 @@ class DingTalkClient:
await self.logger.error(f'failed to send proactive massage to group: {traceback.format_exc()}')
raise Exception(f'failed to send proactive massage to group: {traceback.format_exc()}')
async def create_and_card(
self, temp_card_id: str, incoming_message: dingtalk_stream.ChatbotMessage, quote_origin: bool = False
):
content_key = 'content'
card_data = {content_key: ''}
card_instance = dingtalk_stream.AICardReplier(self.client, incoming_message)
# print(card_instance)
# 先投放卡片: https://open.dingtalk.com/document/orgapp/create-and-deliver-cards
card_instance_id = await card_instance.async_create_and_deliver_card(
temp_card_id,
card_data,
)
return card_instance, card_instance_id
async def send_card_message(self, card_instance, card_instance_id: str, content: str, is_final: bool):
content_key = 'content'
try:
await card_instance.async_streaming(
card_instance_id,
content_key=content_key,
content_value=content,
append=False,
finished=is_final,
failed=False,
)
except Exception as e:
self.logger.exception(e)
await card_instance.async_streaming(
card_instance_id,
content_key=content_key,
content_value='',
append=False,
finished=is_final,
failed=True,
)
async def start(self):
"""启动 WebSocket 连接,监听消息"""
await self.client.start()

View File

@@ -3,7 +3,7 @@ from quart import request
import httpx
from quart import Quart
from typing import Callable, Dict, Any
from pkg.platform.types import events as platform_events
import langbot_plugin.api.entities.builtin.platform.events as platform_events
from .qqofficialevent import QQOfficialEvent
import json
import traceback
@@ -104,7 +104,7 @@ class QQOfficialClient:
return {'code': 0, 'message': 'success'}
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
return {'error': str(e)}, 400
async def run_task(self, host: str, port: int, *args, **kwargs):
@@ -168,7 +168,6 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/v2/users/' + user_openid + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -193,7 +192,6 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/v2/groups/' + group_openid + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -209,7 +207,7 @@ class QQOfficialClient:
if response.status_code == 200:
return
else:
await self.logger.error(f"发送群聊消息失败:{response.json()}")
await self.logger.error(f'发送群聊消息失败:{response.json()}')
raise Exception(response.read().decode())
async def send_channle_group_text_msg(self, channel_id: str, content: str, msg_id: str):
@@ -217,7 +215,6 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/channels/' + channel_id + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -240,7 +237,6 @@ class QQOfficialClient:
"""发送频道私聊消息"""
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/dms/' + guild_id + '/messages'
async with httpx.AsyncClient() as client:

View File

@@ -4,7 +4,7 @@ from quart import Quart, jsonify, request
from slack_sdk.web.async_client import AsyncWebClient
from .slackevent import SlackEvent
from typing import Callable
from pkg.platform.types import events as platform_events
import langbot_plugin.api.entities.builtin.platform.events as platform_events
class SlackClient:
@@ -34,7 +34,6 @@ class SlackClient:
if self.bot_user_id and bot_user_id == self.bot_user_id:
return jsonify({'status': 'ok'})
# 处理私信
if data and data.get('event', {}).get('channel_type') in ['im']:
@@ -52,7 +51,7 @@ class SlackClient:
return jsonify({'status': 'ok'})
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
raise (e)
async def _handle_message(self, event: SlackEvent):
@@ -82,7 +81,7 @@ class SlackClient:
self.bot_user_id = response['message']['bot_id']
return
except Exception as e:
await self.logger.error(f"Error in send_message: {e}")
await self.logger.error(f'Error in send_message: {e}')
raise e
async def send_message_to_one(self, text: str, user_id: str):
@@ -93,7 +92,7 @@ class SlackClient:
return
except Exception as e:
await self.logger.error(f"Error in send_message: {traceback.format_exc()}")
await self.logger.error(f'Error in send_message: {traceback.format_exc()}')
raise e
async def run_task(self, host: str, port: int, *args, **kwargs):

View File

@@ -12,12 +12,9 @@ class UserApi:
return get_json(base_url=url, token=self.token)
def get_qr_code(self, recover:bool=True, style:int=8):
def get_qr_code(self, recover: bool = True, style: int = 8):
"""获取自己的二维码"""
param = {
"Recover": recover,
"Style": style
}
param = {'Recover': recover, 'Style': style}
url = f'{self.base_url}/user/GetMyQRCode'
return post_json(base_url=url, token=self.token, data=param)
@@ -26,12 +23,8 @@ class UserApi:
url = f'{self.base_url}/equipment/GetSafetyInfo'
return post_json(base_url=url, token=self.token)
async def update_head_img(self, head_img_base64):
async def update_head_img(self, head_img_base64):
"""修改头像"""
param = {
"Base64": head_img_base64
}
param = {'Base64': head_img_base64}
url = f'{self.base_url}/user/UploadHeadImage'
return await async_request(base_url=url, token_key=self.token, json=param)
return await async_request(base_url=url, token_key=self.token, json=param)

View File

@@ -1,4 +1,3 @@
from libs.wechatpad_api.api.login import LoginApi
from libs.wechatpad_api.api.friend import FriendApi
from libs.wechatpad_api.api.message import MessageApi
@@ -7,9 +6,6 @@ from libs.wechatpad_api.api.downloadpai import DownloadApi
from libs.wechatpad_api.api.chatroom import ChatRoomApi
class WeChatPadClient:
def __init__(self, base_url, token, logger=None):
self._login_api = LoginApi(base_url, token)
@@ -20,16 +16,16 @@ class WeChatPadClient:
self._chatroom_api = ChatRoomApi(base_url, token)
self.logger = logger
def get_token(self,admin_key, day: int):
'''获取token'''
def get_token(self, admin_key, day: int):
"""获取token"""
return self._login_api.get_token(admin_key, day)
def get_login_qr(self, Proxy:str=""):
def get_login_qr(self, Proxy: str = ''):
"""登录二维码"""
return self._login_api.get_login_qr(Proxy=Proxy)
def awaken_login(self, Proxy:str=""):
'''唤醒登录'''
def awaken_login(self, Proxy: str = ''):
"""唤醒登录"""
return self._login_api.wake_up_login(Proxy=Proxy)
def log_out(self):
@@ -40,59 +36,57 @@ class WeChatPadClient:
"""获取登录状态"""
return self._login_api.get_login_status()
def send_text_message(self, to_wxid, message, ats: list=[]):
def send_text_message(self, to_wxid, message, ats: list = []):
"""发送文本消息"""
return self._message_api.post_text(to_wxid, message, ats)
return self._message_api.post_text(to_wxid, message, ats)
def send_image_message(self, to_wxid, img_url, ats: list=[]):
def send_image_message(self, to_wxid, img_url, ats: list = []):
"""发送图片消息"""
return self._message_api.post_image(to_wxid, img_url, ats)
return self._message_api.post_image(to_wxid, img_url, ats)
def send_voice_message(self, to_wxid, voice_data, voice_forma, voice_duration):
"""发送音频消息"""
return self._message_api.post_voice(to_wxid, voice_data, voice_forma, voice_duration)
return self._message_api.post_voice(to_wxid, voice_data, voice_forma, voice_duration)
def send_app_message(self, to_wxid, app_message, type):
"""发送app消息"""
return self._message_api.post_app_msg(to_wxid, app_message, type)
return self._message_api.post_app_msg(to_wxid, app_message, type)
def send_emoji_message(self, to_wxid, emoji_md5, emoji_size):
"""发送emoji消息"""
return self._message_api.post_emoji(to_wxid,emoji_md5,emoji_size)
return self._message_api.post_emoji(to_wxid, emoji_md5, emoji_size)
def revoke_msg(self, to_wxid, msg_id, new_msg_id, create_time):
"""撤回消息"""
return self._message_api.revoke_msg(to_wxid, msg_id, new_msg_id, create_time)
return self._message_api.revoke_msg(to_wxid, msg_id, new_msg_id, create_time)
def get_profile(self):
"""获取用户信息"""
return self._user_api.get_profile()
def get_qr_code(self, recover:bool=True, style:int=8):
def get_qr_code(self, recover: bool = True, style: int = 8):
"""获取用户二维码"""
return self._user_api.get_qr_code(recover=recover, style=style)
return self._user_api.get_qr_code(recover=recover, style=style)
def get_safety_info(self):
"""获取设备信息"""
return self._user_api.get_safety_info()
def update_head_img(self, head_img_base64):
"""上传用户头像"""
return self._user_api.update_head_img(head_img_base64)
def cdn_download(self, aeskey, file_type, file_url):
"""cdn下载"""
return self._download_api.send_download( aeskey, file_type, file_url)
return self._download_api.send_download(aeskey, file_type, file_url)
def get_msg_voice(self,buf_id, length, msgid):
def get_msg_voice(self, buf_id, length, msgid):
"""下载语音"""
return self._download_api.get_msg_voice(buf_id, length, msgid)
async def download_base64(self,url):
async def download_base64(self, url):
return await self._download_api.download_url_to_base64(download_url=url)
def get_chatroom_member_detail(self, chatroom_name):
"""查看群成员详情"""
return self._chatroom_api.get_chatroom_member_detail(chatroom_name)
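For orientation, a minimal usage sketch of the client reformatted above; the module path, base URL, token, and wxid values are placeholders and assumptions, not something shown in this diff:

from libs.wechatpad_api.client import WeChatPadClient  # import path assumed from the package layout

client = WeChatPadClient('http://127.0.0.1:1239', 'your-admin-token')  # placeholder service URL and token
qr = client.get_login_qr()                                  # raw API response carrying the login QR payload
client.send_text_message('wxid_example', 'hello', ats=[])   # plain text message, no @-mentions
client.get_qr_code(recover=True, style=8)                   # defaults match the signature above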

View File

@@ -1,31 +1,34 @@
import qrcode
def print_green(text):
print(f"\033[32m{text}\033[0m")
print(f'\033[32m{text}\033[0m')
def print_yellow(text):
print(f"\033[33m{text}\033[0m")
print(f'\033[33m{text}\033[0m')
def print_red(text):
print(f"\033[31m{text}\033[0m")
print(f'\033[31m{text}\033[0m')
def make_and_print_qr(url):
"""生成并打印二维码
Args:
url: 需要生成二维码的URL字符串
Returns:
None
功能:
1. 在终端打印二维码的ASCII图形
2. 同时提供在线二维码生成链接作为备选
"""
print_green("请扫描下方二维码登录")
print_green('请扫描下方二维码登录')
qr = qrcode.QRCode()
qr.add_data(url)
qr.make()
qr.print_ascii(invert=True)
print_green(f"也可以访问下方链接获取二维码:\nhttps://api.qrserver.com/v1/create-qr-code/?data={url}")
print_green(f'也可以访问下方链接获取二维码:\nhttps://api.qrserver.com/v1/create-qr-code/?data={url}')
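A quick usage sketch of the helper above (the URL is a placeholder):

make_and_print_qr('http://weixin.qq.com/x/EXAMPLE')  # prints the ASCII QR code plus the fallback link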

View File

@@ -8,7 +8,7 @@ from quart import Quart
import xml.etree.ElementTree as ET
from typing import Callable, Dict, Any
from .wecomevent import WecomEvent
from pkg.platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import aiofiles
@@ -57,7 +57,7 @@ class WecomClient:
if 'access_token' in data:
return data['access_token']
else:
await self.logger.error(f"获取accesstoken失败:{response.json()}")
await self.logger.error(f'获取accesstoken失败:{response.json()}')
raise Exception(f'未获取access token: {data}')
async def get_users(self):
@@ -129,7 +129,7 @@ class WecomClient:
response = await client.post(url, json=params)
data = response.json()
except Exception as e:
await self.logger.error(f"发送图片失败:{data}")
await self.logger.error(f'发送图片失败:{data}')
raise Exception('Failed to send image: ' + str(e))
# 企业微信错误码40014和42001代表accesstoken问题
@@ -164,7 +164,7 @@ class WecomClient:
self.access_token = await self.get_access_token(self.secret)
return await self.send_private_msg(user_id, agent_id, content)
if data['errcode'] != 0:
await self.logger.error(f"发送消息失败:{data}")
await self.logger.error(f'发送消息失败:{data}')
raise Exception('Failed to send message: ' + str(data))
async def handle_callback_request(self):
@@ -181,7 +181,7 @@ class WecomClient:
echostr = request.args.get('echostr')
ret, reply_echo_str = wxcpt.VerifyURL(msg_signature, timestamp, nonce, echostr)
if ret != 0:
await self.logger.error("验证失败")
await self.logger.error('验证失败')
raise Exception(f'验证失败,错误码: {ret}')
return reply_echo_str
@@ -189,9 +189,8 @@ class WecomClient:
encrypt_msg = await request.data
ret, xml_msg = wxcpt.DecryptMsg(encrypt_msg, msg_signature, timestamp, nonce)
if ret != 0:
await self.logger.error("消息解密失败")
await self.logger.error('消息解密失败')
raise Exception(f'消息解密失败,错误码: {ret}')
# 解析消息并处理
message_data = await self.get_message(xml_msg)
@@ -202,7 +201,7 @@ class WecomClient:
return 'success'
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
return f'Error processing request: {str(e)}', 400
async def run_task(self, host: str, port: int, *args, **kwargs):
@@ -301,7 +300,7 @@ class WecomClient:
except binascii.Error as e:
raise ValueError(f'Invalid base64 string: {str(e)}')
else:
await self.logger.error("Image对象出错")
await self.logger.error('Image对象出错')
raise ValueError('image对象出错')
# 设置 multipart/form-data 格式的文件
@@ -325,7 +324,7 @@ class WecomClient:
self.access_token = await self.get_access_token(self.secret)
media_id = await self.upload_to_work(image)
if data.get('errcode', 0) != 0:
await self.logger.error(f"上传图片失败:{data}")
await self.logger.error(f'上传图片失败:{data}')
raise Exception('failed to upload file')
media_id = data.get('media_id')

View File

@@ -8,7 +8,7 @@ from quart import Quart
import xml.etree.ElementTree as ET
from typing import Callable
from .wecomcsevent import WecomCSEvent
from pkg.platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import aiofiles
@@ -187,7 +187,7 @@ class WecomCSClient:
self.access_token = await self.get_access_token(self.secret)
return await self.send_text_msg(open_kfid, external_userid, msgid, content)
if data['errcode'] != 0:
await self.logger.error(f"发送消息失败:{data}")
await self.logger.error(f'发送消息失败:{data}')
raise Exception('Failed to send message')
return data
@@ -227,7 +227,7 @@ class WecomCSClient:
return 'success'
except Exception as e:
if self.logger:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
else:
traceback.print_exc()
return f'Error processing request: {str(e)}', 400

16 main.py View File

@@ -19,8 +19,14 @@ asciiart = r"""
async def main_entry(loop: asyncio.AbstractEventLoop):
parser = argparse.ArgumentParser(description='LangBot')
parser.add_argument('--skip-plugin-deps-check', action='store_true', help='跳过插件依赖项检查', default=False)
parser.add_argument('--standalone-runtime', action='store_true', help='使用独立插件运行时', default=False)
args = parser.parse_args()
if args.standalone_runtime:
from pkg.utils import platform
platform.standalone_runtime = True
print(asciiart)
import sys
@@ -47,13 +53,13 @@ async def main_entry(loop: asyncio.AbstractEventLoop):
if not args.skip_plugin_deps_check:
await deps.precheck_plugin_deps()
# 检查pydantic版本如果没有 pydantic.v1则把 pydantic 映射为 v1
import pydantic.version
# # 检查pydantic版本如果没有 pydantic.v1则把 pydantic 映射为 v1
# import pydantic.version
if pydantic.version.VERSION < '2.0':
import pydantic
# if pydantic.version.VERSION < '2.0':
# import pydantic
sys.modules['pydantic.v1'] = pydantic
# sys.modules['pydantic.v1'] = pydantic
# 检查配置文件

View File

@@ -123,4 +123,4 @@ class RouterGroup(abc.ABC):
def http_status(self, status: int, code: int, msg: str) -> typing.Tuple[quart.Response, int]:
"""返回一个指定状态码的响应"""
return (self.fail(code, msg), status)

View File

@@ -11,7 +11,11 @@ class PipelinesRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'pipelines': await self.ap.pipeline_service.get_pipelines()})
sort_by = quart.request.args.get('sort_by', 'created_at')
sort_order = quart.request.args.get('sort_order', 'DESC')
return self.success(
data={'pipelines': await self.ap.pipeline_service.get_pipelines(sort_by, sort_order)}
)
elif quart.request.method == 'POST':
json_data = await quart.request.json

View File

@@ -1,3 +1,5 @@
import json
import quart
from ... import group
@@ -9,10 +11,18 @@ class WebChatDebugRouterGroup(group.RouterGroup):
@self.route('/send', methods=['POST'])
async def send_message(pipeline_uuid: str) -> str:
"""Send a message to the pipeline for debugging"""
async def stream_generator(generator):
yield 'data: {"type": "start"}\n\n'
async for message in generator:
yield f'data: {json.dumps({"message": message})}\n\n'
yield 'data: {"type": "end"}\n\n'
try:
data = await quart.request.get_json()
session_type = data.get('session_type', 'person')
message_chain_obj = data.get('message', [])
is_stream = data.get('is_stream', False)
if not message_chain_obj:
return self.http_status(400, -1, 'message is required')
@@ -25,13 +35,33 @@ class WebChatDebugRouterGroup(group.RouterGroup):
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
result = await webchat_adapter.send_webchat_message(pipeline_uuid, session_type, message_chain_obj)
return self.success(
data={
'message': result,
if is_stream:
generator = webchat_adapter.send_webchat_message(
pipeline_uuid, session_type, message_chain_obj, is_stream
)
# 设置正确的响应头
headers = {
'Content-Type': 'text/event-stream',
'Transfer-Encoding': 'chunked',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
}
)
return quart.Response(stream_generator(generator), mimetype='text/event-stream', headers=headers)
else: # non-stream
result = None
async for message in webchat_adapter.send_webchat_message(
pipeline_uuid, session_type, message_chain_obj
):
result = message
if result is not None:
return self.success(
data={
'message': result,
}
)
else:
return self.http_status(400, -1, 'message is required')
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
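For reference, a client-side sketch of consuming the new streaming branch; it assumes httpx, a bearer user token, and a reachable instance. The exact mount path of this router group, the host, the auth scheme, and the message-chain payload shape are assumptions, not something this diff confirms:

import asyncio
import json
import httpx

async def consume_debug_stream(url: str, token: str) -> None:
    payload = {
        'session_type': 'person',
        'message': [{'type': 'Plain', 'text': 'hello'}],  # message chain format assumed
        'is_stream': True,
    }
    headers = {'Authorization': f'Bearer {token}'}  # auth scheme assumed
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream('POST', url, json=payload, headers=headers) as resp:
            async for line in resp.aiter_lines():
                if not line.startswith('data: '):
                    continue
                event = json.loads(line[len('data: '):])
                if event.get('type') in ('start', 'end'):
                    continue  # control frames emitted by stream_generator above
                print(event.get('message'))

# asyncio.run(consume_debug_stream('http://<host>/<debug-route>/send', '<user-token>'))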

View File

@@ -1,10 +1,11 @@
from __future__ import annotations
import base64
import quart
from .....core import taskmgr
from .. import group
from langbot_plugin.runtime.plugin.mgr import PluginInstallSource
@group.group_class('plugins', '/api/v1/plugins')
@@ -12,35 +13,22 @@ class PluginsRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
plugins = self.ap.plugin_mgr.plugins()
plugins = await self.ap.plugin_connector.list_plugins()
plugins_data = [plugin.model_dump() for plugin in plugins]
return self.success(data={'plugins': plugins_data})
return self.success(data={'plugins': plugins})
@self.route(
'/<author>/<plugin_name>/toggle',
methods=['PUT'],
auth_type=group.AuthType.USER_TOKEN,
)
async def _(author: str, plugin_name: str) -> str:
data = await quart.request.json
target_enabled = data.get('target_enabled')
await self.ap.plugin_mgr.update_plugin_switch(plugin_name, target_enabled)
return self.success()
@self.route(
'/<author>/<plugin_name>/update',
'/<author>/<plugin_name>/upgrade',
methods=['POST'],
auth_type=group.AuthType.USER_TOKEN,
)
async def _(author: str, plugin_name: str) -> str:
ctx = taskmgr.TaskContext.new()
wrapper = self.ap.task_mgr.create_user_task(
self.ap.plugin_mgr.update_plugin(plugin_name, task_context=ctx),
self.ap.plugin_connector.upgrade_plugin(author, plugin_name, task_context=ctx),
kind='plugin-operation',
name=f'plugin-update-{plugin_name}',
label=f'Updating plugin {plugin_name}',
name=f'plugin-upgrade-{plugin_name}',
label=f'Upgrading plugin {plugin_name}',
context=ctx,
)
return self.success(data={'task_id': wrapper.id})
@@ -52,14 +40,14 @@ class PluginsRouterGroup(group.RouterGroup):
)
async def _(author: str, plugin_name: str) -> str:
if quart.request.method == 'GET':
plugin = self.ap.plugin_mgr.get_plugin(author, plugin_name)
plugin = await self.ap.plugin_connector.get_plugin_info(author, plugin_name)
if plugin is None:
return self.http_status(404, -1, 'plugin not found')
return self.success(data={'plugin': plugin.model_dump()})
return self.success(data={'plugin': plugin})
elif quart.request.method == 'DELETE':
ctx = taskmgr.TaskContext.new()
wrapper = self.ap.task_mgr.create_user_task(
self.ap.plugin_mgr.uninstall_plugin(plugin_name, task_context=ctx),
self.ap.plugin_connector.delete_plugin(author, plugin_name, task_context=ctx),
kind='plugin-operation',
name=f'plugin-remove-{plugin_name}',
label=f'Removing plugin {plugin_name}',
@@ -74,24 +62,19 @@ class PluginsRouterGroup(group.RouterGroup):
auth_type=group.AuthType.USER_TOKEN,
)
async def _(author: str, plugin_name: str) -> quart.Response:
plugin = self.ap.plugin_mgr.get_plugin(author, plugin_name)
plugin = await self.ap.plugin_connector.get_plugin_info(author, plugin_name)
if plugin is None:
return self.http_status(404, -1, 'plugin not found')
if quart.request.method == 'GET':
return self.success(data={'config': plugin.plugin_config})
return self.success(data={'config': plugin['plugin_config']})
elif quart.request.method == 'PUT':
data = await quart.request.json
await self.ap.plugin_mgr.set_plugin_config(plugin, data)
await self.ap.plugin_connector.set_plugin_config(author, plugin_name, data)
return self.success(data={})
@self.route('/reorder', methods=['PUT'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
data = await quart.request.json
await self.ap.plugin_mgr.reorder_plugins(data.get('plugins'))
return self.success()
@self.route('/install/github', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
data = await quart.request.json
@@ -102,7 +85,47 @@ class PluginsRouterGroup(group.RouterGroup):
self.ap.plugin_mgr.install_plugin(data['source'], task_context=ctx),
kind='plugin-operation',
name='plugin-install-github',
label=f'Installing plugin ...{short_source_str}',
label=f'Installing plugin from github ...{short_source_str}',
context=ctx,
)
return self.success(data={'task_id': wrapper.id})
@self.route('/install/marketplace', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
data = await quart.request.json
ctx = taskmgr.TaskContext.new()
wrapper = self.ap.task_mgr.create_user_task(
self.ap.plugin_connector.install_plugin(PluginInstallSource.MARKETPLACE, data, task_context=ctx),
kind='plugin-operation',
name='plugin-install-marketplace',
label=f'Installing plugin from marketplace ...{data}',
context=ctx,
)
return self.success(data={'task_id': wrapper.id})
@self.route('/install/local', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
file = (await quart.request.files).get('file')
if file is None:
return self.http_status(400, -1, 'file is required')
file_bytes = file.read()
file_base64 = base64.b64encode(file_bytes).decode('utf-8')
data = {
'plugin_file': file_base64,
}
ctx = taskmgr.TaskContext.new()
wrapper = self.ap.task_mgr.create_user_task(
self.ap.plugin_connector.install_plugin(PluginInstallSource.LOCAL, data, task_context=ctx),
kind='plugin-operation',
name='plugin-install-local',
label=f'Installing plugin from local ...{file.filename}',
context=ctx,
)
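A sketch of exercising the new local-install route from a script; the host, token, and archive name are placeholders, and the response shape is inferred by analogy with the marketplace route above:

import httpx

with open('my_plugin.zip', 'rb') as fh:  # hypothetical packaged plugin file
    resp = httpx.post(
        'http://<host>/api/v1/plugins/install/local',  # host is a placeholder
        files={'file': ('my_plugin.zip', fh)},           # sent as multipart, base64-encoded server-side
        headers={'Authorization': 'Bearer <user-token>'},  # auth scheme assumed
    )
print(resp.json())  # expected to carry a task_id, as the marketplace route does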

View File

@@ -14,6 +14,11 @@ class SystemRouterGroup(group.RouterGroup):
'version': constants.semantic_version,
'debug': constants.debug_mode,
'enabled_platform_count': len(self.ap.platform_mgr.get_running_adapters()),
'cloud_service_url': (
self.ap.instance_config.data['plugin']['cloud_service_url']
if 'cloud_service_url' in self.ap.instance_config.data['plugin']
else 'https://space.langbot.app'
),
}
)
@@ -35,16 +40,7 @@ class SystemRouterGroup(group.RouterGroup):
return self.success(data=task.to_dict())
@self.route('/reload', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
json_data = await quart.request.json
scope = json_data.get('scope')
await self.ap.reload(scope=scope)
return self.success()
@self.route('/_debug/exec', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
@self.route('/debug/exec', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
if not constants.debug_mode:
return self.http_status(403, 403, 'Forbidden')
@@ -54,3 +50,39 @@ class SystemRouterGroup(group.RouterGroup):
ap = self.ap
return self.success(data=exec(py_code, {'ap': ap}))
@self.route('/debug/tools/call', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> str:
if not constants.debug_mode:
return self.http_status(403, 403, 'Forbidden')
data = await quart.request.json
return self.success(
data=await self.ap.tool_mgr.execute_func_call(data['tool_name'], data['tool_parameters'])
)
@self.route(
'/debug/plugin/action',
methods=['POST'],
auth_type=group.AuthType.USER_TOKEN,
)
async def _() -> str:
if not constants.debug_mode:
return self.http_status(403, 403, 'Forbidden')
data = await quart.request.json
class AnoymousAction:
value = 'anonymous_action'
def __init__(self, value: str):
self.value = value
resp = await self.ap.plugin_connector.handler.call_action(
AnoymousAction(data['action']),
data['data'],
timeout=data.get('timeout', 10),
)
return self.success(data=resp)
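Similarly, a sketch of calling the new debug tool-execution route (only reachable when constants.debug_mode is on); the system group's mount prefix, host, token, and tool name are assumptions:

import httpx

resp = httpx.post(
    'http://<host>/api/v1/system/debug/tools/call',  # mount prefix assumed
    json={'tool_name': 'example_tool', 'tool_parameters': {}},  # hypothetical tool name
    headers={'Authorization': 'Bearer <user-token>'},  # auth scheme assumed
)
print(resp.json())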

View File

@@ -67,3 +67,19 @@ class UserRouterGroup(group.RouterGroup):
await self.ap.user_service.reset_password(user_email, new_password)
return self.success(data={'user': user_email})
@self.route('/change-password', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _(user_email: str) -> str:
json_data = await quart.request.json
current_password = json_data['current_password']
new_password = json_data['new_password']
try:
await self.ap.user_service.change_password(user_email, current_password, new_password)
except argon2.exceptions.VerifyMismatchError:
return self.http_status(400, -1, 'Current password is incorrect')
except ValueError as e:
return self.http_status(400, -1, str(e))
return self.success(data={'user': user_email})

View File

@@ -17,16 +17,20 @@ class BotService:
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_bots(self) -> list[dict]:
"""Get all bots"""
async def get_bots(self, include_secret: bool = True) -> list[dict]:
"""获取所有机器人"""
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_bot.Bot))
bots = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot) for bot in bots]
masked_columns = []
if not include_secret:
masked_columns = ['adapter_config']
async def get_bot(self, bot_uuid: str) -> dict | None:
"""Get bot"""
return [self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot, masked_columns) for bot in bots]
async def get_bot(self, bot_uuid: str, include_secret: bool = True) -> dict | None:
"""获取机器人"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_bot.Bot).where(persistence_bot.Bot.uuid == bot_uuid)
)
@@ -36,7 +40,27 @@ class BotService:
if bot is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot)
masked_columns = []
if not include_secret:
masked_columns = ['adapter_config']
return self.ap.persistence_mgr.serialize_model(persistence_bot.Bot, bot, masked_columns)
async def get_runtime_bot_info(self, bot_uuid: str, include_secret: bool = True) -> dict:
"""获取机器人运行时信息"""
persistence_bot = await self.get_bot(bot_uuid, include_secret)
if persistence_bot is None:
raise Exception('Bot not found')
adapter_runtime_values = {}
runtime_bot = await self.ap.platform_mgr.get_bot_by_uuid(bot_uuid)
if runtime_bot is not None:
adapter_runtime_values['bot_account_id'] = runtime_bot.adapter.bot_account_id
persistence_bot['adapter_runtime_values'] = adapter_runtime_values
return persistence_bot
async def create_bot(self, bot_data: dict) -> str:
"""Create bot"""

View File

@@ -78,7 +78,9 @@ class KnowledgeService:
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return [result.model_dump() for result in await runtime_kb.retrieve(query)]
return [
result.model_dump() for result in await runtime_kb.retrieve(query, runtime_kb.knowledge_base_entity.top_k)
]
async def get_files_by_knowledge_base(self, kb_uuid: str) -> list[dict]:
"""获取知识库文件"""

View File

@@ -7,7 +7,7 @@ from ....core import app
from ....entity.persistence import model as persistence_model
from ....entity.persistence import pipeline as persistence_pipeline
from ....provider.modelmgr import requester as model_requester
from ....provider import entities as llm_entities
from langbot_plugin.api.entities.builtin.provider import message as provider_message
class LLMModelsService:
@@ -16,11 +16,19 @@ class LLMModelsService:
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_llm_models(self) -> list[dict]:
async def get_llm_models(self, include_secret: bool = True) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.LLMModel))
models = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_model.LLMModel, model) for model in models]
masked_columns = []
if not include_secret:
masked_columns = ['api_keys']
return [
self.ap.persistence_mgr.serialize_model(persistence_model.LLMModel, model, masked_columns)
for model in models
]
async def create_llm_model(self, model_data: dict) -> str:
model_data['uuid'] = str(uuid.uuid4())
@@ -99,9 +107,9 @@ class LLMModelsService:
await runtime_llm_model.requester.invoke_llm(
query=None,
model=runtime_llm_model,
messages=[llm_entities.Message(role='user', content='Hello, world!')],
messages=[provider_message.Message(role='user', content='Hello, world!')],
funcs=[],
extra_args={},
extra_args=model_data.get('extra_args', {}),
)

View File

@@ -38,9 +38,21 @@ class PipelineService:
self.ap.pipeline_config_meta_output.data,
]
async def get_pipelines(self) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
async def get_pipelines(self, sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]:
query = sqlalchemy.select(persistence_pipeline.LegacyPipeline)
if sort_by == 'created_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.asc())
elif sort_by == 'updated_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.asc())
result = await self.ap.persistence_mgr.execute_async(query)
pipelines = result.all()
return [
self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
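A sketch of driving the new sorting parameters end to end; only created_at and updated_at with ASC/DESC are handled by the branches above, and the host, token, path, and response envelope are assumptions:

import httpx

resp = httpx.get(
    'http://<host>/api/v1/pipelines',  # path assumed by analogy with the plugins group
    params={'sort_by': 'updated_at', 'sort_order': 'ASC'},
    headers={'Authorization': 'Bearer <user-token>'},  # auth scheme assumed
)
print(resp.json())  # RouterGroup.success envelope, exact shape not shown in this diff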

View File

@@ -82,3 +82,18 @@ class UserService:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(user.User).where(user.User.user == user_email).values(password=hashed_password)
)
async def change_password(self, user_email: str, current_password: str, new_password: str) -> None:
ph = argon2.PasswordHasher()
user_obj = await self.get_user_by_email(user_email)
if user_obj is None:
raise ValueError('User not found')
ph.verify(user_obj.password, current_password)
hashed_password = ph.hash(new_password)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(user.User).where(user.User.user == user_email).values(password=hashed_password)
)
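The change-password route shown earlier catches argon2.exceptions.VerifyMismatchError; a small standalone sketch of the argon2-cffi contract this service method relies on:

import argon2
from argon2.exceptions import VerifyMismatchError

ph = argon2.PasswordHasher()
stored = ph.hash('old-password')

try:
    ph.verify(stored, 'wrong-password')   # raises on mismatch
except VerifyMismatchError:
    print('current password is incorrect')  # mapped to HTTP 400 in the route shown earlier

assert ph.verify(stored, 'old-password')  # returns True when the password matches
new_hash = ph.hash('new-password')        # persisted via the UPDATE statement above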

View File

@@ -2,9 +2,12 @@ from __future__ import annotations
import typing
from ..core import app, entities as core_entities
from . import entities, operator, errors
from ..core import app
from . import operator
from ..utils import importutil
import langbot_plugin.api.entities.builtin.provider.session as provider_session
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
# 引入所有算子以便注册
from . import operators
@@ -13,13 +16,11 @@ importutil.import_modules_in_pkg(operators)
class CommandManager:
"""命令管理器"""
ap: app.Application
cmd_list: list[operator.CommandOperator]
"""
运行时命令列表,扁平存储,各个对象包含对应的子节点引用
Runtime command list, flat storage, each object contains a reference to the corresponding child node
"""
def __init__(self, ap: app.Application):
@@ -55,43 +56,28 @@ class CommandManager:
async def _execute(
self,
context: entities.ExecuteContext,
context: command_context.ExecuteContext,
operator_list: list[operator.CommandOperator],
operator: operator.CommandOperator = None,
) -> typing.AsyncGenerator[entities.CommandReturn, None]:
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
"""执行命令"""
found = False
if len(context.crt_params) > 0: # 查找下一个参数是否对应此节点的某个子节点名
for oper in operator_list:
if (context.crt_params[0] == oper.name or context.crt_params[0] in oper.alias) and (
oper.parent_class is None or oper.parent_class == operator.__class__
):
found = True
command_list = await self.ap.plugin_connector.list_commands()
context.crt_command = context.crt_params[0]
context.crt_params = context.crt_params[1:]
async for ret in self._execute(context, oper.children, oper):
yield ret
break
if not found: # 如果下一个参数未在此节点的子节点中找到,则执行此节点或者报错
if operator is None:
yield entities.CommandReturn(error=errors.CommandNotFoundError(context.crt_params[0]))
else:
if operator.lowest_privilege > context.privilege:
yield entities.CommandReturn(error=errors.CommandPrivilegeError(operator.name))
else:
async for ret in operator.execute(context):
yield ret
for command in command_list:
if command.metadata.name == context.command:
async for ret in self.ap.plugin_connector.execute_command(context):
yield ret
break
else:
yield command_context.CommandReturn(error=command_errors.CommandNotFoundError(context.command))
async def execute(
self,
command_text: str,
query: core_entities.Query,
session: core_entities.Session,
) -> typing.AsyncGenerator[entities.CommandReturn, None]:
query: pipeline_query.Query,
session: provider_session.Session,
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
"""执行命令"""
privilege = 1
@@ -99,8 +85,8 @@ class CommandManager:
if f'{query.launcher_type.value}_{query.launcher_id}' in self.ap.instance_config.data['admins']:
privilege = 2
ctx = entities.ExecuteContext(
query=query,
ctx = command_context.ExecuteContext(
query_id=query.query_id,
session=session,
command_text=command_text,
command='',
@@ -110,5 +96,9 @@ class CommandManager:
privilege=privilege,
)
ctx.command = ctx.params[0]
ctx.shift()
async for ret in self._execute(ctx, self.cmd_list):
yield ret

View File

@@ -1,74 +0,0 @@
from __future__ import annotations
import typing
import pydantic.v1 as pydantic
from ..core import entities as core_entities
from . import errors
from ..platform.types import message as platform_message
class CommandReturn(pydantic.BaseModel):
"""命令返回值"""
text: typing.Optional[str] = None
"""文本
"""
image: typing.Optional[platform_message.Image] = None
"""弃用"""
image_url: typing.Optional[str] = None
"""图片链接
"""
error: typing.Optional[errors.CommandError] = None
"""错误
"""
class Config:
arbitrary_types_allowed = True
class ExecuteContext(pydantic.BaseModel):
"""单次命令执行上下文"""
query: core_entities.Query
"""本次消息的请求对象"""
session: core_entities.Session
"""本次消息所属的会话对象"""
command_text: str
"""命令完整文本"""
command: str
"""命令名称"""
crt_command: str
"""当前命令
多级命令中crt_command为当前命令command为根命令。
例如:!plugin on Webwlkr
处理到plugin时command为plugincrt_command为plugin
处理到on时command为plugincrt_command为on
"""
params: list[str]
"""命令参数
整个命令以空格分割后的参数列表
"""
crt_params: list[str]
"""当前命令参数
多级命令中crt_params为当前命令参数params为根命令参数。
例如:!plugin on Webwlkr
处理到plugin时params为['on', 'Webwlkr']crt_params为['on', 'Webwlkr']
处理到on时params为['on', 'Webwlkr']crt_params为['Webwlkr']
"""
privilege: int
"""发起人权限"""

View File

@@ -1,26 +0,0 @@
class CommandError(Exception):
def __init__(self, message: str = None):
self.message = message
def __str__(self):
return self.message
class CommandNotFoundError(CommandError):
def __init__(self, message: str = None):
super().__init__('未知命令: ' + message)
class CommandPrivilegeError(CommandError):
def __init__(self, message: str = None):
super().__init__('权限不足: ' + message)
class ParamNotEnoughError(CommandError):
def __init__(self, message: str = None):
super().__init__('参数不足: ' + message)
class CommandOperationError(CommandError):
def __init__(self, message: str = None):
super().__init__('操作失败: ' + message)

View File

@@ -4,7 +4,7 @@ import typing
import abc
from ..core import app
from . import entities
from langbot_plugin.api.entities.builtin.command import context as command_context
preregistered_operators: list[typing.Type[CommandOperator]] = []
@@ -95,16 +95,18 @@ class CommandOperator(metaclass=abc.ABCMeta):
pass
@abc.abstractmethod
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
"""实现此方法以执行命令
支持多次yield以返回多个结果。
例如:一个安装插件的命令,可能会有下载、解压、安装等多个步骤,每个步骤都可以返回一个结果。
Args:
context (entities.ExecuteContext): 命令执行上下文
context (command_context.ExecuteContext): 命令执行上下文
Yields:
entities.CommandReturn: 命令返回封装
command_context.CommandReturn: 命令返回封装
"""
pass

View File

@@ -2,14 +2,17 @@ from __future__ import annotations
import typing
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(name='cmd', help='显示命令列表', usage='!cmd\n!cmd <命令名称>')
class CmdOperator(operator.CommandOperator):
"""命令列表"""
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
"""执行"""
if len(context.crt_params) == 0:
reply_str = '当前所有命令: \n\n'
@@ -20,7 +23,7 @@ class CmdOperator(operator.CommandOperator):
reply_str += '\n使用 !cmd <命令名称> 查看命令的详细帮助'
yield entities.CommandReturn(text=reply_str.strip())
yield command_context.CommandReturn(text=reply_str.strip())
else:
cmd_name = context.crt_params[0]
@@ -33,9 +36,9 @@ class CmdOperator(operator.CommandOperator):
break
if cmd is None:
yield entities.CommandReturn(error=errors.CommandNotFoundError(cmd_name))
yield command_context.CommandReturn(error=command_errors.CommandNotFoundError(cmd_name))
else:
reply_str = f'{cmd.name}: {cmd.help}\n\n'
reply_str += f'使用方法: \n{cmd.usage}'
yield entities.CommandReturn(text=reply_str.strip())
yield command_context.CommandReturn(text=reply_str.strip())

View File

@@ -2,23 +2,26 @@ from __future__ import annotations
import typing
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(name='del', help='删除当前会话的历史记录', usage='!del <序号>\n!del all')
class DelOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if context.session.conversations:
delete_index = 0
if len(context.crt_params) > 0:
try:
delete_index = int(context.crt_params[0])
except Exception:
yield entities.CommandReturn(error=errors.CommandOperationError('索引必须是整数'))
yield command_context.CommandReturn(error=command_errors.CommandOperationError('索引必须是整数'))
return
if delete_index < 0 or delete_index >= len(context.session.conversations):
yield entities.CommandReturn(error=errors.CommandOperationError('索引超出范围'))
yield command_context.CommandReturn(error=command_errors.CommandOperationError('索引超出范围'))
return
# 倒序
@@ -29,15 +32,17 @@ class DelOperator(operator.CommandOperator):
del context.session.conversations[to_delete_index]
yield entities.CommandReturn(text=f'已删除对话: {delete_index}')
yield command_context.CommandReturn(text=f'已删除对话: {delete_index}')
else:
yield entities.CommandReturn(error=errors.CommandOperationError('当前没有对话'))
yield command_context.CommandReturn(error=command_errors.CommandOperationError('当前没有对话'))
@operator.operator_class(name='all', help='删除此会话的所有历史记录', parent_class=DelOperator)
class DelAllOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
context.session.conversations = []
context.session.using_conversation = None
yield entities.CommandReturn(text='已删除所有对话')
yield command_context.CommandReturn(text='已删除所有对话')

View File

@@ -1,19 +1,20 @@
from __future__ import annotations
from typing import AsyncGenerator
from .. import operator, entities
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context
@operator.operator_class(name='func', help='查看所有已注册的内容函数', usage='!func')
class FuncOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> AsyncGenerator[command_context.CommandReturn, None]:
reply_str = '当前已启用的内容函数: \n\n'
index = 1
all_functions = await self.ap.tool_mgr.get_all_functions(
plugin_enabled=True,
)
all_functions = await self.ap.tool_mgr.get_all_tools()
for func in all_functions:
reply_str += '{}. {}:\n{}\n\n'.format(
@@ -23,4 +24,4 @@ class FuncOperator(operator.CommandOperator):
)
index += 1
yield entities.CommandReturn(text=reply_str)
yield command_context.CommandReturn(text=reply_str)

View File

@@ -2,14 +2,17 @@ from __future__ import annotations
import typing
from .. import operator, entities
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context
@operator.operator_class(name='help', help='显示帮助', usage='!help\n!help <命令名称>')
class HelpOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
help = 'LangBot - 大语言模型原生即时通信机器人平台\n链接https://langbot.app'
help += '\n发送命令 !cmd 可查看命令列表'
yield entities.CommandReturn(text=help)
yield command_context.CommandReturn(text=help)

View File

@@ -3,26 +3,31 @@ from __future__ import annotations
import typing
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(name='last', help='切换到前一个对话', usage='!last')
class LastOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if context.session.conversations:
# 找到当前会话的上一个会话
for index in range(len(context.session.conversations) - 1, -1, -1):
if context.session.conversations[index] == context.session.using_conversation:
if index == 0:
yield entities.CommandReturn(error=errors.CommandOperationError('已经是第一个对话了'))
yield command_context.CommandReturn(
error=command_errors.CommandOperationError('已经是第一个对话了')
)
return
else:
context.session.using_conversation = context.session.conversations[index - 1]
time_str = context.session.using_conversation.create_time.strftime('%Y-%m-%d %H:%M:%S')
yield entities.CommandReturn(
yield command_context.CommandReturn(
text=f'已切换到上一个对话: {index} {time_str}: {context.session.using_conversation.messages[0].readable_str()}'
)
return
else:
yield entities.CommandReturn(error=errors.CommandOperationError('当前没有对话'))
yield command_context.CommandReturn(error=command_errors.CommandOperationError('当前没有对话'))

View File

@@ -2,19 +2,22 @@ from __future__ import annotations
import typing
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(name='list', help='列出此会话中的所有历史对话', usage='!list\n!list <页码>')
class ListOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
page = 0
if len(context.crt_params) > 0:
try:
page = int(context.crt_params[0] - 1)
except Exception:
yield entities.CommandReturn(error=errors.CommandOperationError('页码应为整数'))
yield command_context.CommandReturn(error=command_errors.CommandOperationError('页码应为整数'))
return
record_per_page = 10
@@ -45,4 +48,4 @@ class ListOperator(operator.CommandOperator):
else:
content += f'\n当前会话: {using_conv_index} {context.session.using_conversation.create_time.strftime("%Y-%m-%d %H:%M:%S")}: {context.session.using_conversation.messages[0].readable_str() if len(context.session.using_conversation.messages) > 0 else "无内容"}'
yield entities.CommandReturn(text=f'{page + 1} 页 (时间倒序):\n{content}')
yield command_context.CommandReturn(text=f'{page + 1} 页 (时间倒序):\n{content}')

View File

@@ -2,26 +2,31 @@ from __future__ import annotations
import typing
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(name='next', help='切换到后一个对话', usage='!next')
class NextOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if context.session.conversations:
# 找到当前会话的下一个会话
for index in range(len(context.session.conversations)):
if context.session.conversations[index] == context.session.using_conversation:
if index == len(context.session.conversations) - 1:
yield entities.CommandReturn(error=errors.CommandOperationError('已经是最后一个对话了'))
yield command_context.CommandReturn(
error=command_errors.CommandOperationError('已经是最后一个对话了')
)
return
else:
context.session.using_conversation = context.session.conversations[index + 1]
time_str = context.session.using_conversation.create_time.strftime('%Y-%m-%d %H:%M:%S')
yield entities.CommandReturn(
yield command_context.CommandReturn(
text=f'已切换到后一个对话: {index} {time_str}: {context.session.using_conversation.messages[0].content}'
)
return
else:
yield entities.CommandReturn(error=errors.CommandOperationError('当前没有对话'))
yield command_context.CommandReturn(error=command_errors.CommandOperationError('当前没有对话'))

View File

@@ -2,7 +2,8 @@ from __future__ import annotations
import typing
import traceback
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(
@@ -11,7 +12,9 @@ from .. import operator, entities, errors
usage='!plugin\n!plugin get <插件仓库地址>\n!plugin update\n!plugin del <插件名>\n!plugin on <插件名>\n!plugin off <插件名>',
)
class PluginOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
plugin_list = self.ap.plugin_mgr.plugins()
reply_str = '所有插件({}):\n'.format(len(plugin_list))
idx = 0
@@ -27,32 +30,36 @@ class PluginOperator(operator.CommandOperator):
idx += 1
yield entities.CommandReturn(text=reply_str)
yield command_context.CommandReturn(text=reply_str)
@operator.operator_class(name='get', help='安装插件', privilege=2, parent_class=PluginOperator)
class PluginGetOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if len(context.crt_params) == 0:
yield entities.CommandReturn(error=errors.ParamNotEnoughError('请提供插件仓库地址'))
yield command_context.CommandReturn(error=command_errors.ParamNotEnoughError('请提供插件仓库地址'))
else:
repo = context.crt_params[0]
yield entities.CommandReturn(text='正在安装插件...')
yield command_context.CommandReturn(text='正在安装插件...')
try:
await self.ap.plugin_mgr.install_plugin(repo)
yield entities.CommandReturn(text='插件安装成功,请重启程序以加载插件')
yield command_context.CommandReturn(text='插件安装成功,请重启程序以加载插件')
except Exception as e:
traceback.print_exc()
yield entities.CommandReturn(error=errors.CommandError('插件安装失败: ' + str(e)))
yield command_context.CommandReturn(error=command_errors.CommandError('插件安装失败: ' + str(e)))
@operator.operator_class(name='update', help='更新插件', privilege=2, parent_class=PluginOperator)
class PluginUpdateOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if len(context.crt_params) == 0:
yield entities.CommandReturn(error=errors.ParamNotEnoughError('请提供插件名称'))
yield command_context.CommandReturn(error=command_errors.ParamNotEnoughError('请提供插件名称'))
else:
plugin_name = context.crt_params[0]
@@ -60,24 +67,26 @@ class PluginUpdateOperator(operator.CommandOperator):
plugin_container = self.ap.plugin_mgr.get_plugin_by_name(plugin_name)
if plugin_container is not None:
yield entities.CommandReturn(text='正在更新插件...')
yield command_context.CommandReturn(text='正在更新插件...')
await self.ap.plugin_mgr.update_plugin(plugin_name)
yield entities.CommandReturn(text='插件更新成功,请重启程序以加载插件')
yield command_context.CommandReturn(text='插件更新成功,请重启程序以加载插件')
else:
yield entities.CommandReturn(error=errors.CommandError('插件更新失败: 未找到插件'))
yield command_context.CommandReturn(error=command_errors.CommandError('插件更新失败: 未找到插件'))
except Exception as e:
traceback.print_exc()
yield entities.CommandReturn(error=errors.CommandError('插件更新失败: ' + str(e)))
yield command_context.CommandReturn(error=command_errors.CommandError('插件更新失败: ' + str(e)))
@operator.operator_class(name='all', help='更新所有插件', privilege=2, parent_class=PluginUpdateOperator)
class PluginUpdateAllOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
try:
plugins = [p.plugin_name for p in self.ap.plugin_mgr.plugins()]
if plugins:
yield entities.CommandReturn(text='正在更新插件...')
yield command_context.CommandReturn(text='正在更新插件...')
updated = []
try:
for plugin_name in plugins:
@@ -85,20 +94,22 @@ class PluginUpdateAllOperator(operator.CommandOperator):
updated.append(plugin_name)
except Exception as e:
traceback.print_exc()
yield entities.CommandReturn(error=errors.CommandError('插件更新失败: ' + str(e)))
yield entities.CommandReturn(text='已更新插件: {}'.format(', '.join(updated)))
yield command_context.CommandReturn(error=command_errors.CommandError('插件更新失败: ' + str(e)))
yield command_context.CommandReturn(text='已更新插件: {}'.format(', '.join(updated)))
else:
yield entities.CommandReturn(text='没有可更新的插件')
yield command_context.CommandReturn(text='没有可更新的插件')
except Exception as e:
traceback.print_exc()
yield entities.CommandReturn(error=errors.CommandError('插件更新失败: ' + str(e)))
yield command_context.CommandReturn(error=command_errors.CommandError('插件更新失败: ' + str(e)))
@operator.operator_class(name='del', help='删除插件', privilege=2, parent_class=PluginOperator)
class PluginDelOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if len(context.crt_params) == 0:
yield entities.CommandReturn(error=errors.ParamNotEnoughError('请提供插件名称'))
yield command_context.CommandReturn(error=command_errors.ParamNotEnoughError('请提供插件名称'))
else:
plugin_name = context.crt_params[0]
@@ -106,51 +117,55 @@ class PluginDelOperator(operator.CommandOperator):
plugin_container = self.ap.plugin_mgr.get_plugin_by_name(plugin_name)
if plugin_container is not None:
yield entities.CommandReturn(text='正在删除插件...')
yield command_context.CommandReturn(text='正在删除插件...')
await self.ap.plugin_mgr.uninstall_plugin(plugin_name)
yield entities.CommandReturn(text='插件删除成功,请重启程序以加载插件')
yield command_context.CommandReturn(text='插件删除成功,请重启程序以加载插件')
else:
yield entities.CommandReturn(error=errors.CommandError('插件删除失败: 未找到插件'))
yield command_context.CommandReturn(error=command_errors.CommandError('插件删除失败: 未找到插件'))
except Exception as e:
traceback.print_exc()
yield entities.CommandReturn(error=errors.CommandError('插件删除失败: ' + str(e)))
yield command_context.CommandReturn(error=command_errors.CommandError('插件删除失败: ' + str(e)))
@operator.operator_class(name='on', help='启用插件', privilege=2, parent_class=PluginOperator)
class PluginEnableOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if len(context.crt_params) == 0:
yield entities.CommandReturn(error=errors.ParamNotEnoughError('请提供插件名称'))
yield command_context.CommandReturn(error=command_errors.ParamNotEnoughError('请提供插件名称'))
else:
plugin_name = context.crt_params[0]
try:
if await self.ap.plugin_mgr.update_plugin_switch(plugin_name, True):
yield entities.CommandReturn(text='已启用插件: {}'.format(plugin_name))
yield command_context.CommandReturn(text='已启用插件: {}'.format(plugin_name))
else:
yield entities.CommandReturn(
error=errors.CommandError('插件状态修改失败: 未找到插件 {}'.format(plugin_name))
yield command_context.CommandReturn(
error=command_errors.CommandError('插件状态修改失败: 未找到插件 {}'.format(plugin_name))
)
except Exception as e:
traceback.print_exc()
yield entities.CommandReturn(error=errors.CommandError('插件状态修改失败: ' + str(e)))
yield command_context.CommandReturn(error=command_errors.CommandError('插件状态修改失败: ' + str(e)))
@operator.operator_class(name='off', help='禁用插件', privilege=2, parent_class=PluginOperator)
class PluginDisableOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
if len(context.crt_params) == 0:
yield entities.CommandReturn(error=errors.ParamNotEnoughError('请提供插件名称'))
yield command_context.CommandReturn(error=command_errors.ParamNotEnoughError('请提供插件名称'))
else:
plugin_name = context.crt_params[0]
try:
if await self.ap.plugin_mgr.update_plugin_switch(plugin_name, False):
yield entities.CommandReturn(text='已禁用插件: {}'.format(plugin_name))
yield command_context.CommandReturn(text='已禁用插件: {}'.format(plugin_name))
else:
yield entities.CommandReturn(
error=errors.CommandError('插件状态修改失败: 未找到插件 {}'.format(plugin_name))
yield command_context.CommandReturn(
error=command_errors.CommandError('插件状态修改失败: 未找到插件 {}'.format(plugin_name))
)
except Exception as e:
traceback.print_exc()
yield entities.CommandReturn(error=errors.CommandError('插件状态修改失败: ' + str(e)))
yield command_context.CommandReturn(error=command_errors.CommandError('插件状态修改失败: ' + str(e)))

View File

@@ -2,19 +2,22 @@ from __future__ import annotations
import typing
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(name='prompt', help='查看当前对话的前文', usage='!prompt')
class PromptOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
"""执行"""
if context.session.using_conversation is None:
yield entities.CommandReturn(error=errors.CommandOperationError('当前没有对话'))
yield command_context.CommandReturn(error=command_errors.CommandOperationError('当前没有对话'))
else:
reply_str = '当前对话所有内容:\n\n'
for msg in context.session.using_conversation.messages:
reply_str += f'{msg.role}: {msg.content}\n'
yield entities.CommandReturn(text=reply_str)
yield command_context.CommandReturn(text=reply_str)

View File

@@ -2,15 +2,18 @@ from __future__ import annotations
import typing
from .. import operator, entities, errors
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context, errors as command_errors
@operator.operator_class(name='resend', help='重发当前会话的最后一条消息', usage='!resend')
class ResendOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
# 回滚到最后一条用户message前
if context.session.using_conversation is None:
yield entities.CommandReturn(error=errors.CommandError('当前没有对话'))
yield command_context.CommandReturn(error=command_errors.CommandError('当前没有对话'))
else:
conv_msg = context.session.using_conversation.messages
@@ -23,4 +26,4 @@ class ResendOperator(operator.CommandOperator):
conv_msg.pop()
# 不重发了,提示用户已删除就行了
yield entities.CommandReturn(text='已删除最后一次请求记录')
yield command_context.CommandReturn(text='已删除最后一次请求记录')

View File

@@ -2,13 +2,16 @@ from __future__ import annotations
import typing
from .. import operator, entities
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context
@operator.operator_class(name='reset', help='重置当前会话', usage='!reset')
class ResetOperator(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
"""执行"""
context.session.using_conversation = None
yield entities.CommandReturn(text='已重置当前会话')
yield command_context.CommandReturn(text='已重置当前会话')

View File

@@ -1,11 +0,0 @@
from __future__ import annotations
import typing
from .. import operator, entities
@operator.operator_class(name='update', help='更新程序', usage='!update', privilege=2)
class UpdateCommand(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
yield entities.CommandReturn(text='不再支持通过命令更新,请查看 LangBot 文档。')

View File

@@ -2,12 +2,15 @@ from __future__ import annotations
import typing
from .. import operator, entities
from .. import operator
from langbot_plugin.api.entities.builtin.command import context as command_context
@operator.operator_class(name='version', help='显示版本信息', usage='!version')
class VersionCommand(operator.CommandOperator):
async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
async def execute(
self, context: command_context.ExecuteContext
) -> typing.AsyncGenerator[command_context.CommandReturn, None]:
reply_str = f'当前版本: \n{self.ap.ver_mgr.get_current_version()}'
try:
@@ -16,4 +19,4 @@ class VersionCommand(operator.CommandOperator):
except Exception:
pass
yield entities.CommandReturn(text=reply_str.strip())
yield command_context.CommandReturn(text=reply_str.strip())

View File

@@ -3,7 +3,6 @@ from __future__ import annotations
import logging
import asyncio
import traceback
import sys
import os
from ..platform import botmgr as im_mgr
@@ -12,7 +11,7 @@ from ..provider.modelmgr import modelmgr as llm_model_mgr
from ..provider.tools import toolmgr as llm_tool_mgr
from ..config import manager as config_mgr
from ..command import cmdmgr
from ..plugin import manager as plugin_mgr
from ..plugin import connector as plugin_connector
from ..pipeline import pool
from ..pipeline import controller, pipelinemgr
from ..utils import version as version_mgr, proxy as proxy_mgr, announce as announce_mgr
@@ -80,7 +79,7 @@ class Application:
# =========================
plugin_mgr: plugin_mgr.PluginManager = None
plugin_connector: plugin_connector.PluginRuntimeConnector = None
query_pool: pool.QueryPool = None
@@ -128,7 +127,7 @@ class Application:
async def run(self):
try:
await self.plugin_mgr.initialize_plugins()
await self.plugin_connector.initialize_plugins()
# 后续可能会允许动态重启其他任务
# 故为了防止程序在非 Ctrl-C 情况下退出,这里创建一个不会结束的协程
@@ -169,6 +168,9 @@ class Application:
self.logger.error(f'Application runtime fatal exception: {e}')
self.logger.debug(f'Traceback: {traceback.format_exc()}')
def dispose(self):
self.plugin_connector.dispose()
async def print_web_access_info(self):
"""Print access webui tips"""
@@ -195,59 +197,3 @@ class Application:
""".strip()
for line in tips.split('\n'):
self.logger.info(line)
async def reload(
self,
scope: core_entities.LifecycleControlScope,
):
match scope:
case core_entities.LifecycleControlScope.PLATFORM.value:
self.logger.info('Hot reload scope=' + scope)
await self.platform_mgr.shutdown()
self.platform_mgr = im_mgr.PlatformManager(self)
await self.platform_mgr.initialize()
self.task_mgr.create_task(
self.platform_mgr.run(),
name='platform-manager',
scopes=[
core_entities.LifecycleControlScope.APPLICATION,
core_entities.LifecycleControlScope.PLATFORM,
],
)
case core_entities.LifecycleControlScope.PLUGIN.value:
self.logger.info('Hot reload scope=' + scope)
await self.plugin_mgr.destroy_plugins()
# 删除 sys.module 中所有的 plugins/* 下的模块
for mod in list(sys.modules.keys()):
if mod.startswith('plugins.'):
del sys.modules[mod]
self.plugin_mgr = plugin_mgr.PluginManager(self)
await self.plugin_mgr.initialize()
await self.plugin_mgr.initialize_plugins()
await self.plugin_mgr.load_plugins()
await self.plugin_mgr.initialize_plugins()
case core_entities.LifecycleControlScope.PROVIDER.value:
self.logger.info('Hot reload scope=' + scope)
await self.tool_mgr.shutdown()
llm_model_mgr_inst = llm_model_mgr.ModelManager(self)
await llm_model_mgr_inst.initialize()
self.model_mgr = llm_model_mgr_inst
llm_session_mgr_inst = llm_session_mgr.SessionManager(self)
await llm_session_mgr_inst.initialize()
self.sess_mgr = llm_session_mgr_inst
llm_tool_mgr_inst = llm_tool_mgr.ToolManager(self)
await llm_tool_mgr_inst.initialize()
self.tool_mgr = llm_tool_mgr_inst
case _:
pass

View File

@@ -51,8 +51,8 @@ async def main(loop: asyncio.AbstractEventLoop):
import signal
def signal_handler(sig, frame):
app_inst.dispose()
print('[Signal] Program exit.')
# ap.shutdown()
os._exit(0)
signal.signal(signal.SIGINT, signal_handler)

View File

@@ -1,176 +1,10 @@
from __future__ import annotations
import enum
import typing
import datetime
import asyncio
import pydantic.v1 as pydantic
from ..provider import entities as llm_entities
from ..provider.modelmgr import requester
from ..provider.tools import entities as tools_entities
from ..platform import adapter as msadapter
from ..platform.types import message as platform_message
from ..platform.types import events as platform_events
class LifecycleControlScope(enum.Enum):
APPLICATION = 'application'
PLATFORM = 'platform'
PLUGIN = 'plugin'
PROVIDER = 'provider'
class LauncherTypes(enum.Enum):
"""一个请求的发起者类型"""
PERSON = 'person'
"""私聊"""
GROUP = 'group'
"""群聊"""
class Query(pydantic.BaseModel):
"""一次请求的信息封装"""
query_id: int
"""请求ID添加进请求池时生成"""
launcher_type: LauncherTypes
"""会话类型platform处理阶段设置"""
launcher_id: typing.Union[int, str]
"""会话IDplatform处理阶段设置"""
sender_id: typing.Union[int, str]
"""发送者IDplatform处理阶段设置"""
message_event: platform_events.MessageEvent
"""事件platform收到的原始事件"""
message_chain: platform_message.MessageChain
"""消息链platform收到的原始消息链"""
bot_uuid: typing.Optional[str] = None
"""机器人UUID。"""
pipeline_uuid: typing.Optional[str] = None
"""流水线UUID。"""
pipeline_config: typing.Optional[dict[str, typing.Any]] = None
"""流水线配置,由 Pipeline 在运行开始时设置。"""
adapter: msadapter.MessagePlatformAdapter
"""消息平台适配器对象单个app中可能启用了多个消息平台适配器此对象表明发起此query的适配器"""
session: typing.Optional[Session] = None
"""会话对象,由前置处理器阶段设置"""
messages: typing.Optional[list[llm_entities.Message]] = []
"""历史消息列表,由前置处理器阶段设置"""
prompt: typing.Optional[llm_entities.Prompt] = None
"""情景预设内容,由前置处理器阶段设置"""
user_message: typing.Optional[llm_entities.Message] = None
"""此次请求的用户消息对象,由前置处理器阶段设置"""
variables: typing.Optional[dict[str, typing.Any]] = None
"""变量由前置处理器阶段设置。在prompt中嵌入或由 Runner 传递到 LLMOps 平台。"""
use_llm_model: typing.Optional[requester.RuntimeLLMModel] = None
"""使用的对话模型,由前置处理器阶段设置"""
use_funcs: typing.Optional[list[tools_entities.LLMFunction]] = None
"""使用的函数,由前置处理器阶段设置"""
resp_messages: (
typing.Optional[list[llm_entities.Message]] | typing.Optional[list[platform_message.MessageChain]]
) = []
"""由Process阶段生成的回复消息对象列表"""
resp_message_chain: typing.Optional[list[platform_message.MessageChain]] = None
"""回复消息链从resp_messages包装而得"""
# ======= 内部保留 =======
current_stage: typing.Optional['pkg.pipeline.pipelinemgr.StageInstContainer'] = None
"""当前所处阶段"""
class Config:
arbitrary_types_allowed = True
# ========== APIs callable by plugins (request APIs) ==========
def set_variable(self, key: str, value: typing.Any):
"""Set a variable"""
if self.variables is None:
self.variables = {}
self.variables[key] = value
def get_variable(self, key: str) -> typing.Any:
"""Get a variable"""
if self.variables is None:
return None
return self.variables.get(key)
def get_variables(self) -> dict[str, typing.Any]:
"""Get all variables"""
if self.variables is None:
return {}
return self.variables
class Conversation(pydantic.BaseModel):
"""Conversation, contained in a Session; one Session can have multiple historical Conversations but only one currently in use"""
prompt: llm_entities.Prompt
messages: list[llm_entities.Message]
create_time: typing.Optional[datetime.datetime] = pydantic.Field(default_factory=datetime.datetime.now)
update_time: typing.Optional[datetime.datetime] = pydantic.Field(default_factory=datetime.datetime.now)
use_llm_model: typing.Optional[requester.RuntimeLLMModel] = None
use_funcs: typing.Optional[list[tools_entities.LLMFunction]]
pipeline_uuid: str
"""流水线UUID。"""
bot_uuid: str
"""机器人UUID。"""
uuid: typing.Optional[str] = None
"""该对话的 uuid在创建时不会自动生成。而是当使用 Dify API 等由外部管理对话信息的服务时,用于绑定外部的会话。具体如何使用,取决于 Runner。"""
class Config:
arbitrary_types_allowed = True
class Session(pydantic.BaseModel):
"""会话,一个 Session 对应一个 {launcher_type.value}_{launcher_id}"""
launcher_type: LauncherTypes
launcher_id: typing.Union[int, str]
sender_id: typing.Optional[typing.Union[int, str]] = 0
use_prompt_name: typing.Optional[str] = 'default'
using_conversation: typing.Optional[Conversation] = None
conversations: typing.Optional[list[Conversation]] = pydantic.Field(default_factory=list)
create_time: typing.Optional[datetime.datetime] = pydantic.Field(default_factory=datetime.datetime.now)
update_time: typing.Optional[datetime.datetime] = pydantic.Field(default_factory=datetime.datetime.now)
semaphore: typing.Optional[asyncio.Semaphore] = None
"""当前会话的信号量,用于限制并发"""
class Config:
arbitrary_types_allowed = True
PROVIDER = 'provider'
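
Query.set_variable, get_variable, and get_variables form the small variable API that plugins and Runners rely on. A standalone sketch that mirrors just that logic (the _VariableStore class is illustrative, not part of the codebase):

import typing


class _VariableStore:
    """Mirrors the lazy variable dict on Query, for illustration only."""

    def __init__(self) -> None:
        self.variables: typing.Optional[dict[str, typing.Any]] = None

    def set_variable(self, key: str, value: typing.Any) -> None:
        if self.variables is None:
            self.variables = {}
        self.variables[key] = value

    def get_variable(self, key: str) -> typing.Any:
        if self.variables is None:
            return None
        return self.variables.get(key)


store = _VariableStore()
store.set_variable('user_name', 'Alice')
assert store.get_variable('user_name') == 'Alice'
assert store.get_variable('missing') is None  # absent keys return None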

View File

@@ -1,10 +1,11 @@
from __future__ import annotations
import asyncio
from .. import stage, app
from ...utils import version, proxy, announce
from ...pipeline import pool, controller, pipelinemgr
from ...plugin import manager as plugin_mgr
from ...plugin import connector as plugin_connector
from ...command import cmdmgr
from ...provider.session import sessionmgr as llm_session_mgr
from ...provider.modelmgr import modelmgr as llm_model_mgr
@@ -62,10 +63,13 @@ class BuildAppStage(stage.BootingStage):
ap.persistence_mgr = persistence_mgr_inst
await persistence_mgr_inst.initialize()
plugin_mgr_inst = plugin_mgr.PluginManager(ap)
await plugin_mgr_inst.initialize()
ap.plugin_mgr = plugin_mgr_inst
await plugin_mgr_inst.load_plugins()
async def runtime_disconnect_callback(connector: plugin_connector.PluginRuntimeConnector) -> None:
await asyncio.sleep(3)
await plugin_connector_inst.initialize()
plugin_connector_inst = plugin_connector.PluginRuntimeConnector(ap, runtime_disconnect_callback)
await plugin_connector_inst.initialize()
ap.plugin_connector = plugin_connector_inst
cmd_mgr_inst = cmdmgr.CommandManager(ap)
await cmd_mgr_inst.initialize()

View File

@@ -0,0 +1,22 @@
import sqlalchemy
from .base import Base
class BinaryStorage(Base):
"""Current for plugin use only"""
__tablename__ = 'binary_storages'
unique_key = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True)
key = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
owner_type = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
owner = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
value = sqlalchemy.Column(sqlalchemy.LargeBinary, nullable=False)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.func.now())
updated_at = sqlalchemy.Column(
sqlalchemy.DateTime,
nullable=False,
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)
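
A hedged sketch of how a binary_storages row could be written and read back with SQLAlchemy Core, assuming the BinaryStorage model above is importable. The engine argument and the demo coroutine are placeholders; in LangBot such statements would normally go through PersistenceManager.execute_async.

import sqlalchemy
from sqlalchemy.ext.asyncio import AsyncEngine


async def demo(engine: AsyncEngine) -> None:
    # Insert one blob owned by a hypothetical plugin, then read it back by its unique key.
    async with engine.begin() as conn:
        await conn.execute(
            sqlalchemy.insert(BinaryStorage).values(
                unique_key='plugin:example:cache',
                key='cache',
                owner_type='plugin',
                owner='example',
                value=b'\x00\x01\x02',
            )
        )
        row = (
            await conn.execute(
                sqlalchemy.select(BinaryStorage).where(BinaryStorage.unique_key == 'plugin:example:cache')
            )
        ).fetchone()
        print(len(row.value))  # 3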

View File

@@ -13,6 +13,8 @@ class PluginSetting(Base):
enabled = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False, default=True)
priority = sqlalchemy.Column(sqlalchemy.Integer, nullable=False, default=0)
config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default=dict)
install_source = sqlalchemy.Column(sqlalchemy.String(255), nullable=False, default='github')
install_info = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default=dict)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.func.now())
updated_at = sqlalchemy.Column(
sqlalchemy.DateTime,

View File

@@ -44,6 +44,38 @@ class PersistenceManager:
await self.create_tables()
# run migrations
database_version = await self.execute_async(
sqlalchemy.select(metadata.Metadata).where(metadata.Metadata.key == 'database_version')
)
database_version = int(database_version.fetchone()[1])
required_database_version = constants.required_database_version
if database_version < required_database_version:
migrations = migration.preregistered_db_migrations
migrations.sort(key=lambda x: x.number)
last_migration_number = database_version
for migration_cls in migrations:
migration_instance = migration_cls(self.ap)
if (
migration_instance.number > database_version
and migration_instance.number <= required_database_version
):
await migration_instance.upgrade()
await self.execute_async(
sqlalchemy.update(metadata.Metadata)
.where(metadata.Metadata.key == 'database_version')
.values({'value': str(migration_instance.number)})
)
last_migration_number = migration_instance.number
self.ap.logger.info(f'Migration {migration_instance.number} completed.')
self.ap.logger.info(f'Successfully upgraded database to version {last_migration_number}.')
async def create_tables(self):
# create tables
async with self.get_db_engine().connect() as conn:
@@ -87,38 +119,6 @@ class PersistenceManager:
# =================================
# run migrations
database_version = await self.execute_async(
sqlalchemy.select(metadata.Metadata).where(metadata.Metadata.key == 'database_version')
)
database_version = int(database_version.fetchone()[1])
required_database_version = constants.required_database_version
if database_version < required_database_version:
migrations = migration.preregistered_db_migrations
migrations.sort(key=lambda x: x.number)
last_migration_number = database_version
for migration_cls in migrations:
migration_instance = migration_cls(self.ap)
if (
migration_instance.number > database_version
and migration_instance.number <= required_database_version
):
await migration_instance.upgrade()
await self.execute_async(
sqlalchemy.update(metadata.Metadata)
.where(metadata.Metadata.key == 'database_version')
.values({'value': str(migration_instance.number)})
)
last_migration_number = migration_instance.number
self.ap.logger.info(f'Migration {migration_instance.number} completed.')
self.ap.logger.info(f'Successfully upgraded database to version {last_migration_number}.')
async def execute_async(self, *args, **kwargs) -> sqlalchemy.engine.cursor.CursorResult:
async with self.get_db_engine().connect() as conn:
result = await conn.execute(*args, **kwargs)
@@ -128,10 +128,13 @@ class PersistenceManager:
def get_db_engine(self) -> sqlalchemy_asyncio.AsyncEngine:
return self.db.get_engine()
def serialize_model(self, model: typing.Type[sqlalchemy.Base], data: sqlalchemy.Base) -> dict:
def serialize_model(
self, model: typing.Type[sqlalchemy.Base], data: sqlalchemy.Base, masked_columns: list[str] = []
) -> dict:
return {
column.name: getattr(data, column.name)
if not isinstance(getattr(data, column.name), (datetime.datetime))
else getattr(data, column.name).isoformat()
for column in model.__table__.columns
if column.name not in masked_columns
}
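
serialize_model turns a mapped row into a plain dict, rendering datetime columns as ISO 8601 strings and now skipping any masked_columns. A standalone sketch of the same comprehension over a throwaway model (Item, secret, and the serialize helper are illustrative names):

import datetime
import sqlalchemy
from sqlalchemy.orm import DeclarativeBase


class Base(DeclarativeBase):
    pass


class Item(Base):
    __tablename__ = 'items'
    id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
    secret = sqlalchemy.Column(sqlalchemy.String(32))
    created_at = sqlalchemy.Column(sqlalchemy.DateTime)


def serialize(model, data, masked_columns=[]):
    # Same shape as serialize_model above: datetimes become ISO strings, masked columns are dropped.
    return {
        column.name: getattr(data, column.name)
        if not isinstance(getattr(data, column.name), datetime.datetime)
        else getattr(data, column.name).isoformat()
        for column in model.__table__.columns
        if column.name not in masked_columns
    }


item = Item(id=1, secret='hunter2', created_at=datetime.datetime(2025, 8, 24, 12, 0))
print(serialize(Item, item, masked_columns=['secret']))
# {'id': 1, 'created_at': '2025-08-24T12:00:00'}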

View File

@@ -0,0 +1,20 @@
from .. import migration
@migration.migration_class(4)
class DBMigratePluginConfig(migration.DBMigration):
"""插件配置"""
async def upgrade(self):
"""升级"""
if 'plugin' not in self.ap.instance_config.data:
self.ap.instance_config.data['plugin'] = {
'runtime_ws_url': 'ws://localhost:5400/control/ws',
}
await self.ap.instance_config.dump_config()
async def downgrade(self):
"""降级"""
pass

View File

@@ -0,0 +1,38 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(5)
class DBMigratePipelineRemoveCotConfig(migration.DBMigration):
"""Pipeline remove cot config"""
async def upgrade(self):
"""Upgrade"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'remove-think' not in config['output']['misc']:
config['output']['misc']['remove-think'] = False
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""Downgrade"""
pass

View File

@@ -0,0 +1,32 @@
import sqlalchemy
from .. import migration
@migration.migration_class(6)
class DBMigratePluginInstallSource(migration.DBMigration):
"""插件安装来源"""
async def upgrade(self):
"""升级"""
# 查询表结构获取所有列名(异步执行 SQL
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.text('PRAGMA table_info(plugin_settings);'))
# fetchall() 是同步方法,无需 await
columns = [row[1] for row in result.fetchall()]
# 检查并添加 install_source 列
if 'install_source' not in columns:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.text(
"ALTER TABLE plugin_settings ADD COLUMN install_source VARCHAR(255) NOT NULL DEFAULT 'github'"
)
)
# Check for and add the install_info column
if 'install_info' not in columns:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.text("ALTER TABLE plugin_settings ADD COLUMN install_info JSON NOT NULL DEFAULT '{}'")
)
async def downgrade(self):
"""降级"""
pass

View File

@@ -1,7 +1,7 @@
from __future__ import annotations
from .. import stage, entities
from ...core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@stage.stage_class('BanSessionCheckStage')
@@ -14,7 +14,7 @@ class BanSessionCheckStage(stage.PipelineStage):
async def initialize(self, pipeline_config: dict):
pass
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
found = False
mode = query.pipeline_config['trigger']['access-control']['mode']

View File

@@ -3,12 +3,11 @@ from __future__ import annotations
from ...core import app
from .. import stage, entities
from ...core import entities as core_entities
from . import filter as filter_model, entities as filter_entities
from ...provider import entities as llm_entities
from ...platform.types import message as platform_message
from langbot_plugin.api.entities.builtin.provider import message as provider_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
from ...utils import importutil
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
from . import filters
importutil.import_modules_in_pkg(filters)
@@ -58,7 +57,7 @@ class ContentFilterStage(stage.PipelineStage):
async def _pre_process(
self,
message: str,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.StageProcessResult:
"""请求llm前处理消息
只要有一个不通过就不放行,只放行 PASS 的消息
@@ -67,7 +66,7 @@ class ContentFilterStage(stage.PipelineStage):
if query.pipeline_config['safety']['content-filter']['scope'] == 'output-msg':
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
if not message.strip():
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:
for filter in self.filter_chain:
if filter_entities.EnableStage.PRE in filter.enable_stages:
@@ -86,14 +85,14 @@ class ContentFilterStage(stage.PipelineStage):
elif result.level == filter_entities.ResultLevel.PASS: # pass on to the next filter
message = result.replacement
query.message_chain = platform_message.MessageChain(platform_message.Plain(message))
query.message_chain = platform_message.MessageChain([platform_message.Plain(text=message)])
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
async def _post_process(
self,
message: str,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.StageProcessResult:
"""请求llm后处理响应
只要是 PASS 或者 MASKED 的就通过此 filter将其 replacement 设置为message进入下一个 filter
@@ -123,7 +122,7 @@ class ContentFilterStage(stage.PipelineStage):
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
"""处理"""
if stage_inst_name == 'PreContentFilterStage':
contain_non_text = False
@@ -142,7 +141,7 @@ class ContentFilterStage(stage.PipelineStage):
return await self._pre_process(str(query.message_chain).strip(), query)
elif stage_inst_name == 'PostContentFilterStage':
# Only handle the case where query.resp_messages[-1].content is a str
if isinstance(query.resp_messages[-1], llm_entities.Message) and isinstance(
if isinstance(query.resp_messages[-1], provider_message.Message) and isinstance(
query.resp_messages[-1].content, str
):
return await self._post_process(query.resp_messages[-1].content, query)

View File

@@ -1,6 +1,6 @@
import enum
import pydantic.v1 as pydantic
import pydantic
class ResultLevel(enum.Enum):

View File

@@ -3,9 +3,9 @@ from __future__ import annotations
import abc
import typing
from ...core import app, entities as core_entities
from ...core import app
from . import entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
preregistered_filters: list[typing.Type[ContentFilter]] = []
@@ -60,8 +60,8 @@ class ContentFilter(metaclass=abc.ABCMeta):
pass
@abc.abstractmethod
async def process(self, query: core_entities.Query, message: str = None, image_url=None) -> entities.FilterResult:
"""Process message
async def process(self, query: pipeline_query.Query, message: str = None, image_url=None) -> entities.FilterResult:
"""处理消息
It is divided into two stages, depending on the value of enable_stages.
For content filters, you do not need to consider the stage of the message, you only need to check the message content.

View File

@@ -4,8 +4,7 @@ import aiohttp
from .. import entities
from .. import filter as filter_model
from ....core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
BAIDU_EXAMINE_URL = 'https://aip.baidubce.com/rest/2.0/solution/v1/text_censor/v2/user_defined?access_token={}'
BAIDU_EXAMINE_TOKEN_URL = 'https://aip.baidubce.com/oauth/2.0/token'
@@ -27,7 +26,7 @@ class BaiduCloudExamine(filter_model.ContentFilter):
) as resp:
return (await resp.json())['access_token']
async def process(self, query: core_entities.Query, message: str) -> entities.FilterResult:
async def process(self, query: pipeline_query.Query, message: str) -> entities.FilterResult:
async with aiohttp.ClientSession() as session:
async with session.post(
BAIDU_EXAMINE_URL.format(await self._get_token()),

View File

@@ -3,7 +3,7 @@ import re
from .. import filter as filter_model
from .. import entities
from ....core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@filter_model.filter_class('ban-word-filter')
@@ -13,7 +13,7 @@ class BanWordFilter(filter_model.ContentFilter):
async def initialize(self):
pass
async def process(self, query: core_entities.Query, message: str) -> entities.FilterResult:
async def process(self, query: pipeline_query.Query, message: str) -> entities.FilterResult:
found = False
for word in self.ap.sensitive_meta.data['words']:

View File

@@ -3,7 +3,7 @@ import re
from .. import entities
from .. import filter as filter_model
from ....core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@filter_model.filter_class('content-ignore')
@@ -16,7 +16,7 @@ class ContentIgnore(filter_model.ContentFilter):
entities.EnableStage.PRE,
]
async def process(self, query: core_entities.Query, message: str) -> entities.FilterResult:
async def process(self, query: pipeline_query.Query, message: str) -> entities.FilterResult:
if 'prefix' in query.pipeline_config['trigger']['ignore-rules']:
for rule in query.pipeline_config['trigger']['ignore-rules']['prefix']:
if message.startswith(rule):

View File

@@ -3,7 +3,10 @@ from __future__ import annotations
import asyncio
import traceback
from ..core import app, entities
from ..core import app
from ..core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
class Controller:
@@ -22,19 +25,19 @@ class Controller:
"""事件处理循环"""
try:
while True:
selected_query: entities.Query = None
selected_query: pipeline_query.Query = None
# Take a query
async with self.ap.query_pool:
queries: list[entities.Query] = self.ap.query_pool.queries
queries: list[pipeline_query.Query] = self.ap.query_pool.queries
for query in queries:
session = await self.ap.sess_mgr.get_session(query)
self.ap.logger.debug(f'Checking query {query} session {session}')
if not session.semaphore.locked():
if not session._semaphore.locked():
selected_query = query
await session.semaphore.acquire()
await session._semaphore.acquire()
break
@@ -46,7 +49,7 @@ class Controller:
if selected_query:
async def _process_query(selected_query: entities.Query):
async def _process_query(selected_query: pipeline_query.Query):
async with self.semaphore: # overall concurrency cap
# find pipeline
# Here firstly find the bot, then find the pipeline, in case the bot adapter's config is not the latest one.
@@ -59,7 +62,7 @@ class Controller:
await pipeline.run(selected_query)
async with self.ap.query_pool:
(await self.ap.sess_mgr.get_session(selected_query)).semaphore.release()
(await self.ap.sess_mgr.get_session(selected_query))._semaphore.release()
# Notify other coroutines that a new query can be processed
self.ap.query_pool.condition.notify_all()
@@ -68,8 +71,8 @@ class Controller:
kind='query',
name=f'query-{selected_query.query_id}',
scopes=[
entities.LifecycleControlScope.APPLICATION,
entities.LifecycleControlScope.PLATFORM,
core_entities.LifecycleControlScope.APPLICATION,
core_entities.LifecycleControlScope.PLATFORM,
],
)
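
The loop above enforces two levels of concurrency: a per-session semaphore keeps one query per session in flight, and self.semaphore caps the total. A standalone sketch of that pattern (worker, session_sem, and global_sem are illustrative names, not LangBot APIs):

import asyncio


async def worker(name: str, session_sem: asyncio.Semaphore, global_sem: asyncio.Semaphore):
    if session_sem.locked():
        print(f'{name}: session busy, skipped for now')
        return
    await session_sem.acquire()
    try:
        async with global_sem:          # overall concurrency cap
            await asyncio.sleep(0.1)    # stand-in for pipeline.run(query)
            print(f'{name}: processed')
    finally:
        session_sem.release()


async def main():
    session_sem = asyncio.Semaphore(1)  # one query per session at a time
    global_sem = asyncio.Semaphore(8)
    await asyncio.gather(*(worker(f'q{i}', session_sem, global_sem) for i in range(3)))


asyncio.run(main())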

View File

@@ -3,10 +3,10 @@ from __future__ import annotations
import enum
import typing
import pydantic.v1 as pydantic
from ..platform.types import message as platform_message
import pydantic
from ..core import entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
import langbot_plugin.api.entities.builtin.platform.message as platform_message
class ResultType(enum.Enum):
@@ -20,7 +20,7 @@ class ResultType(enum.Enum):
class StageProcessResult(pydantic.BaseModel):
result_type: ResultType
new_query: entities.Query
new_query: pipeline_query.Query
user_notice: typing.Optional[
typing.Union[

View File

@@ -5,10 +5,9 @@ import traceback
from . import strategy
from .. import stage, entities
from ...core import entities as core_entities
from ...platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
from ...utils import importutil
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
from . import strategies
importutil.import_modules_in_pkg(strategies)
@@ -67,8 +66,8 @@ class LongTextProcessStage(stage.PipelineStage):
await self.strategy_impl.initialize()
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
# Check if it contains non-Plain components
async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
# Check whether it contains non-Plain components
contains_non_plain = False
for msg in query.resp_message_chain[-1]:

View File

@@ -3,9 +3,9 @@ from __future__ import annotations
from .. import strategy as strategy_model
from ....core import entities as core_entities
from ....platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
import langbot_plugin.api.entities.builtin.platform.message as platform_message
ForwardMessageDiaplay = platform_message.ForwardMessageDiaplay
Forward = platform_message.Forward
@@ -13,7 +13,7 @@ Forward = platform_message.Forward
@strategy_model.strategy_class('forward')
class ForwardComponentStrategy(strategy_model.LongTextStrategy):
async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
async def process(self, message: str, query: pipeline_query.Query) -> list[platform_message.MessageComponent]:
display = ForwardMessageDiaplay(
title='Group chat history',
brief='[Chat history]',

View File

@@ -8,10 +8,10 @@ import re
from PIL import Image, ImageDraw, ImageFont
import functools
from ....platform.types import message as platform_message
from .. import strategy as strategy_model
from ....core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
import langbot_plugin.api.entities.builtin.platform.message as platform_message
@strategy_model.strategy_class('image')
@@ -27,7 +27,7 @@ class Text2ImageStrategy(strategy_model.LongTextStrategy):
encoding='utf-8',
)
async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
async def process(self, message: str, query: pipeline_query.Query) -> list[platform_message.MessageComponent]:
img_path = self.text_to_image(
text_str=message,
save_as='temp/{}.png'.format(int(time.time())),
@@ -131,7 +131,7 @@ class Text2ImageStrategy(strategy_model.LongTextStrategy):
text_str: str,
save_as='temp.png',
width=800,
query: core_entities.Query = None,
query: pipeline_query.Query = None,
):
text_str = text_str.replace('\t', ' ')

View File

@@ -4,8 +4,9 @@ import typing
from ...core import app
from ...core import entities as core_entities
from ...platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
preregistered_strategies: list[typing.Type[LongTextStrategy]] = []
@@ -49,8 +50,8 @@ class LongTextStrategy(metaclass=abc.ABCMeta):
pass
@abc.abstractmethod
async def process(self, message: str, query: core_entities.Query) -> list[platform_message.MessageComponent]:
"""Process long text
async def process(self, message: str, query: pipeline_query.Query) -> list[platform_message.MessageComponent]:
"""处理长文本
If the text length exceeds the threshold, this method will be called.

View File

@@ -1,10 +1,9 @@
from __future__ import annotations
from .. import stage, entities
from ...core import entities as core_entities
from . import truncator
from ...utils import importutil
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
from . import truncators
importutil.import_modules_in_pkg(truncators)
@@ -29,8 +28,8 @@ class ConversationMessageTruncator(stage.PipelineStage):
else:
raise ValueError(f'Unknown truncator: {use_method}')
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
"""Process"""
async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
"""处理"""
query = await self.trun.truncate(query)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)

View File

@@ -3,8 +3,8 @@ from __future__ import annotations
import typing
import abc
from ...core import entities as core_entities, app
from ...core import app
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
preregistered_truncators: list[typing.Type[Truncator]] = []
@@ -47,7 +47,7 @@ class Truncator(abc.ABC):
pass
@abc.abstractmethod
async def truncate(self, query: core_entities.Query) -> core_entities.Query:
async def truncate(self, query: pipeline_query.Query) -> pipeline_query.Query:
"""截断
一般只需要操作query.messages也可以扩展操作query.prompt, query.user_message。

View File

@@ -1,15 +1,15 @@
from __future__ import annotations
from .. import truncator
from ....core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@truncator.truncator_class('round')
class RoundTruncator(truncator.Truncator):
"""Truncate the conversation message chain to adapt to the LLM message length limit."""
async def truncate(self, query: core_entities.Query) -> core_entities.Query:
"""Truncate"""
async def truncate(self, query: pipeline_query.Query) -> pipeline_query.Query:
"""截断"""
max_round = query.pipeline_config['ai']['local-agent']['max-round']
temp_messages = []
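
Truncators are registered with the truncator_class decorator and only need to implement truncate. A hedged sketch of another strategy alongside the 'round' one above; the name 'tail' and the fixed limit are invented for illustration, while the decorator, base class, and signature come from the code shown here:

from __future__ import annotations

from .. import truncator
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query


@truncator.truncator_class('tail')
class TailTruncator(truncator.Truncator):
    """Keep only the most recent messages."""

    async def truncate(self, query: pipeline_query.Query) -> pipeline_query.Query:
        max_messages = 20  # illustrative fixed limit; not a real pipeline config key
        query.messages = query.messages[-max_messages:]
        return query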

View File

@@ -5,14 +5,18 @@ import traceback
import sqlalchemy
from ..core import app, entities
from ..core import app
from . import entities as pipeline_entities
from ..entity.persistence import pipeline as persistence_pipeline
from . import stage
from ..platform.types import message as platform_message, events as platform_events
from ..plugin import events
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.events as events
from ..utils import importutil
import langbot_plugin.api.entities.builtin.provider.session as provider_session
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
from . import (
resprule,
bansess,
@@ -75,17 +79,17 @@ class RuntimePipeline:
self.pipeline_entity = pipeline_entity
self.stage_containers = stage_containers
async def run(self, query: entities.Query):
async def run(self, query: pipeline_query.Query):
query.pipeline_config = self.pipeline_entity.config
await self.process_query(query)
async def _check_output(self, query: entities.Query, result: pipeline_entities.StageProcessResult):
async def _check_output(self, query: pipeline_query.Query, result: pipeline_entities.StageProcessResult):
"""检查输出"""
if result.user_notice:
# 处理str类型
if isinstance(result.user_notice, str):
result.user_notice = platform_message.MessageChain(platform_message.Plain(result.user_notice))
result.user_notice = platform_message.MessageChain([platform_message.Plain(text=result.user_notice)])
elif isinstance(result.user_notice, list):
result.user_notice = platform_message.MessageChain(*result.user_notice)
@@ -93,12 +97,20 @@ class RuntimePipeline:
query.message_event, platform_events.GroupMessage
):
result.user_notice.insert(0, platform_message.At(query.message_event.sender.id))
await query.adapter.reply_message(
message_source=query.message_event,
message=result.user_notice,
quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
)
if await query.adapter.is_stream_output_supported():
await query.adapter.reply_message_chunk(
message_source=query.message_event,
bot_message=query.resp_messages[-1],
message=result.user_notice,
quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
is_final=[msg.is_final for msg in query.resp_messages][0],
)
else:
await query.adapter.reply_message(
message_source=query.message_event,
message=result.user_notice,
quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
)
if result.debug_notice:
self.ap.logger.debug(result.debug_notice)
if result.console_notice:
@@ -109,7 +121,7 @@ class RuntimePipeline:
async def _execute_from_stage(
self,
stage_index: int,
query: entities.Query,
query: pipeline_query.Query,
):
"""从指定阶段开始执行,实现了责任链模式和基于生成器的阶段分叉功能。
@@ -136,7 +148,7 @@ class RuntimePipeline:
while i < len(self.stage_containers):
stage_container = self.stage_containers[i]
query.current_stage = stage_container # mark on the Query object
query.current_stage_name = stage_container.inst_name # mark on the Query object
result = stage_container.inst.process(query, stage_container.inst_name)
@@ -173,26 +185,26 @@ class RuntimePipeline:
i += 1
async def process_query(self, query: entities.Query):
async def process_query(self, query: pipeline_query.Query):
"""处理请求"""
try:
# ======== 触发 MessageReceived 事件 ========
event_type = (
events.PersonMessageReceived
if query.launcher_type == entities.LauncherTypes.PERSON
if query.launcher_type == provider_session.LauncherTypes.PERSON
else events.GroupMessageReceived
)
event_ctx = await self.ap.plugin_mgr.emit_event(
event=event_type(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
message_chain=query.message_chain,
query=query,
)
event_obj = event_type(
query=query,
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
message_chain=query.message_chain,
)
event_ctx = await self.ap.plugin_connector.emit_event(event_obj)
if event_ctx.is_prevented_default():
return
@@ -200,11 +212,12 @@ class RuntimePipeline:
await self._execute_from_stage(0, query)
except Exception as e:
inst_name = query.current_stage.inst_name if query.current_stage else 'unknown'
inst_name = query.current_stage_name if query.current_stage_name else 'unknown'
self.ap.logger.error(f'Error while processing query query_id={query.query_id} stage={inst_name} : {e}')
self.ap.logger.error(f'Traceback: {traceback.format_exc()}')
finally:
self.ap.logger.debug(f'Query {query.query_id} processed')
del self.ap.query_pool.cached_queries[query.query_id]
class PipelineManager:

View File

@@ -3,10 +3,11 @@ from __future__ import annotations
import asyncio
import typing
from ..core import entities
from ..platform import adapter as msadapter
from ..platform.types import message as platform_message
from ..platform.types import events as platform_events
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.provider.session as provider_session
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
class QueryPool:
@@ -16,7 +17,10 @@ class QueryPool:
pool_lock: asyncio.Lock
queries: list[entities.Query]
queries: list[pipeline_query.Query]
cached_queries: dict[int, pipeline_query.Query]
"""Cached queries, used for plugin backward api call, will be removed after the query completely processed"""
condition: asyncio.Condition
@@ -24,34 +28,38 @@ class QueryPool:
self.query_id_counter = 0
self.pool_lock = asyncio.Lock()
self.queries = []
self.cached_queries = {}
self.condition = asyncio.Condition(self.pool_lock)
async def add_query(
self,
bot_uuid: str,
launcher_type: entities.LauncherTypes,
launcher_type: provider_session.LauncherTypes,
launcher_id: typing.Union[int, str],
sender_id: typing.Union[int, str],
message_event: platform_events.MessageEvent,
message_chain: platform_message.MessageChain,
adapter: msadapter.MessagePlatformAdapter,
adapter: abstract_platform_adapter.AbstractMessagePlatformAdapter,
pipeline_uuid: typing.Optional[str] = None,
) -> entities.Query:
) -> pipeline_query.Query:
async with self.condition:
query = entities.Query(
query_id = self.query_id_counter
query = pipeline_query.Query(
bot_uuid=bot_uuid,
query_id=self.query_id_counter,
query_id=query_id,
launcher_type=launcher_type,
launcher_id=launcher_id,
sender_id=sender_id,
message_event=message_event,
message_chain=message_chain,
variables={},
resp_messages=[],
resp_message_chain=[],
adapter=adapter,
pipeline_uuid=pipeline_uuid,
)
self.queries.append(query)
self.cached_queries[query_id] = query
self.query_id_counter += 1
self.condition.notify_all()

View File

@@ -3,10 +3,10 @@ from __future__ import annotations
import datetime
from .. import stage, entities
from ...core import entities as core_entities
from ...provider import entities as llm_entities
from ...plugin import events
from ...platform.types import message as platform_message
from langbot_plugin.api.entities.builtin.provider import message as provider_message
import langbot_plugin.api.entities.events as events
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@stage.stage_class('PreProcessor')
@@ -26,7 +26,7 @@ class PreProcessor(stage.PipelineStage):
async def process(
self,
query: core_entities.Query,
query: pipeline_query.Query,
stage_inst_name: str,
) -> entities.StageProcessResult:
"""Process"""
@@ -49,80 +49,73 @@ class PreProcessor(stage.PipelineStage):
query.bot_uuid,
)
conversation.use_llm_model = llm_model
# Set query
# Set up the query
query.session = session
query.prompt = conversation.prompt.copy()
query.messages = conversation.messages.copy()
query.use_llm_model = llm_model
query.use_llm_model_uuid = llm_model.model_entity.uuid
if selected_runner == 'local-agent':
query.use_funcs = (
conversation.use_funcs if query.use_llm_model.model_entity.abilities.__contains__('func_call') else None
)
query.use_funcs = []
query.variables = {
if llm_model.model_entity.abilities.__contains__('func_call'):
query.use_funcs = await self.ap.tool_mgr.get_all_tools()
variables = {
'session_id': f'{query.session.launcher_type.value}_{query.session.launcher_id}',
'conversation_id': conversation.uuid,
'msg_create_time': (
int(query.message_event.time) if query.message_event.time else int(datetime.datetime.now().timestamp())
),
}
query.variables.update(variables)
# Check if this model supports vision, if not, remove all images
# TODO this checking should be performed in runner, and in this stage, the image should be reserved
if selected_runner == 'local-agent' and not query.use_llm_model.model_entity.abilities.__contains__('vision'):
if selected_runner == 'local-agent' and not llm_model.model_entity.abilities.__contains__('vision'):
for msg in query.messages:
if isinstance(msg.content, list):
for me in msg.content:
if me.type == 'image_url':
msg.content.remove(me)
content_list: list[llm_entities.ContentElement] = []
content_list: list[provider_message.ContentElement] = []
plain_text = ''
qoute_msg = query.pipeline_config['trigger'].get('misc', '').get('combine-quote-message')
# tidy the content_list
# combine all text content into one, and put it in the first position
for me in query.message_chain:
if isinstance(me, platform_message.Plain):
content_list.append(provider_message.ContentElement.from_text(me.text))
plain_text += me.text
elif isinstance(me, platform_message.Image):
if selected_runner != 'local-agent' or query.use_llm_model.model_entity.abilities.__contains__(
'vision'
):
if selected_runner != 'local-agent' or llm_model.model_entity.abilities.__contains__('vision'):
if me.base64 is not None:
content_list.append(llm_entities.ContentElement.from_image_base64(me.base64))
content_list.append(provider_message.ContentElement.from_image_base64(me.base64))
elif isinstance(me, platform_message.Quote) and qoute_msg:
for msg in me.origin:
if isinstance(msg, platform_message.Plain):
content_list.append(llm_entities.ContentElement.from_text(msg.text))
content_list.append(provider_message.ContentElement.from_text(msg.text))
elif isinstance(msg, platform_message.Image):
if selected_runner != 'local-agent' or query.use_llm_model.model_entity.abilities.__contains__(
'vision'
):
if selected_runner != 'local-agent' or llm_model.model_entity.abilities.__contains__('vision'):
if msg.base64 is not None:
content_list.append(llm_entities.ContentElement.from_image_base64(msg.base64))
content_list.insert(0, llm_entities.ContentElement.from_text(plain_text))
content_list.append(provider_message.ContentElement.from_image_base64(msg.base64))
query.variables['user_message_text'] = plain_text
query.user_message = llm_entities.Message(role='user', content=content_list)
# =========== Trigger event PromptPreProcessing
query.user_message = provider_message.Message(role='user', content=content_list)
# =========== Trigger the PromptPreProcessing event
event_ctx = await self.ap.plugin_mgr.emit_event(
event=events.PromptPreProcessing(
session_name=f'{query.session.launcher_type.value}_{query.session.launcher_id}',
default_prompt=query.prompt.messages,
prompt=query.messages,
query=query,
)
event = events.PromptPreProcessing(
session_name=f'{query.session.launcher_type.value}_{query.session.launcher_id}',
default_prompt=query.prompt.messages,
prompt=query.messages,
query=query,
)
event_ctx = await self.ap.plugin_connector.emit_event(event)
query.prompt.messages = event_ctx.event.default_prompt
query.messages = event_ctx.event.prompt

View File

@@ -3,8 +3,8 @@ from __future__ import annotations
import abc
from ...core import app
from ...core import entities as core_entities
from .. import entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
class MessageHandler(metaclass=abc.ABCMeta):
@@ -19,7 +19,7 @@ class MessageHandler(metaclass=abc.ABCMeta):
@abc.abstractmethod
async def handle(
self,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.StageProcessResult:
raise NotImplementedError

View File

@@ -1,18 +1,21 @@
from __future__ import annotations
import uuid
import typing
import traceback
from .. import handler
from ... import entities
from ....core import entities as core_entities
from ....provider import runner as runner_module
from ....plugin import events
from ....platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.events as events
from ....utils import importutil
from ....provider import runners
import langbot_plugin.api.entities.builtin.provider.session as provider_session
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
importutil.import_modules_in_pkg(runners)
@@ -20,33 +23,32 @@ importutil.import_modules_in_pkg(runners)
class ChatMessageHandler(handler.MessageHandler):
async def handle(
self,
query: core_entities.Query,
query: pipeline_query.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""Process"""
# Call API
# generator
"""处理"""
# API
# 生成器
# Trigger plugin event
# Trigger plugin event
event_class = (
events.PersonNormalMessageReceived
if query.launcher_type == core_entities.LauncherTypes.PERSON
if query.launcher_type == provider_session.LauncherTypes.PERSON
else events.GroupNormalMessageReceived
)
event_ctx = await self.ap.plugin_mgr.emit_event(
event=event_class(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
text_message=str(query.message_chain),
query=query,
)
event = event_class(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
text_message=str(query.message_chain),
query=query,
)
event_ctx = await self.ap.plugin_connector.emit_event(event)
if event_ctx.is_prevented_default():
if event_ctx.event.reply is not None:
mc = platform_message.MessageChain(event_ctx.event.reply)
query.resp_messages.append(mc)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
@@ -54,10 +56,14 @@ class ChatMessageHandler(handler.MessageHandler):
yield entities.StageProcessResult(result_type=entities.ResultType.INTERRUPT, new_query=query)
else:
if event_ctx.event.alter is not None:
# if isinstance(event_ctx.event, str): # Currently not considering multi-modal alter
# if isinstance(event_ctx.event, str): # Multi-modal alter is not considered for now
query.user_message.content = event_ctx.event.alter
text_length = 0
try:
is_stream = await query.adapter.is_stream_output_supported()
except AttributeError:
is_stream = False
try:
for r in runner_module.preregistered_runners:
@@ -65,22 +71,42 @@ class ChatMessageHandler(handler.MessageHandler):
runner = r(self.ap, query.pipeline_config)
break
else:
raise ValueError(f'Request runner not found: {query.pipeline_config["ai"]["runner"]["runner"]}')
raise ValueError(f'Request runner not found: {query.pipeline_config["ai"]["runner"]["runner"]}')
if is_stream:
resp_message_id = uuid.uuid4()
await query.adapter.create_message_card(str(resp_message_id), query.message_event)
async for result in runner.run(query):
result.resp_message_id = str(resp_message_id)
if query.resp_messages:
query.resp_messages.pop()
if query.resp_message_chain:
query.resp_message_chain.pop()
async for result in runner.run(query):
query.resp_messages.append(result)
query.resp_messages.append(result)
self.ap.logger.info(f'Streaming response for conversation ({query.query_id}): {self.cut_str(result.readable_str())}')
self.ap.logger.info(f'Response({query.query_id}): {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
if result.content is not None:
text_length += len(result.content)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:
async for result in runner.run(query):
query.resp_messages.append(result)
self.ap.logger.info(f'Response for conversation ({query.query_id}): {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
query.session.using_conversation.messages.append(query.user_message)
query.session.using_conversation.messages.extend(query.resp_messages)
except Exception as e:
self.ap.logger.error(f'Request failed({query.query_id}): {type(e).__name__} {str(e)}')
self.ap.logger.error(f'Request for conversation ({query.query_id}) failed: {type(e).__name__} {str(e)}')
traceback.print_exc()
hide_exception_info = query.pipeline_config['output']['misc']['hide-exception']

View File

@@ -4,16 +4,17 @@ import typing
from .. import handler
from ... import entities
from ....core import entities as core_entities
from ....provider import entities as llm_entities
from ....plugin import events
from ....platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.provider.message as provider_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.provider.session as provider_session
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
import langbot_plugin.api.entities.events as events
class CommandHandler(handler.MessageHandler):
async def handle(
self,
query: core_entities.Query,
query: pipeline_query.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""Process"""
@@ -28,23 +29,23 @@ class CommandHandler(handler.MessageHandler):
event_class = (
events.PersonCommandSent
if query.launcher_type == core_entities.LauncherTypes.PERSON
if query.launcher_type == provider_session.LauncherTypes.PERSON
else events.GroupCommandSent
)
event_ctx = await self.ap.plugin_mgr.emit_event(
event=event_class(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
command=spt[0],
params=spt[1:] if len(spt) > 1 else [],
text_message=str(query.message_chain),
is_admin=(privilege == 2),
query=query,
)
event = event_class(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
command=spt[0],
params=spt[1:] if len(spt) > 1 else [],
text_message=str(query.message_chain),
is_admin=(privilege == 2),
query=query,
)
event_ctx = await self.ap.plugin_connector.emit_event(event)
if event_ctx.is_prevented_default():
if event_ctx.event.reply is not None:
mc = platform_message.MessageChain(event_ctx.event.reply)
@@ -64,7 +65,7 @@ class CommandHandler(handler.MessageHandler):
async for ret in self.ap.cmd_mgr.execute(command_text=command_text, query=query, session=session):
if ret.error is not None:
query.resp_messages.append(
llm_entities.Message(
provider_message.Message(
role='command',
content=str(ret.error),
)
@@ -74,16 +75,16 @@ class CommandHandler(handler.MessageHandler):
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
elif ret.text is not None or ret.image_url is not None:
content: list[llm_entities.ContentElement] = []
content: list[provider_message.ContentElement] = []
if ret.text is not None:
content.append(llm_entities.ContentElement.from_text(ret.text))
content.append(provider_message.ContentElement.from_text(ret.text))
if ret.image_url is not None:
content.append(llm_entities.ContentElement.from_image_url(ret.image_url))
content.append(provider_message.ContentElement.from_image_url(ret.image_url))
query.resp_messages.append(
llm_entities.Message(
provider_message.Message(
role='command',
content=content,
)

View File

@@ -1,10 +1,10 @@
from __future__ import annotations
from ...core import entities as core_entities
from . import handler
from .handlers import chat, command
from .. import entities
from .. import stage
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@stage.stage_class('MessageProcessor')
@@ -30,7 +30,7 @@ class Processor(stage.PipelineStage):
async def process(
self,
query: core_entities.Query,
query: pipeline_query.Query,
stage_inst_name: str,
) -> entities.StageProcessResult:
"""Process"""

View File

@@ -2,7 +2,8 @@ from __future__ import annotations
import abc
import typing
from ...core import app, entities as core_entities
from ...core import app
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
preregistered_algos: list[typing.Type[ReteLimitAlgo]] = []
@@ -33,7 +34,7 @@ class ReteLimitAlgo(metaclass=abc.ABCMeta):
@abc.abstractmethod
async def require_access(
self,
query: core_entities.Query,
query: pipeline_query.Query,
launcher_type: str,
launcher_id: typing.Union[int, str],
) -> bool:
@@ -53,7 +54,7 @@ class ReteLimitAlgo(metaclass=abc.ABCMeta):
@abc.abstractmethod
async def release_access(
self,
query: core_entities.Query,
query: pipeline_query.Query,
launcher_type: str,
launcher_id: typing.Union[int, str],
):

View File

@@ -3,7 +3,7 @@ import asyncio
import time
import typing
from .. import algo
from ....core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
# Fixed window algorithm
@@ -32,7 +32,7 @@ class FixedWindowAlgo(algo.ReteLimitAlgo):
async def require_access(
self,
query: core_entities.Query,
query: pipeline_query.Query,
launcher_type: str,
launcher_id: typing.Union[int, str],
) -> bool:
@@ -91,7 +91,7 @@ class FixedWindowAlgo(algo.ReteLimitAlgo):
async def release_access(
self,
query: core_entities.Query,
query: pipeline_query.Query,
launcher_type: str,
launcher_id: typing.Union[int, str],
):

View File

@@ -4,9 +4,10 @@ import typing
from .. import entities, stage
from . import algo
from ...core import entities as core_entities
from ...utils import importutil
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
from . import algos
importutil.import_modules_in_pkg(algos)
@@ -39,7 +40,7 @@ class RateLimit(stage.PipelineStage):
async def process(
self,
query: core_entities.Query,
query: pipeline_query.Query,
stage_inst_name: str,
) -> typing.Union[
entities.StageProcessResult,

View File

@@ -4,18 +4,19 @@ import random
import asyncio
from ...platform.types import events as platform_events
from ...platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.provider.message as provider_message
from .. import stage, entities
from ...core import entities as core_entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@stage.stage_class('SendResponseBackStage')
class SendResponseBackStage(stage.PipelineStage):
"""发送响应消息"""
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
"""处理"""
random_range = (
@@ -36,10 +37,22 @@ class SendResponseBackStage(stage.PipelineStage):
quote_origin = query.pipeline_config['output']['misc']['quote-origin']
await query.adapter.reply_message(
message_source=query.message_event,
message=query.resp_message_chain[-1],
quote_origin=quote_origin,
)
has_chunks = any(isinstance(msg, provider_message.MessageChunk) for msg in query.resp_messages)
# TODO compatibility issue between commands and streaming output
if await query.adapter.is_stream_output_supported() and has_chunks:
is_final = [msg.is_final for msg in query.resp_messages][0]
await query.adapter.reply_message_chunk(
message_source=query.message_event,
bot_message=query.resp_messages[-1],
message=query.resp_message_chain[-1],
quote_origin=quote_origin,
is_final=is_final,
)
else:
await query.adapter.reply_message(
message_source=query.message_event,
message=query.resp_message_chain[-1],
quote_origin=quote_origin,
)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)

View File

@@ -1,6 +1,6 @@
import pydantic.v1 as pydantic
import pydantic
from ...platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
class RuleJudgeResult(pydantic.BaseModel):

View File

@@ -4,9 +4,10 @@ from __future__ import annotations
from . import rule
from .. import stage, entities
from ...core import entities as core_entities
from ...utils import importutil
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
from . import rules
importutil.import_modules_in_pkg(rules)
@@ -32,7 +33,7 @@ class GroupRespondRuleCheckStage(stage.PipelineStage):
await rule_inst.initialize()
self.rule_matchers.append(rule_inst)
async def process(self, query: core_entities.Query, stage_inst_name: str) -> entities.StageProcessResult:
async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
if query.launcher_type.value != 'group': # only handle group messages
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)

View File

@@ -2,10 +2,11 @@ from __future__ import annotations
import abc
import typing
from ...core import app, entities as core_entities
from ...core import app
from . import entities
from ...platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
preregisetered_rules: list[typing.Type[GroupRespondRule]] = []
@@ -39,7 +40,7 @@ class GroupRespondRule(metaclass=abc.ABCMeta):
message_text: str,
message_chain: platform_message.MessageChain,
rule_dict: dict,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.RuleJudgeResult:
"""判断消息是否匹配规则"""
raise NotImplementedError

View File

@@ -3,8 +3,8 @@ from __future__ import annotations
from .. import rule as rule_model
from .. import entities
from ....core import entities as core_entities
from ....platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@rule_model.rule_class('at-bot')
@@ -14,19 +14,28 @@ class AtBotRule(rule_model.GroupRespondRule):
message_text: str,
message_chain: platform_message.MessageChain,
rule_dict: dict,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.RuleJudgeResult:
if message_chain.has(platform_message.At(query.adapter.bot_account_id)) and rule_dict['at']:
message_chain.remove(platform_message.At(query.adapter.bot_account_id))
def remove_at(message_chain: platform_message.MessageChain):
for component in message_chain.root:
if isinstance(component, platform_message.At) and component.target == query.adapter.bot_account_id:
message_chain.remove(component)
break
if message_chain.has(
platform_message.At(query.adapter.bot_account_id)
): # a reply contains the At twice; check and remove the duplicate
message_chain.remove(platform_message.At(query.adapter.bot_account_id))
remove_at(message_chain)
remove_at(message_chain) # a reply contains the At twice; remove the duplicate
return entities.RuleJudgeResult(
matching=True,
replacement=message_chain,
)
# if message_chain.has(platform_message.At(query.adapter.bot_account_id)) and rule_dict['at']:
# message_chain.remove(platform_message.At(query.adapter.bot_account_id))
# if message_chain.has(
# platform_message.At(query.adapter.bot_account_id)
# ): # a reply contains the At twice; check and remove the duplicate
# message_chain.remove(platform_message.At(query.adapter.bot_account_id))
# return entities.RuleJudgeResult(
# matching=True,
# replacement=message_chain,
# )
return entities.RuleJudgeResult(matching=False, replacement=message_chain)

View File

@@ -1,7 +1,7 @@
from .. import rule as rule_model
from .. import entities
from ....core import entities as core_entities
from ....platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@rule_model.rule_class('prefix')
@@ -11,7 +11,7 @@ class PrefixRule(rule_model.GroupRespondRule):
message_text: str,
message_chain: platform_message.MessageChain,
rule_dict: dict,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.RuleJudgeResult:
prefixes = rule_dict['prefix']

View File

@@ -3,8 +3,8 @@ import random
from .. import rule as rule_model
from .. import entities
from ....core import entities as core_entities
from ....platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@rule_model.rule_class('random')
@@ -14,7 +14,7 @@ class RandomRespRule(rule_model.GroupRespondRule):
message_text: str,
message_chain: platform_message.MessageChain,
rule_dict: dict,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.RuleJudgeResult:
random_rate = rule_dict['random']

View File

@@ -3,8 +3,8 @@ import re
from .. import rule as rule_model
from .. import entities
from ....core import entities as core_entities
from ....platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
@rule_model.rule_class('regexp')
@@ -14,7 +14,7 @@ class RegExpRule(rule_model.GroupRespondRule):
message_text: str,
message_chain: platform_message.MessageChain,
rule_dict: dict,
query: core_entities.Query,
query: pipeline_query.Query,
) -> entities.RuleJudgeResult:
regexps = rule_dict['regexp']

View File

@@ -3,8 +3,9 @@ from __future__ import annotations
import abc
import typing
from ..core import app, entities as core_entities
from ..core import app
from . import entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
preregistered_stages: dict[str, type[PipelineStage]] = {}
@@ -33,7 +34,7 @@ class PipelineStage(metaclass=abc.ABCMeta):
@abc.abstractmethod
async def process(
self,
query: core_entities.Query,
query: pipeline_query.Query,
stage_inst_name: str,
) -> typing.Union[
entities.StageProcessResult,
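
Stages follow the same registration pattern throughout this diff: stage_class registers the class under a name, initialize receives the pipeline config, and process returns a StageProcessResult. A hedged sketch of a minimal pass-through stage (the name NoopStage and its placement in a subpackage of the pipeline package are invented for illustration):

from __future__ import annotations

from .. import stage, entities
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query


@stage.stage_class('NoopStage')
class NoopStage(stage.PipelineStage):
    """Forwards the query unchanged to the next stage."""

    async def initialize(self, pipeline_config: dict):
        pass

    async def process(self, query: pipeline_query.Query, stage_inst_name: str) -> entities.StageProcessResult:
        return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)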

View File

@@ -2,12 +2,12 @@ from __future__ import annotations
import typing
from ...core import entities as core_entities
from .. import entities
from .. import stage
from ...plugin import events
from ...platform.types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.pipeline.query as pipeline_query
import langbot_plugin.api.entities.events as events
@stage.stage_class('ResponseWrapper')
@@ -25,7 +25,7 @@ class ResponseWrapper(stage.PipelineStage):
async def process(
self,
query: core_entities.Query,
query: pipeline_query.Query,
stage_inst_name: str,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""处理"""
@@ -58,21 +58,22 @@ class ResponseWrapper(stage.PipelineStage):
reply_text = str(result.get_content_platform_message_chain())
# ============= Trigger plugin event ===============
event_ctx = await self.ap.plugin_mgr.emit_event(
event=events.NormalMessageResponded(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
session=session,
prefix='',
response_text=reply_text,
finish_reason='stop',
funcs_called=[fc.function.name for fc in result.tool_calls]
if result.tool_calls is not None
else [],
query=query,
)
event = events.NormalMessageResponded(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
session=session,
prefix='',
response_text=reply_text,
finish_reason='stop',
funcs_called=[fc.function.name for fc in result.tool_calls]
if result.tool_calls is not None
else [],
query=query,
)
event_ctx = await self.ap.plugin_connector.emit_event(event)
if event_ctx.is_prevented_default():
yield entities.StageProcessResult(
result_type=entities.ResultType.INTERRUPT,
@@ -96,26 +97,26 @@ class ResponseWrapper(stage.PipelineStage):
reply_text = f'调用函数 {".".join(function_names)}...'
query.resp_message_chain.append(
platform_message.MessageChain([platform_message.Plain(reply_text)])
platform_message.MessageChain([platform_message.Plain(text=reply_text)])
)
if query.pipeline_config['output']['misc']['track-function-calls']:
event_ctx = await self.ap.plugin_mgr.emit_event(
event=events.NormalMessageResponded(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
session=session,
prefix='',
response_text=reply_text,
finish_reason='stop',
funcs_called=[fc.function.name for fc in result.tool_calls]
if result.tool_calls is not None
else [],
query=query,
)
event = events.NormalMessageResponded(
launcher_type=query.launcher_type.value,
launcher_id=query.launcher_id,
sender_id=query.sender_id,
session=session,
prefix='',
response_text=reply_text,
finish_reason='stop',
funcs_called=[fc.function.name for fc in result.tool_calls]
if result.tool_calls is not None
else [],
query=query,
)
event_ctx = await self.ap.plugin_connector.emit_event(event)
if event_ctx.is_prevented_default():
yield entities.StageProcessResult(
result_type=entities.ResultType.INTERRUPT,
@@ -124,12 +125,12 @@ class ResponseWrapper(stage.PipelineStage):
else:
if event_ctx.event.reply is not None:
query.resp_message_chain.append(
platform_message.MessageChain(event_ctx.event.reply)
platform_message.MessageChain(text=event_ctx.event.reply)
)
else:
query.resp_message_chain.append(
platform_message.MessageChain([platform_message.Plain(reply_text)])
platform_message.MessageChain([platform_message.Plain(text=reply_text)])
)
yield entities.StageProcessResult(
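The `ResponseWrapper` hunks above show the new dispatch path: the event object is built first, then handed to `self.ap.plugin_connector.emit_event(...)` instead of the old `plugin_mgr.emit_event(event=...)` call. A condensed sketch of that pattern follows; the helper function and its parameters are illustrative, with `ap`, `query`, and `session` assumed to come from the surrounding stage context.

```python
# Condensed illustration of the plugin-connector dispatch shown above.
import langbot_plugin.api.entities.events as events


async def emit_responded(ap, query, session, reply_text: str) -> bool:
    event = events.NormalMessageResponded(
        launcher_type=query.launcher_type.value,
        launcher_id=query.launcher_id,
        sender_id=query.sender_id,
        session=session,
        prefix='',
        response_text=reply_text,
        finish_reason='stop',
        funcs_called=[],
        query=query,
    )
    # Events now travel through the plugin connector rather than the in-process plugin manager.
    event_ctx = await ap.plugin_connector.emit_event(event)
    return event_ctx.is_prevented_default()
```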

View File

@@ -1,160 +0,0 @@
from __future__ import annotations
# MessageSource的适配器
import typing
import abc
from ..core import app
from .types import message as platform_message
from .types import events as platform_events
from .logger import EventLogger
class MessagePlatformAdapter(metaclass=abc.ABCMeta):
"""消息平台适配器基类"""
name: str
bot_account_id: int
"""机器人账号ID需要在初始化时设置"""
config: dict
ap: app.Application
logger: EventLogger
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
"""初始化适配器
Args:
config (dict): 对应的配置
ap (app.Application): 应用上下文
"""
self.config = config
self.ap = ap
self.logger = logger
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
"""主动发送消息
Args:
target_type (str): 目标类型,`person`或`group`
target_id (str): 目标ID
message (platform.types.MessageChain): 消息链
"""
raise NotImplementedError
async def reply_message(
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False,
):
"""回复消息
Args:
message_source (platform.types.MessageEvent): 消息源事件
message (platform.types.MessageChain): 消息链
quote_origin (bool, optional): 是否引用原消息. Defaults to False.
"""
raise NotImplementedError
async def is_muted(self, group_id: int) -> bool:
"""获取账号是否在指定群被禁言"""
raise NotImplementedError
def register_listener(
self,
event_type: typing.Type[platform_message.Event],
callback: typing.Callable[[platform_message.Event, MessagePlatformAdapter], None],
):
"""注册事件监听器
Args:
event_type (typing.Type[platform.types.Event]): 事件类型
callback (typing.Callable[[platform.types.Event], None]): 回调函数,接收一个参数,为事件
"""
raise NotImplementedError
def unregister_listener(
self,
event_type: typing.Type[platform_message.Event],
callback: typing.Callable[[platform_message.Event, MessagePlatformAdapter], None],
):
"""注销事件监听器
Args:
event_type (typing.Type[platform.types.Event]): 事件类型
callback (typing.Callable[[platform.types.Event], None]): 回调函数,接收一个参数,为事件
"""
raise NotImplementedError
async def run_async(self):
"""异步运行"""
raise NotImplementedError
async def kill(self) -> bool:
"""关闭适配器
Returns:
bool: 是否成功关闭热重载时若此函数返回False则不会重载MessageSource底层
"""
raise NotImplementedError
class MessageConverter:
"""消息链转换器基类"""
@staticmethod
def yiri2target(message_chain: platform_message.MessageChain):
"""将源平台消息链转换为目标平台消息链
Args:
message_chain (platform.types.MessageChain): 源平台消息链
Returns:
typing.Any: 目标平台消息链
"""
raise NotImplementedError
@staticmethod
def target2yiri(message_chain: typing.Any) -> platform_message.MessageChain:
"""将目标平台消息链转换为源平台消息链
Args:
message_chain (typing.Any): 目标平台消息链
Returns:
platform.types.MessageChain: 源平台消息链
"""
raise NotImplementedError
class EventConverter:
"""事件转换器基类"""
@staticmethod
def yiri2target(event: typing.Type[platform_message.Event]):
"""将源平台事件转换为目标平台事件
Args:
event (typing.Type[platform.types.Event]): 源平台事件
Returns:
typing.Any: 目标平台事件
"""
raise NotImplementedError
@staticmethod
def target2yiri(event: typing.Any) -> platform_message.Event:
"""将目标平台事件的调用参数转换为源平台的事件参数对象
Args:
event (typing.Any): 目标平台事件
Returns:
typing.Type[platform.types.Event]: 源平台事件
"""
raise NotImplementedError
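The file above (the local `MessagePlatformAdapter` base plus its message and event converters) is deleted outright; the adapter hunks further down rebase every platform adapter onto `langbot_plugin.api.definition.abstract.platform.adapter`. A skeleton of an adapter written against that abstract base might look like the following; the base classes and constructor signature are inferred from the aiocqhttp and Lark hunks below, the method names follow the removed local base and are assumed to carry over, and the bodies are placeholders.

```python
# Hypothetical adapter skeleton against the new abstract base.
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_logger
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.message as platform_message


class ExampleAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
    def __init__(self, config: dict, logger: abstract_platform_logger.AbstractEventLogger):
        # The application context (ap) is no longer injected through the constructor.
        super().__init__(config=config, logger=logger)

    async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
        raise NotImplementedError  # call the platform SDK here

    async def reply_message(
        self,
        message_source: platform_events.MessageEvent,
        message: platform_message.MessageChain,
        quote_origin: bool = False,
    ):
        raise NotImplementedError

    async def run_async(self):
        raise NotImplementedError

    async def kill(self) -> bool:
        return True
```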

View File

@@ -1,14 +0,0 @@
apiVersion: v1
kind: ComponentTemplate
metadata:
name: MessagePlatformAdapter
label:
en_US: Message Platform Adapter
zh_Hans: 消息平台适配器模板类
spec:
type:
- python
execution:
python:
path: ./adapter.py
attr: MessagePlatformAdapter

View File

@@ -1,15 +1,10 @@
from __future__ import annotations
import sys
import asyncio
import traceback
import sqlalchemy
# FriendMessage, Image, MessageChain, Plain
from . import adapter as msadapter
from ..core import app, entities as core_entities, taskmgr
from .types import events as platform_events, message as platform_message
from ..discover import engine
@@ -19,10 +14,10 @@ from ..entity.errors import platform as platform_errors
from .logger import EventLogger
# 处理 3.4 移除了 YiriMirai 之后,插件的兼容性问题
from . import types as mirai
sys.modules['mirai'] = mirai
import langbot_plugin.api.entities.builtin.provider.session as provider_session
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
class RuntimeBot:
@@ -34,7 +29,7 @@ class RuntimeBot:
enable: bool
adapter: msadapter.MessagePlatformAdapter
adapter: abstract_platform_adapter.AbstractMessagePlatformAdapter
task_wrapper: taskmgr.TaskWrapper
@@ -46,7 +41,7 @@ class RuntimeBot:
self,
ap: app.Application,
bot_entity: persistence_bot.Bot,
adapter: msadapter.MessagePlatformAdapter,
adapter: abstract_platform_adapter.AbstractMessagePlatformAdapter,
logger: EventLogger,
):
self.ap = ap
@@ -59,7 +54,7 @@ class RuntimeBot:
async def initialize(self):
async def on_friend_message(
event: platform_events.FriendMessage,
adapter: msadapter.MessagePlatformAdapter,
adapter: abstract_platform_adapter.AbstractMessagePlatformAdapter,
):
image_components = [
component for component in event.message_chain if isinstance(component, platform_message.Image)
@@ -73,7 +68,7 @@ class RuntimeBot:
await self.ap.query_pool.add_query(
bot_uuid=self.bot_entity.uuid,
launcher_type=core_entities.LauncherTypes.PERSON,
launcher_type=provider_session.LauncherTypes.PERSON,
launcher_id=event.sender.id,
sender_id=event.sender.id,
message_event=event,
@@ -84,7 +79,7 @@ class RuntimeBot:
async def on_group_message(
event: platform_events.GroupMessage,
adapter: msadapter.MessagePlatformAdapter,
adapter: abstract_platform_adapter.AbstractMessagePlatformAdapter,
):
image_components = [
component for component in event.message_chain if isinstance(component, platform_message.Image)
@@ -98,7 +93,7 @@ class RuntimeBot:
await self.ap.query_pool.add_query(
bot_uuid=self.bot_entity.uuid,
launcher_type=core_entities.LauncherTypes.GROUP,
launcher_type=provider_session.LauncherTypes.GROUP,
launcher_id=event.group.id,
sender_id=event.sender.id,
message_event=event,
@@ -120,8 +115,10 @@ class RuntimeBot:
if isinstance(e, asyncio.CancelledError):
self.task_context.set_current_action('Exited.')
return
traceback_str = traceback.format_exc()
self.task_context.set_current_action('Exited with error.')
await self.logger.error(f'平台适配器运行出错:\n{e}\n{traceback.format_exc()}')
await self.logger.error(f'平台适配器运行出错:\n{e}\n{traceback_str}')
self.task_wrapper = self.ap.task_mgr.create_task(
exception_wrapper(),
@@ -151,7 +148,7 @@ class PlatformManager:
adapter_components: list[engine.Component]
adapter_dict: dict[str, type[msadapter.MessagePlatformAdapter]]
adapter_dict: dict[str, type[abstract_platform_adapter.AbstractMessagePlatformAdapter]]
def __init__(self, ap: app.Application = None):
self.ap = ap
@@ -161,7 +158,7 @@ class PlatformManager:
async def initialize(self):
self.adapter_components = self.ap.discover.get_components_by_kind('MessagePlatformAdapter')
adapter_dict: dict[str, type[msadapter.MessagePlatformAdapter]] = {}
adapter_dict: dict[str, type[abstract_platform_adapter.AbstractMessagePlatformAdapter]] = {}
for component in self.adapter_components:
adapter_dict[component.metadata.name] = component.get_python_component_class()
self.adapter_dict = adapter_dict
@@ -172,8 +169,9 @@ class PlatformManager:
webchat_logger = EventLogger(name='webchat-adapter', ap=self.ap)
webchat_adapter_inst = webchat_adapter_class(
{},
self.ap,
webchat_logger,
ap=self.ap,
is_stream=False,
)
self.webchat_proxy_bot = RuntimeBot(
@@ -193,7 +191,7 @@ class PlatformManager:
await self.load_bots_from_db()
def get_running_adapters(self) -> list[msadapter.MessagePlatformAdapter]:
def get_running_adapters(self) -> list[abstract_platform_adapter.AbstractMessagePlatformAdapter]:
return [bot.adapter for bot in self.bots if bot.enable]
async def load_bots_from_db(self):
@@ -231,7 +229,6 @@ class PlatformManager:
adapter_inst = self.adapter_dict[bot_entity.adapter](
bot_entity.adapter_config,
self.ap,
logger,
)
@@ -274,43 +271,6 @@ class PlatformManager:
return component
return None
async def write_back_config(
self,
adapter_name: str,
adapter_inst: msadapter.MessagePlatformAdapter,
config: dict,
):
# index = -2
# for i, adapter in enumerate(self.adapters):
# if adapter == adapter_inst:
# index = i
# break
# if index == -2:
# raise Exception('平台适配器未找到')
# # 只修改启用的适配器
# real_index = -1
# for i, adapter in enumerate(self.ap.platform_cfg.data['platform-adapters']):
# if adapter['enable']:
# index -= 1
# if index == -1:
# real_index = i
# break
# new_cfg = {
# 'adapter': adapter_name,
# 'enable': True,
# **config
# }
# self.ap.platform_cfg.data['platform-adapters'][real_index] = new_cfg
# await self.ap.platform_cfg.dump_config()
# TODO implement this
pass
async def run(self):
# This method will only be called when the application launching
await self.webchat_proxy_bot.run()
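In the `PlatformManager` hunks above, adapters are typed against the new abstract base and incoming messages are enqueued with `provider_session.LauncherTypes` instead of `core_entities.LauncherTypes`. A trimmed handler sketch follows; the real handlers are closures inside `RuntimeBot.initialize`, `self` stands in for the `RuntimeBot` instance, and the remaining `add_query` arguments are intentionally omitted.

```python
# Trimmed illustration of enqueueing a private message under the new enum.
import langbot_plugin.api.entities.builtin.provider.session as provider_session
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter


async def on_friend_message(
    self,
    event: platform_events.FriendMessage,
    adapter: abstract_platform_adapter.AbstractMessagePlatformAdapter,
):
    await self.ap.query_pool.add_query(
        bot_uuid=self.bot_entity.uuid,
        launcher_type=provider_session.LauncherTypes.PERSON,  # was core_entities.LauncherTypes.PERSON
        launcher_id=event.sender.id,
        sender_id=event.sender.id,
        message_event=event,
        # ...adapter / message chain arguments omitted in this sketch
    )
```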

View File

@@ -9,7 +9,8 @@ import traceback
import uuid
from ..core import app
from .types import message as platform_message
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_event_logger
class EventLogLevel(enum.Enum):
@@ -55,7 +56,7 @@ MAX_LOG_COUNT = 200
DELETE_COUNT_PER_TIME = 50
class EventLogger:
class EventLogger(abstract_platform_event_logger.AbstractEventLogger):
"""used for logging bot events"""
ap: app.Application

View File

@@ -5,17 +5,17 @@ import traceback
import datetime
import aiocqhttp
import pydantic
from .. import adapter
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.entities as platform_entities
from ...utils import image
from ..logger import EventLogger
import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_logger
class AiocqhttpMessageConverter(adapter.MessageConverter):
class AiocqhttpMessageConverter(abstract_platform_adapter.AbstractMessageConverter):
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain,
@@ -270,16 +270,17 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
)
yiri_msg_list.append(reply_msg)
# 这里下载所有文件会导致下载文件过多,暂时不下载
# elif msg.type == 'file':
# # file_name = msg.data['file']
# file_id = msg.data['file_id']
# file_data = await bot.get_file(file_id=file_id)
# file_name = file_data.get('file_name')
# file_path = file_data.get('file')
# file_url = file_data.get('file_url')
# file_size = file_data.get('file_size')
# yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size))
elif msg.type == 'file':
pass
# file_name = msg.data['file']
# file_id = msg.data['file_id']
# file_data = await bot.get_file(file_id=file_id)
# file_name = file_data.get('file_name')
# file_path = file_data.get('file')
# _ = file_path
# file_url = file_data.get('file_url')
# file_size = file_data.get('file_size')
# yiri_msg_list.append(platform_message.File(id=file_id, name=file_name,url=file_url,size=file_size))
elif msg.type == 'face':
face_id = msg.data['id']
face_name = msg.data['raw']['faceText']
@@ -298,7 +299,7 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
return chain
class AiocqhttpEventConverter(adapter.EventConverter):
class AiocqhttpEventConverter(abstract_platform_adapter.AbstractEventConverter):
@staticmethod
async def yiri2target(event: platform_events.MessageEvent, bot_account_id: int):
return event.source_platform_object
@@ -348,23 +349,19 @@ class AiocqhttpEventConverter(adapter.EventConverter):
)
class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
bot: aiocqhttp.CQHttp
bot_account_id: int
class AiocqhttpAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
bot: aiocqhttp.CQHttp = pydantic.Field(exclude=True, default_factory=aiocqhttp.CQHttp)
message_converter: AiocqhttpMessageConverter = AiocqhttpMessageConverter()
event_converter: AiocqhttpEventConverter = AiocqhttpEventConverter()
config: dict
ap: app.Application
on_websocket_connection_event_cache: typing.List[typing.Callable[[aiocqhttp.Event], None]] = []
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.logger = logger
def __init__(self, config: dict, logger: abstract_platform_logger.AbstractEventLogger):
super().__init__(
config=config,
logger=logger,
)
async def shutdown_trigger_placeholder():
while True:
@@ -372,7 +369,6 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
self.config['shutdown_trigger'] = shutdown_trigger_placeholder
self.ap = ap
self.on_websocket_connection_event_cache = []
if 'access-token' in config:
@@ -408,7 +404,9 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
callback: typing.Callable[
[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None
],
):
async def on_message(event: aiocqhttp.Event):
self.bot_account_id = event.self_id
@@ -439,7 +437,9 @@ class AiocqhttpAdapter(adapter.MessagePlatformAdapter):
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
callback: typing.Callable[
[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None
],
):
return super().unregister_listener(event_type, callback)
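The aiocqhttp hunk above also suggests the adapters are now pydantic models: runtime objects such as the `aiocqhttp.CQHttp` instance are declared with `pydantic.Field(exclude=True, ...)` so they stay out of any serialized form of the adapter. A generic pydantic sketch of that pattern follows; it is not LangBot code and assumes pydantic v2.

```python
# Generic illustration of keeping runtime-only objects out of a model's serialized output.
import pydantic


class ExampleRuntimeModel(pydantic.BaseModel):
    model_config = pydantic.ConfigDict(arbitrary_types_allowed=True)

    config: dict
    # Runtime clients are excluded from dumps, mirroring `bot: CQHttp = Field(exclude=True, ...)`.
    client: object = pydantic.Field(default=None, exclude=True)


m = ExampleRuntimeModel(config={'access-token': 'xyz'}, client=object())
assert 'client' not in m.model_dump()  # the excluded field never reaches serialized output
```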

View File

@@ -1,18 +1,16 @@
import traceback
import typing
from libs.dingtalk_api.dingtalkevent import DingTalkEvent
from pkg.platform.types import message as platform_message
from pkg.platform.adapter import MessagePlatformAdapter
from .. import adapter
from ...core import app
from ..types import events as platform_events
from ..types import entities as platform_entities
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.entities as platform_entities
from libs.dingtalk_api.api import DingTalkClient
import datetime
from ..logger import EventLogger
class DingTalkMessageConverter(adapter.MessageConverter):
class DingTalkMessageConverter(abstract_platform_adapter.AbstractMessageConverter):
@staticmethod
async def yiri2target(message_chain: platform_message.MessageChain):
content = ''
@@ -48,7 +46,7 @@ class DingTalkMessageConverter(adapter.MessageConverter):
return chain
class DingTalkEventConverter(adapter.EventConverter):
class DingTalkEventConverter(abstract_platform_adapter.AbstractEventConverter):
@staticmethod
async def yiri2target(event: platform_events.MessageEvent):
return event.source_platform_object
@@ -92,18 +90,22 @@ class DingTalkEventConverter(adapter.EventConverter):
)
class DingTalkAdapter(adapter.MessagePlatformAdapter):
class DingTalkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
bot: DingTalkClient
ap: app.Application
bot_account_id: str
message_converter: DingTalkMessageConverter = DingTalkMessageConverter()
event_converter: DingTalkEventConverter = DingTalkEventConverter()
config: dict
card_instance_id_dict: (
dict # 回复卡片消息字典key为消息idvalue为回复卡片实例id用于在流式消息时判断是否发送到指定卡片
)
seq: int # 消息顺序直接以seq作为标识
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
def __init__(self, config: dict, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.card_instance_id_dict = {}
# self.seq = 1
required_keys = [
'client_id',
'client_secret',
@@ -139,6 +141,33 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
content, at = await DingTalkMessageConverter.yiri2target(message)
await self.bot.send_message(content, incoming_message, at)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
# event = await DingTalkEventConverter.yiri2target(
# message_source,
# )
# incoming_message = event.incoming_message
# msg_id = incoming_message.message_id
message_id = bot_message.resp_message_id
msg_seq = bot_message.msg_sequence
if (msg_seq - 1) % 8 == 0 or is_final:
content, at = await DingTalkMessageConverter.yiri2target(message)
card_instance, card_instance_id = self.card_instance_id_dict[message_id]
# print(card_instance_id)
await self.bot.send_card_message(card_instance, card_instance_id, content, is_final)
if is_final and bot_message.tool_calls is None:
# self.seq = 1 # 消息回复结束之后重置seq
self.card_instance_id_dict.pop(message_id) # 消息回复结束之后删除卡片实例id
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
content = await DingTalkMessageConverter.yiri2target(message)
if target_type == 'person':
@@ -146,10 +175,26 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
if target_type == 'group':
await self.bot.send_proactive_message_to_group(target_id, content)
async def is_stream_output_supported(self) -> bool:
is_stream = False
if self.config.get('enable-stream-reply', None):
is_stream = True
return is_stream
async def create_message_card(self, message_id, event):
card_template_id = self.config['card_template_id']
incoming_message = event.source_platform_object.incoming_message
# message_id = incoming_message.message_id
card_instance, card_instance_id = await self.bot.create_and_card(card_template_id, incoming_message)
self.card_instance_id_dict[message_id] = (card_instance, card_instance_id)
return True
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
callback: typing.Callable[
[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None
],
):
async def on_message(event: DingTalkEvent):
try:
@@ -174,6 +219,8 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
async def unregister_listener(
self,
event_type: type,
callback: typing.Callable[[platform_events.Event, MessagePlatformAdapter], None],
callback: typing.Callable[
[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None
],
):
return super().unregister_listener(event_type, callback)

View File

@@ -46,6 +46,23 @@ spec:
type: boolean
required: false
default: true
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用钉钉卡片流式回复模式
description:
en_US: If enabled, the bot will reply using DingTalk card streaming mode
zh_Hans: 如果启用,将使用钉钉卡片流式方式来回复内容
type: boolean
required: true
default: false
- name: card_template_id
label:
en_US: card template id
zh_Hans: 卡片模板ID
type: string
required: true
default: "填写你的卡片template_id"
execution:
python:
path: ./dingtalk.py
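With the two fields added to the spec above, a DingTalk adapter configuration that opts into the card streaming path would look roughly like the following sketch; only keys visible in this diff are shown, other required keys are omitted, and all values are placeholders.

```python
# Hypothetical adapter config fragment; placeholder values throughout.
dingtalk_adapter_config = {
    'client_id': '<your-client-id>',
    'client_secret': '<your-client-secret>',
    'enable-stream-reply': True,        # switch replies onto the DingTalk card streaming path
    'card_template_id': '<your-card-template-id>',
}
```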

File diff suppressed because it is too large

View File

@@ -17,13 +17,14 @@ import aiohttp
import lark_oapi.ws.exception
import quart
from lark_oapi.api.im.v1 import *
import pydantic
from lark_oapi.api.cardkit.v1 import *
from .. import adapter
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ..logger import EventLogger
import langbot_plugin.api.definition.abstract.platform.adapter as abstract_platform_adapter
import langbot_plugin.api.entities.builtin.platform.message as platform_message
import langbot_plugin.api.entities.builtin.platform.events as platform_events
import langbot_plugin.api.entities.builtin.platform.entities as platform_entities
import langbot_plugin.api.definition.abstract.platform.event_logger as abstract_platform_logger
class AESCipher(object):
@@ -52,7 +53,7 @@ class AESCipher(object):
return self.decrypt(enc).decode('utf8')
class LarkMessageConverter(adapter.MessageConverter):
class LarkMessageConverter(abstract_platform_adapter.AbstractMessageConverter):
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain, api_client: lark_oapi.Client
@@ -276,7 +277,7 @@ class LarkMessageConverter(adapter.MessageConverter):
return platform_message.MessageChain(lb_msg_list)
class LarkEventConverter(adapter.EventConverter):
class LarkEventConverter(abstract_platform_adapter.AbstractEventConverter):
@staticmethod
async def yiri2target(
event: platform_events.MessageEvent,
@@ -320,41 +321,41 @@ class LarkEventConverter(adapter.EventConverter):
)
class LarkAdapter(adapter.MessagePlatformAdapter):
bot: lark_oapi.ws.Client
api_client: lark_oapi.Client
CARD_ID_CACHE_SIZE = 500
CARD_ID_CACHE_MAX_LIFETIME = 20 * 60 # 20分钟
class LarkAdapter(abstract_platform_adapter.AbstractMessagePlatformAdapter):
bot: lark_oapi.ws.Client = pydantic.Field(exclude=True)
api_client: lark_oapi.Client = pydantic.Field(exclude=True)
bot_account_id: str # 用于在流水线中识别at是否是本bot直接以bot_name作为标识
lark_tenant_key: str # 飞书企业key
lark_tenant_key: str = pydantic.Field(exclude=True, default='') # 飞书企业key
message_converter: LarkMessageConverter = LarkMessageConverter()
event_converter: LarkEventConverter = LarkEventConverter()
listeners: typing.Dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
typing.Callable[[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None],
]
config: dict
quart_app: quart.Quart
ap: app.Application
quart_app: quart.Quart = pydantic.Field(exclude=True)
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.quart_app = quart.Quart(__name__)
self.listeners = {}
card_id_dict: dict[str, str] # 消息id到卡片id的映射便于创建卡片后的发送消息到指定卡片
@self.quart_app.route('/lark/callback', methods=['POST'])
seq: int # 用于在发送卡片消息中识别消息顺序直接以seq作为标识
def __init__(self, config: dict, logger: abstract_platform_logger.AbstractEventLogger, **kwargs):
quart_app = quart.Quart(__name__)
@quart_app.route('/lark/callback', methods=['POST'])
async def lark_callback():
try:
data = await quart.request.json
self.ap.logger.debug(f'Lark callback event: {data}')
if 'encrypt' in data:
cipher = AESCipher(self.config['encrypt-key'])
cipher = AESCipher(config['encrypt-key'])
data = cipher.decrypt_string(data['encrypt'])
data = json.loads(data)
@@ -401,14 +402,264 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
lark_oapi.EventDispatcherHandler.builder('', '').register_p2_im_message_receive_v1(sync_on_message).build()
)
self.bot_account_id = config['bot_name']
bot_account_id = config['bot_name']
self.bot = lark_oapi.ws.Client(config['app_id'], config['app_secret'], event_handler=event_handler)
self.api_client = lark_oapi.Client.builder().app_id(config['app_id']).app_secret(config['app_secret']).build()
bot = lark_oapi.ws.Client(config['app_id'], config['app_secret'], event_handler=event_handler)
api_client = lark_oapi.Client.builder().app_id(config['app_id']).app_secret(config['app_secret']).build()
super().__init__(
config=config,
logger=logger,
lark_tenant_key=config.get('lark_tenant_key', ''),
card_id_dict={},
seq=1,
listeners={},
quart_app=quart_app,
bot=bot,
api_client=api_client,
bot_account_id=bot_account_id,
**kwargs,
)
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
pass
async def is_stream_output_supported(self) -> bool:
is_stream = False
if self.config.get('enable-stream-reply', None):
is_stream = True
return is_stream
async def create_card_id(self, message_id):
try:
# self.logger.debug('飞书支持stream输出,创建卡片......')
card_data = {
'schema': '2.0',
'config': {
'update_multi': True,
'streaming_mode': True,
'streaming_config': {
'print_step': {'default': 1},
'print_frequency_ms': {'default': 70},
'print_strategy': 'fast',
},
},
'body': {
'direction': 'vertical',
'padding': '12px 12px 12px 12px',
'elements': [
{
'tag': 'div',
'text': {
'tag': 'plain_text',
'content': 'LangBot',
'text_size': 'normal',
'text_align': 'left',
'text_color': 'default',
},
'icon': {
'tag': 'custom_icon',
'img_key': 'img_v3_02p3_05c65d5d-9bad-440a-a2fb-c89571bfd5bg',
},
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
'element_id': 'streaming_txt',
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'column_set',
'horizontal_spacing': '8px',
'horizontal_align': 'left',
'columns': [
{
'tag': 'column',
'width': 'weighted',
'elements': [
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'markdown',
'content': '',
'text_align': 'left',
'text_size': 'normal',
'margin': '0px 0px 0px 0px',
},
],
'padding': '0px 0px 0px 0px',
'direction': 'vertical',
'horizontal_spacing': '8px',
'vertical_spacing': '2px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
'weight': 1,
}
],
'margin': '0px 0px 0px 0px',
},
{'tag': 'hr', 'margin': '0px 0px 0px 0px'},
{
'tag': 'column_set',
'horizontal_spacing': '12px',
'horizontal_align': 'right',
'columns': [
{
'tag': 'column',
'width': 'weighted',
'elements': [
{
'tag': 'markdown',
'content': '<font color="grey-600">以上内容由 AI 生成,仅供参考。更多详细、准确信息可点击引用链接查看</font>',
'text_align': 'left',
'text_size': 'notation',
'margin': '4px 0px 0px 0px',
'icon': {
'tag': 'standard_icon',
'token': 'robot_outlined',
'color': 'grey',
},
}
],
'padding': '0px 0px 0px 0px',
'direction': 'vertical',
'horizontal_spacing': '8px',
'vertical_spacing': '8px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
'weight': 1,
},
{
'tag': 'column',
'width': '20px',
'elements': [
{
'tag': 'button',
'text': {'tag': 'plain_text', 'content': ''},
'type': 'text',
'width': 'fill',
'size': 'medium',
'icon': {'tag': 'standard_icon', 'token': 'thumbsup_outlined'},
'hover_tips': {'tag': 'plain_text', 'content': '有帮助'},
'margin': '0px 0px 0px 0px',
}
],
'padding': '0px 0px 0px 0px',
'direction': 'vertical',
'horizontal_spacing': '8px',
'vertical_spacing': '8px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
},
{
'tag': 'column',
'width': '30px',
'elements': [
{
'tag': 'button',
'text': {'tag': 'plain_text', 'content': ''},
'type': 'text',
'width': 'default',
'size': 'medium',
'icon': {'tag': 'standard_icon', 'token': 'thumbdown_outlined'},
'hover_tips': {'tag': 'plain_text', 'content': '无帮助'},
'margin': '0px 0px 0px 0px',
}
],
'padding': '0px 0px 0px 0px',
'vertical_spacing': '8px',
'horizontal_align': 'left',
'vertical_align': 'top',
'margin': '0px 0px 0px 0px',
},
],
'margin': '0px 0px 4px 0px',
},
],
},
}
# delay / fast 创建卡片模板delay 延迟打印fast 实时打印,可以自定义更好看的消息模板
request: CreateCardRequest = (
CreateCardRequest.builder()
.request_body(CreateCardRequestBody.builder().type('card_json').data(json.dumps(card_data)).build())
.build()
)
# 发起请求
response: CreateCardResponse = self.api_client.cardkit.v1.card.create(request)
# 处理失败返回
if not response.success():
raise Exception(
f'client.cardkit.v1.card.create failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
self.ap.logger.debug(f'飞书卡片创建成功,卡片ID: {response.data.card_id}')
self.card_id_dict[message_id] = response.data.card_id
card_id = response.data.card_id
return card_id
except Exception as e:
self.ap.logger.error(f'飞书卡片创建失败,错误信息: {e}')
async def create_message_card(self, message_id, event) -> str:
"""
创建卡片消息。
使用卡片消息是因为普通消息更新次数有限制而大模型流式返回结果可能很多而超过限制而飞书卡片没有这个限制api免费次数有限
"""
# message_id = event.message_chain.message_id
card_id = await self.create_card_id(message_id)
content = {
'type': 'card',
'data': {'card_id': card_id, 'template_variable': {'content': 'Thinking...'}},
} # 当收到消息时发送消息模板,可添加模板变量,详情查看飞书中接口文档
request: ReplyMessageRequest = (
ReplyMessageRequest.builder()
.message_id(event.message_chain.message_id)
.request_body(
ReplyMessageRequestBody.builder().content(json.dumps(content)).msg_type('interactive').build()
)
.build()
)
# 发起请求
response: ReplyMessageResponse = await self.api_client.im.v1.message.areply(request)
# 处理失败返回
if not response.success():
raise Exception(
f'client.im.v1.message.reply failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
return True
async def reply_message(
self,
message_source: platform_events.MessageEvent,
@@ -447,20 +698,80 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
f'client.im.v1.message.reply failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
"""
回复消息变成更新卡片消息
"""
# self.seq += 1
message_id = bot_message.resp_message_id
msg_seq = bot_message.msg_sequence
if msg_seq % 8 == 0 or is_final:
lark_message = await self.message_converter.yiri2target(message, self.api_client)
text_message = ''
for ele in lark_message[0]:
if ele['tag'] == 'text':
text_message += ele['text']
elif ele['tag'] == 'md':
text_message += ele['text']
# content = {
# 'type': 'card_json',
# 'data': {'card_id': self.card_id_dict[message_id], 'elements': {'content': text_message}},
# }
request: ContentCardElementRequest = (
ContentCardElementRequest.builder()
.card_id(self.card_id_dict[message_id])
.element_id('streaming_txt')
.request_body(
ContentCardElementRequestBody.builder()
# .uuid("a0d69e20-1dd1-458b-k525-dfeca4015204")
.content(text_message)
.sequence(msg_seq)
.build()
)
.build()
)
if is_final and bot_message.tool_calls is None:
# self.seq = 1 # 消息回复结束之后重置seq
self.card_id_dict.pop(message_id) # 清理已经使用过的卡片
# 发起请求
response: ContentCardElementResponse = self.api_client.cardkit.v1.card_element.content(request)
# 处理失败返回
if not response.success():
raise Exception(
f'client.im.v1.message.patch failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
return
async def is_muted(self, group_id: int) -> bool:
return False
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
callback: typing.Callable[
[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None
],
):
self.listeners[event_type] = callback
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
callback: typing.Callable[
[platform_events.Event, abstract_platform_adapter.AbstractMessagePlatformAdapter], None
],
):
self.listeners.pop(event_type)
@@ -492,4 +803,9 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
)
async def kill(self) -> bool:
# 需要断开连接,不然旧的连接会继续运行,导致飞书消息来时会随机选择一个连接
# 断开时lark.ws.Client的_receive_message_loop会打印error日志: receive message loop exit。然后进行重连
# 所以要设置_auto_reconnect=False,让其不重连。
self.bot._auto_reconnect = False
await self.bot._disconnect()
return False

View File

@@ -65,6 +65,16 @@ spec:
type: string
required: true
default: ""
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用飞书流式回复模式
description:
en_US: If enabled, the bot will reply using Lark streaming reply mode
zh_Hans: 如果启用,将使用飞书流式方式来回复内容
type: boolean
required: true
default: false
execution:
python:
path: ./lark.py

View File

Binary image changed (before: 25 KiB, after: 25 KiB)

Some files were not shown because too many files have changed in this diff