Compare commits


252 Commits

Author SHA1 Message Date
Junyan Qin
87ecb4e519 feat: add a note for remove_think & remove dify CoT-removal code 2025-08-21 21:38:58 +08:00
Ljzd_PRO
df524b8a7a Fix: incorrect sender ID extraction when converting aiocqhttp reply messages (#1624)
* fix: update invoke_embedding to return only embeddings from client.embed

* fix: incorrect sender ID extraction when converting aiocqhttp reply messages
2025-08-21 20:46:26 +08:00
Junyan Qin
8a7df423ab chore: update shengsuanyun url 2025-08-21 14:14:25 +08:00
Junyan Qin
cafd623c92 chore: update shengsuanyun 2025-08-21 12:03:04 +08:00
Junyan Qin
4df11ef064 chore: update for shengsuanyun 2025-08-21 11:47:40 +08:00
Junyan Qin
aa7c08ee00 chore: release v4.2.1 2025-08-21 10:15:05 +08:00
Junyan Qin
b98de29b07 feat: add shengsuanyun requester 2025-08-20 23:33:35 +08:00
fdc310
c7c2eb4518 fix: calls failed when Gemini tool_calls carry no id (#1622)
* fix: dify agent LLM return messages could not be joined

* fix: calls failed when Gemini tool_calls carry no id
2025-08-20 22:39:16 +08:00
Ljzd_PRO
37fa318258 fix: update invoke_embedding to return only embeddings from client.embed (#1619) 2025-08-20 10:25:33 +08:00
fdc310
ff7bebb782 fix: dify agent LLM return messages could not be joined (#1617) 2025-08-19 23:23:00 +08:00
Junyan Qin
30bb26f898 doc(README): streaming output 2025-08-18 21:21:50 +08:00
Junyan Qin
9c1f4e1690 chore: release v4.2.0 2025-08-18 18:43:20 +08:00
dependabot[bot]
865ee2ca01 Merge pull request #1612 from langbot-app/dependabot/npm_and_yarn/web/form-data-4.0.4
chore(deps): bump form-data from 4.0.2 to 4.0.4 in /web
2025-08-18 16:10:56 +08:00
Junyan Qin (Chin)
c2264080bd Merge pull request #1442 from langbot-app/feat/streaming
feat: streaming output
2025-08-17 23:36:30 +08:00
Dong_master
67b622d5a6 fix: some adjustments to the return types 2025-08-17 23:34:19 +08:00
Dong_master
a534c02d75 fix:remove print 2025-08-17 23:34:01 +08:00
Junyan Qin
da890d3074 chore: remove fix.MD 2025-08-17 21:20:32 +08:00
Junyan Qin
3049aa7a96 feat: add migration for pipeline remove-think 2025-08-17 21:18:41 +08:00
Junyan Qin (Chin)
e66f674968 Merge branch 'master' into feat/streaming 2025-08-17 14:30:22 +08:00
Junyan Qin (Chin)
dd0e0abdc4 Merge pull request #1571 from fdc310/streaming_feature
feat:add streaming output and pipeline stream
2025-08-17 14:27:39 +08:00
Junyan Qin (Chin)
13f6396eb4 Merge pull request #1610 from langbot-app/devin/1755399221-add-password-change-feature
feat: add password change functionality
2025-08-17 14:25:24 +08:00
Junyan Qin
7bbaa4fcad feat: perf ui & complete i18n 2025-08-17 14:09:28 +08:00
Junyan Qin
e931d5eb88 chore: remove print 2025-08-17 13:52:40 +08:00
Junyan Qin
4bbfa2f1d7 fix: stop telegram adapter gracefully 2025-08-17 13:52:02 +08:00
Devin AI
dd30d08c68 feat: add password change functionality
- Add password change button to sidebar account menu
- Create PasswordChangeDialog component with shadcn UI components
- Implement backend API endpoint /api/v1/user/change-password
- Add form validation with current password verification
- Include internationalization support for Chinese and English
- Add proper error handling and success notifications

Co-Authored-By: Rock <rockchinq@gmail.com>
2025-08-17 03:03:36 +00:00
Dong_master
8ccda10045 fix: workflow stream bug in dashscopeapi.py 2025-08-16 12:11:00 +08:00
Dong_master
46fbfbefea fix: stream and non-stream remove_think logic in dashscopeapi.py 2025-08-16 02:13:45 +08:00
Dong_master
8f863cf530 fix: remove_think bug 2025-08-15 00:55:39 +08:00
Dong_master
2351193c51 fix: add streaming in difysvapi.py, and apply remove_think per chunk 2025-08-15 00:50:32 +08:00
Dong_master
8c87a47f5a fix: add a remove_think arg to the _closure func in ollamachat.py 2025-08-14 22:35:30 +08:00
Dong_master
b8b9a37825 fix: non-stream remove_think logic in dify 2025-08-14 22:32:22 +08:00
Dong_master
13dd6fcee3 fix: webchat non-stream did not save resp_message into message_lists 2025-08-14 22:29:42 +08:00
Junyan Qin
29f0075bd8 perf: zh-Hant specs 2025-08-13 17:49:54 +08:00
Junyan Qin
8a96ffbcc0 chore: complete zh-Hant specs for top_k 2025-08-13 17:33:47 +08:00
Junyan Qin (Chin)
67f68d8101 Merge pull request #1606 from langbot-app/feat/topk_splitter
Feat/topk splitter
2025-08-13 17:31:11 +08:00
Junyan Qin
ad59d92cef perf: i18n 2025-08-13 17:28:00 +08:00
Dong_master
85f97860c5 fix: errors in modelscopechatcmpl.py pseudo-non-streaming mode when displaying main content and tool calls 2025-08-13 01:55:06 +08:00
Dong_master
8fd21e76f2 fix: assign msg_sequence to subsequent tool calls only when a MessageChunk is present 2025-08-13 00:00:10 +08:00
Dong_master
cc83ddbe21 fix: del print 2025-08-12 23:29:32 +08:00
Dong_master
99fcde1586 fix: add msg_sequence to MessageChunk, and obtain the usage in the adapter 2025-08-12 23:20:41 +08:00
WangCham
eab08dfbf3 fix: format the code 2025-08-12 23:13:00 +08:00
Dong_master
dbf0200cca feat: add more attractive card templates 2025-08-12 22:36:42 +08:00
Junyan Qin
ac44f35299 chore: remove comments 2025-08-12 21:07:23 +08:00
Junyan Qin
d6a5fdd911 perf: complete sidebar menu 2025-08-12 21:02:40 +08:00
Dong_master
4668db716a fix: command reply_message error; remove some prints 2025-08-12 20:54:47 +08:00
Junyan Qin
f7cd6b76f2 feat: refactor account menu 2025-08-12 20:13:18 +08:00
Junyan Qin
b6d47187f5 perf: prettier 2025-08-12 19:39:41 +08:00
Junyan Qin
051fffd41e fix: stash 2025-08-12 19:18:49 +08:00
Junyan Qin
c5480078b3 perf: make prompt editor textarea 2025-08-12 11:30:42 +08:00
Dong_master
e744e9c4ef fix: add a tool_calls arg to the MessageChunk yielded in localagent.py; after tool_calls are invoked, concatenate the first returned body data 2025-08-12 11:25:37 +08:00
Dong_master
9f22b8b585 fix: change the reply_message_chunk arg message_id to bot_message in adapter.py; update the arg in pipelinemgr.py and respback.py 2025-08-12 11:21:08 +08:00
Dong_master
27cee0a4e1 fix: change the reply_message_chunk arg message_id to bot_message in adapter.py; update the arg in dingtalk.py, lark.py, telegram.py and webchat.py 2025-08-12 11:19:27 +08:00
Dong_master
6d35fc408c fix: msg_dict["content"] in anthropicmsgs.py is sometimes a str and cannot be appended to 2025-08-12 11:15:17 +08:00
Dong_master
0607a0fa5c fix: stream tool_calls arguments bug in modelscopechatcmpl.py 2025-08-12 00:04:21 +08:00
Dong_master
ed57d2fafa remove prints from localagent.py 2025-08-11 23:49:19 +08:00
Junyan Qin
39ef92676b doc: add back wechat 2025-08-11 23:38:41 +08:00
Dong_master
7301476228 fix: message_id was popped out, so the tool could not find it after being invoked 2025-08-11 23:36:01 +08:00
WangCham
457cc3eecd fix: wrong definition of topk 2025-08-11 23:22:36 +08:00
Dong_master
a381069bcc fix: tool_result argument bug 2025-08-11 23:05:47 +08:00
WangCham
146c38e64c fix: wrong positions 2025-08-11 22:58:48 +08:00
Junyan Qin (Chin)
763c41729e Merge pull request #1605 from TwperBody/master
feat: dark mode support for webui
2025-08-11 20:51:58 +08:00
Junyan Qin
0021efebd7 perf: minor fix 2025-08-11 20:50:39 +08:00
Junyan Qin
5f18a1b13a chore: prettier 2025-08-11 20:46:08 +08:00
Junyan Qin
0124448479 perf: card shadowbox 2025-08-11 20:41:57 +08:00
WangCham
e76bc80e51 Merge branch 'feat/topk_splitter' of github.com:RockChinQ/LangBot into feat/topk_splitter 2025-08-11 00:20:13 +08:00
WangCham
a27560e804 fix: page bug 2025-08-11 00:12:06 +08:00
Dong_master
46452de7b5 fix: streaming tool-call handling; reply messages from reasoning models still have bugs 2025-08-10 23:14:57 +08:00
TwperBody
2aef139577 dark mode 2025-08-10 22:17:06 +08:00
Dong_master
03b11481ed fix: remove_think logic and handling of the closing </think> tag 2025-08-10 00:28:55 +08:00
Dong_master
8c5cb71812 fix: remove useless logic from chatcmpl.py; add/remove <think> logic for non-streaming in modelscopechatcmpl.py; fix ppiochatcmpl.py stream logic and make giteeaichatcmpl.py inherit from ppiochatcmpl.py 2025-08-10 00:16:13 +08:00
Dong_master
7c59bc1ce5 feat: add Anthropic stream output 2025-08-10 00:09:19 +08:00
Dong_master
eede354d3b fix: strip <think> from content in chatcmpl.py; fix the _closure_stream streaming logic in ppiochatcmpl.py and modelscopechatcmpl.py 2025-08-09 02:46:13 +08:00
Junyan Qin
eb7b5dcc25 chore: rename zh_Hans label of deepseek requester 2025-08-08 17:31:24 +08:00
WangCham
47e9ce96fc feat: add topk 2025-08-07 18:10:33 +08:00
WangCham
4e95bc542c fix: kb form 2025-08-07 18:10:33 +08:00
WangCham
e4f321ea7a feat: add description for topk 2025-08-07 18:10:33 +08:00
WangCham
246eb71b75 feat: add topk 2025-08-07 18:10:33 +08:00
Junyan Qin
261f50b8ec feat: refactor with cursor max mode claude 4.1 opus 2025-08-07 15:47:57 +08:00
Junyan Qin
9736d0708a fix: missing deps 2025-08-07 10:15:09 +08:00
Junyan Qin
02dbe80d2f perf: model testing 2025-08-07 10:01:04 +08:00
Dong_master
0f239ace17 Merge remote-tracking branch 'origin/streaming_feature' into streaming_feature 2025-08-06 23:02:35 +08:00
Dong_master
3a82ae8da5 fix: bug in the remove_think function 2025-08-06 23:00:57 +08:00
Junyan Qin
c33c9eaab0 chore: remove remove_think param in trigger.yaml 2025-08-06 15:45:35 +08:00
Junyan Qin
87f626f3cc doc(README): add HelloGitHub badge 2025-08-05 17:40:27 +08:00
Dong_master
e88302f1b4 fix: remove_think handling logic in the connector; temporarily block streaming tool-call processing in the runner 2025-08-05 04:24:03 +08:00
Dong_master
5597dffaeb Merge remote-tracking branch 'origin/streaming_feature' into streaming_feature
# Conflicts:
#	pkg/api/http/controller/groups/pipelines/webchat.py
#	pkg/pipeline/process/handlers/chat.py
#	pkg/platform/sources/aiocqhttp.py
#	pkg/platform/sources/dingtalk.py
#	pkg/platform/sources/discord.py
#	pkg/platform/sources/lark.py
#	pkg/platform/sources/telegram.py
#	pkg/platform/sources/wechatpad.py
#	pkg/provider/modelmgr/requester.py
#	pkg/provider/modelmgr/requesters/chatcmpl.py
#	pkg/provider/modelmgr/requesters/deepseekchatcmpl.py
#	pkg/provider/modelmgr/requesters/giteeaichatcmpl.py
#	pkg/provider/modelmgr/requesters/modelscopechatcmpl.py
#	pkg/provider/modelmgr/requesters/ppiochatcmpl.py
#	pkg/provider/runners/dashscopeapi.py
#	pkg/provider/runners/difysvapi.py
#	pkg/provider/runners/localagent.py
2025-08-04 23:17:36 +08:00
Junyan Qin
7f25d61531 fix: minor fix 2025-08-04 23:00:54 +08:00
Junyan Qin
15e524c6e6 perf: move remove-think to output tab 2025-08-04 19:26:19 +08:00
fdc
4a1d033ee9 fix: reduce chunk returns in the dify and Bailian runners to every 8 chunks 2025-08-04 19:26:19 +08:00
fdc
8adc88a8c0 fix: stop reading the remove_think config directly in the requester; retrieve it in the runner and pass it to the functions that need it 2025-08-04 19:26:18 +08:00
fdc
a62b38eda7 fix: in the adapter's reply_message_chunk, only stream the message into the card or edit it on every 8th chunk or at the end of streaming 2025-08-04 19:26:18 +08:00
Dong_master
fcef784180 fix: yield every 8 tokens in the runner 2025-08-04 19:26:18 +08:00
Junyan Qin
c3ed4ef6a1 feat: no longer use typewriter in debug dialog 2025-08-04 19:26:18 +08:00
Junyan Qin
b9f768af25 perf: minor fixes 2025-08-04 19:26:18 +08:00
Junyan Qin
47ff883fc7 perf: ruff format & remove stream params in requester 2025-08-04 19:26:18 +08:00
Dong_master
68906c43ff feat: add webchat word-by-word output
fix: webchat on-message stream bug
2025-08-04 19:26:18 +08:00
Dong_master
c6deed4e6e feat: webchat stream is ok 2025-08-04 19:26:18 +08:00
Dong_master
b45cc59322 fix: webchat stream-detection bug and frontend bug 2025-08-04 19:26:17 +08:00
fdc
c33a96823b fix: frontend bug 2025-08-04 19:26:17 +08:00
fdc
d3ab16761d fix: some bugs 2025-08-04 19:26:17 +08:00
fdc
70f23f24b0 fix: is_stream_output_supported return value in webchat 2025-08-04 19:26:17 +08:00
fdc
00a8410c94 feat:webchat frontend stream 2025-08-04 19:26:17 +08:00
fdc
2a17e89a99 feat: add partial webchat stream support 2025-08-04 19:26:17 +08:00
fdc
8fe0992c15 fix: guard create_message_card in chat; telegram reply_message_chunk had no message 2025-08-04 19:26:17 +08:00
Dong_master
a9776b7b53 fix: remove some prints; amend the stream check in respback; drop the is_stream_output_supported() call in dingtalk 2025-08-04 19:26:16 +08:00
Dong_master
074d359c8e feat: add dashscopeapi stream
fix: dify 64-chunk yield
2025-08-04 19:26:16 +08:00
Dong_master
7728b4262b fix:lark message_id and dingtalk incoming_message 2025-08-04 19:26:16 +08:00
Dong_master
4905b5a738 feat: add dingtalk stream
fix: adapter is_stream_output_supported bug
fix: message_id in stream message reply chunk
2025-08-04 19:26:16 +08:00
Dong_master
43a259a1ae feat:add dingtalk stream 2025-08-04 19:26:16 +08:00
Dong_master
cffe493db0 feat:add telegram stream 2025-08-04 19:26:16 +08:00
Dong_master
0042629bf0 feat: add PPIO and OpenRouter LLM streaming; apply remove_think to PPIO think-in-content
fix: giteeai stream added the "<think>" marker to content when remove_think was off
2025-08-04 19:26:16 +08:00
Dong_master
a7d638cc9a feat:add deepseek and modelscope llm stream,and giteeai think in content remove_think 2025-08-04 19:26:16 +08:00
Dong_master
f84a79bf74 perf: remove the dify stream option from the ai.yaml config and enable stream in lark.yaml
fix: localagent remove_think bug
2025-08-04 19:26:15 +08:00
Dong_master
f5a0cb9175 feat:add dify _agent_chat_message streaming 2025-08-04 19:26:15 +08:00
Dong_master
f9a5507029 fix: iterated data was pushed only into resq_messages and resq_message_chain, bloating the data cached in memory and written to logs; also handle the think output of reasoning models
feat: add remove-think handling for deep-reasoning models
feat: streaming support for chatbot and chatflow in dify
2025-08-04 19:26:15 +08:00
Dong_master
5ce32d2f04 fix: iterated data was pushed only into resq_messages and resq_message_chain, bloating the data cached in memory and written to logs; also add think handling for deep-reasoning models 2025-08-04 19:26:15 +08:00
Dong_master
4908996cac The basic streaming flow now works; fixed the problem caused by the conflict between yield and return 2025-08-04 19:26:15 +08:00
fdc
ee545a163f Added streaming for Lark, but there still seem to be problems 2025-08-04 19:26:15 +08:00
fdc
6e0e5802cc fix: correct a typo that wrote message_id into reply_message 2025-08-04 19:26:15 +08:00
fdc
0d53843230 Streaming changes in chat 2025-08-04 19:26:14 +08:00
fdc
b65670cd1a feat: implement streaming message handling support 2025-08-04 19:26:14 +08:00
zejiewang
ba4b5255a2 feat:support dify message streaming output (#1437)
* fix:lark adapter listeners init problem

* feat:support dify streaming mode

* feat:remove some log

* fix(bot form): field desc missing

* fix: not compatible with chatflow

---------

Co-authored-by: wangzejie <wangzejie@meicai.cn>
Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-08-04 18:45:52 +08:00
Junyan Qin (Chin)
d60af2b451 fix(pipeline dialog): config reset between tabs switching (#1597) 2025-08-04 00:05:55 +08:00
Dong_master
44ac8b2b63 fix: yield every 8 tokens in the runner 2025-08-03 23:23:51 +08:00
Junyan Qin
b70001c579 chore: release v4.1.2 2025-08-03 22:52:47 +08:00
Junyan Qin (Chin)
4a8f5516f6 feat: add new api requester (#1596) 2025-08-03 22:30:52 +08:00
Junyan Qin
48d11540ae feat: no longer use typewriter in debug dialog 2025-08-03 17:18:44 +08:00
Junyan Qin
84129e3339 perf: minor fixes 2025-08-03 15:30:11 +08:00
Junyan Qin
377d455ec1 perf: ruff format & remove stream params in requester 2025-08-03 13:08:51 +08:00
Dong_master
52280d7a05 feat: add webchat word-by-word output
fix: webchat on-message stream bug
2025-08-02 01:42:22 +08:00
Dong_master
0ce81a2df2 feat: webchat stream is ok 2025-08-01 11:33:16 +08:00
Dong_master
d9a2bb9a06 fix: webchat stream-detection bug and frontend bug 2025-07-31 14:49:12 +08:00
fdc
cb88da7f02 fix: frontend bug 2025-07-31 10:34:36 +08:00
fdc
5560a4f52d fix: some bugs 2025-07-31 10:28:43 +08:00
fdc
e4d951b174 fix: is_stream_output_supported return value in webchat 2025-07-31 10:01:47 +08:00
fdc
6e08bf71c9 feat:webchat frontend stream 2025-07-31 09:51:25 +08:00
fdc
daaf4b54ef feat: add partial webchat stream support 2025-07-30 17:06:14 +08:00
fdc
3291266f5d fix: guard create_message_card in chat; telegram reply_message_chunk had no message 2025-07-30 15:21:59 +08:00
Dong_master
307f6acd8c fix: remove some prints; amend the stream check in respback; drop the is_stream_output_supported() call in dingtalk 2025-07-29 23:09:02 +08:00
Junyan Qin
f1ac9c77e6 doc: update README_TW 2025-07-28 15:50:00 +08:00
Junyan Qin
b434a4e3d7 doc: add README_TW 2025-07-28 15:47:50 +08:00
Junyan Qin
2f209cd59f chore(i18n): add zh-Hant 2025-07-28 15:11:41 +08:00
Junyan Qin
0f585fd5ef fix(moonshot): make api.moonshot.ai the default api base url 2025-07-26 22:23:33 +08:00
Junyan Qin
a152dece9a chore: switch to pnpm 2025-07-26 19:45:38 +08:00
Junyan Qin
d3b31f7027 chore: release v4.1.1 2025-07-26 19:28:34 +08:00
How-Sean Xin
c00f05fca4 Add GitHub link redirection for front-end plugin cards (#1579)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update package.json

* Update PluginCardComponent.tsx

* perf: do not display the GitHub button when a plugin has no GitHub url

---------

Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-07-26 19:22:00 +08:00
Junyan Qin
92c3a86356 feat: add qhaigc 2025-07-24 22:42:26 +08:00
Junyan Qin
341fdc409d perf: embedding model ui 2025-07-24 22:29:25 +08:00
Junyan Qin
ebd542f592 feat: 302.AI embeddings 2025-07-24 22:05:15 +08:00
Junyan Qin
194b2d9814 feat: supports more embedding providers 2025-07-24 22:03:20 +08:00
Junyan Qin
7aed5cf1ed feat: ollama embeddings models 2025-07-24 10:36:32 +08:00
Junyan Qin
abc88c4979 doc: update README 2025-07-23 18:53:15 +08:00
WangCham
3fa38f71f1 feat: add topk 2025-07-23 17:29:36 +08:00
WangCham
d651d956d6 Merge branch 'master' into feat/topk_splitter 2025-07-23 16:37:27 +08:00
gaord
6754666845 feat(wechatpad): add support for @everyone and unify message dispatch (#1588)
Add support for the AtAll component in the message converter, converting @everyone into a specific format; also handle @everyone uniformly during message dispatch to ensure notifications are sent correctly.
2025-07-23 15:22:04 +08:00
Junyan Qin
08e6f46b19 fix(deps): react-focus-scope pkg bug 2025-07-22 11:05:16 +08:00
Dong_master
8f8c8ff367 feat: add dashscopeapi stream
fix: dify 64-chunk yield
2025-07-21 18:45:45 +08:00
Dong_master
63ec2a8c34 fix:lark message_id and dingtalk incoming_message 2025-07-21 17:28:11 +08:00
Dong_master
f58c8497c3 feat: add dingtalk stream
fix: adapter is_stream_output_supported bug
fix: message_id in stream message reply chunk
2025-07-20 23:53:20 +08:00
Junyan Qin
1497fdae56 doc(README): adjust structure 2025-07-20 22:10:32 +08:00
Junyan Qin
10a3cb40e1 perf(retrieve): ui 2025-07-20 17:57:33 +08:00
devin-ai-integration[bot]
dd1ec15a39 feat: add knowledge base retrieve test tab with Card-based UI (#1583)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-07-20 17:56:46 +08:00
devin-ai-integration[bot]
ea51cec57e feat: add pipeline sorting functionality with three sort options (#1582)
* feat: add pipeline sorting functionality with three sort options

Co-Authored-By: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>

* perf: ui

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin>, 秦骏言 in Chinese, you can call me my english name Rock Chin. <rockchinq@gmail.com>
2025-07-20 17:23:30 +08:00
Junyan Qin
28ce986a8c chore: release v4.1.0 2025-07-20 12:32:06 +08:00
Junyan Qin
489b145606 doc: update README 2025-07-20 12:30:41 +08:00
Junyan Qin (Chin)
5e92bffaa6 Merge pull request #1581 from langbot-app/RockChinQ-patch-1
Update README.md
2025-07-19 23:09:53 +08:00
Junyan Qin (Chin)
277d1b0e30 feat: rag engine (#1492)
* feat: add embeddings model management (#1461)

* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add knowledge page

* feat: add api for uploading files

* kb

* delete ap

* feat: add functions

* fix: modify rag database

* feat: add embeddings model management (#1461)

* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add knowledge page

* feat: add api for uploading files

* feat: add sidebar for rag and related i18n

* feat: add knowledge base page

* feat: basic entities of kb

* feat: complete support_type for 302ai and compshare requester

* perf: format

* perf: ruff check --fix

* feat: basic definition

* feat: rag fe framework

* perf: en comments

* feat: modify the rag.py

* perf: definitions

* fix: success method bad params

* fix: bugs

* fix: bug

* feat: kb dialog action

* fix: create knowledge base issue

* fix: kb get api format

* fix: kb get api not contains model uuid

* fix: api bug

* fix: the fucking logger

* feat(fe): component for available apis

* fix: embedding and chunking

* fix: ensure File.status is set correctly after storing data to avoid null values

* fix: add functions for deleting files

* feat(fe): file uploading

* perf: adjust ui

* fix: file being deleted twice

* feat(fe): complete kb ui

* fix: ui bugs

* fix: no longer require Query for invoking embedding

* feat: add embedder

* fix: delete embedding models file

* chore: stash

* chore: stash

* feat(rag): make embedding and retrieving available

* feat(rag): all APIs ok

* fix: delete utils

* feat: rag pipeline backend

* feat: combine kb with pipeline

* fix: .md file parse failed

* perf: debug output

* feat: add functions for frontend of kb

* perf(rag): ui and related apis

* perf(rag): use badge show doc status

* perf: open kb detail dialog after creating

* fix: linter error

* deps: remove sentence-transformers

* perf: description of default pipeline

* feat: add html and epub

* chore: no longer supports epub

---------

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: WangCham <651122857@qq.com>
2025-07-19 22:06:11 +08:00
Junyan Qin
13f4ed8d2c chore: no longer supports epub 2025-07-19 21:56:50 +08:00
WangCham
91cb5ca36c feat: add html and epub 2025-07-19 19:57:57 +08:00
TwperBody
c34d54a6cb Fixed a bug where some Windows systems failed to recognize spaces. (#1577)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx
2025-07-19 16:48:15 +08:00
TwperBody
2d1737da1f Optimize plugin display (#1578)
* Update package.json

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update PluginMarketComponent.tsx

* Update package.json
2025-07-19 16:47:34 +08:00
Dong_master
adb0bf2473 feat:add dingtalk stream 2025-07-19 01:05:44 +08:00
Junyan Qin
a1b8b9d47b perf: description of default pipeline 2025-07-18 18:57:42 +08:00
Junyan Qin
8df14bf9d9 deps: remove sentence-transformers 2025-07-18 18:46:07 +08:00
Junyan Qin
c98d265a1e fix: linter error 2025-07-18 17:52:24 +08:00
Junyan Qin
4e6782a6b7 perf: open kb detail dialog after creating 2025-07-18 16:52:54 +08:00
Junyan Qin
5541e9e6d0 perf(rag): use badge show doc status 2025-07-18 16:38:55 +08:00
gaord
878ab0ef6b fix(wechatpad): fix @bot messages not being parsed correctly when @everyone is present (#1575) 2025-07-18 12:52:30 +08:00
Junyan Qin
b61bd36b14 perf(rag): ui and related apis 2025-07-18 00:45:38 +08:00
Junyan Qin (Chin)
bb672d8f46 Merge branch 'master' into feat/rag 2025-07-18 00:45:24 +08:00
WangCham
ba1a26543b Merge branch 'feat/rag' of github.com:RockChinQ/LangBot into feat/rag 2025-07-17 23:57:52 +08:00
WangCham
cb868ee7b2 feat: add functions for frontend of kb 2025-07-17 23:52:46 +08:00
Junyan Qin
5dd5cb12ad perf: debug output 2025-07-17 23:34:35 +08:00
Junyan Qin
2dfa83ff22 fix: .md file parse failed 2025-07-17 23:22:20 +08:00
Junyan Qin
27bb4e1253 feat: combine kb with pipeline 2025-07-17 23:15:13 +08:00
WangCham
45afdbdfbb feat: rag pipeline backend 2025-07-17 15:05:11 +08:00
Dong_master
11e52a3ade feat:add telegram stream 2025-07-17 14:29:30 +08:00
WangCham
4cbbe9e000 fix: delete utils 2025-07-16 23:25:12 +08:00
WangCham
e986a0acaf fix: kb form 2025-07-16 22:50:17 +08:00
Junyan Qin
333ec346ef feat(rag): all APIs ok 2025-07-16 22:15:03 +08:00
Junyan Qin
2f2db4d445 feat(rag): make embedding and retrieving available 2025-07-16 21:17:18 +08:00
WangCham
e31883547d feat: add description for topk 2025-07-16 18:15:27 +08:00
WangCham
88c0066b06 feat: add topk 2025-07-16 17:20:13 +08:00
Junyan Qin
f731115805 chore: stash 2025-07-16 11:31:55 +08:00
Junyan Qin
67bc065ccd chore: stash 2025-07-15 22:09:10 +08:00
Dong_master
d15df3338f feat: add PPIO and OpenRouter LLM streaming; apply remove_think to PPIO think-in-content
fix: giteeai stream added the "<think>" marker to content when remove_think was off
2025-07-15 00:50:42 +08:00
Dong_master
c74cf38e9f feat: add DeepSeek and ModelScope LLM streaming; apply remove_think to giteeai think-in-content 2025-07-14 23:53:55 +08:00
Dong_master
0e68a922bd perf: remove the dify stream option from the ai.yaml config and enable stream in lark.yaml
fix: localagent remove_think bug
2025-07-14 01:42:42 +08:00
Dong_master
4e1d81c9f8 feat:add dify _agent_chat_message streaming 2025-07-14 00:40:02 +08:00
WangCham
199164fc4b fix: delete embedding models file 2025-07-13 23:12:08 +08:00
WangCham
c9c26213df Merge branch 'feat/rag' of github.com:RockChinQ/LangBot into feat/rag 2025-07-13 23:09:41 +08:00
WangCham
b7c57104c4 feat: add embedder 2025-07-13 23:04:03 +08:00
Dong_master
0be08d8882 fix: iterated data was pushed only into resq_messages and resq_message_chain, bloating the data cached in memory and written to logs; also handle the think output of reasoning models
feat: add remove-think handling for deep-reasoning models
feat: streaming support for chatbot and chatflow in dify
2025-07-13 22:41:39 +08:00
Junyan Qin
cbe297dc59 fix: no longer require Query for invoking embedding 2025-07-12 21:23:19 +08:00
Junyan Qin
de76fed25a fix: ui bugs 2025-07-12 18:12:53 +08:00
Dong_master
301509b1db fix: iterated data was pushed only into resq_messages and resq_message_chain, bloating the data cached in memory and written to logs; also add think handling for deep-reasoning models 2025-07-12 18:09:24 +08:00
Junyan Qin
a10e61735d feat(fe): complete kb ui 2025-07-12 18:00:54 +08:00
Junyan Qin
1ef0193028 fix: file being deleted twice 2025-07-12 17:47:53 +08:00
Junyan Qin
1e85d02ae4 perf: adjust ui 2025-07-12 17:29:39 +08:00
Junyan Qin
d78a329aa9 feat(fe): file uploading 2025-07-12 17:15:07 +08:00
WangCham
234b61e2f8 fix: add functions for deleting files 2025-07-12 01:37:44 +08:00
WangCham
9f43097361 fix: ensure File.status is set correctly after storing data to avoid null values 2025-07-12 01:21:02 +08:00
WangCham
f395cac893 fix: embedding and chunking 2025-07-12 01:07:49 +08:00
Junyan Qin
fe122281fd feat(fe): component for available apis 2025-07-11 21:40:42 +08:00
Junyan Qin
6d788cadbc fix: the fucking logger 2025-07-11 21:37:31 +08:00
Junyan Qin
a79a22a74d fix: api bug 2025-07-11 21:30:47 +08:00
Junyan Qin
2ed3b68790 fix: kb get api not contains model uuid 2025-07-11 20:58:51 +08:00
Junyan Qin
bd9331ce62 fix: kb get api format 2025-07-11 20:57:09 +08:00
WangCham
14c161b733 fix: create knowledge base issue 2025-07-11 18:14:03 +08:00
Junyan Qin
815cdf8b4a feat: kb dialog action 2025-07-11 17:22:43 +08:00
Junyan Qin
7d5503dab2 fix: bug 2025-07-11 16:49:55 +08:00
Junyan Qin
9ba1ad5bd3 fix: bugs 2025-07-11 16:38:08 +08:00
Junyan Qin
367d04d0f0 fix: success method bad params 2025-07-11 11:28:43 +08:00
Junyan Qin
75c3ddde19 perf: definitions 2025-07-10 16:45:59 +08:00
WangCham
ac03a2dceb feat: modify the rag.py 2025-07-09 22:09:46 +08:00
Junyan Qin
cd25340826 perf: en comments 2025-07-06 16:08:02 +08:00
Junyan Qin
ebd8e014c6 feat: rag fe framework 2025-07-06 15:52:53 +08:00
Junyan Qin
bef0d73e83 feat: basic definition 2025-07-06 10:25:28 +08:00
Junyan Qin
8d28ace252 perf: ruff check --fix 2025-07-05 21:56:54 +08:00
Junyan Qin
39c062f73e perf: format 2025-07-05 21:56:17 +08:00
Junyan Qin
0e5c9e19e1 feat: complete support_type for 302ai and compshare requester 2025-07-05 21:03:14 +08:00
Junyan Qin
c5b62b6ba3 Merge remote-tracking branch 'wangcham/feat/rag' into feat/rag 2025-07-05 20:16:37 +08:00
Junyan Qin
bbf583ddb5 feat: basic entities of kb 2025-07-05 20:07:27 +08:00
Junyan Qin
22ef1a399e feat: add knowledge base page 2025-07-05 20:07:27 +08:00
Junyan Qin
0733f8878f feat: add sidebar for rag and related i18n 2025-07-05 20:07:27 +08:00
Junyan Qin
f36a61dbb2 feat: add api for uploading files 2025-07-05 20:07:15 +08:00
Junyan Qin
6d8936bd74 feat: add knowledge page 2025-07-05 20:07:15 +08:00
devin-ai-integration[bot]
d2b93b3296 feat: add embeddings model management (#1461)
* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>
2025-07-05 20:07:15 +08:00
WangCham
552fee9bac fix: modify rag database 2025-07-05 18:58:17 +08:00
WangCham
34fe8b324d feat: add functions 2025-07-05 18:58:16 +08:00
WangCham
c4671fbf1c delete ap 2025-07-05 18:58:16 +08:00
WangCham
4bcc06c955 kb 2025-07-05 18:58:16 +08:00
Junyan Qin
348f6d9eaa feat: add api for uploading files 2025-07-05 18:57:24 +08:00
Junyan Qin
157ffdc34c feat: add knowledge page 2025-07-05 18:57:24 +08:00
devin-ai-integration[bot]
c81d5a1a49 feat: add embeddings model management (#1461)
* feat: add embeddings model management backend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* feat: add embeddings model management frontend support

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* chore: revert HttpClient URL to production setting

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* refactor: integrate embeddings models into models page with tabs

Co-Authored-By: Junyan Qin <Chin> <rockchinq@gmail.com>

* perf: move files

* perf: remove `s`

* feat: allow requester to declare supported types in manifest

* feat(embedding): delete dimension and encoding format

* feat: add extra_args for embedding models

* perf: i18n ref

* fix: linter err

* fix: lint err

* fix: linter err

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Junyan Qin <Chin> <rockchinq@gmail.com>
2025-07-05 18:57:23 +08:00
Dong_master
68cdd163d3 The basic streaming flow now works; fixed the problem caused by the conflict between yield and return 2025-07-04 03:26:44 +08:00
fdc
4005a8a3e2 Added streaming for Lark, but there still seem to be problems 2025-07-03 22:58:17 +08:00
fdc
542409d48d Merge branch 'feat/streaming' of github.com:fdc310/LangBot into streaming_feature 2025-07-02 14:09:01 +08:00
zejiewang
3c6e858c35 feat:support dify message streaming output (#1437)
* fix:lark adapter listeners init problem

* feat:support dify streaming mode

* feat:remove some log

* fix(bot form): field desc missing

* fix: not compatible with chatflow

---------

Co-authored-by: wangzejie <wangzejie@meicai.cn>
Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-07-02 11:07:31 +08:00
fdc
8670ae82a3 fix: correct a typo that wrote message_id into reply_message 2025-07-02 10:49:50 +08:00
fdc
48c9d66ab8 Streaming changes in chat 2025-07-01 18:03:05 +08:00
fdc
0eac9135c0 feat: implement streaming message handling support 2025-06-30 17:58:18 +08:00
211 changed files with 11169 additions and 1885 deletions

3
.gitignore vendored
View File

@@ -42,4 +42,5 @@ botpy.log*
test.py
/web_ui
.venv/
uv.lock
uv.lock
/test

README.md
View File

@@ -6,14 +6,16 @@
<div align="center">
简体中文 / [English](README_EN.md) / [日本語](README_JP.md) / (PR for your language)
<a href="https://hellogithub.com/repository/langbot-app/LangBot" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=5ce8ae2aa4f74316bf393b57b952433c&claim_uid=gtmc6YWjMZkT21R" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
[English](README_EN.md) / 简体中文 / [繁體中文](README_TW.md) / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/langbot-app/LangBot/star/badge.svg)](https://gitcode.com/langbot-app/LangBot)
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
<a href="https://langbot.app">项目主页</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文档</a>
@@ -25,12 +27,7 @@
</p>
## ✨ 特性
- 💬 大模型对话、Agent:支持多种大模型,适配群聊和私聊;具有多轮对话、工具调用、多模态能力,并深度适配 [Dify](https://dify.ai)。目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
LangBot 是一个开源的大语言模型原生即时通信机器人开发平台,旨在提供开箱即用的 IM 机器人开发体验,具有 Agent、RAG、MCP 等多种 LLM 应用功能,适配全球主流即时通信平台,并提供丰富的 API 接口,支持自定义开发。
## 📦 开始使用
@@ -64,23 +61,25 @@ docker compose up -d
直接使用发行版运行,查看文档[手动部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
## 📸 效果展示
## 😎 保持更新
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="450px"/>
点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="450px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="450px"/>
## ✨ 特性
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="450px"/>
- 💬 大模型对话、Agent:支持多种大模型,适配群聊和私聊;具有多轮对话、工具调用、多模态、流式输出能力,自带 RAG(知识库)实现,并深度适配 [Dify](https://dify.ai)。
- 🤖 多平台支持:目前支持 QQ、QQ频道、企业微信、个人微信、飞书、Discord、Telegram 等平台。
- 🛠️ 高稳定性、功能完备:原生支持访问控制、限速、敏感词过滤等机制;配置简单,支持多种部署方式。支持多流水线配置,不同机器人用于不同应用场景。
- 🧩 插件扩展、活跃社区:支持事件驱动、组件扩展等插件机制;适配 Anthropic [MCP 协议](https://modelcontextprotocol.io/);目前已有数百个插件。
- 😻 Web 管理面板:支持通过浏览器管理 LangBot 实例,不再需要手动编写配置文件。
<img alt="回复效果(带有联网插件)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
详细规格特性请访问[文档](https://docs.langbot.app/zh/insight/features.html)。
- WebUI Demo: https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意:仅展示webui效果,公开环境,请不要在其中填入您的任何敏感信息。
## 🔌 组件兼容性
或访问 demo 环境:https://demo.langbot.dev/
- 登录信息:邮箱:`demo@langbot.app` 密码:`langbot123456`
- 注意:仅展示 WebUI 效果,公开环境,请不要在其中填入您的任何敏感信息。
### 消息平台
@@ -97,10 +96,6 @@ docker compose up -d
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 正在开发中
### 大模型能力
@@ -114,6 +109,7 @@ docker compose up -d
| [智谱AI](https://open.bigmodel.cn/) | ✅ | |
| [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [胜算云](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 大模型和 GPU 资源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
@@ -147,9 +143,3 @@ docker compose up -d
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 保持更新
点击仓库右上角 Star 和 Watch 按钮,获取最新动态。
![star gif](https://docs.langbot.app/star.gif)

README_EN.md
View File

@@ -5,7 +5,7 @@
<div align="center">
[简体中文](README.md) / English / [日本語](README_JP.md) / (PR for your language)
English / [简体中文](README.md) / [繁體中文](README_TW.md) / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
@@ -21,12 +21,7 @@
</p>
## ✨ Features
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, and multi-modal capabilities. Deeply integrates with [Dify](https://dify.ai). Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 [New] Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
LangBot is an open-source LLM native instant messaging robot development platform, aiming to provide out-of-the-box IM robot development experience, with Agent, RAG, MCP and other LLM application functions, adapting to global instant messaging platforms, and providing rich API interfaces, supporting custom development.
## 📦 Getting Started
@@ -60,23 +55,25 @@ Community contributed Zeabur template.
Directly use the released version to run, see the [Manual Deployment](https://docs.langbot.app/en/deploy/langbot/manual.html) documentation.
## 📸 Demo
## 😎 Stay Ahead
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
## ✨ Features
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
- 💬 Chat with LLM / Agent: Supports multiple LLMs, adapt to group chats and private chats; Supports multi-round conversations, tool calls, multi-modal, and streaming output capabilities. Built-in RAG (knowledge base) implementation, and deeply integrates with [Dify](https://dify.ai).
- 🤖 Multi-platform Support: Currently supports QQ, QQ Channel, WeCom, personal WeChat, Lark, DingTalk, Discord, Telegram, etc.
- 🛠️ High Stability, Feature-rich: Native access control, rate limiting, sensitive word filtering, etc. mechanisms; Easy to use, supports multiple deployment methods. Supports multiple pipeline configurations, different bots can be used for different scenarios.
- 🧩 Plugin Extension, Active Community: Support event-driven, component extension, etc. plugin mechanisms; Integrate Anthropic [MCP protocol](https://modelcontextprotocol.io/); Currently has hundreds of plugins.
- 😻 Web UI: Support management LangBot instance through the browser. No need to manually write configuration files.
<img alt="Reply Effect (with Internet Plugin)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
For more detailed specifications, please refer to the [documentation](https://docs.langbot.app/en/insight/features.html).
- WebUI Demo: https://demo.langbot.dev/
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: Only the WebUI effect is shown, please do not fill in any sensitive information in the public environment.
## 🔌 Component Compatibility
Or visit the demo environment: https://demo.langbot.dev/
- Login information: Email: `demo@langbot.app` Password: `langbot123456`
- Note: For WebUI demo only, please do not fill in any sensitive information in the public environment.
### Message Platform
@@ -92,10 +89,6 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: In development
### LLMs
@@ -110,6 +103,7 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | LLM and GPU resource platform |
| [Dify](https://dify.ai) | ✅ | LLMOps platform |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLM and GPU resource platform |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLM gateway(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |
@@ -128,9 +122,3 @@ Thank you for the following [code contributors](https://github.com/langbot-app/L
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 Stay Ahead
Click the Star and Watch button in the upper right corner of the repository to get the latest updates.
![star gif](https://docs.langbot.app/star.gif)

README_JP.md
View File

@@ -5,7 +5,7 @@
<div align="center">
[简体中文](README.md) / [English](README_EN.md) / 日本語 / (PR for your language)
[English](README_EN.md) / [简体中文](README.md) / [繁體中文](README_TW.md) / 日本語 / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
@@ -21,12 +21,7 @@
</p>
## ✨ 機能
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル機能をサポート。 [Dify](https://dify.ai) と深く統合。現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。適配 Anthropic [MCP プロトコル](https://modelcontextprotocol.io/);豊富なエコシステム、現在数百のプラグインが存在。
- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
LangBot は、エージェント、RAG、MCP などの LLM アプリケーション機能を備えた、オープンソースの LLM ネイティブのインスタントメッセージングロボット開発プラットフォームです。世界中のインスタントメッセージングプラットフォームに適応し、豊富な API インターフェースを提供し、カスタム開発をサポートします。
## 📦 始め方
@@ -42,7 +37,7 @@ http://localhost:5300 にアクセスして使用を開始します。
詳細なドキュメントは[Dockerデプロイ](https://docs.langbot.app/en/deploy/langbot/docker.html)を参照してください。
#### BTPanelでのワンクリックデプロイ
#### Panelでのワンクリックデプロイ
LangBotはBTPanelにリストされています。BTPanelをインストールしている場合は、[ドキュメント](https://docs.langbot.app/en/deploy/langbot/one-click/bt.html)を使用して使用できます。
@@ -60,23 +55,25 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
リリースバージョンを直接使用して実行します。[手動デプロイ](https://docs.langbot.app/en/deploy/langbot/manual.html)のドキュメントを参照してください。
## 📸 デモ
## 😎 最新情報を入手
<img alt="bots" src="https://docs.langbot.app/webui/bot-page.png" width="400px"/>
リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
<img alt="bots" src="https://docs.langbot.app/webui/create-model.png" width="400px"/>
![star gif](https://docs.langbot.app/star.gif)
<img alt="bots" src="https://docs.langbot.app/webui/edit-pipeline.png" width="400px"/>
## ✨ 機能
<img alt="bots" src="https://docs.langbot.app/webui/plugin-market.png" width="400px"/>
- 💬 LLM / エージェントとのチャット: 複数のLLMをサポートし、グループチャットとプライベートチャットに対応。マルチラウンドの会話、ツールの呼び出し、マルチモーダル、ストリーミング出力機能をサポート、RAG(知識ベース)を組み込み、[Dify](https://dify.ai) と深く統合。
- 🤖 多プラットフォーム対応: 現在、QQ、QQ チャンネル、WeChat、個人 WeChat、Lark、DingTalk、Discord、Telegram など、複数のプラットフォームをサポートしています。
- 🛠️ 高い安定性、豊富な機能: ネイティブのアクセス制御、レート制限、敏感な単語のフィルタリングなどのメカニズムをサポート。使いやすく、複数のデプロイ方法をサポート。複数のパイプライン設定をサポートし、異なるボットを異なる用途に使用できます。
- 🧩 プラグイン拡張、活発なコミュニティ: イベント駆動、コンポーネント拡張などのプラグインメカニズムをサポート。適配 Anthropic [MCP プロトコル](https://modelcontextprotocol.io/);豊富なエコシステム、現在数百のプラグインが存在。
- 😻 Web UI: ブラウザを通じてLangBotインスタンスを管理することをサポート。
<img alt="返信効果(インターネットプラグイン付き)" src="https://docs.langbot.app/QChatGPT-0516.png" width="500px"/>
詳細な仕様については、[ドキュメント](https://docs.langbot.app/en/insight/features.html)を参照してください。
- WebUIデモ: https://demo.langbot.dev/
- ログイン情報: メール: `demo@langbot.app` パスワード: `langbot123456`
- 注意: WebUIの効果のみを示しています。公開環境では機密情報を入力しないでください。
## 🔌 コンポーネントの互換性
または、デモ環境にアクセスしてください: https://demo.langbot.dev/
- ログイン情報: メール: `demo@langbot.app` パスワード: `langbot123456`
- 注意: WebUI のデモンストレーションのみの場合、公開環境では機密情報を入力しないでください。
### メッセージプラットフォーム
@@ -92,10 +89,6 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
🚧: 開発中
### LLMs
@@ -109,6 +102,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLMとGPUリソースプラットフォーム |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLMゲートウェイ(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOpsプラットフォーム |
@@ -128,9 +122,3 @@ LangBot への貢献に対して、以下の [コード貢献者](https://github
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>
## 😎 最新情報を入手
リポジトリの右上にある Star と Watch ボタンをクリックして、最新の更新を取得してください。
![star gif](https://docs.langbot.app/star.gif)

140
README_TW.md Normal file
View File

@@ -0,0 +1,140 @@
<p align="center">
<a href="https://langbot.app">
<img src="https://docs.langbot.app/social_zh.png" alt="LangBot"/>
</a>
<div align="center"><a href="https://hellogithub.com/repository/langbot-app/LangBot" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=5ce8ae2aa4f74316bf393b57b952433c&claim_uid=gtmc6YWjMZkT21R" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
[English](README_EN.md) / [简体中文](README.md) / 繁體中文 / [日本語](README_JP.md) / (PR for your language)
[![Discord](https://img.shields.io/discord/1335141740050649118?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb)](https://discord.gg/wdNEHETs87)
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-966235608-blue)](https://qm.qq.com/q/JLi38whHum)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/langbot-app/LangBot)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/langbot-app/LangBot)](https://github.com/langbot-app/LangBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10 ~ 3.13 -blue.svg" alt="python">
[![star](https://gitcode.com/RockChinQ/LangBot/star/badge.svg)](https://gitcode.com/RockChinQ/LangBot)
<a href="https://langbot.app">主頁</a>
<a href="https://docs.langbot.app/zh/insight/guide.html">部署文件</a>
<a href="https://docs.langbot.app/zh/plugin/plugin-intro.html">外掛介紹</a>
<a href="https://github.com/langbot-app/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交外掛</a>
</div>
</p>
LangBot 是一個開源的大語言模型原生即時通訊機器人開發平台,旨在提供開箱即用的 IM 機器人開發體驗,具有 Agent、RAG、MCP 等多種 LLM 應用功能,適配全球主流即時通訊平台,並提供豐富的 API 介面,支援自定義開發。
## 📦 開始使用
#### Docker Compose 部署
```bash
git clone https://github.com/langbot-app/LangBot
cd LangBot
docker compose up -d
```
訪問 http://localhost:5300 即可開始使用。
詳細文件[Docker 部署](https://docs.langbot.app/zh/deploy/langbot/docker.html)。
#### 寶塔面板部署
已上架寶塔面板,若您已安裝寶塔面板,可以根據[文件](https://docs.langbot.app/zh/deploy/langbot/one-click/bt.html)使用。
#### Zeabur 雲端部署
社群貢獻的 Zeabur 模板。
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/zh-CN/templates/ZKTBDH)
#### Railway 雲端部署
[![Deploy on Railway](https://railway.com/button.svg)](https://railway.app/template/yRrAyL?referralCode=vogKPF)
#### 手動部署
直接使用發行版運行,查看文件[手動部署](https://docs.langbot.app/zh/deploy/langbot/manual.html)。
## 😎 保持更新
點擊倉庫右上角 Star 和 Watch 按鈕,獲取最新動態。
![star gif](https://docs.langbot.app/star.gif)
## ✨ 特性
- 💬 大模型對話、Agent:支援多種大模型,適配群聊和私聊;具有多輪對話、工具調用、多模態、流式輸出能力,自帶 RAG(知識庫)實現,並深度適配 [Dify](https://dify.ai)。
- 🤖 多平台支援:目前支援 QQ、QQ頻道、企業微信、個人微信、飛書、Discord、Telegram 等平台。
- 🛠️ 高穩定性、功能完備:原生支援訪問控制、限速、敏感詞過濾等機制;配置簡單,支援多種部署方式。支援多流水線配置,不同機器人用於不同應用場景。
- 🧩 外掛擴展、活躍社群:支援事件驅動、組件擴展等外掛機制;適配 Anthropic [MCP 協議](https://modelcontextprotocol.io/);目前已有數百個外掛。
- 😻 Web 管理面板:支援通過瀏覽器管理 LangBot 實例,不再需要手動編寫配置文件。
詳細規格特性請訪問[文件](https://docs.langbot.app/zh/insight/features.html)。
或訪問 demo 環境:https://demo.langbot.dev/
- 登入資訊:郵箱:`demo@langbot.app` 密碼:`langbot123456`
- 注意:僅展示 WebUI 效果,公開環境,請不要在其中填入您的任何敏感資訊。
### 訊息平台
| 平台 | 狀態 | 備註 |
| --- | --- | --- |
| QQ 個人號 | ✅ | QQ 個人號私聊、群聊 |
| QQ 官方機器人 | ✅ | QQ 官方機器人,支援頻道、私聊、群聊 |
| 微信 | ✅ | |
| 企微對外客服 | ✅ | |
| 微信公眾號 | ✅ | |
| Lark | ✅ | |
| DingTalk | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | ✅ | |
### 大模型能力
| 模型 | 狀態 | 備註 |
| --- | --- | --- |
| [OpenAI](https://platform.openai.com/) | ✅ | 可接入任何 OpenAI 介面格式模型 |
| [DeepSeek](https://www.deepseek.com/) | ✅ | |
| [Moonshot](https://www.moonshot.cn/) | ✅ | |
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智譜AI](https://open.bigmodel.cn/) | ✅ | |
| [勝算雲](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 大模型和 GPU 資源平台 |
| [優雲智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
| [Ollama](https://ollama.com/) | ✅ | 本地大模型運行平台 |
| [LMStudio](https://lmstudio.ai/) | ✅ | 本地大模型運行平台 |
| [GiteeAI](https://ai.gitee.com/) | ✅ | 大模型介面聚合平台 |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
| [阿里雲百煉](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
| [MCP](https://modelcontextprotocol.io/) | ✅ | 支援通過 MCP 協議獲取工具 |
### TTS
| 平台/模型 | 備註 |
| --- | --- |
| [FishAudio](https://fish.audio/zh-CN/discovery/) | [外掛](https://github.com/the-lazy-me/NewChatVoice) |
| [海豚 AI](https://www.ttson.cn/?source=thelazy) | [外掛](https://github.com/the-lazy-me/NewChatVoice) |
| [AzureTTS](https://portal.azure.com/) | [外掛](https://github.com/Ingnaryk/LangBot_AzureTTS) |
### 文生圖
| 平台/模型 | 備註 |
| --- | --- |
| 阿里雲百煉 | [外掛](https://github.com/Thetail001/LangBot_BailianTextToImagePlugin)
## 😘 社群貢獻
感謝以下[程式碼貢獻者](https://github.com/langbot-app/LangBot/graphs/contributors)和社群裡其他成員對 LangBot 的貢獻:
<a href="https://github.com/langbot-app/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langbot-app/LangBot" />
</a>

View File

@@ -253,6 +253,43 @@ class DingTalkClient:
await self.logger.error(f'failed to send proactive massage to group: {traceback.format_exc()}')
raise Exception(f'failed to send proactive massage to group: {traceback.format_exc()}')
async def create_and_card(
self, temp_card_id: str, incoming_message: dingtalk_stream.ChatbotMessage, quote_origin: bool = False
):
content_key = 'content'
card_data = {content_key: ''}
card_instance = dingtalk_stream.AICardReplier(self.client, incoming_message)
# print(card_instance)
# 先投放卡片: https://open.dingtalk.com/document/orgapp/create-and-deliver-cards
card_instance_id = await card_instance.async_create_and_deliver_card(
temp_card_id,
card_data,
)
return card_instance, card_instance_id
async def send_card_message(self, card_instance, card_instance_id: str, content: str, is_final: bool):
content_key = 'content'
try:
await card_instance.async_streaming(
card_instance_id,
content_key=content_key,
content_value=content,
append=False,
finished=is_final,
failed=False,
)
except Exception as e:
self.logger.exception(e)
await card_instance.async_streaming(
card_instance_id,
content_key=content_key,
content_value='',
append=False,
finished=is_final,
failed=True,
)
async def start(self):
"""启动 WebSocket 连接,监听消息"""
await self.client.start()
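
A note on how the two new methods fit together: create_and_card delivers an empty AI card once and returns its handles, and send_card_message then repeatedly replaces the card body (the diff passes `append=False`) until a final call sets `finished=True`. A minimal, hypothetical usage sketch; the surrounding `client`, `incoming_message`, `template_id` and `deltas` names are assumptions, not code from this PR:

```python
async def stream_reply_to_card(client, incoming_message, template_id, deltas):
    # Deliver the empty AI card first; keep the replier and card instance id.
    card_instance, card_instance_id = await client.create_and_card(template_id, incoming_message)

    buffer = ''
    async for delta in deltas:  # deltas: async iterator of text chunks from the LLM
        buffer += delta
        # append=False upstream means each call replaces the card body,
        # so always send the accumulated text, not just the newest chunk.
        await client.send_card_message(card_instance, card_instance_id, buffer, is_final=False)

    # The final call marks the card as finished on DingTalk's side.
    await client.send_card_message(card_instance, card_instance_id, buffer, is_final=True)
```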

View File

@@ -104,7 +104,7 @@ class QQOfficialClient:
return {'code': 0, 'message': 'success'}
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
return {'error': str(e)}, 400
async def run_task(self, host: str, port: int, *args, **kwargs):
@@ -168,7 +168,6 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/v2/users/' + user_openid + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -193,7 +192,6 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/v2/groups/' + group_openid + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -209,7 +207,7 @@ class QQOfficialClient:
if response.status_code == 200:
return
else:
await self.logger.error(f"发送群聊消息失败:{response.json()}")
await self.logger.error(f'发送群聊消息失败:{response.json()}')
raise Exception(response.read().decode())
async def send_channle_group_text_msg(self, channel_id: str, content: str, msg_id: str):
@@ -217,7 +215,6 @@ class QQOfficialClient:
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/channels/' + channel_id + '/messages'
async with httpx.AsyncClient() as client:
headers = {
@@ -240,7 +237,6 @@ class QQOfficialClient:
"""发送频道私聊消息"""
if not await self.check_access_token():
await self.get_access_token()
url = self.base_url + '/dms/' + guild_id + '/messages'
async with httpx.AsyncClient() as client:

View File

@@ -34,7 +34,6 @@ class SlackClient:
if self.bot_user_id and bot_user_id == self.bot_user_id:
return jsonify({'status': 'ok'})
# 处理私信
if data and data.get('event', {}).get('channel_type') in ['im']:
@@ -52,7 +51,7 @@ class SlackClient:
return jsonify({'status': 'ok'})
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
raise (e)
async def _handle_message(self, event: SlackEvent):
@@ -82,7 +81,7 @@ class SlackClient:
self.bot_user_id = response['message']['bot_id']
return
except Exception as e:
await self.logger.error(f"Error in send_message: {e}")
await self.logger.error(f'Error in send_message: {e}')
raise e
async def send_message_to_one(self, text: str, user_id: str):
@@ -93,7 +92,7 @@ class SlackClient:
return
except Exception as e:
await self.logger.error(f"Error in send_message: {traceback.format_exc()}")
await self.logger.error(f'Error in send_message: {traceback.format_exc()}')
raise e
async def run_task(self, host: str, port: int, *args, **kwargs):

View File

@@ -1 +1 @@
from .client import WeChatPadClient
from .client import WeChatPadClient as WeChatPadClient

View File

@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
class ChatRoomApi:
@@ -7,8 +7,6 @@ class ChatRoomApi:
self.token = token
def get_chatroom_member_detail(self, chatroom_name):
params = {
"ChatRoomName": chatroom_name
}
params = {'ChatRoomName': chatroom_name}
url = self.base_url + '/group/GetChatroomMemberDetail'
return post_json(url, token=self.token, data=params)

View File

@@ -1,32 +1,23 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
import httpx
import base64
class DownloadApi:
def __init__(self, base_url, token):
self.base_url = base_url
self.token = token
def send_download(self, aeskey, file_type, file_url):
json_data = {
"AesKey": aeskey,
"FileType": file_type,
"FileURL": file_url
}
url = self.base_url + "/message/SendCdnDownload"
json_data = {'AesKey': aeskey, 'FileType': file_type, 'FileURL': file_url}
url = self.base_url + '/message/SendCdnDownload'
return post_json(url, token=self.token, data=json_data)
def get_msg_voice(self,buf_id, length, new_msgid):
json_data = {
"Bufid": buf_id,
"Length": length,
"NewMsgId": new_msgid,
"ToUserName": ""
}
url = self.base_url + "/message/GetMsgVoice"
def get_msg_voice(self, buf_id, length, new_msgid):
json_data = {'Bufid': buf_id, 'Length': length, 'NewMsgId': new_msgid, 'ToUserName': ''}
url = self.base_url + '/message/GetMsgVoice'
return post_json(url, token=self.token, data=json_data)
async def download_url_to_base64(self, download_url):
async with httpx.AsyncClient() as client:
response = await client.get(download_url)
@@ -36,4 +27,4 @@ class DownloadApi:
base64_str = base64.b64encode(file_bytes).decode('utf-8')  # return as a string
return base64_str
else:
raise Exception('Failed to fetch the file')
raise Exception('Failed to fetch the file')
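A minimal usage sketch for the rewritten DownloadApi, assuming a reachable WeChatPad gateway; the base URL and token are placeholders:

```python
import asyncio

async def main():
    api = DownloadApi(base_url='http://127.0.0.1:1239', token='<token>')  # hypothetical values
    b64 = await api.download_url_to_base64('https://example.com/image.jpg')
    print(b64[:64], '...')

asyncio.run(main())
```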

View File

@@ -1,11 +1,6 @@
from libs.wechatpad_api.util.http_util import post_json,async_request
from typing import List, Dict, Any, Optional
class FriendApi:
"""联系人API类处理所有与联系人相关的操作"""
def __init__(self, base_url: str, token: str):
self.base_url = base_url
self.token = token

View File

@@ -1,37 +1,34 @@
from libs.wechatpad_api.util.http_util import async_request,post_json,get_json
from libs.wechatpad_api.util.http_util import post_json, get_json
class LoginApi:
def __init__(self, base_url: str, token: str = None, admin_key: str = None):
'''
"""
Args:
base_url: base URL
token: token
admin_key: admin key
'''
"""
self.base_url = base_url
self.token = token
# self.admin_key = admin_key
def get_token(self, admin_key, day: int=365):
def get_token(self, admin_key, day: int = 365):
# Get a regular token
url = f"{self.base_url}/admin/GenAuthKey1"
json_data = {
"Count": 1,
"Days": day
}
url = f'{self.base_url}/admin/GenAuthKey1'
json_data = {'Count': 1, 'Days': day}
return post_json(base_url=url, token=admin_key, data=json_data)
def get_login_qr(self, Proxy: str = ""):
'''
def get_login_qr(self, Proxy: str = ''):
"""
Args:
Proxy: proxy to use when connecting from another region
Returns: JSON data
'''
"""
"""
{
@@ -49,54 +46,37 @@ class LoginApi:
}
"""
#Get the login QR code
url = f"{self.base_url}/login/GetLoginQrCodeNew"
# Get the login QR code
url = f'{self.base_url}/login/GetLoginQrCodeNew'
check = False
if Proxy != "":
if Proxy != '':
check = True
json_data = {
"Check": check,
"Proxy": Proxy
}
json_data = {'Check': check, 'Proxy': Proxy}
return post_json(base_url=url, token=self.token, data=json_data)
def get_login_status(self):
# Get login status
url = f'{self.base_url}/login/GetLoginStatus'
return get_json(base_url=url, token=self.token)
def logout(self):
# Log out
url = f'{self.base_url}/login/LogOut'
return post_json(base_url=url, token=self.token)
def wake_up_login(self, Proxy: str = ""):
def wake_up_login(self, Proxy: str = ''):
# Wake-up login
url = f'{self.base_url}/login/WakeUpLogin'
check = False
if Proxy != "":
if Proxy != '':
check = True
json_data = {
"Check": check,
"Proxy": ""
}
json_data = {'Check': check, 'Proxy': ''}
return post_json(base_url=url, token=self.token, data=json_data)
def login(self,admin_key):
def login(self, admin_key):
login_status = self.get_login_status()
if login_status["Code"] == 300 and login_status["Text"] == "你已退出微信":
print("token已经失效重新获取")
if login_status['Code'] == 300 and login_status['Text'] == '你已退出微信':
print('token expired, fetching a new one')
token_data = self.get_token(admin_key)
self.token = token_data["Data"][0]
self.token = token_data['Data'][0]

View File

@@ -1,5 +1,4 @@
from libs.wechatpad_api.util.http_util import async_request, post_json
from libs.wechatpad_api.util.http_util import post_json
class MessageApi:
@@ -7,8 +6,8 @@ class MessageApi:
self.base_url = base_url
self.token = token
def post_text(self, to_wxid, content, ats: list= []):
'''
def post_text(self, to_wxid, content, ats: list = []):
"""
Args:
app_id: WeChat ID
@@ -18,106 +17,64 @@ class MessageApi:
Returns:
'''
url = self.base_url + "/message/SendTextMessage"
"""
url = self.base_url + '/message/SendTextMessage'
"""发送文字消息"""
json_data = {
"MsgItem": [
{
"AtWxIDList": ats,
"ImageContent": "",
"MsgType": 0,
"TextContent": content,
"ToUserName": to_wxid
}
]
}
return post_json(base_url=url, token=self.token, data=json_data)
'MsgItem': [
{'AtWxIDList': ats, 'ImageContent': '', 'MsgType': 0, 'TextContent': content, 'ToUserName': to_wxid}
]
}
return post_json(base_url=url, token=self.token, data=json_data)
def post_image(self, to_wxid, img_url, ats: list= []):
def post_image(self, to_wxid, img_url, ats: list = []):
"""发送图片消息"""
# 这里好像可以尝试发送多个暂时未测试
json_data = {
"MsgItem": [
{
"AtWxIDList": ats,
"ImageContent": img_url,
"MsgType": 0,
"TextContent": '',
"ToUserName": to_wxid
}
'MsgItem': [
{'AtWxIDList': ats, 'ImageContent': img_url, 'MsgType': 0, 'TextContent': '', 'ToUserName': to_wxid}
]
}
url = self.base_url + "/message/SendImageMessage"
url = self.base_url + '/message/SendImageMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def post_voice(self, to_wxid, voice_data, voice_forma, voice_duration):
"""发送语音消息"""
json_data = {
"ToUserName": to_wxid,
"VoiceData": voice_data,
"VoiceFormat": voice_forma,
"VoiceSecond": voice_duration
'ToUserName': to_wxid,
'VoiceData': voice_data,
'VoiceFormat': voice_forma,
'VoiceSecond': voice_duration,
}
url = self.base_url + "/message/SendVoice"
url = self.base_url + '/message/SendVoice'
return post_json(base_url=url, token=self.token, data=json_data)
def post_name_card(self, alias, to_wxid, nick_name, name_card_wxid, flag):
"""发送名片消息"""
param = {
"CardAlias": alias,
"CardFlag": flag,
"CardNickName": nick_name,
"CardWxId": name_card_wxid,
"ToUserName": to_wxid
'CardAlias': alias,
'CardFlag': flag,
'CardNickName': nick_name,
'CardWxId': name_card_wxid,
'ToUserName': to_wxid,
}
url = f"{self.base_url}/message/ShareCardMessage"
url = f'{self.base_url}/message/ShareCardMessage'
return post_json(base_url=url, token=self.token, data=param)
def post_emoji(self, to_wxid, emoji_md5, emoji_size:int=0):
def post_emoji(self, to_wxid, emoji_md5, emoji_size: int = 0):
"""发送emoji消息"""
json_data = {
"EmojiList": [
{
"EmojiMd5": emoji_md5,
"EmojiSize": emoji_size,
"ToUserName": to_wxid
}
]
}
url = f"{self.base_url}/message/SendEmojiMessage"
json_data = {'EmojiList': [{'EmojiMd5': emoji_md5, 'EmojiSize': emoji_size, 'ToUserName': to_wxid}]}
url = f'{self.base_url}/message/SendEmojiMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def post_app_msg(self, to_wxid,xml_data, contenttype:int=0):
def post_app_msg(self, to_wxid, xml_data, contenttype: int = 0):
"""发送appmsg消息"""
json_data = {
"AppList": [
{
"ContentType": contenttype,
"ContentXML": xml_data,
"ToUserName": to_wxid
}
]
}
url = f"{self.base_url}/message/SendAppMessage"
json_data = {'AppList': [{'ContentType': contenttype, 'ContentXML': xml_data, 'ToUserName': to_wxid}]}
url = f'{self.base_url}/message/SendAppMessage'
return post_json(base_url=url, token=self.token, data=json_data)
def revoke_msg(self, to_wxid, msg_id, new_msg_id, create_time):
"""撤回消息"""
param = {
"ClientMsgId": msg_id,
"CreateTime": create_time,
"NewMsgId": new_msg_id,
"ToUserName": to_wxid
}
url = f"{self.base_url}/message/RevokeMsg"
return post_json(base_url=url, token=self.token, data=param)
param = {'ClientMsgId': msg_id, 'CreateTime': create_time, 'NewMsgId': new_msg_id, 'ToUserName': to_wxid}
url = f'{self.base_url}/message/RevokeMsg'
return post_json(base_url=url, token=self.token, data=param)

View File

@@ -12,12 +12,9 @@ class UserApi:
return get_json(base_url=url, token=self.token)
def get_qr_code(self, recover:bool=True, style:int=8):
def get_qr_code(self, recover: bool = True, style: int = 8):
"""获取自己的二维码"""
param = {
"Recover": recover,
"Style": style
}
param = {'Recover': recover, 'Style': style}
url = f'{self.base_url}/user/GetMyQRCode'
return post_json(base_url=url, token=self.token, data=param)
@@ -26,12 +23,8 @@ class UserApi:
url = f'{self.base_url}/equipment/GetSafetyInfo'
return post_json(base_url=url, token=self.token)
async def update_head_img(self, head_img_base64):
async def update_head_img(self, head_img_base64):
"""修改头像"""
param = {
"Base64": head_img_base64
}
param = {'Base64': head_img_base64}
url = f'{self.base_url}/user/UploadHeadImage'
return await async_request(base_url=url, token_key=self.token, json=param)
return await async_request(base_url=url, token_key=self.token, json=param)

View File

@@ -1,4 +1,3 @@
from libs.wechatpad_api.api.login import LoginApi
from libs.wechatpad_api.api.friend import FriendApi
from libs.wechatpad_api.api.message import MessageApi
@@ -7,9 +6,6 @@ from libs.wechatpad_api.api.downloadpai import DownloadApi
from libs.wechatpad_api.api.chatroom import ChatRoomApi
class WeChatPadClient:
def __init__(self, base_url, token, logger=None):
self._login_api = LoginApi(base_url, token)
@@ -20,16 +16,16 @@ class WeChatPadClient:
self._chatroom_api = ChatRoomApi(base_url, token)
self.logger = logger
def get_token(self,admin_key, day: int):
'''Get token'''
def get_token(self, admin_key, day: int):
"""Get token"""
return self._login_api.get_token(admin_key, day)
def get_login_qr(self, Proxy:str=""):
def get_login_qr(self, Proxy: str = ''):
"""登录二维码"""
return self._login_api.get_login_qr(Proxy=Proxy)
def awaken_login(self, Proxy:str=""):
'''Wake-up login'''
def awaken_login(self, Proxy: str = ''):
"""Wake-up login"""
return self._login_api.wake_up_login(Proxy=Proxy)
def log_out(self):
@@ -40,59 +36,57 @@ class WeChatPadClient:
"""获取登录状态"""
return self._login_api.get_login_status()
def send_text_message(self, to_wxid, message, ats: list=[]):
def send_text_message(self, to_wxid, message, ats: list = []):
"""发送文本消息"""
return self._message_api.post_text(to_wxid, message, ats)
return self._message_api.post_text(to_wxid, message, ats)
def send_image_message(self, to_wxid, img_url, ats: list=[]):
def send_image_message(self, to_wxid, img_url, ats: list = []):
"""发送图片消息"""
return self._message_api.post_image(to_wxid, img_url, ats)
return self._message_api.post_image(to_wxid, img_url, ats)
def send_voice_message(self, to_wxid, voice_data, voice_forma, voice_duration):
"""发送音频消息"""
return self._message_api.post_voice(to_wxid, voice_data, voice_forma, voice_duration)
return self._message_api.post_voice(to_wxid, voice_data, voice_forma, voice_duration)
def send_app_message(self, to_wxid, app_message, type):
"""发送app消息"""
return self._message_api.post_app_msg(to_wxid, app_message, type)
return self._message_api.post_app_msg(to_wxid, app_message, type)
def send_emoji_message(self, to_wxid, emoji_md5, emoji_size):
"""发送emoji消息"""
return self._message_api.post_emoji(to_wxid,emoji_md5,emoji_size)
return self._message_api.post_emoji(to_wxid, emoji_md5, emoji_size)
def revoke_msg(self, to_wxid, msg_id, new_msg_id, create_time):
"""撤回消息"""
return self._message_api.revoke_msg(to_wxid, msg_id, new_msg_id, create_time)
return self._message_api.revoke_msg(to_wxid, msg_id, new_msg_id, create_time)
def get_profile(self):
"""获取用户信息"""
return self._user_api.get_profile()
def get_qr_code(self, recover:bool=True, style:int=8):
def get_qr_code(self, recover: bool = True, style: int = 8):
"""获取用户二维码"""
return self._user_api.get_qr_code(recover=recover, style=style)
return self._user_api.get_qr_code(recover=recover, style=style)
def get_safety_info(self):
"""获取设备信息"""
return self._user_api.get_safety_info()
return self._user_api.get_safety_info()
def update_head_img(self, head_img_base64):
def update_head_img(self, head_img_base64):
"""上传用户头像"""
return self._user_api.update_head_img(head_img_base64)
return self._user_api.update_head_img(head_img_base64)
def cdn_download(self, aeskey, file_type, file_url):
"""cdn下载"""
return self._download_api.send_download( aeskey, file_type, file_url)
return self._download_api.send_download(aeskey, file_type, file_url)
def get_msg_voice(self,buf_id, length, msgid):
def get_msg_voice(self, buf_id, length, msgid):
"""下载语音"""
return self._download_api.get_msg_voice(buf_id, length, msgid)
async def download_base64(self,url):
async def download_base64(self, url):
return await self._download_api.download_url_to_base64(download_url=url)
def get_chatroom_member_detail(self, chatroom_name):
"""查看群成员详情"""
return self._chatroom_api.get_chatroom_member_detail(chatroom_name)

View File

@@ -1,10 +1,9 @@
import requests
import aiohttp
def post_json(base_url, token, data=None):
headers = {
'Content-Type': 'application/json'
}
headers = {'Content-Type': 'application/json'}
url = base_url + f'?key={token}'
@@ -18,14 +17,12 @@ def post_json(base_url, token, data=None):
else:
raise RuntimeError(response.text)
except Exception as e:
print(f"http请求失败, url={url}, exception={e}")
print(f'http请求失败, url={url}, exception={e}')
raise RuntimeError(str(e))
def get_json(base_url, token):
headers = {
'Content-Type': 'application/json'
}
def get_json(base_url, token):
headers = {'Content-Type': 'application/json'}
url = base_url + f'?key={token}'
@@ -39,21 +36,18 @@ def get_json(base_url, token):
else:
raise RuntimeError(response.text)
except Exception as e:
print(f"http请求失败, url={url}, exception={e}")
print(f'http请求失败, url={url}, exception={e}')
raise RuntimeError(str(e))
import aiohttp
import asyncio
async def async_request(
base_url: str,
token_key: str,
method: str = 'POST',
params: dict = None,
# headers: dict = None,
data: dict = None,
json: dict = None
base_url: str,
token_key: str,
method: str = 'POST',
params: dict = None,
# headers: dict = None,
data: dict = None,
json: dict = None,
):
"""
Generic async request helper
@@ -67,18 +61,11 @@ async def async_request(
:param json: JSON payload
:return: 响应文本
"""
headers = {
'Content-Type': 'application/json'
}
url = f"{base_url}?key={token_key}"
headers = {'Content-Type': 'application/json'}
url = f'{base_url}?key={token_key}'
async with aiohttp.ClientSession() as session:
async with session.request(
method=method,
url=url,
params=params,
headers=headers,
data=data,
json=json
method=method, url=url, params=params, headers=headers, data=data, json=json
) as response:
response.raise_for_status()  # raise if the status code is not 200
result = await response.json()
@@ -89,4 +76,3 @@ async def async_request(
# return await result
# else:
# raise RuntimeError("请求失败",response.text)

View File

@@ -1,31 +1,34 @@
import qrcode
def print_green(text):
print(f"\033[32m{text}\033[0m")
print(f'\033[32m{text}\033[0m')
def print_yellow(text):
print(f"\033[33m{text}\033[0m")
print(f'\033[33m{text}\033[0m')
def print_red(text):
print(f"\033[31m{text}\033[0m")
print(f'\033[31m{text}\033[0m')
def make_and_print_qr(url):
"""生成并打印二维码
Args:
url: 需要生成二维码的URL字符串
Returns:
None
功能:
1. 在终端打印二维码的ASCII图形
2. 同时提供在线二维码生成链接作为备选
"""
print_green("请扫描下方二维码登录")
print_green('请扫描下方二维码登录')
qr = qrcode.QRCode()
qr.add_data(url)
qr.make()
qr.print_ascii(invert=True)
print_green(f"也可以访问下方链接获取二维码:\nhttps://api.qrserver.com/v1/create-qr-code/?data={url}")
print_green(f'也可以访问下方链接获取二维码:\nhttps://api.qrserver.com/v1/create-qr-code/?data={url}')

View File

@@ -57,7 +57,7 @@ class WecomClient:
if 'access_token' in data:
return data['access_token']
else:
await self.logger.error(f"获取accesstoken失败:{response.json()}")
await self.logger.error(f'获取accesstoken失败:{response.json()}')
raise Exception(f'未获取access token: {data}')
async def get_users(self):
@@ -129,7 +129,7 @@ class WecomClient:
response = await client.post(url, json=params)
data = response.json()
except Exception as e:
await self.logger.error(f"发送图片失败:{data}")
await self.logger.error(f'发送图片失败:{data}')
raise Exception('Failed to send image: ' + str(e))
# 企业微信错误码40014和42001代表accesstoken问题
@@ -164,7 +164,7 @@ class WecomClient:
self.access_token = await self.get_access_token(self.secret)
return await self.send_private_msg(user_id, agent_id, content)
if data['errcode'] != 0:
await self.logger.error(f"发送消息失败:{data}")
await self.logger.error(f'发送消息失败:{data}')
raise Exception('Failed to send message: ' + str(data))
async def handle_callback_request(self):
@@ -181,7 +181,7 @@ class WecomClient:
echostr = request.args.get('echostr')
ret, reply_echo_str = wxcpt.VerifyURL(msg_signature, timestamp, nonce, echostr)
if ret != 0:
await self.logger.error("验证失败")
await self.logger.error('验证失败')
raise Exception(f'验证失败,错误码: {ret}')
return reply_echo_str
@@ -189,9 +189,8 @@ class WecomClient:
encrypt_msg = await request.data
ret, xml_msg = wxcpt.DecryptMsg(encrypt_msg, msg_signature, timestamp, nonce)
if ret != 0:
await self.logger.error("消息解密失败")
await self.logger.error('消息解密失败')
raise Exception(f'消息解密失败,错误码: {ret}')
# 解析消息并处理
message_data = await self.get_message(xml_msg)
@@ -202,7 +201,7 @@ class WecomClient:
return 'success'
except Exception as e:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
return f'Error processing request: {str(e)}', 400
async def run_task(self, host: str, port: int, *args, **kwargs):
@@ -301,7 +300,7 @@ class WecomClient:
except binascii.Error as e:
raise ValueError(f'Invalid base64 string: {str(e)}')
else:
await self.logger.error("Image对象出错")
await self.logger.error('Image对象出错')
raise ValueError('image对象出错')
# 设置 multipart/form-data 格式的文件
@@ -325,7 +324,7 @@ class WecomClient:
self.access_token = await self.get_access_token(self.secret)
media_id = await self.upload_to_work(image)
if data.get('errcode', 0) != 0:
await self.logger.error(f"上传图片失败:{data}")
await self.logger.error(f'上传图片失败:{data}')
raise Exception('failed to upload file')
media_id = data.get('media_id')

View File

@@ -187,7 +187,7 @@ class WecomCSClient:
self.access_token = await self.get_access_token(self.secret)
return await self.send_text_msg(open_kfid, external_userid, msgid, content)
if data['errcode'] != 0:
await self.logger.error(f"发送消息失败:{data}")
await self.logger.error(f'发送消息失败:{data}')
raise Exception('Failed to send message')
return data
@@ -227,7 +227,7 @@ class WecomCSClient:
return 'success'
except Exception as e:
if self.logger:
await self.logger.error(f"Error in handle_callback_request: {traceback.format_exc()}")
await self.logger.error(f'Error in handle_callback_request: {traceback.format_exc()}')
else:
traceback.print_exc()
return f'Error processing request: {str(e)}', 400

View File

@@ -14,8 +14,8 @@ preregistered_groups: list[type[RouterGroup]] = []
"""Pre-registered list of RouterGroup"""
def group_class(name: str, path: str) -> None:
"""Register a RouterGroup"""
def group_class(name: str, path: str) -> typing.Callable[[typing.Type[RouterGroup]], typing.Type[RouterGroup]]:
"""注册一个 RouterGroup"""
def decorator(cls: typing.Type[RouterGroup]) -> typing.Type[RouterGroup]:
cls.name = name
@@ -86,10 +86,11 @@ class RouterGroup(abc.ABC):
try:
return await f(*args, **kwargs)
except Exception: # auto 500
except Exception as e:  # auto 500
traceback.print_exc()
# return self.http_status(500, -2, str(e))
return self.http_status(500, -2, 'internal server error')
return self.http_status(500, -2, str(e))
new_f = handler_error
new_f.__name__ = (self.name + rule).replace('/', '__')
@@ -120,6 +121,6 @@ class RouterGroup(abc.ABC):
}
)
def http_status(self, status: int, code: int, msg: str) -> quart.Response:
"""Return a response with a specified status code"""
return self.fail(code, msg), status
def http_status(self, status: int, code: int, msg: str) -> typing.Tuple[quart.Response, int]:
"""返回一个指定状态码的响应"""
return (self.fail(code, msg), status)

View File

@@ -2,6 +2,10 @@ from __future__ import annotations
import quart
import mimetypes
import uuid
import asyncio
import quart.datastructures
from .. import group
@@ -20,3 +24,23 @@ class FilesRouterGroup(group.RouterGroup):
mime_type = 'image/jpeg'
return quart.Response(image_bytes, mimetype=mime_type)
@self.route('/documents', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _() -> quart.Response:
request = quart.request
# get file bytes from 'file'
file = (await request.files)['file']
assert isinstance(file, quart.datastructures.FileStorage)
file_bytes = await asyncio.to_thread(file.stream.read)
extension = file.filename.split('.')[-1]
file_name = file.filename.split('.')[0]
file_key = file_name + '_' + str(uuid.uuid4())[:8] + '.' + extension
# save file to storage
await self.ap.storage_mgr.storage_provider.save(file_key, file_bytes)
return self.success(
data={
'file_id': file_key,
}
)
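A hedged client-side sketch for the new /documents upload route above, using httpx; the host, route prefix, auth header, and response envelope are assumptions:

```python
import httpx

def upload_document(path: str, user_token: str) -> str:
    with open(path, 'rb') as f:
        resp = httpx.post(
            'http://localhost:5300/api/v1/files/documents',  # hypothetical host/prefix
            headers={'Authorization': f'Bearer {user_token}'},
            files={'file': (path.split('/')[-1], f)},
        )
    resp.raise_for_status()
    # The handler returns a file_id of the form '<name>_<8-char-uuid>.<ext>'.
    return resp.json()['data']['file_id']
```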

View File

@@ -0,0 +1,90 @@
import quart
from ... import group
@group.group_class('knowledge_base', '/api/v1/knowledge/bases')
class KnowledgeBaseRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['POST', 'GET'])
async def handle_knowledge_bases() -> quart.Response:
if quart.request.method == 'GET':
knowledge_bases = await self.ap.knowledge_service.get_knowledge_bases()
return self.success(data={'bases': knowledge_bases})
elif quart.request.method == 'POST':
json_data = await quart.request.json
knowledge_base_uuid = await self.ap.knowledge_service.create_knowledge_base(json_data)
return self.success(data={'uuid': knowledge_base_uuid})
return self.http_status(405, -1, 'Method not allowed')
@self.route(
'/<knowledge_base_uuid>',
methods=['GET', 'DELETE', 'PUT'],
)
async def handle_specific_knowledge_base(knowledge_base_uuid: str) -> quart.Response:
if quart.request.method == 'GET':
knowledge_base = await self.ap.knowledge_service.get_knowledge_base(knowledge_base_uuid)
if knowledge_base is None:
return self.http_status(404, -1, 'knowledge base not found')
return self.success(
data={
'base': knowledge_base,
}
)
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.knowledge_service.update_knowledge_base(knowledge_base_uuid, json_data)
return self.success({})
elif quart.request.method == 'DELETE':
await self.ap.knowledge_service.delete_knowledge_base(knowledge_base_uuid)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/files',
methods=['GET', 'POST'],
)
async def get_knowledge_base_files(knowledge_base_uuid: str) -> str:
if quart.request.method == 'GET':
files = await self.ap.knowledge_service.get_files_by_knowledge_base(knowledge_base_uuid)
return self.success(
data={
'files': files,
}
)
elif quart.request.method == 'POST':
json_data = await quart.request.json
file_id = json_data.get('file_id')
if not file_id:
return self.http_status(400, -1, 'File ID is required')
# Call the service layer to associate the file with the knowledge base
task_id = await self.ap.knowledge_service.store_file(knowledge_base_uuid, file_id)
return self.success(
{
'task_id': task_id,
}
)
@self.route(
'/<knowledge_base_uuid>/files/<file_id>',
methods=['DELETE'],
)
async def delete_specific_file_in_kb(file_id: str, knowledge_base_uuid: str) -> str:
await self.ap.knowledge_service.delete_file(knowledge_base_uuid, file_id)
return self.success({})
@self.route(
'/<knowledge_base_uuid>/retrieve',
methods=['POST'],
)
async def retrieve_knowledge_base(knowledge_base_uuid: str) -> str:
json_data = await quart.request.json
query = json_data.get('query')
results = await self.ap.knowledge_service.retrieve_knowledge_base(knowledge_base_uuid, query)
return self.success(data={'results': results})
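A hedged sketch of exercising the knowledge-base routes above with httpx; the host and payload fields are assumptions based on the handlers shown:

```python
import httpx

BASE = 'http://localhost:5300/api/v1/knowledge/bases'  # hypothetical host

def kb_roundtrip(token: str, file_id: str):
    headers = {'Authorization': f'Bearer {token}'}
    kb = httpx.post(BASE, headers=headers, json={'name': 'docs', 'description': 'demo'}).json()
    kb_uuid = kb['data']['uuid']
    # Associate an already-uploaded file; the handler returns a background task id.
    task = httpx.post(f'{BASE}/{kb_uuid}/files', headers=headers, json={'file_id': file_id}).json()
    # Retrieve against the knowledge base.
    results = httpx.post(f'{BASE}/{kb_uuid}/retrieve', headers=headers, json={'query': 'hello'}).json()
    return task, results['data']['results']
```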

View File

@@ -11,7 +11,11 @@ class PipelinesRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'pipelines': await self.ap.pipeline_service.get_pipelines()})
sort_by = quart.request.args.get('sort_by', 'created_at')
sort_order = quart.request.args.get('sort_order', 'DESC')
return self.success(
data={'pipelines': await self.ap.pipeline_service.get_pipelines(sort_by, sort_order)}
)
elif quart.request.method == 'POST':
json_data = await quart.request.json

View File

@@ -1,3 +1,5 @@
import json
import quart
from ... import group
@@ -9,10 +11,18 @@ class WebChatDebugRouterGroup(group.RouterGroup):
@self.route('/send', methods=['POST'])
async def send_message(pipeline_uuid: str) -> str:
"""Send a message to the pipeline for debugging"""
async def stream_generator(generator):
yield 'data: {"type": "start"}\n\n'
async for message in generator:
yield f'data: {json.dumps({"message": message})}\n\n'
yield 'data: {"type": "end"}\n\n'
try:
data = await quart.request.get_json()
session_type = data.get('session_type', 'person')
message_chain_obj = data.get('message', [])
is_stream = data.get('is_stream', False)
if not message_chain_obj:
return self.http_status(400, -1, 'message is required')
@@ -25,13 +35,33 @@ class WebChatDebugRouterGroup(group.RouterGroup):
if not webchat_adapter:
return self.http_status(404, -1, 'WebChat adapter not found')
result = await webchat_adapter.send_webchat_message(pipeline_uuid, session_type, message_chain_obj)
return self.success(
data={
'message': result,
if is_stream:
generator = webchat_adapter.send_webchat_message(
pipeline_uuid, session_type, message_chain_obj, is_stream
)
# Set the correct response headers
headers = {
'Content-Type': 'text/event-stream',
'Transfer-Encoding': 'chunked',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive'
}
)
return quart.Response(stream_generator(generator), mimetype='text/event-stream',headers=headers)
else: # non-stream
result = None
async for message in webchat_adapter.send_webchat_message(
pipeline_uuid, session_type, message_chain_obj
):
result = message
if result is not None:
return self.success(
data={
'message': result,
}
)
else:
return self.http_status(400, -1, 'message is required')
except Exception as e:
return self.http_status(500, -1, f'Internal server error: {str(e)}')
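The stream branch above emits text/event-stream frames of the form `data: {...}\n\n`, bracketed by start and end events. A minimal consumer sketch follows; the full route URL and the message-chain payload shape are assumptions, since only the trailing '/send' segment is shown here:

```python
import json
import httpx

def read_debug_stream(url: str, token: str):
    # Hypothetical payload shape for a single plain-text message component.
    payload = {'session_type': 'person', 'message': [{'type': 'Plain', 'text': 'hi'}], 'is_stream': True}
    with httpx.stream('POST', url, json=payload, headers={'Authorization': f'Bearer {token}'}) as resp:
        for line in resp.iter_lines():
            if not line.startswith('data: '):
                continue
            event = json.loads(line[len('data: '):])
            if event.get('type') in ('start', 'end'):
                continue  # framing events carry no message payload
            print(event['message'])
```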

View File

@@ -9,18 +9,18 @@ class LLMModelsRouterGroup(group.RouterGroup):
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.model_service.get_llm_models()})
return self.success(data={'models': await self.ap.llm_model_service.get_llm_models()})
elif quart.request.method == 'POST':
json_data = await quart.request.json
model_uuid = await self.ap.model_service.create_llm_model(json_data)
model_uuid = await self.ap.llm_model_service.create_llm_model(json_data)
return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str:
if quart.request.method == 'GET':
model = await self.ap.model_service.get_llm_model(model_uuid)
model = await self.ap.llm_model_service.get_llm_model(model_uuid)
if model is None:
return self.http_status(404, -1, 'model not found')
@@ -29,11 +29,11 @@ class LLMModelsRouterGroup(group.RouterGroup):
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.model_service.update_llm_model(model_uuid, json_data)
await self.ap.llm_model_service.update_llm_model(model_uuid, json_data)
return self.success()
elif quart.request.method == 'DELETE':
await self.ap.model_service.delete_llm_model(model_uuid)
await self.ap.llm_model_service.delete_llm_model(model_uuid)
return self.success()
@@ -41,6 +41,49 @@ class LLMModelsRouterGroup(group.RouterGroup):
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.model_service.test_llm_model(model_uuid, json_data)
await self.ap.llm_model_service.test_llm_model(model_uuid, json_data)
return self.success()
@group.group_class('models/embedding', '/api/v1/provider/models/embedding')
class EmbeddingModelsRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET', 'POST'])
async def _() -> str:
if quart.request.method == 'GET':
return self.success(data={'models': await self.ap.embedding_models_service.get_embedding_models()})
elif quart.request.method == 'POST':
json_data = await quart.request.json
model_uuid = await self.ap.embedding_models_service.create_embedding_model(json_data)
return self.success(data={'uuid': model_uuid})
@self.route('/<model_uuid>', methods=['GET', 'PUT', 'DELETE'])
async def _(model_uuid: str) -> str:
if quart.request.method == 'GET':
model = await self.ap.embedding_models_service.get_embedding_model(model_uuid)
if model is None:
return self.http_status(404, -1, 'model not found')
return self.success(data={'model': model})
elif quart.request.method == 'PUT':
json_data = await quart.request.json
await self.ap.embedding_models_service.update_embedding_model(model_uuid, json_data)
return self.success()
elif quart.request.method == 'DELETE':
await self.ap.embedding_models_service.delete_embedding_model(model_uuid)
return self.success()
@self.route('/<model_uuid>/test', methods=['POST'])
async def _(model_uuid: str) -> str:
json_data = await quart.request.json
await self.ap.embedding_models_service.test_embedding_model(model_uuid, json_data)
return self.success()

View File

@@ -8,7 +8,8 @@ class RequestersRouterGroup(group.RouterGroup):
async def initialize(self) -> None:
@self.route('', methods=['GET'])
async def _() -> quart.Response:
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info()})
model_type = quart.request.args.get('type', '')
return self.success(data={'requesters': self.ap.model_mgr.get_available_requesters_info(model_type)})
@self.route('/<requester_name>', methods=['GET'])
async def _(requester_name: str) -> quart.Response:

View File

@@ -67,3 +67,19 @@ class UserRouterGroup(group.RouterGroup):
await self.ap.user_service.reset_password(user_email, new_password)
return self.success(data={'user': user_email})
@self.route('/change-password', methods=['POST'], auth_type=group.AuthType.USER_TOKEN)
async def _(user_email: str) -> str:
json_data = await quart.request.json
current_password = json_data['current_password']
new_password = json_data['new_password']
try:
await self.ap.user_service.change_password(user_email, current_password, new_password)
except argon2.exceptions.VerifyMismatchError:
return self.http_status(400, -1, 'Current password is incorrect')
except ValueError as e:
return self.http_status(400, -1, str(e))
return self.success(data={'user': user_email})
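A sketch of calling the new change-password endpoint; the host and success-envelope shape are assumptions, while the 400 responses map to the two error branches above:

```python
import httpx

def change_password(token: str, current: str, new: str) -> None:
    resp = httpx.post(
        'http://localhost:5300/api/v1/user/change-password',  # hypothetical host
        headers={'Authorization': f'Bearer {token}'},
        json={'current_password': current, 'new_password': new},
    )
    # A 400 with 'Current password is incorrect' corresponds to argon2's VerifyMismatchError.
    resp.raise_for_status()
```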

View File

@@ -14,11 +14,13 @@ from . import group
from .groups import provider as groups_provider
from .groups import platform as groups_platform
from .groups import pipelines as groups_pipelines
from .groups import knowledge as groups_knowledge
importutil.import_modules_in_pkg(groups)
importutil.import_modules_in_pkg(groups_provider)
importutil.import_modules_in_pkg(groups_platform)
importutil.import_modules_in_pkg(groups_pipelines)
importutil.import_modules_in_pkg(groups_knowledge)
class HTTPController:

View File

@@ -0,0 +1,120 @@
from __future__ import annotations
import uuid
import sqlalchemy
from ....core import app
from ....entity.persistence import rag as persistence_rag
class KnowledgeService:
"""知识库服务"""
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_knowledge_bases(self) -> list[dict]:
"""获取所有知识库"""
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_rag.KnowledgeBase))
knowledge_bases = result.all()
return [
self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
for knowledge_base in knowledge_bases
]
async def get_knowledge_base(self, kb_uuid: str) -> dict | None:
"""获取知识库"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
knowledge_base = result.first()
if knowledge_base is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_rag.KnowledgeBase, knowledge_base)
async def create_knowledge_base(self, kb_data: dict) -> str:
"""创建知识库"""
kb_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.KnowledgeBase).values(kb_data))
kb = await self.get_knowledge_base(kb_data['uuid'])
await self.ap.rag_mgr.load_knowledge_base(kb)
return kb_data['uuid']
async def update_knowledge_base(self, kb_uuid: str, kb_data: dict) -> None:
"""更新知识库"""
if 'uuid' in kb_data:
del kb_data['uuid']
if 'embedding_model_uuid' in kb_data:
del kb_data['embedding_model_uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_rag.KnowledgeBase)
.values(kb_data)
.where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
await self.ap.rag_mgr.remove_knowledge_base_from_runtime(kb_uuid)
kb = await self.get_knowledge_base(kb_uuid)
await self.ap.rag_mgr.load_knowledge_base(kb)
async def store_file(self, kb_uuid: str, file_id: str) -> int:
"""存储文件"""
# await self.ap.persistence_mgr.execute_async(sqlalchemy.insert(persistence_rag.File).values(kb_id=kb_uuid, file_id=file_id))
# await self.ap.rag_mgr.store_file(file_id)
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return await runtime_kb.store_file(file_id)
async def retrieve_knowledge_base(self, kb_uuid: str, query: str) -> list[dict]:
"""检索知识库"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
return [
result.model_dump() for result in await runtime_kb.retrieve(query, runtime_kb.knowledge_base_entity.top_k)
]
async def get_files_by_knowledge_base(self, kb_uuid: str) -> list[dict]:
"""获取知识库文件"""
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
files = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_rag.File, file) for file in files]
async def delete_file(self, kb_uuid: str, file_id: str) -> None:
"""删除文件"""
runtime_kb = await self.ap.rag_mgr.get_knowledge_base_by_uuid(kb_uuid)
if runtime_kb is None:
raise Exception('Knowledge base not found')
await runtime_kb.delete_file(file_id)
async def delete_knowledge_base(self, kb_uuid: str) -> None:
"""删除知识库"""
await self.ap.rag_mgr.delete_knowledge_base(kb_uuid)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.KnowledgeBase).where(persistence_rag.KnowledgeBase.uuid == kb_uuid)
)
# delete files
files = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_rag.File).where(persistence_rag.File.kb_id == kb_uuid)
)
for file in files:
# delete chunks
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.Chunk).where(persistence_rag.Chunk.file_id == file.uuid)
)
# delete file
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_rag.File).where(persistence_rag.File.uuid == file.uuid)
)

View File

@@ -10,7 +10,7 @@ from ....provider.modelmgr import requester as model_requester
from ....provider import entities as llm_entities
class ModelsService:
class LLMModelsService:
ap: app.Application
def __init__(self, ap: app.Application) -> None:
@@ -101,5 +101,91 @@ class ModelsService:
model=runtime_llm_model,
messages=[llm_entities.Message(role='user', content='Hello, world!')],
funcs=[],
extra_args=model_data.get('extra_args', {}),
)
class EmbeddingModelsService:
ap: app.Application
def __init__(self, ap: app.Application) -> None:
self.ap = ap
async def get_embedding_models(self) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.EmbeddingModel))
models = result.all()
return [self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model) for model in models]
async def create_embedding_model(self, model_data: dict) -> str:
model_data['uuid'] = str(uuid.uuid4())
await self.ap.persistence_mgr.execute_async(
sqlalchemy.insert(persistence_model.EmbeddingModel).values(**model_data)
)
embedding_model = await self.get_embedding_model(model_data['uuid'])
await self.ap.model_mgr.load_embedding_model(embedding_model)
return model_data['uuid']
async def get_embedding_model(self, model_uuid: str) -> dict | None:
result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
model = result.first()
if model is None:
return None
return self.ap.persistence_mgr.serialize_model(persistence_model.EmbeddingModel, model)
async def update_embedding_model(self, model_uuid: str, model_data: dict) -> None:
if 'uuid' in model_data:
del model_data['uuid']
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_model.EmbeddingModel)
.where(persistence_model.EmbeddingModel.uuid == model_uuid)
.values(**model_data)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
embedding_model = await self.get_embedding_model(model_uuid)
await self.ap.model_mgr.load_embedding_model(embedding_model)
async def delete_embedding_model(self, model_uuid: str) -> None:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.delete(persistence_model.EmbeddingModel).where(
persistence_model.EmbeddingModel.uuid == model_uuid
)
)
await self.ap.model_mgr.remove_embedding_model(model_uuid)
async def test_embedding_model(self, model_uuid: str, model_data: dict) -> None:
runtime_embedding_model: model_requester.RuntimeEmbeddingModel | None = None
if model_uuid != '_':
for model in self.ap.model_mgr.embedding_models:
if model.model_entity.uuid == model_uuid:
runtime_embedding_model = model
break
if runtime_embedding_model is None:
raise Exception('model not found')
else:
runtime_embedding_model = await self.ap.model_mgr.init_runtime_embedding_model(model_data)
await runtime_embedding_model.requester.invoke_embedding(
model=runtime_embedding_model,
input_text=['Hello, world!'],
extra_args={},
)

View File

@@ -38,9 +38,21 @@ class PipelineService:
self.ap.pipeline_config_meta_output.data,
]
async def get_pipelines(self) -> list[dict]:
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
async def get_pipelines(self, sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]:
query = sqlalchemy.select(persistence_pipeline.LegacyPipeline)
if sort_by == 'created_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.created_at.asc())
elif sort_by == 'updated_at':
if sort_order == 'DESC':
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.desc())
else:
query = query.order_by(persistence_pipeline.LegacyPipeline.updated_at.asc())
result = await self.ap.persistence_mgr.execute_async(query)
pipelines = result.all()
return [
self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
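The sort parameters surface in the pipelines list route shown earlier; a request sketch (the host is a placeholder and the response envelope is an assumption):

```python
import httpx

resp = httpx.get(
    'http://localhost:5300/api/v1/pipelines',  # hypothetical host
    params={'sort_by': 'updated_at', 'sort_order': 'ASC'},
    headers={'Authorization': 'Bearer <token>'},
)
pipelines = resp.json()['data']['pipelines']  # oldest-first by update time
```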

View File

@@ -82,3 +82,18 @@ class UserService:
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(user.User).where(user.User.user == user_email).values(password=hashed_password)
)
async def change_password(self, user_email: str, current_password: str, new_password: str) -> None:
ph = argon2.PasswordHasher()
user_obj = await self.get_user_by_email(user_email)
if user_obj is None:
raise ValueError('User not found')
ph.verify(user_obj.password, current_password)
hashed_password = ph.hash(new_password)
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(user.User).where(user.User.user == user_email).values(password=hashed_password)
)

View File

@@ -22,11 +22,14 @@ from ..api.http.service import user as user_service
from ..api.http.service import model as model_service
from ..api.http.service import pipeline as pipeline_service
from ..api.http.service import bot as bot_service
from ..api.http.service import knowledge as knowledge_service
from ..discover import engine as discover_engine
from ..storage import mgr as storagemgr
from ..utils import logcache
from . import taskmgr
from . import entities as core_entities
from ..rag.knowledge import kbmgr as rag_mgr
from ..vector import mgr as vectordb_mgr
class Application:
@@ -47,6 +50,8 @@ class Application:
model_mgr: llm_model_mgr.ModelManager = None
rag_mgr: rag_mgr.RAGManager = None
# TODO move to pipeline
tool_mgr: llm_tool_mgr.ToolManager = None
@@ -93,6 +98,8 @@ class Application:
persistence_mgr: persistencemgr.PersistenceManager = None
vector_db_mgr: vectordb_mgr.VectorDBManager = None
http_ctrl: http_controller.HTTPController = None
log_cache: logcache.LogCache = None
@@ -103,12 +110,16 @@ class Application:
user_service: user_service.UserService = None
model_service: model_service.ModelsService = None
llm_model_service: model_service.LLMModelsService = None
embedding_models_service: model_service.EmbeddingModelsService = None
pipeline_service: pipeline_service.PipelineService = None
bot_service: bot_service.BotService = None
knowledge_service: knowledge_service.KnowledgeService = None
def __init__(self):
pass
@@ -143,6 +154,7 @@ class Application:
name='http-api-controller',
scopes=[core_entities.LifecycleControlScope.APPLICATION],
)
self.task_mgr.create_task(
never_ending(),
name='never-ending-task',

View File

@@ -87,7 +87,9 @@ class Query(pydantic.BaseModel):
"""使用的函数,由前置处理器阶段设置"""
resp_messages: (
typing.Optional[list[llm_entities.Message]] | typing.Optional[list[platform_message.MessageChain]]
typing.Optional[list[llm_entities.Message]]
| typing.Optional[list[platform_message.MessageChain]]
| typing.Optional[list[llm_entities.MessageChunk]]
) = []
"""由Process阶段生成的回复消息对象列表"""

View File

@@ -9,6 +9,7 @@ from ...command import cmdmgr
from ...provider.session import sessionmgr as llm_session_mgr
from ...provider.modelmgr import modelmgr as llm_model_mgr
from ...provider.tools import toolmgr as llm_tool_mgr
from ...rag.knowledge import kbmgr as rag_mgr
from ...platform import botmgr as im_mgr
from ...persistence import mgr as persistencemgr
from ...api.http.controller import main as http_controller
@@ -16,9 +17,11 @@ from ...api.http.service import user as user_service
from ...api.http.service import model as model_service
from ...api.http.service import pipeline as pipeline_service
from ...api.http.service import bot as bot_service
from ...api.http.service import knowledge as knowledge_service
from ...discover import engine as discover_engine
from ...storage import mgr as storagemgr
from ...utils import logcache
from ...vector import mgr as vectordb_mgr
from .. import taskmgr
@@ -88,6 +91,15 @@ class BuildAppStage(stage.BootingStage):
await pipeline_mgr.initialize()
ap.pipeline_mgr = pipeline_mgr
rag_mgr_inst = rag_mgr.RAGManager(ap)
await rag_mgr_inst.initialize()
ap.rag_mgr = rag_mgr_inst
# Initialize the vector database manager
vectordb_mgr_inst = vectordb_mgr.VectorDBManager(ap)
await vectordb_mgr_inst.initialize()
ap.vector_db_mgr = vectordb_mgr_inst
http_ctrl = http_controller.HTTPController(ap)
await http_ctrl.initialize()
ap.http_ctrl = http_ctrl
@@ -95,8 +107,11 @@ class BuildAppStage(stage.BootingStage):
user_service_inst = user_service.UserService(ap)
ap.user_service = user_service_inst
model_service_inst = model_service.ModelsService(ap)
ap.model_service = model_service_inst
llm_model_service_inst = model_service.LLMModelsService(ap)
ap.llm_model_service = llm_model_service_inst
embedding_models_service_inst = model_service.EmbeddingModelsService(ap)
ap.embedding_models_service = embedding_models_service_inst
pipeline_service_inst = pipeline_service.PipelineService(ap)
ap.pipeline_service = pipeline_service_inst
@@ -104,5 +119,8 @@ class BuildAppStage(stage.BootingStage):
bot_service_inst = bot_service.BotService(ap)
ap.bot_service = bot_service_inst
knowledge_service_inst = knowledge_service.KnowledgeService(ap)
ap.knowledge_service = knowledge_service_inst
ctrl = controller.Controller(ap)
ap.ctrl = ctrl

View File

@@ -23,3 +23,24 @@ class LLMModel(Base):
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)
class EmbeddingModel(Base):
"""Embedding 模型"""
__tablename__ = 'embedding_models'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
description = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
requester_config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
api_keys = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
extra_args = sqlalchemy.Column(sqlalchemy.JSON, nullable=False, default={})
created_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False, server_default=sqlalchemy.func.now())
updated_at = sqlalchemy.Column(
sqlalchemy.DateTime,
nullable=False,
server_default=sqlalchemy.func.now(),
onupdate=sqlalchemy.func.now(),
)

View File

@@ -20,7 +20,6 @@ class LegacyPipeline(Base):
)
for_version = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
is_default = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False, default=False)
stages = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
config = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
@@ -43,3 +42,4 @@ class PipelineRunRecord(Base):
started_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
finished_at = sqlalchemy.Column(sqlalchemy.DateTime, nullable=False)
result = sqlalchemy.Column(sqlalchemy.JSON, nullable=False)
knowledge_base_uuid = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)

View File

@@ -0,0 +1,50 @@
import sqlalchemy
from .base import Base
# Base = declarative_base()
# DATABASE_URL = os.getenv('DATABASE_URL', 'sqlite:///./rag_knowledge.db')
# print("Using database URL:", DATABASE_URL)
# engine = create_engine(DATABASE_URL, connect_args={'check_same_thread': False})
# SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# def create_db_and_tables():
# """Creates all database tables defined in the Base."""
# Base.metadata.create_all(bind=engine)
# print('Database tables created or already exist.')
class KnowledgeBase(Base):
__tablename__ = 'knowledge_bases'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
name = sqlalchemy.Column(sqlalchemy.String, index=True)
description = sqlalchemy.Column(sqlalchemy.Text)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
embedding_model_uuid = sqlalchemy.Column(sqlalchemy.String, default='')
top_k = sqlalchemy.Column(sqlalchemy.Integer, default=5)
class File(Base):
__tablename__ = 'knowledge_base_files'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
kb_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
file_name = sqlalchemy.Column(sqlalchemy.String)
extension = sqlalchemy.Column(sqlalchemy.String)
created_at = sqlalchemy.Column(sqlalchemy.DateTime, default=sqlalchemy.func.now())
status = sqlalchemy.Column(sqlalchemy.String, default='pending') # pending, processing, completed, failed
class Chunk(Base):
__tablename__ = 'knowledge_base_chunks'
uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
file_id = sqlalchemy.Column(sqlalchemy.String(255), nullable=True)
text = sqlalchemy.Column(sqlalchemy.Text)
# class Vector(Base):
# __tablename__ = 'knowledge_base_vectors'
# uuid = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True, unique=True)
# chunk_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)
# embedding = sqlalchemy.Column(sqlalchemy.LargeBinary)

View File

@@ -0,0 +1,13 @@
from sqlalchemy import Column, Integer, ForeignKey, LargeBinary
from sqlalchemy.orm import declarative_base, relationship
Base = declarative_base()
class Vector(Base):
__tablename__ = 'vectors'
id = Column(Integer, primary_key=True, index=True)
chunk_id = Column(Integer, ForeignKey('chunks.id'), unique=True)
embedding = Column(LargeBinary) # Store embeddings as binary
chunk = relationship('Chunk', back_populates='vector')

View File

@@ -0,0 +1,13 @@
from __future__ import annotations
import pydantic
from typing import Any
class RetrieveResultEntry(pydantic.BaseModel):
id: str
metadata: dict[str, Any]
distance: float

View File

@@ -79,7 +79,7 @@ class PersistenceManager:
'stages': pipeline_service.default_stage_order,
'is_default': True,
'name': 'ChatPipeline',
'description': 'Default pipeline provided, your new bots will be automatically bound to this pipeline | 默认提供的流水线,您配置的机器人将自动绑定到此流水线',
'description': 'Default pipeline, new bots will be bound to this pipeline | 默认提供的流水线,您配置的机器人将自动绑定到此流水线',
'config': pipeline_config,
}

View File

@@ -0,0 +1,38 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(4)
class DBMigrateRAGKBUUID(migration.DBMigration):
"""RAG知识库UUID"""
async def upgrade(self):
"""升级"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'knowledge-base' not in config['ai']['local-agent']:
config['ai']['local-agent']['knowledge-base'] = ''
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""降级"""
pass

View File

@@ -0,0 +1,38 @@
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline
@migration.migration_class(5)
class DBMigratePipelineRemoveCotConfig(migration.DBMigration):
"""Pipeline remove cot config"""
async def upgrade(self):
"""Upgrade"""
# read all pipelines
pipelines = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_pipeline.LegacyPipeline))
for pipeline in pipelines:
serialized_pipeline = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
config = serialized_pipeline['config']
if 'remove-think' not in config['output']['misc']:
config['output']['misc']['remove-think'] = False
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == serialized_pipeline['uuid'])
.values(
{
'config': config,
'for_version': self.ap.ver_mgr.get_current_version(),
}
)
)
async def downgrade(self):
"""Downgrade"""
pass
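Both migrations follow the same pattern: a numbered @migration.migration_class subclass that backfills a missing config key across all pipelines. A generic sketch of that pattern (the migration number and the 'example-flag' key are purely illustrative):

```python
from .. import migration
import sqlalchemy
from ...entity.persistence import pipeline as persistence_pipeline

@migration.migration_class(6)  # hypothetical next number
class DBMigrateExampleFlag(migration.DBMigration):
    """Illustrative migration sketch"""

    async def upgrade(self):
        pipelines = await self.ap.persistence_mgr.execute_async(
            sqlalchemy.select(persistence_pipeline.LegacyPipeline)
        )
        for pipeline in pipelines:
            serialized = self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
            config = serialized['config']
            # Backfill the new key with a safe default if it is missing.
            config['output']['misc'].setdefault('example-flag', False)
            await self.ap.persistence_mgr.execute_async(
                sqlalchemy.update(persistence_pipeline.LegacyPipeline)
                .where(persistence_pipeline.LegacyPipeline.uuid == serialized['uuid'])
                .values({'config': config})
            )

    async def downgrade(self):
        pass
```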

View File

@@ -67,7 +67,7 @@ class ContentFilterStage(stage.PipelineStage):
if query.pipeline_config['safety']['content-filter']['scope'] == 'output-msg':
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
if not message.strip():
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:
for filter in self.filter_chain:
if filter_entities.EnableStage.PRE in filter.enable_stages:

View File

@@ -93,12 +93,20 @@ class RuntimePipeline:
query.message_event, platform_events.GroupMessage
):
result.user_notice.insert(0, platform_message.At(query.message_event.sender.id))
await query.adapter.reply_message(
message_source=query.message_event,
message=result.user_notice,
quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
)
if await query.adapter.is_stream_output_supported():
await query.adapter.reply_message_chunk(
message_source=query.message_event,
bot_message=query.resp_messages[-1],
message=result.user_notice,
quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
is_final=[msg.is_final for msg in query.resp_messages][0]
)
else:
await query.adapter.reply_message(
message_source=query.message_event,
message=result.user_notice,
quote_origin=query.pipeline_config['output']['misc']['quote-origin'],
)
if result.debug_notice:
self.ap.logger.debug(result.debug_notice)
if result.console_notice:
@@ -144,23 +152,27 @@ class RuntimePipeline:
result = await result
if isinstance(result, pipeline_entities.StageProcessResult):  # result returned directly
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {result}')
self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {result.result_type}'
)
await self._check_output(query, result)
if result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}')
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break
elif result.result_type == pipeline_entities.ResultType.CONTINUE:
query = result.new_query
elif isinstance(result, typing.AsyncGenerator):  # generator
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} gen')
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query.query_id} gen')
async for sub_result in result:
self.ap.logger.debug(f'Stage {stage_container.inst_name} processed query {query} res {sub_result}')
self.ap.logger.debug(
f'Stage {stage_container.inst_name} processed query {query.query_id} res {sub_result.result_type}'
)
await self._check_output(query, sub_result)
if sub_result.result_type == pipeline_entities.ResultType.INTERRUPT:
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query}')
self.ap.logger.debug(f'Stage {stage_container.inst_name} interrupted query {query.query_id}')
break
elif sub_result.result_type == pipeline_entities.ResultType.CONTINUE:
query = sub_result.new_query
@@ -192,7 +204,7 @@ class RuntimePipeline:
if event_ctx.is_prevented_default():
return
self.ap.logger.debug(f'Processing query {query}')
self.ap.logger.debug(f'Processing query {query.query_id}')
await self._execute_from_stage(0, query)
except Exception as e:
@@ -200,7 +212,7 @@ class RuntimePipeline:
self.ap.logger.error(f'Error while handling the request query_id={query.query_id} stage={inst_name}: {e}')
self.ap.logger.error(f'Traceback: {traceback.format_exc()}')
finally:
self.ap.logger.debug(f'Query {query} processed')
self.ap.logger.debug(f'Query {query.query_id} processed')
class PipelineManager:

View File

@@ -80,14 +80,15 @@ class PreProcessor(stage.PipelineStage):
if me.type == 'image_url':
msg.content.remove(me)
content_list = []
content_list: list[llm_entities.ContentElement] = []
plain_text = ''
qoute_msg = query.pipeline_config['trigger'].get('misc', {}).get('combine-quote-message')
# tidy the content_list
# combine all text content into one, and put it in the first position
for me in query.message_chain:
if isinstance(me, platform_message.Plain):
content_list.append(llm_entities.ContentElement.from_text(me.text))
plain_text += me.text
elif isinstance(me, platform_message.Image):
if selected_runner != 'local-agent' or query.use_llm_model.model_entity.abilities.__contains__(
@@ -106,6 +107,8 @@ class PreProcessor(stage.PipelineStage):
if msg.base64 is not None:
content_list.append(llm_entities.ContentElement.from_image_base64(msg.base64))
content_list.insert(0, llm_entities.ContentElement.from_text(plain_text))
query.variables['user_message_text'] = plain_text
query.user_message = llm_entities.Message(role='user', content=content_list)

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
import uuid
import typing
import traceback
@@ -22,11 +23,11 @@ class ChatMessageHandler(handler.MessageHandler):
self,
query: core_entities.Query,
) -> typing.AsyncGenerator[entities.StageProcessResult, None]:
"""Process"""
# Call API
# generator
"""处理"""
# API
# 生成器
# Trigger plugin event
# Trigger plugin event
event_class = (
events.PersonNormalMessageReceived
if query.launcher_type == core_entities.LauncherTypes.PERSON
@@ -46,7 +47,6 @@ class ChatMessageHandler(handler.MessageHandler):
if event_ctx.is_prevented_default():
if event_ctx.event.reply is not None:
mc = platform_message.MessageChain(event_ctx.event.reply)
query.resp_messages.append(mc)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
@@ -54,10 +54,14 @@ class ChatMessageHandler(handler.MessageHandler):
yield entities.StageProcessResult(result_type=entities.ResultType.INTERRUPT, new_query=query)
else:
if event_ctx.event.alter is not None:
# if isinstance(event_ctx.event, str): # Currently not considering multi-modal alter
query.user_message.content = event_ctx.event.alter
text_length = 0
try:
is_stream = await query.adapter.is_stream_output_supported()
except AttributeError:
is_stream = False
try:
for r in runner_module.preregistered_runners:
@@ -65,22 +69,42 @@ class ChatMessageHandler(handler.MessageHandler):
runner = r(self.ap, query.pipeline_config)
break
else:
raise ValueError(f'Request runner not found: {query.pipeline_config["ai"]["runner"]["runner"]}')
if is_stream:
resp_message_id = uuid.uuid4()
await query.adapter.create_message_card(str(resp_message_id), query.message_event)
async for result in runner.run(query):
result.resp_message_id = str(resp_message_id)
if query.resp_messages:
query.resp_messages.pop()
if query.resp_message_chain:
query.resp_message_chain.pop()
query.resp_messages.append(result)
self.ap.logger.info(f'Response({query.query_id}): {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
else:
async for result in runner.run(query):
query.resp_messages.append(result)
self.ap.logger.info(f'对话({query.query_id})响应: {self.cut_str(result.readable_str())}')
if result.content is not None:
text_length += len(result.content)
yield entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
query.session.using_conversation.messages.append(query.user_message)
query.session.using_conversation.messages.extend(query.resp_messages)
except Exception as e:
self.ap.logger.error(f'Request failed({query.query_id}): {type(e).__name__} {str(e)}')
traceback.print_exc()
hide_exception_info = query.pipeline_config['output']['misc']['hide-exception']
@@ -93,4 +117,4 @@ class ChatMessageHandler(handler.MessageHandler):
)
finally:
# TODO statistics
pass
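In the streaming branch above, each incoming chunk replaces the previous partial message in resp_messages, so the list never accumulates intermediate chunks. A toy illustration of that pop-then-append pattern, assuming each yielded result carries the full accumulated text so far:

import asyncio

async def collect_stream(chunks):
    resp_messages = []

    async def runner():
        acc = ''
        for c in chunks:
            acc += c
            yield acc  # each chunk carries the full accumulated text so far

    async for result in runner():
        if resp_messages:
            resp_messages.pop()   # drop the previous partial message
        resp_messages.append(result)
    return resp_messages          # only the final accumulated message remains

print(asyncio.run(collect_stream(['Hel', 'lo', '!'])))  # ['Hello!']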

View File

@@ -7,6 +7,10 @@ import asyncio
from ...platform.types import events as platform_events
from ...platform.types import message as platform_message
from ...provider import entities as llm_entities
from .. import stage, entities
from ...core import entities as core_entities
@@ -36,10 +40,22 @@ class SendResponseBackStage(stage.PipelineStage):
quote_origin = query.pipeline_config['output']['misc']['quote-origin']
has_chunks = any(isinstance(msg, llm_entities.MessageChunk) for msg in query.resp_messages)
# TODO compatibility between commands and streaming output
if await query.adapter.is_stream_output_supported() and has_chunks:
is_final = query.resp_messages[0].is_final
await query.adapter.reply_message_chunk(
message_source=query.message_event,
bot_message=query.resp_messages[-1],
message=query.resp_message_chain[-1],
quote_origin=quote_origin,
is_final=is_final,
)
else:
await query.adapter.reply_message(
message_source=query.message_event,
message=query.resp_message_chain[-1],
quote_origin=quote_origin,
)
return entities.StageProcessResult(result_type=entities.ResultType.CONTINUE, new_query=query)
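The dispatch above streams only when both conditions hold: the adapter reports stream support and the runner actually produced MessageChunk objects. A compact, self-contained sketch of that decision (DummyAdapter and MessageChunk are stand-ins, not LangBot's classes):

import asyncio

class MessageChunk:
    def __init__(self, text, is_final=False):
        self.text, self.is_final = text, is_final

class DummyAdapter:
    async def is_stream_output_supported(self):
        return True

    async def reply_message_chunk(self, msg, is_final):
        print('chunk reply:', msg.text, 'final:', is_final)

    async def reply_message(self, msg):
        print('plain reply:', getattr(msg, 'text', msg))

async def send_back(adapter, resp_messages):
    # stream only when the adapter supports it AND the runner produced chunks
    has_chunks = any(isinstance(m, MessageChunk) for m in resp_messages)
    if await adapter.is_stream_output_supported() and has_chunks:
        await adapter.reply_message_chunk(resp_messages[-1], is_final=resp_messages[0].is_final)
    else:
        await adapter.reply_message(resp_messages[-1])

asyncio.run(send_back(DummyAdapter(), [MessageChunk('Hello', is_final=True)]))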

View File

@@ -61,14 +61,40 @@ class MessagePlatformAdapter(metaclass=abc.ABCMeta):
"""
raise NotImplementedError
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message: dict,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
"""回复消息(流式输出)
Args:
    message_source (platform.types.MessageEvent): 消息源事件
    bot_message (dict): 机器人消息对象,包含 resp_message_id 与 msg_sequence
    message (platform.types.MessageChain): 消息链
    quote_origin (bool, optional): 是否引用原消息. Defaults to False.
    is_final (bool, optional): 流式是否结束. Defaults to False.
"""
raise NotImplementedError
async def create_message_card(self, message_id: typing.Union[str, int], event: platform_events.MessageEvent) -> bool:
"""创建卡片消息
Args:
message_id (str): 消息ID
event (platform_events.MessageEvent): 消息源事件
"""
return False
async def is_muted(self, group_id: int) -> bool:
"""获取账号是否在指定群被禁言"""
raise NotImplementedError
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, MessagePlatformAdapter], None],
):
"""注册事件监听器
@@ -80,8 +106,8 @@ class MessagePlatformAdapter(metaclass=abc.ABCMeta):
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, MessagePlatformAdapter], None],
):
"""注销事件监听器
@@ -95,6 +121,10 @@ class MessagePlatformAdapter(metaclass=abc.ABCMeta):
"""异步运行"""
raise NotImplementedError
async def is_stream_output_supported(self) -> bool:
"""是否支持流式输出"""
return False
async def kill(self) -> bool:
"""关闭适配器
@@ -136,7 +166,7 @@ class EventConverter:
"""事件转换器基类"""
@staticmethod
def yiri2target(event: typing.Type[platform_events.Event]):
"""将源平台事件转换为目标平台事件
Args:
@@ -148,7 +178,7 @@ class EventConverter:
raise NotImplementedError
@staticmethod
def target2yiri(event: typing.Any) -> platform_events.Event:
"""将目标平台事件的调用参数转换为源平台的事件参数对象
Args:

View File

@@ -120,8 +120,10 @@ class RuntimeBot:
if isinstance(e, asyncio.CancelledError):
self.task_context.set_current_action('Exited.')
return
traceback_str = traceback.format_exc()
self.task_context.set_current_action('Exited with error.')
await self.logger.error(f'平台适配器运行出错:\n{e}\n{traceback_str}')
self.task_wrapper = self.ap.task_mgr.create_task(
exception_wrapper(),

View File

@@ -119,7 +119,7 @@ class EventLogger:
async def _truncate_logs(self):
if len(self.logs) > MAX_LOG_COUNT:
for i in range(DELETE_COUNT_PER_TIME):
for image_key in self.logs[i].images: # type: ignore
await self.ap.storage_mgr.storage_provider.delete(image_key)
self.logs = self.logs[DELETE_COUNT_PER_TIME:]

View File

@@ -266,7 +266,7 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
await process_message_data(msg_data, reply_list)
reply_msg = platform_message.Quote(
message_id=msg.data['id'], sender_id=msg_datas['sender']['user_id'], origin=reply_list
)
yiri_msg_list.append(reply_msg)
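The one-line fix above reads the sender from the nested sender object of the fetched message instead of a top-level user_id. With a OneBot-style payload the difference looks like this (sample data is hypothetical):

# Shape of an aiocqhttp get_msg() result for a replied-to message (abridged, made-up values)
msg_datas = {
    'message_id': 123,
    'sender': {'user_id': 10001, 'nickname': 'alice'},
}

sender_id = msg_datas.get('user_id')        # old, broken: top-level key is absent -> None
sender_id = msg_datas['sender']['user_id']  # fixed: the id lives inside 'sender' -> 10001
print(sender_id)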

View File

@@ -1,3 +1,4 @@
import traceback
import typing
from libs.dingtalk_api.dingtalkevent import DingTalkEvent
@@ -99,11 +100,15 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
message_converter: DingTalkMessageConverter = DingTalkMessageConverter()
event_converter: DingTalkEventConverter = DingTalkEventConverter()
config: dict
card_instance_id_dict: dict  # reply-card dict: key is the message id, value is the card instance id, used to route stream chunks to the right card
seq: int  # message order, identified directly by seq
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.card_instance_id_dict = {}
# self.seq = 1
required_keys = [
'client_id',
'client_secret',
@@ -139,6 +144,34 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
content, at = await DingTalkMessageConverter.yiri2target(message)
await self.bot.send_message(content, incoming_message, at)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
# event = await DingTalkEventConverter.yiri2target(
# message_source,
# )
# incoming_message = event.incoming_message
# msg_id = incoming_message.message_id
message_id = bot_message.resp_message_id
msg_seq = bot_message.msg_sequence
if (msg_seq - 1) % 8 == 0 or is_final:
content, at = await DingTalkMessageConverter.yiri2target(message)
card_instance, card_instance_id = self.card_instance_id_dict[message_id]
# print(card_instance_id)
await self.bot.send_card_message(card_instance, card_instance_id, content, is_final)
if is_final and bot_message.tool_calls is None:
# self.seq = 1 # 消息回复结束之后重置seq
self.card_instance_id_dict.pop(message_id) # 消息回复结束之后删除卡片实例id
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
content = await DingTalkMessageConverter.yiri2target(message)
if target_type == 'person':
@@ -146,6 +179,20 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
if target_type == 'group':
await self.bot.send_proactive_message_to_group(target_id, content)
async def is_stream_output_supported(self) -> bool:
return bool(self.config.get('enable-stream-reply'))
async def create_message_card(self, message_id, event):
card_template_id = self.config['card_template_id']
incoming_message = event.source_platform_object.incoming_message
# message_id = incoming_message.message_id
card_instance, card_instance_id = await self.bot.create_and_card(card_template_id, incoming_message)
self.card_instance_id_dict[message_id] = (card_instance, card_instance_id)
return True
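reply_message_chunk above only pushes a card update on every 8th chunk, and always on the final one, presumably to stay under platform rate limits. The throttle condition in isolation, assuming msg_sequence starts at 1:

def should_flush(msg_seq: int, is_final: bool, every: int = 8) -> bool:
    # flush on chunks 1, 9, 17, ... and unconditionally on the final chunk
    return (msg_seq - 1) % every == 0 or is_final

assert should_flush(1, False)
assert not should_flush(2, False)
assert should_flush(9, False)
assert should_flush(5, True)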
def register_listener(
self,
event_type: typing.Type[platform_events.Event],

View File

@@ -46,6 +46,23 @@ spec:
type: boolean
required: false
default: true
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用钉钉卡片流式回复模式
description:
en_US: If enabled, the bot will reply in DingTalk card streaming mode
zh_Hans: 如果启用,将使用钉钉卡片流式方式来回复内容
type: boolean
required: true
default: false
- name: card_template_id
label:
en_US: Card Template ID
zh_Hans: 卡片模板ID
type: string
required: true
default: "填写你的卡片template_id"
execution:
python:
path: ./dingtalk.py

File diff suppressed because it is too large

View File

@@ -17,6 +17,7 @@ import aiohttp
import lark_oapi.ws.exception
import quart
from lark_oapi.api.im.v1 import *
from lark_oapi.api.cardkit.v1 import *
from .. import adapter
from ...core import app
@@ -320,6 +321,10 @@ class LarkEventConverter(adapter.EventConverter):
)
CARD_ID_CACHE_SIZE = 500
CARD_ID_CACHE_MAX_LIFETIME = 20 * 60 # 20分钟
class LarkAdapter(adapter.MessagePlatformAdapter):
bot: lark_oapi.ws.Client
api_client: lark_oapi.Client
@@ -339,12 +344,20 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
quart_app: quart.Quart
ap: app.Application
card_id_dict: dict[str, str]  # message id -> card id, so messages sent after card creation go to the right card
seq: int  # sequence number identifying message order when updating card messages
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.config = config
self.ap = ap
self.logger = logger
self.quart_app = quart.Quart(__name__)
self.listeners = {}
self.card_id_dict = {}
self.seq = 1
@self.quart_app.route('/lark/callback', methods=['POST'])
async def lark_callback():
@@ -378,15 +391,15 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
if 'im.message.receive_v1' == type:
try:
event = await self.event_converter.target2yiri(p2v1, self.api_client)
except Exception:
await self.logger.error(f'Error in lark callback: {traceback.format_exc()}')
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
return {'code': 200, 'message': 'ok'}
except Exception:
await self.logger.error(f'Error in lark callback: {traceback.format_exc()}')
return {'code': 500, 'message': 'error'}
async def on_message(event: lark_oapi.im.v1.P2ImMessageReceiveV1):
@@ -409,6 +422,216 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
pass
async def is_stream_output_supported(self) -> bool:
return bool(self.config.get('enable-stream-reply'))
async def create_card_id(self, message_id):
try:
self.ap.logger.debug('飞书支持stream输出,创建卡片......')
card_data = {"schema": "2.0", "config": {"update_multi": True, "streaming_mode": True,
"streaming_config": {"print_step": {"default": 1},
"print_frequency_ms": {"default": 70},
"print_strategy": "fast"}},
"body": {"direction": "vertical", "padding": "12px 12px 12px 12px", "elements": [{"tag": "div",
"text": {
"tag": "plain_text",
"content": "LangBot",
"text_size": "normal",
"text_align": "left",
"text_color": "default"},
"icon": {
"tag": "custom_icon",
"img_key": "img_v3_02p3_05c65d5d-9bad-440a-a2fb-c89571bfd5bg"}},
{
"tag": "markdown",
"content": "",
"text_align": "left",
"text_size": "normal",
"margin": "0px 0px 0px 0px",
"element_id": "streaming_txt"},
{
"tag": "markdown",
"content": "",
"text_align": "left",
"text_size": "normal",
"margin": "0px 0px 0px 0px"},
{
"tag": "column_set",
"horizontal_spacing": "8px",
"horizontal_align": "left",
"columns": [
{
"tag": "column",
"width": "weighted",
"elements": [
{
"tag": "markdown",
"content": "",
"text_align": "left",
"text_size": "normal",
"margin": "0px 0px 0px 0px"},
{
"tag": "markdown",
"content": "",
"text_align": "left",
"text_size": "normal",
"margin": "0px 0px 0px 0px"},
{
"tag": "markdown",
"content": "",
"text_align": "left",
"text_size": "normal",
"margin": "0px 0px 0px 0px"}],
"padding": "0px 0px 0px 0px",
"direction": "vertical",
"horizontal_spacing": "8px",
"vertical_spacing": "2px",
"horizontal_align": "left",
"vertical_align": "top",
"margin": "0px 0px 0px 0px",
"weight": 1}],
"margin": "0px 0px 0px 0px"},
{"tag": "hr",
"margin": "0px 0px 0px 0px"},
{
"tag": "column_set",
"horizontal_spacing": "12px",
"horizontal_align": "right",
"columns": [
{
"tag": "column",
"width": "weighted",
"elements": [
{
"tag": "markdown",
"content": "<font color=\"grey-600\">以上内容由 AI 生成,仅供参考。更多详细、准确信息可点击引用链接查看</font>",
"text_align": "left",
"text_size": "notation",
"margin": "4px 0px 0px 0px",
"icon": {
"tag": "standard_icon",
"token": "robot_outlined",
"color": "grey"}}],
"padding": "0px 0px 0px 0px",
"direction": "vertical",
"horizontal_spacing": "8px",
"vertical_spacing": "8px",
"horizontal_align": "left",
"vertical_align": "top",
"margin": "0px 0px 0px 0px",
"weight": 1},
{
"tag": "column",
"width": "20px",
"elements": [
{
"tag": "button",
"text": {
"tag": "plain_text",
"content": ""},
"type": "text",
"width": "fill",
"size": "medium",
"icon": {
"tag": "standard_icon",
"token": "thumbsup_outlined"},
"hover_tips": {
"tag": "plain_text",
"content": "有帮助"},
"margin": "0px 0px 0px 0px"}],
"padding": "0px 0px 0px 0px",
"direction": "vertical",
"horizontal_spacing": "8px",
"vertical_spacing": "8px",
"horizontal_align": "left",
"vertical_align": "top",
"margin": "0px 0px 0px 0px"},
{
"tag": "column",
"width": "30px",
"elements": [
{
"tag": "button",
"text": {
"tag": "plain_text",
"content": ""},
"type": "text",
"width": "default",
"size": "medium",
"icon": {
"tag": "standard_icon",
"token": "thumbdown_outlined"},
"hover_tips": {
"tag": "plain_text",
"content": "无帮助"},
"margin": "0px 0px 0px 0px"}],
"padding": "0px 0px 0px 0px",
"vertical_spacing": "8px",
"horizontal_align": "left",
"vertical_align": "top",
"margin": "0px 0px 0px 0px"}],
"margin": "0px 0px 4px 0px"}]}}
# print_strategy: 'delay' buffers output, 'fast' prints in real time; the card template can be customized for nicer output
request: CreateCardRequest = (
CreateCardRequest.builder()
.request_body(CreateCardRequestBody.builder().type('card_json').data(json.dumps(card_data)).build())
.build()
)
# 发起请求
response: CreateCardResponse = self.api_client.cardkit.v1.card.create(request)
# 处理失败返回
if not response.success():
raise Exception(
f'client.cardkit.v1.card.create failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
self.ap.logger.debug(f'飞书卡片创建成功,卡片ID: {response.data.card_id}')
self.card_id_dict[message_id] = response.data.card_id
card_id = response.data.card_id
return card_id
except Exception as e:
self.ap.logger.error(f'飞书卡片创建失败,错误信息: {e}')
async def create_message_card(self, message_id, event) -> str:
"""
Create a card message.
Card messages are used because ordinary messages can only be updated a limited number of times, which streaming LLM output may exceed; Lark cards have no such limit (though the API's free quota is limited).
"""
# message_id = event.message_chain.message_id
card_id = await self.create_card_id(message_id)
content = {
'type': 'card',
'data': {'card_id': card_id, 'template_variable': {'content': 'Thinking...'}},
} # 当收到消息时发送消息模板,可添加模板变量,详情查看飞书中接口文档
request: ReplyMessageRequest = (
ReplyMessageRequest.builder()
.message_id(event.message_chain.message_id)
.request_body(
ReplyMessageRequestBody.builder().content(json.dumps(content)).msg_type('interactive').build()
)
.build()
)
# 发起请求
response: ReplyMessageResponse = await self.api_client.im.v1.message.areply(request)
# 处理失败返回
if not response.success():
raise Exception(
f'client.im.v1.message.reply failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
return True
async def reply_message(
self,
message_source: platform_events.MessageEvent,
@@ -447,6 +670,64 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
f'client.im.v1.message.reply failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
"""
回复消息变成更新卡片消息
"""
# self.seq += 1
message_id = bot_message.resp_message_id
msg_seq = bot_message.msg_sequence
if msg_seq % 8 == 0 or is_final:
lark_message = await self.message_converter.yiri2target(message, self.api_client)
text_message = ''
for ele in lark_message[0]:
if ele['tag'] == 'text':
text_message += ele['text']
elif ele['tag'] == 'md':
text_message += ele['text']
# content = {
# 'type': 'card_json',
# 'data': {'card_id': self.card_id_dict[message_id], 'elements': {'content': text_message}},
# }
request: ContentCardElementRequest = (
ContentCardElementRequest.builder()
.card_id(self.card_id_dict[message_id])
.element_id('streaming_txt')
.request_body(
ContentCardElementRequestBody.builder()
# .uuid("a0d69e20-1dd1-458b-k525-dfeca4015204")
.content(text_message)
.sequence(msg_seq)
.build()
)
.build()
)
if is_final and bot_message.tool_calls is None:
# self.seq = 1 # 消息回复结束之后重置seq
self.card_id_dict.pop(message_id) # 清理已经使用过的卡片
# 发起请求
response: ContentCardElementResponse = self.api_client.cardkit.v1.card_element.content(request)
# 处理失败返回
if not response.success():
raise Exception(
f'client.cardkit.v1.card_element.content failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}'
)
return
async def is_muted(self, group_id: int) -> bool:
return False
@@ -492,4 +773,9 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
)
async def kill(self) -> bool:
# Disconnect explicitly; otherwise the old connection keeps running and incoming Lark messages are routed to a randomly chosen connection.
# On disconnect, lark.ws.Client's _receive_message_loop logs an error ("receive message loop exit") and then reconnects,
# so set _auto_reconnect=False to prevent the reconnect.
self.bot._auto_reconnect = False
await self.bot._disconnect()
return False
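Putting the Lark pieces together: the adapter creates one card per response, streams by updating the 'streaming_txt' element with an increasing sequence number, and drops the card id from its cache on the final chunk. A network-free model of that bookkeeping (update_card stands in for the cardkit card_element.content request):

card_id_dict: dict[str, str] = {}

def update_card(card_id: str, content: str, sequence: int) -> None:
    # placeholder for client.cardkit.v1.card_element.content(...)
    print(f'card {card_id} seq {sequence}: {content!r}')

def reply_chunk(message_id: str, accumulated: str, msg_seq: int, is_final: bool) -> None:
    if msg_seq % 8 == 0 or is_final:
        update_card(card_id_dict[message_id], accumulated, msg_seq)
    if is_final:
        card_id_dict.pop(message_id)  # card finished; free the mapping

card_id_dict['m1'] = 'card-abc'
for seq, text in enumerate(['He', 'Hel', 'Hello'], start=1):
    reply_chunk('m1', text, seq, is_final=(seq == 3))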

View File

@@ -65,6 +65,16 @@ spec:
type: string
required: true
default: ""
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用飞书流式回复模式
description:
en_US: If enabled, the bot will reply in Lark streaming mode
zh_Hans: 如果启用,将使用飞书流式方式来回复内容
type: boolean
required: true
default: false
execution:
python:
path: ./lark.py

View File

@@ -72,8 +72,9 @@ class NakuruProjectMessageConverter(adapter_model.MessageConverter):
content=content_list,
)
nakuru_forward_node_list.append(nakuru_forward_node)
except Exception:
import traceback
traceback.print_exc()
nakuru_msg_list.append(nakuru_forward_node_list)
@@ -276,7 +277,7 @@ class NakuruAdapter(adapter_model.MessagePlatformAdapter):
# 注册监听器
self.bot.receiver(source_cls.__name__)(listener_wrapper)
except Exception as e:
self.logger.error(f'Error in nakuru register_listener: {traceback.format_exc()}')
raise e
def unregister_listener(

View File

@@ -125,8 +125,8 @@ class OfficialAccountAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception:
await self.logger.error(f'Error in officialaccount callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -501,7 +501,7 @@ class OfficialAdapter(adapter_model.MessagePlatformAdapter):
for event_handler in event_handler_mapping[event_type]:
setattr(self.bot, event_handler, wrapper)
except Exception as e:
self.logger.error(f'Error in qqbotpy callback: {traceback.format_exc()}')
raise e
def unregister_listener(

View File

@@ -154,10 +154,7 @@ class QQOfficialAdapter(adapter.MessagePlatformAdapter):
raise ParamNotEnoughError('QQ官方机器人缺少相关配置项请查看文档或联系管理员')
self.bot = QQOfficialClient(
app_id=config['appid'], secret=config['secret'], token=config['token'], logger=self.logger
)
async def reply_message(
@@ -224,8 +221,8 @@ class QQOfficialAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = 'justbot'
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception:
await self.logger.error(f'Error in qqofficial callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('DIRECT_MESSAGE_CREATE')(on_message)

View File

@@ -104,7 +104,9 @@ class SlackAdapter(adapter.MessagePlatformAdapter):
if missing_keys:
raise ParamNotEnoughError('Slack机器人缺少相关配置项请查看文档或联系管理员')
self.bot = SlackClient(
bot_token=self.config['bot_token'], signing_secret=self.config['signing_secret'], logger=self.logger
)
async def reply_message(
self,
@@ -139,8 +141,8 @@ class SlackAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = 'SlackBot'
try:
return await callback(await self.event_converter.target2yiri(event, self.bot), self)
except Exception:
await self.logger.error(f'Error in slack callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('im')(on_message)

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
import telegram
import telegram.ext
from telegram import Update
@@ -143,6 +144,10 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
config: dict
ap: app.Application
msg_stream_id: dict  # stream message dict: key is the stream message id, value is the id of the first message sent, used to decide which message to edit while streaming
seq: int  # sequence number identifying message order
listeners: typing.Dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
@@ -152,6 +157,8 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
self.config = config
self.ap = ap
self.logger = logger
self.msg_stream_id = {}
# self.seq = 1
async def telegram_callback(update: Update, context: ContextTypes.DEFAULT_TYPE):
if update.message.from_user.is_bot:
@@ -160,8 +167,9 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
try:
lb_event = await self.event_converter.target2yiri(update, self.bot, self.bot_account_id)
await self.listeners[type(lb_event)](lb_event, self)
except Exception:
await self.logger.error(f'Error in telegram callback: {traceback.format_exc()}')
self.application = ApplicationBuilder().token(self.config['token']).build()
self.bot = self.application.bot
@@ -200,6 +208,70 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
await self.bot.send_message(**args)
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
):
msg_seq = bot_message.msg_sequence
if (msg_seq - 1) % 8 == 0 or is_final:
assert isinstance(message_source.source_platform_object, Update)
components = await TelegramMessageConverter.yiri2target(message, self.bot)
args = {}
message_id = message_source.source_platform_object.message.id
if quote_origin:
args['reply_to_message_id'] = message_source.source_platform_object.message.id
component = components[0]
if message_id not in self.msg_stream_id: # 当消息回复第一次时,发送新消息
# time.sleep(0.6)
if component['type'] == 'text':
if self.config['markdown_card'] is True:
content = telegramify_markdown.markdownify(
content=component['text'],
)
else:
content = component['text']
args = {
'chat_id': message_source.source_platform_object.effective_chat.id,
'text': content,
}
if self.config['markdown_card'] is True:
args['parse_mode'] = 'MarkdownV2'
send_msg = await self.bot.send_message(**args)
send_msg_id = send_msg.message_id
self.msg_stream_id[message_id] = send_msg_id
else:  # a message was already sent for this stream; edit it in place
if component['type'] == 'text':
if self.config['markdown_card'] is True:
content = telegramify_markdown.markdownify(
content=component['text'],
)
else:
content = component['text']
args = {
'message_id': self.msg_stream_id[message_id],
'chat_id': message_source.source_platform_object.effective_chat.id,
'text': content,
}
if self.config['markdown_card'] is True:
args['parse_mode'] = 'MarkdownV2'
await self.bot.edit_message_text(**args)
if is_final and bot_message.tool_calls is None:
# self.seq = 1 # 消息回复结束之后重置seq
self.msg_stream_id.pop(message_id) # 消息回复结束之后删除流式消息id
async def is_stream_output_supported(self) -> bool:
return bool(self.config.get('enable-stream-reply'))
async def is_muted(self, group_id: int) -> bool:
return False
@@ -222,8 +294,12 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = (await self.bot.get_me()).username
await self.application.updater.start_polling(allowed_updates=Update.ALL_TYPES)
await self.application.start()
await self.logger.info('Telegram adapter running')
async def kill(self) -> bool:
if self.application.running:
await self.application.stop()
if self.application.updater:
await self.application.updater.stop()
await self.logger.info('Telegram adapter stopped')
return True
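Telegram has no card primitive, so the adapter streams by sending a message for the first flushed chunk and editing it for every later one, keyed through msg_stream_id. A network-free sketch of that send-then-edit state machine (send_message/edit_message_text stand in for the Bot API calls):

msg_stream_id: dict[int, int] = {}  # source message id -> bot message id
_next_id = 100

def send_message(text: str) -> int:          # stands in for bot.send_message
    global _next_id
    _next_id += 1
    print(f'send #{_next_id}: {text!r}')
    return _next_id

def edit_message_text(mid: int, text: str):  # stands in for bot.edit_message_text
    print(f'edit #{mid}: {text!r}')

def reply_chunk(source_id: int, accumulated: str, is_final: bool):
    if source_id not in msg_stream_id:
        msg_stream_id[source_id] = send_message(accumulated)       # first chunk: new message
    else:
        edit_message_text(msg_stream_id[source_id], accumulated)   # later chunks: edit in place
    if is_final:
        msg_stream_id.pop(source_id)  # stream done; forget the mapping

for i, text in enumerate(['Hi', 'Hi the', 'Hi there'], 1):
    reply_chunk(42, text, is_final=(i == 3))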

View File

@@ -25,6 +25,16 @@ spec:
type: boolean
required: false
default: true
- name: enable-stream-reply
label:
en_US: Enable Stream Reply Mode
zh_Hans: 启用电报流式回复模式
description:
en_US: If enabled, the bot will reply in Telegram streaming mode
zh_Hans: 如果启用,将使用电报流式方式来回复内容
type: boolean
required: true
default: false
execution:
python:
path: ./telegram.py

View File

@@ -19,17 +19,20 @@ class WebChatMessage(BaseModel):
content: str
message_chain: list[dict]
timestamp: str
is_final: bool = False
class WebChatSession:
id: str
message_lists: dict[str, list[WebChatMessage]] = {}
resp_waiters: dict[int, asyncio.Future[WebChatMessage]]
resp_queues: dict[int, asyncio.Queue[WebChatMessage]]
def __init__(self, id: str):
self.id = id
self.message_lists = {}
self.resp_waiters = {}
self.resp_queues = {}
def get_message_list(self, pipeline_uuid: str) -> list[WebChatMessage]:
if pipeline_uuid not in self.message_lists:
@@ -49,6 +52,8 @@ class WebChatAdapter(msadapter.MessagePlatformAdapter):
typing.Callable[[platform_events.Event, msadapter.MessagePlatformAdapter], None],
] = {}
is_stream: bool
def __init__(self, config: dict, ap: app.Application, logger: EventLogger):
self.ap = ap
self.logger = logger
@@ -59,6 +64,8 @@ class WebChatAdapter(msadapter.MessagePlatformAdapter):
self.bot_account_id = 'webchatbot'
self.is_stream = False
async def send_message(
self,
target_type: str,
@@ -102,12 +109,53 @@ class WebChatAdapter(msadapter.MessagePlatformAdapter):
# notify waiter
if isinstance(message_source, platform_events.FriendMessage):
await self.webchat_person_session.resp_queues[message_source.message_chain.message_id].put(message_data)
elif isinstance(message_source, platform_events.GroupMessage):
await self.webchat_group_session.resp_queues[message_source.message_chain.message_id].put(message_data)
return message_data.model_dump()
async def reply_message_chunk(
self,
message_source: platform_events.MessageEvent,
bot_message,
message: platform_message.MessageChain,
quote_origin: bool = False,
is_final: bool = False,
) -> dict:
"""回复消息"""
message_data = WebChatMessage(
id=-1,
role='assistant',
content=str(message),
message_chain=[component.__dict__ for component in message],
timestamp=datetime.now().isoformat(),
)
# notify waiter
session = (
self.webchat_group_session
if isinstance(message_source, platform_events.GroupMessage)
else self.webchat_person_session
)
queue = session.resp_queues[message_source.message_chain.message_id]
if is_final and bot_message.tool_calls is None:
message_data.is_final = True
await queue.put(message_data)
return message_data.model_dump()
async def is_stream_output_supported(self) -> bool:
return self.is_stream
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
@@ -140,8 +188,13 @@ class WebChatAdapter(msadapter.MessagePlatformAdapter):
await self.logger.info('WebChat调试适配器正在停止')
async def send_webchat_message(
self,
pipeline_uuid: str,
session_type: str,
message_chain_obj: typing.List[dict],
is_stream: bool = False,
) -> dict:
"""Send a debug message into the pipeline."""
self.is_stream = is_stream
if session_type == 'person':
use_session = self.webchat_person_session
@@ -152,6 +205,9 @@ class WebChatAdapter(msadapter.MessagePlatformAdapter):
message_id = len(use_session.get_message_list(pipeline_uuid)) + 1
use_session.resp_queues[message_id] = asyncio.Queue()
logger.debug(f'Initialized queue for message_id: {message_id}')
use_session.get_message_list(pipeline_uuid).append(
WebChatMessage(
id=message_id,
@@ -185,21 +241,46 @@ class WebChatAdapter(msadapter.MessagePlatformAdapter):
self.ap.platform_mgr.webchat_proxy_bot.bot_entity.use_pipeline_uuid = pipeline_uuid
# trigger pipeline
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
if is_stream:
    queue = use_session.resp_queues[message_id]
    msg_id = len(use_session.get_message_list(pipeline_uuid)) + 1
    while True:
        resp_message = await queue.get()
        resp_message.id = msg_id
        if resp_message.is_final:
            use_session.get_message_list(pipeline_uuid).append(resp_message)
            yield resp_message.model_dump()
            break
        yield resp_message.model_dump()
    use_session.resp_queues.pop(message_id)
else:  # non-stream
    msg_id = len(use_session.get_message_list(pipeline_uuid)) + 1
    queue = use_session.resp_queues[message_id]
    resp_message = await queue.get()
    resp_message.id = msg_id
    resp_message.is_final = True
    use_session.get_message_list(pipeline_uuid).append(resp_message)
    yield resp_message.model_dump()
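The WebChat flow above replaces the single Future per message with an asyncio.Queue, so the adapter can push any number of partial messages and the consumer drains the queue until one arrives with is_final set. A minimal demonstration of that producer/consumer handshake:

import asyncio

async def producer(queue: asyncio.Queue):
    for i, text in enumerate(['par', 'parti', 'partial done'], 1):
        await queue.put({'content': text, 'is_final': i == 3})

async def consumer(queue: asyncio.Queue):
    while True:
        msg = await queue.get()
        print('got', msg)
        if msg['is_final']:  # the final chunk closes the stream
            break

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())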
def get_webchat_messages(self, pipeline_uuid: str, session_type: str) -> list[dict]:
"""获取调试消息历史"""

View File

@@ -9,7 +9,8 @@ metadata:
en_US: "WebChat adapter for pipeline debugging"
zh_Hans: "用于流水线调试的网页聊天适配器"
icon: ""
spec: {}
spec:
config: []
execution:
python:
path: "webchat.py"

View File

@@ -1,5 +1,4 @@
import requests
import websocket
import json
import time
@@ -10,53 +9,42 @@ from libs.wechatpad_api.client import WeChatPadClient
import typing
import asyncio
import traceback
import time
import re
import base64
import uuid
import json
import os
import copy
import datetime
import threading
import quart
import aiohttp
from .. import adapter
from ...pipeline.longtext.strategies import forward
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...utils import image
from ..logger import EventLogger
import xml.etree.ElementTree as ET
from typing import Optional, List, Tuple
from typing import Optional, Tuple
from functools import partial
import logging
class WeChatPadMessageConverter(adapter.MessageConverter):
def __init__(self, config: dict, logger: logging.Logger):
self.config = config
self.bot = WeChatPadClient(self.config['wechatpad_url'], self.config['token'])
self.logger = logger
@staticmethod
async def yiri2target(message_chain: platform_message.MessageChain) -> list[dict]:
content_list = []
current_file_path = os.path.abspath(__file__)
for component in message_chain:
if isinstance(component, platform_message.AtAll):
content_list.append({'type': 'at', 'target': 'all'})
elif isinstance(component, platform_message.At):
content_list.append({'type': 'at', 'target': component.target})
elif isinstance(component, platform_message.Plain):
content_list.append({'type': 'text', 'content': component.text})
elif isinstance(component, platform_message.Image):
if component.url:
async with httpx.AsyncClient() as client:
@@ -68,15 +56,16 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
else:
raise Exception('获取文件失败')
# pass
content_list.append({'type': 'image', 'image': base64_str})
elif component.base64:
content_list.append({'type': 'image', 'image': component.base64})
elif isinstance(component, platform_message.WeChatEmoji):
content_list.append(
{'type': 'WeChatEmoji', 'emoji_md5': component.emoji_md5, 'emoji_size': component.emoji_size}
)
elif isinstance(component, platform_message.Voice):
content_list.append({'type': 'voice', 'data': component.url, 'duration': component.length, 'forma': 0})
elif isinstance(component, platform_message.WeChatAppMsg):
content_list.append({'type': 'WeChatAppMsg', 'app_msg': component.app_msg})
elif isinstance(component, platform_message.Forward):
@@ -86,37 +75,37 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return content_list
async def target2yiri(
self,
message: dict,
bot_account_id: str,
) -> platform_message.MessageChain:
"""外部消息转平台消息"""
# 数据预处理
message_list = []
bot_wxid = self.config['wxid']
ats_bot = False # 是否被@
content = message['content']['str']
content_no_preifx = content # 群消息则去掉前缀
is_group_message = self._is_group_message(message)
if is_group_message:
ats_bot = self._ats_bot(message, bot_account_id)
self.logger.info(f'ats_bot: {ats_bot}; bot_account_id: {bot_account_id}; bot_wxid: {bot_wxid}')
if '@所有人' in content:
message_list.append(platform_message.AtAll())
if ats_bot:
message_list.append(platform_message.At(target=bot_account_id))
# 解析@信息并生成At组件
at_targets = self._extract_at_targets(message)
for target_id in at_targets:
if target_id != bot_wxid: # 避免重复添加机器人的At
message_list.append(platform_message.At(target=target_id))
content_no_preifx, _ = self._extract_content_and_sender(content)
msg_type = message['msg_type']
# 映射消息类型到处理器方法
handler_map = {
@@ -138,11 +127,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return platform_message.MessageChain(message_list)
async def _handler_text(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理文本消息 (msg_type=1)"""
if message and self._is_group_message(message):
pattern = r'@\S{1,20}'
@@ -150,16 +135,12 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return platform_message.MessageChain([platform_message.Plain(content_no_preifx)])
async def _handler_image(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理图像消息 (msg_type=3)"""
try:
image_xml = content_no_preifx
if not image_xml:
return platform_message.MessageChain([platform_message.Unknown('[图片内容为空]')])
root = ET.fromstring(image_xml)
# 提取img标签的属性
@@ -169,28 +150,22 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
cdnthumburl = img_tag.get('cdnthumburl')
# cdnmidimgurl = img_tag.get('cdnmidimgurl')
image_data = self.bot.cdn_download(aeskey=aeskey, file_type=1, file_url=cdnthumburl)
if image_data['Data']['FileData'] == '':
image_data = self.bot.cdn_download(aeskey=aeskey, file_type=2, file_url=cdnthumburl)
base64_str = image_data['Data']['FileData']
# self.logger.info(f"data:image/png;base64,{base64_str}")
elements = [
platform_message.Image(base64=f'data:image/png;base64,{base64_str}'),
# platform_message.WeChatForwardImage(xml_data=image_xml) # 微信消息转发
]
return platform_message.MessageChain(elements)
except Exception as e:
self.logger.error(f'处理图片失败: {str(e)}')
return platform_message.MessageChain([platform_message.Unknown('[图片处理失败]')])
async def _handler_voice(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理语音消息 (msg_type=34)"""
message_List = []
try:
@@ -206,39 +181,33 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
bufid = voicemsg.get('bufid')
length = voicemsg.get('voicelength')
voice_data = self.bot.get_msg_voice(buf_id=str(bufid), length=int(length), msgid=str(new_msg_id))
audio_base64 = voice_data['Data']['Base64']
# 验证语音数据有效性
if not audio_base64:
message_List.append(platform_message.Unknown(text='[语音内容为空]'))
return platform_message.MessageChain(message_List)
# 转换为平台支持的语音格式(如 Silk 格式)
voice_element = platform_message.Voice(base64=f'data:audio/silk;base64,{audio_base64}')
message_List.append(voice_element)
except KeyError as e:
self.logger.error(f'语音数据字段缺失: {str(e)}')
message_List.append(platform_message.Unknown(text='[语音数据解析失败]'))
except Exception as e:
self.logger.error(f'处理语音消息异常: {str(e)}')
message_List.append(platform_message.Unknown(text='[语音处理失败]'))
return platform_message.MessageChain(message_List)
async def _handler_compound(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理复合消息 (msg_type=49),根据子类型分派"""
try:
xml_data = ET.fromstring(content_no_preifx)
appmsg_data = xml_data.find('.//appmsg')
if appmsg_data:
data_type = appmsg_data.findtext('.//type', '')
# 二次分派处理器
sub_handler_map = {
'57': self._handler_compound_quote,
@@ -247,9 +216,9 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
'74': self._handler_compound_file,
'33': self._handler_compound_mini_program,
'36': self._handler_compound_mini_program,
'2000': partial(self._handler_compound_unsupported, text='[转账消息]'),
'2001': partial(self._handler_compound_unsupported, text='[红包消息]'),
'51': partial(self._handler_compound_unsupported, text='[视频号消息]'),
}
handler = sub_handler_map.get(data_type, self._handler_compound_unsupported)
@@ -260,56 +229,51 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
else:
return platform_message.MessageChain([platform_message.Unknown(text=content_no_preifx)])
except Exception as e:
self.logger.error(f'解析复合消息失败: {str(e)}')
return platform_message.MessageChain([platform_message.Unknown(text=content_no_preifx)])
async def _handler_compound_quote(
self, message: Optional[dict], xml_data: ET.Element
) -> platform_message.MessageChain:
"""处理引用消息 (data_type=57)"""
message_list = []
# self.logger.info("_handler_compound_quote", ET.tostring(xml_data, encoding='unicode'))
appmsg_data = xml_data.find('.//appmsg')
quote_data = '' # 引用原文
# quote_id = None # 引用消息的原发送者
# tousername = None # 接收方: 所属微信的wxid
user_data = '' # 用户消息
sender_id = xml_data.findtext('.//fromusername') # 发送方:单聊用户/群member
# 引用消息转发
if appmsg_data:
user_data = appmsg_data.findtext('.//title') or ''
quote_data = appmsg_data.find('.//refermsg').findtext('.//content')
# quote_id = appmsg_data.find('.//refermsg').findtext('.//chatusr')
message_list.append(platform_message.WeChatAppMsg(app_msg=ET.tostring(appmsg_data, encoding='unicode')))
# if message:
# tousername = message['to_user_name']['str']
if quote_data:
quote_data_message_list = platform_message.MessageChain()
# 文本消息
try:
if '<msg>' not in quote_data:
quote_data_message_list.append(platform_message.Plain(quote_data))
else:
# 引用消息展开
quote_data_xml = ET.fromstring(quote_data)
if quote_data_xml.find('img'):
quote_data_message_list.extend(await self._handler_image(None, quote_data))
elif quote_data_xml.find('voicemsg'):
quote_data_message_list.extend(await self._handler_voice(None, quote_data))
elif quote_data_xml.find('videomsg'):
quote_data_message_list.extend(await self._handler_default(None, quote_data)) # 先不处理
else:
# appmsg
quote_data_message_list.extend(await self._handler_compound(None, quote_data))
except Exception as e:
self.logger.error(f'处理引用消息异常 exception: {e}')
quote_data_message_list.append(platform_message.Plain(quote_data))
message_list.append(
platform_message.Quote(
@@ -324,15 +288,11 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
return platform_message.MessageChain(message_list)
async def _handler_compound_file(self, message: dict, xml_data: ET.Element) -> platform_message.MessageChain:
"""处理文件消息 (data_type=6)"""
file_data = xml_data.find('.//appmsg')
if file_data.findtext('.//type', '') == '74':
return None
else:
@@ -355,22 +315,21 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
file_data = self.bot.cdn_download(aeskey=aeskey, file_type=5, file_url=cdnthumburl)
file_base64 = file_data['Data']['FileData']
# print(file_data)
file_size = file_data['Data']['TotalSize']
# print(file_base64)
return platform_message.MessageChain(
[
platform_message.WeChatFile(
file_id=file_id, file_name=file_name, file_size=file_size, file_base64=file_base64
),
platform_message.WeChatForwardFile(xml_data=xml_data_str),
]
)
async def _handler_compound_link(self, message: dict, xml_data: ET.Element) -> platform_message.MessageChain:
"""处理链接消息(如公众号文章、外部网页)"""
message_list = []
try:
@@ -383,56 +342,38 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
link_title=appmsg.findtext('title', ''),
link_desc=appmsg.findtext('des', ''),
link_url=appmsg.findtext('url', ''),
link_thumb_url=appmsg.findtext('thumburl', ''), # 这个字段拿不到
)
)
# 还没有发链接的接口, 暂时还需要自己构造appmsg, 先用WeChatAppMsg。
message_list.append(platform_message.WeChatAppMsg(app_msg=ET.tostring(appmsg, encoding='unicode')))
except Exception as e:
self.logger.error(f'解析链接消息失败: {str(e)}')
return platform_message.MessageChain(message_list)
async def _handler_compound_mini_program(
self, message: dict, xml_data: ET.Element
) -> platform_message.MessageChain:
"""处理小程序消息(如小程序卡片、服务通知)"""
xml_data_str = ET.tostring(xml_data, encoding='unicode')
return platform_message.MessageChain([platform_message.WeChatForwardMiniPrograms(xml_data=xml_data_str)])
async def _handler_default(self, message: Optional[dict], content_no_preifx: str) -> platform_message.MessageChain:
"""处理未知消息类型"""
if message:
msg_type = message['msg_type']
else:
msg_type = ''
return platform_message.MessageChain([platform_message.Unknown(text=f'[未知消息类型 msg_type:{msg_type}]')])
def _handler_compound_unsupported(
self, message: dict, xml_data: str, text: Optional[str] = None
) -> platform_message.MessageChain:
"""处理未支持复合消息类型(msg_type=49)子类型"""
if not text:
text = f'[xml_data={xml_data}]'
content_list = []
content_list.append(platform_message.Unknown(text=f'[处理未支持复合消息类型[msg_type=49]|{text}'))
return platform_message.MessageChain(content_list)
@@ -441,7 +382,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
ats_bot = False
try:
to_user_name = message['to_user_name']['str'] # 接收方: 所属微信的wxid
raw_content = message['content']['str'] # 原始消息内容
content_no_prefix, _ = self._extract_content_and_sender(raw_content)
# 直接艾特机器人这个有bug当被引用的消息里面有@bot,会套娃
# ats_bot = ats_bot or (f"@{bot_account_id}" in content_no_prefix)
@@ -452,7 +393,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
msg_source = message.get('msg_source', '') or ''
if len(msg_source) > 0:
msg_source_data = ET.fromstring(msg_source)
at_user_list = msg_source_data.findtext('atuserlist') or ''
ats_bot = ats_bot or (to_user_name in at_user_list)
# 引用bot
if message.get('msg_type', 0) == 49:
@@ -463,7 +404,7 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
quote_id = appmsg_data.find('.//refermsg').findtext('.//chatusr') # 引用消息的原发送者
ats_bot = ats_bot or (quote_id == tousername)
except Exception as e:
self.logger.error(f'_ats_bot got except: {e}')
finally:
return ats_bot
@@ -476,60 +417,58 @@ class WeChatPadMessageConverter(adapter.MessageConverter):
msg_source = message.get('msg_source', '') or ''
if len(msg_source) > 0:
msg_source_data = ET.fromstring(msg_source)
at_user_list = msg_source_data.findtext('atuserlist') or ''
if at_user_list:
# atuserlist格式通常是逗号分隔的用户ID列表
at_targets = [user_id.strip() for user_id in at_user_list.split(',') if user_id.strip()]
except Exception as e:
self.logger.error(f'_extract_at_targets got except: {e}')
return at_targets
# 提取一下content前面的sender_id, 和去掉前缀的内容
def _extract_content_and_sender(self, raw_content: str) -> Tuple[str, Optional[str]]:
try:
# 检查消息开头,如果有 wxid_sbitaz0mt65n22:\n 则删掉
# add: 有些用户的wxid不是上述格式。换成user_name:
regex = re.compile(r'^[a-zA-Z0-9_\-]{5,20}:')
line_split = raw_content.split('\n')
if len(line_split) > 0 and regex.match(line_split[0]):
raw_content = '\n'.join(line_split[1:])
sender_id = line_split[0].strip(':')
return raw_content, sender_id
except Exception as e:
self.logger.error(f'_extract_content_and_sender got except: {e}')
finally:
return raw_content, None
# 是否是群消息
def _is_group_message(self, message: dict) -> bool:
from_user_name = message['from_user_name']['str']
return from_user_name.endswith('@chatroom')
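_extract_content_and_sender above strips the leading wxid prefix line that group-chat payloads carry and returns it as the sender id. The same logic in isolation, on a made-up payload:

import re

def extract_content_and_sender(raw: str):
    # group messages arrive as 'wxid_xxx:\n<body>'; peel off the sender prefix
    regex = re.compile(r'^[a-zA-Z0-9_\-]{5,20}:')
    lines = raw.split('\n')
    if lines and regex.match(lines[0]):
        return '\n'.join(lines[1:]), lines[0].strip(':')
    return raw, None

print(extract_content_and_sender('wxid_sbitaz0mt65n22:\nhello group'))
# ('hello group', 'wxid_sbitaz0mt65n22')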
class WeChatPadEventConverter(adapter.EventConverter):
def __init__(self, config: dict, logger: logging.Logger):
self.config = config
self.message_converter = WeChatPadMessageConverter(config, logger)
self.logger = logger
@staticmethod
async def yiri2target(event: platform_events.MessageEvent) -> dict:
pass
async def target2yiri(
self,
event: dict,
bot_account_id: str,
) -> platform_events.MessageEvent:
# 排除公众号以及微信团队消息
if (
event['from_user_name']['str'].startswith('gh_')
or event['from_user_name']['str'] == 'weixin'
or event['from_user_name']['str'] == 'newsapp'
or event['from_user_name']['str'] == self.config['wxid']
):
return None
message_chain = await self.message_converter.target2yiri(copy.deepcopy(event), bot_account_id)
@@ -538,7 +477,7 @@ class WeChatPadEventConverter(adapter.EventConverter):
if '@chatroom' in event['from_user_name']['str']:
# 找出开头的 wxid_ 字符串,以:结尾
sender_wxid = event['content']['str'].split(':')[0]
return platform_events.GroupMessage(
sender=platform_entities.GroupMember(
@@ -550,13 +489,13 @@ class WeChatPadEventConverter(adapter.EventConverter):
name=event['from_user_name']['str'],
permission=platform_entities.Permission.Member,
),
special_title='',
join_timestamp=0,
last_speak_timestamp=0,
mute_time_remaining=0,
),
message_chain=message_chain,
time=event['create_time'],
source_platform_object=event,
)
else:
@@ -567,13 +506,13 @@ class WeChatPadEventConverter(adapter.EventConverter):
remark='',
),
message_chain=message_chain,
time=event['create_time'],
source_platform_object=event,
)
class WeChatPadAdapter(adapter.MessagePlatformAdapter):
name: str = 'WeChatPad' # 定义适配器名称
bot: WeChatPadClient
quart_app: quart.Quart
@@ -606,27 +545,21 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
# self.ap.logger.debug(f"Gewechat callback event: {data}")
# print(data)
try:
event = await self.event_converter.target2yiri(data.copy(), self.bot_account_id)
except Exception:
await self.logger.error(f'Error in wechatpad callback: {traceback.format_exc()}')
if event.__class__ in self.listeners:
await self.listeners[event.__class__](event, self)
return 'ok'
async def _handle_message(self, message: platform_message.MessageChain, target_id: str):
"""统一消息处理核心逻辑"""
content_list = await self.message_converter.yiri2target(message)
# print(content_list)
at_targets = [item['target'] for item in content_list if item['type'] == 'at']
# print(at_targets)
# 处理@逻辑
at_targets = at_targets or []
@@ -634,71 +567,62 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
if at_targets:
member_info = self.bot.get_chatroom_member_detail(
target_id,
)['Data']['member_data']['chatroom_member_list']
# 处理消息组件
for msg in content_list:
# 文本消息处理@
if msg['type'] == 'text' and at_targets:
if 'all' in at_targets:
msg['content'] = f'@所有人 {msg["content"]}'
else:
at_nick_name_list = []
for member in member_info:
if member['user_name'] in at_targets:
at_nick_name_list.append(f'@{member["nick_name"]}')
msg['content'] = f'{" ".join(at_nick_name_list)} {msg["content"]}'
# 统一消息派发
handler_map = {
'text': lambda msg: self.bot.send_text_message(
to_wxid=target_id, message=msg['content'], ats=['notify@all'] if 'all' in at_targets else at_targets
),
'image': lambda msg: self.bot.send_image_message(
to_wxid=target_id, img_url=msg['image'], ats=['notify@all'] if 'all' in at_targets else at_targets
),
'WeChatEmoji': lambda msg: self.bot.send_emoji_message(
to_wxid=target_id, emoji_md5=msg['emoji_md5'], emoji_size=msg['emoji_size']
),
'voice': lambda msg: self.bot.send_voice_message(
to_wxid=target_id,
voice_data=msg['data'],
voice_duration=msg["duration"],
voice_forma=msg["forma"],
voice_duration=msg['duration'],
voice_forma=msg['forma'],
),
'WeChatAppMsg': lambda msg: self.bot.send_app_message(
to_wxid=target_id,
app_message=msg['app_msg'],
type=0,
),
'at': lambda msg: None,
}
if handler := handler_map.get(msg['type']):
handler(msg)
# self.ap.logger.warning(f"未处理的消息类型: {ret}")
else:
self.ap.logger.warning(f"未处理的消息类型: {msg['type']}")
self.ap.logger.warning(f'未处理的消息类型: {msg["type"]}')
continue
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
"""主动发送消息"""
return await self._handle_message(message, target_id)
async def reply_message(
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False,
):
"""回复消息"""
if message_source.source_platform_object:
@@ -709,58 +633,49 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
pass
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
):
self.listeners[event_type] = callback
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessagePlatformAdapter], None],
):
pass
async def run_async(self):
if not self.config["admin_key"] and not self.config["token"]:
raise RuntimeError("无wechatpad管理密匙请填入配置文件后重启")
if not self.config['admin_key'] and not self.config['token']:
raise RuntimeError('无wechatpad管理密匙请填入配置文件后重启')
else:
if self.config["token"]:
self.bot = WeChatPadClient(
self.config['wechatpad_url'],
self.config["token"]
)
if self.config['token']:
self.bot = WeChatPadClient(self.config['wechatpad_url'], self.config['token'])
data = self.bot.get_login_status()
self.ap.logger.info(data)
if data["Code"] == 300 and data["Text"] == "你已退出微信":
if data['Code'] == 300 and data['Text'] == '你已退出微信':
response = requests.post(
f"{self.config['wechatpad_url']}/admin/GenAuthKey1?key={self.config['admin_key']}",
json={"Count": 1, "Days": 365}
f'{self.config["wechatpad_url"]}/admin/GenAuthKey1?key={self.config["admin_key"]}',
json={'Count': 1, 'Days': 365},
)
if response.status_code != 200:
raise Exception(f"获取token失败: {response.text}")
self.config["token"] = response.json()["Data"][0]
raise Exception(f'获取token失败: {response.text}')
self.config['token'] = response.json()['Data'][0]
elif not self.config["token"]:
elif not self.config['token']:
response = requests.post(
f"{self.config['wechatpad_url']}/admin/GenAuthKey1?key={self.config['admin_key']}",
json={"Count": 1, "Days": 365}
f'{self.config["wechatpad_url"]}/admin/GenAuthKey1?key={self.config["admin_key"]}',
json={'Count': 1, 'Days': 365},
)
if response.status_code != 200:
raise Exception(f"获取token失败: {response.text}")
self.config["token"] = response.json()["Data"][0]
raise Exception(f'获取token失败: {response.text}')
self.config['token'] = response.json()['Data'][0]
self.bot = WeChatPadClient(self.config['wechatpad_url'], self.config['token'], logger=self.logger)
self.ap.logger.info(self.config['token'])
thread_1 = threading.Event()
def wechat_login_process():
# Commented out to avoid fetching a login QR code when already logged in.
# login_data =self.bot.get_login_qr()
@@ -768,67 +683,54 @@ class WeChatPadAdapter(adapter.MessagePlatformAdapter):
# url = login_data['Data']["QrCodeUrl"]
# self.ap.logger.info(login_data)
profile = self.bot.get_profile()
self.ap.logger.info(profile)
self.bot_account_id = profile["Data"]["userInfo"]["nickName"]["str"]
self.config["wxid"] = profile["Data"]["userInfo"]["userName"]["str"]
self.bot_account_id = profile['Data']['userInfo']['nickName']['str']
self.config['wxid'] = profile['Data']['userInfo']['userName']['str']
thread_1.set()
# asyncio.create_task(wechat_login_process)
threading.Thread(target=wechat_login_process).start()
def connect_websocket_sync() -> None:
thread_1.wait()
uri = f"{self.config['wechatpad_ws']}/GetSyncMsg?key={self.config['token']}"
self.ap.logger.info(f"Connecting to WebSocket: {uri}")
uri = f'{self.config["wechatpad_ws"]}/GetSyncMsg?key={self.config["token"]}'
self.ap.logger.info(f'Connecting to WebSocket: {uri}')
def on_message(ws, message):
try:
data = json.loads(message)
self.ap.logger.debug(f"Received message: {data}")
self.ap.logger.debug(f'Received message: {data}')
# ws_message must be synchronous here, or the async method must be driven via asyncio.run
asyncio.run(self.ws_message(data))
except json.JSONDecodeError:
self.ap.logger.error(f"Non-JSON message: {message[:100]}...")
self.ap.logger.error(f'Non-JSON message: {message[:100]}...')
def on_error(ws, error):
self.ap.logger.error(f"WebSocket error: {str(error)[:200]}")
self.ap.logger.error(f'WebSocket error: {str(error)[:200]}')
def on_close(ws, close_status_code, close_msg):
self.ap.logger.info("WebSocket closed, reconnecting...")
self.ap.logger.info('WebSocket closed, reconnecting...')
time.sleep(5)
connect_websocket_sync() # auto-reconnect
def on_open(ws):
self.ap.logger.info("WebSocket connected successfully!")
self.ap.logger.info('WebSocket connected successfully!')
ws = websocket.WebSocketApp(
uri, on_message=on_message, on_error=on_error, on_close=on_close, on_open=on_open
)
ws.run_forever(ping_interval=60, ping_timeout=20)
# Call the synchronous version directly (blocking)
# connect_websocket_sync()
# This line would only run after the WebSocket connection drops
# self.ap.logger.info("WebSocket client thread started")
thread = threading.Thread(target=connect_websocket_sync, name='WebSocketClientThread', daemon=True)
thread.start()
self.ap.logger.info("WebSocket client thread started")
self.ap.logger.info('WebSocket client thread started')
async def kill(self) -> bool:
pass
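Sketch of the token bootstrap above: when no token is configured, one is minted from the admin endpoint. The URL shape and the {'Data': [token]} response layout are taken from the adapter code; the helper itself is illustrative, not part of the adapter.

import requests

def ensure_token(config: dict) -> str:
    # Reuse an existing token; otherwise mint one with the admin key.
    if config.get('token'):
        return config['token']
    resp = requests.post(
        f'{config["wechatpad_url"]}/admin/GenAuthKey1?key={config["admin_key"]}',
        json={'Count': 1, 'Days': 365},
    )
    if resp.status_code != 200:
        raise RuntimeError(f'token request failed: {resp.text}')
    config['token'] = resp.json()['Data'][0]
    return config['token']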

View File

@@ -157,7 +157,7 @@ class WecomAdapter(adapter.MessagePlatformAdapter):
token=config['token'],
EncodingAESKey=config['EncodingAESKey'],
contacts_secret=config['contacts_secret'],
logger=self.logger,
)
async def reply_message(
@@ -201,8 +201,8 @@ class WecomAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception:
await self.logger.error(f'Error in wecom callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -145,7 +145,7 @@ class WecomCSAdapter(adapter.MessagePlatformAdapter):
secret=config['secret'],
token=config['token'],
EncodingAESKey=config['EncodingAESKey'],
logger=self.logger,
)
async def reply_message(
@@ -178,8 +178,8 @@ class WecomCSAdapter(adapter.MessagePlatformAdapter):
self.bot_account_id = event.receiver_id
try:
return await callback(await self.event_converter.target2yiri(event), self)
except Exception:
await self.logger.error(f'Error in wecomcs callback: {traceback.format_exc()}')
if event_type == platform_events.FriendMessage:
self.bot.on_message('text')(on_message)

View File

@@ -812,12 +812,14 @@ class File(MessageComponent):
def __str__(self):
return f'[文件]{self.name}'
class Face(MessageComponent):
"""系统表情
此处将超级表情骰子/划拳一同归类于face
当face_type为rps(划拳)时 face_id 对应的是手势
当face_type为dice(骰子)时 face_id 对应的是点数
"""
type: str = 'Face'
"""表情类型"""
face_type: str = 'face'
@@ -834,15 +836,15 @@ class Face(MessageComponent):
elif self.face_type == 'rps':
return f'[表情]{self.face_name}({self.rps_data(self.face_id)})'
def rps_data(self, face_id):
rps_dict = {
1: '',
2: '剪刀',
3: '石头',
}
return rps_dict[face_id]
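Quick usage sketch of the mapping above; constructing Face directly is illustrative and assumes the remaining pydantic fields have defaults:

face = Face(face_type='rps', face_id=2, face_name='猜拳')
print(str(face))  # -> [表情]猜拳(剪刀)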
# ================ Personal-WeChat-only components ================
@@ -971,5 +973,6 @@ class WeChatFile(MessageComponent):
"""文件地址"""
file_base64: str = ''
"""base64"""
def __str__(self):
return f'[文件]{self.file_name}'

View File

@@ -125,6 +125,95 @@ class Message(pydantic.BaseModel):
return platform_message.MessageChain(mc)
class MessageChunk(pydantic.BaseModel):
"""消息"""
resp_message_id: typing.Optional[str] = None
"""消息id"""
role: str # user, system, assistant, tool, command, plugin
"""消息的角色"""
name: typing.Optional[str] = None
"""名称,仅函数调用返回时设置"""
all_content: typing.Optional[str] = None
"""所有内容"""
content: typing.Optional[list[ContentElement]] | typing.Optional[str] = None
"""内容"""
tool_calls: typing.Optional[list[ToolCall]] = None
"""工具调用"""
tool_call_id: typing.Optional[str] = None
is_final: bool = False
"""是否是结束"""
msg_sequence: int = 0
"""消息迭代次数"""
def readable_str(self) -> str:
if self.content is not None:
return str(self.role) + ': ' + str(self.get_content_platform_message_chain())
elif self.tool_calls is not None:
return f'调用工具: {self.tool_calls[0].id}'
else:
return '未知消息'
def get_content_platform_message_chain(self, prefix_text: str = '') -> platform_message.MessageChain | None:
"""将内容转换为平台消息 MessageChain 对象
Args:
prefix_text (str): 首个文字组件的前缀文本
"""
if self.content is None:
return None
elif isinstance(self.content, str):
return platform_message.MessageChain([platform_message.Plain(prefix_text + self.content)])
elif isinstance(self.content, list):
mc = []
for ce in self.content:
if ce.type == 'text':
mc.append(platform_message.Plain(ce.text))
elif ce.type == 'image_url':
if ce.image_url.url.startswith('http'):
mc.append(platform_message.Image(url=ce.image_url.url))
else: # base64
b64_str = ce.image_url.url
if b64_str.startswith('data:'):
b64_str = b64_str.split(',')[1]
mc.append(platform_message.Image(base64=b64_str))
# Find the first text component
if prefix_text:
for i, c in enumerate(mc):
if isinstance(c, platform_message.Plain):
mc[i] = platform_message.Plain(prefix_text + c.text)
break
else:
mc.insert(0, platform_message.Plain(prefix_text))
return platform_message.MessageChain(mc)
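A minimal sketch of converting a streaming chunk into a platform message chain (the string-content path above; the values are made up):

chunk = MessageChunk(role='assistant', content='hello', msg_sequence=1)
chain = chunk.get_content_platform_message_chain(prefix_text='[bot] ')
# chain is a MessageChain containing Plain('[bot] hello')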
class ToolCallChunk(pydantic.BaseModel):
"""工具调用"""
id: str
"""工具调用ID"""
type: str
"""工具调用类型"""
function: FunctionCall
"""函数调用"""
class Prompt(pydantic.BaseModel):
"""供AI使用的Prompt"""

View File

@@ -17,7 +17,7 @@ class LLMModelInfo(pydantic.BaseModel):
token_mgr: token.TokenManager
requester: requester.ProviderAPIRequester
tool_call_supported: typing.Optional[bool] = False

View File

@@ -18,7 +18,7 @@ class ModelManager:
model_list: list[entities.LLMModelInfo] # deprecated
requesters: dict[str, requester.ProviderAPIRequester] # deprecated
token_mgrs: dict[str, token.TokenManager] # deprecated
@@ -28,9 +28,11 @@ class ModelManager:
llm_models: list[requester.RuntimeLLMModel]
embedding_models: list[requester.RuntimeEmbeddingModel]
requester_components: list[engine.Component]
requester_dict: dict[str, type[requester.ProviderAPIRequester]] # cache
def __init__(self, ap: app.Application):
self.ap = ap
@@ -38,6 +40,7 @@ class ModelManager:
self.requesters = {}
self.token_mgrs = {}
self.llm_models = []
self.embedding_models = []
self.requester_components = []
self.requester_dict = {}
@@ -45,7 +48,7 @@ class ModelManager:
self.requester_components = self.ap.discover.get_components_by_kind('LLMAPIRequester')
# forge requester class dict
requester_dict: dict[str, type[requester.ProviderAPIRequester]] = {}
for component in self.requester_components:
requester_dict[component.metadata.name] = component.get_python_component_class()
@@ -58,13 +61,11 @@ class ModelManager:
self.ap.logger.info('Loading models from db...')
self.llm_models = []
self.embedding_models = []
# llm models
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.LLMModel))
llm_models = result.all()
# load models
for llm_model in llm_models:
try:
await self.load_llm_model(llm_model)
@@ -73,11 +74,17 @@ class ModelManager:
except Exception as e:
self.ap.logger.error(f'Failed to load model {llm_model.uuid}: {e}\n{traceback.format_exc()}')
# embedding models
result = await self.ap.persistence_mgr.execute_async(sqlalchemy.select(persistence_model.EmbeddingModel))
embedding_models = result.all()
for embedding_model in embedding_models:
await self.load_embedding_model(embedding_model)
async def init_runtime_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""初始化运行时模型"""
"""初始化运行时 LLM 模型"""
if isinstance(model_info, sqlalchemy.Row):
model_info = persistence_model.LLMModel(**model_info._mapping)
elif isinstance(model_info, dict):
@@ -101,14 +108,47 @@ class ModelManager:
return runtime_llm_model
async def init_runtime_embedding_model(
self,
model_info: persistence_model.EmbeddingModel | sqlalchemy.Row[persistence_model.EmbeddingModel] | dict,
):
"""初始化运行时 Embedding 模型"""
if isinstance(model_info, sqlalchemy.Row):
model_info = persistence_model.EmbeddingModel(**model_info._mapping)
elif isinstance(model_info, dict):
model_info = persistence_model.EmbeddingModel(**model_info)
requester_inst = self.requester_dict[model_info.requester](ap=self.ap, config=model_info.requester_config)
await requester_inst.initialize()
runtime_embedding_model = requester.RuntimeEmbeddingModel(
model_entity=model_info,
token_mgr=token.TokenManager(
name=model_info.uuid,
tokens=model_info.api_keys,
),
requester=requester_inst,
)
return runtime_embedding_model
async def load_llm_model(
self,
model_info: persistence_model.LLMModel | sqlalchemy.Row[persistence_model.LLMModel] | dict,
):
"""加载模型"""
"""加载 LLM 模型"""
runtime_llm_model = await self.init_runtime_llm_model(model_info)
self.llm_models.append(runtime_llm_model)
async def load_embedding_model(
self,
model_info: persistence_model.EmbeddingModel | sqlalchemy.Row[persistence_model.EmbeddingModel] | dict,
):
"""加载 Embedding 模型"""
runtime_embedding_model = await self.init_runtime_embedding_model(model_info)
self.embedding_models.append(runtime_embedding_model)
async def get_model_by_name(self, name: str) -> entities.LLMModelInfo: # deprecated
"""通过名称获取模型"""
for model in self.model_list:
@@ -116,23 +156,44 @@ class ModelManager:
return model
raise ValueError(f'无法确定模型 {name} 的信息')
async def get_model_by_uuid(self, uuid: str) -> requester.RuntimeLLMModel:
"""Get an LLM model by uuid"""
for model in self.llm_models:
if model.model_entity.uuid == uuid:
return model
raise ValueError(f'LLM model {uuid} not found')
async def get_embedding_model_by_uuid(self, uuid: str) -> requester.RuntimeEmbeddingModel:
"""通过uuid获取 Embedding 模型"""
for model in self.embedding_models:
if model.model_entity.uuid == uuid:
return model
raise ValueError(f'Embedding model {uuid} not found')
async def remove_llm_model(self, model_uuid: str):
"""移除模型"""
"""移除 LLM 模型"""
for model in self.llm_models:
if model.model_entity.uuid == model_uuid:
self.llm_models.remove(model)
return
async def remove_embedding_model(self, model_uuid: str):
"""Remove an embedding model"""
for model in self.embedding_models:
if model.model_entity.uuid == model_uuid:
self.embedding_models.remove(model)
return
def get_available_requesters_info(self, model_type: str) -> list[dict]:
"""获取所有可用的请求器"""
return [component.to_plain_dict() for component in self.requester_components]
if model_type != '':
return [
component.to_plain_dict()
for component in self.requester_components
if model_type in component.spec['support_type']
]
else:
return [component.to_plain_dict() for component in self.requester_components]
def get_available_requester_info_by_name(self, name: str) -> dict | None:
"""通过名称获取请求器信息"""

View File

@@ -20,22 +20,45 @@ class RuntimeLLMModel:
token_mgr: token.TokenManager
"""api key管理器"""
requester: LLMAPIRequester
requester: ProviderAPIRequester
"""请求器实例"""
def __init__(
self,
model_entity: persistence_model.LLMModel,
token_mgr: token.TokenManager,
requester: ProviderAPIRequester,
):
self.model_entity = model_entity
self.token_mgr = token_mgr
self.requester = requester
class RuntimeEmbeddingModel:
"""Runtime embedding model"""
model_entity: persistence_model.EmbeddingModel
"""Model data"""
token_mgr: token.TokenManager
"""API key manager"""
requester: ProviderAPIRequester
"""Requester instance"""
def __init__(
self,
model_entity: persistence_model.EmbeddingModel,
token_mgr: token.TokenManager,
requester: ProviderAPIRequester,
):
self.model_entity = model_entity
self.token_mgr = token_mgr
self.requester = requester
class ProviderAPIRequester(metaclass=abc.ABCMeta):
"""Provider API请求器"""
name: str = None
@@ -61,6 +84,7 @@ class LLMAPIRequester(metaclass=abc.ABCMeta):
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
"""调用API
@@ -69,8 +93,51 @@ class LLMAPIRequester(metaclass=abc.ABCMeta):
messages (typing.List[llm_entities.Message]): list of message objects
funcs (typing.List[tools_entities.LLMFunction], optional): tool functions to use. Defaults to None.
extra_args (dict[str, typing.Any], optional): extra arguments. Defaults to {}.
remove_think (bool, optional): whether to strip reasoning content. Defaults to False.
Returns:
llm_entities.Message: the resulting message object
"""
pass
async def invoke_llm_stream(
self,
query: core_entities.Query,
model: RuntimeLLMModel,
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.MessageChunk:
"""调用API
Args:
model (RuntimeLLMModel): 使用的模型信息
messages (typing.List[llm_entities.Message]): 消息对象列表
funcs (typing.List[tools_entities.LLMFunction], optional): 使用的工具函数列表. Defaults to None.
extra_args (dict[str, typing.Any], optional): 额外的参数. Defaults to {}.
remove_think (bool, optional): 是否移除思考中的消息. Defaults to False.
Returns:
typing.AsyncGenerator[llm_entities.MessageChunk]: 返回消息对象
"""
pass
async def invoke_embedding(
self,
model: RuntimeEmbeddingModel,
input_text: list[str],
extra_args: dict[str, typing.Any] = {},
) -> list[list[float]]:
"""调用 Embedding API
Args:
query (core_entities.Query): 请求上下文
model (RuntimeEmbeddingModel): 使用的模型信息
input_text (list[str]): 输入文本
extra_args (dict[str, typing.Any], optional): 额外的参数. Defaults to {}.
Returns:
list[list[float]]: 返回的 embedding 向量
"""
pass
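A minimal concrete requester against the abstract base above, for orientation only; the echo behaviour is invented for illustration and is not part of the real API surface:

class EchoRequester(ProviderAPIRequester):
    name = 'echo'

    async def invoke_llm(
        self, query, model, messages, funcs=None, extra_args={}, remove_think=False
    ):
        # Echo the last message back as the assistant reply.
        return llm_entities.Message(role='assistant', content=str(messages[-1].content))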

View File

@@ -22,6 +22,9 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
- text-embedding
execution:
python:
path: ./302aichatcmpl.py

View File

@@ -15,13 +15,13 @@ from ...tools import entities as tools_entities
from ....utils import image
class AnthropicMessages(requester.ProviderAPIRequester):
"""Anthropic Messages API requester"""
client: anthropic.AsyncAnthropic
default_config: dict[str, typing.Any] = {
'base_url': 'https://api.anthropic.com',
'timeout': 120,
}
@@ -44,6 +44,7 @@ class AnthropicMessages(requester.LLMAPIRequester):
self.client = anthropic.AsyncAnthropic(
api_key='',
http_client=httpx_client,
base_url=self.requester_cfg['base_url'],
)
async def invoke_llm(
@@ -53,6 +54,7 @@ class AnthropicMessages(requester.LLMAPIRequester):
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
self.client.api_key = model.token_mgr.get_token()
@@ -89,7 +91,8 @@ class AnthropicMessages(requester.LLMAPIRequester):
{
'type': 'tool_result',
'tool_use_id': tool_call_id,
'is_error': False,
'content': [{'type': 'text', 'text': m.content}],
}
],
}
@@ -133,6 +136,9 @@ class AnthropicMessages(requester.LLMAPIRequester):
args['messages'] = req_messages
if 'thinking' in args:
args['thinking'] = {'type': 'enabled', 'budget_tokens': 10000}
if funcs:
tools = await self.ap.tool_mgr.generate_tools_for_anthropic(funcs)
@@ -140,19 +146,17 @@ class AnthropicMessages(requester.LLMAPIRequester):
args['tools'] = tools
try:
# print(json.dumps(args, indent=4, ensure_ascii=False))
resp = await self.client.messages.create(**args)
args = {
'content': '',
'role': resp.role,
}
assert type(resp) is anthropic.types.message.Message
for block in resp.content:
if not remove_think and block.type == 'thinking':
args['content'] = '<think>\n' + block.thinking + '\n</think>\n' + args['content']
elif block.type == 'text':
args['content'] += block.text
elif block.type == 'tool_use':
@@ -176,3 +180,191 @@ class AnthropicMessages(requester.LLMAPIRequester):
raise errors.RequesterError(f'模型无效: {e.message}')
else:
raise errors.RequesterError(f'请求地址无效: {e.message}')
async def invoke_llm_stream(
self,
query: core_entities.Query,
model: requester.RuntimeLLMModel,
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
self.client.api_key = model.token_mgr.get_token()
args = extra_args.copy()
args['model'] = model.model_entity.name
args['stream'] = True
# Process messages
# system
system_role_message = None
for i, m in enumerate(messages):
if m.role == 'system':
system_role_message = m
break
if system_role_message:
messages.pop(i)
if isinstance(system_role_message, llm_entities.Message) and isinstance(system_role_message.content, str):
args['system'] = system_role_message.content
req_messages = []
for m in messages:
if m.role == 'tool':
tool_call_id = m.tool_call_id
req_messages.append(
{
'role': 'user',
'content': [
{
'type': 'tool_result',
'tool_use_id': tool_call_id,
'is_error': False, # hard-coded to False for now
'content': [
{'type': 'text', 'text': m.content}
], # wrapped in a list to allow multiple results; other types are possible, only text for now
}
],
}
)
continue
msg_dict = m.dict(exclude_none=True)
if isinstance(m.content, str) and m.content.strip() != '':
msg_dict['content'] = [{'type': 'text', 'text': m.content}]
elif isinstance(m.content, list):
for i, ce in enumerate(m.content):
if ce.type == 'image_base64':
image_b64, image_format = await image.extract_b64_and_format(ce.image_base64)
alter_image_ele = {
'type': 'image',
'source': {
'type': 'base64',
'media_type': f'image/{image_format}',
'data': image_b64,
},
}
msg_dict['content'][i] = alter_image_ele
if isinstance(msg_dict['content'], str) and msg_dict['content'] == '':
msg_dict['content'] = [] # an empty string occasionally shows up here, leaving content as a str
if m.tool_calls:
for tool_call in m.tool_calls:
msg_dict['content'].append(
{
'type': 'tool_use',
'id': tool_call.id,
'name': tool_call.function.name,
'input': json.loads(tool_call.function.arguments),
}
)
del msg_dict['tool_calls']
req_messages.append(msg_dict)
if 'thinking' in args:
args['thinking'] = {'type': 'enabled', 'budget_tokens': 10000}
args['messages'] = req_messages
if funcs:
tools = await self.ap.tool_mgr.generate_tools_for_anthropic(funcs)
if tools:
args['tools'] = tools
try:
role = 'assistant' # default role
# chunk_idx = 0
think_started = False
think_ended = False
finish_reason = False
content = ''
tool_name = ''
tool_id = ''
async for chunk in await self.client.messages.create(**args):
tool_call = {'id': None, 'function': {'name': None, 'arguments': None}, 'type': 'function'}
if isinstance(
chunk, anthropic.types.raw_content_block_start_event.RawContentBlockStartEvent
): # marks the start of a content block
if chunk.content_block.type == 'tool_use':
if chunk.content_block.name is not None:
tool_name = chunk.content_block.name
if chunk.content_block.id is not None:
tool_id = chunk.content_block.id
tool_call['function']['name'] = tool_name
tool_call['function']['arguments'] = ''
tool_call['id'] = tool_id
if not remove_think:
if chunk.content_block.type == 'thinking':
think_started = True
elif chunk.content_block.type == 'text' and chunk.index != 0:
think_ended = True
continue
elif isinstance(chunk, anthropic.types.raw_content_block_delta_event.RawContentBlockDeltaEvent):
if chunk.delta.type == 'thinking_delta':
if think_started:
think_started = False
content = '<think>\n' + chunk.delta.thinking
elif remove_think:
continue
else:
content = chunk.delta.thinking
elif chunk.delta.type == 'text_delta':
if think_ended:
think_ended = False
content = '\n</think>\n' + chunk.delta.text
else:
content = chunk.delta.text
elif chunk.delta.type == 'input_json_delta':
tool_call['function']['arguments'] = chunk.delta.partial_json
tool_call['function']['name'] = tool_name
tool_call['id'] = tool_id
elif isinstance(chunk, anthropic.types.raw_content_block_stop_event.RawContentBlockStopEvent):
continue # marks the end of a raw_content_block
elif isinstance(chunk, anthropic.types.raw_message_delta_event.RawMessageDeltaEvent):
if chunk.delta.stop_reason == 'end_turn':
finish_reason = True
elif isinstance(chunk, anthropic.types.raw_message_stop_event.RawMessageStopEvent):
continue # appears to signal the end of the whole message
else:
# print(chunk)
self.ap.logger.debug(f'anthropic chunk: {chunk}')
continue
args = {
'content': content,
'role': role,
'is_final': finish_reason,
'tool_calls': None if tool_call['id'] is None else [tool_call],
}
# if chunk_idx == 0:
# chunk_idx += 1
# continue
# assert type(chunk) is anthropic.types.message.Chunk
yield llm_entities.MessageChunk(**args)
# return llm_entities.Message(**args)
except anthropic.AuthenticationError as e:
raise errors.RequesterError(f'api-key 无效: {e.message}')
except anthropic.BadRequestError as e:
raise errors.RequesterError(str(e.message))
except anthropic.NotFoundError as e:
if 'model: ' in str(e):
raise errors.RequesterError(f'模型无效: {e.message}')
else:
raise errors.RequesterError(f'请求地址无效: {e.message}')

View File

@@ -14,7 +14,7 @@ spec:
zh_Hans: 基础 URL
type: string
required: true
default: "https://api.anthropic.com/v1"
default: "https://api.anthropic.com"
- name: timeout
label:
en_US: Timeout
@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./anthropicmsgs.py

View File

@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./bailianchatcmpl.py

View File

@@ -13,7 +13,7 @@ from ... import entities as llm_entities
from ...tools import entities as tools_entities
class OpenAIChatCompletions(requester.ProviderAPIRequester):
"""OpenAI ChatCompletion API requester"""
client: openai.AsyncClient
@@ -38,9 +38,18 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
) -> chat_completion.ChatCompletion:
return await self.client.chat.completions.create(**args, extra_body=extra_body)
async def _req_stream(
self,
args: dict,
extra_body: dict = {},
):
async for chunk in await self.client.chat.completions.create(**args, extra_body=extra_body):
yield chunk
async def _make_msg(
self,
chat_completion: chat_completion.ChatCompletion,
remove_think: bool = False,
) -> llm_entities.Message:
chatcmpl_message = chat_completion.choices[0].message.model_dump()
@@ -48,16 +57,192 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
chatcmpl_message['role'] = 'assistant'
# Handle chain-of-thought content
content = chatcmpl_message.get('content', '')
reasoning_content = chatcmpl_message.get('reasoning_content', None)
processed_content, _ = await self._process_thinking_content(
content=content, reasoning_content=reasoning_content, remove_think=remove_think
)
chatcmpl_message['content'] = processed_content
# Drop the reasoning_content field so it is not passed to Message
if 'reasoning_content' in chatcmpl_message:
del chatcmpl_message['reasoning_content']
message = llm_entities.Message(**chatcmpl_message)
return message
async def _process_thinking_content(
self,
content: str,
reasoning_content: str = None,
remove_think: bool = False,
) -> tuple[str, str]:
"""处理思维链内容
Args:
content: 原始内容
reasoning_content: reasoning_content 字段内容
remove_think: 是否移除思维链
Returns:
(处理后的内容, 提取的思维链内容)
"""
thinking_content = ''
# 1. Extract the chain of thought from reasoning_content
if reasoning_content:
thinking_content = reasoning_content
# 2. Extract <think> tag content from content
if content and '<think>' in content and '</think>' in content:
import re
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, content, re.DOTALL)
if think_matches:
# Append if reasoning_content already exists
if thinking_content:
thinking_content += '\n' + '\n'.join(think_matches)
else:
thinking_content = '\n'.join(think_matches)
# Strip the <think> tags from content
content = re.sub(think_pattern, '', content, flags=re.DOTALL).strip()
# 3. Decide whether to keep the chain of thought based on remove_think
if remove_think:
return content, ''
else:
# If there is chain-of-thought content, prepend it to content wrapped in <think> tags
if thinking_content:
content = f'<think>\n{thinking_content}\n</think>\n{content}'.strip()
return content, thinking_content
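Behaviour sketch for the helper above; the literal values are made up:

# remove_think=False folds the reasoning into the visible content:
content, think = await self._process_thinking_content(
    content='answer', reasoning_content='step 1...', remove_think=False
)
# content == '<think>\nstep 1...\n</think>\nanswer', think == 'step 1...'
# remove_think=True strips it entirely:
content, think = await self._process_thinking_content(
    content='<think>step 1...</think>answer', remove_think=True
)
# content == 'answer', think == ''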
async def _closure_stream(
self,
query: core_entities.Query,
req_messages: list[dict],
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.MessageChunk:
self.client.api_key = use_model.token_mgr.get_token()
args = {}
args['model'] = use_model.model_entity.name
if use_funcs:
tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)
if tools:
args['tools'] = tools
# Set the messages for this request
messages = req_messages.copy()
# Check for vision content
for msg in messages:
if 'content' in msg and isinstance(msg['content'], list):
for me in msg['content']:
if me['type'] == 'image_base64':
me['image_url'] = {'url': me['image_base64']}
me['type'] = 'image_url'
del me['image_base64']
args['messages'] = messages
args['stream'] = True
# Streaming state
tool_calls_map: dict[str, llm_entities.ToolCall] = {}
chunk_idx = 0
thinking_started = False
thinking_ended = False
role = 'assistant' # default role
tool_id = ''
tool_name = ''
# accumulated_reasoning = '' # only used to decide when the chain of thought ends
async for chunk in self._req_stream(args, extra_body=extra_args):
# Parse the chunk
if hasattr(chunk, 'choices') and chunk.choices:
choice = chunk.choices[0]
delta = choice.delta.model_dump() if hasattr(choice, 'delta') else {}
finish_reason = getattr(choice, 'finish_reason', None)
else:
delta = {}
finish_reason = None
# Take role from the first chunk and reuse it afterwards
if 'role' in delta and delta['role']:
role = delta['role']
# Get the incremental content
delta_content = delta.get('content', '')
reasoning_content = delta.get('reasoning_content', '')
# Handle reasoning_content
if reasoning_content:
# accumulated_reasoning += reasoning_content
# If remove_think is set, skip reasoning_content
if remove_think:
chunk_idx += 1
continue
# On the first reasoning_content, emit the opening <think> tag
if not thinking_started:
thinking_started = True
delta_content = '<think>\n' + reasoning_content
else:
# Keep streaming reasoning_content
delta_content = reasoning_content
elif thinking_started and not thinking_ended and delta_content:
# reasoning_content ended and normal content began; emit the closing </think> tag
thinking_ended = True
delta_content = '\n</think>\n' + delta_content
# Handle <think> tags already present in content (if removal were needed)
# if delta_content and remove_think and '<think>' in delta_content:
# import re
#
# # strip <think> tags and their contents
# delta_content = re.sub(r'<think>.*?</think>', '', delta_content, flags=re.DOTALL)
# Handle incremental tool calls
# delta_tool_calls = None
if delta.get('tool_calls'):
for tool_call in delta['tool_calls']:
if tool_call['id'] and tool_call['function']['name']:
tool_id = tool_call['id']
tool_name = tool_call['function']['name']
else:
tool_call['id'] = tool_id
tool_call['function']['name'] = tool_name
if tool_call['type'] is None:
tool_call['type'] = 'function'
# Skip the empty first chunk (role only, no content)
if chunk_idx == 0 and not delta_content and not reasoning_content and not delta.get('tool_calls'):
chunk_idx += 1
continue
# Build the MessageChunk - incremental content only
chunk_data = {
'role': role,
'content': delta_content if delta_content else None,
'tool_calls': delta.get('tool_calls'),
'is_final': bool(finish_reason),
}
# Drop None values
chunk_data = {k: v for k, v in chunk_data.items() if v is not None}
yield llm_entities.MessageChunk(**chunk_data)
chunk_idx += 1
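Consumption sketch for the generator above; query, model and messages are assumed to come from the surrounding pipeline:

parts: list[str] = []
async for chunk in requester.invoke_llm_stream(
    query=query, model=model, messages=messages, remove_think=True
):
    if chunk.content:
        parts.append(chunk.content)  # each chunk carries only the delta
    if chunk.is_final:
        break
full_reply = ''.join(parts)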
async def _closure(
self,
query: core_entities.Query,
@@ -65,6 +250,7 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
@@ -92,10 +278,10 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
args['messages'] = messages
# Send the request
resp = await self._req(args, extra_body=extra_args)
# Process the result
message = await self._make_msg(resp, remove_think)
return message
@@ -106,6 +292,7 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
req_messages = [] # req_messages is only used within this class; external state is synced via query.messages
for m in messages:
@@ -119,13 +306,90 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
req_messages.append(msg_dict)
try:
msg = await self._closure(
query=query,
req_messages=req_messages,
use_model=model,
use_funcs=funcs,
extra_args=extra_args,
remove_think=remove_think,
)
return msg
except asyncio.TimeoutError:
raise errors.RequesterError('请求超时')
except openai.BadRequestError as e:
if 'context_length_exceeded' in e.message:
raise errors.RequesterError(f'上文过长,请重置会话: {e.message}')
else:
raise errors.RequesterError(f'请求参数错误: {e.message}')
except openai.AuthenticationError as e:
raise errors.RequesterError(f'无效的 api-key: {e.message}')
except openai.NotFoundError as e:
raise errors.RequesterError(f'请求路径错误: {e.message}')
except openai.RateLimitError as e:
raise errors.RequesterError(f'请求过于频繁或余额不足: {e.message}')
except openai.APIError as e:
raise errors.RequesterError(f'请求错误: {e.message}')
async def invoke_embedding(
self,
model: requester.RuntimeEmbeddingModel,
input_text: list[str],
extra_args: dict[str, typing.Any] = {},
) -> list[list[float]]:
"""调用 Embedding API"""
self.client.api_key = model.token_mgr.get_token()
args = {
'model': model.model_entity.name,
'input': input_text,
}
if model.model_entity.extra_args:
args.update(model.model_entity.extra_args)
args.update(extra_args)
try:
resp = await self.client.embeddings.create(**args)
return [d.embedding for d in resp.data]
except asyncio.TimeoutError:
raise errors.RequesterError('请求超时')
except openai.BadRequestError as e:
raise errors.RequesterError(f'请求参数错误: {e.message}')
async def invoke_llm_stream(
self,
query: core_entities.Query,
model: requester.RuntimeLLMModel,
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.MessageChunk:
req_messages = [] # req_messages is only used within this class; external state is synced via query.messages
for m in messages:
msg_dict = m.dict(exclude_none=True)
content = msg_dict.get('content')
if isinstance(content, list):
# Check whether every part of the content list is text
if all(isinstance(part, dict) and part.get('type') == 'text' for part in content):
# Merge all text parts into a single string
msg_dict['content'] = '\n'.join(part['text'] for part in content)
req_messages.append(msg_dict)
try:
async for item in self._closure_stream(
query=query,
req_messages=req_messages,
use_model=model,
use_funcs=funcs,
extra_args=extra_args,
remove_think=remove_think,
):
yield item
except asyncio.TimeoutError:
raise errors.RequesterError('请求超时')
except openai.BadRequestError as e:

View File

@@ -22,6 +22,9 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
- text-embedding
execution:
python:
path: ./chatcmpl.py

View File

@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./compsharechatcmpl.py

View File

@@ -24,6 +24,7 @@ class DeepseekChatCompletions(chatcmpl.OpenAIChatCompletions):
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
@@ -49,10 +50,11 @@ class DeepseekChatCompletions(chatcmpl.OpenAIChatCompletions):
# Send the request
resp = await self._req(args, extra_body=extra_args)
# print(resp)
if resp is None:
raise errors.RequesterError('接口返回为空,请确定模型提供商服务是否正常')
# Process the result
message = await self._make_msg(resp, remove_think)
return message

View File

@@ -4,7 +4,7 @@ metadata:
name: deepseek-chat-completions
label:
en_US: DeepSeek
zh_Hans: DeepSeek
icon: deepseek.svg
spec:
config:
@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./deepseekchatcmpl.py

View File

@@ -4,6 +4,13 @@ import typing
from . import chatcmpl
import uuid
from .. import errors, requester
from ....core import entities as core_entities
from ... import entities as llm_entities
from ...tools import entities as tools_entities
class GeminiChatCompletions(chatcmpl.OpenAIChatCompletions):
"""Google Gemini API 请求器"""
@@ -12,3 +19,127 @@ class GeminiChatCompletions(chatcmpl.OpenAIChatCompletions):
'base_url': 'https://generativelanguage.googleapis.com/v1beta/openai',
'timeout': 120,
}
async def _closure_stream(
self,
query: core_entities.Query,
req_messages: list[dict],
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.MessageChunk:
self.client.api_key = use_model.token_mgr.get_token()
args = {}
args['model'] = use_model.model_entity.name
if use_funcs:
tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)
if tools:
args['tools'] = tools
# Set the messages for this request
messages = req_messages.copy()
# Check for vision content
for msg in messages:
if 'content' in msg and isinstance(msg['content'], list):
for me in msg['content']:
if me['type'] == 'image_base64':
me['image_url'] = {'url': me['image_base64']}
me['type'] = 'image_url'
del me['image_base64']
args['messages'] = messages
args['stream'] = True
# Streaming state
tool_calls_map: dict[str, llm_entities.ToolCall] = {}
chunk_idx = 0
thinking_started = False
thinking_ended = False
role = 'assistant' # default role
tool_id = ''
tool_name = ''
# accumulated_reasoning = '' # only used to decide when the chain of thought ends
async for chunk in self._req_stream(args, extra_body=extra_args):
# Parse the chunk
if hasattr(chunk, 'choices') and chunk.choices:
choice = chunk.choices[0]
delta = choice.delta.model_dump() if hasattr(choice, 'delta') else {}
finish_reason = getattr(choice, 'finish_reason', None)
else:
delta = {}
finish_reason = None
# Take role from the first chunk and reuse it afterwards
if 'role' in delta and delta['role']:
role = delta['role']
# Get the incremental content
delta_content = delta.get('content', '')
reasoning_content = delta.get('reasoning_content', '')
# Handle reasoning_content
if reasoning_content:
# accumulated_reasoning += reasoning_content
# If remove_think is set, skip reasoning_content
if remove_think:
chunk_idx += 1
continue
# On the first reasoning_content, emit the opening <think> tag
if not thinking_started:
thinking_started = True
delta_content = '<think>\n' + reasoning_content
else:
# Keep streaming reasoning_content
delta_content = reasoning_content
elif thinking_started and not thinking_ended and delta_content:
# reasoning_content ended and normal content began; emit the closing </think> tag
thinking_ended = True
delta_content = '\n</think>\n' + delta_content
# Handle <think> tags already present in content (if removal were needed)
# if delta_content and remove_think and '<think>' in delta_content:
# import re
#
# # strip <think> tags and their contents
# delta_content = re.sub(r'<think>.*?</think>', '', delta_content, flags=re.DOTALL)
# Handle incremental tool calls
# delta_tool_calls = None
if delta.get('tool_calls'):
for tool_call in delta['tool_calls']:
if tool_call['id'] == '' and tool_id == '':
tool_id = str(uuid.uuid4())
if tool_call['function']['name']:
tool_name = tool_call['function']['name']
tool_call['id'] = tool_id
tool_call['function']['name'] = tool_name
if tool_call['type'] is None:
tool_call['type'] = 'function'
# Skip the empty first chunk (role only, no content)
if chunk_idx == 0 and not delta_content and not reasoning_content and not delta.get('tool_calls'):
chunk_idx += 1
continue
# Build the MessageChunk - incremental content only
chunk_data = {
'role': role,
'content': delta_content if delta_content else None,
'tool_calls': delta.get('tool_calls'),
'is_final': bool(finish_reason),
}
# Drop None values
chunk_data = {k: v for k, v in chunk_data.items() if v is not None}
yield llm_entities.MessageChunk(**chunk_data)
chunk_idx += 1
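The only material difference from the base-class stream handler is the id backfill: Gemini can stream tool_calls whose id is empty, so a uuid is minted once and reused for every fragment of the same call. A standalone sketch of that pattern (the fragment dicts are made up):

import uuid

def backfill_tool_ids(fragments: list[dict]) -> list[dict]:
    # Mint one uuid for the first id-less fragment and reuse it afterwards,
    # mirroring the tool_id handling in the loop above.
    tool_id = ''
    for tc in fragments:
        if tc['id'] == '' and tool_id == '':
            tool_id = str(uuid.uuid4())
        tc['id'] = tc['id'] or tool_id
    return fragments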

View File

@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./geminichatcmpl.py

View File

@@ -3,14 +3,16 @@ from __future__ import annotations
import typing
from . import chatcmpl
from . import ppiochatcmpl
from .. import requester
from ....core import entities as core_entities
from ... import entities as llm_entities
from ...tools import entities as tools_entities
import re
import openai.types.chat.chat_completion as chat_completion
class GiteeAIChatCompletions(ppiochatcmpl.PPIOChatCompletions):
"""Gitee AI ChatCompletions API requester"""
default_config: dict[str, typing.Any] = {
@@ -18,34 +20,3 @@ class GiteeAIChatCompletions(chatcmpl.OpenAIChatCompletions):
'timeout': 120,
}
async def _closure(
self,
query: core_entities.Query,
req_messages: list[dict],
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
args = {}
args['model'] = use_model.model_entity.name
if use_funcs:
tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)
if tools:
args['tools'] = tools
# Gitee does not support multimodal input; flatten content to plain text
for m in req_messages:
if 'content' in m and isinstance(m['content'], list):
m['content'] = ' '.join([c['text'] for c in m['content']])
args['messages'] = req_messages
resp = await self._req(args, extra_body=extra_args)
message = await self._make_msg(resp)
return message

View File

@@ -22,6 +22,9 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
- text-embedding
execution:
python:
path: ./giteeaichatcmpl.py

View File

@@ -22,6 +22,9 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
- text-embedding
execution:
python:
path: ./lmstudiochatcmpl.py

View File

@@ -1,6 +1,7 @@
from __future__ import annotations
import asyncio
import json
import typing
import openai
@@ -14,7 +15,7 @@ from ... import entities as llm_entities
from ...tools import entities as tools_entities
class ModelScopeChatCompletions(requester.ProviderAPIRequester):
"""ModelScope ChatCompletion API requester"""
client: openai.AsyncClient
@@ -34,9 +35,11 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
async def _req(
self,
query: core_entities.Query,
args: dict,
extra_body: dict = {},
remove_think: bool = False,
) -> list[dict[str, typing.Any]]:
args['stream'] = True
chunk = None
@@ -47,78 +50,75 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
resp_gen: openai.AsyncStream = await self.client.chat.completions.create(**args, extra_body=extra_body)
chunk_idx = 0
thinking_started = False
thinking_ended = False
tool_id = ''
tool_name = ''
message_delta = {}
async for chunk in resp_gen:
# print(chunk)
if not chunk or not chunk.id or not chunk.choices or not chunk.choices[0] or not chunk.choices[0].delta:
continue
delta = chunk.choices[0].delta.model_dump() if hasattr(chunk.choices[0], 'delta') else {}
reasoning_content = delta.get('reasoning_content')
# Handle reasoning_content
if reasoning_content:
# accumulated_reasoning += reasoning_content
# If remove_think is set, skip reasoning_content
if remove_think:
chunk_idx += 1
continue
# On the first reasoning_content, emit the opening <think> tag
if not thinking_started:
thinking_started = True
pending_content += '<think>\n' + reasoning_content
else:
# Keep streaming reasoning_content
pending_content += reasoning_content
elif thinking_started and not thinking_ended and delta.get('content'):
# reasoning_content ended and normal content began; emit the closing </think> tag
thinking_ended = True
pending_content += '\n</think>\n' + delta.get('content')
if delta.get('content') is not None:
pending_content += delta.get('content')
if delta.get('tool_calls') is not None:
for tool_call in delta.get('tool_calls'):
if tool_call['id'] != '':
tool_id = tool_call['id']
if tool_call['function']['name'] is not None:
tool_name = tool_call['function']['name']
if tool_call['function']['arguments'] is None:
continue
tool_call['id'] = tool_id
tool_call['name'] = tool_name
for tc in tool_calls:
if tc['index'] == tool_call['index']:
tc['function']['arguments'] += tool_call['function']['arguments']
break
else:
tool_calls.append(tool_call)
if chunk.choices[0].finish_reason is not None:
break
message_delta['content'] = pending_content
message_delta['role'] = 'assistant'
message_delta['tool_calls'] = tool_calls if tool_calls else None
return [message_delta]
async def _make_msg(
self,
chat_completion: list[dict[str, typing.Any]],
) -> llm_entities.Message:
chatcmpl_message = chat_completion[0]
# Ensure the role field exists and is not None
if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
chatcmpl_message['role'] = 'assistant'
message = llm_entities.Message(**chatcmpl_message)
return message
@@ -130,6 +130,7 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
@@ -157,13 +158,146 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
args['messages'] = messages
# Send the request
resp = await self._req(query, args, extra_body=extra_args, remove_think=remove_think)
# Process the result
message = await self._make_msg(resp)
return message
async def _req_stream(
self,
args: dict,
extra_body: dict = {},
) -> chat_completion.ChatCompletion:
async for chunk in await self.client.chat.completions.create(**args, extra_body=extra_body):
yield chunk
async def _closure_stream(
self,
query: core_entities.Query,
req_messages: list[dict],
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message | typing.AsyncGenerator[llm_entities.MessageChunk, None]:
self.client.api_key = use_model.token_mgr.get_token()
args = {}
args['model'] = use_model.model_entity.name
if use_funcs:
tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)
if tools:
args['tools'] = tools
# Set the messages for this request
messages = req_messages.copy()
# Check for vision content
for msg in messages:
if 'content' in msg and isinstance(msg['content'], list):
for me in msg['content']:
if me['type'] == 'image_base64':
me['image_url'] = {'url': me['image_base64']}
me['type'] = 'image_url'
del me['image_base64']
args['messages'] = messages
args['stream'] = True
# Streaming state
tool_calls_map: dict[str, llm_entities.ToolCall] = {}
chunk_idx = 0
thinking_started = False
thinking_ended = False
role = 'assistant' # default role
# accumulated_reasoning = '' # only used to decide when the chain of thought ends
async for chunk in self._req_stream(args, extra_body=extra_args):
# Parse the chunk
if hasattr(chunk, 'choices') and chunk.choices:
choice = chunk.choices[0]
delta = choice.delta.model_dump() if hasattr(choice, 'delta') else {}
finish_reason = getattr(choice, 'finish_reason', None)
else:
delta = {}
finish_reason = None
# Take role from the first chunk and reuse it afterwards
if 'role' in delta and delta['role']:
role = delta['role']
# Get the incremental content
delta_content = delta.get('content', '')
reasoning_content = delta.get('reasoning_content', '')
# Handle reasoning_content
if reasoning_content:
# accumulated_reasoning += reasoning_content
# If remove_think is set, skip reasoning_content
if remove_think:
chunk_idx += 1
continue
# On the first reasoning_content, emit the opening <think> tag
if not thinking_started:
thinking_started = True
delta_content = '<think>\n' + reasoning_content
else:
# Keep streaming reasoning_content
delta_content = reasoning_content
elif thinking_started and not thinking_ended and delta_content:
# reasoning_content ended and normal content began; emit the closing </think> tag
thinking_ended = True
delta_content = '\n</think>\n' + delta_content
# Handle <think> tags already present in content (if removal were needed)
# if delta_content and remove_think and '<think>' in delta_content:
# import re
#
# # strip <think> tags and their contents
# delta_content = re.sub(r'<think>.*?</think>', '', delta_content, flags=re.DOTALL)
# Handle incremental tool calls
if delta.get('tool_calls'):
for tool_call in delta['tool_calls']:
if tool_call['id'] != '':
tool_id = tool_call['id']
if tool_call['function']['name'] is not None:
tool_name = tool_call['function']['name']
if tool_call['type'] is None:
tool_call['type'] = 'function'
tool_call['id'] = tool_id
tool_call['function']['name'] = tool_name
tool_call['function']['arguments'] = "" if tool_call['function']['arguments'] is None else tool_call['function']['arguments']
# Skip the empty first chunk (role only, no content)
if chunk_idx == 0 and not delta_content and not reasoning_content and not delta.get('tool_calls'):
chunk_idx += 1
continue
# Build the MessageChunk - incremental content only
chunk_data = {
'role': role,
'content': delta_content if delta_content else None,
'tool_calls': delta.get('tool_calls'),
'is_final': bool(finish_reason),
}
# Drop None values
chunk_data = {k: v for k, v in chunk_data.items() if v is not None}
yield llm_entities.MessageChunk(**chunk_data)
chunk_idx += 1
# return
async def invoke_llm(
self,
query: core_entities.Query,
@@ -171,6 +305,7 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
req_messages = [] # req_messages is only used within this class; external state is synced via query.messages
for m in messages:
@@ -185,7 +320,7 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
try:
return await self._closure(
query=query, req_messages=req_messages, use_model=model, use_funcs=funcs, extra_args=extra_args, remove_think=remove_think
)
except asyncio.TimeoutError:
raise errors.RequesterError('请求超时')
@@ -202,3 +337,50 @@ class ModelScopeChatCompletions(requester.LLMAPIRequester):
raise errors.RequesterError(f'请求过于频繁或余额不足: {e.message}')
except openai.APIError as e:
raise errors.RequesterError(f'请求错误: {e.message}')
async def invoke_llm_stream(
self,
query: core_entities.Query,
model: requester.RuntimeLLMModel,
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.MessageChunk:
req_messages = [] # req_messages is only used within this class; external state is synced via query.messages
for m in messages:
msg_dict = m.dict(exclude_none=True)
content = msg_dict.get('content')
if isinstance(content, list):
# Check whether every part of the content list is text
if all(isinstance(part, dict) and part.get('type') == 'text' for part in content):
# Merge all text parts into a single string
msg_dict['content'] = '\n'.join(part['text'] for part in content)
req_messages.append(msg_dict)
try:
async for item in self._closure_stream(
query=query,
req_messages=req_messages,
use_model=model,
use_funcs=funcs,
extra_args=extra_args,
remove_think=remove_think,
):
yield item
except asyncio.TimeoutError:
raise errors.RequesterError('请求超时')
except openai.BadRequestError as e:
if 'context_length_exceeded' in e.message:
raise errors.RequesterError(f'上文过长,请重置会话: {e.message}')
else:
raise errors.RequesterError(f'请求参数错误: {e.message}')
except openai.AuthenticationError as e:
raise errors.RequesterError(f'无效的 api-key: {e.message}')
except openai.NotFoundError as e:
raise errors.RequesterError(f'请求路径错误: {e.message}')
except openai.RateLimitError as e:
raise errors.RequesterError(f'请求过于频繁或余额不足: {e.message}')
except openai.APIError as e:
raise errors.RequesterError(f'请求错误: {e.message}')

View File

@@ -29,6 +29,8 @@ spec:
type: int
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./modelscopechatcmpl.py

View File

@@ -25,6 +25,7 @@ class MoonshotChatCompletions(chatcmpl.OpenAIChatCompletions):
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
@@ -54,6 +55,6 @@ class MoonshotChatCompletions(chatcmpl.OpenAIChatCompletions):
resp = await self._req(args, extra_body=extra_args)
# Process the result
message = await self._make_msg(resp, remove_think)
return message

View File

@@ -14,7 +14,7 @@ spec:
zh_Hans: 基础 URL
type: string
required: true
default: "https://api.moonshot.com/v1"
default: "https://api.moonshot.ai/v1"
- name: timeout
label:
en_US: Timeout
@@ -22,6 +22,8 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
execution:
python:
path: ./moonshotchatcmpl.py

Binary file not shown (image, 9.4 KiB).

View File

@@ -0,0 +1,17 @@
from __future__ import annotations
import typing
import openai
from . import chatcmpl
class NewAPIChatCompletions(chatcmpl.OpenAIChatCompletions):
"""New API ChatCompletion API 请求器"""
client: openai.AsyncClient
default_config: dict[str, typing.Any] = {
'base_url': 'http://localhost:3000/v1',
'timeout': 120,
}

View File

@@ -0,0 +1,31 @@
apiVersion: v1
kind: LLMAPIRequester
metadata:
name: new-api-chat-completions
label:
en_US: New API
zh_Hans: New API
icon: newapi.png
spec:
config:
- name: base_url
label:
en_US: Base URL
zh_Hans: 基础 URL
type: string
required: true
default: "http://localhost:3000/v1"
- name: timeout
label:
en_US: Timeout
zh_Hans: 超时时间
type: integer
required: true
default: 120
support_type:
- llm
- text-embedding
execution:
python:
path: ./newapichatcmpl.py
attr: NewAPIChatCompletions

View File

@@ -17,7 +17,7 @@ from ....core import entities as core_entities
REQUESTER_NAME: str = 'ollama-chat'
class OllamaChatCompletions(requester.ProviderAPIRequester):
"""Ollama ChatCompletion API requester"""
client: ollama.AsyncClient
@@ -44,6 +44,7 @@ class OllamaChatCompletions(requester.LLMAPIRequester):
use_model: requester.RuntimeLLMModel,
use_funcs: list[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
args = extra_args.copy()
args['model'] = use_model.model_entity.name
@@ -110,6 +111,7 @@ class OllamaChatCompletions(requester.LLMAPIRequester):
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
extra_args: dict[str, typing.Any] = {},
remove_think: bool = False,
) -> llm_entities.Message:
req_messages: list = []
for m in messages:
@@ -126,6 +128,19 @@ class OllamaChatCompletions(requester.LLMAPIRequester):
use_model=model,
use_funcs=funcs,
extra_args=extra_args,
remove_think=remove_think,
)
except asyncio.TimeoutError:
raise errors.RequesterError('请求超时')
async def invoke_embedding(
self,
model: requester.RuntimeEmbeddingModel,
input_text: list[str],
extra_args: dict[str, typing.Any] = {},
) -> list[list[float]]:
return (await self.client.embed(
model=model.model_entity.name,
input=input_text,
**extra_args,
)).embeddings
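Usage sketch for the new embedding path; ollama_requester and embedding_model stand in for an initialized OllamaChatCompletions instance and a loaded RuntimeEmbeddingModel:

vectors = await ollama_requester.invoke_embedding(
    model=embedding_model,
    input_text=['hello world', '你好'],
)
assert len(vectors) == 2  # one vector per input string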

View File

@@ -22,6 +22,9 @@ spec:
type: integer
required: true
default: 120
support_type:
- llm
- text-embedding
execution:
python:
path: ./ollamachat.py

Some files were not shown because too many files have changed in this diff.