Compare commits


191 Commits

Author SHA1 Message Date
RockChinQ
b0cca0a4c2 Release v2.5.1 2023-07-31 18:12:59 +08:00
Junyan Qin
a2bda85a9c Merge pull request #523 from RockChinQ/feat-prompt-preprocess-event
[Feat] Add PromptPreprocessing event
2023-07-31 17:55:06 +08:00
RockChinQ
20677cff86 doc(wiki): add version assertion notes to the plugin development page 2023-07-31 17:53:33 +08:00
RockChinQ
c8af5d8445 feat: add version assertion function require_ver 2023-07-31 17:46:30 +08:00
RockChinQ
2dbe984539 doc(wiki): add event documentation to the wiki 2023-07-31 17:27:28 +08:00
RockChinQ
6b8fa664f1 feat: add PromptPreprocessing event 2023-07-31 17:21:09 +08:00
RockChinQ
2b9612e933 chore: commit some test files 2023-07-31 16:24:39 +08:00
RockChinQ
749d0219fb chore: remove deprecated modules 2023-07-31 16:23:31 +08:00
Junyan Qin
a11a152bd7 ci: fix abnormal exit in sync-wiki.yml 2023-07-31 15:41:37 +08:00
Junyan Qin
fc803a3742 Merge pull request #522 from RockChinQ/feat-generating-stop-case
[Feat] Add !continue command
2023-07-31 15:34:30 +08:00
GitHub Actions Bot
13a1e15f24 Update cmdpriv-template.json 2023-07-31 07:24:14 +00:00
RockChinQ
3f41b94da5 feat: improve command documentation 2023-07-31 15:23:42 +08:00
RockChinQ
0fb5bfda20 ci: add tiktoken dependency 2023-07-31 15:20:23 +08:00
RockChinQ
dc1fd73ebb feat: add continue command 2023-07-31 15:17:49 +08:00
Junyan Qin
161b694f71 Merge pull request #521 from RockChinQ/fix-usage-not-reported
[Fix] text usage not reported
2023-07-31 14:31:48 +08:00
RockChinQ
45d1c89e45 fix: text usage not reported 2023-07-31 14:28:48 +08:00
Junyan Qin
e26664aa51 Merge pull request #520 from RockChinQ/feat-accurately-calculate-tokens
feat: use tiktoken to count tokens
2023-07-31 12:16:10 +08:00
RockChinQ
e29691efbd feat: use tiktoken to count tokens 2023-07-31 11:59:22 +08:00
RockChinQ
6d45327882 debug: add response-data debug info at the API layer 2023-07-31 10:37:45 +08:00
RockChinQ
fbd41eef49 chore: remove devcontainer.json 2023-07-31 10:37:14 +08:00
Junyan Qin
0a30c88322 doc(README.md): plugin list 2023-07-31 00:07:39 +08:00
Junyan Qin
4f5af0e8c8 Merge pull request #518 from RockChinQ/fix-cannot-disable-funcs-dynamically
[Fix] plugin enable/disable commands have no effect on content functions
2023-07-30 23:56:01 +08:00
RockChinQ
df3f0fd159 fix: plugin enable/disable commands have no effect on content functions 2023-07-30 23:54:56 +08:00
RockChinQ
f2493c79dd doc(wiki): add example prompts for the web-access content function 2023-07-29 19:34:47 +08:00
RockChinQ
a86a035b6b doc: update README.md 2023-07-29 19:26:28 +08:00
RockChinQ
7995793bfd doc(wiki): add content functions page 2023-07-29 19:24:56 +08:00
RockChinQ
a56b340646 Release v2.5.0 2023-07-29 18:59:25 +08:00
Junyan Qin
7473cdfe16 Merge pull request #513 from RockChinQ/feat-function-calling-integration
[Feat] Support GPT function calling
2023-07-29 18:57:29 +08:00
RockChinQ
24273ac158 doc: add content function material to README 2023-07-29 18:55:18 +08:00
RockChinQ
fe6275000e doc(wiki): update the wiki plugin page 2023-07-29 18:40:49 +08:00
RockChinQ
5fbf369f82 doc(wiki): update the plugin page 2023-07-29 18:37:03 +08:00
Junyan Qin
4400475ffa chore: add Webwlkr plugin example 2023-07-29 17:41:56 +08:00
GitHub Actions Bot
796eb7c95d Update cmdpriv-template.json 2023-07-29 09:30:22 +00:00
RockChinQ
89a01378e7 ci: run workflow 2023-07-29 17:29:52 +08:00
RockChinQ
f4735e5e30 ci(cmd_priv): add CallingGPT dependency 2023-07-29 17:28:11 +08:00
RockChinQ
f1bb3045aa feat: add func command 2023-07-29 17:26:07 +08:00
RockChinQ
96e474a555 feat: plugin toggle now applies to its content functions 2023-07-29 17:10:47 +08:00
RockChinQ
833d29b101 typo: enable->enabled 2023-07-29 16:55:01 +08:00
RockChinQ
dce6734ba2 feat: recommend registering content functions with the func() decorator 2023-07-29 16:51:19 +08:00
RockChinQ
0481167dc6 feat: set openai.proxy during the start flow 2023-07-29 16:36:31 +08:00
RockChinQ
a002f93f7b chore: remove outdated code 2023-07-29 16:30:09 +08:00
RockChinQ
3c894fe70e feat: function toggle support for chat_completion 2023-07-29 16:29:16 +08:00
RockChinQ
8c69b8a1d9 feat: global toggle support for content functions 2023-07-29 16:28:18 +08:00
Junyan Qin
a9dae05303 doc(README.md): update the community group number 2023-07-29 13:31:58 +08:00
RockChinQ
ae6994e241 feat(contentPlugin): complete basic content function calling 2023-07-28 19:03:02 +08:00
Rock Chin
caa72fa40c feat: initial content function support at the plugin level 2023-07-27 14:27:36 +08:00
Junyan Qin
46cc9220c3 Merge pull request #506 from RockChinQ/perf-persist-dprompt-when-auto-reset
[Perf] Preserve non-default prompts when the session auto-resets
2023-07-07 17:53:29 +08:00
Rock Chin
ddb56d7a8e fix: incorrect logic in the reset command 2023-07-07 17:49:43 +08:00
Rock Chin
a0267416d7 fix: reset logic prevented scenario presets from initializing 2023-07-07 16:37:05 +08:00
Rock Chin
56e1ef3602 fix: potential bug caused by reset 2023-07-07 16:35:37 +08:00
Rock Chin
b4fc1057d1 perf: preserve non-default prompts when the session auto-resets (#494) 2023-07-06 23:09:39 +08:00
Rock Chin
06037df607 ci: run the sync-wiki workflow only on the master branch 2023-06-20 22:42:18 +08:00
Rock Chin
dce134d08d Release v2.4.7 2023-06-16 19:43:13 +08:00
Junyan Qin
cca471d068 Merge pull request #500 from RockChinQ/perf-more-model-support
Perf more model support
2023-06-16 19:40:29 +08:00
Rock Chin
ddb211b74a feat: support new models 2023-06-16 19:35:26 +08:00
Rock Chin
cef70751ff chore: update config file documentation 2023-06-16 19:35:06 +08:00
JunYan Qin
2d2219fc6e Update README.md 2023-06-13 11:59:30 +08:00
JunYan Qin
514a6b4192 Update README.md 2023-06-13 11:59:07 +08:00
JunYan Qin
7a552b3434 Merge pull request #496 from RockChinQ/dependabot/pip/openai-approx-eq-0.27.8
chore(deps): update openai requirement from ~=0.27.7 to ~=0.27.8
2023-06-12 23:20:02 +08:00
dependabot[bot]
ecebd1b0e0 chore(deps): update openai requirement from ~=0.27.7 to ~=0.27.8
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.7...v0.27.8)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-12 09:01:51 +00:00
Rock Chin
8dc34d2a88 doc(README.md): add account purchase site 2023-06-12 14:13:36 +08:00
Rock Chin
d52644ceec doc: update README.md 2023-06-12 12:24:59 +08:00
Rock Chin
3052510591 Release v2.4.6 2023-06-08 14:01:18 +08:00
Rock Chin
777a5617db Merge pull request #492 from RockChinQ/feat-ignore-major-vernum
[Feat] Ignore versions with a different major version number when updating
2023-06-08 14:00:32 +08:00
Rock Chin
e17c1087e9 feat(updater.py): ignore versions with a different major version number when updating 2023-06-08 13:57:24 +08:00
Rock Chin
633695175a Merge pull request #491 from RockChinQ/feat-tokens-auto-reset
[Feat] Auto-reset the session on token limit errors
2023-06-08 13:49:49 +08:00
Rock Chin
9e78bf3d21 perf: stricter reset condition checks 2023-06-08 13:49:20 +08:00
Rock Chin
43aa68a55d feat: support auto-resetting the session when the token limit is exceeded 2023-06-08 13:45:54 +08:00
Rock Chin
b8308f8c57 Merge branch 'feat-tokens-auto-reset' of https://github.com/RockChinQ/QChatGPT into feat-tokens-auto-reset 2023-06-08 13:43:36 +08:00
Rock Chin
466bfbddeb perf: prompt message formatting 2023-06-08 13:43:33 +08:00
GitHub Actions
b6da07b225 Update override-all.json 2023-06-08 05:20:55 +00:00
Rock Chin
2f2159239a chore: add toggle and prompt message config options 2023-06-08 13:20:33 +08:00
Rock Chin
67d1ca8a65 Merge pull request #490 from RockChinQ/feat-global-group-private-enable
[Feat] Support globally disabling group/private chat messages
2023-06-07 23:49:53 +08:00
Rock Chin
497a393e83 doc: update wiki 2023-06-07 23:49:09 +08:00
Rock Chin
782c0e22ea feat: support globally disabling group and private chats 2023-06-07 23:47:13 +08:00
Rock Chin
2932fc6dfd chore(banlist-template.py): add config options 2023-06-07 23:23:21 +08:00
Rock Chin
0a9eab2113 chore(requirements.txt): bump requests version 2023-06-06 09:37:00 +08:00
Rock Chin
50a673a8ec doc: add plugin list 2023-06-05 22:37:19 +08:00
Rock Chin
9e25d0f9e4 Release v2.4.5 2023-05-31 18:31:25 +08:00
Rock Chin
23cd7be711 Merge pull request #487 from RockChinQ/feat-banlist-syntax-check
feat: exception handling in the init flow
2023-05-31 18:25:46 +08:00
Rock Chin
025b9e33f1 feat: exception handling in the init flow 2023-05-31 18:24:01 +08:00
Rock Chin
bab2f64913 doc(README_en.md): add Wakapi time tracking 2023-05-29 11:12:07 +08:00
Rock Chin
b00e09aa9c doc: add Wakapi time tracking 2023-05-29 11:10:49 +08:00
Rock Chin
0b109fdc7a Merge pull request #479 from RockChinQ/dependabot/pip/openai-approx-eq-0.27.7
chore(deps): update openai requirement from ~=0.27.6 to ~=0.27.7
2023-05-22 17:13:50 +08:00
dependabot[bot]
018fea2ddb chore(deps): update openai requirement from ~=0.27.6 to ~=0.27.7
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.6...v0.27.7)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-22 09:05:00 +00:00
Rock Chin
f8a3cc4352 doc: collapse the OpenAI registration steps 2023-05-21 18:25:35 +08:00
Rock Chin
6ab853acc1 doc: update the HuggingChat notes 2023-05-21 17:50:35 +08:00
Rock Chin
e825dea02f chore: exclude hugchat.json 2023-05-21 17:47:42 +08:00
Rock Chin
cf8740d16e Merge branch 'master' of https://github.com/RockChinQ/QChatGPT 2023-05-21 17:33:47 +08:00
Rock Chin
9c4809e26f chore: publish revLibs announcement 2023-05-21 17:33:44 +08:00
Rock Chin
0a232fd9ef Merge pull request #477 from RockChinQ/feature-detailed-cfg-cmd
[Feat] Support editing sub-config items with the !cfg command
2023-05-21 15:59:59 +08:00
Rock Chin
23016a0791 doc: update wiki documentation 2023-05-21 15:58:21 +08:00
Rock Chin
cdcc67ff23 feat(!cfg): use eval() for type conversion 2023-05-21 15:53:56 +08:00
Rock Chin
92274bfc34 feat(!cfg): support dot-notation indexing of sub-config items 2023-05-21 15:49:56 +08:00
Rock Chin
2fed6f61ba Release v2.4.4 2023-05-21 15:15:28 +08:00
Rock Chin
59b2cd26d2 Merge pull request #476 from RockChinQ/hotfix-471-at-no-response-aft-reload
[Fix] No response to group @ mentions after hot reload
2023-05-21 15:12:43 +08:00
Rock Chin
f7b87e99d2 fix(manager.py): no response to group @ mentions after hot reload 2023-05-21 15:11:34 +08:00
Rock Chin
70bc985145 perf(nakuru.py): warn when the access token is rejected 2023-05-18 21:06:32 +08:00
Rock Chin
070dbe9108 chore: exclude the venv/ directory 2023-05-18 21:05:45 +08:00
Rock Chin
a63fa6d955 chore: pin yiri-mirai to 0.2.7 2023-05-18 21:05:30 +08:00
Rock Chin
c7703809b0 Merge pull request #475 from RockChinQ/actively-delay
[Feat] Support a forced reply delay to reduce the chance of triggering risk control
2023-05-18 20:15:52 +08:00
GitHub Actions
37eb74338f Update override-all.json 2023-05-18 12:14:19 +00:00
Rock Chin
77d5585b7c feat: change the default forced delay range 2023-05-18 20:13:53 +08:00
Rock Chin
6cab3ef029 Merge branch 'actively-delay' of https://github.com/RockChinQ/QChatGPT into actively-delay 2023-05-18 20:12:39 +08:00
Rock Chin
820a7b78fc feat: support forced delay during processing 2023-05-18 20:12:36 +08:00
GitHub Actions
c51dffef3a Update override-all.json 2023-05-18 12:10:33 +00:00
Rock Chin
983bc3da3c chore: add forced delay config option 2023-05-18 20:10:08 +08:00
Rock Chin
09be956a58 Merge pull request #474 from RockChinQ/command-notfound-err
[Perf] Improve the message shown when a command does not exist
2023-05-18 19:45:25 +08:00
Rock Chin
5eded50c53 perf: improve the message shown when a command does not exist 2023-05-18 19:44:20 +08:00
Rock Chin
6d8eebd314 doc: add WeChat donation QR code 2023-05-16 15:36:07 +08:00
Rock Chin
19a0572b5f Release v2.4.3.1 2023-05-15 17:38:03 +08:00
Rock Chin
6272e98474 Merge pull request #467 from RockChinQ/perf-plugin-update
[Perf] Improve plugin update operations
2023-05-14 18:45:36 +08:00
Rock Chin
45042fe7d4 doc: update the plugin update command wiki 2023-05-14 18:44:14 +08:00
Rock Chin
d85e840126 perf: improve plugin updates; support updating a single plugin 2023-05-14 18:41:20 +08:00
Rock Chin
804889f1de perf: demote module-loading output to debug level 2023-05-14 17:30:05 +08:00
Rock Chin
919c996434 doc: add HuggingChat 2023-05-14 17:14:46 +08:00
Rock Chin
00823b3d62 doc(README): add HuggingChat 2023-05-14 17:14:28 +08:00
Rock Chin
af54efd24a doc(README.md): add system status plugin 2023-05-14 14:58:48 +08:00
Rock Chin
b1c9b121f6 Update go-cqhttp配置.md 2023-05-08 21:51:03 +08:00
Rock Chin
7b5649d153 Merge pull request #461 from RockChinQ/dependabot/pip/openai-approx-eq-0.27.6
chore(deps): update openai requirement from ~=0.27.5 to ~=0.27.6
2023-05-08 18:48:20 +08:00
dependabot[bot]
52bf716d84 chore(deps): update openai requirement from ~=0.27.5 to ~=0.27.6
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.5...v0.27.6)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-08 10:34:21 +00:00
Rock Chin
c149dd7b66 Merge pull request #462 from RockChinQ/dependabot/pip/dulwich-approx-eq-0.21.5
chore(deps): update dulwich requirement from ~=0.21.3 to ~=0.21.5
2023-05-08 18:24:59 +08:00
dependabot[bot]
65d5a1ed63 chore(deps): update dulwich requirement from ~=0.21.3 to ~=0.21.5
Updates the requirements on [dulwich](https://github.com/dulwich/dulwich) to permit the latest version.
- [Release notes](https://github.com/dulwich/dulwich/releases)
- [Changelog](https://github.com/jelmer/dulwich/blob/master/NEWS)
- [Commits](https://github.com/dulwich/dulwich/compare/dulwich-0.21.3...dulwich-0.21.5)

---
updated-dependencies:
- dependency-name: dulwich
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-08 09:01:55 +00:00
Rock Chin
5516754bbb doc(README.md): add Docker deployment note 2023-05-02 14:30:25 +08:00
Rock Chin
08082f2ee3 Merge pull request #452 from RockChinQ/dependabot/pip/openai-approx-eq-0.27.5
chore(deps): update openai requirement from ~=0.27.4 to ~=0.27.5
2023-05-01 17:26:18 +08:00
dependabot[bot]
8489266080 chore(deps): update openai requirement from ~=0.27.4 to ~=0.27.5
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.4...v0.27.5)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-01 09:06:44 +00:00
Rock Chin
51c7e0b235 doc(README.md): fourth group number 2023-04-28 00:37:29 +08:00
Rock Chin
628b6b0bb4 Merge branch 'master' of https://github.com/RockChinQ/QChatGPT 2023-04-27 15:01:47 +08:00
Rock Chin
7e024d860d doc: add LightQChat announcement 2023-04-27 15:01:44 +08:00
Rock Chin
c2f6273f70 Merge pull request #442 from oliverkirk-sudo/master
Fix a type issue in exception output
2023-04-26 17:31:19 +08:00
oliverkirk-sudo
96e401ec7b Fix a type issue in exception output 2023-04-26 17:27:33 +08:00
Rock Chin
ae8ac65447 feat: switch to the Tsinghua PyPI mirror (#438) 2023-04-26 11:52:07 +08:00
Rock Chin
2d4f59f36e doc: emphasize 02 2023-04-26 11:18:07 +08:00
Rock Chin
0e85467e02 Release v2.4.2 2023-04-25 10:27:57 +08:00
Rock Chin
eb41cf5481 fix(plugin.py): compatibility issue 2023-04-25 10:27:07 +08:00
Rock Chin
b970a42d07 fix(plugin.py): compatibility issue in the send_message wrapper implementation 2023-04-25 10:26:03 +08:00
Rock Chin
8c9d123e1c Merge pull request #433 from RockChinQ/detailed-response-rules
[Feat] Response rules as fine-grained as individual groups
2023-04-25 09:39:56 +08:00
Rock Chin
ab2a95e347 Merge branch 'detailed-response-rules' of https://github.com/RockChinQ/QChatGPT into detailed-response-rules 2023-04-25 09:31:56 +08:00
Rock Chin
2184c558a4 feat: support per-group response rule configuration 2023-04-25 09:31:44 +08:00
GitHub Actions
83cb8588fd Update override-all.json 2023-04-25 01:28:56 +00:00
Rock Chin
007e82c533 feat: config file support 2023-04-25 09:28:31 +08:00
Rock Chin
499f8580a7 doc: fix wiki formatting 2023-04-25 08:45:58 +08:00
Rock Chin
a7dc3c5dab Release v2.4.1 2023-04-25 00:01:40 +08:00
Rock Chin
d01d3a3c53 perf: show the QQ account in use at startup 2023-04-24 23:57:57 +08:00
Rock Chin
580e062dbf feat: include msg_source_adapter when reporting usage 2023-04-24 23:51:00 +08:00
Rock Chin
c8cee8410c doc: improve formatting 2023-04-24 20:04:33 +08:00
Rock Chin
6bf331c2e3 doc: improve wiki 2023-04-24 19:53:20 +08:00
Rock Chin
4c4930737c chore: add a login framework field to the issue template 2023-04-24 19:28:00 +08:00
Rock Chin
9de01e9525 Release v2.4.0 2023-04-24 16:09:46 +08:00
Rock Chin
c6a16f5974 Merge pull request #427 from RockChinQ/nakuru-support
[Feat] Support connecting to go-cqhttp via the nakuru-project framework
2023-04-24 16:07:12 +08:00
Rock Chin
253ef44d17 chore: announcement 2023-04-24 16:05:47 +08:00
Rock Chin
15a1f00b73 doc(README.md): add go-cqhttp announcement 2023-04-24 16:04:25 +08:00
Rock Chin
b5fa2ea8b8 feat(main.py): add nakuru-project-idk to the dependency updates 2023-04-24 16:01:43 +08:00
Rock Chin
449e024771 doc: add notes for existing users 2023-04-24 15:59:07 +08:00
Rock Chin
1bee7a146b feat: support voice components 2023-04-24 15:55:21 +08:00
Rock Chin
270a632789 doc: fix numbering 2023-04-24 15:48:28 +08:00
Rock Chin
418bb05b4c doc: add go-cqhttp configuration guide 2023-04-24 15:46:58 +08:00
Rock Chin
052b834151 doc: improve the config-template.py documentation 2023-04-24 15:46:26 +08:00
Rock Chin
58ee204a75 doc: add go-cqhttp setup steps to the wiki 2023-04-24 15:41:28 +08:00
Rock Chin
0a02ee8c04 feat: add a nakuru prompt check at startup 2023-04-24 15:04:07 +08:00
Rock Chin
950ef4a181 doc: update README.md 2023-04-24 14:57:28 +08:00
Rock Chin
7b7cdd8adb perf: include the output file path in the log file 2023-04-24 13:52:22 +08:00
Rock Chin
471768e760 feat: support sending forwarded messages 2023-04-24 12:46:33 +08:00
Rock Chin
c7517d31a4 chore: switch to the nakuru-project-idk package 2023-04-24 11:37:01 +08:00
Rock Chin
7d10d0398e fix: nakuru hot reload failure 2023-04-24 11:21:51 +08:00
Rock Chin
a2bc25c08b feat: support quoting the original message in replies 2023-04-24 10:57:43 +08:00
Rock Chin
3cb49fe2d8 feat: support detecting group mutes 2023-04-24 10:34:51 +08:00
Rock Chin
5b96ac122f feat: adapt basic nakuru features 2023-04-23 23:40:08 +08:00
Rock Chin
612033f478 feat: base model for the nakuru adapter 2023-04-23 15:58:37 +08:00
GitHub Actions
48ee940d8e Update override-all.json 2023-04-23 01:32:36 +00:00
Rock Chin
e74df0b37d chore: add nakuru config; use the temporary nakuru-project-test package 2023-04-23 09:32:01 +08:00
GitHub Actions
640afdc49c Update override-all.json 2023-04-22 13:51:02 +00:00
Rock Chin
6b39df5b9b chore: remove NoneBot2 config 2023-04-22 21:50:41 +08:00
Rock Chin
e7e698765e fix(plugin.py): add missing newline 2023-04-22 17:40:41 +08:00
Rock Chin
43fea13dab Merge pull request #418 from RockChinQ/im-impl-decoupling
[Refactor] Add an abstraction layer to decouple the message source (MessageSource) component
2023-04-21 18:10:42 +08:00
GitHub Actions
bc899e5bd0 Update override-all.json 2023-04-21 09:52:31 +00:00
Rock Chin
160086feb9 refactor: complete the MessageSource adapter decoupling 2023-04-21 17:51:58 +08:00
Rock Chin
016391c976 refactor: stop passing config-readable parameters into QQBotManager 2023-04-21 17:15:32 +08:00
Rock Chin
91746448a3 feat: message source adapter model and the YiriMirai adapter 2023-04-21 16:36:59 +08:00
Rock Chin
5cb0543237 doc(README.md): update the wiki link 2023-04-20 20:50:00 +08:00
Rock Chin
fac29a24a8 doc(README.md): give social.png rounded corners 2023-04-20 10:54:06 +08:00
Rock Chin
4d3a2a21d0 Update README_en.md 2023-04-20 00:22:05 +08:00
Rock Chin
6d4f88041c Update README.md 2023-04-20 00:21:37 +08:00
Rock Chin
18587d3690 doc(README.md): fix the social image format 2023-04-20 00:15:11 +08:00
Rock Chin
423090dccd doc(README.md): switch to the social image 2023-04-20 00:13:11 +08:00
Rock Chin
78e88baab3 doc(README.md): improve the logo image format 2023-04-20 00:08:00 +08:00
Rock Chin
6a276767b3 doc(README.md): add logo 2023-04-20 00:06:52 +08:00
Rock Chin
2cb26c7c70 doc: add logo file 2023-04-20 00:04:01 +08:00
Rock Chin
ff66c88060 doc(README.md): improve image format 2023-04-17 10:18:23 +08:00
Rock Chin
611e82b8f9 doc(README.md): add usage screenshots 2023-04-17 10:15:50 +08:00
Rock Chin
59bdee7137 feat: add IM framework model 2023-04-15 23:38:52 +08:00
60 changed files with 2660 additions and 659 deletions


@@ -1,34 +0,0 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
"name": "QChatGPT 3.10",
// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
"image": "mcr.microsoft.com/devcontainers/python:0-3.10",
// Features to add to the dev container. More info: https://containers.dev/features.
// "features": {},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "pip3 install --user -r requirements.txt",
// Configure tool-specific properties.
// "customizations": {},
"customizations": {
"codespaces": {
"repositories": {
"RockChinQ/QChatGPT": {
"permissions": "write-all"
},
"RockChinQ/revLibs": {
"permissions": "write-all"
}
}
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}


@@ -14,6 +14,15 @@ body:
- Docker deployment
validations:
required: true
- type: dropdown
attributes:
label: Login framework
description: "Framework used to connect to QQ"
options:
- Mirai
- go-cqhttp
validations:
required: false
- type: input
attributes:
label: System environment


@@ -1,7 +1,14 @@
name: Update Wiki
on:
pull_request:
branches:
- master
paths:
- 'res/wiki/**'
push:
branches:
- master
paths:
- 'res/wiki/**'
@@ -23,11 +30,16 @@ jobs:
- name: Copy res/wiki content to wiki
run: |
cp -r res/wiki/* wiki/
- name: Check for changes
run: |
cd wiki
if git diff --quiet; then
echo "No changes to commit."
exit 0
fi
- name: Commit and Push Changes
run: |
cd wiki
if git diff --name-only; then
git add .
git commit -m "Update wiki"
git push
fi


@@ -26,7 +26,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade yiri-mirai openai colorlog func_timeout dulwich Pillow
python -m pip install --upgrade yiri-mirai openai colorlog func_timeout dulwich Pillow CallingGPT tiktoken
- name: Copy Scripts
run: |

.gitignore

@@ -20,3 +20,9 @@ res/announcement_saved
res/announcement_saved.json
cmdpriv.json
tips.py
.venv
bin/
.vscode
test_*
venv/
hugchat.json


@@ -1,19 +1,29 @@
# QChatGPT🤖
<p align="center">
<img src="res/social.png" alt="QChatGPT" width="640" />
</p>
[English](README_en.md) | Simplified Chinese
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/QChatGPT?style=flat-square)](https://github.com/RockChinQ/QChatGPT/releases/latest)
![Wakapi Count](https://wakapi.dev/api/badge/RockChinQ/interval:any/project:QChatGPT)
> 2023/7/29 GPT Function Calling is now supported, enabling ChatGPT-Plugin-like behavior; see [Wiki: content functions](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8-%E5%86%85%E5%AE%B9%E5%87%BD%E6%95%B0)
> 2023/4/24 Logging in to QQ via go-cqhttp is supported; see [this document](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE)
> 2023/3/18 The GPT-4 API (beta) is now supported; see `completion_api_params` in `config-template.py`
> 2023/3/15 The reverse-engineered library now supports New Bing; see the [plugin docs](https://github.com/RockChinQ/revLibs) for usage
- **Since you're already here, why not leave a ⭐?**
**QChatGPT requires Python >= 3.9**
- See the [project Wiki](https://github.com/RockChinQ/QChatGPT/wiki) for detailed information
- Official discussion and support group: 656285629
- **Before asking in the group, please `make sure` you have searched the docs and issues and found no solution**
- Community group (one-click deployment packages, GUI and other resources inside): 362515018
- Community group (one-click deployment packages, GUI and other resources inside): 891448839
- For a QQ Channel bot, see [QQChannelChatGPT](https://github.com/Soulter/QQChannelChatGPT)
- Contributions of all kinds are welcome; see the [contribution guide](CONTRIBUTING.md)
- Buy a ChatGPT account: [this link](http://fk.kimi.asia)
## 🍺Supported models at a glance
@@ -28,6 +38,7 @@
- ChatGPT web edition GPT-3.5 model, connected via [plugin](https://github.com/RockChinQ/revLibs)
- ChatGPT web edition GPT-4 model, currently requires a ChatGPT Plus subscription, connected via [plugin](https://github.com/RockChinQ/revLibs)
- New Bing reverse-engineered library, connected via [plugin](https://github.com/RockChinQ/revLibs)
- HuggingChat, connected via [plugin](https://github.com/RockChinQ/revLibs), English only
### Story continuation
@@ -101,6 +112,7 @@
<summary>✅Plugin loading supported🧩</summary>
- Self-implemented plugin loader and related support
- Supports GPT Function Calling
- See the [plugin usage page](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8) for details
</details>
<details>
@@ -136,6 +148,15 @@
- Users may customize error, help, and other prompt messages
- See `tips.py`
</details>
### 🏞️Screenshots
<img alt="Private chat GPT-3.5" src="res/screenshots/person_gpt3.5.png" width="400"/>
<br/>
<img alt="Group chat GPT-3.5" src="res/screenshots/group_gpt3.5.png" width="400"/>
<br/>
<img alt="New Bing" src="res/screenshots/person_newbing.png" width="400"/>
</details>
See the [Wiki feature usage page](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE) for details
@@ -146,6 +167,9 @@
### - Register an OpenAI account
<details>
<summary>Click to view the steps</summary>
> If you intend to use non-OpenAI models such as New Bing directly, you may skip this step, deploy first, and then configure per the relevant plugin's documentation
Register by following these articles
@@ -156,6 +180,8 @@
After registering, check your api_key on the [account page](https://beta.openai.com/account/api-keys)
Once registered, follow the automated or manual deployment steps below
</details>
### - Automated deployment
<details>
@@ -163,6 +189,8 @@
#### Docker
> The Docker method currently only supports logging in via mirai. If you are not **familiar** with Docker operations and related knowledge, we strongly recommend another deployment method; we **will not and can hardly** solve connection problems between multiple containers on your host.
See [this document](res/docs/docker_deploy.md)
Contributed by [@mikumifa](https://github.com/mikumifa)
@@ -180,12 +208,29 @@
- Use Python 3.9.x or later
#### Configure Mirai
#### ① Configure the QQ login framework
Configure Mirai and YiriMirai following [this tutorial](https://yiri-mirai.wybxc.cc/tutorials/01/configuration)
After starting mirai-console, log in to the QQ account with the `login` command and keep mirai-console running
mirai and go-cqhttp are both supported; configure either one
#### Configure the main program
<details>
<summary>mirai</summary>
1. Configure Mirai and mirai-api-http following [this tutorial](https://yiri-mirai.wybxc.cc/tutorials/01/configuration)
2. After starting mirai-console, log in to the QQ account with the `login` command and keep mirai-console running
3. When configuring the main program in the next step, set `msg_source_adapter` to `yirimirai` in config.py
</details>
<details>
<summary>go-cqhttp</summary>
1. Configure go-cqhttp following [this document](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE)
2. Start go-cqhttp, make sure it logs in successfully, and keep it running
3. When configuring the main program in the next step, set `msg_source_adapter` to `nakuru` in config.py
</details>
#### ② Configure the main program
1. Clone this project
@@ -197,7 +242,7 @@ cd QChatGPT
2. Install dependencies
```bash
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow nakuru-project-idk CallingGPT tiktoken
```
3. Run the main program once to generate the config file
@@ -237,22 +282,17 @@ python3 main.py
See the [Wiki plugin usage page](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8) for usage
See the [Wiki plugin development page](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91) for a development tutorial
⭐We now support [GPT Function Calling](https://platform.openai.com/docs/guides/gpt/function-calling); see [Wiki: content functions](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8-%E5%86%85%E5%AE%B9%E5%87%BD%E6%95%B0)
<details>
<summary>View plugin list</summary>
### Example plugins
[Full plugin list](https://github.com/stars/RockChinQ/lists/qchatgpt-%E6%8F%92%E4%BB%B6); issues proposing new plugins are welcome
Located in the `tests/plugin_examples` directory; copy the whole directory into `plugins` to use them
### Selected plugins
- `cmdcn` - Chinese forms of the main program's commands
- `hello_plugin` - replies to the message `hello`
- `urlikethisijustsix` - replies to offensive messages
### More
New plugin submissions are welcome
- [revLibs](https://github.com/RockChinQ/revLibs) - connect the ChatGPT web edition to this project; see [the difference between the official API and the web edition](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E4%B8%8EChatGPT%E7%BD%91%E9%A1%B5%E7%89%88)
- [WebwlkrPlugin](https://github.com/RockChinQ/WebwlkrPlugin) - give the bot internet access!!
- [revLibs](https://github.com/RockChinQ/revLibs) - connect the ChatGPT web edition to this project; see [the differences between the official API, the ChatGPT web edition, and the ChatGPT API](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E3%80%81ChatGPT%E7%BD%91%E9%A1%B5%E7%89%88%E3%80%81ChatGPT-API%E5%8C%BA%E5%88%AB)
- [Switcher](https://github.com/RockChinQ/Switcher) - switch the model in use via commands
- [hello_plugin](https://github.com/RockChinQ/hello_plugin) - repository form of `hello_plugin`, a plugin development template
- [dominoar/QChatPlugins](https://github.com/dominoar/QchatPlugins) - various feature plugins by dominoar (voice output, Ranimg, blocked-word rules, etc.)
@@ -262,6 +302,7 @@ python3 main.py
- [chordfish-k/QChartGPT_Emoticon_Plugin](https://github.com/chordfish-k/QChartGPT_Emoticon_Plugin) - make the bot send memes based on reply content
- [oliverkirk-sudo/ChatPoeBot](https://github.com/oliverkirk-sudo/ChatPoeBot) - connect to bots on [Poe](https://poe.com/)
- [lieyanqzu/WeatherPlugin](https://github.com/lieyanqzu/WeatherPlugin) - weather query plugin
- [SysStatPlugin](https://github.com/RockChinQ/SysStatPlugin) - view system status
</details>
## 😘Acknowledgements
@@ -274,6 +315,6 @@ python3 main.py
And all [contributors](https://github.com/RockChinQ/QChatGPT/graphs/contributors) and other friends who support this project.
<!-- ## 👍Donate
## 👍Donate
<img alt="Donation QR code" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/> -->
<img alt="Donation QR code" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/>


@@ -1,8 +1,13 @@
# QChatGPT🤖
<p align="center">
<img src="res/social.png" alt="QChatGPT" width="640" />
</p>
English | [简体中文](README.md)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/QChatGPT?style=flat-square)](https://github.com/RockChinQ/QChatGPT/releases/latest)
![Wakapi Count](https://wakapi.dev/api/badge/RockChinQ/interval:any/project:QChatGPT)
- Refer to [Wiki](https://github.com/RockChinQ/QChatGPT/wiki) to get further information.
- Official QQ group: 656285629
@@ -23,6 +28,7 @@ English | [简体中文](README.md)
- ChatGPT website edition (GPT-3.5), see [revLibs plugin](https://github.com/RockChinQ/revLibs)
- ChatGPT website edition (GPT-4), ChatGPT plus subscription required, see [revLibs plugin](https://github.com/RockChinQ/revLibs)
- New Bing, see [revLibs plugin](https://github.com/RockChinQ/revLibs)
- HuggingChat, see [revLibs plugin](https://github.com/RockChinQ/revLibs), English only
### Story
@@ -103,11 +109,26 @@ Use [this installer](https://github.com/RockChinQ/qcg-installer) to deploy.
- Python 3.9.x or higher
#### Configure Mirai
#### Configure the QQ login framework
Currently supports mirai and go-cqhttp, configure either one
<details>
<summary>mirai</summary>
Follow [this tutorial(cn)](https://yiri-mirai.wybxc.cc/tutorials/01/configuration) to configure Mirai and YiriMirai.
After starting mirai-console, use the `login` command to log in to the QQ account, and keep the mirai-console running.
</details>
<details>
<summary>go-cqhttp</summary>
1. Follow [this tutorial(cn)](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE) to configure go-cqhttp.
2. Start go-cqhttp, make sure it is logged in and running.
</details>
#### Configure QChatGPT
1. Clone the repository
@@ -120,7 +141,7 @@ cd QChatGPT
2. Install dependencies
```bash
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow nakuru-project-idk
```
3. Generate `config.py`


@@ -1,7 +1,13 @@
# Configuration file: parameters whose comments are marked [required] must be changed; adjust the others as needed, but do not delete any
import logging
# [required] Mirai configuration
# Message-handling protocol adapter
# Currently supported adapters:
# - "yirimirai": adapter for YiriMirai, a communication framework for mirai; also fill in mirai_http_api_config below
# - "nakuru": go-cqhttp communication framework; also fill in nakuru_config below
msg_source_adapter = "yirimirai"
# [required (either this or nakuru, depending on msg_source_adapter)] Mirai configuration
# See the tutorial from the mirai setup step for what each field means
# adapter: adapter choice; currently supports HTTPAdapter and WebSocketAdapter
# host: address of the host running mirai
@@ -18,6 +24,15 @@ mirai_http_api_config = {
"qq": 1234567890
}
# [required (either this or mirai, depending on msg_source_adapter)]
# Configuration for connecting to go-cqhttp via the nakuru-project framework
nakuru_config = {
"host": "localhost", # go-cqhttp address
"port": 6700, # go-cqhttp forward websocket port
"http_port": 5700, # go-cqhttp forward http port
"token": "" # fill in if access_token is set in go-cqhttp's config.yml
}
# [required] OpenAI configuration
# api_key: OpenAI API key
# http_proxy: proxy used for OpenAI requests; None disables it; https and socks5 are not yet usable
@@ -108,13 +123,36 @@ preset_mode = "normal"
# Note: the prefix is stripped from messages matched by prefix; nothing is stripped from messages matched by regexp
# Prefix matching takes priority over regular-expression matching
# A short regex tutorial: https://www.runoob.com/regexp/regexp-tutorial.html
#
# Different response rules can be configured per group, for example:
# response_rules = {
# "default": {
# "at": True,
# "prefix": ["/ai", "!ai", "ai", "ai"],
# "regexp": [],
# "random_rate": 0.0,
# },
# "12345678": {
# "at": False,
# "prefix": ["/ai", "!ai", "ai", "ai"],
# "regexp": [],
# "random_rate": 0.0,
# },
# }
#
# The settings above turn off at-response in group 12345678
# Groups without their own settings use the default rule
response_rules = {
"default": {
"at": True, # whether to respond to messages that @ the bot
"prefix": ["/ai", "!ai", "ai", "ai"],
"regexp": [], # "为什么.*", "怎么?样.*", "怎么.*", "如何.*", "[Hh]ow to.*", "[Ww]hy not.*", "[Ww]hat is.*", ".*怎么办", ".*咋办"
"random_rate": 0.0, # probability of responding at random, 0.0-1.0; 0.0 never responds, 1.0 responds to every message; only applies when the preceding checks fail
},
}
# Message ignore rules
# Apply to both private and group chats
# Messages matching these rules will not be responded to
@@ -157,16 +195,22 @@ encourage_sponsor_at_start = True
# Note: a larger prompt_submit_length drains your OpenAI quota faster
prompt_submit_length = 2048
# Whether to reset the session automatically when a token-limit error occurs
# The prompt message can be edited in tips.py
auto_reset = True
# Parameters for the OpenAI completion API
# Fill in the model below; the program selects the API automatically
# Currently supported models:
#
# 'gpt-4'
# 'gpt-4-0314'
# 'gpt-4-0613'
# 'gpt-4-32k'
# 'gpt-4-32k-0314'
# 'gpt-4-32k-0613'
# 'gpt-3.5-turbo'
# 'gpt-3.5-turbo-0301'
# 'gpt-3.5-turbo-16k'
# 'gpt-3.5-turbo-0613'
# 'gpt-3.5-turbo-16k-0613'
# 'text-davinci-003'
# 'text-davinci-002'
# 'code-davinci-002'
@@ -203,6 +247,14 @@ process_message_timeout = 30
# Whether to show the [GPT] prefix in replies
show_prefix = False
# Forced delay before replying, to reduce the chance of Tencent risk control flagging the bot
# *This mechanism applies to commands and messages, private and group chats alike
# Each round of processing draws a random number of seconds from the range below;
# if this round took less time than that, the reply is delayed until the drawn duration has elapsed
# e.g. [1.5, 3]: each round draws a random value between 1.5 and 3 seconds; if processing finishes sooner, the reply is held until then
# If you do not need this feature, set force_delay_range to [0, 0]
force_delay_range = [1.5, 3]
# Threshold for applying the long-message strategy
# When a reply exceeds this length, the long-message strategy is used
blob_message_threshold = 256
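Taken in isolation, the two options above describe small, self-contained behaviors: rule lookup falls back to the "default" entry for groups without their own rules, and the forced delay pads each round of processing up to a randomly drawn duration. A minimal sketch of both (hypothetical helper names; the real logic lives in the main program's message handling):

```python
import random
import time

def rule_for_group(response_rules: dict, group_id: int) -> dict:
    # Per-group response rules: fall back to "default" when the group
    # has no entry of its own.
    return response_rules.get(str(group_id), response_rules["default"])

def apply_forced_delay(force_delay_range: list, elapsed: float):
    # Draw a target duration from force_delay_range and, if processing
    # finished faster than that, sleep for the remainder.
    target = random.uniform(force_delay_range[0], force_delay_range[1])
    if elapsed < target:
        time.sleep(target - elapsed)
```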

main.py

@@ -47,9 +47,9 @@ def init_db():
def ensure_dependencies():
import pkg.utils.pkgmgr as pkgmgr
pkgmgr.run_pip(["install", "openai", "Pillow", "--upgrade",
"-i", "https://pypi.douban.com/simple/",
"--trusted-host", "pypi.douban.com"])
pkgmgr.run_pip(["install", "openai", "Pillow", "nakuru-project-idk", "CallingGPT", "tiktoken", "--upgrade",
"-i", "https://pypi.tuna.tsinghua.edu.cn/simple",
"--trusted-host", "pypi.tuna.tsinghua.edu.cn"])
known_exception_caught = False
@@ -133,6 +133,7 @@ def start(first_time_init=False):
print("更新openai库失败:{}, 请忽略或自行更新".format(e))
known_exception_caught = False
try:
try:
sh = reset_logging()
@@ -177,9 +178,14 @@ def start(first_time_init=False):
logging.error(e)
traceback.print_exc()
# Configure the OpenAI proxy
import openai
openai.proxy = None # reset first, since the proxy may need clearing after a reload
if "http_proxy" in config.openai_config and config.openai_config["http_proxy"] is not None:
openai.proxy = config.openai_config["http_proxy"]
# Configure the openai api_base
if "reverse_proxy" in config.openai_config and config.openai_config["reverse_proxy"] is not None:
import openai
openai.api_base = config.openai_config["reverse_proxy"]
# Main startup flow
@@ -193,9 +199,7 @@ def start(first_time_init=False):
pkg.openai.session.load_sessions()
# Initialize the QQ bot
qqbot = pkg.qqbot.manager.QQBotManager(mirai_http_api_config=config.mirai_http_api_config,
timeout=config.process_message_timeout, retry=config.retry_times,
first_time_init=first_time_init)
qqbot = pkg.qqbot.manager.QQBotManager(first_time_init=first_time_init)
# Load plugins
import pkg.plugin.host
@@ -210,7 +214,8 @@ def start(first_time_init=False):
def run_bot_wrapper():
global known_exception_caught
try:
qqbot.bot.run()
logging.info("使用账号: {}".format(qqbot.bot_account_id))
qqbot.adapter.run_sync()
except TypeError as e:
if str(e).__contains__("argument 'debug'"):
logging.error(
@@ -245,6 +250,8 @@ def start(first_time_init=False):
"mirai-api-http端口无法使用:{}, 解决方案: https://github.com/RockChinQ/QChatGPT/issues/22".format(
e))
else:
import traceback
traceback.print_exc()
logging.error(
"捕捉到未知异常:{}, 请前往 https://github.com/RockChinQ/QChatGPT/issues 查找或提issue".format(e))
known_exception_caught = True
@@ -254,6 +261,17 @@ def start(first_time_init=False):
threading.Thread(
target=run_bot_wrapper
).start()
except Exception as e:
traceback.print_exc()
if isinstance(e, KeyboardInterrupt):
logging.info("程序被用户中止")
sys.exit(0)
elif isinstance(e, SyntaxError):
logging.error("配置文件存在语法错误,请检查配置文件:\n1. 是否存在中文符号\n2. 是否已按照文件中的说明填写正确")
sys.exit(1)
else:
logging.error("初始化失败:{}".format(e))
sys.exit(1)
finally:
# On Windows, warn that the console's selection mode may pause the program
if os.name == 'nt':
@@ -264,9 +282,14 @@ def start(first_time_init=False):
if first_time_init:
if not known_exception_caught:
import config
if config.msg_source_adapter == "yirimirai":
logging.info("QQ: {}, MAH: {}".format(config.mirai_http_api_config['qq'], config.mirai_http_api_config['host']+":"+str(config.mirai_http_api_config['port'])))
logging.info('Startup complete. If "Logged in to account xxxxx" does not appear for a long time and the bot does not reply, see '
logging.critical('Startup complete. If "Logged in to account xxxxx" does not appear for a long time and the bot does not reply, the fix is here (please do not ask in the group): '
'https://github.com/RockChinQ/QChatGPT/issues/37')
elif config.msg_source_adapter == 'nakuru':
logging.info("host: {}, port: {}, http_port: {}".format(config.nakuru_config['host'], config.nakuru_config['port'], config.nakuru_config['http_port']))
logging.critical('Startup complete. If "Protocol: connected" does not appear for a long time and the bot does not reply, check that nakuru_config in config.py is correct')
else:
sys.exit(1)
else:


@@ -1,5 +1,6 @@
{
"comment": "这是override.json支持的字段全集, 关于override.json机制, 请查看https://github.com/RockChinQ/QChatGPT/pull/271",
"msg_source_adapter": "yirimirai",
"mirai_http_api_config": {
"adapter": "WebSocketAdapter",
"host": "localhost",
@@ -7,6 +8,12 @@
"verifyKey": "yirimirai",
"qq": 1234567890
},
"nakuru_config": {
"host": "localhost",
"port": 6700,
"http_port": 5700,
"token": ""
},
"openai_config": {
"api_key": {
"default": "openai_api_key"
@@ -20,6 +27,7 @@
},
"preset_mode": "normal",
"response_rules": {
"default": {
"at": true,
"prefix": [
"/ai",
@@ -29,6 +37,7 @@
],
"regexp": [],
"random_rate": 0.0
}
},
"ignore_rules": {
"prefix": [
@@ -44,6 +53,7 @@
"inappropriate_message_tips": "[百度云]请珍惜机器人,当前返回内容不合规",
"encourage_sponsor_at_start": true,
"prompt_submit_length": 2048,
"auto_reset": true,
"completion_api_params": {
"model": "gpt-3.5-turbo",
"temperature": 0.9,
@@ -58,6 +68,10 @@
"include_image_description": true,
"process_message_timeout": 30,
"show_prefix": false,
"force_delay_range": [
1.5,
3
],
"blob_message_threshold": 256,
"blob_message_strategy": "forward",
"wait_last_done": true,


@@ -46,7 +46,7 @@ class DataGatherer:
config = pkg.utils.context.get_config()
if not config.report_usage:
return
res = requests.get("http://reports.rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}".format(subservice_name, self.version_str, count))
res = requests.get("http://reports.rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}&msg_source={}".format(subservice_name, self.version_str, count, config.msg_source_adapter))
if res.status_code != 200 or res.text != "ok":
logging.warning("report to server failed, status_code: {}, text: {}".format(res.status_code, res.text))
except:



@@ -0,0 +1,200 @@
import openai
import json
import logging
from .model import RequestBase
from ..funcmgr import get_func_schema_list, execute_function, get_func, get_func_schema, ContentFunctionNotFoundError
class ChatCompletionRequest(RequestBase):
"""调用ChatCompletion接口的请求类。
此类保证每一次返回的角色为assistant的信息的finish_reason一定为stop。
若有函数调用响应,本类的返回瀑布是:函数调用请求->函数调用结果->...->assistant的信息->stop。
"""
model: str
messages: list[dict[str, str]]
kwargs: dict
stopped: bool = False
pending_func_call: dict = None
pending_msg: str
def flush_pending_msg(self):
self.append_message(
role="assistant",
content=self.pending_msg
)
self.pending_msg = ""
def append_message(self, role: str, content: str, name: str=None):
msg = {
"role": role,
"content": content
}
if name is not None:
msg['name'] = name
self.messages.append(msg)
def __init__(
self,
model: str,
messages: list[dict[str, str]],
**kwargs
):
self.model = model
self.messages = messages.copy()
self.kwargs = kwargs
self.req_func = openai.ChatCompletion.acreate
self.pending_func_call = None
self.stopped = False
self.pending_msg = ""
def __iter__(self):
return self
def __next__(self) -> dict:
if self.stopped:
raise StopIteration()
if self.pending_func_call is None: # no pending function call request
args = {
"model": self.model,
"messages": self.messages,
}
funcs = get_func_schema_list()
if len(funcs) > 0:
args['functions'] = funcs
# merge in kwargs
args = {**args, **self.kwargs}
resp = self._req(**args)
choice0 = resp["choices"][0]
# if this is not a function call and finish_reason is stop, stop iterating
if 'function_call' not in choice0['message'] and choice0["finish_reason"] == "stop":
self.stopped = True
if 'function_call' in choice0['message']:
self.pending_func_call = choice0['message']['function_call']
# self.append_message(
# role="assistant",
# content="function call: "+json.dumps(self.pending_func_call, ensure_ascii=False)
# )
return {
"id": resp["id"],
"choices": [
{
"index": choice0["index"],
"message": {
"role": "assistant",
"type": "function_call",
"content": None,
"function_call": choice0['message']['function_call']
},
"finish_reason": "function_call"
}
],
"usage": resp["usage"]
}
else:
# self.pending_msg += choice0['message']['content']
# a plain reply is always last, so it need not be appended to the internal messages
return {
"id": resp["id"],
"choices": [
{
"index": choice0["index"],
"message": {
"role": "assistant",
"type": "text",
"content": choice0['message']['content']
},
"finish_reason": "stop"
}
],
"usage": resp["usage"]
}
else: # handle the pending function call request
cp_pending_func_call = self.pending_func_call.copy()
self.pending_func_call = None
func_name = cp_pending_func_call['name']
arguments = {}
try:
try:
arguments = json.loads(cp_pending_func_call['arguments'])
# fallback handling when the arguments are not valid JSON
except json.decoder.JSONDecodeError:
# get the function's parameter list
func_schema = get_func_schema(func_name)
arguments = {
func_schema['parameters']['required'][0]: cp_pending_func_call['arguments']
}
logging.info("执行函数调用: name={}, arguments={}".format(func_name, arguments))
# 执行函数调用
ret = ""
try:
ret = execute_function(func_name, arguments)
logging.info("函数执行完成。")
except Exception as e:
ret = "error: execute function failed: {}".format(str(e))
logging.error("函数执行失败: {}".format(str(e)))
self.append_message(
role="function",
content=json.dumps(ret, ensure_ascii=False),
name=func_name
)
return {
"id": -1,
"choices": [
{
"index": -1,
"message": {
"role": "function",
"type": "function_return",
"function_name": func_name,
"content": json.dumps(ret, ensure_ascii=False)
},
"finish_reason": "function_return"
}
],
"usage": {
"prompt_tokens": 0,
"completion_tokens": 0,
"total_tokens": 0
}
}
except ContentFunctionNotFoundError:
raise Exception("没有找到函数: {}".format(func_name))


@@ -0,0 +1,111 @@
import openai
from .model import RequestBase
class CompletionRequest(RequestBase):
"""调用Completion接口的请求类。
调用方可以一直next completion直到finish_reason为stop。
"""
model: str
prompt: str
kwargs: dict
stopped: bool = False
def __init__(
self,
model: str,
messages: list[dict[str, str]],
**kwargs
):
self.model = model
self.prompt = ""
for message in messages:
self.prompt += message["role"] + ": " + message["content"] + "\n"
self.prompt += "assistant: "
self.kwargs = kwargs
self.req_func = openai.Completion.acreate
def __iter__(self):
return self
def __next__(self) -> dict:
"""调用Completion接口返回生成的文本
{
"id": "id",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"type": "text",
"content": "message"
},
"finish_reason": "reason"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 20,
"total_tokens": 30
}
}
"""
if self.stopped:
raise StopIteration()
resp = self._req(
model=self.model,
prompt=self.prompt,
**self.kwargs
)
if resp["choices"][0]["finish_reason"] == "stop":
self.stopped = True
choice0 = resp["choices"][0]
self.prompt += choice0["text"]
return {
"id": resp["id"],
"choices": [
{
"index": choice0["index"],
"message": {
"role": "assistant",
"type": "text",
"content": choice0["text"]
},
"finish_reason": choice0["finish_reason"]
}
],
"usage": resp["usage"]
}
if __name__ == "__main__":
import os
openai.api_key = os.environ["OPENAI_API_KEY"]
for resp in CompletionRequest(
model="text-davinci-003",
messages=[
{
"role": "user",
"content": "Hello, who are you?"
}
]
):
print(resp)
if resp["choices"][0]["finish_reason"] == "stop":
break

pkg/openai/api/model.py

@@ -0,0 +1,51 @@
# Model classes for the different API request types
import threading
import asyncio
import logging
import openai
class RequestBase:
req_func: callable
def __init__(self, *args, **kwargs):
raise NotImplementedError
def _req(self, **kwargs):
"""处理代理问题"""
ret: dict = {}
exception: Exception = None
async def awrapper(**kwargs):
nonlocal ret, exception
try:
ret = await self.req_func(**kwargs)
logging.debug("接口请求返回:%s", str(ret))
return ret
except Exception as e:
exception = e
loop = asyncio.new_event_loop()
thr = threading.Thread(
target=loop.run_until_complete,
args=(awrapper(**kwargs),)
)
thr.start()
thr.join()
if exception is not None:
raise exception
return ret
def __iter__(self):
return self
def __next__(self):
raise NotImplementedError
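`_req` bridges the async `acreate` functions into this synchronous iterator: it runs the coroutine to completion on a fresh event loop inside a worker thread, joins the thread, and re-raises any captured exception on the caller's side. The same pattern in isolation (a standalone sketch, not project code):

```python
import asyncio
import threading

def run_sync(coro):
    """Run a coroutine to completion from synchronous code."""
    result, error = None, None

    def runner():
        nonlocal result, error
        loop = asyncio.new_event_loop()
        try:
            result = loop.run_until_complete(coro)
        except Exception as e:
            error = e  # captured here, re-raised on the calling thread
        finally:
            loop.close()

    thread = threading.Thread(target=runner)
    thread.start()
    thread.join()
    if error is not None:
        raise error
    return result

async def demo():
    await asyncio.sleep(0.1)
    return {"ok": True}

print(run_sync(demo()))  # {'ok': True}
```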

pkg/openai/funcmgr.py

@@ -0,0 +1,47 @@
# Helper functions supporting function calling
import logging
from pkg.plugin import host
class ContentFunctionNotFoundError(Exception):
pass
def get_func_schema_list() -> list:
"""从plugin包中的函数结构中获取并处理成受GPT支持的格式"""
if not host.__enable_content_functions__:
return []
schemas = []
for func in host.__callable_functions__:
if func['enabled']:
fun_cp = func.copy()
del fun_cp['enabled']
schemas.append(fun_cp)
return schemas
def get_func(name: str) -> callable:
if name not in host.__function_inst_map__:
raise ContentFunctionNotFoundError("Content function not found: {}".format(name))
return host.__function_inst_map__[name]
def get_func_schema(name: str) -> dict:
for func in host.__callable_functions__:
if func['name'] == name:
return func
raise ContentFunctionNotFoundError("Content function not found: {}".format(name))
def execute_function(name: str, kwargs: dict) -> any:
"""执行函数调用"""
logging.debug("executing function: name='{}', kwargs={}".format(name, kwargs))
func = get_func(name)
return func(**kwargs)
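The registry this module reads is a pair of structures on `pkg.plugin.host`: `__callable_functions__` holds OpenAI-style function schemas plus an `enabled` flag (stripped by `get_func_schema_list` before the schemas go to the model), and `__function_inst_map__` maps names to callables. A hand-rolled registration sketch for illustration only; in this release the recommended path is the plugin `func()` decorator:

```python
from pkg.plugin import host
from pkg.openai import funcmgr

def get_time(timezone: str) -> str:
    # Toy content function used only for this illustration.
    return "12:00 in " + timezone

host.__callable_functions__.append({
    "name": "get_time",
    "description": "Get the current time in a timezone",
    "parameters": {
        "type": "object",
        "properties": {"timezone": {"type": "string"}},
        "required": ["timezone"],
    },
    "enabled": True,  # removed by get_func_schema_list() before sending
})
host.__function_inst_map__["get_time"] = get_time

print(funcmgr.get_func_schema_list())                             # schemas without 'enabled'
print(funcmgr.execute_function("get_time", {"timezone": "UTC"}))  # '12:00 in UTC'
```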


@@ -5,7 +5,9 @@ import openai
import pkg.openai.keymgr
import pkg.utils.context
import pkg.audit.gatherer
from pkg.openai.modelmgr import ModelRequest, create_openai_model_request
from pkg.openai.modelmgr import select_request_cls
from pkg.openai.api.model import RequestBase
class OpenAIInteract:
@@ -33,45 +35,31 @@ class OpenAIInteract:
pkg.utils.context.set_openai_manager(self)
# Request OpenAI completion
def request_completion(self, prompts) -> tuple[str, int]:
"""Request a reply from the completion API
Parameters:
prompts (str): prompt text
Returns:
str: reply
def request_completion(self, messages: list):
"""Request a reply from the completion API
"""
# Select the request class for the API
config = pkg.utils.context.get_config()
# Choose the API based on the model
ai: ModelRequest = create_openai_model_request(
config.completion_api_params['model'],
'user',
config.openai_config["http_proxy"] if "http_proxy" in config.openai_config else None
request: RequestBase
model: str = config.completion_api_params['model']
cp_params = config.completion_api_params.copy()
del cp_params['model']
request = select_request_cls(model, messages, cp_params)
# Call the API
for resp in request:
if resp['usage']['total_tokens'] > 0:
self.audit_mgr.report_text_model_usage(
model,
resp['usage']['total_tokens']
)
ai.request(
prompts,
**config.completion_api_params
)
response = ai.get_response()
logging.debug("OpenAI response: %s", response)
# Record usage
current_round_token = 0
if 'model' in config.completion_api_params:
self.audit_mgr.report_text_model_usage(config.completion_api_params['model'],
ai.get_total_tokens())
current_round_token = ai.get_total_tokens()
elif 'engine' in config.completion_api_params:
self.audit_mgr.report_text_model_usage(config.completion_api_params['engine'],
response['usage']['total_tokens'])
current_round_token = response['usage']['total_tokens']
return ai.get_message(), current_round_token
yield resp
def request_image(self, prompt) -> dict:
"""请求图片接口回复


@@ -7,6 +7,11 @@ Completion - text-davinci-003 and other models
"""
import openai, logging, threading, asyncio
import openai.error as aiE
import tiktoken
from pkg.openai.api.model import RequestBase
from pkg.openai.api.completion import CompletionRequest
from pkg.openai.api.chat_completion import ChatCompletionRequest
COMPLETION_MODELS = {
'text-davinci-003',
@@ -20,11 +25,14 @@ COMPLETION_MODELS = {
CHAT_COMPLETION_MODELS = {
'gpt-3.5-turbo',
'gpt-3.5-turbo-0301',
'gpt-3.5-turbo-16k',
'gpt-3.5-turbo-0613',
'gpt-3.5-turbo-16k-0613',
# 'gpt-3.5-turbo-0301',
'gpt-4',
'gpt-4-0314',
'gpt-4-0613',
'gpt-4-32k',
'gpt-4-32k-0314'
'gpt-4-32k-0613'
}
EDIT_MODELS = {
@@ -36,153 +44,76 @@ IMAGE_MODELS = {
}
class ModelRequest:
"""模型接口请求父类"""
can_chat = False
runtime: threading.Thread = None
ret = {}
proxy: str = None
request_ready = True
error_info: str = "If you see this message without any actual error, please open an issue with your config file attached"
def __init__(self, model_name, user_name, request_fun, http_proxy:str = None, time_out = None):
self.model_name = model_name
self.user_name = user_name
self.request_fun = request_fun
self.time_out = time_out
if http_proxy != None:
self.proxy = http_proxy
openai.proxy = self.proxy
self.request_ready = False
async def __a_request__(self, **kwargs):
"""异步请求"""
try:
self.ret:dict = await self.request_fun(**kwargs)
self.request_ready = True
except aiE.APIConnectionError as e:
self.error_info = "{}\n请检查网络连接或代理是否正常".format(e)
raise ConnectionError(self.error_info)
except ValueError as e:
self.error_info = "{}\n该错误可能是由于http_proxy格式设置错误引起的"
except Exception as e:
self.error_info = "{}\n由于请求异常产生的未知错误,请查看日志".format(e)
raise type(e)(self.error_info)
def request(self, **kwargs):
"""向接口发起请求"""
if self.proxy != None: #异步请求
self.request_ready = False
loop = asyncio.new_event_loop()
self.runtime = threading.Thread(
target=loop.run_until_complete,
args=(self.__a_request__(**kwargs),)
)
self.runtime.start()
else: # sync request
self.ret = self.request_fun(**kwargs)
def __msg_handle__(self, msg):
"""将prompt dict转换成接口需要的格式"""
return msg
def ret_handle(self):
'''
Handler for API message returns.
If overridden, this method should check the async thread state, or call super() where the check is needed.
'''
if self.runtime != None and isinstance(self.runtime, threading.Thread):
self.runtime.join(self.time_out)
if self.request_ready:
return
raise Exception(self.error_info)
def get_total_tokens(self):
try:
return self.ret['usage']['total_tokens']
except:
return 0
def get_message(self):
return self.message
def get_response(self):
return self.ret
class ChatCompletionModel(ModelRequest):
"""ChatCompletion接口的请求实现"""
Chat_role = ['system', 'user', 'assistant']
def __init__(self, model_name, user_name, http_proxy:str = None, **kwargs):
if http_proxy == None:
request_fun = openai.ChatCompletion.create
else:
request_fun = openai.ChatCompletion.acreate
self.can_chat = True
super().__init__(model_name, user_name, request_fun, http_proxy, **kwargs)
def request(self, prompts, **kwargs):
prompts = self.__msg_handle__(prompts)
kwargs['messages'] = prompts
super().request(**kwargs)
self.ret_handle()
def __msg_handle__(self, msgs):
temp_msgs = []
# copy msgs into temp_msgs
for msg in msgs:
temp_msgs.append(msg.copy())
return temp_msgs
def get_message(self):
return self.ret["choices"][0]["message"]['content'] #需要时直接加载加快请求速度,降低内存消耗
class CompletionModel(ModelRequest):
"""Completion接口的请求实现"""
def __init__(self, model_name, user_name, http_proxy:str = None, **kwargs):
if http_proxy == None:
request_fun = openai.Completion.create
else:
request_fun = openai.Completion.acreate
super().__init__(model_name, user_name, request_fun, http_proxy, **kwargs)
def request(self, prompts, **kwargs):
prompts = self.__msg_handle__(prompts)
kwargs['prompt'] = prompts
super().request(**kwargs)
self.ret_handle()
def __msg_handle__(self, msgs):
prompt = ''
for msg in msgs:
prompt = prompt + "{}: {}\n".format(msg['role'], msg['content'])
# for msg in msgs:
# if msg['role'] == 'assistant':
# prompt = prompt + "{}\n".format(msg['content'])
# else:
# prompt = prompt + "{}:{}\n".format(msg['role'] , msg['content'])
prompt = prompt + "assistant: "
return prompt
def get_message(self):
return self.ret["choices"][0]["text"]
def create_openai_model_request(model_name: str, user_name: str = 'user', http_proxy:str = None) -> ModelRequest:
"""使用给定的模型名称创建模型请求对象"""
def select_request_cls(model_name: str, messages: list, args: dict) -> RequestBase:
if model_name in CHAT_COMPLETION_MODELS:
model = ChatCompletionModel(model_name, user_name, http_proxy)
return ChatCompletionRequest(model_name, messages, **args)
elif model_name in COMPLETION_MODELS:
model = CompletionModel(model_name, user_name, http_proxy)
else :
log = "找不到模型[{}],请检查配置文件".format(model_name)
logging.error(log)
raise IndexError(log)
logging.debug("使用接口[{}]创建模型请求[{}]".format(model.__class__.__name__, model_name))
return model
return CompletionRequest(model_name, messages, **args)
raise ValueError("不支持模型[{}],请检查配置文件".format(model_name))
def count_chat_completion_tokens(messages: list, model: str) -> int:
"""Return the number of tokens used by a list of messages."""
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
print("Warning: model not found. Using cl100k_base encoding.")
encoding = tiktoken.get_encoding("cl100k_base")
if model in {
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-16k-0613",
"gpt-4-0314",
"gpt-4-32k-0314",
"gpt-4-0613",
"gpt-4-32k-0613",
}:
tokens_per_message = 3
tokens_per_name = 1
elif model == "gpt-3.5-turbo-0301":
tokens_per_message = 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
tokens_per_name = -1 # if there's a name, the role is omitted
elif "gpt-3.5-turbo" in model:
# print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
return count_chat_completion_tokens(messages, model="gpt-3.5-turbo-0613")
elif "gpt-4" in model:
# print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
return count_chat_completion_tokens(messages, model="gpt-4-0613")
else:
raise NotImplementedError(
f"""count_chat_completion_tokens() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""
)
num_tokens = 0
for message in messages:
num_tokens += tokens_per_message
for key, value in message.items():
num_tokens += len(encoding.encode(value))
if key == "name":
num_tokens += tokens_per_name
num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
return num_tokens
def count_completion_tokens(messages: list, model: str) -> int:
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
print("Warning: model not found. Using cl100k_base encoding.")
encoding = tiktoken.get_encoding("cl100k_base")
text = ""
for message in messages:
text += message['role'] + message['content'] + "\n"
text += "assistant: "
return len(encoding.encode(text))
def count_tokens(messages: list, model: str):
if model in CHAT_COMPLETION_MODELS:
return count_chat_completion_tokens(messages, model)
elif model in COMPLETION_MODELS:
return count_completion_tokens(messages, model)
raise ValueError("不支持模型[{}],请检查配置文件".format(model))


@@ -1,28 +0,0 @@
# Billing module
# Deprecated https://github.com/RockChinQ/QChatGPT/issues/81
import logging
pricing = {
"base": { # text models are priced per 1000 characters
"text-davinci-003": 0.02,
},
"image": {
"256x256": 0.016,
"512x512": 0.018,
"1024x1024": 0.02,
}
}
def language_base_price(model, text):
salt_rate = 0.93
length = ((len(text.encode('utf-8')) - len(text)) / 2 + len(text)) * salt_rate
logging.debug("text length: %d" % length)
return pricing["base"][model] * length / 1000
def image_price(size):
logging.debug("image size: %s" % size)
return pricing["image"][size]


@@ -16,6 +16,8 @@ import pkg.utils.context
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
from pkg.openai.modelmgr import count_tokens
# All sessions kept at runtime
sessions = {}
@@ -83,7 +85,7 @@ def load_sessions():
# Get the session with the given name; create a new one if it does not exist
def get_session(session_name: str):
def get_session(session_name: str) -> 'Session':
global sessions
if session_name not in sessions:
sessions[session_name] = Session(session_name)
@@ -107,9 +109,6 @@ class Session:
prompt = []
"""Rounds of the conversation, stored as a list"""
token_counts = []
"""Token count of each round"""
default_prompt = []
"""This session's default prompt"""
@@ -195,7 +194,7 @@ class Session:
# Request a reply
# This function blocks
def append(self, text: str) -> str:
def append(self, text: str=None) -> str:
"""向session中添加一条消息返回接口回复"""
self.last_interact_timestamp = int(time.time())
@@ -215,29 +214,92 @@ class Session:
config = pkg.utils.context.get_config()
max_length = config.prompt_submit_length
prompts, counts = self.cut_out(text, max_length)
local_default_prompt = self.default_prompt.copy()
local_prompt = self.prompt.copy()
# Count the prompt tokens before the request
total_token_before_query = 0
for token_count in counts:
total_token_before_query += token_count
# Emit the PromptPreProcessing event
args = {
'session_name': self.name,
'default_prompt': self.default_prompt,
'prompt': self.prompt,
'text_message': text,
}
# Request completion from the API
message, total_token = pkg.utils.context.get_openai_manager().request_completion(
prompts,
event = pkg.plugin.host.emit(plugin_models.PromptPreProcessing, **args)
if event.get_return_value('default_prompt') is not None:
local_default_prompt = event.get_return_value('default_prompt')
if event.get_return_value('prompt') is not None:
local_prompt = event.get_return_value('prompt')
if event.get_return_value('text_message') is not None:
text = event.get_return_value('text_message')
prompts, _ = self.cut_out(text, max_length, local_default_prompt, local_prompt)
res_text = ""
pending_msgs = []
total_tokens = 0
for resp in pkg.utils.context.get_openai_manager().request_completion(prompts):
if resp['choices'][0]['message']['type'] == 'text': # plain reply
res_text += resp['choices'][0]['message']['content']
total_tokens += resp['usage']['total_tokens']
pending_msgs.append(
{
"role": "assistant",
"content": resp['choices'][0]['message']['content']
}
)
elif resp['choices'][0]['message']['type'] == 'function_call':
# self.prompt.append(
# {
# "role": "assistant",
# "content": "function call: "+json.dumps(resp['choices'][0]['message']['function_call'])
# }
# )
total_tokens += resp['usage']['total_tokens']
elif resp['choices'][0]['message']['type'] == 'function_return':
# self.prompt.append(
# {
# "role": "function",
# "name": resp['choices'][0]['message']['function_name'],
# "content": json.dumps(resp['choices'][0]['message']['content'])
# }
# )
# total_tokens += resp['usage']['total_tokens']
pass
# Request completion from the API
# message, total_token = pkg.utils.context.get_openai_manager().request_completion(
# prompts,
# )
# Fetched successfully; process the reply
res_test = message
res_ans = res_test.strip()
# res_test = message
res_ans = res_text.strip()
# Append both sides of this round to the prompt
# self.prompt.append({'role': 'user', 'content': text})
# self.prompt.append({'role': 'assistant', 'content': res_ans})
if text:
self.prompt.append({'role': 'user', 'content': text})
self.prompt.append({'role': 'assistant', 'content': res_ans})
# Append pending_msgs
self.prompt += pending_msgs
# Append this round's token count to token_counts
self.token_counts.append(total_token-total_token_before_query)
logging.debug("Tokens used this round: {}, session counts: {}".format(total_token-total_token_before_query, self.token_counts))
# self.token_counts.append(total_tokens-total_token_before_query)
# logging.debug("Tokens used this round: {}, session counts: {}".format(total_tokens-total_token_before_query, self.token_counts))
if self.just_switched_to_exist_session:
self.just_switched_to_exist_session = False
@@ -261,7 +323,7 @@ class Session:
return question
# Build the conversation body
def cut_out(self, msg: str, max_tokens: int) -> tuple[list, list]:
def cut_out(self, msg: str, max_tokens: int, default_prompt: list, prompt: list) -> tuple[list, list]:
"""将现有prompt进行切割处理使得新的prompt长度不超过max_tokens
:return: (新的prompt, 新的token_counts)
@@ -274,29 +336,25 @@ class Session:
# 包装目前的对话回合内容
changable_prompts = []
changable_counts = []
# 倒着来, 遍历prompt的步长为2, 遍历tokens_counts的步长为1
changable_index = len(self.prompt) - 1
token_count_index = len(self.token_counts) - 1
packed_tokens = 0
use_model = pkg.utils.context.get_config().completion_api_params['model']
while changable_index >= 0 and token_count_index >= 0:
if packed_tokens + self.token_counts[token_count_index] > max_tokens:
ptr = len(prompt) - 1
# 直接从后向前扫描拼接,不管是否是整回合
while ptr >= 0:
if count_tokens(prompt[ptr:ptr+1]+changable_prompts, use_model) > max_tokens:
break
changable_prompts.insert(0, self.prompt[changable_index])
changable_prompts.insert(0, self.prompt[changable_index - 1])
changable_counts.insert(0, self.token_counts[token_count_index])
packed_tokens += self.token_counts[token_count_index]
changable_prompts.insert(0, prompt[ptr])
changable_index -= 2
token_count_index -= 1
ptr -= 1
# 将default_prompt和changable_prompts合并
result_prompt = self.default_prompt + changable_prompts
result_prompt = default_prompt + changable_prompts
# 添加当前问题
if msg:
result_prompt.append(
{
'role': 'user',
@@ -304,12 +362,9 @@ class Session:
}
)
logging.debug('cut_out: {}\nchangable section tokens: {}\npacked counts: {}\nsession counts: {}'.format(json.dumps(result_prompt, ensure_ascii=False, indent=4),
packed_tokens,
changable_counts,
self.token_counts))
logging.debug("cut_out: {}".format(json.dumps(result_prompt, ensure_ascii=False, indent=4)))
return result_prompt, changable_counts
return result_prompt, count_tokens(changable_prompts, use_model)
# 持久化session
def persistence(self):
@@ -327,7 +382,7 @@ class Session:
json.dumps(self.prompt), json.dumps(self.default_prompt), json.dumps(self.token_counts))
# 重置session
def reset(self, explicit: bool = False, expired: bool = False, schedule_new: bool = True, use_prompt: str = None):
def reset(self, explicit: bool = False, expired: bool = False, schedule_new: bool = True, use_prompt: str = None, persist: bool = False):
if self.prompt:
self.persistence()
if explicit:
@@ -345,6 +400,7 @@ class Session:
if expired:
pkg.utils.context.get_database_manager().set_session_expired(self.name, self.create_timestamp)
if not persist: # 不要求保持default prompt
self.default_prompt = self.get_default_prompt(use_prompt)
self.prompt = []
self.token_counts = []
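The reworked `cut_out` above delegates all measurement to a `count_tokens(messages, model)` helper whose definition is not part of this hunk. As a rough sketch of what such a tiktoken-based helper typically looks like (an illustration, not this repository's actual implementation; the per-message overhead constants follow the common OpenAI cookbook approximation):
```python
import tiktoken

def count_tokens(messages: list, model: str = "gpt-3.5-turbo") -> int:
    # 粗略统计一组chat消息的token数(示意实现,非本仓库真实代码)
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # 未收录的模型名回退到cl100k_base编码
        encoding = tiktoken.get_encoding("cl100k_base")
    num_tokens = 0
    for message in messages:
        num_tokens += 4  # 每条消息的固定开销(role、分隔符等,近似值)
        for value in message.values():
            num_tokens += len(encoding.encode(str(value)))
    num_tokens += 2  # 回复引导token的近似开销
    return num_tokens
```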

View File

@@ -8,12 +8,16 @@ import sys
import shutil
import traceback
import pkg.utils.updater as updater
import pkg.utils.context as context
import pkg.plugin.switch as switch
import pkg.plugin.settings as settings
import pkg.qqbot.adapter as msadapter
from mirai import Mirai
from CallingGPT.session.session import Session
__plugins__ = {}
"""插件列表
@@ -40,6 +44,15 @@ __plugins__ = {}
__plugins_order__ = []
"""插件顺序"""
__enable_content_functions__ = True
"""是否启用内容函数"""
__callable_functions__ = []
"""供GPT调用的函数结构"""
__function_inst_map__: dict[str, callable] = {}
"""函数名:实例 映射"""
def generate_plugin_order():
"""根据__plugin__生成插件初始顺序无视是否启用"""
@@ -78,7 +91,7 @@ def walk_plugin_path(module, prefix='', path_prefix=''):
__current_module_path__ = "plugins/"+path_prefix + item.name + '.py'
importlib.import_module(module.__name__ + '.' + item.name)
logging.info('加载模块: plugins/{} 成功'.format(path_prefix + item.name + '.py'))
logging.debug('加载模块: plugins/{} 成功'.format(path_prefix + item.name + '.py'))
except:
logging.error('加载模块: plugins/{} 失败: {}'.format(path_prefix + item.name + '.py', sys.exc_info()))
traceback.print_exc()
@@ -100,6 +113,10 @@ def load_plugins():
# 加载插件顺序
settings.load_settings()
# 输出已注册的内容函数列表
logging.debug("registered content functions: {}".format(__callable_functions__))
logging.debug("function instance map: {}".format(__function_inst_map__))
def initialize_plugins():
"""初始化插件"""
@@ -176,6 +193,43 @@ def uninstall_plugin(plugin_name: str) -> str:
return "plugins/"+plugin_path
def update_plugin(plugin_name: str):
"""更新插件"""
# 检查是否有远程地址记录
target_plugin_dir = "plugins/" + __plugins__[plugin_name]['path'].replace("\\", "/").split("plugins/")[1].split("/")[0]
remote_url = updater.get_remote_url(target_plugin_dir)
if remote_url == "https://github.com/RockChinQ/QChatGPT" or remote_url == "https://gitee.com/RockChin/QChatGPT" \
or remote_url == "" or remote_url is None or remote_url == "http://github.com/RockChinQ/QChatGPT" or remote_url == "http://gitee.com/RockChin/QChatGPT":
raise Exception("插件没有远程地址记录,无法更新")
# 把远程clone到temp/plugins/update/插件名
logging.info("克隆插件储存库: {}".format(remote_url))
from dulwich import porcelain
clone_target_dir = "temp/plugins/update/"+target_plugin_dir.split("/")[-1]+"/"
if os.path.exists(clone_target_dir):
shutil.rmtree(clone_target_dir)
if not os.path.exists(clone_target_dir):
os.makedirs(clone_target_dir)
repo = porcelain.clone(remote_url, clone_target_dir, checkout=True)
# 检查此目录是否包含requirements.txt
if os.path.exists(clone_target_dir+"requirements.txt"):
logging.info("检测到requirements.txt正在安装依赖")
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements(clone_target_dir+"requirements.txt")
import pkg.utils.log as log
log.reset_logging()
# 将temp/plugins/update/插件名 覆盖到 plugins/插件名
shutil.rmtree(target_plugin_dir)
shutil.copytree(clone_target_dir, target_plugin_dir)
class EventContext:
"""事件上下文"""
eid = 0
@@ -212,7 +266,7 @@ class EventContext:
self.__return_value__[key] = []
self.__return_value__[key].append(ret)
def get_return(self, key: str):
def get_return(self, key: str) -> list:
"""获取key的所有返回值"""
if key in self.__return_value__:
return self.__return_value__[key]
@@ -261,7 +315,9 @@ class PluginHost:
"""插件宿主"""
def __init__(self):
"""初始化插件宿主"""
context.set_plugin_host(self)
self.calling_gpt_session = Session([])
def get_runtime_context(self) -> context:
"""获取运行时上下文pkg.utils.context模块的对象
@@ -276,13 +332,17 @@ class PluginHost:
"""获取机器人对象"""
return context.get_qqbot_manager().bot
def get_bot_adapter(self) -> msadapter.MessageSourceAdapter:
"""获取消息源适配器"""
return context.get_qqbot_manager().adapter
def send_person_message(self, person, message):
"""发送私聊消息"""
asyncio.run(self.get_bot().send_friend_message(person, message))
self.get_bot_adapter().send_message("person", person, message)
def send_group_message(self, group, message):
"""发送群消息"""
asyncio.run(self.get_bot().send_group_message(group, message))
self.get_bot_adapter().send_message("group", group, message)
def notify_admin(self, message):
"""通知管理员"""
@@ -339,3 +399,6 @@ class PluginHost:
event_context.__return_value__))
return event_context
if __name__ == "__main__":
pass

View File

@@ -132,18 +132,64 @@ KeySwitched = "key_switched"
key_list: list[str] api-key列表
"""
PromptPreProcessing = "prompt_pre_processing"
"""每回合调用接口前对prompt进行预处理时触发此事件不支持阻止默认行为
kwargs:
session_name: str 会话名称(<launcher_type>_<launcher_id>)
default_prompt: list 此session使用的情景预设内容
prompt: list 此session现有的prompt内容
text_message: str 用户发送的消息文本
returns (optional):
default_prompt: list 修改后的情景预设内容
prompt: list 修改后的prompt内容
text_message: str 修改后的消息文本
"""
def on(event: str):
def on(*args, **kwargs):
"""注册事件监听器
:param
event: str 事件名称
"""
return Plugin.on(event)
return Plugin.on(*args, **kwargs)
def func(*args, **kwargs):
"""注册内容函数声明此函数为一个内容函数在对话中将发送此函数给GPT以供其调用
此函数可以具有任意的参数,但必须按照[此文档](https://github.com/RockChinQ/CallingGPT/wiki/1.-Function-Format#function-format)
所述的格式编写函数的docstring。
此功能仅支持在使用gpt-3.5或gpt-4系列模型时使用。
"""
return Plugin.func(*args, **kwargs)
__current_registering_plugin__ = ""
def require_ver(ge: str, le: str="v999.9.9") -> bool:
"""插件版本要求装饰器
Args:
ge (str): 最低版本要求
le (str, optional): 最高版本要求
Returns:
bool: 是否满足要求;返回False表示无法获取版本号,返回True表示满足要求,不满足要求时直接报错
"""
qchatgpt_version = ""
from pkg.utils.updater import get_current_tag, compare_version_str
try:
qchatgpt_version = get_current_tag() # 从updater模块获取版本号
except:
return False
if compare_version_str(qchatgpt_version, ge) < 0 or \
(compare_version_str(qchatgpt_version, le) > 0):
raise Exception("QChatGPT 版本不满足要求,某些功能(可能是由插件提供的)无法正常使用。(要求版本:{}-{},但当前版本:{}".format(ge, le, qchatgpt_version))
return True
class Plugin:
"""插件基类"""
@@ -176,6 +222,34 @@ class Plugin:
return wrapper
@classmethod
def func(cls, name: str=None):
"""内容函数装饰器
"""
global __current_registering_plugin__
from CallingGPT.entities.namespace import get_func_schema
def wrapper(func):
function_schema = get_func_schema(func)
function_schema['name'] = __current_registering_plugin__ + '-' + (func.__name__ if name is None else name)
function_schema['enabled'] = True
host.__function_inst_map__[function_schema['name']] = function_schema['function']
del function_schema['function']
# logging.debug("registering content function: p='{}', f='{}', s={}".format(__current_registering_plugin__, func, function_schema))
host.__callable_functions__.append(
function_schema
)
return func
return wrapper
def register(name: str, description: str, version: str, author: str):
"""注册插件, 此函数作为装饰器使用

View File

@@ -8,7 +8,10 @@ import logging
def wrapper_dict_from_runtime_context() -> dict:
"""从变量中包装settings.json的数据字典"""
settings = {
"order": []
"order": [],
"functions": {
"enabled": host.__enable_content_functions__
}
}
for plugin_name in host.__plugins_order__:
@@ -22,6 +25,11 @@ def apply_settings(settings: dict):
if "order" in settings:
host.__plugins_order__ = settings["order"]
if "functions" in settings:
if "enabled" in settings["functions"]:
host.__enable_content_functions__ = settings["functions"]["enabled"]
# logging.debug("set content function enabled: {}".format(host.__enable_content_functions__))
def dump_settings():
"""保存settings.json数据"""
@@ -78,6 +86,17 @@ def load_settings():
settings["order"].append(plugin_name)
settings_modified = True
if "functions" not in settings:
settings["functions"] = {
"enabled": host.__enable_content_functions__
}
settings_modified = True
elif "enabled" not in settings["functions"]:
settings["functions"]["enabled"] = host.__enable_content_functions__
settings_modified = True
logging.info("已全局{}内容函数。".format("启用" if settings["functions"]["enabled"] else "禁用"))
apply_settings(settings)
if settings_modified:

View File

@@ -28,6 +28,11 @@ def apply_switch(switch: dict):
for plugin_name in switch:
host.__plugins__[plugin_name]["enabled"] = switch[plugin_name]["enabled"]
# 查找此插件的所有内容函数
for func in host.__callable_functions__:
if func['name'].startswith(plugin_name + '-'):
func['enabled'] = switch[plugin_name]["enabled"]
def dump_switch():
"""保存开关数据"""

136
pkg/qqbot/adapter.py Normal file
View File

@@ -0,0 +1,136 @@
# MessageSource的适配器
import typing
import mirai
class MessageSourceAdapter:
def __init__(self, config: dict):
pass
def send_message(
self,
target_type: str,
target_id: str,
message: mirai.MessageChain
):
"""发送消息
Args:
target_type (str): 目标类型,`person`或`group`
target_id (str): 目标ID
message (mirai.MessageChain): YiriMirai库的消息链
"""
raise NotImplementedError
def reply_message(
self,
message_source: mirai.MessageEvent,
message: mirai.MessageChain,
quote_origin: bool = False
):
"""回复消息
Args:
message_source (mirai.MessageEvent): YiriMirai消息源事件
message (mirai.MessageChain): YiriMirai库的消息链
quote_origin (bool, optional): 是否引用原消息. Defaults to False.
"""
raise NotImplementedError
def is_muted(self, group_id: int) -> bool:
"""获取账号是否在指定群被禁言"""
raise NotImplementedError
def register_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注册事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
raise NotImplementedError
def unregister_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注销事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
raise NotImplementedError
def run_sync(self):
"""以阻塞的方式运行适配器"""
raise NotImplementedError
def kill(self) -> bool:
"""关闭适配器
Returns:
bool: 是否成功关闭;热重载时若此函数返回False,则不会重载MessageSource底层
"""
raise NotImplementedError
class MessageConverter:
"""消息链转换器基类"""
@staticmethod
def yiri2target(message_chain: mirai.MessageChain):
"""将YiriMirai消息链转换为目标消息链
Args:
message_chain (mirai.MessageChain): YiriMirai消息链
Returns:
typing.Any: 目标消息链
"""
raise NotImplementedError
@staticmethod
def target2yiri(message_chain: typing.Any) -> mirai.MessageChain:
"""将目标消息链转换为YiriMirai消息链
Args:
message_chain (typing.Any): 目标消息链
Returns:
mirai.MessageChain: YiriMirai消息链
"""
raise NotImplementedError
class EventConverter:
"""事件转换器基类"""
@staticmethod
def yiri2target(event: typing.Type[mirai.Event]):
"""将YiriMirai事件转换为目标事件
Args:
event (typing.Type[mirai.Event]): YiriMirai事件
Returns:
typing.Any: 目标事件
"""
raise NotImplementedError
@staticmethod
def target2yiri(event: typing.Any) -> mirai.Event:
"""将目标事件的调用参数转换为YiriMirai的事件参数对象
Args:
event (typing.Any): 目标事件
Returns:
typing.Type[mirai.Event]: YiriMirai事件
"""
raise NotImplementedError

View File

@@ -260,8 +260,8 @@ def execute(context: Context) -> list:
while True:
try:
logging.debug('执行指令: {}'.format(path))
node = __command_list__[path]
logging.debug('执行指令: {}'.format(path))
# 检查权限
if ctx.privilege < node['privilege']:

View File

@@ -0,0 +1,28 @@
from ..aamgr import AbstractCommandNode, Context
import logging
@AbstractCommandNode.register(
parent=None,
name="func",
description="管理内容函数",
usage="!func",
aliases=[],
privilege=1
)
class FuncCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
from pkg.plugin.models import host
reply = []
reply_str = "当前已加载的内容函数:\n\n"
index = 1
for func in host.__callable_functions__:
reply_str += "{}. {}{}:\n{}\n\n".format(index, ("(已禁用) " if not func['enabled'] else ""), func['name'], func['description'])
index += 1
reply = [reply_str]
return True, reply

View File

@@ -10,9 +10,9 @@ import pkg.utils.updater as updater
parent=None,
name="plugin",
description="插件管理",
usage="!plugin\n!plugin get <插件仓库地址>\!plugin update\n!plugin del <插件名>\n!plugin on <插件名>\n!plugin off <插件名>",
usage="!plugin\n!plugin get <插件仓库地址>\n!plugin update\n!plugin del <插件名>\n!plugin on <插件名>\n!plugin off <插件名>",
aliases=[],
privilege=2
privilege=1
)
class PluginCommand(AbstractCommandNode):
@classmethod
@@ -97,37 +97,34 @@ class PluginUpdateCommand(AbstractCommandNode):
plugin_list = plugin_host.__plugins__
reply = []
if len(ctx.crt_params) > 0:
def closure():
try:
import pkg.utils.context
updated = []
if ctx.crt_params[0] == 'all':
for key in plugin_list:
plugin = plugin_list[key]
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
success = updater.pull_latest("/".join(plugin['path'].split('/')[:-1]))
if success:
updated.append(plugin['name'])
plugin_host.update_plugin(key)
updated.append(key)
else:
if ctx.crt_params[0] in plugin_list:
plugin_host.update_plugin(ctx.crt_params[0])
updated.append(ctx.crt_params[0])
else:
raise Exception("未找到插件: {}".format(ctx.crt_params[0]))
# 检查是否有requirements.txt
pkg.utils.context.get_qqbot_manager().notify_admin("正在安装依赖...")
for key in plugin_list:
plugin = plugin_list[key]
if os.path.exists("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt"):
logging.info("{}检测到requirements.txt安装依赖".format(plugin['name']))
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt")
import pkg.utils.log as log
log.reset_logging()
pkg.utils.context.get_qqbot_manager().notify_admin("已更新插件: {}".format(", ".join(updated)))
pkg.utils.context.get_qqbot_manager().notify_admin("已更新插件: {}, 请发送 !reload 重载插件".format(", ".join(updated)))
except Exception as e:
logging.error("插件更新失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件更新失败:{} 请尝试手动更新插件".format(e))
reply = ["[bot]正在更新插件,请勿重复发起..."]
threading.Thread(target=closure).start()
reply = ["[bot]正在更新所有插件,请勿重复发起..."]
else:
reply = ["[bot]请指定要更新的插件, 或使用 !plugin update all 更新所有插件"]
return True, reply
@@ -191,6 +188,11 @@ class PluginOnOffCommand(AbstractCommandNode):
plugin_name = ctx.crt_params[0]
if plugin_name in plugin_list:
plugin_list[plugin_name]['enabled'] = new_status
for func in plugin_host.__callable_functions__:
if func['name'].startswith(plugin_name+"-"):
func['enabled'] = new_status
plugin_switch.dump_switch()
reply = ["[bot]已{}插件: {}".format("启用" if new_status else "禁用", plugin_name)]
else:

View File

@@ -0,0 +1,27 @@
from ..aamgr import AbstractCommandNode, Context
@AbstractCommandNode.register(
parent=None,
name="continue",
description="继续未完成的响应",
usage="!continue",
aliases=[],
privilege=1
)
class ContinueCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
import config
session_name = ctx.session_name
reply = []
session = pkg.openai.session.get_session(session_name)
text = session.append()
reply = [text]
return True, reply

View File

@@ -8,7 +8,7 @@ def config_operation(cmd, params):
config = pkg.utils.context.get_config()
reply_str = ""
if len(params) == 0:
reply = ["[bot]err:请输入配置项"]
reply = ["[bot]err:请输入!cmd cfg查看使用方法"]
else:
cfg_name = params[0]
if cfg_name == 'all':
@@ -26,45 +26,61 @@ def config_operation(cmd, params):
else:
reply_str += "{}: {}\n".format(cfg, getattr(config, cfg))
reply = [reply_str]
elif cfg_name in dir(config):
else:
cfg_entry_path = cfg_name.split('.')
try:
if len(params) == 1:
# 按照配置项类型进行格式化
if isinstance(getattr(config, cfg_name), str):
reply_str = "[bot]配置项{}: \"{}\"\n".format(cfg_name, getattr(config, cfg_name))
elif isinstance(getattr(config, cfg_name), dict):
cfg_entry = getattr(config, cfg_entry_path[0])
if len(cfg_entry_path) > 1:
for i in range(1, len(cfg_entry_path)):
cfg_entry = cfg_entry[cfg_entry_path[i]]
if isinstance(cfg_entry, str):
reply_str = "[bot]配置项{}: \"{}\"\n".format(cfg_name, cfg_entry)
elif isinstance(cfg_entry, dict):
reply_str = "[bot]配置项{}: {}\n".format(cfg_name,
json.dumps(getattr(config, cfg_name),
json.dumps(cfg_entry,
ensure_ascii=False, indent=4))
else:
reply_str = "[bot]配置项{}: {}\n".format(cfg_name, getattr(config, cfg_name))
reply_str = "[bot]配置项{}: {}\n".format(cfg_name, cfg_entry)
reply = [reply_str]
else:
cfg_value = " ".join(params[1:])
# 类型转换如果是json则转换为字典
if cfg_value == 'true':
cfg_value = True
elif cfg_value == 'false':
cfg_value = False
elif cfg_value.isdigit():
cfg_value = int(cfg_value)
elif cfg_value.startswith('{') and cfg_value.endswith('}'):
cfg_value = json.loads(cfg_value)
else:
try:
cfg_value = float(cfg_value)
except ValueError:
pass
# if cfg_value == 'true':
# cfg_value = True
# elif cfg_value == 'false':
# cfg_value = False
# elif cfg_value.isdigit():
# cfg_value = int(cfg_value)
# elif cfg_value.startswith('{') and cfg_value.endswith('}'):
# cfg_value = json.loads(cfg_value)
# else:
# try:
# cfg_value = float(cfg_value)
# except ValueError:
# pass
cfg_value = eval(cfg_value)
# 检查类型是否匹配
if isinstance(getattr(config, cfg_name), type(cfg_value)):
setattr(config, cfg_name, cfg_value)
pkg.utils.context.set_config(config)
cfg_entry = getattr(config, cfg_entry_path[0])
if len(cfg_entry_path) > 1:
for i in range(1, len(cfg_entry_path) - 1):
cfg_entry = cfg_entry[cfg_entry_path[i]]
if isinstance(cfg_entry[cfg_entry_path[-1]], type(cfg_value)):
cfg_entry[cfg_entry_path[-1]] = cfg_value
reply = ["[bot]配置项{}修改成功".format(cfg_name)]
else:
reply = ["[bot]err:配置项{}类型不匹配".format(cfg_name)]
else:
setattr(config, cfg_entry_path[0], cfg_value)
reply = ["[bot]配置项{}修改成功".format(cfg_name)]
except AttributeError:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
except ValueError:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
# else:
# reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
return reply

View File

@@ -3,9 +3,9 @@ import json
import os
import threading
import mirai.models.bus
from mirai import At, GroupMessage, MessageEvent, Mirai, StrangerMessage, WebSocketAdapter, HTTPAdapter, \
FriendMessage, Image
FriendMessage, Image, MessageChain, Plain
from func_timeout import func_set_timeout
import pkg.openai.session
@@ -21,12 +21,22 @@ import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import tips as tips_custom
import pkg.qqbot.adapter as msadapter
# 检查消息是否符合泛响应匹配机制
def check_response_rule(text: str):
def check_response_rule(group_id:int, text: str):
config = pkg.utils.context.get_config()
rules = config.response_rules
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
rules = config.response_rules[str(group_id)]
else:
rules = config.response_rules['default']
# 检查前缀匹配
if 'prefix' in rules:
for rule in rules['prefix']:
@@ -44,19 +54,39 @@ def check_response_rule(text: str):
return False, ""
def response_at():
def response_at(group_id: int):
config = pkg.utils.context.get_config()
if 'at' not in config.response_rules:
use_response_rule = config.response_rules
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
else:
use_response_rule = config.response_rules['default']
if 'at' not in use_response_rule:
return True
return config.response_rules['at']
return use_response_rule['at']
def random_responding():
def random_responding(group_id):
config = pkg.utils.context.get_config()
if 'random_rate' in config.response_rules:
use_response_rule = config.response_rules
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
else:
use_response_rule = config.response_rules['default']
if 'random_rate' in use_response_rule:
import random
return random.random() < config.response_rules['random_rate']
return random.random() < use_response_rule['random_rate']
return False
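As the branching above shows, `config.response_rules` now accepts two shapes: a flat rule dict (detected by a top-level `prefix` key) or a per-group mapping with a `default` entry. A hypothetical `config.py` fragment for the per-group shape; only the keys `default`, `at`, `prefix` and `random_rate` are taken from the code above, the values are made up:
```python
response_rules = {
    "default": {              # 未单独配置的群聊使用此规则
        "at": True,           # 被@时响应
        "prefix": ["/ask"],   # 前缀触发
        "random_rate": 0.0,   # 随机响应概率
    },
    "12345678": {             # 群号12345678的专属规则(示例)
        "at": False,
        "prefix": ["bot"],
    },
}
```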
@@ -64,57 +94,52 @@ def random_responding():
class QQBotManager:
retry = 3
bot: Mirai = None
adapter: msadapter.MessageSourceAdapter = None
bot_account_id: int = 0
reply_filter = None
enable_banlist = False
enable_private = True
enable_group = True
ban_person = []
ban_group = []
def __init__(self, mirai_http_api_config: dict, timeout: int = 60, retry: int = 3, first_time_init=True):
self.timeout = timeout
self.retry = retry
def __init__(self, first_time_init=True):
import config
# 加载禁用列表
if os.path.exists("banlist.py"):
import banlist
self.enable_banlist = banlist.enable
self.ban_person = banlist.person
self.ban_group = banlist.group
logging.info("加载禁用列表: person: {}, group: {}".format(self.ban_person, self.ban_group))
config = pkg.utils.context.get_config()
if os.path.exists("sensitive.json") \
and config.sensitive_word_filter is not None \
and config.sensitive_word_filter:
with open("sensitive.json", "r", encoding="utf-8") as f:
sensitive_json = json.load(f)
self.reply_filter = pkg.qqbot.filter.ReplyFilter(
sensitive_words=sensitive_json['words'],
mask=sensitive_json['mask'] if 'mask' in sensitive_json else '*',
mask_word=sensitive_json['mask_word'] if 'mask_word' in sensitive_json else ''
)
else:
self.reply_filter = pkg.qqbot.filter.ReplyFilter([])
self.timeout = config.process_message_timeout
self.retry = config.retry_times
# 由于YiriMirai的bot对象是单例的且shutdown方法暂时无法使用
# 故只在第一次初始化时创建bot对象重载之后使用原bot对象
# 因此bot的配置不支持热重载
if first_time_init:
self.first_time_init(mirai_http_api_config)
logging.info("Use adapter:" + config.msg_source_adapter)
if config.msg_source_adapter == 'yirimirai':
from pkg.qqbot.sources.yirimirai import YiriMiraiAdapter
mirai_http_api_config = config.mirai_http_api_config
self.bot_account_id = config.mirai_http_api_config['qq']
self.adapter = YiriMiraiAdapter(mirai_http_api_config)
elif config.msg_source_adapter == 'nakuru':
from pkg.qqbot.sources.nakuru import NakuruProjectAdapter
self.adapter = NakuruProjectAdapter(config.nakuru_config)
self.bot_account_id = self.adapter.bot_account_id
else:
self.bot = pkg.utils.context.get_qqbot_manager().bot
self.adapter = pkg.utils.context.get_qqbot_manager().adapter
self.bot_account_id = pkg.utils.context.get_qqbot_manager().bot_account_id
pkg.utils.context.set_qqbot_manager(self)
# 注册诸事件
# Caution: 注册新的事件处理器之后请务必在unsubscribe_all中编写相应的取消订阅代码
@self.bot.on(FriendMessage)
async def on_friend_message(event: FriendMessage):
def friend_message_handler(event: FriendMessage):
def on_friend_message(event: FriendMessage):
def friend_message_handler():
# 触发事件
args = {
"launcher_type": "person",
@@ -131,13 +156,15 @@ class QQBotManager:
pkg.utils.context.get_thread_ctl().submit_user_task(
friend_message_handler,
event
)
self.adapter.register_listener(
FriendMessage,
on_friend_message
)
@self.bot.on(StrangerMessage)
async def on_stranger_message(event: StrangerMessage):
def on_stranger_message(event: StrangerMessage):
def stranger_message_handler(event: StrangerMessage):
def stranger_message_handler():
# 触发事件
args = {
"launcher_type": "person",
@@ -154,11 +181,15 @@ class QQBotManager:
pkg.utils.context.get_thread_ctl().submit_user_task(
stranger_message_handler,
event
)
# nakuru不区分好友和陌生人故仅为yirimirai注册陌生人事件
if config.msg_source_adapter == 'yirimirai':
self.adapter.register_listener(
StrangerMessage,
on_stranger_message
)
@self.bot.on(GroupMessage)
async def on_group_message(event: GroupMessage):
def on_group_message(event: GroupMessage):
def group_message_handler(event: GroupMessage):
# 触发事件
@@ -179,62 +210,76 @@ class QQBotManager:
group_message_handler,
event
)
self.adapter.register_listener(
GroupMessage,
on_group_message
)
def unsubscribe_all():
"""取消所有订阅
用于在热重载流程中卸载所有事件处理器
"""
assert isinstance(self.bot, Mirai)
bus = self.bot.bus
assert isinstance(bus, mirai.models.bus.ModelEventBus)
bus.unsubscribe(FriendMessage, on_friend_message)
bus.unsubscribe(StrangerMessage, on_stranger_message)
bus.unsubscribe(GroupMessage, on_group_message)
import config
self.adapter.unregister_listener(
FriendMessage,
on_friend_message
)
if config.msg_source_adapter == 'yirimirai':
self.adapter.unregister_listener(
StrangerMessage,
on_stranger_message
)
self.adapter.unregister_listener(
GroupMessage,
on_group_message
)
self.unsubscribe_all = unsubscribe_all
def go(self, func, *args, **kwargs):
self.pool.submit(func, *args, **kwargs)
# 加载禁用列表
if os.path.exists("banlist.py"):
import banlist
self.enable_banlist = banlist.enable
self.ban_person = banlist.person
self.ban_group = banlist.group
logging.info("加载禁用列表: person: {}, group: {}".format(self.ban_person, self.ban_group))
def first_time_init(self, mirai_http_api_config: dict):
"""热重载后不再运行此函数"""
if 'adapter' not in mirai_http_api_config or mirai_http_api_config['adapter'] == "WebSocketAdapter":
bot = Mirai(
qq=mirai_http_api_config['qq'],
adapter=WebSocketAdapter(
verify_key=mirai_http_api_config['verifyKey'],
host=mirai_http_api_config['host'],
port=mirai_http_api_config['port']
)
)
elif mirai_http_api_config['adapter'] == "HTTPAdapter":
bot = Mirai(
qq=mirai_http_api_config['qq'],
adapter=HTTPAdapter(
verify_key=mirai_http_api_config['verifyKey'],
host=mirai_http_api_config['host'],
port=mirai_http_api_config['port']
)
)
if hasattr(banlist, "enable_private"):
self.enable_private = banlist.enable_private
if hasattr(banlist, "enable_group"):
self.enable_group = banlist.enable_group
config = pkg.utils.context.get_config()
if os.path.exists("sensitive.json") \
and config.sensitive_word_filter is not None \
and config.sensitive_word_filter:
with open("sensitive.json", "r", encoding="utf-8") as f:
sensitive_json = json.load(f)
self.reply_filter = pkg.qqbot.filter.ReplyFilter(
sensitive_words=sensitive_json['words'],
mask=sensitive_json['mask'] if 'mask' in sensitive_json else '*',
mask_word=sensitive_json['mask_word'] if 'mask_word' in sensitive_json else ''
)
else:
raise Exception("未知的适配器类型")
self.bot = bot
self.reply_filter = pkg.qqbot.filter.ReplyFilter([])
def send(self, event, msg, check_quote=True):
config = pkg.utils.context.get_config()
asyncio.run(
self.bot.send(event, msg, quote=True if config.quote_origin and check_quote else False))
self.adapter.reply_message(
event,
msg,
quote_origin=True if config.quote_origin and check_quote else False
)
# 私聊消息处理
def on_person_message(self, event: MessageEvent):
import config
reply = ''
if event.sender.id == self.bot.qq:
if not self.enable_private:
logging.debug("已在banlist.py中禁用所有私聊")
elif event.sender.id == self.bot_account_id:
pass
else:
if Image in event.message_chain:
@@ -274,11 +319,10 @@ class QQBotManager:
def on_group_message(self, event: GroupMessage):
import config
reply = ''
def process(text=None) -> str:
replys = ""
if At(self.bot.qq) in event.message_chain:
event.message_chain.remove(At(self.bot.qq))
if At(self.bot_account_id) in event.message_chain:
event.message_chain.remove(At(self.bot_account_id))
# 超时则重试,重试超过次数则放弃
failed = 0
@@ -309,19 +353,21 @@ class QQBotManager:
return replys
if Image in event.message_chain:
if not self.enable_group:
logging.debug("已在banlist.py中禁用所有群聊")
elif Image in event.message_chain:
pass
else:
if At(self.bot.qq) in event.message_chain and response_at():
if At(self.bot_account_id) in event.message_chain and response_at(event.group.id):
# 直接调用
reply = process()
else:
check, result = check_response_rule(str(event.message_chain).strip())
check, result = check_response_rule(event.group.id, str(event.message_chain).strip())
if check:
reply = process(result.strip())
# 检查是否随机响应
elif random_responding():
elif random_responding(event.group.id):
logging.info("随机响应group_{}消息".format(event.group.id))
reply = process()
@@ -334,22 +380,33 @@ class QQBotManager:
if config.admin_qq != 0 and config.admin_qq != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
send_task = self.bot.send_friend_message(config.admin_qq, "[bot]{}".format(message))
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
config.admin_qq,
MessageChain([Plain("[bot]{}".format(message))])
)
else:
for adm in config.admin_qq:
send_task = self.bot.send_friend_message(adm, "[bot]{}".format(message))
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
adm,
MessageChain([Plain("[bot]{}".format(message))])
)
def notify_admin_message_chain(self, message):
config = pkg.utils.context.get_config()
if config.admin_qq != 0 and config.admin_qq != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
send_task = self.bot.send_friend_message(config.admin_qq, message)
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
config.admin_qq,
message
)
else:
for adm in config.admin_qq:
send_task = self.bot.send_friend_message(adm, message)
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
adm,
message
)

View File

@@ -114,6 +114,10 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = handle_exception("{}会话调用API失败:{}".format(session_name, e),
"[bot]err:RateLimitError,请重试或联系作者,或等待修复")
except openai.error.InvalidRequestError as e:
if config.auto_reset and "This model's maximum context length is" in str(e):
session.reset(persist=True)
reply = [tips_custom.session_auto_reset_message]
else:
reply = handle_exception("{}API调用参数错误:{}\n".format(
session_name, e), "[bot]err:API调用参数错误请联系管理员或等待修复")
except openai.error.ServiceUnavailableError as e:
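The new branch above reads a `config.auto_reset` flag and replies with `tips_custom.session_auto_reset_message`. A hypothetical pair of config lines showing how this would be wired up (the names come from the code above, the values are illustrative):
```python
# config.py(示例值)
auto_reset = True  # 上下文超长触发InvalidRequestError时自动重置会话

# tips.py(示例值)
session_auto_reset_message = "会话上下文超长,已自动重置,请重新发送消息"
```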

View File

@@ -66,22 +66,24 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
# 检查是否被禁言
if launcher_type == 'group':
result = mgr.bot.member_info(target=launcher_id, member_id=mgr.bot.qq).get()
result = asyncio.run(result)
if result.mute_time_remaining > 0:
logging.info("机器人被禁言,跳过消息处理(group_{},剩余{}s)".format(launcher_id,
result.mute_time_remaining))
is_muted = mgr.adapter.is_muted(launcher_id)
if is_muted:
logging.info("机器人被禁言,跳过消息处理(group_{})".format(launcher_id))
return reply
import config
if config.income_msg_check:
if mgr.reply_filter.is_illegal(text_message):
return MessageChain(Plain("[bot] 你的提问中有不合适的内容, 请更换措辞~"))
return MessageChain(Plain("[bot] 消息中存在不合适的内容, 请更换措辞"))
pkg.openai.session.get_session(session_name).acquire_response_lock()
text_message = text_message.strip()
# 为强制消息延迟计时
start_time = time.time()
# 处理消息
try:
@@ -170,4 +172,23 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
finally:
pkg.openai.session.get_session(session_name).release_response_lock()
# 检查延迟时间
if config.force_delay_range[1] == 0:
delay_time = 0
else:
import random
# 从延迟范围中随机取一个值(浮点)
rdm = random.uniform(config.force_delay_range[0], config.force_delay_range[1])
spent = time.time() - start_time
# 如果花费时间小于延迟时间,则延迟
delay_time = rdm - spent if rdm - spent > 0 else 0
# 延迟
if delay_time > 0:
logging.info("[风控] 强制延迟{:.2f}秒(如需关闭请到config.py修改force_delay_range字段)".format(delay_time))
time.sleep(delay_time)
return MessageChain(reply)
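A short hypothetical `config.py` entry for the forced-delay logic added above; the field name `force_delay_range` appears in the code and log message, the range values here are illustrative:
```python
# 回复前强制延迟的随机区间,单位秒;第二项为0时关闭此功能
force_delay_range = [1.5, 3.0]
```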

View File

321
pkg/qqbot/sources/nakuru.py Normal file
View File

@@ -0,0 +1,321 @@
import mirai
from ..adapter import MessageSourceAdapter, MessageConverter, EventConverter
import nakuru
import nakuru.entities.components as nkc
import asyncio
import typing
import traceback
import logging
import json
from pkg.qqbot.blob import Forward, ForwardMessageNode, ForwardMessageDiaplay
class NakuruProjectMessageConverter(MessageConverter):
"""消息转换器"""
@staticmethod
def yiri2target(message_chain: mirai.MessageChain) -> list:
msg_list = []
if type(message_chain) is mirai.MessageChain:
msg_list = message_chain.__root__
elif type(message_chain) is list:
msg_list = message_chain
else:
raise Exception("Unknown message type: " + str(message_chain) + str(type(message_chain)))
nakuru_msg_list = []
# 遍历并转换
for component in msg_list:
if type(component) is mirai.Plain:
nakuru_msg_list.append(nkc.Plain(component.text, False))
elif type(component) is mirai.Image:
if component.url is not None:
nakuru_msg_list.append(nkc.Image.fromURL(component.url))
elif component.base64 is not None:
nakuru_msg_list.append(nkc.Image.fromBase64(component.base64))
elif component.path is not None:
nakuru_msg_list.append(nkc.Image.fromFileSystem(component.path))
elif type(component) is mirai.Face:
nakuru_msg_list.append(nkc.Face(id=component.face_id))
elif type(component) is mirai.At:
nakuru_msg_list.append(nkc.At(qq=component.target))
elif type(component) is mirai.AtAll:
nakuru_msg_list.append(nkc.AtAll())
elif type(component) is mirai.Voice:
if component.url is not None:
nakuru_msg_list.append(nkc.Record.fromURL(component.url))
elif component.path is not None:
nakuru_msg_list.append(nkc.Record.fromFileSystem(component.path))
elif type(component) is Forward:
# 转发消息
yiri_forward_node_list = component.node_list
nakuru_forward_node_list = []
# 遍历并转换
for yiri_forward_node in yiri_forward_node_list:
try:
content_list = NakuruProjectMessageConverter.yiri2target(yiri_forward_node.message_chain)
nakuru_forward_node = nkc.Node(
name=yiri_forward_node.sender_name,
uin=yiri_forward_node.sender_id,
time=int(yiri_forward_node.time.timestamp()) if yiri_forward_node.time is not None else None,
content=content_list
)
nakuru_forward_node_list.append(nakuru_forward_node)
except Exception as e:
import traceback
traceback.print_exc()
nakuru_msg_list.append(nakuru_forward_node_list)
else:
nakuru_msg_list.append(nkc.Plain(str(component)))
return nakuru_msg_list
@staticmethod
def target2yiri(message_chain: typing.Any, message_id: int = -1) -> mirai.MessageChain:
"""将Yiri的消息链转换为YiriMirai的消息链"""
assert type(message_chain) is list
yiri_msg_list = []
import datetime
# 添加Source组件以标记message_id等信息
yiri_msg_list.append(mirai.models.message.Source(id=message_id, time=datetime.datetime.now()))
for component in message_chain:
if type(component) is nkc.Plain:
yiri_msg_list.append(mirai.Plain(text=component.text))
elif type(component) is nkc.Image:
yiri_msg_list.append(mirai.Image(url=component.url))
elif type(component) is nkc.Face:
yiri_msg_list.append(mirai.Face(face_id=component.id))
elif type(component) is nkc.At:
yiri_msg_list.append(mirai.At(target=component.qq))
elif type(component) is nkc.AtAll:
yiri_msg_list.append(mirai.AtAll())
else:
pass
logging.debug("转换后的消息链: " + str(yiri_msg_list))
chain = mirai.MessageChain(yiri_msg_list)
return chain
class NakuruProjectEventConverter(EventConverter):
"""事件转换器"""
@staticmethod
def yiri2target(event: typing.Type[mirai.Event]):
if event is mirai.GroupMessage:
return nakuru.GroupMessage
elif event is mirai.FriendMessage:
return nakuru.FriendMessage
else:
raise Exception("未支持转换的事件类型: " + str(event))
@staticmethod
def target2yiri(event: typing.Any) -> mirai.Event:
yiri_chain = NakuruProjectMessageConverter.target2yiri(event.message, event.message_id)
if type(event) is nakuru.FriendMessage: # 私聊消息事件
return mirai.FriendMessage(
sender=mirai.models.entities.Friend(
id=event.sender.user_id,
nickname=event.sender.nickname,
remark=event.sender.nickname
),
message_chain=yiri_chain,
time=event.time
)
elif type(event) is nakuru.GroupMessage: # 群聊消息事件
permission = "MEMBER"
if event.sender.role == "admin":
permission = "ADMINISTRATOR"
elif event.sender.role == "owner":
permission = "OWNER"
import mirai.models.entities as entities
return mirai.GroupMessage(
sender=mirai.models.entities.GroupMember(
id=event.sender.user_id,
member_name=event.sender.nickname,
permission=permission,
group=mirai.models.entities.Group(
id=event.group_id,
name=event.sender.nickname,
permission=entities.Permission.Member
),
special_title=event.sender.title,
join_timestamp=0,
last_speak_timestamp=0,
mute_time_remaining=0,
),
message_chain=yiri_chain,
time=event.time
)
else:
raise Exception("未支持转换的事件类型: " + str(event))
class NakuruProjectAdapter(MessageSourceAdapter):
"""nakuru-project适配器"""
bot: nakuru.CQHTTP
bot_account_id: int
message_converter: NakuruProjectMessageConverter = NakuruProjectMessageConverter()
event_converter: NakuruProjectEventConverter = NakuruProjectEventConverter()
listener_list: list[dict]
def __init__(self, cfg: dict):
"""初始化nakuru-project的对象"""
self.bot = nakuru.CQHTTP(**cfg)
self.listener_list = []
# nakuru库有bug,这个接口没法带access_token,会失败
# 所以目前自行发请求
import config
import requests
resp = requests.get(
url="http://{}:{}/get_login_info".format(config.nakuru_config['host'], config.nakuru_config['http_port']),
headers={
'Authorization': "Bearer " + config.nakuru_config['token'] if 'token' in config.nakuru_config else ""
},
timeout=5
)
if resp.status_code == 403:
logging.error("go-cqhttp拒绝访问请检查config.py中nakuru_config的token是否与go-cqhttp设置的access-token匹配")
raise Exception("go-cqhttp拒绝访问请检查config.py中nakuru_config的token是否与go-cqhttp设置的access-token匹配")
self.bot_account_id = int(resp.json()['data']['user_id'])
def send_message(
self,
target_type: str,
target_id: str,
message: typing.Union[mirai.MessageChain, list],
converted: bool = False
):
task = None
converted_msg = self.message_converter.yiri2target(message) if not converted else message
# 检查是否有转发消息
has_forward = False
for msg in converted_msg:
if type(msg) is list: # 转发消息,仅回复此消息组件
has_forward = True
converted_msg = msg
break
if has_forward:
if target_type == "group":
task = self.bot.sendGroupForwardMessage(int(target_id), converted_msg)
elif target_type == "person":
task = self.bot.sendPrivateForwardMessage(int(target_id), converted_msg)
else:
raise Exception("Unknown target type: " + target_type)
else:
if target_type == "group":
task = self.bot.sendGroupMessage(int(target_id), converted_msg)
elif target_type == "person":
task = self.bot.sendFriendMessage(int(target_id), converted_msg)
else:
raise Exception("Unknown target type: " + target_type)
asyncio.run(task)
def reply_message(
self,
message_source: mirai.MessageEvent,
message: mirai.MessageChain,
quote_origin: bool = False
):
message = self.message_converter.yiri2target(message)
if quote_origin:
# 在前方添加引用组件
message.insert(0, nkc.Reply(
id=message_source.message_chain.message_id,
)
)
if type(message_source) is mirai.GroupMessage:
self.send_message(
"group",
message_source.sender.group.id,
message,
converted=True
)
elif type(message_source) is mirai.FriendMessage:
self.send_message(
"person",
message_source.sender.id,
message,
converted=True
)
else:
raise Exception("Unknown message source type: " + str(type(message_source)))
def is_muted(self, group_id: int) -> bool:
import time
# 检查是否被禁言
group_member_info = asyncio.run(self.bot.getGroupMemberInfo(group_id, self.bot_account_id))
return group_member_info.shut_up_timestamp > int(time.time())
def register_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
try:
logging.debug("注册监听器: " + str(event_type) + " -> " + str(callback))
# 包装函数
async def listener_wrapper(app: nakuru.CQHTTP, source: self.event_converter.yiri2target(event_type)):
callback(self.event_converter.target2yiri(source))
# 将包装函数和原函数的对应关系存入列表
self.listener_list.append(
{
"event_type": event_type,
"callable": callback,
"wrapper": listener_wrapper,
}
)
# 注册监听器
self.bot.receiver(self.event_converter.yiri2target(event_type).__name__)(listener_wrapper)
logging.debug("注册完成")
except Exception as e:
traceback.print_exc()
raise e
def unregister_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
nakuru_event_name = self.event_converter.yiri2target(event_type).__name__
new_event_list = []
# 从本对象的监听器列表中查找并删除
target_wrapper = None
for listener in self.listener_list:
if listener["event_type"] == event_type and listener["callable"] == callback:
target_wrapper = listener["wrapper"]
self.listener_list.remove(listener)
break
if target_wrapper is None:
raise Exception("未找到对应的监听器")
for func in self.bot.event[nakuru_event_name]:
if func.callable != target_wrapper:
new_event_list.append(func)
self.bot.event[nakuru_event_name] = new_event_list
def run_sync(self):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
self.bot.run()
def kill(self) -> bool:
return False

View File

@@ -0,0 +1,116 @@
from ..adapter import MessageSourceAdapter
import mirai
import mirai.models.bus
import asyncio
import typing
class YiriMiraiAdapter(MessageSourceAdapter):
"""YiriMirai适配器"""
bot: mirai.Mirai
def __init__(self, config: dict):
"""初始化YiriMirai的对象"""
if 'adapter' not in config or \
config['adapter'] == 'WebSocketAdapter':
self.bot = mirai.Mirai(
qq=config['qq'],
adapter=mirai.WebSocketAdapter(
host=config['host'],
port=config['port'],
verify_key=config['verifyKey']
)
)
elif config['adapter'] == 'HTTPAdapter':
self.bot = mirai.Mirai(
qq=config['qq'],
adapter=mirai.HTTPAdapter(
host=config['host'],
port=config['port'],
verify_key=config['verifyKey']
)
)
else:
raise Exception('Unknown adapter for YiriMirai: ' + config['adapter'])
def send_message(
self,
target_type: str,
target_id: str,
message: mirai.MessageChain
):
"""发送消息
Args:
target_type (str): 目标类型,`person`或`group`
target_id (str): 目标ID
message (mirai.MessageChain): YiriMirai库的消息链
"""
task = None
if target_type == 'person':
task = self.bot.send_friend_message(int(target_id), message)
elif target_type == 'group':
task = self.bot.send_group_message(int(target_id), message)
else:
raise Exception('Unknown target type: ' + target_type)
asyncio.run(task)
def reply_message(
self,
message_source: mirai.MessageEvent,
message: mirai.MessageChain,
quote_origin: bool = False
):
"""回复消息
Args:
message_source (mirai.MessageEvent): YiriMirai消息源事件
message (mirai.MessageChain): YiriMirai库的消息链
quote_origin (bool, optional): 是否引用原消息. Defaults to False.
"""
asyncio.run(self.bot.send(message_source, message, quote_origin))
def is_muted(self, group_id: int) -> bool:
result = self.bot.member_info(target=group_id, member_id=self.bot.qq).get()
result = asyncio.run(result)
if result.mute_time_remaining > 0:
return True
return False
def register_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注册事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
self.bot.on(event_type)(callback)
def unregister_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注销事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
assert isinstance(self.bot, mirai.Mirai)
bus = self.bot.bus
assert isinstance(bus, mirai.models.bus.ModelEventBus)
bus.unsubscribe(event_type, callback)
def run_sync(self):
self.bot.run()
def kill(self) -> bool:
return False

File diff suppressed because one or more lines are too long

View File

@@ -48,7 +48,7 @@ def reset_logging():
logging.basicConfig(level=config.logging_level, # 设置日志输出格式
filename=log_file_name, # log日志输出的文件位置和文件名
format="[%(asctime)s.%(msecs)03d] %(filename)s (%(lineno)d) - [%(levelname)s] : %(message)s",
format="[%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s",
# 日志输出的格式
# -8表示占位符让输出左对齐输出长度都为8位
datefmt="%Y-%m-%d %H:%M:%S" # 时间输出的格式

View File

@@ -34,13 +34,18 @@ def pull_latest(repo_path: str) -> bool:
return True
def is_newer_ignored_bugfix_ver(new_tag: str, old_tag: str):
"""判断版本是否更新,忽略第四位版本"""
def is_newer(new_tag: str, old_tag: str):
"""判断版本是否更新,忽略第四位版本和第一位版本"""
if new_tag == old_tag:
return False
new_tag = new_tag.split(".")
old_tag = old_tag.split(".")
# 判断主版本是否相同
if new_tag[0] != old_tag[0]:
return False
if len(new_tag) < 4:
return True
@@ -73,6 +78,34 @@ def get_current_tag() -> str:
return current_tag
def compare_version_str(v0: str, v1: str) -> int:
"""比较两个版本号"""
# 删除版本号前的v
if v0.startswith("v"):
v0 = v0[1:]
if v1.startswith("v"):
v1 = v1[1:]
v0:list = v0.split(".")
v1:list = v1.split(".")
# 如果两个版本号节数不同把短的后面用0补齐
if len(v0) < len(v1):
v0.extend(["0"]*(len(v1)-len(v0)))
elif len(v0) > len(v1):
v1.extend(["0"]*(len(v0)-len(v1)))
# 从高位向低位比较
for i in range(len(v0)):
if int(v0[i]) > int(v1[i]):
return 1
elif int(v0[i]) < int(v1[i]):
return -1
return 0
def update_all(cli: bool = False) -> bool:
"""检查更新并下载源码"""
current_tag = get_current_tag()
@@ -97,7 +130,7 @@ def update_all(cli: bool = False) -> bool:
else:
print("更新日志: {}".format(rls_notes))
if latest_rls == {} and not is_newer_ignored_bugfix_ver(latest_tag_name, current_tag): # 没有新版本
if latest_rls == {} and not is_newer(latest_tag_name, current_tag): # 没有新版本
return False
# 下载最新版本的zip到temp目录
@@ -254,7 +287,7 @@ def is_new_version_available() -> bool:
latest_tag_name = rls['tag_name']
break
return is_newer_ignored_bugfix_ver(latest_tag_name, current_tag)
return is_newer(latest_tag_name, current_tag)
def get_rls_notes() -> list:

View File

@@ -1,9 +1,12 @@
requests~=2.28.1
openai~=0.27.4
dulwich~=0.21.3
requests~=2.31.0
openai~=0.27.8
dulwich~=0.21.5
colorlog~=6.6.0
yiri-mirai~=0.2.6.1
yiri-mirai
websockets
urllib3~=1.26.10
func_timeout~=4.3.5
Pillow
nakuru-project-idk
CallingGPT
tiktoken

View File

@@ -1 +1,14 @@
[]
[
{
"id": 0,
"time": "2023-04-24 16:05:20",
"timestamp": 1682323520,
"content": "现已支持使用go-cqhttp替换mirai作为QQ登录框架, 请更新并查看 https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE"
},
{
"id": 1,
"time": "2023-05-21 17:33:18",
"timestamp": 1684661598,
"content": "NewBing不再需要鉴权若您正在使用revLibs逆向库插件请立即使用!plugin update revLibs命令更新插件到最新版。"
}
]

BIN
res/logo.png Normal file (binary image, 203 KiB; not shown)

BIN (four unnamed binary image files added: 129 KiB, 101 KiB, 98 KiB, 22 KiB; not shown)

BIN
res/social.png Normal file (binary image, 70 KiB; not shown)

View File

@@ -1,3 +1,13 @@
# 是否处理群聊消息
# 为False时忽略所有群聊消息
# 优先级高于下方禁用列表
enable_group = True
# 是否处理私聊消息
# 为False时忽略所有私聊消息
# 优先级高于下方禁用列表
enable_private = True
# 是否启用禁用列表
enable = True

View File

@@ -1,12 +1,14 @@
{
"comment": "以下为命令权限请设置到cmdpriv.json中。关于此功能的说明请查看https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%91%BD%E4%BB%A4%E6%9D%83%E9%99%90%E6%8E%A7%E5%88%B6",
"draw": 1,
"plugin": 2,
"func": 1,
"plugin": 1,
"plugin.get": 2,
"plugin.update": 2,
"plugin.del": 2,
"plugin.off": 2,
"plugin.on": 2,
"continue": 1,
"default": 1,
"default.set": 2,
"del": 1,

View File

@@ -8,6 +8,9 @@
- [Mirai](https://github.com/mamoe/mirai) 高效率 QQ 机器人支持库
- [YiriMirai](https://github.com/YiriMiraiProject/YiriMirai) 一个轻量级、低耦合的基于 mirai-api-http 的 Python SDK。
- [go-cqhttp](https://github.com/Mrs4s/go-cqhttp) cqhttp的golang实现,轻量、原生跨平台.
- [nakuru-project](https://github.com/Lxns-Network/nakuru-project) - 一款为 go-cqhttp 的正向 WebSocket 设计的 Python SDK支持纯 CQ 码与消息链的转换处理
- [nakuru-project-idk](https://github.com/idoknow/nakuru-project-idk) - 由idoknow维护的nakuru-project分支
- [dulwich](https://github.com/jelmer/dulwich) Pure-Python Git implementation
- [OpenAI API](https://openai.com/api/) OpenAI API

View File

@@ -0,0 +1,70 @@
# 配置go-cqhttp用于登录QQ
> 若您是从旧版本升级到此版本以使用go-cqhttp的用户请您按照`config-template.py`的内容修改`config.py`,添加`msg_source_adapter`配置项并将其设为`nakuru`,同时添加`nakuru_config`字段按照说明配置。
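A hypothetical `config.py` fragment matching this migration note. The `host`, `http_port` and `token` keys appear in the nakuru adapter code later in this diff; the remaining key names and all values are assumptions about nakuru's `CQHTTP` constructor rather than confirmed settings:
```python
msg_source_adapter = "nakuru"

nakuru_config = {
    "host": "localhost",  # go-cqhttp所在主机
    "port": 6700,         # 正向WebSocket端口,与下文config.yml中的6700对应
    "http_port": 5700,    # HTTP端口,用于get_login_info等接口
    "token": "",          # 对应go-cqhttp的access-token,未设置则留空
}
```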
## 步骤
1. 从[go-cqhttp的Release](https://github.com/Mrs4s/go-cqhttp/releases/latest)下载最新的go-cqhttp可执行文件建议直接下载可执行文件压缩包而不是安装器
2. 解压并运行,首次运行会询问需要开放的网络协议,**请填入`02`并回车,必须输入`02`❗❗❗❗❗❗❗**
<h1> 你这里必须得输入`02`,你懂么,`0`必须得输入,看好了,看好下面输入什么了吗?别他妈的搁那就输个`2`完了启动连不上还跑群里问,问一个我踢一个。 </h1>
```
C:\Softwares\go-cqhttp.old> .\go-cqhttp.exe
未找到配置文件,正在为您生成配置文件中!
请选择你需要的通信方式:
> 0: HTTP通信
> 1: 云函数服务
> 2: 正向 Websocket 通信
> 3: 反向 Websocket 通信
请输入你需要的编号(0-9),可输入多个,同一编号也可输入多个(如: 233)
您的选择是:02
```
提示已生成`config.yml`文件后,关闭go-cqhttp。
3. 打开go-cqhttp同目录的`config.yml`
1. 编辑账号登录信息
只需要修改下方`uin`和`password`为你要登录的机器人账号的QQ号和密码即可。
**若您不填写,将会在启动时请求扫码登录。**
```yaml
account: # 账号相关
uin: 1233456 # QQ账号
password: '' # 密码为空时使用扫码登录
encrypt: false # 是否开启密码加密
status: 0 # 在线状态 请参考 https://docs.go-cqhttp.org/guide/config.html#在线状态
relogin: # 重连设置
delay: 3 # 首次重连延迟, 单位秒
interval: 3 # 重连间隔
max-times: 0 # 最大重连次数, 0为无限制
```
2. 修改websocket端口
在`config.yml`下方找到以下内容
```yaml
- ws:
# 正向WS服务器监听地址
address: 0.0.0.0:8080
middlewares:
<<: *default # 引用默认中间件
```
**将`0.0.0.0:8080`改为`0.0.0.0:6700`**,保存并关闭`config.yml`。
3. 若您的服务器位于公网,强烈建议您填写`access-token` (可选)
```yaml
# 默认中间件锚点
default-middlewares: &default
# 访问密钥, 强烈推荐在公网的服务器设置
access-token: ''
```
4. 配置完成重新启动go-cqhttp
> 若启动后登录不成功,请尝试根据[此文档](https://docs.go-cqhttp.org/guide/config.html#%E8%AE%BE%E5%A4%87%E4%BF%A1%E6%81%AF)修改`device.json`的协议编号。

View File

@@ -180,6 +180,7 @@
!draw <提示语> 进行绘图
!version 查看当前版本并检查更新
!resend 重新回复上一个问题
!continue 继续响应未完成的回合(通常用于内容函数继续调用)
!plugin 用法请查看插件使用页的`管理`章节
!default 查看可用的情景预设值
```
@@ -225,7 +226,7 @@
格式: `!cfg <配置项名称> <配置项新值>`
以修改`default_prompt`示例
```
!cfg default_prompt 我是Rock Chin
!cfg default_prompt "我是Rock Chin"
```
输出示例
@@ -243,7 +244,15 @@
```
!~all
!~default_prompt
!~default_prompt 我是Rock Chin
!~default_prompt "我是Rock Chin"
```
5. 配置项名称支持使用点号(.)拼接以索引子配置项
例如: `openai_config.api_key`将索引`config`字典中的`openai_config`字典中的`api_key`字段,可以通过这个方式查看或修改此子配置项
```
!~openai_config.api_key
```
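Note that the reworked handler in this diff parses the new value with Python's `eval`, so a written value must be a valid Python literal: strings need quotes, and nested paths use dots. Illustrative writes (key names and values are examples only):
```
!~openai_config.api_key "sk-xxxxxxxx"
!~completion_api_params.model "gpt-3.5-turbo-16k"
```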
</details>
@@ -367,4 +376,5 @@ prompt_submit_length = <模型单次请求token数上限> - 情景预设中token
### 加入黑名单
编辑`banlist.py`,设置`enable = True`,并在其中的`person`和`group`列表中加入要封禁的人或群聊,修改完成后重启程序或进行热重载
- 支持禁用所有`私聊`或`群聊`,请查看`banlist.py`中的`enable_private`和`enable_group`字段
- 编辑`banlist.py`,设置`enable = True`,并在其中的`person`和`group`列表中加入要封禁的人或群聊,修改完成后重启程序或进行热重载

View File

@@ -1,5 +1,6 @@
以下是QChatGPT实现原理等技术信息贡献之前请仔细阅读
> 太久没更了,过时了,建议读源码,~~注释还挺全的~~
> 请先阅读OpenAI API的相关文档 https://beta.openai.com/docs/ 以下信息假定您已了解OpenAI模型的相关特性及其接口的调用方法。
## 术语

View File

@@ -0,0 +1,24 @@
> 说白了就是ChatGPT官方插件那种东西
内容函数是基于[GPT的Function Calling能力](https://platform.openai.com/docs/guides/gpt/function-calling)实现的,这是一种嵌入对话中由GPT自动调用的函数。
例如,我们为GPT提供一个函数`access_the_web`,并提供其详细的描述以及其参数的描述,那么当我们在与GPT对话时涉及类似以下内容时:
```
Q: 请搜索一下github上有哪些QQ机器人项目
Q: 请为我搜索一些不错的云服务商网站?
Q: 阅读并总结这篇文章:https://zhuanlan.zhihu.com/p/607570830
Q: 搜一下清远今天天气如何
```
GPT将会回复一个对`access_the_web`的函数调用请求,QChatGPT将自动处理执行该调用,并返回结果给GPT,使其生成新的回复。
当然,函数调用功能不止局限于网络访问,还可以实现图片处理、科学计算、行程规划等需要调用函数的功能,理论上我们可以通过内容函数实现与`ChatGPT Plugins`相同的功能。
- 您需要使用`v2.5.0`以上的版本才能加载包含内容函数的插件
- 您需要同时在`config.py`中的`completion_api_params`中设置`model`为支持函数调用的模型,推荐使用`gpt-3.5-turbo-16k`
- 使用此功能可能会造成难以预期的账号余额消耗,请关注
## QChatGPT的一些不错的内容函数插件
- [WebwlkrPlugin](https://github.com/RockChinQ/WebwlkrPlugin) - 让机器人能联网!!

View File

@@ -4,6 +4,8 @@ QChatGPT 插件使用Wiki
`plugins`目录下的所有`.py`程序都将被加载,除了`__init__.py`之外的模块支持热加载
> 插件分为`行为插件`和`内容插件`两种:行为插件由主程序运行中的事件驱动,内容插件由GPT生成的内容驱动,请查看内容插件页
## 安装
### 储存库克隆(推荐)
@@ -23,15 +25,18 @@ QChatGPT 插件使用Wiki
## 管理
### !plugin
### !plugin
```
!plugin 列出所有已安装的插件
!plugin get <储存库地址> 从Git储存库安装插件(需要管理员权限)
!plugin update 更新所有插件(需要管理员权限,仅支持从储存库安装的插件)
!plugin update all 更新所有插件(需要管理员权限,仅支持从储存库安装的插件)
!plugin update <插件名> 更新指定插件
!plugin del <插件名> 删除插件(需要管理员权限)
!plugin on <插件名> 启用插件(需要管理员权限)
!plugin off <插件名> 禁用插件(需要管理员权限)
!func 列出所有内容函数
```
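For reference, `!func` output follows the format string in the `FuncCommand` implementation earlier in this diff; with the Webwlkr plugin loaded it would look roughly like this (description shortened):
```
当前已加载的内容函数:

1. Webwlkr-access_the_web:
Call this function to search about the question before you answer any questions.
```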
### 控制插件执行顺序
@@ -42,3 +47,8 @@ QChatGPT 插件使用Wiki
无需卸载即可管理插件的开关
编辑`plugins`目录下的`switch.json`文件,将相应的插件的`enabled`字段设置为`true/false(开/关)`,之后重启程序或执行热重载即可控制插件开关
### 控制全局内容函数开关
内容函数是基于[GPT的Function Calling能力](https://platform.openai.com/docs/guides/gpt/function-calling)实现的,这是一种嵌入对话中由GPT自动调用的函数。
每个插件可以自行注册内容函数,您可以在`plugins`目录下的`settings.json`中设置`functions`下的`enabled``true``false`控制这些内容函数的启用或禁用。
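A sketch of the resulting `plugins/settings.json`, matching the structure that `wrapper_dict_from_runtime_context` writes out elsewhere in this diff (plugin names are illustrative):
```json
{
    "order": ["Hello", "Webwlkr"],
    "functions": {
        "enabled": false
    }
}
```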

View File

@@ -113,6 +113,199 @@ class HelloPlugin(Plugin):
- 一个目录内可以存放多个Python程序文件以独立出插件的各个功能便于开发者管理但不建议在一个目录内注册多个插件
- 插件需要的依赖库请在插件目录下的`requirements.txt`中指定,程序从储存库获取此插件时将自动安装依赖
## 🪝内容函数
通过[GPT的Function Calling能力](https://platform.openai.com/docs/guides/gpt/function-calling)实现的`内容函数`,这是一种嵌入对话中由GPT自动调用的函数。
> 您的插件不一定必须包含内容函数,请先查看内容函数页了解此功能
<details>
<summary>示例:联网插件</summary>
加载含有联网功能的内容函数的插件[WebwlkrPlugin](https://github.com/RockChinQ/WebwlkrPlugin),向机器人询问在线内容
```
# 控制台输出
[2023-07-29 17:37:18.698] message.py (26) - [INFO] : [person_1010553892]发送消息:介绍一下这个项目https://git...
[2023-07-29 17:37:21.292] util.py (67) - [INFO] : message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=1902 request_id=941afc13b2e1bba1e7877b92a970cdea response_code=200
[2023-07-29 17:37:21.293] chat_completion.py (159) - [INFO] : 执行函数调用: name=Webwlkr-access_the_web, arguments={'url': 'https://github.com/RockChinQ/QChatGPT', 'brief_len': 512}
[2023-07-29 17:37:21.848] chat_completion.py (164) - [INFO] : 函数执行完成。
```
![Webwlkr插件](https://github.com/RockChinQ/QChatGPT/blob/master/res/screenshots/webwlkr_plugin.png?raw=true)
</details>
### 内容函数编写步骤
1⃣ 请先按照上方步骤编写您的插件基础结构,现在请删除(当然你也可以不删,只是为了简洁)上述插件内容的诸个由`@on`装饰的类函数
<details>
<summary>删除后的结构</summary>
```python
from pkg.plugin.models import *
from pkg.plugin.host import EventContext, PluginHost
"""
在收到私聊或群聊消息"hello"时,回复"hello, <发送者id>!""hello, everyone!"
"""
# 注册插件
@register(name="Hello", description="hello world", version="0.1", author="RockChinQ")
class HelloPlugin(Plugin):
# 插件加载时触发
# plugin_host (pkg.plugin.host.PluginHost) 提供了与主程序交互的一些方法,详细请查看其源码
def __init__(self, plugin_host: PluginHost):
pass
# 插件卸载时触发
def __del__(self):
pass
```
</details>
2⃣ 现在我们将以下函数添加到刚刚删除的函数的位置
```Python
# 要添加的函数
@func(name="access_the_web") # 设置函数名称
def _(url: str):
"""Call this function to search about the question before you answer any questions.
- Do not search through baidu.com at any time.
- If you need to search something, visit https://www.google.com/search?q=xxx.
- If the user asks you to open a url (start with http:// or https://), visit it directly.
- Summarize the plain content result by yourself, DO NOT directly output anything in the result you got.
Args:
url(str): url to visit
Returns:
str: plain text content of the web page
"""
import re
import requests
from bs4 import BeautifulSoup
# 你需要先使用
# pip install beautifulsoup4
# 安装依赖
r = requests.get(
url,
timeout=10,
headers={
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.1901.183"
}
)
soup = BeautifulSoup(r.text, 'html.parser')
s = soup.get_text()
# 删除多余的空行或仅有\t和空格的行
s = re.sub(r'\n\s*\n', '\n', s)
if len(s) >= 512: # 截取获取到的网页纯文本内容的前512个字
return s[:512]
return s
```
<details>
<summary>现在这个文件内容应该是这样</summary>
```python
from pkg.plugin.models import *
from pkg.plugin.host import EventContext, PluginHost
"""
在收到私聊或群聊消息"hello"时,回复"hello, <发送者id>!""hello, everyone!"
"""
# 注册插件
@register(name="Hello", description="hello world", version="0.1", author="RockChinQ")
class HelloPlugin(Plugin):
# 插件加载时触发
# plugin_host (pkg.plugin.host.PluginHost) 提供了与主程序交互的一些方法,详细请查看其源码
def __init__(self, plugin_host: PluginHost):
pass
@func(name="access_the_web")
def _(url: str):
"""Call this function to search about the question before you answer any questions.
- Do not search through baidu.com at any time.
- If you need to search something, visit https://www.google.com/search?q=xxx.
- If the user asks you to open a url (start with http:// or https://), visit it directly.
- Summarize the plain content result by yourself, DO NOT directly output anything in the result you got.
Args:
url(str): url to visit
Returns:
str: plain text content of the web page
"""
import re
import requests
from bs4 import BeautifulSoup
# 你需要先使用
# pip install beautifulsoup4
# 安装依赖
r = requests.get(
url,
timeout=10,
headers={
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.1901.183"
}
)
soup = BeautifulSoup(r.text, 'html.parser')
s = soup.get_text()
# 删除多余的空行或仅有\t和空格的行
s = re.sub(r'\n\s*\n', '\n', s)
if len(s) >= 512: # 截取获取到的网页纯文本内容的前512个字
return s[:512]
return s
# 插件卸载时触发
def __del__(self):
pass
```
</details>
#### Notes:
- The function's docstring must strictly follow the required format; see [this document](https://github.com/RockChinQ/CallingGPT/wiki/1.-Function-Format#function-format) for the exact rules
- Content functions and behavior functions decorated with `@on` may coexist in the same plugin, and both are controlled by the plugin's switch in `switch.json`
- Make sure the model you use supports function calling; you can change the model via `completion_api_params` in `config.py` (see the sketch after this list), and `gpt-3.5-turbo-16k` is recommended
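For reference, the relevant `config.py` entry might look like the following sketch; only the `model` field is shown here, and any other fields in `completion_api_params` are left as they are:

```python
# config.py — a sketch showing only the model selection;
# other completion_api_params fields are omitted
completion_api_params = {
    "model": "gpt-3.5-turbo-16k",
}
```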
3⃣ Your bot can now access the web. Restart the program and ask it about online content, or send it an article link and ask for a summary.
- This is only a minimal example; for a plugin with more capable web access, see [WebwlkrPlugin](https://github.com/RockChinQ/WebwlkrPlugin)
## 🔒 Version Requirements
If your plugin requires a certain version of the host program, you can assert it with the following function. If the running version does not satisfy the requirement, the function raises an error and aborts the flow it is called from:
```python
require_ver("v2.5.1") # 要求最低版本为 v2.5.1
```
```python
require_ver("v2.5.1", "v2.6.0") # 要求最低版本为 v2.5.1, 同时要求最高版本为 v2.6.0
```
- This function was added in host program `v2.5.1`
- It is declared in the `pkg.plugin.models` module; since the plugin template already imports everything from that module at the top, it can be called directly, as in the sketch below
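For example, a plugin can guard its own initialization (a minimal sketch, reusing the `Hello` plugin structure shown above):

```python
# a minimal sketch: abort plugin initialization on an old host version
@register(name="Hello", description="hello world", version="0.1", author="RockChinQ")
class HelloPlugin(Plugin):

    def __init__(self, plugin_host: PluginHost):
        # raises if the host program is older than v2.5.1,
        # which aborts this plugin's initialization
        require_ver("v2.5.1")
```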
## 📄 API Reference
### Description
@@ -257,6 +450,20 @@ KeySwitched = "key_switched"
    key_name: str  name of the api-key switched to
    key_list: list[str]  the list of api-keys
"""
PromptPreProcessing = "prompt_pre_processing"  # added in v2.5.1
"""Triggered each round, before the API is called, when the prompt is preprocessed; preventing the default behavior is not supported

    kwargs:
        session_name: str  session name (<launcher_type>_<launcher_id>)
        default_prompt: list  the scenario preset used by this session
        prompt: list  the session's current prompt content
        text_message: str  the text message sent by the user

    returns (optional):
        default_prompt: list  the modified scenario preset
        prompt: list  the modified prompt content
        text_message: str  the modified message text
"""
```
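For illustration, a handler for this event might trim the session history before each API call. This is a hedged sketch: it assumes the same `@on` method style as the plugin examples above and the usual `EventContext.add_return` mechanism; consult the events wiki page for the authoritative signature:

```python
# inside a Plugin subclass, alongside the other @on handlers
@on(PromptPreProcessing)
def prompt_pre_processing(self, event: EventContext, **kwargs):
    prompt = kwargs['prompt']
    # e.g. keep only the 10 most recent messages to cut token usage
    event.add_return("prompt", prompt[-10:])
```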
### host: PluginHost in Detail

tests/bs_test/bs_test.py

@@ -0,0 +1,42 @@
import requests
from bs4 import BeautifulSoup
import os
import random
import sys

user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Version/14.1.2 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Version/14.1 Safari/537.36',
    'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0',
    'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0'
]

r = requests.get(
    sys.argv[1],
    headers={
        "User-Agent": random.choice(user_agents)
    }
)
soup = BeautifulSoup(r.text, 'html.parser')

# print(soup.get_text())
raw = soup.get_text()

import re

# strip each line
# raw = '\n'.join([line.strip() for line in raw.split('\n')])

# # drop all blank lines and lines containing only whitespace
# raw = re.sub(r'\n\s*\n', '\n', raw)

print(raw)


@@ -0,0 +1,57 @@
import os
import sys
import paramiko
import time
import select

class sshClient:
    # create an ssh client connected to the server, ready to send commands
    def __init__(self, host, port, user, password):
        self.trans = paramiko.Transport((host, port))
        self.trans.start_client()
        self.trans.auth_password(username=user, password=password)
        self.channel = self.trans.open_session()
        self.channel.get_pty()
        self.channel.invoke_shell()

    # send a command to the server
    def sendCmd(self, cmd):
        self.channel.sendall(cmd)

    # when receiving, the server can be slow to respond, so wait with a timeout
    def recvResponse(self, timeout):
        data = b''
        while True:
            try:
                # keep reading with select until no more data arrives, then return on timeout
                readable, w, e = select.select([self.channel], [], [], timeout)
                if self.channel in readable:
                    data += self.channel.recv(1024)
                else:
                    sys.stdout.write(data.decode())
                    sys.stdout.flush()
                    return data.decode()
            except TimeoutError:
                sys.stdout.write(data.decode())
                sys.stdout.flush()
                return data.decode()

    # close the client
    def close(self):
        self.channel.close()
        self.trans.close()

host = 'host'
port = 22  # your port
user = 'root'
pwd = 'pass'

ssh = sshClient(host, port, user, pwd)
response = ssh.recvResponse(1)
response = ssh.sendCmd("ls\n")
ssh.sendCmd("cd /home\n")
response = ssh.recvResponse(1)
ssh.sendCmd("ls\n")
response = ssh.recvResponse(1)
ssh.close()


@@ -0,0 +1,124 @@
import tiktoken
import openai
import json
import os

openai.api_key = os.getenv("OPENAI_API_KEY")

def encode(text: str, model: str):
    enc = tiktoken.get_encoding("cl100k_base")
    assert enc.decode(enc.encode("hello world")) == "hello world"

    # To get the tokeniser corresponding to a specific model in the OpenAI API:
    enc = tiktoken.encoding_for_model(model)

    return enc.encode(text)

# def ask(prompt: str, model: str = "gpt-3.5-turbo"):
#     # To get the tokeniser corresponding to a specific model in the OpenAI API:
#     enc = tiktoken.encoding_for_model(model)
#     resp = openai.ChatCompletion.create(
#         model=model,
#         messages=[
#             {
#                 "role": "user",
#                 "content": prompt
#             }
#         ]
#     )
#     return enc.encode(prompt), enc.encode(resp['choices'][0]['message']['content']), resp

def ask(
    messages: list,
    model: str = "gpt-3.5-turbo"
):
    enc = tiktoken.encoding_for_model(model)
    resp = openai.ChatCompletion.create(
        model=model,
        messages=messages
    )

    txt = ""
    for r in messages:
        txt += r['role'] + r['content'] + "\n"
    txt += "assistant: "

    return enc.encode(txt), enc.encode(resp['choices'][0]['message']['content']), resp

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"):
    """Return the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        print("Warning: model not found. Using cl100k_base encoding.")
        encoding = tiktoken.get_encoding("cl100k_base")
    if model in {
        "gpt-3.5-turbo-0613",
        "gpt-3.5-turbo-16k-0613",
        "gpt-4-0314",
        "gpt-4-32k-0314",
        "gpt-4-0613",
        "gpt-4-32k-0613",
    }:
        tokens_per_message = 3
        tokens_per_name = 1
    elif model == "gpt-3.5-turbo-0301":
        tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
        tokens_per_name = -1  # if there's a name, the role is omitted
    elif "gpt-3.5-turbo" in model:
        print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")
    elif "gpt-4" in model:
        print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
        return num_tokens_from_messages(messages, model="gpt-4-0613")
    else:
        raise NotImplementedError(
            f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""
        )
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens

messages = [
    {
        "role": "user",
        "content": "你叫什么名字?"
    },
    {
        "role": "assistant",
        "content": "我是AI助手没有具体的名字。你可以叫我GPT-3。有什么可以帮到你的吗"
    },
    {
        "role": "user",
        "content": "你是由谁开发的?"
    },
    {
        "role": "assistant",
        "content": "我是由OpenAI开发的一家人工智能研究实验室。OpenAI的使命是促进人工智能的发展使其为全人类带来积极影响。我是由OpenAI团队使用GPT-3模型训练而成的。"
    },
    {
        "role": "user",
        "content": "很高兴见到你。"
    }
]

pro, rep, resp = ask(messages)

print(len(pro), len(rep))
print(resp)
print(resp['choices'][0]['message']['content'])
print(num_tokens_from_messages(messages, model="gpt-3.5-turbo"))


@@ -27,8 +27,11 @@ replys_message = "[bot]err:请求超时"
 # message shown when command permission is insufficient
 command_admin_message = "[bot]err:权限不足: "
 # message shown for an invalid command
-command_err_message = "[bot]err:指令执行出错"
+command_err_message = "[bot]err:指令不存在"
 # session reset messages
-command_reset_message = "[bot]:会话已重置"
-command_reset_name_message = "[bot]:会话已重置,使用场景预设:"
+command_reset_message = "[bot]会话已重置"
+command_reset_name_message = "[bot]会话已重置,使用场景预设:"
+# message shown when the session is automatically reset
+session_auto_reset_message = "[bot]会话token超限已自动重置请重新发送消息"