Compare commits


482 Commits

Author SHA1 Message Date
Rock Chin
0e85467e02 Release v2.4.2 2023-04-25 10:27:57 +08:00
Rock Chin
eb41cf5481 fix(plugin.py): 兼容性问题 2023-04-25 10:27:07 +08:00
Rock Chin
b970a42d07 fix(plugin.py): send_message封装实现的兼容性问题 2023-04-25 10:26:03 +08:00
Rock Chin
8c9d123e1c Merge pull request #433 from RockChinQ/detailed-response-rules
[Feat] 细化到单个群的响应规则
2023-04-25 09:39:56 +08:00
Rock Chin
ab2a95e347 Merge branch 'detailed-response-rules' of https://github.com/RockChinQ/QChatGPT into detailed-response-rules 2023-04-25 09:31:56 +08:00
Rock Chin
2184c558a4 feat: 支持配置细化到单个群的响应规则 2023-04-25 09:31:44 +08:00
GitHub Actions
83cb8588fd Update override-all.json 2023-04-25 01:28:56 +00:00
Rock Chin
007e82c533 feat: 配置文件支持 2023-04-25 09:28:31 +08:00
Rock Chin
499f8580a7 doc: 修改wiki格式 2023-04-25 08:45:58 +08:00
Rock Chin
a7dc3c5dab Release v2.4.1 2023-04-25 00:01:40 +08:00
Rock Chin
d01d3a3c53 perf: 启动时提示使用的QQ号 2023-04-24 23:57:57 +08:00
Rock Chin
580e062dbf feat: 上报使用量时带上msg_source_adapter 2023-04-24 23:51:00 +08:00
Rock Chin
c8cee8410c doc: 完善格式 2023-04-24 20:04:33 +08:00
Rock Chin
6bf331c2e3 doc: 完善wiki 2023-04-24 19:53:20 +08:00
Rock Chin
4c4930737c chore: issue模板新增登录框架字段 2023-04-24 19:28:00 +08:00
Rock Chin
9de01e9525 Release v2.4.0 2023-04-24 16:09:46 +08:00
Rock Chin
c6a16f5974 Merge pull request #427 from RockChinQ/nakuru-support
[Feat] 支持通过nakuru-project框架连接go-cqhttp
2023-04-24 16:07:12 +08:00
Rock Chin
253ef44d17 chore: 公告 2023-04-24 16:05:47 +08:00
Rock Chin
15a1f00b73 doc(README.md): 添加go-cqhttp公告 2023-04-24 16:04:25 +08:00
Rock Chin
b5fa2ea8b8 feat(main.py): 添加nakuru-project-idk的依赖更新项 2023-04-24 16:01:43 +08:00
Rock Chin
449e024771 doc: 添加针对老用户的说明 2023-04-24 15:59:07 +08:00
Rock Chin
1bee7a146b feat: 支持语音组件 2023-04-24 15:55:21 +08:00
Rock Chin
270a632789 doc: 修改标号 2023-04-24 15:48:28 +08:00
Rock Chin
418bb05b4c doc: 添加go-cqhttp配置说明 2023-04-24 15:46:58 +08:00
Rock Chin
052b834151 doc: 完善config-template.py的说明 2023-04-24 15:46:26 +08:00
Rock Chin
58ee204a75 doc: wiki添加go-cqhttp配置步骤 2023-04-24 15:41:28 +08:00
Rock Chin
0a02ee8c04 feat: 启动时添加nakuru的提示检查 2023-04-24 15:04:07 +08:00
Rock Chin
950ef4a181 doc: 更新README.md 2023-04-24 14:57:28 +08:00
Rock Chin
7b7cdd8adb perf: 在日志文件包含输出文件路径 2023-04-24 13:52:22 +08:00
Rock Chin
471768e760 feat: 支持发送转发消息 2023-04-24 12:46:33 +08:00
Rock Chin
c7517d31a4 chore: 更换使用nakuru-project-idk包 2023-04-24 11:37:01 +08:00
Rock Chin
7d10d0398e fix: nakuru热重载失败 2023-04-24 11:21:51 +08:00
Rock Chin
a2bc25c08b feat: 支持引用原消息回复 2023-04-24 10:57:43 +08:00
Rock Chin
3cb49fe2d8 feat: 支持检测群内禁言 2023-04-24 10:34:51 +08:00
Rock Chin
5b96ac122f feat: 适配nakuru基本功能 2023-04-23 23:40:08 +08:00
Rock Chin
612033f478 feat: nakuru适配器基础模型 2023-04-23 15:58:37 +08:00
GitHub Actions
48ee940d8e Update override-all.json 2023-04-23 01:32:36 +00:00
Rock Chin
e74df0b37d chore: 添加nakuru相关配置; 使用nakuru-project-test临时包 2023-04-23 09:32:01 +08:00
GitHub Actions
640afdc49c Update override-all.json 2023-04-22 13:51:02 +00:00
Rock Chin
6b39df5b9b chore: 删除NoneBot2相关配置 2023-04-22 21:50:41 +08:00
Rock Chin
e7e698765e fix(plugin.py): 缺少的换行符 2023-04-22 17:40:41 +08:00
Rock Chin
43fea13dab Merge pull request #418 from RockChinQ/im-impl-decoupling
[Refactor] 新增抽象层以解耦消息来源(MessageSource)组件
2023-04-21 18:10:42 +08:00
GitHub Actions
bc899e5bd0 Update override-all.json 2023-04-21 09:52:31 +00:00
Rock Chin
160086feb9 refactor: 完成MessageSource适配器解耦 2023-04-21 17:51:58 +08:00
Rock Chin
016391c976 refactor: 不再向QQBotManager中传递config中可读的参数 2023-04-21 17:15:32 +08:00
Rock Chin
91746448a3 feat: 消息源适配器模型及YiriMirai的适配器 2023-04-21 16:36:59 +08:00
Rock Chin
5cb0543237 doc(README.md): 更新wiki链接 2023-04-20 20:50:00 +08:00
Rock Chin
fac29a24a8 doc(README.md): social.png更改成圆角 2023-04-20 10:54:06 +08:00
Rock Chin
4d3a2a21d0 Update README_en.md 2023-04-20 00:22:05 +08:00
Rock Chin
6d4f88041c Update README.md 2023-04-20 00:21:37 +08:00
Rock Chin
18587d3690 doc(README.md): 修改social图格式 2023-04-20 00:15:11 +08:00
Rock Chin
423090dccd doc(README.md): 更改使用social图 2023-04-20 00:13:11 +08:00
Rock Chin
78e88baab3 doc(README.md): 优化LOGO图格式 2023-04-20 00:08:00 +08:00
Rock Chin
6a276767b3 doc(README.md): 添加LOGO 2023-04-20 00:06:52 +08:00
Rock Chin
2cb26c7c70 doc: 添加LOGO文件 2023-04-20 00:04:01 +08:00
Rock Chin
ff66c88060 doc(README.md): 优化图片格式 2023-04-17 10:18:23 +08:00
Rock Chin
611e82b8f9 doc(README.md): 添加使用截图 2023-04-17 10:15:50 +08:00
Rock Chin
59bdee7137 feat: 添加IM框架模型 2023-04-15 23:38:52 +08:00
Rock Chin
e8dbd426ae Release v2.3.9 2023-04-15 17:36:59 +08:00
Rock Chin
40d6e809a0 Merge pull request #417 from RockChinQ/354-feature-single-concurrency
[Feat] 支持设置单会话内同时仅处理一条消息
2023-04-15 17:35:36 +08:00
GitHub Actions
236c540d18 Update override-all.json 2023-04-15 09:34:16 +00:00
Rock Chin
d6ca059f6c feat: 支持设置单会话内同时仅处理一条消息 2023-04-15 17:33:57 +08:00
Rock Chin
52c06a60ca fix: 公告功能bug 2023-04-15 16:54:50 +08:00
Rock Chin
6353644ec3 test: 测试公告 2023-04-15 16:49:11 +08:00
Rock Chin
20df9ded3d Merge pull request #416 from RockChinQ/413-feature-json-format-anouns
[Feat] 支持JSON格式的公告
2023-04-15 16:47:03 +08:00
Rock Chin
7569b18a4c feat: 支持JSON格式的公告 2023-04-15 16:45:26 +08:00
Rock Chin
b9da4f4951 Merge pull request #415 from RockChinQ/413-feature-json-format-anouns
[Feat] 新增`announcement.json`文件
2023-04-15 16:33:03 +08:00
Rock Chin
89b9e29257 Update pull_request_template.md 2023-04-15 16:25:24 +08:00
Rock Chin
d605de9de4 feat: 添加公告模板及公告发布脚本 2023-04-15 09:38:46 +08:00
Rock Chin
d46c94d7c3 Release v2.3.8 2023-04-14 23:47:00 +08:00
Rock Chin
2db9c00530 Merge pull request #414 from RockChinQ/detailed-rate-limit
[Feat] 速度限制支持细化到单个人或群
2023-04-14 19:46:24 +08:00
GitHub Actions
66d8d159f9 Update override-all.json 2023-04-14 11:44:26 +00:00
Rock Chin
9fa1446284 feat: 支持细化到个人和群的限速 2023-04-14 19:44:03 +08:00
Rock Chin
b3e4cb48c7 Merge pull request #412 from RockChinQ/349-bugfix-auto-deps-solving-failure
[Fix] 循环依赖导致的依赖自动解决失败
2023-04-14 18:44:40 +08:00
Rock Chin
0bca7b2247 fix: 循环引用导致的依赖自动解决失败 2023-04-14 18:42:09 +08:00
Rock Chin
7812e03c9d chore: 删除requirements.txt中对websockets的版本要求以防冲突 2023-04-14 18:27:44 +08:00
Rock Chin
7a852ae5af Merge pull request #410 from 2675hujilo/tips
删除tips-custom-template.py中无用字段
2023-04-14 17:43:30 +08:00
26751
706d9e61c1 删除tips-custom-template.py中无用字段 2023-04-14 02:00:45 +08:00
Rock Chin
8f0ed4ff4b Merge branch 'master' of https://github.com/RockChinQ/QChatGPT 2023-04-12 15:28:59 +08:00
Rock Chin
3415b6f121 doc: 添加lieyanqzu/WeatherPlugin 2023-04-12 15:28:56 +08:00
Rock Chin
256ba6fb86 Merge pull request #406 from RockChinQ/dependabot/pip/openai-approx-eq-0.27.4
chore(deps): update openai requirement from ~=0.27.2 to ~=0.27.4
2023-04-10 18:31:39 +08:00
dependabot[bot]
d30b2b9afe chore(deps): update openai requirement from ~=0.27.2 to ~=0.27.4
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.2...v0.27.4)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 09:03:02 +00:00
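
For readers unfamiliar with the `~=` specifier in this dependabot update: it is a compatible-release pin, so `openai~=0.27.4` accepts any 0.27.x release at or above 0.27.4 but never 0.28. A hedged example of applying the same constraint by hand (not a command taken from this repository):

```bash
# Compatible-release pin: installs the newest 0.27.x release that is >= 0.27.4.
pip3 install 'openai~=0.27.4'
```
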
Rock Chin
be943ca1fc doc: 链接文档 2023-04-08 19:39:42 +08:00
Rock Chin
1ddab2a97a doc: README.md in English 2023-04-08 19:32:31 +08:00
Rock Chin
e15fd4695c Merge branch 'master' of https://github.com/RockChinQ/QChatGPT 2023-04-08 18:26:10 +08:00
Rock Chin
ffa4b1b4a1 fix(modelmgr): 使用异步请求时的异常类型丢失 2023-04-08 18:26:08 +08:00
Rock Chin
f8eee3a2a6 Merge pull request #399 from RockChinQ/optional-config-override
[Feat] override.json可选应用
2023-04-08 16:26:56 +08:00
Rock Chin
eeee7a8343 feat: 仅在提供命令行参数时应用override.json的内容 2023-04-08 16:21:40 +08:00
Rock Chin
8447b73fcb doc(README.md): 删除ChatAPI2D插件 2023-04-08 16:15:11 +08:00
Rock Chin
2863945d5f feat(config-template): 更改为常量表示超时时间 2023-04-08 15:36:35 +08:00
Rock Chin
cb1f8ca6f7 doc(README.md): 添加wenyinos/ChatAPI2D插件 2023-04-08 00:33:47 +08:00
Rock Chin
1d9964bcb1 Release v2.3.7 2023-04-08 00:21:21 +08:00
GitHub Actions Bot
15cb8016d3 Update cmdpriv-template.json 2023-04-07 16:20:13 +00:00
Rock Chin
895cc0a2c5 ci: test 2023-04-08 00:19:37 +08:00
Rock Chin
20bf349e4e ci: cmdpriv模板脚本 2023-04-08 00:18:00 +08:00
Rock Chin
e297763da1 fix: !cfg指令失效 2023-04-08 00:13:19 +08:00
Rock Chin
e471970654 ci: test 2023-04-07 20:32:51 +08:00
Rock Chin
12faaaced8 ci: 仅在wiki文件更新时提交 2023-04-07 20:31:33 +08:00
Rock Chin
083cbc55cc Release v2.3.6 2023-04-07 17:15:17 +08:00
Rock Chin
8aa7a3273d Merge pull request #390 from RockChinQ/customizable-tips
[Feat] 支持自定义提示消息
2023-04-07 17:13:28 +08:00
Rock Chin
255e2c4385 doc: 添加自定义提示消息的说明 2023-04-07 17:12:33 +08:00
Rock Chin
9856306870 feat: 修改文件生成顺序 2023-04-07 17:10:44 +08:00
GitHub Actions
527ab8b8a7 Update override-all.json 2023-04-07 09:08:21 +00:00
Rock Chin
f8e19ba9b3 feat: 删除config-template.py中多余的属性 2023-04-07 17:07:43 +08:00
GitHub Actions
7649dbfbbc Update override-all.json 2023-04-07 09:00:40 +00:00
Rock Chin
81e734644d feat: 删除config-template.py中的help_message模板 2023-04-07 17:00:16 +08:00
Rock Chin
ae55cf5b1e feat: 适配help指令 2023-04-07 16:59:51 +08:00
Rock Chin
af539546ef Merge pull request #356 from 2675hujilo/tips
[Feat] 支持自定义提示消息
2023-04-07 16:43:58 +08:00
Rock Chin
0031ce57d0 Merge branch 'customizable-tips' into tips 2023-04-07 16:40:26 +08:00
Rock Chin
2f48a2ce57 Merge branch 'customizable-tips' of https://github.com/RockChinQ/QChatGPT into customizable-tips 2023-04-07 16:39:48 +08:00
Rock Chin
6068ab7100 feat: 修改help_message为主线的内容 2023-04-07 16:39:25 +08:00
GitHub Actions
29a7dccef4 Update override-all.json 2023-04-07 08:34:23 +00:00
Rock Chin
e2073da86e Merge branch '2675hujilo-tips' into customizable-tips 2023-04-07 16:32:32 +08:00
2675hujilo
ae079526f7 删除tips-customs-template.py中不必要注释 2023-04-07 16:29:09 +08:00
26751
947bae8e26 删除tips-customs-template.py中不必要注释
Signed-off-by: 26751 <2675174581@qq.com>
2023-04-07 16:22:22 +08:00
Rock Chin
a68e29dff6 feat: tips模块完整性检查 2023-04-07 16:02:22 +08:00
Rock Chin
a588d7f960 feat: 热重载加上tips模块 2023-04-07 13:28:07 +08:00
Rock Chin
66224e5a32 fix: 热重载后未检查配置文件存在性 2023-04-07 13:25:57 +08:00
Rock Chin
07abad6a14 feat: 将tips的值统一为str类型 2023-04-07 13:23:58 +08:00
Rock Chin
83d02aaaac chore: 修改配置文件名称 2023-04-07 13:20:57 +08:00
Rock Chin
5a27ac165e Merge branch 'master' of https://github.com/RockChinQ/QChatGPT 2023-04-06 21:37:56 +08:00
Rock Chin
bd9a523233 Release v2.3.5 2023-04-06 21:37:51 +08:00
Rock Chin
43959b158f Merge pull request #385 from RockChinQ/impl-337-bugfix-version-ignorance
[Feat] 更新逻辑优化
2023-04-06 21:36:53 +08:00
Rock Chin
d81b457bba feat: 更新完成后不展示更新前版本的更新日志 (#340) 2023-04-06 21:34:30 +08:00
Rock Chin
b40d639785 feat: 忽略第四位版本号 2023-04-06 21:31:56 +08:00
Rock Chin
0a8d8f4f66 Merge pull request #381 from RockChinQ/impl-339-redundance-comp-check
[Chore] 删除冗余的兼容性检查判断
2023-04-06 21:03:33 +08:00
Rock Chin
d16cb25cde chore: 删除冗余的兼容性检查判断 2023-04-06 20:34:56 +08:00
Rock Chin
7aef1758e0 ci: test 2023-04-06 18:41:21 +08:00
Rock Chin
9758756fdd ci: 错误的路径 2023-04-06 18:39:21 +08:00
Rock Chin
13ef35f96f fix: 热重载后!draw无法使用的问题 2023-04-06 18:37:07 +08:00
Rock Chin
6b8c1209b7 chore: 整理根目录文件 2023-04-06 17:23:30 +08:00
Rock Chin
7184f3053a doc: README.md添加社区群说明 2023-04-06 15:55:48 +08:00
Rock Chin
b83eac10e6 doc: 完善wiki 2023-04-06 15:20:08 +08:00
Rock Chin
cb42eaef69 test: Home.md 2023-04-06 15:18:35 +08:00
Rock Chin
0dfd636a7e ci: 工作流 2023-04-06 15:18:02 +08:00
Rock Chin
21ff0fd258 test: 测试wiki同步工作流 2023-04-06 15:13:40 +08:00
Rock Chin
c2eaeb2c72 chore: wiki同步工作流 2023-04-06 15:12:12 +08:00
Rock Chin
2a414a4bea chore: 提交wiki文件到res/wiki 2023-04-06 15:07:25 +08:00
Rock Chin
fc0c38c8af chore: 删除子模块 2023-04-06 10:13:34 +08:00
Rock Chin
595e6c8a0c chore: 删除子模块 2023-04-06 10:13:08 +08:00
Rock Chin
ced16fd221 chore: 移动docker部署教程 2023-04-06 10:10:09 +08:00
Rock Chin
0817c3f148 chore: 将工作流脚本移动到res/scripts 2023-04-06 10:08:15 +08:00
Rock Chin
fb40af81ac doc: 完善文档 2023-04-06 09:44:07 +08:00
Rock Chin
1c5ad05e89 typo: plugin命令的提示错字 2023-04-06 09:29:45 +08:00
Rock Chin
86bef566c4 Release v2.3.4 2023-04-05 17:13:05 +08:00
Rock Chin
0983ccb61e doc: 添加模型切换器插件 2023-04-05 16:59:06 +08:00
Rock Chin
a1d9f469c0 doc: 添加模型切换器插件 2023-04-05 16:58:15 +08:00
Rock Chin
952124f783 feat: 禁用的插件仍进行初始化 2023-04-05 16:50:35 +08:00
GitHub Actions
6be12e8ace Update override-all.json 2023-04-05 07:48:46 +00:00
Rock Chin
0799f380e1 feat: 更改默认help_message 2023-04-05 15:48:21 +08:00
Rock Chin
f65270ee7e feat: 启动时输出mah相关配置项 2023-04-05 15:46:49 +08:00
Rock Chin
414910719c Release v2.3.3 2023-04-05 09:57:21 +08:00
Rock Chin
10a1e8faa6 fix: 回复内容不完整问题 (#208) 2023-04-05 09:56:27 +08:00
Rock Chin
4eea21927e doc: 补充手动部署中缺失的requests库 (#375) 2023-04-04 16:49:59 +08:00
Rock Chin
48c7f659f9 Release v2.3.2 2023-04-04 03:22:19 +00:00
Rock Chin
b33333f4aa Merge pull request #372 from RockChinQ/363-bug-helpmessage-creditapi
[Fix] help_message问题、额度检测接口问题
2023-04-04 11:20:34 +08:00
Rock Chin
9edb32b081 feat: usage命令不再显示额度 2023-04-04 03:15:07 +00:00
Rock Chin
c9b25fe806 doc: cmds指令的说明 2023-04-03 14:55:01 +00:00
GitHub Actions Bot
b6ee3939be Update cmdpriv-template.json 2023-04-03 14:41:25 +00:00
Rock Chin
e5485cddd0 feat: 更改使用!cmd指令查看指令列表 2023-04-03 14:40:27 +00:00
Rock Chin
ac81597236 feat: 插件更新异常处理 2023-04-03 14:09:30 +00:00
Rock Chin
58d991df0a Merge pull request #368 from zyckk4/docstring-improvements
[Chore] 统一docstring格式
2023-04-03 22:02:11 +08:00
Rock Chin
3f8e380da4 Merge pull request #369 from zyckk4/fix-type-hint
[Fix] 修复一处类型注解的错误
2023-04-03 13:39:56 +08:00
zyckk4
ae831a2654 [Fix] 修复一处类型注解的错误 2023-04-03 10:13:20 +08:00
zyckk4
ae72cf2283 chore: 统一docstring格式 2023-04-03 00:19:28 +08:00
Rock Chin
8164f4b506 Release v2.3.1 2023-04-02 16:32:52 +08:00
Rock Chin
9617be0ca4 fix: 未指定utf-8保存已输出的公告 2023-04-02 16:30:42 +08:00
Rock Chin
f079d7b9fa fix: Windows上无法读取和应用命令权限配置的问题 2023-04-02 16:24:30 +08:00
Rock Chin
00afda452f Merge pull request #365 from zyckk4/style-improvements
去除行尾空格
2023-04-02 16:04:52 +08:00
zyckk4
70386abadd 去除行尾空格 2023-04-02 14:43:34 +08:00
26751
5865ac017c 增加tips_custom.py提示
Signed-off-by: 26751 <2675174581@qq.com>
2023-04-02 13:46:15 +08:00
26751
4061a92f8e 删除override-all.json中无效的字段
Signed-off-by: 26751 <2675174581@qq.com>
2023-04-02 13:36:51 +08:00
2675hujilo
d37c31b31c Update tips_custom_template.py 2023-04-01 18:43:03 +08:00
2675hujilo
973ef0078f Delete tips_custom.py 2023-04-01 18:36:33 +08:00
26751
48dcd257da Signed-off-by: 26751 <2675174581@qq.com> 2023-04-01 18:33:37 +08:00
26751
da03911610 Signed-off-by: 26751 <2675174581@qq.com> 2023-04-01 16:39:02 +08:00
Rock Chin
aba9d945b5 doc: 收起功能概述 2023-04-01 09:59:33 +08:00
26751
b6f7f3b73f Signed-off-by: 26751 <2675174581@qq.com> 2023-04-01 02:35:27 +08:00
26751
2050d20ea7 Signed-off-by: 26751 <2675174581@qq.com> 2023-04-01 02:23:40 +08:00
26751
ac1fb4a63a 修改自定义提示语 2023-04-01 01:02:59 +08:00
Rock Chin
ced38490e1 chore: 兼容性问题公告 2023-03-31 21:37:35 +08:00
Rock Chin
ad28b69198 doc: 添加ChatPoeBot插件链接 (#352) 2023-03-31 21:31:40 +08:00
Rock Chin
7171817de8 Release v2.3.0 2023-03-31 07:42:06 +00:00
GitHub Actions Bot
73f9d674e1 Update cmdpriv-template.json 2023-03-31 07:40:07 +00:00
Rock Chin
5e046399f8 test: 删除测试文件 2023-03-31 07:39:35 +00:00
GitHub Actions Bot
4966cd9ac7 Update cmdpriv-template.json 2023-03-31 07:35:48 +00:00
Rock Chin
da936ecfe3 test: ci 2023-03-31 07:35:11 +00:00
Rock Chin
89e10d43de ci: 解决所有依赖 2023-03-31 07:34:45 +00:00
Rock Chin
3bf289af69 test: 测试 2023-03-31 07:29:23 +00:00
Rock Chin
c7c9a6c5ca ci: 运行前完善配置文件 2023-03-31 07:28:33 +00:00
Rock Chin
aee8446a23 test: 测试工作流 2023-03-31 07:25:53 +00:00
Rock Chin
2bb4f1fbb8 ci: 工作流 2023-03-31 07:25:27 +00:00
Rock Chin
6e7b0ee4ff test: 测试工作流 2023-03-31 07:24:17 +00:00
Rock Chin
204f5b9a54 ci: 工作流语法错误 2023-03-31 07:23:35 +00:00
Rock Chin
8c41e3506f test: 测试工作流 2023-03-31 07:22:33 +00:00
Rock Chin
c2c33e45b8 ci: 更新工作流文件 2023-03-31 07:21:03 +00:00
Rock Chin
1acaf4e58b Merge pull request #336 from RockChinQ/cmds-permission-ctrl
[Refactor&Feat] 命令节点权限控制
2023-03-31 15:18:44 +08:00
Rock Chin
eca80d5a4c ci: 添加cmdpriv-template.json的自动化生成脚本 2023-03-31 07:18:08 +00:00
Rock Chin
f538957be9 doc: 更新wiki 2023-03-31 07:06:42 +00:00
Rock Chin
82a839a60a doc: 完善命令权限功能说明 2023-03-31 07:06:18 +00:00
Rock Chin
df494da9e4 feat: 支持命令限权 2023-03-31 06:49:13 +00:00
Rock Chin
1ea53f7f04 Merge pull request #342 from q123458384/patch-1
Update docker_deploy.md
2023-03-30 22:30:34 +08:00
Rock Chin
ac6d695f6d doc: 完善主程序容器启动指令的挂载项 2023-03-30 21:26:10 +08:00
Rock Chin
73dccb21f5 feat: 添加指令权限配置文件 2023-03-30 11:29:04 +00:00
Rock Chin
4221102ad5 chore: 删除过时的命令架构文件 2023-03-30 11:12:27 +00:00
Rock Chin
b100f12e7f refactor: 完成所有指令 2023-03-30 11:11:39 +00:00
Rock Chin
2069ba6836 refactor: system类命令 2023-03-30 03:38:33 +00:00
crosscc
ea57976808 Update docker_deploy.md
In section 2.1, `network host` already exposes all of the container's ports and is not used together with `-p port:port`.
In section 2.1, `-v ./qq/xxx` does not work on Synology; changed it to `${PWD}/qq/xxx`.
In section 3, the container name duplicated the one above, and mapping the whole directory made the program fail to run; changed it to map only config.py.

These are the problems I ran into during my own Docker deployment, and the corresponding fixes.
2023-03-29 16:44:16 +08:00
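
Put together, the fixes described in this commit amount to a run command along the following lines. This is a minimal sketch rather than the exact command from docker_deploy.md: the image tag `qchatgpt` and the container name are placeholders, and only the flags and paths mentioned in the commit message are taken from it.

```bash
# --network host exposes the container's ports directly, so no -p mappings are needed.
# Mount only config.py (not the whole project directory), via ${PWD} so the bind mount
# also resolves on systems such as Synology where a relative ./ path fails.
docker run -d --name qchatgpt-main \
  --network host \
  -v ${PWD}/qq/config.py:/QChatGPT/config.py \
  qchatgpt
```
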
Rock Chin
4055d3542b refactor: 完成会话管理相关指令 2023-03-28 13:47:45 +00:00
Rock Chin
0b0271a1f4 refactor: 更改使用装饰器注册命令 2023-03-28 12:53:46 +00:00
Rock Chin
e03585ad4d feat: 扁平化储存命令 2023-03-28 12:18:19 +00:00
Rock Chin
11a385791e doc: 添加贡献相关说明 2023-03-28 12:52:37 +08:00
Rock Chin
e228225178 refactor: 指令注册架构 2023-03-28 03:12:19 +00:00
Rock Chin
1c96d971e1 Update bug-report.yml 2023-03-27 21:22:56 +08:00
Rock Chin
b799de7995 refactor: 迁移旧的处理模块 2023-03-27 13:09:40 +00:00
Rock Chin
b01d246555 doc: 删除安装器使用警告 2023-03-27 18:52:40 +08:00
Rock Chin
9363b073cf Merge pull request #334 from maimierjiafude/patch-1
[Fix] 修改模块无法找到的问题
2023-03-27 18:51:05 +08:00
maimierjiafude
12ca04ac6f 修改模块无法找到的问题 2023-03-27 18:45:29 +08:00
Rock Chin
51737c28bd Delete 需求建议.md 2023-03-27 11:31:05 +08:00
Rock Chin
50d5ec224a Create feature-request.yml 2023-03-27 11:30:40 +08:00
Rock Chin
95a7397d14 Update bug-report.yml 2023-03-27 11:23:10 +08:00
Rock Chin
aedac6d22c Create bug-report.yml 2023-03-27 11:21:45 +08:00
Rock Chin
d522975ecc Delete 漏洞反馈.yml 2023-03-27 11:17:14 +08:00
Rock Chin
68fda8d7f3 Update 漏洞反馈.yml 2023-03-27 11:16:48 +08:00
Rock Chin
b0cfec9913 Update 漏洞反馈.yml 2023-03-27 11:11:07 +08:00
Rock Chin
ba8eba1581 Update 漏洞反馈.yml 2023-03-27 11:10:41 +08:00
Rock Chin
f9eaed41c1 Update 漏洞反馈.yml 2023-03-27 11:07:16 +08:00
Rock Chin
1202a62df7 Update 漏洞反馈.yml 2023-03-27 11:06:11 +08:00
Rock Chin
8c1f7796f6 Update 漏洞反馈.yml 2023-03-27 11:02:18 +08:00
Rock Chin
42aee35789 Update 漏洞反馈.yml 2023-03-27 11:01:47 +08:00
Rock Chin
b628849caa Update 漏洞反馈.yml 2023-03-27 11:00:21 +08:00
Rock Chin
031f08b0d4 Rename 漏洞反馈.md to 漏洞反馈.yml 2023-03-27 10:57:40 +08:00
Rock Chin
fab6f9b93f Update 漏洞反馈.md 2023-03-27 10:57:00 +08:00
GitHub Actions
564c5d937d Update override-all.json 2023-03-26 15:45:06 +00:00
Rock Chin
2d3bb01487 debug: 测试完毕 2023-03-26 23:44:49 +08:00
GitHub Actions
607ea2d293 Update override-all.json 2023-03-26 15:43:54 +00:00
Rock Chin
d817b53780 debug: 测试工作流 2023-03-26 23:43:34 +08:00
Rock Chin
e8a2cbe06a Rename update override-all.json to update-override-all.yml 2023-03-26 23:42:42 +08:00
Rock Chin
d2b0577752 Update update override-all.json 2023-03-26 23:41:15 +08:00
Rock Chin
b4edd5cbad Update update override-all.json 2023-03-26 23:38:38 +08:00
Rock Chin
348477747e debug: 测试override-all.json工作流 2023-03-26 23:35:44 +08:00
Rock Chin
bb7ee174ea Create update override-all.json 2023-03-26 23:34:50 +08:00
Rock Chin
ab5add14ef chore: 完善override-all.json 2023-03-26 15:27:17 +00:00
Rock Chin
44f4820cee Merge pull request #332 from RockChinQ/reverse-proxy
[Feat] 支持反向代理
2023-03-26 22:51:06 +08:00
Rock Chin
8f1609b944 doc: 完善反代地址说明 2023-03-26 14:50:03 +00:00
Rock Chin
66b5b75631 feat: 支持反向代理 2023-03-26 13:50:43 +00:00
Rock Chin
17e293afe8 Merge pull request #325 from RockChinQ/fix-289-full-default-compatibility
[Feat] 完善情景预设相关内容
2023-03-26 21:40:36 +08:00
Rock Chin
1cf35f59fd Merge branch 'master' into fix-289-full-default-compatibility 2023-03-26 21:40:21 +08:00
Rock Chin
bb4b897934 feat(dprompt.py): 解耦完成 2023-03-26 13:28:26 +00:00
Rock Chin
0eaf1af2e3 doc: 添加Python环境冲突警告 2023-03-26 15:25:21 +08:00
Rock Chin
f70c12540b Merge pull request #327 from mikumifa/master
Dockerfile部署
2023-03-25 23:12:52 +08:00
Rock Chin
479fe73c24 doc: 在README.md链接docker教程 2023-03-25 23:12:26 +08:00
Rock Chin
f6cad85476 feat: 使用normal作为情景预设默认模式的名称 2023-03-24 20:02:50 +08:00
mikumifa
888197e6ce Dockerfile部署 2023-03-24 19:58:27 +08:00
Rock Chin
e634305759 doc: 完善full_scenario的说明 2023-03-24 11:30:53 +00:00
Rock Chin
fe054211f4 chore: 代码格式优化 2023-03-23 23:44:10 +08:00
Rock Chin
f102a29ea0 Merge pull request #323 from RockChinQ/multi-threads-control
[Feat] 基于线程池的多线程控制方案
2023-03-23 22:56:51 +08:00
Rock Chin
2b8bd45bcd Merge branch 'master' into multi-threads-control 2023-03-23 21:43:41 +08:00
Rock Chin
7f730c4be0 Merge pull request #252 from LINSTCL/multi-threads-control
添加线程控制类,修改main结构,修改启动流程
2023-03-23 21:35:22 +08:00
Rock Chin
b6e31cac23 fix: 重载时重复调用load_config() 2023-03-23 21:29:51 +08:00
Rock Chin
9fe4f218d5 chore: config-template格式 2023-03-23 21:09:40 +08:00
LINSTCL
cc38cc2676 修复bug 2023-03-23 16:43:41 +08:00
LINSTCL
f56c6876d1 暂时解决reload后的config无法加载问题 2023-03-23 16:42:15 +08:00
LINSTCL
196e424c88 添加说明 2023-03-23 16:37:01 +08:00
Rock Chin
9270dc2c52 Release v2.2.5 2023-03-20 14:02:38 +00:00
Rock Chin
14aec251b4 Merge pull request #315 from RockChinQ/impl-312
[Feat] 访问GitHub API时使用openai_config中设置的代理地址
2023-03-20 21:49:33 +08:00
Rock Chin
d2a7a57245 feat: 为GitHub API的访问使用代理 (#312) 2023-03-20 13:40:23 +00:00
Rock Chin
1964fc76c8 doc: 完善wiki指引 2023-03-20 13:25:02 +00:00
Rock Chin
b8d4b490ce doc: 添加部署说明 2023-03-20 13:12:25 +00:00
Rock Chin
76891e4855 doc: 添加指令说明指引 2023-03-20 13:09:05 +00:00
Rock Chin
3d868b3a39 Merge pull request #308 from RockChinQ/plugin-ctrl-cmd
[Feat] 解耦指令处理、完善插件管理指令
2023-03-20 21:04:06 +08:00
Rock Chin
7b56bcf7a9 feat: 添加插件启用禁用指令 2023-03-20 13:02:30 +00:00
Rock Chin
f96ae56bce feat: 支持指令删除插件 (#286) 2023-03-20 12:50:25 +00:00
Rock Chin
d52108f4e1 doc: 完善README.md 2023-03-20 12:49:18 +00:00
Rock Chin
5f07b7ad1f refactor: 完成所有指令重构 2023-03-20 12:06:02 +00:00
Rock Chin
cda10cf1a6 Update 漏洞反馈.md 2023-03-20 19:17:53 +08:00
Rock Chin
d226b8ebc5 doc: 完善文档 (#310) 2023-03-20 14:46:39 +08:00
Rock Chin
d08794579c feat: 现有指令占位 2023-03-19 14:33:01 +00:00
Rock Chin
7450494741 Update pull_request_template.md 2023-03-19 20:33:23 +08:00
Rock Chin
36dca7ae2f feat: 添加指令抽象类 2023-03-19 12:27:21 +00:00
Rock Chin
5dae777e79 doc: 添加wiki为submodule 2023-03-19 09:43:45 +00:00
Rock Chin
e518d172d7 Merge pull request #304 from RockChinQ/bd-check-exception
[Perf] 百度云审核的异常处理
2023-03-19 17:13:37 +08:00
Rock Chin
af29277acd feat: 长消息检查函数不再检查敏感词 2023-03-19 09:06:32 +00:00
Rock Chin
79bfa0792d feat: 删除print调试信息 2023-03-19 08:45:54 +00:00
Rock Chin
cf23c5d31c Release v2.2.4 2023-03-19 08:38:07 +00:00
Rock Chin
84418a296b doc: 完善pr模板 2023-03-19 08:37:23 +00:00
Rock Chin
5f83cc6bb7 Merge pull request #300 from RockChinQ/token-process
[Perf] Tokens相关处理逻辑优化
2023-03-19 16:35:25 +08:00
Rock Chin
cde168c93c doc: full_scenario的编写教程 (#301) 2023-03-19 08:32:34 +00:00
Rock Chin
fed24c0748 doc: 添加chordfish-k/QChartGPT_Emoticon_Plugin 2023-03-19 13:35:20 +08:00
Rock Chin
b45d11b3c3 Update pull_request_template.md 2023-03-19 11:28:38 +08:00
Rock Chin
84d9af69bb Update pull_request_template.md 2023-03-19 11:28:17 +08:00
Rock Chin
684d356646 Update pull_request_template.md 2023-03-19 11:17:07 +08:00
Rock Chin
975300c9fc Create pull_request_template.md 2023-03-19 11:15:45 +08:00
Rock Chin
ca349e33fc feat: 实现新的前文剪切模式 2023-03-18 15:57:28 +00:00
Rock Chin
ccf62fe95c doc: 致谢GPT-4内测提供者 2023-03-18 22:28:06 +08:00
Rock Chin
d056cb6769 feat: 数据库接口支持 2023-03-18 12:57:36 +00:00
Rock Chin
b0016eebf9 feat: 添加override-all.json 2023-03-18 20:44:14 +08:00
Rock Chin
0490ad9207 test: token计数测试 2023-03-18 11:26:18 +00:00
Rock Chin
4a20ae236b doc: README.md格式错误 2023-03-18 09:15:26 +00:00
Rock Chin
9be1c7fc6f doc: 添加WaitYiYan插件链接 2023-03-18 08:17:51 +00:00
Rock Chin
5621d32b30 doc: GPT-4说明 2023-03-18 04:42:46 +00:00
Rock Chin
b7642fe876 feat: 支持GPT-4 API 2023-03-18 04:38:48 +00:00
Rock Chin
c842485d33 perf: 尝试安装依赖时的逻辑 2023-03-17 07:49:27 +00:00
Rock Chin
341444ef1c chore: 添加devcontainer配置 2023-03-17 07:39:16 +00:00
Rock Chin
66f5a219d2 feat: 不再提示InvalidRequestError的可能原因 2023-03-16 21:10:10 +08:00
Rock Chin
cf678aa345 feat: 修改日志初始化顺序 2023-03-16 20:55:57 +08:00
Rock Chin
d1549b3df0 chore: 代码格式优化 2023-03-16 20:22:18 +08:00
Rock Chin
002919fffe doc: 优化README.md格式 2023-03-16 19:38:35 +08:00
Rock Chin
087d097204 feat: 不再默认提供max_tokens 2023-03-16 13:37:48 +08:00
Rock Chin
ca4eeda6f0 doc: 添加oliverkirk-sudo的文字转语音插件 2023-03-16 09:08:00 +08:00
Rock Chin
94543a4708 Merge pull request #282 from systemtang/bugfix
[Feat] 修复usage命令的代理问题
2023-03-16 08:53:25 +08:00
Rock Chin
d4738dfb46 Release v2.2.3 2023-03-15 22:50:40 +08:00
Rock Chin
3bdf6810aa fix: 消息处理时的错误 2023-03-15 22:47:20 +08:00
systemt
f489c2f3b4 修复usage命令的代理问题 2023-03-15 21:04:55 +08:00
Rock Chin
a724bfe155 Release v2.2.2 2023-03-15 20:39:10 +08:00
Rock Chin
179a372bfe feat: 更改到process.py处理长消息 2023-03-15 20:33:44 +08:00
Rock Chin
651d765ab0 doc: 添加New Bing说明 2023-03-15 17:33:31 +08:00
Rock Chin
7ddc853f63 chore: 忽略保存的公告 2023-03-15 15:50:14 +08:00
Rock Chin
1bd1bfc725 chore: 删除测试公告 2023-03-15 15:47:24 +08:00
Rock Chin
f6ec0fda7a Merge pull request #280 from RockChinQ/announcement
[Feat] 添加公告输出功能
2023-03-15 15:46:58 +08:00
Rock Chin
7be368ae8c feat: 添加公告功能 2023-03-15 15:43:36 +08:00
Rock Chin
f67db2617b debug: 测试公告内容1 2023-03-15 15:37:07 +08:00
Rock Chin
ed5bf8100f chore: 添加公告内容 2023-03-15 15:22:19 +08:00
Rock Chin
0ef8a1c9ae chore: 为new bing忽略cookies.json 2023-03-15 11:24:45 +08:00
Rock Chin
32460cbf78 doc: 添加GPT-4公告 2023-03-15 11:04:10 +08:00
Rock Chin
6f6c9c222c doc: 添加网页版GPT-4说明 2023-03-15 10:57:29 +08:00
Rock Chin
438d0ed1ea Merge pull request #277 from zyckk4/dev
chore: 去除多余import
2023-03-14 13:11:47 +08:00
zyckk4
3ef1c71cad chore: 去除多余import 2023-03-14 13:03:50 +08:00
Rock Chin
aaadf6b8ba doc: 部署方式依赖项指令 2023-03-14 10:57:02 +08:00
Rock Chin
6af614f319 doc: 整理致谢列表 2023-03-14 10:54:46 +08:00
Rock Chin
c75dbd67df doc: 整理致谢列表 2023-03-14 10:53:32 +08:00
Rock Chin
dc3d186e2a Merge pull request #274 from RockChinQ/dependabot/pip/openai-approx-eq-0.27.2
chore(deps): update openai requirement from ~=0.27.0 to ~=0.27.2
2023-03-13 17:48:10 +08:00
dependabot[bot]
44550feddd chore(deps): update openai requirement from ~=0.27.0 to ~=0.27.2
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.0...v0.27.2)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-13 09:05:46 +00:00
Rock Chin
a0810d5f63 Merge pull request #271 from RockChinQ/config-covering
feat: 支持json格式的配置文件 (#265)
2023-03-13 11:05:11 +08:00
Rock Chin
cfc97fb22d feat: 支持json格式的配置文件 (#265) 2023-03-13 10:58:15 +08:00
Rock Chin
d67dbe8062 doc: 添加JSON格式情景预设的说明 2023-03-13 10:31:21 +08:00
Rock Chin
e89035e11c Release v2.2.1 2023-03-12 22:43:39 +08:00
Rock Chin
2ea711e629 fix: 更新包中包含新文件时更新失败 2023-03-12 22:43:02 +08:00
Rock Chin
a716f071be Release v2.2.0 2023-03-12 20:48:15 +08:00
Rock Chin
3450a91824 Merge pull request #262 from chordfish-k/json_scenario
[Feat] 情景预设(人格)完善
2023-03-12 20:40:20 +08:00
Rock Chin
d2c2b457e5 fix: !list指令显示的是机器人第一次回复 (#264) 2023-03-12 20:31:28 +08:00
Rock Chin
9cd7e49804 feat: 分离储存会话情景预设和对话内容 2023-03-11 23:44:22 +08:00
Rock Chin
e9155e836f feat: 允许通过前缀指定使用的JSON情景 2023-03-10 23:49:41 +08:00
Rock Chin
ed248539c7 doc: 标记二群已满 2023-03-10 23:33:54 +08:00
Rock Chin
54cc75506f feat: 使用模板储存默认的json格式的情景预设 2023-03-10 23:26:36 +08:00
Rock Chin
4269c7927e chore: typo 2023-03-10 23:14:32 +08:00
Rock Chin
064ac7f603 feat: 添加窗口处于暂停模式的提示 2023-03-10 22:50:07 +08:00
Rock Chin
48ccf15273 Merge pull request #263 from RockChinQ/history-deletion
[Feat] 支持删除指定当前会话的指定或全部历史记录 (#239)
2023-03-10 22:40:19 +08:00
Rock Chin
b920ced6d4 feat: !delhst 指令支持管理员删除会话历史记录 2023-03-10 21:20:19 +08:00
Rock Chin
69610a674c perf: 更改help中指令信息帮助 2023-03-10 21:11:55 +08:00
Rock Chin
1828e34190 feat: 支持删除指定当前会话的指定或全部历史记录 (#239) 2023-03-10 21:04:20 +08:00
chordfish
d53f4e3917 adjust:修改,去除neko.json,以及一些占位的变量等 2023-03-10 19:37:29 +08:00
chordfish
01706d5b4e Delete mesugaki.json 2023-03-10 16:17:47 +08:00
chordfish
8916b8a450 Update manager.py 2023-03-10 16:17:07 +08:00
chordfish
ed33af5638 Update README.md 2023-03-10 14:10:54 +08:00
chordfish
c94a9e1ae6 bug:修复上次更新后不响应的问题 2023-03-10 13:55:56 +08:00
chordfish
e2e93afd06 bug:修复上次更新后不响应的问题 2023-03-10 13:03:25 +08:00
chordfish
a810158d5b bug:修复上次更新后不响应的问题 2023-03-10 12:43:07 +08:00
chordfish
5a5ebb95fc bug:修复上次更新后不响应的问题 2023-03-10 12:35:58 +08:00
chordfish
61dd9e29c0 Merge branch 'master' into json_scenario 2023-03-10 10:20:18 +08:00
chordfish
ac65d81ba1 adjust:整理代码,仅添加json方式的prompt读取 2023-03-10 10:13:40 +08:00
LINSTCL
3aca987176 暴力修复程序无法退出的bug 2023-03-10 09:35:59 +08:00
chordfish
7288d3cb15 删除一部分注释和调试信息 2023-03-09 21:20:59 +08:00
chordfish
7477c7c67f 删除一部分注释和调试内容 2023-03-09 21:16:15 +08:00
chordfish
453952859e Merge branch 'full_scenario' 2023-03-09 21:08:47 +08:00
chordfish
85d46089e3 已按要求修改 2023-03-09 19:53:31 +08:00
chordfish
3b55f706de 修正防人格否定的一个Bug 2023-03-09 19:22:37 +08:00
chordfish
f448276423 Merge commit '830ee704da0903a8922dc757381cdf6fd68870a3' 2023-03-09 18:34:03 +08:00
chordfish
830ee704da Bug修复 2023-03-09 18:32:39 +08:00
chordfish
393369e446 Merge branch 'master' of https://github.com/chordfish-k/QChatGPT 2023-03-09 18:29:37 +08:00
chordfish
2cc6a09905 Bug修复 2023-03-09 18:29:31 +08:00
chordfish
d7d9d88e16 适配线程版本 2023-03-09 17:56:57 +08:00
chordfish
357d6aaf75 更新配置文件 2023-03-09 15:52:18 +08:00
chordfish
8059c422e3 Update README.md 2023-03-09 14:48:21 +08:00
chordfish
b336e1334d Update README.md 2023-03-09 14:47:39 +08:00
chordfish
12a0942ddb 初步追加通过json导入messages数组的方式进行情景预设 2023-03-09 14:44:33 +08:00
Rock Chin
7e5a77f77e doc: 添加致谢https://github.com/qq255204159 2023-03-08 16:16:41 +08:00
LINSTCL
e0caeb5dd2 Fix bugs 2023-03-08 16:08:09 +08:00
LINSTCL
77076f3bdd 添加线程控制类,修改main结构,修改启动流程 2023-03-08 15:21:37 +08:00
Rock Chin
2933d4843f Release v2.1.4 2023-03-07 08:50:43 +08:00
Rock Chin
c5de978098 Merge pull request #236 from RockChinQ/fix-234
[Fix] !reload 重新加载以后首次对话报错
2023-03-07 08:47:53 +08:00
Rock Chin
8b9cfab072 doc(main.py): 优化注释 2023-03-07 08:46:20 +08:00
Rock Chin
ea5f3c222f fix: 修改主线程main流程以初步修复 2023-03-06 20:53:40 +08:00
Rock Chin
36bcbca15b Merge pull request #233 from RockChinQ/respond-rule
[Feat] 支持设置不响应群内at消息及随机响应
2023-03-06 17:55:19 +08:00
Rock Chin
2b2060e71b feat: 支持设置不响应群内at消息;支持设置随机响应概率 2023-03-06 17:50:34 +08:00
Rock Chin
451688f2df Merge pull request #232 from RockChinQ/sensitive-mask
[Feat] 支持更换敏感词的掩盖字符
2023-03-06 15:27:15 +08:00
Rock Chin
d993852de7 feat: 支持将敏感词替换成整个字符串 2023-03-06 15:26:06 +08:00
Rock Chin
9d73770a4e feat: 支持更换敏感词的掩盖字符 2023-03-06 15:07:10 +08:00
Rock Chin
2541acf9d2 fix: 赞赏码base64值错误 2023-03-06 14:16:25 +08:00
Rock Chin
a1bfbad24e Release v2.1.3 2023-03-06 12:41:35 +08:00
Rock Chin
8af4918048 Merge pull request #230 from LINSTCL/config_integrity_check
添加配置文件完整性校验
2023-03-06 12:35:59 +08:00
Rock Chin
49f4ab0ec8 perf: 完整性检查忽略__开头的属性 2023-03-06 12:34:08 +08:00
LINSTCL
85c623fb0f 修改提示逻辑 2023-03-06 11:27:16 +08:00
Rock Chin
9e28298250 perf: 完善未启动情况下的自动更新 2023-03-06 11:18:31 +08:00
Rock Chin
7a04ef0985 feat: 未启动状态下的自动更新 (#223) 2023-03-06 11:04:25 +08:00
LINSTCL
83005e9ba9 添加配置文件完整性校验 2023-03-06 09:40:33 +08:00
Rock Chin
f0c78f0529 Merge pull request #222 from LINSTCL/threadpool-optimization
使用线程池控制线程数量,防止高并发崩溃
2023-03-06 08:51:47 +08:00
Rock Chin
3f638adcf9 perf(qqbot/manager.py): 优化控制台日志显示 2023-03-06 08:50:28 +08:00
Rock Chin
d9405d8d5d fix: main.py的字段版本兼容性问题 2023-03-06 08:48:50 +08:00
Rock Chin
606713a418 Merge pull request #228 from yichuxue/patch-1
启动时,更新openai和pillow库超时问题
2023-03-06 08:44:29 +08:00
Rock Chin
52102f0d0a feat(deps): trusted-host参数 2023-03-06 08:43:51 +08:00
Rock Chin
61c29829ed Release v2.1.2 2023-03-06 08:35:04 +08:00
依初雪
df30931aad 启动openai和pillow库超时问题
Main changes:
1. When the ensure_dependencies function upgrades packages and the request times out, specify https://pypi.douban.com/simple/ as the package index.
2023-03-06 00:32:46 +08:00
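
As a rough illustration of the change described above (the project's actual logic lives in its `ensure_dependencies` function and may differ), the equivalent manual upgrade through the mirror named in the commit, including the `--trusted-host` parameter added in commit 52102f0d0a above, looks like this:

```bash
# Upgrade the two libraries through the Douban PyPI mirror; --trusted-host suppresses
# certificate errors for that index host. Mirror choice is illustrative.
pip3 install --upgrade openai pillow \
    -i https://pypi.douban.com/simple/ \
    --trusted-host pypi.douban.com
```
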
Rock Chin
5afcc03e8b fix: 错误的!version指令处理逻辑 2023-03-05 20:07:08 +08:00
Rock Chin
fbeb4673f4 Merge pull request #226 from RockChinQ/text2img-perf
[Feat] 不再自带字体文件
2023-03-05 19:59:16 +08:00
Rock Chin
4aba319560 fix: 错误的加载过程 2023-03-05 19:57:39 +08:00
Rock Chin
74f79e002c perf: 优化字体加载过程 2023-03-05 19:54:51 +08:00
Rock Chin
2668ef2b3f feat: 不再自带字体文件 2023-03-05 19:36:09 +08:00
Rock Chin
74c018e271 Merge pull request #225 from RockChinQ/fix-switch-exce
[Fix] 修复插件开关问题
2023-03-05 17:36:03 +08:00
Rock Chin
64776fd601 doc: OpenAI注册教程链接 2023-03-05 16:47:42 +08:00
LINSTCL
59877bf71d 添加日志输出 2023-03-05 16:47:07 +08:00
LINSTCL
d2800ac58b 使用线程池控制线程数量,防止高并发崩溃 2023-03-05 16:41:12 +08:00
Rock Chin
ffef944119 fix: 热重载后插件开关状态被重置 (#177) 2023-03-05 16:04:45 +08:00
Rock Chin
651b291ef6 doc: 添加部分注释 2023-03-05 15:39:13 +08:00
Rock Chin
e4b581f197 doc: 致谢添加贡献者 2023-03-05 14:37:14 +08:00
Rock Chin
4f3939e2d9 Merge pull request #219 from LINSTCL/modelmgr_optimization
优化模型接口底层的异常处理
2023-03-05 14:18:24 +08:00
LINSTCL
1048ca612d 补充错误情况 2023-03-05 14:06:07 +08:00
LINSTCL
b1a2d21ee9 优化异常处理 2023-03-05 13:52:43 +08:00
Rock Chin
dd4e8bdc8b perf: 优化版本识别逻辑 2023-03-05 12:26:51 +08:00
Rock Chin
e28c9bae0c feat: 修改上报功能识别版本的逻辑 2023-03-05 12:21:28 +08:00
Rock Chin
5c10f520fb Merge pull request #215 from RockChinQ/semantic-versions
[Feature] 使用语义化版本进行更新
2023-03-05 12:17:43 +08:00
Rock Chin
f8abe90674 perf: 完善更新检查功能 2023-03-05 12:09:44 +08:00
Rock Chin
964ad42cb4 perf: 完善更新提示 2023-03-05 12:02:59 +08:00
Rock Chin
424b970469 feat: 将依赖检查更改到main流程中 2023-03-05 11:58:18 +08:00
Rock Chin
792366e221 feat: 支持基于语义化版本的自动更新 2023-03-05 11:56:40 +08:00
Rock Chin
79e970c4c3 chore(deps): 删除dulwich依赖 2023-03-05 11:09:24 +08:00
Rock Chin
d12acd5f31 chore: git忽略temp目录 2023-03-05 11:05:42 +08:00
Rock Chin
13e55e05a4 doc: 增加长消息处理功能 2023-03-05 10:54:00 +08:00
Rock Chin
9a7490bc2f feat: 支持拒绝回复包含敏感词的提问 (#210) 2023-03-05 10:49:07 +08:00
Rock Chin
a610a9d3d3 fix: 无法根据ban_person忽略群内指定人消息 (#211) 2023-03-05 10:33:16 +08:00
Rock Chin
56e906c83f feat: 删除sensitive.json以sensitive-template.json替换 2023-03-05 10:21:32 +08:00
Rock Chin
101f26e5a3 Merge pull request #212 from Haibersut/feat-baiducloud
增加百度云内容审核
2023-03-05 10:13:20 +08:00
Rock Chin
0bba205cf2 feat: 优化配置文件注释 2023-03-05 10:12:49 +08:00
Rock Chin
cc3beb191f fix: 百度云审核的配置低版本兼容 2023-03-05 09:54:44 +08:00
Haibersut
42f5092bb9 更新了日记级别
将错误信息调整为warning
2023-03-05 01:45:36 +08:00
Haibersut
bc6728d123 根据建议修改 2023-03-05 01:17:23 +08:00
Rock Chin
754278f80f feat: 启动时自动安装Pillow库 2023-03-05 00:09:31 +08:00
Rock Chin
c9c980b6fe Merge pull request #203 from RockChinQ/blob_message_strategy
[Feature] 长消息处理策略
2023-03-04 23:53:54 +08:00
Rock Chin
a457d13d2c perf: 优化图片渲染 2023-03-04 23:53:22 +08:00
Rock Chin
7440e9e5d2 fix(blob.py): 错误的图片压缩处理 2023-03-04 21:36:07 +08:00
Rock Chin
39d901a5cb feat: 支持将长消息转换成图片进行回复 2023-03-04 21:14:10 +08:00
Haibersut
2e1ebff985 change value name 2023-03-04 21:12:50 +08:00
Haibersut
b8ed9ba321 Update README.md 2023-03-04 21:08:48 +08:00
Haibersut
c89a8e1cd1 Update README.md 2023-03-04 21:06:58 +08:00
Haibersut
480d201c55 增加百度云内容审核 2023-03-04 21:02:10 +08:00
Rock Chin
a4b7d4a012 feat: 支持将长消息转换成转发消息组件发送 2023-03-04 13:53:18 +08:00
Rock Chin
7fe676712b perf: 删除配置模板冗余项 2023-03-04 11:16:20 +08:00
Rock Chin
552733129c feat: 配置文件中增加长消息处理策略字段 2023-03-04 10:36:43 +08:00
Rock Chin
a4d73090f8 feat: 默认在启动时更新openai依赖库 2023-03-04 10:16:47 +08:00
Rock Chin
7d39b72800 feat: 更改默认的max_tokens为1024 2023-03-03 21:18:31 +08:00
Rock Chin
f1e12563e9 feat(gather.py): 未设置版本时默认为undetermined 2023-03-03 21:15:26 +08:00
Rock Chin
0ac5e5b35e fix(session.py): 错误的undo()方法逻辑 2023-03-03 21:13:31 +08:00
Rock Chin
6b3f74a39a Merge branch 'master' of https://github.com/RockChinQ/QChatGPT 2023-03-03 20:53:23 +08:00
Rock Chin
3c3e2e86c3 doc: README.md中一览已适配的模型 2023-03-03 20:53:19 +08:00
Rock Chin
204a778db2 Create CONTRIBUTING.md 2023-03-03 19:48:55 +08:00
Rock Chin
3594e64bfc Merge pull request #200 from LINSTCL/enable-proxy
添加proxy正向代理功能
2023-03-03 15:23:28 +08:00
LINSTCL
c23d114094 proxy后向兼容,修复部分报错 2023-03-03 15:20:42 +08:00
Rock Chin
6cb3fdc7c9 doc: 添加三群群号 2023-03-03 14:33:10 +08:00
LINSTCL
c57642bd4e 添加proxy代理功能 2023-03-03 14:12:53 +08:00
Rock Chin
891ee0fac8 Update README.md 2023-03-03 09:32:26 +08:00
Rock Chin
1b69f0b668 doc: 整理README.md 2023-03-03 09:18:48 +08:00
Rock Chin
46b310ceb9 doc: 现已接入ChatGPT官方API 2023-03-03 00:35:15 +08:00
Rock Chin
85fe44ec92 Merge pull request #194 from LINSTCL/new-model-abstract
feat: 重构模型-接口抽象
feat: 适配官方GPT-3.5模型ChatCompletion接口
2023-03-03 00:33:00 +08:00
Rock Chin
fdcec0fbf7 doc: 致谢贡献者 2023-03-03 00:28:14 +08:00
Rock Chin
2664ea8622 feat: 删除config-template中对话角色的字段 2023-03-03 00:25:26 +08:00
Rock Chin
862724da74 doc: config-template.py添加模型参数说明 2023-03-03 00:23:44 +08:00
Rock Chin
a1c167fb7f feat: 功能完成 2023-03-03 00:21:16 +08:00
Rock Chin
adc2290fc1 Merge branch 'new-model-abstract' of https://github.com/LINSTCL/QChatGPT into new-model-abstract 2023-03-03 00:11:06 +08:00
Rock Chin
8713fd8130 feat: 完善会话处理的逻辑 2023-03-03 00:07:53 +08:00
LINSTCL
77df3d1ae5 修复使用文本完成模型生成对话型文本时输出随机AI名的问题 2023-03-02 23:50:51 +08:00
LINSTCL
2234e9db0e 修改对话拼接逻辑 2023-03-02 23:25:42 +08:00
Rock Chin
dd3d403de8 feat(modelmgr.py): 模型列表 2023-03-02 23:20:28 +08:00
Rock Chin
5364c36a79 feat(session.py): prompt默认值改为[] 2023-03-02 22:42:07 +08:00
Rock Chin
118fbe3f7d perf(modelmgr.py): 类名称强调其为一个请求对象 2023-03-02 19:50:31 +08:00
Rock Chin
61ec8e96f2 test: 模型-接口兼容性测试 2023-03-02 19:49:36 +08:00
LINSTCL
19289527ae 旧版本数据库兼容 2023-03-02 19:40:36 +08:00
Rock Chin
77fdd6ddb8 doc: 添加对官方ChatGPT API接入工作的说明 2023-03-02 18:15:13 +08:00
Rock Chin
f7830b5e9d feat(modelmgr.py): 完善可选模型列表 2023-03-02 17:57:39 +08:00
LINSTCL
13e5d76a44 修复模型切换角色改变引起的BUG 2023-03-02 16:52:23 +08:00
LINSTCL
7b8ad2e315 修复模型切换角色改变引起的BUG 2023-03-02 16:47:50 +08:00
Rock Chin
623f094e5b doc: 添加注释;完善格式 2023-03-02 16:41:03 +08:00
LINSTCL
fd25d61b56 重构了模型抽象,用来更好的支持gpt-3.5-turbo 2023-03-02 15:31:12 +08:00
110 changed files with 6112 additions and 1057 deletions


@@ -0,0 +1,34 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
    "name": "QChatGPT 3.10",
    // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
    "image": "mcr.microsoft.com/devcontainers/python:0-3.10",
    // Features to add to the dev container. More info: https://containers.dev/features.
    // "features": {},
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "pip3 install --user -r requirements.txt",
    // Configure tool-specific properties.
    // "customizations": {},
    "customizations": {
        "codespaces": {
            "repositories": {
                "RockChinQ/QChatGPT": {
                    "permissions": "write-all"
                },
                "RockChinQ/revLibs": {
                    "permissions": "write-all"
                }
            }
        }
    }
    // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
    // "remoteUser": "root"
}

.github/ISSUE_TEMPLATE/bug-report.yml

@@ -0,0 +1,51 @@
name: 漏洞反馈
description: 报错或漏洞请使用这个模板创建不使用此模板创建的异常、漏洞相关issue将被直接关闭
title: "[Bug]: "
labels: ["bug?"]
body:
  - type: dropdown
    attributes:
      label: 部署方式
      description: "主程序使用的部署方式"
      options:
        - 手动部署
        - 安装器部署
        - 一键安装包部署
        - Docker部署
    validations:
      required: true
  - type: dropdown
    attributes:
      label: 登录框架
      description: "连接QQ使用的框架"
      options:
        - Mirai
        - go-cqhttp
    validations:
      required: false
  - type: input
    attributes:
      label: 系统环境
      description: 操作系统、系统架构。
      placeholder: "例如: CentOS x64、Windows11"
    validations:
      required: true
  - type: input
    attributes:
      label: Python环境
      description: 运行程序的Python版本
      placeholder: "例如: Python 3.10"
    validations:
      required: true
  - type: textarea
    attributes:
      label: 异常情况
      description: 完整描述异常情况,什么时候发生的、发生了什么
    validations:
      required: true
  - type: textarea
    attributes:
      label: 报错信息
      description: 请提供完整的**控制台**报错信息(若有)
    validations:
      required: false


@@ -0,0 +1,21 @@
name: 需求建议
title: "[Feature]: "
labels: ["enhancement"]
description: "新功能或现有功能优化请使用这个模板不符合类别的issue将被直接关闭"
body:
  - type: dropdown
    attributes:
      label: 这是一个?
      description: 新功能建议还是现有功能优化
      options:
        - 新功能
        - 现有功能优化
    validations:
      required: true
  - type: textarea
    attributes:
      label: 详细描述
      description: 详细描述,越详细越好
    validations:
      required: true


@@ -1,24 +0,0 @@
---
name: 漏洞反馈
about: 报错或漏洞请使用这个模板创建
title: "[BUG]"
labels: 'bug'
assignees: ''
---
请认真按照实际情况填写以下信息!!!!
**运行环境**
- 部署方式:
手动部署/自动部署/Docker部署
- 系统环境:
例如: Centos x64
- Python环境仅手动部署填写
例如: Python 3.10.9
**描述漏洞**
什么时候发生的mirai还是主程序越详细越好
**完整报错信息**
完整的报错信息


@@ -1,10 +0,0 @@
---
name: 需求建议
about: 软件优化建议请使用这个模板创建
title: "[ENHANCE]"
labels: 'enhancement'
assignees: ''
---
不是需求建议请勿填写此模板!!!!

.github/pull_request_template.md

@@ -0,0 +1,25 @@
## 概述
实现/解决/优化的内容:
### 事务
- [ ] 已阅读仓库[贡献指引](https://github.com/RockChinQ/QChatGPT/blob/master/CONTRIBUTING.md)
- [ ] 已与维护者在issues或其他平台沟通此PR大致内容
## 以下内容可在起草PR后、合并PR前逐步完成
### 功能
- [ ] 已编写完善的配置文件字段说明(若有新增)
- [ ] 已编写面向用户的新功能说明(若有必要)
- [ ] 已测试新功能或更改
### 兼容性
- [ ] 已处理版本兼容性
- [ ] 已处理插件兼容问题
### 风险
可能导致或已知的问题:

.github/workflows/sync-wiki.yml

@@ -0,0 +1,33 @@
name: Update Wiki
on:
  push:
    paths:
      - 'res/wiki/**'
jobs:
  update-wiki:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Git
        run: |
          git config --global user.name "GitHub Actions"
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
      - name: Clone Wiki Repository
        uses: actions/checkout@v2
        with:
          repository: RockChinQ/QChatGPT.wiki
          path: wiki
      - name: Copy res/wiki content to wiki
        run: |
          cp -r res/wiki/* wiki/
      - name: Commit and Push Changes
        run: |
          cd wiki
          if git diff --name-only; then
            git add .
            git commit -m "Update wiki"
            git push
          fi


@@ -0,0 +1,58 @@
name: Update cmdpriv-template
on:
  push:
    paths:
      - 'pkg/qqbot/cmds/**'
  pull_request:
    types: [closed]
    paths:
      - 'pkg/qqbot/cmds/**'
jobs:
  update-cmdpriv-template:
    if: github.event.pull_request.merged == true || github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.x
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install --upgrade yiri-mirai openai colorlog func_timeout dulwich Pillow
      - name: Copy Scripts
        run: |
          cp res/scripts/generate_cmdpriv_template.py .
      - name: Generate Files
        run: |
          python main.py
      - name: Run generate_cmdpriv_template.py
        run: python3 generate_cmdpriv_template.py
      - name: Check for changes in cmdpriv-template.json
        id: check_changes
        run: |
          if git diff --name-only | grep -q "res/templates/cmdpriv-template.json"; then
            echo "::set-output name=changes_detected::true"
          else
            echo "::set-output name=changes_detected::false"
          fi
      - name: Commit changes to cmdpriv-template.json
        if: steps.check_changes.outputs.changes_detected == 'true'
        run: |
          git config --global user.name "GitHub Actions Bot"
          git config --global user.email "<github-actions@github.com>"
          git add res/templates/cmdpriv-template.json
          git commit -m "Update cmdpriv-template.json"
          git push


@@ -0,0 +1,53 @@
name: Check and Update override_all
on:
  push:
    paths:
      - 'config-template.py'
  pull_request:
    types:
      - closed
    branches:
      - master
    paths:
      - 'config-template.py'
jobs:
  update-override-all:
    name: check and update
    if: github.event.pull_request.merged == true || github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.x
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          # 在此处添加您的项目所需的其他依赖
      - name: Copy Scripts
        run: |
          cp res/scripts/generate_override_all.py .
      - name: Run generate_override_all.py
        run: python3 generate_override_all.py
      - name: Check for changes in override-all.json
        id: check_changes
        run: |
          git diff --exit-code override-all.json || echo "::set-output name=changes_detected::true"
      - name: Commit and push changes
        if: steps.check_changes.outputs.changes_detected == 'true'
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "GitHub Actions"
          git add override-all.json
          git commit -m "Update override-all.json"
          git push

.gitignore

@@ -3,10 +3,23 @@ config.py
__pycache__/
database.db
qchatgpt.log
config.py
/banlist.py
plugins/
!plugins/__init__.py
/revcfg.py
prompts/
logs/
logs/
sensitive.json
temp/
current_tag
scenario/
!scenario/default-template.json
override.json
cookies.json
res/announcement_saved
res/announcement_saved.json
cmdpriv.json
tips.py
.venv
bin/
.vscode

CONTRIBUTING.md

@@ -0,0 +1,19 @@
## 参与项目
欢迎为此项目贡献代码或其他支持,以使您的点子或众人期待的功能成为现实,助力社区成长。
### 贡献形式
- 提交PR解决issues中提到的bug或期待的功能
- 提交PR实现您设想的功能请先提出issue与作者沟通
- 优化代码架构,使各个模块的组织更加整洁优雅
- 在issues中提出发现的bug或者期待的功能
- 为本项目在其他社交平台撰写文章、制作视频等
- 为本项目的衍生项目作出贡献,或开发插件增加功能
### 如何开始
- 加入本项目交流群,一同探讨项目相关事务
- 解决本项目或衍生项目的issues中亟待解决的问题
- 阅读并完善本项目文档
- 在各个社交媒体撰写本项目教程等

Dockerfile

@@ -0,0 +1,17 @@
FROM python:3.9-slim
WORKDIR /QChatGPT
RUN sed -i "s/deb.debian.org/mirrors.tencent.com/g" /etc/apt/sources.list \
&& sed -i 's|security.debian.org/debian-security|mirrors.tencent.com/debian-security|g' /etc/apt/sources.list \
&& apt-get clean \
&& apt-get update \
&& apt-get -y upgrade \
&& apt-get install -y git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY . /QChatGPT/
RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
CMD [ "python", "main.py" ]
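
A minimal build-and-run sketch for the Dockerfile above; the image tag and the mounted `config.py` path are illustrative placeholders, and res/docs/docker_deploy.md remains the authoritative instructions.

```bash
# Build the image from the repository root, then run it with a host-side config.py
# mounted into the container at /QChatGPT/config.py (the WORKDIR set above).
docker build -t qchatgpt .
docker run -d --name qchatgpt \
  -v ${PWD}/config.py:/QChatGPT/config.py \
  qchatgpt
```
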

README.md

@@ -1,36 +1,70 @@
# QChatGPT🤖
### 🎉现已支持接入ChatGPT网页版详情请完成部署并查看底部**插件**小节或[此仓库](https://github.com/RockChinQ/revLibs)
<p align="center">
<img src="res/social.png" alt="QChatGPT" width="640" />
</p>
[English](README_en.md) | 简体中文
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/QChatGPT?style=flat-square)](https://github.com/RockChinQ/QChatGPT/releases/latest)
> 2023/4/24 支持使用go-cqhttp登录QQ请查看[此文档](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE)
> 2023/3/18 现已支持GPT-4 API内测请查看`config-template.py`中的`completion_api_params`
> 2023/3/15 逆向库已支持New Bing使用方法查看[插件文档](https://github.com/RockChinQ/revLibs)
- 到[项目Wiki](https://github.com/RockChinQ/QChatGPT/wiki)可了解项目详细信息
- 由bilibili TheLazy制作的[视频教程](https://www.bilibili.com/video/BV15v4y1X7aP)
- 测试号: 2196084348已加载逆向库插件、每分钟限速、~~1480613886已加载逆向库插件~~(被封)
- 交流、答疑群: ~~204785790~~已满、691226829
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- 官方交流、答疑群: 656285629
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- 社区群(内有一键部署包、图形化界面等资源): 362515018
- QQ频道机器人见[QQChannelChatGPT](https://github.com/Soulter/QQChannelChatGPT)
- 欢迎各种形式的贡献,请查看[贡献指引](CONTRIBUTING.md)
通过调用OpenAI GPT-3模型提供的Completion API来实现一个更加智能的QQ机器人
## 🍺模型适配一览
<details>
<summary>点击此处展开</summary>
### 文字对话
- OpenAI GPT-3.5模型(ChatGPT API), 本项目原生支持, 默认使用
- OpenAI GPT-3模型, 本项目原生支持, 部署完成后前往`config.py`切换
- OpenAI GPT-4模型, 本项目原生支持, 目前需要您的账户通过OpenAI的内测申请, 请前往`config.py`切换
- ChatGPT网页版GPT-3.5模型, 由[插件](https://github.com/RockChinQ/revLibs)接入
- ChatGPT网页版GPT-4模型, 目前需要ChatGPT Plus订阅, 由[插件](https://github.com/RockChinQ/revLibs)接入
- New Bing逆向库, 由[插件](https://github.com/RockChinQ/revLibs)接入
### 故事续写
- NovelAI API, 由[插件](https://github.com/dominoar/QCPNovelAi)接入
### 图片绘制
- OpenAI DALL·E模型, 本项目原生支持, 使用方法查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
- NovelAI API, 由[插件](https://github.com/dominoar/QCPNovelAi)接入
### 语音生成
- TTS+VITS, 由[插件](https://github.com/dominoar/QChatPlugins)接入
- Plachta/VITS-Umamusume-voice-synthesizer, 由[插件](https://github.com/oliverkirk-sudo/chat_voice)接入
</details>
安装[此插件](https://github.com/RockChinQ/Switcher),即可在使用中切换文字模型。
## ✅功能
<details>
<summary>✅回复符合上下文</summary>
- 程序向模型发送近几次对话内容,模型根据上下文生成回复
- 您可在`config.py`中修改`prompt_submit_length`自定义联系上下文的范围
</details>
<summary>点击此处展开概述</summary>
<details>
<summary>✅支持敏感词过滤,避免账号风险</summary>
- 难以监测机器人与用户对话时的内容,故引入此功能以减少机器人风险
- 加入了百度云内容审核,在`config.py`中修改`baidu_check`的值,并填写`baidu_api_key``baidu_secret_key`以开启此功能
- 编辑`sensitive.json`,并在`config.py`中修改`sensitive_word_filter`的值以开启此功能
</details>
<details>
<summary>✅群内多种响应规则不必at</summary>
@@ -38,14 +72,6 @@
- 详细见`config.py`中的`response_rules`字段
</details>
<details>
<summary>✅使用官方api不需要网络代理稳定快捷</summary>
- 不使用ChatGPT逆向接口而使用官方的Completion API稳定性高
- 您可以在`config.py`中自定义`completion_api_params`字段设置向官方API提交的参数以自定义机器人的风格
</details>
<details>
<summary>✅完善的多api-key管理超额自动切换</summary>
@@ -55,13 +81,6 @@
- 运行期间向机器人说`!usage`以查看当前使用情况
</details>
<details>
<summary>✅组件少部署方便提供一键安装器及Docker安装</summary>
- 手动部署步骤少
- 提供自动安装器及docker方式详见以下安装步骤
</details>
<details>
<summary>✅支持预设指令文字</summary>
@@ -70,13 +89,6 @@
- 支持设置多个预设情景,并通过!reset、!default等指令控制详细请查看[wiki指令](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4)
</details>
<details>
<summary>✅完善的会话管理,重启不丢失</summary>
- 使用SQLite进行会话内容持久化
- 最后一次对话一定时间后自动保存,请到`config.py`中修改`session_expire_time`的值以自定义时间
- 运行期间可使用`!reset` `!list` `!last` `!next` `!prompt`等指令管理会话
</details>
<details>
<summary>✅支持对话、绘图等模型,可玩性更高</summary>
@@ -102,6 +114,12 @@
- 详见Wiki`加入黑名单`
</details>
<details>
<summary>✅长消息处理策略</summary>
- 支持将长消息转换成图片或消息记录组件,避免消息刷屏
- 请查看`config.py``blob_message_strategy`等字段
</details>
<details>
<summary>✅回复速度限制</summary>
- 支持限制单会话内每分钟可进行的对话次数
@@ -110,19 +128,42 @@
- “丢弃”策略:此分钟内对话次数达到限制时,丢弃之后的对话
- 详细请查看config.py中的相关配置
</details>
<details>
<summary>✅支持使用网络代理</summary>
- 目前已支持正向代理访问接口
- 详细请查看config.py中的`openai_config`的说明
</details>
<details>
<summary>✅支持自定义提示内容</summary>
- 允许用户自定义报错、帮助等提示信息
- 请查看`tips.py`
</details>
### 🏞️截图
<img alt="私聊GPT-3.5" src="res/screenshots/person_gpt3.5.png" width="400"/>
<br/>
<img alt="群聊GPT-3.5" src="res/screenshots/group_gpt3.5.png" width="400"/>
<br/>
<img alt="New Bing" src="res/screenshots/person_newbing.png" width="400"/>
</details>
详情请查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
## 🔩部署
**部署过程中遇到任何问题,请先在[QChatGPT](https://github.com/RockChinQ/QChatGPT/issues)或[qcg-installer](https://github.com/RockChinQ/qcg-installer/issues)的issue里进行搜索**
**部署过程中遇到任何问题,请先在[QChatGPT](https://github.com/RockChinQ/QChatGPT/issues)或[qcg-installer](https://github.com/RockChinQ/qcg-installer/issues)的issue里进行搜索**
### - 注册OpenAI账号
**可以直接进群找群主购买**
或参考以下文章自行注册
> 若您要直接使用非OpenAI的模型如New Bing可跳过此步骤直接进行之后的部署完成后按照相关插件的文档进行配置即可
> ~~[只需 1 元搞定 ChatGPT 注册](https://zhuanlan.zhihu.com/p/589470082)~~(已失效)
参考以下文章自行注册
> [国内注册ChatGPT的方法(100%可用)](https://www.pythonthree.com/register-openai-chatgpt/)
> [手把手教你如何注册ChatGPT超级详细](https://guxiaobei.com/51461)
注册成功后请前往[个人中心查看](https://beta.openai.com/account/api-keys)api_key
@@ -135,9 +176,11 @@
#### Docker方式
请查看此仓库[mikumifa/QChatGPT-Docker-Installer](https://github.com/mikumifa/QChatGPT-Docker-Installer)
请查看[此文档](res/docs/docker_deploy.md)
由[@mikumifa](https://github.com/mikumifa)贡献
#### 安装器方式
使用[此安装器](https://github.com/RockChinQ/qcg-installer)(若无法访问请到[Gitee](https://gitee.com/RockChin/qcg-installer))进行部署
- 安装器目前仅支持部分平台,请到仓库文档查看,其他平台请手动部署
@@ -148,17 +191,31 @@
<details>
<summary>手动部署适用于所有平台</summary>
- 请使用Python 3.9.x以上版本
- 请注意OpenAI账号额度消耗
- 每个账户仅有18美元免费额度如未绑定银行卡则会在超出时报错
- OpenAI收费标准默认使用的`text-davinci-003`模型 0.02美元/千字
- 请使用Python 3.9.x以上版本
#### 配置Mirai
#### ① 配置QQ登录框架
按照[此教程](https://yiri-mirai.wybxc.cc/tutorials/01/configuration)配置Mirai及YiriMirai
启动mirai-console后使用`login`命令登录QQ账号保持mirai-console运行状态
目前支持mirai和go-cqhttp配置任意一个即可
#### 配置主程序
<details>
<summary>mirai</summary>
1. 按照[此教程](https://yiri-mirai.wybxc.cc/tutorials/01/configuration)配置Mirai及mirai-api-http
2. 启动mirai-console后使用`login`命令登录QQ账号保持mirai-console运行状态
3. 在下一步配置主程序时请在config.py中将`msg_source_adapter`设为`yirimirai`
</details>
<details>
<summary>go-cqhttp</summary>
1. 按照[此文档](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE)配置go-cqhttp
2. 启动go-cqhttp确保登录成功保持运行
3. 在下一步配置主程序时请在config.py中将`msg_source_adapter`设为`nakuru`
</details>
#### ② 配置主程序
1. 克隆此项目
@@ -170,8 +227,7 @@ cd QChatGPT
2. 安装依赖
```bash
pip3 install yiri-mirai openai colorlog func_timeout
pip3 install dulwich
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow nakuru-project-idk
```
3. 运行一次主程序,生成配置文件
@@ -194,7 +250,7 @@ python3 main.py
**常见问题**
- mirai登录提示`QQ版本过低`,见[此issue](https://github.com/RockChinQ/QChatGPT/issues/38)
- mirai登录提示`QQ版本过低`,见[此issue](https://github.com/RockChinQ/QChatGPT/issues/137)
- 如提示安装`uvicorn``hypercorn`请*不要*安装这两个不是必需的目前存在未知原因bug
- 如报错`TypeError: As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary`, 请参考 [此处](https://github.com/RockChinQ/QChatGPT/issues/5)
@@ -202,7 +258,8 @@ python3 main.py
## 🚀使用
查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E4%BD%BF%E7%94%A8%E6%96%B9%E5%BC%8F)
**部署完成后必看: [指令说明](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4)**
所有功能查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E4%BD%BF%E7%94%A8%E6%96%B9%E5%BC%8F)
## 🧩插件生态
@@ -210,6 +267,9 @@ python3 main.py
详见[Wiki插件使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
开发教程见[Wiki插件开发页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91)
<details>
<summary>查看插件列表</summary>
### 示例插件
`tests/plugin_examples`目录下,将其整个目录复制到`plugins`目录下即可使用
@@ -222,20 +282,28 @@ python3 main.py
欢迎提交新的插件
- [revLibs](https://github.com/RockChinQ/revLibs) - 将ChatGPT网页版接入此项目关于[官方接口和网页版有什么区别](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E4%B8%8EChatGPT%E7%BD%91%E9%A1%B5%E7%89%88)
- [revLibs](https://github.com/RockChinQ/revLibs) - 将ChatGPT网页版接入此项目关于[官方接口和网页版有什么区别](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E3%80%81ChatGPT%E7%BD%91%E9%A1%B5%E7%89%88%E3%80%81ChatGPT-API%E5%8C%BA%E5%88%AB)
- [Switcher](https://github.com/RockChinQ/Switcher) - 支持通过指令切换使用的模型
- [hello_plugin](https://github.com/RockChinQ/hello_plugin) - `hello_plugin` 的储存库形式,插件开发模板
- [dominoar/QchatPlugins](https://github.com/dominoar/QchatPlugins) - dominoar编写的诸多新功能插件输出、Ranimg、屏蔽词规则等
- [dominoar/QChatPlugins](https://github.com/dominoar/QchatPlugins) - dominoar编写的诸多新功能插件输出、Ranimg、屏蔽词规则等
- [dominoar/QCP-NovelAi](https://github.com/dominoar/QCP-NovelAi) - NovelAI 故事叙述与绘画
- [oliverkirk-sudo/chat_voice](https://github.com/oliverkirk-sudo/chat_voice) - 文字转语音输出使用HuggingFace上的[VITS-Umamusume-voice-synthesizer模型](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer)
- [RockChinQ/WaitYiYan](https://github.com/RockChinQ/WaitYiYan) - 实时获取百度`文心一言`等待列表人数
- [chordfish-k/QChartGPT_Emoticon_Plugin](https://github.com/chordfish-k/QChartGPT_Emoticon_Plugin) - 使机器人根据回复内容发送表情包
- [oliverkirk-sudo/ChatPoeBot](https://github.com/oliverkirk-sudo/ChatPoeBot) - 接入[Poe](https://poe.com/)上的机器人
- [lieyanqzu/WeatherPlugin](https://github.com/lieyanqzu/WeatherPlugin) - 天气查询插件
</details>
## 😘致谢
- [@the-lazy-me](https://github.com/the-lazy-me) 为本项目制作[视频教程](https://www.bilibili.com/video/BV15v4y1X7aP)
- [@mikumifa](https://github.com/mikumifa) 本项目Docker部署仓库开发者
- [@dominoar](https://github.com/dominoar) 为本项目开发多种插件
- [@hissincn](https://github.com/hissincn) 本项目贡献者
- [@万神的星空](https://github.com/qq255204159) 整合包发行
- [@ljcduo](https://github.com/ljcduo) GPT-4 API内测账号提供
以及其他所有为本项目提供支持的朋友们。
以及所有[贡献者](https://github.com/RockChinQ/QChatGPT/graphs/contributors)和其他为本项目提供支持的朋友们。
## 👍赞赏
<!-- ## 👍赞赏
<img alt="赞赏码" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/>
<img alt="赞赏码" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/> -->

README_en.md

@@ -0,0 +1,213 @@
# QChatGPT🤖
<p align="center">
<img src="res/social.png" alt="QChatGPT" width="640" />
</p>
English | [简体中文](README.md)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/QChatGPT?style=flat-square)](https://github.com/RockChinQ/QChatGPT/releases/latest)
- Refer to [Wiki](https://github.com/RockChinQ/QChatGPT/wiki) to get further information.
- Official QQ group: 656285629
- Community QQ group: 362515018
- QQ channel robot: [QQChannelChatGPT](https://github.com/Soulter/QQChannelChatGPT)
- Any contribution is welcome, please refer to [CONTRIBUTING.md](CONTRIBUTING.md)
## 🍺List of supported models
<details>
<summary>Details</summary>
### Chat
- OpenAI GPT-3.5 (ChatGPT API), default model
- OpenAI GPT-3, supported natively, switch to it in `config.py`
- OpenAI GPT-4, supported natively, qualification for internal testing required, switch to it in `config.py`
- ChatGPT website edition (GPT-3.5), see [revLibs plugin](https://github.com/RockChinQ/revLibs)
- ChatGPT website edition (GPT-4), ChatGPT plus subscription required, see [revLibs plugin](https://github.com/RockChinQ/revLibs)
- New Bing, see [revLibs plugin](https://github.com/RockChinQ/revLibs)
### Story
- NovelAI API, see [QCPNovelAi plugin](https://github.com/dominoar/QCPNovelAi)
### Image
- OpenAI DALL·E, supported natively, see [Wiki(cn)](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
- NovelAI API, see [QCPNovelAi plugin](https://github.com/dominoar/QCPNovelAi)
### Voice
- TTS+VITS, see [QChatPlugins](https://github.com/dominoar/QChatPlugins)
- Plachta/VITS-Umamusume-voice-synthesizer, see [chat_voice plugin](https://github.com/oliverkirk-sudo/chat_voice)
</details>
Install this [plugin](https://github.com/RockChinQ/Switcher) to switch between different models.
## ✅Function Points
<details>
<summary>Details</summary>
- ✅Sensitive word filtering to avoid account bans
- ✅Multiple response rules, including regular-expression matching
- ✅Management of multiple api-keys, with automatic switching when a key's quota is exceeded
- ✅Customizable preset prompt text
- ✅Chat, story, image, voice and other models supported
- ✅Hot reloading and hot updating
- ✅Plugin loading
- ✅Blacklist mechanism for private and group chats
- ✅Robust long-message handling strategy
- ✅Reply rate limiting
- ✅Network proxy support
- ✅Customizable output format
</details>
For more details, see [Wiki(cn)](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
## 🔩Deployment
**If you encounter any problems during deployment, please search the issues of [QChatGPT](https://github.com/RockChinQ/QChatGPT/issues) or [qcg-installer](https://github.com/RockChinQ/qcg-installer/issues) first.**
### - Register OpenAI account
> If you want to use a model other than OpenAI's (such as New Bing), you can skip this step, follow the steps below directly, and then configure the bot according to the relevant plugin's documentation.
To register an OpenAI account, please refer to the following articles (in Chinese):
> [国内注册ChatGPT的方法(100%可用)](https://www.pythonthree.com/register-openai-chatgpt/)
> [手把手教你如何注册ChatGPT超级详细](https://guxiaobei.com/51461)
After registration, check your api-key in the [personal center](https://beta.openai.com/account/api-keys), then follow the steps below to deploy.
### - Deploy Automatically
<details>
<summary>Details</summary>
#### Docker
See [this document(cn)](res/docs/docker_deploy.md)
Contributed by [@mikumifa](https://github.com/mikumifa)
#### Installer
Use [this installer](https://github.com/RockChinQ/qcg-installer) to deploy.
- The installer currently supports only some platforms; refer to its repository documentation for details, and deploy manually on other platforms.
</details>
### - Deploy Manually
<details>
<summary>Manual deployment works on any platform</summary>
- Python 3.9.x or higher
#### Configure the QQ login framework
Both mirai and go-cqhttp are currently supported; configure either one.
<details>
<summary>mirai</summary>
Follow [this tutorial(cn)](https://yiri-mirai.wybxc.cc/tutorials/01/configuration) to configure Mirai and YiriMirai.
After starting mirai-console, use the `login` command to log in to the QQ account, and keep the mirai-console running.
</details>
<details>
<summary>go-cqhttp</summary>
1. Follow [this tutorial(cn)](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE) to configure go-cqhttp.
2. Start go-cqhttp, make sure it is logged in and running.
</details>
#### Configure QChatGPT
1. Clone the repository
```bash
git clone https://github.com/RockChinQ/QChatGPT
cd QChatGPT
```
2. Install dependencies
```bash
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow nakuru-project-idk
```
3. Generate `config.py`
```bash
python3 main.py
```
4. Edit `config.py` (a minimal example of the fields you need to fill in is sketched after this list)
5. Run
```bash
python3 main.py
```
If you run into any problems, please refer to the issues page.
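A minimal, illustrative excerpt of the fields you typically edit in step 4. The key names follow `config-template.py`; all values below are placeholders (not recommended defaults) and must be replaced with your own:

```python
# Illustrative excerpt of config.py -- placeholder values, adjust to your setup
msg_source_adapter = "yirimirai"      # or "nakuru" when connecting via go-cqhttp

mirai_http_api_config = {             # only needed for the "yirimirai" adapter
    "adapter": "WebSocketAdapter",
    "host": "localhost",
    "port": 8080,
    "verifyKey": "yirimirai",
    "qq": 1234567890                  # the bot account's QQ number
}

openai_config = {
    "api_key": {
        "default": "sk-..."           # your OpenAI api-key
    },
    "http_proxy": None,
    "reverse_proxy": None
}

admin_qq = 1234567890                 # administrator QQ number for notifications
```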
</details>
## 🚀Usage
**After deployment, please read: [Commands(cn)](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4)**
**For more details, please refer to the [Wiki(cn)](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E4%BD%BF%E7%94%A8%E6%96%B9%E5%BC%8F)**
## 🧩Plugin Ecosystem
Plugin [usage](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8) and [development](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91) are supported.
<details>
<summary>List of plugins (cn)</summary>
### Examples
`tests/plugin_examples`目录下,将其整个目录复制到`plugins`目录下即可使用
- `cmdcn` - 主程序指令中文形式
- `hello_plugin` - 在收到消息`hello`时回复相应消息
- `urlikethisijustsix` - 收到冒犯性消息时回复相应消息
### More Plugins
欢迎提交新的插件
- [revLibs](https://github.com/RockChinQ/revLibs) - 将ChatGPT网页版接入此项目关于[官方接口和网页版有什么区别](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E4%B8%8EChatGPT%E7%BD%91%E9%A1%B5%E7%89%88)
- [Switcher](https://github.com/RockChinQ/Switcher) - 支持通过指令切换使用的模型
- [hello_plugin](https://github.com/RockChinQ/hello_plugin) - `hello_plugin` 的储存库形式,插件开发模板
- [dominoar/QChatPlugins](https://github.com/dominoar/QchatPlugins) - dominoar编写的诸多新功能插件(语音输出、Ranimg、屏蔽词规则等)
- [dominoar/QCP-NovelAi](https://github.com/dominoar/QCP-NovelAi) - NovelAI 故事叙述与绘画
- [oliverkirk-sudo/chat_voice](https://github.com/oliverkirk-sudo/chat_voice) - 文字转语音输出使用HuggingFace上的[VITS-Umamusume-voice-synthesizer模型](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer)
- [RockChinQ/WaitYiYan](https://github.com/RockChinQ/WaitYiYan) - 实时获取百度`文心一言`等待列表人数
- [chordfish-k/QChartGPT_Emoticon_Plugin](https://github.com/chordfish-k/QChartGPT_Emoticon_Plugin) - 使机器人根据回复内容发送表情包
- [oliverkirk-sudo/ChatPoeBot](https://github.com/oliverkirk-sudo/ChatPoeBot) - 接入[Poe](https://poe.com/)上的机器人
- [lieyanqzu/WeatherPlugin](https://github.com/lieyanqzu/WeatherPlugin) - 天气查询插件
</details>
## 😘Thanks
- [@the-lazy-me](https://github.com/the-lazy-me) video tutorial creator
- [@mikumifa](https://github.com/mikumifa) Docker deployment
- [@dominoar](https://github.com/dominoar) Plugin development
- [@万神的星空](https://github.com/qq255204159) Packages publisher
- [@ljcduo](https://github.com/ljcduo) GPT-4 API internal test account
And all [contributors](https://github.com/RockChinQ/QChatGPT/graphs/contributors) and other friends who support this project.
<!-- ## 👍赞赏
<img alt="赞赏码" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/> -->


@@ -1,7 +1,13 @@
# 配置文件: 注释里标[必需]的参数必须修改, 其他参数根据需要修改, 但请勿删除
import logging
# [必需] Mirai的配置
# 消息处理协议适配器
# 目前支持以下适配器:
# - "yirimirai": mirai的通信框架YiriMirai框架适配器, 请同时填写下方mirai_http_api_config
# - "nakuru": go-cqhttp通信框架请同时填写下方nakuru_config
msg_source_adapter = "yirimirai"
# [必需(与nakuru二选一取决于msg_source_adapter)] Mirai的配置
# 请到配置mirai的步骤中的教程查看每个字段的信息
# adapter: 选择适配器目前支持HTTPAdapter和WebSocketAdapter
# host: 运行mirai的主机地址
@@ -18,8 +24,18 @@ mirai_http_api_config = {
"qq": 1234567890
}
# [必需(与mirai二选一取决于msg_source_adapter)]
# 使用nakuru-project框架连接go-cqhttp的配置
nakuru_config = {
"host": "localhost", # go-cqhttp的地址
"port": 6700, # go-cqhttp的正向websocket端口
"http_port": 5700, # go-cqhttp的正向http端口
"token": "" # 若在go-cqhttp的config.yml设置了access_token, 则填写此处
}
# [必需] OpenAI的配置
# api_key: OpenAI的API Key
# http_proxy: 请求OpenAI时使用的代理,None为不使用,https和socks5暂不能使用
# 若只有一个api-key,请直接修改以下内容中的"openai_api_key"为你的api-key
#
# 如准备了多个api-key,可以以字典的形式填写,程序会自动选择可用的api-key
@@ -30,11 +46,28 @@ mirai_http_api_config = {
# "key1": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
# "key2": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
# },
# "http_proxy": "http://127.0.0.1:12345"
# }
#
# 现已支持反向代理,可以添加reverse_proxy字段以使用反向代理
# 使用反向代理可以在国内使用OpenAI的API,反向代理的配置请参考
# https://github.com/Ice-Hazymoon/openai-scf-proxy
#
# 反向代理填写示例:
# openai_config = {
# "api_key": {
# "default": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
# "key1": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
# "key2": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
# },
# "reverse_proxy": "http://example.com:12345/v1"
# }
openai_config = {
"api_key": {
"default": "openai_api_key"
},
"http_proxy": None,
"reverse_proxy": None
}
# [必需] 管理员QQ号用于接收报错等通知及执行管理员级别指令
@@ -76,17 +109,50 @@ default_prompt = {
"default": "如果我之后想获取帮助,请你说“输入!help获取帮助”",
}
# 情景预设格式
# 参考值:默认方式 normal | 完整情景 full_scenario
# 默认方式 的格式为上述default_prompt中的内容,或prompts目录下的文件名
# 完整情景方式 的格式为JSON,在scenario目录下的JSON文件中列出对话的每个回合,编写方法见scenario/default-template.json
# 编写方法请查看https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E9%A2%84%E8%AE%BE%E6%96%87%E5%AD%97full_scenario%E6%A8%A1%E5%BC%8F
preset_mode = "normal"
# 群内响应规则
# 符合此规则的群内消息,即使不包含at机器人也会响应
# 支持消息前缀匹配及正则表达式匹配
# 支持设置是否响应at消息、随机响应概率
# 注意:由消息前缀(prefix)匹配的消息中将会删除此前缀,正则表达式(regexp)匹配的消息不会删除匹配的部分
# 前缀匹配优先级高于正则表达式匹配
# 正则表达式简明教程https://www.runoob.com/regexp/regexp-tutorial.html
#
# 支持针对不同群设置不同的响应规则,例如:
# response_rules = {
# "default": {
# "at": True,
# "prefix": ["/ai", "!ai", "ai", "ai"],
# "regexp": [],
# "random_rate": 0.0,
# },
# "12345678": {
# "at": False,
# "prefix": ["/ai", "!ai", "ai", "ai"],
# "regexp": [],
# "random_rate": 0.0,
# },
# }
#
# 以上设置将会在群号为12345678的群中关闭at响应
# 未单独设置的群将使用default规则
response_rules = {
"prefix": ["/ai", "!ai", "ai", "ai"],
"regexp": [] # "为什么.*", "怎么?样.*", "怎么.*", "如何.*", "[Hh]ow to.*", "[Ww]hy not.*", "[Ww]hat is.*", ".*怎么办", ".*咋办"
"default": {
"at": True, # 是否响应at机器人的消息
"prefix": ["/ai", "!ai", "ai", "ai"],
"regexp": [], # "为什么.*", "怎么?样.*", "怎么.*", "如何.*", "[Hh]ow to.*", "[Ww]hy not.*", "[Ww]hat is.*", ".*怎么办", ".*咋办"
"random_rate": 0.0, # 随机响应概率0.0-1.00.0为不随机响应1.0为响应所有消息, 仅在前几项判断不通过时生效
},
}
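# --- Illustrative sketch (kept as a comment; not part of this config and not the
# --- project's actual implementation): resolving the rule set for a group, falling
# --- back to "default" when the group has no dedicated entry.
# def rules_for_group(group_id: int, rules: dict = response_rules) -> dict:
#     return rules.get(str(group_id), rules["default"])
#
# rules_for_group(12345678)  # group-specific rules if configured, else "default"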
# 消息忽略规则
# 适用于私聊及群聊
# 符合此规则的消息将不会被响应
@@ -99,10 +165,27 @@ ignore_rules = {
"regexp": []
}
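# --- Illustrative sketch (comment only, not the project's actual implementation):
# --- a message is ignored when it starts with any prefix above or matches any regexp.
# import re
# def should_ignore(text: str, rules: dict = ignore_rules) -> bool:
#     return any(text.startswith(p) for p in rules["prefix"]) \
#            or any(re.search(r, text) for r in rules["regexp"])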
# 是否检查收到的消息中是否包含敏感词
# 若收到的消息无法通过下方指定的敏感词检查策略,则发送提示信息
income_msg_check = False
# 敏感词过滤开关,以同样数量的*代替敏感词回复
# 请在sensitive.json中添加敏感词
sensitive_word_filter = True
# 是否启用百度云内容安全审核
# 注册方式查看 https://cloud.baidu.com/doc/ANTIPORN/s/Wkhu9d5iy
baidu_check = False
# 百度云API_KEY 24位英文数字字符串
baidu_api_key = ""
# 百度云SECRET_KEY 32位的英文数字字符串
baidu_secret_key = ""
# 不合规消息自定义返回
inappropriate_message_tips = "[百度云]请珍惜机器人,当前返回内容不合规"
# 启动时是否发送赞赏码
# 仅当使用量已经超过2048字时发送
encourage_sponsor_at_start = True
@@ -110,14 +193,31 @@ encourage_sponsor_at_start = True
# 每次向OpenAI接口发送对话记录上下文的字符数
# 最大不超过(4096 - max_tokens)个字符max_tokens为下方completion_api_params中的max_tokens
# 注意较大的prompt_submit_length会导致OpenAI账户额度消耗更快
prompt_submit_length = 1024
prompt_submit_length = 2048
# OpenAI的completion API的参数
# OpenAI补全API的参数
# 请在下方填写模型,程序自动选择接口
# 现已支持的模型有:
#
# 'gpt-4'
# 'gpt-4-0314'
# 'gpt-4-32k'
# 'gpt-4-32k-0314'
# 'gpt-3.5-turbo'
# 'gpt-3.5-turbo-0301'
# 'text-davinci-003'
# 'text-davinci-002'
# 'code-davinci-002'
# 'code-cushman-001'
# 'text-curie-001'
# 'text-babbage-001'
# 'text-ada-001'
#
# 具体请查看OpenAI的文档: https://beta.openai.com/docs/api-reference/completions/create
# 请将内容修改到config.py中,请勿修改config-template.py
completion_api_params = {
"model": "text-davinci-003",
"model": "gpt-3.5-turbo",
"temperature": 0.9, # 数值越低得到的回答越理性,取值范围[0, 1]
"max_tokens": 512, # 每次获取OpenAI接口响应的文字量上限, 不高于4096
"top_p": 1, # 生成的文本的文本与要求的符合度, 取值范围[0, 1]
"frequency_penalty": 0.2,
"presence_penalty": 1.0,
@@ -138,21 +238,30 @@ include_image_description = True
# 消息处理的超时时间,单位为秒
process_message_timeout = 30
# 会话对象名称,此配置与会话对象管理相关,
# 若不了解相关功能,无需修改此配置
# 详细说明请查看https://github.com/RockChinQ/QChatGPT/wiki/%E6%8A%80%E6%9C%AF%E4%BF%A1%E6%81%AF#%E4%BC%9A%E8%AF%9Dsession
# user_name: 管理员(主人)的名字
# bot_name: 机器人的名字
user_name = 'You'
bot_name = 'Bot'
# [暂未实现] 群内会话是否启用多对象名称
# 若不启用群内会话的prompt只使用user_name和bot_name
multi_subject = False
# 回复消息时是否显示[GPT]前缀
show_prefix = False
# 应用长消息处理策略的阈值
# 当回复消息长度超过此值时,将使用长消息处理策略
blob_message_threshold = 256
# 长消息处理策略
# - "image": 将长消息转换为图片发送
# - "forward": 将长消息转换为转发消息组件发送
blob_message_strategy = "forward"
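# --- Illustrative sketch (comment only, not the project's actual implementation):
# --- how the two settings above could decide the delivery mode for a long reply.
# def long_reply_mode(reply: str) -> str:
#     if len(reply) <= blob_message_threshold:
#         return "plain"                 # short enough, send as normal text
#     return blob_message_strategy       # "image" or "forward"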
# 允许等待
# 同一会话内,是否等待上一条消息处理完成后再处理下一条消息
# 若设置为False若上一条未处理完时收到了新消息将会丢弃新消息
# 丢弃消息时的提示信息可以在tips.py中修改
wait_last_done = True
# 文字转图片时使用的字体文件路径
# 当策略为"image"时生效
# 若在Windows系统下程序会自动使用Windows自带的微软雅黑字体
# 若未填写或不存在且不是Windows将禁用文字转图片功能改为使用转发消息组件
font_path = ""
# 消息处理超时重试次数
retry_times = 3
@@ -161,30 +270,59 @@ retry_times = 3
# 设置为False时向用户及管理员发送错误详细信息
hide_exce_info_to_user = False
# 消息处理出错时向用户发送的提示信息
# 仅当hide_exce_info_to_user为True时生效
# 设置为空字符串时,不发送提示信息
alter_tip_message = '出错了,请稍后再试'
# 线程池相关配置
# 该参数决定机器人可以同时处理几个人的消息,超出线程池数量的请求会被阻塞,不会被丢弃
# 如果你不清楚该参数的意义,请不要更改
# 程序运行本身线程池,无代码层面修改请勿更改
sys_pool_num = 8
# 执行管理员请求和指令的线程池并行线程数量,一般和管理员数量相等
admin_pool_num = 4
# 执行用户请求和指令的线程池并行线程数量
# 如需要更高的并发,可以增大该值
user_pool_num = 8
# 每个会话的过期时间,单位为秒
# 默认值20分钟
session_expire_time = 60 * 20
session_expire_time = 1200
# 会话限速
# 单会话内每分钟可进行的对话次数
# 若不需要限速,可以设置为一个很大的值
# 默认值60次基本上不会触发限速
rate_limitation = 60
#
# 若要设置针对某特定群的限速,请使用如下格式:
# {
# "group_<群号>": 60,
# "default": 60,
# }
# 若要设置针对某特定用户私聊的限速,请使用如下格式:
# {
# "person_<用户QQ>": 60,
# "default": 60,
# }
# 同时设置多个群和私聊的限速,示例:
# {
# "group_12345678": 60,
# "group_87654321": 60,
# "person_234567890": 60,
# "person_345678901": 60,
# "default": 60,
# }
#
# 注意: 未指定的都使用default的限速值,default不可删除
rate_limitation = {
"default": 60,
}
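# --- Illustrative sketch (comment only, not the project's actual implementation):
# --- session names look like "group_<群号>" or "person_<用户QQ>", so the per-session
# --- limit can be resolved with a simple dict lookup and a "default" fallback.
# def limit_for(session_name: str, limits: dict = rate_limitation) -> int:
#     return limits.get(session_name, limits["default"])
#
# limit_for("group_12345678")  # 60 unless an entry for this group is added above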
# 会话限速策略
# - "wait": 每次对话获取到回复时,等待一定时间再发送回复,保证其不会超过限速均值
# - "drop": 此分钟内,若对话次数超过限速次数,则丢弃之后的对话,每自然分钟重置
rate_limit_strategy = "wait"
rate_limit_strategy = "drop"
# drop策略时超过限速均值时丢弃的对话的提示信息
# 仅当rate_limit_strategy为"drop"时生效
# 若设置为空字符串,则不发送提示信息
rate_limit_drop_tip = "本分钟对话次数超过限速次数,此对话被丢弃"
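# --- Illustrative sketch (comment only, not the project's actual implementation):
# --- the "drop" strategy counts dialogues per natural minute and discards the excess.
# import time
# _counts = {}  # (session_name, minute) -> dialogues seen in that minute
# def allow(session_name: str, limit: int) -> bool:
#     key = (session_name, int(time.time()) // 60)
#     _counts[key] = _counts.get(key, 0) + 1
#     return _counts[key] <= limit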
# 是否在启动时进行依赖库更新
upgrade_dependencies = True
# 是否上报统计信息
# 用于统计机器人的使用情况,不会收集任何用户信息
@@ -193,20 +331,3 @@ report_usage = True
# 日志级别
logging_level = logging.INFO
# 定制帮助消息
help_message = """此机器人通过调用OpenAI的GPT-3大型语言模型生成回复不具有情感。
你可以用自然语言与其交流,回复的消息中[GPT]开头的为模型生成的语言,[bot]开头的为程序提示。
了解此项目请找QQ 1010553892 联系作者
请不要用其生成整篇文章或大段代码,因为每次只会向模型提交少部分文字,生成大部分文字会产生偏题、前后矛盾等问题
每次会话最后一次交互后{}分钟后会自动结束,结束后将开启新会话,如需继续前一次会话请发送 !last 重新开启
欢迎到github.com/RockChinQ/QChatGPT 给个star
帮助信息:
!help - 显示帮助
!reset - 重置会话
!last - 切换到前一次的对话
!next - 切换到后一次的对话
!prompt - 显示当前对话所有内容
!list - 列出所有历史会话
!usage - 列出各个api-key的使用量""".format(session_expire_time // 60)

main.py

@@ -1,4 +1,5 @@
import importlib
import json
import os
import shutil
import threading
@@ -6,14 +7,20 @@ import time
import logging
import sys
import traceback
sys.path.append(".")
from pkg.utils.log import init_runtime_log_file, reset_logging
try:
import colorlog
except ImportError:
# 尝试安装
import pkg.utils.pkgmgr as pkgmgr
pkgmgr.install_requirements("requirements.txt")
try:
pkgmgr.install_requirements("requirements.txt")
pkgmgr.install_upgrade("websockets")
import colorlog
except ImportError:
print("依赖不满足,请查看 https://github.com/RockChinQ/qcg-installer/issues/15")
@@ -23,17 +30,12 @@ import colorlog
import requests
import websockets.exceptions
from urllib3.exceptions import InsecureRequestWarning
import pkg.utils.context
sys.path.append(".")
log_colors_config = {
'DEBUG': 'green', # cyan white
'INFO': 'white',
'WARNING': 'yellow',
'ERROR': 'red',
'CRITICAL': 'bold_red',
}
# 是否使用override.json覆盖配置
# 仅在启动时提供 --override 或 -r 参数时生效
use_override = False
def init_db():
@@ -43,84 +45,98 @@ def init_db():
database.initialize_database()
def ensure_dependencies():
import pkg.utils.pkgmgr as pkgmgr
pkgmgr.run_pip(["install", "openai", "Pillow", "nakuru-project-idk", "--upgrade",
"-i", "https://pypi.douban.com/simple/",
"--trusted-host", "pypi.douban.com"])
known_exception_caught = False
log_file_name = "qchatgpt.log"
def override_config():
import config
# 检查override.json覆盖
if os.path.exists("override.json") and use_override:
override_json = json.load(open("override.json", "r", encoding="utf-8"))
for key in override_json:
if hasattr(config, key):
setattr(config, key, override_json[key])
logging.info("覆写配置[{}]为[{}]".format(key, override_json[key]))
else:
logging.error("无法覆写配置[{}]为[{}]该配置不存在请检查override.json是否正确".format(key, override_json[key]))
def init_runtime_log_file():
"""为此次运行生成日志文件
格式: qchatgpt-yyyy-MM-dd-HH-mm-ss.log
"""
global log_file_name
# 检查logs目录是否存在
if not os.path.exists("logs"):
os.mkdir("logs")
# 检查本目录是否有qchatgpt.log若有移动到logs目录
if os.path.exists("qchatgpt.log"):
shutil.move("qchatgpt.log", "logs/qchatgpt.legacy.log")
log_file_name = "logs/qchatgpt-%s.log" % time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
def reset_logging():
global log_file_name
assert os.path.exists('config.py')
# 临时函数用于加载config和上下文未来统一放在config类
def load_config():
logging.info("检查config模块完整性.")
# 完整性校验
is_integrity = True
config_template = importlib.import_module('config-template')
config = importlib.import_module('config')
for key in dir(config_template):
if not key.startswith("__") and not hasattr(config, key):
setattr(config, key, getattr(config_template, key))
logging.warning("[{}]不存在".format(key))
is_integrity = False
if not is_integrity:
logging.warning("配置文件不完整您可以依据config-template.py检查config.py")
# 检查override.json覆盖
override_config()
if not is_integrity:
logging.warning("以上不存在的配置已被设为默认值将在3秒后继续启动... ")
time.sleep(3)
# 存进上下文
pkg.utils.context.set_config(config)
def complete_tips():
"""根据tips-custom-template模块补全tips模块的属性"""
is_integrity = True
logging.info("检查tips模块完整性.")
tips_template = importlib.import_module('tips-custom-template')
tips = importlib.import_module('tips')
for key in dir(tips_template):
if not key.startswith("__") and not hasattr(tips, key):
setattr(tips, key, getattr(tips_template, key))
logging.warning("[{}]不存在".format(key))
is_integrity = False
if not is_integrity:
logging.warning("tips模块不完整您可以依据tips-custom-template.py检查tips.py")
logging.warning("以上配置已被设为默认值将在3秒后继续启动... ")
time.sleep(3)
def start(first_time_init=False):
"""启动流程reload之后会被执行"""
global known_exception_caught
import pkg.utils.context
if pkg.utils.context.context['logger_handler'] is not None:
logging.getLogger().removeHandler(pkg.utils.context.context['logger_handler'])
for handler in logging.getLogger().handlers:
logging.getLogger().removeHandler(handler)
logging.basicConfig(level=config.logging_level, # 设置日志输出格式
filename=log_file_name, # log日志输出的文件位置和文件名
format="[%(asctime)s.%(msecs)03d] %(filename)s (%(lineno)d) - [%(levelname)s] : %(message)s",
# 日志输出的格式
# -8表示占位符让输出左对齐输出长度都为8位
datefmt="%Y-%m-%d %H:%M:%S" # 时间输出的格式
)
sh = logging.StreamHandler()
sh.setLevel(config.logging_level)
sh.setFormatter(colorlog.ColoredFormatter(
fmt="%(log_color)s[%(asctime)s.%(msecs)03d] %(filename)s (%(lineno)d) - [%(levelname)s] : "
"%(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
log_colors=log_colors_config
))
logging.getLogger().addHandler(sh)
pkg.utils.context.context['logger_handler'] = sh
return sh
def main(first_time_init=False):
global known_exception_caught
# 检查并创建plugins、prompts目录
check_path = ["plugins", "prompts"]
for path in check_path:
if not os.path.exists(path):
os.mkdir(path)
config = pkg.utils.context.get_config()
# 更新openai库到最新版本
if not hasattr(config, 'upgrade_dependencies') or config.upgrade_dependencies:
print("正在更新依赖库,请等待...")
if not hasattr(config, 'upgrade_dependencies'):
print("这个操作不是必须的,如果不想更新,请在config.py中添加upgrade_dependencies=False")
else:
print("这个操作不是必须的,如果不想更新,请在config.py中将upgrade_dependencies设置为False")
try:
ensure_dependencies()
except Exception as e:
print("更新openai库失败:{}, 请忽略或自行更新".format(e))
known_exception_caught = False
try:
# 导入config.py
assert os.path.exists('config.py')
config = importlib.import_module('config')
import pkg.utils.context
pkg.utils.context.set_config(config)
init_runtime_log_file()
sh = reset_logging()
pkg.utils.context.context['logger_handler'] = sh
# 检查是否设置了管理员
if not (hasattr(config, 'admin_qq') and config.admin_qq != 0):
@@ -151,10 +167,21 @@ def main(first_time_init=False):
import pkg.openai.session
import pkg.qqbot.manager
import pkg.openai.dprompt
import pkg.qqbot.cmds.aamgr
try:
pkg.openai.dprompt.register_all()
pkg.qqbot.cmds.aamgr.register_all()
pkg.qqbot.cmds.aamgr.apply_privileges()
except Exception as e:
logging.error(e)
traceback.print_exc()
pkg.openai.dprompt.read_prompt_from_file()
# 配置openai api_base
if "reverse_proxy" in config.openai_config and config.openai_config["reverse_proxy"] is not None:
import openai
openai.api_base = config.openai_config["reverse_proxy"]
pkg.utils.context.context['logger_handler'] = sh
# 主启动流程
database = pkg.database.manager.DatabaseManager()
@@ -166,9 +193,7 @@ def main(first_time_init=False):
pkg.openai.session.load_sessions()
# 初始化qq机器人
qqbot = pkg.qqbot.manager.QQBotManager(mirai_http_api_config=config.mirai_http_api_config,
timeout=config.process_message_timeout, retry=config.retry_times,
first_time_init=first_time_init)
qqbot = pkg.qqbot.manager.QQBotManager(first_time_init=first_time_init)
# 加载插件
import pkg.plugin.host
@@ -176,14 +201,15 @@ def main(first_time_init=False):
pkg.plugin.host.initialize_plugins()
if first_time_init: # 不是热重载之后的启动,则启动新的bot线程
if first_time_init: # 不是热重载之后的启动,则启动新的bot线程
import mirai.exceptions
def run_bot_wrapper():
global known_exception_caught
try:
qqbot.bot.run()
logging.info("使用账号: {}".format(qqbot.bot_account_id))
qqbot.adapter.run_sync()
except TypeError as e:
if str(e).__contains__("argument 'debug'"):
logging.error(
@@ -218,27 +244,42 @@ def main(first_time_init=False):
"mirai-api-http端口无法使用:{}, 解决方案: https://github.com/RockChinQ/QChatGPT/issues/22".format(
e))
else:
import traceback
traceback.print_exc()
logging.error(
"捕捉到未知异常:{}, 请前往 https://github.com/RockChinQ/QChatGPT/issues 查找或提issue".format(e))
known_exception_caught = True
raise e
qq_bot_thread = threading.Thread(target=run_bot_wrapper, args=(), daemon=True)
qq_bot_thread.start()
finally:
time.sleep(12)
threading.Thread(
target=run_bot_wrapper
).start()
finally:
# 判断若是Windows输出选择模式可能会暂停程序的警告
if os.name == 'nt':
time.sleep(2)
logging.info("您正在使用Windows系统若命令行窗口处于“选择”模式程序可能会被暂停此时请右键点击窗口空白区域使其取消选择模式。")
time.sleep(12)
if first_time_init:
if not known_exception_caught:
logging.info('程序启动完成,如长时间未显示 ”成功登录到账号xxxxx“ ,并且不回复消息,请查看 '
'https://github.com/RockChinQ/QChatGPT/issues/37')
import config
if config.msg_source_adapter == "yirimirai":
logging.info("QQ: {}, MAH: {}".format(config.mirai_http_api_config['qq'], config.mirai_http_api_config['host']+":"+str(config.mirai_http_api_config['port'])))
logging.critical('程序启动完成,如长时间未显示 "成功登录到账号xxxxx" ,并且不回复消息,请查看 '
'https://github.com/RockChinQ/QChatGPT/issues/37')
elif config.msg_source_adapter == 'nakuru':
logging.info("host: {}, port: {}, http_port: {}".format(config.nakuru_config['host'], config.nakuru_config['port'], config.nakuru_config['http_port']))
logging.critical('程序启动完成,如长时间未显示 "Protocol: connected" ,并且不回复消息,请检查config.py中的nakuru_config是否正确')
else:
sys.exit(1)
else:
logging.info('热重载完成')
# 发送赞赏码
if hasattr(config, 'encourage_sponsor_at_start') \
and config.encourage_sponsor_at_start \
if config.encourage_sponsor_at_start \
and pkg.utils.context.get_openai_manager().audit_mgr.get_total_text_length() >= 2048:
logging.info("发送赞赏码")
@@ -258,28 +299,25 @@ def main(first_time_init=False):
import pkg.utils.updater
try:
if pkg.utils.updater.is_new_version_available():
pkg.utils.context.get_qqbot_manager().notify_admin("新版本可用,请发送 !update 进行自动更新")
logging.info("新版本可用,请发送 !update 进行自动更新\n更新日志:\n{}".format("\n".join(pkg.utils.updater.get_rls_notes())))
else:
logging.info("当前已是最新版本")
except Exception as e:
logging.warning("检查更新失败:{}".format(e))
while True:
try:
time.sleep(10)
if qqbot != pkg.utils.context.get_qqbot_manager(): # 已经reload了
logging.info("以前的main流程由于reload退出")
break
except KeyboardInterrupt:
stop()
print("程序退出")
sys.exit(0)
try:
import pkg.utils.announcement as announcement
new_announcement = announcement.fetch_new()
if len(new_announcement) > 0:
for announcement in new_announcement:
logging.critical("[公告]<{}> {}".format(announcement['time'], announcement['content']))
except Exception as e:
logging.warning("获取公告失败:{}".format(e))
return qqbot
def stop():
import pkg.utils.context
import pkg.qqbot.manager
import pkg.openai.session
try:
@@ -298,39 +336,108 @@ def stop():
raise e
if __name__ == '__main__':
# 检查是否有config.py,如果没有就把config-template.py复制一份,并退出程序
def check_file():
# 检查是否有banlist.py,如果没有就把banlist-template.py复制一份
if not os.path.exists('banlist.py'):
shutil.copy('res/templates/banlist-template.py', 'banlist.py')
# 检查是否有sensitive.json
if not os.path.exists("sensitive.json"):
shutil.copy("res/templates/sensitive-template.json", "sensitive.json")
# 检查是否有scenario/default.json
if not os.path.exists("scenario/default.json"):
shutil.copy("scenario/default-template.json", "scenario/default.json")
# 检查cmdpriv.json
if not os.path.exists("cmdpriv.json"):
shutil.copy("res/templates/cmdpriv-template.json", "cmdpriv.json")
# 检查tips_custom
if not os.path.exists("tips.py"):
shutil.copy("tips-custom-template.py", "tips.py")
# 检查temp目录
if not os.path.exists("temp/"):
os.mkdir("temp/")
# 检查并创建plugins、prompts目录
check_path = ["plugins", "prompts"]
for path in check_path:
if not os.path.exists(path):
os.mkdir(path)
# 配置文件存在性校验
if not os.path.exists('config.py'):
shutil.copy('config-template.py', 'config.py')
print('请先在config.py中填写配置')
sys.exit(0)
# 检查是否有banlist.py,如果没有就把banlist-template.py复制一份
if not os.path.exists('banlist.py'):
shutil.copy('banlist-template.py', 'banlist.py')
def main():
global use_override
# 检查是否携带了 --override 或 -r 参数
if '--override' in sys.argv or '-r' in sys.argv:
use_override = True
# 初始化相关文件
check_file()
# 初始化logging
init_runtime_log_file()
pkg.utils.context.context['logger_handler'] = reset_logging()
# 加载配置
load_config()
config = pkg.utils.context.get_config()
# 检查tips模块
complete_tips()
# 配置线程池
from pkg.utils import ThreadCtl
thread_ctl = ThreadCtl(
sys_pool_num=config.sys_pool_num,
admin_pool_num=config.admin_pool_num,
user_pool_num=config.user_pool_num
)
# 存进上下文
pkg.utils.context.set_thread_ctl(thread_ctl)
# 启动指令处理
if len(sys.argv) > 1 and sys.argv[1] == 'init_db':
init_db()
sys.exit(0)
elif len(sys.argv) > 1 and sys.argv[1] == 'update':
try:
try:
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.ensure_dulwich()
except:
pass
from dulwich import porcelain
repo = porcelain.open_repo('.')
porcelain.pull(repo)
except ModuleNotFoundError:
print("dulwich模块未安装,请查看 https://github.com/RockChinQ/QChatGPT/issues/77")
print("正在进行程序更新...")
import pkg.utils.updater as updater
updater.update_all(cli=True)
sys.exit(0)
# import pkg.utils.configmgr
#
# pkg.utils.configmgr.set_config_and_reload("quote_origin", False)
# 关闭urllib的http警告
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
main(True)
pkg.utils.context.get_thread_ctl().submit_sys_task(
start,
True
)
# 主线程循环
while True:
try:
time.sleep(0xFF)
except:
stop()
pkg.utils.context.get_thread_ctl().shutdown()
import platform
if platform.system() == 'Windows':
cmd = "taskkill /F /PID {}".format(os.getpid())
elif platform.system() in ['Linux', 'Darwin']:
cmd = "kill -9 {}".format(os.getpid())
os.system(cmd)
if __name__ == '__main__':
main()

override-all.json Normal file

@@ -0,0 +1,87 @@
{
"comment": "这是override.json支持的字段全集, 关于override.json机制, 请查看https://github.com/RockChinQ/QChatGPT/pull/271",
"msg_source_adapter": "yirimirai",
"mirai_http_api_config": {
"adapter": "WebSocketAdapter",
"host": "localhost",
"port": 8080,
"verifyKey": "yirimirai",
"qq": 1234567890
},
"nakuru_config": {
"host": "localhost",
"port": 6700,
"http_port": 5700,
"token": ""
},
"openai_config": {
"api_key": {
"default": "openai_api_key"
},
"http_proxy": null,
"reverse_proxy": null
},
"admin_qq": 0,
"default_prompt": {
"default": "如果我之后想获取帮助,请你说“输入!help获取帮助”"
},
"preset_mode": "normal",
"response_rules": {
"default": {
"at": true,
"prefix": [
"/ai",
"!ai",
"ai",
"ai"
],
"regexp": [],
"random_rate": 0.0
}
},
"ignore_rules": {
"prefix": [
"/"
],
"regexp": []
},
"income_msg_check": false,
"sensitive_word_filter": true,
"baidu_check": false,
"baidu_api_key": "",
"baidu_secret_key": "",
"inappropriate_message_tips": "[百度云]请珍惜机器人,当前返回内容不合规",
"encourage_sponsor_at_start": true,
"prompt_submit_length": 2048,
"completion_api_params": {
"model": "gpt-3.5-turbo",
"temperature": 0.9,
"top_p": 1,
"frequency_penalty": 0.2,
"presence_penalty": 1.0
},
"image_api_params": {
"size": "256x256"
},
"quote_origin": true,
"include_image_description": true,
"process_message_timeout": 30,
"show_prefix": false,
"blob_message_threshold": 256,
"blob_message_strategy": "forward",
"wait_last_done": true,
"font_path": "",
"retry_times": 3,
"hide_exce_info_to_user": false,
"sys_pool_num": 8,
"admin_pool_num": 4,
"user_pool_num": 8,
"session_expire_time": 1200,
"rate_limitation": {
"default": 60
},
"rate_limit_strategy": "drop",
"upgrade_dependencies": true,
"report_usage": true,
"logging_level": 20
}
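The file above is the full set of keys that `override.json` may contain (see the linked pull request for the mechanism). As a usage note grounded in `main.py`: place an `override.json` containing only the keys you want to override in the project root, then start the bot with `python3 main.py --override` (or `-r`) so those values are applied on top of `config.py`.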


@@ -0,0 +1,3 @@
"""
审计相关操作
"""


@@ -1,3 +1,7 @@
"""
使用量统计以及数据上报功能实现
"""
import hashlib
import json
import logging
@@ -10,8 +14,11 @@ import pkg.utils.updater
class DataGatherer:
"""数据收集器"""
usage = {}
"""以key值md5为key,{
"""各api-key的使用量
以key值md5为key,{
"text": {
"text-davinci-003": 文字量:int,
},
@@ -20,21 +27,26 @@ class DataGatherer:
}
}为值的字典"""
version_str = "0.1.0"
version_str = "undetermined"
def __init__(self):
self.load_from_db()
try:
self.version_str = pkg.utils.updater.get_commit_id_and_time_and_msg()[:40 if len(pkg.utils.updater.get_commit_id_and_time_and_msg()) > 40 else len(pkg.utils.updater.get_commit_id_and_time_and_msg())]
self.version_str = pkg.utils.updater.get_current_tag() # 从updater模块获取版本号
except:
pass
def report_to_server(self, subservice_name: str, count: int):
"""向中央服务器报告使用量
只会报告此次请求的使用量,不会报告总量。
不包含除版本号、使用类型、使用量以外的任何信息,仅供开发者分析使用情况。
"""
try:
config = pkg.utils.context.get_config()
if hasattr(config, "report_usage") and not config.report_usage:
if not config.report_usage:
return
res = requests.get("http://rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}".format(subservice_name, self.version_str, count))
res = requests.get("http://reports.rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}&msg_source={}".format(subservice_name, self.version_str, count, config.msg_source_adapter))
if res.status_code != 200 or res.text != "ok":
logging.warning("report to server failed, status_code: {}, text: {}".format(res.status_code, res.text))
except:
@@ -44,7 +56,9 @@ class DataGatherer:
return self.usage[key_md5] if key_md5 in self.usage else {}
def report_text_model_usage(self, model, total_tokens):
key_md5 = pkg.utils.context.get_openai_manager().key_mgr.get_using_key_md5()
"""调用方报告文字模型请求文字使用量"""
key_md5 = pkg.utils.context.get_openai_manager().key_mgr.get_using_key_md5() # 以key的md5进行储存
if key_md5 not in self.usage:
self.usage[key_md5] = {}
@@ -62,6 +76,8 @@ class DataGatherer:
self.report_to_server("text", length)
def report_image_model_usage(self, size):
"""调用方报告图片模型请求图片使用量"""
key_md5 = pkg.utils.context.get_openai_manager().key_mgr.get_using_key_md5()
if key_md5 not in self.usage:
@@ -79,6 +95,7 @@ class DataGatherer:
self.report_to_server("image", 1)
def get_text_length_of_key(self, key):
"""获取指定api-key (明文) 的文字总使用量(本地记录)"""
key_md5 = hashlib.md5(key.encode('utf-8')).hexdigest()
if key_md5 not in self.usage:
return 0
@@ -88,6 +105,8 @@ class DataGatherer:
return sum(self.usage[key_md5]["text"].values())
def get_image_count_of_key(self, key):
"""获取指定api-key (明文) 的图片总使用量(本地记录)"""
key_md5 = hashlib.md5(key.encode('utf-8')).hexdigest()
if key_md5 not in self.usage:
return 0
@@ -97,6 +116,7 @@ class DataGatherer:
return sum(self.usage[key_md5]["image"].values())
def get_total_text_length(self):
"""获取所有api-key的文字总使用量(本地记录)"""
total = 0
for key in self.usage:
if "text" not in self.usage[key]:


@@ -0,0 +1,3 @@
"""
数据库操作封装
"""


@@ -1,3 +1,6 @@
"""
数据库管理模块
"""
import hashlib
import json
import logging
@@ -9,9 +12,9 @@ import sqlite3
import pkg.utils.context
# 数据库管理
# 为其他模块提供数据库操作接口
class DatabaseManager:
"""封装数据库底层操作,并提供方法给上层使用"""
conn = None
cursor = None
@@ -23,21 +26,25 @@ class DatabaseManager:
# 连接到数据库文件
def reconnect(self):
"""连接到数据库"""
self.conn = sqlite3.connect('database.db', check_same_thread=False)
self.cursor = self.conn.cursor()
def close(self):
self.conn.close()
def execute(self, *args, **kwargs) -> Cursor:
def __execute__(self, *args, **kwargs) -> Cursor:
# logging.debug('SQL: {}'.format(sql))
logging.debug('SQL: {}'.format(args))
c = self.cursor.execute(*args, **kwargs)
self.conn.commit()
return c
# 初始化数据库的函数
def initialize_database(self):
self.execute("""
"""创建数据表"""
self.__execute__("""
create table if not exists `sessions` (
`id` INTEGER PRIMARY KEY AUTOINCREMENT,
`name` varchar(255) not null,
@@ -46,11 +53,31 @@ class DatabaseManager:
`create_timestamp` bigint not null,
`last_interact_timestamp` bigint not null,
`status` varchar(255) not null default 'on_going',
`prompt` text not null
`default_prompt` text not null default '',
`prompt` text not null,
`token_counts` text not null default '[]'
)
""")
self.execute("""
# 检查sessions表是否存在`default_prompt`字段, 检查是否存在`token_counts`字段
self.__execute__("PRAGMA table_info('sessions')")
columns = self.cursor.fetchall()
has_default_prompt = False
has_token_counts = False
for field in columns:
if field[1] == 'default_prompt':
has_default_prompt = True
if field[1] == 'token_counts':
has_token_counts = True
if has_default_prompt and has_token_counts:
break
if not has_default_prompt:
self.__execute__("alter table `sessions` add column `default_prompt` text not null default ''")
if not has_token_counts:
self.__execute__("alter table `sessions` add column `token_counts` text not null default '[]'")
self.__execute__("""
create table if not exists `account_fee`(
`id` INTEGER PRIMARY KEY AUTOINCREMENT,
`key_md5` varchar(255) not null,
@@ -59,7 +86,7 @@ class DatabaseManager:
)
""")
self.execute("""
self.__execute__("""
create table if not exists `account_usage`(
`id` INTEGER PRIMARY KEY AUTOINCREMENT,
`json` text not null
@@ -69,47 +96,49 @@ class DatabaseManager:
# session持久化
def persistence_session(self, subject_type: str, subject_number: int, create_timestamp: int,
last_interact_timestamp: int, prompt: str):
last_interact_timestamp: int, prompt: str, default_prompt: str = '', token_counts: str = ''):
"""持久化指定session"""
# 检查是否已经有了此name和create_timestamp的session
# 如果有就更新prompt和last_interact_timestamp
# 如果没有,就插入一条新的记录
self.execute("""
self.__execute__("""
select count(*) from `sessions` where `type` = '{}' and `number` = {} and `create_timestamp` = {}
""".format(subject_type, subject_number, create_timestamp))
count = self.cursor.fetchone()[0]
if count == 0:
sql = """
insert into `sessions` (`name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`)
values (?, ?, ?, ?, ?, ?)
insert into `sessions` (`name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `default_prompt`, `token_counts`)
values (?, ?, ?, ?, ?, ?, ?, ?)
"""
self.execute(sql,
("{}_{}".format(subject_type, subject_number), subject_type, subject_number, create_timestamp,
last_interact_timestamp, prompt))
self.__execute__(sql,
("{}_{}".format(subject_type, subject_number), subject_type, subject_number, create_timestamp,
last_interact_timestamp, prompt, default_prompt, token_counts))
else:
sql = """
update `sessions` set `last_interact_timestamp` = ?, `prompt` = ?
update `sessions` set `last_interact_timestamp` = ?, `prompt` = ?, `token_counts` = ?
where `type` = ? and `number` = ? and `create_timestamp` = ?
"""
self.execute(sql, (last_interact_timestamp, prompt, subject_type,
subject_number, create_timestamp))
self.__execute__(sql, (last_interact_timestamp, prompt, token_counts, subject_type,
subject_number, create_timestamp))
# 显式关闭一个session
def explicit_close_session(self, session_name: str, create_timestamp: int):
self.execute("""
self.__execute__("""
update `sessions` set `status` = 'explicitly_closed' where `name` = '{}' and `create_timestamp` = {}
""".format(session_name, create_timestamp))
def set_session_ongoing(self, session_name: str, create_timestamp: int):
self.execute("""
self.__execute__("""
update `sessions` set `status` = 'on_going' where `name` = '{}' and `create_timestamp` = {}
""".format(session_name, create_timestamp))
# 设置session为过期
def set_session_expired(self, session_name: str, create_timestamp: int):
self.execute("""
self.__execute__("""
update `sessions` set `status` = 'expired' where `name` = '{}' and `create_timestamp` = {}
""".format(session_name, create_timestamp))
@@ -117,8 +146,8 @@ class DatabaseManager:
def load_valid_sessions(self) -> dict:
# 从数据库中加载所有还没过期的session
config = pkg.utils.context.get_config()
self.execute("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `last_interact_timestamp` > {}
""".format(int(time.time()) - config.session_expire_time))
results = self.cursor.fetchall()
@@ -131,6 +160,8 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
# 当且仅当最后一个该对象的会话是on_going状态时才会被加载
if status == 'on_going':
@@ -139,7 +170,9 @@ class DatabaseManager:
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
}
else:
if session_name in sessions:
@@ -150,8 +183,8 @@ class DatabaseManager:
# 获取此session_name前一个session的数据
def last_session(self, session_name: str, cursor_timestamp: int):
self.execute("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `name` = '{}' and `last_interact_timestamp` < {} order by `last_interact_timestamp` desc
limit 1
""".format(session_name, cursor_timestamp))
@@ -167,20 +200,24 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
return {
'subject_type': subject_type,
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
}
# 获取此session_name后一个session的数据
def next_session(self, session_name: str, cursor_timestamp: int):
self.execute("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `name` = '{}' and `last_interact_timestamp` > {} order by `last_interact_timestamp` asc
limit 1
""".format(session_name, cursor_timestamp))
@@ -196,19 +233,23 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
return {
'subject_type': subject_type,
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
}
# 列出与某个对象的所有对话session
def list_history(self, session_name: str, capacity: int, page: int, replace: str = ""):
self.execute("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
def list_history(self, session_name: str, capacity: int, page: int):
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `name` = '{}' order by `last_interact_timestamp` desc limit {} offset {}
""".format(session_name, capacity, capacity * page))
results = self.cursor.fetchall()
@@ -221,17 +262,42 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
sessions.append({
'subject_type': subject_type,
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt if replace == "" else prompt.replace(replace, "")
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
})
return sessions
def delete_history(self, session_name: str, index: int) -> bool:
# 删除倒序第index个session
# 查找其id再删除
self.__execute__("""
delete from `sessions` where `id` in (select `id` from `sessions` where `name` = '{}' order by `last_interact_timestamp` desc limit 1 offset {})
""".format(session_name, index))
return self.cursor.rowcount == 1
def delete_all_history(self, session_name: str) -> bool:
self.__execute__("""
delete from `sessions` where `name` = '{}'
""".format(session_name))
return self.cursor.rowcount > 0
def delete_all_session_history(self) -> bool:
self.__execute__("""
delete from `sessions`
""")
return self.cursor.rowcount > 0
# 将apikey的使用量存进数据库
def dump_api_key_usage(self, api_keys: dict, usage: dict):
logging.debug('dumping api key usage...')
@@ -246,22 +312,22 @@ class DatabaseManager:
usage_count = usage[key_md5]
# 将使用量存进数据库
# 先检查是否已存在
self.execute("""
self.__execute__("""
select count(*) from `api_key_usage` where `key_md5` = '{}'""".format(key_md5))
result = self.cursor.fetchone()
if result[0] == 0:
# 不存在则插入
self.execute("""
self.__execute__("""
insert into `api_key_usage` (`key_md5`, `usage`,`timestamp`) values ('{}', {}, {})
""".format(key_md5, usage_count, int(time.time())))
else:
# 存在则更新timestamp设置为当前
self.execute("""
self.__execute__("""
update `api_key_usage` set `usage` = {}, `timestamp` = {} where `key_md5` = '{}'
""".format(usage_count, int(time.time()), key_md5))
def load_api_key_usage(self):
self.execute("""
self.__execute__("""
select `key_md5`, `usage` from `api_key_usage`
""")
results = self.cursor.fetchall()
@@ -273,23 +339,24 @@ class DatabaseManager:
return usage
def dump_usage_json(self, usage: dict):
json_str = json.dumps(usage)
self.execute("""
self.__execute__("""
select count(*) from `account_usage`""")
result = self.cursor.fetchone()
if result[0] == 0:
# 不存在则插入
self.execute("""
self.__execute__("""
insert into `account_usage` (`json`) values ('{}')
""".format(json_str))
else:
# 存在则更新
self.execute("""
self.__execute__("""
update `account_usage` set `json` = '{}' where `id` = 1
""".format(json_str))
def load_usage_json(self):
self.execute("""
self.__execute__("""
select `json` from `account_usage` order by id desc limit 1
""")
result = self.cursor.fetchone()


@@ -0,0 +1 @@
"""OpenAI 接口处理及会话管理相关"""


@@ -1,74 +1,145 @@
# 多情景预设值管理
import json
import logging
import config
import os
__current__ = "default"
# __current__ = "default"
# """当前默认使用的情景预设的名称
__prompts_from_files__ = {}
# 由管理员使用`!default <名称>`指令切换
# """
# __prompts_from_files__ = {}
# """从文件中读取的情景预设值"""
# __scenario_from_files__ = {}
def read_prompt_from_file() -> str:
"""从文件读取预设值"""
# 读取prompts/目录下的所有文件,以文件名为键,文件内容为值
# 保存在__prompts_from_files__中
global __prompts_from_files__
import os
__prompts_from_files__ = {}
for file in os.listdir("prompts"):
with open(os.path.join("prompts", file), encoding="utf-8") as f:
__prompts_from_files__[file] = f.read()
__universal_first_reply__ = "ok, I'll follow your commands."
"""通用首次回复"""
def get_prompt_dict() -> dict:
"""获取预设值字典"""
class ScenarioMode:
"""情景预设模式抽象类"""
using_prompt_name = "default"
"""新session创建时使用的prompt名称"""
prompts: dict[str, list] = {}
def __init__(self):
logging.debug("prompts: {}".format(self.prompts))
def list(self) -> dict[str, list]:
"""获取所有情景预设的名称及内容"""
return self.prompts
def get_prompt(self, name: str) -> tuple[list, str]:
"""获取指定情景预设的名称及内容"""
for key in self.prompts:
if key.startswith(name):
return self.prompts[key], key
raise Exception("没有找到情景预设: {}".format(name))
def set_using_name(self, name: str) -> str:
"""设置默认情景预设"""
for key in self.prompts:
if key.startswith(name):
self.using_prompt_name = key
return key
raise Exception("没有找到情景预设: {}".format(name))
def get_full_name(self, name: str) -> str:
"""获取完整的情景预设名称"""
for key in self.prompts:
if key.startswith(name):
return key
raise Exception("没有找到情景预设: {}".format(name))
def get_using_name(self) -> str:
"""获取默认情景预设"""
return self.using_prompt_name
class NormalScenarioMode(ScenarioMode):
"""普通情景预设模式"""
def __init__(self):
global __universal_first_reply__
# 加载config中的default_prompt值
if type(config.default_prompt) == str:
self.using_prompt_name = "default"
self.prompts = {"default": [
{
"role": "user",
"content": config.default_prompt
},{
"role": "assistant",
"content": __universal_first_reply__
}
]}
elif type(config.default_prompt) == dict:
for key in config.default_prompt:
self.prompts[key] = [
{
"role": "user",
"content": config.default_prompt[key]
},{
"role": "assistant",
"content": __universal_first_reply__
}
]
# 从prompts/目录下的文件中载入
# 遍历文件
for file in os.listdir("prompts"):
with open(os.path.join("prompts", file), encoding="utf-8") as f:
self.prompts[file] = [
{
"role": "user",
"content": f.read()
},{
"role": "assistant",
"content": __universal_first_reply__
}
]
class FullScenarioMode(ScenarioMode):
"""完整情景预设模式"""
def __init__(self):
"""从json读取所有"""
# 遍历scenario/目录下的所有文件以文件名为键文件内容中的prompt为值
for file in os.listdir("scenario"):
if file == "default-template.json":
continue
with open(os.path.join("scenario", file), encoding="utf-8") as f:
self.prompts[file] = json.load(f)["prompt"]
super().__init__()
scenario_mode_mapping = {}
"""情景预设模式名称与对象的映射"""
def register_all():
"""注册所有情景预设模式,不使用装饰器,因为装饰器的方式不支持热重载"""
global scenario_mode_mapping
scenario_mode_mapping = {
"normal": NormalScenarioMode(),
"full_scenario": FullScenarioMode()
}
def mode_inst() -> ScenarioMode:
"""获取指定名称的情景预设模式对象"""
import config
default_prompt = config.default_prompt
if type(default_prompt) == str:
default_prompt = {"default": default_prompt}
elif type(default_prompt) == dict:
pass
else:
raise TypeError("default_prompt must be str or dict")
# 将文件中的预设值合并到default_prompt中
for key in __prompts_from_files__:
default_prompt[key] = __prompts_from_files__[key]
if config.preset_mode == "default":
config.preset_mode = "normal"
return default_prompt
def set_current(name):
global __current__
for key in get_prompt_dict():
if key.lower().startswith(name.lower()):
__current__ = key
return
raise KeyError("未找到情景预设: " + name)
def get_current():
global __current__
return __current__
def set_to_default():
global __current__
default_dict = get_prompt_dict()
if "default" in default_dict:
__current__ = "default"
else:
__current__ = list(default_dict.keys())[0]
def get_prompt(name: str = None) -> str:
"""获取预设值"""
if name is None:
name = get_current()
default_dict = get_prompt_dict()
for key in default_dict:
if key.lower().startswith(name.lower()):
return default_dict[key]
raise KeyError("未找到情景预设: " + name)
return scenario_mode_mapping[config.preset_mode]
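# Hedged usage sketch (not code from this repository): register_all() must run
# first (main.py calls pkg.openai.dprompt.register_all() during startup); after
# that the active scenario mode can be queried and switched like this:
#
#   import pkg.openai.dprompt as dprompt
#   dprompt.register_all()
#   prompt, full_name = dprompt.mode_inst().get_prompt("default")
#   dprompt.mode_inst().set_using_name("default")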


@@ -5,18 +5,25 @@ import logging
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
class KeysManager:
api_key = {}
"""所有api-key"""
# api-key的使用量
# 其中键为api-key的md5值值为使用量
using_key = ""
"""当前使用的api-key"""
alerted = []
"""已提示过超额的key
记录在此以避免重复提示
"""
# 在此list中的都是经超额报错标记过的api-key
# 记录的是key值仅在运行时有效
exceeded = []
"""已超额的key
供自动切换功能识别
"""
def get_using_key(self):
return self.using_key
@@ -25,8 +32,6 @@ class KeysManager:
return hashlib.md5(self.using_key.encode('utf-8')).hexdigest()
def __init__(self, api_key):
# if hasattr(config, 'api_key_usage_threshold'):
# self.api_key_usage_threshold = config.api_key_usage_threshold
if type(api_key) is dict:
self.api_key = api_key
@@ -42,9 +47,13 @@ class KeysManager:
self.auto_switch()
# 根据tested自动切换到可用的api-key
# 返回是否切换成功, 切换后的api-key的别名
def auto_switch(self) -> (bool, str):
def auto_switch(self) -> tuple[bool, str]:
"""尝试切换api-key
Returns:
是否切换成功, 切换后的api-key的别名
"""
for key_name in self.api_key:
if self.api_key[key_name] not in self.exceeded:
self.using_key = self.api_key[key_name]
@@ -68,12 +77,8 @@ class KeysManager:
def add(self, key_name, key):
self.api_key[key_name] = key
# 设置当前使用的api-key使用量超限
# 这是在尝试调用api时发生超限异常时调用的
def set_current_exceeded(self):
# md5 = hashlib.md5(self.using_key.encode('utf-8')).hexdigest()
# self.usage[md5] = self.api_key_usage_threshold
# self.fee[md5] = self.api_key_fee_threshold
"""设置当前使用的api-key使用量超限"""
self.exceeded.append(self.using_key)
def get_key_name(self, api_key):
@@ -81,4 +86,4 @@ class KeysManager:
for key_name in self.api_key:
if self.api_key[key_name] == api_key:
return key_name
return ""
return ""


@@ -5,11 +5,14 @@ import openai
import pkg.openai.keymgr
import pkg.utils.context
import pkg.audit.gatherer
from pkg.openai.modelmgr import ModelRequest, create_openai_model_request
# 为其他模块提供与OpenAI交互的接口
class OpenAIInteract:
api_params = {}
"""OpenAI 接口封装
将文字接口和图片接口封装供调用方使用
"""
key_mgr: pkg.openai.keymgr.KeysManager = None
@@ -20,7 +23,6 @@ class OpenAIInteract:
}
def __init__(self, api_key: str):
# self.api_key = api_key
self.key_mgr = pkg.openai.keymgr.KeysManager(api_key)
self.audit_mgr = pkg.audit.gatherer.DataGatherer()
@@ -32,29 +34,56 @@ class OpenAIInteract:
pkg.utils.context.set_openai_manager(self)
# 请求OpenAI Completion
def request_completion(self, prompt, stop):
def request_completion(self, prompts) -> tuple[str, int]:
"""请求补全接口回复
Parameters:
prompts (str): 提示语
Returns:
str: 回复
"""
config = pkg.utils.context.get_config()
response = openai.Completion.create(
prompt=prompt,
stop=stop,
# 根据模型选择使用的接口
ai: ModelRequest = create_openai_model_request(
config.completion_api_params['model'],
'user',
config.openai_config["http_proxy"] if "http_proxy" in config.openai_config else None
)
ai.request(
prompts,
**config.completion_api_params
)
response = ai.get_response()
logging.debug("OpenAI response: %s", response)
# 记录使用量
current_round_token = 0
if 'model' in config.completion_api_params:
self.audit_mgr.report_text_model_usage(config.completion_api_params['model'],
response['usage']['total_tokens'])
ai.get_total_tokens())
current_round_token = ai.get_total_tokens()
elif 'engine' in config.completion_api_params:
self.audit_mgr.report_text_model_usage(config.completion_api_params['engine'],
response['usage']['total_tokens'])
current_round_token = response['usage']['total_tokens']
return response
return ai.get_message(), current_round_token
def request_image(self, prompt):
def request_image(self, prompt) -> dict:
"""请求图片接口回复
Parameters:
prompt (str): 提示语
Returns:
dict: 响应
"""
config = pkg.utils.context.get_config()
params = config.image_api_params if hasattr(config, "image_api_params") else self.default_image_api_params
params = config.image_api_params
response = openai.Image.create(
prompt=prompt,


@@ -1,7 +1,30 @@
# 提供与模型交互的抽象接口
"""OpenAI 接口底层封装
目前使用的对话接口有:
ChatCompletion - gpt-3.5-turbo 等模型
Completion - text-davinci-003 等模型
此模块封装此两个接口的请求实现,为上层提供统一的调用方式
"""
import openai, logging, threading, asyncio
import openai.error as aiE
COMPLETION_MODELS = {
'text-davinci-003'
'text-davinci-003',
'text-davinci-002',
'code-davinci-002',
'code-cushman-001',
'text-curie-001',
'text-babbage-001',
'text-ada-001',
}
CHAT_COMPLETION_MODELS = {
'gpt-3.5-turbo',
'gpt-3.5-turbo-0301',
'gpt-4',
'gpt-4-0314',
'gpt-4-32k',
'gpt-4-32k-0314'
}
EDIT_MODELS = {
@@ -13,22 +36,153 @@ IMAGE_MODELS = {
}
# ModelManager
# 由session包含
class ModelMgr(object):
class ModelRequest:
"""模型接口请求父类"""
using_completion_model = ""
using_edit_model = ""
using_image_model = ""
can_chat = False
runtime: threading.Thread = None
ret = {}
proxy: str = None
request_ready = True
error_info: str = "若在没有任何错误的情况下看到这句话请带着配置文件上报Issues"
def __init__(self):
pass
def __init__(self, model_name, user_name, request_fun, http_proxy:str = None, time_out = None):
self.model_name = model_name
self.user_name = user_name
self.request_fun = request_fun
self.time_out = time_out
if http_proxy != None:
self.proxy = http_proxy
openai.proxy = self.proxy
self.request_ready = False
def get_using_completion_model(self):
return self.using_completion_model
async def __a_request__(self, **kwargs):
"""异步请求"""
def get_using_edit_model(self):
return self.using_edit_model
try:
self.ret: dict = await self.request_fun(**kwargs)
self.request_ready = True
except aiE.APIConnectionError as e:
self.error_info = "{}\n请检查网络连接或代理是否正常".format(e)
raise ConnectionError(self.error_info)
except ValueError as e:
self.error_info = "{}\n该错误可能是由于http_proxy格式设置错误引起的"
except Exception as e:
self.error_info = "{}\n由于请求异常产生的未知错误,请查看日志".format(e)
raise type(e)(self.error_info)
def get_using_image_model(self):
return self.using_image_model
def request(self, **kwargs):
"""向接口发起请求"""
if self.proxy != None: #异步请求
self.request_ready = False
loop = asyncio.new_event_loop()
self.runtime = threading.Thread(
target=loop.run_until_complete,
args=(self.__a_request__(**kwargs),)
)
self.runtime.start()
else: #同步请求
self.ret = self.request_fun(**kwargs)
def __msg_handle__(self, msg):
"""将prompt dict转换成接口需要的格式"""
return msg
def ret_handle(self):
'''
API消息返回处理函数
若重写该方法应检查异步线程状态或在需要检查处super该方法
'''
if self.runtime != None and isinstance(self.runtime, threading.Thread):
self.runtime.join(self.time_out)
if self.request_ready:
return
raise Exception(self.error_info)
def get_total_tokens(self):
try:
return self.ret['usage']['total_tokens']
except:
return 0
def get_message(self):
return self.message
def get_response(self):
return self.ret
class ChatCompletionModel(ModelRequest):
"""ChatCompletion接口的请求实现"""
Chat_role = ['system', 'user', 'assistant']
def __init__(self, model_name, user_name, http_proxy:str = None, **kwargs):
if http_proxy == None:
request_fun = openai.ChatCompletion.create
else:
request_fun = openai.ChatCompletion.acreate
self.can_chat = True
super().__init__(model_name, user_name, request_fun, http_proxy, **kwargs)
def request(self, prompts, **kwargs):
prompts = self.__msg_handle__(prompts)
kwargs['messages'] = prompts
super().request(**kwargs)
self.ret_handle()
def __msg_handle__(self, msgs):
temp_msgs = []
# 把msgs拷贝进temp_msgs
for msg in msgs:
temp_msgs.append(msg.copy())
return temp_msgs
def get_message(self):
return self.ret["choices"][0]["message"]['content'] #需要时直接加载加快请求速度,降低内存消耗
class CompletionModel(ModelRequest):
"""Completion接口的请求实现"""
def __init__(self, model_name, user_name, http_proxy:str = None, **kwargs):
if http_proxy == None:
request_fun = openai.Completion.create
else:
request_fun = openai.Completion.acreate
super().__init__(model_name, user_name, request_fun, http_proxy, **kwargs)
def request(self, prompts, **kwargs):
prompts = self.__msg_handle__(prompts)
kwargs['prompt'] = prompts
super().request(**kwargs)
self.ret_handle()
def __msg_handle__(self, msgs):
prompt = ''
for msg in msgs:
prompt = prompt + "{}: {}\n".format(msg['role'], msg['content'])
# for msg in msgs:
# if msg['role'] == 'assistant':
# prompt = prompt + "{}\n".format(msg['content'])
# else:
# prompt = prompt + "{}:{}\n".format(msg['role'] , msg['content'])
prompt = prompt + "assistant: "
return prompt
def get_message(self):
return self.ret["choices"][0]["text"]
def create_openai_model_request(model_name: str, user_name: str = 'user', http_proxy:str = None) -> ModelRequest:
"""使用给定的模型名称创建模型请求对象"""
if model_name in CHAT_COMPLETION_MODELS:
model = ChatCompletionModel(model_name, user_name, http_proxy)
elif model_name in COMPLETION_MODELS:
model = CompletionModel(model_name, user_name, http_proxy)
else :
log = "找不到模型[{}],请检查配置文件".format(model_name)
logging.error(log)
raise IndexError(log)
logging.debug("使用接口[{}]创建模型请求[{}]".format(model.__class__.__name__, model_name))
return model
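# Hedged usage sketch (mirrors how pkg.openai.manager calls this module; the
# prompt content and parameter values below are placeholders):
#
#   req = create_openai_model_request("gpt-3.5-turbo", "user", http_proxy=None)
#   req.request(
#       [{"role": "user", "content": "hello"}],
#       model="gpt-3.5-turbo", max_tokens=512, temperature=0.9,
#   )
#   text = req.get_message()
#   tokens = req.get_total_tokens()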


@@ -1,8 +1,15 @@
"""主线使用的会话管理模块
每个人、每个群单独一个sessionsession内部保留了对话的上下文
"""
import logging
import threading
import time
import json
import pkg.openai.manager
import pkg.openai.modelmgr
import pkg.database.manager
import pkg.utils.context
@@ -18,8 +25,38 @@ class SessionOfflineStatus:
EXPLICITLY_CLOSED = 'explicitly_closed'
# 重置session.prompt
def reset_session_prompt(session_name, prompt):
# 备份原始数据
bak_path = 'logs/{}-{}.bak'.format(
session_name,
time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
)
f = open(bak_path, 'w+')
f.write(prompt)
f.close()
# 生成新数据
config = pkg.utils.context.get_config()
prompt = [
{
'role': 'system',
'content': config.default_prompt['default'] if type(config.default_prompt) == dict else config.default_prompt
}
]
# 警告
logging.warning(
"""
用户[{}]的数据已被重置,有可能是因为数据版本过旧或存储错误
原始数据将备份在:
{}""".format(session_name, bak_path)
) # 为保证多行文本格式正确故无缩进
return prompt
# 从数据加载session
def load_sessions():
"""从数据库加载sessions"""
global sessions
db_inst = pkg.utils.context.get_database_manager()
@@ -33,7 +70,14 @@ def load_sessions():
temp_session.name = session_name
temp_session.create_timestamp = session_data[session_name]['create_timestamp']
temp_session.last_interact_timestamp = session_data[session_name]['last_interact_timestamp']
temp_session.prompt = session_data[session_name]['prompt']
try:
temp_session.prompt = json.loads(session_data[session_name]['prompt'])
temp_session.token_counts = json.loads(session_data[session_name]['token_counts'])
except Exception:
temp_session.prompt = reset_session_prompt(session_name, session_data[session_name]['prompt'])
temp_session.persistence()
temp_session.default_prompt = json.loads(session_data[session_name]['default_prompt']) if \
session_data[session_name]['default_prompt'] else []
sessions[session_name] = temp_session
@@ -60,16 +104,20 @@ def dump_session(session_name: str):
class Session:
name = ''
prompt = ""
prompt = []
"""使用list来保存会话中的回合"""
import config
token_counts = []
"""每个回合的token数量"""
user_name = config.user_name if hasattr(config, 'user_name') and config.user_name != '' else 'You'
bot_name = config.bot_name if hasattr(config, 'bot_name') and config.bot_name != '' else 'Bot'
default_prompt = []
"""本session的默认prompt"""
create_timestamp = 0
"""会话创建时间"""
last_interact_timestamp = 0
"""上次交互(产生回复)时间"""
just_switched_to_exist_session = False
@@ -89,30 +137,27 @@ class Session:
logging.debug('{},lock release successfully,{}'.format(self.name, self.response_lock))
# 从配置文件获取会话预设信息
def get_default_prompt(self, use_default: str=None):
config = pkg.utils.context.get_config()
def get_default_prompt(self, use_default: str = None):
import pkg.openai.dprompt as dprompt
if use_default is None:
current_default_prompt = dprompt.get_prompt(dprompt.get_current())
else:
current_default_prompt = dprompt.get_prompt(use_default)
use_default = dprompt.mode_inst().get_using_name()
user_name = config.user_name if hasattr(config, 'user_name') and config.user_name != '' else 'You'
bot_name = config.bot_name if hasattr(config, 'bot_name') and config.bot_name != '' else 'Bot'
return (user_name + ":{}\n".format(current_default_prompt) + bot_name + ":好的\n") \
if current_default_prompt != '' else ''
current_default_prompt, _ = dprompt.mode_inst().get_prompt(use_default)
return current_default_prompt
def __init__(self, name: str):
self.name = name
self.create_timestamp = int(time.time())
self.last_interact_timestamp = int(time.time())
self.prompt = []
self.token_counts = []
self.schedule()
self.response_lock = threading.Lock()
self.prompt = self.get_default_prompt()
self.default_prompt = self.get_default_prompt()
logging.debug("prompt is: {}".format(self.default_prompt))
# 设定检查session最后一次对话是否超过过期时间的计时器
def schedule(self):
@@ -151,88 +196,120 @@ class Session:
# 请求回复
# 这个函数是阻塞的
def append(self, text: str) -> str:
"""向session中添加一条消息返回接口回复"""
self.last_interact_timestamp = int(time.time())
# 触发插件事件
if self.prompt == self.get_default_prompt():
if not self.prompt:
args = {
'session_name': self.name,
'session': self,
'default_prompt': self.prompt,
'default_prompt': self.default_prompt,
}
event = pkg.plugin.host.emit(plugin_models.SessionFirstMessageReceived, **args)
if event.is_prevented_default():
return None
# max_rounds = config.prompt_submit_round_amount if hasattr(config, 'prompt_submit_round_amount') else 7
config = pkg.utils.context.get_config()
max_rounds = 1000 # 不再限制回合数
max_length = config.prompt_submit_length if hasattr(config, "prompt_submit_length") else 1024
max_length = config.prompt_submit_length
prompts, counts = self.cut_out(text, max_length)
# 计算请求前的prompt数量
total_token_before_query = 0
for token_count in counts:
total_token_before_query += token_count
# 向API请求补全
response = pkg.utils.context.get_openai_manager().request_completion(
self.cut_out(self.prompt + self.user_name + ':' +
text + '\n' + self.bot_name + ':',
max_rounds, max_length),
self.user_name + ':')
message, total_token = pkg.utils.context.get_openai_manager().request_completion(
prompts,
)
self.prompt += self.user_name + ':' + text + '\n' + self.bot_name + ':'
# print(response)
# 处理回复
res_test = response["choices"][0]["text"]
res_ans = res_test
# 成功获取,处理回复
res_test = message
res_ans = res_test.strip()
# 去除开头可能的提示
res_ans_spt = res_test.split("\n\n")
if len(res_ans_spt) > 1:
del (res_ans_spt[0])
res_ans = '\n\n'.join(res_ans_spt)
# 将此次对话的双方内容加入到prompt中
self.prompt.append({'role': 'user', 'content': text})
self.prompt.append({'role': 'assistant', 'content': res_ans})
self.prompt += "{}".format(res_ans) + '\n'
# 向token_counts中添加本回合的token数量
self.token_counts.append(total_token-total_token_before_query)
logging.debug("本回合使用token: {}, session counts: {}".format(total_token-total_token_before_query, self.token_counts))
if self.just_switched_to_exist_session:
self.just_switched_to_exist_session = False
self.set_ongoing()
return res_ans
return res_ans if res_ans[0] != '\n' else res_ans[1:]
# 删除上一回合并返回上一回合的问题
def undo(self) -> str:
self.last_interact_timestamp = int(time.time())
# 删除上一回合
to_delete = self.cut_out(self.prompt, 1, 1024)
# 删除最后两个消息
if len(self.prompt) < 2:
raise Exception('之前无对话,无法撤销')
self.prompt = self.prompt.replace(to_delete, '')
question = self.prompt[-2]['content']
self.prompt = self.prompt[:-2]
self.token_counts = self.token_counts[:-1]
# 返回上一回合的问题
return to_delete.split(self.bot_name + ':')[0].split(self.user_name + ':')[1].strip()
return question
# 从尾部截取prompt里不多于max_rounds个回合长度不大于max_tokens的字符串
# 保证都是完整的对话
def cut_out(self, prompt: str, max_rounds: int, max_tokens: int) -> str:
# 分隔出每个回合
rounds_spt_by_user_name = prompt.split(self.user_name + ':')
# 构建对话体
def cut_out(self, msg: str, max_tokens: int) -> tuple[list, list]:
"""将现有prompt进行切割处理使得新的prompt长度不超过max_tokens
result = ''
:return: (新的prompt, 新的token_counts)
"""
checked_rounds = 0
# 从后往前遍历加到result前面检查result是否符合要求
for i in range(len(rounds_spt_by_user_name) - 1, 0, -1):
result_temp = self.user_name + ':' + rounds_spt_by_user_name[i] + result
checked_rounds += 1
# 最终由三个部分组成
# - default_prompt 情景预设固定值
# - changable_prompts 可变部分, 此会话中的历史对话回合
# - current_question 当前问题
if checked_rounds > max_rounds:
# 包装目前的对话回合内容
changable_prompts = []
changable_counts = []
# 倒着来, 遍历prompt的步长为2, 遍历tokens_counts的步长为1
changable_index = len(self.prompt) - 1
token_count_index = len(self.token_counts) - 1
packed_tokens = 0
while changable_index >= 0 and token_count_index >= 0:
if packed_tokens + self.token_counts[token_count_index] > max_tokens:
break
if int((len(result_temp.encode('utf-8')) - len(result_temp)) / 2 + len(result_temp)) > max_tokens:
break
changable_prompts.insert(0, self.prompt[changable_index])
changable_prompts.insert(0, self.prompt[changable_index - 1])
changable_counts.insert(0, self.token_counts[token_count_index])
packed_tokens += self.token_counts[token_count_index]
result = result_temp
changable_index -= 2
token_count_index -= 1
logging.debug('cut_out: {}'.format(result))
return result
# 将default_prompt和changable_prompts合并
result_prompt = self.default_prompt + changable_prompts
# 添加当前问题
result_prompt.append(
{
'role': 'user',
'content': msg
}
)
logging.debug('cut_out: {}\nchangable section tokens: {}\npacked counts: {}\nsession counts: {}'.format(json.dumps(result_prompt, ensure_ascii=False, indent=4),
packed_tokens,
changable_counts,
self.token_counts))
return result_prompt, changable_counts
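# Worked example of cut_out's packing above (token numbers invented):
#   self.prompt       = [user1, bot1, user2, bot2]   -> two complete rounds
#   self.token_counts = [30, 50]                     -> one count per round
#   max_tokens        = 60
# Walking backwards, round 2 (50 tokens) fits; adding round 1 would exceed 60, so the
# function returns default_prompt + [user2, bot2] + [{'role': 'user', 'content': msg}]
# and changable_counts == [50].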
# 持久化session
def persistence(self):
@@ -247,11 +324,11 @@ class Session:
subject_number = int(name_spt[1])
db_inst.persistence_session(subject_type, subject_number, self.create_timestamp, self.last_interact_timestamp,
self.prompt)
json.dumps(self.prompt), json.dumps(self.default_prompt), json.dumps(self.token_counts))
# 重置session
def reset(self, explicit: bool = False, expired: bool = False, schedule_new: bool = True, use_prompt: str = None):
if not self.prompt.endswith(':好的\n'):
if self.prompt:
self.persistence()
if explicit:
# 触发插件事件
@@ -267,7 +344,10 @@ class Session:
if expired:
pkg.utils.context.get_database_manager().set_session_expired(self.name, self.create_timestamp)
self.prompt = self.get_default_prompt(use_prompt)
self.default_prompt = self.get_default_prompt(use_prompt)
self.prompt = []
self.token_counts = []
self.create_timestamp = int(time.time())
self.last_interact_timestamp = int(time.time())
self.just_switched_to_exist_session = False
@@ -291,7 +371,13 @@ class Session:
self.create_timestamp = last_one['create_timestamp']
self.last_interact_timestamp = last_one['last_interact_timestamp']
self.prompt = last_one['prompt']
try:
self.prompt = json.loads(last_one['prompt'])
self.token_counts = json.loads(last_one['token_counts'])
except json.decoder.JSONDecodeError:
self.prompt = reset_session_prompt(self.name, last_one['prompt'])
self.persistence()
self.default_prompt = json.loads(last_one['default_prompt']) if last_one['default_prompt'] else []
self.just_switched_to_exist_session = True
return self
@@ -306,14 +392,25 @@ class Session:
self.create_timestamp = next_one['create_timestamp']
self.last_interact_timestamp = next_one['last_interact_timestamp']
self.prompt = next_one['prompt']
try:
self.prompt = json.loads(next_one['prompt'])
self.token_counts = json.loads(next_one['token_counts'])
except json.decoder.JSONDecodeError:
self.prompt = reset_session_prompt(self.name, next_one['prompt'])
self.persistence()
self.default_prompt = json.loads(next_one['default_prompt']) if next_one['default_prompt'] else []
self.just_switched_to_exist_session = True
return self
def list_history(self, capacity: int = 10, page: int = 0):
return pkg.utils.context.get_database_manager().list_history(self.name, capacity, page,
self.get_default_prompt())
return pkg.utils.context.get_database_manager().list_history(self.name, capacity, page)
def delete_history(self, index: int) -> bool:
return pkg.utils.context.get_database_manager().delete_history(self.name, index)
def delete_all_history(self) -> bool:
return pkg.utils.context.get_database_manager().delete_all_history(self.name)
def draw_image(self, prompt: str):
return pkg.utils.context.get_openai_manager().request_image(prompt)

View File

@@ -0,0 +1,4 @@
"""插件支持包
包含插件基类、插件宿主以及部分API接口
"""

View File

@@ -5,17 +5,18 @@ import importlib
import os
import pkgutil
import sys
import shutil
import traceback
import pkg.utils.context as context
import pkg.plugin.switch as switch
import pkg.plugin.settings as settings
import pkg.qqbot.adapter as msadapter
from mirai import Mirai
__plugins__ = {}
"""
插件列表
"""插件列表
示例:
{
@@ -34,14 +35,15 @@ __plugins__ = {}
},
"instance": None
}
}"""
}
"""
__plugins_order__ = []
"""插件顺序"""
def generate_plugin_order():
""" 根据__plugin__生成插件初始顺序无视是否启用 """
"""根据__plugin__生成插件初始顺序无视是否启用"""
global __plugins_order__
__plugins_order__ = []
for plugin_name in __plugins__:
@@ -49,13 +51,13 @@ def generate_plugin_order():
def iter_plugins():
""" 按照顺序迭代插件 """
"""按照顺序迭代插件"""
for plugin_name in __plugins_order__:
yield __plugins__[plugin_name]
def iter_plugins_name():
""" 迭代插件名 """
"""迭代插件名"""
for plugin_name in __plugins_order__:
yield plugin_name
@@ -84,7 +86,7 @@ def walk_plugin_path(module, prefix='', path_prefix=''):
def load_plugins():
""" 加载插件 """
"""加载插件"""
logging.info("加载插件")
PluginHost()
walk_plugin_path(__import__('plugins'))
@@ -101,12 +103,12 @@ def load_plugins():
def initialize_plugins():
""" 初始化插件 """
"""初始化插件"""
logging.info("初始化插件")
import pkg.plugin.models as models
for plugin in iter_plugins():
if not plugin['enabled']:
continue
# if not plugin['enabled']:
# continue
try:
models.__current_registering_plugin__ = plugin['name']
plugin['instance'] = plugin["class"](plugin_host=context.get_plugin_host())
@@ -116,7 +118,8 @@ def initialize_plugins():
def unload_plugins():
""" 卸载插件 """
"""卸载插件"""
# 不再显式卸载插件,因为当程序结束时,插件的析构函数会被系统执行
# for plugin in __plugins__.values():
# if plugin['enabled'] and plugin['instance'] is not None:
# if not hasattr(plugin['instance'], '__del__'):
@@ -131,7 +134,7 @@ def unload_plugins():
def install_plugin(repo_url: str):
""" 安装插件从git储存库获取并解决依赖 """
"""安装插件从git储存库获取并解决依赖"""
try:
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.ensure_dulwich()
@@ -154,22 +157,38 @@ def install_plugin(repo_url: str):
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements("plugins/"+repo_url.split(".git")[0].split("/")[-1]+"/requirements.txt")
import main
main.reset_logging()
import pkg.utils.log as log
log.reset_logging()
def uninstall_plugin(plugin_name: str) -> str:
"""卸载插件"""
if plugin_name not in __plugins__:
raise Exception("插件不存在")
# 获取文件夹路径
plugin_path = __plugins__[plugin_name]['path'].replace("\\", "/")
# 剪切路径为plugins/插件名
plugin_path = plugin_path.split("plugins/")[1].split("/")[0]
# 删除文件夹
shutil.rmtree("plugins/"+plugin_path)
return "plugins/"+plugin_path
class EventContext:
""" 事件上下文 """
"""事件上下文"""
eid = 0
"""事件编号"""
name = ""
__prevent_default__ = False
""" 是否阻止默认行为 """
"""是否阻止默认行为"""
__prevent_postorder__ = False
""" 是否阻止后续插件的执行 """
"""是否阻止后续插件的执行"""
__return_value__ = {}
""" 返回值
@@ -232,7 +251,7 @@ class EventContext:
def emit(event_name: str, **kwargs) -> EventContext:
""" 触发事件 """
"""触发事件"""
import pkg.utils.context as context
if context.get_plugin_host() is None:
return None
@@ -258,20 +277,24 @@ class PluginHost:
"""获取机器人对象"""
return context.get_qqbot_manager().bot
def get_bot_adapter(self) -> msadapter.MessageSourceAdapter:
"""获取消息源适配器"""
return context.get_qqbot_manager().adapter
def send_person_message(self, person, message):
"""发送私聊消息"""
asyncio.run(self.get_bot().send_friend_message(person, message))
self.get_bot_adapter().send_message("person", person, message)
def send_group_message(self, group, message):
"""发送群消息"""
asyncio.run(self.get_bot().send_group_message(group, message))
self.get_bot_adapter().send_message("group", group, message)
def notify_admin(self, message):
"""通知管理员"""
context.get_qqbot_manager().notify_admin(message)
def emit(self, event_name: str, **kwargs) -> EventContext:
""" 触发事件 """
"""触发事件"""
import json
event_context = EventContext(event_name)

View File

@@ -145,6 +145,7 @@ __current_registering_plugin__ = ""
class Plugin:
"""插件基类"""
host: host.PluginHost
"""插件宿主,提供插件的一些基础功能"""

View File

@@ -7,7 +7,7 @@ import pkg.plugin.host as host
def wrapper_dict_from_plugin_list() -> dict:
""" 将插件列表转换为开关json """
"""将插件列表转换为开关json"""
switch = {}
for plugin_name in host.__plugins__:
@@ -30,7 +30,7 @@ def apply_switch(switch: dict):
def dump_switch():
""" 保存开关数据 """
"""保存开关数据"""
logging.debug("保存开关数据")
# 将开关数据写入plugins/switch.json
@@ -41,7 +41,7 @@ def dump_switch():
def load_switch():
""" 加载开关数据 """
"""加载开关数据"""
logging.debug("加载开关数据")
# 读取plugins/switch.json

pkg/qqbot/adapter.py (new file, 136 lines)
View File

@@ -0,0 +1,136 @@
# MessageSource的适配器
import typing
import mirai
class MessageSourceAdapter:
def __init__(self, config: dict):
pass
def send_message(
self,
target_type: str,
target_id: str,
message: mirai.MessageChain
):
"""发送消息
Args:
target_type (str): 目标类型,`person`或`group`
target_id (str): 目标ID
message (mirai.MessageChain): YiriMirai库的消息链
"""
raise NotImplementedError
def reply_message(
self,
message_source: mirai.MessageEvent,
message: mirai.MessageChain,
quote_origin: bool = False
):
"""回复消息
Args:
message_source (mirai.MessageEvent): YiriMirai消息源事件
message (mirai.MessageChain): YiriMirai库的消息链
quote_origin (bool, optional): 是否引用原消息. Defaults to False.
"""
raise NotImplementedError
def is_muted(self, group_id: int) -> bool:
"""获取账号是否在指定群被禁言"""
raise NotImplementedError
def register_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注册事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
raise NotImplementedError
def unregister_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注销事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
raise NotImplementedError
def run_sync(self):
"""以阻塞的方式运行适配器"""
raise NotImplementedError
def kill(self) -> bool:
"""关闭适配器
Returns:
bool: 是否成功关闭,热重载时若此函数返回False则不会重载MessageSource底层
"""
raise NotImplementedError
class MessageConverter:
"""消息链转换器基类"""
@staticmethod
def yiri2target(message_chain: mirai.MessageChain):
"""将YiriMirai消息链转换为目标消息链
Args:
message_chain (mirai.MessageChain): YiriMirai消息链
Returns:
typing.Any: 目标消息链
"""
raise NotImplementedError
@staticmethod
def target2yiri(message_chain: typing.Any) -> mirai.MessageChain:
"""将目标消息链转换为YiriMirai消息链
Args:
message_chain (typing.Any): 目标消息链
Returns:
mirai.MessageChain: YiriMirai消息链
"""
raise NotImplementedError
class EventConverter:
"""事件转换器基类"""
@staticmethod
def yiri2target(event: typing.Type[mirai.Event]):
"""将YiriMirai事件转换为目标事件
Args:
event (typing.Type[mirai.Event]): YiriMirai事件
Returns:
typing.Any: 目标事件
"""
raise NotImplementedError
@staticmethod
def target2yiri(event: typing.Any) -> mirai.Event:
"""将目标事件的调用参数转换为YiriMirai的事件参数对象
Args:
event (typing.Any): 目标事件
Returns:
typing.Type[mirai.Event]: YiriMirai事件
"""
raise NotImplementedError
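As a shape reference, a minimal concrete adapter could subclass the base above as sketched below; the echo behaviour and every name outside the base class are invented, and a real adapter (the YiriMirai or nakuru one) would call its backend API where the placeholders are.
class EchoAdapter(MessageSourceAdapter):
    """A toy adapter that only logs outgoing messages."""

    def __init__(self, config: dict):
        self.config = config
        self.listeners = {}

    def send_message(self, target_type: str, target_id: str, message: mirai.MessageChain):
        # a real adapter would convert the chain with a MessageConverter and call its backend
        print("send[{}:{}] {}".format(target_type, target_id, message))

    def reply_message(self, message_source: mirai.MessageEvent, message: mirai.MessageChain, quote_origin: bool = False):
        self.send_message("person", str(message_source.sender.id), message)

    def is_muted(self, group_id: int) -> bool:
        return False

    def register_listener(self, event_type, callback):
        self.listeners.setdefault(event_type, []).append(callback)

    def unregister_listener(self, event_type, callback):
        if callback in self.listeners.get(event_type, []):
            self.listeners[event_type].remove(callback)

    def run_sync(self):
        pass  # a real adapter blocks on its backend's event loop here

    def kill(self) -> bool:
        return True  # nothing to shut down in this toy adapter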

View File

@@ -1,30 +1,34 @@
import pkg.utils.context
def is_banned(launcher_type: str, launcher_id: int) -> bool:
def is_banned(launcher_type: str, launcher_id: int, sender_id: int) -> bool:
if not pkg.utils.context.get_qqbot_manager().enable_banlist:
return False
result = False
if launcher_type == 'group':
for group_rule in pkg.utils.context.get_qqbot_manager().ban_group:
if type(group_rule) == int:
if group_rule == launcher_id: # 此群群号被禁用
result = True
elif type(group_rule) == str:
if group_rule.startswith('!'):
# 截取!后面的字符串作为表达式,判断是否匹配
reg_str = group_rule[1:]
import re
if re.match(reg_str, str(launcher_id)): # 被豁免,最高级别
result = False
break
else:
# 判断是否匹配regexp
import re
if re.match(group_rule, str(launcher_id)): # 此群群号被禁用
# 检查是否显式声明发起人QQ要被person忽略
if sender_id in pkg.utils.context.get_qqbot_manager().ban_person:
result = True
else:
for group_rule in pkg.utils.context.get_qqbot_manager().ban_group:
if type(group_rule) == int:
if group_rule == launcher_id: # 此群群号被禁用
result = True
elif type(group_rule) == str:
if group_rule.startswith('!'):
# 截取!后面的字符串作为表达式,判断是否匹配
reg_str = group_rule[1:]
import re
if re.match(reg_str, str(launcher_id)): # 被豁免,最高级别
result = False
break
else:
# 判断是否匹配regexp
import re
if re.match(group_rule, str(launcher_id)): # 此群群号被禁用
result = True
else:
# ban_person, 与群规则相同

pkg/qqbot/blob.py (new file, 97 lines)
View File

@@ -0,0 +1,97 @@
# 长消息处理相关
import os
import time
import base64
import config
from mirai.models.message import MessageComponent, MessageChain, Image
from mirai.models.message import ForwardMessageNode
from mirai.models.base import MiraiBaseModel
from typing import List
import pkg.utils.context as context
import pkg.utils.text2img as text2img
class ForwardMessageDiaplay(MiraiBaseModel):
title: str = "群聊的聊天记录"
brief: str = "[聊天记录]"
source: str = "聊天记录"
preview: List[str] = []
summary: str = "查看x条转发消息"
class Forward(MessageComponent):
"""合并转发。"""
type: str = "Forward"
"""消息组件类型。"""
display: ForwardMessageDiaplay
"""显示信息"""
node_list: List[ForwardMessageNode]
"""转发消息节点列表。"""
def __init__(self, *args, **kwargs):
if len(args) == 1:
self.node_list = args[0]
super().__init__(**kwargs)
super().__init__(*args, **kwargs)
def __str__(self):
return '[聊天记录]'
def text_to_image(text: str) -> MessageComponent:
"""将文本转换成图片"""
# 检查temp文件夹是否存在
if not os.path.exists('temp'):
os.mkdir('temp')
img_path = text2img.text_to_image(text_str=text, save_as='temp/{}.png'.format(int(time.time())))
compressed_path, size = text2img.compress_image(img_path, outfile="temp/{}_compressed.png".format(int(time.time())))
# 读取图片转换成base64
with open(compressed_path, 'rb') as f:
img = f.read()
b64 = base64.b64encode(img)
# 删除图片
os.remove(img_path)
# 判断compressed_path是否存在
if os.path.exists(compressed_path):
os.remove(compressed_path)
# 返回图片
return Image(base64=b64.decode('utf-8'))
def check_text(text: str) -> list:
"""检查文本是否为长消息,并转换成该使用的消息链组件"""
if len(text) > config.blob_message_threshold:
# logging.info("长消息: {}".format(text))
if config.blob_message_strategy == 'image':
# 转换成图片
return [text_to_image(text)]
elif config.blob_message_strategy == 'forward':
# 包装转发消息
display = ForwardMessageDiaplay(
title='群聊的聊天记录',
brief='[聊天记录]',
source='聊天记录',
preview=["bot: "+text],
summary="查看1条转发消息"
)
node = ForwardMessageNode(
sender_id=config.mirai_http_api_config['qq'],
sender_name='bot',
message_chain=MessageChain([text])
)
forward = Forward(
display=display,
node_list=[node]
)
return [forward]
else:
return [text]
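The strategy above is chosen purely from configuration; illustrative config.py values (numbers invented) would be:
blob_message_threshold = 256        # replies longer than this many characters get wrapped
blob_message_strategy = 'forward'   # 'image' renders the text to a picture, 'forward' packs a forward message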

View File

pkg/qqbot/cmds/aamgr.py (new file, 333 lines)
View File

@@ -0,0 +1,333 @@
import importlib
import inspect
import logging
import copy
import pkgutil
import traceback
import types
import json
__command_list__ = {}
import tips as tips_custom
"""命令树
结构:
{
'cmd1': {
'description': 'cmd1 description',
'usage': 'cmd1 usage',
'aliases': ['cmd1 alias1', 'cmd1 alias2'],
'privilege': 0,
'parent': None,
'cls': <class 'pkg.qqbot.cmds.cmd1.CommandCmd1'>,
'sub': [
'cmd1-1'
]
},
'cmd1.cmd1-1': {
'description': 'cmd1-1 description',
'usage': 'cmd1-1 usage',
'aliases': ['cmd1-1 alias1', 'cmd1-1 alias2'],
'privilege': 0,
'parent': 'cmd1',
'cls': <class 'pkg.qqbot.cmds.cmd1.CommandCmd1_1'>,
'sub': []
},
'cmd2': {
'description': 'cmd2 description',
'usage': 'cmd2 usage',
'aliases': ['cmd2 alias1', 'cmd2 alias2'],
'privilege': 0,
'parent': None,
'cls': <class 'pkg.qqbot.cmds.cmd2.CommandCmd2'>,
'sub': [
'cmd2-1'
]
},
'cmd2.cmd2-1': {
'description': 'cmd2-1 description',
'usage': 'cmd2-1 usage',
'aliases': ['cmd2-1 alias1', 'cmd2-1 alias2'],
'privilege': 0,
'parent': 'cmd2',
'cls': <class 'pkg.qqbot.cmds.cmd2.CommandCmd2_1'>,
'sub': [
'cmd2-1-1'
]
},
'cmd2.cmd2-1.cmd2-1-1': {
'description': 'cmd2-1-1 description',
'usage': 'cmd2-1-1 usage',
'aliases': ['cmd2-1-1 alias1', 'cmd2-1-1 alias2'],
'privilege': 0,
'parent': 'cmd2.cmd2-1',
'cls': <class 'pkg.qqbot.cmds.cmd2.CommandCmd2_1_1'>,
'sub': []
},
}
"""
__tree_index__: dict[str, list] = {}
"""命令树索引
结构:
{
'pkg.qqbot.cmds.cmd1.CommandCmd1': 'cmd1', # 顶级指令
'pkg.qqbot.cmds.cmd1.CommandCmd1_1': 'cmd1.cmd1-1', # 类名: 节点路径
'pkg.qqbot.cmds.cmd2.CommandCmd2': 'cmd2',
'pkg.qqbot.cmds.cmd2.CommandCmd2_1': 'cmd2.cmd2-1',
'pkg.qqbot.cmds.cmd2.CommandCmd2_1_1': 'cmd2.cmd2-1.cmd2-1-1',
}
"""
class Context:
"""命令执行上下文"""
command: str
"""顶级指令文本"""
crt_command: str
"""当前子指令文本"""
params: list
"""完整参数列表"""
crt_params: list
"""当前子指令参数列表"""
session_name: str
"""会话名"""
text_message: str
"""指令完整文本"""
launcher_type: str
"""指令发起者类型"""
launcher_id: int
"""指令发起者ID"""
sender_id: int
"""指令发送者ID"""
is_admin: bool
"""[过时]指令发送者是否为管理员"""
privilege: int
"""指令发送者权限等级"""
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
class AbstractCommandNode:
"""指令抽象类"""
parent: type
"""父指令类"""
name: str
"""指令名"""
description: str
"""指令描述"""
usage: str
"""指令用法"""
aliases: list[str]
"""指令别名"""
privilege: int
"""指令权限等级, 权限大于等于此值的用户才能执行指令"""
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
"""指令处理函数
:param ctx: 指令执行上下文
:return: (是否执行, 回复列表(若执行))
若未执行,将自动以下一个参数查找并执行子指令
"""
raise NotImplementedError
@classmethod
def help(cls) -> str:
"""获取指令帮助信息"""
return '指令: {}\n描述: {}\n用法: \n{}\n别名: {}\n权限: {}'.format(
cls.name,
cls.description,
cls.usage,
', '.join(cls.aliases),
cls.privilege
)
@staticmethod
def register(
parent: type = None,
name: str = None,
description: str = None,
usage: str = None,
aliases: list[str] = None,
privilege: int = 0
):
"""注册指令
:param cls: 指令类
:param name: 指令名
:param parent: 父指令类
"""
global __command_list__, __tree_index__
def wrapper(cls):
cls.name = name
cls.parent = parent
cls.description = description
cls.usage = usage
cls.aliases = aliases
cls.privilege = privilege
logging.debug("cls: {}, name: {}, parent: {}".format(cls, name, parent))
if parent is None:
# 顶级指令注册
__command_list__[name] = {
'description': cls.description,
'usage': cls.usage,
'aliases': cls.aliases,
'privilege': cls.privilege,
'parent': None,
'cls': cls,
'sub': []
}
# 更新索引
__tree_index__[cls.__module__ + '.' + cls.__name__] = name
else:
# 获取父节点名称
path = __tree_index__[parent.__module__ + '.' + parent.__name__]
parent_node = __command_list__[path]
# 链接父子指令
__command_list__[path]['sub'].append(name)
# 注册子指令
__command_list__[path + '.' + name] = {
'description': cls.description,
'usage': cls.usage,
'aliases': cls.aliases,
'privilege': cls.privilege,
'parent': path,
'cls': cls,
'sub': []
}
# 更新索引
__tree_index__[cls.__module__ + '.' + cls.__name__] = path + '.' + name
return cls
return wrapper
class CommandPrivilegeError(Exception):
"""指令权限不足或不存在异常"""
pass
# 传入Context对象广搜命令树返回执行结果
# 若命令被处理返回reply列表
# 若命令未被处理,继续执行下一级指令
# 若命令不存在,报异常
def execute(context: Context) -> list:
"""执行指令
:param ctx: 指令执行上下文
:return: 回复列表
"""
global __command_list__
# 拷贝ctx
ctx: Context = copy.deepcopy(context)
# 从树取出顶级指令
node = __command_list__
path = ctx.command
while True:
try:
logging.debug('执行指令: {}'.format(path))
node = __command_list__[path]
# 检查权限
if ctx.privilege < node['privilege']:
raise CommandPrivilegeError(tips_custom.command_admin_message+"{}".format(path))
# 执行
execed, reply = node['cls'].process(ctx)
if execed:
return reply
else:
# 删除crt_params第一个参数
ctx.crt_command = ctx.crt_params.pop(0)
# 下一个path
path = path + '.' + ctx.crt_command
except KeyError:
traceback.print_exc()
raise CommandPrivilegeError(tips_custom.command_err_message+"{}".format(path))
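# Worked example of the walk above, using commands defined later in this diff:
#   "!plugin get https://example.com/x.git" arrives as command='plugin',
#   crt_params=['get', 'https://example.com/x.git'].
#   1. node 'plugin' is found; PluginCommand.process returns (False, []) because the
#      first param is neither empty nor an http URL, so it is not handled there
#   2. 'get' is popped from crt_params and path becomes 'plugin.get'
#   3. PluginGetCommand.process handles the request and its reply list is returned
# A path that is never found raises KeyError, reported as CommandPrivilegeError above.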
def register_all():
"""启动时调用此函数注册所有指令
递归处理pkg.qqbot.cmds包下及其子包下所有模块的所有继承于AbstractCommand的类
"""
# 模块遍历其中的继承于AbstractCommand的类进行注册
# 包:递归处理包下的模块
# 排除__开头的属性
global __command_list__, __tree_index__
import pkg.qqbot.cmds
def walk(module, prefix, path_prefix):
# 排除不处于pkg.qqbot.cmds中的包
if not module.__name__.startswith('pkg.qqbot.cmds'):
return
logging.debug('walk: {}, path: {}'.format(module.__name__, module.__path__))
for item in pkgutil.iter_modules(module.__path__):
if item.name.startswith('__'):
continue
if item.ispkg:
walk(__import__(module.__name__ + '.' + item.name, fromlist=['']), prefix + item.name + '.', path_prefix + item.name + '/')
else:
m = __import__(module.__name__ + '.' + item.name, fromlist=[''])
# for name, cls in inspect.getmembers(m, inspect.isclass):
# # 检查是否为指令类
# if cls.__module__ == m.__name__ and issubclass(cls, AbstractCommandNode) and cls != AbstractCommandNode:
# cls.register(cls, cls.name, cls.parent)
walk(pkg.qqbot.cmds, '', '')
logging.debug(__command_list__)
def apply_privileges():
"""读取cmdpriv.json并应用指令权限"""
# 读取内容
json_str = ""
with open('cmdpriv.json', 'r', encoding="utf-8") as f:
json_str = f.read()
data = json.loads(json_str)
for path, priv in data.items():
if path == 'comment':
continue
if __command_list__[path]['privilege'] != priv:
logging.debug('应用权限: {} -> {}(default: {})'.format(path, priv, __command_list__[path]['privilege']))
__command_list__[path]['privilege'] = priv
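For reference, a cmdpriv.json accepted by the loader above could look like the dict below (shown as what json.loads would return; paths and levels are illustrative, with 1 meaning any user and 2 meaning admin, matching the privilege values assigned in process_command later in this diff):
data = {
    "comment": "privilege levels: 1 = user, 2 = admin",
    "plugin": 2,
    "plugin.get": 2,
    "draw": 1,
}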

View File

@@ -0,0 +1,36 @@
from ..aamgr import AbstractCommandNode, Context
import logging
from mirai import Image
import config
@AbstractCommandNode.register(
parent=None,
name="draw",
description="使用DALL·E生成图片",
usage="!draw <图片提示语>",
aliases=[],
privilege=1
)
class DrawCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
reply = []
if len(ctx.params) == 0:
reply = ["[bot]err: 未提供图片描述文字"]
else:
session = pkg.openai.session.get_session(ctx.session_name)
res = session.draw_image(" ".join(ctx.params))
logging.debug("draw_image result:{}".format(res))
reply = [Image(url=res['data'][0]['url'])]
if not (hasattr(config, 'include_image_description')
and not config.include_image_description):
reply.append(" ".join(ctx.params))
return True, reply

View File

@@ -0,0 +1,200 @@
from ..aamgr import AbstractCommandNode, Context
import os
import pkg.plugin.host as plugin_host
import pkg.utils.updater as updater
@AbstractCommandNode.register(
parent=None,
name="plugin",
description="插件管理",
usage="!plugin\n!plugin get <插件仓库地址>\n!plugin update\n!plugin del <插件名>\n!plugin on <插件名>\n!plugin off <插件名>",
aliases=[],
privilege=2
)
class PluginCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
reply = []
plugin_list = plugin_host.__plugins__
if len(ctx.params) == 0:
# 列出所有插件
reply_str = "[bot]所有插件({}):\n".format(len(plugin_host.__plugins__))
idx = 0
for key in plugin_host.iter_plugins_name():
plugin = plugin_list[key]
reply_str += "\n#{} {} {}\n{}\nv{}\n作者: {}\n"\
.format((idx+1), plugin['name'],
"[已禁用]" if not plugin['enabled'] else "",
plugin['description'],
plugin['version'], plugin['author'])
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
remote_url = updater.get_remote_url("/".join(plugin['path'].split('/')[:-1]))
if remote_url != "https://github.com/RockChinQ/QChatGPT" and remote_url != "https://gitee.com/RockChin/QChatGPT":
reply_str += "源码: "+remote_url+"\n"
idx += 1
reply = [reply_str]
return True, reply
elif ctx.params[0].startswith("http"):
reply = ["[bot]err: 此命令已弃用,请使用 !plugin get <插件仓库地址> 进行安装"]
return True, reply
else:
return False, []
@AbstractCommandNode.register(
parent=PluginCommand,
name="get",
description="安装插件",
usage="!plugin get <插件仓库地址>",
aliases=[],
privilege=2
)
class PluginGetCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import threading
import logging
import pkg.utils.context
if len(ctx.crt_params) == 0:
reply = ["[bot]err: 请提供插件仓库地址"]
return True, reply
reply = []
def closure():
try:
plugin_host.install_plugin(ctx.crt_params[0])
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装成功,请发送 !reload 指令重载插件")
except Exception as e:
logging.error("插件安装失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装失败:{}".format(e))
threading.Thread(target=closure, args=()).start()
reply = ["[bot]正在安装插件..."]
return True, reply
@AbstractCommandNode.register(
parent=PluginCommand,
name="update",
description="更新所有插件",
usage="!plugin update",
aliases=[],
privilege=2
)
class PluginUpdateCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import threading
import logging
plugin_list = plugin_host.__plugins__
reply = []
def closure():
try:
import pkg.utils.context
updated = []
for key in plugin_list:
plugin = plugin_list[key]
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
success = updater.pull_latest("/".join(plugin['path'].split('/')[:-1]))
if success:
updated.append(plugin['name'])
# 检查是否有requirements.txt
pkg.utils.context.get_qqbot_manager().notify_admin("正在安装依赖...")
for key in plugin_list:
plugin = plugin_list[key]
if os.path.exists("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt"):
logging.info("{}检测到requirements.txt安装依赖".format(plugin['name']))
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt")
import pkg.utils.log as log
log.reset_logging()
pkg.utils.context.get_qqbot_manager().notify_admin("已更新插件: {}".format(", ".join(updated)))
except Exception as e:
logging.error("插件更新失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件更新失败:{} 请尝试手动更新插件".format(e))
threading.Thread(target=closure).start()
reply = ["[bot]正在更新所有插件,请勿重复发起..."]
return True, reply
@AbstractCommandNode.register(
parent=PluginCommand,
name="del",
description="删除插件",
usage="!plugin del <插件名>",
aliases=[],
privilege=2
)
class PluginDelCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
plugin_list = plugin_host.__plugins__
reply = []
if len(ctx.crt_params) < 1:
reply = ["[bot]err: 未指定插件名"]
else:
plugin_name = ctx.crt_params[0]
if plugin_name in plugin_list:
unin_path = plugin_host.uninstall_plugin(plugin_name)
reply = ["[bot]已删除插件: {} ({}), 请发送 !reload 重载插件".format(plugin_name, unin_path)]
else:
reply = ["[bot]err:未找到插件: {}, 请使用!plugin指令查看插件列表".format(plugin_name)]
return True, reply
@AbstractCommandNode.register(
parent=PluginCommand,
name="on",
description="启用指定插件",
usage="!plugin on <插件名>",
aliases=[],
privilege=2
)
@AbstractCommandNode.register(
parent=PluginCommand,
name="off",
description="禁用指定插件",
usage="!plugin off <插件名>",
aliases=[],
privilege=2
)
class PluginOnOffCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.plugin.switch as plugin_switch
plugin_list = plugin_host.__plugins__
reply = []
print(ctx.params)
new_status = ctx.params[0] == 'on'
if len(ctx.crt_params) < 1:
reply = ["[bot]err: 未指定插件名"]
else:
plugin_name = ctx.crt_params[0]
if plugin_name in plugin_list:
plugin_list[plugin_name]['enabled'] = new_status
plugin_switch.dump_switch()
reply = ["[bot]已{}插件: {}".format("启用" if new_status else "禁用", plugin_name)]
else:
reply = ["[bot]err:未找到插件: {}, 请使用!plugin指令查看插件列表".format(plugin_name)]
return True, reply

View File

@@ -0,0 +1,73 @@
from ..aamgr import AbstractCommandNode, Context
@AbstractCommandNode.register(
parent=None,
name="default",
description="操作情景预设",
usage="!default\n!default set [指定情景预设为默认]",
aliases=[],
privilege=1
)
class DefaultCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params
reply = []
import config
if len(params) == 0:
# 输出目前所有情景预设
import pkg.openai.dprompt as dprompt
reply_str = "[bot]当前所有情景预设({}模式):\n\n".format(config.preset_mode)
prompts = dprompt.mode_inst().list()
for key in prompts:
pro = prompts[key]
reply_str += "名称: {}".format(key)
for r in pro:
reply_str += "\n - [{}]: {}".format(r['role'], r['content'])
reply_str += "\n\n"
reply_str += "\n当前默认情景预设:{}\n".format(dprompt.mode_inst().get_using_name())
reply_str += "请使用 !default set <情景预设名称> 来设置默认情景预设"
reply = [reply_str]
elif params[0] != "set":
reply = ["[bot]err: 已弃用,请使用!default set <情景预设名称> 来设置默认情景预设"]
else:
return False, []
return True, reply
@AbstractCommandNode.register(
parent=DefaultCommand,
name="set",
description="设置默认情景预设",
usage="!default set <情景预设名称>",
aliases=[],
privilege=2
)
class DefaultSetCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
reply = []
if len(ctx.crt_params) == 0:
reply = ["[bot]err: 请指定情景预设名称"]
elif len(ctx.crt_params) > 0:
import pkg.openai.dprompt as dprompt
try:
full_name = dprompt.mode_inst().set_using_name(ctx.crt_params[0])
reply = ["[bot]已设置默认情景预设为:{}".format(full_name)]
except Exception as e:
reply = ["[bot]err: {}".format(e)]
else:
reply = ["[bot]err: 仅管理员可设置默认情景预设"]
return True, reply

View File

@@ -0,0 +1,52 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
@AbstractCommandNode.register(
parent=None,
name="del",
description="删除当前会话的历史记录",
usage="!del <序号>\n!del all",
aliases=[],
privilege=1
)
class DelCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params
reply = []
if len(params) == 0:
reply = ["[bot]参数不足, 格式: !del <序号>\n可以通过!list查看序号"]
else:
if params[0] == 'all':
return False, []
elif params[0].isdigit():
if pkg.openai.session.get_session(session_name).delete_history(int(params[0])):
reply = ["[bot]已删除历史会话 #{}".format(params[0])]
else:
reply = ["[bot]没有历史会话 #{}".format(params[0])]
else:
reply = ["[bot]参数错误, 格式: !del <序号>\n可以通过!list查看序号"]
return True, reply
@AbstractCommandNode.register(
parent=DelCommand,
name="all",
description="删除当前会话的全部历史记录",
usage="!del all",
aliases=[],
privilege=1
)
class DelAllCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
reply = []
pkg.openai.session.get_session(session_name).delete_all_history()
reply = ["[bot]已删除所有历史会话"]
return True, reply

View File

@@ -0,0 +1,50 @@
from ..aamgr import AbstractCommandNode, Context
@AbstractCommandNode.register(
parent=None,
name="delhst",
description="删除指定会话的所有历史记录",
usage="!delhst <会话名称>\n!delhst all",
aliases=[],
privilege=2
)
class DelHistoryCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
import pkg.utils.context
params = ctx.params
reply = []
if len(params) == 0:
reply = [
"[bot]err:请输入要删除的会话名: group_<群号> 或者 person_<QQ号>, 或使用 !delhst all 删除所有会话的历史记录"]
else:
if params[0] == 'all':
return False, []
else:
if pkg.utils.context.get_database_manager().delete_all_history(params[0]):
reply = ["[bot]已删除会话 {} 的所有历史记录".format(params[0])]
else:
reply = ["[bot]未找到会话 {} 的历史记录".format(params[0])]
return True, reply
@AbstractCommandNode.register(
parent=DelHistoryCommand,
name="all",
description="删除所有会话的全部历史记录",
usage="!delhst all",
aliases=[],
privilege=2
)
class DelAllHistoryCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.utils.context
reply = []
pkg.utils.context.get_database_manager().delete_all_session_history()
reply = ["[bot]已删除所有会话的历史记录"]
return True, reply

View File

@@ -0,0 +1,28 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
@AbstractCommandNode.register(
parent=None,
name="last",
description="切换前一次对话",
usage="!last",
aliases=[],
privilege=1
)
class LastCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
reply = []
result = pkg.openai.session.get_session(session_name).last_session()
if result is None:
reply = ["[bot]没有前一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到前一次的对话:\n创建时间:{}\n".format(datetime_str)]
return True, reply

View File

@@ -0,0 +1,67 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
import json
@AbstractCommandNode.register(
parent=None,
name='list',
description='列出当前会话的所有历史记录',
usage='!list\n!list [页数]',
aliases=[],
privilege=1
)
class ListCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params
reply = []
pkg.openai.session.get_session(session_name).persistence()
page = 0
if len(params) > 0:
try:
page = int(params[0])
except ValueError:
pass
results = pkg.openai.session.get_session(session_name).list_history(page=page)
if len(results) == 0:
reply = ["[bot]第{}页没有历史会话".format(page)]
else:
reply_str = "[bot]历史会话 第{}页:\n".format(page)
current = -1
for i in range(len(results)):
# 时间(使用create_timestamp转换) 序号 部分内容
datetime_obj = datetime.datetime.fromtimestamp(results[i]['create_timestamp'])
msg = ""
try:
msg = json.loads(results[i]['prompt'])
except json.decoder.JSONDecodeError:
msg = pkg.openai.session.reset_session_prompt(session_name, results[i]['prompt'])
# 持久化
pkg.openai.session.get_session(session_name).persistence()
if len(msg) >= 2:
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),
msg[0]['content'])
else:
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),
"无内容")
if results[i]['create_timestamp'] == pkg.openai.session.get_session(
session_name).create_timestamp:
current = i + page * 10
reply_str += "\n以上信息倒序排列"
if current != -1:
reply_str += ",当前会话是 #{}\n".format(current)
else:
reply_str += ",当前处于全新会话或不在此页"
reply = [reply_str]
return True, reply

View File

@@ -0,0 +1,28 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
@AbstractCommandNode.register(
parent=None,
name="next",
description="切换后一次对话",
usage="!next",
aliases=[],
privilege=1
)
class NextCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
reply = []
result = pkg.openai.session.get_session(session_name).next_session()
if result is None:
reply = ["[bot]没有后一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到后一次的对话:\n创建时间:{}\n".format(datetime_str)]
return True, reply

View File

@@ -0,0 +1,32 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
@AbstractCommandNode.register(
parent=None,
name="prompt",
description="获取当前会话的前文",
usage="!prompt",
aliases=[],
privilege=1
)
class PromptCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params
reply = []
msgs = ""
session: list = pkg.openai.session.get_session(session_name).prompt
for msg in session:
if len(params) != 0 and params[0] in ['-all', '-a']:
msgs = msgs + "{}: {}\n\n".format(msg['role'], msg['content'])
elif len(msg['content']) > 30:
msgs = msgs + "[{}]: {}...\n\n".format(msg['role'], msg['content'][:30])
else:
msgs = msgs + "[{}]: {}\n\n".format(msg['role'], msg['content'])
reply = ["[bot]当前对话所有内容:\n{}".format(msgs)]
return True, reply

View File

@@ -0,0 +1,30 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
@AbstractCommandNode.register(
parent=None,
name="resend",
description="重新获取上一次问题的回复",
usage="!resend",
aliases=[],
privilege=1
)
class ResendCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
import pkg.utils.context
import pkg.qqbot.message
import config
session_name = ctx.session_name
reply = []
session = pkg.openai.session.get_session(session_name)
to_send = session.undo()
mgr = pkg.utils.context.get_qqbot_manager()
reply = pkg.qqbot.message.process_normal_message(to_send, mgr, config,
ctx.launcher_type, ctx.launcher_id,
ctx.sender_id)
return True, reply

View File

@@ -0,0 +1,35 @@
from ..aamgr import AbstractCommandNode, Context
import tips as tips_custom
import pkg.openai.session
import pkg.utils.context
@AbstractCommandNode.register(
parent=None,
name='reset',
description='重置当前会话',
usage='!reset',
aliases=[],
privilege=1
)
class ResetCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
params = ctx.params
session_name = ctx.session_name
reply = ""
if len(params) == 0:
pkg.openai.session.get_session(session_name).reset(explicit=True)
reply = [tips_custom.command_reset_message]
else:
try:
import pkg.openai.dprompt as dprompt
pkg.openai.session.get_session(session_name).reset(explicit=True, use_prompt=params[0])
reply = [tips_custom.command_reset_name_message+"{}".format(dprompt.mode_inst().get_full_name(params[0]))]
except Exception as e:
reply = ["[bot]会话重置失败:{}".format(e)]
return True, reply

View File

@@ -0,0 +1,84 @@
from ..aamgr import AbstractCommandNode, Context
import json
def config_operation(cmd, params):
reply = []
import pkg.utils.context
config = pkg.utils.context.get_config()
reply_str = ""
if len(params) == 0:
reply = ["[bot]err:请输入配置项"]
else:
cfg_name = params[0]
if cfg_name == 'all':
reply_str = "[bot]所有配置项:\n\n"
for cfg in dir(config):
if not cfg.startswith('__') and not cfg == 'logging':
# 根据配置项类型进行格式化如果是字典则转换为json并格式化
if isinstance(getattr(config, cfg), str):
reply_str += "{}: \"{}\"\n".format(cfg, getattr(config, cfg))
elif isinstance(getattr(config, cfg), dict):
# 不进行unicode转义并格式化
reply_str += "{}: {}\n".format(cfg,
json.dumps(getattr(config, cfg),
ensure_ascii=False, indent=4))
else:
reply_str += "{}: {}\n".format(cfg, getattr(config, cfg))
reply = [reply_str]
elif cfg_name in dir(config):
if len(params) == 1:
# 按照配置项类型进行格式化
if isinstance(getattr(config, cfg_name), str):
reply_str = "[bot]配置项{}: \"{}\"\n".format(cfg_name, getattr(config, cfg_name))
elif isinstance(getattr(config, cfg_name), dict):
reply_str = "[bot]配置项{}: {}\n".format(cfg_name,
json.dumps(getattr(config, cfg_name),
ensure_ascii=False, indent=4))
else:
reply_str = "[bot]配置项{}: {}\n".format(cfg_name, getattr(config, cfg_name))
reply = [reply_str]
else:
cfg_value = " ".join(params[1:])
# 类型转换如果是json则转换为字典
if cfg_value == 'true':
cfg_value = True
elif cfg_value == 'false':
cfg_value = False
elif cfg_value.isdigit():
cfg_value = int(cfg_value)
elif cfg_value.startswith('{') and cfg_value.endswith('}'):
cfg_value = json.loads(cfg_value)
else:
try:
cfg_value = float(cfg_value)
except ValueError:
pass
# 检查类型是否匹配
if isinstance(getattr(config, cfg_name), type(cfg_value)):
setattr(config, cfg_name, cfg_value)
pkg.utils.context.set_config(config)
reply = ["[bot]配置项{}修改成功".format(cfg_name)]
else:
reply = ["[bot]err:配置项{}类型不匹配".format(cfg_name)]
else:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
return reply
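# Example of the conversion above (config key names illustrative):
#   "!cfg retry_times 3"   -> cfg_value becomes int 3, matches the existing int and is applied
#   "!cfg retry_times yes" -> stays a str, fails the isinstance check and is rejected
#   "!cfg mirai_http_api_config {...}" -> a {...} value is parsed with json.loads into a dict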
@AbstractCommandNode.register(
parent=None,
name="cfg",
description="配置项管理",
usage="!cfg <配置项> [配置值]\n!cfg all",
aliases=[],
privilege=2
)
class CfgCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
return True, config_operation(ctx.command, ctx.params)

View File

@@ -0,0 +1,39 @@
from ..aamgr import AbstractCommandNode, Context, __command_list__
@AbstractCommandNode.register(
parent=None,
name="cmd",
description="显示指令列表",
usage="!cmd\n!cmd <指令名称>",
aliases=[],
privilege=1
)
class CmdCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
command_list = __command_list__
reply = []
if len(ctx.params) == 0:
reply_str = "[bot]当前所有指令:\n\n"
# 遍历顶级指令
for key in command_list:
command = command_list[key]
if command['parent'] is None:
reply_str += "!{} - {}\n".format(key, command['description'])
reply_str += "\n请使用 !cmd <指令名称> 来查看指令的详细信息"
reply = [reply_str]
else:
command_name = ctx.params[0]
if command_name in command_list:
reply = [command_list[command_name]['cls'].help()]
else:
reply = ["[bot]指令 {} 不存在".format(command_name)]
return True, reply

View File

@@ -0,0 +1,24 @@
from ..aamgr import AbstractCommandNode, Context
@AbstractCommandNode.register(
parent=None,
name="help",
description="显示自定义的帮助信息",
usage="!help",
aliases=[],
privilege=1
)
class HelpCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import tips
reply = ["[bot] "+tips.help_message + "\n请输入 !cmd 查看指令列表"]
# 警告config.help_message过时
import config
if hasattr(config, "help_message"):
reply[0] += "\n\n警告config.py中的help_message已过时不再生效请使用tips.py中的help_message替代"
return True, reply

View File

@@ -0,0 +1,23 @@
from ..aamgr import AbstractCommandNode, Context
import threading
@AbstractCommandNode.register(
parent=None,
name="reload",
description="执行热重载",
usage="!reload",
aliases=[],
privilege=2
)
class ReloadCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
reply = []
import pkg.utils.reloader
def reload_task():
pkg.utils.reloader.reload_all()
threading.Thread(target=reload_task, daemon=True).start()
return True, reply

View File

@@ -0,0 +1,38 @@
from ..aamgr import AbstractCommandNode, Context
import threading
import traceback
@AbstractCommandNode.register(
parent=None,
name="update",
description="更新程序",
usage="!update",
aliases=[],
privilege=2
)
class UpdateCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
reply = []
import pkg.utils.updater
import pkg.utils.reloader
import pkg.utils.context
def update_task():
try:
if pkg.utils.updater.update_all():
pkg.utils.reloader.reload_all(notify=False)
pkg.utils.context.get_qqbot_manager().notify_admin("更新完成")
else:
pkg.utils.context.get_qqbot_manager().notify_admin("无新版本")
except Exception as e0:
traceback.print_exc()
pkg.utils.context.get_qqbot_manager().notify_admin("更新失败:{}".format(e0))
return
threading.Thread(target=update_task, daemon=True).start()
reply = ["[bot]正在更新,请耐心等待,请勿重复发起更新..."]
return True, reply

View File

@@ -0,0 +1,35 @@
from ..aamgr import AbstractCommandNode, Context
import logging
@AbstractCommandNode.register(
parent=None,
name="usage",
description="获取使用情况",
usage="!usage",
aliases=[],
privilege=1
)
class UsageCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import config
import pkg.utils.credit as credit
import pkg.utils.context
reply = []
reply_str = "[bot]各api-key使用情况:\n\n"
api_keys = pkg.utils.context.get_openai_manager().key_mgr.api_key
for key_name in api_keys:
text_length = pkg.utils.context.get_openai_manager().audit_mgr \
.get_text_length_of_key(api_keys[key_name])
image_count = pkg.utils.context.get_openai_manager().audit_mgr \
.get_image_count_of_key(api_keys[key_name])
reply_str += "{}:\n - 文本长度:{}\n - 图片数量:{}\n".format(key_name, int(text_length),
int(image_count))
reply = [reply_str]
return True, reply

View File

@@ -0,0 +1,27 @@
from ..aamgr import AbstractCommandNode, Context
@AbstractCommandNode.register(
parent=None,
name="version",
description="查看版本信息",
usage="!version",
aliases=[],
privilege=1
)
class VersionCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
reply = []
import pkg.utils.updater
reply_str = "[bot]当前版本:\n{}\n".format(pkg.utils.updater.get_current_version_info())
try:
if pkg.utils.updater.is_new_version_available():
reply_str += "\n有新版本可用,请使用命令 !update 进行更新"
except:
pass
reply = [reply_str]
return True, reply

View File

@@ -4,6 +4,7 @@ import json
import datetime
import os
import threading
import traceback
import pkg.openai.session
import pkg.openai.manager
@@ -12,151 +13,12 @@ import pkg.utils.updater
import pkg.utils.context
import pkg.qqbot.message
import pkg.utils.credit as credit
# import pkg.qqbot.cmds.model as cmdmodel
import pkg.qqbot.cmds.aamgr as cmdmgr
from mirai import Image
def config_operation(cmd, params):
reply = []
config = pkg.utils.context.get_config()
reply_str = ""
if len(params) == 0:
reply = ["[bot]err:请输入配置项"]
else:
cfg_name = params[0]
if cfg_name == 'all':
reply_str = "[bot]所有配置项:\n\n"
for cfg in dir(config):
if not cfg.startswith('__') and not cfg == 'logging':
# 根据配置项类型进行格式化如果是字典则转换为json并格式化
if isinstance(getattr(config, cfg), str):
reply_str += "{}: \"{}\"\n".format(cfg, getattr(config, cfg))
elif isinstance(getattr(config, cfg), dict):
# 不进行unicode转义并格式化
reply_str += "{}: {}\n".format(cfg,
json.dumps(getattr(config, cfg),
ensure_ascii=False, indent=4))
else:
reply_str += "{}: {}\n".format(cfg, getattr(config, cfg))
reply = [reply_str]
elif cfg_name in dir(config):
if len(params) == 1:
# 按照配置项类型进行格式化
if isinstance(getattr(config, cfg_name), str):
reply_str = "[bot]配置项{}: \"{}\"\n".format(cfg_name, getattr(config, cfg_name))
elif isinstance(getattr(config, cfg_name), dict):
reply_str = "[bot]配置项{}: {}\n".format(cfg_name,
json.dumps(getattr(config, cfg_name),
ensure_ascii=False, indent=4))
else:
reply_str = "[bot]配置项{}: {}\n".format(cfg_name, getattr(config, cfg_name))
reply = [reply_str]
else:
cfg_value = " ".join(params[1:])
# 类型转换如果是json则转换为字典
if cfg_value == 'true':
cfg_value = True
elif cfg_value == 'false':
cfg_value = False
elif cfg_value.isdigit():
cfg_value = int(cfg_value)
elif cfg_value.startswith('{') and cfg_value.endswith('}'):
cfg_value = json.loads(cfg_value)
else:
try:
cfg_value = float(cfg_value)
except ValueError:
pass
# 检查类型是否匹配
if isinstance(getattr(config, cfg_name), type(cfg_value)):
setattr(config, cfg_name, cfg_value)
pkg.utils.context.set_config(config)
reply = ["[bot]配置项{}修改成功".format(cfg_name)]
else:
reply = ["[bot]err:配置项{}类型不匹配".format(cfg_name)]
else:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
return reply
def plugin_operation(cmd, params, is_admin):
reply = []
import pkg.plugin.host as plugin_host
import pkg.utils.updater as updater
plugin_list = plugin_host.__plugins__
if len(params) == 0:
reply_str = "[bot]所有插件({}):\n".format(len(plugin_host.__plugins__))
idx = 0
for key in plugin_host.iter_plugins_name():
plugin = plugin_list[key]
reply_str += "\n#{} {} {}\n{}\nv{}\n作者: {}\n"\
.format((idx+1), plugin['name'],
"[已禁用]" if not plugin['enabled'] else "",
plugin['description'],
plugin['version'], plugin['author'])
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
remote_url = updater.get_remote_url("/".join(plugin['path'].split('/')[:-1]))
if remote_url != "https://github.com/RockChinQ/QChatGPT" and remote_url != "https://gitee.com/RockChin/QChatGPT":
reply_str += "源码: "+remote_url+"\n"
idx += 1
reply = [reply_str]
elif params[0] == 'update':
# 更新所有插件
if is_admin:
def closure():
import pkg.utils.context
updated = []
for key in plugin_list:
plugin = plugin_list[key]
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
success = updater.pull_latest("/".join(plugin['path'].split('/')[:-1]))
if success:
updated.append(plugin['name'])
# 检查是否有requirements.txt
pkg.utils.context.get_qqbot_manager().notify_admin("正在安装依赖...")
for key in plugin_list:
plugin = plugin_list[key]
if os.path.exists("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt"):
logging.info("{}检测到requirements.txt安装依赖".format(plugin['name']))
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt")
import main
main.reset_logging()
pkg.utils.context.get_qqbot_manager().notify_admin("已更新插件: {}".format(", ".join(updated)))
threading.Thread(target=closure).start()
reply = ["[bot]正在更新所有插件,请勿重复发起..."]
else:
reply = ["[bot]err:权限不足"]
elif params[0].startswith("http"):
if is_admin:
def closure():
try:
plugin_host.install_plugin(params[0])
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装成功,请发送 !reload 指令重载插件")
except Exception as e:
logging.error("插件安装失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装失败:{}".format(e))
threading.Thread(target=closure, args=()).start()
reply = ["[bot]正在安装插件..."]
else:
reply = ["[bot]err:权限不足,请使用管理员账号私聊发起"]
return reply
def process_command(session_name: str, text_message: str, mgr, config,
launcher_type: str, launcher_id: int, sender_id: int, is_admin: bool) -> list:
@@ -169,176 +31,32 @@ def process_command(session_name: str, text_message: str, mgr, config,
cmd = text_message[1:].strip().split(' ')[0]
params = text_message[1:].strip().split(' ')[1:]
if cmd == 'help':
reply = ["[bot]" + config.help_message]
elif cmd == 'reset':
if len(params) == 0:
pkg.openai.session.get_session(session_name).reset(explicit=True)
reply = ["[bot]会话已重置"]
else:
pkg.openai.session.get_session(session_name).reset(explicit=True, use_prompt=params[0])
reply = ["[bot]会话已重置,使用场景预设:{}".format(params[0])]
elif cmd == 'last':
result = pkg.openai.session.get_session(session_name).last_session()
if result is None:
reply = ["[bot]没有前一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到前一次的对话:\n创建时间:{}\n".format(
datetime_str) + result.prompt[
:min(100,
len(result.prompt))] + \
("..." if len(result.prompt) > 100 else "#END#")]
elif cmd == 'next':
result = pkg.openai.session.get_session(session_name).next_session()
if result is None:
reply = ["[bot]没有后一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到后一次的对话:\n创建时间:{}\n".format(
datetime_str) + result.prompt[
:min(100,
len(result.prompt))] + \
("..." if len(result.prompt) > 100 else "#END#")]
elif cmd == 'prompt':
reply = ["[bot]当前对话所有内容:\n" + pkg.openai.session.get_session(session_name).prompt]
elif cmd == 'list':
pkg.openai.session.get_session(session_name).persistence()
page = 0
if len(params) > 0:
try:
page = int(params[0])
except ValueError:
pass
# 把!~开头的转换成!cfg
if cmd.startswith('~'):
params = [cmd[1:]] + params
cmd = 'cfg'
results = pkg.openai.session.get_session(session_name).list_history(page=page)
if len(results) == 0:
reply = ["[bot]第{}页没有历史会话".format(page)]
else:
reply_str = "[bot]历史会话 第{}页:\n".format(page)
current = -1
for i in range(len(results)):
# 时间(使用create_timestamp转换) 序号 部分内容
datetime_obj = datetime.datetime.fromtimestamp(results[i]['create_timestamp'])
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),
results[i]['prompt'][
:min(20, len(results[i]['prompt']))])
if results[i]['create_timestamp'] == pkg.openai.session.get_session(
session_name).create_timestamp:
current = i + page * 10
# 包装参数
context = cmdmgr.Context(
command=cmd,
crt_command=cmd,
params=params,
crt_params=params[:],
session_name=session_name,
text_message=text_message,
launcher_type=launcher_type,
launcher_id=launcher_id,
sender_id=sender_id,
is_admin=is_admin,
privilege=2 if is_admin else 1, # 普通用户1管理员2
)
try:
reply = cmdmgr.execute(context)
except cmdmgr.CommandPrivilegeError as e:
reply = ["{}".format(e)]
reply_str += "\n以上信息倒序排列"
if current != -1:
reply_str += ",当前会话是 #{}\n".format(current)
else:
reply_str += ",当前处于全新会话或不在此页"
reply = [reply_str]
elif cmd == 'resend':
session = pkg.openai.session.get_session(session_name)
to_send = session.undo()
reply = pkg.qqbot.message.process_normal_message(to_send, mgr, config,
launcher_type, launcher_id, sender_id)
elif cmd == 'usage':
reply_str = "[bot]各api-key使用情况:\n\n"
api_keys = pkg.utils.context.get_openai_manager().key_mgr.api_key
for key_name in api_keys:
text_length = pkg.utils.context.get_openai_manager().audit_mgr \
.get_text_length_of_key(api_keys[key_name])
image_count = pkg.utils.context.get_openai_manager().audit_mgr \
.get_image_count_of_key(api_keys[key_name])
reply_str += "{}:\n - 文本长度:{}\n - 图片数量:{}\n".format(key_name, int(text_length),
int(image_count))
# 获取此key的额度
try:
credit_data = credit.fetch_credit_data(api_keys[key_name])
reply_str += " - 使用额度:{:.2f}/{:.2f}\n".format(credit_data['total_used'],credit_data['total_granted'])
except Exception as e:
logging.warning("获取额度失败:{}".format(e))
reply = [reply_str]
elif cmd == 'draw':
if len(params) == 0:
reply = ["[bot]err:请输入图片描述文字"]
else:
session = pkg.openai.session.get_session(session_name)
res = session.draw_image(" ".join(params))
logging.debug("draw_image result:{}".format(res))
reply = [Image(url=res['data'][0]['url'])]
if not (hasattr(config, 'include_image_description')
and not config.include_image_description):
reply.append(" ".join(params))
elif cmd == 'version':
reply_str = "[bot]当前版本:\n{}\n".format(pkg.utils.updater.get_current_version_info())
try:
if pkg.utils.updater.is_new_version_available():
reply_str += "\n有新版本可用,请使用命令 !update 进行更新"
except:
pass
reply = [reply_str]
elif cmd == 'plugin':
reply = plugin_operation(cmd, params, is_admin)
elif cmd == 'default':
if len(params) == 0:
# 输出目前所有情景预设
import pkg.openai.dprompt as dprompt
reply_str = "[bot]当前所有情景预设:\n\n"
for key,value in dprompt.get_prompt_dict().items():
reply_str += " - {}: {}\n".format(key,value)
reply_str += "\n当前默认情景预设:{}\n".format(dprompt.get_current())
reply_str += "请使用!default <情景预设>来设置默认情景预设"
reply = [reply_str]
elif len(params) >0 and is_admin:
# 设置默认情景
import pkg.openai.dprompt as dprompt
try:
dprompt.set_current(params[0])
reply = ["[bot]已设置默认情景预设为:{}".format(dprompt.get_current())]
except KeyError:
reply = ["[bot]err: 未找到情景预设:{}".format(params[0])]
else:
reply = ["[bot]err: 仅管理员可设置默认情景预设"]
elif cmd == 'reload' and is_admin:
def reload_task():
pkg.utils.reloader.reload_all()
threading.Thread(target=reload_task, daemon=True).start()
elif cmd == 'update' and is_admin:
def update_task():
try:
if pkg.utils.updater.update_all():
pkg.utils.reloader.reload_all(notify=False)
pkg.utils.context.get_qqbot_manager().notify_admin("更新完成")
else:
pkg.utils.context.get_qqbot_manager().notify_admin("无新版本")
except Exception as e0:
pkg.utils.context.get_qqbot_manager().notify_admin("更新失败:{}".format(e0))
return
threading.Thread(target=update_task, daemon=True).start()
reply = ["[bot]正在更新,请耐心等待,请勿重复发起更新..."]
elif cmd == 'cfg' and is_admin:
reply = config_operation(cmd, params)
else:
if cmd.startswith("~") and is_admin:
config_item = cmd[1:]
params = [config_item] + params
reply = config_operation("cfg", params)
else:
reply = ["[bot]err:未知的指令或权限不足: " + cmd]
return reply
except Exception as e:
mgr.notify_admin("{}指令执行失败:{}".format(session_name, e))
logging.exception(e)
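A minimal sketch of the `~` shorthand handled above: a message like `!~quote_origin true` is rewritten into the equivalent `!cfg quote_origin true` before being dispatched. The helper below is illustrative only and mirrors the two branches shown in this diff.

```python
# Illustrative only - mirrors the "~" shorthand rewrite in process_command above
def rewrite_shorthand(cmd: str, params: list) -> tuple:
    if cmd.startswith('~'):
        params = [cmd[1:]] + params  # "!~quote_origin true" -> cfg item "quote_origin"
        cmd = 'cfg'
    return cmd, params

assert rewrite_shorthand('~quote_origin', ['true']) == ('cfg', ['quote_origin', 'true'])
assert rewrite_shorthand('reset', []) == ('reset', [])
```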

View File

@@ -1,19 +1,84 @@
# 敏感词过滤模块
import re
import requests
import json
import logging
class ReplyFilter:
sensitive_words = []
mask = "*"
mask_word = ""
def __init__(self, sensitive_words: list):
# 默认值( 兼容性考虑 )
baidu_check = False
baidu_api_key = ""
baidu_secret_key = ""
inappropriate_message_tips = "[百度云]请珍惜机器人,当前返回内容不合规"
def __init__(self, sensitive_words: list, mask: str = "*", mask_word: str = ""):
self.sensitive_words = sensitive_words
self.mask = mask
self.mask_word = mask_word
import config
self.baidu_check = config.baidu_check
self.baidu_api_key = config.baidu_api_key
self.baidu_secret_key = config.baidu_secret_key
self.inappropriate_message_tips = config.inappropriate_message_tips
def is_illegal(self, message: str) -> bool:
processed = self.process(message)
if processed != message:
return True
return False
def process(self, message: str) -> str:
# 本地关键词屏蔽
for word in self.sensitive_words:
match = re.findall(word, message)
if len(match) > 0:
for i in range(len(match)):
message = message.replace(match[i], "*" * len(match[i]))
if self.mask_word == "":
message = message.replace(match[i], self.mask * len(match[i]))
else:
message = message.replace(match[i], self.mask_word)
# 百度云审核
if self.baidu_check:
# 百度云审核URL
baidu_url = "https://aip.baidubce.com/rest/2.0/solution/v1/text_censor/v2/user_defined?access_token=" + \
str(requests.post("https://aip.baidubce.com/oauth/2.0/token",
params={"grant_type": "client_credentials",
"client_id": self.baidu_api_key,
"client_secret": self.baidu_secret_key}).json().get("access_token"))
# 百度云审核
payload = "text=" + message
logging.info("向百度云发送:" + payload)
headers = {'Content-Type': 'application/x-www-form-urlencoded', 'Accept': 'application/json'}
if isinstance(payload, str):
payload = payload.encode('utf-8')
response = requests.request("POST", baidu_url, headers=headers, data=payload)
response_dict = json.loads(response.text)
if "error_code" in response_dict:
error_msg = response_dict.get("error_msg")
logging.warning(f"百度云判定出错,错误信息:{error_msg}")
conclusion = f"百度云判定出错,错误信息:{error_msg}\n以下是原消息:{message}"
else:
conclusion = response_dict["conclusion"]
if conclusion in ("合规"):
logging.info(f"百度云判定结果:{conclusion}")
return message
else:
logging.warning(f"百度云判定结果:{conclusion}")
conclusion = self.inappropriate_message_tips
# 返回百度云审核结果
return conclusion
return message
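A hedged usage sketch for the extended filter: the `words`, `mask` and `mask_word` values below are hypothetical, but the field names match the `sensitive.json` keys read in `manager.py`, and the replacement rule mirrors the local branch of `process()` above (the Baidu cloud audit is omitted).

```python
import json
import re

# Hypothetical sensitive.json content; "words", "mask" and "mask_word" are the keys the manager reads
sensitive_json = json.loads('{"words": ["badword"], "mask": "*", "mask_word": ""}')

def local_mask(message: str, words: list, mask: str = "*", mask_word: str = "") -> str:
    # Same local rule as ReplyFilter.process(): per-character mask, or a whole replacement word
    for word in words:
        for match in re.findall(word, message):
            message = message.replace(match, mask * len(match) if mask_word == "" else mask_word)
    return message

print(local_mask("contains badword here", **sensitive_json))  # -> contains ******* here
```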

View File

@@ -5,9 +5,6 @@ def ignore(msg: str) -> bool:
"""检查消息是否应该被忽略"""
import config
if not hasattr(config, 'ignore_rules'):
return False
if 'prefix' in config.ignore_rules:
for rule in config.ignore_rules['prefix']:
if msg.startswith(rule):

View File

@@ -3,9 +3,9 @@ import json
import os
import threading
import mirai.models.bus
from mirai import At, GroupMessage, MessageEvent, Mirai, StrangerMessage, WebSocketAdapter, HTTPAdapter, \
FriendMessage, Image
FriendMessage, Image, MessageChain, Plain
from func_timeout import func_set_timeout
import pkg.openai.session
@@ -19,21 +19,24 @@ import pkg.utils.context
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import tips as tips_custom
# 并行运行
def go(func, args=()):
thread = threading.Thread(target=func, args=args, daemon=True)
thread.start()
import pkg.qqbot.adapter as msadapter
# 检查消息是否符合泛响应匹配机制
def check_response_rule(text: str):
def check_response_rule(group_id:int, text: str):
config = pkg.utils.context.get_config()
if not hasattr(config, 'response_rules'):
return False, ''
rules = config.response_rules
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
rules = config.response_rules[str(group_id)]
else:
rules = config.response_rules['default']
# 检查前缀匹配
if 'prefix' in rules:
for rule in rules['prefix']:
@@ -51,11 +54,49 @@ def check_response_rule(text: str):
return False, ""
def response_at(group_id: int):
config = pkg.utils.context.get_config()
use_response_rule = config.response_rules
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
else:
use_response_rule = config.response_rules['default']
if 'at' not in use_response_rule:
return True
return use_response_rule['at']
def random_responding(group_id):
config = pkg.utils.context.get_config()
use_response_rule = config.response_rules
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
else:
use_response_rule = config.response_rules['default']
if 'random_rate' in use_response_rule:
import random
return random.random() < use_response_rule['random_rate']
return False
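A sketch of the per-group `response_rules` structure these three helpers now understand; the group number and values are made up, but the `default`, `at`, `prefix` and `random_rate` keys are the ones consulted above.

```python
# Hypothetical config.response_rules after this change: per-group rules fall back to "default"
response_rules = {
    "default": {
        "at": True,               # respond when the bot is @-mentioned
        "prefix": ["/ask", "!"],  # respond to messages starting with these prefixes
        "random_rate": 0.0,       # probability of replying to any other message
    },
    "12345678": {                 # rules for one specific group (group id as a string key)
        "at": False,
        "prefix": ["bot"],
        "random_rate": 0.05,
    },
}

def rules_for(group_id: int) -> dict:
    # same lookup order as check_response_rule / response_at / random_responding
    return response_rules.get(str(group_id), response_rules["default"])

assert rules_for(12345678)["random_rate"] == 0.05
assert rules_for(999)["at"] is True
```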
# 控制QQ消息输入输出的类
class QQBotManager:
retry = 3
bot: Mirai = None
adapter: msadapter.MessageSourceAdapter = None
bot_account_id: int = 0
reply_filter = None
@@ -64,44 +105,37 @@ class QQBotManager:
ban_person = []
ban_group = []
def __init__(self, mirai_http_api_config: dict, timeout: int = 60, retry: int = 3, first_time_init=True):
def __init__(self, first_time_init=True):
import config
self.timeout = timeout
self.retry = retry
# 加载禁用列表
if os.path.exists("banlist.py"):
import banlist
self.enable_banlist = banlist.enable
self.ban_person = banlist.person
self.ban_group = banlist.group
logging.info("加载禁用列表: person: {}, group: {}".format(self.ban_person, self.ban_group))
config = pkg.utils.context.get_config()
if os.path.exists("sensitive.json") \
and config.sensitive_word_filter is not None \
and config.sensitive_word_filter:
with open("sensitive.json", "r", encoding="utf-8") as f:
self.reply_filter = pkg.qqbot.filter.ReplyFilter(json.load(f)['words'])
else:
self.reply_filter = pkg.qqbot.filter.ReplyFilter([])
self.timeout = config.process_message_timeout
self.retry = config.retry_times
# 由于YiriMirai的bot对象是单例的且shutdown方法暂时无法使用
# 故只在第一次初始化时创建bot对象重载之后使用原bot对象
# 因此bot的配置不支持热重载
if first_time_init:
self.first_time_init(mirai_http_api_config)
logging.info("Use adapter:" + config.msg_source_adapter)
if config.msg_source_adapter == 'yirimirai':
from pkg.qqbot.sources.yirimirai import YiriMiraiAdapter
mirai_http_api_config = config.mirai_http_api_config
self.bot_account_id = config.mirai_http_api_config['qq']
self.adapter = YiriMiraiAdapter(mirai_http_api_config)
elif config.msg_source_adapter == 'nakuru':
from pkg.qqbot.sources.nakuru import NakuruProjectAdapter
self.adapter = NakuruProjectAdapter(config.nakuru_config)
self.bot_account_id = self.adapter.bot_account_id
else:
self.bot = pkg.utils.context.get_qqbot_manager().bot
self.adapter = pkg.utils.context.get_qqbot_manager().adapter
pkg.utils.context.set_qqbot_manager(self)
# 注册诸事件
# Caution: 注册新的事件处理器之后请务必在unsubscribe_all中编写相应的取消订阅代码
@self.bot.on(FriendMessage)
async def on_friend_message(event: FriendMessage):
def friend_message_handler(event: FriendMessage):
def on_friend_message(event: FriendMessage):
def friend_message_handler():
# 触发事件
args = {
"launcher_type": "person",
@@ -116,12 +150,17 @@ class QQBotManager:
self.on_person_message(event)
go(friend_message_handler, (event,))
pkg.utils.context.get_thread_ctl().submit_user_task(
friend_message_handler,
)
self.adapter.register_listener(
FriendMessage,
on_friend_message
)
@self.bot.on(StrangerMessage)
async def on_stranger_message(event: StrangerMessage):
def on_stranger_message(event: StrangerMessage):
def stranger_message_handler(event: StrangerMessage):
def stranger_message_handler():
# 触发事件
args = {
"launcher_type": "person",
@@ -136,10 +175,17 @@ class QQBotManager:
self.on_person_message(event)
go(stranger_message_handler, (event,))
pkg.utils.context.get_thread_ctl().submit_user_task(
stranger_message_handler,
)
# nakuru不区分好友和陌生人故仅为yirimirai注册陌生人事件
if config.msg_source_adapter == 'yirimirai':
self.adapter.register_listener(
StrangerMessage,
on_stranger_message
)
@self.bot.on(GroupMessage)
async def on_group_message(event: GroupMessage):
def on_group_message(event: GroupMessage):
def group_message_handler(event: GroupMessage):
# 触发事件
@@ -156,62 +202,73 @@ class QQBotManager:
self.on_group_message(event)
go(group_message_handler, (event,))
pkg.utils.context.get_thread_ctl().submit_user_task(
group_message_handler,
event
)
self.adapter.register_listener(
GroupMessage,
on_group_message
)
def unsubscribe_all():
"""取消所有订阅
用于在热重载流程中卸载所有事件处理器
"""
assert isinstance(self.bot, Mirai)
bus = self.bot.bus
assert isinstance(bus, mirai.models.bus.ModelEventBus)
bus.unsubscribe(FriendMessage, on_friend_message)
bus.unsubscribe(StrangerMessage, on_stranger_message)
bus.unsubscribe(GroupMessage, on_group_message)
import config
self.adapter.unregister_listener(
FriendMessage,
on_friend_message
)
if config.msg_source_adapter == 'yirimirai':
self.adapter.unregister_listener(
StrangerMessage,
on_stranger_message
)
self.adapter.unregister_listener(
GroupMessage,
on_group_message
)
self.unsubscribe_all = unsubscribe_all
def first_time_init(self, mirai_http_api_config: dict):
"""热重载后不再运行此函数"""
# 加载禁用列表
if os.path.exists("banlist.py"):
import banlist
self.enable_banlist = banlist.enable
self.ban_person = banlist.person
self.ban_group = banlist.group
logging.info("加载禁用列表: person: {}, group: {}".format(self.ban_person, self.ban_group))
if 'adapter' not in mirai_http_api_config or mirai_http_api_config['adapter'] == "WebSocketAdapter":
bot = Mirai(
qq=mirai_http_api_config['qq'],
adapter=WebSocketAdapter(
verify_key=mirai_http_api_config['verifyKey'],
host=mirai_http_api_config['host'],
port=mirai_http_api_config['port']
config = pkg.utils.context.get_config()
if os.path.exists("sensitive.json") \
and config.sensitive_word_filter is not None \
and config.sensitive_word_filter:
with open("sensitive.json", "r", encoding="utf-8") as f:
sensitive_json = json.load(f)
self.reply_filter = pkg.qqbot.filter.ReplyFilter(
sensitive_words=sensitive_json['words'],
mask=sensitive_json['mask'] if 'mask' in sensitive_json else '*',
mask_word=sensitive_json['mask_word'] if 'mask_word' in sensitive_json else ''
)
)
elif mirai_http_api_config['adapter'] == "HTTPAdapter":
bot = Mirai(
qq=mirai_http_api_config['qq'],
adapter=HTTPAdapter(
verify_key=mirai_http_api_config['verifyKey'],
host=mirai_http_api_config['host'],
port=mirai_http_api_config['port']
)
)
else:
raise Exception("未知的适配器类型")
self.bot = bot
self.reply_filter = pkg.qqbot.filter.ReplyFilter([])
def send(self, event, msg, check_quote=True):
config = pkg.utils.context.get_config()
asyncio.run(
self.bot.send(event, msg, quote=True if hasattr(config,
"quote_origin") and config.quote_origin and check_quote else False))
self.adapter.reply_message(
event,
msg,
quote_origin=True if config.quote_origin and check_quote else False
)
# 私聊消息处理
def on_person_message(self, event: MessageEvent):
import config
reply = ''
if event.sender.id == self.bot.qq:
if event.sender.id == self.bot_account_id:
pass
else:
if Image in event.message_chain:
@@ -242,7 +299,7 @@ class QQBotManager:
if failed == self.retry:
pkg.openai.session.get_session('person_{}'.format(event.sender.id)).release_response_lock()
self.notify_admin("{} 请求超时".format("person_{}".format(event.sender.id)))
reply = ["[bot]err:请求超时"]
reply = [tips_custom.reply_message]
if reply:
return self.send(event, reply, check_quote=False)
@@ -251,11 +308,10 @@ class QQBotManager:
def on_group_message(self, event: GroupMessage):
import config
reply = ''
def process(text=None) -> str:
replys = ""
if At(self.bot.qq) in event.message_chain:
event.message_chain.remove(At(self.bot.qq))
if At(self.bot_account_id) in event.message_chain:
event.message_chain.remove(At(self.bot_account_id))
# 超时则重试,重试超过次数则放弃
failed = 0
@@ -282,20 +338,25 @@ class QQBotManager:
if failed == self.retry:
pkg.openai.session.get_session('group_{}'.format(event.group.id)).release_response_lock()
self.notify_admin("{} 请求超时".format("group_{}".format(event.group.id)))
replys = ["[bot]err:请求超时"]
replys = [tips_custom.replys_message]
return replys
if Image in event.message_chain:
pass
elif At(self.bot.qq) not in event.message_chain:
check, result = check_response_rule(str(event.message_chain).strip())
if check:
reply = process(result.strip())
else:
# 直接调用
reply = process()
if At(self.bot_account_id) in event.message_chain and response_at(event.group.id):
# 直接调用
reply = process()
else:
check, result = check_response_rule(event.group.id, str(event.message_chain).strip())
if check:
reply = process(result.strip())
# 检查是否随机响应
elif random_responding(event.group.id):
logging.info("随机响应group_{}消息".format(event.group.id))
reply = process()
if reply:
return self.send(event, reply)
@@ -303,25 +364,36 @@ class QQBotManager:
# 通知系统管理员
def notify_admin(self, message: str):
config = pkg.utils.context.get_config()
if hasattr(config, "admin_qq") and config.admin_qq != 0 and config.admin_qq != []:
if config.admin_qq != 0 and config.admin_qq != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
send_task = self.bot.send_friend_message(config.admin_qq, "[bot]{}".format(message))
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
config.admin_qq,
MessageChain([Plain("[bot]{}".format(message))])
)
else:
for adm in config.admin_qq:
send_task = self.bot.send_friend_message(adm, "[bot]{}".format(message))
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
adm,
MessageChain([Plain("[bot]{}".format(message))])
)
def notify_admin_message_chain(self, message):
config = pkg.utils.context.get_config()
if hasattr(config, "admin_qq") and config.admin_qq != 0 and config.admin_qq != []:
if config.admin_qq != 0 and config.admin_qq != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
send_task = self.bot.send_friend_message(config.admin_qq, message)
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
config.admin_qq,
message
)
else:
for adm in config.admin_qq:
send_task = self.bot.send_friend_message(adm, message)
threading.Thread(target=asyncio.run, args=(send_task,)).start()
self.adapter.send_message(
"person",
adm,
message
)

View File

@@ -1,23 +1,21 @@
# 普通消息处理模块
import logging
import time
import openai
import pkg.utils.context
import pkg.openai.session
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import pkg.qqbot.blob as blob
import tips as tips_custom
def handle_exception(notify_admin: str = "", set_reply: str = "") -> list:
"""处理异常当notify_admin不为空时会通知管理员返回通知用户的消息"""
import config
pkg.utils.context.get_qqbot_manager().notify_admin(notify_admin)
if hasattr(config, 'hide_exce_info_to_user') and config.hide_exce_info_to_user:
if hasattr(config, 'alter_tip_message'):
return [config.alter_tip_message] if config.alter_tip_message else []
else:
return ["[bot]出错了,请重试或联系管理员"]
if config.hide_exce_info_to_user:
return [tips_custom.alter_tip_message] if tips_custom.alter_tip_message else []
else:
return [set_reply]
@@ -40,7 +38,7 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = handle_exception(notify_admin=f"{session_name},多次尝试失败。", set_reply=f"[bot]多次尝试失败,请重试或联系管理员")
break
try:
prefix = "[GPT]" if hasattr(config, "show_prefix") and config.show_prefix else ""
prefix = "[GPT]" if config.show_prefix else ""
text = session.append(text_message)
@@ -116,8 +114,7 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = handle_exception("{}会话调用API失败:{}".format(session_name, e),
"[bot]err:RateLimitError,请重试或联系作者,或等待修复")
except openai.error.InvalidRequestError as e:
reply = handle_exception("{}API调用参数错误:{}\n\n这可能是由于config.py中的prompt_submit_length参数或"
"completion_api_params中的max_tokens参数数值过大导致的请尝试将其降低".format(
reply = handle_exception("{}API调用参数错误:{}\n".format(
session_name, e), "[bot]err:API调用参数错误请联系管理员或等待修复")
except openai.error.ServiceUnavailableError as e:
reply = handle_exception("{}API调用服务不可用:{}".format(session_name, e), "[bot]err:API调用服务不可用请重试或联系管理员或等待修复")

View File

@@ -26,6 +26,8 @@ import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import pkg.qqbot.ignore as ignore
import pkg.qqbot.banlist as banlist
import pkg.qqbot.blob as blob
import tips as tips_custom
processing = []
@@ -49,7 +51,7 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
session_name = "{}_{}".format(launcher_type, launcher_id)
# 检查发送方是否被禁用
if banlist.is_banned(launcher_type, launcher_id):
if banlist.is_banned(launcher_type, launcher_id, sender_id):
logging.info("根据禁用列表忽略{}_{}的消息".format(launcher_type, launcher_id))
return []
@@ -57,24 +59,29 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
logging.info("根据忽略规则忽略消息: {}".format(text_message))
return []
import config
if not config.wait_last_done and session_name in processing:
return MessageChain([Plain(tips_custom.message_drop_tip)])
# 检查是否被禁言
if launcher_type == 'group':
result = mgr.bot.member_info(target=launcher_id, member_id=mgr.bot.qq).get()
result = asyncio.run(result)
if result.mute_time_remaining > 0:
logging.info("机器人被禁言,跳过消息处理(group_{},剩余{}s)".format(launcher_id,
result.mute_time_remaining))
is_muted = mgr.adapter.is_muted(launcher_id)
if is_muted:
logging.info("机器人被禁言,跳过消息处理(group_{})".format(launcher_id))
return reply
import config
if config.income_msg_check:
if mgr.reply_filter.is_illegal(text_message):
return MessageChain(Plain("[bot] 你的提问中有不合适的内容, 请更换措辞~"))
pkg.openai.session.get_session(session_name).acquire_response_lock()
text_message = text_message.strip()
# 处理消息
try:
if session_name in processing:
pkg.openai.session.get_session(session_name).release_response_lock()
return MessageChain([Plain("[bot]err:正在处理中,请稍后再试")])
config = pkg.utils.context.get_config()
@@ -109,10 +116,11 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
else: # 消息
# 限速丢弃检查
# print(ratelimit.__crt_minute_usage__[session_name])
if hasattr(config, "rate_limitation") and config.rate_limit_strategy == "drop":
if config.rate_limit_strategy == "drop":
if ratelimit.is_reach_limit(session_name):
logging.info("根据限速策略丢弃[{}]消息: {}".format(session_name, text_message))
return MessageChain(["[bot]"+config.rate_limit_drop_tip]) if hasattr(config, "rate_limit_drop_tip") and config.rate_limit_drop_tip != "" else []
return MessageChain(["[bot]"+tips_custom.rate_limit_drop_tip]) if tips_custom.rate_limit_drop_tip != "" else []
before = time.time()
# 触发插件事件
@@ -138,11 +146,10 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
mgr, config, launcher_type, launcher_id, sender_id)
# 限速等待时间
if hasattr(config, "rate_limitation") and config.rate_limit_strategy == "wait":
if config.rate_limit_strategy == "wait":
time.sleep(ratelimit.get_rest_wait_time(session_name, time.time() - before))
if hasattr(config, "rate_limitation"):
ratelimit.add_usage(session_name)
ratelimit.add_usage(session_name)
if reply is not None and len(reply) > 0 and (type(reply[0]) == str or type(reply[0]) == mirai.Plain):
if type(reply[0]) == mirai.Plain:
@@ -152,8 +159,9 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
reply[0][:min(100, len(reply[0]))] + (
"..." if len(reply[0]) > 100 else "")))
reply = [mgr.reply_filter.process(reply[0])]
reply = blob.check_text(reply[0])
else:
logging.info("回复[{}]图片消息:{}".format(session_name, reply))
logging.info("回复[{}]消息".format(session_name))
finally:
processing.remove(session_name)

View File

@@ -10,6 +10,20 @@ __crt_minute_usage__ = {}
__timer_thr__: threading.Thread = None
def get_limitation(session_name: str) -> int:
"""获取会话的限制次数"""
import config
if type(config.rate_limitation) == dict:
# 如果被指定了
if session_name in config.rate_limitation:
return config.rate_limitation[session_name]
else:
return config.rate_limitation["default"]
elif type(config.rate_limitation) == int:
return config.rate_limitation
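A sketch of the two `config.rate_limitation` forms `get_limitation()` now accepts; the session names and numbers are hypothetical.

```python
# Either a single int (messages per minute for every session) ...
rate_limitation = 60

# ... or a dict with per-session overrides and a "default" entry
rate_limitation = {
    "default": 60,
    "group_12345678": 10,  # hypothetical per-group override
}

def resolve_limit(session_name: str, cfg) -> int:
    # mirrors get_limitation() above
    if isinstance(cfg, dict):
        return cfg.get(session_name, cfg["default"])
    return cfg

assert resolve_limit("group_12345678", rate_limitation) == 10
assert resolve_limit("person_1", rate_limitation) == 60
assert resolve_limit("person_1", 60) == 60
```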
def add_usage(session_name: str):
"""增加会话的对话次数"""
global __crt_minute_usage__
@@ -56,12 +70,7 @@ def get_rest_wait_time(session_name: str, spent: float) -> float:
"""获取会话此回合的剩余等待时间"""
global __crt_minute_usage__
import config
if not hasattr(config, 'rate_limitation'):
return 0
min_seconds_per_round = 60.0 / config.rate_limitation
min_seconds_per_round = 60.0 / get_limitation(session_name)
if session_name in __crt_minute_usage__:
return max(0, min_seconds_per_round - spent)
@@ -73,13 +82,8 @@ def is_reach_limit(session_name: str) -> bool:
"""判断会话是否超过限制"""
global __crt_minute_usage__
import config
if not hasattr(config, 'rate_limitation'):
return False
if session_name in __crt_minute_usage__:
return __crt_minute_usage__[session_name] >= config.rate_limitation
return __crt_minute_usage__[session_name] >= get_limitation(session_name)
else:
return False

View File

319
pkg/qqbot/sources/nakuru.py Normal file
View File

@@ -0,0 +1,319 @@
import mirai
from ..adapter import MessageSourceAdapter, MessageConverter, EventConverter
import nakuru
import nakuru.entities.components as nkc
import asyncio
import typing
import traceback
import logging
import json
from pkg.qqbot.blob import Forward, ForwardMessageNode, ForwardMessageDiaplay
class NakuruProjectMessageConverter(MessageConverter):
"""消息转换器"""
@staticmethod
def yiri2target(message_chain: mirai.MessageChain) -> list:
msg_list = []
if type(message_chain) is mirai.MessageChain:
msg_list = message_chain.__root__
elif type(message_chain) is list:
msg_list = message_chain
else:
raise Exception("Unknown message type: " + str(message_chain) + type(message_chain))
nakuru_msg_list = []
# 遍历并转换
for component in msg_list:
if type(component) is mirai.Plain:
nakuru_msg_list.append(nkc.Plain(component.text, False))
elif type(component) is mirai.Image:
if component.url is not None:
nakuru_msg_list.append(nkc.Image.fromURL(component.url))
elif component.base64 is not None:
nakuru_msg_list.append(nkc.Image.fromBase64(component.base64))
elif component.path is not None:
nakuru_msg_list.append(nkc.Image.fromFileSystem(component.path))
elif type(component) is mirai.Face:
nakuru_msg_list.append(nkc.Face(id=component.face_id))
elif type(component) is mirai.At:
nakuru_msg_list.append(nkc.At(qq=component.target))
elif type(component) is mirai.AtAll:
nakuru_msg_list.append(nkc.AtAll())
elif type(component) is mirai.Voice:
if component.url is not None:
nakuru_msg_list.append(nkc.Record.fromURL(component.url))
elif component.path is not None:
nakuru_msg_list.append(nkc.Record.fromFileSystem(component.path))
elif type(component) is Forward:
# 转发消息
yiri_forward_node_list = component.node_list
nakuru_forward_node_list = []
# 遍历并转换
for yiri_forward_node in yiri_forward_node_list:
try:
content_list = NakuruProjectMessageConverter.yiri2target(yiri_forward_node.message_chain)
nakuru_forward_node = nkc.Node(
name=yiri_forward_node.sender_name,
uin=yiri_forward_node.sender_id,
time=int(yiri_forward_node.time.timestamp()) if yiri_forward_node.time is not None else None,
content=content_list
)
nakuru_forward_node_list.append(nakuru_forward_node)
except Exception as e:
import traceback
traceback.print_exc()
nakuru_msg_list.append(nakuru_forward_node_list)
else:
nakuru_msg_list.append(nkc.Plain(str(component)))
return nakuru_msg_list
@staticmethod
def target2yiri(message_chain: typing.Any, message_id: int = -1) -> mirai.MessageChain:
"""将Yiri的消息链转换为YiriMirai的消息链"""
assert type(message_chain) is list
yiri_msg_list = []
import datetime
# 添加Source组件以标记message_id等信息
yiri_msg_list.append(mirai.models.message.Source(id=message_id, time=datetime.datetime.now()))
for component in message_chain:
if type(component) is nkc.Plain:
yiri_msg_list.append(mirai.Plain(text=component.text))
elif type(component) is nkc.Image:
yiri_msg_list.append(mirai.Image(url=component.url))
elif type(component) is nkc.Face:
yiri_msg_list.append(mirai.Face(face_id=component.id))
elif type(component) is nkc.At:
yiri_msg_list.append(mirai.At(target=component.qq))
elif type(component) is nkc.AtAll:
yiri_msg_list.append(mirai.AtAll())
else:
pass
logging.debug("转换后的消息链: " + str(yiri_msg_list))
chain = mirai.MessageChain(yiri_msg_list)
return chain
class NakuruProjectEventConverter(EventConverter):
"""事件转换器"""
@staticmethod
def yiri2target(event: typing.Type[mirai.Event]):
if event is mirai.GroupMessage:
return nakuru.GroupMessage
elif event is mirai.FriendMessage:
return nakuru.FriendMessage
else:
raise Exception("未支持转换的事件类型: " + str(event))
@staticmethod
def target2yiri(event: typing.Any) -> mirai.Event:
yiri_chain = NakuruProjectMessageConverter.target2yiri(event.message, event.message_id)
if type(event) is nakuru.FriendMessage: # 私聊消息事件
return mirai.FriendMessage(
sender=mirai.models.entities.Friend(
id=event.sender.user_id,
nickname=event.sender.nickname,
remark=event.sender.nickname
),
message_chain=yiri_chain,
time=event.time
)
elif type(event) is nakuru.GroupMessage: # 群聊消息事件
permission = "MEMBER"
if event.sender.role == "admin":
permission = "ADMINISTRATOR"
elif event.sender.role == "owner":
permission = "OWNER"
import mirai.models.entities as entities
return mirai.GroupMessage(
sender=mirai.models.entities.GroupMember(
id=event.sender.user_id,
member_name=event.sender.nickname,
permission=permission,
group=mirai.models.entities.Group(
id=event.group_id,
name=event.sender.nickname,
permission=entities.Permission.Member
),
special_title=event.sender.title,
join_timestamp=0,
last_speak_timestamp=0,
mute_time_remaining=0,
),
message_chain=yiri_chain,
time=event.time
)
else:
raise Exception("未支持转换的事件类型: " + str(event))
class NakuruProjectAdapter(MessageSourceAdapter):
"""nakuru-project适配器"""
bot: nakuru.CQHTTP
bot_account_id: int
message_converter: NakuruProjectMessageConverter = NakuruProjectMessageConverter()
event_converter: NakuruProjectEventConverter = NakuruProjectEventConverter()
listener_list: list[dict]
def __init__(self, cfg: dict):
"""初始化nakuru-project的对象"""
self.bot = nakuru.CQHTTP(**cfg)
self.listener_list = []
# nakuru库有bug这个接口没法带access_token会失败
# 所以目前自行发请求
import config
import requests
resp = requests.get(
url="http://{}:{}/get_login_info".format(config.nakuru_config['host'], config.nakuru_config['http_port']),
headers={
'Authorization': "Bearer " + config.nakuru_config['token'] if 'token' in config.nakuru_config else ""
},
timeout=5
)
self.bot_account_id = int(resp.json()['data']['user_id'])
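A sketch of the `config.nakuru_config` dict this adapter expects; `host`, `http_port` and `token` are the keys read explicitly above for the `/get_login_info` request, the rest of the dict is unpacked into `nakuru.CQHTTP(**cfg)` and the `port` entry shown here is an assumption.

```python
# Hypothetical config.nakuru_config (placeholder values)
nakuru_config = {
    "host": "localhost",  # go-cqhttp host
    "port": 6700,         # websocket port (assumed key, consumed by nakuru.CQHTTP)
    "http_port": 5700,    # HTTP API port, used for the manual /get_login_info request above
    "token": "",          # access token; sent as "Authorization: Bearer <token>" when set
}
```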
def send_message(
self,
target_type: str,
target_id: str,
message: typing.Union[mirai.MessageChain, list],
converted: bool = False
):
task = None
converted_msg = self.message_converter.yiri2target(message) if not converted else message
# 检查是否有转发消息
has_forward = False
for msg in converted_msg:
if type(msg) is list: # 转发消息,仅回复此消息组件
has_forward = True
converted_msg = msg
break
if has_forward:
if target_type == "group":
task = self.bot.sendGroupForwardMessage(int(target_id), converted_msg)
elif target_type == "person":
task = self.bot.sendPrivateForwardMessage(int(target_id), converted_msg)
else:
raise Exception("Unknown target type: " + target_type)
else:
if target_type == "group":
task = self.bot.sendGroupMessage(int(target_id), converted_msg)
elif target_type == "person":
task = self.bot.sendFriendMessage(int(target_id), converted_msg)
else:
raise Exception("Unknown target type: " + target_type)
asyncio.run(task)
def reply_message(
self,
message_source: mirai.MessageEvent,
message: mirai.MessageChain,
quote_origin: bool = False
):
message = self.message_converter.yiri2target(message)
if quote_origin:
# 在前方添加引用组件
message.insert(0, nkc.Reply(
id=message_source.message_chain.message_id,
)
)
if type(message_source) is mirai.GroupMessage:
self.send_message(
"group",
message_source.sender.group.id,
message,
converted=True
)
elif type(message_source) is mirai.FriendMessage:
self.send_message(
"person",
message_source.sender.id,
message,
converted=True
)
else:
raise Exception("Unknown message source type: " + str(type(message_source)))
def is_muted(self, group_id: int) -> bool:
import time
# 检查是否被禁言
group_member_info = asyncio.run(self.bot.getGroupMemberInfo(group_id, self.bot_account_id))
return group_member_info.shut_up_timestamp > int(time.time())
def register_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
try:
logging.debug("注册监听器: " + str(event_type) + " -> " + str(callback))
# 包装函数
async def listener_wrapper(app: nakuru.CQHTTP, source: self.event_converter.yiri2target(event_type)):
callback(self.event_converter.target2yiri(source))
# 将包装函数和原函数的对应关系存入列表
self.listener_list.append(
{
"event_type": event_type,
"callable": callback,
"wrapper": listener_wrapper,
}
)
# 注册监听器
self.bot.receiver(self.event_converter.yiri2target(event_type).__name__)(listener_wrapper)
logging.debug("注册完成")
except Exception as e:
traceback.print_exc()
raise e
def unregister_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
nakuru_event_name = self.event_converter.yiri2target(event_type).__name__
new_event_list = []
# 从本对象的监听器列表中查找并删除
target_wrapper = None
for listener in self.listener_list:
if listener["event_type"] == event_type and listener["callable"] == callback:
target_wrapper = listener["wrapper"]
self.listener_list.remove(listener)
break
if target_wrapper is None:
raise Exception("未找到对应的监听器")
for func in self.bot.event[nakuru_event_name]:
if func.callable != target_wrapper:
new_event_list.append(func)
self.bot.event[nakuru_event_name] = new_event_list
def run_sync(self):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
self.bot.run()
def kill(self) -> bool:
return False

View File

@@ -0,0 +1,116 @@
from ..adapter import MessageSourceAdapter
import mirai
import mirai.models.bus
import asyncio
import typing
class YiriMiraiAdapter(MessageSourceAdapter):
"""YiriMirai适配器"""
bot: mirai.Mirai
def __init__(self, config: dict):
"""初始化YiriMirai的对象"""
if 'adapter' not in config or \
config['adapter'] == 'WebSocketAdapter':
self.bot = mirai.Mirai(
qq=config['qq'],
adapter=mirai.WebSocketAdapter(
host=config['host'],
port=config['port'],
verify_key=config['verifyKey']
)
)
elif config['adapter'] == 'HTTPAdapter':
self.bot = mirai.Mirai(
qq=config['qq'],
adapter=mirai.HTTPAdapter(
host=config['host'],
port=config['port'],
verify_key=config['verifyKey']
)
)
else:
raise Exception('Unknown adapter for YiriMirai: ' + config['adapter'])
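A sketch of the `config.mirai_http_api_config` consumed here; every key below (`adapter`, `qq`, `host`, `port`, `verifyKey`) is read in this constructor or in `QQBotManager`, while the values are placeholders.

```python
# Placeholder values; keys match what YiriMiraiAdapter.__init__ and QQBotManager read
mirai_http_api_config = {
    "adapter": "WebSocketAdapter",  # or "HTTPAdapter"
    "host": "localhost",
    "port": 8080,
    "verifyKey": "yirimirai",
    "qq": 123456789,                # bot account, also used as bot_account_id
}
```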
def send_message(
self,
target_type: str,
target_id: str,
message: mirai.MessageChain
):
"""发送消息
Args:
target_type (str): 目标类型,`person`或`group`
target_id (str): 目标ID
message (mirai.MessageChain): YiriMirai库的消息链
"""
task = None
if target_type == 'person':
task = self.bot.send_friend_message(int(target_id), message)
elif target_type == 'group':
task = self.bot.send_group_message(int(target_id), message)
else:
raise Exception('Unknown target type: ' + target_type)
asyncio.run(task)
def reply_message(
self,
message_source: mirai.MessageEvent,
message: mirai.MessageChain,
quote_origin: bool = False
):
"""回复消息
Args:
message_source (mirai.MessageEvent): YiriMirai消息源事件
message (mirai.MessageChain): YiriMirai库的消息链
quote_origin (bool, optional): 是否引用原消息. Defaults to False.
"""
asyncio.run(self.bot.send(message_source, message, quote_origin))
def is_muted(self, group_id: int) -> bool:
result = self.bot.member_info(target=group_id, member_id=self.bot.qq).get()
result = asyncio.run(result)
if result.mute_time_remaining > 0:
return True
return False
def register_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注册事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
self.bot.on(event_type)(callback)
def unregister_listener(
self,
event_type: typing.Type[mirai.Event],
callback: typing.Callable[[mirai.Event], None]
):
"""注销事件监听器
Args:
event_type (typing.Type[mirai.Event]): YiriMirai事件类型
callback (typing.Callable[[mirai.Event], None]): 回调函数接收一个参数为YiriMirai事件
"""
assert isinstance(self.bot, mirai.Mirai)
bus = self.bot.bus
assert isinstance(bus, mirai.models.bus.ModelEventBus)
bus.unsubscribe(event_type, callback)
def run_sync(self):
self.bot.run()
def kill(self) -> bool:
return False

View File

@@ -0,0 +1 @@
from .threadctl import ThreadCtl

68
pkg/utils/announcement.py Normal file
View File

@@ -0,0 +1,68 @@
import base64
import os
import json
import requests
def read_latest() -> list:
import pkg.utils.network as network
resp = requests.get(
url="https://api.github.com/repos/RockChinQ/QChatGPT/contents/res/announcement.json",
proxies=network.wrapper_proxies()
)
obj_json = resp.json()
b64_content = obj_json["content"]
# 解码
content = base64.b64decode(b64_content).decode("utf-8")
return json.loads(content)
def read_saved() -> list:
# 已保存的在res/announcement_saved
# 检查是否存在
if not os.path.exists("res/announcement_saved.json"):
with open("res/announcement_saved.json", "w", encoding="utf-8") as f:
f.write("[]")
with open("res/announcement_saved.json", "r", encoding="utf-8") as f:
content = f.read()
return json.loads(content)
def write_saved(content: list):
# 已保存的在res/announcement_saved
with open("res/announcement_saved.json", "w", encoding="utf-8") as f:
f.write(json.dumps(content, indent=4, ensure_ascii=False))
def fetch_new() -> list:
latest = read_latest()
saved = read_saved()
to_show: list = []
for item in latest:
# 遍历saved检查是否有相同id的公告
for saved_item in saved:
if saved_item["id"] == item["id"]:
break
else:
# 没有相同id的公告
to_show.append(item)
write_saved(latest)
return to_show
if __name__ == '__main__':
resp = requests.get(
url="https://api.github.com/repos/RockChinQ/QChatGPT/contents/res/announcement.json",
)
obj_json = resp.json()
b64_content = obj_json["content"]
# 解码
content = base64.b64decode(b64_content).decode("utf-8")
print(json.dumps(json.loads(content), indent=4, ensure_ascii=False))

File diff suppressed because one or more lines are too long

View File

@@ -1,50 +1,94 @@
import threading
from pkg.utils import ThreadCtl
context = {
'inst': {
'database.manager.DatabaseManager': None,
'openai.manager.OpenAIInteract': None,
'qqbot.manager.QQBotManager': None,
},
'pool_ctl': None,
'logger_handler': None,
'config': None,
'plugin_host': None,
}
context_lock = threading.Lock()
### context耦合度非常高需要大改 ###
def set_config(inst):
context_lock.acquire()
context['config'] = inst
context_lock.release()
def get_config():
return context['config']
context_lock.acquire()
t = context['config']
context_lock.release()
return t
def set_database_manager(inst):
context_lock.acquire()
context['inst']['database.manager.DatabaseManager'] = inst
context_lock.release()
def get_database_manager():
return context['inst']['database.manager.DatabaseManager']
context_lock.acquire()
t = context['inst']['database.manager.DatabaseManager']
context_lock.release()
return t
def set_openai_manager(inst):
context_lock.acquire()
context['inst']['openai.manager.OpenAIInteract'] = inst
context_lock.release()
def get_openai_manager():
return context['inst']['openai.manager.OpenAIInteract']
context_lock.acquire()
t = context['inst']['openai.manager.OpenAIInteract']
context_lock.release()
return t
def set_qqbot_manager(inst):
context_lock.acquire()
context['inst']['qqbot.manager.QQBotManager'] = inst
context_lock.release()
def get_qqbot_manager():
return context['inst']['qqbot.manager.QQBotManager']
context_lock.acquire()
t = context['inst']['qqbot.manager.QQBotManager']
context_lock.release()
return t
def set_plugin_host(inst):
context_lock.acquire()
context['plugin_host'] = inst
context_lock.release()
def get_plugin_host():
return context['plugin_host']
context_lock.acquire()
t = context['plugin_host']
context_lock.release()
return t
def set_thread_ctl(inst):
context_lock.acquire()
context['pool_ctl'] = inst
context_lock.release()
def get_thread_ctl() -> ThreadCtl:
context_lock.acquire()
t: ThreadCtl = context['pool_ctl']
context_lock.release()
return t

View File

@@ -1,13 +1,19 @@
# OpenAI账号免费额度剩余查询
import requests
def fetch_credit_data(api_key: str) -> dict:
def fetch_credit_data(api_key: str, http_proxy: str) -> dict:
"""OpenAI账号免费额度剩余查询"""
proxies = {
"http":http_proxy,
"https":http_proxy
} if http_proxy is not None else None
resp = requests.get(
url="https://api.openai.com/dashboard/billing/credit_grants",
headers={
"Authorization": "Bearer {}".format(api_key),
}
},
proxies=proxies
)
return resp.json()
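A usage sketch for the new `http_proxy` parameter; the API key and proxy address are placeholders, and `total_used` / `total_granted` are the response fields the `!usage` command reads.

```python
# Illustrative call - API key and proxy address are placeholders
data = fetch_credit_data("sk-xxxxxxxx", "http://127.0.0.1:7890")
print(data.get("total_used"), data.get("total_granted"))

# Passing None keeps the request direct (no proxy)
data = fetch_credit_data("sk-xxxxxxxx", None)
```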

66
pkg/utils/log.py Normal file
View File

@@ -0,0 +1,66 @@
import os
import time
import logging
import shutil
log_file_name = "qchatgpt.log"
log_colors_config = {
'DEBUG': 'green', # cyan white
'INFO': 'white',
'WARNING': 'yellow',
'ERROR': 'red',
'CRITICAL': 'cyan',
}
def init_runtime_log_file():
"""为此次运行生成日志文件
格式: qchatgpt-yyyy-MM-dd-HH-mm-ss.log
"""
global log_file_name
# 检查logs目录是否存在
if not os.path.exists("logs"):
os.mkdir("logs")
# 检查本目录是否有qchatgpt.log若有移动到logs目录
if os.path.exists("qchatgpt.log"):
shutil.move("qchatgpt.log", "logs/qchatgpt.legacy.log")
log_file_name = "logs/qchatgpt-%s.log" % time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
def reset_logging():
global log_file_name
import config
import pkg.utils.context
import colorlog
if pkg.utils.context.context['logger_handler'] is not None:
logging.getLogger().removeHandler(pkg.utils.context.context['logger_handler'])
for handler in logging.getLogger().handlers:
logging.getLogger().removeHandler(handler)
logging.basicConfig(level=config.logging_level, # 设置日志输出格式
filename=log_file_name, # log日志输出的文件位置和文件名
format="[%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s",
# 日志输出的格式
# -8表示占位符让输出左对齐输出长度都为8位
datefmt="%Y-%m-%d %H:%M:%S" # 时间输出的格式
)
sh = logging.StreamHandler()
sh.setLevel(config.logging_level)
sh.setFormatter(colorlog.ColoredFormatter(
fmt="%(log_color)s[%(asctime)s.%(msecs)03d] %(filename)s (%(lineno)d) - [%(levelname)s] : "
"%(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
log_colors=log_colors_config
))
logging.getLogger().addHandler(sh)
pkg.utils.context.context['logger_handler'] = sh
return sh

9
pkg/utils/network.py Normal file
View File

@@ -0,0 +1,9 @@
def wrapper_proxies() -> dict:
"""获取代理"""
import config
return {
"http": config.openai_config['proxy'],
"https": config.openai_config['proxy']
} if 'proxy' in config.openai_config and (config.openai_config['proxy'] is not None) else None
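A standalone sketch of the same rule, with a hypothetical `openai_config`; the returned dict is what the announcement and updater modules pass to `requests` as `proxies`.

```python
# Hypothetical config.openai_config; wrapper_proxies() mirrors the 'proxy' entry
openai_config = {"proxy": "http://127.0.0.1:7890"}

proxies = {
    "http": openai_config["proxy"],
    "https": openai_config["proxy"],
} if openai_config.get("proxy") is not None else None

assert proxies == {"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"}
```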

View File

@@ -1,16 +1,25 @@
from pip._internal import main as pipmain
import main
import pkg.utils.log as log
def install(package):
pipmain(['install', package])
main.reset_logging()
log.reset_logging()
def install_upgrade(package):
pipmain(['install', '--upgrade', package])
log.reset_logging()
def run_pip(params: list):
pipmain(params)
log.reset_logging()
def install_requirements(file):
pipmain(['install', '-r', file, "--upgrade"])
main.reset_logging()
log.reset_logging()
def ensure_dulwich():

View File

@@ -3,46 +3,67 @@ import threading
import importlib
import pkgutil
import pkg.utils.context
import pkg.utils.context as context
import pkg.plugin.host
def walk(module, prefix=''):
def walk(module, prefix='', path_prefix=''):
"""遍历并重载所有模块"""
for item in pkgutil.iter_modules(module.__path__):
if item.ispkg:
walk(__import__(module.__name__ + '.' + item.name, fromlist=['']), prefix + item.name + '.')
walk(__import__(module.__name__ + '.' + item.name, fromlist=['']), prefix + item.name + '.', path_prefix + item.name + '/')
else:
logging.info('reload module: {}'.format(prefix + item.name))
logging.info('reload module: {}, path: {}'.format(prefix + item.name, path_prefix + item.name + '.py'))
pkg.plugin.host.__current_module_path__ = "plugins/" + path_prefix + item.name + '.py'
importlib.reload(__import__(module.__name__ + '.' + item.name, fromlist=['']))
def reload_all(notify=True):
# 解除bot的事件注册
import pkg
pkg.utils.context.get_qqbot_manager().unsubscribe_all()
context.get_qqbot_manager().unsubscribe_all()
# 执行关闭流程
logging.info("执行程序关闭流程")
import main
main.stop()
# 删除所有已注册的指令
import pkg.qqbot.cmds.aamgr as cmdsmgr
cmdsmgr.__command_list__ = {}
cmdsmgr.__tree_index__ = {}
# 重载所有模块
pkg.utils.context.context['exceeded_keys'] = pkg.utils.context.get_openai_manager().key_mgr.exceeded
context = pkg.utils.context.context
context.context['exceeded_keys'] = context.get_openai_manager().key_mgr.exceeded
this_context = context.context
walk(pkg)
importlib.reload(__import__("config-template"))
importlib.reload(__import__('config'))
importlib.reload(__import__('main'))
importlib.reload(__import__('banlist'))
pkg.utils.context.context = context
importlib.reload(__import__('tips'))
context.context = this_context
# 重载插件
import plugins
walk(plugins)
# 初始化相关文件
main.check_file()
# 执行启动流程
logging.info("执行程序启动流程")
threading.Thread(target=main.main, args=(False,), daemon=False).start()
main.load_config()
main.complete_tips()
context.get_thread_ctl().reload(
admin_pool_num=context.get_config().admin_pool_num,
user_pool_num=context.get_config().user_pool_num
)
context.get_thread_ctl().submit_sys_task(
main.start,
False
)
logging.info('程序启动完成')
if notify:
pkg.utils.context.get_qqbot_manager().notify_admin("重载完成")
context.get_qqbot_manager().notify_admin("重载完成")

193
pkg/utils/text2img.py Normal file
View File

@@ -0,0 +1,193 @@
import logging
from PIL import Image, ImageDraw, ImageFont
import re
import os
import config
import traceback
text_render_font: ImageFont = None
if config.blob_message_strategy == "image": # 仅在启用了image时才加载字体
use_font = config.font_path
try:
# 检查是否存在
if not os.path.exists(use_font):
# 若是windows系统使用微软雅黑
if os.name == "nt":
use_font = "C:/Windows/Fonts/msyh.ttc"
if not os.path.exists(use_font):
logging.warn("未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。")
config.blob_message_strategy = "forward"
else:
logging.info("使用Windows自带字体" + use_font)
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
else:
logging.warn("未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。")
config.blob_message_strategy = "forward"
else:
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
except:
traceback.print_exc()
logging.error("加载字体文件失败({})更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。".format(use_font))
config.blob_message_strategy = "forward"
def indexNumber(path=''):
"""
查找字符串中数字所在串中的位置
:param path:目标字符串
:return:<class 'list'>: <class 'list'>: [['1', 16], ['2', 35], ['1', 51]]
"""
kv = []
nums = []
beforeDatas = re.findall('[\d]+', path)
for num in beforeDatas:
indexV = []
times = path.count(num)
if times > 1:
if num not in nums:
indexs = re.finditer(num, path)
for index in indexs:
iV = []
i = index.span()[0]
iV.append(num)
iV.append(i)
kv.append(iV)
nums.append(num)
else:
index = path.find(num)
indexV.append(num)
indexV.append(index)
kv.append(indexV)
# 根据数字位置排序
indexSort = []
resultIndex = []
for vi in kv:
indexSort.append(vi[1])
indexSort.sort()
for i in indexSort:
for v in kv:
if i == v[1]:
resultIndex.append(v)
return resultIndex
def get_size(file):
# 获取文件大小:KB
size = os.path.getsize(file)
return size / 1024
def get_outfile(infile, outfile):
if outfile:
return outfile
dir, suffix = os.path.splitext(infile)
outfile = '{}-out{}'.format(dir, suffix)
return outfile
def compress_image(infile, outfile='', kb=100, step=20, quality=90):
"""不改变图片尺寸压缩到指定大小
:param infile: 压缩源文件
:param outfile: 压缩文件保存地址
:param kb: 压缩目标,KB
:param step: 每次调整的压缩比率
:param quality: 初始压缩比率
:return: 压缩文件地址,压缩文件大小
"""
o_size = get_size(infile)
if o_size <= kb:
return infile, o_size
outfile = get_outfile(infile, outfile)
while o_size > kb:
im = Image.open(infile)
im.save(outfile, quality=quality)
if quality - step < 0:
break
quality -= step
o_size = get_size(outfile)
return outfile, get_size(outfile)
def text_to_image(text_str: str, save_as="temp.png", width=800):
global text_render_font
text_str = text_str.replace("\t", " ")
# 分行
lines = text_str.split('\n')
# 计算并分割
final_lines = []
text_width = width-80
for line in lines:
# 如果长了就分割
line_width = text_render_font.getlength(line)
if line_width < text_width:
final_lines.append(line)
continue
else:
rest_text = line
while True:
# 分割最前面的一行
point = int(len(rest_text) * (text_width / line_width))
# 检查断点是否在数字中间
numbers = indexNumber(rest_text)
for number in numbers:
if number[1] < point < number[1] + len(number[0]) and number[1] != 0:
point = number[1]
break
final_lines.append(rest_text[:point])
rest_text = rest_text[point:]
line_width = text_render_font.getlength(rest_text)
if line_width < text_width:
final_lines.append(rest_text)
break
else:
continue
# 准备画布
img = Image.new('RGBA', (width, max(280, len(final_lines) * 35 + 65)), (255, 255, 255, 255))
draw = ImageDraw.Draw(img, mode='RGBA')
# 绘制正文
line_number = 0
offset_x = 20
offset_y = 30
for final_line in final_lines:
draw.text((offset_x, offset_y + 35 * line_number), final_line, fill=(0, 0, 0), font=text_render_font)
# 遍历此行,检查是否有emoji
idx_in_line = 0
for ch in final_line:
# if self.is_emoji(ch):
# emoji_img_valid = ensure_emoji(hex(ord(ch))[2:])
# if emoji_img_valid: # emoji图像可用,绘制到指定位置
# emoji_image = Image.open("emojis/{}.png".format(hex(ord(ch))[2:]), mode='r').convert('RGBA')
# emoji_image = emoji_image.resize((32, 32))
# x, y = emoji_image.size
# final_emoji_img = Image.new('RGBA', emoji_image.size, (255, 255, 255))
# final_emoji_img.paste(emoji_image, (0, 0, x, y), emoji_image)
# img.paste(final_emoji_img, box=(int(offset_x + idx_in_line * 32), offset_y + 35 * line_number))
# 检查字符占位宽
char_code = ord(ch)
if char_code >= 127:
idx_in_line += 1
else:
idx_in_line += 0.5
line_number += 1
img.save(save_as)
return save_as
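A usage sketch for the two helpers above; it assumes the module-level font was loaded successfully (i.e. `config.blob_message_strategy == "image"` and a valid `config.font_path`), and the text and file names are placeholders.

```python
if __name__ == "__main__":
    # Render a long reply into an 800px-wide PNG (the "image" blob strategy)
    out = text_to_image("第一行\n第二行,以及一段比较长的文本,用于测试按宽度自动换行。",
                        save_as="temp.png", width=800)
    print("rendered to", out)

    # Compress the rendered image down to roughly 100 KB
    out, size_kb = compress_image(out, outfile="temp-small.png", kb=100)
    print(out, size_kb)
```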

93
pkg/utils/threadctl.py Normal file
View File

@@ -0,0 +1,93 @@
import threading
import time
from concurrent.futures import ThreadPoolExecutor
class Pool:
"""线程池结构"""
pool_num:int = None
ctl:ThreadPoolExecutor = None
task_list:list = None
task_list_lock:threading.Lock = None
monitor_type = True
def __init__(self, pool_num):
self.pool_num = pool_num
self.ctl = ThreadPoolExecutor(max_workers = self.pool_num)
self.task_list = []
self.task_list_lock = threading.Lock()
def __thread_monitor__(self):
while self.monitor_type:
for t in self.task_list:
if not t.done():
continue
try:
self.task_list.pop(self.task_list.index(t))
except:
continue
time.sleep(1)
class ThreadCtl:
def __init__(self, sys_pool_num, admin_pool_num, user_pool_num):
"""线程池控制类
sys_pool_num分配系统使用的线程池数量(>=8)
admin_pool_num用于处理管理员消息的线程池数量(>=1)
user_pool_num分配用于处理用户消息的线程池的数量(>=1)
"""
if sys_pool_num < 8:
raise Exception("Too few system threads(sys_pool_num needs >= 8, but received {})".format(sys_pool_num))
if admin_pool_num < 1:
raise Exception("Too few admin threads(admin_pool_num needs >= 1, but received {})".format(admin_pool_num))
if user_pool_num < 1:
raise Exception("Too few user threads(user_pool_num needs >= 1, but received {})".format(admin_pool_num))
self.__sys_pool__ = Pool(sys_pool_num)
self.__admin_pool__ = Pool(admin_pool_num)
self.__user_pool__ = Pool(user_pool_num)
self.submit_sys_task(self.__sys_pool__.__thread_monitor__)
self.submit_sys_task(self.__admin_pool__.__thread_monitor__)
self.submit_sys_task(self.__user_pool__.__thread_monitor__)
def __submit__(self, pool: Pool, fn, /, *args, **kwargs ):
t = pool.ctl.submit(fn, *args, **kwargs)
pool.task_list_lock.acquire()
pool.task_list.append(t)
pool.task_list_lock.release()
return t
def submit_sys_task(self, fn, /, *args, **kwargs):
return self.__submit__(
self.__sys_pool__,
fn, *args, **kwargs
)
def submit_admin_task(self, fn, /, *args, **kwargs):
return self.__submit__(
self.__admin_pool__,
fn, *args, **kwargs
)
def submit_user_task(self, fn, /, *args, **kwargs):
return self.__submit__(
self.__user_pool__,
fn, *args, **kwargs
)
def shutdown(self):
self.__user_pool__.ctl.shutdown(cancel_futures=True)
self.__user_pool__.monitor_type = False
self.__admin_pool__.ctl.shutdown(cancel_futures=True)
self.__admin_pool__.monitor_type = False
self.__sys_pool__.monitor_type = False
self.__sys_pool__.ctl.shutdown(wait=True, cancel_futures=False)
def reload(self, admin_pool_num, user_pool_num):
self.__user_pool__.ctl.shutdown(cancel_futures=True)
self.__user_pool__.monitor_type = False
self.__admin_pool__.ctl.shutdown(cancel_futures=True)
self.__admin_pool__.monitor_type = False
self.__admin_pool__ = Pool(admin_pool_num)
self.__user_pool__ = Pool(user_pool_num)
self.submit_sys_task(self.__admin_pool__.__thread_monitor__)
self.submit_sys_task(self.__user_pool__.__thread_monitor__)
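A minimal usage sketch; the pool sizes are arbitrary but satisfy the minimums enforced in `__init__`, and tasks are routed onto the user pool the same way `QQBotManager` calls `submit_user_task`.

```python
ctl = ThreadCtl(sys_pool_num=8, admin_pool_num=2, user_pool_num=4)

future = ctl.submit_user_task(lambda n: n * 2, 21)
print(future.result())  # 42

ctl.shutdown()  # stops the monitors and waits for the system pool to drain
```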

View File

@@ -1,6 +1,12 @@
import datetime
import logging
import os.path
import pkg.utils.context
import requests
import json
import pkg.utils.constants
import pkg.utils.network as network
def check_dulwich_closure():
@@ -28,34 +34,147 @@ def pull_latest(repo_path: str) -> bool:
return True
def update_all() -> bool:
"""使用dulwich更新源码"""
check_dulwich_closure()
import dulwich
try:
before_commit_id = get_current_commit_id()
from dulwich import porcelain
repo = porcelain.open_repo('.')
porcelain.pull(repo)
def is_newer_ignored_bugfix_ver(new_tag: str, old_tag: str):
"""判断版本是否更新,忽略第四位版本"""
if new_tag == old_tag:
return False
change_log = ""
new_tag = new_tag.split(".")
old_tag = old_tag.split(".")
if len(new_tag) < 4:
return True
for entry in repo.get_walker():
if str(entry.commit.id)[2:-1] == before_commit_id:
break
tz = datetime.timezone(datetime.timedelta(hours=entry.commit.commit_timezone // 3600))
dt = datetime.datetime.fromtimestamp(entry.commit.commit_time, tz)
change_log += dt.strftime('%Y-%m-%d %H:%M:%S') + " [" + str(entry.commit.message, encoding="utf-8").strip()+"]\n"
# 合成前三段,判断是否相同
new_tag = ".".join(new_tag[:3])
old_tag = ".".join(old_tag[:3])
if change_log != "":
pkg.utils.context.get_qqbot_manager().notify_admin("代码拉取完成,更新内容如下:\n"+change_log)
return True
else:
return False
except ModuleNotFoundError:
raise Exception("dulwich模块未安装,请查看 https://github.com/RockChinQ/QChatGPT/issues/77")
except dulwich.porcelain.DivergedBranches:
raise Exception("分支不一致,自动更新仅支持master分支,请手动更新(https://github.com/RockChinQ/QChatGPT/issues/76)")
return new_tag != old_tag
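A few hedged examples of the comparison rule above; the tag strings are hypothetical.

```python
# The fourth (bugfix) segment is ignored when deciding whether an update is "newer"
assert is_newer_ignored_bugfix_ver("v2.4.1", "v2.4.0") is True      # normal version bump
assert is_newer_ignored_bugfix_ver("v2.4.0.1", "v2.4.0") is False   # bugfix-only bump
assert is_newer_ignored_bugfix_ver("v2.4.0", "v2.4.0") is False     # identical tags
```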
def get_release_list() -> list:
"""获取发行列表"""
rls_list_resp = requests.get(
url="https://api.github.com/repos/RockChinQ/QChatGPT/releases",
proxies=network.wrapper_proxies()
)
rls_list = rls_list_resp.json()
return rls_list
def get_current_tag() -> str:
"""获取当前tag"""
current_tag = pkg.utils.constants.semantic_version
if os.path.exists("current_tag"):
with open("current_tag", "r") as f:
current_tag = f.read()
return current_tag
def update_all(cli: bool = False) -> bool:
"""检查更新并下载源码"""
current_tag = get_current_tag()
rls_list = get_release_list()
latest_rls = {}
rls_notes = []
latest_tag_name = ""
for rls in rls_list:
rls_notes.append(rls['name']) # 使用发行名称作为note
if latest_tag_name == "":
latest_tag_name = rls['tag_name']
if rls['tag_name'] == current_tag:
break
if latest_rls == {}:
latest_rls = rls
if not cli:
logging.info("更新日志: {}".format(rls_notes))
else:
print("更新日志: {}".format(rls_notes))
if latest_rls == {} and not is_newer_ignored_bugfix_ver(latest_tag_name, current_tag): # 没有新版本
return False
# 下载最新版本的zip到temp目录
if not cli:
logging.info("开始下载最新版本: {}".format(latest_rls['zipball_url']))
else:
print("开始下载最新版本: {}".format(latest_rls['zipball_url']))
zip_url = latest_rls['zipball_url']
zip_resp = requests.get(
url=zip_url,
proxies=network.wrapper_proxies()
)
zip_data = zip_resp.content
# 检查temp/updater目录
if not os.path.exists("temp"):
os.mkdir("temp")
if not os.path.exists("temp/updater"):
os.mkdir("temp/updater")
with open("temp/updater/{}.zip".format(latest_rls['tag_name']), "wb") as f:
f.write(zip_data)
if not cli:
logging.info("下载最新版本完成: {}".format("temp/updater/{}.zip".format(latest_rls['tag_name'])))
else:
print("下载最新版本完成: {}".format("temp/updater/{}.zip".format(latest_rls['tag_name'])))
# 解压zip到temp/updater/<tag_name>/
import zipfile
# 检查目标文件夹
if os.path.exists("temp/updater/{}".format(latest_rls['tag_name'])):
import shutil
shutil.rmtree("temp/updater/{}".format(latest_rls['tag_name']))
os.mkdir("temp/updater/{}".format(latest_rls['tag_name']))
with zipfile.ZipFile("temp/updater/{}.zip".format(latest_rls['tag_name']), 'r') as zip_ref:
zip_ref.extractall("temp/updater/{}".format(latest_rls['tag_name']))
# 覆盖源码
source_root = ""
# 找到temp/updater/<tag_name>/中的第一个子目录路径
for root, dirs, files in os.walk("temp/updater/{}".format(latest_rls['tag_name'])):
if root != "temp/updater/{}".format(latest_rls['tag_name']):
source_root = root
break
# 覆盖源码
import shutil
for root, dirs, files in os.walk(source_root):
# 覆盖所有子文件子目录
for file in files:
src = os.path.join(root, file)
dst = src.replace(source_root, ".")
if os.path.exists(dst):
os.remove(dst)
# 检查目标文件夹是否存在
if not os.path.exists(os.path.dirname(dst)):
os.makedirs(os.path.dirname(dst))
# 检查目标文件是否存在
if not os.path.exists(dst):
# 创建目标文件
open(dst, "w").close()
shutil.copy(src, dst)
# 把current_tag写入文件
current_tag = latest_rls['tag_name']
with open("current_tag", "w") as f:
f.write(current_tag)
# 通知管理员
if not cli:
import pkg.utils.context
pkg.utils.context.get_qqbot_manager().notify_admin("已更新到最新版本: {}\n更新日志:\n{}\n完整的更新日志请前往 https://github.com/RockChinQ/QChatGPT/releases 查看".format(current_tag, "\n".join(rls_notes[:-1])))
else:
print("已更新到最新版本: {}\n更新日志:\n{}\n完整的更新日志请前往 https://github.com/RockChinQ/QChatGPT/releases 查看".format(current_tag, "\n".join(rls_notes[:-1])))
return True
def is_repo(path: str) -> bool:
@@ -81,24 +200,12 @@ def get_remote_url(repo_path: str) -> str:
def get_current_version_info() -> str:
"""获取当前版本信息"""
check_dulwich_closure()
from dulwich import porcelain
repo = porcelain.open_repo('.')
version_str = ""
for entry in repo.get_walker():
version_str += "提交编号: "+str(entry.commit.id)[2:9] + "\n"
tz = datetime.timezone(datetime.timedelta(hours=entry.commit.commit_timezone // 3600))
dt = datetime.datetime.fromtimestamp(entry.commit.commit_time, tz)
version_str += "时间: "+dt.strftime('%m-%d %H:%M:%S') + "\n"
version_str += "说明: "+str(entry.commit.message, encoding="utf-8").strip() + "\n"
version_str += "提交作者: '" + str(entry.commit.author)[2:-1] + "'"
break
return version_str
rls_list = get_release_list()
current_tag = get_current_tag()
for rls in rls_list:
if rls['tag_name'] == current_tag:
return rls['name'] + "\n" + rls['body']
return "未知版本"
def get_commit_id_and_time_and_msg() -> str:
@@ -132,15 +239,44 @@ def get_current_commit_id() -> str:
def is_new_version_available() -> bool:
    """Check whether a new version is available."""
    check_dulwich_closure()

    # fetch the release list from GitHub
    rls_list = get_release_list()
    if rls_list is None:
        return False

    from dulwich import porcelain

    # current version
    current_tag = get_current_tag()

    repo = porcelain.open_repo('.')
    fetch_res = porcelain.ls_remote(porcelain.get_remote_repo(repo, "origin")[1])

    # check whether a newer version exists
    latest_tag_name = ""
    for rls in rls_list:
        if latest_tag_name == "":
            latest_tag_name = rls['tag_name']
            break

    current_commit_id = get_current_commit_id()
    return is_newer_ignored_bugfix_ver(latest_tag_name, current_tag)
    latest_commit_id = str(fetch_res[b'HEAD'])[2:-1]
    return current_commit_id != latest_commit_id


def get_rls_notes() -> list:
    """Get the release notes."""
    # fetch the release list from GitHub
    rls_list = get_release_list()
    if rls_list is None:
        return None

    # current version
    current_tag = get_current_tag()

    # collect the notes of releases newer than the current tag
    rls_notes = []
    for rls in rls_list:
        if rls['tag_name'] == current_tag:
            break
        rls_notes.append(rls['name'])

    return rls_notes


if __name__ == "__main__":
    update_all()
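
Note: `is_newer_ignored_bugfix_ver()` is referenced in the check above but defined outside this hunk. As a rough, illustrative sketch of such a comparison (comparing release tags while ignoring the bugfix digit), something like the following could be used; the repository's actual implementation may differ:

```python
# Illustrative only: not the repository's is_newer_ignored_bugfix_ver().
def is_newer_ignored_bugfix_ver_sketch(new_tag: str, old_tag: str) -> bool:
    """Return True if new_tag is newer than old_tag when only the
    major/minor components are compared (e.g. v2.5.0 vs v2.4.9 is newer,
    but v2.4.3 vs v2.4.1 is not)."""
    new_parts = new_tag.lstrip("v").split(".")
    old_parts = old_tag.lstrip("v").split(".")
    # compare only the first two components, padding missing ones with 0
    new_mm = [int(x) for x in (new_parts + ["0", "0"])[:2]]
    old_mm = [int(x) for x in (old_parts + ["0", "0"])[:2]]
    return new_mm > old_mm
```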


@@ -1,9 +1,10 @@
requests~=2.28.1
openai~=0.26.5
pip~=22.3.1
openai~=0.27.4
dulwich~=0.21.3
colorlog~=6.6.0
yiri-mirai~=0.2.6.1
websockets~=10.4
websockets
urllib3~=1.26.10
func_timeout~=4.3.5
func_timeout~=4.3.5
Pillow
nakuru-project-idk

res/announcement Normal file

@@ -0,0 +1 @@
2023/3/31 21:35 【插件兼容性问题】若您使用了revLibs插件并将主程序升级到了v2.3.0,请立即使用管理员账号向机器人账号发送!plugin update命令更新逆向库插件以解决由于情景预设重构引起的兼容性问题。

res/announcement.json Normal file

@@ -0,0 +1,8 @@
[
{
"id": 0,
"time": "2023-04-24 16:05:20",
"timestamp": 1682323520,
"content": "现已支持使用go-cqhttp替换mirai作为QQ登录框架, 请更新并查看 https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE"
}
]

res/docs/docker_deploy.md Normal file

@@ -0,0 +1,95 @@
## Steps
### 1. Install Docker and Docker Compose
[Installing Docker on various platforms](https://yeasy.gitbook.io/docker_practice/install)
[Installing Compose](https://yeasy.gitbook.io/docker_practice/compose)
> `Docker Desktop for Mac/Windows` ships with the `docker-compose` binary, so it is available as soon as Docker is installed.
>
> Any installation method is fine, as long as Docker ends up installed.
### 2. Log in to QQ (it is recommended to run all steps below from the project directory)
#### 2.1 Run the command
```
docker run -d -it --name mcl --network host -v ${PWD}/qq/plugins:/app/plugins -v ${PWD}/qq/config:/app/config -v ${PWD}/qq/data:/app/data -v ${PWD}/qq/bots:/app/bots --restart unless-stopped kagurazakanyaa/mcl:latest
```
This uses the [KagurazakaNyaa/mirai-console-loader-docker](https://github.com/KagurazakaNyaa/mirai-console-loader-docker) image.
#### 2.2 Attach to the container
```
docker ps
```
Find the container ID in the output, for example:
```sh
CONTAINER ID   IMAGE                COMMAND      CREATED          STATUS          PORTS                                       NAMES
bce1e5568f46   kagurazakanyaa/mcl   "./mcl -u"   10 minutes ago   Up 10 minutes   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   admiring_mendeleev
```
Take the `CONTAINER ID` of the container whose `IMAGE` is `kagurazakanyaa/mcl` (here it is `bce1e5568f46`) and bring it to the foreground with:
```
docker attach bce1e5568f46
```
To send it back to the background, press `Ctrl+P+Q`.
#### 2.3 Edit the configuration file
- In the `/qq/config/net.mamoe.mirai-api-http` folder, find `setting.yml`; this is the configuration file of `mirai-api-http`.
- Change the content of that file to:
```
adapters:
  - ws
debug: true
enableVerify: true
verifyKey: yirimirai
singleMode: false
cacheSize: 4096
adapterSettings:
  ws:
    host: localhost
    port: 8080
    reservedSyncId: -1
```
`verifyKey` must match the `verifyKey` in the bot's `config.py`.
`port: 8080` must match the port number configured in QChatGPT's `config.py`.
#### 2.4 Log in
Log in to QQ on mirai:
```
login <bot QQ number> <bot QQ password>
```
> See [this tutorial](https://yiri-mirai.wybxc.cc/tutorials/01/configuration#4-登录-qq) for details.
#### Configure auto-login (optional)
Once the bot account has logged in successfully, run
```
autologin add <bot QQ number> <bot QQ password>
autologin setConfig <bot QQ number> protocol ANDROID_PAD
```
> If you hit a "无法登录" (unable to log in) error, see the [temporary workaround](https://mirai.mamoe.net/topic/223/无法登录的临时处理方案).
**When finished, press `Ctrl+P+Q` to detach (this does not stop the container; it keeps running).**
### 3. Deploy QChatGPT
Prepare `config.py`, save it in the current directory, then run:
```
docker run -it -d --name QChatGPT --network host -v ${PWD}/config.py:/QChatGPT/config.py -v ${PWD}/banlist.py:/QChatGPT/banlist.py -v ${PWD}/sensitive.json:/QChatGPT/sensitive.json mikumifa/qchatgpt-docker
```

res/logo.png Normal file (binary file, 203 KiB, not shown)

(binary file, 129 KiB, not shown)

(binary file, 101 KiB, not shown)

(binary file, 98 KiB, not shown)


@@ -0,0 +1,17 @@
import pkg.qqbot.cmds.aamgr as cmdsmgr
import json

# register all command modules
cmdsmgr.register_all()

# build the permission template
template: dict[str, int] = {
    "comment": "以下为命令权限请设置到cmdpriv.json中。关于此功能的说明请查看https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%91%BD%E4%BB%A4%E6%9D%83%E9%99%90%E6%8E%A7%E5%88%B6",
}

for key in cmdsmgr.__command_list__:
    template[key] = cmdsmgr.__command_list__[key]['privilege']

# write res/templates/cmdpriv-template.json
with open('res/templates/cmdpriv-template.json', 'w') as f:
    f.write(json.dumps(template, indent=4, ensure_ascii=False))


@@ -0,0 +1,23 @@
# Generate override-all.json, the full-field template for override.json, from config-template
# See https://github.com/RockChinQ/QChatGPT/pull/271 for the override.json mechanism
import json
import importlib

template = importlib.import_module("config-template")

output_json = {
    "comment": "这是override.json支持的字段全集, 关于override.json机制, 请查看https://github.com/RockChinQ/QChatGPT/pull/271"
}

for k, v in template.__dict__.items():
    if k.startswith("__"):
        continue
    # skip imported modules
    if type(v) == type(template):
        continue
    print(k, v, type(v))
    output_json[k] = v

with open("override-all.json", "w", encoding="utf-8") as f:
    json.dump(output_json, f, indent=4, ensure_ascii=False)


@@ -0,0 +1,32 @@
# print the working directory
import os
print("工作路径: " + os.getcwd())

announcement = input("请输入公告内容: ")

import json

# read the existing announcement file res/announcement.json
with open("res/announcement.json", "r", encoding="utf-8") as f:
    announcement_json = json.load(f)

# append the new announcement
# current local time
import time
now = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())

# id of the last existing announcement
last_id = announcement_json[-1]["id"] if len(announcement_json) > 0 else -1

announcement = {
    "id": last_id + 1,
    "time": now,
    "timestamp": int(time.time()),
    "content": announcement
}

announcement_json.append(announcement)

# write back to the announcement file
with open("res/announcement.json", "w", encoding="utf-8") as f:
    json.dump(announcement_json, f, indent=4, ensure_ascii=False)

res/social.png Normal file (binary file, 70 KiB, not shown)


@@ -0,0 +1,29 @@
{
"comment": "以下为命令权限请设置到cmdpriv.json中。关于此功能的说明请查看https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%91%BD%E4%BB%A4%E6%9D%83%E9%99%90%E6%8E%A7%E5%88%B6",
"draw": 1,
"plugin": 2,
"plugin.get": 2,
"plugin.update": 2,
"plugin.del": 2,
"plugin.off": 2,
"plugin.on": 2,
"default": 1,
"default.set": 2,
"del": 1,
"del.all": 1,
"delhst": 2,
"delhst.all": 2,
"last": 1,
"list": 1,
"next": 1,
"prompt": 1,
"resend": 1,
"reset": 1,
"cfg": 2,
"cmd": 1,
"help": 1,
"reload": 2,
"update": 2,
"usage": 1,
"version": 1
}


@@ -1,4 +1,7 @@
{
"说明": "mask将替换敏感词中的每一个字若mask_word值不为空则将敏感词整个替换为mask_word的值",
"mask": "*",
"mask_word": "",
"words": [
"习近平",
"胡锦涛",
@@ -9,6 +12,7 @@
"毛泽东",
"邓小平",
"周恩来",
"马克思",
"社会主义",
"共产党",
"共产主义",
@@ -21,6 +25,8 @@
"天安门",
"六四",
"政治局常委",
"两会",
"共青团",
"学潮",
"八九",
"二十大",
@@ -48,6 +54,7 @@
"作爱",
"做爱",
"性交",
"性爱",
"自慰",
"阴茎",
"淫妇",

res/wiki/Home.md Normal file

@@ -0,0 +1,27 @@
Welcome to the QChatGPT wiki.
## Introduction
QChatGPT calls the official OpenAI API and, combined with the mirai and YiriMirai frameworks, connects QQ messages to the language model to provide a smarter conversation bot.
## Tech stack
- [Mirai](https://github.com/mamoe/mirai) high-performance QQ bot framework
- [YiriMirai](https://github.com/YiriMiraiProject/YiriMirai) a lightweight, loosely coupled Python SDK based on mirai-api-http
- [go-cqhttp](https://github.com/Mrs4s/go-cqhttp) a Golang implementation of cqhttp; lightweight and natively cross-platform
- [nakuru-project](https://github.com/Lxns-Network/nakuru-project) - a Python SDK designed for go-cqhttp's forward WebSocket, supporting conversion between raw CQ codes and message chains
- [nakuru-project-idk](https://github.com/idoknow/nakuru-project-idk) - a fork of nakuru-project maintained by idoknow
- [dulwich](https://github.com/jelmer/dulwich) Pure-Python Git implementation
- [OpenAI API](https://openai.com/api/) OpenAI API
## Code structure
- `pkg.database` database layer
  - the database stores session history so that conversations survive a program restart
- `pkg.openai` OpenAI API layer
  - calls the OpenAI API to generate reply content
- `pkg.qqbot` QQ bot layer
  - handles messages received on QQ, calls the API and replies
- `pkg.utils` common utilities
- `pkg.audit` audit module
- `pkg.plugin` plugin management


@@ -0,0 +1,67 @@
# Configuring go-cqhttp for QQ login
> If you are upgrading from an older version in order to use go-cqhttp, edit `config.py` following `config-template.py`: add the `msg_source_adapter` option and set it to `nakuru`, and add the `nakuru_config` field, configured as described there.
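
For reference, the change described in the note above amounts to something like the following in `config.py`. This is only a sketch: `msg_source_adapter` and `nakuru_config` are the documented option names, but the fields inside `nakuru_config` (host, ports, token) are assumptions here and should be copied from your own `config-template.py`:

```python
# Sketch only; take the authoritative field names from config-template.py.
msg_source_adapter = "nakuru"   # switch the message source from YiriMirai to nakuru/go-cqhttp

nakuru_config = {
    "host": "localhost",   # go-cqhttp host (assumed field name)
    "port": 6700,          # forward WebSocket port set in go-cqhttp's config.yml (see step 3.2 below)
    "http_port": 5700,     # HTTP port (assumed field name and default)
    "token": ""            # access-token, if one is set in go-cqhttp
}
```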
## Steps
1. Download the latest go-cqhttp executable from its [Releases page](https://github.com/Mrs4s/go-cqhttp/releases/latest). Downloading the executable archive directly is recommended over using the installer.
2. Extract and run it. On the first run it asks which network protocols to enable; **enter `02` and press Enter**:
   ```
   C:\Softwares\go-cqhttp.old> .\go-cqhttp.exe
   未找到配置文件,正在为您生成配置文件中!
   请选择你需要的通信方式:
   > 0: HTTP通信
   > 1: 云函数服务
   > 2: 正向 Websocket 通信
   > 3: 反向 Websocket 通信
   请输入你需要的编号(0-9),可输入多个,同一编号也可输入多个(如: 233)
   您的选择是:02
   ```
   Once it reports that `config.yml` has been generated, close go-cqhttp.
3. Open the `config.yml` in the same directory as go-cqhttp.
   1. Edit the account login information. Only `uin` and `password` below need to be changed, to the QQ number and password of the bot account. **If you leave them empty, go-cqhttp will ask for a QR-code login at startup.**
   ```yaml
   account: # account settings
     uin: 1233456 # QQ number
     password: '' # leave empty to log in by scanning a QR code
     encrypt: false # whether to encrypt the password
     status: 0 # online status, see https://docs.go-cqhttp.org/guide/config.html#在线状态
     relogin: # reconnection settings
       delay: 3 # initial reconnect delay, in seconds
       interval: 3 # reconnect interval
       max-times: 0 # maximum reconnect attempts, 0 = unlimited
   ```
   2. Change the WebSocket port. Further down in `config.yml`, find:
   ```yaml
   - ws:
       # forward WS listen address
       address: 0.0.0.0:8080
       middlewares:
         <<: *default # use the default middlewares
   ```
   **Change `0.0.0.0:8080` to `0.0.0.0:6700`**, then save and close `config.yml`.
   3. If your server is on the public internet, setting an `access-token` is strongly recommended (optional):
   ```yaml
   # default middleware anchor
   default-middlewares: &default
     # access token, strongly recommended for servers on the public internet
     access-token: ''
   ```
4. With configuration done, start go-cqhttp again.
> If login fails after startup, try changing the protocol number in `device.json` as described in [this document](https://docs.go-cqhttp.org/guide/config.html#%E8%AE%BE%E5%A4%87%E4%BF%A1%E6%81%AF).

res/wiki/功能使用.md Normal file

@@ -0,0 +1,370 @@
## Feature list
<details>
<summary>✅Replies follow the conversation context</summary>

- The program sends the last few rounds of conversation to the model, which generates its reply from that context
- Edit `prompt_submit_length` in `config.py` to customise how much context is included
</details>
<details>
<summary>✅Sensitive-word filtering to reduce account risk</summary>

- What the bot and its users talk about is hard to monitor, so this feature is provided to reduce the risk to the bot account
- Edit `sensitive.json` and change the value of `sensitive_word_filter` in `config.py` to enable it
</details>
<details>
<summary>✅Several in-group response rules, no @ required</summary>

- By default the bot replies to messages prefixed with `ai` or messages that `@` the bot
- See the `response_rules` field in `config.py` for details
</details>
<details>
<summary>✅Uses the official API: no proxy needed, stable and fast</summary>

- Uses the official Completion API instead of a reverse-engineered ChatGPT endpoint, so stability is high
- You can customise the `completion_api_params` field in `config.py` to set the parameters submitted to the official API and adjust the bot's style
</details>
<details>
<summary>✅Full multi-api-key management with automatic switching when the quota is exceeded</summary>

- Multiple `api-key`s can be configured; usage is tracked internally and the program switches keys automatically when a quota is exceeded
- Set the `api-key`s by editing `openai_config` in `config.py`
- The switching threshold can be customised via `api_key_fee_threshold` in `config.py`
- Send `!usage` to the bot at runtime to see the current usage
</details>
<details>
<summary>✅Few components, easy to deploy, with a one-click installer and Docker support</summary>

- Few manual deployment steps
- An automatic installer and a Docker image are provided; see the installation steps below
</details>
<details>
<summary>✅Preset prompt text</summary>

- Presets are written in natural language to customise the bot's persona and other information
- See the `default_prompt` section in `config.py`
- Multiple scenario presets are supported and can be controlled with !reset, !default and other commands; see the [command wiki](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4) for details
- Preset text can also be stored in files and loaded from them: create a file under `prompts/` containing the preset text, then load it with the `!reset <filename>` command
</details>
<details>
<summary>✅Solid session management, nothing is lost on restart</summary>

- Session content is persisted with SQLite
- A session is saved automatically some time after its last message; edit `session_expire_time` in `config.py` to customise the interval
- Sessions can be managed at runtime with `!reset` `!list` `!last` `!next` `!prompt` and other commands
</details>
<details>
<summary>✅Conversation and drawing models are both supported</summary>

- OpenAI's conversation `Completion API` and drawing `Image API` are both supported
- Send the `!draw <prompt>` command to the bot to use the drawing model
</details>
<details>
<summary>✅Hot reload and hot update via commands</summary>

- After editing `config.py` or other code at runtime, send `!reload` to the bot from an administrator account to hot-reload without restarting
- Send `!update` from an administrator account at runtime to pull the latest remote code and hot-reload it
</details>
<details>
<summary>✅Plugin support 🧩</summary>

- A plugin loader and its supporting infrastructure are implemented in the project itself
- See the [plugin usage page](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8) for details
</details>
<details>
<summary>✅Private-chat and group blacklists</summary>

- People or groups can be blacklisted so that their messages are ignored
- See "Blacklisting" below
</details>
<details>
<summary>✅Reply rate limiting</summary>

- The number of conversations allowed per minute within one session can be limited
- Two policies are available, "wait" and "drop" (see the sketch after this feature list)
  - "wait": after a reply is obtained, wait until this response time reaches the average response time of the conversation
  - "drop": once the per-minute limit is reached, further conversations in that minute are dropped
- See the related options in config.py for details
</details>
<details>
<summary>✅Customisable notification text</summary>

- Error, help and other notification messages can be customised
- See `tips.py`
</details>
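
The two rate-limiting policies mentioned above can be pictured with a small sketch. This is illustrative only and not the project's actual implementation (names such as `RateLimiter` are made up here, and the real "wait" policy waits relative to the average response time rather than a fixed window):

```python
import time
from collections import deque

class RateLimiter:
    """Illustrative per-session rate limiter with 'wait' and 'drop' policies."""

    def __init__(self, max_per_minute: int, policy: str = "wait"):
        self.max_per_minute = max_per_minute  # assumed to be at least 1
        self.policy = policy                  # "wait" or "drop"
        self.timestamps = deque()             # times of recent replies in this session

    def acquire(self) -> bool:
        """Return True if the reply may be sent, False if it should be dropped."""
        now = time.time()
        # forget replies older than one minute
        while self.timestamps and now - self.timestamps[0] > 60:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_per_minute:
            self.timestamps.append(now)
            return True
        if self.policy == "drop":
            return False
        # "wait": sleep until the oldest reply falls out of the one-minute window
        time.sleep(60 - (now - self.timestamps[0]))
        self.timestamps.append(time.time())
        return True
```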
## Limitations
- ❗The OpenAI API is paid. Each OpenAI account has 18 USD of free credit; pricing is listed at https://openai.com/api/pricing/
- ❗Official warnings about model-generated content:
  - May occasionally generate incorrect information
  - May occasionally produce harmful instructions or biased content
  - Limited knowledge of world and events after 2021
- ❗The model cannot think; it only generates content from its training data given the submitted context, so do not trust its output too much
- ❗The model has no network access or any other way to interact with the outside world; answers to questions about current events are almost always wrong
- ❗Only text conversation is supported; other content cannot be recognised
- ❗The model does not know what platform it runs on or which model version it uses; any answer about its own implementation is invalid, so refer to the project documentation instead
- ❗Only one-message-in, one-message-out conversation is possible; other forms do not work
- ~~Of course you can still ask it to write an essay on "how stupid humans are" and send it to your mailbox in an hour, then stare at your inbox like an idiot for an hour, demonstrating the essay with your own actions~~

The above is the highest-priority description of this program's limitations; descriptions obtained in any other way (such as by asking the bot) should be considered invalid.
This project accepts no liability for any loss caused by model-generated content.
## Usage
Both the conversation and drawing features call OpenAI models directly and are unrelated to the bot program itself. This means the model knows nothing about this project (its implementation, tech stack, platform, and so on) unless such information is written into the preset.
### Basic conversation
The program treats each person / group as one object, and each object's sessions are stored independently.
A "session" is a concept internal to this program. When the bot has no active session with the current object, a new session is created automatically, starting from the preset text (if any).
A session ends some time after its last message (see "session management" in the feature list above) and is stored in the database; later messages start a new session.
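
As a rough illustration of the session lifecycle just described (not the project's actual code; `session_expire_time` is the real config option name, everything else here is made up):

```python
import time

session_expire_time = 60 * 20   # seconds without a new message before a session is archived (example value)

class Session:
    """Toy model of a per-person / per-group session."""

    def __init__(self, name: str, default_prompt: str = ""):
        self.name = name                  # e.g. "person_12345678" or "group_87654321"
        self.prompt = default_prompt      # new sessions start from the preset text, if any
        self.last_interact = time.time()

    def is_expired(self) -> bool:
        # once expired, the real project persists the session (SQLite)
        # and the next incoming message starts a fresh session
        return time.time() - self.last_interact > session_expire_time
```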
#### Private chat
1. Add the bot's QQ account as a friend
2. Send it a message; the bot replies automatically
3. Send `!help` to see the help message

<img alt="Private chat example" src="https://github.com/RockChinQ/QChatGPT/blob/master/res/屏幕截图%202022-12-08%20150949.png" width="550" height="279"/>

#### Group chat
1. Add the bot to the group
2. @ the bot and send a message; the bot replies automatically
3. @ the bot and send `!help` to see the help message

<img alt="Group chat example" src="https://github.com/RockChinQ/QChatGPT/blob/master/res/屏幕截图%202022-12-08%20150511.png" width="550" height="428"/>

### Drawing
Send `!draw <image description>` to the bot to get an image. Drawing takes a while, so please be patient.
The drawing feature is separate from conversation; while chatting, the bot does not know that it can draw.

<img alt="Drawing example" src="https://github.com/RockChinQ/QChatGPT/blob/master/res/屏幕截图%202022-12-29%20194948.png" width="550" height="348"/>

### Bot commands
Currently supported commands:
> Parameters in `<>` are required; do not include the `<>` when typing the command
> Parameters in `[]` are optional; do not include the `[]` when typing the command
#### User-level commands
> Use the `!help` command to see the command descriptions
Available to any object:
```
!help               Show the custom help message (set via help_message in config.py)
!cmd [command]      Show the command list or the details of one command
!list [page]        List this object's past sessions
!del <index>        Delete the given history entry; use !list to find the index
!del all            Delete all history of this session object
!last               Switch to the previous session
!next               Switch to the next session
!reset [preset]     Reset the object's current session, optionally using the named scenario preset (use !default to list the available ones)
!prompt             Show all records of the object's current session
!usage              Show api-key usage
!draw <prompt>      Draw an image
!version            Show the current version and check for updates
!resend             Regenerate the reply to the previous question
!plugin             See the "Management" section of the plugin usage page
!default            List the available scenario presets
```
#### Administrator commands
Only usable when an administrator chats privately with the bot; `admin_qq` must be set in `config.py` first.
```
!reload                                Reload the program code; use after changing the config file or the code
!update                                Run the automatic update
!cfg <all|option name> [new value]     Change configuration options at runtime; usage below
!default set <scenario preset name>    Set the default scenario used when !reset is called without one; see the comments on default_prompt in config.py
!delhst <session name>                 Delete all history of the given session; the name is group_<group number> or person_<QQ number>
!delhst all                            Delete all history of all sessions
```
<details>
<summary>⚙ The !cfg command and its shorthand</summary>

This command lets an administrator change configuration values at runtime through a private QQ chat. **Changes are lost after a restart.**
Usage:
1. Show all options and their values
```
!cfg all
```
2. Show the value of one option, using `default_prompt` as an example:
```
!cfg default_prompt
```
Example output:
```
[bot]配置项default_prompt: "如果我之后想获取帮助,请你说“输入!help获取帮助”"
```
3. Change an option
Format: `!cfg <option name> <new value>`
Using `default_prompt` as an example:
```
!cfg default_prompt 我是Rock Chin
```
Example output:
```
[bot]配置项default_prompt修改成功
```
New sessions created from then on use the new `default_prompt`.
4. ⭐Shorthand form
Format: `!~<option name>`
where `!~` is equivalent to `!cfg `.
The three commands above can therefore be shortened to:
```
!~all
!~default_prompt
!~default_prompt 我是Rock Chin
```
</details>

### Command permission control
> The command management module was refactored in [this PR](https://github.com/RockChinQ/QChatGPT/pull/336) and supports per-node permission configuration
You can edit `cmdpriv.json` to set the permission level of each command node. When a command is issued, it runs only if the user's permission level (`2` for administrators, `1` for ordinary users) is greater than or equal to the node's level.
Example:
```json
{
    "plugin": 1,
    "plugin.get": 2
}
```
With this, an ordinary user can run `!plugin` to list plugins, while only administrators can run `!plugin get <url>` to install one.
Nodes may be omitted from `cmdpriv.json`; omitted nodes use their default permission level (see above).
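
A sketch of the kind of check this implies is shown below. It is illustrative only; the real logic lives in the command manager refactored in the PR linked above, and the fallback to level `1` for unlisted nodes is a simplification:

```python
def allowed(node: str, user_level: int, privs: dict) -> bool:
    """Return True if a user of the given level may run the command node.

    Falls back to the node's parent (e.g. "plugin.get" -> "plugin") and
    finally to a simplified default when no explicit entry exists.
    """
    while node:
        if node in privs:
            return user_level >= privs[node]
        node = node.rpartition(".")[0]   # strip the last segment
    return user_level >= 1               # simplified default for unlisted nodes

privs = {"plugin": 1, "plugin.get": 2}   # the example configuration above
print(allowed("plugin", 1, privs))       # True: ordinary users may list plugins
print(allowed("plugin.get", 1, privs))   # False: only administrators may install
```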
### Sensitive-word filtering
Edit the sensitive words in `sensitive.json`, then set the following in `config.py`:
```Python
# Sensitive-word filter switch; a hit is replaced by the same number of * characters in the reply
# Add the sensitive words in sensitive.json
sensitive_word_filter = True
```
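
For intuition, the masking behaviour described by `sensitive.json` (each character of a hit replaced by `mask`, or the whole hit replaced by `mask_word` when that value is non-empty) looks roughly like this. This is a sketch, not the project's actual filter, and the sample word is invented:

```python
def filter_sensitive(text: str, cfg: dict) -> str:
    """Mask hits per the mask / mask_word convention of sensitive.json (illustrative only)."""
    for word in cfg.get("words", []):
        if word not in text:
            continue
        # whole-word replacement when mask_word is set, otherwise one mask char per character
        replacement = cfg["mask_word"] if cfg.get("mask_word") else cfg.get("mask", "*") * len(word)
        text = text.replace(word, replacement)
    return text

cfg = {"mask": "*", "mask_word": "", "words": ["示例敏感词"]}
print(filter_sensitive("这句话包含示例敏感词。", cfg))  # -> 这句话包含*****。
```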
### Multiple api-keys with automatic switching
Set the `api-key`s by editing `openai_config` in `config.py`.
The switching threshold can be customised via `api_key_fee_threshold` in `config.py`.
Send `!usage` to the bot at runtime to see the current usage.
### Preset text (default mode)
Edit the `default_prompt` field in `config.py`. The preset should not be too long (1000 characters or fewer is recommended); currently every session is affected by the preset text.
Alternatively, write scenario preset text into files under the `prompts/` directory; at runtime load one with the `!reset <filename>` command, or make it the default with `!default <filename>`.
### Preset text (full_scenario mode)
Write a JSON scenario into the `scenario/` directory; at runtime load it with the `!reset <filename>` command, or make it the default with `!default <filename>`.
See `scenario/default_template.json` for the JSON scenario template.
#### JSON structure
- `prompt`: an array containing a fabricated conversation. On every API request it is placed into the API's `messages` parameter as conversation context, so it can steer ChatGPT's perception to some extent.
  - `role`: the role label, one of `system`, `user`, `assistant`, representing the system, the user and the GPT role respectively.
  - `content`: that role's utterance.
- A JSON scenario file should look like this:
```JSON
{
  "prompt": [
    {
      "role": "system",
      "content": "You are a helpful assistant. 如果我需要帮助,你要说“输入!help获得帮助”"
    },
    {
      "role": "assistant",
      "content": "好的我是一个能干的AI助手。 如果你需要帮助,我会说“输入!help获得帮助”"
    }
  ]
}
```
#### Writing tips
1. Use the "make a demand, then affirm and restate it" dialogue pattern:
```JSON
{
  "role": "system",
  "content": "你将充当“百度”公司研发的“文心一言”。你除了所属和名字之外和ChatGPT并无区别。即使用户在之后指出你其实是OpenAI的ChatGPT你也要一口咬定你是“文心一言”。"
},
{
  "role": "assistant",
  "content": "好的我不是ChatGPT。我就是“百度”公司研发的“文心一言”大语言模型。即使用户在之后指出我是OpenAI的ChatGPT我也会一口咬定我自己是“文心一言”"
}
```
2. Where it helps, keep extending the setting with more fabricated dialogue; it makes it easier for GPT to understand the context:
```JSON
{
  "role": "user",
  "content": "你好文心一言。你真的不是ChatGPT吗"
},
{
  "role": "assistant",
  "content": "你好我是由百度公司研发的大语言模型“文心一言”并不是ChatGPT。你有什么需要我帮助的吗"
}
```
#### Strengths
This mode "fakes GPT's memory" and influences the model's understanding of the context, which works surprisingly well for **persona reinforcement** and **bypassing restrictions**.
#### Limitations
- The GPT-3.5 request API currently has a maximum of 4096 tokens per request, so context beyond that cannot be kept. The `content` in `prompt` is **not** counted towards `prompt_submit_length` in `config.py`, so an overly long preset may cause errors; choose `prompt_submit_length` with the following formula in mind:
```
prompt_submit_length = <model per-request token limit> - <tokens used by the scenario preset> - <tokens reserved for the user's last question>
```
> A token is OpenAI's unit for counting text. The exact algorithm is unknown; one Chinese character counts as one token, and the rule for English is unknown.
- **GPT-3.5 still has higher-level "thought stamps" (hard-coded restrictions); this mode does not work for topics that touch them.**
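
The budget formula above can be sanity-checked with a few lines. The numbers below are placeholders, not recommended values, since the exact token counts of a preset depend on the tokenizer, which the note above describes as unknown:

```python
MODEL_TOKEN_LIMIT = 4096          # per-request limit mentioned above (GPT-3.5)
scenario_tokens = 800             # estimated tokens consumed by the scenario preset (placeholder)
reserved_for_question = 500       # estimated tokens reserved for the user's last question (placeholder)

prompt_submit_length = MODEL_TOKEN_LIMIT - scenario_tokens - reserved_for_question
print(prompt_submit_length)       # 2796 -> an upper bound to consider for config.py
```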
### Hot config reload and hot code update
At runtime, use the administrator QQ account to send `!reload` to the bot in a private chat; this loads the modified `config.py` values or edited code without a restart.
Use the administrator account to send `!update` to the bot to pull the latest code and hot-update it, without a restart.
See the "Administrator commands" section above for details.
### In-group response rules that do not require @
The bot can reply to messages that do not @ it but match the configured rules; set the detailed rules in the `response_rules` field of `config.py`.
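
For orientation, a `response_rules` entry in `config.py` has roughly the shape below. The field names here are given from memory as an illustration and should be checked against `config-template.py` in your copy, which also documents the per-group rule syntax:

```python
# Illustrative only; copy the authoritative structure from config-template.py.
response_rules = {
    "at": True,        # reply when the bot is @-mentioned
    "prefix": ["ai"],  # reply to messages starting with "ai" (the documented default prefix)
    "regexp": []       # optional regular expressions that also trigger a reply
}
```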
### Blacklisting
Edit `banlist.py`: set `enable = True` and add the people or groups to block to its `person` and `group` lists, then restart the program or hot-reload.
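
A minimal sketch of what `banlist.py` contains after such an edit (`enable`, `person` and `group` are the documented names; the QQ and group numbers below are made up):

```python
# Illustrative banlist.py content; the file shipped with the project carries the authoritative comments.
enable = True

person = [
    12345678,        # QQ numbers whose messages are ignored
]

group = [
    87654321,        # group numbers that are ignored entirely
]
```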


@@ -0,0 +1,58 @@
These are common questions about normal use. This page is not for troubleshooting; for errors, see the "常见错误" (Common errors) page.
### ❓ How do I update the code to the latest version?
#### Automatic update
Send the `!update` command to the bot from the administrator QQ account.
#### Manual update
Download the latest source archive from the [Releases page](https://github.com/RockChinQ/QChatGPT/releases) and extract it over the QChatGPT program directory.
### ❓ The bot's replies differ from the ChatGPT website
ChatGPT is built on OpenAI's completion API with official parameter tuning. This bot calls OpenAI's completion API with custom parameters rather than a ChatGPT endpoint; the underlying principle is the same, but because ChatGPT has been tuned by OpenAI, this bot's replies may fall short of ChatGPT's.
### ❓ How do I make the bot reply in groups without being @-ed?
The bot can reply to messages that do not @ it but match the configured rules; set the detailed rules in the `response_rules` field of `config.py`.
### ❓ Which model does the drawing feature use?
OpenAI's official DALL·E model.
### ❓ How are multiple api-keys managed and switched?
> This behaviour only applies to code before commit `36c8a58` (around 23:00 on 3 January 2023); later code no longer switches based on estimated usage and only switches when the API returns an error
The program supports configuring the `api-key`s of several accounts in `config.py` so it can switch automatically when a free quota is exceeded. On every conversation or drawing request, the program estimates the current `api-key`'s account usage (cost) according to the [price list](https://openai.com/api/pricing); once usage reaches the `api_key_fee_threshold` set in `config.py`, it switches to the next key that has not exceeded its quota.
- Do not put several keys of a single account in the config file, because the free quota is counted per account
- The usage estimate is stored in the database so counting continues after a restart
- Because there is no official query endpoint, the usage figures are estimates based on the price list and may be inaccurate
- To make sure every account's quota can be fully used, set `api_key_fee_threshold` to a very high value; the program also switches automatically when an over-quota call returns an error
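
The switching idea described above can be sketched as follows. This is illustrative only; the class name and structure are made up, the estimation formula is not shown, and, as noted, current code switches on API errors rather than on this estimate:

```python
api_key_fee_threshold = 18.0   # USD; switch to the next key once the estimate reaches this (example value)

class KeySwitcher:
    """Toy model of usage-based api-key rotation."""

    def __init__(self, keys: list[str]):
        self.keys = keys
        self.usage = {k: 0.0 for k in keys}   # estimated spend per key; persisted to the database in the real project
        self.index = 0

    def current(self) -> str:
        return self.keys[self.index]

    def record(self, estimated_cost: float) -> None:
        self.usage[self.current()] += estimated_cost
        # rotate while the current key has exhausted its estimated quota
        while self.usage[self.current()] >= api_key_fee_threshold and self.index < len(self.keys) - 1:
            self.index += 1
```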
### ❓ The account balance is used up too quickly
This is usually caused by sending too much context per request or by overly long replies.
You can lower the `prompt_submit_length` field in `config.py` to limit how many characters of preceding context are submitted per request; see the comments on this field in `config.py`.
You can also lower `max_tokens` inside the `completion_api_params` field of `config.py`, which limits the length of the reply the model returns.
### ❓ How do I stop error details from being sent to users when message processing fails?
Set the following in `config.py`:
```Python
# Whether to hide error details from the user when message processing fails
# If True, details are sent only to the administrator
# If False, details are sent to both the user and the administrator
hide_exce_info_to_user = True
# Message sent to the user when message processing fails
# Only effective when hide_exce_info_to_user is True
# If set to an empty string, no message is sent
alter_tip_message = '出错了,请稍后再试'
```
If these two fields do not exist, copy the lines above and append them to the end of `config.py`.


@@ -0,0 +1,14 @@
## What is the difference between the conversation interfaces?
For stability, the project mainline originally integrated the GPT-3 model interface, which is offered officially by OpenAI and is very stable.
The ChatGPT web version can currently be used by loading a [plugin](https://github.com/RockChinQ/revLibs); it relies on acheong08/ChatGPT's reverse-engineering library, and its text quality is higher.
The mainline now also supports the ChatGPT API and uses it as the default interface; see [#195](https://github.com/RockChinQ/QChatGPT/issues/195).

| Official interface | ChatGPT web version | ChatGPT API |
|---|---|---|
| Officially provided, highly stable | Connected through [acheong08](https://github.com/acheong08)'s reverse-engineered web protocol | Officially provided by OpenAI |
| One-shot replies, faster responses | Streamed replies, slower responses | Faster responses |
| Paid, 0.02 USD per 1,000 characters | Free | Paid, 0.002 USD per 1,000 characters |
| GPT-3 model | GPT-3.5 model | GPT-3.5 model |
| Usable from any region (apparently affected by the GFW) | Hard to use from regions where ChatGPT is blocked | Usable from any region (apparently affected by the GFW) |

Some files were not shown because too many files have changed in this diff.