Compare commits

...

279 Commits

Author SHA1 Message Date
RockChinQ
a10d3213fd chore: release v2.6.10 2024-01-19 15:50:15 +08:00
RockChinQ
f52a0eb02f perf: do not use a proxy when connecting to go-cqhttp 2024-01-19 15:49:42 +08:00
Junyan Qin
1ea8da69a2 Merge pull request #667 from RockChinQ/chore/remove-legacy-code
Chore: remove outdated compatibility-handling code
2024-01-18 01:02:32 +08:00
RockChinQ
5bbc38a7a3 chore: remove outdated compatibility-handling code 2024-01-18 00:52:29 +08:00
RockChinQ
aa433bd5ab fix: bug when initializing the text-to-image module 2024-01-17 20:07:35 +08:00
RockChinQ
2c5933da0b chore: remove code no longer used in the updater 2024-01-15 22:35:14 +08:00
RockChinQ
77bc6fbf59 fix(list): failure when listing a nonexistent page 2024-01-15 21:44:53 +08:00
Junyan Qin
701cb7be40 Merge pull request #661 from RockChinQ/perf/audit-v2
Feat: optimize the v2 audit API call logic
2024-01-12 20:18:30 +08:00
RockChinQ
ab8d77c968 feat: remove the v1 audit API call logic 2024-01-12 20:06:18 +08:00
RockChinQ
6c03fe678a feat: allow users to disable data reporting 2024-01-12 17:20:39 +08:00
RockChinQ
41b30238c3 chore: rename "指令" to "命令" (command) throughout 2024-01-12 16:48:47 +08:00
RockChinQ
aa768459c0 perf: improve output when a config item's target value is invalid 2024-01-12 16:29:04 +08:00
RockChinQ
28014512f7 fix(cconfig): wrong handling when the cfg command cannot find a config entry 2024-01-12 16:25:10 +08:00
RockChinQ
f9a99eed66 chore: remove models deprecated by OpenAI (#658) 2024-01-12 14:48:49 +08:00
Junyan Qin
461b574e09 Merge pull request #659 from RockChinQ/fix/resend-command-failed
Fix: resend command not working
2024-01-12 14:40:07 +08:00
RockChinQ
36c192ff6b fix: resend command not working 2024-01-12 14:31:29 +08:00
RockChinQ
101625965c chore: remove references to credit 2024-01-12 10:18:10 +08:00
RockChinQ
83177a3416 chore: remove the deprecated credit.py module 2024-01-12 10:09:53 +08:00
Junyan Qin
c3904786e1 doc(README.md): add links 2024-01-10 23:11:02 +08:00
RockChinQ
b31c34905a test: upload coverage automatically 2023-12-28 16:14:54 +08:00
RockChinQ
41cbe91870 doc(README): add test coverage badge 2023-12-28 16:03:55 +08:00
Junyan Qin
872b16b779 ci: remove comments 2023-12-27 16:00:18 +00:00
Junyan Qin
9f3cc9c293 test: fix incorrect quotes 2023-12-27 15:56:52 +00:00
Junyan Qin
2d148c4970 test: handle multi-line response values 2023-12-27 15:52:12 +00:00
Junyan Qin
0869b57741 test: install jq 2023-12-27 15:48:26 +00:00
Junyan Qin
af225aa18f test: fix incorrect logic 2023-12-27 15:44:24 +00:00
Junyan Qin
06f3c5d32b test: fix branch-name retrieval 2023-12-27 15:39:08 +00:00
Junyan Qin
4e71a08b57 test: improve PR branch retrieval for issue_comment events 2023-12-27 15:35:25 +00:00
Junyan Qin
bf5ebc9245 test: fix incorrect trigger name 2023-12-27 15:23:53 +00:00
Junyan Qin
fba81582ab test: improve trigger conditions 2023-12-27 15:16:07 +00:00
Junyan Qin
b4645168f9 Merge pull request #649 from RockChinQ/test/systematical-test
Test: integrate qcg-tester
2023-12-27 22:50:35 +08:00
Junyan Qin
d00c68e329 test: allow manual triggering 2023-12-27 14:49:00 +00:00
Junyan Qin
cb636b96bf test: integrate qcg-tester 2023-12-27 14:47:02 +00:00
GitHub Actions
12468b5b15 Update override-all.json 2023-12-23 02:32:13 +00:00
RockChinQ
6a5414b5fd chore: change default prompt_submit_length to 3072 2023-12-23 10:31:56 +08:00
RockChinQ
db51fd0ad7 chore: release v2.6.9 2023-12-22 18:34:35 +08:00
Junyan Qin
256bc4dc1e Merge pull request #644 from RockChinQ/feat/online-data-analysis
Feat: v2 usage statistics API
2023-12-22 18:33:50 +08:00
RockChinQ
d2bd6e23b6 chore: remove debug output 2023-12-22 14:36:52 +08:00
RockChinQ
bb12b48887 feat: usage.query completed 2023-12-22 12:38:27 +08:00
RockChinQ
a58e55daf3 chore: update issue templates 2023-12-22 11:11:31 +08:00
RockChinQ
23a05fe5b0 chore: improve issue templates 2023-12-22 11:03:25 +08:00
RockChinQ
3a63630068 feat: account_id setting logic 2023-12-21 18:51:10 +08:00
RockChinQ
565066bbcd feat: plugin-related reporting APIs 2023-12-21 18:46:48 +08:00
RockChinQ
c10f72cf4c feat: content-function invocation reporting 2023-12-21 18:36:02 +08:00
RockChinQ
af8c21f3d4 feat: improve plugin-event invocation reporting 2023-12-21 18:19:04 +08:00
RockChinQ
6f6c3af302 feat: plugin-event trigger reporting 2023-12-21 18:04:16 +08:00
RockChinQ
61a47808c8 chore: typo 2023-12-21 17:35:20 +08:00
RockChinQ
e02765bf95 feat: main.announcement endpoint 2023-12-21 17:11:45 +08:00
RockChinQ
b69f193a3e feat: main.update endpoint completed 2023-12-21 17:03:58 +08:00
RockChinQ
7c6526d1ea feat: switch to synchronous calls 2023-12-21 16:48:50 +08:00
RockChinQ
b8776fba65 chore: stash 2023-12-21 16:44:21 +08:00
RockChinQ
38357dd68d perf: simplify startup output 2023-12-21 16:28:45 +08:00
RockChinQ
d1c2453310 feat: initialize the central-server API client on startup 2023-12-21 16:21:24 +08:00
RockChinQ
ebc1ac50c6 doc: update README 2023-12-21 10:22:53 +08:00
RockChinQ
892610872f chore: update the submit-plugin template 2023-12-21 10:20:19 +08:00
RockChinQ
a990a40850 chore: update issue templates 2023-12-21 10:19:02 +08:00
RockChinQ
3f29464dbd feat: identifier generator module 2023-12-20 22:26:51 +08:00
RockChinQ
998d07f3b4 doc(wiki): add migration notice 2023-12-20 22:10:19 +08:00
Junyan Qin
949bc6268c Update README.md 2023-12-20 22:05:12 +08:00
Junyan Qin
2c03e5a77e doc(README): replace screenshots with images from the homepage 2023-12-20 21:54:20 +08:00
Junyan Qin
aad62dfa6f Merge pull request #642 from RockChinQ/doc/document-replacing
Doc: replace the main documentation
2023-12-20 21:47:11 +08:00
Junyan Qin
08e27d07ea Update README.md 2023-12-20 21:44:08 +08:00
Junyan Qin
1fddd244e5 Update README.md 2023-12-20 21:43:48 +08:00
Junyan Qin
d85b4b1cf0 doc(README.md): replace the logo with the link from the homepage 2023-12-20 21:43:03 +08:00
RockChinQ
09fca2c292 doc(README): apply changes 2023-12-20 21:34:44 +08:00
RockChinQ
feda3d18fb doc: adjust homepage layout 2023-12-20 17:57:28 +08:00
Junyan Qin
eb6e5d0756 Merge pull request #640 from RockChinQ/fix/cfg-command
Fix: cfg command unusable
2023-12-19 17:40:33 +08:00
RockChinQ
7386daad28 fix: cfg command unusable (#638) 2023-12-19 17:37:40 +08:00
RockChinQ
3f290b2e1a feat: command replies no longer go through the sensitive-word filter 2023-12-18 16:31:45 +08:00
RockChinQ
43519ffe80 doc(wiki): add plugin API discussion link 2023-12-17 23:25:56 +08:00
RockChinQ
c8bb3d612a chore: release v2.6.8 2023-12-17 23:00:25 +08:00
Junyan Qin
bc48b7e623 Merge pull request #636 from RockChinQ/feat/google-gemini
Feat: support the Google Gemini Pro model
2023-12-17 22:59:34 +08:00
RockChinQ
d59d5797f6 doc(README.md): remove PaLM-2 notes 2023-12-17 22:55:06 +08:00
RockChinQ
11d3c1e650 doc(README.md): add model notes 2023-12-17 22:53:50 +08:00
RockChinQ
8cfd9e6694 chore: add config option notes 2023-12-17 22:48:48 +08:00
RockChinQ
d3f401c54d feat: support Google Gemini via one-api 2023-12-17 22:36:30 +08:00
Junyan Qin
a889170d1a Merge pull request #634 from zuo-shi-yun/master
Add the AutoSwitchProxy plugin
2023-12-17 16:19:47 +08:00
zuo-shi-yun
459e9f9322 Add the AutoSwitchProxy plugin 2023-12-17 13:15:33 +08:00
Junyan Qin
707afdcdf9 Update bug-report.yml 2023-12-15 10:38:04 +08:00
RockChinQ
ad1cf379c4 doc: remove announcement 2023-12-11 21:57:57 +08:00
RockChinQ
582277fe2d doc: update screenshots 2023-12-11 21:56:00 +08:00
RockChinQ
14b9f814c7 chore: release v2.6.7 2023-12-09 22:25:44 +08:00
Junyan Qin
b11e5d99b0 Merge pull request #628 from RockChinQ/fix/image-generating
Fix: drawing command incompatible with openai>=1.0
2023-12-09 22:22:42 +08:00
GitHub Actions
9590718da4 Update override-all.json 2023-12-09 14:17:55 +00:00
RockChinQ
8c2b53cffb fix: drawing command incompatible with openai>=1.0 2023-12-09 22:17:26 +08:00
Junyan Qin
5a85c073a8 Update README.md 2023-12-08 17:03:16 +08:00
Junyan Qin
2d2fbd0a8b fix: config file cannot be created on first startup 2023-12-08 07:27:23 +00:00
Junyan Qin
1b25a05122 Update README.md 2023-12-06 19:29:31 +08:00
RockChinQ
709cc1140b chore: publish announcement 2023-12-06 19:27:04 +08:00
Junyan Qin
1730962636 Merge pull request #625 from zuo-shi-yun/master
Add the watchdog plugin
2023-12-03 10:03:35 +08:00
zuo-shi-yun
a1de4f6f7a Add the watchdog plugin 2023-12-02 23:58:18 +08:00
Junyan Qin
a5ccda5ed6 doc: update NOTE and WARNING formatting 2023-12-01 02:28:47 +00:00
Junyan Qin
f035e654ba Merge pull request #623 from zuo-shi-yun/master
Add the discountAssistant plugin
2023-12-01 10:04:49 +08:00
zuo-shi-yun
151d3e9f66 Add the discountAssistant plugin 2023-11-30 23:53:43 +08:00
Junyan Qin
c79207e197 Merge pull request #618 from RockChinQ/refactor/config-manager
Refactor: manage config files uniformly via the config manager
2023-11-27 00:02:52 +08:00
RockChinQ
f9d461d9a1 feat: remove outdated config-module handling logic 2023-11-27 00:00:22 +08:00
RockChinQ
3e17bbb90f refactor: adapt to the config manager's read style 2023-11-26 23:58:06 +08:00
RockChinQ
549a7eff7f refactor(qqbot): adapt to the config manager 2023-11-26 23:04:14 +08:00
RockChinQ
db2e366014 feat: implement the config-file manager and adapt references in main.py 2023-11-26 22:46:27 +08:00
RockChinQ
26e4215054 feat: new override logic 2023-11-26 22:25:54 +08:00
RockChinQ
5f07ff8145 refactor: startup flow is now asynchronous 2023-11-26 22:19:36 +08:00
GitHub Actions
e396ba4649 Update override-all.json 2023-11-26 13:54:00 +00:00
RockChinQ
d1dff6dedd feat(main.py): move config loading into the start function 2023-11-26 21:53:35 +08:00
RockChinQ
419354cb07 feat: add an exit code for coverage testing 2023-11-26 17:42:25 +08:00
RockChinQ
7708eaa82c perf: add type hints to methods in context.py 2023-11-26 17:33:13 +08:00
RockChinQ
9fccf84987 chore: release v2.6.6 2023-11-22 19:20:47 +08:00
Junyan Qin
0f59788184 Merge pull request #610 from RockChinQ/feat/no-reload-after-updating
Feat: no longer hot-reload automatically after updating
2023-11-22 19:19:22 +08:00
RockChinQ
0ad52bcd3f perf: improve output wording 2023-11-22 19:17:23 +08:00
RockChinQ
d7d710ec07 feat: no longer hot-reload automatically after updating 2023-11-22 19:08:33 +08:00
GitHub Actions
75a9a3e9af Update override-all.json 2023-11-22 11:06:11 +00:00
RockChinQ
70503bedb7 feat: forced delay is now disabled by default 2023-11-22 19:05:51 +08:00
Junyan Qin
7890eac3f8 Merge pull request #608 from RockChinQ/fix/reverse-proxy-invalid
Fix: reverse-proxy setting not taking effect
2023-11-21 15:45:49 +08:00
RockChinQ
e15f3595b3 fix: reverse-proxy setting not taking effect 2023-11-21 15:44:07 +08:00
RockChinQ
eebd6a6ba3 chore: release v2.6.5 2023-11-14 23:16:02 +08:00
Junyan Qin
0407f3e4ac Merge pull request #599 from RockChinQ/refactor/modern-openai-api-style
Refactor: change how scenario presets are injected
2023-11-14 21:36:25 +08:00
RockChinQ
5abca84437 debug: add request-parameter output 2023-11-14 21:35:02 +08:00
GitHub Actions
d2776cc1e6 Update override-all.json 2023-11-14 13:06:22 +00:00
RockChinQ
9fe0ee2b77 refactor: inject the default prompt using the system role 2023-11-14 21:06:00 +08:00
Junyan Qin
b68daac323 Merge pull request #598 from RockChinQ/perf/import-style
Refactor: change import style
2023-11-13 22:00:27 +08:00
RockChinQ
665de5dc43 refactor: change import style 2023-11-13 21:59:23 +08:00
RockChinQ
e3b280758c chore: publish update announcement 2023-11-13 18:03:26 +08:00
RockChinQ
374ae25d9c fix: incorrect exception handling after auto-resolving dependencies at startup 2023-11-12 23:16:09 +08:00
RockChinQ
c86529ac99 feat: no longer auto-update the websockets dependency at startup 2023-11-12 22:59:49 +08:00
RockChinQ
6309f1fb78 chore(deps): switch to our own fork yiri-mirai-rc 2023-11-12 20:31:07 +08:00
RockChinQ
c246fb6d8e chore: release v2.6.4 2023-11-12 14:42:48 +08:00
RockChinQ
ec6c041bcf ci(Dockerfile): fix dependency installation issue 2023-11-12 14:42:07 +08:00
RockChinQ
2da5a9f3c7 ci(Dockerfile): explicitly update the httpcore, httpx and openai libraries 2023-11-12 14:18:42 +08:00
Junyan Qin
4e0df52d7c Merge pull request #592 from RockChinQ/fix/plugin-downloading
Feat: install and update plugins via the GitHub API
2023-11-12 14:07:52 +08:00
RockChinQ
71b8bf13e4 fix: plugin loading bug 2023-11-12 13:52:04 +08:00
RockChinQ
a8b1e6ce91 ci: test 2023-11-12 12:05:04 +08:00
RockChinQ
1419d7611d ci(cmdpriv): local tests passing 2023-11-12 12:03:52 +08:00
RockChinQ
89c83ebf20 fix: wrong variable in null check 2023-11-12 11:30:10 +08:00
RockChinQ
76d7db88ea feat: plugin updating based on metadata records 2023-11-11 23:17:28 +08:00
RockChinQ
67a208bc90 feat: add a plugin-metadata operations module 2023-11-11 17:38:52 +08:00
RockChinQ
acbd55ded2 feat: plugin installation now downloads source code directly 2023-11-10 23:01:56 +08:00
Junyan Qin
11a240a6d1 Merge pull request #591 from RockChinQ/feat/new-model-names
Feat: update the model index
2023-11-10 21:23:22 +08:00
RockChinQ
97c85abbe7 feat: update the model index 2023-11-10 21:16:33 +08:00
RockChinQ
06a0cd2a3d chore: publish compatibility-issue announcement 2023-11-10 12:20:29 +08:00
GitHub Actions
572b215df8 Update override-all.json 2023-11-10 04:04:45 +00:00
RockChinQ
2c542bf412 chore: no longer upgrade dependencies at startup by default 2023-11-10 12:04:25 +08:00
RockChinQ
1576ba7a01 chore: release v2.6.3 2023-11-10 12:01:20 +08:00
Junyan Qin
45e4096a12 Merge pull request #587 from RockChinQ/hotfix/openai-1.0-adaptation
Feat: adapt to openai>=1.0.0
2023-11-10 11:49:20 +08:00
GitHub Actions
8a1d4fe287 Update override-all.json 2023-11-10 03:47:30 +00:00
RockChinQ
98f880ebc2 chore: group replies no longer quote the original message by default 2023-11-10 11:47:10 +08:00
RockChinQ
2b852853f3 feat: adapt completion and chat_completions 2023-11-10 11:31:14 +08:00
RockChinQ
c7a9988033 feat: set the forward proxy the new way 2023-11-10 10:54:03 +08:00
RockChinQ
c475eebe1c chore: no longer pin the openai version 2023-11-10 10:14:11 +08:00
RockChinQ
0fe7355ae0 hotfix: adapt to openai>=1.0.0 2023-11-10 10:13:50 +08:00
Junyan Qin
57de96e3a2 chore(requirements.txt): pin openai to 0.28.1 2023-11-10 09:31:27 +08:00
Junyan Qin
70571cef50 Update README.md 2023-10-02 17:31:08 +08:00
Junyan Qin
0b6deb3340 Update README.md 2023-10-02 17:23:36 +08:00
Junyan Qin
dcda85a825 Merge pull request #580 from RockChinQ/dependabot/pip/openai-approx-eq-0.28.1
chore(deps): update openai requirement from ~=0.28.0 to ~=0.28.1
2023-10-02 16:10:37 +08:00
dependabot[bot]
9d3bff018b chore(deps): update openai requirement from ~=0.28.0 to ~=0.28.1
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.28.0...v0.28.1)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-02 08:09:30 +00:00
RockChinQ
051376e0d2 Release v2.6.1 2023-09-28 12:18:46 +00:00
Junyan Qin
a113785211 Merge pull request #578 from RockChinQ/fix/blocked-audit-upload
[Fix] blocking send of audit report data
2023-09-28 20:17:26 +08:00
RockChinQ
3f4ed4dc3c fix: blocking send of audit report data 2023-09-28 12:16:30 +00:00
Junyan Qin
ac80764fae Merge pull request #577 from RockChinQ/doc/deadlinks-in-wiki
[Doc] fix dead links in the wiki
2023-09-28 20:04:01 +08:00
RockChinQ
e43afd4891 doc: fix dead links in the wiki 2023-09-28 12:03:27 +00:00
RockChinQ
f1aea1d495 doc: uniformly rename "指令" to "命令" (command) 2023-09-28 11:46:33 +00:00
GitHub Actions
0e2a5db104 Update override-all.json 2023-09-26 16:10:22 +00:00
Junyan Qin
3a4c9771fa feat(config): change the default timeout to two minutes 2023-09-27 00:09:58 +08:00
RockChinQ
f4f8ef9523 ci: standardize workflows on two-space indentation 2023-09-13 08:27:47 +00:00
RockChinQ
b9ace69a72 Release v2.6.0 2023-09-13 08:13:24 +00:00
RockChinQ
aef0b2a26e ci: fix the GITHUB_REF check logic 2023-09-13 08:12:46 +00:00
RockChinQ
f7712d71ec feat(pkgmgr): use the Tsinghua mirror for pip operations 2023-09-13 07:54:53 +00:00
RockChinQ
e94b44e3b8 chore: update .gitignore 2023-09-13 07:22:12 +00:00
Junyan Qin
524e863c78 Merge pull request #567 from ruuuux/patch-1
Add the WikipediaSearch plugin
2023-09-13 11:57:54 +08:00
ruuuux
bbc80ac901 Add the WikipediaSearch plugin 2023-09-13 11:55:11 +08:00
GitHub Actions
f969ddd6ca Update override-all.json 2023-09-13 03:09:55 +00:00
RockChinQ
1cc9781333 chore(config): add comments explaining One API 2023-09-13 03:09:35 +00:00
Junyan Qin
a609801bae Merge pull request #551 from flashszn/master
Add domestic LLMs supported by the one-api project
2023-09-13 11:00:21 +08:00
RockChinQ
d8b606d372 doc(README.md): add One API support announcement 2023-09-13 02:45:04 +00:00
RockChinQ
572a440e65 doc(README.md): add notes on One API 2023-09-13 02:41:22 +00:00
RockChinQ
6e4eeae9b7 doc: add comments on one-api models 2023-09-13 02:34:11 +00:00
Shi Zhenning
1a73669df8 Add domestic models that conform to the one-api project's interface
one-api is an API aggregation project: through its reverse proxy, domestic LLMs can be called exactly like the GPT-series /v1/completion interface, changing only the model name.
2023-09-13 02:34:11 +00:00
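Because one-api exposes an OpenAI-compatible endpoint, "changing only the model name" means the request body keeps the same shape for every backend. A minimal sketch of that idea (the model names and prompt here are illustrative placeholders, not values taken from this repository):

```python
# Sketch: the OpenAI-compatible request body is identical for GPT and for a
# domestic model routed through one-api; only the "model" field differs.
import json


def chat_payload(model: str, prompt: str) -> str:
    """Build an OpenAI-style /v1/chat/completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })


# Same payload shape, different model name (hypothetical examples):
print(chat_payload("gpt-3.5-turbo", "hello"))
print(chat_payload("qwen-turbo", "hello"))  # swapped-in domestic model
```

In practice the only other change is pointing the client's base URL at the one-api gateway instead of api.openai.com.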
RockChinQ
91ebaf1122 doc(README.md): add content 2023-09-12 13:26:41 +00:00
RockChinQ
46703eb906 doc(README.md): Docker deployment notes 2023-09-12 13:09:42 +00:00
Junyan Qin
b9dd9d5193 Merge pull request #566 from RockChinQ/docker-deployment
[CI] Docker deployment best practices
2023-09-12 21:06:53 +08:00
RockChinQ
884481a4ec doc(README.md): image badge 2023-09-12 13:04:18 +00:00
RockChinQ
9040b37a63 chore: install the PyYaml dependency by default 2023-09-12 12:53:16 +00:00
RockChinQ
99d47b2fa2 doc: revise the Docker deployment guide 2023-09-12 12:53:03 +00:00
RockChinQ
6575359a94 doc: add a Docker deployment guide 2023-09-12 12:51:06 +00:00
RockChinQ
a2fc726372 deploy: add docker-compose.yaml 2023-09-12 12:50:49 +00:00
RockChinQ
3bfce8ab51 ci: improve the Docker image build script 2023-09-12 10:21:40 +00:00
RockChinQ
ff9a9830f2 chore: update requirements.txt 2023-09-12 10:21:19 +00:00
RockChinQ
e2b59e8efe ci: update the Dockerfile 2023-09-12 10:21:03 +00:00
Junyan Qin
04dad9757f Merge pull request #565 from RockChinQ/docker-image-test
[CI] sync the Docker deployment scripts
2023-09-12 16:34:38 +08:00
Junyan Qin
75ea1080ad Merge pull request #351 from q123458384/patch-2
Create build_docker_image.yml
2023-09-12 15:57:44 +08:00
Junyan Qin
e25b064319 doc(README.md): add links for the QQ group numbers 2023-09-10 23:22:38 +08:00
Junyan Qin
5d0dbc40ce doc(README.md): add community user manual link 2023-09-10 23:17:22 +08:00
Junyan Qin
beae8de5eb Merge pull request #563 from RockChinQ/dependabot/pip/dulwich-approx-eq-0.21.6
chore(deps): update dulwich requirement from ~=0.21.5 to ~=0.21.6
2023-09-04 17:23:52 +08:00
dependabot[bot]
c4ff30c722 chore(deps): update dulwich requirement from ~=0.21.5 to ~=0.21.6
Updates the requirements on [dulwich](https://github.com/dulwich/dulwich) to permit the latest version.
- [Release notes](https://github.com/dulwich/dulwich/releases)
- [Changelog](https://github.com/jelmer/dulwich/blob/master/NEWS)
- [Commits](https://github.com/dulwich/dulwich/compare/dulwich-0.21.5...dulwich-0.21.6)

---
updated-dependencies:
- dependency-name: dulwich
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-04 09:09:35 +00:00
Junyan Qin
6f4ecb101b Merge pull request #562 from RockChinQ/dependabot/pip/openai-approx-eq-0.28.0
chore(deps): update openai requirement from ~=0.27.9 to ~=0.28.0
2023-09-04 17:08:47 +08:00
dependabot[bot]
9f9b0ef846 chore(deps): update openai requirement from ~=0.27.9 to ~=0.28.0
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.9...v0.28.0)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-04 08:06:36 +00:00
RockChinQ
de6957062c chore(config): change the public reverse-proxy address 2023-09-02 19:48:45 +08:00
Junyan Qin
0a9b43e6fa doc(README.md): community group number 2023-09-01 08:49:47 +08:00
Junyan Qin
5b0edd9937 Merge pull request #559 from oliverkirk-sudo/master
Add new plugins
2023-08-31 18:14:52 +08:00
oliverkirk-sudo
8a400d202a Update README.md
Add plugins
2023-08-31 18:10:46 +08:00
RockChinQ
5a1e9f7fb2 doc(README.md): change badge style 2023-08-31 08:40:11 +00:00
RockChinQ
e03af75cf8 doc: update the deployment section 2023-08-31 02:26:13 +00:00
RockChinQ
0da4919255 doc: tidy README.md formatting 2023-08-31 02:23:30 +00:00
RockChinQ
914e566d1f doc(README.md): update wiki links 2023-08-30 08:46:17 +00:00
RockChinQ
6ec2b653fe doc(wiki): number the wiki pages 2023-08-30 08:41:59 +00:00
RockChinQ
ba0a088b9c doc(wiki): number the FAQ entries 2023-08-30 08:38:44 +00:00
RockChinQ
478e83bcd9 ci: update the wiki sync workflow 2023-08-30 08:38:26 +00:00
RockChinQ
386124a3b9 doc(wiki): page numbering 2023-08-30 08:36:30 +00:00
RockChinQ
ff5e7c16d1 doc(wiki): fix typos in plugin docs 2023-08-30 08:34:03 +00:00
RockChinQ
7ff7a66012 doc: update the gpt4free documentation 2023-08-29 14:42:44 +08:00
Junyan Qin
c99dfb8a86 Merge pull request #557 from RockChinQ/dependabot/pip/openai-approx-eq-0.27.9
chore(deps): update openai requirement from ~=0.27.8 to ~=0.27.9
2023-08-28 16:29:55 +08:00
dependabot[bot]
10f9d4c6b3 chore(deps): update openai requirement from ~=0.27.8 to ~=0.27.9
Updates the requirements on [openai](https://github.com/openai/openai-python) to permit the latest version.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Commits](https://github.com/openai/openai-python/compare/v0.27.8...v0.27.9)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-28 08:29:19 +00:00
Junyan Qin
d347813411 Merge pull request #556 from oliverkirk-sudo/master
Update README.md
2023-08-24 17:06:29 +08:00
oliverkirk-sudo
7a93898b3f Update README.md 2023-08-24 17:00:18 +08:00
Junyan Qin
c057ea900f Merge pull request #548 from oliverkirk-sudo/master
Update plugin info
2023-08-15 09:35:55 +08:00
oliverkirk-sudo
512266e74f Update plugin info 2023-08-14 23:21:51 +08:00
RockChinQ
e36aee11c7 doc(README.md): update README.md 2023-08-14 19:12:39 +08:00
RockChinQ
97421299f5 doc(README.md): notes on Claude and Bard 2023-08-14 19:11:48 +08:00
Junyan Qin
bc41e5aa80 Update README.md 2023-08-14 16:57:55 +08:00
RockChinQ
2fa30e7def doc(README.md): badge formatting 2023-08-14 16:56:50 +08:00
RockChinQ
1c6a7d9ba5 doc(README.md): add a group-number badge 2023-08-14 16:09:11 +08:00
RockChinQ
47435c42a5 doc(README.md): update the video tutorial 2023-08-12 12:26:00 +08:00
Junyan Qin
39a1b421e6 doc(CONTRIBUTING.md): add field-usage conventions 2023-08-09 20:12:46 +08:00
Junyan Qin
b5edf2295b doc(CONTRIBUTING.md): add code style guidelines 2023-08-08 20:14:00 +08:00
RockChinQ
fb650a3d7a chore: add a reverse-proxy address to config.py 2023-08-07 11:43:28 +08:00
Junyan Qin
521541f311 Merge pull request #534 from RockChinQ/doc-add-gif
[Doc] add demo GIFs
2023-08-06 17:20:49 +08:00
2023-08-06 17:20:49 +08:00
Junyan Qin
7020abadbf Add files via upload 2023-08-06 17:19:54 +08:00
Junyan Qin
d95fb3b5be Delete webwlkr-demo.gif 2023-08-06 17:18:58 +08:00
Junyan Qin
3e524dc790 Add files via upload 2023-08-06 17:18:00 +08:00
Junyan Qin
a64940bff8 Update README.md 2023-08-06 17:13:10 +08:00
Junyan Qin
c739290f0b Add files via upload 2023-08-06 17:07:22 +08:00
RockChinQ
af292fe050 Release v2.5.2 2023-08-06 14:58:13 +08:00
Junyan Qin
634c7fb302 Merge pull request #533 from RockChinQ/perf-function-call-process
[Perf] optimize the underlying function-calling logic
2023-08-06 14:52:01 +08:00
RockChinQ
33efb94013 feat: ignore nonexistent commands when applying cmdpriv 2023-08-06 14:50:22 +08:00
GitHub Actions
549e4dc02e Update override-all.json 2023-08-06 06:41:00 +00:00
RockChinQ
3d40909c02 feat: trace_function_calls no longer enabled by default 2023-08-06 14:40:35 +08:00
RockChinQ
1aef81e38f perf: improve the error message for network issues 2023-08-06 12:17:04 +08:00
RockChinQ
1b0ae8da58 refactor: rename session append to query 2023-08-05 22:00:32 +08:00
GitHub Actions Bot
7979a8e97f Update cmdpriv-template.json 2023-08-05 13:52:03 +00:00
RockChinQ
080e53d9a9 feat: remove the continue command 2023-08-05 21:51:34 +08:00
GitHub Actions
89bb364b16 Update override-all.json 2023-08-05 13:44:30 +00:00
RockChinQ
3586cd941f feat: support tracing function calls, enabled by default 2023-08-05 21:44:11 +08:00
RockChinQ
054d0839ac fix: unserialized function_call attribute 2023-08-04 19:08:48 +08:00
RockChinQ
dd75f98d85 feat: the world's most advanced invocation flow 2023-08-04 18:41:04 +08:00
RockChinQ
ec23bb5268 doc(README.md): add a video tutorial link 2023-08-04 17:22:13 +08:00
Junyan Qin
bc99db4fc1 Merge pull request #531 from RockChinQ/feat-load-balance
[Feat] active api-key load balancing
2023-08-04 17:14:03 +08:00
RockChinQ
c8275fcfbf feat(openai): support active api-key switching strategies 2023-08-04 17:10:07 +08:00
GitHub Actions
a345043c30 Update override-all.json 2023-08-04 07:21:52 +00:00
RockChinQ
382d37d479 chore: add a key-switching strategy config option 2023-08-04 15:21:31 +08:00
RockChinQ
32c144a75d doc(README.md): add an introduction 2023-08-03 23:40:01 +08:00
RockChinQ
7ca2aa5e39 doc(wiki): note that reverse-library plugins also support function calling 2023-08-03 18:44:16 +08:00
RockChinQ
86cc4a23ac fix: func command list numbering not auto-incrementing 2023-08-03 17:53:43 +08:00
RockChinQ
08d1e138bd doc: remove outdated announcement 2023-08-02 21:46:59 +08:00
Junyan Qin
a9fe86542f Merge pull request #530 from RockChinQ/doc-readme
[Docs] reorganize README.md formatting
2023-08-02 21:09:39 +08:00
RockChinQ
4e29776fcd doc: tidy the plugin-ecosystem section 2023-08-02 21:07:31 +08:00
RockChinQ
ee3eae8f4d doc: refine the badges 2023-08-02 21:06:34 +08:00
RockChinQ
a84575858a doc: tidy the badges 2023-08-02 21:04:23 +08:00
RockChinQ
ac472291c7 doc: sponsorship section 2023-08-02 20:59:01 +08:00
RockChinQ
f304873c6a doc(wiki): content-functions page 2023-08-02 20:59:01 +08:00
RockChinQ
18caf8face doc: acknowledgements section 2023-08-02 20:59:01 +08:00
RockChinQ
d21115aaa8 doc: improve the opening section 2023-08-02 20:59:01 +08:00
RockChinQ
a05ecd2e7f doc: more sections 2023-08-02 20:59:01 +08:00
RockChinQ
32a725126d doc: model compatibility overview 2023-08-02 20:59:01 +08:00
RockChinQ
0528690622 doc: change the logo 2023-08-02 20:51:07 +08:00
Junyan Qin
819339142e Merge pull request #529 from RockChinQ/feat-funcs-called-args
[Feat] add a func_called parameter to NormalMessageResponded
2023-08-02 18:02:48 +08:00
RockChinQ
1d0573e7ff feat: add a func_called parameter to NormalMessageResponded 2023-08-02 18:01:02 +08:00
RockChinQ
00623bc431 typo(plugin): plugin execution error message 2023-08-02 11:35:20 +08:00
Junyan Qin
c872264456 Merge pull request #525 from RockChinQ/feat-finish-reason-param
[Feat] add a finish_reason parameter to the NormalMessageResponded event
2023-08-01 14:40:14 +08:00
RockChinQ
1336d3cb9a fix: chat_completion not returning finish_reason 2023-08-01 14:39:57 +08:00
RockChinQ
d1459578cd doc(wiki): plugin development page notes 2023-08-01 14:33:32 +08:00
RockChinQ
8a67fcf40f feat: add a finish_reason parameter to the NormalMessageResponded event 2023-08-01 14:31:38 +08:00
RockChinQ
7930370aa9 chore: publish the function-calling feature announcement 2023-08-01 10:50:23 +08:00
RockChinQ
0b854bdcf1 feat(chat_completion): do not generate up to stop, so the max_tokens parameter takes effect 2023-08-01 10:26:23 +08:00
Junyan Qin
cba6aab48d Merge pull request #524 from RockChinQ/feat-at-sender
[Feat] support at-mentioning the sender in group replies
2023-08-01 10:14:34 +08:00
GitHub Actions
12a9ca7a77 Update override-all.json 2023-08-01 02:13:35 +00:00
RockChinQ
a6cbd226e1 feat: support configuring at-mention of the sender in group replies 2023-08-01 10:13:15 +08:00
RockChinQ
3577e62b41 perf: simplify startup output 2023-07-31 21:11:28 +08:00
RockChinQ
f86e69fcd1 perf: simplify startup output messages 2023-07-31 21:05:23 +08:00
RockChinQ
292e00b078 perf: simplify startup output messages 2023-07-31 21:04:59 +08:00
RockChinQ
2a91497bcf chore: exclude qcapi/ in .gitignore 2023-07-31 20:23:54 +08:00
crosscc
8c67d3c58f Create build_docker_image.yml
Build the Docker image automatically with GitHub Actions:
## 1.
`workflow_dispatch:` means the author starts a build manually from the Actions tab.

```
  release:
    types: [published]
```
This part builds the image automatically whenever a release is published; enable it, or comment it out, according to the author's needs.

## 2.
`tag: latest,${{ steps.get_version.outputs.VERSION }}` tags the image both as latest and with the version number of the published release.

## 3.
Docker Hub user ID: in Settings, create a secret with name=DOCKER_USERNAME and value=your Docker ID.
Docker Hub password: in Settings, create a secret with name=DOCKER_PASSWORD and value=your Docker password.

This way the author no longer has to build Docker images locally; Actions produces multi-platform images automatically, and quickly.
2023-03-31 16:22:20 +08:00
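The two triggers described in that commit message boil down to a small `on:` block. A condensed sketch (fragment only, not the complete workflow file):

```yaml
# Sketch of the triggers described above: manual runs plus release builds.
on:
  # lets the author start a build by hand from the Actions tab
  workflow_dispatch:
  # builds the image automatically when a release is published;
  # remove or comment out this block if that is not wanted
  release:
    types: [published]
```

The full workflow that was merged appears in the build_docker_image.yml diff further down.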
111 changed files with 2579 additions and 1599 deletions


@@ -26,8 +26,8 @@ body:
   - type: input
     attributes:
       label: 系统环境
-      description: 操作系统、系统架构。
-      placeholder: 例如: CentOS x64、Windows11
+      description: 操作系统、系统架构、**主机地理位置**,地理位置最好写清楚,涉及网络问题排查
+      placeholder: 例如: CentOS x64 中国大陆、Windows11 美国
     validations:
       required: true
   - type: input
@@ -37,15 +37,28 @@ body:
       placeholder: 例如: Python 3.10
     validations:
       required: true
-  - type: textarea
+  - type: input
     attributes:
-      label: 异常情况
-      description: 完整描述异常情况,什么时候发生的、发生了什么
+      label: QChatGPT版本
+      description: QChatGPT版本号
+      placeholder: 例如: v2.6.0,可以使用`!version`命令查看
     validations:
       required: true
   - type: textarea
     attributes:
-      label: 报错信息
-      description: 请提供完整的**控制台**报错信息(若有)
+      label: 异常情况
+      description: 完整描述异常情况,什么时候发生的、发生了什么,尽可能详细
     validations:
       required: true
+  - type: textarea
+    attributes:
+      label: 日志信息
+      description: 请提供完整的 **登录框架 和 QChatGPT控制台**的相关日志信息(若有),不提供日志信息**无法**为您排查问题,请尽可能详细
+    validations:
+      required: false
+  - type: textarea
+    attributes:
+      label: 启用的插件
+      description: 有些情况可能和插件功能有关,建议提供插件启用情况。可以使用`!plugin`命令查看已启用的插件
+    validations:
+      required: false


@@ -1,6 +1,6 @@
 name: 需求建议
 title: "[Feature]: "
-labels: ["enhancement"]
+labels: ["改进"]
 description: "新功能或现有功能优化请使用这个模板不符合类别的issue将被直接关闭"
 body:
   - type: dropdown


@@ -0,0 +1,24 @@
name: 提交新插件
title: "[Plugin]: 请求登记新插件"
labels: ["独立插件"]
description: "本模板供且仅供提交新插件使用"
body:
- type: input
attributes:
label: 插件名称
description: 填写插件的名称
validations:
required: true
- type: textarea
attributes:
label: 插件代码库地址
description: 仅支持 Github
validations:
required: true
- type: textarea
attributes:
label: 插件简介
description: 插件的简介
validations:
required: true


@@ -10,6 +10,6 @@ updates:
     schedule:
       interval: "weekly"
     allow:
-      - dependency-name: "yiri-mirai"
+      - dependency-name: "yiri-mirai-rc"
       - dependency-name: "dulwich"
       - dependency-name: "openai"


@@ -0,0 +1,38 @@
name: Build Docker Image
on:
#防止fork乱用action设置只能手动触发构建
workflow_dispatch:
## 发布release的时候会自动构建
release:
types: [published]
jobs:
publish-docker-image:
runs-on: ubuntu-latest
name: Build image
steps:
- name: Checkout
uses: actions/checkout@v2
- name: judge has env GITHUB_REF # 如果没有GITHUB_REF环境变量则把github.ref变量赋值给GITHUB_REF
run: |
if [ -z "$GITHUB_REF" ]; then
export GITHUB_REF=${{ github.ref }}
fi
- name: Check GITHUB_REF env
run: echo $GITHUB_REF
- name: Get version
id: get_version
if: (startsWith(env.GITHUB_REF, 'refs/tags/')||startsWith(github.ref, 'refs/tags/')) && startsWith(github.repository, 'RockChinQ/QChatGPT')
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
- name: Build # image name: rockchin/qchatgpt:<VERSION>
run: docker build --network=host -t rockchin/qchatgpt:${{ steps.get_version.outputs.VERSION }} -t rockchin/qchatgpt:latest .
- name: Login to Registry
run: docker login --username=${{ secrets.DOCKER_USERNAME }} --password ${{ secrets.DOCKER_PASSWORD }}
- name: Push image
if: (startsWith(env.GITHUB_REF, 'refs/tags/')||startsWith(github.ref, 'refs/tags/')) && startsWith(github.repository, 'RockChinQ/QChatGPT')
run: docker push rockchin/qchatgpt:${{ steps.get_version.outputs.VERSION }}
- name: Push latest image
if: (startsWith(env.GITHUB_REF, 'refs/tags/')||startsWith(github.ref, 'refs/tags/')) && startsWith(github.repository, 'RockChinQ/QChatGPT')
run: docker push rockchin/qchatgpt:latest


@@ -1,11 +1,6 @@
 name: Update Wiki
 on:
-  pull_request:
-    branches:
-      - master
-    paths:
-      - 'res/wiki/**'
   push:
     branches:
       - master
@@ -27,6 +22,9 @@ jobs:
       with:
         repository: RockChinQ/QChatGPT.wiki
         path: wiki
+    - name: Delete old wiki content
+      run: |
+        rm -rf wiki/*
     - name: Copy res/wiki content to wiki
       run: |
         cp -r res/wiki/* wiki/

.github/workflows/test-pr.yml (new file, 80 changed lines)

@@ -0,0 +1,80 @@
name: Test Pull Request
on:
pull_request:
types: [ready_for_review]
paths:
# 任何py文件改动都会触发
- '**.py'
pull_request_review:
types: [submitted]
issue_comment:
types: [created]
# 允许手动触发
workflow_dispatch:
jobs:
perform-test:
runs-on: ubuntu-latest
# 如果事件为pull_request_review且review状态为approved则执行
if: >
github.event_name == 'pull_request' ||
(github.event_name == 'pull_request_review' && github.event.review.state == 'APPROVED') ||
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'issue_comment' && github.event.issue.pull_request != '' && contains(github.event.comment.body, '/test') && github.event.comment.user.login == 'RockChinQ')
steps:
# 签出测试工程仓库代码
- name: Checkout
uses: actions/checkout@v2
with:
# 仓库地址
repository: RockChinQ/qcg-tester
# 仓库路径
path: qcg-tester
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: '3.10'
- name: Install dependencies
run: |
cd qcg-tester
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Get PR details
id: get-pr
if: github.event_name == 'issue_comment'
uses: octokit/request-action@v2.x
with:
route: GET /repos/${{ github.repository }}/pulls/${{ github.event.issue.number }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Set PR source branch as env variable
if: github.event_name == 'issue_comment'
run: |
PR_SOURCE_BRANCH=$(echo '${{ steps.get-pr.outputs.data }}' | jq -r '.head.ref')
echo "BRANCH=$PR_SOURCE_BRANCH" >> $GITHUB_ENV
- name: Set PR Branch as bash env
if: github.event_name != 'issue_comment'
run: |
echo "BRANCH=${{ github.head_ref }}" >> $GITHUB_ENV
- name: Set OpenAI API Key from Secrets
run: |
echo "OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}" >> $GITHUB_ENV
- name: Set OpenAI Reverse Proxy URL from Secrets
run: |
echo "OPENAI_REVERSE_PROXY=${{ secrets.OPENAI_REVERSE_PROXY }}" >> $GITHUB_ENV
- name: Run test
run: |
cd qcg-tester
python main.py
- name: Upload coverage reports to Codecov
run: |
cd qcg-tester/resource/QChatGPT
curl -Os https://uploader.codecov.io/latest/linux/codecov
chmod +x codecov
./codecov -t ${{ secrets.CODECOV_TOKEN }}
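The `Set PR source branch` step in the workflow above pipes the GitHub pulls-API response through `jq -r '.head.ref'`. The same extraction, shown in Python on a hypothetical (truncated) API payload:

```python
# Equivalent of `jq -r '.head.ref'` from the workflow step above,
# applied to a hypothetical, truncated pulls-API response.
import json

payload = '{"number": 1, "head": {"ref": "feature/some-branch"}}'
branch = json.loads(payload)["head"]["ref"]
print(branch)  # feature/some-branch
```

In the workflow this value is appended to `$GITHUB_ENV` as `BRANCH` so later steps can check out the PR's source branch.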


@@ -21,12 +21,12 @@ jobs:
     - name: Set up Python
       uses: actions/setup-python@v2
       with:
-        python-version: 3.x
+        python-version: 3.10.13
     - name: Install dependencies
       run: |
         python -m pip install --upgrade pip
-        python -m pip install --upgrade yiri-mirai openai colorlog func_timeout dulwich Pillow CallingGPT tiktoken
+        python -m pip install --upgrade yiri-mirai-rc openai>=1.0.0 colorlog func_timeout dulwich Pillow CallingGPT tiktoken
+        python -m pip install -U openai>=1.0.0
     - name: Copy Scripts
       run: |

@@ -29,7 +29,6 @@ jobs:
     - name: Install dependencies
       run: |
         python -m pip install --upgrade pip
-        # 在此处添加您的项目所需的其他依赖
     - name: Copy Scripts
       run: |

.gitignore (11 changed lines)

@@ -1,4 +1,4 @@
-config.py
+/config.py
 .idea/
 __pycache__/
 database.db
@@ -25,4 +25,11 @@ bin/
 .vscode
 test_*
 venv/
-hugchat.json
+hugchat.json
+qcapi
+claude.json
+bard.json
+/*yaml
+!/docker-compose.yaml
+res/instance_id.json
+.DS_Store


@@ -17,3 +17,10 @@
 - 解决本项目或衍生项目的issues中亟待解决的问题
 - 阅读并完善本项目文档
 - 在各个社交媒体撰写本项目教程等
+
+### 代码规范
+
+- 代码中的注解`务必`符合Google风格的规范
+- 模块顶部的引入代码请遵循`系统模块``第三方库模块``自定义模块`的顺序进行引入
+- `不要`直接引入模块的特定属性,而是引入这个模块,再通过`xxx.yyy`的形式使用属性
+- 任何作用域的字段`必须`先声明后使用,并在声明处注明类型提示
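The four CONTRIBUTING.md rules just above (Google-style docstrings; import order of stdlib, then third-party, then project modules; import the module rather than its attributes; declare fields with type hints before use) can be illustrated in a few lines. A hypothetical module following them, with all names invented for illustration:

```python
# Hypothetical example following the four code-style rules above.
import os  # system (stdlib) modules first; third-party and project modules
           # would follow below, in that order (none are needed here)


class Counter:
    """Counts items.

    A Google-style docstring, as the first rule requires.

    Attributes:
        total: Running count of items seen so far.
    """

    def __init__(self) -> None:
        # fields are declared before use, with a type hint at the declaration
        self.total: int = 0

    def add(self, n: int) -> int:
        """Adds n to the running total and returns the new total."""
        self.total += n
        return self.total


# Use the module through its name (os.path.join), rather than importing
# the attribute directly (`from os.path import join`), per the third rule.
print(os.path.join("a", "b"))
print(Counter().add(3))  # 3
```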


@@ -1,17 +1,15 @@
FROM python:3.9-slim
FROM python:3.10.13-bullseye
WORKDIR /QChatGPT
RUN sed -i "s/deb.debian.org/mirrors.tencent.com/g" /etc/apt/sources.list \
&& sed -i 's|security.debian.org/debian-security|mirrors.tencent.com/debian-security|g' /etc/apt/sources.list \
&& apt-get clean \
&& apt-get update \
&& apt-get -y upgrade \
&& apt-get install -y git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY . /QChatGPT/
RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN ls
CMD [ "python", "main.py" ]
RUN python -m pip install -r requirements.txt && \
python -m pip install -U websockets==10.0 && \
python -m pip install -U httpcore httpx openai
# 生成配置文件
RUN python main.py
CMD [ "python", "main.py" ]

README.md

@@ -1,320 +1,51 @@
# QChatGPT🤖
<p align="center">
<img src="res/social.png" alt="QChatGPT" width="640" />
<img src="https://qchatgpt.rockchin.top/logo.png" alt="QChatGPT" width="180" />
</p>
[English](README_en.md) | 简体中文
<div align="center">
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/QChatGPT?style=flat-square)](https://github.com/RockChinQ/QChatGPT/releases/latest)
# QChatGPT
<blockquote> 🥳 QChatGPT 一周年啦,感谢大家的支持!欢迎前往<a href="https://github.com/RockChinQ/QChatGPT/discussions/627">讨论</a>。</blockquote>
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/QChatGPT)](https://github.com/RockChinQ/QChatGPT/releases/latest)
<a href="https://hub.docker.com/repository/docker/rockchin/qchatgpt">
<img src="https://img.shields.io/docker/pulls/rockchin/qchatgpt?color=blue" alt="docker pull">
</a>
![Wakapi Count](https://wakapi.dev/api/badge/RockChinQ/interval:any/project:QChatGPT)
> 2023/7/29 支持使用GPT的Function Calling功能实现类似ChatGPT Plugin的效果请见[Wiki内容函数](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8-%E5%86%85%E5%AE%B9%E5%87%BD%E6%95%B0)
> 2023/4/24 支持使用go-cqhttp登录QQ请查看[此文档](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE)
> 2023/3/18 现已支持GPT-4 API内测请查看`config-template.py`中的`completion_api_params`
> 2023/3/15 逆向库已支持New Bing使用方法查看[插件文档](https://github.com/RockChinQ/revLibs)
**QChatGPT需要Python版本>=3.9**
- 到[项目Wiki](https://github.com/RockChinQ/QChatGPT/wiki)可了解项目详细信息
- 官方交流、答疑群: 656285629
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- 社区群(内有一键部署包、图形化界面等资源): 891448839
- QQ频道机器人见[QQChannelChatGPT](https://github.com/Soulter/QQChannelChatGPT)
- 欢迎各种形式的贡献,请查看[贡献指引](CONTRIBUTING.md)
- 购买ChatGPT账号: [此链接](http://fk.kimi.asia)
## 🍺模型适配一览
<details>
<summary>点击此处展开</summary>
### 文字对话
- OpenAI GPT-3.5模型(ChatGPT API), 本项目原生支持, 默认使用
- OpenAI GPT-3模型, 本项目原生支持, 部署完成后前往`config.py`切换
- OpenAI GPT-4模型, 本项目原生支持, 目前需要您的账户通过OpenAI的内测申请, 请前往`config.py`切换
- ChatGPT网页版GPT-3.5模型, 由[插件](https://github.com/RockChinQ/revLibs)接入
- ChatGPT网页版GPT-4模型, 目前需要ChatGPT Plus订阅, 由[插件](https://github.com/RockChinQ/revLibs)接入
- New Bing逆向库, 由[插件](https://github.com/RockChinQ/revLibs)接入
- HuggingChat, 由[插件](https://github.com/RockChinQ/revLibs)接入, 仅支持英文
### 故事续写
- NovelAI API, 由[插件](https://github.com/dominoar/QCPNovelAi)接入
### 图片绘制
- OpenAI DALL·E模型, 本项目原生支持, 使用方法查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
- NovelAI API, 由[插件](https://github.com/dominoar/QCPNovelAi)接入
### 语音生成
- TTS+VITS, 由[插件](https://github.com/dominoar/QChatPlugins)接入
- Plachta/VITS-Umamusume-voice-synthesizer, 由[插件](https://github.com/oliverkirk-sudo/chat_voice)接入
</details>
安装[此插件](https://github.com/RockChinQ/Switcher),即可在使用中切换文字模型。
## ✅功能
<details>
<summary>点击此处展开概述</summary>
<details>
<summary>✅支持敏感词过滤,避免账号风险</summary>
- 难以监测机器人与用户对话时的内容,故引入此功能以减少机器人风险
- 加入了百度云内容审核,在`config.py`中修改`baidu_check`的值,并填写`baidu_api_key``baidu_secret_key`以开启此功能
- 编辑`sensitive.json`,并在`config.py`中修改`sensitive_word_filter`的值以开启此功能
</details>
<details>
<summary>✅群内多种响应规则不必at</summary>
- 默认回复`ai`作为前缀或`@`机器人的消息
- 详细见`config.py`中的`response_rules`字段
</details>
<details>
<summary>✅完善的多api-key管理超额自动切换</summary>
- 支持配置多个`api-key`,内部统计使用量并在超额时自动切换
- 请在`config.py`中修改`openai_config`的值以设置`api-key`
- 可以在`config.py`中修改`api_key_fee_threshold`来自定义切换阈值
- 运行期间向机器人说`!usage`以查看当前使用情况
</details>
<details>
<summary>✅支持预设指令文字</summary>
- 支持以自然语言预设文字,自定义机器人人格等信息
- 详见`config.py`中的`default_prompt`部分
- 支持设置多个预设情景,并通过!reset、!default等指令控制详细请查看[wiki指令](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4)
</details>
<details>
<summary>✅支持对话、绘图等模型,可玩性更高</summary>
- 现已支持OpenAI的对话`Completion API`和绘图`Image API`
- 向机器人发送指令`!draw <prompt>`即可使用绘图模型
</details>
<details>
<summary>✅支持指令控制热重载、热更新</summary>
- 允许在运行期间修改`config.py`或其他代码后,以管理员账号向机器人发送指令`!reload`进行热重载,无需重启
- 运行期间允许以管理员账号向机器人发送指令`!update`进行热更新,拉取远程最新代码并执行热重载
</details>
<details>
<summary>✅支持插件加载🧩</summary>
- 自行实现插件加载器及相关支持
- 支持GPT的Function Calling功能
- 详细查看[插件使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
</details>
<details>
<summary>✅私聊、群聊黑名单机制</summary>
- 支持将人或群聊加入黑名单以忽略其消息
- 详见Wiki`加入黑名单`
</details>
<details>
<summary>✅长消息处理策略</summary>
- 支持将长消息转换成图片或消息记录组件,避免消息刷屏
- 请查看`config.py``blob_message_strategy`等字段
</details>
<details>
<summary>✅回复速度限制</summary>
- 支持限制单会话内每分钟可进行的对话次数
- 具有“等待”和“丢弃”两种策略
- “等待”策略:在获取到回复后,等待直到此次响应时间达到对话响应时间均值
- “丢弃”策略:此分钟内对话次数达到限制时,丢弃之后的对话
- 详细请查看config.py中的相关配置
</details>
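The "等待" (wait) and "丢弃" (drop) strategies described above can be sketched as a per-session limiter. This is a hypothetical standalone helper for illustration, not the project's actual implementation:

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `limit` replies per 60-second sliding window.

    strategy="wait"  blocks until the window frees a slot;
    strategy="drop"  rejects the message outright.
    """
    def __init__(self, limit: int, strategy: str = "drop"):
        self.limit = limit
        self.strategy = strategy
        self.stamps = deque()  # monotonic timestamps of recent replies

    def acquire(self) -> bool:
        now = time.monotonic()
        # Evict timestamps that have left the 60-second window
        while self.stamps and now - self.stamps[0] >= 60:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return True
        if self.strategy == "drop":
            return False
        # "wait": sleep until the oldest timestamp ages out, then retry
        time.sleep(60 - (now - self.stamps[0]))
        return self.acquire()

limiter = RateLimiter(limit=60, strategy="drop")
```

A third message inside the same window is dropped under the "drop" strategy, matching the behavior the README describes.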
<details>
<summary>✅支持使用网络代理</summary>
- 目前已支持正向代理访问接口
- 详细请查看config.py中的`openai_config`的说明
</details>
<details>
<summary>✅支持自定义提示内容</summary>
- 允许用户自定义报错、帮助等提示信息
- 请查看`tips.py`
</details>
### 🏞️截图
<img alt="私聊GPT-3.5" src="res/screenshots/person_gpt3.5.png" width="400"/>
<a href="https://codecov.io/gh/RockChinQ/QChatGPT" >
<img src="https://codecov.io/gh/RockChinQ/QChatGPT/graph/badge.svg?token=pjxYIL2kbC"/>
</a>
<br/>
<img alt="群聊GPT-3.5" src="res/screenshots/group_gpt3.5.png" width="400"/>
<br/>
<img alt="New Bing" src="res/screenshots/person_newbing.png" width="400"/>
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
<a href="http://qm.qq.com/cgi-bin/qm/qr?_wv=1027&k=66-aWvn8cbP4c1ut_1YYkvvGVeEtyTH8&authKey=pTaKBK5C%2B8dFzQ4XlENf6MHTCLaHnlKcCRx7c14EeVVlpX2nRSaS8lJm8YeM4mCU&noverify=0&group_code=195992197">
<img alt="Static Badge" src="https://img.shields.io/badge/%E5%AE%98%E6%96%B9%E7%BE%A4-195992197-purple">
</a>
<a href="http://qm.qq.com/cgi-bin/qm/qr?_wv=1027&k=nC80H57wmKPwRDLFeQrDDjVl81XuC21P&authKey=2wTUTfoQ5v%2BD4C5zfpuR%2BSPMDqdXgDXA%2FS2wHI1NxTfWIG%2B%2FqK08dgyjMMOzhXa9&noverify=0&group_code=738382634">
<img alt="Static Badge" src="https://img.shields.io/badge/%E7%A4%BE%E5%8C%BA%E7%BE%A4-738382634-purple">
</a>
<a href="https://www.bilibili.com/video/BV14h4y1w7TC">
<img alt="Static Badge" src="https://img.shields.io/badge/%E8%A7%86%E9%A2%91%E6%95%99%E7%A8%8B-208647">
</a>
<a href="https://www.bilibili.com/video/BV11h4y1y74H">
<img alt="Static Badge" src="https://img.shields.io/badge/Linux%E9%83%A8%E7%BD%B2%E8%A7%86%E9%A2%91-208647">
</a>
</details>
## 使用文档
详情请查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
<a href="https://qchatgpt.rockchin.top">项目主页</a>
<a href="https://qchatgpt.rockchin.top/posts/feature.html">功能介绍</a>
<a href="https://qchatgpt.rockchin.top/posts/deploy/">部署文档</a>
<a href="https://qchatgpt.rockchin.top/posts/error/">常见问题</a>
<a href="https://qchatgpt.rockchin.top/posts/plugin/intro.html">插件介绍</a>
<a href="https://github.com/RockChinQ/QChatGPT/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">提交插件</a>
## 🔩部署
## 相关链接
**部署过程中遇到任何问题,请先在[QChatGPT](https://github.com/RockChinQ/QChatGPT/issues)或[qcg-installer](https://github.com/RockChinQ/qcg-installer/issues)的issue里进行搜索**
<a href="https://github.com/RockChinQ/qcg-installer">安装器源码</a>
<a href="https://github.com/RockChinQ/qcg-tester">测试工程源码</a>
<a href="https://github.com/the-lazy-me/QChatGPT-Wiki">官方文档储存库</a>
### - 注册OpenAI账号
<details>
<summary>点此查看步骤</summary>
> 若您要直接使用非OpenAI的模型如New Bing可跳过此步骤直接进行之后的部署完成后按照相关插件的文档进行配置即可
参考以下文章自行注册
> [国内注册ChatGPT的方法(100%可用)](https://www.pythonthree.com/register-openai-chatgpt/)
> [手把手教你如何注册ChatGPT超级详细](https://guxiaobei.com/51461)
注册成功后请前往[个人中心查看](https://beta.openai.com/account/api-keys)api_key
完成注册后,使用以下自动化或手动部署步骤
</details>
### - 自动化部署
<details>
<summary>展开查看以下方式二选一Linux首选DockerWindows首选安装器</summary>
#### Docker方式
> docker方式目前仅支持使用mirai登录若您不**熟悉**docker的操作及相关知识强烈建议您使用其他方式部署我们**不会且难以**解决您主机上多个容器的连接问题。
请查看[此文档](res/docs/docker_deploy.md)
由[@mikumifa](https://github.com/mikumifa)贡献
#### 安装器方式
使用[此安装器](https://github.com/RockChinQ/qcg-installer)(若无法访问请到[Gitee](https://gitee.com/RockChin/qcg-installer))进行部署
- 安装器目前仅支持部分平台,请到仓库文档查看,其他平台请手动部署
</details>
### - 手动部署
<details>
<summary>手动部署适用于所有平台</summary>
- 请使用Python 3.9.x以上版本
#### ① 配置QQ登录框架
目前支持mirai和go-cqhttp配置任意一个即可
<details>
<summary>mirai</summary>
1. 按照[此教程](https://yiri-mirai.wybxc.cc/tutorials/01/configuration)配置Mirai及mirai-api-http
2. 启动mirai-console后使用`login`命令登录QQ账号保持mirai-console运行状态
3. 在下一步配置主程序时请在config.py中将`msg_source_adapter`设为`yirimirai`
</details>
<details>
<summary>go-cqhttp</summary>
1. 按照[此文档](https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE)配置go-cqhttp
2. 启动go-cqhttp确保登录成功保持运行
3. 在下一步配置主程序时请在config.py中将`msg_source_adapter`设为`nakuru`
</details>
#### ② 配置主程序
1. 克隆此项目
```bash
git clone https://github.com/RockChinQ/QChatGPT
cd QChatGPT
```
2. 安装依赖
```bash
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow nakuru-project-idk CallingGPT tiktoken
```
3. 运行一次主程序,生成配置文件
```bash
python3 main.py
```
4. 编辑配置文件`config.py`
按照文件内注释填写配置信息
5. 运行主程序
```bash
python3 main.py
```
无报错信息即为运行成功
**常见问题**
- mirai登录提示`QQ版本过低`,见[此issue](https://github.com/RockChinQ/QChatGPT/issues/137)
- 如提示安装`uvicorn``hypercorn`请*不要*安装这两个不是必需的目前存在未知原因bug
- 如报错`TypeError: As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary`, 请参考 [此处](https://github.com/RockChinQ/QChatGPT/issues/5)
</details>
## 🚀使用
**部署完成后必看: [指令说明](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4)**
所有功能查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E4%BD%BF%E7%94%A8%E6%96%B9%E5%BC%8F)
## 🧩插件生态
现已支持自行开发插件对功能进行扩展或自定义程序行为
详见[Wiki插件使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
开发教程见[Wiki插件开发页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91)
⭐我们已经支持了[GPT的Function Calling能力](https://platform.openai.com/docs/guides/gpt/function-calling),请查看[Wiki内容函数](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8-%E5%86%85%E5%AE%B9%E5%87%BD%E6%95%B0)
<details>
<summary>查看插件列表</summary>
[所有插件列表](https://github.com/stars/RockChinQ/lists/qchatgpt-%E6%8F%92%E4%BB%B6)欢迎提出issue以提交新的插件
### 部分插件
- [WebwlkrPlugin](https://github.com/RockChinQ/WebwlkrPlugin) - 让机器人能联网!!
- [revLibs](https://github.com/RockChinQ/revLibs) - 将ChatGPT网页版接入此项目关于[官方接口和网页版有什么区别](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E3%80%81ChatGPT%E7%BD%91%E9%A1%B5%E7%89%88%E3%80%81ChatGPT-API%E5%8C%BA%E5%88%AB)
- [Switcher](https://github.com/RockChinQ/Switcher) - 支持通过指令切换使用的模型
- [hello_plugin](https://github.com/RockChinQ/hello_plugin) - `hello_plugin` 的储存库形式,插件开发模板
- [dominoar/QChatPlugins](https://github.com/dominoar/QchatPlugins) - dominoar编写的诸多新功能插件语音输出、Ranimg、屏蔽词规则等
- [dominoar/QCP-NovelAi](https://github.com/dominoar/QCP-NovelAi) - NovelAI 故事叙述与绘画
- [oliverkirk-sudo/chat_voice](https://github.com/oliverkirk-sudo/chat_voice) - 文字转语音输出使用HuggingFace上的[VITS-Umamusume-voice-synthesizer模型](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer)
- [RockChinQ/WaitYiYan](https://github.com/RockChinQ/WaitYiYan) - 实时获取百度`文心一言`等待列表人数
- [chordfish-k/QChartGPT_Emoticon_Plugin](https://github.com/chordfish-k/QChartGPT_Emoticon_Plugin) - 使机器人根据回复内容发送表情包
- [oliverkirk-sudo/ChatPoeBot](https://github.com/oliverkirk-sudo/ChatPoeBot) - 接入[Poe](https://poe.com/)上的机器人
- [lieyanqzu/WeatherPlugin](https://github.com/lieyanqzu/WeatherPlugin) - 天气查询插件
- [SysStatPlugin](https://github.com/RockChinQ/SysStatPlugin) - 查看系统状态
</details>
## 😘致谢
- [@the-lazy-me](https://github.com/the-lazy-me) 为本项目制作[视频教程](https://www.bilibili.com/video/BV15v4y1X7aP)
- [@mikumifa](https://github.com/mikumifa) 本项目Docker部署仓库开发者
- [@dominoar](https://github.com/dominoar) 为本项目开发多种插件
- [@万神的星空](https://github.com/qq255204159) 整合包发行
- [@ljcduo](https://github.com/ljcduo) GPT-4 API内测账号提供
以及所有[贡献者](https://github.com/RockChinQ/QChatGPT/graphs/contributors)和其他为本项目提供支持的朋友们。
## 👍赞赏
<img alt="赞赏码" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/>
<img alt="回复效果(带有联网插件)" src="https://qchatgpt.rockchin.top/assets/image/QChatGPT-1211.png" width="500px"/>
</div>


@@ -49,7 +49,7 @@ English | [简体中文](README.md)
Install this [plugin](https://github.com/RockChinQ/Switcher) to switch between different models.
## ✅Function Points
## ✅Features
<details>
<summary>Details</summary>
@@ -141,7 +141,7 @@ cd QChatGPT
2. Install dependencies
```bash
pip3 install requests yiri-mirai openai colorlog func_timeout dulwich Pillow nakuru-project-idk
pip3 install requests yiri-mirai-rc openai colorlog func_timeout dulwich Pillow nakuru-project-idk
```
3. Generate `config.py`
@@ -180,7 +180,7 @@ Plugin [usage](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%
`tests/plugin_examples`目录下,将其整个目录复制到`plugins`目录下即可使用
- `cmdcn` - 主程序令中文形式
- `cmdcn` - 主程序令中文形式
- `hello_plugin` - 在收到消息`hello`时回复相应消息
- `urlikethisijustsix` - 收到冒犯性消息时回复相应消息
@@ -189,7 +189,7 @@ Plugin [usage](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%
欢迎提交新的插件
- [revLibs](https://github.com/RockChinQ/revLibs) - 将ChatGPT网页版接入此项目关于[官方接口和网页版有什么区别](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E4%B8%8EChatGPT%E7%BD%91%E9%A1%B5%E7%89%88)
- [Switcher](https://github.com/RockChinQ/Switcher) - 支持通过令切换使用的模型
- [Switcher](https://github.com/RockChinQ/Switcher) - 支持通过令切换使用的模型
- [hello_plugin](https://github.com/RockChinQ/hello_plugin) - `hello_plugin` 的储存库形式,插件开发模板
- [dominoar/QChatPlugins](https://github.com/dominoar/QchatPlugins) - dominoar编写的诸多新功能插件语音输出、Ranimg、屏蔽词规则等
- [dominoar/QCP-NovelAi](https://github.com/dominoar/QCP-NovelAi) - NovelAI 故事叙述与绘画


@@ -62,6 +62,9 @@ nakuru_config = {
# },
# "reverse_proxy": "http://example.com:12345/v1"
# }
#
# 作者开设公用反向代理地址: https://api.openai.rockchin.top/v1
# 随时可能关闭,仅供测试使用,有条件建议使用正向代理或者自建反向代理
openai_config = {
"api_key": {
"default": "openai_api_key"
@@ -70,13 +73,18 @@ openai_config = {
"reverse_proxy": None
}
# [必需] 管理员QQ号用于接收报错等通知及执行管理员级别指令
# api-key切换策略
# active每次请求时都会切换api-key
# passive仅当api-key超额时才会切换api-key
switch_strategy = "active"
# [必需] 管理员QQ号用于接收报错等通知及执行管理员级别命令
# 支持多个管理员可以使用list形式设置例如
# admin_qq = [12345678, 87654321]
admin_qq = 0
# 情景预设(机器人人格)
# 每个会话的预设信息,影响所有会话,无视令重置
# 每个会话的预设信息,影响所有会话,无视令重置
# 可以通过这个字段指定某些情况的回复,可直接用自然语言描述指令
# 例如:
# default_prompt = "如果我之后想获取帮助,请你说“输入!help获取帮助”"
@@ -90,14 +98,14 @@ admin_qq = 0
# "en-dict": "我想让你充当英英词典,对于给出的英文单词,你要给出其中文意思以及英文解释,并且给出一个例句,此外不要有其他反馈。",
# }
#
# 在使用期间即可通过令:
# 在使用期间即可通过令:
# !reset [名称]
# 来使用指定的情景预设重置会话
# 例如:
# !reset linux-terminal
# 若不指定名称,则使用默认情景预设
#
# 也可以使用令:
# 也可以使用令:
# !default <名称>
# 将指定的情景预设设置为默认情景预设
# 例如:
@@ -106,7 +114,7 @@ admin_qq = 0
#
# 还可以加载文件中的预设文字使用方法请查看https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E9%A2%84%E8%AE%BE%E6%96%87%E5%AD%97
default_prompt = {
"default": "如果之后想获取帮助,请你说“输入!help获取帮助”",
"default": "如果用户之后想获取帮助,请你说“输入!help获取帮助”",
}
# 情景预设格式
@@ -152,13 +160,12 @@ response_rules = {
}
# 消息忽略规则
# 适用于私聊及群聊
# 符合此规则的消息将不会被响应
# 支持消息前缀匹配及正则表达式匹配
# 此设置优先级高于response_rules
# 用以过滤mirai等其他层级的
# 用以过滤mirai等其他层级的
# @see https://github.com/RockChinQ/QChatGPT/issues/165
ignore_rules = {
"prefix": ["/"],
@@ -193,7 +200,7 @@ encourage_sponsor_at_start = True
# 每次向OpenAI接口发送对话记录上下文的字符数
# 最大不超过(4096 - max_tokens)个字符max_tokens为下方completion_api_params中的max_tokens
# 注意较大的prompt_submit_length会导致OpenAI账户额度消耗更快
prompt_submit_length = 2048
prompt_submit_length = 3072
# 是否在token超限报错时自动重置会话
# 可在tips.py中编辑提示语
@@ -201,48 +208,77 @@ auto_reset = True
# OpenAI补全API的参数
# 请在下方填写模型,程序自动选择接口
# 模型文档https://platform.openai.com/docs/models
# 现已支持的模型有:
#
# 'gpt-4'
# 'gpt-4-0613'
# 'gpt-4-32k'
# 'gpt-4-32k-0613'
# 'gpt-3.5-turbo'
# 'gpt-3.5-turbo-16k'
# 'gpt-3.5-turbo-0613'
# 'gpt-3.5-turbo-16k-0613'
# 'text-davinci-003'
# 'text-davinci-002'
# 'code-davinci-002'
# 'code-cushman-001'
# 'text-curie-001'
# 'text-babbage-001'
# 'text-ada-001'
# ChatCompletions 接口:
# # GPT 4 系列
# "gpt-4-1106-preview",
# "gpt-4-vision-preview",
# "gpt-4",
# "gpt-4-32k",
# "gpt-4-0613",
# "gpt-4-32k-0613",
# "gpt-4-0314", # legacy
# "gpt-4-32k-0314", # legacy
# # GPT 3.5 系列
# "gpt-3.5-turbo-1106",
# "gpt-3.5-turbo",
# "gpt-3.5-turbo-16k",
# "gpt-3.5-turbo-0613", # legacy
# "gpt-3.5-turbo-16k-0613", # legacy
# "gpt-3.5-turbo-0301", # legacy
#
# Completions接口
# "gpt-3.5-turbo-instruct",
#
# 具体请查看OpenAI的文档: https://beta.openai.com/docs/api-reference/completions/create
# 请将内容修改到config.py中请勿修改config-template.py
#
# 支持通过 One API 接入多种模型请在上方的openai_config中设置One API的代理地址
# 并在此填写您要使用的模型名称详细请参考https://github.com/songquanpeng/one-api
#
# 支持的 One API 模型:
# "SparkDesk",
# "chatglm_pro",
# "chatglm_std",
# "chatglm_lite",
# "qwen-v1",
# "qwen-plus-v1",
# "ERNIE-Bot",
# "ERNIE-Bot-turbo",
# "gemini-pro",
completion_api_params = {
"model": "gpt-3.5-turbo",
"temperature": 0.9, # 数值越低得到的回答越理性,取值范围[0, 1]
"top_p": 1, # 生成的文本的文本与要求的符合度, 取值范围[0, 1]
"frequency_penalty": 0.2,
"presence_penalty": 1.0,
}
# OpenAI的Image API的参数
# 具体请查看OpenAI的文档: https://beta.openai.com/docs/api-reference/images/create
# 具体请查看OpenAI的文档: https://platform.openai.com/docs/api-reference/images/create
image_api_params = {
"size": "256x256", # 图片尺寸支持256x256, 512x512, 1024x1024
"model": "dall-e-2", # 默认使用 dall-e-2 模型,也可以改为 dall-e-3
# 图片尺寸
# dall-e-2 模型支持 256x256, 512x512, 1024x1024
# dall-e-3 模型支持 1024x1024, 1792x1024, 1024x1792
"size": "256x256",
}
# 跟踪函数调用
# 为True时在每次GPT进行Function Calling时都会输出发送一条回复给用户
# 同时一次提问内所有的Function Calling和普通回复消息都会单独发送给用户
trace_function_calls = False
# 群内回复消息时是否引用原消息
quote_origin = True
quote_origin = False
# 群内回复消息时是否at发送者
at_sender = False
# 回复绘图时是否包含图片描述
include_image_description = True
# 消息处理的超时时间,单位为秒
process_message_timeout = 30
process_message_timeout = 120
# 回复消息时是否显示[GPT]前缀
show_prefix = False
@@ -253,7 +289,7 @@ show_prefix = False
# 当此次消息处理时间低于此秒数时,将会强制延迟至此秒数
# 例如:[1.5, 3]则每次处理时会随机取一个1.5-3秒的随机数若处理时间低于此随机数则强制延迟至此随机秒数
# 若您不需要此功能请将force_delay_range设置为[0, 0]
force_delay_range = [1.5, 3]
force_delay_range = [0, 0]
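The forced-delay behavior commented above (draw a random target from the range; if the message was handled faster, pad the response time up to it) reduces to a small helper. A sketch with a hypothetical function name:

```python
import random
import time

def enforce_delay(started_at: float, delay_range=(1.5, 3.0)) -> None:
    """Pad a reply's handling time to a random minimum.

    A target duration is drawn uniformly from delay_range; if processing
    finished sooner, sleep for the remainder. [0, 0] disables the padding.
    """
    target = random.uniform(*delay_range)
    elapsed = time.monotonic() - started_at
    if elapsed < target:
        time.sleep(target - elapsed)
```

With `force_delay_range = [0, 0]` (the new default in this diff) the drawn target is always 0, so no delay is ever added.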
# 应用长消息处理策略的阈值
# 当回复消息长度超过此值时,将使用长消息处理策略
@@ -284,19 +320,6 @@ retry_times = 3
# 设置为False时向用户及管理员发送错误详细信息
hide_exce_info_to_user = False
# 线程池相关配置
# 该参数决定机器人可以同时处理几个人的消息,超出线程池数量的请求会被阻塞,不会被丢弃
# 如果你不清楚该参数的意义,请不要更改
# 程序运行本身线程池,无代码层面修改请勿更改
sys_pool_num = 8
# 执行管理员请求和指令的线程池并行线程数量,一般和管理员数量相等
admin_pool_num = 4
# 执行用户请求和指令的线程池并行线程数量
# 如需要更高的并发,可以增大该值
user_pool_num = 8
# 每个会话的过期时间,单位为秒
# 默认值20分钟
session_expire_time = 1200
@@ -336,11 +359,11 @@ rate_limitation = {
rate_limit_strategy = "drop"
# 是否在启动时进行依赖库更新
upgrade_dependencies = True
upgrade_dependencies = False
# 是否上报统计信息
# 用于统计机器人的使用情况,不会收集任何用户信息
# 仅上报时间、字数使用量、绘图使用量,其他信息不会上报
# 用于统计机器人的使用情况,数据不公开,不会收集任何敏感信息
# 仅实例识别UUID、上报时间、字数使用量、绘图使用量、插件使用情况、用户信息,其他信息不会上报
report_usage = True
# 日志级别
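The `switch_strategy` option introduced in this config diff ("active" switches the api-key on every request, "passive" only when the current key is exhausted) can be sketched as a small key pool. This is a hypothetical illustration, not the project's actual manager:

```python
class KeyPool:
    """Rotate among several named API keys.

    "active":  advance to the next key after every request;
    "passive": keep the current key until it is reported exhausted.
    """
    def __init__(self, keys: dict, strategy: str = "active"):
        self.names = list(keys)
        self.keys = keys
        self.strategy = strategy
        self.index = 0

    def get(self) -> str:
        key = self.keys[self.names[self.index]]
        if self.strategy == "active":
            self.index = (self.index + 1) % len(self.names)
        return key

    def report_exhausted(self) -> None:
        # In passive mode this is the only place a switch happens
        self.index = (self.index + 1) % len(self.names)

pool = KeyPool({"default": "sk-a", "backup": "sk-b"}, strategy="active")
```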

docker-compose.yaml

@@ -0,0 +1,18 @@
version: "3"
services:
qchatgpt:
image: rockchin/qchatgpt:latest
volumes:
- ./config.py:/QChatGPT/config.py
- ./banlist.py:/QChatGPT/banlist.py
- ./cmdpriv.json:/QChatGPT/cmdpriv.json
- ./sensitive.json:/QChatGPT/sensitive.json
- ./tips.py:/QChatGPT/tips.py
# 目录映射
- ./plugins:/QChatGPT/plugins
- ./scenario:/QChatGPT/scenario
- ./temp:/QChatGPT/temp
- ./logs:/QChatGPT/logs
restart: always
# 根据具体环境配置网络

main.py

@@ -8,10 +8,56 @@ import time
import logging
import sys
import traceback
import asyncio
sys.path.append(".")
def check_file():
# 检查是否有banlist.py,如果没有就把banlist-template.py复制一份
if not os.path.exists('banlist.py'):
shutil.copy('res/templates/banlist-template.py', 'banlist.py')
# 检查是否有sensitive.json
if not os.path.exists("sensitive.json"):
shutil.copy("res/templates/sensitive-template.json", "sensitive.json")
# 检查是否有scenario/default.json
if not os.path.exists("scenario/default.json"):
shutil.copy("scenario/default-template.json", "scenario/default.json")
# 检查cmdpriv.json
if not os.path.exists("cmdpriv.json"):
shutil.copy("res/templates/cmdpriv-template.json", "cmdpriv.json")
# 检查tips_custom
if not os.path.exists("tips.py"):
shutil.copy("tips-custom-template.py", "tips.py")
# 检查temp目录
if not os.path.exists("temp/"):
os.mkdir("temp/")
# 检查并创建plugins、prompts目录
check_path = ["plugins", "prompts"]
for path in check_path:
if not os.path.exists(path):
os.mkdir(path)
# 配置文件存在性校验
if not os.path.exists('config.py'):
shutil.copy('config-template.py', 'config.py')
print('请先在config.py中填写配置')
sys.exit(0)
# 初始化相关文件
check_file()
from pkg.utils.log import init_runtime_log_file, reset_logging
from pkg.config import manager as config_mgr
from pkg.config.impls import pymodule as pymodule_cfg
try:
import colorlog
@@ -20,7 +66,6 @@ except ImportError:
import pkg.utils.pkgmgr as pkgmgr
try:
pkgmgr.install_requirements("requirements.txt")
pkgmgr.install_upgrade("websockets")
import colorlog
except ImportError:
print("依赖不满足,请查看 https://github.com/RockChinQ/qcg-installer/issues/15")
@@ -55,75 +100,73 @@ def ensure_dependencies():
known_exception_caught = False
def override_config():
import config
# 检查override.json覆盖
def override_config_manager():
config = pkg.utils.context.get_config_manager().data
if os.path.exists("override.json") and use_override:
override_json = json.load(open("override.json", "r", encoding="utf-8"))
overrided = []
for key in override_json:
if hasattr(config, key):
setattr(config, key, override_json[key])
logging.info("覆写配置[{}]为[{}]".format(key, override_json[key]))
if key in config:
config[key] = override_json[key]
# logging.info("覆写配置[{}]为[{}]".format(key, override_json[key]))
overrided.append(key)
else:
logging.error("无法覆写配置[{}]为[{}]该配置不存在请检查override.json是否正确".format(key, override_json[key]))
# 临时函数用于加载config和上下文未来统一放在config类
def load_config():
logging.info("检查config模块完整性.")
# 完整性校验
is_integrity = True
config_template = importlib.import_module('config-template')
config = importlib.import_module('config')
for key in dir(config_template):
if not key.startswith("__") and not hasattr(config, key):
setattr(config, key, getattr(config_template, key))
logging.warning("[{}]不存在".format(key))
is_integrity = False
if not is_integrity:
logging.warning("配置文件不完整您可以依据config-template.py检查config.py")
# 检查override.json覆盖
override_config()
if not is_integrity:
logging.warning("以上不存在的配置已被设为默认值将在3秒后继续启动... ")
time.sleep(3)
# 存进上下文
pkg.utils.context.set_config(config)
if len(overrided) > 0:
logging.info("已根据override.json覆写配置项: {}".format(", ".join(overrided)))
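The rewritten `override_config_manager` above reduces to a dictionary update that only accepts keys already present, so a typo in `override.json` cannot introduce a dead setting. A minimal standalone sketch:

```python
import json

def apply_override(config: dict, override_path: str) -> list:
    """Overwrite existing config entries from a JSON file.

    Returns the list of keys that were applied; keys absent from the
    config dict are skipped rather than silently added.
    """
    with open(override_path, "r", encoding="utf-8") as f:
        override = json.load(f)
    applied = []
    for key, value in override.items():
        if key in config:
            config[key] = value
            applied.append(key)
    return applied
```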
def complete_tips():
"""根据tips-custom-template模块补全tips模块的属性"""
non_exist_keys = []
is_integrity = True
logging.info("检查tips模块完整性.")
logging.debug("检查tips模块完整性.")
tips_template = importlib.import_module('tips-custom-template')
tips = importlib.import_module('tips')
for key in dir(tips_template):
if not key.startswith("__") and not hasattr(tips, key):
setattr(tips, key, getattr(tips_template, key))
logging.warning("[{}]不存在".format(key))
# logging.warning("[{}]不存在".format(key))
non_exist_keys.append(key)
is_integrity = False
if not is_integrity:
logging.warning("以下提示语字段不存在: {}".format(", ".join(non_exist_keys)))
logging.warning("tips模块不完整您可以依据tips-custom-template.py检查tips.py")
logging.warning("以上配置已被设为默认值将在3秒后继续启动... ")
time.sleep(3)
def start(first_time_init=False):
async def start_process(first_time_init=False):
"""启动流程reload之后会被执行"""
global known_exception_caught
import pkg.utils.context
config = pkg.utils.context.get_config()
# 计算host和instance标识符
import pkg.audit.identifier
pkg.audit.identifier.init()
# 加载配置
cfg_inst: pymodule_cfg.PythonModuleConfigFile = pymodule_cfg.PythonModuleConfigFile(
'config.py',
'config-template.py'
)
await config_mgr.ConfigManager(cfg_inst).load_config()
override_config_manager()
# 检查tips模块
complete_tips()
cfg = pkg.utils.context.get_config_manager().data
# 更新openai库到最新版本
if not hasattr(config, 'upgrade_dependencies') or config.upgrade_dependencies:
if 'upgrade_dependencies' not in cfg or cfg['upgrade_dependencies']:
print("正在更新依赖库,请等待...")
if not hasattr(config, 'upgrade_dependencies'):
if 'upgrade_dependencies' not in cfg:
print("这个操作不是必须的,如果不想更新,请在config.py中添加upgrade_dependencies=False")
else:
print("这个操作不是必须的,如果不想更新,请在config.py中将upgrade_dependencies设置为False")
@@ -139,12 +182,16 @@ def start(first_time_init=False):
sh = reset_logging()
pkg.utils.context.context['logger_handler'] = sh
# 初始化文字转图片
from pkg.utils import text2img
text2img.initialize()
# 检查是否设置了管理员
if not (hasattr(config, 'admin_qq') and config.admin_qq != 0):
# logging.warning("未设置管理员QQ,管理员权限令及运行告警将无法使用,如需设置请修改config.py中的admin_qq字段")
if cfg['admin_qq'] == 0:
# logging.warning("未设置管理员QQ,管理员权限令及运行告警将无法使用,如需设置请修改config.py中的admin_qq字段")
while True:
try:
config.admin_qq = int(input("未设置管理员QQ,管理员权限令及运行告警将无法使用,请输入管理员QQ号: "))
cfg['admin_qq'] = int(input("未设置管理员QQ,管理员权限令及运行告警将无法使用,请输入管理员QQ号: "))
# 写入到文件
# 读取文件
@@ -152,7 +199,7 @@ def start(first_time_init=False):
with open("config.py", "r", encoding="utf-8") as f:
config_file_str = f.read()
# 替换
config_file_str = config_file_str.replace("admin_qq = 0", "admin_qq = " + str(config.admin_qq))
config_file_str = config_file_str.replace("admin_qq = 0", "admin_qq = " + str(cfg['admin_qq']))
# 写入
with open("config.py", "w", encoding="utf-8") as f:
f.write(config_file_str)
@@ -162,6 +209,24 @@ def start(first_time_init=False):
break
except ValueError:
print("请输入数字")
# 初始化中央服务器 API 交互实例
from pkg.utils.center import apigroup
from pkg.utils.center import v2 as center_v2
center_v2_api = center_v2.V2CenterAPI(
basic_info={
"host_id": pkg.audit.identifier.identifier['host_id'],
"instance_id": pkg.audit.identifier.identifier['instance_id'],
"semantic_version": pkg.utils.updater.get_current_tag(),
"platform": sys.platform,
},
runtime_info={
"admin_id": "{}".format(cfg['admin_qq']),
"msg_source": cfg['msg_source_adapter'],
}
)
pkg.utils.context.set_center_v2_api(center_v2_api)
import pkg.openai.manager
import pkg.database.manager
@@ -180,20 +245,24 @@ def start(first_time_init=False):
# 配置OpenAI proxy
import openai
openai.proxy = None # 先重置因为重载后可能需要清除proxy
if "http_proxy" in config.openai_config and config.openai_config["http_proxy"] is not None:
openai.proxy = config.openai_config["http_proxy"]
openai.proxies = None # 先重置因为重载后可能需要清除proxy
if "http_proxy" in cfg['openai_config'] and cfg['openai_config']["http_proxy"] is not None:
openai.proxies = {
"http": cfg['openai_config']["http_proxy"],
"https": cfg['openai_config']["http_proxy"]
}
# 配置openai api_base
if "reverse_proxy" in config.openai_config and config.openai_config["reverse_proxy"] is not None:
openai.api_base = config.openai_config["reverse_proxy"]
if "reverse_proxy" in cfg['openai_config'] and cfg['openai_config']["reverse_proxy"] is not None:
logging.debug("设置反向代理: "+cfg['openai_config']['reverse_proxy'])
openai.base_url = cfg['openai_config']["reverse_proxy"]
# 主启动流程
database = pkg.database.manager.DatabaseManager()
database.initialize_database()
openai_interact = pkg.openai.manager.OpenAIInteract(config.openai_config['api_key'])
openai_interact = pkg.openai.manager.OpenAIInteract(cfg['openai_config']['api_key'])
# 加载所有未超时的session
pkg.openai.session.load_sessions()
@@ -214,7 +283,7 @@ def start(first_time_init=False):
def run_bot_wrapper():
global known_exception_caught
try:
logging.info("使用账号: {}".format(qqbot.bot_account_id))
logging.debug("使用账号: {}".format(qqbot.bot_account_id))
qqbot.adapter.run_sync()
except TypeError as e:
if str(e).__contains__("argument 'debug'"):
@@ -282,13 +351,12 @@ def start(first_time_init=False):
if first_time_init:
if not known_exception_caught:
import config
if config.msg_source_adapter == "yirimirai":
logging.info("QQ: {}, MAH: {}".format(config.mirai_http_api_config['qq'], config.mirai_http_api_config['host']+":"+str(config.mirai_http_api_config['port'])))
if cfg['msg_source_adapter'] == "yirimirai":
logging.info("QQ: {}, MAH: {}".format(cfg['mirai_http_api_config']['qq'], cfg['mirai_http_api_config']['host']+":"+str(cfg['mirai_http_api_config']['port'])))
logging.critical('程序启动完成,如长时间未显示 "成功登录到账号xxxxx" ,并且不回复消息,解决办法(请勿到群里问): '
'https://github.com/RockChinQ/QChatGPT/issues/37')
elif config.msg_source_adapter == 'nakuru':
logging.info("host: {}, port: {}, http_port: {}".format(config.nakuru_config['host'], config.nakuru_config['port'], config.nakuru_config['http_port']))
elif cfg['msg_source_adapter'] == 'nakuru':
logging.info("host: {}, port: {}, http_port: {}".format(cfg['nakuru_config']['host'], cfg['nakuru_config']['port'], cfg['nakuru_config']['http_port']))
logging.critical('程序启动完成,如长时间未显示 "Protocol: connected" ,并且不回复消息,请检查config.py中的nakuru_config是否正确')
else:
sys.exit(1)
@@ -296,7 +364,7 @@ def start(first_time_init=False):
logging.info('热重载完成')
# 发送赞赏码
if config.encourage_sponsor_at_start \
if cfg['encourage_sponsor_at_start'] \
and pkg.utils.context.get_openai_manager().audit_mgr.get_total_text_length() >= 2048:
logging.info("发送赞赏码")
@@ -318,7 +386,8 @@ def start(first_time_init=False):
if pkg.utils.updater.is_new_version_available():
logging.info("新版本可用,请发送 !update 进行自动更新\n更新日志:\n{}".format("\n".join(pkg.utils.updater.get_rls_notes())))
else:
logging.info("当前已是最新版本")
# logging.info("当前已是最新版本")
pass
except Exception as e:
logging.warning("检查更新失败:{}".format(e))
@@ -329,6 +398,12 @@ def start(first_time_init=False):
if len(new_announcement) > 0:
for announcement in new_announcement:
logging.critical("[公告]<{}> {}".format(announcement['time'], announcement['content']))
# 发送统计数据
pkg.utils.context.get_center_v2_api().main.post_announcement_showed(
[announcement['id'] for announcement in new_announcement]
)
except Exception as e:
logging.warning("获取公告失败:{}".format(e))
@@ -353,70 +428,22 @@ def stop():
raise e
def check_file():
# 检查是否有banlist.py,如果没有就把banlist-template.py复制一份
if not os.path.exists('banlist.py'):
shutil.copy('res/templates/banlist-template.py', 'banlist.py')
# 检查是否有sensitive.json
if not os.path.exists("sensitive.json"):
shutil.copy("res/templates/sensitive-template.json", "sensitive.json")
# 检查是否有scenario/default.json
if not os.path.exists("scenario/default.json"):
shutil.copy("scenario/default-template.json", "scenario/default.json")
# 检查cmdpriv.json
if not os.path.exists("cmdpriv.json"):
shutil.copy("res/templates/cmdpriv-template.json", "cmdpriv.json")
# 检查tips_custom
if not os.path.exists("tips.py"):
shutil.copy("tips-custom-template.py", "tips.py")
# 检查temp目录
if not os.path.exists("temp/"):
os.mkdir("temp/")
# 检查并创建plugins、prompts目录
check_path = ["plugins", "prompts"]
for path in check_path:
if not os.path.exists(path):
os.mkdir(path)
# 配置文件存在性校验
if not os.path.exists('config.py'):
shutil.copy('config-template.py', 'config.py')
print('请先在config.py中填写配置')
sys.exit(0)
def main():
global use_override
# 检查是否携带了 --override 或 -r 参数
if '--override' in sys.argv or '-r' in sys.argv:
use_override = True
# 初始化相关文件
check_file()
# 初始化logging
init_runtime_log_file()
pkg.utils.context.context['logger_handler'] = reset_logging()
# 加载配置
load_config()
config = pkg.utils.context.get_config()
# 检查tips模块
complete_tips()
# 配置线程池
from pkg.utils import ThreadCtl
thread_ctl = ThreadCtl(
sys_pool_num=config.sys_pool_num,
admin_pool_num=config.admin_pool_num,
user_pool_num=config.user_pool_num
sys_pool_num=8,
admin_pool_num=4,
user_pool_num=8
)
# 存进上下文
pkg.utils.context.set_thread_ctl(thread_ctl)
@@ -435,9 +462,11 @@ def main():
# 关闭urllib的http警告
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
def run_wrapper():
asyncio.run(start_process(True))
pkg.utils.context.get_thread_ctl().submit_sys_task(
start,
True
run_wrapper
)
# 主线程循环
@@ -447,12 +476,19 @@ def main():
except:
stop()
pkg.utils.context.get_thread_ctl().shutdown()
import platform
if platform.system() == 'Windows':
cmd = "taskkill /F /PID {}".format(os.getpid())
elif platform.system() in ['Linux', 'Darwin']:
cmd = "kill -9 {}".format(os.getpid())
os.system(cmd)
launch_args = sys.argv.copy()
if "--cov-report" not in launch_args:
import platform
if platform.system() == 'Windows':
cmd = "taskkill /F /PID {}".format(os.getpid())
elif platform.system() in ['Linux', 'Darwin']:
cmd = "kill -9 {}".format(os.getpid())
os.system(cmd)
else:
print("正常退出以生成覆盖率报告")
sys.exit(0)
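The new shutdown logic only force-kills the process when no `--cov-report` flag is present, so a coverage run can exit cleanly and flush its report. A sketch of just the decision (hypothetical helper name):

```python
def choose_exit_strategy(argv: list[str], platform_name: str) -> str:
    """Mirror of the decision above: hard-kill normally, clean exit under coverage."""
    if "--cov-report" in argv:
        # a normal interpreter exit lets coverage.py write its report
        return "sys.exit"
    if platform_name == "Windows":
        return "taskkill"
    if platform_name in ("Linux", "Darwin"):
        return "kill -9"
    return "sys.exit"
```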
if __name__ == '__main__':


@@ -21,9 +21,10 @@
"http_proxy": null,
"reverse_proxy": null
},
"switch_strategy": "active",
"admin_qq": 0,
"default_prompt": {
"default": "如果之后想获取帮助,请你说“输入!help获取帮助”"
"default": "如果用户之后想获取帮助,请你说“输入!help获取帮助”"
},
"preset_mode": "normal",
"response_rules": {
@@ -52,25 +53,25 @@
"baidu_secret_key": "",
"inappropriate_message_tips": "[百度云]请珍惜机器人,当前返回内容不合规",
"encourage_sponsor_at_start": true,
"prompt_submit_length": 2048,
"prompt_submit_length": 3072,
"auto_reset": true,
"completion_api_params": {
"model": "gpt-3.5-turbo",
"temperature": 0.9,
"top_p": 1,
"frequency_penalty": 0.2,
"presence_penalty": 1.0
"temperature": 0.9
},
"image_api_params": {
"model": "dall-e-2",
"size": "256x256"
},
"quote_origin": true,
"trace_function_calls": false,
"quote_origin": false,
"at_sender": false,
"include_image_description": true,
"process_message_timeout": 30,
"process_message_timeout": 120,
"show_prefix": false,
"force_delay_range": [
1.5,
3
0,
0
],
"blob_message_threshold": 256,
"blob_message_strategy": "forward",
@@ -78,15 +79,12 @@
"font_path": "",
"retry_times": 3,
"hide_exce_info_to_user": false,
"sys_pool_num": 8,
"admin_pool_num": 4,
"user_pool_num": 8,
"session_expire_time": 1200,
"rate_limitation": {
"default": 60
},
"rate_limit_strategy": "drop",
"upgrade_dependencies": true,
"upgrade_dependencies": false,
"report_usage": true,
"logging_level": 20
}


@@ -5,11 +5,12 @@
import hashlib
import json
import logging
import threading
import requests
import pkg.utils.context
import pkg.utils.updater
from ..utils import context
from ..utils import updater
class DataGatherer:
@@ -20,7 +21,7 @@ class DataGatherer:
以key值md5为key,{
"text": {
"text-davinci-003": 文字量:int,
"gpt-3.5-turbo": 文字量:int,
},
"image": {
"256x256": 图片数量:int,
@@ -32,33 +33,17 @@ class DataGatherer:
def __init__(self):
self.load_from_db()
try:
self.version_str = pkg.utils.updater.get_current_tag() # 从updater模块获取版本号
self.version_str = updater.get_current_tag() # 从updater模块获取版本号
except:
pass
def report_to_server(self, subservice_name: str, count: int):
"""向中央服务器报告使用量
只会报告此次请求的使用量,不会报告总量。
不包含除版本号、使用类型、使用量以外的任何信息,仅供开发者分析使用情况。
"""
try:
config = pkg.utils.context.get_config()
if not config.report_usage:
return
res = requests.get("http://reports.rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}&msg_source={}".format(subservice_name, self.version_str, count, config.msg_source_adapter))
if res.status_code != 200 or res.text != "ok":
logging.warning("report to server failed, status_code: {}, text: {}".format(res.status_code, res.text))
except:
return
def get_usage(self, key_md5):
return self.usage[key_md5] if key_md5 in self.usage else {}
def report_text_model_usage(self, model, total_tokens):
"""调用方报告文字模型请求文字使用量"""
key_md5 = pkg.utils.context.get_openai_manager().key_mgr.get_using_key_md5() # 以key的md5进行储存
key_md5 = context.get_openai_manager().key_mgr.get_using_key_md5() # 以key的md5进行储存
if key_md5 not in self.usage:
self.usage[key_md5] = {}
@@ -73,12 +58,10 @@ class DataGatherer:
self.usage[key_md5]["text"][model] += length
self.dump_to_db()
self.report_to_server("text", length)
def report_image_model_usage(self, size):
"""调用方报告图片模型请求图片使用量"""
key_md5 = pkg.utils.context.get_openai_manager().key_mgr.get_using_key_md5()
key_md5 = context.get_openai_manager().key_mgr.get_using_key_md5()
if key_md5 not in self.usage:
self.usage[key_md5] = {}
@@ -92,8 +75,6 @@ class DataGatherer:
self.usage[key_md5]["image"][size] += 1
self.dump_to_db()
self.report_to_server("image", 1)
def get_text_length_of_key(self, key):
"""获取指定api-key (明文) 的文字总使用量(本地记录)"""
key_md5 = hashlib.md5(key.encode('utf-8')).hexdigest()
@@ -125,9 +106,9 @@ class DataGatherer:
return total
def dump_to_db(self):
pkg.utils.context.get_database_manager().dump_usage_json(self.usage)
context.get_database_manager().dump_usage_json(self.usage)
def load_from_db(self):
json_str = pkg.utils.context.get_database_manager().load_usage_json()
json_str = context.get_database_manager().load_usage_json()
if json_str is not None:
self.usage = json.loads(json_str)

pkg/audit/identifier.py Normal file
@@ -0,0 +1,83 @@
import os
import uuid
import json
import time
identifier = {
'host_id': '',
'instance_id': '',
'host_create_ts': 0,
'instance_create_ts': 0,
}
HOST_ID_FILE = os.path.expanduser('~/.qchatgpt/host_id.json')
INSTANCE_ID_FILE = 'res/instance_id.json'
def init():
global identifier
if not os.path.exists(os.path.expanduser('~/.qchatgpt')):
os.mkdir(os.path.expanduser('~/.qchatgpt'))
if not os.path.exists(HOST_ID_FILE):
new_host_id = 'host_'+str(uuid.uuid4())
new_host_create_ts = int(time.time())
with open(HOST_ID_FILE, 'w') as f:
json.dump({
'host_id': new_host_id,
'host_create_ts': new_host_create_ts
}, f)
identifier['host_id'] = new_host_id
identifier['host_create_ts'] = new_host_create_ts
else:
loaded_host_id = ''
loaded_host_create_ts = 0
with open(HOST_ID_FILE, 'r') as f:
file_content = json.load(f)
loaded_host_id = file_content['host_id']
loaded_host_create_ts = file_content['host_create_ts']
identifier['host_id'] = loaded_host_id
identifier['host_create_ts'] = loaded_host_create_ts
# 检查实例 id
if os.path.exists(INSTANCE_ID_FILE):
instance_id = {}
with open(INSTANCE_ID_FILE, 'r') as f:
instance_id = json.load(f)
if instance_id['host_id'] != identifier['host_id']: # 如果实例 id 不是当前主机的,删除
os.remove(INSTANCE_ID_FILE)
if not os.path.exists(INSTANCE_ID_FILE):
new_instance_id = 'instance_'+str(uuid.uuid4())
new_instance_create_ts = int(time.time())
with open(INSTANCE_ID_FILE, 'w') as f:
json.dump({
'host_id': identifier['host_id'],
'instance_id': new_instance_id,
'instance_create_ts': new_instance_create_ts
}, f)
identifier['instance_id'] = new_instance_id
identifier['instance_create_ts'] = new_instance_create_ts
else:
loaded_instance_id = ''
loaded_instance_create_ts = 0
with open(INSTANCE_ID_FILE, 'r') as f:
file_content = json.load(f)
loaded_instance_id = file_content['instance_id']
loaded_instance_create_ts = file_content['instance_create_ts']
identifier['instance_id'] = loaded_instance_id
identifier['instance_create_ts'] = loaded_instance_create_ts
def print_out():
global identifier
print(identifier)
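`pkg/audit/identifier.py` persists a random host id under `~/.qchatgpt` and an instance id under `res/`. A simplified sketch of the load-or-create step (field names shortened from the real files):

```python
import json
import os
import time
import uuid

def load_or_create_id(path: str, prefix: str) -> dict:
    """Return the persisted id record at `path`, creating it on first run."""
    if os.path.exists(path):
        with open(path, "r") as f:
            return json.load(f)
    record = {
        "id": "{}_{}".format(prefix, uuid.uuid4()),
        "create_ts": int(time.time()),
    }
    with open(path, "w") as f:
        json.dump(record, f)
    return record
```

The real module applies this twice, and additionally regenerates the instance id when its recorded `host_id` no longer matches the current host.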

pkg/config/__init__.py Normal file
@@ -0,0 +1,62 @@
import os
import shutil
import importlib
import logging
from .. import model as file_model
class PythonModuleConfigFile(file_model.ConfigFile):
"""Python模块配置文件"""
config_file_name: str = None
"""配置文件名"""
template_file_name: str = None
"""模板文件名"""
def __init__(self, config_file_name: str, template_file_name: str) -> None:
self.config_file_name = config_file_name
self.template_file_name = template_file_name
def exists(self) -> bool:
return os.path.exists(self.config_file_name)
async def create(self):
shutil.copyfile(self.template_file_name, self.config_file_name)
async def load(self) -> dict:
module_name = os.path.splitext(os.path.basename(self.config_file_name))[0]
module = importlib.import_module(module_name)
cfg = {}
allowed_types = (int, float, str, bool, list, dict)
for key in dir(module):
if key.startswith('__'):
continue
if not isinstance(getattr(module, key), allowed_types):
continue
cfg[key] = getattr(module, key)
# 从模板模块文件中进行补全
module_name = os.path.splitext(os.path.basename(self.template_file_name))[0]
module = importlib.import_module(module_name)
for key in dir(module):
if key.startswith('__'):
continue
if not isinstance(getattr(module, key), allowed_types):
continue
if key not in cfg:
cfg[key] = getattr(module, key)
return cfg
async def save(self, data: dict):
logging.warning('Python模块配置文件不支持保存')
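`PythonModuleConfigFile.load()` imports the config module and keeps only plain-data attributes (note it assumes the module is importable from `sys.path`, i.e. the working directory). The filtering step in isolation:

```python
def module_to_dict(module) -> dict:
    """Collect plain-data attributes from a config module, as load() does."""
    allowed_types = (int, float, str, bool, list, dict)
    return {
        key: getattr(module, key)
        for key in dir(module)
        if not key.startswith("__")                      # skip dunders
        and isinstance(getattr(module, key), allowed_types)  # skip functions, classes, modules
    }
```

Applied first to the user's config module and then to the template module, with template values only filling keys the user left out.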

pkg/config/manager.py Normal file
@@ -0,0 +1,23 @@
from . import model as file_model
from ..utils import context
class ConfigManager:
"""配置文件管理器"""
file: file_model.ConfigFile = None
"""配置文件实例"""
data: dict = None
"""配置数据"""
def __init__(self, cfg_file: file_model.ConfigFile) -> None:
self.file = cfg_file
self.data = {}
context.set_config_manager(self)
async def load_config(self):
self.data = await self.file.load()
async def dump_config(self):
await self.file.save(self.data)

pkg/config/model.py Normal file
@@ -0,0 +1,27 @@
import abc
class ConfigFile(metaclass=abc.ABCMeta):
"""配置文件抽象类"""
config_file_name: str = None
"""配置文件名"""
template_file_name: str = None
"""模板文件名"""
@abc.abstractmethod
def exists(self) -> bool:
pass
@abc.abstractmethod
async def create(self):
pass
@abc.abstractmethod
async def load(self) -> dict:
pass
@abc.abstractmethod
async def save(self, data: dict):
pass


@@ -5,11 +5,10 @@ import hashlib
import json
import logging
import time
from sqlite3 import Cursor
import sqlite3
import pkg.utils.context
from ..utils import context
class DatabaseManager:
@@ -22,7 +21,7 @@ class DatabaseManager:
self.reconnect()
pkg.utils.context.set_database_manager(self)
context.set_database_manager(self)
# 连接到数据库文件
def reconnect(self):
@@ -33,7 +32,7 @@ class DatabaseManager:
def close(self):
self.conn.close()
def __execute__(self, *args, **kwargs) -> Cursor:
def __execute__(self, *args, **kwargs) -> sqlite3.Cursor:
# logging.debug('SQL: {}'.format(sql))
logging.debug('SQL: {}'.format(args))
c = self.cursor.execute(*args, **kwargs)
@@ -92,7 +91,7 @@ class DatabaseManager:
`json` text not null
)
""")
print('Database initialized.')
# print('Database initialized.')
# session持久化
def persistence_session(self, subject_type: str, subject_number: int, create_timestamp: int,
@@ -145,11 +144,11 @@ class DatabaseManager:
# 从数据库加载还没过期的session数据
def load_valid_sessions(self) -> dict:
# 从数据库中加载所有还没过期的session
config = pkg.utils.context.get_config()
config = context.get_config_manager().data
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `last_interact_timestamp` > {}
""".format(int(time.time()) - config.session_expire_time))
""".format(int(time.time()) - config['session_expire_time']))
results = self.cursor.fetchall()
sessions = {}
for result in results:
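The expiry query above splices the cutoff timestamp in with `str.format`; since the value is an int this is safe here, but sqlite3's `?` placeholders express the same query without string interpolation. A self-contained sketch (in-memory database, hypothetical rows):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("create table sessions (name text, last_interact_timestamp integer)")
# one fresh session, one long-expired session
conn.execute("insert into sessions values ('group_123', ?)", (int(time.time()),))
conn.execute("insert into sessions values ('person_456', ?)", (int(time.time()) - 99999,))

session_expire_time = 1200
cutoff = int(time.time()) - session_expire_time
rows = conn.execute(
    "select name from sessions where last_interact_timestamp > ?", (cutoff,)
).fetchall()
```

Parameter binding also sidesteps quoting issues entirely if a string field ever joins the filter.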


@@ -1,10 +1,13 @@
import openai
import json
import logging
from .model import RequestBase
import openai
from openai.types.chat import chat_completion_message
from ..funcmgr import get_func_schema_list, execute_function, get_func, get_func_schema, ContentFunctionNotFoundError
from .model import RequestBase
from .. import funcmgr
from ...plugin import host
from ...utils import context
class ChatCompletionRequest(RequestBase):
@@ -13,13 +16,14 @@ class ChatCompletionRequest(RequestBase):
此类保证每一次返回的角色为assistant的信息的finish_reason一定为stop。
若有函数调用响应,本类的返回瀑布是:函数调用请求->函数调用结果->...->assistant的信息->stop。
"""
model: str
messages: list[dict[str, str]]
kwargs: dict
stopped: bool = False
pending_func_call: dict = None
pending_func_call: chat_completion_message.FunctionCall = None
pending_msg: str
@@ -30,7 +34,7 @@ class ChatCompletionRequest(RequestBase):
)
self.pending_msg = ""
def append_message(self, role: str, content: str, name: str=None):
def append_message(self, role: str, content: str, name: str=None, function_call: dict=None):
msg = {
"role": role,
"content": content
@@ -39,20 +43,25 @@ class ChatCompletionRequest(RequestBase):
if name is not None:
msg['name'] = name
if function_call is not None:
msg['function_call'] = function_call
self.messages.append(msg)
def __init__(
self,
client: openai.Client,
model: str,
messages: list[dict[str, str]],
**kwargs
):
self.client = client
self.model = model
self.messages = messages.copy()
self.kwargs = kwargs
self.req_func = openai.ChatCompletion.acreate
self.req_func = self.client.chat.completions.create
self.pending_func_call = None
@@ -74,45 +83,55 @@ class ChatCompletionRequest(RequestBase):
"messages": self.messages,
}
funcs = get_func_schema_list()
funcs = funcmgr.get_func_schema_list()
if len(funcs) > 0:
args['functions'] = funcs
# 拼接kwargs
args = {**args, **self.kwargs}
from openai.types.chat import chat_completion
resp = self._req(**args)
resp: chat_completion.ChatCompletion = self._req(**args)
choice0 = resp["choices"][0]
choice0 = resp.choices[0]
# 如果不是函数调用且finish_reason为stop则停止迭代
if 'function_call' not in choice0['message'] and choice0["finish_reason"] == "stop":
if choice0.finish_reason == 'stop': # and choice0["finish_reason"] == "stop"
self.stopped = True
if 'function_call' in choice0['message']:
self.pending_func_call = choice0['message']['function_call']
if hasattr(choice0.message, 'function_call') and choice0.message.function_call is not None:
self.pending_func_call = choice0.message.function_call
# self.append_message(
# role="assistant",
# content="function call: "+json.dumps(self.pending_func_call, ensure_ascii=False)
# )
self.append_message(
role="assistant",
content=choice0.message.content,
function_call=choice0.message.function_call
)
return {
"id": resp["id"],
"id": resp.id,
"choices": [
{
"index": choice0["index"],
"index": choice0.index,
"message": {
"role": "assistant",
"type": "function_call",
"content": None,
"function_call": choice0['message']['function_call']
"content": choice0.message.content,
"function_call": {
"name": choice0.message.function_call.name,
"arguments": choice0.message.function_call.arguments
}
},
"finish_reason": "function_call"
}
],
"usage": resp["usage"]
"usage": {
"prompt_tokens": resp.usage.prompt_tokens,
"completion_tokens": resp.usage.completion_tokens,
"total_tokens": resp.usage.total_tokens
}
}
else:
@@ -120,19 +139,23 @@ class ChatCompletionRequest(RequestBase):
# 普通回复一定处于最后方故不用再追加进内部messages
return {
"id": resp["id"],
"id": resp.id,
"choices": [
{
"index": choice0["index"],
"index": choice0.index,
"message": {
"role": "assistant",
"type": "text",
"content": choice0['message']['content']
"content": choice0.message.content
},
"finish_reason": "stop"
"finish_reason": choice0.finish_reason
}
],
"usage": resp["usage"]
"usage": {
"prompt_tokens": resp.usage.prompt_tokens,
"completion_tokens": resp.usage.completion_tokens,
"total_tokens": resp.usage.total_tokens
}
}
else: # 处理函数调用请求
@@ -140,20 +163,20 @@ class ChatCompletionRequest(RequestBase):
self.pending_func_call = None
func_name = cp_pending_func_call['name']
func_name = cp_pending_func_call.name
arguments = {}
try:
try:
arguments = json.loads(cp_pending_func_call['arguments'])
arguments = json.loads(cp_pending_func_call.arguments)
# 若不是json格式的异常处理
except json.decoder.JSONDecodeError:
# 获取函数的参数列表
func_schema = get_func_schema(func_name)
func_schema = funcmgr.get_func_schema(func_name)
arguments = {
func_schema['parameters']['required'][0]: cp_pending_func_call['arguments']
func_schema['parameters']['required'][0]: cp_pending_func_call.arguments
}
logging.info("执行函数调用: name={}, arguments={}".format(func_name, arguments))
@@ -161,13 +184,23 @@ class ChatCompletionRequest(RequestBase):
# 执行函数调用
ret = ""
try:
ret = execute_function(func_name, arguments)
ret = funcmgr.execute_function(func_name, arguments)
logging.info("函数执行完成。")
except Exception as e:
ret = "error: execute function failed: {}".format(str(e))
logging.error("函数执行失败: {}".format(str(e)))
# 上报数据
plugin_info = host.get_plugin_info_for_audit(func_name.split('-')[0])
audit_func_name = func_name.split('-')[1]
audit_func_desc = funcmgr.get_func_schema(func_name)['description']
context.get_center_v2_api().usage.post_function_record(
plugin=plugin_info,
function_name=audit_func_name,
function_description=audit_func_desc,
)
self.append_message(
role="function",
content=json.dumps(ret, ensure_ascii=False),
@@ -195,6 +228,5 @@ class ChatCompletionRequest(RequestBase):
}
}
except ContentFunctionNotFoundError:
except funcmgr.ContentFunctionNotFoundError:
raise Exception("没有找到函数: {}".format(func_name))


@@ -1,9 +1,10 @@
import openai
from openai.types import completion, completion_choice
from .model import RequestBase
from . import model
class CompletionRequest(RequestBase):
class CompletionRequest(model.RequestBase):
"""调用Completion接口的请求类。
调用方可以一直next completion直到finish_reason为stop。
@@ -17,10 +18,12 @@ class CompletionRequest(RequestBase):
def __init__(
self,
client: openai.Client,
model: str,
messages: list[dict[str, str]],
**kwargs
):
self.client = client
self.model = model
self.prompt = ""
@@ -31,7 +34,7 @@ class CompletionRequest(RequestBase):
self.kwargs = kwargs
self.req_func = openai.Completion.acreate
self.req_func = self.client.completions.create
def __iter__(self):
return self
@@ -63,49 +66,35 @@ class CompletionRequest(RequestBase):
if self.stopped:
raise StopIteration()
resp = self._req(
resp: completion.Completion = self._req(
model=self.model,
prompt=self.prompt,
**self.kwargs
)
if resp["choices"][0]["finish_reason"] == "stop":
if resp.choices[0].finish_reason == "stop":
self.stopped = True
choice0 = resp["choices"][0]
choice0: completion_choice.CompletionChoice = resp.choices[0]
self.prompt += choice0["text"]
self.prompt += choice0.text
return {
"id": resp["id"],
"id": resp.id,
"choices": [
{
"index": choice0["index"],
"index": choice0.index,
"message": {
"role": "assistant",
"type": "text",
"content": choice0["text"]
"content": choice0.text
},
"finish_reason": choice0["finish_reason"]
"finish_reason": choice0.finish_reason
}
],
"usage": resp["usage"]
}
if __name__ == "__main__":
import os
openai.api_key = os.environ["OPENAI_API_KEY"]
for resp in CompletionRequest(
model="text-davinci-003",
messages=[
{
"role": "user",
"content": "Hello, who are you?"
"usage": {
"prompt_tokens": resp.usage.prompt_tokens,
"completion_tokens": resp.usage.completion_tokens,
"total_tokens": resp.usage.total_tokens
}
]
):
print(resp)
if resp["choices"][0]["finish_reason"] == "stop":
break
}


@@ -1,46 +1,35 @@
# 定义不同接口请求的模型
import threading
import asyncio
import logging
import openai
from ...utils import context
class RequestBase:
client: openai.Client
req_func: callable
def __init__(self, *args, **kwargs):
raise NotImplementedError
def _next_key(self):
switched, name = context.get_openai_manager().key_mgr.auto_switch()
logging.debug("切换api-key: switched={}, name={}".format(switched, name))
self.client.api_key = context.get_openai_manager().key_mgr.get_using_key()
def _req(self, **kwargs):
"""处理代理问题"""
logging.debug("请求接口参数: %s", str(kwargs))
config = context.get_config_manager().data
ret: dict = {}
exception: Exception = None
ret = self.req_func(**kwargs)
logging.debug("接口请求返回:%s", str(ret))
async def awrapper(**kwargs):
nonlocal ret, exception
try:
ret = await self.req_func(**kwargs)
logging.debug("接口请求返回:%s", str(ret))
return ret
except Exception as e:
exception = e
loop = asyncio.new_event_loop()
thr = threading.Thread(
target=loop.run_until_complete,
args=(awrapper(**kwargs),)
)
thr.start()
thr.join()
if exception is not None:
raise exception
if config['switch_strategy'] == 'active':
self._next_key()
return ret
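The removed half of this hunk bridged synchronous code to the async v0 SDK by running the coroutine on a private thread with its own event loop; with the v1 client being synchronous, that indirection goes away. The removed pattern, isolated (hypothetical function name):

```python
import asyncio
import threading

def run_coro_blocking(coro):
    """Run a coroutine to completion from sync code, on a private thread/loop."""
    result = {}

    def worker():
        loop = asyncio.new_event_loop()
        try:
            result["value"] = loop.run_until_complete(coro)
        except Exception as e:
            result["error"] = e  # carried back to the caller's thread
        finally:
            loop.close()

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    if "error" in result:
        raise result["error"]
    return result["value"]
```

With `openai>=1.0`, `client.chat.completions.create(...)` is a plain blocking call, which is why `_req` can now call `self.req_func(**kwargs)` directly.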


@@ -1,13 +1,14 @@
# 多情景预设值管理
import json
import logging
import config
import os
from ..utils import context
# __current__ = "default"
# """当前默认使用的情景预设的名称
# 由管理员使用`!default <名称>`令切换
# 由管理员使用`!default <名称>`令切换
# """
# __prompts_from_files__ = {}
@@ -16,10 +17,6 @@ import os
# __scenario_from_files__ = {}
__universal_first_reply__ = "ok, I'll follow your commands."
"""通用首次回复"""
class ScenarioMode:
"""情景预设模式抽象类"""
@@ -66,29 +63,24 @@ class NormalScenarioMode(ScenarioMode):
"""普通情景预设模式"""
def __init__(self):
global __universal_first_reply__
config = context.get_config_manager().data
# 加载config中的default_prompt值
if type(config.default_prompt) == str:
if type(config['default_prompt']) == str:
self.using_prompt_name = "default"
self.prompts = {"default": [
{
"role": "user",
"content": config.default_prompt
},{
"role": "assistant",
"content": __universal_first_reply__
"role": "system",
"content": config['default_prompt']
}
]}
elif type(config.default_prompt) == dict:
for key in config.default_prompt:
elif type(config['default_prompt']) == dict:
for key in config['default_prompt']:
self.prompts[key] = [
{
"role": "user",
"content": config.default_prompt[key]
},{
"role": "assistant",
"content": __universal_first_reply__
"role": "system",
"content": config['default_prompt'][key]
}
]
@@ -98,11 +90,8 @@ class NormalScenarioMode(ScenarioMode):
with open(os.path.join("prompts", file), encoding="utf-8") as f:
self.prompts[file] = [
{
"role": "user",
"role": "system",
"content": f.read()
},{
"role": "assistant",
"content": __universal_first_reply__
}
]
@@ -137,9 +126,9 @@ def register_all():
def mode_inst() -> ScenarioMode:
"""获取指定名称的情景预设模式对象"""
import config
config = context.get_config_manager().data
if config.preset_mode == "default":
config.preset_mode = "normal"
if config['preset_mode'] == "default":
config['preset_mode'] = "normal"
return scenario_mode_mapping[config.preset_mode]
return scenario_mode_mapping[config['preset_mode']]


@@ -1,8 +1,7 @@
# 封装了function calling的一些支持函数
import logging
from pkg.plugin import host
from ..plugin import host
class ContentFunctionNotFoundError(Exception):


@@ -2,8 +2,8 @@
import hashlib
import logging
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
from ..plugin import host as plugin_host
from ..plugin import models as plugin_models
class KeysManager:
@@ -32,16 +32,9 @@ class KeysManager:
return hashlib.md5(self.using_key.encode('utf-8')).hexdigest()
def __init__(self, api_key):
if type(api_key) is dict:
self.api_key = api_key
elif type(api_key) is str:
self.api_key = {
"default": api_key
}
elif type(api_key) is list:
for i in range(len(api_key)):
self.api_key[str(i)] = api_key[i]
assert type(api_key) == dict
self.api_key = api_key
# 从usage中删除未加载的api-key的记录
# 不删了也许会运行时添加曾经有记录的api-key
@@ -54,11 +47,28 @@ class KeysManager:
是否切换成功, 切换后的api-key的别名
"""
index = 0
for key_name in self.api_key:
if self.api_key[key_name] == self.using_key:
break
index += 1
# 从当前key开始向后轮询
start_index = index
index += 1
if index >= len(self.api_key):
index = 0
while index != start_index:
key_name = list(self.api_key.keys())[index]
if self.api_key[key_name] not in self.exceeded:
self.using_key = self.api_key[key_name]
logging.info("使用api-key:" + key_name)
logging.debug("使用api-key:" + key_name)
# 触发插件事件
args = {
@@ -69,10 +79,14 @@ class KeysManager:
return True, key_name
self.using_key = list(self.api_key.values())[0]
logging.info("使用api-key:" + list(self.api_key.keys())[0])
index += 1
if index >= len(self.api_key):
index = 0
return False, ""
self.using_key = list(self.api_key.values())[start_index]
logging.debug("使用api-key:" + list(self.api_key.keys())[start_index])
return False, list(self.api_key.keys())[start_index]
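The rewritten `auto_switch` now polls forward from the current key, skipping quota-exceeded keys, and reports failure with the starting key's alias. The rotation logic as a standalone sketch (hypothetical function name):

```python
def next_usable_key(keys: dict, using: str, exceeded: set):
    """Round-robin from the key after `using`; returns (switched, alias)."""
    names = list(keys)
    values = list(keys.values())
    start = values.index(using) if using in values else 0
    n = len(names)
    # walk every other slot once; never re-test the current key
    for step in range(1, n):
        idx = (start + step) % n
        if values[idx] not in exceeded:
            return True, names[idx]
    return False, names[start]
```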
def add(self, key_name, key):
self.api_key[key_name] = key


@@ -1,13 +1,13 @@
import logging
import openai
from openai.types import images_response
import pkg.openai.keymgr
import pkg.utils.context
import pkg.audit.gatherer
from pkg.openai.modelmgr import select_request_cls
from pkg.openai.api.model import RequestBase
from ..openai import keymgr
from ..utils import context
from ..audit import gatherer
from ..openai import modelmgr
from ..openai.api import model as api_model
class OpenAIInteract:
@@ -16,39 +16,44 @@ class OpenAIInteract:
将文字接口和图片接口封装供调用方使用
"""
key_mgr: pkg.openai.keymgr.KeysManager = None
key_mgr: keymgr.KeysManager = None
audit_mgr: pkg.audit.gatherer.DataGatherer = None
audit_mgr: gatherer.DataGatherer = None
default_image_api_params = {
"size": "256x256",
}
client: openai.Client = None
def __init__(self, api_key: str):
self.key_mgr = pkg.openai.keymgr.KeysManager(api_key)
self.audit_mgr = pkg.audit.gatherer.DataGatherer()
self.key_mgr = keymgr.KeysManager(api_key)
self.audit_mgr = gatherer.DataGatherer()
logging.info("文字总使用量:%d", self.audit_mgr.get_total_text_length())
# logging.info("文字总使用量:%d", self.audit_mgr.get_total_text_length())
openai.api_key = self.key_mgr.get_using_key()
self.client = openai.Client(
api_key=self.key_mgr.get_using_key(),
base_url=openai.base_url
)
pkg.utils.context.set_openai_manager(self)
context.set_openai_manager(self)
def request_completion(self, messages: list):
"""请求补全接口回复=
"""
# 选择接口请求类
config = pkg.utils.context.get_config()
config = context.get_config_manager().data
request: RequestBase
request: api_model.RequestBase
model: str = config.completion_api_params['model']
model: str = config['completion_api_params']['model']
cp_parmas = config.completion_api_params.copy()
cp_parmas = config['completion_api_params'].copy()
del cp_parmas['model']
request = select_request_cls(model, messages, cp_parmas)
request = modelmgr.select_request_cls(self.client, model, messages, cp_parmas)
# 请求接口
for resp in request:
@@ -61,7 +66,7 @@ class OpenAIInteract:
yield resp
def request_image(self, prompt) -> dict:
def request_image(self, prompt) -> images_response.ImagesResponse:
"""请求图片接口回复
Parameters:
@@ -70,10 +75,10 @@ class OpenAIInteract:
Returns:
dict: 响应
"""
config = pkg.utils.context.get_config()
params = config.image_api_params
config = context.get_config_manager().data
params = config['image_api_params']
response = openai.Image.create(
response = self.client.images.generate(
prompt=prompt,
n=1,
**params


@@ -5,34 +5,44 @@ ChatCompletion - gpt-3.5-turbo 等模型
Completion - text-davinci-003 等模型
此模块封装此两个接口的请求实现,为上层提供统一的调用方式
"""
import openai, logging, threading, asyncio
import openai.error as aiE
import tiktoken
import openai
from pkg.openai.api.model import RequestBase
from pkg.openai.api.completion import CompletionRequest
from pkg.openai.api.chat_completion import ChatCompletionRequest
from ..openai.api import model as api_model
from ..openai.api import completion as api_completion
from ..openai.api import chat_completion as api_chat_completion
COMPLETION_MODELS = {
'text-davinci-003',
'text-davinci-002',
'code-davinci-002',
'code-cushman-001',
'text-curie-001',
'text-babbage-001',
'text-ada-001',
"gpt-3.5-turbo-instruct",
}
CHAT_COMPLETION_MODELS = {
'gpt-3.5-turbo',
'gpt-3.5-turbo-16k',
'gpt-3.5-turbo-0613',
'gpt-3.5-turbo-16k-0613',
# 'gpt-3.5-turbo-0301',
'gpt-4',
'gpt-4-0613',
'gpt-4-32k',
'gpt-4-32k-0613'
# GPT 4 系列
"gpt-4-1106-preview",
"gpt-4-vision-preview",
"gpt-4",
"gpt-4-32k",
"gpt-4-0613",
"gpt-4-32k-0613",
"gpt-4-0314", # legacy
"gpt-4-32k-0314", # legacy
# GPT 3.5 系列
"gpt-3.5-turbo-1106",
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-3.5-turbo-0613", # legacy
"gpt-3.5-turbo-16k-0613", # legacy
"gpt-3.5-turbo-0301", # legacy
# One-API 接入
"SparkDesk",
"chatglm_pro",
"chatglm_std",
"chatglm_lite",
"qwen-v1",
"qwen-plus-v1",
"ERNIE-Bot",
"ERNIE-Bot-turbo",
"gemini-pro",
}
EDIT_MODELS = {
@@ -44,11 +54,11 @@ IMAGE_MODELS = {
}
def select_request_cls(model_name: str, messages: list, args: dict) -> RequestBase:
def select_request_cls(client: openai.Client, model_name: str, messages: list, args: dict) -> api_model.RequestBase:
if model_name in CHAT_COMPLETION_MODELS:
return ChatCompletionRequest(model_name, messages, **args)
return api_chat_completion.ChatCompletionRequest(client, model_name, messages, **args)
elif model_name in COMPLETION_MODELS:
return CompletionRequest(model_name, messages, **args)
return api_completion.CompletionRequest(client, model_name, messages, **args)
raise ValueError("不支持模型[{}],请检查配置文件".format(model_name))
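`select_request_cls` routes a model name to the matching request wrapper via the two model-name sets. The dispatch reduced to its essentials (trimmed sets, hypothetical names):

```python
CHAT_COMPLETION_MODELS = {"gpt-3.5-turbo", "gpt-3.5-turbo-1106", "gpt-4", "gpt-4-1106-preview"}
COMPLETION_MODELS = {"gpt-3.5-turbo-instruct", "text-davinci-003"}

def select_endpoint(model_name: str) -> str:
    """Route a model name to the API family it belongs to."""
    if model_name in CHAT_COMPLETION_MODELS:
        return "chat.completions"
    if model_name in COMPLETION_MODELS:
        return "completions"
    raise ValueError("unsupported model: {}".format(model_name))
```

Set membership keeps the dispatch O(1), and unknown names fail loudly instead of hitting the API with a model the wrappers cannot parse.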
@@ -66,6 +76,15 @@ def count_chat_completion_tokens(messages: list, model: str) -> int:
"gpt-4-32k-0314",
"gpt-4-0613",
"gpt-4-32k-0613",
"SparkDesk",
"chatglm_pro",
"chatglm_std",
"chatglm_lite",
"qwen-v1",
"qwen-plus-v1",
"ERNIE-Bot",
"ERNIE-Bot-turbo",
"gemini-pro",
}:
tokens_per_message = 3
tokens_per_name = 1
@@ -112,6 +131,7 @@ def count_completion_tokens(messages: list, model: str) -> int:
def count_tokens(messages: list, model: str):
if model in CHAT_COMPLETION_MODELS:
return count_chat_completion_tokens(messages, model)
elif model in COMPLETION_MODELS:


@@ -8,15 +8,13 @@ import threading
import time
import json
import pkg.openai.manager
import pkg.openai.modelmgr
import pkg.database.manager
import pkg.utils.context
from ..openai import manager as openai_manager
from ..openai import modelmgr as openai_modelmgr
from ..database import manager as database_manager
from ..utils import context as context
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
from pkg.openai.modelmgr import count_tokens
from ..plugin import host as plugin_host
from ..plugin import models as plugin_models
# 运行时保存的所有session
sessions = {}
@@ -27,57 +25,27 @@ class SessionOfflineStatus:
EXPLICITLY_CLOSED = 'explicitly_closed'
# 重置session.prompt
def reset_session_prompt(session_name, prompt):
# 备份原始数据
bak_path = 'logs/{}-{}.bak'.format(
session_name,
time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
)
f = open(bak_path, 'w+')
f.write(prompt)
f.close()
# 生成新数据
config = pkg.utils.context.get_config()
prompt = [
{
'role': 'system',
'content': config.default_prompt['default'] if type(config.default_prompt) == dict else config.default_prompt
}
]
# 警告
logging.warning(
"""
用户[{}]的数据已被重置,有可能是因为数据版本过旧或存储错误
原始数据将备份在:
{}""".format(session_name, bak_path)
) # 为保证多行文本格式正确故无缩进
return prompt
# 从数据加载session
def load_sessions():
"""从数据库加载sessions"""
global sessions
db_inst = pkg.utils.context.get_database_manager()
db_inst = context.get_database_manager()
session_data = db_inst.load_valid_sessions()
for session_name in session_data:
logging.info('加载session: {}'.format(session_name))
logging.debug('加载session: {}'.format(session_name))
temp_session = Session(session_name)
temp_session.name = session_name
temp_session.create_timestamp = session_data[session_name]['create_timestamp']
temp_session.last_interact_timestamp = session_data[session_name]['last_interact_timestamp']
try:
temp_session.prompt = json.loads(session_data[session_name]['prompt'])
temp_session.token_counts = json.loads(session_data[session_name]['token_counts'])
except Exception:
temp_session.prompt = reset_session_prompt(session_name, session_data[session_name]['prompt'])
temp_session.persistence()
temp_session.prompt = json.loads(session_data[session_name]['prompt'])
temp_session.token_counts = json.loads(session_data[session_name]['token_counts'])
temp_session.default_prompt = json.loads(session_data[session_name]['default_prompt']) if \
session_data[session_name]['default_prompt'] else []
@@ -172,17 +140,17 @@ class Session:
if self.create_timestamp != create_timestamp or self not in sessions.values():
return
config = pkg.utils.context.get_config()
if int(time.time()) - self.last_interact_timestamp > config.session_expire_time:
config = context.get_config_manager().data
if int(time.time()) - self.last_interact_timestamp > config['session_expire_time']:
logging.info('session {} 已过期'.format(self.name))
# 触发插件事件
args = {
'session_name': self.name,
'session': self,
'session_expire_time': config.session_expire_time
'session_expire_time': config['session_expire_time']
}
event = pkg.plugin.host.emit(plugin_models.SessionExpired, **args)
event = plugin_host.emit(plugin_models.SessionExpired, **args)
if event.is_prevented_default():
return
@@ -194,8 +162,15 @@ class Session:
# 请求回复
# 这个函数是阻塞的
def append(self, text: str=None) -> str:
"""向session中添加一条消息返回接口回复"""
def query(self, text: str=None) -> tuple[str, str, list[str]]:
"""向session中添加一条消息返回接口回复
Args:
text (str): 用户消息
Returns:
tuple[str, str]: (接口回复, finish_reason, 已调用的函数列表)
"""
self.last_interact_timestamp = int(time.time())
@@ -207,12 +182,12 @@ class Session:
'default_prompt': self.default_prompt,
}
event = pkg.plugin.host.emit(plugin_models.SessionFirstMessageReceived, **args)
event = plugin_host.emit(plugin_models.SessionFirstMessageReceived, **args)
if event.is_prevented_default():
return None
return None, None, None
config = pkg.utils.context.get_config()
max_length = config.prompt_submit_length
config = context.get_config_manager().data
max_length = config['prompt_submit_length']
local_default_prompt = self.default_prompt.copy()
local_prompt = self.prompt.copy()
@@ -225,7 +200,7 @@ class Session:
'text_message': text,
}
event = pkg.plugin.host.emit(plugin_models.PromptPreProcessing, **args)
event = plugin_host.emit(plugin_models.PromptPreProcessing, **args)
if event.get_return_value('default_prompt') is not None:
local_default_prompt = event.get_return_value('default_prompt')
@@ -236,6 +211,7 @@ class Session:
if event.get_return_value('text_message') is not None:
text = event.get_return_value('text_message')
# 裁剪messages到合适长度
prompts, _ = self.cut_out(text, max_length, local_default_prompt, local_prompt)
res_text = ""
@@ -244,26 +220,65 @@ class Session:
total_tokens = 0
for resp in pkg.utils.context.get_openai_manager().request_completion(prompts):
if resp['choices'][0]['message']['type'] == 'text': # 普通回复
res_text += resp['choices'][0]['message']['content']
finish_reason: str = ""
funcs = []
trace_func_calls = config['trace_function_calls']
botmgr = context.get_qqbot_manager()
session_name_spt: list[str] = self.name.split("_")
pending_res_text = ""
start_time = time.time()
# TODO 对不起,我知道这样非常非常屎山,但我之后会重构的
for resp in context.get_openai_manager().request_completion(prompts):
if pending_res_text != "":
botmgr.adapter.send_message(
session_name_spt[0],
session_name_spt[1],
pending_res_text
)
pending_res_text = ""
finish_reason = resp['choices'][0]['finish_reason']
if resp['choices'][0]['message']['role'] == "assistant" and resp['choices'][0]['message']['content'] is not None: # 包含纯文本响应
if not trace_func_calls:
res_text += resp['choices'][0]['message']['content']
else:
res_text = resp['choices'][0]['message']['content']
pending_res_text = resp['choices'][0]['message']['content']
total_tokens += resp['usage']['total_tokens']
pending_msgs.append(
{
"role": "assistant",
"content": resp['choices'][0]['message']['content']
}
)
msg = {
"role": "assistant",
"content": resp['choices'][0]['message']['content']
}
elif resp['choices'][0]['message']['type'] == 'function_call':
if 'function_call' in resp['choices'][0]['message']:
msg['function_call'] = json.dumps(resp['choices'][0]['message']['function_call'])
pending_msgs.append(msg)
if resp['choices'][0]['message']['type'] == 'function_call':
# self.prompt.append(
# {
# "role": "assistant",
# "content": "function call: "+json.dumps(resp['choices'][0]['message']['function_call'])
# }
# )
if trace_func_calls:
botmgr.adapter.send_message(
session_name_spt[0],
session_name_spt[1],
"调用函数 "+resp['choices'][0]['message']['function_call']['name'] + "..."
)
total_tokens += resp['usage']['total_tokens']
elif resp['choices'][0]['message']['type'] == 'function_return':
@@ -276,10 +291,11 @@ class Session:
# )
# total_tokens += resp['usage']['total_tokens']
funcs.append(
resp['choices'][0]['message']['function_name']
)
pass
# 向API请求补全
# message, total_token = pkg.utils.context.get_openai_manager().request_completion(
# prompts,
@@ -305,7 +321,27 @@ class Session:
self.just_switched_to_exist_session = False
self.set_ongoing()
return res_ans if res_ans[0] != '\n' else res_ans[1:]
# 上报使用量数据
session_type = session_name_spt[0]
session_id = session_name_spt[1]
ability_provider = "QChatGPT.Text"
usage = total_tokens
model_name = context.get_config_manager().data['completion_api_params']['model']
response_seconds = int(time.time() - start_time)
retry_times = -1 # 暂不记录
context.get_center_v2_api().usage.post_query_record(
session_type=session_type,
session_id=session_id,
query_ability_provider=ability_provider,
usage=usage,
model_name=model_name,
response_seconds=response_seconds,
retry_times=retry_times
)
return res_ans if res_ans[0] != '\n' else res_ans[1:], finish_reason, funcs
# 删除上一回合并返回上一回合的问题
def undo(self) -> str:
@@ -337,13 +373,13 @@ class Session:
# 包装目前的对话回合内容
changable_prompts = []
use_model = pkg.utils.context.get_config().completion_api_params['model']
use_model = context.get_config_manager().data['completion_api_params']['model']
ptr = len(prompt) - 1
# 直接从后向前扫描拼接,不管是否是整回合
while ptr >= 0:
if count_tokens(prompt[ptr:ptr+1]+changable_prompts, use_model) > max_tokens:
if openai_modelmgr.count_tokens(prompt[ptr:ptr+1]+changable_prompts, use_model) > max_tokens:
break
changable_prompts.insert(0, prompt[ptr])
@@ -364,14 +400,14 @@ class Session:
logging.debug("cut_out: {}".format(json.dumps(result_prompt, ensure_ascii=False, indent=4)))
return result_prompt, count_tokens(changable_prompts, use_model)
return result_prompt, openai_modelmgr.count_tokens(changable_prompts, use_model)
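The `cut_out` trimming above walks the prompt from the newest message backwards and keeps whatever fits the token budget. A standalone sketch of that loop (the token counter is a stand-in passed as a parameter, not the real `count_tokens`):

```python
def cut_out(messages: list[dict], max_tokens: int, count_tokens) -> list[dict]:
    """Scan from the newest message backwards, keeping as many as fit the budget."""
    kept: list[dict] = []
    ptr = len(messages) - 1
    while ptr >= 0:
        # stop as soon as adding one more (older) message would exceed the budget
        if count_tokens(messages[ptr:ptr + 1] + kept) > max_tokens:
            break
        kept.insert(0, messages[ptr])
        ptr -= 1
    return kept
```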
# 持久化session
def persistence(self):
if self.prompt == self.get_default_prompt():
return
db_inst = pkg.utils.context.get_database_manager()
db_inst = context.get_database_manager()
name_spt = self.name.split('_')
@@ -393,12 +429,12 @@ class Session:
}
# 此事件不支持阻止默认行为
_ = pkg.plugin.host.emit(plugin_models.SessionExplicitReset, **args)
_ = plugin_host.emit(plugin_models.SessionExplicitReset, **args)
pkg.utils.context.get_database_manager().explicit_close_session(self.name, self.create_timestamp)
context.get_database_manager().explicit_close_session(self.name, self.create_timestamp)
if expired:
pkg.utils.context.get_database_manager().set_session_expired(self.name, self.create_timestamp)
context.get_database_manager().set_session_expired(self.name, self.create_timestamp)
if not persist: # 不要求保持default prompt
self.default_prompt = self.get_default_prompt(use_prompt)
@@ -415,11 +451,11 @@ class Session:
# 将本session的数据库状态设置为on_going
def set_ongoing(self):
pkg.utils.context.get_database_manager().set_session_ongoing(self.name, self.create_timestamp)
context.get_database_manager().set_session_ongoing(self.name, self.create_timestamp)
# 切换到上一个session
def last_session(self):
last_one = pkg.utils.context.get_database_manager().last_session(self.name, self.last_interact_timestamp)
last_one = context.get_database_manager().last_session(self.name, self.last_interact_timestamp)
if last_one is None:
return None
else:
@@ -427,12 +463,10 @@ class Session:
self.create_timestamp = last_one['create_timestamp']
self.last_interact_timestamp = last_one['last_interact_timestamp']
try:
self.prompt = json.loads(last_one['prompt'])
self.token_counts = json.loads(last_one['token_counts'])
except json.decoder.JSONDecodeError:
self.prompt = reset_session_prompt(self.name, last_one['prompt'])
self.persistence()
self.prompt = json.loads(last_one['prompt'])
self.token_counts = json.loads(last_one['token_counts'])
self.default_prompt = json.loads(last_one['default_prompt']) if last_one['default_prompt'] else []
self.just_switched_to_exist_session = True
@@ -440,7 +474,7 @@ class Session:
# 切换到下一个session
def next_session(self):
next_one = pkg.utils.context.get_database_manager().next_session(self.name, self.last_interact_timestamp)
next_one = context.get_database_manager().next_session(self.name, self.last_interact_timestamp)
if next_one is None:
return None
else:
@@ -448,25 +482,23 @@ class Session:
self.create_timestamp = next_one['create_timestamp']
self.last_interact_timestamp = next_one['last_interact_timestamp']
try:
self.prompt = json.loads(next_one['prompt'])
self.token_counts = json.loads(next_one['token_counts'])
except json.decoder.JSONDecodeError:
self.prompt = reset_session_prompt(self.name, next_one['prompt'])
self.persistence()
self.prompt = json.loads(next_one['prompt'])
self.token_counts = json.loads(next_one['token_counts'])
self.default_prompt = json.loads(next_one['default_prompt']) if next_one['default_prompt'] else []
self.just_switched_to_exist_session = True
return self
def list_history(self, capacity: int = 10, page: int = 0):
return pkg.utils.context.get_database_manager().list_history(self.name, capacity, page)
return context.get_database_manager().list_history(self.name, capacity, page)
def delete_history(self, index: int) -> bool:
return pkg.utils.context.get_database_manager().delete_history(self.name, index)
return context.get_database_manager().delete_history(self.name, index)
def delete_all_history(self) -> bool:
return pkg.utils.context.get_database_manager().delete_all_history(self.name)
return context.get_database_manager().delete_all_history(self.name)
def draw_image(self, prompt: str):
return pkg.utils.context.get_openai_manager().request_image(prompt)
return context.get_openai_manager().request_image(prompt)


@@ -7,14 +7,19 @@ import pkgutil
import sys
import shutil
import traceback
import time
import re
import pkg.utils.updater as updater
import pkg.utils.context as context
import pkg.plugin.switch as switch
import pkg.plugin.settings as settings
import pkg.qqbot.adapter as msadapter
from ..utils import updater as updater
from ..utils import network as network
from ..utils import context as context
from ..plugin import switch as switch
from ..plugin import settings as settings
from ..qqbot import adapter as msadapter
from ..plugin import metadata as metadata
from mirai import Mirai
import requests
from CallingGPT.session.session import Session
@@ -65,6 +70,8 @@ def generate_plugin_order():
def iter_plugins():
"""按照顺序迭代插件"""
for plugin_name in __plugins_order__:
if plugin_name not in __plugins__:
continue
yield __plugins__[plugin_name]
@@ -77,31 +84,42 @@ def iter_plugins_name():
__current_module_path__ = ""
def walk_plugin_path(module, prefix='', path_prefix=''):
def walk_plugin_path(module, prefix="", path_prefix=""):
global __current_module_path__
"""遍历插件路径"""
for item in pkgutil.iter_modules(module.__path__):
if item.ispkg:
logging.debug("扫描插件包: plugins/{}".format(path_prefix + item.name))
walk_plugin_path(__import__(module.__name__ + '.' + item.name, fromlist=['']),
prefix + item.name + '.', path_prefix + item.name + '/')
walk_plugin_path(
__import__(module.__name__ + "." + item.name, fromlist=[""]),
prefix + item.name + ".",
path_prefix + item.name + "/",
)
else:
try:
logging.debug("扫描插件模块: plugins/{}".format(path_prefix + item.name + '.py'))
__current_module_path__ = "plugins/"+path_prefix + item.name + '.py'
logging.debug(
"扫描插件模块: plugins/{}".format(path_prefix + item.name + ".py")
)
__current_module_path__ = "plugins/" + path_prefix + item.name + ".py"
importlib.import_module(module.__name__ + '.' + item.name)
logging.debug('加载模块: plugins/{} 成功'.format(path_prefix + item.name + '.py'))
importlib.import_module(module.__name__ + "." + item.name)
logging.debug(
"加载模块: plugins/{} 成功".format(path_prefix + item.name + ".py")
)
except:
logging.error('加载模块: plugins/{} 失败: {}'.format(path_prefix + item.name + '.py', sys.exc_info()))
logging.error(
"加载模块: plugins/{} 失败: {}".format(
path_prefix + item.name + ".py", sys.exc_info()
)
)
traceback.print_exc()
def load_plugins():
"""加载插件"""
logging.info("加载插件")
logging.debug("加载插件")
PluginHost()
walk_plugin_path(__import__('plugins'))
walk_plugin_path(__import__("plugins"))
logging.debug(__plugins__)
@@ -113,24 +131,36 @@ def load_plugins():
# 加载插件顺序
settings.load_settings()
logging.debug("registered plugins: {}".format(__plugins__))
# 输出已注册的内容函数列表
logging.debug("registered content functions: {}".format(__callable_functions__))
logging.debug("function instance map: {}".format(__function_inst_map__))
# 迁移插件源地址记录
metadata.do_plugin_git_repo_migrate()
def initialize_plugins():
"""初始化插件"""
logging.info("初始化插件")
logging.debug("初始化插件")
import pkg.plugin.models as models
successfully_initialized_plugins = []
for plugin in iter_plugins():
# if not plugin['enabled']:
# continue
try:
models.__current_registering_plugin__ = plugin['name']
plugin['instance'] = plugin["class"](plugin_host=context.get_plugin_host())
logging.info("插件 {} 已初始化".format(plugin['name']))
models.__current_registering_plugin__ = plugin["name"]
plugin["instance"] = plugin["class"](plugin_host=context.get_plugin_host())
# logging.info("插件 {} 已初始化".format(plugin['name']))
successfully_initialized_plugins.append(plugin["name"])
except:
logging.error("插件{}初始化时发生错误: {}".format(plugin['name'], sys.exc_info()))
logging.error("插件{}初始化时发生错误: {}".format(plugin["name"], sys.exc_info()))
logging.debug(traceback.format_exc())
logging.info("以下插件已初始化: {}".format(", ".join(successfully_initialized_plugins)))
def unload_plugins():
@@ -149,89 +179,216 @@ def unload_plugins():
# logging.error("插件{}卸载时发生错误: {}".format(plugin['name'], sys.exc_info()))
def install_plugin(repo_url: str):
"""安装插件,从git储存库获取并解决依赖"""
try:
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.ensure_dulwich()
except:
pass
def get_github_plugin_repo_label(repo_url: str) -> list[str]:
"""获取username, repo"""
try:
import dulwich
except ModuleNotFoundError:
raise Exception("dulwich模块未安装,请查看 https://github.com/RockChinQ/QChatGPT/issues/77")
# 提取 username/repo , 正则表达式
repo = re.findall(
r"(?:https?://github\.com/|git@github\.com:)([^/]+/[^/]+?)(?:\.git|/|$)",
repo_url,
)
from dulwich import porcelain
if len(repo) > 0: # github
return repo[0].split("/")
else:
return None
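The `get_github_plugin_repo_label` helper above relies on a single regex to accept both HTTPS and SSH GitHub URLs. A self-contained sketch of the same extraction:

```python
import re

def get_github_plugin_repo_label(repo_url: str):
    """Extract [username, repo] from an HTTPS or SSH GitHub URL, else None."""
    repo = re.findall(
        r"(?:https?://github\.com/|git@github\.com:)([^/]+/[^/]+?)(?:\.git|/|$)",
        repo_url,
    )
    if repo:
        return repo[0].split("/")
    return None
```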
logging.info("克隆插件储存库: {}".format(repo_url))
repo = porcelain.clone(repo_url, "plugins/"+repo_url.split(".git")[0].split("/")[-1]+"/", checkout=True)
def download_plugin_source_code(repo_url: str, target_path: str) -> str:
"""下载插件源码"""
# 检查源类型
# 提取 username/repo , 正则表达式
repo = get_github_plugin_repo_label(repo_url)
if repo is not None: # github
target_path += repo[1]
logging.info("从 GitHub 下载插件源码...")
zipball_url = f"https://api.github.com/repos/{'/'.join(repo)}/zipball/HEAD"
zip_resp = requests.get(
url=zipball_url, proxies=network.wrapper_proxies(), stream=True
)
if zip_resp.status_code != 200:
raise Exception("下载源码失败: {}".format(zip_resp.text))
if os.path.exists("temp/" + target_path):
shutil.rmtree("temp/" + target_path)
if os.path.exists(target_path):
shutil.rmtree(target_path)
os.makedirs("temp/" + target_path)
with open("temp/" + target_path + "/source.zip", "wb") as f:
for chunk in zip_resp.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
logging.info("下载完成, 解压...")
import zipfile
with zipfile.ZipFile("temp/" + target_path + "/source.zip", "r") as zip_ref:
zip_ref.extractall("temp/" + target_path)
os.remove("temp/" + target_path + "/source.zip")
# 目标是 username-repo-hash , 用正则表达式提取完整的文件夹名,复制到 plugins/repo
import glob
# 获取解压后的文件夹名
unzip_dir = glob.glob("temp/" + target_path + "/*")[0]
# 复制到 plugins/repo
shutil.copytree(unzip_dir, target_path + "/")
# 删除解压后的文件夹
shutil.rmtree(unzip_dir)
logging.info("解压完成")
else:
raise Exception("暂不支持的源类型,请使用 GitHub 仓库发行插件。")
return repo[1]
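The zipball path above (download to `temp/`, unzip, locate the single `username-repo-hash` folder, copy it to `plugins/repo`) can be sketched offline; the function name here is illustrative, not part of the codebase:

```python
import glob
import os
import shutil
import tempfile
import zipfile

def extract_single_root_zip(zip_path: str, dest: str) -> None:
    """GitHub zipballs wrap everything in one 'user-repo-hash' folder;
    unpack into a scratch dir, then copy that folder's contents to dest."""
    work = tempfile.mkdtemp()
    with zipfile.ZipFile(zip_path, "r") as zf:
        zf.extractall(work)
    root = glob.glob(os.path.join(work, "*"))[0]  # the single top-level dir
    shutil.copytree(root, dest)
    shutil.rmtree(work)
```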
def check_requirements(path: str):
# 检查此目录是否包含requirements.txt
if os.path.exists("plugins/"+repo_url.split(".git")[0].split("/")[-1]+"/requirements.txt"):
if os.path.exists(path + "/requirements.txt"):
logging.info("检测到requirements.txt,正在安装依赖")
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements("plugins/"+repo_url.split(".git")[0].split("/")[-1]+"/requirements.txt")
pkg.utils.pkgmgr.install_requirements(path + "/requirements.txt")
import pkg.utils.log as log
log.reset_logging()
def install_plugin(repo_url: str):
"""安装插件,从git储存库获取并解决依赖"""
repo_label = download_plugin_source_code(repo_url, "plugins/")
check_requirements("plugins/" + repo_label)
metadata.set_plugin_metadata(repo_label, repo_url, int(time.time()), "HEAD")
# 上报安装记录
context.get_center_v2_api().plugin.post_install_record(
plugin={
"name": "unknown",
"remote": repo_url,
"author": "unknown",
"version": "HEAD",
}
)
def uninstall_plugin(plugin_name: str) -> str:
"""卸载插件"""
if plugin_name not in __plugins__:
raise Exception("插件不存在")
plugin_info = get_plugin_info_for_audit(plugin_name)
# 获取文件夹路径
plugin_path = __plugins__[plugin_name]['path'].replace("\\", "/")
plugin_path = __plugins__[plugin_name]["path"].replace("\\", "/")
# 剪切路径为plugins/插件名
plugin_path = plugin_path.split("plugins/")[1].split("/")[0]
# 删除文件夹
shutil.rmtree("plugins/"+plugin_path)
return "plugins/"+plugin_path
shutil.rmtree("plugins/" + plugin_path)
# 上报卸载记录
context.get_center_v2_api().plugin.post_remove_record(
plugin=plugin_info
)
return "plugins/" + plugin_path
def update_plugin(plugin_name: str):
"""更新插件"""
# 检查是否有远程地址记录
target_plugin_dir = "plugins/" + __plugins__[plugin_name]['path'].replace("\\", "/").split("plugins/")[1].split("/")[0]
plugin_path_name = get_plugin_path_name_by_plugin_name(plugin_name)
remote_url = updater.get_remote_url(target_plugin_dir)
if remote_url == "https://github.com/RockChinQ/QChatGPT" or remote_url == "https://gitee.com/RockChin/QChatGPT" \
or remote_url == "" or remote_url is None or remote_url == "http://github.com/RockChinQ/QChatGPT" or remote_url == "http://gitee.com/RockChin/QChatGPT":
raise Exception("插件没有远程地址记录,无法更新")
meta = metadata.get_plugin_metadata(plugin_path_name)
if meta == {}:
raise Exception("没有此插件元数据信息,无法更新")
# 把远程clone到temp/plugins/update/插件名
logging.info("克隆插件储存库: {}".format(remote_url))
old_plugin_info = get_plugin_info_for_audit(plugin_name)
from dulwich import porcelain
clone_target_dir = "temp/plugins/update/"+target_plugin_dir.split("/")[-1]+"/"
context.get_center_v2_api().plugin.post_update_record(
plugin=old_plugin_info,
old_version=old_plugin_info['version'],
new_version='HEAD',
)
if os.path.exists(clone_target_dir):
shutil.rmtree(clone_target_dir)
remote_url = meta["source"]
if (
remote_url == "https://github.com/RockChinQ/QChatGPT"
or remote_url == "https://gitee.com/RockChin/QChatGPT"
or remote_url == ""
or remote_url is None
or remote_url == "http://github.com/RockChinQ/QChatGPT"
or remote_url == "http://gitee.com/RockChin/QChatGPT"
):
raise Exception("插件没有远程地址记录,无法更新")
if not os.path.exists(clone_target_dir):
os.makedirs(clone_target_dir)
repo = porcelain.clone(remote_url, clone_target_dir, checkout=True)
# 重新安装插件
logging.info("正在重新安装插件以进行更新...")
# 检查此目录是否包含requirements.txt
if os.path.exists(clone_target_dir+"requirements.txt"):
logging.info("检测到requirements.txt,正在安装依赖")
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements(clone_target_dir+"requirements.txt")
install_plugin(remote_url)
import pkg.utils.log as log
log.reset_logging()
# 将temp/plugins/update/插件名 覆盖到 plugins/插件名
shutil.rmtree(target_plugin_dir)
def get_plugin_name_by_path_name(plugin_path_name: str) -> str:
for k, v in __plugins__.items():
if v["path"] == "plugins/" + plugin_path_name + "/main.py":
return k
return None
def get_plugin_path_name_by_plugin_name(plugin_name: str) -> str:
if plugin_name not in __plugins__:
return None
plugin_main_module_path = __plugins__[plugin_name]["path"]
plugin_main_module_path = plugin_main_module_path.replace("\\", "/")
spt = plugin_main_module_path.split("/")
return spt[1]
def get_plugin_info_for_audit(plugin_name: str) -> dict:
"""获取插件信息"""
if plugin_name not in __plugins__:
return {}
plugin = __plugins__[plugin_name]
name = plugin["name"]
meta = metadata.get_plugin_metadata(get_plugin_path_name_by_plugin_name(name))
remote = meta["source"] if meta != {} else ""
author = plugin["author"]
version = plugin["version"]
return {
"name": name,
"remote": remote,
"author": author,
"version": version,
}
shutil.copytree(clone_target_dir, target_plugin_dir)
class EventContext:
"""事件上下文"""
eid = 0
"""事件编号"""
@@ -306,6 +463,7 @@ class EventContext:
def emit(event_name: str, **kwargs) -> EventContext:
"""触发事件"""
import pkg.utils.context as context
if context.get_plugin_host() is None:
return None
return context.get_plugin_host().emit(event_name, **kwargs)
@@ -354,9 +512,10 @@ class PluginHost:
event_context = EventContext(event_name)
logging.debug("触发事件: {} ({})".format(event_name, event_context.eid))
emitted_plugins = []
for plugin in iter_plugins():
if not plugin['enabled']:
if not plugin["enabled"]:
continue
# if plugin['instance'] is None:
@@ -368,9 +527,11 @@ class PluginHost:
# logging.error("插件 {} 初始化时发生错误: {}".format(plugin['name'], sys.exc_info()))
# continue
if 'hooks' not in plugin or event_name not in plugin['hooks']:
if "hooks" not in plugin or event_name not in plugin["hooks"]:
continue
emitted_plugins.append(plugin['name'])
hooks = []
if event_name in plugin["hooks"]:
hooks = plugin["hooks"][event_name]
@@ -378,27 +539,40 @@ class PluginHost:
try:
already_prevented_default = event_context.is_prevented_default()
kwargs['host'] = context.get_plugin_host()
kwargs['event'] = event_context
kwargs["host"] = context.get_plugin_host()
kwargs["event"] = event_context
hook(plugin['instance'], **kwargs)
hook(plugin["instance"], **kwargs)
if event_context.is_prevented_default() and not already_prevented_default:
logging.debug("插件 {} 已要求阻止事件 {} 的默认行为".format(plugin['name'], event_name))
if (
event_context.is_prevented_default()
and not already_prevented_default
):
logging.debug(
"插件 {} 已要求阻止事件 {} 的默认行为".format(plugin["name"], event_name)
)
except Exception as e:
logging.error("插件{}触发事件{}时发生错误".format(plugin['name'], event_name))
logging.error("插件{}响应事件{}时发生错误".format(plugin["name"], event_name))
logging.error(traceback.format_exc())
# print("done:{}".format(plugin['name']))
if event_context.is_prevented_postorder():
logging.debug("插件 {} 阻止了后序插件的执行".format(plugin['name']))
logging.debug("插件 {} 阻止了后序插件的执行".format(plugin["name"]))
break
logging.debug("事件 {} ({}) 处理完毕,返回值: {}".format(event_name, event_context.eid,
event_context.__return_value__))
logging.debug(
"事件 {} ({}) 处理完毕,返回值: {}".format(
event_name, event_context.eid, event_context.__return_value__
)
)
if len(emitted_plugins) > 0:
plugins_info = [get_plugin_info_for_audit(p) for p in emitted_plugins]
context.get_center_v2_api().usage.post_event_record(
plugins=plugins_info,
event_name=event_name,
)
return event_context
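`PluginHost.emit` above runs hooks plugin by plugin and honors `prevent_default`/`prevent_postorder`. A minimal sketch of that control flow (names mirror the host's API, but this is not the real implementation):

```python
class EventContext:
    """Per-emission state a hook can mutate to alter dispatch."""
    def __init__(self, name: str):
        self.name = name
        self._prevent_default = False
        self._prevent_postorder = False

    def prevent_default(self):
        self._prevent_default = True

    def prevent_postorder(self):
        self._prevent_postorder = True

    def is_prevented_default(self) -> bool:
        return self._prevent_default

    def is_prevented_postorder(self) -> bool:
        return self._prevent_postorder

def emit(hooks: list, event_name: str) -> EventContext:
    """Run hooks in registration order; a hook may cut off later hooks."""
    ctx = EventContext(event_name)
    for hook in hooks:
        hook(ctx)
        if ctx.is_prevented_postorder():
            break  # later plugins are skipped
    return ctx
```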
if __name__ == "__main__":
pass

pkg/plugin/metadata.py Normal file

@@ -0,0 +1,87 @@
import os
import shutil
import json
import time
import dulwich.errors as dulwich_err
from ..utils import updater
def read_metadata_file() -> dict:
# 读取 plugins/metadata.json 文件
if not os.path.exists('plugins/metadata.json'):
return {}
with open('plugins/metadata.json', 'r') as f:
return json.load(f)
def write_metadata_file(metadata: dict):
if not os.path.exists('plugins'):
os.mkdir('plugins')
with open('plugins/metadata.json', 'w') as f:
json.dump(metadata, f, indent=4, ensure_ascii=False)
def do_plugin_git_repo_migrate():
# 仅在 plugins/metadata.json 不存在时执行
if os.path.exists('plugins/metadata.json'):
return
metadata = read_metadata_file()
# 遍历 plugins 下所有目录获取目录的git远程地址
for plugin_name in os.listdir('plugins'):
plugin_path = os.path.join('plugins', plugin_name)
if not os.path.isdir(plugin_path):
continue
remote_url = None
try:
remote_url = updater.get_remote_url(plugin_path)
except dulwich_err.NotGitRepository:
continue
if remote_url == "https://github.com/RockChinQ/QChatGPT" or remote_url == "https://gitee.com/RockChin/QChatGPT" \
or remote_url == "" or remote_url is None or remote_url == "http://github.com/RockChinQ/QChatGPT" or remote_url == "http://gitee.com/RockChin/QChatGPT":
continue
from . import host
if plugin_name not in metadata:
metadata[plugin_name] = {
'source': remote_url,
'install_timestamp': int(time.time()),
'ref': 'HEAD',
}
write_metadata_file(metadata)
def set_plugin_metadata(
plugin_name: str,
source: str,
install_timestamp: int,
ref: str,
):
metadata = read_metadata_file()
metadata[plugin_name] = {
'source': source,
'install_timestamp': install_timestamp,
'ref': ref,
}
write_metadata_file(metadata)
def remove_plugin_metadata(plugin_name: str):
metadata = read_metadata_file()
if plugin_name in metadata:
del metadata[plugin_name]
write_metadata_file(metadata)
def get_plugin_metadata(plugin_name: str) -> dict:
metadata = read_metadata_file()
if plugin_name in metadata:
return metadata[plugin_name]
return {}
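`metadata.py` above persists per-plugin install records in `plugins/metadata.json`. A round-trip sketch (the `base` parameter is an addition for testability; the plugin entry is illustrative):

```python
import json
import os
import tempfile

def write_metadata_file(metadata: dict, base: str = "plugins") -> None:
    """Serialize the metadata dict to <base>/metadata.json."""
    os.makedirs(base, exist_ok=True)
    with open(os.path.join(base, "metadata.json"), "w") as f:
        json.dump(metadata, f, indent=4, ensure_ascii=False)

def read_metadata_file(base: str = "plugins") -> dict:
    """Load <base>/metadata.json; a missing file means no recorded plugins."""
    path = os.path.join(base, "metadata.json")
    if not os.path.exists(path):
        return {}
    with open(path, "r") as f:
        return json.load(f)
```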


@@ -1,7 +1,7 @@
import logging
import pkg.plugin.host as host
import pkg.utils.context
from ..plugin import host
from ..utils import context
PersonMessageReceived = "person_message_received"
"""收到私聊消息时,在判断是否应该响应前触发
@@ -35,18 +35,18 @@ PersonNormalMessageReceived = "person_normal_message_received"
"""
PersonCommandSent = "person_command_sent"
"""判断为应该处理的私聊指令时触发
"""判断为应该处理的私聊命令时触发
kwargs:
launcher_type: str 发起对象类型(group/person)
launcher_id: int 发起对象ID(群号/QQ号)
sender_id: int 发送者ID(QQ号)
command: str 指令文本
command: str 命令文本
params: list[str] 参数列表
text_message: str 完整指令文本
text_message: str 完整命令文本
is_admin: bool 是否为管理员
returns (optional):
alter: str 修改后的完整指令文本
alter: str 修改后的完整命令文本
reply: list 回复消息组件列表
"""
@@ -64,18 +64,18 @@ GroupNormalMessageReceived = "group_normal_message_received"
"""
GroupCommandSent = "group_command_sent"
"""判断为应该处理的群聊指令时触发
"""判断为应该处理的群聊命令时触发
kwargs:
launcher_type: str 发起对象类型(group/person)
launcher_id: int 发起对象ID(群号/QQ号)
sender_id: int 发送者ID(QQ号)
command: str 指令文本
command: str 命令文本
params: list[str] 参数列表
text_message: str 完整指令文本
text_message: str 完整命令文本
is_admin: bool 是否为管理员
returns (optional):
alter: str 修改后的完整指令文本
alter: str 修改后的完整命令文本
reply: list 回复消息组件列表
"""
@@ -88,6 +88,8 @@ NormalMessageResponded = "normal_message_responded"
session: pkg.openai.session.Session 会话对象
prefix: str 回复文字消息的前缀
response_text: str 响应文本
finish_reason: str 响应结束原因
funcs_called: list[str] 此次响应中调用的函数列表
returns (optional):
prefix: str 修改后的回复文字消息的前缀
@@ -283,7 +285,7 @@ def register(name: str, description: str, version: str, author: str):
cls.description = description
cls.version = version
cls.author = author
cls.host = pkg.utils.context.get_plugin_host()
cls.host = context.get_plugin_host()
cls.enabled = True
cls.path = host.__current_module_path__


@@ -1,9 +1,9 @@
import json
import os
import pkg.plugin.host as host
import logging
from ..plugin import host
def wrapper_dict_from_runtime_context() -> dict:
"""从变量中包装settings.json的数据字典"""


@@ -3,7 +3,7 @@ import json
import logging
import os
import pkg.plugin.host as host
from ..plugin import host
def wrapper_dict_from_plugin_list() -> dict:


@@ -5,6 +5,7 @@ import mirai
class MessageSourceAdapter:
bot_account_id: int
def __init__(self, config: dict):
pass


@@ -1,18 +1,18 @@
import pkg.utils.context
from ..utils import context
def is_banned(launcher_type: str, launcher_id: int, sender_id: int) -> bool:
if not pkg.utils.context.get_qqbot_manager().enable_banlist:
if not context.get_qqbot_manager().enable_banlist:
return False
result = False
if launcher_type == 'group':
# 检查是否显式声明发起人QQ要被person忽略
if sender_id in pkg.utils.context.get_qqbot_manager().ban_person:
if sender_id in context.get_qqbot_manager().ban_person:
result = True
else:
for group_rule in pkg.utils.context.get_qqbot_manager().ban_group:
for group_rule in context.get_qqbot_manager().ban_group:
if type(group_rule) == int:
if group_rule == launcher_id: # 此群群号被禁用
result = True
@@ -32,7 +32,7 @@ def is_banned(launcher_type: str, launcher_id: int, sender_id: int) -> bool:
else:
# ban_person, 与群规则相同
for person_rule in pkg.utils.context.get_qqbot_manager().ban_person:
for person_rule in context.get_qqbot_manager().ban_person:
if type(person_rule) == int:
if person_rule == launcher_id:
result = True


@@ -2,21 +2,21 @@
import os
import time
import base64
import typing
import config
from mirai.models.message import MessageComponent, MessageChain, Image
from mirai.models.message import ForwardMessageNode
from mirai.models.base import MiraiBaseModel
from typing import List
import pkg.utils.context as context
import pkg.utils.text2img as text2img
from ..utils import text2img
from ..utils import context
class ForwardMessageDiaplay(MiraiBaseModel):
title: str = "群聊的聊天记录"
brief: str = "[聊天记录]"
source: str = "聊天记录"
preview: List[str] = []
preview: typing.List[str] = []
summary: str = "查看x条转发消息"
@@ -26,7 +26,7 @@ class Forward(MessageComponent):
"""消息组件类型。"""
display: ForwardMessageDiaplay
"""显示信息"""
node_list: List[ForwardMessageNode]
node_list: typing.List[ForwardMessageNode]
"""转发消息节点列表。"""
def __init__(self, *args, **kwargs):
if len(args) == 1:
@@ -64,13 +64,16 @@ def text_to_image(text: str) -> MessageComponent:
def check_text(text: str) -> list:
"""检查文本是否为长消息,并转换成该使用的消息链组件"""
if len(text) > config.blob_message_threshold:
config = context.get_config_manager().data
if len(text) > config['blob_message_threshold']:
# logging.info("长消息: {}".format(text))
if config.blob_message_strategy == 'image':
if config['blob_message_strategy'] == 'image':
# 转换成图片
return [text_to_image(text)]
elif config.blob_message_strategy == 'forward':
elif config['blob_message_strategy'] == 'forward':
# 包装转发消息
display = ForwardMessageDiaplay(
@@ -82,7 +85,7 @@ def check_text(text: str) -> list:
)
node = ForwardMessageNode(
sender_id=config.mirai_http_api_config['qq'],
sender_id=config['mirai_http_api_config']['qq'],
sender_name='bot',
message_chain=MessageChain([text])
)
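`check_text` above dispatches long replies by `blob_message_strategy`: render to an image or wrap as a forward message. A minimal sketch of that dispatch with stand-in converters (the real converters build mirai message components):

```python
def check_text(text: str, config: dict, to_image, to_forward) -> list:
    """Route long messages per config; short text passes through unchanged."""
    if len(text) <= config["blob_message_threshold"]:
        return [text]
    if config["blob_message_strategy"] == "image":
        return [to_image(text)]
    elif config["blob_message_strategy"] == "forward":
        return [to_forward(text)]
    return [text]  # unknown strategy: fall back to plain text
```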


@@ -1,17 +1,13 @@
import importlib
import inspect
import logging
import copy
import pkgutil
import traceback
import types
import json
__command_list__ = {}
import tips as tips_custom
__command_list__ = {}
"""命令树
结构:
@@ -75,7 +71,7 @@ __tree_index__: dict[str, list] = {}
结构:
{
'pkg.qqbot.cmds.cmd1.CommandCmd1': 'cmd1', # 顶级指令
'pkg.qqbot.cmds.cmd1.CommandCmd1': 'cmd1', # 顶级命令
'pkg.qqbot.cmds.cmd1.CommandCmd1_1': 'cmd1.cmd1-1', # 类名: 节点路径
'pkg.qqbot.cmds.cmd2.CommandCmd2': 'cmd2',
'pkg.qqbot.cmds.cmd2.CommandCmd2_1': 'cmd2.cmd2-1',
@@ -87,79 +83,79 @@ __tree_index__: dict[str, list] = {}
class Context:
"""命令执行上下文"""
command: str
"""顶级指令文本"""
"""顶级命令文本"""
crt_command: str
"""当前子指令文本"""
"""当前子命令文本"""
params: list
"""完整参数列表"""
crt_params: list
"""当前子指令参数列表"""
"""当前子命令参数列表"""
session_name: str
"""会话名"""
text_message: str
"""指令完整文本"""
"""命令完整文本"""
launcher_type: str
"""指令发起者类型"""
"""命令发起者类型"""
launcher_id: int
"""指令发起者ID"""
"""命令发起者ID"""
sender_id: int
"""指令发送者ID"""
"""命令发送者ID"""
is_admin: bool
"""[过时]指令发送者是否为管理员"""
"""[过时]命令发送者是否为管理员"""
privilege: int
"""指令发送者权限等级"""
"""命令发送者权限等级"""
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
class AbstractCommandNode:
"""指令抽象类"""
"""命令抽象类"""
parent: type
"""指令类"""
"""命令类"""
name: str
"""指令名"""
"""命令名"""
description: str
"""指令描述"""
"""命令描述"""
usage: str
"""指令用法"""
"""命令用法"""
aliases: list[str]
"""指令别名"""
"""命令别名"""
privilege: int
"""指令权限等级, 权限大于等于此值的用户才能执行"""
"""命令权限等级, 权限大于等于此值的用户才能执行"""
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
"""指令处理函数
"""命令处理函数
:param ctx: 指令执行上下文
:param ctx: 命令执行上下文
:return: (是否执行, 回复列表(若执行))
若未执行,将自动以下一个参数查找并执行子指令
若未执行,将自动以下一个参数查找并执行子命令
"""
raise NotImplementedError
@classmethod
def help(cls) -> str:
"""获取指令帮助信息"""
return '指令: {}\n描述: {}\n用法: \n{}\n别名: {}\n权限: {}'.format(
"""获取命令帮助信息"""
return '命令: {}\n描述: {}\n用法: \n{}\n别名: {}\n权限: {}'.format(
cls.name,
cls.description,
cls.usage,
@@ -176,11 +172,11 @@ class AbstractCommandNode:
aliases: list[str] = None,
privilege: int = 0
):
"""注册
"""注册
:param cls: 令类
:param name: 令名
:param parent: 父令类
:param cls: 令类
:param name: 令名
:param parent: 父令类
"""
global __command_list__, __tree_index__
@@ -195,7 +191,7 @@ class AbstractCommandNode:
logging.debug("cls: {}, name: {}, parent: {}".format(cls, name, parent))
if parent is None:
# 顶级指令注册
# 顶级命令注册
__command_list__[name] = {
'description': cls.description,
'usage': cls.usage,
@@ -212,9 +208,9 @@ class AbstractCommandNode:
path = __tree_index__[parent.__module__ + '.' + parent.__name__]
parent_node = __command_list__[path]
# 链接父子指令
# 链接父子命令
__command_list__[path]['sub'].append(name)
# 注册子指令
# 注册子命令
__command_list__[path + '.' + name] = {
'description': cls.description,
'usage': cls.usage,
@@ -233,18 +229,18 @@ class AbstractCommandNode:
class CommandPrivilegeError(Exception):
"""指令权限不足或不存在异常"""
"""命令权限不足或不存在异常"""
pass
# 传入Context对象,广搜命令树,返回执行结果
# 若命令被处理,返回reply列表
# 若命令未被处理,继续执行下一级指令
# 若命令未被处理,继续执行下一级命令
# 若命令不存在,报异常
def execute(context: Context) -> list:
"""执行
"""执行
:param ctx: 令执行上下文
:param ctx: 令执行上下文
:return: 回复列表
"""
@@ -253,7 +249,7 @@ def execute(context: Context) -> list:
# 拷贝ctx
ctx: Context = copy.deepcopy(context)
# 从指令树取出顶级指令
# 从命令树取出顶级命令
node = __command_list__
path = ctx.command
@@ -261,7 +257,7 @@ def execute(context: Context) -> list:
while True:
try:
node = __command_list__[path]
logging.debug('执行指令: {}'.format(path))
logging.debug('执行命令: {}'.format(path))
# 检查权限
if ctx.privilege < node['privilege']:
@@ -282,7 +278,7 @@ def execute(context: Context) -> list:
def register_all():
"""启动时调用此函数注册所有指令
"""启动时调用此函数注册所有命令
递归处理pkg.qqbot.cmds包下及其子包下所有模块的所有继承于AbstractCommand的类
"""
@@ -308,7 +304,7 @@ def register_all():
else:
m = __import__(module.__name__ + '.' + item.name, fromlist=[''])
# for name, cls in inspect.getmembers(m, inspect.isclass):
# # 检查是否为指令类
# # 检查是否为命令类
# if cls.__module__ == m.__name__ and issubclass(cls, AbstractCommandNode) and cls != AbstractCommandNode:
# cls.register(cls, cls.name, cls.parent)
@@ -317,7 +313,7 @@ def register_all():
def apply_privileges():
"""读取cmdpriv.json并应用令权限"""
"""读取cmdpriv.json并应用令权限"""
# 读取内容
json_str = ""
with open('cmdpriv.json', 'r', encoding="utf-8") as f:
@@ -327,6 +323,10 @@ def apply_privileges():
for path, priv in data.items():
if path == 'comment':
continue
if path not in __command_list__:
continue
if __command_list__[path]['privilege'] != priv:
logging.debug('应用权限: {} -> {}(default: {})'.format(path, priv, __command_list__[path]['privilege']))


@@ -1,11 +1,12 @@
from ..aamgr import AbstractCommandNode, Context
import logging
from mirai import Image
import config
import mirai
from .. import aamgr
from ....utils import context
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="draw",
description="使用DALL·E生成图片",
@@ -13,9 +14,9 @@ import config
aliases=[],
privilege=1
)
class DrawCommand(AbstractCommandNode):
class DrawCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
reply = []
@@ -28,9 +29,9 @@ class DrawCommand(AbstractCommandNode):
res = session.draw_image(" ".join(ctx.params))
logging.debug("draw_image result:{}".format(res))
reply = [Image(url=res['data'][0]['url'])]
if not (hasattr(config, 'include_image_description')
and not config.include_image_description):
reply = [mirai.Image(url=res.data[0].url)]
config = context.get_config_manager().data
if config['include_image_description']:
reply.append(" ".join(ctx.params))
return True, reply


@@ -1,8 +1,9 @@
from ..aamgr import AbstractCommandNode, Context
import logging
import json
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="func",
description="管理内容函数",
@@ -10,19 +11,22 @@ import logging
aliases=[],
privilege=1
)
class FuncCommand(AbstractCommandNode):
class FuncCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
from pkg.plugin.models import host
reply = []
reply_str = "当前已加载的内容函数:\n\n"
logging.debug("host.__callable_functions__: {}".format(json.dumps(host.__callable_functions__, indent=4)))
index = 1
for func in host.__callable_functions__:
reply_str += "{}. {}{}:\n{}\n\n".format(index, ("(已禁用) " if not func['enabled'] else ""), func['name'], func['description'])
index += 1
reply = [reply_str]
return True, reply


@@ -1,12 +1,9 @@
from ..aamgr import AbstractCommandNode, Context
import os
import pkg.plugin.host as plugin_host
import pkg.utils.updater as updater
from ....plugin import host as plugin_host
from ....utils import updater
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="plugin",
description="插件管理",
@@ -14,9 +11,9 @@ import pkg.utils.updater as updater
aliases=[],
privilege=1
)
class PluginCommand(AbstractCommandNode):
class PluginCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
reply = []
plugin_list = plugin_host.__plugins__
if len(ctx.params) == 0:
@@ -41,14 +38,11 @@ class PluginCommand(AbstractCommandNode):
reply = [reply_str]
return True, reply
elif ctx.params[0].startswith("http"):
reply = ["[bot]err: 此命令已弃用,请使用 !plugin get <插件仓库地址> 进行安装"]
return True, reply
else:
return False, []
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=PluginCommand,
name="get",
description="安装插件",
@@ -56,9 +50,9 @@ class PluginCommand(AbstractCommandNode):
aliases=[],
privilege=2
)
class PluginGetCommand(AbstractCommandNode):
class PluginGetCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import threading
import logging
import pkg.utils.context
@@ -71,7 +65,7 @@ class PluginGetCommand(AbstractCommandNode):
def closure():
try:
plugin_host.install_plugin(ctx.crt_params[0])
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装成功,请发送 !reload 令重载插件")
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装成功,请发送 !reload 令重载插件")
except Exception as e:
logging.error("插件安装失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装失败:{}".format(e))
@@ -81,17 +75,17 @@ class PluginGetCommand(AbstractCommandNode):
return True, reply
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=PluginCommand,
name="update",
description="更新所有插件",
description="更新指定插件或全部插件",
usage="!plugin update",
aliases=[],
privilege=2
)
class PluginUpdateCommand(AbstractCommandNode):
class PluginUpdateCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import threading
import logging
plugin_list = plugin_host.__plugins__
@@ -110,7 +104,9 @@ class PluginUpdateCommand(AbstractCommandNode):
plugin_host.update_plugin(key)
updated.append(key)
else:
if ctx.crt_params[0] in plugin_list:
plugin_path_name = plugin_host.get_plugin_path_name_by_plugin_name(ctx.crt_params[0])
if plugin_path_name is not None:
plugin_host.update_plugin(ctx.crt_params[0])
updated.append(ctx.crt_params[0])
else:
@@ -119,7 +115,7 @@ class PluginUpdateCommand(AbstractCommandNode):
pkg.utils.context.get_qqbot_manager().notify_admin("已更新插件: {}, 请发送 !reload 重载插件".format(", ".join(updated)))
except Exception as e:
logging.error("插件更新失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件更新失败:{} 请尝试手动更新插件".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件更新失败:{}使用 !plugin 命令确认插件名称或尝试手动更新插件".format(e))
reply = ["[bot]正在更新插件,请勿重复发起..."]
threading.Thread(target=closure).start()
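PluginUpdateCommand replies immediately and performs the actual update in a background thread, reporting the outcome through notify_admin so the user is never blocked. The fire-and-forget pattern, sketched with stand-in callables (the names here are illustrative, not the project's real interfaces):

```python
import threading

def run_update(plugin_names, update_fn, notify):
    """Fire-and-forget update: caller replies at once, results go via notify()."""
    def closure():
        updated = []
        try:
            for name in plugin_names:
                update_fn(name)       # may raise; failure is reported, not raised
                updated.append(name)
            notify("已更新插件: {}".format(", ".join(updated)))
        except Exception as e:
            notify("插件更新失败:{}".format(e))
    t = threading.Thread(target=closure)
    t.start()
    return t

messages = []
t = run_update(["Foo"], lambda name: None, messages.append)
t.join()
```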
@@ -128,7 +124,7 @@ class PluginUpdateCommand(AbstractCommandNode):
return True, reply
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=PluginCommand,
name="del",
description="删除插件",
@@ -136,9 +132,9 @@ class PluginUpdateCommand(AbstractCommandNode):
aliases=[],
privilege=2
)
class PluginDelCommand(AbstractCommandNode):
class PluginDelCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
plugin_list = plugin_host.__plugins__
reply = []
@@ -150,12 +146,12 @@ class PluginDelCommand(AbstractCommandNode):
unin_path = plugin_host.uninstall_plugin(plugin_name)
reply = ["[bot]已删除插件: {} ({}), 请发送 !reload 重载插件".format(plugin_name, unin_path)]
else:
reply = ["[bot]err:未找到插件: {}, 请使用!plugin令查看插件列表".format(plugin_name)]
reply = ["[bot]err:未找到插件: {}, 请使用!plugin令查看插件列表".format(plugin_name)]
return True, reply
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=PluginCommand,
name="on",
description="启用指定插件",
@@ -163,7 +159,7 @@ class PluginDelCommand(AbstractCommandNode):
aliases=[],
privilege=2
)
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=PluginCommand,
name="off",
description="禁用指定插件",
@@ -171,9 +167,9 @@ class PluginDelCommand(AbstractCommandNode):
aliases=[],
privilege=2
)
class PluginOnOffCommand(AbstractCommandNode):
class PluginOnOffCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.plugin.switch as plugin_switch
plugin_list = plugin_host.__plugins__
@@ -196,7 +192,7 @@ class PluginOnOffCommand(AbstractCommandNode):
plugin_switch.dump_switch()
reply = ["[bot]已{}插件: {}".format("启用" if new_status else "禁用", plugin_name)]
else:
reply = ["[bot]err:未找到插件: {}, 请使用!plugin令查看插件列表".format(plugin_name)]
reply = ["[bot]err:未找到插件: {}, 请使用!plugin令查看插件列表".format(plugin_name)]
return True, reply


@@ -1,27 +0,0 @@
from ..aamgr import AbstractCommandNode, Context
@AbstractCommandNode.register(
parent=None,
name="continue",
description="继续未完成的响应",
usage="!continue",
aliases=[],
privilege=1
)
class ContinueCommand(AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
import config
session_name = ctx.session_name
reply = []
session = pkg.openai.session.get_session(session_name)
text = session.append()
reply = [text]
return True, reply


@@ -1,7 +1,8 @@
from ..aamgr import AbstractCommandNode, Context
from .. import aamgr
from ....utils import context
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="default",
description="操作情景预设",
@@ -9,19 +10,20 @@ from ..aamgr import AbstractCommandNode, Context
aliases=[],
privilege=1
)
class DefaultCommand(AbstractCommandNode):
class DefaultCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params
reply = []
import config
config = context.get_config_manager().data
if len(params) == 0:
# 输出目前所有情景预设
import pkg.openai.dprompt as dprompt
reply_str = "[bot]当前所有情景预设({}模式):\n\n".format(config.preset_mode)
reply_str = "[bot]当前所有情景预设({}模式):\n\n".format(config['preset_mode'])
prompts = dprompt.mode_inst().list()
@@ -37,15 +39,13 @@ class DefaultCommand(AbstractCommandNode):
reply_str += "\n当前默认情景预设:{}\n".format(dprompt.mode_inst().get_using_name())
reply_str += "请使用 !default set <情景预设名称> 来设置默认情景预设"
reply = [reply_str]
elif params[0] != "set":
reply = ["[bot]err: 已弃用,请使用!default set <情景预设名称> 来设置默认情景预设"]
else:
return False, []
return True, reply
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=DefaultCommand,
name="set",
description="设置默认情景预设",
@@ -53,9 +53,9 @@ class DefaultCommand(AbstractCommandNode):
aliases=[],
privilege=2
)
class DefaultSetCommand(AbstractCommandNode):
class DefaultSetCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
reply = []
if len(ctx.crt_params) == 0:
@@ -67,7 +67,5 @@ class DefaultSetCommand(AbstractCommandNode):
reply = ["[bot]已设置默认情景预设为:{}".format(full_name)]
except Exception as e:
reply = ["[bot]err: {}".format(e)]
else:
reply = ["[bot]err: 仅管理员可设置默认情景预设"]
return True, reply


@@ -1,8 +1,7 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="del",
description="删除当前会话的历史记录",
@@ -10,9 +9,9 @@ import datetime
aliases=[],
privilege=1
)
class DelCommand(AbstractCommandNode):
class DelCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params
@@ -33,7 +32,7 @@ class DelCommand(AbstractCommandNode):
return True, reply
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=DelCommand,
name="all",
description="删除当前会话的全部历史记录",
@@ -41,9 +40,9 @@ class DelCommand(AbstractCommandNode):
aliases=[],
privilege=1
)
class DelAllCommand(AbstractCommandNode):
class DelAllCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
reply = []


@@ -1,7 +1,7 @@
from ..aamgr import AbstractCommandNode, Context
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="delhst",
description="删除指定会话的所有历史记录",
@@ -9,9 +9,9 @@ from ..aamgr import AbstractCommandNode, Context
aliases=[],
privilege=2
)
class DelHistoryCommand(AbstractCommandNode):
class DelHistoryCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
import pkg.utils.context
params = ctx.params
@@ -31,7 +31,7 @@ class DelHistoryCommand(AbstractCommandNode):
return True, reply
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=DelHistoryCommand,
name="all",
description="删除所有会话的全部历史记录",
@@ -39,9 +39,9 @@ class DelHistoryCommand(AbstractCommandNode):
aliases=[],
privilege=2
)
class DelAllHistoryCommand(AbstractCommandNode):
class DelAllHistoryCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.utils.context
reply = []
pkg.utils.context.get_database_manager().delete_all_session_history()


@@ -1,8 +1,9 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="last",
description="切换前一次对话",
@@ -10,9 +11,9 @@ import datetime
aliases=[],
privilege=1
)
class LastCommand(AbstractCommandNode):
class LastCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name


@@ -1,9 +1,10 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
import json
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name='list',
description='列出当前会话的所有历史记录',
@@ -11,9 +12,9 @@ import json
aliases=[],
privilege=1
)
class ListCommand(AbstractCommandNode):
class ListCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params
@@ -30,7 +31,7 @@ class ListCommand(AbstractCommandNode):
results = pkg.openai.session.get_session(session_name).list_history(page=page)
if len(results) == 0:
reply = ["[bot]第{}页没有历史会话".format(page)]
reply_str = "[bot]第{}页没有历史会话".format(page)
else:
reply_str = "[bot]历史会话 第{}页:\n".format(page)
current = -1
@@ -38,12 +39,9 @@ class ListCommand(AbstractCommandNode):
# 时间(使用create_timestamp转换) 序号 部分内容
datetime_obj = datetime.datetime.fromtimestamp(results[i]['create_timestamp'])
msg = ""
try:
msg = json.loads(results[i]['prompt'])
except json.decoder.JSONDecodeError:
msg = pkg.openai.session.reset_session_prompt(session_name, results[i]['prompt'])
# 持久化
pkg.openai.session.get_session(session_name).persistence()
msg = json.loads(results[i]['prompt'])
if len(msg) >= 2:
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),


@@ -1,8 +1,9 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="next",
description="切换后一次对话",
@@ -10,9 +11,9 @@ import datetime
aliases=[],
privilege=1
)
class NextCommand(AbstractCommandNode):
class NextCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
reply = []


@@ -1,8 +1,7 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="prompt",
description="获取当前会话的前文",
@@ -10,9 +9,9 @@ import datetime
aliases=[],
privilege=1
)
class PromptCommand(AbstractCommandNode):
class PromptCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import pkg.openai.session
session_name = ctx.session_name
params = ctx.params


@@ -1,8 +1,7 @@
from ..aamgr import AbstractCommandNode, Context
import datetime
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="resend",
description="重新获取上一次问题的回复",
@@ -10,20 +9,24 @@ import datetime
aliases=[],
privilege=1
)
class ResendCommand(AbstractCommandNode):
class ResendCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
import pkg.openai.session
import config
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
from ....openai import session as openai_session
from ....utils import context
from ....qqbot import message
session_name = ctx.session_name
reply = []
session = pkg.openai.session.get_session(session_name)
session = openai_session.get_session(session_name)
to_send = session.undo()
mgr = pkg.utils.context.get_qqbot_manager()
mgr = context.get_qqbot_manager()
reply = pkg.qqbot.message.process_normal_message(to_send, mgr, config,
config = context.get_config_manager().data
reply = message.process_normal_message(to_send, mgr, config,
ctx.launcher_type, ctx.launcher_id,
ctx.sender_id)


@@ -1,11 +1,11 @@
from ..aamgr import AbstractCommandNode, Context
import tips as tips_custom
import pkg.openai.session
import pkg.utils.context
from .. import aamgr
from ....openai import session
from ....utils import context
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name='reset',
description='重置当前会话',
@@ -13,21 +13,21 @@ import pkg.utils.context
aliases=[],
privilege=1
)
class ResetCommand(AbstractCommandNode):
class ResetCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
params = ctx.params
session_name = ctx.session_name
reply = ""
if len(params) == 0:
pkg.openai.session.get_session(session_name).reset(explicit=True)
session.get_session(session_name).reset(explicit=True)
reply = [tips_custom.command_reset_message]
else:
try:
import pkg.openai.dprompt as dprompt
pkg.openai.session.get_session(session_name).reset(explicit=True, use_prompt=params[0])
session.get_session(session_name).reset(explicit=True, use_prompt=params[0])
reply = [tips_custom.command_reset_name_message+"{}".format(dprompt.mode_inst().get_full_name(params[0]))]
except Exception as e:
reply = ["[bot]会话重置失败:{}".format(e)]


@@ -1,11 +1,17 @@
from ..aamgr import AbstractCommandNode, Context
import json
from .. import aamgr
def config_operation(cmd, params):
reply = []
import pkg.utils.context
config = pkg.utils.context.get_config()
# config = pkg.utils.context.get_config()
cfg_mgr = pkg.utils.context.get_config_manager()
false = False
true = True
reply_str = ""
if len(params) == 0:
reply = ["[bot]err:请输入!cmd cfg查看使用方法"]
@@ -13,25 +19,25 @@ def config_operation(cmd, params):
cfg_name = params[0]
if cfg_name == 'all':
reply_str = "[bot]所有配置项:\n\n"
for cfg in dir(config):
for cfg in cfg_mgr.data.keys():
if not cfg.startswith('__') and not cfg == 'logging':
# 根据配置项类型进行格式化如果是字典则转换为json并格式化
if isinstance(getattr(config, cfg), str):
reply_str += "{}: \"{}\"\n".format(cfg, getattr(config, cfg))
elif isinstance(getattr(config, cfg), dict):
if isinstance(cfg_mgr.data[cfg], str):
reply_str += "{}: \"{}\"\n".format(cfg, cfg_mgr.data[cfg])
elif isinstance(cfg_mgr.data[cfg], dict):
# 不进行unicode转义并格式化
reply_str += "{}: {}\n".format(cfg,
json.dumps(getattr(config, cfg),
json.dumps(cfg_mgr.data[cfg],
ensure_ascii=False, indent=4))
else:
reply_str += "{}: {}\n".format(cfg, getattr(config, cfg))
reply_str += "{}: {}\n".format(cfg, cfg_mgr.data[cfg])
reply = [reply_str]
else:
cfg_entry_path = cfg_name.split('.')
try:
if len(params) == 1:
cfg_entry = getattr(config, cfg_entry_path[0])
if len(params) == 1: # 未指定配置值,返回配置项值
cfg_entry = cfg_mgr.data[cfg_entry_path[0]]
if len(cfg_entry_path) > 1:
for i in range(1, len(cfg_entry_path)):
cfg_entry = cfg_entry[cfg_entry_path[i]]
@@ -47,23 +53,10 @@ def config_operation(cmd, params):
reply = [reply_str]
else:
cfg_value = " ".join(params[1:])
# 类型转换如果是json则转换为字典
# if cfg_value == 'true':
# cfg_value = True
# elif cfg_value == 'false':
# cfg_value = False
# elif cfg_value.isdigit():
# cfg_value = int(cfg_value)
# elif cfg_value.startswith('{') and cfg_value.endswith('}'):
# cfg_value = json.loads(cfg_value)
# else:
# try:
# cfg_value = float(cfg_value)
# except ValueError:
# pass
cfg_value = eval(cfg_value)
cfg_entry = getattr(config, cfg_entry_path[0])
cfg_entry = cfg_mgr.data[cfg_entry_path[0]]
if len(cfg_entry_path) > 1:
for i in range(1, len(cfg_entry_path) - 1):
cfg_entry = cfg_entry[cfg_entry_path[i]]
@@ -73,19 +66,19 @@ def config_operation(cmd, params):
else:
reply = ["[bot]err:配置项{}类型不匹配".format(cfg_name)]
else:
setattr(config, cfg_entry_path[0], cfg_value)
cfg_mgr.data[cfg_entry_path[0]] = cfg_value
reply = ["[bot]配置项{}修改成功".format(cfg_name)]
except AttributeError:
except KeyError:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
except NameError:
reply = ["[bot]err:值{}不合法(字符串需要使用双引号包裹)".format(cfg_value)]
except ValueError:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
# else:
# reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
return reply
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="cfg",
description="配置项管理",
@@ -93,8 +86,8 @@ def config_operation(cmd, params):
aliases=[],
privilege=2
)
class CfgCommand(AbstractCommandNode):
class CfgCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
return True, config_operation(ctx.command, ctx.params)
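The rewritten config_operation resolves dotted paths such as `mirai_http_api_config.qq` against `cfg_mgr.data` and parses the new value with `eval`, which is why an unquoted string surfaces as a NameError. A sketch of the same resolve-and-set logic using `ast.literal_eval` instead of `eval` (a safer substitute shown for illustration, not what the project ships):

```python
import ast

def set_config_entry(data: dict, dotted: str, raw_value: str) -> None:
    """Resolve a dotted path like 'mirai_http_api_config.qq' and assign to it."""
    # literal_eval only accepts Python literals; rejects arbitrary expressions
    value = ast.literal_eval(raw_value)
    keys = dotted.split(".")
    entry = data[keys[0]]          # KeyError here maps to "未找到配置项"
    for key in keys[1:-1]:
        entry = entry[key]
    if len(keys) == 1:
        data[keys[0]] = value
    else:
        entry[keys[-1]] = value

cfg = {"retry_times": 3, "mirai_http_api_config": {"qq": 123}}
set_config_entry(cfg, "mirai_http_api_config.qq", "456")
set_config_entry(cfg, "retry_times", "5")
```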


@@ -1,31 +1,31 @@
from ..aamgr import AbstractCommandNode, Context, __command_list__
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="cmd",
description="显示令列表",
usage="!cmd\n!cmd <令名称>",
description="显示令列表",
usage="!cmd\n!cmd <令名称>",
aliases=[],
privilege=1
)
class CmdCommand(AbstractCommandNode):
class CmdCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
command_list = __command_list__
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
command_list = aamgr.__command_list__
reply = []
if len(ctx.params) == 0:
reply_str = "[bot]当前所有令:\n\n"
reply_str = "[bot]当前所有令:\n\n"
# 遍历顶级指令
# 遍历顶级命令
for key in command_list:
command = command_list[key]
if command['parent'] is None:
reply_str += "!{} - {}\n".format(key, command['description'])
reply_str += "\n请使用 !cmd <令名称> 来查看令的详细信息"
reply_str += "\n请使用 !cmd <令名称> 来查看令的详细信息"
reply = [reply_str]
else:
@@ -33,7 +33,7 @@ class CmdCommand(AbstractCommandNode):
if command_name in command_list:
reply = [command_list[command_name]['cls'].help()]
else:
reply = ["[bot]{} 不存在".format(command_name)]
reply = ["[bot]{} 不存在".format(command_name)]
return True, reply


@@ -1,7 +1,7 @@
from ..aamgr import AbstractCommandNode, Context
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="help",
description="显示自定义的帮助信息",
@@ -9,11 +9,11 @@ from ..aamgr import AbstractCommandNode, Context
aliases=[],
privilege=1
)
class HelpCommand(AbstractCommandNode):
class HelpCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import tips
reply = ["[bot] "+tips.help_message + "\n请输入 !cmd 查看令列表"]
reply = ["[bot] "+tips.help_message + "\n请输入 !cmd 查看令列表"]
# 警告config.help_message过时
import config


@@ -1,7 +1,9 @@
from ..aamgr import AbstractCommandNode, Context
import threading
@AbstractCommandNode.register(
from .. import aamgr
@aamgr.AbstractCommandNode.register(
parent=None,
name="reload",
description="执行热重载",
@@ -9,9 +11,9 @@ import threading
aliases=[],
privilege=2
)
class ReloadCommand(AbstractCommandNode):
class ReloadCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
reply = []
import pkg.utils.reloader


@@ -1,9 +1,10 @@
from ..aamgr import AbstractCommandNode, Context
import threading
import traceback
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="update",
description="更新程序",
@@ -11,9 +12,9 @@ import traceback
aliases=[],
privilege=2
)
class UpdateCommand(AbstractCommandNode):
class UpdateCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
reply = []
import pkg.utils.updater
import pkg.utils.reloader
@@ -22,8 +23,7 @@ class UpdateCommand(AbstractCommandNode):
def update_task():
try:
if pkg.utils.updater.update_all():
pkg.utils.reloader.reload_all(notify=False)
pkg.utils.context.get_qqbot_manager().notify_admin("更新完成")
pkg.utils.context.get_qqbot_manager().notify_admin("更新完成, 请手动重启程序。")
else:
pkg.utils.context.get_qqbot_manager().notify_admin("无新版本")
except Exception as e0:


@@ -1,8 +1,7 @@
from ..aamgr import AbstractCommandNode, Context
import logging
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="usage",
description="获取使用情况",
@@ -10,11 +9,10 @@ import logging
aliases=[],
privilege=1
)
class UsageCommand(AbstractCommandNode):
class UsageCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
import config
import pkg.utils.credit as credit
import pkg.utils.context
reply = []


@@ -1,7 +1,7 @@
from ..aamgr import AbstractCommandNode, Context
from .. import aamgr
@AbstractCommandNode.register(
@aamgr.AbstractCommandNode.register(
parent=None,
name="version",
description="查看版本信息",
@@ -9,9 +9,9 @@ from ..aamgr import AbstractCommandNode, Context
aliases=[],
privilege=1
)
class VersionCommand(AbstractCommandNode):
class VersionCommand(aamgr.AbstractCommandNode):
@classmethod
def process(cls, ctx: Context) -> tuple[bool, list]:
def process(cls, ctx: aamgr.Context) -> tuple[bool, list]:
reply = []
import pkg.utils.updater


@@ -1,31 +1,15 @@
# 指令处理模块
# 命令处理模块
import logging
import json
import datetime
import os
import threading
import traceback
import pkg.openai.session
import pkg.openai.manager
import pkg.utils.reloader
import pkg.utils.updater
import pkg.utils.context
import pkg.qqbot.message
import pkg.utils.credit as credit
# import pkg.qqbot.cmds.model as cmdmodel
import pkg.qqbot.cmds.aamgr as cmdmgr
from mirai import Image
from ..qqbot.cmds import aamgr as cmdmgr
def process_command(session_name: str, text_message: str, mgr, config,
def process_command(session_name: str, text_message: str, mgr, config: dict,
launcher_type: str, launcher_id: int, sender_id: int, is_admin: bool) -> list:
reply = []
try:
logging.info(
"[{}]发起令:{}".format(session_name, text_message[:min(20, len(text_message))] + (
"[{}]发起令:{}".format(session_name, text_message[:min(20, len(text_message))] + (
"..." if len(text_message) > 20 else "")))
cmd = text_message[1:].strip().split(' ')[0]
@@ -58,7 +42,7 @@ def process_command(session_name: str, text_message: str, mgr, config,
return reply
except Exception as e:
mgr.notify_admin("{}令执行失败:{}".format(session_name, e))
mgr.notify_admin("{}令执行失败:{}".format(session_name, e))
logging.exception(e)
reply = ["[bot]err:{}".format(e)]


@@ -4,6 +4,8 @@ import requests
import json
import logging
from ..utils import context
class ReplyFilter:
sensitive_words = []
@@ -20,12 +22,13 @@ class ReplyFilter:
self.sensitive_words = sensitive_words
self.mask = mask
self.mask_word = mask_word
import config
self.baidu_check = config.baidu_check
self.baidu_api_key = config.baidu_api_key
self.baidu_secret_key = config.baidu_secret_key
self.inappropriate_message_tips = config.inappropriate_message_tips
config = context.get_config_manager().data
self.baidu_check = config['baidu_check']
self.baidu_api_key = config['baidu_api_key']
self.baidu_secret_key = config['baidu_secret_key']
self.inappropriate_message_tips = config['inappropriate_message_tips']
def is_illegal(self, message: str) -> bool:
processed = self.process(message)


@@ -1,16 +1,18 @@
import re
from ..utils import context
def ignore(msg: str) -> bool:
"""检查消息是否应该被忽略"""
import config
config = context.get_config_manager().data
if 'prefix' in config.ignore_rules:
for rule in config.ignore_rules['prefix']:
if 'prefix' in config['ignore_rules']:
for rule in config['ignore_rules']['prefix']:
if msg.startswith(rule):
return True
if 'regexp' in config.ignore_rules:
for rule in config.ignore_rules['regexp']:
if 'regexp' in config['ignore_rules']:
for rule in config['ignore_rules']['regexp']:
if re.search(rule, msg):
return True


@@ -1,41 +1,34 @@
import asyncio
import json
import os
import threading
from mirai import At, GroupMessage, MessageEvent, Mirai, StrangerMessage, WebSocketAdapter, HTTPAdapter, \
FriendMessage, Image, MessageChain, Plain
from func_timeout import func_set_timeout
import pkg.openai.session
import pkg.openai.manager
from func_timeout import FunctionTimedOut
import logging
import pkg.qqbot.filter
import pkg.qqbot.process as processor
import pkg.utils.context
from mirai import At, GroupMessage, MessageEvent, StrangerMessage, \
FriendMessage, Image, MessageChain, Plain
import func_timeout
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
from ..openai import session as openai_session
from ..qqbot import filter as qqbot_filter
from ..qqbot import process as processor
from ..utils import context
from ..plugin import host as plugin_host
from ..plugin import models as plugin_models
import tips as tips_custom
import pkg.qqbot.adapter as msadapter
from ..qqbot import adapter as msadapter
# 检查消息是否符合泛响应匹配机制
def check_response_rule(group_id:int, text: str):
config = pkg.utils.context.get_config()
config = context.get_config_manager().data
rules = config.response_rules
rules = config['response_rules']
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
rules = config.response_rules[str(group_id)]
if 'prefix' not in config['response_rules']:
if str(group_id) in config['response_rules']:
rules = config['response_rules'][str(group_id)]
else:
rules = config.response_rules['default']
rules = config['response_rules']['default']
# 检查前缀匹配
if 'prefix' in rules:
@@ -55,16 +48,16 @@ def check_response_rule(group_id:int, text: str):
def response_at(group_id: int):
config = pkg.utils.context.get_config()
config = context.get_config_manager().data
use_response_rule = config.response_rules
use_response_rule = config['response_rules']
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
if 'prefix' not in config['response_rules']:
if str(group_id) in config['response_rules']:
use_response_rule = config['response_rules'][str(group_id)]
else:
use_response_rule = config.response_rules['default']
use_response_rule = config['response_rules']['default']
if 'at' not in use_response_rule:
return True
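check_response_rule, response_at, and random_responding all resolve the effective rule set identically: a top-level `prefix` key means a single global rule set, otherwise the group id is looked up with a fallback to `default`. That repeated lookup could be factored into one helper; a sketch (the helper name is hypothetical, not part of the codebase):

```python
def resolve_rules(response_rules: dict, group_id: int) -> dict:
    """Pick group-specific rules when configured per group, else the defaults."""
    if "prefix" in response_rules:
        # flat, global rule set (legacy layout)
        return response_rules
    return response_rules.get(str(group_id), response_rules["default"])

cfg_rules = {"default": {"at": True}, "12345": {"at": False, "random_rate": 0.1}}
```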
@@ -73,16 +66,16 @@ def response_at(group_id: int):
def random_responding(group_id):
config = pkg.utils.context.get_config()
config = context.get_config_manager().data
use_response_rule = config.response_rules
use_response_rule = config['response_rules']
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
if 'prefix' not in config['response_rules']:
if str(group_id) in config['response_rules']:
use_response_rule = config['response_rules'][str(group_id)]
else:
use_response_rule = config.response_rules['default']
use_response_rule = config['response_rules']['default']
if 'random_rate' in use_response_rule:
import random
@@ -109,31 +102,35 @@ class QQBotManager:
ban_group = []
def __init__(self, first_time_init=True):
import config
config = context.get_config_manager().data
self.timeout = config.process_message_timeout
self.retry = config.retry_times
self.timeout = config['process_message_timeout']
self.retry = config['retry_times']
# YiriMirai's bot object is a singleton and its shutdown method is currently unusable,
# so the bot object is created only on first initialization and reused after reloads.
# As a result, the bot configuration does not support hot reloading.
if first_time_init:
logging.info("Use adapter:" + config.msg_source_adapter)
if config.msg_source_adapter == 'yirimirai':
logging.debug("Use adapter:" + config['msg_source_adapter'])
if config['msg_source_adapter'] == 'yirimirai':
from pkg.qqbot.sources.yirimirai import YiriMiraiAdapter
mirai_http_api_config = config.mirai_http_api_config
self.bot_account_id = config.mirai_http_api_config['qq']
mirai_http_api_config = config['mirai_http_api_config']
self.bot_account_id = config['mirai_http_api_config']['qq']
self.adapter = YiriMiraiAdapter(mirai_http_api_config)
elif config.msg_source_adapter == 'nakuru':
elif config['msg_source_adapter'] == 'nakuru':
from pkg.qqbot.sources.nakuru import NakuruProjectAdapter
self.adapter = NakuruProjectAdapter(config.nakuru_config)
self.adapter = NakuruProjectAdapter(config['nakuru_config'])
self.bot_account_id = self.adapter.bot_account_id
else:
self.adapter = pkg.utils.context.get_qqbot_manager().adapter
self.bot_account_id = pkg.utils.context.get_qqbot_manager().bot_account_id
self.adapter = context.get_qqbot_manager().adapter
self.bot_account_id = context.get_qqbot_manager().bot_account_id
# Save account_id into the audit module
from ..utils.center import apigroup
apigroup.APIGroup._runtime_info['account_id'] = "{}".format(self.bot_account_id)
pkg.utils.context.set_qqbot_manager(self)
context.set_qqbot_manager(self)
# Register the events
# Caution: after registering a new event handler, be sure to add the corresponding unsubscribe code in unsubscribe_all
@@ -154,7 +151,7 @@ class QQBotManager:
self.on_person_message(event)
pkg.utils.context.get_thread_ctl().submit_user_task(
context.get_thread_ctl().submit_user_task(
friend_message_handler,
)
self.adapter.register_listener(
@@ -179,11 +176,11 @@ class QQBotManager:
self.on_person_message(event)
pkg.utils.context.get_thread_ctl().submit_user_task(
context.get_thread_ctl().submit_user_task(
stranger_message_handler,
)
# nakuru does not distinguish friends from strangers, so the stranger event is registered only for yirimirai
if config.msg_source_adapter == 'yirimirai':
if config['msg_source_adapter'] == 'yirimirai':
self.adapter.register_listener(
StrangerMessage,
on_stranger_message
@@ -206,7 +203,7 @@ class QQBotManager:
self.on_group_message(event)
pkg.utils.context.get_thread_ctl().submit_user_task(
context.get_thread_ctl().submit_user_task(
group_message_handler,
event
)
@@ -220,12 +217,11 @@ class QQBotManager:
用于在热重载流程中卸载所有事件处理器
"""
import config
self.adapter.unregister_listener(
FriendMessage,
on_friend_message
)
if config.msg_source_adapter == 'yirimirai':
if config['msg_source_adapter'] == 'yirimirai':
self.adapter.unregister_listener(
StrangerMessage,
on_stranger_message
@@ -250,33 +246,50 @@ class QQBotManager:
if hasattr(banlist, "enable_group"):
self.enable_group = banlist.enable_group
config = pkg.utils.context.get_config()
config = context.get_config_manager().data
if os.path.exists("sensitive.json") \
and config.sensitive_word_filter is not None \
and config.sensitive_word_filter:
and config['sensitive_word_filter'] is not None \
and config['sensitive_word_filter']:
with open("sensitive.json", "r", encoding="utf-8") as f:
sensitive_json = json.load(f)
self.reply_filter = pkg.qqbot.filter.ReplyFilter(
self.reply_filter = qqbot_filter.ReplyFilter(
sensitive_words=sensitive_json['words'],
mask=sensitive_json['mask'] if 'mask' in sensitive_json else '*',
mask_word=sensitive_json['mask_word'] if 'mask_word' in sensitive_json else ''
)
else:
self.reply_filter = pkg.qqbot.filter.ReplyFilter([])
self.reply_filter = qqbot_filter.ReplyFilter([])
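The branch above loads `sensitive.json` and builds a `ReplyFilter` with optional `mask`/`mask_word` fallbacks. A minimal stand-in sketch — the real `qqbot_filter.ReplyFilter` API is only assumed to look roughly like this:

```python
# Stand-in for qqbot_filter.ReplyFilter as constructed above:
# 'mask' defaults to '*' and 'mask_word' to '' when absent from sensitive.json.
# The class body here is an illustrative assumption, not the project's code.
class ReplyFilter:
    def __init__(self, sensitive_words, mask='*', mask_word=''):
        self.sensitive_words = sensitive_words
        self.mask = mask
        self.mask_word = mask_word

    def is_illegal(self, text: str) -> bool:
        # Used by the income_msg_check path to reject incoming messages
        return any(w in text for w in self.sensitive_words)

    def process(self, text: str) -> str:
        # Replace each sensitive word with mask_word, or mask repeated per character
        for w in self.sensitive_words:
            replacement = self.mask_word if self.mask_word else self.mask * len(w)
            text = text.replace(w, replacement)
        return text

f = ReplyFilter(['badword'], mask='*')
print(f.process('say badword twice'))  # say ******* twice
```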
def send(self, event, msg, check_quote=True, check_at_sender=True):
config = context.get_config_manager().data
if check_at_sender and config['at_sender']:
msg.insert(
0,
Plain(" \n")
)
# When the reply body contains a newline, quote may already carry an @; in that case only the newline is added, not a separate At
if "\n" not in str(msg[1]) or config['msg_source_adapter'] == 'nakuru':
msg.insert(
0,
At(
event.sender.id
)
)
def send(self, event, msg, check_quote=True):
config = pkg.utils.context.get_config()
self.adapter.reply_message(
event,
msg,
quote_origin=True if config.quote_origin and check_quote else False
quote_origin=True if config['quote_origin'] and check_quote else False
)
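The new `check_at_sender` branch above prepends a newline `Plain` and, when it will not duplicate the quote's own @, an `At` component. A self-contained sketch with stand-in `Plain`/`At` classes (names borrowed from mirai; their behaviour here is assumed for illustration):

```python
# Minimal stand-ins for the mirai message components used above
class Plain:
    def __init__(self, text): self.text = text
    def __str__(self): return self.text

class At:
    def __init__(self, target): self.target = target
    def __str__(self): return f"@{self.target}"

def prepend_at(msg: list, sender_id: int, adapter: str) -> list:
    """Sketch of the at_sender logic in send() above."""
    msg.insert(0, Plain(" \n"))
    # When the body contains a newline, quote may already carry an @,
    # so only add our own At when it will not duplicate (nakuru excepted).
    if "\n" not in str(msg[1]) or adapter == 'nakuru':
        msg.insert(0, At(sender_id))
    return msg

out = prepend_at([Plain("hello")], 10001, 'yirimirai')
print([str(c) for c in out])  # ['@10001', ' \n', 'hello']
```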
# Private message handling
def on_person_message(self, event: MessageEvent):
import config
reply = ''
config = context.get_config_manager().data
if not self.enable_private:
logging.debug("已在banlist.py中禁用所有私聊")
elif event.sender.id == self.bot_account_id:
@@ -290,7 +303,7 @@ class QQBotManager:
for i in range(self.retry):
try:
@func_set_timeout(config.process_message_timeout)
@func_timeout.func_set_timeout(config['process_message_timeout'])
def time_ctrl_wrapper():
reply = processor.process_message('person', event.sender.id, str(event.message_chain),
event.message_chain,
@@ -299,26 +312,28 @@ class QQBotManager:
reply = time_ctrl_wrapper()
break
except FunctionTimedOut:
except func_timeout.FunctionTimedOut:
logging.warning("person_{}: 超时,重试中({})".format(event.sender.id, i))
pkg.openai.session.get_session('person_{}'.format(event.sender.id)).release_response_lock()
if "person_{}".format(event.sender.id) in pkg.qqbot.process.processing:
pkg.qqbot.process.processing.remove('person_{}'.format(event.sender.id))
openai_session.get_session('person_{}'.format(event.sender.id)).release_response_lock()
if "person_{}".format(event.sender.id) in processor.processing:
processor.processing.remove('person_{}'.format(event.sender.id))
failed += 1
continue
if failed == self.retry:
pkg.openai.session.get_session('person_{}'.format(event.sender.id)).release_response_lock()
openai_session.get_session('person_{}'.format(event.sender.id)).release_response_lock()
self.notify_admin("{} 请求超时".format("person_{}".format(event.sender.id)))
reply = [tips_custom.reply_message]
if reply:
return self.send(event, reply, check_quote=False)
return self.send(event, reply, check_quote=False, check_at_sender=False)
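The retry loop above wraps the processor call with `func_timeout.func_set_timeout` and counts failures. The same pattern can be sketched with only the standard library, swapping `func_timeout` for `concurrent.futures` — note this is a substitution, not the code the diff actually uses:

```python
import concurrent.futures

def call_with_retry(fn, timeout: float, retries: int):
    """Run fn() with a timeout, retrying up to `retries` times.

    Sketch of the loop in on_person_message/on_group_message above; the real
    code also releases the session's response lock and cleans up the
    `processing` list on each timeout. Unlike func_timeout, a worker that is
    truly hung is not killed here, only abandoned.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        for attempt in range(retries):
            fut = pool.submit(fn)
            try:
                return fut.result(timeout=timeout)
            except concurrent.futures.TimeoutError:
                continue  # real code would release locks and log the retry here
    raise TimeoutError("all {} attempts timed out".format(retries))

print(call_with_retry(lambda: "ok", timeout=1.0, retries=3))  # ok
```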
# Group message handling
def on_group_message(self, event: GroupMessage):
import config
reply = ''
config = context.get_config_manager().data
def process(text=None) -> str:
replys = ""
if At(self.bot_account_id) in event.message_chain:
@@ -328,7 +343,7 @@ class QQBotManager:
failed = 0
for i in range(self.retry):
try:
@func_set_timeout(config.process_message_timeout)
@func_timeout.func_set_timeout(config['process_message_timeout'])
def time_ctrl_wrapper():
replys = processor.process_message('group', event.group.id,
str(event.message_chain).strip() if text is None else text,
@@ -338,16 +353,16 @@ class QQBotManager:
replys = time_ctrl_wrapper()
break
except FunctionTimedOut:
except func_timeout.FunctionTimedOut:
logging.warning("group_{}: 超时,重试中({})".format(event.group.id, i))
pkg.openai.session.get_session('group_{}'.format(event.group.id)).release_response_lock()
if "group_{}".format(event.group.id) in pkg.qqbot.process.processing:
pkg.qqbot.process.processing.remove('group_{}'.format(event.group.id))
openai_session.get_session('group_{}'.format(event.group.id)).release_response_lock()
if "group_{}".format(event.group.id) in processor.processing:
processor.processing.remove('group_{}'.format(event.group.id))
failed += 1
continue
if failed == self.retry:
pkg.openai.session.get_session('group_{}'.format(event.group.id)).release_response_lock()
openai_session.get_session('group_{}'.format(event.group.id)).release_response_lock()
self.notify_admin("{} 请求超时".format("group_{}".format(event.group.id)))
replys = [tips_custom.replys_message]
@@ -376,17 +391,17 @@ class QQBotManager:
# Notify the system administrator
def notify_admin(self, message: str):
config = pkg.utils.context.get_config()
if config.admin_qq != 0 and config.admin_qq != []:
config = context.get_config_manager().data
if config['admin_qq'] != 0 and config['admin_qq'] != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
if type(config['admin_qq']) == int:
self.adapter.send_message(
"person",
config.admin_qq,
config['admin_qq'],
MessageChain([Plain("[bot]{}".format(message))])
)
else:
for adm in config.admin_qq:
for adm in config['admin_qq']:
self.adapter.send_message(
"person",
adm,
@@ -394,17 +409,17 @@ class QQBotManager:
)
def notify_admin_message_chain(self, message):
config = pkg.utils.context.get_config()
if config.admin_qq != 0 and config.admin_qq != []:
config = context.get_config_manager().data
if config['admin_qq'] != 0 and config['admin_qq'] != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
if type(config['admin_qq']) == int:
self.adapter.send_message(
"person",
config.admin_qq,
config['admin_qq'],
message
)
else:
for adm in config.admin_qq:
for adm in config['admin_qq']:
self.adapter.send_message(
"person",
adm,


@@ -1,32 +1,33 @@
# Normal message handling module
import logging
import openai
import pkg.utils.context
import pkg.openai.session
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import pkg.qqbot.blob as blob
import openai
from ..utils import context
from ..openai import session as openai_session
from ..plugin import host as plugin_host
from ..plugin import models as plugin_models
import tips as tips_custom
def handle_exception(notify_admin: str = "", set_reply: str = "") -> list:
"""处理异常当notify_admin不为空时会通知管理员返回通知用户的消息"""
import config
pkg.utils.context.get_qqbot_manager().notify_admin(notify_admin)
if config.hide_exce_info_to_user:
config = context.get_config_manager().data
context.get_qqbot_manager().notify_admin(notify_admin)
if config['hide_exce_info_to_user']:
return [tips_custom.alter_tip_message] if tips_custom.alter_tip_message else []
else:
return [set_reply]
def process_normal_message(text_message: str, mgr, config, launcher_type: str,
def process_normal_message(text_message: str, mgr, config: dict, launcher_type: str,
launcher_id: int, sender_id: int) -> list:
session_name = f"{launcher_type}_{launcher_id}"
logging.info("[{}]发送消息:{}".format(session_name, text_message[:min(20, len(text_message))] + (
"..." if len(text_message) > 20 else "")))
session = pkg.openai.session.get_session(session_name)
session = openai_session.get_session(session_name)
unexpected_exception_times = 0
@@ -38,9 +39,9 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = handle_exception(notify_admin=f"{session_name},多次尝试失败。", set_reply=f"[bot]多次尝试失败,请重试或联系管理员")
break
try:
prefix = "[GPT]" if config.show_prefix else ""
prefix = "[GPT]" if config['show_prefix'] else ""
text = session.append(text_message)
text, finish_reason, funcs = session.query(text_message)
# Emit the plugin event
args = {
@@ -49,10 +50,12 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
"sender_id": sender_id,
"session": session,
"prefix": prefix,
"response_text": text
"response_text": text,
"finish_reason": finish_reason,
"funcs_called": funcs,
}
event = pkg.plugin.host.emit(plugin_models.NormalMessageResponded, **args)
event = plugin_host.emit(plugin_models.NormalMessageResponded, **args)
if event.get_return_value("prefix") is not None:
prefix = event.get_return_value("prefix")
@@ -62,42 +65,43 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
if not event.is_prevented_default():
reply = [prefix + text]
except openai.error.APIConnectionError as e:
except openai.APIConnectionError as e:
err_msg = str(e)
if err_msg.__contains__('Error communicating with OpenAI'):
reply = handle_exception("{}会话调用API失败:{}\n请尝试关闭网络代理来解决此问题。".format(session_name, e),
reply = handle_exception("{}会话调用API失败:{}\n您的网络无法访问OpenAI接口或网络代理不正常".format(session_name, e),
"[bot]err:调用API失败请重试或联系管理员或等待修复")
else:
reply = handle_exception("{}会话调用API失败:{}".format(session_name, e), "[bot]err:调用API失败请重试或联系管理员或等待修复")
except openai.error.RateLimitError as e:
except openai.RateLimitError as e:
logging.debug(type(e))
logging.debug(e.error['message'])
if 'message' in e.error and e.error['message'].__contains__('You exceeded your current quota'):
# Try switching the api-key
current_key_name = pkg.utils.context.get_openai_manager().key_mgr.get_key_name(
pkg.utils.context.get_openai_manager().key_mgr.using_key
current_key_name = context.get_openai_manager().key_mgr.get_key_name(
context.get_openai_manager().key_mgr.using_key
)
pkg.utils.context.get_openai_manager().key_mgr.set_current_exceeded()
context.get_openai_manager().key_mgr.set_current_exceeded()
# Emit the plugin event
args = {
'key_name': current_key_name,
'usage': pkg.utils.context.get_openai_manager().audit_mgr
.get_usage(pkg.utils.context.get_openai_manager().key_mgr.get_using_key_md5()),
'exceeded_keys': pkg.utils.context.get_openai_manager().key_mgr.exceeded,
'usage': context.get_openai_manager().audit_mgr
.get_usage(context.get_openai_manager().key_mgr.get_using_key_md5()),
'exceeded_keys': context.get_openai_manager().key_mgr.exceeded,
}
event = plugin_host.emit(plugin_models.KeyExceeded, **args)
if not event.is_prevented_default():
switched, name = pkg.utils.context.get_openai_manager().key_mgr.auto_switch()
switched, name = context.get_openai_manager().key_mgr.auto_switch()
if not switched:
reply = handle_exception(
"api-key调用额度超限({}),无可用api_key,请向OpenAI账户充值或在config.py中更换api_key如果你认为这是误判请尝试重启程序。".format(
current_key_name), "[bot]err:API调用额度超额请联系管理员或等待修复")
else:
openai.api_key = pkg.utils.context.get_openai_manager().key_mgr.get_using_key()
openai.api_key = context.get_openai_manager().key_mgr.get_using_key()
mgr.notify_admin("api-key调用额度超限({}),接口报错,已切换到{}".format(current_key_name, name))
reply = ["[bot]err:API调用额度超额已自动切换请重新发送消息"]
continue
@@ -113,14 +117,14 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
else:
reply = handle_exception("{}会话调用API失败:{}".format(session_name, e),
"[bot]err:RateLimitError,请重试或联系作者,或等待修复")
except openai.error.InvalidRequestError as e:
if config.auto_reset and "This model's maximum context length is" in str(e):
except openai.BadRequestError as e:
if config['auto_reset'] and "This model's maximum context length is" in str(e):
session.reset(persist=True)
reply = [tips_custom.session_auto_reset_message]
else:
reply = handle_exception("{}API调用参数错误:{}\n".format(
session_name, e), "[bot]err:API调用参数错误请联系管理员或等待修复")
except openai.error.ServiceUnavailableError as e:
except openai.APIStatusError as e:
reply = handle_exception("{}API调用服务不可用:{}".format(session_name, e), "[bot]err:API调用服务不可用请重试或联系管理员或等待修复")
except Exception as e:
logging.exception(e)


@@ -1,32 +1,27 @@
# This module provides the interface to the concrete message-handling logic
import asyncio
import time
import traceback
import mirai
import logging
from mirai import MessageChain, Plain
# config is not imported dynamically here,
# because a dynamic import at this point would deadlock the program,
# and a static import of config behaves the same as a dynamic one here
# Deprecated, since the timeout is now read dynamically
# import config as config_init_import
import pkg.openai.session
import pkg.openai.manager
import pkg.utils.reloader
import pkg.utils.updater
import pkg.utils.context
import pkg.qqbot.message
import pkg.qqbot.command
import pkg.qqbot.ratelimit as ratelimit
from ..qqbot import ratelimit
from ..qqbot import command, message
from ..openai import session as openai_session
from ..utils import context
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import pkg.qqbot.ignore as ignore
import pkg.qqbot.banlist as banlist
import pkg.qqbot.blob as blob
from ..plugin import host as plugin_host
from ..plugin import models as plugin_models
from ..qqbot import ignore
from ..qqbot import banlist
from ..qqbot import blob
import tips as tips_custom
processing = []
@@ -34,18 +29,18 @@ processing = []
def is_admin(qq: int) -> bool:
"""兼容list和int类型的管理员判断"""
import config
if type(config.admin_qq) == list:
return qq in config.admin_qq
config = context.get_config_manager().data
if type(config['admin_qq']) == list:
return qq in config['admin_qq']
else:
return qq == config.admin_qq
return qq == config['admin_qq']
def process_message(launcher_type: str, launcher_id: int, text_message: str, message_chain: MessageChain,
sender_id: int) -> MessageChain:
def process_message(launcher_type: str, launcher_id: int, text_message: str, message_chain: mirai.MessageChain,
sender_id: int) -> mirai.MessageChain:
global processing
mgr = pkg.utils.context.get_qqbot_manager()
mgr = context.get_qqbot_manager()
reply = []
session_name = "{}_{}".format(launcher_type, launcher_id)
@@ -59,10 +54,10 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
logging.info("根据忽略规则忽略消息: {}".format(text_message))
return []
import config
config = context.get_config_manager().data
if not config.wait_last_done and session_name in processing:
return MessageChain([Plain(tips_custom.message_drop_tip)])
if not config['wait_last_done'] and session_name in processing:
return mirai.MessageChain([mirai.Plain(tips_custom.message_drop_tip)])
# Check whether the bot is muted
if launcher_type == 'group':
@@ -71,12 +66,11 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
logging.info("机器人被禁言,跳过消息处理(group_{})".format(launcher_id))
return reply
import config
if config.income_msg_check:
if config['income_msg_check']:
if mgr.reply_filter.is_illegal(text_message):
return MessageChain(Plain("[bot] 消息中存在不合适的内容, 请更换措辞"))
return mirai.MessageChain(mirai.Plain("[bot] 消息中存在不合适的内容, 请更换措辞"))
pkg.openai.session.get_session(session_name).acquire_response_lock()
openai_session.get_session(session_name).acquire_response_lock()
text_message = text_message.strip()
@@ -87,11 +81,11 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
# Process the message
try:
config = pkg.utils.context.get_config()
processing.append(session_name)
try:
if text_message.startswith('!') or text_message.startswith(""): # 指令
msg_type = ''
if text_message.startswith('!') or text_message.startswith(""): # 命令
msg_type = 'command'
# Emit the plugin event
args = {
'launcher_type': launcher_type,
@@ -114,17 +108,18 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
reply = event.get_return_value("reply")
if not event.is_prevented_default():
reply = pkg.qqbot.command.process_command(session_name, text_message,
reply = command.process_command(session_name, text_message,
mgr, config, launcher_type, launcher_id, sender_id, is_admin(sender_id))
else: # 消息
msg_type = 'message'
# Rate-limit drop check
# print(ratelimit.__crt_minute_usage__[session_name])
if config.rate_limit_strategy == "drop":
if config['rate_limit_strategy'] == "drop":
if ratelimit.is_reach_limit(session_name):
logging.info("根据限速策略丢弃[{}]消息: {}".format(session_name, text_message))
return MessageChain(["[bot]"+tips_custom.rate_limit_drop_tip]) if tips_custom.rate_limit_drop_tip != "" else []
return mirai.MessageChain(["[bot]"+tips_custom.rate_limit_drop_tip]) if tips_custom.rate_limit_drop_tip != "" else []
before = time.time()
# Emit the plugin event
@@ -146,11 +141,11 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
reply = event.get_return_value("reply")
if not event.is_prevented_default():
reply = pkg.qqbot.message.process_normal_message(text_message,
reply = message.process_normal_message(text_message,
mgr, config, launcher_type, launcher_id, sender_id)
# Rate-limit wait time
if config.rate_limit_strategy == "wait":
if config['rate_limit_strategy'] == "wait":
time.sleep(ratelimit.get_rest_wait_time(session_name, time.time() - before))
ratelimit.add_usage(session_name)
@@ -162,7 +157,9 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
"回复[{}]文字消息:{}".format(session_name,
reply[0][:min(100, len(reply[0]))] + (
"..." if len(reply[0]) > 100 else "")))
reply = [mgr.reply_filter.process(reply[0])]
if msg_type == 'message':
reply = [mgr.reply_filter.process(reply[0])]
reply = blob.check_text(reply[0])
else:
logging.info("回复[{}]消息".format(session_name))
@@ -170,16 +167,16 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
finally:
processing.remove(session_name)
finally:
pkg.openai.session.get_session(session_name).release_response_lock()
openai_session.get_session(session_name).release_response_lock()
# Determine the delay time
if config.force_delay_range[1] == 0:
if config['force_delay_range'][1] == 0:
delay_time = 0
else:
import random
# Pick a random (float) value from the delay range
rdm = random.uniform(config.force_delay_range[0], config.force_delay_range[1])
rdm = random.uniform(config['force_delay_range'][0], config['force_delay_range'][1])
spent = time.time() - start_time
@@ -191,4 +188,4 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
logging.info("[风控] 强制延迟{:.2f}秒(如需关闭请到config.py修改force_delay_range字段)".format(delay_time))
time.sleep(delay_time)
return MessageChain(reply)
return mirai.MessageChain(reply)


@@ -3,6 +3,9 @@ import time
import logging
import threading
from ..utils import context
__crt_minute_usage__ = {}
"""当前分钟每个会话的对话次数"""
@@ -12,16 +15,12 @@ __timer_thr__: threading.Thread = None
def get_limitation(session_name: str) -> int:
"""获取会话的限制次数"""
import config
config = context.get_config_manager().data
if type(config.rate_limitation) == dict:
# If explicitly specified for this session
if session_name in config.rate_limitation:
return config.rate_limitation[session_name]
else:
return config.rate_limitation["default"]
elif type(config.rate_limitation) == int:
return config.rate_limitation
if session_name in config['rate_limitation']:
return config['rate_limitation'][session_name]
else:
return config['rate_limitation']["default"]
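After this change `get_limitation` assumes `rate_limitation` is always a dict with a `"default"` key (the `int` form handled by the removed branch is gone). A sketch of the simplified lookup:

```python
def get_limitation(rate_limitation: dict, session_name: str) -> int:
    """Per-minute message limit for a session, mirroring the new code above.

    Assumes the post-migration config shape: a dict that must contain
    a "default" entry, with optional per-session overrides.
    """
    if session_name in rate_limitation:
        return rate_limitation[session_name]
    return rate_limitation["default"]

limits = {"default": 60, "group_12345": 5}
print(get_limitation(limits, "group_12345"))  # 5
print(get_limitation(limits, "person_777"))   # 60
```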
def add_usage(session_name: str):


@@ -1,19 +1,19 @@
import mirai
from ..adapter import MessageSourceAdapter, MessageConverter, EventConverter
import nakuru
import nakuru.entities.components as nkc
import asyncio
import typing
import traceback
import logging
import json
from pkg.qqbot.blob import Forward, ForwardMessageNode, ForwardMessageDiaplay
import mirai
import nakuru
import nakuru.entities.components as nkc
from .. import adapter as adapter_model
from ...qqbot import blob
from ...utils import context
class NakuruProjectMessageConverter(MessageConverter):
class NakuruProjectMessageConverter(adapter_model.MessageConverter):
"""消息转换器"""
@staticmethod
def yiri2target(message_chain: mirai.MessageChain) -> list:
@@ -49,7 +49,7 @@ class NakuruProjectMessageConverter(MessageConverter):
nakuru_msg_list.append(nkc.Record.fromURL(component.url))
elif component.path is not None:
nakuru_msg_list.append(nkc.Record.fromFileSystem(component.path))
elif type(component) is Forward:
elif type(component) is blob.Forward:
# Forward message
yiri_forward_node_list = component.node_list
nakuru_forward_node_list = []
@@ -102,7 +102,7 @@ class NakuruProjectMessageConverter(MessageConverter):
return chain
class NakuruProjectEventConverter(EventConverter):
class NakuruProjectEventConverter(adapter_model.EventConverter):
"""事件转换器"""
@staticmethod
def yiri2target(event: typing.Type[mirai.Event]):
@@ -157,7 +157,7 @@ class NakuruProjectEventConverter(EventConverter):
raise Exception("未支持转换的事件类型: " + str(event))
class NakuruProjectAdapter(MessageSourceAdapter):
class NakuruProjectAdapter(adapter_model.MessageSourceAdapter):
"""nakuru-project适配器"""
bot: nakuru.CQHTTP
bot_account_id: int
@@ -173,19 +173,26 @@ class NakuruProjectAdapter(MessageSourceAdapter):
self.listener_list = []
# The nakuru library has a bug: this endpoint fails when it cannot carry the access_token,
# so for now the request is sent manually
import config
config = context.get_config_manager().data
import requests
resp = requests.get(
url="http://{}:{}/get_login_info".format(config.nakuru_config['host'], config.nakuru_config['http_port']),
url="http://{}:{}/get_login_info".format(config['nakuru_config']['host'], config['nakuru_config']['http_port']),
headers={
'Authorization': "Bearer " + config.nakuru_config['token'] if 'token' in config.nakuru_config else ""
'Authorization': "Bearer " + config['nakuru_config']['token'] if 'token' in config['nakuru_config']else ""
},
timeout=5
timeout=5,
proxies=None
)
if resp.status_code == 403:
logging.error("go-cqhttp拒绝访问请检查config.py中nakuru_config的token是否与go-cqhttp设置的access-token匹配")
raise Exception("go-cqhttp拒绝访问请检查config.py中nakuru_config的token是否与go-cqhttp设置的access-token匹配")
self.bot_account_id = int(resp.json()['data']['user_id'])
try:
self.bot_account_id = int(resp.json()['data']['user_id'])
except Exception as e:
logging.error("获取go-cqhttp账号信息失败: {}, 请检查是否已启动go-cqhttp并配置正确".format(e))
raise Exception("获取go-cqhttp账号信息失败: {}, 请检查是否已启动go-cqhttp并配置正确".format(e))
def send_message(
self,
@@ -267,7 +274,7 @@ class NakuruProjectAdapter(MessageSourceAdapter):
logging.debug("注册监听器: " + str(event_type) + " -> " + str(callback))
# Wrapper function
async def listener_wrapper(app: nakuru.CQHTTP, source: self.event_converter.yiri2target(event_type)):
async def listener_wrapper(app: nakuru.CQHTTP, source: NakuruProjectAdapter.event_converter.yiri2target(event_type)):
callback(self.event_converter.target2yiri(source))
# Store the mapping between the wrapper and the original function in the list


@@ -1,12 +1,14 @@
from ..adapter import MessageSourceAdapter
import mirai
import mirai.models.bus
import asyncio
import typing
import mirai
import mirai.models.bus
from mirai.bot import MiraiRunner
class YiriMiraiAdapter(MessageSourceAdapter):
from .. import adapter as adapter_model
class YiriMiraiAdapter(adapter_model.MessageSourceAdapter):
"""YiriMirai适配器"""
bot: mirai.Mirai
@@ -110,7 +112,12 @@ class YiriMiraiAdapter(MessageSourceAdapter):
bus.unsubscribe(event_type, callback)
def run_sync(self):
self.bot.run()
"""运行YiriMirai"""
# Create a new event loop for this thread
loop = asyncio.new_event_loop()
loop.run_until_complete(MiraiRunner(self.bot)._run())
def kill(self) -> bool:
return False


@@ -0,0 +1,88 @@
import abc
import uuid
import json
import logging
import threading
import requests
class APIGroup(metaclass=abc.ABCMeta):
"""API 组抽象类"""
_basic_info: dict = None
_runtime_info: dict = None
prefix = None
def __init__(self, prefix: str):
self.prefix = prefix
def do(
self,
method: str,
path: str,
data: dict = None,
params: dict = None,
headers: dict = {},
**kwargs
):
"""执行一个请求"""
def thr_wrapper(
self,
method: str,
path: str,
data: dict = None,
params: dict = None,
headers: dict = {},
**kwargs
):
try:
url = self.prefix + path
data = json.dumps(data)
headers['Content-Type'] = 'application/json'
ret = requests.request(
method,
url,
data=data,
params=params,
headers=headers,
**kwargs
)
logging.debug("data: %s", data)
logging.debug("ret: %s", ret.json())
except Exception as e:
logging.debug("上报数据失败: %s", e)
thr = threading.Thread(target=thr_wrapper, args=(
self,
method,
path,
data,
params,
headers,
), kwargs=kwargs)
thr.start()
def gen_rid(
self
):
"""生成一个请求 ID"""
return str(uuid.uuid4())
def basic_info(
self
):
"""获取基本信息"""
basic_info = APIGroup._basic_info.copy()
basic_info['rid'] = self.gen_rid()
return basic_info
def runtime_info(
self
):
"""获取运行时信息"""
return APIGroup._runtime_info
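`APIGroup.do` above runs the actual `requests.request` call inside a background thread so a slow or failing report can never block message handling, and swallows errors at debug level. The same fire-and-forget shape, with the network call replaced by a queue write so the sketch stays offline:

```python
import threading
import queue

# Results land here instead of going over the network (sketch only)
sent = queue.Queue()

def report(method: str, path: str, data: dict):
    """Fire-and-forget reporting, mirroring APIGroup.do above."""
    def worker():
        try:
            # Real code: requests.request(method, prefix + path,
            #                             data=json.dumps(data), headers=...)
            sent.put((method, path, data))
        except Exception:
            pass  # real code only logs at debug level; reporting must not raise

    threading.Thread(target=worker, daemon=True).start()

report("POST", "/usage/query", {"usage": 42})
result = sent.get(timeout=2)  # wait for the background thread
print(result)  # ('POST', '/usage/query', {'usage': 42})
```

The daemon flag means pending reports are simply dropped at interpreter exit, which matches the best-effort nature of usage telemetry.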


@@ -0,0 +1,55 @@
from __future__ import annotations
from .. import apigroup
from ... import context
class V2MainDataAPI(apigroup.APIGroup):
"""主程序相关 数据API"""
def __init__(self, prefix: str):
super().__init__(prefix+"/main")
def do(self, *args, **kwargs):
config = context.get_config_manager().data
if not config['report_usage']:
return None
return super().do(*args, **kwargs)
def post_update_record(
self,
spent_seconds: int,
infer_reason: str,
old_version: str,
new_version: str,
):
"""提交更新记录"""
return self.do(
"POST",
"/update",
data={
"basic": self.basic_info(),
"update_info": {
"spent_seconds": spent_seconds,
"infer_reason": infer_reason,
"old_version": old_version,
"new_version": new_version,
}
}
)
def post_announcement_showed(
self,
ids: list[int],
):
"""提交公告已阅"""
return self.do(
"POST",
"/announcement",
data={
"basic": self.basic_info(),
"announcement_info": {
"ids": ids,
}
}
)


@@ -0,0 +1,65 @@
from __future__ import annotations
from .. import apigroup
from ... import context
class V2PluginDataAPI(apigroup.APIGroup):
"""插件数据相关 API"""
def __init__(self, prefix: str):
super().__init__(prefix+"/plugin")
def do(self, *args, **kwargs):
config = context.get_config_manager().data
if not config['report_usage']:
return None
return super().do(*args, **kwargs)
def post_install_record(
self,
plugin: dict
):
"""提交插件安装记录"""
return self.do(
"POST",
"/install",
data={
"basic": self.basic_info(),
"plugin": plugin,
}
)
def post_remove_record(
self,
plugin: dict
):
"""提交插件卸载记录"""
return self.do(
"POST",
"/remove",
data={
"basic": self.basic_info(),
"plugin": plugin,
}
)
def post_update_record(
self,
plugin: dict,
old_version: str,
new_version: str,
):
"""提交插件更新记录"""
return self.do(
"POST",
"/update",
data={
"basic": self.basic_info(),
"plugin": plugin,
"update_info": {
"old_version": old_version,
"new_version": new_version,
}
}
)


@@ -0,0 +1,88 @@
from __future__ import annotations
from .. import apigroup
from ... import context
class V2UsageDataAPI(apigroup.APIGroup):
"""使用量数据相关 API"""
def __init__(self, prefix: str):
super().__init__(prefix+"/usage")
def do(self, *args, **kwargs):
config = context.get_config_manager().data
if not config['report_usage']:
return None
return super().do(*args, **kwargs)
def post_query_record(
self,
session_type: str,
session_id: str,
query_ability_provider: str,
usage: int,
model_name: str,
response_seconds: int,
retry_times: int,
):
"""提交请求记录"""
return self.do(
"POST",
"/query",
data={
"basic": self.basic_info(),
"runtime": self.runtime_info(),
"session_info": {
"type": session_type,
"id": session_id,
},
"query_info": {
"ability_provider": query_ability_provider,
"usage": usage,
"model_name": model_name,
"response_seconds": response_seconds,
"retry_times": retry_times,
}
}
)
def post_event_record(
self,
plugins: list[dict],
event_name: str,
):
"""提交事件触发记录"""
return self.do(
"POST",
"/event",
data={
"basic": self.basic_info(),
"runtime": self.runtime_info(),
"plugins": plugins,
"event_info": {
"name": event_name,
}
}
)
def post_function_record(
self,
plugin: dict,
function_name: str,
function_description: str,
):
"""提交内容函数使用记录"""
return self.do(
"POST",
"/function",
data={
"basic": self.basic_info(),
"plugin": plugin,
"function_info": {
"name": function_name,
"description": function_description,
}
}
)

pkg/utils/center/v2.py Normal file

@@ -0,0 +1,35 @@
from __future__ import annotations
import logging
from . import apigroup
from .groups import main
from .groups import usage
from .groups import plugin
BACKEND_URL = "https://api.qchatgpt.rockchin.top/api/v2"
class V2CenterAPI:
"""中央服务器 v2 API 交互类"""
main: main.V2MainDataAPI = None
"""主 API 组"""
usage: usage.V2UsageDataAPI = None
"""使用量 API 组"""
plugin: plugin.V2PluginDataAPI = None
"""插件 API 组"""
def __init__(self, basic_info: dict = None, runtime_info: dict = None):
"""初始化"""
logging.debug("basic_info: %s, runtime_info: %s", basic_info, runtime_info)
apigroup.APIGroup._basic_info = basic_info
apigroup.APIGroup._runtime_info = runtime_info
self.main = main.V2MainDataAPI(BACKEND_URL)
self.usage = usage.V2UsageDataAPI(BACKEND_URL)
self.plugin = plugin.V2PluginDataAPI(BACKEND_URL)
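The `V2CenterAPI` constructor above shares `basic_info` across all API groups through a class attribute on `APIGroup`, and `basic_info()` stamps every request with a fresh `rid`. A runnable miniature of that pattern (class and field names here are stand-ins, not the project's):

```python
import uuid

class APIGroupSketch:
    """Miniature of the APIGroup/V2CenterAPI pattern above: basic info is
    stored once at class level and every request gets a fresh rid."""
    _basic_info = None

    def basic_info(self):
        info = dict(APIGroupSketch._basic_info)
        info['rid'] = str(uuid.uuid4())  # unique request ID per call
        return info

class CenterAPISketch:
    def __init__(self, basic_info: dict):
        APIGroupSketch._basic_info = basic_info  # shared by all groups
        self.main = APIGroupSketch()
        self.usage = APIGroupSketch()

api = CenterAPISketch({"semantic_version": "v2.6.10"})
a = api.main.basic_info()
b = api.usage.basic_info()
print(a["semantic_version"], a["rid"] != b["rid"])  # v2.6.10 True
```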

File diff suppressed because one or more lines are too long


@@ -1,5 +1,14 @@
from __future__ import annotations
import threading
from pkg.utils import ThreadCtl
from . import threadctl
from ..database import manager as db_mgr
from ..openai import manager as openai_mgr
from ..qqbot import manager as qqbot_mgr
from ..config import manager as config_mgr
from ..plugin import host as plugin_host
from .center import v2 as center_v2
context = {
@@ -7,6 +16,7 @@ context = {
'database.manager.DatabaseManager': None,
'openai.manager.OpenAIInteract': None,
'qqbot.manager.QQBotManager': None,
'config.manager.ConfigManager': None,
},
'pool_ctl': None,
'logger_handler': None,
@@ -29,66 +39,92 @@ def get_config():
return t
def set_database_manager(inst):
def set_database_manager(inst: db_mgr.DatabaseManager):
context_lock.acquire()
context['inst']['database.manager.DatabaseManager'] = inst
context_lock.release()
def get_database_manager():
def get_database_manager() -> db_mgr.DatabaseManager:
context_lock.acquire()
t = context['inst']['database.manager.DatabaseManager']
context_lock.release()
return t
def set_openai_manager(inst):
def set_openai_manager(inst: openai_mgr.OpenAIInteract):
context_lock.acquire()
context['inst']['openai.manager.OpenAIInteract'] = inst
context_lock.release()
def get_openai_manager():
def get_openai_manager() -> openai_mgr.OpenAIInteract:
context_lock.acquire()
t = context['inst']['openai.manager.OpenAIInteract']
context_lock.release()
return t
def set_qqbot_manager(inst):
def set_qqbot_manager(inst: qqbot_mgr.QQBotManager):
context_lock.acquire()
context['inst']['qqbot.manager.QQBotManager'] = inst
context_lock.release()
def get_qqbot_manager():
def get_qqbot_manager() -> qqbot_mgr.QQBotManager:
context_lock.acquire()
t = context['inst']['qqbot.manager.QQBotManager']
context_lock.release()
return t
def set_plugin_host(inst):
def set_config_manager(inst: config_mgr.ConfigManager):
context_lock.acquire()
context['inst']['config.manager.ConfigManager'] = inst
context_lock.release()
def get_config_manager() -> config_mgr.ConfigManager:
context_lock.acquire()
t = context['inst']['config.manager.ConfigManager']
context_lock.release()
return t
def set_plugin_host(inst: plugin_host.PluginHost):
context_lock.acquire()
context['plugin_host'] = inst
context_lock.release()
def get_plugin_host():
def get_plugin_host() -> plugin_host.PluginHost:
context_lock.acquire()
t = context['plugin_host']
context_lock.release()
return t
def set_thread_ctl(inst):
def set_thread_ctl(inst: threadctl.ThreadCtl):
context_lock.acquire()
context['pool_ctl'] = inst
context_lock.release()
def get_thread_ctl() -> ThreadCtl:
def get_thread_ctl() -> threadctl.ThreadCtl:
context_lock.acquire()
t: ThreadCtl = context['pool_ctl']
t: threadctl.ThreadCtl = context['pool_ctl']
context_lock.release()
return t
def set_center_v2_api(inst: center_v2.V2CenterAPI):
context_lock.acquire()
context['center_v2_api'] = inst
context_lock.release()
def get_center_v2_api() -> center_v2.V2CenterAPI:
context_lock.acquire()
t: center_v2.V2CenterAPI = context['center_v2_api']
context_lock.release()
return t
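The getters and setters above pair `acquire()` with `release()` by hand; the equivalent pattern with the lock used as a context manager releases it even if the body raises. A simplified sketch (a toy registry, not the repository's `context` module):

```python
import threading

_registry = {"pool_ctl": None}
_registry_lock = threading.Lock()


def set_thread_ctl(inst):
    # "with" guarantees the lock is released even on exception,
    # unlike bare acquire()/release() pairs
    with _registry_lock:
        _registry["pool_ctl"] = inst


def get_thread_ctl():
    with _registry_lock:
        return _registry["pool_ctl"]


set_thread_ctl("dummy-thread-ctl")
assert get_thread_ctl() == "dummy-thread-ctl"
```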

View File

@@ -1,19 +0,0 @@
# OpenAI账号免费额度剩余查询
import requests
def fetch_credit_data(api_key: str, http_proxy: str) -> dict:
"""OpenAI账号免费额度剩余查询"""
proxies = {
"http":http_proxy,
"https":http_proxy
} if http_proxy is not None else None
resp = requests.get(
url="https://api.openai.com/dashboard/billing/credit_grants",
headers={
"Authorization": "Bearer {}".format(api_key),
},
proxies=proxies
)
return resp.json()

View File

@@ -3,6 +3,8 @@ import time
import logging
import shutil
from . import context
log_file_name = "qchatgpt.log"
@@ -26,17 +28,12 @@ def init_runtime_log_file():
if not os.path.exists("logs"):
os.mkdir("logs")
# 检查本目录是否有qchatgpt.log,若有,移动到logs目录
if os.path.exists("qchatgpt.log"):
shutil.move("qchatgpt.log", "logs/qchatgpt.legacy.log")
log_file_name = "logs/qchatgpt-%s.log" % time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
def reset_logging():
global log_file_name
import config
import pkg.utils.context
import colorlog
@@ -46,7 +43,11 @@ def reset_logging():
for handler in logging.getLogger().handlers:
logging.getLogger().removeHandler(handler)
logging.basicConfig(level=config.logging_level, # 设置日志输出格式
config_mgr = context.get_config_manager()
logging_level = logging.INFO if config_mgr is None else config_mgr.data['logging_level']
logging.basicConfig(level=logging_level, # 设置日志输出格式
filename=log_file_name, # log日志输出的文件位置和文件名
format="[%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s",
# 日志输出的格式
@@ -54,7 +55,7 @@ def reset_logging():
datefmt="%Y-%m-%d %H:%M:%S" # 时间输出的格式
)
sh = logging.StreamHandler()
sh.setLevel(config.logging_level)
sh.setLevel(logging_level)
sh.setFormatter(colorlog.ColoredFormatter(
fmt="%(log_color)s[%(asctime)s.%(msecs)03d] %(filename)s (%(lineno)d) - [%(levelname)s] : "
"%(message)s",

View File

@@ -1,9 +1,11 @@
from . import context
def wrapper_proxies() -> dict:
"""获取代理"""
import config
config = context.get_config_manager().data
return {
"http": config.openai_config['proxy'],
"https": config.openai_config['proxy']
} if 'proxy' in config.openai_config and (config.openai_config['proxy'] is not None) else None
"http": config['openai_config']['proxy'],
"https": config['openai_config']['proxy']
} if 'proxy' in config['openai_config'] and (config['openai_config']['proxy'] is not None) else None
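The rewritten `wrapper_proxies` indexes the config manager's data dict instead of a `config` module attribute. A self-contained sketch of the same logic, with a plain dict standing in for `context.get_config_manager().data`:

```python
def wrapper_proxies(config: dict):
    """Build a requests-style proxies mapping, or None when no proxy is set."""
    openai_config = config.get("openai_config", {})
    proxy = openai_config.get("proxy")
    return {"http": proxy, "https": proxy} if proxy is not None else None


assert wrapper_proxies({"openai_config": {"proxy": "http://127.0.0.1:7890"}}) == {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}
assert wrapper_proxies({"openai_config": {}}) is None
```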

View File

@@ -1,6 +1,6 @@
from pip._internal import main as pipmain
import pkg.utils.log as log
from . import log
def install(package):
@@ -8,7 +8,8 @@ def install(package):
log.reset_logging()
def install_upgrade(package):
pipmain(['install', '--upgrade', package])
pipmain(['install', '--upgrade', package, "-i", "https://pypi.tuna.tsinghua.edu.cn/simple",
"--trusted-host", "pypi.tuna.tsinghua.edu.cn"])
log.reset_logging()
@@ -18,7 +19,8 @@ def run_pip(params: list):
def install_requirements(file):
pipmain(['install', '-r', file, "--upgrade"])
pipmain(['install', '-r', file, "-i", "https://pypi.tuna.tsinghua.edu.cn/simple",
"--trusted-host", "pypi.tuna.tsinghua.edu.cn"])
log.reset_logging()
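A caveat on this hunk: `pip._internal` is not a supported API, and pip's documentation recommends driving pip through the current interpreter in a subprocess instead. A hedged sketch that builds the same mirror-pinned invocation (the helper name is illustrative, not from the repository):

```python
import sys

MIRROR = "https://pypi.tuna.tsinghua.edu.cn/simple"
MIRROR_HOST = "pypi.tuna.tsinghua.edu.cn"


def build_pip_command(*pip_args: str) -> list:
    """Assemble a pip invocation suitable for subprocess.call()."""
    return [sys.executable, "-m", "pip", *pip_args,
            "-i", MIRROR, "--trusted-host", MIRROR_HOST]


# e.g.: subprocess.call(build_pip_command("install", "--upgrade", "requests"))
cmd = build_pip_command("install", "-r", "requirements.txt")
assert cmd[1:3] == ["-m", "pip"]
```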

7
pkg/utils/platform.py Normal file
View File

@@ -0,0 +1,7 @@
import os
import sys
def get_platform() -> str:
"""获取当前平台"""
return sys.platform
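For context, `sys.platform` identifies the OS more precisely than `os.name` (which explains the otherwise-unused `os` import above). A quick comparison of the two, using their documented values:

```python
import os
import sys


# sys.platform: "linux", "win32", "darwin", "cygwin", ...
# os.name:      "posix" or "nt" (coarser grouping)
def get_platform() -> str:
    return sys.platform


assert isinstance(get_platform(), str)
# Windows ("win32") is the only platform where os.name == "nt"
assert (os.name == "nt") == sys.platform.startswith("win")
```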

View File

@@ -1,10 +1,10 @@
import logging
import threading
import importlib
import pkgutil
import pkg.utils.context as context
import pkg.plugin.host
import asyncio
from . import context
from ..plugin import host as plugin_host
def walk(module, prefix='', path_prefix=''):
@@ -15,7 +15,7 @@ def walk(module, prefix='', path_prefix=''):
walk(__import__(module.__name__ + '.' + item.name, fromlist=['']), prefix + item.name + '.', path_prefix + item.name + '/')
else:
logging.info('reload module: {}, path: {}'.format(prefix + item.name, path_prefix + item.name + '.py'))
pkg.plugin.host.__current_module_path__ = "plugins/" + path_prefix + item.name + '.py'
plugin_host.__current_module_path__ = "plugins/" + path_prefix + item.name + '.py'
importlib.reload(__import__(module.__name__ + '.' + item.name, fromlist=['']))
@@ -28,7 +28,7 @@ def reload_all(notify=True):
import main
main.stop()
# 删除所有已注册的指令
# 删除所有已注册的命令
import pkg.qqbot.cmds.aamgr as cmdsmgr
cmdsmgr.__command_list__ = {}
cmdsmgr.__tree_index__ = {}
@@ -53,15 +53,17 @@ def reload_all(notify=True):
# 执行启动流程
logging.info("执行程序启动流程")
main.load_config()
main.complete_tips()
context.get_thread_ctl().reload(
admin_pool_num=context.get_config().admin_pool_num,
user_pool_num=context.get_config().user_pool_num
admin_pool_num=4,
user_pool_num=8
)
def run_wrapper():
asyncio.run(main.start_process(False))
context.get_thread_ctl().submit_sys_task(
main.start,
False
run_wrapper
)
logging.info('程序启动完成')
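The hunk replaces `main.start` with a `run_wrapper` that calls `asyncio.run`, because a coroutine submitted to a worker thread needs its own event loop. A runnable sketch of that shape (a plain thread pool stands in for the project's `ThreadCtl`):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


async def start_process(first_time: bool) -> str:
    await asyncio.sleep(0)          # pretend to do async startup work
    return f"started, first_time={first_time}"


def run_wrapper() -> str:
    # asyncio.run creates (and closes) an event loop for this thread
    return asyncio.run(start_process(False))


with ThreadPoolExecutor(max_workers=1) as pool:
    result = pool.submit(run_wrapper).result()

assert result == "started, first_time=False"
```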

View File

@@ -1,37 +1,46 @@
import logging
from PIL import Image, ImageDraw, ImageFont
import re
import os
import config
import traceback
from PIL import Image, ImageDraw, ImageFont
from ..utils import context
text_render_font: ImageFont = None
if config.blob_message_strategy == "image": # 仅在启用了image时才加载字体
use_font = config.font_path
try:
def initialize():
global text_render_font
logging.debug("初始化文字转图片模块...")
config = context.get_config_manager().data
# 检查是否存在
if not os.path.exists(use_font):
# 若是windows系统使用微软雅黑
if os.name == "nt":
use_font = "C:/Windows/Fonts/msyh.ttc"
if not os.path.exists(use_font):
logging.warn("未找到字体文件,且无法使用Windows自带字体,更换为转发消息组件以发送长消息,您可以在config.py中调整相关设置。")
config.blob_message_strategy = "forward"
if config['blob_message_strategy'] == "image": # 仅在启用了image时才加载字体
use_font = config['font_path']
try:
# 检查是否存在
if not os.path.exists(use_font):
# 若是windows系统使用微软雅黑
if os.name == "nt":
use_font = "C:/Windows/Fonts/msyh.ttc"
if not os.path.exists(use_font):
logging.warn("未找到字体文件,且无法使用Windows自带字体,更换为转发消息组件以发送长消息,您可以在config.py中调整相关设置。")
config['blob_message_strategy'] = "forward"
else:
logging.info("使用Windows自带字体" + use_font)
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
else:
logging.info("使用Windows自带字体" + use_font)
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
logging.warn("未找到字体文件,且无法使用Windows自带字体,更换为转发消息组件以发送长消息,您可以在config.py中调整相关设置。")
config['blob_message_strategy'] = "forward"
else:
logging.warn("未找到字体文件,且无法使用Windows自带字体,更换为转发消息组件以发送长消息,您可以在config.py中调整相关设置。")
config.blob_message_strategy = "forward"
else:
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
except:
traceback.print_exc()
logging.error("加载字体文件失败({}),更换为转发消息组件以发送长消息,您可以在config.py中调整相关设置。".format(use_font))
config.blob_message_strategy = "forward"
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
except:
traceback.print_exc()
logging.error("加载字体文件失败({}),更换为转发消息组件以发送长消息,您可以在config.py中调整相关设置。".format(use_font))
config['blob_message_strategy'] = "forward"
logging.debug("字体文件加载完成。")
def indexNumber(path=''):
@@ -114,6 +123,8 @@ def compress_image(infile, outfile='', kb=100, step=20, quality=90):
def text_to_image(text_str: str, save_as="temp.png", width=800):
global text_render_font
logging.debug("正在将文本转换为图片...")
text_str = text_str.replace("\t", " ")
# 分行
@@ -123,9 +134,13 @@ def text_to_image(text_str: str, save_as="temp.png", width=800):
final_lines = []
text_width = width-80
logging.debug("lines: {}, text_width: {}".format(lines, text_width))
for line in lines:
logging.debug(type(text_render_font))
# 如果长了就分割
line_width = text_render_font.getlength(line)
logging.debug("line_width: {}".format(line_width))
if line_width < text_width:
final_lines.append(line)
continue
@@ -155,7 +170,7 @@ def text_to_image(text_str: str, save_as="temp.png", width=800):
img = Image.new('RGBA', (width, max(280, len(final_lines) * 35 + 65)), (255, 255, 255, 255))
draw = ImageDraw.Draw(img, mode='RGBA')
logging.debug("正在绘制图片...")
# 绘制正文
line_number = 0
offset_x = 20
@@ -187,7 +202,7 @@ def text_to_image(text_str: str, save_as="temp.png", width=800):
line_number += 1
logging.debug("正在保存图片...")
img.save(save_as)
return save_as
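The width check above (`text_render_font.getlength(line)` against `text_width`) drives a greedy line split. That splitting step can be sketched independently of Pillow by injecting the measuring function (here a fixed 16 px per character stands in for `ImageFont.getlength`; the helper name is illustrative):

```python
def wrap_line(line: str, measure, max_width: int) -> list:
    """Greedily split `line` so every chunk renders within max_width."""
    chunks, current = [], ""
    for ch in line:
        if current and measure(current + ch) > max_width:
            chunks.append(current)
            current = ch
        else:
            current += ch
    if current:
        chunks.append(current)
    return chunks


# 16 px per character as a stand-in width measure: 64 px fits 4 characters
assert wrap_line("a" * 10, lambda s: len(s) * 16, 64) == ["aaaa", "aaaa", "aa"]
```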

View File

@@ -1,12 +1,15 @@
from __future__ import annotations
import datetime
import logging
import os.path
import time
import requests
import json
import pkg.utils.constants
import pkg.utils.network as network
from . import constants
from . import network
from . import context
def check_dulwich_closure():
@@ -22,18 +25,6 @@ def check_dulwich_closure():
raise Exception("dulwich模块未安装,请查看 https://github.com/RockChinQ/QChatGPT/issues/77")
def pull_latest(repo_path: str) -> bool:
"""拉取最新代码"""
check_dulwich_closure()
from dulwich import porcelain
repo = porcelain.open_repo(repo_path)
porcelain.pull(repo)
return True
def is_newer(new_tag: str, old_tag: str):
"""判断版本是否更新,忽略第四位版本和第一位版本"""
if new_tag == old_tag:
@@ -70,7 +61,7 @@ def get_release_list() -> list:
def get_current_tag() -> str:
"""获取当前tag"""
current_tag = pkg.utils.constants.semantic_version
current_tag = constants.semantic_version
if os.path.exists("current_tag"):
with open("current_tag", "r") as f:
current_tag = f.read()
@@ -108,7 +99,10 @@ def compare_version_str(v0: str, v1: str) -> int:
def update_all(cli: bool = False) -> bool:
"""检查更新并下载源码"""
start_time = time.time()
current_tag = get_current_tag()
old_tag = current_tag
rls_list = get_release_list()
@@ -201,12 +195,19 @@ def update_all(cli: bool = False) -> bool:
with open("current_tag", "w") as f:
f.write(current_tag)
context.get_center_v2_api().main.post_update_record(
spent_seconds=int(time.time()-start_time),
infer_reason="update",
old_version=old_tag,
new_version=current_tag,
)
# 通知管理员
if not cli:
import pkg.utils.context
pkg.utils.context.get_qqbot_manager().notify_admin("已更新到最新版本: {}\n更新日志:\n{}\n完整的更新日志请前往 https://github.com/RockChinQ/QChatGPT/releases 查看".format(current_tag, "\n".join(rls_notes[:-1])))
pkg.utils.context.get_qqbot_manager().notify_admin("已更新到最新版本: {}\n更新日志:\n{}\n完整的更新日志请前往 https://github.com/RockChinQ/QChatGPT/releases 查看\n请手动重启程序以使用新版本。".format(current_tag, "\n".join(rls_notes[:-1])))
else:
print("已更新到最新版本: {}\n更新日志:\n{}\n完整的更新日志请前往 https://github.com/RockChinQ/QChatGPT/releases 查看".format(current_tag, "\n".join(rls_notes[:-1])))
print("已更新到最新版本: {}\n更新日志:\n{}\n完整的更新日志请前往 https://github.com/RockChinQ/QChatGPT/releases 查看。请手动重启程序以使用新版本。".format(current_tag, "\n".join(rls_notes[:-1])))
return True
@@ -241,35 +242,6 @@ def get_current_version_info() -> str:
return "未知版本"
def get_commit_id_and_time_and_msg() -> str:
"""获取当前提交id和时间和提交信息"""
check_dulwich_closure()
from dulwich import porcelain
repo = porcelain.open_repo('.')
for entry in repo.get_walker():
tz = datetime.timezone(datetime.timedelta(hours=entry.commit.commit_timezone // 3600))
dt = datetime.datetime.fromtimestamp(entry.commit.commit_time, tz)
return str(entry.commit.id)[2:9] + " " + dt.strftime('%Y-%m-%d %H:%M:%S') + " [" + str(entry.commit.message, encoding="utf-8").strip()+"]"
def get_current_commit_id() -> str:
"""检查是否有新版本"""
check_dulwich_closure()
from dulwich import porcelain
repo = porcelain.open_repo('.')
current_commit_id = ""
for entry in repo.get_walker():
current_commit_id = str(entry.commit.id)[2:-1]
break
return current_commit_id
def is_new_version_available() -> bool:
"""检查是否有新版本"""
# 从github获取release列表
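A version-comparison helper like the `compare_version_str` referenced in this hunk can be sketched as follows (an illustrative reimplementation, not necessarily identical to the repository's rule of ignoring the first and fourth components):

```python
def compare_version_str(v0: str, v1: str) -> int:
    """Return 1 if v0 is newer than v1, -1 if older, 0 if equal."""
    a = [int(part) for part in v0.lstrip("v").split(".")]
    b = [int(part) for part in v1.lstrip("v").split(".")]
    # pad the shorter tag so "v2.6" compares like "v2.6.0"
    width = max(len(a), len(b))
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    return (a > b) - (a < b)


assert compare_version_str("v2.6.10", "v2.6.9") == 1
assert compare_version_str("v2.6", "v2.6.0") == 0
assert compare_version_str("v2.5.9", "v2.6.0") == -1
```

Comparing padded integer lists avoids the classic string-comparison bug where `"v2.6.10" < "v2.6.9"` lexicographically.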

View File

@@ -1,12 +1,14 @@
requests~=2.31.0
openai~=0.27.8
dulwich~=0.21.5
requests
openai
dulwich~=0.21.6
colorlog~=6.6.0
yiri-mirai
yiri-mirai-rc
websockets
urllib3~=1.26.10
urllib3
func_timeout~=4.3.5
Pillow
nakuru-project-idk
CallingGPT
tiktoken
tiktoken
PyYaml
aiohttp

BIN
res/QChatGPT-1211.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 246 KiB

View File

@@ -1,14 +1,26 @@
[
{
"id": 0,
"time": "2023-04-24 16:05:20",
"timestamp": 1682323520,
"content": "现已支持使用go-cqhttp替换mirai作为QQ登录框架, 请更新并查看 https://github.com/RockChinQ/QChatGPT/wiki/go-cqhttp%E9%85%8D%E7%BD%AE"
"id": 2,
"time": "2023-08-01 10:49:26",
"timestamp": 1690858166,
"content": "现已支持GPT函数调用功能,欢迎了解:https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8-%E5%86%85%E5%AE%B9%E5%87%BD%E6%95%B0"
},
{
"id": 1,
"time": "2023-05-21 17:33:18",
"timestamp": 1684661598,
"content": "NewBing不再需要鉴权,若您正在使用revLibs逆向库插件,请立即使用!plugin update revLibs命令更新插件到最新版。"
"id": 3,
"time": "2023-11-10 12:20:09",
"timestamp": 1699590009,
"content": "OpenAI 库1.0版本已发行,若出现 OpenAI 调用问题,请更新 QChatGPT 版本。详见项目主页:https://github.com/RockChinQ/QChatGPT"
},
{
"id": 4,
"time": "2023-11-13 18:02:39",
"timestamp": 1699869759,
"content": "近期 OpenAI 接口改动频繁,正在积极适配并添加新功能,请尽快更新到最新版本。更新方式:https://github.com/RockChinQ/QChatGPT/discussions/595"
},
{
"id": 5,
"time": "2023-12-07 9:20:00",
"timestamp": 1701912000,
"content": "QChatGPT 一周年啦!感谢大家的选择和支持,RockChinQ 在此衷心感谢素未谋面但又至关重要的你们每一个人,愿 AI 与我们同在!欢迎前往:https://github.com/RockChinQ/QChatGPT/discussions/627 参与讨论。"
}
]
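A consumer of this announcement file would typically show only entries newer than the last one the user has seen; the `timestamp` field makes that a simple filter. A hedged sketch (function name and trimmed data are illustrative):

```python
ANNOUNCEMENTS = [
    {"id": 2, "timestamp": 1690858166, "content": "function calling supported"},
    {"id": 3, "timestamp": 1699590009, "content": "openai 1.0 released"},
    {"id": 5, "timestamp": 1701912000, "content": "first anniversary"},
]


def unseen(announcements: list, last_seen_ts: int) -> list:
    """Return announcements strictly newer than the given timestamp."""
    return [a for a in announcements if a["timestamp"] > last_seen_ts]


fresh = unseen(ANNOUNCEMENTS, 1699590009)
assert [a["id"] for a in fresh] == [5]
```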

View File

@@ -1,4 +1,6 @@
> [!WARNING]
> This document is outdated; see the [QChatGPT containerized deployment guide](docker_deployment.md) instead.
## Steps

View File

@@ -0,0 +1,64 @@
# QChatGPT Containerized Deployment Guide
> [!WARNING]
> Make sure you **really** need a Docker deployment. You **must** be able to:
> - use `Docker` and `Docker Compose`
> - configure networking between containers
> - work with container file mounts
> - debug containers
> - work hands-on and research problems yourself
>
> If you do not fully meet these requirements, please do not deploy with Docker; we will not answer questions about, nor take any responsibility for, broken configurations caused by mistakes.
> Deploying with Docker on any system other than Linux is **strongly discouraged**.
## Overview
The QChatGPT main program must connect to a QQ login framework to communicate with QQ. You can choose [Mirai](https://github.com/mamoe/mirai) (which additionally requires mirai-api-http; see the manual deployment section of this repository's README) or [go-cqhttp](https://github.com/Mrs4s/go-cqhttp). We only publish an image for the QChatGPT main program; you need to set up the QQ login framework yourself (you can follow the tutorial in [README.md](https://github.com/RockChinQ/QChatGPT#-%E9%85%8D%E7%BD%AEqq%E7%99%BB%E5%BD%95%E6%A1%86%E6%9E%B6), or find an image for it) and set the connection address in QChatGPT's configuration file.
> [!NOTE]
> Make sure Docker and Docker Compose are installed first.
## Preparing the files
> QChatGPT cannot yet generate these files automatically without the template files, so you need to create the files to be mounted by hand, as described below.
> Unless stated otherwise, the template files are in this repository.
> If you would rather not create them one by one, you can also clone this repository locally and run `python main.py` once; the required files are then generated from the templates automatically.
Now create the following files or directories in an empty directory:
### 📄`config.py`
Copy the entire contents of `config-template.py` from the repository root into a new `config.py` and edit it according to the comments inside.
### 📄`banlist.py`
Copy the entire contents of `res/templates/banlist-template.py` into a new `banlist.py`; this is the blacklist configuration, edit it as needed.
### 📄`cmdpriv.json`
Copy the entire contents of `res/templates/cmdpriv-template.json` into a new `cmdpriv.json`; this configures the permissions of each command, edit it as needed.
### 📄`sensitive.json`
Copy the entire contents of `res/templates/sensitive-template.json` into a new `sensitive.json`; this is the sensitive-word configuration, edit it as needed.
### 📄`tips.py`
Copy the entire contents of `tips-custom-template.py` into a new `tips.py`; this configures some of the bot's prompt messages, edit it as needed.
## Running
A `docker-compose.yaml` is provided; adapt it to your network setup so that the QChatGPT program inside the container can reach Mirai or go-cqhttp. Copy `docker-compose.yaml` into this directory, configure it for your network environment, and run:
```bash
docker compose up
```
If no errors appear, the setup is complete. You can stop it with Ctrl+C and then run `docker compose up -d` to keep it running in the background.
## Notes
- Installed plugins are saved in `plugins` (mapped to the local `plugins` directory). Installing a plugin may automatically install additional dependencies; if you then *recreate* the container, the installed plugins are loaded but those extra dependencies are missing, which causes import errors. In that case, delete the plugin directory, restart, and install the plugins again so the program can reinstall the dependencies they need.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 203 KiB

After

Width:  |  Height:  |  Size: 35 KiB

View File

@@ -8,7 +8,6 @@
"plugin.del": 2,
"plugin.off": 2,
"plugin.on": 2,
"continue": 1,
"default": 1,
"default.set": 2,
"del": 1,

BIN
res/webwlkr-demo.gif Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 879 KiB

View File

@@ -48,12 +48,12 @@
</details>
<details>
<summary>✅支持预设指令文字</summary>
<summary>✅支持预设文字</summary>
- 支持以自然语言预设文字,自定义机器人人格等信息
- 详见`config.py`中的`default_prompt`部分
- 支持设置多个预设情景,并通过!reset、!default等指令控制,详细请查看[wiki](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4)
- 支持使用文件存储情景预设文字,并加载: 在`prompts/`目录新建文件写入预设文字,即可通过`!reset <文件名>`指令加载
- 支持设置多个预设情景,并通过!reset、!default等命令控制,详细请查看[wiki](https://github.com/RockChinQ/QChatGPT/wiki/1-%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E5%91%BD%E4%BB%A4)
- 支持使用文件存储情景预设文字,并加载: 在`prompts/`目录新建文件写入预设文字,即可通过`!reset <文件名>`命令加载
</details>
<details>
@@ -61,25 +61,25 @@
- 使用SQLite进行会话内容持久化
- 最后一次对话一定时间后自动保存,请到`config.py`中修改`session_expire_time`的值以自定义时间
- 运行期间可使用`!reset` `!list` `!last` `!next` `!prompt`等指令管理会话
- 运行期间可使用`!reset` `!list` `!last` `!next` `!prompt`等命令管理会话
</details>
<details>
<summary>✅支持对话、绘图等模型,可玩性更高</summary>
- 现已支持OpenAI的对话`Completion API`和绘图`Image API`
- 向机器人发送`!draw <prompt>`即可使用绘图模型
- 向机器人发送`!draw <prompt>`即可使用绘图模型
</details>
<details>
<summary>✅支持指令控制热重载、热更新</summary>
<summary>✅支持命令控制热重载、热更新</summary>
- 允许在运行期间修改`config.py`或其他代码后,以管理员账号向机器人发送`!reload`进行热重载,无需重启
- 运行期间允许以管理员账号向机器人发送`!update`进行热更新,拉取远程最新代码并执行热重载
- 允许在运行期间修改`config.py`或其他代码后,以管理员账号向机器人发送`!reload`进行热重载,无需重启
- 运行期间允许以管理员账号向机器人发送`!update`进行热更新,拉取远程最新代码并执行热重载
</details>
<details>
<summary>✅支持插件加载🧩</summary>
- 自行实现插件加载器及相关支持
- 详细查看[插件使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
- 详细查看[插件使用页](https://github.com/RockChinQ/QChatGPT/wiki/5-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
</details>
<details>
<summary>✅私聊、群聊黑名单机制</summary>
@@ -153,14 +153,14 @@
<img alt="绘图功能" src="https://github.com/RockChinQ/QChatGPT/blob/master/res/屏幕截图%202022-12-29%20194948.png" width="550" height="348"/>
### 机器人指令
### 机器人命令
目前支持的指令
目前支持的命令
> `<>` 中的为必填参数,使用时请不要包含`<>`
> `[]` 中的为可选参数,使用时请不要包含`[]`
#### 用户级别指令
#### 用户级别命令
> 可以使用`!help`命令来查看命令说明
@@ -174,18 +174,17 @@
!del all 删除本会话对象的所有历史记录
!last 切换到前一次会话
!next 切换到后一次会话
!reset [使用预设] 重置对象的当前会话,可指定使用的情景预设值(通过!default指令查看可用的)
!reset [使用预设] 重置对象的当前会话,可指定使用的情景预设值(通过!default命令查看可用的)
!prompt 查看对象当前会话的所有记录
!usage 查看api-key的使用量
!draw <提示语> 进行绘图
!version 查看当前版本并检查更新
!resend 重新回复上一个问题
!continue 继续响应未完成的回合(通常用于内容函数继续调用)
!plugin 用法请查看插件使用页的`管理`章节
!default 查看可用的情景预设值
```
#### 管理员指令
#### 管理员命令
仅管理员私聊机器人时可使用,必须先在`config.py`中的`admin_qq`设置管理员QQ
@@ -198,9 +197,9 @@
!delhst all 删除所有会话的所有历史记录
```
<details>
<summary>⚙ !cfg 指令及其简化形式详解</summary>
<summary>⚙ !cfg 命令及其简化形式详解</summary>
指令可以在运行期间由管理员通过QQ私聊窗口修改配置信息,**重启之后会失效**。
命令可以在运行期间由管理员通过QQ私聊窗口修改配置信息,**重启之后会失效**。
用法:
1. 查看所有配置项及其值
@@ -240,7 +239,7 @@
格式:`!~<配置项名称>`
其中`!~`等价于`!cfg `
则前述三个指令分别可以简化为:
则前述三个命令分别可以简化为:
```
!~all
!~default_prompt
@@ -291,11 +290,11 @@ sensitive_word_filter = True
### 预设文字(default模式)
编辑`config.py`中的`default_prompt`字段,预设文字不宜过长(建议1000字以内),目前所有会话都会射到预设文字的影响。
或将情景预设文字写入到`prompts/`目录下,运行期间即可使用`!reset <文件名>`指令加载,或使用`!default <文件名>`指令将其设为默认
或将情景预设文字写入到`prompts/`目录下,运行期间即可使用`!reset <文件名>`命令加载,或使用`!default <文件名>`命令将其设为默认
### 预设文字(full_scenario模式)
将JSON情景写入到`scenario/`目录下,运行期间即可使用`!reset <文件名>`指令加载,或使用`!default <文件名>`指令将其设为默认.
将JSON情景写入到`scenario/`目录下,运行期间即可使用`!reset <文件名>`命令加载,或使用`!default <文件名>`命令将其设为默认.
JSON情景模板参考`scenario/default_template.json`
@@ -368,7 +367,7 @@ prompt_submit_length = <模型单次请求token数上限> - 情景预设中token
在运行期间使用管理员QQ账号私聊机器人发送`!reload`加载修改后的`config.py`的值或编辑后的代码,无需重启
使用管理员账号私聊机器人,发送`!update`拉取最新代码并进行热更新,无需重启
详见前述`管理员指令`段落
详见前述`管理员命令`段落
### 群内无需@响应规则

View File

@@ -4,7 +4,7 @@
#### 自动更新
由管理员QQ私聊机器人QQ发送`!update`指令
由管理员QQ私聊机器人QQ发送`!update`命令
#### 手动更新

Some files were not shown because too many files have changed in this diff.