Compare commits

...

35 Commits

Author SHA1 Message Date
RockChinQ
c8bb3d612a chore: release v2.6.8 2023-12-17 23:00:25 +08:00
Junyan Qin
bc48b7e623 Merge pull request #636 from RockChinQ/feat/google-gemini
Feat: support the Google Gemini Pro model
2023-12-17 22:59:34 +08:00
RockChinQ
d59d5797f6 doc(README.md): remove PaLM-2 description 2023-12-17 22:55:06 +08:00
RockChinQ
11d3c1e650 doc(README.md): add model descriptions 2023-12-17 22:53:50 +08:00
RockChinQ
8cfd9e6694 chore: add config option descriptions 2023-12-17 22:48:48 +08:00
RockChinQ
d3f401c54d feat: support Google Gemini via one-api 2023-12-17 22:36:30 +08:00
Junyan Qin
a889170d1a Merge pull request #634 from zuo-shi-yun/master
Add the AutoSwitchProxy plugin
2023-12-17 16:19:47 +08:00
zuo-shi-yun
459e9f9322 Add the AutoSwitchProxy plugin 2023-12-17 13:15:33 +08:00
Junyan Qin
707afdcdf9 Update bug-report.yml 2023-12-15 10:38:04 +08:00
RockChinQ
ad1cf379c4 doc: remove announcement 2023-12-11 21:57:57 +08:00
RockChinQ
582277fe2d doc: update demo screenshot 2023-12-11 21:56:00 +08:00
RockChinQ
14b9f814c7 chore: release v2.6.7 2023-12-09 22:25:44 +08:00
Junyan Qin
b11e5d99b0 Merge pull request #628 from RockChinQ/fix/image-generating
Fix: drawing command incompatible with openai>=1.0
2023-12-09 22:22:42 +08:00
GitHub Actions
9590718da4 Update override-all.json 2023-12-09 14:17:55 +00:00
RockChinQ
8c2b53cffb fix: drawing command incompatible with openai>=1.0 2023-12-09 22:17:26 +08:00
Junyan Qin
5a85c073a8 Update README.md 2023-12-08 17:03:16 +08:00
Junyan Qin
2d2fbd0a8b fix: config file could not be created on first launch 2023-12-08 07:27:23 +00:00
Junyan Qin
1b25a05122 Update README.md 2023-12-06 19:29:31 +08:00
RockChinQ
709cc1140b chore: publish announcement 2023-12-06 19:27:04 +08:00
Junyan Qin
1730962636 Merge pull request #625 from zuo-shi-yun/master
Add the watchdog plugin
2023-12-03 10:03:35 +08:00
zuo-shi-yun
a1de4f6f7a Add the watchdog plugin 2023-12-02 23:58:18 +08:00
Junyan Qin
a5ccda5ed6 doc: update NOTE and WARNING formatting 2023-12-01 02:28:47 +00:00
Junyan Qin
f035e654ba Merge pull request #623 from zuo-shi-yun/master
Add the discountAssistant plugin
2023-12-01 10:04:49 +08:00
zuo-shi-yun
151d3e9f66 Add the discountAssistant plugin 2023-11-30 23:53:43 +08:00
Junyan Qin
c79207e197 Merge pull request #618 from RockChinQ/refactor/config-manager
Refactor: manage config files uniformly through the config manager
2023-11-27 00:02:52 +08:00
RockChinQ
f9d461d9a1 feat: remove obsolete config-module handling logic 2023-11-27 00:00:22 +08:00
RockChinQ
3e17bbb90f refactor: adapt code to the config manager's read access 2023-11-26 23:58:06 +08:00
RockChinQ
549a7eff7f refactor(qqbot): adapt to the config manager 2023-11-26 23:04:14 +08:00
RockChinQ
db2e366014 feat: implement the config file manager and adapt references in main.py 2023-11-26 22:46:27 +08:00
RockChinQ
26e4215054 feat: new override logic 2023-11-26 22:25:54 +08:00
RockChinQ
5f07ff8145 refactor: startup flow is now asynchronous 2023-11-26 22:19:36 +08:00
GitHub Actions
e396ba4649 Update override-all.json 2023-11-26 13:54:00 +00:00
RockChinQ
d1dff6dedd feat(main.py): move config loading into the start function 2023-11-26 21:53:35 +08:00
RockChinQ
419354cb07 feat: add an exit code for coverage testing 2023-11-26 17:42:25 +08:00
RockChinQ
7708eaa82c perf: add type hints to methods in context.py 2023-11-26 17:33:13 +08:00
40 changed files with 456 additions and 296 deletions

View File

@@ -45,7 +45,7 @@ body:
required: true
- type: textarea
attributes:
label: 报错信息
description: 请提供完整的**控制台**报错信息(若有)
label: 日志信息
description: 请提供完整的 **登录框架 和 QChatGPT控制台**的相关日志信息(若有),不提供日志信息**无法**为您排查问题
validations:
required: false

2
.gitignore vendored
View File

@@ -1,4 +1,4 @@
config.py
/config.py
.idea/
__pycache__/
database.db

View File

@@ -7,9 +7,6 @@
# QChatGPT
<!-- 高稳定性/持续迭代/架构清晰/支持插件/高可自定义的 ChatGPT QQ机器人框架 -->
<!-- “当然下面是一个使用Java编写的快速排序算法的示例代码” -->
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/QChatGPT)](https://github.com/RockChinQ/QChatGPT/releases/latest)
<a href="https://hub.docker.com/repository/docker/rockchin/qchatgpt">
<img src="https://img.shields.io/docker/pulls/rockchin/qchatgpt?color=blue" alt="docker pull">
@@ -35,18 +32,16 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Linux%E9%83%A8%E7%BD%B2%E8%A7%86%E9%A2%91-208647">
</a>
<blockquote> 🥳 QChatGPT 一周年啦,感谢大家的支持!欢迎前往<a href="https://github.com/RockChinQ/QChatGPT/discussions/627">讨论</a>。</blockquote>
<details>
<!-- <details>
<summary>回复效果演示(带有联网插件)</summary>
<img alt="联网演示GIF" src="res/webwlkr-demo.gif" width="300px">
</details>
</div>
</details> -->
> **NOTE**
> 2023/9/13 现已支持通过[One API](https://github.com/songquanpeng/one-api)接入 Azure、Anthropic Claude、Google PaLM 2、智谱 ChatGLM、百度文心一言、讯飞星火认知、阿里通义千问以及 360 智脑等模型,欢迎测试并反馈。
> 2023/8/29 [逆向库插件](https://github.com/RockChinQ/revLibs)已支持 gpt4free
> 2023/8/14 [逆向库插件](https://github.com/RockChinQ/revLibs)已支持Claude和Bard
> 2023/7/29 支持使用GPT的Function Calling功能实现类似ChatGPT Plugin的效果请见[Wiki内容函数](https://github.com/RockChinQ/QChatGPT/wiki/6-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8-%E5%86%85%E5%AE%B9%E5%87%BD%E6%95%B0)
<img alt="回复效果(带有联网插件)" src="res/QChatGPT-1211.png" width="500px"/>
</div>
<details>
<summary>
@@ -66,10 +61,11 @@
- HuggingChat, 由[插件](https://github.com/RockChinQ/revLibs)接入, 仅支持英文
- Claude, 由[插件](https://github.com/RockChinQ/revLibs)接入
- Google Bard, 由[插件](https://github.com/RockChinQ/revLibs)接入
- Google Gemini Pro、Azure、Anthropic Claude、智谱 ChatGLM、百度文心一言、讯飞星火认知、阿里通义千问、360 智脑等官方接口, 通过[One API](https://github.com/songquanpeng/one-api)接入
### 模型聚合平台
- [One API](https://github.com/songquanpeng/one-api), Azure、Anthropic Claude、Google PaLM 2、智谱 ChatGLM、百度文心一言、讯飞星火认知、阿里通义千问以及 360 智脑等模型的官方接口转换成 OpenAI API 接入QChatGPT 原生支持,您需要先配置 One API之后在`config.py`中设置反向代理和`One API`的密钥后使用。
- [One API](https://github.com/songquanpeng/one-api), Azure、Anthropic Claude、Google Gemini Pro、智谱 ChatGLM、百度文心一言、讯飞星火认知、阿里通义千问以及 360 智脑等模型的官方接口转换成 OpenAI API 接入QChatGPT 原生支持,您需要先配置 One API之后在`config.py`中设置反向代理和`One API`的密钥后使用。
- [gpt4free](https://github.com/xtekky/gpt4free), 破解以免费使用多个平台的各种文字模型, 由[插件](https://github.com/RockChinQ/revLibs)接入, 无需鉴权, 稳定性较差。
- [Poe](https://poe.com), 破解免费使用Poe上多个平台的模型, 由[oliverkirk-sudo/ChatPoeBot](https://github.com/oliverkirk-sudo/ChatPoeBot)接入(由于 Poe 上可用的大部分模型现已通过[revLibs插件](https://github.com/RockChinQ/revLubs)或其他方式接入,此插件现已停止维护)。
@@ -197,7 +193,7 @@
</summary>
> **NOTE**
> **NOTE**
> - 部署过程中遇到任何问题,请先在[QChatGPT](https://github.com/RockChinQ/QChatGPT/issues)或[qcg-installer](https://github.com/RockChinQ/qcg-installer/issues)的issue里进行搜索
> - QChatGPT需要Python版本>=3.9
> - 官方群和社区群群号请见文档顶部
@@ -346,6 +342,9 @@ python3 main.py
- [oliverkirk-sudo/QChatWeather](https://github.com/oliverkirk-sudo/QChatWeather) - 生成好看的天气图片,基于和风天气
- [oliverkirk-sudo/QChatMarkdown](https://github.com/oliverkirk-sudo/QChatMarkdown) - 将机器人输出的markdown转换为图片基于[playwright](https://playwright.dev/python/docs/intro)
- [ruuuux/WikipediaSearch](https://github.com/ruuuux/WikipediaSearch) - Wikipedia 搜索插件
- [zuo-shi-yun/discountAssistant](https://github.com/zuo-shi-yun/discountAssistant) - 自动筛选并发送羊毛群内的优惠券
- [zuo-shi-yun/Gatekeeper ](https://github.com/zuo-shi-yun/Gatekeeper) - QChatGPT的看门狗包含黑白名单、临时用户机制
- [zuo-shi-yun/AutoSwitchProxy](https://github.com/zuo-shi-yun/AutoSwitchProxy) - 自动切换Clash代理模式以节省流量
</details>

View File

@@ -49,7 +49,7 @@ English | [简体中文](README.md)
Install this [plugin](https://github.com/RockChinQ/Switcher) to switch between different models.
## ✅Function Points
## ✅Features
<details>
<summary>Details</summary>

View File

@@ -254,15 +254,20 @@ auto_reset = True
# "qwen-plus-v1",
# "ERNIE-Bot",
# "ERNIE-Bot-turbo",
# "gemini-pro",
completion_api_params = {
"model": "gpt-3.5-turbo",
"temperature": 0.9, # 数值越低得到的回答越理性,取值范围[0, 1]
}
# OpenAI的Image API的参数
# 具体请查看OpenAI的文档: https://beta.openai.com/docs/api-reference/images/create
# 具体请查看OpenAI的文档: https://platform.openai.com/docs/api-reference/images/create
image_api_params = {
"size": "256x256", # 图片尺寸支持256x256, 512x512, 1024x1024
"model": "dall-e-2", # 默认使用 dall-e-2 模型,也可以改为 dall-e-3
# 图片尺寸
# dall-e-2 模型支持 256x256, 512x512, 1024x1024
# dall-e-3 模型支持 1024x1024, 1792x1024, 1024x1792
"size": "256x256",
}
# 跟踪函数调用
@@ -322,19 +327,6 @@ retry_times = 3
# 设置为False时向用户及管理员发送错误详细信息
hide_exce_info_to_user = False
# 线程池相关配置
# 该参数决定机器人可以同时处理几个人的消息,超出线程池数量的请求会被阻塞,不会被丢弃
# 如果你不清楚该参数的意义,请不要更改
# 程序运行本身线程池,无代码层面修改请勿更改
sys_pool_num = 8
# 执行管理员请求和指令的线程池并行线程数量,一般和管理员数量相等
admin_pool_num = 4
# 执行用户请求和指令的线程池并行线程数量
# 如需要更高的并发,可以增大该值
user_pool_num = 8
# 每个会话的过期时间,单位为秒
# 默认值20分钟
session_expire_time = 1200
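
The reworked image_api_params above adds a model field alongside size. As a rough sketch (not project code; assumes the openai>=1.0 Python client and a placeholder API key), these values feed straight into an images.generate call, which is also how the drawing command consumes them in the pkg/openai/manager.py diff further down:

# Sketch only: mirrors the template values above, not QChatGPT's actual call site.
import openai

image_api_params = {
    "model": "dall-e-2",   # dall-e-3 would instead accept 1024x1024, 1792x1024 or 1024x1792
    "size": "256x256",
}

client = openai.OpenAI(api_key="sk-placeholder")
resp = client.images.generate(prompt="a cat sitting on a keyboard", n=1, **image_api_params)
print(resp.data[0].url)   # matches the res.data[0].url access used by the draw command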

141
main.py
View File

@@ -8,11 +8,10 @@ import time
import logging
import sys
import traceback
import asyncio
sys.path.append(".")
from pkg.utils.log import init_runtime_log_file, reset_logging
def check_file():
# 检查是否有banlist.py,如果没有就把banlist-template.py复制一份
@@ -55,6 +54,11 @@ def check_file():
# 初始化相关文件
check_file()
from pkg.utils.log import init_runtime_log_file, reset_logging
from pkg.config import manager as config_mgr
from pkg.config.impls import pymodule as pymodule_cfg
try:
import colorlog
except ImportError:
@@ -96,15 +100,15 @@ def ensure_dependencies():
known_exception_caught = False
def override_config():
import config
# 检查override.json覆盖
def override_config_manager():
config = pkg.utils.context.get_config_manager().data
if os.path.exists("override.json") and use_override:
override_json = json.load(open("override.json", "r", encoding="utf-8"))
overrided = []
for key in override_json:
if hasattr(config, key):
setattr(config, key, override_json[key])
if key in config:
config[key] = override_json[key]
# logging.info("覆写配置[{}]为[{}]".format(key, override_json[key]))
overrided.append(key)
else:
@@ -113,36 +117,6 @@ def override_config():
logging.info("已根据override.json覆写配置项: {}".format(", ".join(overrided)))
# 临时函数用于加载config和上下文未来统一放在config类
def load_config():
logging.info("检查config模块完整性.")
# 完整性校验
non_exist_keys = []
is_integrity = True
config_template = importlib.import_module('config-template')
config = importlib.import_module('config')
for key in dir(config_template):
if not key.startswith("__") and not hasattr(config, key):
setattr(config, key, getattr(config_template, key))
# logging.warning("[{}]不存在".format(key))
non_exist_keys.append(key)
is_integrity = False
if not is_integrity:
logging.warning("以下配置字段不存在: {}".format(", ".join(non_exist_keys)))
# 检查override.json覆盖
override_config()
if not is_integrity:
logging.warning("以上不存在的配置已被设为默认值您可以依据config-template.py检查config.py将在3秒后继续启动... ")
time.sleep(3)
# 存进上下文
pkg.utils.context.set_config(config)
def complete_tips():
"""根据tips-custom-template模块补全tips模块的属性"""
non_exist_keys = []
@@ -165,17 +139,29 @@ def complete_tips():
time.sleep(3)
def start(first_time_init=False):
async def start_process(first_time_init=False):
"""启动流程reload之后会被执行"""
global known_exception_caught
import pkg.utils.context
config = pkg.utils.context.get_config()
# 加载配置
cfg_inst: pymodule_cfg.PythonModuleConfigFile = pymodule_cfg.PythonModuleConfigFile(
'config.py',
'config-template.py'
)
await config_mgr.ConfigManager(cfg_inst).load_config()
override_config_manager()
# 检查tips模块
complete_tips()
cfg = pkg.utils.context.get_config_manager().data
# 更新openai库到最新版本
if not hasattr(config, 'upgrade_dependencies') or config.upgrade_dependencies:
if 'upgrade_dependencies' not in cfg or cfg['upgrade_dependencies']:
print("正在更新依赖库,请等待...")
if not hasattr(config, 'upgrade_dependencies'):
if 'upgrade_dependencies' not in cfg:
print("这个操作不是必须的,如果不想更新,请在config.py中添加upgrade_dependencies=False")
else:
print("这个操作不是必须的,如果不想更新,请在config.py中将upgrade_dependencies设置为False")
@@ -184,6 +170,10 @@ def start(first_time_init=False):
except Exception as e:
print("更新openai库失败:{}, 请忽略或自行更新".format(e))
# 初始化文字转图片
from pkg.utils import text2img
text2img.initialize()
known_exception_caught = False
try:
try:
@@ -192,11 +182,11 @@ def start(first_time_init=False):
pkg.utils.context.context['logger_handler'] = sh
# 检查是否设置了管理员
if not (hasattr(config, 'admin_qq') and config.admin_qq != 0):
if cfg['admin_qq'] == 0:
# logging.warning("未设置管理员QQ,管理员权限指令及运行告警将无法使用,如需设置请修改config.py中的admin_qq字段")
while True:
try:
config.admin_qq = int(input("未设置管理员QQ,管理员权限指令及运行告警将无法使用,请输入管理员QQ号: "))
cfg['admin_qq'] = int(input("未设置管理员QQ,管理员权限指令及运行告警将无法使用,请输入管理员QQ号: "))
# 写入到文件
# 读取文件
@@ -204,7 +194,7 @@ def start(first_time_init=False):
with open("config.py", "r", encoding="utf-8") as f:
config_file_str = f.read()
# 替换
config_file_str = config_file_str.replace("admin_qq = 0", "admin_qq = " + str(config.admin_qq))
config_file_str = config_file_str.replace("admin_qq = 0", "admin_qq = " + str(cfg['admin_qq']))
# 写入
with open("config.py", "w", encoding="utf-8") as f:
f.write(config_file_str)
@@ -233,23 +223,23 @@ def start(first_time_init=False):
# 配置OpenAI proxy
import openai
openai.proxies = None # 先重置因为重载后可能需要清除proxy
if "http_proxy" in config.openai_config and config.openai_config["http_proxy"] is not None:
if "http_proxy" in cfg['openai_config'] and cfg['openai_config']["http_proxy"] is not None:
openai.proxies = {
"http": config.openai_config["http_proxy"],
"https": config.openai_config["http_proxy"]
"http": cfg['openai_config']["http_proxy"],
"https": cfg['openai_config']["http_proxy"]
}
# 配置openai api_base
if "reverse_proxy" in config.openai_config and config.openai_config["reverse_proxy"] is not None:
logging.debug("设置反向代理: "+config.openai_config['reverse_proxy'])
openai.base_url = config.openai_config["reverse_proxy"]
if "reverse_proxy" in cfg['openai_config'] and cfg['openai_config']["reverse_proxy"] is not None:
logging.debug("设置反向代理: "+cfg['openai_config']['reverse_proxy'])
openai.base_url = cfg['openai_config']["reverse_proxy"]
# 主启动流程
database = pkg.database.manager.DatabaseManager()
database.initialize_database()
openai_interact = pkg.openai.manager.OpenAIInteract(config.openai_config['api_key'])
openai_interact = pkg.openai.manager.OpenAIInteract(cfg['openai_config']['api_key'])
# 加载所有未超时的session
pkg.openai.session.load_sessions()
@@ -338,13 +328,12 @@ def start(first_time_init=False):
if first_time_init:
if not known_exception_caught:
import config
if config.msg_source_adapter == "yirimirai":
logging.info("QQ: {}, MAH: {}".format(config.mirai_http_api_config['qq'], config.mirai_http_api_config['host']+":"+str(config.mirai_http_api_config['port'])))
if cfg['msg_source_adapter'] == "yirimirai":
logging.info("QQ: {}, MAH: {}".format(cfg['mirai_http_api_config']['qq'], cfg['mirai_http_api_config']['host']+":"+str(cfg['mirai_http_api_config']['port'])))
logging.critical('程序启动完成,如长时间未显示 "成功登录到账号xxxxx" ,并且不回复消息,解决办法(请勿到群里问): '
'https://github.com/RockChinQ/QChatGPT/issues/37')
elif config.msg_source_adapter == 'nakuru':
logging.info("host: {}, port: {}, http_port: {}".format(config.nakuru_config['host'], config.nakuru_config['port'], config.nakuru_config['http_port']))
elif cfg['msg_source_adapter'] == 'nakuru':
logging.info("host: {}, port: {}, http_port: {}".format(cfg['nakuru_config']['host'], cfg['nakuru_config']['port'], cfg['nakuru_config']['http_port']))
logging.critical('程序启动完成,如长时间未显示 "Protocol: connected" ,并且不回复消息,请检查config.py中的nakuru_config是否正确')
else:
sys.exit(1)
@@ -352,7 +341,7 @@ def start(first_time_init=False):
logging.info('热重载完成')
# 发送赞赏码
if config.encourage_sponsor_at_start \
if cfg['encourage_sponsor_at_start'] \
and pkg.utils.context.get_openai_manager().audit_mgr.get_total_text_length() >= 2048:
logging.info("发送赞赏码")
@@ -420,19 +409,12 @@ def main():
init_runtime_log_file()
pkg.utils.context.context['logger_handler'] = reset_logging()
# 加载配置
load_config()
config = pkg.utils.context.get_config()
# 检查tips模块
complete_tips()
# 配置线程池
from pkg.utils import ThreadCtl
thread_ctl = ThreadCtl(
sys_pool_num=config.sys_pool_num,
admin_pool_num=config.admin_pool_num,
user_pool_num=config.user_pool_num
sys_pool_num=8,
admin_pool_num=4,
user_pool_num=8
)
# 存进上下文
pkg.utils.context.set_thread_ctl(thread_ctl)
@@ -451,9 +433,11 @@ def main():
# 关闭urllib的http警告
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
def run_wrapper():
asyncio.run(start_process(True))
pkg.utils.context.get_thread_ctl().submit_sys_task(
start,
True
run_wrapper
)
# 主线程循环
@@ -463,12 +447,19 @@ def main():
except:
stop()
pkg.utils.context.get_thread_ctl().shutdown()
import platform
if platform.system() == 'Windows':
cmd = "taskkill /F /PID {}".format(os.getpid())
elif platform.system() in ['Linux', 'Darwin']:
cmd = "kill -9 {}".format(os.getpid())
os.system(cmd)
launch_args = sys.argv.copy()
if "--cov-report" not in launch_args:
import platform
if platform.system() == 'Windows':
cmd = "taskkill /F /PID {}".format(os.getpid())
elif platform.system() in ['Linux', 'Darwin']:
cmd = "kill -9 {}".format(os.getpid())
os.system(cmd)
else:
print("正常退出以生成覆盖率报告")
sys.exit(0)
if __name__ == '__main__':
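
Read as a whole, the main.py changes replace the old load_config()/set_config() path with an asynchronous startup that loads config.py through the new config manager into a plain dict and then applies override.json on top. A condensed sketch of that flow (simplified; assumes it is run directly rather than through the thread-pool wrapper shown above):

# Condensed sketch of the new startup sequence, distilled from the diff above.
import asyncio
import json
import os

from pkg.config import manager as config_mgr
from pkg.config.impls import pymodule as pymodule_cfg
import pkg.utils.context

async def start_process():
    # Load config.py, back-filled from config-template.py, via the new manager.
    cfg_inst = pymodule_cfg.PythonModuleConfigFile('config.py', 'config-template.py')
    await config_mgr.ConfigManager(cfg_inst).load_config()   # registers itself in the context

    # Configuration is now a dict, so override.json keys are applied by item assignment.
    cfg = pkg.utils.context.get_config_manager().data
    if os.path.exists("override.json"):
        for key, value in json.load(open("override.json", "r", encoding="utf-8")).items():
            if key in cfg:
                cfg[key] = value

asyncio.run(start_process())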

View File

@@ -60,6 +60,7 @@
"temperature": 0.9
},
"image_api_params": {
"model": "dall-e-2",
"size": "256x256"
},
"trace_function_calls": false,
@@ -78,9 +79,6 @@
"font_path": "",
"retry_times": 3,
"hide_exce_info_to_user": false,
"sys_pool_num": 8,
"admin_pool_num": 4,
"user_pool_num": 8,
"session_expire_time": 1200,
"rate_limitation": {
"default": 60

View File

@@ -47,10 +47,10 @@ class DataGatherer:
def thread_func():
try:
config = context.get_config()
if not config.report_usage:
config = context.get_config_manager().data
if not config['report_usage']:
return
res = requests.get("http://reports.rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}&msg_source={}".format(subservice_name, self.version_str, count, config.msg_source_adapter))
res = requests.get("http://reports.rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}&msg_source={}".format(subservice_name, self.version_str, count, config['msg_source_adapter']))
if res.status_code != 200 or res.text != "ok":
logging.warning("report to server failed, status_code: {}, text: {}".format(res.status_code, res.text))
except:

0
pkg/config/__init__.py Normal file
View File

View File

@@ -0,0 +1,62 @@
import os
import shutil
import importlib
import logging
from .. import model as file_model
class PythonModuleConfigFile(file_model.ConfigFile):
"""Python模块配置文件"""
config_file_name: str = None
"""配置文件名"""
template_file_name: str = None
"""模板文件名"""
def __init__(self, config_file_name: str, template_file_name: str) -> None:
self.config_file_name = config_file_name
self.template_file_name = template_file_name
def exists(self) -> bool:
return os.path.exists(self.config_file_name)
async def create(self):
shutil.copyfile(self.template_file_name, self.config_file_name)
async def load(self) -> dict:
module_name = os.path.splitext(os.path.basename(self.config_file_name))[0]
module = importlib.import_module(module_name)
cfg = {}
allowed_types = (int, float, str, bool, list, dict)
for key in dir(module):
if key.startswith('__'):
continue
if not isinstance(getattr(module, key), allowed_types):
continue
cfg[key] = getattr(module, key)
# 从模板模块文件中进行补全
module_name = os.path.splitext(os.path.basename(self.template_file_name))[0]
module = importlib.import_module(module_name)
for key in dir(module):
if key.startswith('__'):
continue
if not isinstance(getattr(module, key), allowed_types):
continue
if key not in cfg:
cfg[key] = getattr(module, key)
return cfg
async def save(self, data: dict):
logging.warning('Python模块配置文件不支持保存')

23
pkg/config/manager.py Normal file
View File

@@ -0,0 +1,23 @@
from . import model as file_model
from ..utils import context
class ConfigManager:
"""配置文件管理器"""
file: file_model.ConfigFile = None
"""配置文件实例"""
data: dict = None
"""配置数据"""
def __init__(self, cfg_file: file_model.ConfigFile) -> None:
self.file = cfg_file
self.data = {}
context.set_config_manager(self)
async def load_config(self):
self.data = await self.file.load()
async def dump_config(self):
await self.file.save(self.data)

27
pkg/config/model.py Normal file
View File

@@ -0,0 +1,27 @@
import abc
class ConfigFile(metaclass=abc.ABCMeta):
"""配置文件抽象类"""
config_file_name: str = None
"""配置文件名"""
template_file_name: str = None
"""模板文件名"""
@abc.abstractmethod
def exists(self) -> bool:
pass
@abc.abstractmethod
async def create(self):
pass
@abc.abstractmethod
async def load(self) -> dict:
pass
@abc.abstractmethod
async def save(self, data: dict):
pass
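
The new pkg/config package therefore consists of the abstract ConfigFile above, the ConfigManager holding its data dict, and the PythonModuleConfigFile implementation. Purely to illustrate the abstraction (this class is not part of the changeset), a JSON-backed implementation could look like:

# Hypothetical example, not included in this diff: a JSON-backed ConfigFile.
import json
import os
import shutil

from pkg.config import model as file_model

class JsonConfigFile(file_model.ConfigFile):
    """JSON config file (illustrative)"""

    def __init__(self, config_file_name: str, template_file_name: str) -> None:
        self.config_file_name = config_file_name
        self.template_file_name = template_file_name

    def exists(self) -> bool:
        return os.path.exists(self.config_file_name)

    async def create(self):
        shutil.copyfile(self.template_file_name, self.config_file_name)

    async def load(self) -> dict:
        with open(self.config_file_name, "r", encoding="utf-8") as f:
            cfg = json.load(f)
        with open(self.template_file_name, "r", encoding="utf-8") as f:
            template = json.load(f)
        for key, value in template.items():     # back-fill missing keys from the template
            cfg.setdefault(key, value)
        return cfg

    async def save(self, data: dict):
        with open(self.config_file_name, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=4)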

View File

@@ -144,11 +144,11 @@ class DatabaseManager:
# 从数据库加载还没过期的session数据
def load_valid_sessions(self) -> dict:
# 从数据库中加载所有还没过期的session
config = context.get_config()
config = context.get_config_manager().data
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `last_interact_timestamp` > {}
""".format(int(time.time()) - config.session_expire_time))
""".format(int(time.time()) - config['session_expire_time']))
results = self.cursor.fetchall()
sessions = {}
for result in results:

View File

@@ -3,6 +3,8 @@ import logging
import openai
from ...utils import context
class RequestBase:
@@ -14,7 +16,6 @@ class RequestBase:
raise NotImplementedError
def _next_key(self):
import pkg.utils.context as context
switched, name = context.get_openai_manager().key_mgr.auto_switch()
logging.debug("切换api-key: switched={}, name={}".format(switched, name))
self.client.api_key = context.get_openai_manager().key_mgr.get_using_key()
@@ -22,12 +23,12 @@ class RequestBase:
def _req(self, **kwargs):
"""处理代理问题"""
logging.debug("请求接口参数: %s", str(kwargs))
import config
config = context.get_config_manager().data
ret = self.req_func(**kwargs)
logging.debug("接口请求返回:%s", str(ret))
if config.switch_strategy == 'active':
if config['switch_strategy'] == 'active':
self._next_key()
return ret

View File

@@ -1,9 +1,10 @@
# 多情景预设值管理
import json
import logging
import config
import os
from ..utils import context
# __current__ = "default"
# """当前默认使用的情景预设的名称
@@ -62,22 +63,24 @@ class NormalScenarioMode(ScenarioMode):
"""普通情景预设模式"""
def __init__(self):
config = context.get_config_manager().data
# 加载config中的default_prompt值
if type(config.default_prompt) == str:
if type(config['default_prompt']) == str:
self.using_prompt_name = "default"
self.prompts = {"default": [
{
"role": "system",
"content": config.default_prompt
"content": config['default_prompt']
}
]}
elif type(config.default_prompt) == dict:
for key in config.default_prompt:
elif type(config['default_prompt']) == dict:
for key in config['default_prompt']:
self.prompts[key] = [
{
"role": "system",
"content": config.default_prompt[key]
"content": config['default_prompt'][key]
}
]
@@ -123,9 +126,9 @@ def register_all():
def mode_inst() -> ScenarioMode:
"""获取指定名称的情景预设模式对象"""
import config
config = context.get_config_manager().data
if config.preset_mode == "default":
config.preset_mode = "normal"
if config['preset_mode'] == "default":
config['preset_mode'] = "normal"
return scenario_mode_mapping[config.preset_mode]
return scenario_mode_mapping[config['preset_mode']]
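
NormalScenarioMode above accepts default_prompt either as a single string or as a dict of named presets. Illustrative config values for the two branches (the prompt texts and preset name are placeholders, not template defaults):

# str form: becomes the single "default" preset
default_prompt = "You are a helpful assistant."

# dict form: every key becomes a selectable preset name
default_prompt = {
    "default": "You are a helpful assistant.",
    "translator": "You translate everything the user says into English.",  # hypothetical preset
}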

View File

@@ -1,6 +1,7 @@
import logging
import openai
from openai.types import images_response
from ..openai import keymgr
from ..utils import context
@@ -43,13 +44,13 @@ class OpenAIInteract:
"""请求补全接口回复=
"""
# 选择接口请求类
config = context.get_config()
config = context.get_config_manager().data
request: api_model.RequestBase
model: str = config.completion_api_params['model']
model: str = config['completion_api_params']['model']
cp_parmas = config.completion_api_params.copy()
cp_parmas = config['completion_api_params'].copy()
del cp_parmas['model']
request = modelmgr.select_request_cls(self.client, model, messages, cp_parmas)
@@ -65,7 +66,7 @@ class OpenAIInteract:
yield resp
def request_image(self, prompt) -> dict:
def request_image(self, prompt) -> images_response.ImagesResponse:
"""请求图片接口回复
Parameters:
@@ -74,10 +75,10 @@ class OpenAIInteract:
Returns:
dict: 响应
"""
config = context.get_config()
params = config.image_api_params
config = context.get_config_manager().data
params = config['image_api_params']
response = openai.Image.create(
response = self.client.images.generate(
prompt=prompt,
n=1,
**params

View File

@@ -49,6 +49,7 @@ CHAT_COMPLETION_MODELS = {
"qwen-plus-v1",
"ERNIE-Bot",
"ERNIE-Bot-turbo",
"gemini-pro",
}
EDIT_MODELS = {
@@ -90,6 +91,7 @@ def count_chat_completion_tokens(messages: list, model: str) -> int:
"qwen-plus-v1",
"ERNIE-Bot",
"ERNIE-Bot-turbo",
"gemini-pro",
}:
tokens_per_message = 3
tokens_per_name = 1
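
With gemini-pro added to CHAT_COMPLETION_MODELS and to the token-counting table, it can be selected like any other chat model. A sketch of the corresponding config.py fragment (per the README, the One API reverse proxy and key must also be set in openai_config; those values are omitted here):

# Sketch of the relevant config.py fragment for using Gemini Pro through One API.
completion_api_params = {
    "model": "gemini-pro",
    "temperature": 0.9,     # lower values give more deterministic answers, range [0, 1]
}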

View File

@@ -36,11 +36,11 @@ def reset_session_prompt(session_name, prompt):
f.write(prompt)
f.close()
# 生成新数据
config = context.get_config()
config = context.get_config_manager().data
prompt = [
{
'role': 'system',
'content': config.default_prompt['default'] if type(config.default_prompt) == dict else config.default_prompt
'content': config['default_prompt']['default'] if type(config['default_prompt']) == dict else config['default_prompt']
}
]
# 警告
@@ -170,15 +170,15 @@ class Session:
if self.create_timestamp != create_timestamp or self not in sessions.values():
return
config = context.get_config()
if int(time.time()) - self.last_interact_timestamp > config.session_expire_time:
config = context.get_config_manager().data
if int(time.time()) - self.last_interact_timestamp > config['session_expire_time']:
logging.info('session {} 已过期'.format(self.name))
# 触发插件事件
args = {
'session_name': self.name,
'session': self,
'session_expire_time': config.session_expire_time
'session_expire_time': config['session_expire_time']
}
event = plugin_host.emit(plugin_models.SessionExpired, **args)
if event.is_prevented_default():
@@ -216,8 +216,8 @@ class Session:
if event.is_prevented_default():
return None, None, None
config = context.get_config()
max_length = config.prompt_submit_length
config = context.get_config_manager().data
max_length = config['prompt_submit_length']
local_default_prompt = self.default_prompt.copy()
local_prompt = self.prompt.copy()
@@ -254,7 +254,7 @@ class Session:
funcs = []
trace_func_calls = config.trace_function_calls
trace_func_calls = config['trace_function_calls']
botmgr = context.get_qqbot_manager()
session_name_spt: list[str] = self.name.split("_")
@@ -381,7 +381,7 @@ class Session:
# 包装目前的对话回合内容
changable_prompts = []
use_model = context.get_config().completion_api_params['model']
use_model = context.get_config_manager().data['completion_api_params']['model']
ptr = len(prompt) - 1

View File

@@ -5,6 +5,7 @@ import mirai
class MessageSourceAdapter:
bot_account_id: int
def __init__(self, config: dict):
pass

View File

@@ -9,7 +9,7 @@ from mirai.models.message import ForwardMessageNode
from mirai.models.base import MiraiBaseModel
from ..utils import text2img
import config
from ..utils import context
class ForwardMessageDiaplay(MiraiBaseModel):
@@ -64,13 +64,16 @@ def text_to_image(text: str) -> MessageComponent:
def check_text(text: str) -> list:
"""检查文本是否为长消息,并转换成该使用的消息链组件"""
if len(text) > config.blob_message_threshold:
config = context.get_config_manager().data
if len(text) > config['blob_message_threshold']:
# logging.info("长消息: {}".format(text))
if config.blob_message_strategy == 'image':
if config['blob_message_strategy'] == 'image':
# 转换成图片
return [text_to_image(text)]
elif config.blob_message_strategy == 'forward':
elif config['blob_message_strategy'] == 'forward':
# 包装转发消息
display = ForwardMessageDiaplay(
@@ -82,7 +85,7 @@ def check_text(text: str) -> list:
)
node = ForwardMessageNode(
sender_id=config.mirai_http_api_config['qq'],
sender_id=config['mirai_http_api_config']['qq'],
sender_name='bot',
message_chain=MessageChain([text])
)
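
check_text above chooses between image rendering and a forward-message wrapper based on two config keys read from the config manager at call time. Illustrative values (the threshold number is a placeholder, not the template default):

blob_message_threshold = 256        # messages longer than this many characters are converted
blob_message_strategy = "image"     # "image": render via text2img; "forward": wrap as a forward message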

View File

@@ -3,7 +3,7 @@ import logging
import mirai
from .. import aamgr
import config
from ....utils import context
@aamgr.AbstractCommandNode.register(
@@ -29,9 +29,9 @@ class DrawCommand(aamgr.AbstractCommandNode):
res = session.draw_image(" ".join(ctx.params))
logging.debug("draw_image result:{}".format(res))
reply = [mirai.Image(url=res['data'][0]['url'])]
if not (hasattr(config, 'include_image_description')
and not config.include_image_description):
reply = [mirai.Image(url=res.data[0].url)]
config = context.get_config_manager().data
if config['include_image_description']:
reply.append(" ".join(ctx.params))
return True, reply

View File

@@ -1,4 +1,6 @@
from .. import aamgr
from ....utils import context
@aamgr.AbstractCommandNode.register(
parent=None,
@@ -15,12 +17,13 @@ class DefaultCommand(aamgr.AbstractCommandNode):
session_name = ctx.session_name
params = ctx.params
reply = []
import config
config = context.get_config_manager().data
if len(params) == 0:
# 输出目前所有情景预设
import pkg.openai.dprompt as dprompt
reply_str = "[bot]当前所有情景预设({}模式):\n\n".format(config.preset_mode)
reply_str = "[bot]当前所有情景预设({}模式):\n\n".format(config['preset_mode'])
prompts = dprompt.mode_inst().list()

View File

@@ -4,7 +4,7 @@ import logging
from ..qqbot.cmds import aamgr as cmdmgr
def process_command(session_name: str, text_message: str, mgr, config,
def process_command(session_name: str, text_message: str, mgr, config: dict,
launcher_type: str, launcher_id: int, sender_id: int, is_admin: bool) -> list:
reply = []
try:

View File

@@ -4,6 +4,8 @@ import requests
import json
import logging
from ..utils import context
class ReplyFilter:
sensitive_words = []
@@ -20,12 +22,13 @@ class ReplyFilter:
self.sensitive_words = sensitive_words
self.mask = mask
self.mask_word = mask_word
import config
self.baidu_check = config.baidu_check
self.baidu_api_key = config.baidu_api_key
self.baidu_secret_key = config.baidu_secret_key
self.inappropriate_message_tips = config.inappropriate_message_tips
config = context.get_config_manager().data
self.baidu_check = config['baidu_check']
self.baidu_api_key = config['baidu_api_key']
self.baidu_secret_key = config['baidu_secret_key']
self.inappropriate_message_tips = config['inappropriate_message_tips']
def is_illegal(self, message: str) -> bool:
processed = self.process(message)

View File

@@ -1,16 +1,18 @@
import re
from ..utils import context
def ignore(msg: str) -> bool:
"""检查消息是否应该被忽略"""
import config
config = context.get_config_manager().data
if 'prefix' in config.ignore_rules:
for rule in config.ignore_rules['prefix']:
if 'prefix' in config['ignore_rules']:
for rule in config['ignore_rules']['prefix']:
if msg.startswith(rule):
return True
if 'regexp' in config.ignore_rules:
for rule in config.ignore_rules['regexp']:
if 'regexp' in config['ignore_rules']:
for rule in config['ignore_rules']['regexp']:
if re.search(rule, msg):
return True

View File

@@ -2,7 +2,7 @@ import json
import os
import logging
from mirai import At, GroupMessage, MessageEvent, Mirai, StrangerMessage, WebSocketAdapter, HTTPAdapter, \
from mirai import At, GroupMessage, MessageEvent, StrangerMessage, \
FriendMessage, Image, MessageChain, Plain
import func_timeout
@@ -19,16 +19,16 @@ from ..qqbot import adapter as msadapter
# 检查消息是否符合泛响应匹配机制
def check_response_rule(group_id:int, text: str):
config = context.get_config()
config = context.get_config_manager().data
rules = config.response_rules
rules = config['response_rules']
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
rules = config.response_rules[str(group_id)]
if 'prefix' not in config['response_rules']:
if str(group_id) in config['response_rules']:
rules = config['response_rules'][str(group_id)]
else:
rules = config.response_rules['default']
rules = config['response_rules']['default']
# 检查前缀匹配
if 'prefix' in rules:
@@ -48,16 +48,16 @@ def check_response_rule(group_id:int, text: str):
def response_at(group_id: int):
config = context.get_config()
config = context.get_config_manager().data
use_response_rule = config.response_rules
use_response_rule = config['response_rules']
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
if 'prefix' not in config['response_rules']:
if str(group_id) in config['response_rules']:
use_response_rule = config['response_rules'][str(group_id)]
else:
use_response_rule = config.response_rules['default']
use_response_rule = config['response_rules']['default']
if 'at' not in use_response_rule:
return True
@@ -66,16 +66,16 @@ def response_at(group_id: int):
def random_responding(group_id):
config = context.get_config()
config = context.get_config_manager().data
use_response_rule = config.response_rules
use_response_rule = config['response_rules']
# 检查是否有特定规则
if 'prefix' not in config.response_rules:
if str(group_id) in config.response_rules:
use_response_rule = config.response_rules[str(group_id)]
if 'prefix' not in config['response_rules']:
if str(group_id) in config['response_rules']:
use_response_rule = config['response_rules'][str(group_id)]
else:
use_response_rule = config.response_rules['default']
use_response_rule = config['response_rules']['default']
if 'random_rate' in use_response_rule:
import random
@@ -102,25 +102,25 @@ class QQBotManager:
ban_group = []
def __init__(self, first_time_init=True):
import config
config = context.get_config_manager().data
self.timeout = config.process_message_timeout
self.retry = config.retry_times
self.timeout = config['process_message_timeout']
self.retry = config['retry_times']
# 由于YiriMirai的bot对象是单例的且shutdown方法暂时无法使用
# 故只在第一次初始化时创建bot对象重载之后使用原bot对象
# 因此bot的配置不支持热重载
if first_time_init:
logging.debug("Use adapter:" + config.msg_source_adapter)
if config.msg_source_adapter == 'yirimirai':
logging.debug("Use adapter:" + config['msg_source_adapter'])
if config['msg_source_adapter'] == 'yirimirai':
from pkg.qqbot.sources.yirimirai import YiriMiraiAdapter
mirai_http_api_config = config.mirai_http_api_config
self.bot_account_id = config.mirai_http_api_config['qq']
mirai_http_api_config = config['mirai_http_api_config']
self.bot_account_id = config['mirai_http_api_config']['qq']
self.adapter = YiriMiraiAdapter(mirai_http_api_config)
elif config.msg_source_adapter == 'nakuru':
elif config['msg_source_adapter'] == 'nakuru':
from pkg.qqbot.sources.nakuru import NakuruProjectAdapter
self.adapter = NakuruProjectAdapter(config.nakuru_config)
self.adapter = NakuruProjectAdapter(config['nakuru_config'])
self.bot_account_id = self.adapter.bot_account_id
else:
self.adapter = context.get_qqbot_manager().adapter
@@ -176,7 +176,7 @@ class QQBotManager:
stranger_message_handler,
)
# nakuru不区分好友和陌生人故仅为yirimirai注册陌生人事件
if config.msg_source_adapter == 'yirimirai':
if config['msg_source_adapter'] == 'yirimirai':
self.adapter.register_listener(
StrangerMessage,
on_stranger_message
@@ -213,12 +213,11 @@ class QQBotManager:
用于在热重载流程中卸载所有事件处理器
"""
import config
self.adapter.unregister_listener(
FriendMessage,
on_friend_message
)
if config.msg_source_adapter == 'yirimirai':
if config['msg_source_adapter'] == 'yirimirai':
self.adapter.unregister_listener(
StrangerMessage,
on_stranger_message
@@ -243,10 +242,10 @@ class QQBotManager:
if hasattr(banlist, "enable_group"):
self.enable_group = banlist.enable_group
config = context.get_config()
config = context.get_config_manager().data
if os.path.exists("sensitive.json") \
and config.sensitive_word_filter is not None \
and config.sensitive_word_filter:
and config['sensitive_word_filter'] is not None \
and config['sensitive_word_filter']:
with open("sensitive.json", "r", encoding="utf-8") as f:
sensitive_json = json.load(f)
self.reply_filter = qqbot_filter.ReplyFilter(
@@ -258,16 +257,16 @@ class QQBotManager:
self.reply_filter = qqbot_filter.ReplyFilter([])
def send(self, event, msg, check_quote=True, check_at_sender=True):
config = context.get_config()
config = context.get_config_manager().data
if check_at_sender and config.at_sender:
if check_at_sender and config['at_sender']:
msg.insert(
0,
Plain(" \n")
)
# 当回复的正文中包含换行时quote可能会自带at此时就不再单独添加at只添加换行
if "\n" not in str(msg[1]) or config.msg_source_adapter == 'nakuru':
if "\n" not in str(msg[1]) or config['msg_source_adapter'] == 'nakuru':
msg.insert(
0,
At(
@@ -278,14 +277,15 @@ class QQBotManager:
self.adapter.reply_message(
event,
msg,
quote_origin=True if config.quote_origin and check_quote else False
quote_origin=True if config['quote_origin'] and check_quote else False
)
# 私聊消息处理
def on_person_message(self, event: MessageEvent):
import config
reply = ''
config = context.get_config_manager().data
if not self.enable_private:
logging.debug("已在banlist.py中禁用所有私聊")
elif event.sender.id == self.bot_account_id:
@@ -299,7 +299,7 @@ class QQBotManager:
for i in range(self.retry):
try:
@func_timeout.func_set_timeout(config.process_message_timeout)
@func_timeout.func_set_timeout(config['process_message_timeout'])
def time_ctrl_wrapper():
reply = processor.process_message('person', event.sender.id, str(event.message_chain),
event.message_chain,
@@ -326,8 +326,10 @@ class QQBotManager:
# 群消息处理
def on_group_message(self, event: GroupMessage):
import config
reply = ''
config = context.get_config_manager().data
def process(text=None) -> str:
replys = ""
if At(self.bot_account_id) in event.message_chain:
@@ -337,7 +339,7 @@ class QQBotManager:
failed = 0
for i in range(self.retry):
try:
@func_timeout.func_set_timeout(config.process_message_timeout)
@func_timeout.func_set_timeout(config['process_message_timeout'])
def time_ctrl_wrapper():
replys = processor.process_message('group', event.group.id,
str(event.message_chain).strip() if text is None else text,
@@ -385,17 +387,17 @@ class QQBotManager:
# 通知系统管理员
def notify_admin(self, message: str):
config = context.get_config()
if config.admin_qq != 0 and config.admin_qq != []:
config = context.get_config_manager().data
if config['admin_qq'] != 0 and config['admin_qq'] != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
if type(config['admin_qq']) == int:
self.adapter.send_message(
"person",
config.admin_qq,
config['admin_qq'],
MessageChain([Plain("[bot]{}".format(message))])
)
else:
for adm in config.admin_qq:
for adm in config['admin_qq']:
self.adapter.send_message(
"person",
adm,
@@ -403,17 +405,17 @@ class QQBotManager:
)
def notify_admin_message_chain(self, message):
config = context.get_config()
if config.admin_qq != 0 and config.admin_qq != []:
config = context.get_config_manager().data
if config['admin_qq'] != 0 and config['admin_qq'] != []:
logging.info("通知管理员:{}".format(message))
if type(config.admin_qq) == int:
if type(config['admin_qq']) == int:
self.adapter.send_message(
"person",
config.admin_qq,
config['admin_qq'],
message
)
else:
for adm in config.admin_qq:
for adm in config['admin_qq']:
self.adapter.send_message(
"person",
adm,
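
Among the many attribute-to-dict conversions in this file, the response-rule helpers now also support per-group rule sets keyed by str(group_id), falling back to a "default" entry whenever no top-level "prefix" key is present. Illustrative shapes (the group number and rule values are placeholders):

# Single global rule set
response_rules = {
    "at": True,
    "prefix": ["/ask"],
}

# Per-group rule sets, selected by str(group_id), with a "default" fallback
response_rules = {
    "default": {"at": True},
    "12345678": {"prefix": ["!"], "random_rate": 0.05},
}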

View File

@@ -13,15 +13,15 @@ import tips as tips_custom
def handle_exception(notify_admin: str = "", set_reply: str = "") -> list:
"""处理异常当notify_admin不为空时会通知管理员返回通知用户的消息"""
import config
config = context.get_config_manager().data
context.get_qqbot_manager().notify_admin(notify_admin)
if config.hide_exce_info_to_user:
if config['hide_exce_info_to_user']:
return [tips_custom.alter_tip_message] if tips_custom.alter_tip_message else []
else:
return [set_reply]
def process_normal_message(text_message: str, mgr, config, launcher_type: str,
def process_normal_message(text_message: str, mgr, config: dict, launcher_type: str,
launcher_id: int, sender_id: int) -> list:
session_name = f"{launcher_type}_{launcher_id}"
logging.info("[{}]发送消息:{}".format(session_name, text_message[:min(20, len(text_message))] + (
@@ -39,7 +39,7 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = handle_exception(notify_admin=f"{session_name},多次尝试失败。", set_reply=f"[bot]多次尝试失败,请重试或联系管理员")
break
try:
prefix = "[GPT]" if config.show_prefix else ""
prefix = "[GPT]" if config['show_prefix'] else ""
text, finish_reason, funcs = session.query(text_message)
@@ -118,7 +118,7 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = handle_exception("{}会话调用API失败:{}".format(session_name, e),
"[bot]err:RateLimitError,请重试或联系作者,或等待修复")
except openai.BadRequestError as e:
if config.auto_reset and "This model's maximum context length is" in str(e):
if config['auto_reset'] and "This model's maximum context length is" in str(e):
session.reset(persist=True)
reply = [tips_custom.session_auto_reset_message]
else:

View File

@@ -1,6 +1,7 @@
# 此模块提供了消息处理的具体逻辑的接口
import asyncio
import time
import traceback
import mirai
import logging
@@ -28,11 +29,11 @@ processing = []
def is_admin(qq: int) -> bool:
"""兼容list和int类型的管理员判断"""
import config
if type(config.admin_qq) == list:
return qq in config.admin_qq
config = context.get_config_manager().data
if type(config['admin_qq']) == list:
return qq in config['admin_qq']
else:
return qq == config.admin_qq
return qq == config['admin_qq']
def process_message(launcher_type: str, launcher_id: int, text_message: str, message_chain: mirai.MessageChain,
@@ -53,9 +54,9 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
logging.info("根据忽略规则忽略消息: {}".format(text_message))
return []
import config
config = context.get_config_manager().data
if not config.wait_last_done and session_name in processing:
if not config['wait_last_done'] and session_name in processing:
return mirai.MessageChain([mirai.Plain(tips_custom.message_drop_tip)])
# 检查是否被禁言
@@ -65,8 +66,7 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
logging.info("机器人被禁言,跳过消息处理(group_{})".format(launcher_id))
return reply
import config
if config.income_msg_check:
if config['income_msg_check']:
if mgr.reply_filter.is_illegal(text_message):
return mirai.MessageChain(mirai.Plain("[bot] 消息中存在不合适的内容, 请更换措辞"))
@@ -81,8 +81,6 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
# 处理消息
try:
config = context.get_config()
processing.append(session_name)
try:
if text_message.startswith('!') or text_message.startswith(""): # 指令
@@ -114,7 +112,7 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
else: # 消息
# 限速丢弃检查
# print(ratelimit.__crt_minute_usage__[session_name])
if config.rate_limit_strategy == "drop":
if config['rate_limit_strategy'] == "drop":
if ratelimit.is_reach_limit(session_name):
logging.info("根据限速策略丢弃[{}]消息: {}".format(session_name, text_message))
@@ -144,7 +142,7 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
mgr, config, launcher_type, launcher_id, sender_id)
# 限速等待时间
if config.rate_limit_strategy == "wait":
if config['rate_limit_strategy'] == "wait":
time.sleep(ratelimit.get_rest_wait_time(session_name, time.time() - before))
ratelimit.add_usage(session_name)
@@ -167,13 +165,13 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
openai_session.get_session(session_name).release_response_lock()
# 检查延迟时间
if config.force_delay_range[1] == 0:
if config['force_delay_range'][1] == 0:
delay_time = 0
else:
import random
# 从延迟范围中随机取一个值(浮点)
rdm = random.uniform(config.force_delay_range[0], config.force_delay_range[1])
rdm = random.uniform(config['force_delay_range'][0], config['force_delay_range'][1])
spent = time.time() - start_time

View File

@@ -3,6 +3,9 @@ import time
import logging
import threading
from ..utils import context
__crt_minute_usage__ = {}
"""当前分钟每个会话的对话次数"""
@@ -12,16 +15,16 @@ __timer_thr__: threading.Thread = None
def get_limitation(session_name: str) -> int:
"""获取会话的限制次数"""
import config
config = context.get_config_manager().data
if type(config.rate_limitation) == dict:
if type(config['rate_limitation']) == dict:
# 如果被指定了
if session_name in config.rate_limitation:
return config.rate_limitation[session_name]
if session_name in config['rate_limitation']:
return config['rate_limitation'][session_name]
else:
return config.rate_limitation["default"]
elif type(config.rate_limitation) == int:
return config.rate_limitation
return config['rate_limitation']["default"]
elif type(config['rate_limitation']) == int:
return config['rate_limitation']
def add_usage(session_name: str):
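
get_limitation above accepts rate_limitation either as a single integer or as a per-session dict with a required "default" entry; session names follow the launcher_type_launcher_id pattern used elsewhere in the codebase. Illustrative values (the numbers and ids are placeholders):

# int form: the same per-minute limit for every session
rate_limitation = 60

# dict form: per-session overrides plus the mandatory "default"
rate_limitation = {
    "default": 60,
    "group_12345678": 120,
    "person_87654321": 30,
}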

View File

@@ -10,6 +10,7 @@ import nakuru.entities.components as nkc
from .. import adapter as adapter_model
from ...qqbot import blob
from ...utils import context
class NakuruProjectMessageConverter(adapter_model.MessageConverter):
@@ -172,12 +173,14 @@ class NakuruProjectAdapter(adapter_model.MessageSourceAdapter):
self.listener_list = []
# nakuru库有bug这个接口没法带access_token会失败
# 所以目前自行发请求
import config
config = context.get_config_manager().data
import requests
resp = requests.get(
url="http://{}:{}/get_login_info".format(config.nakuru_config['host'], config.nakuru_config['http_port']),
url="http://{}:{}/get_login_info".format(config['nakuru_config']['host'], config['nakuru_config']['http_port']),
headers={
'Authorization': "Bearer " + config.nakuru_config['token'] if 'token' in config.nakuru_config else ""
'Authorization': "Bearer " + config['nakuru_config']['token'] if 'token' in config['nakuru_config']else ""
},
timeout=5
)
@@ -270,7 +273,7 @@ class NakuruProjectAdapter(adapter_model.MessageSourceAdapter):
logging.debug("注册监听器: " + str(event_type) + " -> " + str(callback))
# 包装函数
async def listener_wrapper(app: nakuru.CQHTTP, source: self.event_converter.yiri2target(event_type)):
async def listener_wrapper(app: nakuru.CQHTTP, source: NakuruProjectAdapter.event_converter.yiri2target(event_type)):
callback(self.event_converter.target2yiri(source))
# 将包装函数和原函数的对应关系存入列表

File diff suppressed because one or more lines are too long

View File

@@ -1,12 +1,21 @@
from __future__ import annotations
import threading
from . import threadctl
from ..database import manager as db_mgr
from ..openai import manager as openai_mgr
from ..qqbot import manager as qqbot_mgr
from ..config import manager as config_mgr
from ..plugin import host as plugin_host
context = {
'inst': {
'database.manager.DatabaseManager': None,
'openai.manager.OpenAIInteract': None,
'qqbot.manager.QQBotManager': None,
'config.manager.ConfigManager': None,
},
'pool_ctl': None,
'logger_handler': None,
@@ -29,59 +38,72 @@ def get_config():
return t
def set_database_manager(inst):
def set_database_manager(inst: db_mgr.DatabaseManager):
context_lock.acquire()
context['inst']['database.manager.DatabaseManager'] = inst
context_lock.release()
def get_database_manager():
def get_database_manager() -> db_mgr.DatabaseManager:
context_lock.acquire()
t = context['inst']['database.manager.DatabaseManager']
context_lock.release()
return t
def set_openai_manager(inst):
def set_openai_manager(inst: openai_mgr.OpenAIInteract):
context_lock.acquire()
context['inst']['openai.manager.OpenAIInteract'] = inst
context_lock.release()
def get_openai_manager():
def get_openai_manager() -> openai_mgr.OpenAIInteract:
context_lock.acquire()
t = context['inst']['openai.manager.OpenAIInteract']
context_lock.release()
return t
def set_qqbot_manager(inst):
def set_qqbot_manager(inst: qqbot_mgr.QQBotManager):
context_lock.acquire()
context['inst']['qqbot.manager.QQBotManager'] = inst
context_lock.release()
def get_qqbot_manager():
def get_qqbot_manager() -> qqbot_mgr.QQBotManager:
context_lock.acquire()
t = context['inst']['qqbot.manager.QQBotManager']
context_lock.release()
return t
def set_plugin_host(inst):
def set_config_manager(inst: config_mgr.ConfigManager):
context_lock.acquire()
context['inst']['config.manager.ConfigManager'] = inst
context_lock.release()
def get_config_manager() -> config_mgr.ConfigManager:
context_lock.acquire()
t = context['inst']['config.manager.ConfigManager']
context_lock.release()
return t
def set_plugin_host(inst: plugin_host.PluginHost):
context_lock.acquire()
context['plugin_host'] = inst
context_lock.release()
def get_plugin_host():
def get_plugin_host() -> plugin_host.PluginHost:
context_lock.acquire()
t = context['plugin_host']
context_lock.release()
return t
def set_thread_ctl(inst):
def set_thread_ctl(inst: threadctl.ThreadCtl):
context_lock.acquire()
context['pool_ctl'] = inst
context_lock.release()

View File

@@ -3,6 +3,8 @@ import time
import logging
import shutil
from . import context
log_file_name = "qchatgpt.log"
@@ -36,7 +38,6 @@ def init_runtime_log_file():
def reset_logging():
global log_file_name
import config
import pkg.utils.context
import colorlog
@@ -46,7 +47,11 @@ def reset_logging():
for handler in logging.getLogger().handlers:
logging.getLogger().removeHandler(handler)
logging.basicConfig(level=config.logging_level, # 设置日志输出格式
config_mgr = context.get_config_manager()
logging_level = logging.INFO if config_mgr is None else config_mgr.data['logging_level']
logging.basicConfig(level=logging_level, # 设置日志输出格式
filename=log_file_name, # log日志输出的文件位置和文件名
format="[%(asctime)s.%(msecs)03d] %(pathname)s (%(lineno)d) - [%(levelname)s] :\n%(message)s",
# 日志输出的格式
@@ -54,7 +59,7 @@ def reset_logging():
datefmt="%Y-%m-%d %H:%M:%S" # 时间输出的格式
)
sh = logging.StreamHandler()
sh.setLevel(config.logging_level)
sh.setLevel(logging_level)
sh.setFormatter(colorlog.ColoredFormatter(
fmt="%(log_color)s[%(asctime)s.%(msecs)03d] %(filename)s (%(lineno)d) - [%(levelname)s] : "
"%(message)s",

View File

@@ -1,9 +1,11 @@
from . import context
def wrapper_proxies() -> dict:
"""获取代理"""
import config
config = context.get_config_manager().data
return {
"http": config.openai_config['proxy'],
"https": config.openai_config['proxy']
} if 'proxy' in config.openai_config and (config.openai_config['proxy'] is not None) else None
"http": config['openai_config']['proxy'],
"https": config['openai_config']['proxy']
} if 'proxy' in config['openai_config'] and (config['openai_config']['proxy'] is not None) else None

View File

@@ -1,6 +1,7 @@
import logging
import importlib
import pkgutil
import asyncio
from . import context
from ..plugin import host as plugin_host
@@ -52,15 +53,17 @@ def reload_all(notify=True):
# 执行启动流程
logging.info("执行程序启动流程")
main.load_config()
main.complete_tips()
context.get_thread_ctl().reload(
admin_pool_num=context.get_config().admin_pool_num,
user_pool_num=context.get_config().user_pool_num
admin_pool_num=4,
user_pool_num=8
)
def run_wrapper():
asyncio.run(main.start_process(False))
context.get_thread_ctl().submit_sys_task(
main.start,
False
run_wrapper
)
logging.info('程序启动完成')

View File

@@ -1,37 +1,42 @@
import logging
import re
import os
import config
import traceback
from PIL import Image, ImageDraw, ImageFont
from ..utils import context
text_render_font: ImageFont = None
if config.blob_message_strategy == "image": # 仅在启用了image时才加载字体
use_font = config.font_path
try:
def initialize():
config = context.get_config_manager().data
# 检查是否存在
if not os.path.exists(use_font):
# 若是windows系统使用微软雅黑
if os.name == "nt":
use_font = "C:/Windows/Fonts/msyh.ttc"
if not os.path.exists(use_font):
logging.warn("未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。")
config.blob_message_strategy = "forward"
if config['blob_message_strategy'] == "image": # 仅在启用了image时才加载字体
use_font = config['font_path']
try:
# 检查是否存在
if not os.path.exists(use_font):
# 若是windows系统使用微软雅黑
if os.name == "nt":
use_font = "C:/Windows/Fonts/msyh.ttc"
if not os.path.exists(use_font):
logging.warn("未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。")
config['blob_message_strategy'] = "forward"
else:
logging.info("使用Windows自带字体" + use_font)
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
else:
logging.info("使用Windows自带字体" + use_font)
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
logging.warn("未找到字体文件,且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。")
config['blob_message_strategy'] = "forward"
else:
logging.warn("未找到字体文件且无法使用Windows自带字体更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。")
config.blob_message_strategy = "forward"
else:
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
except:
traceback.print_exc()
logging.error("加载字体文件失败({})更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。".format(use_font))
config.blob_message_strategy = "forward"
text_render_font = ImageFont.truetype(use_font, 32, encoding="utf-8")
except:
traceback.print_exc()
logging.error("加载字体文件失败({})更换为转发消息组件以发送长消息您可以在config.py中调整相关设置。".format(use_font))
config['blob_message_strategy'] = "forward"
def indexNumber(path=''):

res/QChatGPT-1211.png Normal file (binary, 246 KiB — not shown)

View File

@@ -16,5 +16,11 @@
"time": "2023-11-13 18:02:39",
"timestamp": 1699869759,
"content": "近期 OpenAI 接口改动频繁正在积极适配并添加新功能请尽快更新到最新版本更新方式https://github.com/RockChinQ/QChatGPT/discussions/595"
},
{
"id": 5,
"time": "2023-12-07 9:20:00",
"timestamp": 1701912000,
"content": "QChatGPT 一周年啦感谢大家的选择和支持RockChinQ 在此衷心感谢素未谋面但又至关重要的你们每一个人,愿 AI 与我们同在欢迎前往https://github.com/RockChinQ/QChatGPT/discussions/627 参与讨论。"
}
]

View File

@@ -1,5 +1,5 @@
> **Warning**
> [!WARNING]
> 此文档已过时,请查看[QChatGPT 容器化部署指南](docker_deployment.md)
## 操作步骤

View File

@@ -1,6 +1,6 @@
# QChatGPT 容器化部署指南
> **Warning**
> [!WARNING]
> 请您确保您**确实**需要 Docker 部署,您**必须**具有以下能力:
> - 了解 `Docker` 和 `Docker Compose` 的使用
> - 了解容器间网络通信配置方式
@@ -15,7 +15,7 @@
QChatGPT 主程序需要连接`QQ登录框架`以与QQ通信您可以选择 [Mirai](https://github.com/mamoe/mirai)还需要配置mirai-api-http请查看此仓库README中手动部署部分 或 [go-cqhttp](https://github.com/Mrs4s/go-cqhttp),我们仅发布 QChatGPT主程序 的镜像您需要自行配置QQ登录框架可以参考[README.md](https://github.com/RockChinQ/QChatGPT#-%E9%85%8D%E7%BD%AEqq%E7%99%BB%E5%BD%95%E6%A1%86%E6%9E%B6)中的教程,或自行寻找其镜像)并在 QChatGPT 的配置文件中设置连接地址。
> **Note**
> [!NOTE]
> 请先确保 Docker 和 Docker Compose 已安装
## 准备文件