Compare commits

..

35 Commits

Author SHA1 Message Date
Rock Chin
5c10f520fb Merge pull request #215 from RockChinQ/semantic-versions
[Feature] Use semantic versioning for updates
2023-03-05 12:17:43 +08:00
Rock Chin
f8abe90674 perf: improve the update check 2023-03-05 12:09:44 +08:00
Rock Chin
964ad42cb4 perf: improve the update notification 2023-03-05 12:02:59 +08:00
Rock Chin
424b970469 feat: move the dependency check into the main flow 2023-03-05 11:58:18 +08:00
Rock Chin
792366e221 feat: support automatic updates based on semantic versioning 2023-03-05 11:56:40 +08:00
Rock Chin
79e970c4c3 chore(deps): remove the dulwich dependency 2023-03-05 11:09:24 +08:00
Rock Chin
d12acd5f31 chore: git-ignore the temp directory 2023-03-05 11:05:42 +08:00
Rock Chin
13e55e05a4 doc: document the long-message handling feature 2023-03-05 10:54:00 +08:00
Rock Chin
9a7490bc2f feat: support refusing to reply to prompts containing sensitive words (#210) 2023-03-05 10:49:07 +08:00
Rock Chin
a610a9d3d3 fix: messages from specific users in a group could not be ignored via ban_person (#211) 2023-03-05 10:33:16 +08:00
Rock Chin
56e906c83f feat: remove sensitive.json and replace it with sensitive-template.json 2023-03-05 10:21:32 +08:00
Rock Chin
101f26e5a3 Merge pull request #212 from Haibersut/feat-baiducloud
Add Baidu Cloud content moderation
2023-03-05 10:13:20 +08:00
Rock Chin
0bba205cf2 feat: improve config file comments 2023-03-05 10:12:49 +08:00
Rock Chin
cc3beb191f fix: keep the Baidu Cloud moderation config backward compatible with older versions 2023-03-05 09:54:44 +08:00
Haibersut
42f5092bb9 Updated the log level
Adjusted error messages to the warning level
2023-03-05 01:45:36 +08:00
Haibersut
bc6728d123 Revised per review suggestions 2023-03-05 01:17:23 +08:00
Rock Chin
754278f80f feat: automatically install the Pillow library at startup 2023-03-05 00:09:31 +08:00
Rock Chin
c9c980b6fe Merge pull request #203 from RockChinQ/blob_message_strategy
[Feature] Long-message handling strategy
2023-03-04 23:53:54 +08:00
Rock Chin
a457d13d2c perf: improve image rendering 2023-03-04 23:53:22 +08:00
Rock Chin
7440e9e5d2 fix(blob.py): incorrect image compression handling 2023-03-04 21:36:07 +08:00
Rock Chin
39d901a5cb feat: support converting long messages into images for replies 2023-03-04 21:14:10 +08:00
Haibersut
2e1ebff985 change value name 2023-03-04 21:12:50 +08:00
Haibersut
b8ed9ba321 Update README.md 2023-03-04 21:08:48 +08:00
Haibersut
c89a8e1cd1 Update README.md 2023-03-04 21:06:58 +08:00
Haibersut
480d201c55 Add Baidu Cloud content moderation 2023-03-04 21:02:10 +08:00
Rock Chin
a4b7d4a012 feat: support sending long messages as a forward-message component 2023-03-04 13:53:18 +08:00
Rock Chin
7fe676712b perf: remove redundant items from the config template 2023-03-04 11:16:20 +08:00
Rock Chin
552733129c feat: add long-message handling strategy fields to the config file 2023-03-04 10:36:43 +08:00
Rock Chin
a4d73090f8 feat: update the openai dependency at startup by default 2023-03-04 10:16:47 +08:00
Rock Chin
7d39b72800 feat: change the default max_tokens to 1024 2023-03-03 21:18:31 +08:00
Rock Chin
f1e12563e9 feat(gather.py): default to undetermined when no version is set 2023-03-03 21:15:26 +08:00
Rock Chin
0ac5e5b35e fix(session.py): incorrect undo() method logic 2023-03-03 21:13:31 +08:00
Rock Chin
6b3f74a39a Merge branch 'master' of https://github.com/RockChinQ/QChatGPT 2023-03-03 20:53:23 +08:00
Rock Chin
3c3e2e86c3 doc: list the supported models in README.md 2023-03-03 20:53:19 +08:00
Rock Chin
204a778db2 Create CONTRIBUTING.md 2023-03-03 19:48:55 +08:00
18 changed files with 624 additions and 85 deletions

6
.gitignore vendored
View File

@@ -3,10 +3,12 @@ config.py
__pycache__/
database.db
qchatgpt.log
config.py
/banlist.py
plugins/
!plugins/__init__.py
/revcfg.py
prompts/
logs/
logs/
sensitive.json
temp/
current_tag

19
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,19 @@
## 参与项目
欢迎为此项目贡献代码或其他支持,以使您的点子或众人期待的功能成为现实,助力社区成长。
### 贡献形式
- 提交PR解决issues中提到的bug或期待的功能
- 提交PR实现您设想的功能请先提出issue与作者沟通
- 优化代码架构,使各个模块的组织更加整洁优雅
- 在issues中提出发现的bug或者期待的功能
- 为本项目在其他社交平台撰写文章、制作视频等
- 为本项目的衍生项目作出贡献,或开发插件增加功能
### 如何开始
- 加入本项目交流群,一同探讨项目相关事务
- 解决本项目或衍生项目的issues中亟待解决的问题
- 阅读并完善本项目文档
- 在各个社交媒体撰写本项目教程等

View File

@@ -1,5 +1,5 @@
# QChatGPT🤖
> 2023/3/3 官方接口疑似被墙,请自行测试,或等待近期敏感时期结束。
> 2023/3/3 官方接口疑似被墙,可考虑使用网络代理 [#198](https://github.com/RockChinQ/QChatGPT/issues/198)
> 2023/3/3 现已在主线支持官方ChatGPT接口使用方法查看[#195](https://github.com/RockChinQ/QChatGPT/issues/195)
> 2023/3/2 OpenAI已发布ChatGPT官方接口我们正在全力接入预计明日前完成请查看[此PR](https://github.com/RockChinQ/QChatGPT/pull/194)
> 2023/2/16 现已支持接入ChatGPT网页版详情请完成部署并查看底部**插件**小节或[此仓库](https://github.com/RockChinQ/revLibs)
@@ -8,18 +8,38 @@
- 由bilibili TheLazy制作的[视频教程](https://www.bilibili.com/video/BV15v4y1X7aP)
- 交流、答疑群: ~~204785790~~已满、691226829、656285629
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- QQ频道机器人见[QQChannelChatGPT](https://github.com/Soulter/QQChannelChatGPT)
通过调用OpenAI的ChatGPT等语言模型来实现一个更加智能的QQ机器人
## 🍺模型适配一览
### 文字对话
- OpenAI GPT-3.5模型(ChatGPT API), 本项目原生支持, 默认使用
- OpenAI GPT-3模型, 本项目原生支持, 部署完成后前往config.py切换
- ChatGPT网页版逆向API, 由[插件](https://github.com/RockChinQ/revLibs)接入
### 故事续写
- NovelAI API, 由[插件](https://github.com/dominoar/QCPNovelAi)接入
### 图片绘制
- OpenAI DALL·E模型, 本项目原生支持, 使用方法查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
- NovelAI API, 由[插件](https://github.com/dominoar/QCPNovelAi)接入
### 语音生成
- TTS+VITS, 由[插件](https://github.com/dominoar/QChatPlugins)接入
## ✅功能
<details>
<summary>✅支持敏感词过滤,避免账号风险</summary>
- 难以监测机器人与用户对话时的内容,故引入此功能以减少机器人风险
- 加入了百度云内容审核,在`config.py`中修改`baidu_check`的值,并填写`baidu_api_key``baidu_secret_key`以开启此功能
- 编辑`sensitive.json`,并在`config.py`中修改`sensitive_word_filter`的值以开启此功能
</details>
@@ -72,6 +92,12 @@
- 详见Wiki`加入黑名单`
</details>
<details>
<summary>✅长消息处理策略</summary>
- 支持将长消息转换成图片或消息记录组件,避免消息刷屏
- 请查看`config.py``blob_message_strategy`等字段
</details>
<details>
<summary>✅回复速度限制</summary>
- 支持限制单会话内每分钟可进行的对话次数
@@ -118,10 +144,7 @@
<details>
<summary>手动部署适用于所有平台</summary>
- 请使用Python 3.9.x以上版本
- 请注意OpenAI账号额度消耗
- 每个账户仅有18美元免费额度如未绑定银行卡则会在超出时报错
- OpenAI收费标准默认使用的`text-davinci-003`模型 0.02美元/千字
- 请使用Python 3.9.x以上版本
#### 配置Mirai
@@ -164,7 +187,7 @@ python3 main.py
**常见问题**
- mirai登录提示`QQ版本过低`,见[此issue](https://github.com/RockChinQ/QChatGPT/issues/38)
- mirai登录提示`QQ版本过低`,见[此issue](https://github.com/RockChinQ/QChatGPT/issues/137)
- 如提示安装`uvicorn``hypercorn`请*不要*安装这两个不是必需的目前存在未知原因bug
- 如报错`TypeError: As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary`, 请参考 [此处](https://github.com/RockChinQ/QChatGPT/issues/5)
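
The README changes above describe enabling the new moderation and long-message features through config.py. A minimal sketch of the relevant config.py entries, assuming the field names shown in the config-template.py diff below; all values here are illustrative placeholders:

# config.py (excerpt, illustrative values only)
sensitive_word_filter = True          # mask words listed in sensitive.json with asterisks

baidu_check = True                    # enable Baidu Cloud text moderation
baidu_api_key = "YOUR_24_CHAR_KEY"    # from the Baidu Cloud console
baidu_secret_key = "YOUR_32_CHAR_SECRET"

blob_message_threshold = 256          # replies longer than this trigger the long-message strategy
blob_message_strategy = "image"       # or "forward": wrap the reply in a forward-message component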

View File

@@ -102,10 +102,27 @@ ignore_rules = {
"regexp": []
}
# 是否检查收到的消息中是否包含敏感词
# 若收到的消息无法通过下方指定的敏感词检查策略,则发送提示信息
income_msg_check = False
# 敏感词过滤开关,以同样数量的*代替敏感词回复
# 请在sensitive.json中添加敏感词
sensitive_word_filter = True
# 是否启用百度云内容安全审核
# 注册方式查看 https://cloud.baidu.com/doc/ANTIPORN/s/Wkhu9d5iy
baidu_check = False
# 百度云API_KEY 24位英文数字字符串
baidu_api_key = ""
# 百度云SECRET_KEY 32位的英文数字字符串
baidu_secret_key = ""
# 不合规消息自定义返回
inappropriate_message_tips = "[百度云]请珍惜机器人,当前返回内容不合规"
# 启动时是否发送赞赏码
# 仅当使用量已经超过2048字时发送
encourage_sponsor_at_start = True
@@ -133,7 +150,7 @@ prompt_submit_length = 1024
completion_api_params = {
"model": "gpt-3.5-turbo",
"temperature": 0.9, # 数值越低得到的回答越理性,取值范围[0, 1]
"max_tokens": 512, # 每次获取OpenAI接口响应的文字量上限, 不高于4096
"max_tokens": 1024, # 每次获取OpenAI接口响应的文字量上限, 不高于4096
"top_p": 1, # 生成的文本的文本与要求的符合度, 取值范围[0, 1]
"frequency_penalty": 0.2,
"presence_penalty": 1.0,
@@ -154,13 +171,18 @@ include_image_description = True
# 消息处理的超时时间,单位为秒
process_message_timeout = 30
# [暂未实现] 群内会话是否启用多对象名称
# 若不启用群内会话的prompt只使用user_name和bot_name
multi_subject = False
# 回复消息时是否显示[GPT]前缀
show_prefix = False
# 应用长消息处理策略的阈值
# 当回复消息长度超过此值时,将使用长消息处理策略
blob_message_threshold = 256
# 长消息处理策略
# - "image": 将长消息转换为图片发送
# - "forward": 将长消息转换为转发消息组件发送
blob_message_strategy = "forward"
# 消息处理超时重试次数
retry_times = 3
@@ -194,6 +216,9 @@ rate_limit_strategy = "wait"
# 若设置为空字符串,则不发送提示信息
rate_limit_drop_tip = "本分钟对话次数超过限速次数,此对话被丢弃"
# 是否在启动时进行依赖库更新
upgrade_dependencies = True
# 是否上报统计信息
# 用于统计机器人的使用情况,不会收集任何用户信息
# 仅上报时间、字数使用量、绘图使用量,其他信息不会上报
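
The new fields above are read defensively so that an older config.py without them keeps working; a minimal sketch of that pattern, equivalent to the hasattr checks used in the main.py and filter.py changes below:

import config

# Fall back to a default when an older config.py predates the field
upgrade_dependencies = getattr(config, 'upgrade_dependencies', True)
baidu_check = getattr(config, 'baidu_check', False)
blob_message_threshold = getattr(config, 'blob_message_threshold', 256)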

39
main.py
View File

@@ -43,6 +43,11 @@ def init_db():
database.initialize_database()
def ensure_dependencies():
import pkg.utils.pkgmgr as pkgmgr
pkgmgr.run_pip(["install", "openai", "Pillow", "--upgrade"])
known_exception_caught = False
log_file_name = "qchatgpt.log"
@@ -102,11 +107,18 @@ def reset_logging():
def main(first_time_init=False):
global known_exception_caught
# 检查并创建plugins、prompts目录
check_path = ["plugins", "prompts"]
for path in check_path:
if not os.path.exists(path):
os.mkdir(path)
import config
# 更新openai库到最新版本
if not hasattr(config, 'upgrade_dependencies') or config.upgrade_dependencies:
print("正在更新依赖库,请等待...")
if not hasattr(config, 'upgrade_dependencies'):
print("这个操作不是必须的,如果不想更新,请在config.py中添加upgrade_dependencies=False")
else:
print("这个操作不是必须的,如果不想更新,请在config.py中将upgrade_dependencies设置为False")
try:
ensure_dependencies()
except Exception as e:
print("更新openai库失败:{}, 请忽略或自行更新".format(e))
known_exception_caught = False
try:
@@ -258,7 +270,7 @@ def main(first_time_init=False):
import pkg.utils.updater
try:
if pkg.utils.updater.is_new_version_available():
pkg.utils.context.get_qqbot_manager().notify_admin("新版本可用,请发送 !update 进行自动更新")
pkg.utils.context.get_qqbot_manager().notify_admin("新版本可用,请发送 !update 进行自动更新\n更新日志:\n{}".format("\n".join(pkg.utils.updater.get_rls_notes())))
else:
logging.info("当前已是最新版本")
@@ -309,6 +321,20 @@ if __name__ == '__main__':
if not os.path.exists('banlist.py'):
shutil.copy('banlist-template.py', 'banlist.py')
# 检查是否有sensitive.json
if not os.path.exists("sensitive.json"):
shutil.copy("sensitive-template.json", "sensitive.json")
# 检查temp目录
if not os.path.exists("temp/"):
os.mkdir("temp/")
# 检查并创建plugins、prompts目录
check_path = ["plugins", "prompts"]
for path in check_path:
if not os.path.exists(path):
os.mkdir(path)
if len(sys.argv) > 1 and sys.argv[1] == 'init_db':
init_db()
sys.exit(0)
@@ -333,4 +359,5 @@ if __name__ == '__main__':
#
# pkg.utils.configmgr.set_config_and_reload("quote_origin", False)
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
main(True)

View File

@@ -20,7 +20,7 @@ class DataGatherer:
}
}为值的字典"""
version_str = "0.1.0"
version_str = "undetermined"
def __init__(self):
self.load_from_db()

View File

@@ -228,16 +228,15 @@ class Session:
def undo(self) -> str:
self.last_interact_timestamp = int(time.time())
# 删除上一回合
if self.prompt[-1]['role'] != 'user':
res = self.prompt[-1]['content']
self.prompt.remove(self.prompt[-2])
else:
res = self.prompt[-2]['content']
self.prompt.remove(self.prompt[-1])
# 删除最后两个消息
if len(self.prompt) < 2:
raise Exception('之前无对话,无法撤销')
question = self.prompt[-2]['content']
self.prompt = self.prompt[:-2]
# 返回上一回合的问题
return res
return question
# 构建对话体
def cut_out(self, msg: str, max_tokens: int) -> list:
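
A short walk-through of the corrected undo() logic above, assuming the prompt list holds alternating user/assistant messages (sample data only, not project code):

prompt = [
    {'role': 'user', 'content': 'What is QChatGPT?'},
    {'role': 'assistant', 'content': 'A QQ bot driven by the OpenAI API.'},
]

if len(prompt) < 2:
    raise Exception('no previous round, nothing to undo')

question = prompt[-2]['content']   # the question of the last round
prompt = prompt[:-2]               # drop the last question/answer pair
# question == 'What is QChatGPT?'; prompt is now empty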

View File

@@ -1,30 +1,34 @@
import pkg.utils.context
def is_banned(launcher_type: str, launcher_id: int) -> bool:
def is_banned(launcher_type: str, launcher_id: int, sender_id: int) -> bool:
if not pkg.utils.context.get_qqbot_manager().enable_banlist:
return False
result = False
if launcher_type == 'group':
for group_rule in pkg.utils.context.get_qqbot_manager().ban_group:
if type(group_rule) == int:
if group_rule == launcher_id: # 此群群号被禁用
result = True
elif type(group_rule) == str:
if group_rule.startswith('!'):
# 截取!后面的字符串作为表达式,判断是否匹配
reg_str = group_rule[1:]
import re
if re.match(reg_str, str(launcher_id)): # 被豁免,最高级别
result = False
break
else:
# 判断是否匹配regexp
import re
if re.match(group_rule, str(launcher_id)): # 此群群号被禁用
# 检查是否显式声明发起人QQ要被person忽略
if sender_id in pkg.utils.context.get_qqbot_manager().ban_person:
result = True
else:
for group_rule in pkg.utils.context.get_qqbot_manager().ban_group:
if type(group_rule) == int:
if group_rule == launcher_id: # 此群群号被禁用
result = True
elif type(group_rule) == str:
if group_rule.startswith('!'):
# 截取!后面的字符串作为表达式,判断是否匹配
reg_str = group_rule[1:]
import re
if re.match(reg_str, str(launcher_id)): # 被豁免,最高级别
result = False
break
else:
# 判断是否匹配regexp
import re
if re.match(group_rule, str(launcher_id)): # 此群群号被禁用
result = True
else:
# ban_person, 与群规则相同

105
pkg/qqbot/blob.py Normal file
View File

@@ -0,0 +1,105 @@
# 长消息处理相关
import logging
import os
import time
import base64
import config
from mirai.models.message import MessageComponent, MessageChain, Image
from mirai.models.message import ForwardMessageNode
from mirai.models.base import MiraiBaseModel
from typing import List
import pkg.utils.context as context
import pkg.utils.text2img as text2img
class ForwardMessageDiaplay(MiraiBaseModel):
title: str = "群聊的聊天记录"
brief: str = "[聊天记录]"
source: str = "聊天记录"
preview: List[str] = []
summary: str = "查看x条转发消息"
class Forward(MessageComponent):
"""合并转发。"""
type: str = "Forward"
"""消息组件类型。"""
display: ForwardMessageDiaplay
"""显示信息"""
node_list: List[ForwardMessageNode]
"""转发消息节点列表。"""
def __init__(self, *args, **kwargs):
if len(args) == 1:
self.node_list = args[0]
super().__init__(**kwargs)
super().__init__(*args, **kwargs)
def __str__(self):
return '[聊天记录]'
def text_to_image(text: str) -> MessageComponent:
"""将文本转换成图片"""
# 检查temp文件夹是否存在
if not os.path.exists('temp'):
os.mkdir('temp')
img_path = text2img.text_to_image(text_str=text, save_as='temp/{}.png'.format(int(time.time())))
compressed_path, size = text2img.compress_image(img_path, outfile="temp/{}_compressed.png".format(int(time.time())))
# 读取图片转换成base64
with open(compressed_path, 'rb') as f:
img = f.read()
b64 = base64.b64encode(img)
# 删除图片
os.remove(img_path)
# 判断compressed_path是否存在
if os.path.exists(compressed_path):
os.remove(compressed_path)
# 返回图片
return Image(base64=b64.decode('utf-8'))
def check_text(text: str) -> list:
"""检查文本是否为长消息,并转换成该使用的消息链组件"""
if not hasattr(config, 'blob_message_threshold'):
return [text]
if len(text) > config.blob_message_threshold:
if not hasattr(config, 'blob_message_strategy'):
raise AttributeError('未定义长消息处理策略')
# logging.info("长消息: {}".format(text))
if config.blob_message_strategy == 'image':
# 转换成图片
return [text_to_image(text)]
elif config.blob_message_strategy == 'forward':
# 敏感词屏蔽
text = context.get_qqbot_manager().reply_filter.process(text)
# 包装转发消息
display = ForwardMessageDiaplay(
title='群聊的聊天记录',
brief='[聊天记录]',
source='聊天记录',
preview=["bot: "+text],
summary="查看1条转发消息"
)
node = ForwardMessageNode(
sender_id=config.mirai_http_api_config['qq'],
sender_name='bot',
message_chain=MessageChain([text])
)
forward = Forward(
display=display,
node_list=[node]
)
return [forward]
else:
return [text]
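
A usage sketch of the new check_text() entry point, assuming a config.py with the blob_message_* fields above and, for the forward strategy, an initialized bot manager:

import pkg.qqbot.blob as blob

reply_text = "a long model reply " * 50        # well past blob_message_threshold
components = blob.check_text(reply_text)
# blob_message_strategy == "image"   -> [Image(base64=...)] rendered via text2img
# blob_message_strategy == "forward" -> [Forward(...)] wrapping the text as a single node
# short replies, or a missing threshold field, come back unchanged as [reply_text]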

View File

@@ -1,19 +1,77 @@
# 敏感词过滤模块
import re
import requests
import json
import logging
class ReplyFilter:
sensitive_words = []
# 默认值( 兼容性考虑 )
baidu_check = False
baidu_api_key = ""
baidu_secret_key = ""
inappropriate_message_tips = "[百度云]请珍惜机器人,当前返回内容不合规"
def __init__(self, sensitive_words: list):
self.sensitive_words = sensitive_words
import config
if hasattr(config, 'baidu_check') and hasattr(config, 'baidu_api_key') and hasattr(config, 'baidu_secret_key'):
self.baidu_check = config.baidu_check
self.baidu_api_key = config.baidu_api_key
self.baidu_secret_key = config.baidu_secret_key
self.inappropriate_message_tips = config.inappropriate_message_tips
def is_illegal(self, message: str) -> bool:
processed = self.process(message)
if processed != message:
return True
return False
def process(self, message: str) -> str:
# 本地关键词屏蔽
for word in self.sensitive_words:
match = re.findall(word, message)
if len(match) > 0:
for i in range(len(match)):
message = message.replace(match[i], "*" * len(match[i]))
# 百度云审核
if self.baidu_check:
# 百度云审核URL
baidu_url = "https://aip.baidubce.com/rest/2.0/solution/v1/text_censor/v2/user_defined?access_token=" + \
str(requests.post("https://aip.baidubce.com/oauth/2.0/token",
params={"grant_type": "client_credentials",
"client_id": self.baidu_api_key,
"client_secret": self.baidu_secret_key}).json().get("access_token"))
# 百度云审核
payload = "text=" + message
logging.info("向百度云发送:" + payload)
headers = {'Content-Type': 'application/x-www-form-urlencoded', 'Accept': 'application/json'}
if isinstance(payload, str):
payload = payload.encode('utf-8')
response = requests.request("POST", baidu_url, headers=headers, data=payload)
response_dict = json.loads(response.text)
if "error_code" in response_dict:
error_msg = response_dict.get("error_msg")
logging.warning(f"百度云判定出错,错误信息:{error_msg}")
conclusion = f"百度云判定出错,错误信息:{error_msg}\n以下是原消息:{message}"
else:
conclusion = response_dict["conclusion"]
if conclusion in ("合规"):
logging.info(f"百度云判定结果:{conclusion}")
return message
else:
logging.warning(f"百度云判定结果:{conclusion}")
conclusion = self.inappropriate_message_tips
# 返回百度云审核结果
return conclusion
return message
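
A minimal usage sketch of ReplyFilter as extended here: local keyword masking always runs, and the Baidu Cloud check runs only when baidu_check is enabled in config.py (the word list is illustrative):

from pkg.qqbot.filter import ReplyFilter   # the constructor imports config, so config.py must exist

reply_filter = ReplyFilter(sensitive_words=["badword"])

masked = reply_filter.process("this contains badword, twice: badword")
# each match is replaced by the same number of asterisks

if reply_filter.is_illegal("badword"):
    print("message would be rejected when income_msg_check is enabled")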

View File

@@ -7,6 +7,7 @@ import pkg.openai.session
import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import pkg.qqbot.blob as blob
def handle_exception(notify_admin: str = "", set_reply: str = "") -> list:
@@ -63,7 +64,7 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = event.get_return_value("reply")
if not event.is_prevented_default():
reply = [prefix + text]
reply = blob.check_text(prefix + text)
except openai.error.APIConnectionError as e:
err_msg = str(e)
if err_msg.__contains__('Error communicating with OpenAI'):

View File

@@ -49,7 +49,7 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
session_name = "{}_{}".format(launcher_type, launcher_id)
# 检查发送方是否被禁用
if banlist.is_banned(launcher_type, launcher_id):
if banlist.is_banned(launcher_type, launcher_id, sender_id):
logging.info("根据禁用列表忽略{}_{}的消息".format(launcher_type, launcher_id))
return []
@@ -66,6 +66,11 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
result.mute_time_remaining))
return reply
import config
if hasattr(config, 'income_msg_check') and config.income_msg_check:
if mgr.reply_filter.is_illegal(text_message):
return MessageChain(Plain("[bot] 你的提问中有不合适的内容, 请更换措辞~"))
pkg.openai.session.get_session(session_name).acquire_response_lock()
text_message = text_message.strip()
@@ -153,7 +158,7 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
"..." if len(reply[0]) > 100 else "")))
reply = [mgr.reply_filter.process(reply[0])]
else:
logging.info("回复[{}]图片消息:{}".format(session_name, reply))
logging.info("回复[{}]消息".format(session_name))
finally:
processing.remove(session_name)

View File

@@ -8,6 +8,11 @@ def install(package):
main.reset_logging()
def run_pip(params: list):
pipmain(params)
main.reset_logging()
def install_requirements(file):
pipmain(['install', '-r', file, "--upgrade"])
main.reset_logging()

164
pkg/utils/text2img.py Normal file
View File

@@ -0,0 +1,164 @@
from PIL import Image, ImageDraw, ImageFont
import re
import os
text_render_font = ImageFont.truetype("res/simhei.ttf", 32, encoding="utf-8")
def indexNumber(path=''):
"""
查找字符串中数字所在串中的位置
:param path:目标字符串
:return:<class 'list'>: <class 'list'>: [['1', 16], ['2', 35], ['1', 51]]
"""
kv = []
nums = []
beforeDatas = re.findall('[\d]+', path)
for num in beforeDatas:
indexV = []
times = path.count(num)
if times > 1:
if num not in nums:
indexs = re.finditer(num, path)
for index in indexs:
iV = []
i = index.span()[0]
iV.append(num)
iV.append(i)
kv.append(iV)
nums.append(num)
else:
index = path.find(num)
indexV.append(num)
indexV.append(index)
kv.append(indexV)
# 根据数字位置排序
indexSort = []
resultIndex = []
for vi in kv:
indexSort.append(vi[1])
indexSort.sort()
for i in indexSort:
for v in kv:
if i == v[1]:
resultIndex.append(v)
return resultIndex
def get_size(file):
# 获取文件大小:KB
size = os.path.getsize(file)
return size / 1024
def get_outfile(infile, outfile):
if outfile:
return outfile
dir, suffix = os.path.splitext(infile)
outfile = '{}-out{}'.format(dir, suffix)
return outfile
def compress_image(infile, outfile='', kb=100, step=20, quality=90):
"""不改变图片尺寸压缩到指定大小
:param infile: 压缩源文件
:param outfile: 压缩文件保存地址
:param mb: 压缩目标,KB
:param step: 每次调整的压缩比率
:param quality: 初始压缩比率
:return: 压缩文件地址,压缩文件大小
"""
o_size = get_size(infile)
if o_size <= kb:
return infile, o_size
outfile = get_outfile(infile, outfile)
while o_size > kb:
im = Image.open(infile)
im.save(outfile, quality=quality)
if quality - step < 0:
break
quality -= step
o_size = get_size(outfile)
return outfile, get_size(outfile)
def text_to_image(text_str: str, save_as="temp.png", width=800):
global text_render_font
text_str = text_str.replace("\t", " ")
# 分行
lines = text_str.split('\n')
# 计算并分割
final_lines = []
text_width = width-80
for line in lines:
# 如果长了就分割
line_width = text_render_font.getlength(line)
if line_width < text_width:
final_lines.append(line)
continue
else:
rest_text = line
while True:
# 分割最前面的一行
point = int(len(rest_text) * (text_width / line_width))
# 检查断点是否在数字中间
numbers = indexNumber(rest_text)
for number in numbers:
if number[1] < point < number[1] + len(number[0]) and number[1] != 0:
point = number[1]
break
final_lines.append(rest_text[:point])
rest_text = rest_text[point:]
line_width = text_render_font.getlength(rest_text)
if line_width < text_width:
final_lines.append(rest_text)
break
else:
continue
# 准备画布
img = Image.new('RGBA', (width, max(280, len(final_lines) * 35 + 45)), (255, 255, 255, 255))
draw = ImageDraw.Draw(img, mode='RGBA')
# 绘制正文
line_number = 0
offset_x = 20
offset_y = 30
for final_line in final_lines:
draw.text((offset_x, offset_y + 35 * line_number), final_line, fill=(0, 0, 0), font=text_render_font)
# 遍历此行,检查是否有emoji
idx_in_line = 0
for ch in final_line:
# if self.is_emoji(ch):
# emoji_img_valid = ensure_emoji(hex(ord(ch))[2:])
# if emoji_img_valid: # emoji图像可用,绘制到指定位置
# emoji_image = Image.open("emojis/{}.png".format(hex(ord(ch))[2:]), mode='r').convert('RGBA')
# emoji_image = emoji_image.resize((32, 32))
# x, y = emoji_image.size
# final_emoji_img = Image.new('RGBA', emoji_image.size, (255, 255, 255))
# final_emoji_img.paste(emoji_image, (0, 0, x, y), emoji_image)
# img.paste(final_emoji_img, box=(int(offset_x + idx_in_line * 32), offset_y + 35 * line_number))
# 检查字符占位宽
char_code = ord(ch)
if char_code >= 127:
idx_in_line += 1
else:
idx_in_line += 0.5
line_number += 1
img.save(save_as)
return save_as
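
A usage sketch of the new rendering helpers, assuming Pillow and res/simhei.ttf are available (the font is loaded at import time) and that a temp/ directory already exists:

import pkg.utils.text2img as text2img

# Render a reply to a PNG, then shrink it toward the default 100 KB target
# by re-saving at decreasing quality.
img_path = text2img.text_to_image(text_str="第一行\nsecond line", save_as="temp/demo.png")
compressed_path, size_kb = text2img.compress_image(img_path, outfile="temp/demo_compressed.png")
print(compressed_path, size_kb)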

View File

@@ -1,4 +1,9 @@
import datetime
import logging
import os.path
import requests
import json
import pkg.utils.context
@@ -28,34 +33,100 @@ def pull_latest(repo_path: str) -> bool:
return True
def get_release_list() -> list:
"""获取发行列表"""
rls_list_resp = requests.get(
url="https://api.github.com/repos/RockChinQ/QChatGPT/releases"
)
rls_list = rls_list_resp.json()
return rls_list
def get_current_tag() -> str:
"""获取当前tag"""
current_tag = "v0.1.0"
if os.path.exists("current_tag"):
with open("current_tag", "r") as f:
current_tag = f.read()
return current_tag
def update_all() -> bool:
"""使用dulwich更新源码"""
check_dulwich_closure()
import dulwich
try:
before_commit_id = get_current_commit_id()
from dulwich import porcelain
repo = porcelain.open_repo('.')
porcelain.pull(repo)
"""检查更新并下载源码"""
current_tag = get_current_tag()
change_log = ""
rls_list = get_release_list()
for entry in repo.get_walker():
if str(entry.commit.id)[2:-1] == before_commit_id:
break
tz = datetime.timezone(datetime.timedelta(hours=entry.commit.commit_timezone // 3600))
dt = datetime.datetime.fromtimestamp(entry.commit.commit_time, tz)
change_log += dt.strftime('%Y-%m-%d %H:%M:%S') + " [" + str(entry.commit.message, encoding="utf-8").strip()+"]\n"
latest_rls = {}
rls_notes = []
for rls in rls_list:
rls_notes.append(rls['name']) # 使用发行名称作为note
if rls['tag_name'] == current_tag:
break
if change_log != "":
pkg.utils.context.get_qqbot_manager().notify_admin("代码拉取完成,更新内容如下:\n"+change_log)
return True
else:
return False
except ModuleNotFoundError:
raise Exception("dulwich模块未安装,请查看 https://github.com/RockChinQ/QChatGPT/issues/77")
except dulwich.porcelain.DivergedBranches:
raise Exception("分支不一致,自动更新仅支持master分支,请手动更新(https://github.com/RockChinQ/QChatGPT/issues/76)")
if latest_rls == {}:
latest_rls = rls
logging.info("更新日志: {}".format(rls_notes))
if latest_rls == {}: # 没有新版本
return False
# 下载最新版本的zip到temp目录
logging.info("开始下载最新版本: {}".format(latest_rls['zipball_url']))
zip_url = latest_rls['zipball_url']
zip_resp = requests.get(url=zip_url)
zip_data = zip_resp.content
# 检查temp/updater目录
if not os.path.exists("temp"):
os.mkdir("temp")
if not os.path.exists("temp/updater"):
os.mkdir("temp/updater")
with open("temp/updater/{}.zip".format(latest_rls['tag_name']), "wb") as f:
f.write(zip_data)
logging.info("下载最新版本完成: {}".format("temp/updater/{}.zip".format(latest_rls['tag_name'])))
# 解压zip到temp/updater/<tag_name>/
import zipfile
# 检查目标文件夹
if os.path.exists("temp/updater/{}".format(latest_rls['tag_name'])):
import shutil
shutil.rmtree("temp/updater/{}".format(latest_rls['tag_name']))
os.mkdir("temp/updater/{}".format(latest_rls['tag_name']))
with zipfile.ZipFile("temp/updater/{}.zip".format(latest_rls['tag_name']), 'r') as zip_ref:
zip_ref.extractall("temp/updater/{}".format(latest_rls['tag_name']))
# 覆盖源码
source_root = ""
# 找到temp/updater/<tag_name>/中的第一个子目录路径
for root, dirs, files in os.walk("temp/updater/{}".format(latest_rls['tag_name'])):
if root != "temp/updater/{}".format(latest_rls['tag_name']):
source_root = root
break
# 覆盖源码
import shutil
for root, dirs, files in os.walk(source_root):
# 覆盖所有子文件子目录
for file in files:
src = os.path.join(root, file)
dst = src.replace(source_root, ".")
if os.path.exists(dst):
os.remove(dst)
shutil.copy(src, dst)
# 把current_tag写入文件
current_tag = latest_rls['tag_name']
with open("current_tag", "w") as f:
f.write(current_tag)
# 通知管理员
import pkg.utils.context
pkg.utils.context.get_qqbot_manager().notify_admin("已更新到最新版本: {}\n更新日志:\n{}\n新功能通常可以在config-template.py中看到完整的更新日志请前往 https://github.com/RockChinQ/QChatGPT/releases 查看".format(current_tag, "\n".join(rls_notes)))
return True
def is_repo(path: str) -> bool:
@@ -132,15 +203,42 @@ def get_current_commit_id() -> str:
def is_new_version_available() -> bool:
"""检查是否有新版本"""
check_dulwich_closure()
# 从github获取release列表
rls_list = get_release_list()
if rls_list is None:
return False
from dulwich import porcelain
# 获取当前版本
current_tag = get_current_tag()
repo = porcelain.open_repo('.')
fetch_res = porcelain.ls_remote(porcelain.get_remote_repo(repo, "origin")[1])
# 检查是否有新版本
for rls in rls_list:
if rls['tag_name'] == current_tag:
return False
else:
return True
current_commit_id = get_current_commit_id()
latest_commit_id = str(fetch_res[b'HEAD'])[2:-1]
def get_rls_notes() -> list:
"""获取更新日志"""
# 从github获取release列表
rls_list = get_release_list()
if rls_list is None:
return None
return current_commit_id != latest_commit_id
# 获取当前版本
current_tag = get_current_tag()
# 检查是否有新版本
rls_notes = []
for rls in rls_list:
if rls['tag_name'] == current_tag:
break
rls_notes.append(rls['name'])
return rls_notes
if __name__ == "__main__":
update_all()
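
A sketch of the tag-based update flow defined above, assuming the GitHub releases API is reachable; the current tag is read from the current_tag file (defaulting to v0.1.0), and update_all() ends with a notify_admin() call, so it expects a running bot manager:

import pkg.utils.updater as updater

if updater.is_new_version_available():   # compares get_current_tag() against the GitHub release list
    print("pending release notes:", updater.get_rls_notes())
    updater.update_all()                  # download the newest zipball into temp/updater/ and overwrite the source tree
else:
    print("already on the latest tag:", updater.get_current_tag())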

View File

@@ -1,9 +1,9 @@
requests~=2.28.1
openai~=0.27.0
pip~=22.3.1
dulwich~=0.21.3
colorlog~=6.6.0
yiri-mirai~=0.2.6.1
websockets~=10.4
urllib3~=1.26.10
func_timeout~=4.3.5
func_timeout~=4.3.5
Pillow

BIN
res/simhei.ttf Normal file

Binary file not shown.

View File

@@ -9,6 +9,7 @@
"毛泽东",
"邓小平",
"周恩来",
"马克思",
"社会主义",
"共产党",
"共产主义",
@@ -21,6 +22,8 @@
"天安门",
"六四",
"政治局常委",
"两会",
"共青团",
"学潮",
"八九",
"二十大",
@@ -48,6 +51,7 @@
"作爱",
"做爱",
"性交",
"性爱",
"自慰",
"阴茎",
"淫妇",