Compare commits

..

64 Commits

Author SHA1 Message Date
RockChinQ
1235fc1339 chore: release v3.3.1.1 2024-09-26 10:39:35 +08:00
Junyan Qin
47e308b99d Merge pull request #889 from YunZLu/add-check-role
Fix: Add Role Check to Prevent Validation Error
2024-09-26 09:31:25 +08:00
YunZL
81c2c3c0e5 Add Role Check to Prevent Validation Error 2024-09-23 23:25:54 +08:00
Junyan Qin
3c2db5097a Merge pull request #888 from Tigrex-Dai/master
fix: add existence check for 'role' in event.sender based on the reported error
2024-09-22 16:50:55 +08:00
Tigrex Dai
ce56f79687 Update aiocqhttp.py
Existence check for "role" based on the reported error
2024-09-22 15:39:48 +08:00
RockChinQ
ee0d6dcdae chore: release v3.3.1.0 2024-09-08 15:14:24 +08:00
Junyan Qin
bcf1d92f73 Merge pull request #881 from RockChinQ/version/3.3.1.0
Version/3.3.1.0
2024-09-08 15:13:39 +08:00
RockChinQ
ffdec16ce6 docs: add deprecation notice to all wiki pages 2024-09-08 14:52:35 +08:00
RockChinQ
b2f6e84adc typo: improve plugin execution log messages 2024-09-08 14:51:39 +08:00
Junyan Qin
f76c457e1f Update README.md 2024-09-03 20:07:41 +08:00
RockChinQ
80bd0a20df doc: fix logo image in README 2024-08-30 14:48:23 +08:00
RockChinQ
efeaf73339 doc: update README image links 2024-08-30 11:13:04 +08:00
Junyan Qin
91b5100a24 Merge pull request #872 from RockChinQ/feat/config-file-api
Feat: add support for YAML config files
2024-08-24 20:55:19 +08:00
RockChinQ
d1a06f4730 feat: add support for YAML config files 2024-08-24 20:54:36 +08:00
Junyan Qin
b0b186e951 Merge pull request #871 from RockChinQ/feat/qq-c2c
Feat: add support for QQ official API private chat scenarios
2024-08-24 17:04:41 +08:00
RockChinQ
4c8fedef6e feat: image support for QQ official API group and private chats 2024-08-24 17:01:35 +08:00
RockChinQ
718c221d01 feat: support the official bot direct message API 2024-08-24 16:26:47 +08:00
Junyan Qin
077e77eee5 Merge pull request #869 from ligen131/lg/fix_image_format
fix: send the correct image format instead of the default `image/jpeg`
2024-08-24 15:47:55 +08:00
ligen131
b51ca06c7c fix: send the correct image format instead of the default image/jpeg 2024-08-19 00:00:29 +08:00
RockChinQ
2f092f4a87 chore: release v3.3.0.2 2024-08-01 23:14:07 +08:00
Junyan Qin
f1ff9c05c4 Merge pull request #864 from RockChinQ/version/3.3.0.2
fix: message ignore rules not taking effect (#854)
2024-08-01 23:12:33 +08:00
RockChinQ
c9c8603ccc fix: message ignore rules not taking effect (#854) 2024-08-01 23:01:28 +08:00
RockChinQ
47e281fb61 chore: release v3.3.0.1 2024-07-28 22:47:49 +08:00
RockChinQ
dc625647eb fix: ollama dependency check 2024-07-28 22:47:19 +08:00
RockChinQ
66cf1b05be chore: improve issue and PR templates 2024-07-28 21:32:22 +08:00
RockChinQ
622cc89414 chore: release v3.3.0 2024-07-28 20:58:29 +08:00
Junyan Qin
78d98c40b1 Merge pull request #847 from RockChinQ/version/3.3
Release: 3.3
2024-07-28 20:57:26 +08:00
RockChinQ
1c5f06d9a9 feat: add reply and send_message plugin API methods 2024-07-28 20:23:52 +08:00
Junyan Qin
998fe5a980 Merge pull request #857 from RockChinQ/feat/runner-abstraction
Feat: Runner component abstraction
2024-07-28 18:47:38 +08:00
RockChinQ
8cad4089a7 feat: runner layer abstraction (#839) 2024-07-28 18:45:27 +08:00
RockChinQ
48cc3656bd feat: allow custom command prefixes 2024-07-28 16:01:58 +08:00
RockChinQ
68ddb3a6e1 feat: add model command 2024-07-28 15:46:09 +08:00
ElvisChenML
70583f5ba0 Fixed aiocqhttp mirai.Voice failing to pass url and base64 correctly 2024-07-28 15:08:33 +08:00
Junyan Qin
5bebe01dd0 Update README.md 2024-07-28 15:08:33 +08:00
Junyan Qin
4dd976c9c5 Merge pull request #856 from ElvisChenML/pr
Fixed aiocqhttp mirai.Voice failing to pass url and base64 correctly
2024-07-28 13:05:06 +08:00
ElvisChenML
221b310485 Fixed aiocqhttp mirai.Voice failing to pass url and base64 correctly 2024-07-25 16:14:24 +08:00
Junyan Qin
dd1cec70c0 Update README.md 2024-07-13 09:15:18 +08:00
Junyan Qin
7656443b28 Merge pull request #845 from ElvisChenML/pr
fixed incorrect ce.type image type in pkg\provider\entities.py\get_content_mirai_message_chain
2024-07-10 00:13:48 +08:00
Junyan Qin
9d91c13b12 Merge pull request #844 from canyuan0801/pr
Feat: Ollama platform integration
2024-07-10 00:09:48 +08:00
RockChinQ
7c06141ce2 perf(ollama): polish command display details 2024-07-10 00:07:32 +08:00
RockChinQ
3dc413638b feat(ollama): config file migration 2024-07-09 23:37:34 +08:00
RockChinQ
bdb8baeddd perf(ollama): rename the requester to match its request path 2024-07-09 23:37:19 +08:00
ElvisChenML
21966bfb69 fixed incorrect ce.type image type in pkg\provider\entities.py\get_content_mirai_message_chain 2024-07-09 17:04:11 +08:00
canyuan
e78c82e999 mod: merge ollama cmd 2024-07-09 16:19:09 +08:00
canyuan
2bdc3468d1 add ollama cmd 2024-07-09 14:57:39 +08:00
canyuan
987b3dc4ef add ollama chat 2024-07-09 14:57:28 +08:00
RockChinQ
45a10b4ac7 chore: release v3.2.4 2024-07-05 18:19:10 +08:00
RockChinQ
b5d33ef629 perf: improve error reporting during pipeline processing 2024-07-04 13:03:58 +08:00
RockChinQ
d3629916bf fix: align user_notice to MessageChain during handling (#809) 2024-07-04 12:47:55 +08:00
RockChinQ
c5cb26d295 fix: setting alter in the GroupNormalMessageReceived event had no effect (#803) 2024-07-03 23:16:16 +08:00
RockChinQ
4b2785c5eb fix: QQ official API image recognition not working (#825) 2024-07-03 22:36:35 +08:00
RockChinQ
7ed190e6d2 doc: remove advertisement 2024-07-03 17:50:58 +08:00
Junyan Qin
eac041cdd2 Merge pull request #834 from RockChinQ/feat/env-reminder
Feat: add launch notes stage
2024-07-03 17:45:56 +08:00
RockChinQ
05527cfc01 feat: add a hint for selection mode on Windows 2024-07-03 17:44:10 +08:00
RockChinQ
61e2af4a14 feat: add launch notes stage 2024-07-03 17:34:23 +08:00
RockChinQ
79804b6ecd chore: release 3.2.3 2024-06-26 10:55:21 +08:00
Junyan Qin
76434b2f4e Merge pull request #829 from RockChinQ/version/3.2.3
Release 3.2.3
2024-06-26 10:54:42 +08:00
RockChinQ
ec8bd4922e fix: incorrect resprule selection logic (#810) 2024-06-26 10:37:08 +08:00
RockChinQ
4ffa773fac fix: images wrongly converted to text for prefix-triggered responses (#820) 2024-06-26 10:15:21 +08:00
Junyan Qin
ea8b7bc8aa Merge pull request #818 from Huoyuuu/master
fix: ensure content is string in chatcmpl call method
2024-06-24 17:12:46 +08:00
RockChinQ
39ce5646f6 perf: join content elements with newlines 2024-06-24 17:04:50 +08:00
Huoyuuu
5092a82739 Update chatcmpl.py 2024-06-19 19:13:00 +08:00
Huoyuuu
3bba0b6d9a Merge pull request #1 from Huoyuuu/fix/issue-817-ensure-content-string
fix: ensure content is string in chatcmpl call method
2024-06-19 17:32:30 +08:00
Huoyuuu
7a19dd503d fix: ensure content is string in chatcmpl call method
fix: ensure content is string in chatcmpl call method

- Ensure user message content is a string instead of an array
- Updated `call` method in `chatcmpl.py` to guarantee content is a string
- Resolves compatibility issue with the yi-large model
2024-06-19 17:26:06 +08:00
70 changed files with 1036 additions and 197 deletions

View File

@@ -3,17 +3,6 @@ description: 报错或漏洞请使用这个模板创建,不使用此模板创
 title: "[Bug]: "
 labels: ["bug?"]
 body:
-  - type: dropdown
-    attributes:
-      label: 部署方式
-      description: "主程序使用的部署方式"
-      options:
-        - 手动部署
-        - 安装器部署
-        - 一键安装包部署
-        - Docker部署
-    validations:
-      required: true
   - type: dropdown
     attributes:
       label: 消息平台适配器
@@ -27,37 +16,24 @@ body:
       required: false
   - type: input
     attributes:
-      label: 系统环境
-      description: 操作系统、系统架构、**主机地理位置**,地理位置最好写清楚,涉及网络问题排查。
-      placeholder: 例如: CentOS x64 中国大陆、Windows11 美国
-    validations:
-      required: true
-  - type: input
-    attributes:
-      label: Python环境
-      description: 运行程序的Python版本
-      placeholder: 例如: Python 3.10
+      label: 运行环境
+      description: 操作系统、系统架构、**Python版本**、**主机地理位置**
+      placeholder: 例如: CentOS x64 Python 3.10.3、Docker 的直接写 Docker 就行
     validations:
       required: true
   - type: input
     attributes:
       label: QChatGPT版本
       description: QChatGPT版本号
-      placeholder: 例如: v2.6.0,可以使用`!version`命令查看
+      placeholder: 例如:v3.3.0,可以使用`!version`命令查看,或者到 pkg/utils/constants.py 查看
     validations:
       required: true
   - type: textarea
     attributes:
       label: 异常情况
-      description: 完整描述异常情况,什么时候发生的、发生了什么,尽可能详细
+      description: 完整描述异常情况,什么时候发生的、发生了什么。**请附带日志信息。**
     validations:
       required: true
-  - type: textarea
-    attributes:
-      label: 日志信息
-      description: 请提供完整的 **登录框架 和 QChatGPT控制台**的相关日志信息(若有),不提供日志信息**无法**为您排查问题,请尽可能详细
-    validations:
-      required: false
   - type: textarea
     attributes:
       label: 启用的插件

View File

@@ -2,24 +2,16 @@
 实现/解决/优化的内容:
-### 事务
-- [ ] 已阅读仓库[贡献指引](https://github.com/RockChinQ/QChatGPT/blob/master/CONTRIBUTING.md)
-- [ ] 已与维护者在issues或其他平台沟通此PR大致内容
-
-## 以下内容可在起草PR后、合并PR前逐步完成
-
-### 功能
-- [ ] 已编写完善的配置文件字段说明(若有新增)
-- [ ] 已编写面向用户的新功能说明(若有必要)
-- [ ] 已测试新功能或更改
-
-### 兼容性
-- [ ] 已处理版本兼容性
-- [ ] 已处理插件兼容问题
-
-### 风险
-可能导致或已知的问题:
+## 检查清单
+
+### PR 作者完成
+- [ ] 阅读仓库[贡献指引](https://github.com/RockChinQ/QChatGPT/blob/master/CONTRIBUTING.md)了吗?
+- [ ] 与项目所有者沟通过了吗?
+
+### 项目所有者完成
+- [ ] 相关 issues 链接了吗?
+- [ ] 配置项写好了吗?迁移写好了吗?生效了吗?
+- [ ] 依赖写到 requirements.txt 和 core/bootutils/deps.py 了吗
+- [ ] 文档编写了吗?

View File

@@ -1,6 +1,6 @@
 <p align="center">
-<img src="https://qchatgpt.rockchin.top/logo.png" alt="QChatGPT" width="180" />
+<img src="https://qchatgpt.rockchin.top/chrome-512.png" alt="QChatGPT" width="180" />
 </p>

 <div align="center">
@@ -19,7 +19,7 @@
 <a href="http://qm.qq.com/cgi-bin/qm/qr?_wv=1027&k=66-aWvn8cbP4c1ut_1YYkvvGVeEtyTH8&authKey=pTaKBK5C%2B8dFzQ4XlENf6MHTCLaHnlKcCRx7c14EeVVlpX2nRSaS8lJm8YeM4mCU&noverify=0&group_code=195992197">
 <img alt="Static Badge" src="https://img.shields.io/badge/%E5%AE%98%E6%96%B9%E7%BE%A4-195992197-purple">
 </a>
-<a href="https://qm.qq.com/q/1yxEaIgXMA">
+<a href="https://qm.qq.com/q/PClALFK242">
 <img alt="Static Badge" src="https://img.shields.io/badge/%E7%A4%BE%E5%8C%BA%E7%BE%A4-619154800-purple">
 </a>
 <a href="https://codecov.io/gh/RockChinQ/QChatGPT" >
@@ -45,11 +45,11 @@
 <hr/>
 <div align="center">
-京东云4090单卡15C90G实例 <br/>
+京东云 4090 单卡 15C90G 实例 <br/>
 仅需1.89/小时包月1225元起 <br/>
 可选预装Stable Diffusion等应用随用随停计费透明欢迎首选支持 <br/>
-https://3.cn/1ZOi6-Gj
+https://3.cn/24A-2NXd
 </div>
-<img alt="回复效果(带有联网插件)" src="https://qchatgpt.rockchin.top/assets/image/QChatGPT-0516.png" width="500px"/>
+<img alt="回复效果(带有联网插件)" src="https://qchatgpt.rockchin.top/QChatGPT-0516.png" width="500px"/>
 </div>

View File

@@ -8,7 +8,7 @@ from . import entities, operator, errors
 from ..config import manager as cfg_mgr

 # 引入所有算子以便注册
-from .operators import func, plugin, default, reset, list as list_cmd, last, next, delc, resend, prompt, cmd, help, version, update
+from .operators import func, plugin, default, reset, list as list_cmd, last, next, delc, resend, prompt, cmd, help, version, update, ollama, model

 class CommandManager:

View File

@@ -0,0 +1,86 @@
from __future__ import annotations

import typing

from .. import operator, entities, cmdmgr, errors


@operator.operator_class(
    name="model",
    help='显示和切换模型列表',
    usage='!model\n!model show <模型名>\n!model set <模型名>',
    privilege=2
)
class ModelOperator(operator.CommandOperator):
    """Model命令"""

    async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
        content = '模型列表:\n'

        model_list = self.ap.model_mgr.model_list

        for model in model_list:
            content += f"\n名称: {model.name}\n"
            content += f"请求器: {model.requester.name}\n"

        content += f"\n当前对话使用模型: {context.query.use_model.name}\n"
        content += f"新对话默认使用模型: {self.ap.provider_cfg.data.get('model')}\n"

        yield entities.CommandReturn(text=content.strip())


@operator.operator_class(
    name="show",
    help='显示模型详情',
    privilege=2,
    parent_class=ModelOperator
)
class ModelShowOperator(operator.CommandOperator):
    """Model Show命令"""

    async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
        model_name = context.crt_params[0]

        model = None
        for _model in self.ap.model_mgr.model_list:
            if model_name == _model.name:
                model = _model
                break

        if model is None:
            yield entities.CommandReturn(error=errors.CommandError(f"未找到模型 {model_name}"))
        else:
            content = f"模型详情\n"
            content += f"名称: {model.name}\n"
            if model.model_name is not None:
                content += f"请求模型名称: {model.model_name}\n"
            content += f"请求器: {model.requester.name}\n"
            content += f"密钥组: {model.token_mgr.provider}\n"
            content += f"支持视觉: {model.vision_supported}\n"
            content += f"支持工具: {model.tool_call_supported}\n"

            yield entities.CommandReturn(text=content.strip())


@operator.operator_class(
    name="set",
    help='设置默认使用模型',
    privilege=2,
    parent_class=ModelOperator
)
class ModelSetOperator(operator.CommandOperator):
    """Model Set命令"""

    async def execute(self, context: entities.ExecuteContext) -> typing.AsyncGenerator[entities.CommandReturn, None]:
        model_name = context.crt_params[0]

        model = None
        for _model in self.ap.model_mgr.model_list:
            if model_name == _model.name:
                model = _model
                break

        if model is None:
            yield entities.CommandReturn(error=errors.CommandError(f"未找到模型 {model_name}"))
        else:
            self.ap.provider_cfg.data['model'] = model_name
            await self.ap.provider_cfg.dump_config()

            yield entities.CommandReturn(text=f"已设置当前使用模型为 {model_name},重置会话以生效")
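Both `!model show` and `!model set` in the file above resolve a model by scanning the registered list and falling back to an error when nothing matches. A minimal standalone sketch of that lookup pattern (`ModelInfo` and `find_model` are hypothetical stand-ins, not the project's real entities):

```python
# Sketch of the name-lookup pattern used by `!model show` / `!model set`:
# scan the model records, return the first whose name matches,
# or None so the caller can report "model not found".
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelInfo:
    name: str       # display name the user types
    requester: str  # backend that serves this model


def find_model(models: list[ModelInfo], name: str) -> Optional[ModelInfo]:
    for m in models:
        if m.name == name:
            return m
    return None
```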

View File

@@ -0,0 +1,121 @@
from __future__ import annotations

import json
import typing
import traceback

import ollama

from .. import operator, entities, errors


@operator.operator_class(
    name="ollama",
    help="ollama平台操作",
    usage="!ollama\n!ollama show <模型名>\n!ollama pull <模型名>\n!ollama del <模型名>"
)
class OllamaOperator(operator.CommandOperator):
    async def execute(
        self, context: entities.ExecuteContext
    ) -> typing.AsyncGenerator[entities.CommandReturn, None]:
        try:
            content: str = '模型列表:\n'
            model_list: list = ollama.list().get('models', [])
            for model in model_list:
                content += f"名称: {model['name']}\n"
                content += f"修改时间: {model['modified_at']}\n"
                content += f"大小: {bytes_to_mb(model['size'])}MB\n\n"
            yield entities.CommandReturn(text=f"{content.strip()}")
        except ollama.ResponseError as e:
            yield entities.CommandReturn(error=errors.CommandError(f"无法获取模型列表,请确认 Ollama 服务正常"))


def bytes_to_mb(num_bytes):
    mb: float = num_bytes / 1024 / 1024
    return format(mb, '.2f')


@operator.operator_class(
    name="show",
    help="ollama模型详情",
    privilege=2,
    parent_class=OllamaOperator
)
class OllamaShowOperator(operator.CommandOperator):
    async def execute(
        self, context: entities.ExecuteContext
    ) -> typing.AsyncGenerator[entities.CommandReturn, None]:
        content: str = '模型详情:\n'
        try:
            show: dict = ollama.show(model=context.crt_params[0])
            model_info: dict = show.get('model_info', {})
            ignore_show: str = 'too long to show...'
            for key in ['license', 'modelfile']:
                show[key] = ignore_show
            for key in ['tokenizer.chat_template.rag', 'tokenizer.chat_template.tool_use']:
                model_info[key] = ignore_show
            content += json.dumps(show, indent=4)
            yield entities.CommandReturn(text=content.strip())
        except ollama.ResponseError as e:
            yield entities.CommandReturn(error=errors.CommandError(f"无法获取模型详情,请确认 Ollama 服务正常"))


@operator.operator_class(
    name="pull",
    help="ollama模型拉取",
    privilege=2,
    parent_class=OllamaOperator
)
class OllamaPullOperator(operator.CommandOperator):
    async def execute(
        self, context: entities.ExecuteContext
    ) -> typing.AsyncGenerator[entities.CommandReturn, None]:
        try:
            model_list: list = ollama.list().get('models', [])
            if context.crt_params[0] in [model['name'] for model in model_list]:
                yield entities.CommandReturn(text="模型已存在")
                return
        except ollama.ResponseError as e:
            yield entities.CommandReturn(error=errors.CommandError(f"无法获取模型列表,请确认 Ollama 服务正常"))
            return

        on_progress: bool = False
        progress_count: int = 0
        try:
            for resp in ollama.pull(model=context.crt_params[0], stream=True):
                total: typing.Any = resp.get('total')
                if not on_progress:
                    if total is not None:
                        on_progress = True
                    yield entities.CommandReturn(text=resp.get('status'))
                else:
                    if total is None:
                        on_progress = False
                    completed: typing.Any = resp.get('completed')
                    if isinstance(completed, int) and isinstance(total, int):
                        percentage_completed = (completed / total) * 100
                        if percentage_completed > progress_count:
                            progress_count += 10
                            yield entities.CommandReturn(
                                text=f"下载进度: {completed}/{total} ({percentage_completed:.2f}%)")
        except ollama.ResponseError as e:
            yield entities.CommandReturn(text=f"拉取失败: {e.error}")


@operator.operator_class(
    name="del",
    help="ollama模型删除",
    privilege=2,
    parent_class=OllamaOperator
)
class OllamaDelOperator(operator.CommandOperator):
    async def execute(
        self, context: entities.ExecuteContext
    ) -> typing.AsyncGenerator[entities.CommandReturn, None]:
        try:
            ret: str = ollama.delete(model=context.crt_params[0])['status']
        except ollama.ResponseError as e:
            ret = f"{e.error}"
        yield entities.CommandReturn(text=ret)
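The pull operator above throttles streamed progress events so a long download yields roughly one message per 10% step instead of one per chunk. A standalone sketch of that throttling logic, decoupled from the ollama client and the command framework (the event dicts mimic the `status`/`total`/`completed` fields the stream carries):

```python
# Sketch of the pull-progress throttling in OllamaPullOperator: status lines
# pass through until a 'total' appears; after that, a progress message is
# emitted only when the completed percentage passes the next 10% threshold.
def progress_messages(events: list[dict]) -> list[str]:
    on_progress = False
    progress_count = 0
    out: list[str] = []
    for resp in events:
        total = resp.get('total')
        if not on_progress:
            if total is not None:
                on_progress = True
            out.append(str(resp.get('status')))
        else:
            if total is None:
                on_progress = False
            completed = resp.get('completed')
            if isinstance(completed, int) and isinstance(total, int):
                percentage = (completed / total) * 100
                if percentage > progress_count:
                    progress_count += 10  # arm the next 10% threshold
                    out.append(f"{completed}/{total} ({percentage:.2f}%)")
    return out
```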

View File

@@ -20,7 +20,7 @@ class VersionCommand(operator.CommandOperator):
         try:
             if await self.ap.ver_mgr.is_new_version_available():
-                reply_str += "\n\n有新版本可用, 使用 !update 更新"
+                reply_str += "\n\n有新版本可用"
         except:
             pass

pkg/config/impls/yaml.py Normal file (59 lines)
View File

@@ -0,0 +1,59 @@
import os
import shutil

import yaml

from .. import model as file_model


class YAMLConfigFile(file_model.ConfigFile):
    """YAML配置文件"""

    def __init__(
        self, config_file_name: str, template_file_name: str = None, template_data: dict = None
    ) -> None:
        self.config_file_name = config_file_name
        self.template_file_name = template_file_name
        self.template_data = template_data

    def exists(self) -> bool:
        return os.path.exists(self.config_file_name)

    async def create(self):
        if self.template_file_name is not None:
            shutil.copyfile(self.template_file_name, self.config_file_name)
        elif self.template_data is not None:
            with open(self.config_file_name, "w", encoding="utf-8") as f:
                yaml.dump(self.template_data, f, indent=4, allow_unicode=True)
        else:
            raise ValueError("template_file_name or template_data must be provided")

    async def load(self, completion: bool=True) -> dict:
        if not self.exists():
            await self.create()

        if self.template_file_name is not None:
            with open(self.template_file_name, "r", encoding="utf-8") as f:
                self.template_data = yaml.load(f, Loader=yaml.FullLoader)

        with open(self.config_file_name, "r", encoding="utf-8") as f:
            try:
                cfg = yaml.load(f, Loader=yaml.FullLoader)
            except yaml.YAMLError as e:
                raise Exception(f"配置文件 {self.config_file_name} 语法错误: {e}")

        if completion:
            for key in self.template_data:
                if key not in cfg:
                    cfg[key] = self.template_data[key]

        return cfg

    async def save(self, cfg: dict):
        with open(self.config_file_name, "w", encoding="utf-8") as f:
            yaml.dump(cfg, f, indent=4, allow_unicode=True)

    def save_sync(self, cfg: dict):
        with open(self.config_file_name, "w", encoding="utf-8") as f:
            yaml.dump(cfg, f, indent=4, allow_unicode=True)
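The `completion` flag in the config file's `load()` fills in top-level keys that exist in the template but are missing from the user's config. A minimal sketch of just that merge step, without the file I/O or the YAML dependency (the function name is illustrative):

```python
# Sketch of the `completion` behaviour in the config loader: top-level keys
# present in the template but missing from the loaded config are copied in.
# Note this is a shallow merge -- nested keys are not completed.
def complete_config(cfg: dict, template: dict) -> dict:
    for key in template:
        if key not in cfg:
            cfg[key] = template[key]
    return cfg
```

Because the merge is shallow, adding a new sub-key inside an existing dict still requires an explicit migration, which is what the `m0xx_*` migration modules elsewhere in this diff handle.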

View File

@@ -1,7 +1,7 @@
 from __future__ import annotations

 from . import model as file_model
-from .impls import pymodule, json as json_file
+from .impls import pymodule, json as json_file, yaml as yaml_file

 managers: ConfigManager = []
@@ -31,7 +31,16 @@ class ConfigManager:

 async def load_python_module_config(config_name: str, template_name: str, completion: bool=True) -> ConfigManager:
-    """加载Python模块配置文件"""
+    """加载Python模块配置文件
+
+    Args:
+        config_name (str): 配置文件名
+        template_name (str): 模板文件名
+        completion (bool): 是否自动补全内存中的配置文件
+
+    Returns:
+        ConfigManager: 配置文件管理器
+    """
     cfg_inst = pymodule.PythonModuleConfigFile(
         config_name,
         template_name
@@ -44,7 +53,14 @@ async def load_python_module_config(config_name: str, template_name: str, comple

 async def load_json_config(config_name: str, template_name: str=None, template_data: dict=None, completion: bool=True) -> ConfigManager:
-    """加载JSON配置文件"""
+    """加载JSON配置文件
+
+    Args:
+        config_name (str): 配置文件名
+        template_name (str): 模板文件名
+        template_data (dict): 模板数据
+        completion (bool): 是否自动补全内存中的配置文件
+    """
     cfg_inst = json_file.JSONConfigFile(
         config_name,
         template_name,
@@ -54,4 +70,28 @@ async def load_json_config(config_name: str, template_name: str=None, template_d
     cfg_mgr = ConfigManager(cfg_inst)
     await cfg_mgr.load_config(completion=completion)

     return cfg_mgr
+
+async def load_yaml_config(config_name: str, template_name: str=None, template_data: dict=None, completion: bool=True) -> ConfigManager:
+    """加载YAML配置文件
+
+    Args:
+        config_name (str): 配置文件名
+        template_name (str): 模板文件名
+        template_data (dict): 模板数据
+        completion (bool): 是否自动补全内存中的配置文件
+
+    Returns:
+        ConfigManager: 配置文件管理器
+    """
+    cfg_inst = yaml_file.YAMLConfigFile(
+        config_name,
+        template_name,
+        template_data
+    )
+
+    cfg_mgr = ConfigManager(cfg_inst)
+    await cfg_mgr.load_config(completion=completion)
+
+    return cfg_mgr

View File

@@ -9,13 +9,14 @@ from ..provider.session import sessionmgr as llm_session_mgr
 from ..provider.modelmgr import modelmgr as llm_model_mgr
 from ..provider.sysprompt import sysprompt as llm_prompt_mgr
 from ..provider.tools import toolmgr as llm_tool_mgr
+from ..provider import runnermgr
 from ..config import manager as config_mgr
 from ..audit.center import v2 as center_mgr
 from ..command import cmdmgr
 from ..plugin import manager as plugin_mgr
 from ..pipeline import pool
 from ..pipeline import controller, stagemgr
-from ..utils import version as version_mgr, proxy as proxy_mgr
+from ..utils import version as version_mgr, proxy as proxy_mgr, announce as announce_mgr

 class Application:
@@ -33,6 +34,8 @@ class Application:
     tool_mgr: llm_tool_mgr.ToolManager = None

+    runner_mgr: runnermgr.RunnerManager = None
+
     # ======= 配置管理器 =======

     command_cfg: config_mgr.ConfigManager = None
@@ -69,6 +72,8 @@ class Application:
     ver_mgr: version_mgr.VersionManager = None

+    ann_mgr: announce_mgr.AnnouncementManager = None
+
     proxy_mgr: proxy_mgr.ProxyManager = None

     logger: logging.Logger = None

View File

@@ -7,14 +7,15 @@ from ..audit import identifier
 from . import stage

 # 引入启动阶段实现以便注册
-from .stages import load_config, setup_logger, build_app, migrate
+from .stages import load_config, setup_logger, build_app, migrate, show_notes

 stage_order = [
     "LoadConfigStage",
     "MigrationStage",
     "SetupLoggerStage",
-    "BuildAppStage"
+    "BuildAppStage",
+    "ShowNotesStage"
 ]

View File

@@ -15,6 +15,7 @@ required_deps = {
     "aiohttp": "aiohttp",
     "psutil": "psutil",
     "async_lru": "async-lru",
+    "ollama": "ollama",
 }

View File

@@ -73,6 +73,9 @@ class Query(pydantic.BaseModel):
     resp_message_chain: typing.Optional[list[mirai.MessageChain]] = None
     """回复消息链从resp_messages包装而得"""

+    # ======= 内部保留 =======
+
+    current_stage: "pkg.pipeline.stagemgr.StageInstContainer" = None

     class Config:
         arbitrary_types_allowed = True

View File

@@ -3,7 +3,7 @@ from __future__ import annotations
 import abc
 import typing

-from ..core import app
+from . import app

 preregistered_migrations: list[typing.Type[Migration]] = []

View File

@@ -0,0 +1,23 @@
from __future__ import annotations

from .. import migration


@migration.migration_class("ollama-requester-config", 10)
class MsgTruncatorConfigMigration(migration.Migration):
    """迁移"""

    async def need_migrate(self) -> bool:
        """判断当前环境是否需要运行此迁移"""
        return 'ollama-chat' not in self.ap.provider_cfg.data['requester']

    async def run(self):
        """执行迁移"""
        self.ap.provider_cfg.data['requester']['ollama-chat'] = {
            "base-url": "http://127.0.0.1:11434",
            "args": {},
            "timeout": 600
        }

        await self.ap.provider_cfg.dump_config()

View File

@@ -0,0 +1,21 @@
from __future__ import annotations

from .. import migration


@migration.migration_class("command-prefix-config", 11)
class CommandPrefixConfigMigration(migration.Migration):
    """迁移"""

    async def need_migrate(self) -> bool:
        """判断当前环境是否需要运行此迁移"""
        return 'command-prefix' not in self.ap.command_cfg.data

    async def run(self):
        """执行迁移"""
        self.ap.command_cfg.data['command-prefix'] = [
            "!", ""
        ]

        await self.ap.command_cfg.dump_config()

View File

@@ -0,0 +1,19 @@
from __future__ import annotations

from .. import migration


@migration.migration_class("runner-config", 12)
class RunnerConfigMigration(migration.Migration):
    """迁移"""

    async def need_migrate(self) -> bool:
        """判断当前环境是否需要运行此迁移"""
        return 'runner' not in self.ap.provider_cfg.data

    async def run(self):
        """执行迁移"""
        self.ap.provider_cfg.data['runner'] = 'local-agent'

        await self.ap.provider_cfg.dump_config()
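The m010/m011/m012 migration files all follow the same pattern: a decorator records a `(name, number)` pair on the class and appends it to a registry, so the migrate stage can sort by number and run only the migrations whose `need_migrate()` returns true. A toy re-creation of that pattern against a plain dict (class and function names here are illustrative, not the project's real API):

```python
# Toy version of the migration registry: the decorator tags each class with
# its name and ordering number and appends it to a module-level list.
preregistered = []

def migration_class(name: str, number: int):
    def decorator(cls):
        cls.name = name
        cls.number = number
        preregistered.append(cls)
        return cls
    return decorator

@migration_class("command-prefix-config", 11)
class CommandPrefixMigration:
    def need_migrate(self, data: dict) -> bool:
        return 'command-prefix' not in data

    def run(self, data: dict):
        data['command-prefix'] = ["!", ""]

@migration_class("runner-config", 12)
class RunnerMigration:
    def need_migrate(self, data: dict) -> bool:
        return 'runner' not in data

    def run(self, data: dict):
        data['runner'] = 'local-agent'

def run_migrations(data: dict):
    # run in ascending number order, skipping already-applied migrations
    for cls in sorted(preregistered, key=lambda c: c.number):
        inst = cls()
        if inst.need_migrate(data):
            inst.run(data)
```

The `need_migrate()` guard makes each migration idempotent: re-running the whole list against an already-migrated config is a no-op.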

pkg/core/note.py Normal file (44 lines)
View File

@@ -0,0 +1,44 @@
from __future__ import annotations

import abc
import typing

from . import app


preregistered_notes: list[typing.Type[LaunchNote]] = []

def note_class(name: str, number: int):
    """注册一个启动信息
    """
    def decorator(cls: typing.Type[LaunchNote]) -> typing.Type[LaunchNote]:
        cls.name = name
        cls.number = number

        preregistered_notes.append(cls)
        return cls

    return decorator


class LaunchNote(abc.ABC):
    """启动信息
    """

    name: str

    number: int

    ap: app.Application

    def __init__(self, ap: app.Application):
        self.ap = ap

    @abc.abstractmethod
    async def need_show(self) -> bool:
        """判断当前环境是否需要显示此启动信息
        """
        pass

    @abc.abstractmethod
    async def yield_note(self) -> typing.AsyncGenerator[typing.Tuple[str, int], None]:
        """生成启动信息
        """
        pass
View File

View File

@@ -0,0 +1,20 @@
from __future__ import annotations

import typing

from .. import note, app


@note.note_class("ClassicNotes", 1)
class ClassicNotes(note.LaunchNote):
    """经典启动信息
    """

    async def need_show(self) -> bool:
        return True

    async def yield_note(self) -> typing.AsyncGenerator[typing.Tuple[str, int], None]:
        yield await self.ap.ann_mgr.show_announcements()

        yield await self.ap.ver_mgr.show_version_update()
View File

@@ -0,0 +1,21 @@
from __future__ import annotations

import typing
import os
import sys
import logging

from .. import note, app


@note.note_class("SelectionModeOnWindows", 2)
class SelectionModeOnWindows(note.LaunchNote):
    """Windows 上的选择模式提示信息
    """

    async def need_show(self) -> bool:
        return os.name == 'nt'

    async def yield_note(self) -> typing.AsyncGenerator[typing.Tuple[str, int], None]:
        yield """您正在使用 Windows 系统,若窗口左上角显示处于”选择“模式,程序将被暂停运行,此时请右键窗口中空白区域退出选择模式。""", logging.INFO

View File

@@ -13,6 +13,7 @@ from ...provider.session import sessionmgr as llm_session_mgr
 from ...provider.modelmgr import modelmgr as llm_model_mgr
 from ...provider.sysprompt import sysprompt as llm_prompt_mgr
 from ...provider.tools import toolmgr as llm_tool_mgr
+from ...provider import runnermgr
 from ...platform import manager as im_mgr

 @stage.stage_class("BuildAppStage")
@@ -53,12 +54,10 @@ class BuildAppStage(stage.BootingStage):
         # 发送公告
         ann_mgr = announce.AnnouncementManager(ap)
-        await ann_mgr.show_announcements()
+        ap.ann_mgr = ann_mgr

         ap.query_pool = pool.QueryPool()

-        await ap.ver_mgr.show_version_update()
-
         plugin_mgr_inst = plugin_mgr.PluginManager(ap)
         await plugin_mgr_inst.initialize()
         ap.plugin_mgr = plugin_mgr_inst
@@ -83,6 +82,11 @@ class BuildAppStage(stage.BootingStage):
         llm_tool_mgr_inst = llm_tool_mgr.ToolManager(ap)
         await llm_tool_mgr_inst.initialize()
         ap.tool_mgr = llm_tool_mgr_inst

+        runner_mgr_inst = runnermgr.RunnerManager(ap)
+        await runner_mgr_inst.initialize()
+        ap.runner_mgr = runner_mgr_inst
+
         im_mgr_inst = im_mgr.PlatformManager(ap=ap)
         await im_mgr_inst.initialize()
         ap.platform_mgr = im_mgr_inst

View File

@@ -3,9 +3,10 @@ from __future__ import annotations
 import importlib

 from .. import stage, app
-from ...config import migration
-from ...config.migrations import m001_sensitive_word_migration, m002_openai_config_migration, m003_anthropic_requester_cfg_completion, m004_moonshot_cfg_completion
-from ...config.migrations import m005_deepseek_cfg_completion, m006_vision_config, m007_qcg_center_url, m008_ad_fixwin_config_migrate, m009_msg_truncator_cfg
+from .. import migration
+from ..migrations import m001_sensitive_word_migration, m002_openai_config_migration, m003_anthropic_requester_cfg_completion, m004_moonshot_cfg_completion
+from ..migrations import m005_deepseek_cfg_completion, m006_vision_config, m007_qcg_center_url, m008_ad_fixwin_config_migrate, m009_msg_truncator_cfg
+from ..migrations import m010_ollama_requester_config, m011_command_prefix_config, m012_runner_config

 @stage.stage_class("MigrationStage")

View File

@@ -0,0 +1,28 @@
from __future__ import annotations

from .. import stage, app, note
from ..notes import n001_classic_msgs, n002_selection_mode_on_windows


@stage.stage_class("ShowNotesStage")
class ShowNotesStage(stage.BootingStage):
    """显示启动信息阶段
    """

    async def run(self, ap: app.Application):
        # 排序
        note.preregistered_notes.sort(key=lambda x: x.number)

        for note_cls in note.preregistered_notes:
            try:
                note_inst = note_cls(ap)
                if await note_inst.need_show():
                    async for ret in note_inst.yield_note():
                        if not ret:
                            continue

                        msg, level = ret
                        if msg:
                            ap.logger.log(level, msg)
            except Exception as e:
                continue
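The stage above defines a small contract: each launch note yields `(message, logging-level)` tuples, and any exception inside a single note is swallowed so one broken note cannot abort startup. A synchronous sketch of that iteration contract with stand-in types (no real `ap`, notes are plain iterables here):

```python
# Sketch of the launch-note display loop: iterate each note's (msg, level)
# tuples, log non-empty messages, and isolate failures per note.
import logging


def show_notes(notes, log):
    """notes: iterables yielding (msg, level); log: callable(level, msg)."""
    for note in notes:
        try:
            for ret in note:
                if not ret:
                    continue
                msg, level = ret
                if msg:
                    log(level, msg)
        except Exception:
            # a failing note must not abort the remaining notes
            continue
```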

View File

@@ -1,6 +1,8 @@
 from __future__ import annotations

 import mirai
+import mirai.models
+import mirai.models.message

 from ...core import app
@@ -63,6 +65,7 @@ class ContentFilterStage(stage.PipelineStage):
         """请求llm前处理消息
         只要有一个不通过就不放行,只放行 PASS 的消息
         """
+
         if not self.ap.pipeline_cfg.data['income-msg-check']:
             return entities.StageProcessResult(
                 result_type=entities.ResultType.CONTINUE,
@@ -145,11 +148,13 @@ class ContentFilterStage(stage.PipelineStage):
             contain_non_text = False

+            text_components = [mirai.Plain, mirai.models.message.Source]
+
             for me in query.message_chain:
-                if not isinstance(me, mirai.Plain):
+                if type(me) not in text_components:
                     contain_non_text = True
                     break

             if contain_non_text:
                 self.ap.logger.debug(f"消息中包含非文本消息,跳过内容过滤器检查。")
                 return entities.StageProcessResult(

View File

@@ -4,6 +4,8 @@ import asyncio
 import typing
 import traceback

+import mirai

 from ..core import app, entities
 from . import entities as pipeline_entities
 from ..plugin import events
@@ -68,6 +70,17 @@ class Controller:
         """检查输出
         """
         if result.user_notice:
+            # 处理str类型
+            if isinstance(result.user_notice, str):
+                result.user_notice = mirai.MessageChain(
+                    mirai.Plain(result.user_notice)
+                )
+            elif isinstance(result.user_notice, list):
+                result.user_notice = mirai.MessageChain(
+                    *result.user_notice
+                )
             await self.ap.platform_mgr.send(
                 query.message_event,
                 result.user_notice,
@@ -109,6 +122,8 @@ class Controller:
         while i < len(self.ap.stage_mgr.stage_containers):
             stage_container = self.ap.stage_mgr.stage_containers[i]

+            query.current_stage = stage_container  # 标记到 Query 对象里

             result = stage_container.inst.process(query, stage_container.inst_name)
@@ -149,7 +164,7 @@ class Controller:
         try:
             await self._execute_from_stage(0, query)
         except Exception as e:
-            self.ap.logger.error(f"处理请求时出错 query_id={query.query_id}: {e}")
+            self.ap.logger.error(f"处理请求时出错 query_id={query.query_id} stage={query.current_stage.inst_name} : {e}")
             self.ap.logger.debug(f"Traceback: {traceback.format_exc()}")
             # traceback.print_exc()
         finally:
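The change in `check_output` lets `result.user_notice` arrive as a plain string, a list of components, or an already-built chain, and coerces the first two into a message chain before sending. A stand-in sketch, using simplified classes instead of the real mirai types (assumption: mirai's `MessageChain` accepts components as positional arguments):

```python
# Stand-in for the user_notice normalization added to Controller.
class Plain:
    def __init__(self, text):
        self.text = text

class MessageChain:
    # simplified: real mirai.MessageChain also accepts a list
    def __init__(self, *components):
        self.components = list(components)

def normalize_user_notice(user_notice):
    if isinstance(user_notice, str):
        return MessageChain(Plain(user_notice))
    elif isinstance(user_notice, list):
        return MessageChain(*user_notice)
    return user_notice  # already a chain

print(len(normalize_user_notice("rate limited").components))  # 1
```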


@@ -10,7 +10,7 @@ import mirai
 from .. import handler
 from ... import entities
 from ....core import entities as core_entities
-from ....provider import entities as llm_entities
+from ....provider import entities as llm_entities, runnermgr
 from ....plugin import events
@@ -62,9 +62,8 @@ class ChatMessageHandler(handler.MessageHandler):
             )

             if event_ctx.event.alter is not None:
-                query.message_chain = mirai.MessageChain([
-                    mirai.Plain(event_ctx.event.alter)
-                ])
+                # if isinstance(event_ctx.event, str): # 现在暂时不考虑多模态alter
+                query.user_message.content = event_ctx.event.alter

             text_length = 0
@@ -72,7 +71,9 @@ class ChatMessageHandler(handler.MessageHandler):
             try:
-                async for result in self.runner(query):
+                runner = self.ap.runner_mgr.get_runner()
+
+                async for result in runner.run(query):
                     query.resp_messages.append(result)

                     self.ap.logger.info(f'对话({query.query_id})响应: {self.cut_str(result.readable_str())}')
@@ -109,64 +110,3 @@ class ChatMessageHandler(handler.MessageHandler):
                 response_seconds=int(time.time() - start_time),
                 retry_times=-1,
             )
-
-    async def runner(
-        self,
-        query: core_entities.Query,
-    ) -> typing.AsyncGenerator[llm_entities.Message, None]:
-        """执行一个请求处理过程中的LLM接口请求、函数调用的循环
-        这是临时处理方案,后续可能改为使用LangChain或者自研的工作流处理器
-        """
-        await query.use_model.requester.preprocess(query)
-
-        pending_tool_calls = []
-
-        req_messages = query.prompt.messages.copy() + query.messages.copy() + [query.user_message]
-
-        # 首次请求
-        msg = await query.use_model.requester.call(query.use_model, req_messages, query.use_funcs)
-        yield msg
-        pending_tool_calls = msg.tool_calls
-        req_messages.append(msg)
-
-        # 持续请求,只要还有待处理的工具调用就继续处理调用
-        while pending_tool_calls:
-            for tool_call in pending_tool_calls:
-                try:
-                    func = tool_call.function
-                    parameters = json.loads(func.arguments)
-                    func_ret = await self.ap.tool_mgr.execute_func_call(
-                        query, func.name, parameters
-                    )
-                    msg = llm_entities.Message(
-                        role="tool", content=json.dumps(func_ret, ensure_ascii=False), tool_call_id=tool_call.id
-                    )
-                    yield msg
-                    req_messages.append(msg)
-                except Exception as e:
-                    # 工具调用出错,添加一个报错信息到 req_messages
-                    err_msg = llm_entities.Message(
-                        role="tool", content=f"err: {e}", tool_call_id=tool_call.id
-                    )
-                    yield err_msg
-                    req_messages.append(err_msg)
-
-            # 处理完所有调用,再次请求
-            msg = await query.use_model.requester.call(query.use_model, req_messages, query.use_funcs)
-            yield msg
-            pending_tool_calls = msg.tool_calls
-            req_messages.append(msg)


@@ -42,7 +42,9 @@ class Processor(stage.PipelineStage):
         self.ap.logger.info(f"处理 {query.launcher_type.value}_{query.launcher_id} 的请求({query.query_id}): {message_text}")

         async def generator():
-            if message_text.startswith('!') or message_text.startswith('!'):
+            cmd_prefix = self.ap.command_cfg.data['command-prefix']
+
+            if any(message_text.startswith(prefix) for prefix in cmd_prefix):
                 async for result in self.cmd_handler.handle(query):
                     yield result
             else:
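Instead of hard-coding `!` (and its fullwidth variant), the processor now reads the prefix list from the `command-prefix` config key. A minimal sketch, with the prefix list hard-coded as an assumed example value:

```python
# Stand-in for the config-driven command detection in Processor.
def is_command(message_text: str, cmd_prefix: list) -> bool:
    # a message is a command if it starts with any configured prefix
    return any(message_text.startswith(prefix) for prefix in cmd_prefix)

print(is_command("!help", ["!", "!"]))  # True
print(is_command("hello", ["!", "!"]))  # False
```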


@@ -44,8 +44,8 @@ class GroupRespondRuleCheckStage(stage.PipelineStage):
         use_rule = rules['default']

-        if str(query.launcher_id) in use_rule:
-            use_rule = use_rule[str(query.launcher_id)]
+        if str(query.launcher_id) in rules:
+            use_rule = rules[str(query.launcher_id)]

         for rule_matcher in self.rule_matchers:  # 任意一个匹配就放行
             res = await rule_matcher.match(str(query.message_chain), query.message_chain, use_rule, query)


@@ -20,11 +20,14 @@ class PrefixRule(rule_model.GroupRespondRule):
         for prefix in prefixes:
             if message_text.startswith(prefix):
+                # 查找第一个plain元素
+                for me in message_chain:
+                    if isinstance(me, mirai.Plain):
+                        me.text = me.text[len(prefix):]

                 return entities.RuleJudgeResult(
                     matching=True,
-                    replacement=mirai.MessageChain([
-                        mirai.Plain(message_text[len(prefix):])
-                    ]),
+                    replacement=message_chain,
                 )

         return entities.RuleJudgeResult(


@@ -146,9 +146,9 @@ class PlatformManager:
         if len(self.adapters) == 0:
             self.ap.logger.warning('未运行平台适配器,请根据文档配置并启用平台适配器。')

-    async def send(self, event: mirai.MessageEvent, msg: mirai.MessageChain, adapter: msadapter.MessageSourceAdapter, check_quote=True, check_at_sender=True):
+    async def send(self, event: mirai.MessageEvent, msg: mirai.MessageChain, adapter: msadapter.MessageSourceAdapter):

-        if check_at_sender and self.ap.platform_cfg.data['at-sender'] and isinstance(event, GroupMessage):
+        if self.ap.platform_cfg.data['at-sender'] and isinstance(event, GroupMessage):
             msg.insert(
                 0,
@@ -160,7 +160,7 @@ class PlatformManager:
         await adapter.reply_message(
             event,
             msg,
-            quote_origin=True if self.ap.platform_cfg.data['quote-origin'] and check_quote else False
+            quote_origin=True if self.ap.platform_cfg.data['quote-origin'] else False
         )

     async def run(self):


@@ -47,7 +47,16 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
             elif type(msg) is mirai.Face:
                 msg_list.append(aiocqhttp.MessageSegment.face(msg.face_id))
             elif type(msg) is mirai.Voice:
-                msg_list.append(aiocqhttp.MessageSegment.record(msg.path))
+                arg = ''
+                if msg.base64:
+                    arg = msg.base64
+                    msg_list.append(aiocqhttp.MessageSegment.record(f"base64://{arg}"))
+                elif msg.url:
+                    arg = msg.url
+                    msg_list.append(aiocqhttp.MessageSegment.record(arg))
+                elif msg.path:
+                    arg = msg.path
+                    msg_list.append(aiocqhttp.MessageSegment.record(msg.path))
             elif type(msg) is forward.Forward:

                 for node in msg.node_list:
@@ -164,10 +173,11 @@ class AiocqhttpEventConverter(adapter.EventConverter):
         if event.message_type == "group":
             permission = "MEMBER"

-            if event.sender["role"] == "admin":
-                permission = "ADMINISTRATOR"
-            elif event.sender["role"] == "owner":
-                permission = "OWNER"
+            if "role" in event.sender:
+                if event.sender["role"] == "admin":
+                    permission = "ADMINISTRATOR"
+                elif event.sender["role"] == "owner":
+                    permission = "OWNER"

             converted_event = mirai.GroupMessage(
                 sender=mirai.models.entities.GroupMember(
                     id=event.sender["user_id"],  # message_seq 放哪?
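The fix above guards the role lookup because some OneBot implementations omit `role` from `event.sender`, which previously raised a validation error. The logic, factored into a standalone helper for illustration (the helper name is ours):

```python
# Stand-in for the defensive permission resolution in the aiocqhttp adapter:
# a sender dict with no "role" key now defaults to MEMBER.
def resolve_permission(sender: dict) -> str:
    permission = "MEMBER"
    if "role" in sender:
        if sender["role"] == "admin":
            permission = "ADMINISTRATOR"
        elif sender["role"] == "owner":
            permission = "OWNER"
    return permission

print(resolve_permission({"user_id": 12345}))  # MEMBER
print(resolve_permission({"role": "owner"}))   # OWNER
```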


@@ -3,16 +3,15 @@ from __future__ import annotations
 import logging
 import typing
 import datetime
-import asyncio
 import re
 import traceback
-import json
-import threading

 import mirai
 import botpy
 import botpy.message as botpy_message
 import botpy.types.message as botpy_message_type
+import pydantic
+import pydantic.networks

 from .. import adapter as adapter_model
 from ...pipeline.longtext.strategies import forward
@@ -23,10 +22,12 @@ from ...config import manager as cfg_mgr
 class OfficialGroupMessage(mirai.GroupMessage):
     pass

+class OfficialFriendMessage(mirai.FriendMessage):
+    pass

 event_handler_mapping = {
     mirai.GroupMessage: ["on_at_message_create", "on_group_at_message_create"],
-    mirai.FriendMessage: ["on_direct_message_create"],
+    mirai.FriendMessage: ["on_direct_message_create", "on_c2c_message_create"],
 }
@@ -193,12 +194,11 @@ class OfficialMessageConverter(adapter_model.MessageConverter):
     @staticmethod
     def extract_message_chain_from_obj(
-        message: typing.Union[botpy_message.Message, botpy_message.DirectMessage],
+        message: typing.Union[botpy_message.Message, botpy_message.DirectMessage, botpy_message.GroupMessage, botpy_message.C2CMessage],
         message_id: str = None,
         bot_account_id: int = 0,
     ) -> mirai.MessageChain:
         yiri_msg_list = []

         # 存id
         yiri_msg_list.append(
@@ -207,7 +207,7 @@ class OfficialMessageConverter(adapter_model.MessageConverter):
             )
         )

-        if type(message) is not botpy_message.DirectMessage:
+        if type(message) not in [botpy_message.DirectMessage, botpy_message.C2CMessage]:
             yiri_msg_list.append(mirai.At(target=bot_account_id))

         if hasattr(message, "mentions"):
@@ -218,7 +218,7 @@ class OfficialMessageConverter(adapter_model.MessageConverter):
                 yiri_msg_list.append(mirai.At(target=mention.id))

         for attachment in message.attachments:
-            if attachment.content_type == "image":
+            if attachment.content_type.startswith("image"):
                 yiri_msg_list.append(mirai.Image(url=attachment.url))
             else:
                 logging.warning(
@@ -256,7 +256,7 @@ class OfficialEventConverter(adapter_model.EventConverter):
     def target2yiri(
         self,
-        event: typing.Union[botpy_message.Message, botpy_message.DirectMessage]
+        event: typing.Union[botpy_message.Message, botpy_message.DirectMessage, botpy_message.GroupMessage, botpy_message.C2CMessage],
     ) -> mirai.Event:
         import mirai.models.entities as mirai_entities
@@ -296,7 +296,7 @@ class OfficialEventConverter(adapter_model.EventConverter):
                     ).timestamp()
                 ),
             )
-        elif type(event) == botpy_message.DirectMessage:  # 私聊,转私聊事件
+        elif type(event) == botpy_message.DirectMessage:  # 频道私聊,转私聊事件
             return mirai.FriendMessage(
                 sender=mirai_entities.Friend(
                     id=event.guild_id,
@@ -312,7 +312,7 @@ class OfficialEventConverter(adapter_model.EventConverter):
                     ).timestamp()
                 ),
             )
-        elif type(event) == botpy_message.GroupMessage:
+        elif type(event) == botpy_message.GroupMessage:  # 群聊,转群聊事件
             replacing_member_id = self.member_openid_mapping.save_openid(event.author.member_openid)
@@ -340,6 +340,25 @@ class OfficialEventConverter(adapter_model.EventConverter):
                     ).timestamp()
                 ),
             )
+        elif type(event) == botpy_message.C2CMessage:  # 私聊,转私聊事件
+            user_id_alter = self.member_openid_mapping.save_openid(event.author.user_openid)  # 实测这里的user_openid与group的member_openid是一样的
+
+            return OfficialFriendMessage(
+                sender=mirai_entities.Friend(
+                    id=user_id_alter,
+                    nickname=user_id_alter,
+                    remark=user_id_alter,
+                ),
+                message_chain=OfficialMessageConverter.extract_message_chain_from_obj(
+                    event, event.id
+                ),
+                time=int(
+                    datetime.datetime.strptime(
+                        event.timestamp, "%Y-%m-%dT%H:%M:%S%z"
+                    ).timestamp()
+                ),
+            )

 @adapter_model.adapter_class("qq-botpy")
@@ -369,6 +388,7 @@ class OfficialAdapter(adapter_model.MessageSourceAdapter):
     group_openid_mapping: OpenIDMapping[str, int] = None

     group_msg_seq = None
+    c2c_msg_seq = None

     def __init__(self, cfg: dict, ap: app.Application):
         """初始化适配器"""
@@ -376,6 +396,7 @@ class OfficialAdapter(adapter_model.MessageSourceAdapter):
         self.ap = ap

         self.group_msg_seq = 1
+        self.c2c_msg_seq = 1

         switchs = {}
@@ -455,18 +476,58 @@ class OfficialAdapter(adapter_model.MessageSourceAdapter):
                 ]
                 await self.bot.api.post_dms(**args)
             elif type(message_source) == OfficialGroupMessage:
-                if "image" in args or "file_image" in args:
+                if "file_image" in args:  # 暂不支持发送文件图片
                     continue
                 args["group_openid"] = self.group_openid_mapping.getkey(
                     message_source.sender.group.id
                 )
+
+                if "image" in args:
+                    uploadMedia = await self.bot.api.post_group_file(
+                        group_openid=args["group_openid"],
+                        file_type=1,
+                        url=str(args['image'])
+                    )
+                    del args['image']
+                    args['media'] = uploadMedia
+                    args['msg_type'] = 7
+
                 args["msg_id"] = cached_message_ids[
                     str(message_source.message_chain.message_id)
                 ]
                 args["msg_seq"] = self.group_msg_seq
                 self.group_msg_seq += 1
                 await self.bot.api.post_group_message(**args)
+            elif type(message_source) == OfficialFriendMessage:
+                if "file_image" in args:
+                    continue
+                args["openid"] = self.member_openid_mapping.getkey(
+                    message_source.sender.id
+                )
+
+                if "image" in args:
+                    uploadMedia = await self.bot.api.post_c2c_file(
+                        openid=args["openid"],
+                        file_type=1,
+                        url=str(args['image'])
+                    )
+                    del args['image']
+                    args['media'] = uploadMedia
+                    args['msg_type'] = 7
+
+                args["msg_id"] = cached_message_ids[
+                    str(message_source.message_chain.message_id)
+                ]
+                args["msg_seq"] = self.c2c_msg_seq
+                self.c2c_msg_seq += 1
+                await self.bot.api.post_c2c_message(**args)

     async def is_muted(self, group_id: int) -> bool:
         return False


@@ -3,6 +3,7 @@ from __future__ import annotations
 import typing
 import abc
 import pydantic
+import mirai

 from . import events
 from ..provider.tools import entities as tools_entities
@@ -165,11 +166,54 @@ class EventContext:
     }
     """

+    # ========== 插件可调用的 API ==========

     def add_return(self, key: str, ret):
         """添加返回值"""
         if key not in self.__return_value__:
             self.__return_value__[key] = []
         self.__return_value__[key].append(ret)

+    async def reply(self, message_chain: mirai.MessageChain):
+        """回复此次消息请求
+
+        Args:
+            message_chain (mirai.MessageChain): YiriMirai库的消息链,若用户使用的不是 YiriMirai 适配器,程序也能自动转换为目标消息链
+        """
+        await self.host.ap.platform_mgr.send(
+            event=self.event.query.message_event,
+            msg=message_chain,
+            adapter=self.event.query.adapter,
+        )
+
+    async def send_message(
+        self,
+        target_type: str,
+        target_id: str,
+        message: mirai.MessageChain
+    ):
+        """主动发送消息
+
+        Args:
+            target_type (str): 目标类型,`person`或`group`
+            target_id (str): 目标ID
+            message (mirai.MessageChain): YiriMirai库的消息链,若用户使用的不是 YiriMirai 适配器,程序也能自动转换为目标消息链
+        """
+        await self.event.query.adapter.send_message(
+            target_type=target_type,
+            target_id=target_id,
+            message=message
+        )
+
+    def prevent_postorder(self):
+        """阻止后续插件执行"""
+        self.__prevent_postorder__ = True
+
+    def prevent_default(self):
+        """阻止默认行为"""
+        self.__prevent_default__ = True
+
+    # ========== 以下是内部保留方法,插件不应调用 ==========

     def get_return(self, key: str) -> list:
         """获取key的所有返回值"""
@@ -183,14 +227,6 @@ class EventContext:
             return self.__return_value__[key][0]
         return None

-    def prevent_default(self):
-        """阻止默认行为"""
-        self.__prevent_default__ = True
-
-    def prevent_postorder(self):
-        """阻止后续插件执行"""
-        self.__prevent_postorder__ = True

     def is_prevented_default(self):
         """是否阻止默认行为"""
         return self.__prevent_default__
@@ -198,6 +234,7 @@ class EventContext:
     def is_prevented_postorder(self):
         """是否阻止后序插件执行"""
         return self.__prevent_postorder__

     def __init__(self, host: APIHost, event: events.BaseEventModel):


@@ -140,7 +140,7 @@ class PluginManager:
         for plugin in self.plugins:
             if plugin.enabled:
                 if event.__class__ in plugin.event_handlers:
-                    self.ap.logger.debug(f'插件 {plugin.plugin_name} 触发事件 {event.__class__.__name__}')
+                    self.ap.logger.debug(f'插件 {plugin.plugin_name} 处理事件 {event.__class__.__name__}')

                     is_prevented_default_before_call = ctx.is_prevented_default()
@@ -150,7 +150,7 @@ class PluginManager:
                         ctx
                     )
                 except Exception as e:
-                    self.ap.logger.error(f'插件 {plugin.plugin_name} 触发事件 {event.__class__.__name__} 时发生错误: {e}')
+                    self.ap.logger.error(f'插件 {plugin.plugin_name} 处理事件 {event.__class__.__name__} 时发生错误: {e}')
                     self.ap.logger.debug(f"Traceback: {traceback.format_exc()}")

                 emitted_plugins.append(plugin)


@@ -95,7 +95,7 @@ class Message(pydantic.BaseModel):
         for ce in self.content:
             if ce.type == 'text':
                 mc.append(mirai.Plain(ce.text))
-            elif ce.type == 'image':
+            elif ce.type == 'image_url':
                 if ce.image_url.url.startswith("http"):
                     mc.append(mirai.Image(url=ce.image_url.url))
                 else:  # base64


@@ -72,12 +72,13 @@ class AnthropicMessages(api.LLMAPIRequester):
             for i, ce in enumerate(m.content):
                 if ce.type == "image_url":
+                    base64_image, image_format = await image.qq_image_url_to_base64(ce.image_url.url)

                     alter_image_ele = {
                         "type": "image",
                         "source": {
                             "type": "base64",
-                            "media_type": "image/jpeg",
-                            "data": await image.qq_image_url_to_base64(ce.image_url.url)
+                            "media_type": f"image/{image_format}",
+                            "data": base64_image
                         }
                     }
                     msg_dict["content"][i] = alter_image_ele


@@ -55,6 +55,10 @@ class OpenAIChatCompletions(api.LLMAPIRequester):
     ) -> llm_entities.Message:
         chatcmpl_message = chat_completion.choices[0].message.dict()

+        # 确保 role 字段存在且不为 None
+        if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
+            chatcmpl_message['role'] = 'assistant'

         message = llm_entities.Message(**chatcmpl_message)

         return message
@@ -102,9 +106,16 @@ class OpenAIChatCompletions(api.LLMAPIRequester):
         messages: typing.List[llm_entities.Message],
         funcs: typing.List[tools_entities.LLMFunction] = None,
     ) -> llm_entities.Message:
-        req_messages = [  # req_messages 仅用于类内,外部同步由 query.messages 进行
-            m.dict(exclude_none=True) for m in messages
-        ]
+        req_messages = []  # req_messages 仅用于类内,外部同步由 query.messages 进行
+        for m in messages:
+            msg_dict = m.dict(exclude_none=True)
+            content = msg_dict.get("content")
+            if isinstance(content, list):
+                # 检查 content 列表中是否每个部分都是文本
+                if all(isinstance(part, dict) and part.get("type") == "text" for part in content):
+                    # 将所有文本部分合并为一个字符串
+                    msg_dict["content"] = "\n".join(part["text"] for part in content)
+            req_messages.append(msg_dict)

         try:
             return await self._closure(req_messages, model, funcs)
@@ -129,7 +140,5 @@ class OpenAIChatCompletions(api.LLMAPIRequester):
         self,
         original_url: str,
     ) -> str:
-        base64_image = await image.qq_image_url_to_base64(original_url)
-        return f"data:image/jpeg;base64,{base64_image}"
+        base64_image, image_format = await image.qq_image_url_to_base64(original_url)
+        return f"data:image/{image_format};base64,{base64_image}"
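The request-preparation change above collapses a content list whose parts are all text into one newline-joined string before sending it to the API. A sketch of that logic, factored into a standalone helper (the helper name is ours, for illustration):

```python
# Stand-in for the all-text content flattening added to chatcmpl.
def flatten_text_content(msg_dict: dict) -> dict:
    content = msg_dict.get("content")
    if isinstance(content, list):
        if all(isinstance(part, dict) and part.get("type") == "text" for part in content):
            msg_dict["content"] = "\n".join(part["text"] for part in content)
    return msg_dict

m = flatten_text_content({
    "role": "user",
    "content": [{"type": "text", "text": "hello"}, {"type": "text", "text": "world"}],
})
print(repr(m["content"]))  # 'hello\nworld'
```

Content lists containing non-text parts (e.g. `image_url`) are left untouched, so multimodal requests still go through as lists.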


@@ -0,0 +1,105 @@
from __future__ import annotations

import asyncio
import os
import typing
from typing import Union, Mapping, Any, AsyncIterator

import async_lru
import ollama

from .. import api, entities, errors
from ... import entities as llm_entities
from ...tools import entities as tools_entities
from ....core import app
from ....utils import image

REQUESTER_NAME: str = "ollama-chat"


@api.requester_class(REQUESTER_NAME)
class OllamaChatCompletions(api.LLMAPIRequester):
    """Ollama平台 ChatCompletion API请求器"""
    client: ollama.AsyncClient
    request_cfg: dict

    def __init__(self, ap: app.Application):
        super().__init__(ap)
        self.ap = ap
        self.request_cfg = self.ap.provider_cfg.data['requester'][REQUESTER_NAME]

    async def initialize(self):
        os.environ['OLLAMA_HOST'] = self.request_cfg['base-url']
        self.client = ollama.AsyncClient(
            timeout=self.request_cfg['timeout']
        )

    async def _req(self,
                   args: dict,
                   ) -> Union[Mapping[str, Any], AsyncIterator[Mapping[str, Any]]]:
        return await self.client.chat(
            **args
        )

    async def _closure(self, req_messages: list[dict], use_model: entities.LLMModelInfo,
                       user_funcs: list[tools_entities.LLMFunction] = None) -> (
            llm_entities.Message):
        args: Any = self.request_cfg['args'].copy()
        args["model"] = use_model.name if use_model.model_name is None else use_model.model_name

        messages: list[dict] = req_messages.copy()
        for msg in messages:
            if 'content' in msg and isinstance(msg["content"], list):
                text_content: list = []
                image_urls: list = []
                for me in msg["content"]:
                    if me["type"] == "text":
                        text_content.append(me["text"])
                    elif me["type"] == "image_url":
                        image_url = await self.get_base64_str(me["image_url"]['url'])
                        image_urls.append(image_url)
                msg["content"] = "\n".join(text_content)
                msg["images"] = [url.split(',')[1] for url in image_urls]
        args["messages"] = messages

        resp: Mapping[str, Any] | AsyncIterator[Mapping[str, Any]] = await self._req(args)
        message: llm_entities.Message = await self._make_msg(resp)
        return message

    async def _make_msg(
            self,
            chat_completions: Union[Mapping[str, Any], AsyncIterator[Mapping[str, Any]]]) -> llm_entities.Message:
        message: Any = chat_completions.pop('message', None)
        if message is None:
            raise ValueError("chat_completions must contain a 'message' field")

        message.update(chat_completions)
        ret_msg: llm_entities.Message = llm_entities.Message(**message)
        return ret_msg

    async def call(
        self,
        model: entities.LLMModelInfo,
        messages: typing.List[llm_entities.Message],
        funcs: typing.List[tools_entities.LLMFunction] = None,
    ) -> llm_entities.Message:
        req_messages: list = []
        for m in messages:
            msg_dict: dict = m.dict(exclude_none=True)
            content: Any = msg_dict.get("content")
            if isinstance(content, list):
                if all(isinstance(part, dict) and part.get('type') == 'text' for part in content):
                    msg_dict["content"] = "\n".join(part["text"] for part in content)
            req_messages.append(msg_dict)
        try:
            return await self._closure(req_messages, model)
        except asyncio.TimeoutError:
            raise errors.RequesterError('请求超时')

    @async_lru.alru_cache(maxsize=128)
    async def get_base64_str(
        self,
        original_url: str,
    ) -> str:
        base64_image, image_format = await image.qq_image_url_to_base64(original_url)
        return f"data:image/{image_format};base64,{base64_image}"


@@ -6,7 +6,7 @@ from . import entities
 from ...core import app
 from . import token, api
-from .apis import chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl
+from .apis import chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat

 FETCH_MODEL_LIST_URL = "https://api.qchatgpt.rockchin.top/api/v2/fetch/model_list"

pkg/provider/runner.py (new file)

@@ -0,0 +1,40 @@
from __future__ import annotations

import abc
import typing

from ..core import app, entities as core_entities
from . import entities as llm_entities


preregistered_runners: list[typing.Type[RequestRunner]] = []


def runner_class(name: str):
    """注册一个请求运行器
    """
    def decorator(cls: typing.Type[RequestRunner]) -> typing.Type[RequestRunner]:
        cls.name = name
        preregistered_runners.append(cls)
        return cls
    return decorator


class RequestRunner(abc.ABC):
    """请求运行器
    """

    name: str = None

    ap: app.Application

    def __init__(self, ap: app.Application):
        self.ap = ap

    async def initialize(self):
        pass

    @abc.abstractmethod
    async def run(self, query: core_entities.Query) -> typing.AsyncGenerator[llm_entities.Message, None]:
        """运行请求
        """
        pass
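The registry pattern introduced here can be re-created in a few lines: the decorator stamps the class with a name and appends it to a module-level list, and `RunnerManager` later selects by the name configured in `provider.json`. A minimal stand-in, with the manager's selection inlined:

```python
# Stand-in for the runner registration pattern in runner.py / runnermgr.py.
import abc

preregistered_runners = []

def runner_class(name: str):
    def decorator(cls):
        cls.name = name
        preregistered_runners.append(cls)
        return cls
    return decorator

class RequestRunner(abc.ABC):
    name: str = None

    @abc.abstractmethod
    async def run(self, query):
        ...

@runner_class("local-agent")
class DummyRunner(RequestRunner):
    async def run(self, query):
        yield query

# what RunnerManager.initialize does, reduced to one line
selected = next(r for r in preregistered_runners if r.name == "local-agent")
print(selected.__name__)  # DummyRunner
```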

pkg/provider/runnermgr.py (new file)

@@ -0,0 +1,27 @@
from __future__ import annotations

from . import runner
from ..core import app

from .runners import localagent


class RunnerManager:

    ap: app.Application

    using_runner: runner.RequestRunner

    def __init__(self, ap: app.Application):
        self.ap = ap

    async def initialize(self):

        for r in runner.preregistered_runners:
            if r.name == self.ap.provider_cfg.data['runner']:
                self.using_runner = r(self.ap)
                await self.using_runner.initialize()
                break

    def get_runner(self) -> runner.RequestRunner:
        return self.using_runner


@@ -0,0 +1,70 @@
from __future__ import annotations

import json
import typing

from .. import runner
from ...core import app, entities as core_entities
from .. import entities as llm_entities


@runner.runner_class("local-agent")
class LocalAgentRunner(runner.RequestRunner):
    """本地Agent请求运行器
    """

    async def run(self, query: core_entities.Query) -> typing.AsyncGenerator[llm_entities.Message, None]:
        """运行请求
        """
        await query.use_model.requester.preprocess(query)

        pending_tool_calls = []

        req_messages = query.prompt.messages.copy() + query.messages.copy() + [query.user_message]

        # 首次请求
        msg = await query.use_model.requester.call(query.use_model, req_messages, query.use_funcs)
        yield msg
        pending_tool_calls = msg.tool_calls
        req_messages.append(msg)

        # 持续请求,只要还有待处理的工具调用就继续处理调用
        while pending_tool_calls:
            for tool_call in pending_tool_calls:
                try:
                    func = tool_call.function
                    parameters = json.loads(func.arguments)
                    func_ret = await self.ap.tool_mgr.execute_func_call(
                        query, func.name, parameters
                    )
                    msg = llm_entities.Message(
                        role="tool", content=json.dumps(func_ret, ensure_ascii=False), tool_call_id=tool_call.id
                    )
                    yield msg
                    req_messages.append(msg)
                except Exception as e:
                    # 工具调用出错,添加一个报错信息到 req_messages
                    err_msg = llm_entities.Message(
                        role="tool", content=f"err: {e}", tool_call_id=tool_call.id
                    )
                    yield err_msg
                    req_messages.append(err_msg)

            # 处理完所有调用,再次请求
            msg = await query.use_model.requester.call(query.use_model, req_messages, query.use_funcs)
            yield msg
            pending_tool_calls = msg.tool_calls
            req_messages.append(msg)
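The agent loop above can be caricatured without any LLM dependency: keep calling the "model" while it reports pending tool calls, appending tool results to the transcript, then re-request. A toy stand-in with scripted model responses (all names here are ours, not the project's):

```python
# Toy stand-in for the local-agent loop; model_responses is a scripted
# substitute for query.use_model.requester.call.
def run_agent(model_responses, execute_tool):
    transcript = []
    replies = iter(model_responses)

    text, tool_calls = next(replies)        # first request
    transcript.append(text)

    while tool_calls:                       # keep going while calls pend
        for call in tool_calls:
            transcript.append(f"tool:{execute_tool(call)}")
        text, tool_calls = next(replies)    # re-request after a tool round
        transcript.append(text)
    return transcript

out = run_agent([("use tool", ["add"]), ("done", [])], lambda c: f"{c}-ok")
print(out)  # ['use tool', 'tool:add-ok', 'done']
```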


@@ -4,6 +4,7 @@ import json
 import typing
 import os
 import base64
+import logging

 import pydantic
 import requests
@@ -107,17 +108,20 @@ class AnnouncementManager:
     async def show_announcements(
         self
-    ):
+    ) -> typing.Tuple[str, int]:
         """显示公告"""
         try:
             announcements = await self.fetch_new()
+            ann_text = ""
             for ann in announcements:
-                self.ap.logger.info(f'[公告] {ann.time}: {ann.content}')
+                ann_text += f"[公告] {ann.time}: {ann.content}\n"

             if announcements:
                 await self.ap.ctr_mgr.main.post_announcement_showed(
                     ids=[item.id for item in announcements]
                 )
+
+            return ann_text, logging.INFO
         except Exception as e:
-            self.ap.logger.warning(f'获取公告时出错: {e}')
+            return f'获取公告时出错: {e}', logging.WARNING


@@ -1 +1 @@
-semantic_version = "v3.2.2"
+semantic_version = "v3.3.1.1"


@@ -8,14 +8,14 @@ import aiohttp

 async def qq_image_url_to_base64(
     image_url: str
-) -> str:
-    """将QQ图片URL转为base64
+) -> typing.Tuple[str, str]:
+    """将QQ图片URL转为base64,并返回图片格式

     Args:
         image_url (str): QQ图片URL

     Returns:
-        str: base64编码
+        typing.Tuple[str, str]: base64编码和图片格式
     """
     parsed = urlparse(image_url)
     query = parse_qs(parsed.query)
@@ -35,7 +35,12 @@ async def qq_image_url_to_base64(
         ) as resp:
             resp.raise_for_status()  # 检查HTTP错误
             file_bytes = await resp.read()
+            content_type = resp.headers.get('Content-Type')
+            if not content_type or not content_type.startswith('image/'):
+                image_format = 'jpeg'
+            else:
+                image_format = content_type.split('/')[-1]

     base64_str = base64.b64encode(file_bytes).decode()

-    return base64_str
+    return base64_str, image_format
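The format detection added above derives the image format from the response's `Content-Type` header, falling back to `jpeg` when the header is missing or not an image type. The same logic, factored into a standalone helper for illustration (the helper name is ours):

```python
# Stand-in for the Content-Type based format detection in
# qq_image_url_to_base64.
def image_format_from_content_type(content_type) -> str:
    # fall back to jpeg when the server sends no usable Content-Type
    if not content_type or not content_type.startswith('image/'):
        return 'jpeg'
    return content_type.split('/')[-1]

print(image_format_from_content_type('image/png'))  # png
print(image_format_from_content_type('text/html'))  # jpeg
print(image_format_from_content_type(None))         # jpeg
```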


@@ -1,6 +1,8 @@
 from __future__ import annotations

 import os
+import typing
+import logging
 import time

 import requests
@@ -213,11 +215,11 @@ class VersionManager:
     async def show_version_update(
         self
-    ):
+    ) -> typing.Tuple[str, int]:
         try:
             if await self.ap.ver_mgr.is_new_version_available():
-                self.ap.logger.info("有新版本可用,请使用 !update 命令更新")
+                return "有新版本可用,请使用管理员账号发送 !update 命令更新", logging.INFO
         except Exception as e:
-            self.ap.logger.warning(f"检查版本更新时出错: {e}")
+            return f"检查版本更新时出错: {e}", logging.WARNING


@@ -14,4 +14,5 @@ pydantic
 websockets
 urllib3
 psutil
 async-lru
+ollama


@@ -1,3 +1,6 @@
> [!WARNING]
> 此 Wiki 已弃用,所有文档已迁移到 [项目主页](https://qchatgpt.rockchin.top)
## 功能点列举 ## 功能点列举
<details> <details>


@@ -1,3 +1,6 @@
> [!WARNING]
> 此 Wiki 已弃用,所有文档已迁移到 [项目主页](https://qchatgpt.rockchin.top)
使用过程中的一些疑问,这里不是解决异常的地方,遇到异常请见`常见错误` 使用过程中的一些疑问,这里不是解决异常的地方,遇到异常请见`常见错误`
### ❓ 如何更新代码到最新版本? ### ❓ 如何更新代码到最新版本?

@@ -1 +1,4 @@
+> [!WARNING]
+> This Wiki is deprecated; all documentation has been migrated to the [project homepage](https://qchatgpt.rockchin.top)
+
 Search the [main repo issues](https://github.com/RockChinQ/QChatGPT/issues) and the [installer issues](https://github.com/RockChinQ/qcg-installer/issues)

@@ -1,3 +1,6 @@
+> [!WARNING]
+> This Wiki is deprecated; all documentation has been migrated to the [project homepage](https://qchatgpt.rockchin.top)
+
 The following covers technical details such as how QChatGPT is implemented; please read it carefully before contributing
 
 > Not updated for a long time and out of date; reading the source is recommended, ~~the comments are fairly complete~~

@@ -1,3 +1,6 @@
+> [!WARNING]
+> This Wiki is deprecated; all documentation has been migrated to the [project homepage](https://qchatgpt.rockchin.top)
+
 QChatGPT plugin usage Wiki
 
 ## Introduction

@@ -1,3 +1,6 @@
+> [!WARNING]
+> This Wiki is deprecated; all documentation has been migrated to the [project homepage](https://qchatgpt.rockchin.top)
+
 > In short, the same kind of thing as ChatGPT's official plugins
 
 Content functions are built on [GPT's Function Calling capability](https://platform.openai.com/docs/guides/gpt/function-calling): functions embedded in the conversation and invoked automatically by GPT.

@@ -1,3 +1,6 @@
+> [!WARNING]
+> This Wiki is deprecated; all documentation has been migrated to the [project homepage](https://qchatgpt.rockchin.top)
+
 QChatGPT plugin development Wiki
 
 > Please read the [plugin usage page](https://github.com/RockChinQ/QChatGPT/wiki/5-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8) first

@@ -1,3 +1,6 @@
+> [!WARNING]
+> This Wiki is deprecated; all documentation has been migrated to the [project homepage](https://qchatgpt.rockchin.top)
+
 ## What are the differences between the chat APIs?
 
 Because stability is a high priority, the project's mainline uses the GPT-3 model API, which is provided directly by OpenAI and is very stable.

@@ -1,3 +1,6 @@
+> [!WARNING]
+> This Wiki is deprecated; all documentation has been migrated to the [project homepage](https://qchatgpt.rockchin.top)
+
 # Configuring go-cqhttp for QQ login
 
 > If you are upgrading from an older version in order to use go-cqhttp, modify `config.py` following `config-template.py`: add the `msg_source_adapter` option and set it to `nakuru`, and add the `nakuru_config` field configured as described.

@@ -1,3 +1,7 @@
 {
-    "privilege": {}
+    "privilege": {},
+    "command-prefix": [
+        "!",
+        ""
+    ]
 }
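The new `command-prefix` list mixes a literal `!` with an empty string, and an empty prefix matches every message. A sketch of how such a list might be matched (the helper is hypothetical, not taken from the project; longer prefixes are checked first so `""` only wins as a catch-all):

```python
def match_prefix(text, prefixes):
    """Return the first matching command prefix, longest first,
    so an empty-string prefix acts as a fallback that matches anything."""
    for p in sorted(prefixes, key=len, reverse=True):
        if text.startswith(p):
            return p
    return None

prefixes = ["!", ""]
print(repr(match_prefix("!help", prefixes)))  # '!'
print(repr(match_prefix("hello", prefixes)))  # ''
```

With this ordering, `!help` is treated as an explicitly prefixed command while plain `hello` still matches via the empty prefix.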

@@ -37,11 +37,17 @@
             "base-url": "https://api.deepseek.com",
             "args": {},
             "timeout": 120
+        },
+        "ollama-chat": {
+            "base-url": "http://127.0.0.1:11434",
+            "args": {},
+            "timeout": 600
         }
     },
     "model": "gpt-3.5-turbo",
     "prompt-mode": "normal",
     "prompt": {
         "default": ""
-    }
+    },
+    "runner": "local-agent"
 }
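The new `ollama-chat` entry follows the same shape as the other requester blocks (`base-url`, `args`, `timeout`). The keys enclosing this hunk are outside the diff, so the sketch below parses just the entry itself to show the fields a consumer would read:

```python
import json

fragment = '''
{
    "ollama-chat": {
        "base-url": "http://127.0.0.1:11434",
        "args": {},
        "timeout": 600
    }
}
'''
entry = json.loads(fragment)["ollama-chat"]
print(entry["base-url"])  # http://127.0.0.1:11434
print(entry["timeout"])   # 600
```

The 600-second timeout is notably longer than the other requesters' 120 seconds, presumably because a locally hosted model can be much slower to respond.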