Mirror of https://github.com/langbot-app/LangBot.git (synced 2025-11-26 03:44:58 +08:00)

Compare commits (26 commits)
| SHA1 |
|---|
| 8af401eea4 |
| 446546b69f |
| 8a6d9d76da |
| 92acaf6c27 |
| 4d53b3cb06 |
| 7cad4ffa37 |
| b6f312325f |
| 43a6492cab |
| 92e3546e8a |
| 8a9000cc67 |
| 6e3514c0b2 |
| 2782c8cebe |
| 13e29a9966 |
| 601b0a8964 |
| 7c2ceb0aca |
| 42fabd5133 |
| 210a8856e2 |
| c531cb11af |
| 07e073f526 |
| c5457374a8 |
| 5198349591 |
| 8a4967525a |
| 30b068c6e2 |
| ea3fff59ac |
| b09ce8296f |
| f9d07779a9 |
.github/ISSUE_TEMPLATE/bug-report.yml (vendored, 16 changed lines)
@@ -3,22 +3,6 @@ description: 报错或漏洞请使用这个模板创建,不使用此模板创
 title: "[Bug]: "
 labels: ["bug?"]
 body:
-  - type: dropdown
-    attributes:
-      label: 消息平台适配器
-      description: "接入的消息平台类型"
-      options:
-        - 其他(或暂未使用)
-        - Nakuru(go-cqhttp)
-        - aiocqhttp(使用 OneBot 协议接入的)
-        - qq-botpy(QQ官方API WebSocket)
-        - qqofficial(QQ官方API Webhook)
-        - lark(飞书)
-        - wecom(企业微信)
-        - gewechat(个人微信)
-        - discord
-    validations:
-      required: true
   - type: input
     attributes:
       label: 运行环境
.gitignore (vendored, 4 changed lines)
@@ -38,5 +38,5 @@ botpy.log*
 /poc
 /libs/wecom_api/test.py
 /venv
-/jp-tyo-churros-05.rockchin.top
-test.py
+test.py
+/web_ui
README.md:
@@ -8,7 +8,7 @@

 <a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

-<a href="https://docs.langbot.app">项目主页</a> |
+<a href="https://langbot.app">项目主页</a> |
 <a href="https://docs.langbot.app/insight/intro.html">功能介绍</a> |
 <a href="https://docs.langbot.app/insight/guide.html">部署文档</a> |
 <a href="https://docs.langbot.app/usage/faq.html">常见问题</a> |
@@ -87,6 +87,7 @@
 | QQ 个人号 | ✅ | QQ 个人号私聊、群聊 |
 | QQ 官方机器人 | ✅ | QQ 官方机器人,支持频道、私聊、群聊 |
 | 企业微信 | ✅ | |
+| 企微对外客服 | ✅ | |
 | 个人微信 | ✅ | 使用 [Gewechat](https://github.com/Devo919/Gewechat) 接入 |
 | 微信公众号 | ✅ | |
 | 飞书 | ✅ | |
@@ -109,6 +110,7 @@
 | [Anthropic](https://www.anthropic.com/) | ✅ | |
 | [xAI](https://x.ai/) | ✅ | |
 | [智谱AI](https://open.bigmodel.cn/) | ✅ | |
+| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
 | [Dify](https://dify.ai) | ✅ | LLMOps 平台 |
 | [Ollama](https://ollama.com/) | ✅ | 本地大模型运行平台 |
 | [LMStudio](https://lmstudio.ai/) | ✅ | 本地大模型运行平台 |
@@ -116,6 +118,7 @@
 | [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
 | [阿里云百炼](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
 | [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
+| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
 | [MCP](https://modelcontextprotocol.io/) | ✅ | 支持通过 MCP 协议获取工具 |

 ### TTS
README_EN.md:
@@ -7,7 +7,7 @@

 <a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

-<a href="https://docs.langbot.app">Home</a> |
+<a href="https://langbot.app">Home</a> |
 <a href="https://docs.langbot.app/insight/intro.html">Features</a> |
 <a href="https://docs.langbot.app/insight/guide.html">Deployment</a> |
 <a href="https://docs.langbot.app/usage/faq.html">FAQ</a> |
@@ -85,6 +85,7 @@ Directly use the released version to run, see the [Manual Deployment](https://do
 | Personal QQ | ✅ | |
 | QQ Official API | ✅ | |
 | WeCom | ✅ | |
+| WeComCS | ✅ | |
 | Personal WeChat | ✅ | Use [Gewechat](https://github.com/Devo919/Gewechat) to access |
 | Lark | ✅ | |
 | DingTalk | ✅ | |
@@ -107,12 +108,14 @@ Directly use the released version to run, see the [Manual Deployment](https://do
 | [xAI](https://x.ai/) | ✅ | |
 | [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
 | [Dify](https://dify.ai) | ✅ | LLMOps platform |
+| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
 | [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |
 | [LMStudio](https://lmstudio.ai/) | ✅ | Local LLM running platform |
 | [GiteeAI](https://ai.gitee.com/) | ✅ | LLM interface gateway(MaaS) |
 | [SiliconFlow](https://siliconflow.cn/) | ✅ | LLM gateway(MaaS) |
 | [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLM gateway(MaaS), LLMOps platform |
 | [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLM gateway(MaaS), LLMOps platform |
+| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLM gateway(MaaS) |
 | [MCP](https://modelcontextprotocol.io/) | ✅ | Support tool access through MCP protocol |

 ## 🤝 Community Contribution
README_JP.md:
@@ -7,7 +7,7 @@

 <a href="https://trendshift.io/repositories/12901" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12901" alt="RockChinQ%2FLangBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

-<a href="https://docs.langbot.app">ホーム</a> |
+<a href="https://langbot.app">ホーム</a> |
 <a href="https://docs.langbot.app/insight/intro.html">機能</a> |
 <a href="https://docs.langbot.app/insight/guide.html">デプロイ</a> |
 <a href="https://docs.langbot.app/usage/faq.html">FAQ</a> |
@@ -84,6 +84,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 | 個人QQ | ✅ | |
 | QQ公式API | ✅ | |
 | WeCom | ✅ | |
+| WeComCS | ✅ | |
 | 個人WeChat | ✅ | [Gewechat](https://github.com/Devo919/Gewechat)を使用して接続 |
 | Lark | ✅ | |
 | DingTalk | ✅ | |
@@ -105,6 +106,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 | [Anthropic](https://www.anthropic.com/) | ✅ | |
 | [xAI](https://x.ai/) | ✅ | |
 | [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
+| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
 | [Dify](https://dify.ai) | ✅ | LLMOpsプラットフォーム |
 | [Ollama](https://ollama.com/) | ✅ | ローカルLLM実行プラットフォーム |
 | [LMStudio](https://lmstudio.ai/) | ✅ | ローカルLLM実行プラットフォーム |
@@ -112,6 +114,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 | [SiliconFlow](https://siliconflow.cn/) | ✅ | LLMゲートウェイ(MaaS) |
 | [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLMゲートウェイ(MaaS), LLMOpsプラットフォーム |
 | [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLMゲートウェイ(MaaS), LLMOpsプラットフォーム |
+| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLMゲートウェイ(MaaS) |
 | [MCP](https://modelcontextprotocol.io/) | ✅ | MCPプロトコルをサポート |

 ## 🤝 コミュニティ貢献
libs/wecom_customer_service_api/__init__.py (new file, 0 lines)

libs/wecom_customer_service_api/api.py (new file, 337 lines)
@@ -0,0 +1,337 @@
from quart import request
from ..wecom_api.WXBizMsgCrypt3 import WXBizMsgCrypt
import base64
import binascii
import httpx
import traceback
from quart import Quart
import xml.etree.ElementTree as ET
from typing import Callable, Dict, Any
from .wecomcsevent import WecomCSEvent
from pkg.platform.types import events as platform_events, message as platform_message
import aiofiles


class WecomCSClient():
    def __init__(self,corpid:str,secret:str,token:str,EncodingAESKey:str):
        self.corpid = corpid
        self.secret = secret
        self.access_token_for_contacts =''
        self.token = token
        self.aes = EncodingAESKey
        self.base_url = 'https://qyapi.weixin.qq.com/cgi-bin'
        self.access_token = ''
        self.app = Quart(__name__)
        self.wxcpt = WXBizMsgCrypt(self.token, self.aes, self.corpid)
        self.app.add_url_rule('/callback/command', 'handle_callback', self.handle_callback_request, methods=['GET', 'POST'])
        self._message_handlers = {
            "example":[],
        }

    async def get_pic_url(self, media_id: str):
        if not await self.check_access_token():
            self.access_token = await self.get_access_token(self.secret)

        url = f"{self.base_url}/media/get?access_token={self.access_token}&media_id={media_id}"

        async with httpx.AsyncClient() as client:
            response = await client.get(url)
            if response.headers.get("Content-Type", "").startswith("application/json"):
                data = response.json()
                if data.get('errcode') in [40014, 42001]:
                    self.access_token = await self.get_access_token(self.secret)
                    return await self.get_pic_url(media_id)
                else:
                    raise Exception("Failed to get image: " + str(data))

            # 否则是图片,转成 base64
            image_bytes = response.content
            content_type = response.headers.get("Content-Type", "")
            base64_str = base64.b64encode(image_bytes).decode("utf-8")
            base64_str = f"data:{content_type};base64,{base64_str}"
            return base64_str


    #access——token操作
    async def check_access_token(self):
        return bool(self.access_token and self.access_token.strip())

    async def check_access_token_for_contacts(self):
        return bool(self.access_token_for_contacts and self.access_token_for_contacts.strip())

    async def get_access_token(self,secret):
        url = f'https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid={self.corpid}&corpsecret={secret}'
        async with httpx.AsyncClient() as client:
            response = await client.get(url)
            data = response.json()
            if 'access_token' in data:
                return data['access_token']
            else:
                raise Exception(f"未获取access token: {data}")

    async def get_detailed_message_list(self,xml_msg:str):
        # 在本方法中解析消息,并且获得消息的具体内容
        root = ET.fromstring(xml_msg)
        token = root.find("Token").text
        open_kfid = root.find("OpenKfId").text

        # if open_kfid in self.openkfid_list:
        #     return None
        # else:
        #     self.openkfid_list.append(open_kfid)

        if not await self.check_access_token():
            self.access_token = await self.get_access_token(self.secret)

        url = self.base_url+'/kf/sync_msg?access_token='+ self.access_token
        async with httpx.AsyncClient() as client:
            params = {
                "token": token,
                "voice_format": 0,
                "open_kfid": open_kfid,
            }
            response = await client.post(url,json=params)
            data = response.json()
            if data['errcode'] != 0:
                raise Exception("Failed to get message")
            if data['errcode'] == 40014 or data['errcode'] == 42001:
                self.access_token = await self.get_access_token(self.secret)
                return await self.get_detailed_message_list(xml_msg)
            last_msg_data = data['msg_list'][-1]
            open_kfid = last_msg_data.get("open_kfid")
            # 进行获取图片操作
            if last_msg_data.get("msgtype") == "image":
                media_id = last_msg_data.get("image").get("media_id")
                picurl = await self.get_pic_url(media_id)
                last_msg_data["picurl"] = picurl
            # await self.change_service_status(userid=external_userid,openkfid=open_kfid,servicer=servicer)
            return last_msg_data


    async def change_service_status(self,userid:str,openkfid:str,servicer:str):
        if not await self.check_access_token():
            self.access_token = await self.get_access_token(self.secret)
        url = self.base_url+"/kf/service_state/get?access_token="+self.access_token
        async with httpx.AsyncClient() as client:
            params = {
                "open_kfid" : openkfid,
                "external_userid" : userid,
                "service_state" : 1,
                "servicer_userid" : servicer,
            }
            response = await client.post(url,json=params)
            data = response.json()
            if data['errcode'] == 40014 or data['errcode'] == 42001:
                self.access_token = await self.get_access_token(self.secret)
                return await self.change_service_status(userid,openkfid)
            if data['errcode'] != 0:
                raise Exception("Failed to change service status: "+str(data))


    async def send_image(self,user_id:str,agent_id:int,media_id:str):
        if not await self.check_access_token():
            self.access_token = await self.get_access_token(self.secret)
        url = self.base_url+'/media/upload?access_token='+self.access_token
        async with httpx.AsyncClient() as client:
            params = {
                "touser" : user_id,
                "toparty" : "",
                "totag":"",
                "agentid" : agent_id,
                "msgtype" : "image",
                "image" : {
                    "media_id" : media_id,
                },
                "safe":0,
                "enable_id_trans": 0,
                "enable_duplicate_check": 0,
                "duplicate_check_interval": 1800
            }
            try:
                response = await client.post(url,json=params)
                data = response.json()
            except Exception as e:
                raise Exception("Failed to send image: "+str(e))

            # 企业微信错误码40014和42001,代表accesstoken问题
            if data['errcode'] == 40014 or data['errcode'] == 42001:
                self.access_token = await self.get_access_token(self.secret)
                return await self.send_image(user_id,agent_id,media_id)

            if data['errcode'] != 0:
                raise Exception("Failed to send image: "+str(data))


    async def send_text_msg(self, open_kfid: str, external_userid: str, msgid: str,content:str):
        if not await self.check_access_token():
            self.access_token = await self.get_access_token(self.secret)

        url = f"https://qyapi.weixin.qq.com/cgi-bin/kf/send_msg?access_token={self.access_token}"

        payload = {
            "touser": external_userid,
            "open_kfid": open_kfid,
            "msgid": msgid,
            "msgtype": "text",
            "text": {
                "content": content,
            }
        }

        async with httpx.AsyncClient() as client:
            response = await client.post(url, json=payload)

            data = response.json()
            if data.get("errcode") != 0:
                raise Exception(f"消息发送失败: {data}")
            return data


    async def handle_callback_request(self):
        """
        处理回调请求,包括 GET 验证和 POST 消息接收。
        """
        try:

            msg_signature = request.args.get("msg_signature")
            timestamp = request.args.get("timestamp")
            nonce = request.args.get("nonce")

            if request.method == "GET":
                echostr = request.args.get("echostr")
                ret, reply_echo_str = self.wxcpt.VerifyURL(msg_signature, timestamp, nonce, echostr)
                if ret != 0:
                    raise Exception(f"验证失败,错误码: {ret}")
                return reply_echo_str

            elif request.method == "POST":
                encrypt_msg = await request.data
                ret, xml_msg = self.wxcpt.DecryptMsg(encrypt_msg, msg_signature, timestamp, nonce)
                if ret != 0:
                    raise Exception(f"消息解密失败,错误码: {ret}")

                # 解析消息并处理
                message_data = await self.get_detailed_message_list(xml_msg)
                if message_data is not None:
                    event = WecomCSEvent.from_payload(message_data)
                    if event:
                        await self._handle_message(event)

                return "success"
        except Exception as e:
            traceback.print_exc()
            return f"Error processing request: {str(e)}", 400

    async def run_task(self, host: str, port: int, *args, **kwargs):
        """
        启动 Quart 应用。
        """
        await self.app.run_task(host=host, port=port, *args, **kwargs)

    def on_message(self, msg_type: str):
        """
        注册消息类型处理器。
        """
        def decorator(func: Callable[[WecomCSEvent], None]):
            if msg_type not in self._message_handlers:
                self._message_handlers[msg_type] = []
            self._message_handlers[msg_type].append(func)
            return func
        return decorator

    async def _handle_message(self, event: WecomCSEvent):
        """
        处理消息事件。
        """
        msg_type = event.type
        if msg_type in self._message_handlers:
            for handler in self._message_handlers[msg_type]:
                await handler(event)


    @staticmethod
    async def get_image_type(image_bytes: bytes) -> str:
        """
        通过图片的magic numbers判断图片类型
        """
        magic_numbers = {
            b'\xFF\xD8\xFF': 'jpg',
            b'\x89\x50\x4E\x47': 'png',
            b'\x47\x49\x46': 'gif',
            b'\x42\x4D': 'bmp',
            b'\x00\x00\x01\x00': 'ico'
        }

        for magic, ext in magic_numbers.items():
            if image_bytes.startswith(magic):
                return ext
        return 'jpg'  # 默认返回jpg


    async def upload_to_work(self, image: platform_message.Image):
        """
        获取 media_id
        """
        if not await self.check_access_token():
            self.access_token = await self.get_access_token(self.secret)

        url = self.base_url + '/media/upload?access_token=' + self.access_token + '&type=file'
        file_bytes = None
        file_name = "uploaded_file.txt"

        # 获取文件的二进制数据
        if image.path:
            async with aiofiles.open(image.path, 'rb') as f:
                file_bytes = await f.read()
            file_name = image.path.split('/')[-1]
        elif image.url:
            file_bytes = await self.download_image_to_bytes(image.url)
            file_name = image.url.split('/')[-1]
        elif image.base64:
            try:
                base64_data = image.base64
                if ',' in base64_data:
                    base64_data = base64_data.split(',', 1)[1]
                padding = 4 - (len(base64_data) % 4) if len(base64_data) % 4 else 0
                padded_base64 = base64_data + '=' * padding
                file_bytes = base64.b64decode(padded_base64)
            except binascii.Error as e:
                raise ValueError(f"Invalid base64 string: {str(e)}")
        else:
            raise ValueError("image对象出错")

        # 设置 multipart/form-data 格式的文件
        boundary = "-------------------------acebdf13572468"
        headers = {
            'Content-Type': f'multipart/form-data; boundary={boundary}'
        }
        body = (
            f"--{boundary}\r\n"
            f"Content-Disposition: form-data; name=\"media\"; filename=\"{file_name}\"; filelength={len(file_bytes)}\r\n"
            f"Content-Type: application/octet-stream\r\n\r\n"
        ).encode('utf-8') + file_bytes + f"\r\n--{boundary}--\r\n".encode('utf-8')

        # 上传文件
        async with httpx.AsyncClient() as client:
            response = await client.post(url, headers=headers, content=body)
            data = response.json()
            if data['errcode'] == 40014 or data['errcode'] == 42001:
                self.access_token = await self.get_access_token(self.secret)
                media_id = await self.upload_to_work(image)
            if data.get('errcode', 0) != 0:
                raise Exception("failed to upload file")

            media_id = data.get('media_id')
            return media_id

    async def download_image_to_bytes(self,url:str) -> bytes:
        async with httpx.AsyncClient() as client:
            response = await client.get(url)
            response.raise_for_status()
            return response.content

    #进行media_id的获取
    async def get_media_id(self, image: platform_message.Image):

        media_id = await self.upload_to_work(image=image)
        return media_id
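As an aside, here is a minimal, hypothetical sketch (not part of the diff) of how the new WecomCSClient might be wired up end to end. It only uses methods that appear in api.py above (on_message, send_text_msg, run_task); the credentials and port are placeholders.

```python
import asyncio

from libs.wecom_customer_service_api.api import WecomCSClient
from libs.wecom_customer_service_api.wecomcsevent import WecomCSEvent

# Placeholder credentials; real values come from the WeCom customer-service console.
client = WecomCSClient(corpid="CORPID", secret="SECRET", token="TOKEN", EncodingAESKey="AES_KEY")


@client.on_message("text")
async def echo(event: WecomCSEvent):
    # receiver_id / user_id / message_id map to open_kfid / external_userid / msgid.
    await client.send_text_msg(
        open_kfid=event.receiver_id,
        external_userid=event.user_id,
        msgid=event.message_id,
        content=f"echo: {event.message}",
    )


# Serves the /callback/command route registered in __init__ on the configured port.
asyncio.run(client.run_task(host="0.0.0.0", port=2289))
```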
libs/wecom_customer_service_api/wecomcsevent.py (new file, 134 lines)
@@ -0,0 +1,134 @@
from typing import Dict, Any, Optional


class WecomCSEvent(dict):
    """
    封装从企业微信收到的事件数据对象(字典),提供属性以获取其中的字段。

    除 `type` 和 `detail_type` 属性对于任何事件都有效外,其它属性是否存在(若不存在则返回 `None`)依事件类型不同而不同。
    """

    @staticmethod
    def from_payload(payload: Dict[str, Any]) -> Optional["WecomCSEvent"]:
        """
        从企业微信(客服会话)事件数据构造 `WecomEvent` 对象。

        Args:
            payload (Dict[str, Any]): 解密后的企业微信事件数据。

        Returns:
            Optional[WecomEvent]: 如果事件数据合法,则返回 WecomEvent 对象;否则返回 None。
        """
        try:
            event = WecomCSEvent(payload)
            _ = event.type,
            return event
        except KeyError:
            return None

    @property
    def type(self) -> str:
        """
        事件类型,例如 "message"、"event"、"text" 等。

        Returns:
            str: 事件类型。
        """
        return self.get("msgtype", "")

    @property
    def user_id(self) -> Optional[str]:
        """
        用户 ID,例如消息的发送者或事件的触发者。

        Returns:
            Optional[str]: 用户 ID。
        """
        return self.get("external_userid")

    @property
    def receiver_id(self) -> Optional[str]:
        """
        接收者 ID,例如机器人自身的企业微信 ID。

        Returns:
            Optional[str]: 接收者 ID。
        """
        return self.get("open_kfid","")

    @property
    def picurl(self) -> Optional[str]:
        """
        图片 URL,仅在图片消息中存在。
        base64格式
        Returns:
            Optional[str]: 图片 URL。
        """

        return self.get("picurl","")

    @property
    def message_id(self) -> Optional[str]:
        """
        消息 ID,仅在消息类型事件中存在。

        Returns:
            Optional[str]: 消息 ID。
        """
        return self.get("msgid")

    @property
    def message(self) -> Optional[str]:
        """
        消息内容,仅在消息类型事件中存在。

        Returns:
            Optional[str]: 消息内容。
        """
        if self.get("msgtype") == 'text':
            return self.get("text").get("content","")
        else:
            return None


    @property
    def timestamp(self) -> Optional[int]:
        """
        事件发生的时间戳。

        Returns:
            Optional[int]: 时间戳。
        """
        return self.get("send_time")


    def __getattr__(self, key: str) -> Optional[Any]:
        """
        允许通过属性访问数据中的任意字段。

        Args:
            key (str): 字段名。

        Returns:
            Optional[Any]: 字段值。
        """
        return self.get(key)

    def __setattr__(self, key: str, value: Any) -> None:
        """
        允许通过属性设置数据中的任意字段。

        Args:
            key (str): 字段名。
            value (Any): 字段值。
        """
        self[key] = value

    def __repr__(self) -> str:
        """
        生成事件对象的字符串表示。

        Returns:
            str: 字符串表示。
        """
        return f"<WecomEvent {super().__repr__()}>"
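For orientation only, a hedged example of the payload shape this wrapper expects, using the field names read by the properties above (msgtype, text.content, external_userid, open_kfid, msgid, send_time); the values are invented.

```python
from libs.wecom_customer_service_api.wecomcsevent import WecomCSEvent

# Hypothetical decrypted kf/sync_msg record; keys follow the properties defined above.
payload = {
    "msgtype": "text",
    "text": {"content": "hello"},
    "external_userid": "wm_user_123",
    "open_kfid": "kf_456",
    "msgid": "MSGID_1",
    "send_time": 1700000000,
}

event = WecomCSEvent.from_payload(payload)
assert event is not None
print(event.type, event.user_id, event.message)  # -> text wm_user_123 hello
```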
pkg/core/migrations/m039_modelscope_cfg_completion.py (new file, 30 lines)
@@ -0,0 +1,30 @@
from __future__ import annotations

from .. import migration


@migration.migration_class("modelscope-config-completion", 39)
class ModelScopeConfigCompletionMigration(migration.Migration):
    """ModelScope配置迁移
    """

    async def need_migrate(self) -> bool:
        """判断当前环境是否需要运行此迁移
        """
        return 'modelscope-chat-completions' not in self.ap.provider_cfg.data['requester'] \
            or 'modelscope' not in self.ap.provider_cfg.data['keys']

    async def run(self):
        """执行迁移
        """
        if 'modelscope-chat-completions' not in self.ap.provider_cfg.data['requester']:
            self.ap.provider_cfg.data['requester']['modelscope-chat-completions'] = {
                'base-url': 'https://api-inference.modelscope.cn/v1',
                'args': {},
                'timeout': 120,
            }

        if 'modelscope' not in self.ap.provider_cfg.data['keys']:
            self.ap.provider_cfg.data['keys']['modelscope'] = []

        await self.ap.provider_cfg.dump_config()
pkg/core/migrations/m040_ppio_config.py (new file, 30 lines)
@@ -0,0 +1,30 @@
from __future__ import annotations

from .. import migration


@migration.migration_class("ppio-config", 40)
class PPIOConfigMigration(migration.Migration):
    """PPIO配置迁移
    """

    async def need_migrate(self) -> bool:
        """判断当前环境是否需要运行此迁移
        """
        return 'ppio-chat-completions' not in self.ap.provider_cfg.data['requester'] \
            or 'ppio' not in self.ap.provider_cfg.data['keys']

    async def run(self):
        """执行迁移
        """
        if 'ppio-chat-completions' not in self.ap.provider_cfg.data['requester']:
            self.ap.provider_cfg.data['requester']['ppio-chat-completions'] = {
                'base-url': 'https://api.ppinfra.com/v3/openai',
                'args': {},
                'timeout': 120,
            }

        if 'ppio' not in self.ap.provider_cfg.data['keys']:
            self.ap.provider_cfg.data['keys']['ppio'] = []

        await self.ap.provider_cfg.dump_config()
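Both migrations above follow the same check-then-fill pattern; below is a rough standalone sketch of the config state they leave behind, with a plain dict standing in for self.ap.provider_cfg.data.

```python
# Illustrative only: simulate the effect of m039/m040 on a provider config dict.
provider_cfg = {"requester": {}, "keys": {}}

for name, base_url in [
    ("modelscope-chat-completions", "https://api-inference.modelscope.cn/v1"),
    ("ppio-chat-completions", "https://api.ppinfra.com/v3/openai"),
]:
    # Only filled in when missing, mirroring need_migrate()/run().
    provider_cfg["requester"].setdefault(name, {"base-url": base_url, "args": {}, "timeout": 120})

provider_cfg["keys"].setdefault("modelscope", [])
provider_cfg["keys"].setdefault("ppio", [])
```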
@@ -12,7 +12,7 @@ from ..migrations import m020_wecom_config, m021_lark_config, m022_lmstudio_conf
 from ..migrations import m026_qqofficial_config, m027_wx_official_account_config, m028_aliyun_requester_config
 from ..migrations import m029_dashscope_app_api_config, m030_lark_config_cmpl, m031_dingtalk_config, m032_volcark_config
 from ..migrations import m033_dify_thinking_config, m034_gewechat_file_url_config, m035_wxoa_mode, m036_wxoa_loading_message
-from ..migrations import m037_mcp_config, m038_tg_dingtalk_markdown
+from ..migrations import m037_mcp_config, m038_tg_dingtalk_markdown, m039_modelscope_cfg_completion, m040_ppio_config


 @stage.stage_class("MigrationStage")
@@ -343,7 +343,6 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
             type = context.header.event_type

             if 'url_verification' == type:
-                print(data.get("challenge"))
                 # todo 验证verification token
                 return {
                     "challenge": data.get("challenge")
@@ -127,7 +127,7 @@ class TelegramEventConverter(adapter.EventConverter):
                 time=event.message.date.timestamp(),
                 source_platform_object=event
             )
-        elif event.effective_chat.type == 'group':
+        elif event.effective_chat.type == 'group' or 'supergroup' :
             return platform_events.GroupMessage(
                 sender=platform_entities.GroupMember(
                     id=event.effective_chat.id,
@@ -202,9 +202,12 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):

         for component in components:
             if component['type'] == 'text':
-                content = telegramify_markdown.markdownify(
-                    content= component['text'],
-                )
+                if self.config['markdown_card'] is True:
+                    content = telegramify_markdown.markdownify(
+                        content= component['text'],
+                    )
+                else:
+                    content = component['text']
                 args = {
                     "chat_id": message_source.source_platform_object.effective_chat.id,
                     "text": content,
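The Telegram change gates Markdown conversion behind the adapter's markdown_card switch; here is a small sketch of that branch in isolation (telegramify_markdown is the library the adapter already imports; the config dict is a hypothetical stand-in for self.config).

```python
import telegramify_markdown

config = {"markdown_card": False}  # hypothetical stand-in for the adapter config
component = {"type": "text", "text": "*hello*"}

if config["markdown_card"] is True:
    # Convert to Telegram-flavoured Markdown before sending, as the adapter now does.
    content = telegramify_markdown.markdownify(content=component["text"])
else:
    # Otherwise the raw text is sent unchanged.
    content = component["text"]
```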
pkg/platform/sources/wecomcs.py (new file, 223 lines)
@@ -0,0 +1,223 @@
from __future__ import annotations
import typing
import asyncio
import traceback

import datetime

from libs.wecom_customer_service_api.api import WecomCSClient
from pkg.platform.adapter import MessagePlatformAdapter
from pkg.platform.types import events as platform_events, message as platform_message
from libs.wecom_customer_service_api.wecomcsevent import WecomCSEvent
from pkg.core import app
from .. import adapter
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...command.errors import ParamNotEnoughError
from ...utils import image

class WecomMessageConverter(adapter.MessageConverter):

    @staticmethod
    async def yiri2target(
        message_chain: platform_message.MessageChain, bot: WecomCSClient
    ):
        content_list = []

        for msg in message_chain:
            if type(msg) is platform_message.Plain:
                content_list.append({
                    "type": "text",
                    "content": msg.text,
                })
            elif type(msg) is platform_message.Image:
                content_list.append({
                    "type": "image",
                    "media_id": await bot.get_media_id(msg),
                })
            elif type(msg) is platform_message.Forward:
                for node in msg.node_list:
                    content_list.extend((await WecomMessageConverter.yiri2target(node.message_chain, bot)))
            else:
                content_list.append({
                    "type": "text",
                    "content": str(msg),
                })

        return content_list

    @staticmethod
    async def target2yiri(message: str, message_id: int = -1):
        yiri_msg_list = []
        yiri_msg_list.append(
            platform_message.Source(id=message_id, time=datetime.datetime.now())
        )

        yiri_msg_list.append(platform_message.Plain(text=message))
        chain = platform_message.MessageChain(yiri_msg_list)

        return chain

    @staticmethod
    async def target2yiri_image(picurl: str, message_id: int = -1):
        yiri_msg_list = []
        yiri_msg_list.append(
            platform_message.Source(id=message_id, time=datetime.datetime.now())
        )
        yiri_msg_list.append(platform_message.Image(base64=picurl))
        chain = platform_message.MessageChain(yiri_msg_list)

        return chain


class WecomEventConverter:

    @staticmethod
    async def yiri2target(
        event: platform_events.Event, bot_account_id: int, bot: WecomCSClient
    ) -> WecomCSEvent:
        # only for extracting user information

        if type(event) is platform_events.GroupMessage:
            pass

        if type(event) is platform_events.FriendMessage:
            return event.source_platform_object

    @staticmethod
    async def target2yiri(event: WecomCSEvent):
        """
        将 WecomEvent 转换为平台的 FriendMessage 对象。

        Args:
            event (WecomEvent): 企业微信客服事件。

        Returns:
            platform_events.FriendMessage: 转换后的 FriendMessage 对象。
        """
        # 转换消息链
        if event.type == "text":
            yiri_chain = await WecomMessageConverter.target2yiri(
                event.message, event.message_id
            )
            friend = platform_entities.Friend(
                id=f"u{event.user_id}",
                nickname=str(event.user_id),
                remark="",
            )

            return platform_events.FriendMessage(
                sender=friend, message_chain=yiri_chain, time=event.timestamp, source_platform_object=event
            )
        elif event.type == "image":
            friend = platform_entities.Friend(
                id=f"u{event.user_id}",
                nickname=str(event.user_id),
                remark="",
            )

            yiri_chain = await WecomMessageConverter.target2yiri_image(
                picurl=event.picurl, message_id=event.message_id
            )

            return platform_events.FriendMessage(
                sender=friend, message_chain=yiri_chain, time=event.timestamp, source_platform_object=event
            )


class WecomCSAdapter(adapter.MessagePlatformAdapter):

    bot: WecomCSClient
    ap: app.Application
    bot_account_id: str
    message_converter: WecomMessageConverter = WecomMessageConverter()
    event_converter: WecomEventConverter = WecomEventConverter()
    config: dict

    def __init__(self, config: dict, ap: app.Application):
        self.config = config

        self.ap = ap

        required_keys = [
            "corpid",
            "secret",
            "token",
            "EncodingAESKey",
        ]
        missing_keys = [key for key in required_keys if key not in config]
        if missing_keys:
            raise ParamNotEnoughError("企业微信客服缺少相关配置项,请查看文档或联系管理员")

        self.bot = WecomCSClient(
            corpid=config["corpid"],
            secret=config["secret"],
            token=config["token"],
            EncodingAESKey=config["EncodingAESKey"],
        )

    async def reply_message(
        self,
        message_source: platform_events.MessageEvent,
        message: platform_message.MessageChain,
        quote_origin: bool = False,
    ):

        Wecom_event = await WecomEventConverter.yiri2target(
            message_source, self.bot_account_id, self.bot
        )
        content_list = await WecomMessageConverter.yiri2target(message, self.bot)

        for content in content_list:
            if content["type"] == "text":
                await self.bot.send_text_msg(open_kfid=Wecom_event.receiver_id,external_userid=Wecom_event.user_id,msgid=Wecom_event.message_id,content=content["content"])

    async def send_message(
        self, target_type: str, target_id: str, message: platform_message.MessageChain
    ):
        pass

    def register_listener(
        self,
        event_type: typing.Type[platform_events.Event],
        callback: typing.Callable[
            [platform_events.Event, adapter.MessagePlatformAdapter], None
        ],
    ):
        async def on_message(event: WecomCSEvent):
            self.bot_account_id = event.receiver_id
            try:
                return await callback(
                    await self.event_converter.target2yiri(event), self
                )
            except:
                traceback.print_exc()

        if event_type == platform_events.FriendMessage:
            self.bot.on_message("text")(on_message)
            self.bot.on_message("image")(on_message)
        elif event_type == platform_events.GroupMessage:
            pass

    async def run_async(self):
        async def shutdown_trigger_placeholder():
            while True:
                await asyncio.sleep(1)

        await self.bot.run_task(
            host="0.0.0.0",
            port=self.config["port"],
            shutdown_trigger=shutdown_trigger_placeholder,
        )

    async def kill(self) -> bool:
        return False

    async def unregister_listener(
        self,
        event_type: type,
        callback: typing.Callable[[platform_events.Event, MessagePlatformAdapter], None],
    ):
        return super().unregister_listener(event_type, callback)
pkg/platform/sources/wecomcs.yaml (new file, 51 lines)
@@ -0,0 +1,51 @@
apiVersion: v1
kind: MessagePlatformAdapter
metadata:
  name: wecomcs
  label:
    en_US: WeComCustomerService
    zh_CN: 企业微信客服
  description:
    en_US: WeComCSAdapter
    zh_CN: 企业微信客服适配器
spec:
  config:
    - name: port
      label:
        en_US: Port
        zh_CN: 监听端口
      type: int
      required: true
      default: 2289
    - name: corpid
      label:
        en_US: Corpid
        zh_CN: 企业ID
      type: string
      required: true
      default: ""
    - name: secret
      label:
        en_US: Secret
        zh_CN: 密钥
      type: string
      required: true
      default: ""
    - name: token
      label:
        en_US: Token
        zh_CN: 令牌
      type: string
      required: true
      default: ""
    - name: EncodingAESKey
      label:
        en_US: EncodingAESKey
        zh_CN: 消息加解密密钥
      type: string
      required: true
      default: ""
execution:
  python:
    path: ./wecomcs.py
    attr: WecomCSAdapter
@@ -6,7 +6,7 @@ from . import entities, requester
 from ...core import app
 from ...discover import engine
 from . import token
-from .requesters import bailianchatcmpl, chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, volcarkchatcmpl, xaichatcmpl, zhipuaichatcmpl, lmstudiochatcmpl, siliconflowchatcmpl, volcarkchatcmpl
+from .requesters import bailianchatcmpl, chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, volcarkchatcmpl, xaichatcmpl, zhipuaichatcmpl, lmstudiochatcmpl, siliconflowchatcmpl, volcarkchatcmpl, modelscopechatcmpl

 FETCH_MODEL_LIST_URL = "https://api.qchatgpt.rockchin.top/api/v2/fetch/model_list"
@@ -4,6 +4,8 @@ import typing
 import json
 import traceback
 import base64
+import platform
+import socket

 import anthropic
 import httpx
@@ -23,6 +25,12 @@ class AnthropicMessages(requester.LLMAPIRequester):
     client: anthropic.AsyncAnthropic

     async def initialize(self):
+        # 兼容 Windows 缺失 TCP_KEEPINTVL 和 TCP_KEEPCNT 的问题
+        if platform.system() == "Windows":
+            if not hasattr(socket, "TCP_KEEPINTVL"):
+                socket.TCP_KEEPINTVL = 0
+            if not hasattr(socket, "TCP_KEEPCNT"):
+                socket.TCP_KEEPCNT = 0

         httpx_client = anthropic._base_client.AsyncHttpxClientWrapper(
             base_url=self.ap.provider_cfg.data['requester']['anthropic-messages']['base-url'].replace(' ', ''),
@@ -2,12 +2,12 @@ from __future__ import annotations

 import openai

-from . import chatcmpl
+from . import chatcmpl, modelscopechatcmpl
 from .. import requester
 from ....core import app


-class BailianChatCompletions(chatcmpl.OpenAIChatCompletions):
+class BailianChatCompletions(modelscopechatcmpl.ModelScopeChatCompletions):
     """阿里云百炼大模型平台 ChatCompletion API 请求器"""

     client: openai.AsyncClient
@@ -18,3 +18,4 @@ class BailianChatCompletions(chatcmpl.OpenAIChatCompletions):
         self.ap = ap

         self.requester_cfg = self.ap.provider_cfg.data['requester']['bailian-chat-completions']
@@ -61,6 +61,12 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
         if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
             chatcmpl_message['role'] = 'assistant'

+        reasoning_content = chatcmpl_message['reasoning_content'] if 'reasoning_content' in chatcmpl_message else None
+
+        # deepseek的reasoner模型
+        if reasoning_content is not None:
+            chatcmpl_message['content'] = "<think>\n" + reasoning_content + "\n</think>\n"+ chatcmpl_message['content']

         message = llm_entities.Message(**chatcmpl_message)

         return message
pkg/provider/modelmgr/requesters/modelscopechatcmpl.py (new file, 207 lines)
@@ -0,0 +1,207 @@
from __future__ import annotations

import asyncio
import typing
import json
import base64
from typing import AsyncGenerator

import openai
import openai.types.chat.chat_completion as chat_completion
import openai.types.chat.chat_completion_message_tool_call as chat_completion_message_tool_call
import httpx
import aiohttp
import async_lru

from .. import entities, errors, requester
from ....core import entities as core_entities, app
from ... import entities as llm_entities
from ...tools import entities as tools_entities
from ....utils import image


class ModelScopeChatCompletions(requester.LLMAPIRequester):
    """ModelScope ChatCompletion API 请求器"""

    client: openai.AsyncClient

    requester_cfg: dict

    def __init__(self, ap: app.Application):
        self.ap = ap

        self.requester_cfg = self.ap.provider_cfg.data['requester']['modelscope-chat-completions']

    async def initialize(self):

        self.client = openai.AsyncClient(
            api_key="",
            base_url=self.requester_cfg['base-url'],
            timeout=self.requester_cfg['timeout'],
            http_client=httpx.AsyncClient(
                trust_env=True,
                timeout=self.requester_cfg['timeout']
            )
        )

    async def _req(
        self,
        args: dict,
    ) -> chat_completion.ChatCompletion:
        args["stream"] = True

        chunk = None

        pending_content = ""

        tool_calls = []

        resp_gen: openai.AsyncStream = await self.client.chat.completions.create(**args)

        async for chunk in resp_gen:
            # print(chunk)
            if not chunk or not chunk.id or not chunk.choices or not chunk.choices[0] or not chunk.choices[0].delta:
                continue

            if chunk.choices[0].delta.content is not None:
                pending_content += chunk.choices[0].delta.content

            if chunk.choices[0].delta.tool_calls is not None:
                for tool_call in chunk.choices[0].delta.tool_calls:
                    for tc in tool_calls:
                        if tc.index == tool_call.index:
                            tc.function.arguments += tool_call.function.arguments
                            break
                    else:
                        tool_calls.append(tool_call)

            if chunk.choices[0].finish_reason is not None:
                break

        real_tool_calls = []

        for tc in tool_calls:
            function = chat_completion_message_tool_call.Function(
                name=tc.function.name,
                arguments=tc.function.arguments
            )
            real_tool_calls.append(chat_completion_message_tool_call.ChatCompletionMessageToolCall(
                id=tc.id,
                function=function,
                type="function"
            ))

        return chat_completion.ChatCompletion(
            id=chunk.id,
            object="chat.completion",
            created=chunk.created,
            choices=[
                chat_completion.Choice(
                    index=0,
                    message=chat_completion.ChatCompletionMessage(
                        role="assistant",
                        content=pending_content,
                        tool_calls=real_tool_calls if len(real_tool_calls) > 0 else None
                    ),
                    finish_reason=chunk.choices[0].finish_reason if hasattr(chunk.choices[0], 'finish_reason') and chunk.choices[0].finish_reason is not None else 'stop',
                    logprobs=chunk.choices[0].logprobs,
                )
            ],
            model=chunk.model,
            service_tier=chunk.service_tier if hasattr(chunk, 'service_tier') else None,
            system_fingerprint=chunk.system_fingerprint if hasattr(chunk, 'system_fingerprint') else None,
            usage=chunk.usage if hasattr(chunk, 'usage') else None
        ) if chunk else None
        return await self.client.chat.completions.create(**args)

    async def _make_msg(
        self,
        chat_completion: chat_completion.ChatCompletion,
    ) -> llm_entities.Message:
        chatcmpl_message = chat_completion.choices[0].message.dict()

        # 确保 role 字段存在且不为 None
        if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
            chatcmpl_message['role'] = 'assistant'

        message = llm_entities.Message(**chatcmpl_message)

        return message

    async def _closure(
        self,
        query: core_entities.Query,
        req_messages: list[dict],
        use_model: entities.LLMModelInfo,
        use_funcs: list[tools_entities.LLMFunction] = None,
    ) -> llm_entities.Message:
        self.client.api_key = use_model.token_mgr.get_token()

        args = self.requester_cfg['args'].copy()
        args["model"] = use_model.name if use_model.model_name is None else use_model.model_name

        if use_funcs:
            tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)

            if tools:
                args["tools"] = tools

        # 设置此次请求中的messages
        messages = req_messages.copy()

        # 检查vision
        for msg in messages:
            if 'content' in msg and isinstance(msg["content"], list):
                for me in msg["content"]:
                    if me["type"] == "image_base64":
                        me["image_url"] = {
                            "url": me["image_base64"]
                        }
                        me["type"] = "image_url"
                        del me["image_base64"]

        args["messages"] = messages

        # 发送请求
        resp = await self._req(args)

        # 处理请求结果
        message = await self._make_msg(resp)

        return message

    async def call(
        self,
        query: core_entities.Query,
        model: entities.LLMModelInfo,
        messages: typing.List[llm_entities.Message],
        funcs: typing.List[tools_entities.LLMFunction] = None,
    ) -> llm_entities.Message:
        req_messages = []  # req_messages 仅用于类内,外部同步由 query.messages 进行
        for m in messages:
            msg_dict = m.dict(exclude_none=True)
            content = msg_dict.get("content")
            if isinstance(content, list):
                # 检查 content 列表中是否每个部分都是文本
                if all(isinstance(part, dict) and part.get("type") == "text" for part in content):
                    # 将所有文本部分合并为一个字符串
                    msg_dict["content"] = "\n".join(part["text"] for part in content)
            req_messages.append(msg_dict)

        try:
            return await self._closure(query=query, req_messages=req_messages, use_model=model, use_funcs=funcs)
        except asyncio.TimeoutError:
            raise errors.RequesterError('请求超时')
        except openai.BadRequestError as e:
            if 'context_length_exceeded' in e.message:
                raise errors.RequesterError(f'上文过长,请重置会话: {e.message}')
            else:
                raise errors.RequesterError(f'请求参数错误: {e.message}')
        except openai.AuthenticationError as e:
            raise errors.RequesterError(f'无效的 api-key: {e.message}')
        except openai.NotFoundError as e:
            raise errors.RequesterError(f'请求路径错误: {e.message}')
        except openai.RateLimitError as e:
            raise errors.RequesterError(f'请求过于频繁或余额不足: {e.message}')
        except openai.APIError as e:
            raise errors.RequesterError(f'请求错误: {e.message}')
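The heart of _req above is folding an OpenAI-compatible stream back into a single completion. Below is a compact sketch of the same accumulation pattern with the openai client; the base URL matches the requester default, while the model id and key are placeholders.

```python
import asyncio

import openai


async def collect_stream() -> str:
    client = openai.AsyncClient(
        api_key="sk-placeholder",
        base_url="https://api-inference.modelscope.cn/v1",
    )
    stream = await client.chat.completions.create(
        model="some-model-id",  # placeholder
        messages=[{"role": "user", "content": "hi"}],
        stream=True,
    )
    text = ""
    async for chunk in stream:
        # Same per-chunk accumulation _req performs for delta.content.
        if chunk.choices and chunk.choices[0].delta.content:
            text += chunk.choices[0].delta.content
        if chunk.choices and chunk.choices[0].finish_reason is not None:
            break
    return text

# asyncio.run(collect_stream())
```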
pkg/provider/modelmgr/requesters/modelscopechatcmpl.yaml (new file, 34 lines)
@@ -0,0 +1,34 @@
apiVersion: v1
kind: LLMAPIRequester
metadata:
  name: modelscope-chat-completions
  label:
    en_US: ModelScope
    zh_CN: 魔搭社区
spec:
  config:
    - name: base-url
      label:
        en_US: Base URL
        zh_CN: 基础 URL
      type: string
      required: true
      default: "https://api-inference.modelscope.cn/v1"
    - name: args
      label:
        en_US: Args
        zh_CN: 附加参数
      type: object
      required: true
      default: {}
    - name: timeout
      label:
        en_US: Timeout
        zh_CN: 超时时间
      type: int
      required: true
      default: 120
execution:
  python:
    path: ./modelscopechatcmpl.py
    attr: ModelScopeChatCompletions
@@ -42,8 +42,8 @@ class MoonshotChatCompletions(chatcmpl.OpenAIChatCompletions):
             if 'content' in m and isinstance(m["content"], list):
                 m["content"] = " ".join([c["text"] for c in m["content"]])

-            # 删除空的
-            messages = [m for m in messages if m["content"].strip() != ""]
+            # 删除空的,不知道干嘛的,直接删了。
+            # messages = [m for m in messages if m["content"].strip() != "" and ('tool_calls' not in m or not m['tool_calls'])]

         args["messages"] = messages
pkg/provider/modelmgr/requesters/ppiochatcmpl.py (new file, 20 lines)
@@ -0,0 +1,20 @@

from __future__ import annotations

import openai

from . import chatcmpl, modelscopechatcmpl
from .. import requester
from ....core import app

class PPIOChatCompletions(chatcmpl.OpenAIChatCompletions):
    """欧派云 ChatCompletion API 请求器"""

    client: openai.AsyncClient

    requester_cfg: dict

    def __init__(self, ap: app.Application):
        self.ap = ap

        self.requester_cfg = self.ap.provider_cfg.data['requester']['ppio-chat-completions']
pkg/provider/modelmgr/requesters/ppiochatcmpl.yaml (new file, 34 lines)
@@ -0,0 +1,34 @@
apiVersion: v1
kind: LLMAPIRequester
metadata:
  name: ppio-chat-completions
  label:
    en_US: ppio
    zh_CN: 派欧云
spec:
  config:
    - name: base-url
      label:
        en_US: Base URL
        zh_CN: 基础 URL
      type: string
      required: true
      default: "https://api.ppinfra.com/v3/openai"
    - name: args
      label:
        en_US: Args
        zh_CN: 附加参数
      type: object
      required: true
      default: {}
    - name: timeout
      label:
        en_US: Timeout
        zh_CN: 超时时间
      type: int
      required: true
      default: 120
execution:
  python:
    path: ./ppiochatcmpl.py
    attr: PPIOChatCompletions
@@ -164,12 +164,14 @@ class DifyServiceAPIRunner(runner.RequestRunner):
             for image_id in image_ids
         ]

-        ignored_events = ["agent_message"]
+        ignored_events = []

         inputs = {}

         inputs.update(query.variables)

+        pending_agent_message = ''
+
         async for chunk in self.dify_client.chat_messages(
             inputs=inputs,
             query=plain_text,
@@ -183,50 +185,55 @@ class DifyServiceAPIRunner(runner.RequestRunner):

            if chunk["event"] in ignored_events:
                continue
            if chunk["event"] == "agent_thought":

                if chunk['tool'] != '' and chunk['observation'] != '':  # 工具调用结果,跳过
                    continue

                if chunk['thought'].strip() != '':  # 文字回复内容
                    msg = llm_entities.Message(
                        role="assistant",
                        content=chunk["thought"],
                    )
                    yield msg

                if chunk['tool']:
                    msg = llm_entities.Message(
                        role="assistant",
                        tool_calls=[
                            llm_entities.ToolCall(
                                id=chunk['id'],
                                type="function",
                                function=llm_entities.FunctionCall(
                                    name=chunk["tool"],
                                    arguments=json.dumps({}),
                                ),
                            )
                        ],
                    )
                    yield msg
            if chunk['event'] == 'message_file':

                if chunk['type'] == 'image' and chunk['belongs_to'] == 'assistant':

                    base_url = self.dify_client.base_url

                    if base_url.endswith('/v1'):
                        base_url = base_url[:-3]

                    image_url = base_url + chunk['url']

            if chunk['event'] == 'agent_message':
                pending_agent_message += chunk['answer']
            else:
                if pending_agent_message.strip() != '':
                    pending_agent_message = pending_agent_message.replace('</details>Action:', '</details>')
                    yield llm_entities.Message(
                        role="assistant",
                        content=[llm_entities.ContentElement.from_image_url(image_url)],
                        content=self._try_convert_thinking(pending_agent_message),
                    )
            if chunk['event'] == 'error':
                raise errors.DifyAPIError("dify 服务错误: " + chunk['message'])
                pending_agent_message = ''

                if chunk["event"] == "agent_thought":

                    if chunk['tool'] != '' and chunk['observation'] != '':  # 工具调用结果,跳过
                        continue

                    if chunk['tool']:
                        msg = llm_entities.Message(
                            role="assistant",
                            tool_calls=[
                                llm_entities.ToolCall(
                                    id=chunk['id'],
                                    type="function",
                                    function=llm_entities.FunctionCall(
                                        name=chunk["tool"],
                                        arguments=json.dumps({}),
                                    ),
                                )
                            ],
                        )
                        yield msg
                if chunk['event'] == 'message_file':

                    if chunk['type'] == 'image' and chunk['belongs_to'] == 'assistant':

                        base_url = self.dify_client.base_url

                        if base_url.endswith('/v1'):
                            base_url = base_url[:-3]

                        image_url = base_url + chunk['url']

                        yield llm_entities.Message(
                            role="assistant",
                            content=[llm_entities.ContentElement.from_image_url(image_url)],
                        )
                if chunk['event'] == 'error':
                    raise errors.DifyAPIError("dify 服务错误: " + chunk['message'])

            query.session.using_conversation.uuid = chunk["conversation_id"]

@@ -303,11 +310,11 @@

                 msg = llm_entities.Message(
                     role="assistant",
-                    content=chunk["data"]["outputs"][
+                    content=self._try_convert_thinking(chunk["data"]["outputs"][
                         self.ap.provider_cfg.data["dify-service-api"]["workflow"][
                             "output-key"
                         ]
-                    ],
+                    ]),
                 )

                 yield msg
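The Dify runner change buffers consecutive agent_message chunks and flushes them once a different event arrives. Below is a simplified sketch of that buffering outside the runner; chunks stands in for the Dify event stream and convert for self._try_convert_thinking, which is not shown in this diff.

```python
def flush_agent_messages(chunks, convert=lambda s: s):
    """Yield one combined assistant reply per run of 'agent_message' chunks."""
    pending = ""
    for chunk in chunks:
        if chunk["event"] == "agent_message":
            pending += chunk["answer"]
        else:
            if pending.strip():
                # Same cleanup the runner applies before yielding.
                pending = pending.replace("</details>Action:", "</details>")
                yield convert(pending)
            pending = ""
    if pending.strip():
        yield convert(pending)


# Two agent_message chunks followed by an unrelated event produce one combined reply.
events = [
    {"event": "agent_message", "answer": "Hel"},
    {"event": "agent_message", "answer": "lo"},
    {"event": "agent_thought", "tool": "", "observation": ""},
]
print(list(flush_agent_messages(events)))  # -> ['Hello']
```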
@@ -1,4 +1,4 @@
-semantic_version = "v3.4.12.1"
+semantic_version = "v3.4.14.2"

 debug_mode = False
@@ -37,4 +37,5 @@ mcp
 slack_sdk
 telegramify-markdown
 # indirect
-taskgroup==0.0.0a4
+taskgroup==0.0.0a4
+python-socks
@@ -71,38 +71,47 @@
            "token": ""
        },
        {
            "adapter":"officialaccount",
            "adapter": "officialaccount",
            "enable": false,
            "token": "",
            "EncodingAESKey":"",
            "AppID":"",
            "AppSecret":"",
            "Mode":"drop",
            "LoadingMessage":"AI正在思考中,请发送任意内容获取回复。",
            "EncodingAESKey": "",
            "AppID": "",
            "AppSecret": "",
            "Mode": "drop",
            "LoadingMessage": "AI正在思考中,请发送任意内容获取回复。",
            "host": "0.0.0.0",
            "port": 2287
        },
        {
            "adapter":"dingtalk",
            "adapter": "dingtalk",
            "enable": false,
            "client_id":"",
            "client_secret":"",
            "robot_code":"",
            "robot_name":"",
            "markdown_card":false
            "client_id": "",
            "client_secret": "",
            "robot_code": "",
            "robot_name": "",
            "markdown_card": false
        },
        {
            "adapter":"telegram",
            "adapter": "telegram",
            "enable": false,
            "token":"",
            "markdown_card":false
            "token": "",
            "markdown_card": false
        },
        {
            "adapter":"slack",
            "enable":true,
            "bot_token":"",
            "signing_secret":"",
            "port":2288
            "adapter": "slack",
            "enable": false,
            "bot_token": "",
            "signing_secret": "",
            "port": 2288
        },
        {
            "adapter": "wecomcs",
            "enable": false,
            "port": 2289,
            "corpid": "",
            "secret": "",
            "token": "",
            "EncodingAESKey": ""
        }
    ],
    "track-function-calls": true,
@@ -31,6 +31,12 @@
         ],
         "volcark": [
             "xxxxxxxx"
         ],
+        "modelscope": [
+            "xxxxxxxx"
+        ],
+        "ppio": [
+            "xxxxxxxx"
+        ]
     },
     "requester": {
@@ -95,6 +101,16 @@
             "args": {},
             "base-url": "https://ark.cn-beijing.volces.com/api/v3",
             "timeout": 120
         },
+        "modelscope-chat-completions": {
+            "base-url": "https://api-inference.modelscope.cn/v1",
+            "args": {},
+            "timeout": 120
+        },
+        "ppio-chat-completions": {
+            "base-url": "https://api.ppinfra.com/v3/openai",
+            "args": {},
+            "timeout": 120
+        }
     },
     "model": "gpt-4o",