Mirror of https://github.com/langbot-app/LangBot.git (synced 2025-11-25 19:37:36 +08:00)

Compare commits (16 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 13e29a9966 | |
| | 601b0a8964 | |
| | 7c2ceb0aca | |
| | 42fabd5133 | |
| | 210a8856e2 | |
| | c531cb11af | |
| | 07e073f526 | |
| | c5457374a8 | |
| | 5198349591 | |
| | 8a4967525a | |
| | 30b068c6e2 | |
| | ea3fff59ac | |
| | b09ce8296f | |
| | f9d07779a9 | |
| | 51634c1caf | |
| | 0e00da6617 | |
.github/ISSUE_TEMPLATE/bug-report.yml (vendored, 16 lines changed)
@@ -3,22 +3,6 @@ description: 报错或漏洞请使用这个模板创建,不使用此模板创
 title: "[Bug]: "
 labels: ["bug?"]
 body:
-  - type: dropdown
-    attributes:
-      label: 消息平台适配器
-      description: "接入的消息平台类型"
-      options:
-        - 其他(或暂未使用)
-        - Nakuru(go-cqhttp)
-        - aiocqhttp(使用 OneBot 协议接入的)
-        - qq-botpy(QQ官方API WebSocket)
-        - qqofficial(QQ官方API Webhook)
-        - lark(飞书)
-        - wecom(企业微信)
-        - gewechat(个人微信)
-        - discord
-    validations:
-      required: true
   - type: input
     attributes:
       label: 运行环境
@@ -116,6 +116,7 @@
 | [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
 | [阿里云百炼](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
 | [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
+| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
 | [MCP](https://modelcontextprotocol.io/) | ✅ | 支持通过 MCP 协议获取工具 |
 
 ### TTS
@@ -113,6 +113,7 @@ Directly use the released version to run, see the [Manual Deployment](https://do
 | [SiliconFlow](https://siliconflow.cn/) | ✅ | LLM gateway(MaaS) |
 | [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLM gateway(MaaS), LLMOps platform |
 | [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLM gateway(MaaS), LLMOps platform |
+| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLM gateway(MaaS) |
 | [MCP](https://modelcontextprotocol.io/) | ✅ | Support tool access through MCP protocol |
 
 ## 🤝 Community Contribution
@@ -112,6 +112,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
 | [SiliconFlow](https://siliconflow.cn/) | ✅ | LLMゲートウェイ(MaaS) |
 | [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLMゲートウェイ(MaaS), LLMOpsプラットフォーム |
 | [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLMゲートウェイ(MaaS), LLMOpsプラットフォーム |
+| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLMゲートウェイ(MaaS) |
 | [MCP](https://modelcontextprotocol.io/) | ✅ | MCPプロトコルをサポート |
 
 ## 🤝 コミュニティ貢献
@@ -35,6 +35,7 @@ required_deps = {
     "telegram": "python-telegram-bot",
     "certifi": "certifi",
     "mcp": "mcp",
+    "telegramify_markdown":"telegramify-markdown",
 }
 
 
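For context, `required_deps` maps an import name to the pip distribution that provides it, which matters exactly when the two differ (`telegramify_markdown` is imported from the `telegramify-markdown` package). A minimal sketch of the check-and-install pattern such a map supports; this is illustrative, not LangBot's actual dependency checker:

import importlib
import subprocess
import sys

def ensure_deps(required_deps: dict[str, str]) -> None:
    for import_name, pip_name in required_deps.items():
        try:
            importlib.import_module(import_name)
        except ImportError:
            # Install under the distribution name, which can differ
            # from the import name used in code.
            subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name])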
pkg/core/migrations/m039_modelscope_cfg_completion.py (new file, 30 lines)
@@ -0,0 +1,30 @@
+from __future__ import annotations
+
+from .. import migration
+
+
+@migration.migration_class("modelscope-config-completion", 4)
+class ModelScopeConfigCompletionMigration(migration.Migration):
+    """OpenAI配置迁移
+    """
+
+    async def need_migrate(self) -> bool:
+        """判断当前环境是否需要运行此迁移
+        """
+        return 'modelscope-chat-completions' not in self.ap.provider_cfg.data['requester'] \
+            or 'modelscope' not in self.ap.provider_cfg.data['keys']
+
+    async def run(self):
+        """执行迁移
+        """
+        if 'modelscope-chat-completions' not in self.ap.provider_cfg.data['requester']:
+            self.ap.provider_cfg.data['requester']['modelscope-chat-completions'] = {
+                'base-url': 'https://api-inference.modelscope.cn/v1',
+                'args': {},
+                'timeout': 120,
+            }
+
+        if 'modelscope' not in self.ap.provider_cfg.data['keys']:
+            self.ap.provider_cfg.data['keys']['modelscope'] = []
+
+        await self.ap.provider_cfg.dump_config()
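For orientation: each migration registers itself through the `@migration.migration_class(name, order)` decorator and exposes the `need_migrate`/`run` pair seen above, so running it twice is harmless. A rough sketch of how a driver might invoke these classes; illustrative only, since LangBot's actual MigrationStage (wired up in the next hunk) may differ, and the constructor signature here is an assumption:

async def run_pending_migrations(ap, migration_classes: list) -> None:
    # Assumed: classes are sorted by the numeric order from the decorator
    # and are constructed with the Application instance.
    for cls in migration_classes:
        m = cls(ap)
        if await m.need_migrate():  # idempotence check against current config
            await m.run()           # fills in missing keys, then dumps config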
@@ -12,7 +12,7 @@ from ..migrations import m020_wecom_config, m021_lark_config, m022_lmstudio_conf
 from ..migrations import m026_qqofficial_config, m027_wx_official_account_config, m028_aliyun_requester_config
 from ..migrations import m029_dashscope_app_api_config, m030_lark_config_cmpl, m031_dingtalk_config, m032_volcark_config
 from ..migrations import m033_dify_thinking_config, m034_gewechat_file_url_config, m035_wxoa_mode, m036_wxoa_loading_message
-from ..migrations import m037_mcp_config, m038_tg_dingtalk_markdown
+from ..migrations import m037_mcp_config, m038_tg_dingtalk_markdown, m039_modelscope_cfg_completion
 
 
 @stage.stage_class("MigrationStage")
@@ -343,7 +343,6 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
         type = context.header.event_type
 
         if 'url_verification' == type:
-            print(data.get("challenge"))
             # todo 验证verification token
             return {
                 "challenge": data.get("challenge")
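For context, the `url_verification` branch implements Lark's endpoint-verification handshake: when an event-subscription URL is configured, Lark POSTs a JSON body containing a `challenge` string and expects the same value echoed back; the removed `print` was debug output around that echo. In essence (illustrative, not the adapter's exact surrounding code):

# Lark (Feishu) URL verification: echo the challenge back as JSON.
if context.header.event_type == 'url_verification':
    return {"challenge": data.get("challenge")}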
@@ -31,13 +31,6 @@ spec:
       type: int
       required: true
      default: 2288
-    - name: host
-      label:
-        en_US: Host
-        zh_CN: 监听主机
-      type: string
-      required: true
-      default: 0.0.0.0
 execution:
   python:
     path: ./slack.py
@@ -4,7 +4,7 @@ import telegram
 import telegram.ext
 from telegram import Update
 from telegram.ext import ApplicationBuilder, ContextTypes, CommandHandler, MessageHandler, filters
+import telegramify_markdown
 import typing
 import asyncio
 import traceback
@@ -86,9 +86,10 @@ class TelegramMessageConverter(adapter.MessageConverter):
         if message.text:
             message_text = message.text
             message_components.extend(parse_message_text(message_text))
 
         if message.photo:
-            message_components.extend(parse_message_text(message.caption))
+            if message.caption:
+                message_components.extend(parse_message_text(message.caption))
 
             file = await message.photo[-1].get_file()
 
@@ -126,7 +127,7 @@ class TelegramEventConverter(adapter.EventConverter):
                 time=event.message.date.timestamp(),
                 source_platform_object=event
             )
-        elif event.effective_chat.type == 'group':
+        elif event.effective_chat.type == 'group' or 'supergroup' :
             return platform_events.GroupMessage(
                 sender=platform_entities.GroupMember(
                     id=event.effective_chat.id,
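Note that the new condition `event.effective_chat.type == 'group' or 'supergroup'` parses as `(type == 'group') or 'supergroup'`; the literal `'supergroup'` is a non-empty string and therefore always truthy, so this branch matches every chat type rather than just groups and supergroups. The presumable intent (not what the commit ships) would be:

# Presumed intent: match both Telegram group kinds, and nothing else.
elif event.effective_chat.type in ('group', 'supergroup'):
    ...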
@@ -201,19 +202,23 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
 
         for component in components:
             if component['type'] == 'text':
+                if self.config['markdown_card'] is True:
+                    content = telegramify_markdown.markdownify(
+                        content= component['text'],
+                    )
+                else:
+                    content = component['text']
                 args = {
                     "chat_id": message_source.source_platform_object.effective_chat.id,
-                    "text": component['text'],
+                    "text": content,
                 }
 
                 if self.config['markdown_card'] is True:
                     args["parse_mode"] = "MarkdownV2"
-
-                if quote_origin:
-                    args['reply_to_message_id'] = message_source.source_platform_object.message.id
+                if quote_origin:
+                    args['reply_to_message_id'] = message_source.source_platform_object.message.id
 
                 await self.bot.send_message(**args)
 
     async def is_muted(self, group_id: int) -> bool:
         return False
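The new branch runs outgoing text through telegramify_markdown before sending. Telegram's MarkdownV2 parse mode rejects unescaped reserved characters (`.`, `-`, `!`, `(`, `)` and others), so raw model output sent with `parse_mode="MarkdownV2"` frequently fails; `markdownify` converts ordinary Markdown into valid MarkdownV2. A small usage sketch, with an illustrative sample string:

import telegramify_markdown

md = "**bold**, _italic_, and a [link](https://example.com). Done!"
# Escape/convert ordinary Markdown into Telegram MarkdownV2.
text = telegramify_markdown.markdownify(content=md)
# Then send with parse_mode="MarkdownV2", as the adapter does above.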
@@ -6,7 +6,7 @@ from . import entities, requester
 from ...core import app
 from ...discover import engine
 from . import token
-from .requesters import bailianchatcmpl, chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, volcarkchatcmpl, xaichatcmpl, zhipuaichatcmpl, lmstudiochatcmpl, siliconflowchatcmpl, volcarkchatcmpl
+from .requesters import bailianchatcmpl, chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, volcarkchatcmpl, xaichatcmpl, zhipuaichatcmpl, lmstudiochatcmpl, siliconflowchatcmpl, volcarkchatcmpl, modelscopechatcmpl
 
 FETCH_MODEL_LIST_URL = "https://api.qchatgpt.rockchin.top/api/v2/fetch/model_list"
 
@@ -2,12 +2,12 @@ from __future__ import annotations
 
 import openai
 
-from . import chatcmpl
+from . import chatcmpl, modelscopechatcmpl
 from .. import requester
 from ....core import app
 
 
-class BailianChatCompletions(chatcmpl.OpenAIChatCompletions):
+class BailianChatCompletions(modelscopechatcmpl.ModelScopeChatCompletions):
     """阿里云百炼大模型平台 ChatCompletion API 请求器"""
 
     client: openai.AsyncClient
@@ -18,3 +18,4 @@ class BailianChatCompletions(chatcmpl.OpenAIChatCompletions):
         self.ap = ap
 
         self.requester_cfg = self.ap.provider_cfg.data['requester']['bailian-chat-completions']
+
@@ -61,6 +61,12 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
         if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
             chatcmpl_message['role'] = 'assistant'
 
+        reasoning_content = chatcmpl_message['reasoning_content'] if 'reasoning_content' in chatcmpl_message else None
+
+        # deepseek的reasoner模型
+        if reasoning_content is not None:
+            chatcmpl_message['content'] = "<think>\n" + reasoning_content + "\n</think>\n\n"+ chatcmpl_message['content']
+
         message = llm_entities.Message(**chatcmpl_message)
 
         return message
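The effect of the added lines is to fold a model's separate reasoning channel back into the visible content, wrapped in `<think>` tags, for models such as DeepSeek's reasoner that return a `reasoning_content` field. A worked example of the transformation:

chatcmpl_message = {
    "role": "assistant",
    "content": "The answer is 4.",
    "reasoning_content": "2 + 2 = 4",
}

reasoning_content = chatcmpl_message.get("reasoning_content")
if reasoning_content is not None:
    chatcmpl_message["content"] = "<think>\n" + reasoning_content + "\n</think>\n\n" + chatcmpl_message["content"]

# chatcmpl_message["content"] is now:
# <think>
# 2 + 2 = 4
# </think>
#
# The answer is 4.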
pkg/provider/modelmgr/requesters/modelscopechatcmpl.py (new file, 207 lines)
@@ -0,0 +1,207 @@
+from __future__ import annotations
+
+import asyncio
+import typing
+import json
+import base64
+from typing import AsyncGenerator
+
+import openai
+import openai.types.chat.chat_completion as chat_completion
+import openai.types.chat.chat_completion_message_tool_call as chat_completion_message_tool_call
+import httpx
+import aiohttp
+import async_lru
+
+from .. import entities, errors, requester
+from ....core import entities as core_entities, app
+from ... import entities as llm_entities
+from ...tools import entities as tools_entities
+from ....utils import image
+
+
+class ModelScopeChatCompletions(requester.LLMAPIRequester):
+    """ModelScope ChatCompletion API 请求器"""
+
+    client: openai.AsyncClient
+
+    requester_cfg: dict
+
+    def __init__(self, ap: app.Application):
+        self.ap = ap
+
+        self.requester_cfg = self.ap.provider_cfg.data['requester']['modelscope-chat-completions']
+
+    async def initialize(self):
+
+        self.client = openai.AsyncClient(
+            api_key="",
+            base_url=self.requester_cfg['base-url'],
+            timeout=self.requester_cfg['timeout'],
+            http_client=httpx.AsyncClient(
+                trust_env=True,
+                timeout=self.requester_cfg['timeout']
+            )
+        )
+
+    async def _req(
+        self,
+        args: dict,
+    ) -> chat_completion.ChatCompletion:
+        args["stream"] = True
+
+        chunk = None
+
+        pending_content = ""
+
+        tool_calls = []
+
+        resp_gen: openai.AsyncStream = await self.client.chat.completions.create(**args)
+
+        async for chunk in resp_gen:
+            # print(chunk)
+            if not chunk or not chunk.id or not chunk.choices or not chunk.choices[0] or not chunk.choices[0].delta:
+                continue
+
+            if chunk.choices[0].delta.content is not None:
+                pending_content += chunk.choices[0].delta.content
+
+            if chunk.choices[0].delta.tool_calls is not None:
+                for tool_call in chunk.choices[0].delta.tool_calls:
+                    for tc in tool_calls:
+                        if tc.index == tool_call.index:
+                            tc.function.arguments += tool_call.function.arguments
+                            break
+                    else:
+                        tool_calls.append(tool_call)
+
+            if chunk.choices[0].finish_reason is not None:
+                break
+
+        real_tool_calls = []
+
+        for tc in tool_calls:
+            function = chat_completion_message_tool_call.Function(
+                name=tc.function.name,
+                arguments=tc.function.arguments
+            )
+            real_tool_calls.append(chat_completion_message_tool_call.ChatCompletionMessageToolCall(
+                id=tc.id,
+                function=function,
+                type="function"
+            ))
+
+        return chat_completion.ChatCompletion(
+            id=chunk.id,
+            object="chat.completion",
+            created=chunk.created,
+            choices=[
+                chat_completion.Choice(
+                    index=0,
+                    message=chat_completion.ChatCompletionMessage(
+                        role="assistant",
+                        content=pending_content,
+                        tool_calls=real_tool_calls if len(real_tool_calls) > 0 else None
+                    ),
+                    finish_reason=chunk.choices[0].finish_reason if hasattr(chunk.choices[0], 'finish_reason') and chunk.choices[0].finish_reason is not None else 'stop',
+                    logprobs=chunk.choices[0].logprobs,
+                )
+            ],
+            model=chunk.model,
+            service_tier=chunk.service_tier if hasattr(chunk, 'service_tier') else None,
+            system_fingerprint=chunk.system_fingerprint if hasattr(chunk, 'system_fingerprint') else None,
+            usage=chunk.usage if hasattr(chunk, 'usage') else None
+        ) if chunk else None
+        return await self.client.chat.completions.create(**args)
+
+    async def _make_msg(
+        self,
+        chat_completion: chat_completion.ChatCompletion,
+    ) -> llm_entities.Message:
+        chatcmpl_message = chat_completion.choices[0].message.dict()
+
+        # 确保 role 字段存在且不为 None
+        if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
+            chatcmpl_message['role'] = 'assistant'
+
+        message = llm_entities.Message(**chatcmpl_message)
+
+        return message
+
+    async def _closure(
+        self,
+        query: core_entities.Query,
+        req_messages: list[dict],
+        use_model: entities.LLMModelInfo,
+        use_funcs: list[tools_entities.LLMFunction] = None,
+    ) -> llm_entities.Message:
+        self.client.api_key = use_model.token_mgr.get_token()
+
+        args = self.requester_cfg['args'].copy()
+        args["model"] = use_model.name if use_model.model_name is None else use_model.model_name
+
+        if use_funcs:
+            tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)
+
+            if tools:
+                args["tools"] = tools
+
+        # 设置此次请求中的messages
+        messages = req_messages.copy()
+
+        # 检查vision
+        for msg in messages:
+            if 'content' in msg and isinstance(msg["content"], list):
+                for me in msg["content"]:
+                    if me["type"] == "image_base64":
+                        me["image_url"] = {
+                            "url": me["image_base64"]
+                        }
+                        me["type"] = "image_url"
+                        del me["image_base64"]
+
+        args["messages"] = messages
+
+        # 发送请求
+        resp = await self._req(args)
+
+        # 处理请求结果
+        message = await self._make_msg(resp)
+
+        return message
+
+    async def call(
+        self,
+        query: core_entities.Query,
+        model: entities.LLMModelInfo,
+        messages: typing.List[llm_entities.Message],
+        funcs: typing.List[tools_entities.LLMFunction] = None,
+    ) -> llm_entities.Message:
+        req_messages = []  # req_messages 仅用于类内,外部同步由 query.messages 进行
+        for m in messages:
+            msg_dict = m.dict(exclude_none=True)
+            content = msg_dict.get("content")
+            if isinstance(content, list):
+                # 检查 content 列表中是否每个部分都是文本
+                if all(isinstance(part, dict) and part.get("type") == "text" for part in content):
+                    # 将所有文本部分合并为一个字符串
+                    msg_dict["content"] = "\n".join(part["text"] for part in content)
+            req_messages.append(msg_dict)
+
+        try:
+            return await self._closure(query=query, req_messages=req_messages, use_model=model, use_funcs=funcs)
+        except asyncio.TimeoutError:
+            raise errors.RequesterError('请求超时')
+        except openai.BadRequestError as e:
+            if 'context_length_exceeded' in e.message:
+                raise errors.RequesterError(f'上文过长,请重置会话: {e.message}')
+            else:
+                raise errors.RequesterError(f'请求参数错误: {e.message}')
+        except openai.AuthenticationError as e:
+            raise errors.RequesterError(f'无效的 api-key: {e.message}')
+        except openai.NotFoundError as e:
+            raise errors.RequesterError(f'请求路径错误: {e.message}')
+        except openai.RateLimitError as e:
+            raise errors.RequesterError(f'请求过于频繁或余额不足: {e.message}')
+        except openai.APIError as e:
+            raise errors.RequesterError(f'请求错误: {e.message}')
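Two things stand out when reading this new requester. First, `_req` forces `stream=True` and rebuilds a single ChatCompletion from the streamed chunks, presumably because ModelScope's inference endpoint is effectively stream-only; second, the trailing `return await self.client.chat.completions.create(**args)` after the aggregated return is unreachable dead code. The non-obvious part of the aggregation is that streamed tool calls arrive as fragments keyed by `index`, whose argument strings must be concatenated across chunks; a minimal sketch of that merge pattern (illustrative, lifted out of the class):

def merge_tool_call_fragment(tool_calls: list, fragment) -> None:
    # Fragments for the same tool call share an index; append the partial
    # JSON arguments text until the stream finishes.
    for tc in tool_calls:
        if tc.index == fragment.index:
            tc.function.arguments += fragment.function.arguments
            return
    tool_calls.append(fragment)  # first fragment seen for this index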
pkg/provider/modelmgr/requesters/modelscopechatcmpl.yaml (new file, 34 lines)
@@ -0,0 +1,34 @@
+apiVersion: v1
+kind: LLMAPIRequester
+metadata:
+  name: modelscope-chat-completions
+  label:
+    en_US: ModelScope
+    zh_CN: 魔搭社区
+spec:
+  config:
+    - name: base-url
+      label:
+        en_US: Base URL
+        zh_CN: 基础 URL
+      type: string
+      required: true
+      default: "https://api-inference.modelscope.cn/v1"
+    - name: args
+      label:
+        en_US: Args
+        zh_CN: 附加参数
+      type: object
+      required: true
+      default: {}
+    - name: timeout
+      label:
+        en_US: Timeout
+        zh_CN: 超时时间
+      type: int
+      required: true
+      default: 120
+execution:
+  python:
+    path: ./modelscopechatcmpl.py
+    attr: ModelScopeChatCompletions
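Manifests like this let the discover engine load components dynamically: `execution.python.path` points at the module file and `attr` names the class inside it. A hypothetical sketch of that resolution step; LangBot's real loader lives in pkg/discover/engine and may differ, and the manifest layout used below is assumed from the fields shown above:

import importlib.util
import yaml

def load_component_class(manifest_path: str):
    # Assumed layout: a top-level `execution.python` mapping with
    # `path` and `attr`, as in the manifest above.
    with open(manifest_path, encoding="utf-8") as f:
        manifest = yaml.safe_load(f)
    py = manifest["execution"]["python"]
    spec = importlib.util.spec_from_file_location("component", py["path"])
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, py["attr"])  # e.g. ModelScopeChatCompletions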
@@ -42,8 +42,8 @@ class MoonshotChatCompletions(chatcmpl.OpenAIChatCompletions):
             if 'content' in m and isinstance(m["content"], list):
                 m["content"] = " ".join([c["text"] for c in m["content"]])
 
-        # 删除空的
-        messages = [m for m in messages if m["content"].strip() != ""]
+        # 删除空的,不知道干嘛的,直接删了。
+        # messages = [m for m in messages if m["content"].strip() != "" and ('tool_calls' not in m or not m['tool_calls'])]
 
         args["messages"] = messages
@@ -1,4 +1,4 @@
-semantic_version = "v3.4.12"
+semantic_version = "v3.4.13.1"
 
 debug_mode = False
 
@@ -35,5 +35,7 @@ python-telegram-bot
 certifi
 mcp
 slack_sdk
+telegramify-markdown
 # indirect
 taskgroup==0.0.0a4
+python-socks
@@ -71,38 +71,38 @@
             "token": ""
         },
         {
-            "adapter":"officialaccount",
+            "adapter": "officialaccount",
             "enable": false,
             "token": "",
-            "EncodingAESKey":"",
-            "AppID":"",
-            "AppSecret":"",
-            "Mode":"drop",
-            "LoadingMessage":"AI正在思考中,请发送任意内容获取回复。",
+            "EncodingAESKey": "",
+            "AppID": "",
+            "AppSecret": "",
+            "Mode": "drop",
+            "LoadingMessage": "AI正在思考中,请发送任意内容获取回复。",
             "host": "0.0.0.0",
             "port": 2287
         },
         {
-            "adapter":"dingtalk",
+            "adapter": "dingtalk",
             "enable": false,
-            "client_id":"",
-            "client_secret":"",
-            "robot_code":"",
-            "robot_name":"",
-            "markdown_card":false
+            "client_id": "",
+            "client_secret": "",
+            "robot_code": "",
+            "robot_name": "",
+            "markdown_card": false
         },
         {
-            "adapter":"telegram",
+            "adapter": "telegram",
             "enable": false,
-            "token":"",
-            "markdown_card":false
+            "token": "",
+            "markdown_card": false
         },
         {
-            "adapter":"slack",
-            "enable":true,
-            "bot_token":"",
-            "signing_secret":"",
-            "port":2288
+            "adapter": "slack",
+            "enable": false,
+            "bot_token": "",
+            "signing_secret": "",
+            "port": 2288
         }
     ],
     "track-function-calls": true,
@@ -31,6 +31,9 @@
         ],
         "volcark": [
             "xxxxxxxx"
+        ],
+        "modelscope": [
+            "xxxxxxxx"
         ]
     },
     "requester": {
@@ -95,6 +98,11 @@
             "args": {},
             "base-url": "https://ark.cn-beijing.volces.com/api/v3",
             "timeout": 120
+        },
+        "modelscope-chat-completions": {
+            "base-url": "https://api-inference.modelscope.cn/v1",
+            "args": {},
+            "timeout": 120
         }
     },
     "model": "gpt-4o",