Compare commits


47 Commits

Author SHA1 Message Date
Junyan Qin (Chin)
c5457374a8 chore: release v3.4.13 (#1284) 2025-04-09 21:58:23 +08:00
Junyan Qin (Chin)
5198349591 Merge pull request #1275 from yrk111222/master
Add ModelScope Support
2025-04-03 21:00:03 +08:00
Junyan Qin
8a4967525a fix(modelscope): bad base-url in migration 2025-04-03 20:52:01 +08:00
Junyan Qin
30b068c6e2 doc: reorder modelscope in README 2025-04-03 20:44:41 +08:00
Junyan Qin
ea3fff59ac chore: remove verbose models from llm-models.json 2025-04-03 20:40:36 +08:00
yrk
b09ce8296f Add ModelScope Support 2025-04-03 16:55:14 +08:00
Junyan Qin (Chin)
f9d07779a9 fix: slack is incorrectly enabled as default (#1274) 2025-04-03 14:17:21 +08:00
Junyan Qin (Chin)
51634c1caf chore: release v3.4.12.1 (#1271) 2025-04-02 15:23:38 +08:00
Guanchao Wang
0e00da6617 Merge pull request #1270 from RockChinQ/fix/telegram-markdown
fix: markdown and image problems in tg
2025-04-02 12:33:15 +08:00
Junyan Qin (Chin)
5ee6baeaaa Merge pull request #1268 from RockChinQ/version/3.4.12
chore: release v3.4.12
2025-04-01 21:15:46 +08:00
Junyan Qin
f11a036c60 chore: release v3.4.12 2025-04-01 21:13:41 +08:00
Junyan Qin (Chin)
0ac02ff4ce Merge pull request #1267 from RockChinQ/chore/default-prompt
chore: provide default prompt
2025-04-01 20:43:33 +08:00
Junyan Qin
99cc50b5cb chore: provide default prompt 2025-04-01 20:42:23 +08:00
Junyan Qin (Chin)
1d8fb02989 Merge pull request #1218 from fdc310/master
Add WeChat support for sending mini programs, forwarding mini programs, sending emoji, and sending links
2025-04-01 20:38:32 +08:00
Junyan Qin
122cb1188c style: standardized component names 2025-04-01 20:37:39 +08:00
Junyan Qin (Chin)
ca36ade288 Merge pull request #1266 from RockChinQ/chore/slack-schema
chore: add slack config schema
2025-04-01 20:04:08 +08:00
Junyan Qin
0877046db7 chore: add slack config schema 2025-04-01 20:03:42 +08:00
Junyan Qin (Chin)
ce9615a00e Merge pull request #1265 from RockChinQ/feat/markdowncard
add support for markdown card in dingtalk & tg
2025-04-01 20:01:44 +08:00
Junyan Qin
dbe5a41395 chore: schema for markdown config 2025-04-01 20:01:20 +08:00
Junyan Qin
4a4ca54c6e feat: migration for markdown config 2025-04-01 19:59:45 +08:00
wangcham
47acb63feb add support for markdown card in dingtalk & tg 2025-04-01 07:11:48 -04:00
Junyan Qin (Chin)
038c5d41e2 Merge pull request #1258 from RockChinQ/feat/slack
feat: add slack adapter
2025-04-01 15:33:22 +08:00
Junyan Qin
011a795895 doc(README): add slack 2025-04-01 15:32:48 +08:00
wangcham
873a0339d8 feat: add support for sending active message in slack 2025-04-01 03:03:48 -04:00
wangcham
715da548c8 fix: put the link and content together 2025-04-01 02:37:25 -04:00
Junyan Qin (Chin)
5378c6ba35 chore: provides TZ=Asia/Shanghai in docker-compose.yaml as default (#1259) 2025-03-31 14:00:08 +08:00
Guanchao Wang
8799f86ea4 Update pkg/platform/sources/slack.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-03-31 13:48:37 +08:00
wangcham
686be4acbc fix: eliminate host config 2025-03-31 01:10:45 -04:00
wangcham
5744eca37a fix: bot user id in slack 2025-03-30 23:06:03 -04:00
wangcham
70f8ddb1ba fix: delete useless image function in slack 2025-03-30 22:56:51 -04:00
wangcham
be1328cee9 feat: add support for slack 2025-03-30 22:24:53 -04:00
wangcham
c0dbf6fd13 feat:add support for slack 2025-03-30 12:53:48 -04:00
Junyan Qin (Chin)
ffe9c3e0f8 chore: release v3.4.11.2 (#1257) 2025-03-31 00:02:54 +08:00
Junyan Qin (Chin)
e20b79b0ed perf(chatcmpl): remove space from base-url (#1256) 2025-03-30 23:59:55 +08:00
Junyan Qin (Chin)
e04d46db2c perf(claude): ensure system message removed (#867) (#1255) 2025-03-30 23:51:53 +08:00
Junyan Qin (Chin)
7341435127 perf(chatcmpl): use extra_body to pass args (#1254) 2025-03-30 23:43:45 +08:00
Junyan Qin (Chin)
8b56f94667 perf: add debugging msg for webhook style adapters (#1253) 2025-03-30 23:23:31 +08:00
Junyan Qin (Chin)
f5e98d4ebb fix(gewe): should not block main launching process (#1163) (#1252) 2025-03-30 23:14:56 +08:00
Junyan Qin (Chin)
23a0dba470 feat(dify): throw error event (#1251) 2025-03-30 23:04:46 +08:00
fdc310
512371cc25 Merge branch 'RockChinQ:master' into master 2025-03-30 22:55:55 +08:00
Dong_master
cd4a06b692 Fix mistyped parameter names and standardize class names 2025-03-29 01:18:30 +08:00
Dong_master
432440d6bf Add reply-based sending of messages and files 2025-03-27 00:01:05 +08:00
fdc310
71ffbb9eb5 Merge branch 'RockChinQ:master' into master 2025-03-19 23:13:58 +08:00
Dong_master
e22c804deb Add sending emoji (possibly of limited use) and sending links 2025-03-19 22:47:10 +08:00
Dong_master
c136e790ef Add mini program sending; rename mini program forwarding to ForwardMiniPrograms 2025-03-19 21:56:13 +08:00
Dong_master
3697afd9d6 Add mini program sending; rename mini program forwarding to ForwardMiniPrograms 2025-03-19 21:55:36 +08:00
Dong_master
c597c6482a Add mini program forwarding 2025-03-19 20:46:56 +08:00
37 changed files with 995 additions and 60 deletions

.gitignore

@@ -39,3 +39,4 @@ botpy.log*
/libs/wecom_api/test.py
/venv
/jp-tyo-churros-05.rockchin.top
test.py


@@ -93,7 +93,7 @@
| 钉钉 | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | 🚧 | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
@@ -116,6 +116,7 @@
| [SiliconFlow](https://siliconflow.cn/) | ✅ | 大模型聚合平台 |
| [阿里云百炼](https://bailian.console.aliyun.com/) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [火山方舟](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | 大模型聚合平台, LLMOps 平台 |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | 大模型聚合平台 |
| [MCP](https://modelcontextprotocol.io/) | ✅ | 支持通过 MCP 协议获取工具 |
### TTS


@@ -90,7 +90,7 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| DingTalk | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | 🚧 | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
@@ -113,6 +113,7 @@ Directly use the released version to run, see the [Manual Deployment](https://do
| [SiliconFlow](https://siliconflow.cn/) | ✅ | LLM gateway(MaaS) |
| [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLM gateway(MaaS), LLMOps platform |
| [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLM gateway(MaaS), LLMOps platform |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLM gateway(MaaS) |
| [MCP](https://modelcontextprotocol.io/) | ✅ | Support tool access through MCP protocol |
## 🤝 Community Contribution


@@ -89,7 +89,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| DingTalk | ✅ | |
| Discord | ✅ | |
| Telegram | ✅ | |
| Slack | 🚧 | |
| Slack | ✅ | |
| LINE | 🚧 | |
| WhatsApp | 🚧 | |
@@ -112,6 +112,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| [SiliconFlow](https://siliconflow.cn/) | ✅ | LLMゲートウェイ(MaaS) |
| [Aliyun Bailian](https://bailian.console.aliyun.com/) | ✅ | LLMゲートウェイ(MaaS), LLMOpsプラットフォーム |
| [Volc Engine Ark](https://console.volcengine.com/ark/region:ark+cn-beijing/model?vendor=Bytedance&view=LIST_VIEW) | ✅ | LLMゲートウェイ(MaaS), LLMOpsプラットフォーム |
| [ModelScope](https://modelscope.cn/docs/model-service/API-Inference/intro) | ✅ | LLMゲートウェイ(MaaS) |
| [MCP](https://modelcontextprotocol.io/) | ✅ | MCPプロトコルをサポート |
## 🤝 コミュニティ貢献


@@ -8,6 +8,8 @@ services:
- ./data:/app/data
- ./plugins:/app/plugins
restart: on-failure
environment:
- TZ=Asia/Shanghai
ports:
- 5300:5300 # used by the WebUI
- 2280-2290:2280-2290 # for message platform adapters to connect back (webhooks)


@@ -10,7 +10,7 @@ import traceback
class DingTalkClient:
def __init__(self, client_id: str, client_secret: str,robot_name:str,robot_code:str):
def __init__(self, client_id: str, client_secret: str,robot_name:str,robot_code:str,markdown_card:bool):
"""初始化 WebSocket 连接并自动启动"""
self.credential = dingtalk_stream.Credential(client_id, client_secret)
self.client = dingtalk_stream.DingTalkStreamClient(self.credential)
@@ -26,6 +26,7 @@ class DingTalkClient:
self.robot_name = robot_name
self.robot_code = robot_code
self.access_token_expiry_time = ''
self.markdown_card = markdown_card
@@ -128,7 +129,10 @@ class DingTalkClient:
async def send_message(self,content:str,incoming_message):
self.EchoTextHandler.reply_text(content,incoming_message)
if self.markdown_card:
self.EchoTextHandler.reply_markdown(title=self.robot_name+'的回答',text=content,incoming_message=incoming_message)
else:
self.EchoTextHandler.reply_text(content,incoming_message)
async def get_incoming_message(self):


libs/slack_api/api.py

@@ -0,0 +1,111 @@
import json
from quart import Quart, jsonify,request
from slack_sdk.web.async_client import AsyncWebClient
from .slackevent import SlackEvent
from typing import Callable, Dict, Any
from pkg.platform.types import events as platform_events, message as platform_message
class SlackClient():
def __init__(self,bot_token:str,signing_secret:str):
self.bot_token = bot_token
self.signing_secret = signing_secret
self.app = Quart(__name__)
self.client = AsyncWebClient(self.bot_token)
self.app.add_url_rule('/callback/command', 'handle_callback', self.handle_callback_request, methods=['GET', 'POST'])
self._message_handlers = {
"example":[],
}
self.bot_user_id = None # avoid the bot replying to its own messages
async def handle_callback_request(self):
try:
body = await request.get_data()
data = json.loads(body)
if 'type' in data:
if data['type'] == 'url_verification':
return data['challenge']
bot_user_id = data.get("event",{}).get("bot_id","")
if self.bot_user_id and bot_user_id == self.bot_user_id:
return jsonify({'status': 'ok'})
# handle direct messages
if data and data.get("event", {}).get("channel_type") in ["im"]:
event = SlackEvent.from_payload(data)
await self._handle_message(event)
return jsonify({'status': 'ok'})
# handle channel (group) messages
if data.get("event",{}).get("type") == 'app_mention':
data.setdefault("event", {})["channel_type"] = "channel"
event = SlackEvent.from_payload(data)
await self._handle_message(event)
return jsonify({'status':'ok'})
return jsonify({'status': 'ok'})
except Exception as e:
raise(e)
async def _handle_message(self, event: SlackEvent):
"""
Handle a message event.
"""
msg_type = event.type
if msg_type in self._message_handlers:
for handler in self._message_handlers[msg_type]:
await handler(event)
def on_message(self, msg_type: str):
"""注册消息类型处理器"""
def decorator(func: Callable[[platform_events.Event], None]):
if msg_type not in self._message_handlers:
self._message_handlers[msg_type] = []
self._message_handlers[msg_type].append(func)
return func
return decorator
async def send_message_to_channel(self,text:str,channel_id:str):
try:
response = await self.client.chat_postMessage(
channel=channel_id,
text=text
)
if self.bot_user_id is None and response.get("ok"):
self.bot_user_id = response["message"]["bot_id"]
return
except Exception as e:
raise e
async def send_message_to_one(self,text:str,user_id:str):
try:
response = await self.client.chat_postMessage(
channel = '@'+user_id,
text= text
)
if self.bot_user_id is None and response.get("ok"):
self.bot_user_id = response["message"]["bot_id"]
return
except Exception as e:
raise e
async def run_task(self, host: str, port: int, *args, **kwargs):
"""
Start the Quart app.
"""
await self.app.run_task(host=host, port=port, *args, **kwargs)
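
For reference, a minimal usage sketch of the new SlackClient; the token, secret, and port values below are placeholders and not part of this change set:

import asyncio

from libs.slack_api.api import SlackClient

client = SlackClient(bot_token="xoxb-placeholder", signing_secret="placeholder")

@client.on_message("im")
async def echo_dm(event):
    # echo a Slack direct message back to the sender
    await client.send_message_to_one(event.text, event.user_id)

# Slack must be configured to POST events to http://<host>:2288/callback/command
asyncio.run(client.run_task(host="0.0.0.0", port=2288))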


@@ -0,0 +1,91 @@
from typing import Dict, Any, Optional
class SlackEvent(dict):
@staticmethod
def from_payload(payload: Dict[str, Any]) -> Optional["SlackEvent"]:
try:
event = SlackEvent(payload)
return event
except KeyError:
return None
@property
def text(self) -> str:
if self.get("event", {}).get("channel_type") == "im":
blocks = self.get("event", {}).get("blocks", [])
if not blocks:
return ""
elements = blocks[0].get("elements", [])
if not elements:
return ""
elements = elements[0].get("elements", [])
text = ""
for el in elements:
if el.get("type") == "text":
text += el.get("text", "")
elif el.get("type") == "link":
text += el.get("url", "")
return text
if self.get("event",{}).get("channel_type") == 'channel':
message_text = ""
for block in self.get("event", {}).get("blocks", []):
if block.get("type") == "rich_text":
for element in block.get("elements", []):
if element.get("type") == "rich_text_section":
parts = []
for el in element.get("elements", []):
if el.get("type") == "text":
parts.append(el["text"])
elif el.get("type") == "link":
parts.append(el["url"])
message_text = "".join(parts)
return message_text
@property
def user_id(self) -> Optional[str]:
return self.get("event", {}).get("user","")
@property
def channel_id(self) -> Optional[str]:
return self.get("event", {}).get("channel","")
@property
def type(self) -> str:
""" message对应私聊app_mention对应频道at """
return self.get("event", {}).get("channel_type", "")
@property
def message_id(self) -> str:
return self.get("event_id","")
@property
def pic_url(self) -> str:
"""提取 Slack 事件中的图片 URL"""
files = self.get("event", {}).get("files", [])
if files:
return files[0].get("url_private", "")
return None
@property
def sender_name(self) -> str:
return self.get("event", {}).get("user","")
def __getattr__(self, key: str) -> Optional[Any]:
return self.get(key)
def __setattr__(self, key: str, value: Any) -> None:
self[key] = value
def __repr__(self) -> str:
return f"<SlackEvent {super().__repr__()}>"


@@ -35,6 +35,7 @@ required_deps = {
"telegram": "python-telegram-bot",
"certifi": "certifi",
"mcp": "mcp",
"telegramify_markdown":"telegramify-markdown",
}


@@ -0,0 +1,26 @@
from __future__ import annotations
from .. import migration
@migration.migration_class("tg-dingtalk-markdown", 38)
class TgDingtalkMarkdownMigration(migration.Migration):
"""迁移"""
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移"""
for adapter in self.ap.platform_cfg.data['platform-adapters']:
if adapter['adapter'] in ['dingtalk','telegram']:
if 'markdown_card' not in adapter:
return True
return False
async def run(self):
"""执行迁移"""
for adapter in self.ap.platform_cfg.data['platform-adapters']:
if adapter['adapter'] in ['dingtalk','telegram']:
if 'markdown_card' not in adapter:
adapter['markdown_card'] = False
await self.ap.platform_cfg.dump_config()
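
The effect of this migration on a single adapter entry, sketched with illustrative values:

before = {"adapter": "telegram", "enable": False, "token": ""}
after = {"adapter": "telegram", "enable": False, "token": "", "markdown_card": False}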


@@ -0,0 +1,30 @@
from __future__ import annotations
from .. import migration
@migration.migration_class("modelscope-config-completion", 4)
class ModelScopeConfigCompletionMigration(migration.Migration):
"""OpenAI配置迁移
"""
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移
"""
return 'modelscope-chat-completions' not in self.ap.provider_cfg.data['requester'] \
or 'modelscope' not in self.ap.provider_cfg.data['keys']
async def run(self):
"""执行迁移
"""
if 'modelscope-chat-completions' not in self.ap.provider_cfg.data['requester']:
self.ap.provider_cfg.data['requester']['modelscope-chat-completions'] = {
'base-url': 'https://api-inference.modelscope.cn/v1',
'args': {},
'timeout': 120,
}
if 'modelscope' not in self.ap.provider_cfg.data['keys']:
self.ap.provider_cfg.data['keys']['modelscope'] = []
await self.ap.provider_cfg.dump_config()


@@ -12,7 +12,7 @@ from ..migrations import m020_wecom_config, m021_lark_config, m022_lmstudio_conf
from ..migrations import m026_qqofficial_config, m027_wx_official_account_config, m028_aliyun_requester_config
from ..migrations import m029_dashscope_app_api_config, m030_lark_config_cmpl, m031_dingtalk_config, m032_volcark_config
from ..migrations import m033_dify_thinking_config, m034_gewechat_file_url_config, m035_wxoa_mode, m036_wxoa_loading_message
from ..migrations import m037_mcp_config
from ..migrations import m037_mcp_config, m038_tg_dingtalk_markdown, m039_modelscope_cfg_completion
@stage.stage_class("MigrationStage")


@@ -110,7 +110,7 @@ class PlatformManager:
if len(self.adapters) == 0:
self.ap.logger.warning('未运行平台适配器,请根据文档配置并启用平台适配器。')
async def write_back_config(self, adapter_name: str, adapter_inst: msadapter.MessagePlatformAdapter, config: dict):
def write_back_config(self, adapter_name: str, adapter_inst: msadapter.MessagePlatformAdapter, config: dict):
index = -2
for i, adapter in enumerate(self.adapters):
@@ -137,7 +137,7 @@ class PlatformManager:
**config
}
self.ap.platform_cfg.data['platform-adapters'][real_index] = new_cfg
await self.ap.platform_cfg.dump_config()
self.ap.platform_cfg.dump_config_sync()
async def send(self, event: platform_events.MessageEvent, msg: platform_message.MessageChain, adapter: msadapter.MessagePlatformAdapter):


@@ -131,7 +131,8 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
client_id=config["client_id"],
client_secret=config["client_secret"],
robot_name = config["robot_name"],
robot_code=config["robot_code"]
robot_code=config["robot_code"],
markdown_card=config["markdown_card"]
)
async def reply_message(


@@ -47,6 +47,17 @@ class GewechatMessageConverter(adapter.MessageConverter):
if not component.url:
pass
content_list.append({"type": "image", "image": component.url})
elif isinstance(component, platform_message.WeChatMiniPrograms):
content_list.append({"type": 'WeChatMiniPrograms', 'mini_app_id': component.mini_app_id, 'display_name': component.display_name,
'page_path': component.page_path, 'cover_img_url': component.image_url, 'title': component.title,
'user_name': component.user_name})
elif isinstance(component, platform_message.WeChatForwardMiniPrograms):
content_list.append({"type": 'WeChatForwardMiniPrograms', 'xml_data': component.xml_data, 'image_url': component.image_url})
elif isinstance(component, platform_message.WeChatEmoji):
content_list.append({'type': 'WeChatEmoji', 'emoji_md5': component.emoji_md5, 'emoji_size': component.emoji_size})
elif isinstance(component, platform_message.WeChatLink):
content_list.append({'type': 'WeChatLink', 'link_title': component.link_title, 'link_desc': component.link_desc,
'link_thumb_url': component.link_thumb_url, 'link_url': component.link_url})
elif isinstance(component, platform_message.Voice):
@@ -310,6 +321,9 @@ class GeWeChatAdapter(adapter.MessagePlatformAdapter):
async def gewechat_callback():
data = await quart.request.json
# print(json.dumps(data, indent=4, ensure_ascii=False))
self.ap.logger.debug(
f"Gewechat callback event: {data}"
)
if 'data' in data:
data['Data'] = data['data']
@@ -361,6 +375,19 @@ class GeWeChatAdapter(adapter.MessagePlatformAdapter):
elif msg['type'] == 'image':
self.bot.post_image(app_id=self.config['app_id'], to_wxid=target_id, img_url=msg["image"])
elif msg['type'] == 'WeChatMiniPrograms':
self.bot.post_mini_app(app_id=self.config['app_id'], to_wxid=target_id, mini_app_id=msg['mini_app_id']
, display_name=msg['display_name'], page_path=msg['page_path']
, cover_img_url=msg['cover_img_url'], title=msg['title'], user_name=msg['user_name'])
elif msg['type'] == 'WeChatForwardMiniPrograms':
self.bot.forward_mini_app(app_id=self.config['app_id'], to_wxid=target_id, xml=msg['xml_data'], cover_img_url=msg['image_url'])
elif msg['type'] == 'WeChatEmoji':
self.bot.post_emoji(app_id=self.config['app_id'], to_wxid=target_id,
emoji_md5=msg['emoji_md5'], emoji_size=msg['emoji_size'])
elif msg['type'] == 'WeChatLink':
self.bot.post_link(app_id=self.config['app_id'], to_wxid=target_id
,title=msg['link_title'], desc=msg['link_desc']
, link_url=msg['link_url'], thumb_url=msg['link_thumb_url'])
@@ -373,6 +400,7 @@ class GeWeChatAdapter(adapter.MessagePlatformAdapter):
content_list = await self.message_converter.yiri2target(message)
ats = [item["target"] for item in content_list if item["type"] == "at"]
target_id = message_source.source_platform_object["Data"]["FromUserName"]["string"]
for msg in content_list:
if msg["type"] == "text":
@@ -393,6 +421,22 @@ class GeWeChatAdapter(adapter.MessagePlatformAdapter):
content=msg["content"],
ats=",".join(ats)
)
elif msg['type'] == 'image':
self.bot.post_image(app_id=self.config['app_id'], to_wxid=target_id, img_url=msg["image"])
elif msg['type'] == 'WeChatMiniPrograms':
self.bot.post_mini_app(app_id=self.config['app_id'], to_wxid=target_id, mini_app_id=msg['mini_app_id']
, display_name=msg['display_name'], page_path=msg['page_path']
, cover_img_url=msg['cover_img_url'], title=msg['title'], user_name=msg['user_name'])
elif msg['type'] == 'WeChatForwardMiniPrograms':
self.bot.forward_mini_app(app_id=self.config['app_id'], to_wxid=target_id, xml=msg['xml_data'], cover_img_url=msg['image_url'])
elif msg['type'] == 'WeChatEmoji':
self.bot.post_emoji(app_id=self.config['app_id'], to_wxid=target_id,
emoji_md5=msg['emoji_md5'], emoji_size=msg['emoji_size'])
elif msg['type'] == 'WeChatLink':
self.bot.post_link(app_id=self.config['app_id'], to_wxid=target_id
, title=msg['link_title'], desc=msg['link_desc']
, link_url=msg['link_url'], thumb_url=msg['link_thumb_url'])
async def is_muted(self, group_id: int) -> bool:
pass
@@ -428,26 +472,28 @@ class GeWeChatAdapter(adapter.MessagePlatformAdapter):
self.config["token"]
)
app_id, error_msg = self.bot.login(self.config["app_id"])
if error_msg:
raise Exception(f"Gewechat 登录失败: {error_msg}")
def gewechat_login_process():
self.config["app_id"] = app_id
app_id, error_msg = self.bot.login(self.config["app_id"])
if error_msg:
raise Exception(f"Gewechat 登录失败: {error_msg}")
self.ap.logger.info(f"Gewechat 登录成功app_id: {app_id}")
self.config["app_id"] = app_id
await self.ap.platform_mgr.write_back_config('gewechat', self, self.config)
self.ap.logger.info(f"Gewechat 登录成功app_id: {app_id}")
# fetch the bot nickname
profile = self.bot.get_profile(self.config["app_id"])
self.bot_account_id = profile["data"]["nickName"]
self.ap.platform_mgr.write_back_config('gewechat', self, self.config)
# fetch the bot nickname
profile = self.bot.get_profile(self.config["app_id"])
self.bot_account_id = profile["data"]["nickName"]
time.sleep(2)
def thread_set_callback():
time.sleep(3)
ret = self.bot.set_callback(self.config["token"], self.config["callback_url"])
print('设置 Gewechat 回调:', ret)
threading.Thread(target=thread_set_callback).start()
threading.Thread(target=gewechat_login_process).start()
async def shutdown_trigger_placeholder():
while True:


@@ -328,6 +328,10 @@ class LarkAdapter(adapter.MessagePlatformAdapter):
try:
data = await quart.request.json
self.ap.logger.debug(
f"Lark callback event: {data}"
)
if 'encrypt' in data:
cipher = AESCipher(self.config['encrypt-key'])
data = cipher.decrypt_string(data['encrypt'])


@@ -0,0 +1,204 @@
from __future__ import annotations
import typing
import asyncio
import traceback
import datetime
from libs.slack_api.api import SlackClient
from pkg.platform.adapter import MessagePlatformAdapter
from pkg.platform.types import events as platform_events, message as platform_message
from libs.slack_api.slackevent import SlackEvent
from pkg.core import app
from .. import adapter
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...command.errors import ParamNotEnoughError
from ...utils import image
class SlackMessageConverter(adapter.MessageConverter):
@staticmethod
async def yiri2target(message_chain:platform_message.MessageChain):
content_list = []
for msg in message_chain:
if type(msg) is platform_message.Plain:
content_list.append({
"content":msg.text,
})
return content_list
@staticmethod
async def target2yiri(message:str,message_id:str,pic_url:str,bot:SlackClient):
yiri_msg_list = []
yiri_msg_list.append(
platform_message.Source(id=message_id,time=datetime.datetime.now())
)
if pic_url is not None:
base64_url = await image.get_slack_image_to_base64(pic_url=pic_url,bot_token=bot.bot_token)
yiri_msg_list.append(
platform_message.Image(base64=base64_url)
)
yiri_msg_list.append(platform_message.Plain(text=message))
chain = platform_message.MessageChain(yiri_msg_list)
return chain
class SlackEventConverter(adapter.EventConverter):
@staticmethod
async def yiri2target(event:platform_events.MessageEvent) -> SlackEvent:
return event.source_platform_object
@staticmethod
async def target2yiri(event:SlackEvent,bot:SlackClient):
yiri_chain = await SlackMessageConverter.target2yiri(
message=event.text,message_id=event.message_id,pic_url=event.pic_url,bot=bot
)
if event.type == 'channel':
yiri_chain.insert(0, platform_message.At(target="SlackBot"))
sender = platform_entities.GroupMember(
id = event.user_id,
member_name= str(event.sender_name),
permission= 'MEMBER',
group = platform_entities.Group(
id = event.channel_id,
name = 'MEMBER',
permission= platform_entities.Permission.Member
),
special_title='',
join_timestamp=0,
last_speak_timestamp=0,
mute_time_remaining=0
)
time = int(datetime.datetime.utcnow().timestamp())
return platform_events.GroupMessage(
sender = sender,
message_chain=yiri_chain,
time = time,
source_platform_object=event
)
if event.type == 'im':
return platform_events.FriendMessage(
sender=platform_entities.Friend(
id=event.user_id,
nickname = event.sender_name,
remark=""
),
message_chain = yiri_chain,
time = float(datetime.datetime.now().timestamp()),
source_platform_object=event,
)
class SlackAdapter(adapter.MessagePlatformAdapter):
bot: SlackClient
ap: app.Application
bot_account_id: str
message_converter: SlackMessageConverter = SlackMessageConverter()
event_converter: SlackEventConverter = SlackEventConverter()
config: dict
def __init__(self,config:dict,ap:app.Application):
self.config = config
self.ap = ap
required_keys = [
"bot_token",
"signing_secret",
]
missing_keys = [key for key in required_keys if key not in config]
if missing_keys:
raise ParamNotEnoughError("Slack机器人缺少相关配置项请查看文档或联系管理员")
self.bot = SlackClient(
bot_token=self.config["bot_token"],
signing_secret=self.config["signing_secret"]
)
async def reply_message(
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False,
):
slack_event = await SlackEventConverter.yiri2target(
message_source
)
content_list = await SlackMessageConverter.yiri2target(message)
for content in content_list:
if slack_event.type == 'channel':
await self.bot.send_message_to_channel(
content['content'],slack_event.channel_id
)
if slack_event.type == 'im':
await self.bot.send_message_to_one(
content['content'],slack_event.user_id
)
async def send_message(self, target_type: str, target_id: str, message: platform_message.MessageChain):
content_list = await SlackMessageConverter.yiri2target(message)
for content in content_list:
if target_type == 'person':
await self.bot.send_message_to_one(content['content'],target_id)
if target_type == 'group':
await self.bot.send_message_to_channel(content['content'],target_id)
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[
[platform_events.Event, adapter.MessagePlatformAdapter], None
],
):
async def on_message(event:SlackEvent):
self.bot_account_id = 'SlackBot'
try:
return await callback(
await self.event_converter.target2yiri(event,self.bot),self
)
except:
traceback.print_exc()
if event_type == platform_events.FriendMessage:
self.bot.on_message("im")(on_message)
elif event_type == platform_events.GroupMessage:
self.bot.on_message("channel")(on_message)
async def run_async(self):
async def shutdown_trigger_placeholder():
while True:
await asyncio.sleep(1)
await self.bot.run_task(
host="0.0.0.0",
port=self.config["port"],
shutdown_trigger=shutdown_trigger_placeholder,
)
async def kill(self) -> bool:
return False
async def unregister_listener(
self,
event_type: type,
callback: typing.Callable[[platform_events.Event, MessagePlatformAdapter], None],
):
return super().unregister_listener(event_type, callback)


@@ -0,0 +1,37 @@
apiVersion: v1
kind: MessagePlatformAdapter
metadata:
name: slack
label:
en_US: Slack API
zh_CN: Slack API
description:
en_US: Slack API
zh_CN: Slack API
spec:
config:
- name: bot_token
label:
en_US: Bot Token
zh_CN: 机器人令牌
type: string
required: true
default: ""
- name: signing_secret
label:
en_US: signing_secret
zh_CN: 密钥
type: string
required: true
default: ""
- name: port
label:
en_US: Port
zh_CN: 监听端口
type: int
required: true
default: 2288
execution:
python:
path: ./slack.py
attr: SlackAdapter


@@ -4,7 +4,7 @@ import telegram
import telegram.ext
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, CommandHandler, MessageHandler, filters
import telegramify_markdown
import typing
import asyncio
import traceback
@@ -86,9 +86,10 @@ class TelegramMessageConverter(adapter.MessageConverter):
if message.text:
message_text = message.text
message_components.extend(parse_message_text(message_text))
if message.photo:
message_components.extend(parse_message_text(message.caption))
if message.caption:
message_components.extend(parse_message_text(message.caption))
file = await message.photo[-1].get_file()
@@ -201,16 +202,20 @@ class TelegramAdapter(adapter.MessagePlatformAdapter):
for component in components:
if component['type'] == 'text':
content = telegramify_markdown.markdownify(
content= component['text'],
)
args = {
"chat_id": message_source.source_platform_object.effective_chat.id,
"text": component['text'],
"text": content,
}
if self.config['markdown_card'] is True:
args["parse_mode"] = "MarkdownV2"
if quote_origin:
args['reply_to_message_id'] = message_source.source_platform_object.message.id
if quote_origin:
args['reply_to_message_id'] = message_source.source_platform_object.message.id
await self.bot.send_message(**args)
await self.bot.send_message(**args)
async def is_muted(self, group_id: int) -> bool:
return False
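
A stripped-down sketch of the new Telegram markdown path; chat_id and the markdown_card flag are placeholder values, and the final send call is only indicated:

import telegramify_markdown

markdown_card = True  # adapter config flag added by this change
text = "**bold**, `code`, [link](https://example.com)"
content = telegramify_markdown.markdownify(content=text)  # escape/convert to Telegram MarkdownV2

args = {"chat_id": 123456789, "text": content}
if markdown_card:
    args["parse_mode"] = "MarkdownV2"
# await bot.send_message(**args)  # as done in TelegramAdapter.reply_message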


@@ -807,6 +807,56 @@ class File(MessageComponent):
"""文件名称。"""
size: int
"""文件大小。"""
def __str__(self):
return f'[文件]{self.name}'
# ================ 个人微信专用组件 ================
class WeChatMiniPrograms(MessageComponent):
"""小程序。个人微信专用组件。"""
type: str = 'WeChatMiniPrograms'
"""小程序id"""
mini_app_id: str
"""小程序归属用户id"""
user_name: str
"""小程序名称"""
display_name: typing.Optional[str] = ''
"""打开地址"""
page_path: typing.Optional[str] = ''
"""小程序标题"""
title: typing.Optional[str] = ''
"""首页图片"""
image_url: typing.Optional[str] = ''
class WeChatForwardMiniPrograms(MessageComponent):
"""转发小程序。个人微信专用组件。"""
type: str = 'WeChatForwardMiniPrograms'
"""xml数据"""
xml_data: str
"""首页图片"""
image_url: typing.Optional[str] = None
class WeChatEmoji(MessageComponent):
"""emoji表情。个人微信专用组件。"""
type: str = 'WeChatEmoji'
"""emojimd5"""
emoji_md5: str
"""emoji大小"""
emoji_size: int
class WeChatLink(MessageComponent):
"""发送链接。个人微信专用组件。"""
type: str = 'WeChatLink'
"""标题"""
link_title: str = ''
"""链接描述"""
link_desc: str = ''
"""链接地址"""
link_url: str = ''
"""链接略缩图"""
link_thumb_url: str = ''
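
Illustration only: constructing one of the new personal-WeChat components and the dict shape GewechatMessageConverter.yiri2target produces for it (all values are placeholders):

from pkg.platform.types import message as platform_message

link = platform_message.WeChatLink(
    link_title="LangBot",
    link_desc="LLM-based IM bot framework",
    link_url="https://example.com",
    link_thumb_url="https://example.com/thumb.png",
)
# yiri2target maps this component to:
# {'type': 'WeChatLink', 'link_title': 'LangBot', 'link_desc': 'LLM-based IM bot framework',
#  'link_thumb_url': 'https://example.com/thumb.png', 'link_url': 'https://example.com'}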


@@ -6,7 +6,7 @@ from . import entities, requester
from ...core import app
from ...discover import engine
from . import token
from .requesters import bailianchatcmpl, chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, volcarkchatcmpl, xaichatcmpl, zhipuaichatcmpl, lmstudiochatcmpl, siliconflowchatcmpl, volcarkchatcmpl
from .requesters import bailianchatcmpl, chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, volcarkchatcmpl, xaichatcmpl, zhipuaichatcmpl, lmstudiochatcmpl, siliconflowchatcmpl, volcarkchatcmpl, modelscopechatcmpl
FETCH_MODEL_LIST_URL = "https://api.qchatgpt.rockchin.top/api/v2/fetch/model_list"


@@ -25,7 +25,7 @@ class AnthropicMessages(requester.LLMAPIRequester):
async def initialize(self):
httpx_client = anthropic._base_client.AsyncHttpxClientWrapper(
base_url=self.ap.provider_cfg.data['requester']['anthropic-messages']['base-url'],
base_url=self.ap.provider_cfg.data['requester']['anthropic-messages']['base-url'].replace(' ', ''),
# cast to a valid type because mypy doesn't understand our type narrowing
timeout=typing.cast(httpx.Timeout, self.ap.provider_cfg.data['requester']['anthropic-messages']['timeout']),
limits=anthropic._constants.DEFAULT_CONNECTION_LIMITS,
@@ -59,9 +59,11 @@ class AnthropicMessages(requester.LLMAPIRequester):
if m.role == "system":
system_role_message = m
messages.pop(i)
break
if system_role_message:
messages.pop(i)
if isinstance(system_role_message, llm_entities.Message) \
and isinstance(system_role_message.content, str):
args['system'] = system_role_message.content


@@ -36,7 +36,7 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
self.client = openai.AsyncClient(
api_key="",
base_url=self.requester_cfg['base-url'],
base_url=self.requester_cfg['base-url'].replace(' ', ''),
timeout=self.requester_cfg['timeout'],
http_client=httpx.AsyncClient(
trust_env=True,
@@ -47,8 +47,9 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
async def _req(
self,
args: dict,
extra_body: dict = {},
) -> chat_completion.ChatCompletion:
return await self.client.chat.completions.create(**args)
return await self.client.chat.completions.create(**args, extra_body=extra_body)
async def _make_msg(
self,
@@ -73,7 +74,7 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
args = self.requester_cfg['args'].copy()
args = {}
args["model"] = use_model.name if use_model.model_name is None else use_model.model_name
if use_funcs:
@@ -99,7 +100,7 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
args["messages"] = messages
# send the request
resp = await self._req(args)
resp = await self._req(args, extra_body=self.requester_cfg['args'])
# process the result
message = await self._make_msg(resp)
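
Background for the extra_body change, as a self-contained sketch (the key, base URL, model, and the enable_search parameter are placeholders): the OpenAI SDK forwards extra_body fields into the request payload without client-side validation, so provider-specific arguments configured in the requester survive intact.

import asyncio
import openai

async def main():
    client = openai.AsyncClient(api_key="sk-placeholder", base_url="https://example.com/v1")
    resp = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "hi"}],
        extra_body={"enable_search": True},  # vendor-specific args from requester_cfg['args']
    )
    print(resp.choices[0].message.content)

asyncio.run(main())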


@@ -23,7 +23,7 @@ class DeepseekChatCompletions(chatcmpl.OpenAIChatCompletions):
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
args = self.requester_cfg['args'].copy()
args = {}
args["model"] = use_model.name if use_model.model_name is None else use_model.model_name
if use_funcs:
@@ -43,7 +43,7 @@ class DeepseekChatCompletions(chatcmpl.OpenAIChatCompletions):
args["messages"] = messages
# send the request
resp = await self._req(args)
resp = await self._req(args, extra_body=self.requester_cfg['args'])
if resp is None:
raise errors.RequesterError('接口返回为空,请确定模型提供商服务是否正常')


@@ -30,7 +30,7 @@ class GiteeAIChatCompletions(chatcmpl.OpenAIChatCompletions):
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
args = self.requester_cfg['args'].copy()
args = {}
args["model"] = use_model.name if use_model.model_name is None else use_model.model_name
if use_funcs:
@@ -46,7 +46,7 @@ class GiteeAIChatCompletions(chatcmpl.OpenAIChatCompletions):
args["messages"] = req_messages
resp = await self._req(args)
resp = await self._req(args, extra_body=self.requester_cfg['args'])
message = await self._make_msg(resp)


@@ -0,0 +1,207 @@
from __future__ import annotations
import asyncio
import typing
import json
import base64
from typing import AsyncGenerator
import openai
import openai.types.chat.chat_completion as chat_completion
import openai.types.chat.chat_completion_message_tool_call as chat_completion_message_tool_call
import httpx
import aiohttp
import async_lru
from .. import entities, errors, requester
from ....core import entities as core_entities, app
from ... import entities as llm_entities
from ...tools import entities as tools_entities
from ....utils import image
class ModelScopeChatCompletions(requester.LLMAPIRequester):
"""ModelScope ChatCompletion API 请求器"""
client: openai.AsyncClient
requester_cfg: dict
def __init__(self, ap: app.Application):
self.ap = ap
self.requester_cfg = self.ap.provider_cfg.data['requester']['modelscope-chat-completions']
async def initialize(self):
self.client = openai.AsyncClient(
api_key="",
base_url=self.requester_cfg['base-url'],
timeout=self.requester_cfg['timeout'],
http_client=httpx.AsyncClient(
trust_env=True,
timeout=self.requester_cfg['timeout']
)
)
async def _req(
self,
args: dict,
) -> chat_completion.ChatCompletion:
args["stream"] = True
chunk = None
pending_content = ""
tool_calls = []
resp_gen: openai.AsyncStream = await self.client.chat.completions.create(**args)
async for chunk in resp_gen:
# print(chunk)
if not chunk or not chunk.id or not chunk.choices or not chunk.choices[0] or not chunk.choices[0].delta:
continue
if chunk.choices[0].delta.content is not None:
pending_content += chunk.choices[0].delta.content
if chunk.choices[0].delta.tool_calls is not None:
for tool_call in chunk.choices[0].delta.tool_calls:
for tc in tool_calls:
if tc.index == tool_call.index:
tc.function.arguments += tool_call.function.arguments
break
else:
tool_calls.append(tool_call)
if chunk.choices[0].finish_reason is not None:
break
real_tool_calls = []
for tc in tool_calls:
function = chat_completion_message_tool_call.Function(
name=tc.function.name,
arguments=tc.function.arguments
)
real_tool_calls.append(chat_completion_message_tool_call.ChatCompletionMessageToolCall(
id=tc.id,
function=function,
type="function"
))
return chat_completion.ChatCompletion(
id=chunk.id,
object="chat.completion",
created=chunk.created,
choices=[
chat_completion.Choice(
index=0,
message=chat_completion.ChatCompletionMessage(
role="assistant",
content=pending_content,
tool_calls=real_tool_calls if len(real_tool_calls) > 0 else None
),
finish_reason=chunk.choices[0].finish_reason if hasattr(chunk.choices[0], 'finish_reason') and chunk.choices[0].finish_reason is not None else 'stop',
logprobs=chunk.choices[0].logprobs,
)
],
model=chunk.model,
service_tier=chunk.service_tier if hasattr(chunk, 'service_tier') else None,
system_fingerprint=chunk.system_fingerprint if hasattr(chunk, 'system_fingerprint') else None,
usage=chunk.usage if hasattr(chunk, 'usage') else None
) if chunk else None
return await self.client.chat.completions.create(**args)
async def _make_msg(
self,
chat_completion: chat_completion.ChatCompletion,
) -> llm_entities.Message:
chatcmpl_message = chat_completion.choices[0].message.dict()
# ensure the role field exists and is not None
if 'role' not in chatcmpl_message or chatcmpl_message['role'] is None:
chatcmpl_message['role'] = 'assistant'
message = llm_entities.Message(**chatcmpl_message)
return message
async def _closure(
self,
query: core_entities.Query,
req_messages: list[dict],
use_model: entities.LLMModelInfo,
use_funcs: list[tools_entities.LLMFunction] = None,
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
args = self.requester_cfg['args'].copy()
args["model"] = use_model.name if use_model.model_name is None else use_model.model_name
if use_funcs:
tools = await self.ap.tool_mgr.generate_tools_for_openai(use_funcs)
if tools:
args["tools"] = tools
# set the messages for this request
messages = req_messages.copy()
# check for vision content
for msg in messages:
if 'content' in msg and isinstance(msg["content"], list):
for me in msg["content"]:
if me["type"] == "image_base64":
me["image_url"] = {
"url": me["image_base64"]
}
me["type"] = "image_url"
del me["image_base64"]
args["messages"] = messages
# send the request
resp = await self._req(args)
# process the result
message = await self._make_msg(resp)
return message
async def call(
self,
query: core_entities.Query,
model: entities.LLMModelInfo,
messages: typing.List[llm_entities.Message],
funcs: typing.List[tools_entities.LLMFunction] = None,
) -> llm_entities.Message:
req_messages = [] # req_messages is used only within this class; external state is synced via query.messages
for m in messages:
msg_dict = m.dict(exclude_none=True)
content = msg_dict.get("content")
if isinstance(content, list):
# check whether every part of the content list is text
if all(isinstance(part, dict) and part.get("type") == "text" for part in content):
# merge all text parts into a single string
msg_dict["content"] = "\n".join(part["text"] for part in content)
req_messages.append(msg_dict)
try:
return await self._closure(query=query, req_messages=req_messages, use_model=model, use_funcs=funcs)
except asyncio.TimeoutError:
raise errors.RequesterError('请求超时')
except openai.BadRequestError as e:
if 'context_length_exceeded' in e.message:
raise errors.RequesterError(f'上文过长,请重置会话: {e.message}')
else:
raise errors.RequesterError(f'请求参数错误: {e.message}')
except openai.AuthenticationError as e:
raise errors.RequesterError(f'无效的 api-key: {e.message}')
except openai.NotFoundError as e:
raise errors.RequesterError(f'请求路径错误: {e.message}')
except openai.RateLimitError as e:
raise errors.RequesterError(f'请求过于频繁或余额不足: {e.message}')
except openai.APIError as e:
raise errors.RequesterError(f'请求错误: {e.message}')
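
The ModelScope endpoint is consumed in streaming mode and re-assembled into a single ChatCompletion above; the core accumulation pattern, reduced to a standalone sketch over an OpenAI-compatible client (the model id is a placeholder):

import openai

async def collect_text(client: openai.AsyncClient) -> str:
    stream = await client.chat.completions.create(
        model="Qwen/Qwen2.5-7B-Instruct",  # placeholder model id
        messages=[{"role": "user", "content": "hello"}],
        stream=True,
    )
    text = ""
    async for chunk in stream:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta
        if delta.content:
            text += delta.content
        if chunk.choices[0].finish_reason is not None:
            break
    return text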


@@ -0,0 +1,34 @@
apiVersion: v1
kind: LLMAPIRequester
metadata:
name: modelscope-chat-completions
label:
en_US: ModelScope
zh_CN: 魔搭社区
spec:
config:
- name: base-url
label:
en_US: Base URL
zh_CN: 基础 URL
type: string
required: true
default: "https://api-inference.modelscope.cn/v1"
- name: args
label:
en_US: Args
zh_CN: 附加参数
type: object
required: true
default: {}
- name: timeout
label:
en_US: Timeout
zh_CN: 超时时间
type: int
required: true
default: 120
execution:
python:
path: ./modelscopechatcmpl.py
attr: ModelScopeChatCompletions


@@ -25,7 +25,7 @@ class MoonshotChatCompletions(chatcmpl.OpenAIChatCompletions):
) -> llm_entities.Message:
self.client.api_key = use_model.token_mgr.get_token()
args = self.requester_cfg['args'].copy()
args = {}
args["model"] = use_model.name if use_model.model_name is None else use_model.model_name
if use_funcs:
@@ -48,7 +48,7 @@ class MoonshotChatCompletions(chatcmpl.OpenAIChatCompletions):
args["messages"] = messages
# send the request
resp = await self._req(args)
resp = await self._req(args, extra_body=self.requester_cfg['args'])
# process the result
message = await self._make_msg(resp)


@@ -225,6 +225,8 @@ class DifyServiceAPIRunner(runner.RequestRunner):
role="assistant",
content=[llm_entities.ContentElement.from_image_url(image_url)],
)
if chunk['event'] == 'error':
raise errors.DifyAPIError("dify 服务错误: " + chunk['message'])
query.session.using_conversation.uuid = chunk["conversation_id"]


@@ -1,4 +1,4 @@
semantic_version = "v3.4.11.1"
semantic_version = "v3.4.13"
debug_mode = False


@@ -212,4 +212,19 @@ async def extract_b64_and_format(image_base64_data: str) -> typing.Tuple[str, st
"""
base64_str = image_base64_data.split(',')[-1]
image_format = image_base64_data.split(':')[-1].split(';')[0].split('/')[-1]
return base64_str, image_format
return base64_str, image_format
async def get_slack_image_to_base64(pic_url:str, bot_token:str):
headers = {"Authorization": f"Bearer {bot_token}"}
try:
async with aiohttp.ClientSession() as session:
async with session.get(pic_url, headers=headers) as resp:
image_data = await resp.read()
return base64.b64encode(image_data).decode('utf-8')
except Exception as e:
raise(e)
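
A hypothetical call site for the new helper, assuming it lives in pkg/utils/image.py as the Slack adapter's relative import suggests; URL and token are placeholders:

from pkg.utils import image

async def fetch_slack_image_b64() -> str:
    return await image.get_slack_image_to_base64(
        pic_url="https://files.slack.com/files-pri/T000-F000/example.png",
        bot_token="xoxb-placeholder",
    )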


@@ -34,6 +34,7 @@ dashscope
python-telegram-bot
certifi
mcp
slack_sdk
telegramify-markdown
# indirect
taskgroup==0.0.0a4


@@ -71,29 +71,38 @@
"token": ""
},
{
"adapter":"officialaccount",
"adapter": "officialaccount",
"enable": false,
"token": "",
"EncodingAESKey":"",
"AppID":"",
"AppSecret":"",
"Mode":"drop",
"LoadingMessage":"AI正在思考中请发送任意内容获取回复。",
"EncodingAESKey": "",
"AppID": "",
"AppSecret": "",
"Mode": "drop",
"LoadingMessage": "AI正在思考中请发送任意内容获取回复。",
"host": "0.0.0.0",
"port": 2287
},
{
"adapter":"dingtalk",
"adapter": "dingtalk",
"enable": false,
"client_id":"",
"client_secret":"",
"robot_code":"",
"robot_name":""
"client_id": "",
"client_secret": "",
"robot_code": "",
"robot_name": "",
"markdown_card": false
},
{
"adapter":"telegram",
"adapter": "telegram",
"enable": false,
"token":""
"token": "",
"markdown_card": false
},
{
"adapter": "slack",
"enable": false,
"bot_token": "",
"signing_secret": "",
"port": 2288
}
],
"track-function-calls": true,


@@ -31,6 +31,9 @@
],
"volcark": [
"xxxxxxxx"
],
"modelscope": [
"xxxxxxxx"
]
},
"requester": {
@@ -95,12 +98,17 @@
"args": {},
"base-url": "https://ark.cn-beijing.volces.com/api/v3",
"timeout": 120
},
"modelscope-chat-completions": {
"base-url": "https://api-inference.modelscope.cn/v1",
"args": {},
"timeout": 120
}
},
"model": "gpt-4o",
"prompt-mode": "normal",
"prompt": {
"default": ""
"default": "You are a helpful assistant."
},
"runner": "local-agent",
"dify-service-api": {


@@ -446,6 +446,11 @@
"type": "string",
"default": "",
"description": "钉钉的robot_name"
},
"markdown_card": {
"type": "boolean",
"default": false,
"description": "是否使用 Markdown 卡片发送消息"
}
}
},
@@ -466,6 +471,41 @@
"type": "string",
"default": "",
"description": "Telegram 的 token"
},
"markdown_card": {
"type": "boolean",
"default": false,
"description": "是否使用 Markdown 卡片发送消息"
}
}
},
{
"title": "Slack 适配器",
"description": "用于接入 Slack",
"properties": {
"adapter": {
"type": "string",
"const": "slack"
},
"enable": {
"type": "boolean",
"default": false,
"description": "是否启用此适配器"
},
"bot_token": {
"type": "string",
"default": "",
"description": "Slack 的 bot_token"
},
"signing_secret": {
"type": "string",
"default": "",
"description": "Slack 的 signing_secret"
},
"port": {
"type": "integer",
"default": 2288,
"description": "监听的端口"
}
}
}


@@ -368,7 +368,7 @@
"type": "string",
"title": "默认情景预设",
"description": "设置默认情景预设。值为空字符串时,将不使用情景预设(人格)",
"default": ""
"default": "You are a helpful assistant."
}
},
"patternProperties": {