Compare commits

..

41 Commits

Author SHA1 Message Date
Junyan Qin
dbece6af7f chore: release v3.4.5.2 2025-02-04 00:17:46 +08:00
Junyan Qin (Chin)
b1e68182bd Merge pull request #1013 from RockChinQ/feat/marketplace
feat: add marketplace
2025-02-04 00:17:09 +08:00
Junyan Qin
45a64bea78 feat: add marketplace 2025-02-04 00:14:45 +08:00
Junyan Qin
aec8735388 chore: release v3.4.5.1 2025-02-03 01:36:21 +08:00
Junyan Qin (Chin)
1d91faaa49 fix(platform.json): discord enabled by default 2025-02-03 01:33:29 +08:00
Junyan Qin (Chin)
e1e21c0063 Update README.md 2025-02-02 17:12:48 +08:00
Junyan Qin
e775499080 chore: release v3.4.5 2025-02-02 17:10:12 +08:00
Junyan Qin
735aad5a91 doc: README 2025-02-02 16:32:40 +08:00
Junyan Qin
fb4e106f69 doc: update README 2025-02-02 16:31:32 +08:00
Junyan Qin (Chin)
e5659db535 Merge pull request #1002 from RockChinQ/feat/discord
feat: add `discord` adapter
2025-02-02 16:30:40 +08:00
Junyan Qin
5381e09a6c chore: config for discord 2025-02-02 16:28:21 +08:00
Junyan Qin
21f16ecd68 feat: discord adapter 2025-02-02 12:18:18 +08:00
Junyan Qin (Chin)
12fc76b326 Update README.md 2025-02-02 11:11:49 +08:00
Junyan Qin (Chin)
d7f87dd269 Update README.md 2025-02-02 00:10:41 +08:00
Junyan Qin (Chin)
56227f3713 Update README.md 2025-02-02 00:10:04 +08:00
Junyan Qin (Chin)
f492fee486 Merge pull request #1000 from RockChinQ/feat/siliconflow
feat: siliconflow provider
2025-02-01 14:19:43 +08:00
Junyan Qin
41a7814615 feat: siliconflow provider 2025-02-01 14:19:21 +08:00
Junyan Qin
8644f2c166 chore: update 2025-02-01 13:53:20 +08:00
Junyan Qin
e4a9365caf chore: update issue template 2025-02-01 12:13:20 +08:00
Junyan Qin (Chin)
9fc7af1295 Merge pull request #999 from RockChinQ/feat/lm-studio
feat: add supports for LM Studio
2025-02-01 12:01:45 +08:00
Junyan Qin
d0eeb2b304 feat: add supports for LM Studio 2025-02-01 12:01:07 +08:00
Junyan Qin
e4518ebcf1 chore: release v3.4.4.1 2025-01-31 17:42:12 +08:00
Junyan Qin
7604cefd0f fix: dify agent type not in schema 2025-01-30 22:07:03 +08:00
Junyan Qin
71729d4784 doc(README): update qq group number 2025-01-30 11:11:24 +08:00
Junyan Qin
1d16bc4968 perf: default value for requester args 2025-01-30 00:30:01 +08:00
Junyan Qin
de2bf79004 chore: release v3.4.4 2025-01-30 00:16:33 +08:00
Junyan Qin (Chin)
83ed7a9f38 Merge pull request #991 from RockChinQ/feat/lark
feat: add adapter `lark`
2025-01-30 00:15:27 +08:00
Junyan Qin
c326e72758 fix: migration not imported 2025-01-29 23:43:32 +08:00
Junyan Qin
ac9cef82cc chore: migrations 2025-01-29 23:41:29 +08:00
Junyan Qin
ea254d57d2 feat: lark adapter 2025-01-29 23:31:40 +08:00
Junyan Qin
a661f24ae0 doc: add contributors graph 2025-01-29 16:53:09 +08:00
Junyan Qin
afabf9256b chore: add model info deepseek-reasoner 2025-01-28 15:14:23 +08:00
Junyan Qin
74a8f9c9e2 fix: deps Crypto not checked 2025-01-27 21:33:10 +08:00
Junyan Qin
1d11e448f9 doc(README): update slogan 2025-01-26 10:15:14 +08:00
Junyan Qin
e3e23cbccb chore: release v3.4.3.2 2025-01-25 17:25:06 +08:00
Junyan Qin (Chin)
79132aa11d Merge pull request #988 from wangcham/bugfix-branch
fix: fixed the WeCom access_token issue
2025-01-25 17:23:19 +08:00
wangcham
7bb9e6e951 fix: fixed the WeCom access_token issue 2025-01-25 04:17:01 -05:00
Junyan Qin
37dc5b4135 chore: release v3.4.3.1 2025-01-23 13:32:51 +08:00
Junyan Qin
d588faf470 fix(httpx): deprecated proxies param 2025-01-23 13:32:27 +08:00
Junyan Qin
8b51a81158 doc(README): update qq group badge 2025-01-22 00:11:43 +08:00
Junyan Qin
9f125974bf doc: update qq group 2025-01-22 00:07:16 +08:00
30 changed files with 1536 additions and 36 deletions

@@ -12,6 +12,8 @@ body:
- Nakuru (go-cqhttp)
- aiocqhttp (connects via the OneBot protocol)
- qq-botpy (official QQ API)
- lark (Feishu)
- wecom (WeCom)
validations:
required: true
- type: input
@@ -30,9 +32,9 @@ body:
- type: textarea
attributes:
label: Steps to reproduce
description: How to reproduce the issue; the more detail the better
description: How to reproduce the issue; the more detail the better. Please attach all relevant configuration and metadata files (with sensitive information redacted)
validations:
required: false
required: true
- type: textarea
attributes:
label: Enabled plugins

@@ -15,21 +15,14 @@
<a href="https://docs.langbot.app/plugin/plugin-intro.html">Plugin Introduction</a>
<a href="https://github.com/RockChinQ/LangBot/issues/new?assignees=&labels=%E7%8B%AC%E7%AB%8B%E6%8F%92%E4%BB%B6&projects=&template=submit-plugin.yml&title=%5BPlugin%5D%3A+%E8%AF%B7%E6%B1%82%E7%99%BB%E8%AE%B0%E6%96%B0%E6%8F%92%E4%BB%B6">Submit a Plugin</a>
<div align="center">
😎 High stability, 🧩 extensible, 🦄 multimodal - an instant-messaging bot platform built on large language models 🤖
😎 High stability, 🧩 extensible, 🦄 multimodal - an LLM-native instant-messaging bot platform 🤖
</div>
<br/>
<a href="http://qm.qq.com/cgi-bin/qm/qr?_wv=1027&k=66-aWvn8cbP4c1ut_1YYkvvGVeEtyTH8&authKey=pTaKBK5C%2B8dFzQ4XlENf6MHTCLaHnlKcCRx7c14EeVVlpX2nRSaS8lJm8YeM4mCU&noverify=0&group_code=195992197">
<img alt="Static Badge" src="https://img.shields.io/badge/%E5%AE%98%E6%96%B9%E7%BE%A4-195992197-green">
</a>
<a href="https://qm.qq.com/q/PClALFK242">
<img alt="Static Badge" src="https://img.shields.io/badge/%E7%A4%BE%E5%8C%BA%E7%BE%A4-619154800-green">
</a>
<br/>
[![QQ Group](https://img.shields.io/badge/%E7%A4%BE%E5%8C%BAQQ%E7%BE%A4-1030838208-blue)](https://qm.qq.com/q/PF9OuQCCcM)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/RockChinQ/LangBot)](https://github.com/RockChinQ/LangBot/releases/latest)
![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.qchatgpt.rockchin.top%2Fapi%2Fv2%2Fview%2Frealtime%2Fcount_query%3Fminute%3D10080&query=%24.data.count&label=%E4%BD%BF%E7%94%A8%E9%87%8F%EF%BC%887%E6%97%A5%EF%BC%89)
<img src="https://img.shields.io/badge/python-3.10 | 3.11 | 3.12-blue.svg" alt="python">
@@ -39,7 +32,7 @@
## ✨ Features
- 💬 LLM chat and Agent: supports multiple LLMs, adapted for group and private chat, with multi-turn conversation, tool calling, and multimodal capabilities, and deep [Dify](https://dify.ai) integration. Currently supports QQ and QQ Channels; WeChat, WhatsApp, Discord, and other platforms are planned.
- 💬 LLM chat and Agent: supports multiple LLMs, adapted for group and private chat, with multi-turn conversation, tool calling, and multimodal capabilities, and deep [Dify](https://dify.ai) integration. Currently supports QQ, QQ Channels, WeCom, Feishu, and Discord; personal WeChat, WhatsApp, Telegram, and other platforms are planned.
- 🛠️ High stability, full-featured: native access control, rate limiting, and sensitive-word filtering; simple configuration, multiple deployment options.
- 🧩 Plugin ecosystem, active community: event-driven and component-extension plugin mechanisms; a rich ecosystem with dozens of [plugins](https://docs.langbot.app/plugin/plugin-intro.html) already available.
- 😻 [New] Web admin panel: manage LangBot instances from the browser; see the [docs](https://docs.langbot.app/webui/intro.html) for supported features.
@@ -87,8 +80,12 @@
| Platform | Status | Notes |
| --- | --- | --- |
| QQ personal account | ✅ | Private and group chat on QQ personal accounts |
| Official QQ bot | ✅ | QQ Channel bot, supporting channels, private chat, and group chat |
| Official QQ bot | ✅ | Official QQ bot, supporting channels, private chat, and group chat |
| WeCom | ✅ | |
| Feishu | ✅ | |
| Discord | ✅ | |
| Personal WeChat | 🚧 | |
| WhatsApp | 🚧 | |
| DingTalk | 🚧 | |
🚧: under development
@@ -104,5 +101,17 @@
| [xAI](https://x.ai/) | ✅ | |
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOps platform |
| [Ollama](https://ollama.com/) | ✅ | Local LLM management platform |
| [Ollama](https://ollama.com/) | ✅ | Local LLM runtime platform |
| [LMStudio](https://lmstudio.ai/) | ✅ | Local LLM runtime platform |
| [GiteeAI](https://ai.gitee.com/) | ✅ | LLM API aggregation platform |
| [SiliconFlow](https://siliconflow.cn/) | ✅ | LLM aggregation platform |
| [Alibaba Cloud Bailian](https://bailian.console.aliyun.com/) | ✅ | LLM aggregation platform |
## 😘 Community Contributions
LangBot would not be possible without the following contributors and everyone else in the community; contributions and feedback of any kind are welcome.
<a href="https://github.com/RockChinQ/LangBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=RockChinQ/LangBot" />
</a>

@@ -110,8 +110,17 @@ class WecomClient():
"enable_duplicate_check": 0,
"duplicate_check_interval": 1800
}
response = await client.post(url,json=params)
data = response.json()
try:
response = await client.post(url,json=params)
data = response.json()
except Exception as e:
raise Exception("Failed to send image: "+str(e))
# WeCom error codes 40014 and 42001 indicate an access_token problem
if data['errcode'] == 40014 or data['errcode'] == 42001:
self.access_token = await self.get_access_token(self.secret)
return await self.send_image(user_id,agent_id,media_id)
if data['errcode'] != 0:
raise Exception("Failed to send image: "+str(data))
@@ -136,7 +145,9 @@ class WecomClient():
}
response = await client.post(url,json=params)
data = response.json()
if data['errcode'] == 40014 or data['errcode'] == 42001:
self.access_token = await self.get_access_token(self.secret)
return await self.send_private_msg(user_id,agent_id,content)
if data['errcode'] != 0:
raise Exception("Failed to send message: "+str(data))
@@ -286,11 +297,14 @@ class WecomClient():
async with httpx.AsyncClient() as client:
response = await client.post(url, headers=headers, content=body)
data = response.json()
if data['errcode'] == 40014 or data['errcode'] == 42001:
self.access_token = await self.get_access_token(self.secret)
return await self.upload_to_work(image)  # return the retried call; falling through would re-check the stale `data`
if data.get('errcode', 0) != 0:
raise Exception("failed to upload file")
return data.get('media_id')
media_id = data.get('media_id')
return media_id
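The refresh-and-retry flow used in `send_image`, `send_private_msg`, and `upload_to_work` is the same pattern each time. A minimal standalone sketch of it (all names here, such as `TokenClient` and `fake_api`, are illustrative and not part of the codebase):

```python
# Sketch of the refresh-and-retry pattern: on error codes 40014/42001
# (invalid or expired access_token), refresh the token once and re-issue
# the same call by returning the recursive invocation.
RETRYABLE = {40014, 42001}

class TokenClient:
    def __init__(self):
        self.access_token = "expired"
        self.calls = 0

    def get_access_token(self):
        # Stand-in for the real token endpoint.
        return "fresh"

    def fake_api(self, payload):
        # Hypothetical endpoint: fails until a fresh token is supplied.
        self.calls += 1
        if self.access_token != "fresh":
            return {"errcode": 42001, "errmsg": "invalid access_token"}
        return {"errcode": 0, "payload": payload}

    def send(self, payload):
        data = self.fake_api(payload)
        if data["errcode"] in RETRYABLE:
            self.access_token = self.get_access_token()
            return self.send(payload)  # retry with the refreshed token
        if data["errcode"] != 0:
            raise Exception("Failed to send: " + str(data))
        return data
```

Note the recursion is unbounded if the refresh itself keeps failing; production code might cap the retry count.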
async def download_image_to_bytes(self,url:str) -> bytes:
async with httpx.AsyncClient() as client:

@@ -1,5 +1,7 @@
import pip
# Check dependencies, in case the user has not installed them
# Key: the import name; value: the package name
required_deps = {
"requests": "requests",
"openai": "openai",
@@ -23,6 +25,9 @@ required_deps = {
"aioshutil": "aioshutil",
"argon2": "argon2-cffi",
"jwt": "pyjwt",
"Crypto": "pycryptodome",
"lark_oapi": "lark-oapi",
"discord": "discord.py"
}
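The dictionary above maps *import* names to *package* names, which differ for several entries (`Crypto` installs as `pycryptodome`, `discord` as `discord.py`). A sketch of how such a map can be checked with only the standard library rather than pip internals (`find_missing` and the sample entries are illustrative, not the project's actual code):

```python
import importlib.util

# Key: import name; value: PyPI package name to suggest when missing.
deps = {
    "json": "json",  # stdlib module, stands in for an installed dependency
    "surely_not_installed_xyz": "some-package",
}

def find_missing(deps: dict) -> list:
    """Return the PyPI names of dependencies whose import name cannot be resolved."""
    return [pkg for mod, pkg in deps.items() if importlib.util.find_spec(mod) is None]
```

`find_spec` locates a module without importing it, so the check stays cheap even for heavy dependencies.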

@@ -0,0 +1,29 @@
from __future__ import annotations
from .. import migration
@migration.migration_class("lark-config", 21)
class LarkConfigMigration(migration.Migration):
"""迁移"""
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移"""
for adapter in self.ap.platform_cfg.data['platform-adapters']:
if adapter['adapter'] == 'lark':
return False
return True
async def run(self):
"""执行迁移"""
self.ap.platform_cfg.data['platform-adapters'].append({
"adapter": "lark",
"enable": False,
"app_id": "cli_abcdefgh",
"app_secret": "XXXXXXXXXX",
"bot_name": "LangBot"
})
await self.ap.platform_cfg.dump_config()
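Each adapter-config migration in this changeset follows the same shape: a guard (`need_migrate`) that checks whether the config already contains the entry, and a `run` that appends it. A synchronous sketch with a plain dict standing in for `self.ap.platform_cfg.data` (the class and helper names here are illustrative):

```python
# Minimal sketch of the migration pattern: skip if the adapter entry
# already exists, otherwise append a disabled default entry.
class LarkConfigMigrationSketch:
    def __init__(self, config: dict):
        self.config = config

    def need_migrate(self) -> bool:
        # True only when no lark adapter entry exists yet.
        return all(a["adapter"] != "lark" for a in self.config["platform-adapters"])

    def run(self):
        self.config["platform-adapters"].append({
            "adapter": "lark",
            "enable": False,
            "app_id": "cli_abcdefgh",
            "app_secret": "XXXXXXXXXX",
            "bot_name": "LangBot",
        })
```

Running the migration makes `need_migrate` false, so a second pass is a no-op; this is what makes the migrations safe to re-run on every startup.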

@@ -0,0 +1,23 @@
from __future__ import annotations
from .. import migration
@migration.migration_class("lmstudio-config", 22)
class LmStudioConfigMigration(migration.Migration):
"""迁移"""
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移"""
return 'lmstudio-chat-completions' not in self.ap.provider_cfg.data['requester']
async def run(self):
"""执行迁移"""
self.ap.provider_cfg.data['requester']['lmstudio-chat-completions'] = {
"base-url": "http://127.0.0.1:1234/v1",
"args": {},
"timeout": 120
}
await self.ap.provider_cfg.dump_config()

@@ -0,0 +1,27 @@
from __future__ import annotations
from .. import migration
@migration.migration_class("siliconflow-config", 23)
class SiliconFlowConfigMigration(migration.Migration):
"""迁移"""
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移"""
return 'siliconflow-chat-completions' not in self.ap.provider_cfg.data['requester']
async def run(self):
"""执行迁移"""
self.ap.provider_cfg.data['keys']['siliconflow'] = [
"xxxxxxx"
]
self.ap.provider_cfg.data['requester']['siliconflow-chat-completions'] = {
"base-url": "https://api.siliconflow.cn/v1",
"args": {},
"timeout": 120
}
await self.ap.provider_cfg.dump_config()

@@ -0,0 +1,28 @@
from __future__ import annotations
from .. import migration
@migration.migration_class("discord-config", 24)
class DiscordConfigMigration(migration.Migration):
"""迁移"""
async def need_migrate(self) -> bool:
"""判断当前环境是否需要运行此迁移"""
for adapter in self.ap.platform_cfg.data['platform-adapters']:
if adapter['adapter'] == 'discord':
return False
return True
async def run(self):
"""执行迁移"""
self.ap.platform_cfg.data['platform-adapters'].append({
"adapter": "discord",
"enable": False,
"client_id": "1234567890",
"token": "XXXXXXXXXX"
})
await self.ap.platform_cfg.dump_config()

@@ -8,7 +8,7 @@ from ..migrations import m001_sensitive_word_migration, m002_openai_config_migra
from ..migrations import m005_deepseek_cfg_completion, m006_vision_config, m007_qcg_center_url, m008_ad_fixwin_config_migrate, m009_msg_truncator_cfg
from ..migrations import m010_ollama_requester_config, m011_command_prefix_config, m012_runner_config, m013_http_api_config, m014_force_delay_config
from ..migrations import m015_gitee_ai_config, m016_dify_service_api, m017_dify_api_timeout_params, m018_xai_config, m019_zhipuai_config
from ..migrations import m020_wecom_config
from ..migrations import m020_wecom_config, m021_lark_config, m022_lmstudio_config, m023_siliconflow_config, m024_discord_config
@stage.stage_class("MigrationStage")

@@ -37,7 +37,7 @@ class PlatformManager:
async def initialize(self):
from .sources import nakuru, aiocqhttp, qqbotpy,wecom
from .sources import nakuru, aiocqhttp, qqbotpy, wecom, lark, discord
async def on_friend_message(event: platform_events.FriendMessage, adapter: msadapter.MessageSourceAdapter):

@@ -0,0 +1,264 @@
from __future__ import annotations
import discord
import typing
import asyncio
import traceback
import time
import re
import base64
import uuid
import json
import os
import io
import datetime
import aiohttp
from .. import adapter
from ...pipeline.longtext.strategies import forward
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...utils import image
class DiscordMessageConverter(adapter.MessageConverter):
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain
) -> typing.Tuple[str, typing.List[discord.File]]:
for ele in message_chain:
if isinstance(ele, platform_message.At):
message_chain.remove(ele)
break
text_string = ""
image_files = []
for ele in message_chain:
if isinstance(ele, platform_message.Image):
image_bytes = None
if ele.base64:
image_bytes = base64.b64decode(ele.base64)
elif ele.url:
async with aiohttp.ClientSession() as session:
async with session.get(ele.url) as response:
image_bytes = await response.read()
elif ele.path:
with open(ele.path, "rb") as f:
image_bytes = f.read()
image_files.append(discord.File(fp=io.BytesIO(image_bytes), filename=f"{uuid.uuid4()}.png"))  # discord.File expects a file-like object, not raw bytes
elif isinstance(ele, platform_message.Plain):
text_string += ele.text
return text_string, image_files
@staticmethod
async def target2yiri(
message: discord.Message
) -> platform_message.MessageChain:
lb_msg_list = []
msg_create_time = datetime.datetime.fromtimestamp(
int(message.created_at.timestamp())
)
lb_msg_list.append(
platform_message.Source(id=message.id, time=msg_create_time)
)
element_list = []
def text_element_recur(text_ele: str) -> list[platform_message.MessageComponent]:
if text_ele == "":
return []
# <@1234567890>
# @everyone
# @here
at_pattern = re.compile(r"(@everyone|@here|<@[\d]+>)")
at_matches = at_pattern.findall(text_ele)
if len(at_matches) > 0:
mid_at = at_matches[0]
text_split = text_ele.split(mid_at)
mid_at_component = []
if mid_at == "@everyone" or mid_at == "@here":
mid_at_component.append(platform_message.AtAll())
else:
mid_at_component.append(platform_message.At(target=mid_at[2:-1]))
return text_element_recur(text_split[0]) + \
mid_at_component + \
text_element_recur(text_split[1])
else:
return [platform_message.Plain(text=text_ele)]
element_list.extend(text_element_recur(message.content))
# attachments
for attachment in message.attachments:
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.get(attachment.url) as response:
image_data = await response.read()
image_base64 = base64.b64encode(image_data).decode("utf-8")
image_format = response.headers["Content-Type"]
element_list.append(platform_message.Image(base64=f"data:{image_format};base64,{image_base64}"))
return platform_message.MessageChain(element_list)
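The `text_element_recur` helper above splits raw Discord content into alternating plain-text and mention components. A standalone sketch of the same recursion, emitting plain tuples instead of `platform_message` components; it uses `str.partition` so exactly one occurrence is consumed per step, a small design choice that (unlike a two-element `split`) also preserves text when the same mention appears twice:

```python
import re

# Matches @everyone, @here, and user mentions of the form <@1234567890>.
AT_PATTERN = re.compile(r"(@everyone|@here|<@\d+>)")

def split_mentions(text: str) -> list:
    """Recursively split text into ('plain', s), ('at', user_id), ('at_all',) tuples."""
    if text == "":
        return []
    matches = AT_PATTERN.findall(text)
    if not matches:
        return [("plain", text)]
    first = matches[0]
    left, _, right = text.partition(first)  # consume only the first occurrence
    mid = [("at_all",)] if first in ("@everyone", "@here") else [("at", first[2:-1])]
    return split_mentions(left) + mid + split_mentions(right)
```

The `first[2:-1]` slice strips the `<@` and `>` delimiters to leave the numeric user id, mirroring the adapter's `At(target=mid_at[2:-1])`.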
class DiscordEventConverter(adapter.EventConverter):
@staticmethod
async def yiri2target(
event: platform_events.Event
) -> discord.Message:
pass
@staticmethod
async def target2yiri(
event: discord.Message
) -> platform_events.Event:
message_chain = await DiscordMessageConverter.target2yiri(event)
if type(event.channel) == discord.DMChannel:
return platform_events.FriendMessage(
sender=platform_entities.Friend(
id=event.author.id,
nickname=event.author.name,
remark=event.channel.id,
),
message_chain=message_chain,
time=event.created_at.timestamp(),
source_platform_object=event,
)
elif type(event.channel) == discord.TextChannel:
return platform_events.GroupMessage(
sender=platform_entities.GroupMember(
id=event.author.id,
member_name=event.author.name,
permission=platform_entities.Permission.Member,
group=platform_entities.Group(
id=event.channel.id,
name=event.channel.name,
permission=platform_entities.Permission.Member,
),
special_title="",
join_timestamp=0,
last_speak_timestamp=0,
mute_time_remaining=0,
),
message_chain=message_chain,
time=event.created_at.timestamp(),
source_platform_object=event,
)
@adapter.adapter_class("discord")
class DiscordMessageSourceAdapter(adapter.MessageSourceAdapter):
bot: discord.Client
bot_account_id: str  # Used in the pipeline to tell whether an @-mention targets this bot; bot_name is used directly as the identifier
config: dict
ap: app.Application
message_converter: DiscordMessageConverter = DiscordMessageConverter()
event_converter: DiscordEventConverter = DiscordEventConverter()
listeners: typing.Dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, adapter.MessageSourceAdapter], None],
] = {}
def __init__(self, config: dict, ap: app.Application):
self.config = config
self.ap = ap
self.bot_account_id = self.config["client_id"]
adapter_self = self
class MyClient(discord.Client):
async def on_message(self: discord.Client, message: discord.Message):
if message.author.id == self.user.id or message.author.bot:
return
lb_event = await adapter_self.event_converter.target2yiri(message)
await adapter_self.listeners[type(lb_event)](lb_event, adapter_self)
intents = discord.Intents.default()
intents.message_content = True
args = {}
if os.getenv("http_proxy"):
args["proxy"] = os.getenv("http_proxy")
self.bot = MyClient(intents=intents, **args)
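The constructor passes `proxy` to the client only when `http_proxy` is set, keeping the call free of empty arguments. That conditional-kwargs idiom in isolation (`proxy_kwargs` is an illustrative helper name, not part of the adapter):

```python
import os

def proxy_kwargs(env=None) -> dict:
    """Return {'proxy': url} when http_proxy is set in the environment, else {}."""
    env = os.environ if env is None else env
    args = {}
    if env.get("http_proxy"):
        args["proxy"] = env["http_proxy"]
    return args
```

The resulting dict is then unpacked with `**` at the call site, so an unset variable contributes nothing rather than an explicit `proxy=None`.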
async def send_message(
self, target_type: str, target_id: str, message: platform_message.MessageChain
):
pass
async def reply_message(
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False,
):
msg_to_send, image_files = await self.message_converter.yiri2target(message)
assert isinstance(message_source.source_platform_object, discord.Message)
args = {
"content": msg_to_send,
}
if len(image_files) > 0:
args["files"] = image_files
if quote_origin:
args["reference"] = message_source.source_platform_object
if message.has(platform_message.At):
args["mention_author"] = True
await message_source.source_platform_object.channel.send(**args)
async def is_muted(self, group_id: int) -> bool:
return False
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessageSourceAdapter], None],
):
self.listeners[event_type] = callback
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[[platform_events.Event, adapter.MessageSourceAdapter], None],
):
self.listeners.pop(event_type)
async def run_async(self):
async with self.bot:
await self.bot.start(self.config["token"], reconnect=True)
async def kill(self) -> bool:
await self.bot.close()
return True

@@ -0,0 +1,404 @@
from __future__ import annotations
import lark_oapi
import typing
import asyncio
import traceback
import time
import re
import base64
import uuid
import json
import datetime
import aiohttp
import lark_oapi.ws.exception
from lark_oapi.api.im.v1 import *
from .. import adapter
from ...pipeline.longtext.strategies import forward
from ...core import app
from ..types import message as platform_message
from ..types import events as platform_events
from ..types import entities as platform_entities
from ...utils import image
class LarkMessageConverter(adapter.MessageConverter):
@staticmethod
async def yiri2target(
message_chain: platform_message.MessageChain, api_client: lark_oapi.Client
) -> list:
message_elements = []
pending_paragraph = []
for msg in message_chain:
if isinstance(msg, platform_message.Plain):
pending_paragraph.append({"tag": "md", "text": msg.text})
elif isinstance(msg, platform_message.At):
pending_paragraph.append(
{"tag": "at", "user_id": msg.target, "style": []}
)
elif isinstance(msg, platform_message.AtAll):
pending_paragraph.append({"tag": "at", "user_id": "all", "style": []})
elif isinstance(msg, platform_message.Image):
image_bytes = None
if msg.base64:
image_bytes = base64.b64decode(msg.base64)
elif msg.url:
async with aiohttp.ClientSession() as session:
async with session.get(msg.url) as response:
image_bytes = await response.read()
elif msg.path:
with open(msg.path, "rb") as f:
image_bytes = f.read()
request: CreateImageRequest = (
CreateImageRequest.builder()
.request_body(
CreateImageRequestBody.builder()
.image_type("message")
.image(image_bytes)
.build()
)
.build()
)
response: CreateImageResponse = await api_client.im.v1.image.acreate(
request
)
if not response.success():
raise Exception(
f"client.im.v1.image.create failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}"
)
image_key = response.data.image_key
message_elements.append(pending_paragraph)
message_elements.append(
[
{
"tag": "img",
"image_key": image_key,
}
]
)
pending_paragraph = []
if pending_paragraph:
message_elements.append(pending_paragraph)
return message_elements
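In `yiri2target` above, text and @-mention elements accumulate in `pending_paragraph` and are flushed into `message_elements` whenever an image appears, because a Lark post body interleaves paragraph blocks with image blocks. The batching on its own, sketched with plain dicts (`batch_elements` is an illustrative name):

```python
# Sketch of the flush-on-image batching: text/at elements accumulate in a
# pending paragraph, which is flushed whenever an image element appears.
def batch_elements(chain: list) -> list:
    elements, pending = [], []
    for item in chain:
        if item["tag"] == "img":
            elements.append(pending)   # flush text collected so far
            elements.append([item])    # image gets its own block
            pending = []
        else:
            pending.append(item)
    if pending:                        # trailing text after the last image
        elements.append(pending)
    return elements
```

As in the adapter, an image that arrives before any text flushes an empty paragraph; a stricter version could skip empty `pending` lists before appending.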
@staticmethod
async def target2yiri(
message: lark_oapi.api.im.v1.model.event_message.EventMessage,
api_client: lark_oapi.Client,
) -> platform_message.MessageChain:
message_content = json.loads(message.content)
lb_msg_list = []
msg_create_time = datetime.datetime.fromtimestamp(
int(message.create_time) / 1000
)
lb_msg_list.append(
platform_message.Source(id=message.message_id, time=msg_create_time)
)
if message.message_type == "text":
element_list = []
def text_element_recur(text_ele: dict) -> list[dict]:
if text_ele["text"] == "":
return []
at_pattern = re.compile(r"@_user_[\d]+")
at_matches = at_pattern.findall(text_ele["text"])
name_mapping = {}
for match in at_matches:
for mention in message.mentions:
if mention.key == match:
name_mapping[match] = mention.name
break
if len(name_mapping.keys()) == 0:
return [text_ele]
# Handle only the first match; recurse for the rest
text_split = text_ele["text"].split(list(name_mapping.keys())[0])
new_list = []
left_text = text_split[0]
right_text = text_split[1]
new_list.extend(
text_element_recur({"tag": "text", "text": left_text, "style": []})
)
new_list.append(
{
"tag": "at",
"user_id": list(name_mapping.keys())[0],
"user_name": name_mapping[list(name_mapping.keys())[0]],
"style": [],
}
)
new_list.extend(
text_element_recur({"tag": "text", "text": right_text, "style": []})
)
return new_list
element_list = text_element_recur(
{"tag": "text", "text": message_content["text"], "style": []}
)
message_content = {"title": "", "content": element_list}
elif message.message_type == "post":
new_list = []
for ele in message_content["content"]:
if type(ele) is dict:
new_list.append(ele)
elif type(ele) is list:
new_list.extend(ele)
message_content["content"] = new_list
elif message.message_type == "image":
message_content["content"] = [
{"tag": "img", "image_key": message_content["image_key"], "style": []}
]
for ele in message_content["content"]:
if ele["tag"] == "text":
lb_msg_list.append(platform_message.Plain(text=ele["text"]))
elif ele["tag"] == "at":
lb_msg_list.append(platform_message.At(target=ele["user_name"]))
elif ele["tag"] == "img":
image_key = ele["image_key"]
request: GetMessageResourceRequest = (
GetMessageResourceRequest.builder()
.message_id(message.message_id)
.file_key(image_key)
.type("image")
.build()
)
response: GetMessageResourceResponse = (
await api_client.im.v1.message_resource.aget(request)
)
if not response.success():
raise Exception(
f"client.im.v1.message_resource.get failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}"
)
image_bytes = response.file.read()
image_base64 = base64.b64encode(image_bytes).decode()
image_format = response.raw.headers["content-type"]
lb_msg_list.append(
platform_message.Image(
base64=f"data:{image_format};base64,{image_base64}"
)
)
return platform_message.MessageChain(lb_msg_list)
class LarkEventConverter(adapter.EventConverter):
@staticmethod
async def yiri2target(
event: platform_events.MessageEvent,
) -> lark_oapi.im.v1.P2ImMessageReceiveV1:
pass
@staticmethod
async def target2yiri(
event: lark_oapi.im.v1.P2ImMessageReceiveV1, api_client: lark_oapi.Client
) -> platform_events.Event:
message_chain = await LarkMessageConverter.target2yiri(
event.event.message, api_client
)
if event.event.message.chat_type == "p2p":
return platform_events.FriendMessage(
sender=platform_entities.Friend(
id=event.event.sender.sender_id.open_id,
nickname=event.event.sender.sender_id.union_id,
remark="",
),
message_chain=message_chain,
time=event.event.message.create_time,
)
elif event.event.message.chat_type == "group":
return platform_events.GroupMessage(
sender=platform_entities.GroupMember(
id=event.event.sender.sender_id.open_id,
member_name=event.event.sender.sender_id.union_id,
permission=platform_entities.Permission.Member,
group=platform_entities.Group(
id=event.event.message.chat_id,
name="",
permission=platform_entities.Permission.Member,
),
special_title="",
join_timestamp=0,
last_speak_timestamp=0,
mute_time_remaining=0,
),
message_chain=message_chain,
time=event.event.message.create_time,
)
@adapter.adapter_class("lark")
class LarkMessageSourceAdapter(adapter.MessageSourceAdapter):
bot: lark_oapi.ws.Client
api_client: lark_oapi.Client
bot_account_id: str  # Used in the pipeline to tell whether an @-mention targets this bot; bot_name is used directly as the identifier
lark_tenant_key: str  # Feishu tenant key
message_converter: LarkMessageConverter = LarkMessageConverter()
event_converter: LarkEventConverter = LarkEventConverter()
listeners: typing.Dict[
typing.Type[platform_events.Event],
typing.Callable[[platform_events.Event, adapter.MessageSourceAdapter], None],
] = {}
config: dict
ap: app.Application
def __init__(self, config: dict, ap: app.Application):
self.config = config
self.ap = ap
async def on_message(event: lark_oapi.im.v1.P2ImMessageReceiveV1):
lb_event = await self.event_converter.target2yiri(event, self.api_client)
await self.listeners[type(lb_event)](lb_event, self)
def sync_on_message(event: lark_oapi.im.v1.P2ImMessageReceiveV1):
asyncio.create_task(on_message(event))
event_handler = (
lark_oapi.EventDispatcherHandler.builder("", "")
.register_p2_im_message_receive_v1(sync_on_message)
.build()
)
self.bot_account_id = config["bot_name"]
self.bot = lark_oapi.ws.Client(
config["app_id"], config["app_secret"], event_handler=event_handler
)
self.api_client = (
lark_oapi.Client.builder()
.app_id(config["app_id"])
.app_secret(config["app_secret"])
.build()
)
async def send_message(
self, target_type: str, target_id: str, message: platform_message.MessageChain
):
pass
async def reply_message(
self,
message_source: platform_events.MessageEvent,
message: platform_message.MessageChain,
quote_origin: bool = False,
):
# No longer needed: message_id is already carried in message_chain
# lark_event = await self.event_converter.yiri2target(message_source)
lark_message = await self.message_converter.yiri2target(
message, self.api_client
)
final_content = {
"zh_cn": {
"title": "",
"content": lark_message,
},
}
request: ReplyMessageRequest = (
ReplyMessageRequest.builder()
.message_id(message_source.message_chain.message_id)
.request_body(
ReplyMessageRequestBody.builder()
.content(json.dumps(final_content))
.msg_type("post")
.reply_in_thread(False)
.uuid(str(uuid.uuid4()))
.build()
)
.build()
)
response: ReplyMessageResponse = await self.api_client.im.v1.message.areply(
request
)
if not response.success():
raise Exception(
f"client.im.v1.message.reply failed, code: {response.code}, msg: {response.msg}, log_id: {response.get_log_id()}, resp: \n{json.dumps(json.loads(response.raw.content), indent=4, ensure_ascii=False)}"
)
async def is_muted(self, group_id: int) -> bool:
return False
def register_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[
[platform_events.Event, adapter.MessageSourceAdapter], None
],
):
self.listeners[event_type] = callback
def unregister_listener(
self,
event_type: typing.Type[platform_events.Event],
callback: typing.Callable[
[platform_events.Event, adapter.MessageSourceAdapter], None
],
):
self.listeners.pop(event_type)
async def run_async(self):
try:
await self.bot._connect()
except lark_oapi.ws.exception.ClientException as e:
raise e
except Exception as e:
await self.bot._disconnect()
if self.bot._auto_reconnect:
await self.bot._reconnect()
else:
raise e
async def kill(self) -> bool:
return False

@@ -72,6 +72,11 @@ class MessageEvent(Event):
message_chain: platform_message.MessageChain
"""消息内容。"""
source_platform_object: typing.Optional[typing.Any] = None
"""原消息平台对象。
供消息平台适配器开发者使用,如果回复用户时需要使用原消息事件对象的信息,
那么可以将其存到这个字段以供之后取出使用。"""
class FriendMessage(MessageEvent):
"""好友消息。

@@ -460,7 +460,7 @@ class Source(MessageComponent):
"""源。包含消息的基本信息。"""
type: str = "Source"
"""消息组件类型。"""
id: int
id: typing.Union[int, str]
"""消息的识别号用于引用回复Source 类型永远为 MessageChain 的第一个元素)。"""
time: datetime
"""消息时间。"""
@@ -503,7 +503,7 @@ class At(MessageComponent):
"""At某人。"""
type: str = "At"
"""消息组件类型。"""
target: int
target: typing.Union[int, str]
"""群员 QQ 号。"""
display: typing.Optional[str] = None
"""At时显示的文字发送消息时无效自动使用群名片。"""

@@ -6,7 +6,7 @@ from . import entities, requester
from ...core import app
from . import token
from .requesters import chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, xaichatcmpl, zhipuaichatcmpl
from .requesters import chatcmpl, anthropicmsgs, moonshotchatcmpl, deepseekchatcmpl, ollamachat, giteeaichatcmpl, xaichatcmpl, zhipuaichatcmpl, lmstudiochatcmpl, siliconflowchatcmpl
FETCH_MODEL_LIST_URL = "https://api.qchatgpt.rockchin.top/api/v2/fetch/model_list"
@@ -109,4 +109,4 @@ class ModelManager:
self.model_list.append(model_info)
except Exception as e:
self.ap.logger.error(f"Failed to initialize model {model['name']}: {e}; please check the configuration file")
self.ap.logger.error(f"Failed to initialize model {model['name']}: {type(e)} {e}; please check the configuration file")

@@ -30,7 +30,7 @@ class AnthropicMessages(requester.LLMAPIRequester):
timeout=typing.cast(httpx.Timeout, self.ap.provider_cfg.data['requester']['anthropic-messages']['timeout']),
limits=anthropic._constants.DEFAULT_CONNECTION_LIMITS,
follow_redirects=True,
proxies=self.ap.proxy_mgr.get_forward_proxies()
trust_env=True,
)
self.client = anthropic.AsyncAnthropic(

@@ -39,7 +39,7 @@ class OpenAIChatCompletions(requester.LLMAPIRequester):
base_url=self.requester_cfg['base-url'],
timeout=self.requester_cfg['timeout'],
http_client=httpx.AsyncClient(
proxies=self.ap.proxy_mgr.get_forward_proxies()
trust_env=True,
)
)

@@ -0,0 +1,21 @@
from __future__ import annotations
import openai
from . import chatcmpl
from .. import requester
from ....core import app
@requester.requester_class("lmstudio-chat-completions")
class LmStudioChatCompletions(chatcmpl.OpenAIChatCompletions):
"""LMStudio ChatCompletion API 请求器"""
client: openai.AsyncClient
requester_cfg: dict
def __init__(self, ap: app.Application):
self.ap = ap
self.requester_cfg = self.ap.provider_cfg.data['requester']['lmstudio-chat-completions']

@@ -0,0 +1,21 @@
from __future__ import annotations
import openai
from . import chatcmpl
from .. import requester
from ....core import app
@requester.requester_class("siliconflow-chat-completions")
class SiliconFlowChatCompletions(chatcmpl.OpenAIChatCompletions):
"""SiliconFlow ChatCompletion API 请求器"""
client: openai.AsyncClient
requester_cfg: dict
def __init__(self, ap: app.Application):
self.ap = ap
self.requester_cfg = self.ap.provider_cfg.data['requester']['siliconflow-chat-completions']

@@ -1,4 +1,4 @@
semantic_version = "v3.4.3"
semantic_version = "v3.4.5.2"
debug_mode = False

@@ -25,6 +25,8 @@ aioshutil
argon2-cffi
pyjwt
pycryptodome
lark-oapi
discord.py
# indirect
taskgroup==0.0.0a4

@@ -116,6 +116,11 @@
"requester": "deepseek-chat-completions",
"token_mgr": "deepseek"
},
{
"name": "deepseek-reasoner",
"requester": "deepseek-chat-completions",
"token_mgr": "deepseek"
},
{
"name": "grok-2-latest",
"requester": "xai-chat-completions",


@@ -35,6 +35,19 @@
"token": "",
"EncodingAESKey": "",
"contacts_secret": ""
},
{
"adapter": "lark",
"enable": false,
"app_id": "cli_abcdefgh",
"app_secret": "XXXXXXXXXX",
"bot_name": "LangBot"
},
{
"adapter": "discord",
"enable": false,
"client_id": "1234567890",
"token": "XXXXXXXXXX"
}
],
"track-function-calls": true,


@@ -22,6 +22,9 @@
],
"zhipuai": [
"xxxxxxx"
],
"siliconflow": [
"xxxxxxx"
]
},
"requester": {
@@ -66,6 +69,16 @@
"base-url": "https://open.bigmodel.cn/api/paas/v4",
"args": {},
"timeout": 120
},
"lmstudio-chat-completions": {
"base-url": "http://127.0.0.1:1234/v1",
"args": {},
"timeout": 120
},
"siliconflow-chat-completions": {
"base-url": "https://api.siliconflow.cn/v1",
"args": {},
"timeout": 120
}
},
"model": "gpt-4o",


@@ -121,6 +121,129 @@
]
}
}
},
{
"title": "企业微信适配器",
"description": "用于接入企业微信",
"properties": {
"adapter": {
"type": "string",
"const": "wecom"
},
"enable": {
"type": "boolean",
"default": false,
"description": "是否启用此适配器",
"layout": {
"comp": "switch",
"props": {
"color": "primary"
}
}
},
"host": {
"type": "string",
"default": "0.0.0.0",
"description": "监听的IP地址"
},
"port": {
"type": "integer",
"default": 2290,
"description": "监听的端口"
},
"corpid": {
"type": "string",
"default": "",
"description": "企业微信的corpid"
},
"secret": {
"type": "string",
"default": "",
"description": "企业微信的secret"
},
"token": {
"type": "string",
"default": "",
"description": "企业微信的token"
},
"EncodingAESKey": {
"type": "string",
"default": "",
"description": "企业微信的EncodingAESKey"
},
"contacts_secret": {
"type": "string",
"default": "",
"description": "企业微信的contacts_secret"
}
}
},
{
"title": "飞书适配器",
"description": "用于接入飞书",
"properties": {
"adapter": {
"type": "string",
"const": "lark"
},
"enable": {
"type": "boolean",
"default": false,
"description": "是否启用此适配器",
"layout": {
"comp": "switch",
"props": {
"color": "primary"
}
}
},
"app_id": {
"type": "string",
"default": "",
"description": "飞书的app_id"
},
"app_secret": {
"type": "string",
"default": "",
"description": "飞书的app_secret"
},
"bot_name": {
"type": "string",
"default": "",
"description": "飞书的bot_name"
}
}
},
{
"title": "Discord 适配器",
"description": "用于接入 Discord",
"properties": {
"adapter": {
"type": "string",
"const": "discord"
},
"enable": {
"type": "boolean",
"default": false,
"description": "是否启用此适配器",
"layout": {
"comp": "switch",
"props": {
"color": "primary"
}
}
},
"client_id": {
"type": "string",
"default": "",
"description": "Discord 的 client_id"
},
"token": {
"type": "string",
"default": "",
"description": "Discord 的 token"
}
}
}
]
}
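Each adapter block above describes its fields with `type`, `const`, and `default` keywords. A toy validator for just that keyword subset shows how such a schema drives both validation and default-filling (this is a minimal illustration, not a full JSON Schema implementation):

```python
TYPES = {"string": str, "boolean": bool, "integer": int}

def validate(props: dict, obj: dict) -> dict:
    # Check obj against a {field: {type/const/default}} property map,
    # filling in defaults for missing fields. Returns the completed dict.
    out = dict(obj)
    for key, spec in props.items():
        if key not in out and "default" in spec:
            out[key] = spec["default"]
        val = out.get(key)
        if "const" in spec and val != spec["const"]:
            raise ValueError(f"{key}: expected {spec['const']!r}")
        if "type" in spec and type(val) is not TYPES[spec["type"]]:
            raise ValueError(f"{key}: expected a {spec['type']}")
    return out

# Abbreviated shape of the Discord adapter block above.
discord_props = {
    "adapter": {"type": "string", "const": "discord"},
    "enable": {"type": "boolean", "default": False},
    "token": {"type": "string", "default": ""},
}
```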


@@ -74,6 +74,14 @@
"type": "string"
},
"default": []
},
"siliconflow": {
"type": "array",
"title": "SiliconFlow API 密钥",
"items": {
"type": "string"
},
"default": []
}
}
},
@@ -172,7 +180,8 @@
"title": "API URL"
},
"args": {
"type": "object"
"type": "object",
"default": {}
},
"timeout": {
"type": "number",
@@ -191,7 +200,8 @@
"title": "API URL"
},
"args": {
"type": "object"
"type": "object",
"default": {}
},
"timeout": {
"type": "number",
@@ -210,7 +220,8 @@
"title": "API URL"
},
"args": {
"type": "object"
"type": "object",
"default": {}
},
"timeout": {
"type": "number",
@@ -229,10 +240,52 @@
"title": "API URL"
},
"args": {
"type": "object"
"type": "object",
"default": {}
},
"timeout": {
"type": "number"
"type": "number",
"default": 120
}
}
},
"lmstudio-chat-completions": {
"type": "object",
"title": "LMStudio API 请求配置",
"description": "仅可编辑 URL 和 超时时间,额外请求参数不支持可视化编辑,请到编辑器编辑",
"properties": {
"base-url": {
"type": "string",
"title": "API URL"
},
"args": {
"type": "object",
"default": {}
},
"timeout": {
"type": "number",
"title": "API 请求超时时间",
"default": 120
}
}
},
"siliconflow-chat-completions": {
"type": "object",
"title": "SiliconFlow API 请求配置",
"description": "仅可编辑 URL 和 超时时间,额外请求参数不支持可视化编辑,请到编辑器编辑",
"properties": {
"base-url": {
"type": "string",
"title": "API URL"
},
"args": {
"type": "object",
"default": {}
},
"timeout": {
"type": "number",
"title": "API 请求超时时间",
"default": 120
}
}
}
@@ -292,7 +345,7 @@
"type": "string",
"title": "应用类型",
"description": "支持 chat 和 workflowchat聊天助手含高级编排和 Agentworkflow工作流请填写下方对应的应用类型 API 参数",
"enum": ["chat", "workflow"],
"enum": ["chat", "workflow", "agent"],
"default": "chat"
},
"chat": {

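The schema change adds `agent` to the allowed Dify app types, matching the earlier fix "dify agent type not in schema". Dispatch on that value can be sketched with an enum check (mapping `agent` to the chat-style API is an assumption based on Dify's app model, not taken from this diff):

```python
VALID_APP_TYPES = {"chat", "workflow", "agent"}

def api_flavour(app_type: str) -> str:
    # Reject values outside the schema enum, then pick the API style.
    if app_type not in VALID_APP_TYPES:
        raise ValueError(f"unsupported Dify app type: {app_type}")
    # Assumption: chat and agent apps both use the chat-messages API.
    return "workflow" if app_type == "workflow" else "chat"
```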

@@ -0,0 +1,181 @@
<template>
<div class="plugin-card">
<div class="plugin-card-header">
<div class="plugin-id">
<div class="plugin-card-author">{{ plugin.author }} /</div>
<div class="plugin-card-title">{{ plugin.name }}</div>
</div>
<div class="plugin-card-badges">
<v-icon class="plugin-github-source" icon="mdi-github" v-if="plugin.repository != ''"
@click="openGithubSource"></v-icon>
</div>
</div>
<div class="plugin-card-description" >{{ plugin.description }}</div>
<div class="plugin-card-brief-info">
<div class="plugin-card-brief-info-item">
<v-icon id="plugin-stars-icon" icon="mdi-star" />
<div id="plugin-stars-count">{{ plugin.stars }}</div>
</div>
<v-btn color="primary" @click="installPlugin" density="compact">安装</v-btn>
</div>
</div>
</template>
<script setup>
const props = defineProps({
plugin: {
type: Object,
required: true
},
});
const emit = defineEmits(['install']);
const openGithubSource = () => {
window.open("https://"+props.plugin.repository, '_blank');
}
const installPlugin = () => {
emit('install', props.plugin);
}
</script>
<style scoped>
.plugin-card {
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 0.8rem;
padding-left: 1rem;
margin: 1rem 0;
background-color: white;
display: flex;
flex-direction: column;
justify-content: space-between;
height: 10rem;
}
.plugin-card-header {
display: flex;
flex-direction: row;
justify-content: space-between;
}
.plugin-card-author {
font-size: 0.8rem;
color: #666;
font-weight: 500;
user-select: none;
}
.plugin-card-title {
font-size: 0.9rem;
font-weight: 600;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
user-select: none;
}
.plugin-card-description {
font-size: 0.7rem;
color: #666;
font-weight: 500;
margin-top: -1rem;
height: 2rem;
/* overflow wraps, at most two lines */
text-overflow: ellipsis;
overflow-y: hidden;
white-space: wrap;
user-select: none;
}
.plugin-card-badges {
display: flex;
flex-direction: row;
gap: 0.5rem;
}
.plugin-github-source {
cursor: pointer;
color: #222;
font-size: 1.3rem;
}
.plugin-disabled {
font-size: 0.7rem;
font-weight: 500;
height: 1.3rem;
padding-inline: 0.4rem;
user-select: none;
}
.plugin-card-brief-info {
display: flex;
flex-direction: row;
justify-content: space-between;
/* background-color: #f0f0f0; */
gap: 0.8rem;
margin-left: -0.2rem;
margin-bottom: 0rem;
}
.plugin-card-events {
display: flex;
flex-direction: row;
gap: 0.4rem;
}
.plugin-card-events-icon {
font-size: 1.8rem;
color: #666;
}
.plugin-card-events-count {
font-size: 1.2rem;
font-weight: 600;
color: #666;
}
.plugin-card-functions {
display: flex;
flex-direction: row;
gap: 0.4rem;
}
.plugin-card-functions-icon {
font-size: 1.6rem;
color: #666;
}
.plugin-card-functions-count {
font-size: 1.2rem;
font-weight: 600;
color: #666;
}
.plugin-card-brief-info-item {
display: flex;
flex-direction: row;
gap: 0.4rem;
}
#plugin-stars-icon {
color: #0073ff;
}
#plugin-stars-count {
margin-top: 0.1rem;
font-weight: 700;
color: #0073ff;
user-select: none;
}
.plugin-card-brief-info-item:hover .plugin-card-brief-info-item-icon {
color: #333;
}
</style>
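The card shows `plugin.author` and `plugin.name`, which the marketplace view derives by splitting `plugin.repository` (a path of the form `github.com/<author>/<name>`). The same split, sketched in Python:

```python
def parse_repository(repository: str):
    # Mirror the frontend's repository.split('/') indexing:
    # "github.com/<author>/<name>" -> (author, name).
    parts = repository.split("/")
    if len(parts) < 3:
        raise ValueError(f"unexpected repository path: {repository}")
    return parts[1], parts[2]
```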


@@ -0,0 +1,228 @@
<template>
<div id="marketplace-container">
<div id="marketplace-search-bar">
<span style="width: 14rem;">
<v-text-field id="marketplace-search-bar-search-input" variant="solo" v-model="proxy.$store.state.marketplaceParams.query" label="搜索"
density="compact" @update:model-value="updateSearch" />
</span>
<!-- sort-order dropdown -->
<span style="width: 10rem;">
<v-select id="marketplace-search-bar-sort-select" v-model="sort" :items="sortItems" variant="solo"
label="排序" density="compact" @update:model-value="updateSort" />
</span>
<span style="margin-left: 1rem;">
<div id="marketplace-search-bar-total-plugins-count">
{{ proxy.$store.state.marketplaceTotalPluginsCount }} plugins
</div>
</span>
<span style="margin-left: 1rem;">
<!-- pagination -->
<v-pagination style="width: 14rem;" v-model="proxy.$store.state.marketplaceParams.page"
:length="proxy.$store.state.marketplaceTotalPages" variant="solo" density="compact"
total-visible="4" @update:model-value="updatePage" />
</span>
</div>
<div id="marketplace-plugins-container" ref="pluginsContainer">
<div id="marketplace-plugins-container-inner">
<MarketPluginCard v-for="plugin in proxy.$store.state.marketplacePlugins" :key="plugin.id" :plugin="plugin" @install="installPlugin" />
</div>
</div>
</div>
</template>
<script setup>
import MarketPluginCard from './MarketPluginCard.vue'
import { ref, getCurrentInstance, onMounted } from 'vue'
import { inject } from "vue";
const snackbar = inject('snackbar');
const { proxy } = getCurrentInstance()
const pluginsContainer = ref(null)
const sortItems = ref([
'Recently added',
'Most starred',
'Recently updated',
])
const sortParams = ref({
'Recently added': {
sort_by: 'created_at',
sort_order: 'DESC',
},
'Most starred': {
sort_by: 'stars',
sort_order: 'DESC',
},
'Recently updated': {
sort_by: 'pushed_at',
sort_order: 'DESC',
}
})
const sort = ref(sortItems.value[0])
proxy.$store.state.marketplaceParams.sort_by = sortParams.value[sort.value].sort_by
proxy.$store.state.marketplaceParams.sort_order = sortParams.value[sort.value].sort_order
const updateSort = (value) => {
console.log(value)
proxy.$store.state.marketplaceParams.sort_by = sortParams.value[value].sort_by
proxy.$store.state.marketplaceParams.sort_order = sortParams.value[value].sort_order
proxy.$store.state.marketplaceParams.page = 1
console.log(proxy.$store.state.marketplaceParams)
fetchMarketplacePlugins()
}
const updatePage = (value) => {
proxy.$store.state.marketplaceParams.page = value
fetchMarketplacePlugins()
}
const updateSearch = (value) => {
console.log(value)
proxy.$store.state.marketplaceParams.query = value
proxy.$store.state.marketplaceParams.page = 1
fetchMarketplacePlugins()
}
const calculatePluginsPerPage = () => {
if (!pluginsContainer.value) return 10
const containerWidth = pluginsContainer.value.clientWidth
const containerHeight = pluginsContainer.value.clientHeight
console.log(containerWidth, containerHeight)
// each card: 18rem wide plus a 16px gap
const cardWidth = 18 * 16 + 16 // rem to px
// each card: 9rem tall plus a 16px gap
const cardHeight = 9 * 16 + 16
// how many cards fit in one row
const cardsPerRow = Math.floor(containerWidth / cardWidth)
// how many rows fit in the container
const rows = Math.floor(containerHeight / cardHeight)
// cards per page
const perPage = cardsPerRow * rows
proxy.$store.state.marketplaceParams.per_page = perPage > 0 ? perPage : 10
}
const fetchMarketplacePlugins = async () => {
calculatePluginsPerPage()
proxy.$axios.post('https://space.langbot.app/api/v1/market/plugins', {
query: proxy.$store.state.marketplaceParams.query,
sort_by: proxy.$store.state.marketplaceParams.sort_by,
sort_order: proxy.$store.state.marketplaceParams.sort_order,
page: proxy.$store.state.marketplaceParams.page,
page_size: proxy.$store.state.marketplaceParams.per_page,
}).then(response => {
console.log(response.data)
if (response.data.code != 0) {
snackbar.error(response.data.msg)
return
}
// derive name and author from the repository path
response.data.data.plugins.forEach(plugin => {
plugin.name = plugin.repository.split('/')[2]
plugin.author = plugin.repository.split('/')[1]
})
proxy.$store.state.marketplacePlugins = response.data.data.plugins
proxy.$store.state.marketplaceTotalPluginsCount = response.data.data.total
let totalPages = Math.floor(response.data.data.total / proxy.$store.state.marketplaceParams.per_page)
if (response.data.data.total % proxy.$store.state.marketplaceParams.per_page != 0) {
totalPages += 1
}
proxy.$store.state.marketplaceTotalPages = totalPages
}).catch(error => {
snackbar.error(error)
})
}
onMounted(() => {
calculatePluginsPerPage()
fetchMarketplacePlugins()
// recalculate and refetch when the window resizes
window.addEventListener('resize', () => {
calculatePluginsPerPage()
fetchMarketplacePlugins()
})
})
const emit = defineEmits(['installPlugin'])
const installPlugin = (plugin) => {
emit('installPlugin', plugin.repository)
}
</script>
<style scoped>
#marketplace-container {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
width: 100%;
height: 100%;
}
#marketplace-search-bar {
display: flex;
flex-direction: row;
margin-top: 1rem;
padding-right: 1rem;
gap: 1rem;
width: 100%;
}
#marketplace-search-bar-search-input {
position: relative;
left: 1rem;
width: 10rem;
}
#marketplace-search-bar-total-plugins-count {
font-size: 1.1rem;
font-weight: 500;
margin-top: 0.5rem;
color: #666;
user-select: none;
}
.plugin-card {
width: 18rem;
height: 9rem;
}
#marketplace-plugins-container {
display: flex;
flex-direction: row;
justify-content: flex-start;
flex-wrap: wrap;
gap: 16px;
margin-inline: 0rem;
width: 100%;
height: calc(100vh - 16rem);
overflow-y: auto;
}
#marketplace-plugins-container-inner {
display: flex;
flex-direction: row;
justify-content: flex-start;
flex-wrap: wrap;
gap: 16px;
}
</style>
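The component sizes each page from its container: cards per row times rows per view, then derives total pages with a floor plus a remainder check, which is just ceiling division. The same arithmetic in Python (the 16px-per-rem factor mirrors the component's assumption):

```python
import math

CARD_W = 18 * 16 + 16  # card width: 18rem plus 16px gap, in px
CARD_H = 9 * 16 + 16   # card height: 9rem plus 16px gap, in px

def per_page(container_w: int, container_h: int) -> int:
    # Cards that fit per page; fall back to 10 for tiny containers.
    n = (container_w // CARD_W) * (container_h // CARD_H)
    return n if n > 0 else 10

def total_pages(total: int, page_size: int) -> int:
    # Equivalent to the component's floor + remainder branch.
    return math.ceil(total / page_size)
```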


@@ -2,6 +2,10 @@
<PageTitle title="插件" @refresh="refresh" />
<v-card id="plugins-toolbar">
<div id="view-btns">
<v-btn-toggle id="plugins-view-toggle" color="primary" v-model="proxy.$store.state.pluginsView" mandatory density="compact">
<v-btn class="plugins-view-toggle-btn" value="installed" density="compact">已安装</v-btn>
<v-btn class="plugins-view-toggle-btn" value="market" density="compact">插件市场</v-btn>
</v-btn-toggle>
</div>
<div id="operation-btns">
<v-tooltip text="设置插件优先级" location="top">
@@ -78,17 +82,21 @@
</v-btn>
</div>
</v-card>
<div class="plugins-container">
<div class="plugins-container" v-if="proxy.$store.state.pluginsView == 'installed'">
<v-alert id="no-plugins-alert" v-if="plugins.length == 0" color="warning" icon="$warning" title="暂无插件" text="暂无已安装的插件,请安装插件" density="compact" style="margin-inline: 1rem;"></v-alert>
<PluginCard class="plugin-card" v-if="plugins.length > 0" v-for="plugin in plugins" :key="plugin.name" :plugin="plugin"
@toggle="togglePlugin" @update="updatePlugin" @remove="removePlugin" />
</div>
<div class="plugins-container" v-if="proxy.$store.state.pluginsView == 'market'">
<Marketplace @installPlugin="installMarketplacePlugin" />
</div>
</template>
<script setup>
import PageTitle from '@/components/PageTitle.vue'
import PluginCard from '@/components/PluginCard.vue'
import Marketplace from '@/components/Marketplace.vue'
import draggable from 'vuedraggable'
@@ -154,6 +162,12 @@ const removePlugin = (plugin) => {
})
}
const installMarketplacePlugin = (repository) => {
installDialogSource.value = 'https://'+repository
isInstallDialogActive.value = true
}
const installPlugin = () => {
if (installDialogSource.value == '' || installDialogSource.value.trim() == '') {
@@ -224,6 +238,11 @@ const installDialogSource = ref('')
margin-left: 1rem;
}
#plugins-view-toggle {
margin: 0.5rem;
box-shadow: 0 0 0 2px #dddddd;
}
#operation-btns {
display: flex;
flex-direction: row;


@@ -18,7 +18,18 @@ export default createStore({
tokenValid: false,
systemInitialized: true,
jwtToken: '',
}
},
pluginsView: 'installed',
marketplaceParams: {
query: '',
page: 1,
per_page: 10,
sort_by: 'pushed_at',
sort_order: 'DESC',
},
marketplacePlugins: [],
marketplaceTotalPages: 0,
marketplaceTotalPluginsCount: 0,
},
mutations: {
initializeFetch() {