Compare commits

...

11 Commits

Author SHA1 Message Date
Junyan Qin
89b25b8985 chore: release v4.2.2 2025-08-29 17:01:26 +08:00
devin-ai-integration[bot]
46b4482a7d feat: add GitHub star count component to sidebar (#1636)
* feat: add GitHub star count component to sidebar

- Add GitHub star component to sidebar bottom section
- Fetch star count from space.langbot.app API
- Display star count with proper internationalization
- Open GitHub repository in new tab when clicked
- Follow existing sidebar styling patterns

Co-Authored-By: Rock <rockchinq@gmail.com>

* perf: ui

* chore: remove githubStars text

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Rock <rockchinq@gmail.com>
2025-08-28 21:04:36 +08:00
Bruce
d9fa1cbb06 perf: add cmd enable config & fix announce request timeout & fix send card with disconnect ai platform (#1633)
* add cmd config && fix bugs

* perf: use `get`

* update bansess fix block match rule

* perf: comment for access-control session str

---------

Co-authored-by: Junyan Qin <rockchinq@gmail.com>
2025-08-28 12:59:50 +08:00
Bruce
8858f432b5 fix dingtalk message sender id & update dingtalk streaming card without content (#1630) 2025-08-27 18:09:30 +08:00
Junyan Qin
8f5ec48522 doc: update shengsuanyun comment 2025-08-26 16:00:48 +08:00
devin-ai-integration[bot]
83ff64698b feat: add ZIP file upload support for knowledge base (#1626)
* feat: add ZIP file upload support for knowledge base

- Add _parse_zip method to FileParser class using zipfile library
- Support extraction and processing of TXT, PDF, DOCX, MD, HTML files from ZIP
- Update FileUploadZone to accept .zip files
- Add ZIP format to supported formats in internationalization files
- Implement error handling for invalid ZIP files and unsupported content
- Follow existing async parsing patterns and error handling conventions

Co-Authored-By: Rock <rockchinq@gmail.com>

* refactor: modify ZIP processing to store each document as separate file

- Remove _parse_zip method from FileParser as ZIP handling now occurs at knowledge base level
- Add _store_zip_file method to RuntimeKnowledgeBase to extract and store each document separately
- Each document in ZIP is now stored as individual file entry in knowledge base
- Process ZIP files in memory using io.BytesIO to avoid filesystem writes
- Generate unique file names for extracted documents to prevent conflicts

Co-Authored-By: Rock <rockchinq@gmail.com>

* perf: delete temp files

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Rock <rockchinq@gmail.com>
2025-08-23 21:18:13 +08:00
Junyan Qin
87ecb4e519 feat: add note for remove_think & remove dify remove cot code 2025-08-21 21:38:58 +08:00
Ljzd_PRO
df524b8a7a Fix: Fixed the incorrect extraction method of sender ID when converting aiocqhttp reply messages (#1624)
* fix: update invoke_embedding to return only embeddings from client.embed

* fix: Fixed the incorrect extraction method of sender ID when converting aiocqhttp reply messages
2025-08-21 20:46:26 +08:00
Junyan Qin
8a7df423ab chore: update shengsuanyun url 2025-08-21 14:14:25 +08:00
Junyan Qin
cafd623c92 chore: update shengsuanyun 2025-08-21 12:03:04 +08:00
Junyan Qin
4df11ef064 chore: update for shengsuanyun 2025-08-21 11:47:40 +08:00
27 changed files with 194 additions and 106 deletions

View File

@@ -107,6 +107,7 @@ docker compose up -d
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智谱AI](https://open.bigmodel.cn/) | ✅ | |
| [胜算云](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 全球大模型都可调用(友情推荐) |
| [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 资源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |

View File

@@ -103,6 +103,7 @@ Or visit the demo environment: https://demo.langbot.dev/
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | LLM and GPU resource platform |
| [Dify](https://dify.ai) | ✅ | LLMOps platform |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | LLM and GPU resource platform |
| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLM and GPU resource platform |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLM gateway(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Ollama](https://ollama.com/) | ✅ | Local LLM running platform |

View File

@@ -102,6 +102,7 @@ LangBotはBTPanelにリストされています。BTPanelをインストール
| [Zhipu AI](https://open.bigmodel.cn/) | ✅ | |
| [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型とGPUリソースプラットフォーム |
| [ShengSuanYun](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | LLMとGPUリソースプラットフォーム |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | LLMゲートウェイ(MaaS) |
| [Google Gemini](https://aistudio.google.com/prompts/new_chat) | ✅ | |
| [Dify](https://dify.ai) | ✅ | LLMOpsプラットフォーム |

View File

@@ -102,6 +102,7 @@ docker compose up -d
| [Anthropic](https://www.anthropic.com/) | ✅ | |
| [xAI](https://x.ai/) | ✅ | |
| [智譜AI](https://open.bigmodel.cn/) | ✅ | |
| [勝算雲](https://www.shengsuanyun.com/?from=CH_KYIPP758) | ✅ | 大模型和 GPU 資源平台 |
| [優雲智算](https://www.compshare.cn/?ytag=GPU_YY-gh_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [PPIO](https://ppinfra.com/user/register?invited_by=QJKFYD&utm_source=github_langbot) | ✅ | 大模型和 GPU 資源平台 |
| [302.AI](https://share.302.ai/SuTG99) | ✅ | 大模型聚合平台 |

View File

@@ -212,6 +212,7 @@ class DBMigrateV3Config(migration.DBMigration):
self.ap.instance_config.data['api']['port'] = self.ap.system_cfg.data['http-api']['port']
self.ap.instance_config.data['command'] = {
'prefix': self.ap.command_cfg.data['command-prefix'],
'enable': self.ap.command_cfg.data['command-enable'],
'privilege': self.ap.command_cfg.data['privilege'],
}
self.ap.instance_config.data['concurrency']['pipeline'] = self.ap.system_cfg.data['pipeline-concurrency']
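The migration above copies the legacy `command-enable` flag into the new `command.enable` key alongside the existing prefix and privilege settings. A simplified, self-contained sketch of that mapping (the function name is illustrative, not the actual migration API):

```python
def migrate_command_config(legacy_command_cfg: dict) -> dict:
    """Build the new instance_config 'command' section from the legacy
    config, carrying over the newly added 'enable' switch."""
    return {
        'prefix': legacy_command_cfg['command-prefix'],
        'enable': legacy_command_cfg['command-enable'],
        'privilege': legacy_command_cfg['privilege'],
    }
```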

View File

@@ -20,7 +20,7 @@ class DBMigratePipelineRemoveCotConfig(migration.DBMigration):
config = serialized_pipeline['config']
if 'remove-think' not in config['output']['misc']:
config['output']['misc']['remove-think'] = True
config['output']['misc']['remove-think'] = False
await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline)

View File

@@ -30,6 +30,10 @@ class BanSessionCheckStage(stage.PipelineStage):
if sess == f'{query.launcher_type.value}_{query.launcher_id}':
found = True
break
# Use *_&lt;id&gt; to whitelist/blacklist a user's private and group chat sessions
if sess.startswith('*_') and (sess[2:] == query.launcher_id or sess[2:] == query.sender_id):
found = True
break
ctn = False
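The new branch makes `*_<id>` entries match a user in both private and group sessions. A standalone sketch of the matching rule (function and argument names are illustrative, not the actual pipeline API):

```python
def session_matches(rules, launcher_type, launcher_id, sender_id):
    """Return True if any black/whitelist rule matches the session.

    Supports exact '{type}_{id}' entries and the new '*_<id>' form,
    which matches a user in both private and group chats.
    """
    for sess in rules:
        if sess == f'{launcher_type}_{launcher_id}':
            return True
        # '*_<id>' covers both the chat the user launched and messages they sent
        if sess.startswith('*_') and sess[2:] in (launcher_id, sender_id):
            return True
    return False
```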

View File

@@ -43,7 +43,7 @@ class ChatMessageHandler(handler.MessageHandler):
query=query,
)
)
is_create_card = False  # tracks whether the streaming card has been created yet
if event_ctx.is_prevented_default():
if event_ctx.event.reply is not None:
mc = platform_message.MessageChain(event_ctx.event.reply)
@@ -72,14 +72,17 @@ class ChatMessageHandler(handler.MessageHandler):
raise ValueError(f'未找到请求运行器: {query.pipeline_config["ai"]["runner"]["runner"]}')
if is_stream:
resp_message_id = uuid.uuid4()
await query.adapter.create_message_card(str(resp_message_id), query.message_event)
async for result in runner.run(query):
result.resp_message_id = str(resp_message_id)
if query.resp_messages:
query.resp_messages.pop()
if query.resp_message_chain:
query.resp_message_chain.pop()
# the external AI service responded normally; create the card
if not is_create_card:  # create the card only once, on the first chunk
await query.adapter.create_message_card(str(resp_message_id), query.message_event)
is_create_card = True
query.resp_messages.append(result)
self.ap.logger.info(f'对话({query.query_id})流式响应: {self.cut_str(result.readable_str())}')
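The change above defers `create_message_card` until the first streamed chunk arrives, so no empty card is left behind when the AI platform is unreachable. A minimal sketch of that lazy-creation pattern (all names are hypothetical stand-ins for the handler's collaborators):

```python
import uuid

async def stream_reply(run_query, adapter, message_event):
    """Create the platform's streaming card only after the first chunk
    arrives, proving the AI backend connection is healthy."""
    resp_message_id = str(uuid.uuid4())
    is_create_card = False
    chunks = []
    async for result in run_query():
        if not is_create_card:
            # the AI service responded; safe to create the card now
            await adapter.create_message_card(resp_message_id, message_event)
            is_create_card = True
        chunks.append(result)
    return chunks
```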

View File

@@ -42,12 +42,14 @@ class Processor(stage.PipelineStage):
async def generator():
cmd_prefix = self.ap.instance_config.data['command']['prefix']
cmd_enable = self.ap.instance_config.data['command'].get('enable', True)
if any(message_text.startswith(prefix) for prefix in cmd_prefix):
async for result in self.cmd_handler.handle(query):
yield result
if cmd_enable and any(message_text.startswith(prefix) for prefix in cmd_prefix):
handler_to_use = self.cmd_handler
else:
async for result in self.chat_handler.handle(query):
yield result
handler_to_use = self.chat_handler
async for result in handler_to_use.handle(query):
yield result
return generator()
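The refactor collapses the two duplicated `async for` loops by selecting the handler first, and the new `enable` switch lets commands be turned off entirely. A self-contained sketch of that dispatch (names are illustrative):

```python
async def dispatch(message_text, cmd_prefixes, cmd_enabled, cmd_handler, chat_handler):
    """Route to the command handler only when commands are enabled and
    the message starts with a configured prefix; otherwise use chat."""
    if cmd_enabled and any(message_text.startswith(p) for p in cmd_prefixes):
        handler_to_use = cmd_handler
    else:
        handler_to_use = chat_handler
    async for result in handler_to_use(message_text):
        yield result
```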

View File

@@ -266,7 +266,7 @@ class AiocqhttpMessageConverter(adapter.MessageConverter):
await process_message_data(msg_data, reply_list)
reply_msg = platform_message.Quote(
message_id=msg.data['id'], sender_id=msg_datas['user_id'], origin=reply_list
message_id=msg.data['id'], sender_id=msg_datas['sender']['user_id'], origin=reply_list
)
yiri_msg_list.append(reply_msg)

View File

@@ -23,6 +23,9 @@ class DingTalkMessageConverter(adapter.MessageConverter):
at = True
if type(msg) is platform_message.Plain:
content += msg.text
if type(msg) is platform_message.Forward:
for node in msg.node_list:
content += (await DingTalkMessageConverter.yiri2target(node.message_chain))[0]
return content, at
@staticmethod
@@ -61,7 +64,7 @@ class DingTalkEventConverter(adapter.EventConverter):
if event.conversation == 'FriendMessage':
return platform_events.FriendMessage(
sender=platform_entities.Friend(
id=event.incoming_message.sender_id,
id=event.incoming_message.sender_staff_id,
nickname=event.incoming_message.sender_nick,
remark='',
),
@@ -71,7 +74,7 @@ class DingTalkEventConverter(adapter.EventConverter):
)
elif event.conversation == 'GroupMessage':
sender = platform_entities.GroupMember(
id=event.incoming_message.sender_id,
id=event.incoming_message.sender_staff_id,
member_name=event.incoming_message.sender_nick,
permission='MEMBER',
group=platform_entities.Group(
@@ -166,8 +169,11 @@ class DingTalkAdapter(adapter.MessagePlatformAdapter):
content, at = await DingTalkMessageConverter.yiri2target(message)
card_instance, card_instance_id = self.card_instance_id_dict[message_id]
if not content and bot_message.content:
content = bot_message.content  # fall back to content passed in directly
# print(card_instance_id)
await self.bot.send_card_message(card_instance, card_instance_id, content, is_final)
if content:
await self.bot.send_card_message(card_instance, card_instance_id, content, is_final)
if is_final and bot_message.tool_calls is None:
# self.seq = 1  # reset seq after the reply ends
self.card_instance_id_dict.pop(message_id)  # remove the card instance id after the reply ends
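The adapter now skips card updates with empty content and drops the card instance once the final chunk (with no pending tool calls) arrives. A simplified sketch of that guard (the bot object and dict shapes are assumptions, not the real DingTalk SDK):

```python
async def send_stream_chunk(bot, card_instances, message_id, content, is_final, tool_calls=None):
    """Send a card update only when there is content, and clean up the
    card instance after the final chunk of the reply."""
    card_instance, card_instance_id = card_instances[message_id]
    if content:  # never push an empty update to the card
        await bot.send_card_message(card_instance, card_instance_id, content, is_final)
    if is_final and tool_calls is None:
        # the reply is finished; the card instance id is no longer needed
        card_instances.pop(message_id)
```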

View File

@@ -8,7 +8,7 @@ import openai.types.chat.chat_completion as chat_completion
class ShengSuanYunChatCompletions(chatcmpl.OpenAIChatCompletions):
"""胜算云 ChatCompletion API 请求器"""
"""胜算云(ModelSpot.AI) ChatCompletion API 请求器"""
client: openai.AsyncClient

View File

@@ -3,7 +3,6 @@ from __future__ import annotations
import typing
import json
import uuid
import re
import base64
@@ -38,33 +37,9 @@ class DifyServiceAPIRunner(runner.RequestRunner):
base_url=self.pipeline_config['ai']['dify-service-api']['base-url'],
)
def _try_convert_thinking(self, resp_text: str) -> str:
"""尝试转换 Dify 的思考提示"""
if not resp_text.startswith(
'<details style="color:gray;background-color: #f8f8f8;padding: 8px;border-radius: 4px;" open> <summary> Thinking... </summary>'
):
return resp_text
if self.pipeline_config['ai']['dify-service-api']['thinking-convert'] == 'original':
return resp_text
if self.pipeline_config['ai']['dify-service-api']['thinking-convert'] == 'remove':
return re.sub(
r'<details style="color:gray;background-color: #f8f8f8;padding: 8px;border-radius: 4px;" open> <summary> Thinking... </summary>.*?</details>',
'',
resp_text,
flags=re.DOTALL,
)
if self.pipeline_config['ai']['dify-service-api']['thinking-convert'] == 'plain':
pattern = r'<details style="color:gray;background-color: #f8f8f8;padding: 8px;border-radius: 4px;" open> <summary> Thinking... </summary>(.*?)</details>'
thinking_text = re.search(pattern, resp_text, flags=re.DOTALL)
content_text = re.sub(pattern, '', resp_text, flags=re.DOTALL)
return f'<think>{thinking_text.group(1)}</think>\n{content_text}'
def _process_thinking_content(
self,
content: str,
self,
content: str,
) -> tuple[str, str]:
"""处理思维链内容
@@ -354,8 +329,9 @@ class DifyServiceAPIRunner(runner.RequestRunner):
yield msg
async def _chat_messages_chunk(self, query: core_entities.Query) -> typing.AsyncGenerator[llm_entities.MessageChunk, None]:
async def _chat_messages_chunk(
self, query: core_entities.Query
) -> typing.AsyncGenerator[llm_entities.MessageChunk, None]:
"""调用聊天助手"""
cov_id = query.session.using_conversation.uuid or ''
query.variables['conversation_id'] = cov_id
@@ -371,8 +347,6 @@ class DifyServiceAPIRunner(runner.RequestRunner):
for image_id in image_ids
]
mode = 'basic' # 标记是基础编排还是工作流编排
basic_mode_pending_chunk = ''
inputs = {}
@@ -411,6 +385,7 @@ class DifyServiceAPIRunner(runner.RequestRunner):
continue
if '</think>' in chunk['answer'] and not think_end:
import re
content = re.sub(r'^\n</think>', '', chunk['answer'])
basic_mode_pending_chunk += content
think_end = True
@@ -433,13 +408,11 @@ class DifyServiceAPIRunner(runner.RequestRunner):
is_final=is_final,
)
if chunk is None:
raise errors.DifyAPIError('Dify API 没有返回任何响应请检查网络连接和API配置')
query.session.using_conversation.uuid = chunk['conversation_id']
async def _agent_chat_messages_chunk(
self, query: core_entities.Query
) -> typing.AsyncGenerator[llm_entities.MessageChunk, None]:
@@ -496,6 +469,7 @@ class DifyServiceAPIRunner(runner.RequestRunner):
continue
if '</think>' in chunk['answer'] and not think_end:
import re
content = re.sub(r'^\n</think>', '', chunk['answer'])
pending_agent_message += content
think_end = True
@@ -509,7 +483,6 @@ class DifyServiceAPIRunner(runner.RequestRunner):
elif chunk['event'] == 'message_end':
is_final = True
else:
if chunk['event'] == 'agent_thought':
if chunk['tool'] != '' and chunk['observation'] != '':  # tool call result, skip
continue
@@ -543,7 +516,6 @@ class DifyServiceAPIRunner(runner.RequestRunner):
role='assistant',
content=[llm_entities.ContentElement.from_image_url(image_url)],
is_final=is_final,
)
if chunk['event'] == 'error':
@@ -560,7 +532,9 @@ class DifyServiceAPIRunner(runner.RequestRunner):
query.session.using_conversation.uuid = chunk['conversation_id']
async def _workflow_messages_chunk(self, query: core_entities.Query) -> typing.AsyncGenerator[llm_entities.MessageChunk, None]:
async def _workflow_messages_chunk(
self, query: core_entities.Query
) -> typing.AsyncGenerator[llm_entities.MessageChunk, None]:
"""调用工作流"""
if not query.session.using_conversation.uuid:
@@ -618,6 +592,7 @@ class DifyServiceAPIRunner(runner.RequestRunner):
continue
if '</think>' in chunk['data']['text'] and not think_end:
import re
content = re.sub(r'^\n</think>', '', chunk['data']['text'])
workflow_contents += content
think_end = True
@@ -650,7 +625,6 @@ class DifyServiceAPIRunner(runner.RequestRunner):
yield msg
if messsage_idx % 8 == 0 or is_final:
yield llm_entities.MessageChunk(
role='assistant',
@@ -694,4 +668,4 @@ class DifyServiceAPIRunner(runner.RequestRunner):
else:
raise errors.DifyAPIError(
f'不支持的 Dify 应用类型: {self.pipeline_config["ai"]["dify-service-api"]["app-type"]}'
)
)
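Several hunks above strip a leading `\n</think>` from a streamed chunk once the model's thinking section closes. The core of that transformation, isolated into a small helper (the name is illustrative):

```python
import re

def strip_leading_think_close(chunk_text: str) -> str:
    """Remove a leading '\\n</think>' marker left over when the CoT
    section ends at the start of a streamed chunk."""
    return re.sub(r'^\n</think>', '', chunk_text)
```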

View File

@@ -1,6 +1,8 @@
from __future__ import annotations
import traceback
import uuid
import zipfile
import io
from .services import parser, chunker
from pkg.core import app
from pkg.rag.knowledge.services.embedder import Embedder
@@ -89,16 +91,23 @@ class RuntimeKnowledgeBase:
)
raise
finally:
# delete file from storage
await self.ap.storage_mgr.storage_provider.delete(file.file_name)
async def store_file(self, file_id: str) -> str:
# pre checking
if not await self.ap.storage_mgr.storage_provider.exists(file_id):
raise Exception(f'File {file_id} not found')
file_name = file_id
extension = file_name.split('.')[-1].lower()
if extension == 'zip':
return await self._store_zip_file(file_id)
file_uuid = str(uuid.uuid4())
kb_id = self.knowledge_base_entity.uuid
file_name = file_id
extension = file_name.split('.')[-1]
file_obj_data = {
'uuid': file_uuid,
@@ -123,6 +132,61 @@ class RuntimeKnowledgeBase:
)
return wrapper.id
async def _store_zip_file(self, zip_file_id: str) -> str:
"""Handle ZIP file by extracting each document and storing them separately."""
self.ap.logger.info(f'Processing ZIP file: {zip_file_id}')
zip_bytes = await self.ap.storage_mgr.storage_provider.load(zip_file_id)
supported_extensions = {'txt', 'pdf', 'docx', 'md', 'html'}
stored_file_tasks = []
# use utf-8 encoding
with zipfile.ZipFile(io.BytesIO(zip_bytes), 'r', metadata_encoding='utf-8') as zip_ref:
for file_info in zip_ref.filelist:
# skip directories and hidden files
if file_info.is_dir() or file_info.filename.startswith('.'):
continue
file_extension = file_info.filename.split('.')[-1].lower()
if file_extension not in supported_extensions:
self.ap.logger.debug(f'Skipping unsupported file in ZIP: {file_info.filename}')
continue
try:
file_content = zip_ref.read(file_info.filename)
base_name = file_info.filename.replace('/', '_').replace('\\', '_')
extension = base_name.split('.')[-1]
file_name = base_name.split('.')[0]
if file_name.startswith('__MACOSX'):
continue
extracted_file_id = file_name + '_' + str(uuid.uuid4())[:8] + '.' + extension
# save file to storage
await self.ap.storage_mgr.storage_provider.save(extracted_file_id, file_content)
task_id = await self.store_file(extracted_file_id)
stored_file_tasks.append(task_id)
self.ap.logger.info(
f'Extracted and stored file from ZIP: {file_info.filename} -> {extracted_file_id}'
)
except Exception as e:
self.ap.logger.warning(f'Failed to extract file {file_info.filename} from ZIP: {e}')
continue
if not stored_file_tasks:
raise Exception('No supported files found in ZIP archive')
self.ap.logger.info(f'Successfully processed ZIP file {zip_file_id}, extracted {len(stored_file_tasks)} files')
await self.ap.storage_mgr.storage_provider.delete(zip_file_id)
return stored_file_tasks[0] if stored_file_tasks else ''
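The `_store_zip_file` flow reads the archive entirely in memory, filters members by extension, skips directories, hidden files, and `__MACOSX` entries, and generates unique names for the extracted documents. A condensed sketch of just the extraction step (storage writes and task scheduling omitted; the original also passes `metadata_encoding='utf-8'`, available on Python 3.11+):

```python
import io
import uuid
import zipfile

SUPPORTED_EXTENSIONS = {'txt', 'pdf', 'docx', 'md', 'html'}

def extract_documents(zip_bytes: bytes) -> dict:
    """Extract supported documents from an in-memory ZIP, returning a
    {unique_name: content} mapping."""
    extracted = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes), 'r') as zip_ref:
        for file_info in zip_ref.filelist:
            # skip directories and hidden files
            if file_info.is_dir() or file_info.filename.startswith('.'):
                continue
            extension = file_info.filename.rsplit('.', 1)[-1].lower()
            if extension not in SUPPORTED_EXTENSIONS:
                continue
            base_name = file_info.filename.replace('/', '_').replace('\\', '_')
            stem = base_name.rsplit('.', 1)[0]
            if stem.startswith('__MACOSX'):  # macOS resource-fork folder
                continue
            # unique name prevents collisions between archive members
            unique_name = f'{stem}_{uuid.uuid4().hex[:8]}.{extension}'
            extracted[unique_name] = zip_ref.read(file_info.filename)
    return extracted
```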
async def retrieve(self, query: str, top_k: int) -> list[retriever_entities.RetrieveResultEntry]:
embedding_model = await self.ap.model_mgr.get_embedding_model_by_uuid(
self.knowledge_base_entity.embedding_model_uuid

View File

@@ -45,17 +45,23 @@ class AnnouncementManager:
async def fetch_all(self) -> list[Announcement]:
"""获取所有公告"""
resp = requests.get(
url='https://api.github.com/repos/langbot-app/LangBot/contents/res/announcement.json',
proxies=self.ap.proxy_mgr.get_forward_proxies(),
timeout=5,
)
obj_json = resp.json()
b64_content = obj_json['content']
# decode
content = base64.b64decode(b64_content).decode('utf-8')
try:
resp = requests.get(
url='https://api.github.com/repos/langbot-app/LangBot/contents/res/announcement.json',
proxies=self.ap.proxy_mgr.get_forward_proxies(),
timeout=5,
)
resp.raise_for_status()  # raise if the request failed
obj_json = resp.json()
b64_content = obj_json['content']
# decode
content = base64.b64decode(b64_content).decode('utf-8')
return [Announcement(**item) for item in json.loads(content)]
return [Announcement(**item) for item in json.loads(content)]
except (requests.RequestException, json.JSONDecodeError, KeyError) as e:
self.ap.logger.warning(f"获取公告失败: {e}")
pass
return []  # return an empty list when the request fails
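The hardened `fetch_all` wraps the request in try/except and returns an empty list on any failure instead of crashing. A stdlib sketch of the same pattern, with `urllib` standing in for `requests` (here `HTTPError`, raised by `urlopen` for non-2xx responses, plays the role of `raise_for_status`):

```python
import base64
import json
import urllib.error
import urllib.request

def decode_contents(obj: dict) -> list:
    """Decode the base64-encoded 'content' field of a GitHub
    contents-API response into parsed JSON."""
    content = base64.b64decode(obj['content']).decode('utf-8')
    return json.loads(content)

def fetch_announcements(url: str, timeout: int = 5) -> list:
    """Fetch and decode an announcement file, returning [] on any
    network or parse failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return decode_contents(json.load(resp))
    except (urllib.error.URLError, json.JSONDecodeError, KeyError, ValueError) as e:
        print(f'failed to fetch announcements: {e}')
        return []
```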
async def fetch_saved(self) -> list[Announcement]:
if not os.path.exists('data/labels/announcement_saved.json'):

View File

@@ -1,4 +1,4 @@
semantic_version = 'v4.2.1'
semantic_version = 'v4.2.2'
required_database_version = 5
"""Tag the version of the database schema, used to check if the database needs to be migrated"""

View File

@@ -28,15 +28,19 @@ class VersionManager:
async def get_release_list(self) -> list:
"""获取发行列表"""
rls_list_resp = requests.get(
url='https://api.github.com/repos/langbot-app/LangBot/releases',
proxies=self.ap.proxy_mgr.get_forward_proxies(),
timeout=5,
)
rls_list = rls_list_resp.json()
return rls_list
try:
rls_list_resp = requests.get(
url='https://api.github.com/repos/langbot-app/LangBot/releases',
proxies=self.ap.proxy_mgr.get_forward_proxies(),
timeout=5,
)
rls_list_resp.raise_for_status()  # raise if the request failed
rls_list = rls_list_resp.json()
return rls_list
except Exception as e:
self.ap.logger.warning(f"获取发行列表失败: {e}")
pass
return []
async def update_all(self):
"""检查更新并下载源码"""

View File

@@ -1,6 +1,6 @@
[project]
name = "langbot"
version = "4.1.0"
version = "4.2.2"
description = "高稳定、支持扩展、多模态 - 大模型原生即时通信机器人平台"
readme = "README.md"
requires-python = ">=3.10.1"

View File

@@ -2,6 +2,7 @@ admins: []
api:
port: 5300
command:
enable: true
prefix:
- '!'
-

View File

@@ -51,7 +51,6 @@
"base-url": "https://api.dify.ai/v1",
"app-type": "chat",
"api-key": "your-api-key",
"thinking-convert": "plain",
"timeout": 30
},
"dashscope-app-api": {
@@ -88,7 +87,7 @@
"at-sender": true,
"quote-origin": true,
"track-function-calls": false,
"remove-think": true
"remove-think": false
}
}
}

View File

@@ -118,28 +118,6 @@ stages:
zh_Hans: API 密钥
type: string
required: true
- name: thinking-convert
label:
en_US: CoT Convert
zh_Hans: 思维链转换策略
type: select
required: true
default: plain
options:
- name: plain
label:
en_US: Convert to <think>...</think>
zh_Hans: 转换成 <think>...</think>
- name: original
label:
en_US: Original
zh_Hans: 原始
- name: remove
label:
en_US: Remove
zh_Hans: 移除
- name: dashscope-app-api
label:
en_US: Aliyun Dashscope App API

View File

@@ -110,8 +110,8 @@ stages:
en_US: Remove CoT
zh_Hans: 删除思维链
description:
en_US: If enabled, LangBot will remove the LLM thought content in response
zh_Hans: 如果启用,将自动删除大模型回复中的模型思考内容
en_US: 'If enabled, LangBot will remove the LLM thought content in response. Note: When using streaming response, removing CoT may cause the first token to wait for a long time.'
zh_Hans: '如果启用,将自动删除大模型回复中的模型思考内容。注意:当您使用流式响应时,删除思维链可能会导致首个 Token 的等待时间过长'
type: boolean
required: true
default: true
default: false

View File

@@ -79,6 +79,9 @@ stages:
label:
en_US: Blacklist
zh_Hans: 黑名单
description:
en_US: Sessions in the blacklist will be ignored. The format is `{launcher_type}_{launcher_id}` (without quotes); for example, `person_123` matches a private chat, `group_456` matches a group chat, `person_*` matches all private chats, `group_*` matches all group chats, and `*_123` matches private and group chats with user ID 123
zh_Hans: 黑名单中的会话将被忽略;会话格式:`{launcher_type}_{launcher_id}`(删除引号),例如 `person_123` 匹配私聊会话,`group_456` 匹配群聊会话;`person_*` 匹配所有私聊会话,`group_*` 匹配所有群聊会话;`*_123` 匹配用户 ID 为 123 的私聊和群聊消息
type: array[string]
required: true
default: []
@@ -86,6 +89,9 @@ stages:
label:
en_US: Whitelist
zh_Hans: 白名单
description:
en_US: Only respond to sessions in the whitelist. The format is `{launcher_type}_{launcher_id}` (without quotes); for example, `person_123` matches a private chat, `group_456` matches a group chat, `person_*` matches all private chats, `group_*` matches all group chats, and `*_123` matches private and group chats with user ID 123
zh_Hans: 仅响应白名单中的会话;会话格式:`{launcher_type}_{launcher_id}`(删除引号),例如 `person_123` 匹配私聊会话,`group_456` 匹配群聊会话;`person_*` 匹配所有私聊会话,`group_*` 匹配所有群聊会话;`*_123` 匹配用户 ID 为 123 的私聊和群聊消息
type: array[string]
required: true
default: []

View File

@@ -9,7 +9,7 @@ import {
import { useRouter, usePathname } from 'next/navigation';
import { sidebarConfigList } from '@/app/home/components/home-sidebar/sidbarConfigList';
import langbotIcon from '@/app/assets/langbot-logo.webp';
import { systemInfo } from '@/app/infra/http/HttpClient';
import { systemInfo, spaceClient } from '@/app/infra/http/HttpClient';
import { useTranslation } from 'react-i18next';
import { Moon, Sun, Monitor } from 'lucide-react';
import { useTheme } from 'next-themes';
@@ -22,6 +22,7 @@ import {
import { Button } from '@/components/ui/button';
import { ToggleGroup, ToggleGroupItem } from '@/components/ui/toggle-group';
import { LanguageSelector } from '@/components/ui/language-selector';
import { Badge } from '@/components/ui/badge';
import PasswordChangeDialog from '@/app/home/components/password-change-dialog/PasswordChangeDialog';
// TODO: add animation to the sidebar navigation
@@ -44,6 +45,7 @@ export default function HomeSidebar({
const [popoverOpen, setPopoverOpen] = useState(false);
const [passwordChangeOpen, setPasswordChangeOpen] = useState(false);
const [languageSelectorOpen, setLanguageSelectorOpen] = useState(false);
const [starCount, setStarCount] = useState<number | null>(null);
useEffect(() => {
initSelect();
@@ -51,6 +53,16 @@ export default function HomeSidebar({
localStorage.setItem('token', 'test-token');
localStorage.setItem('userEmail', 'test@example.com');
}
spaceClient
.get('/api/v1/dist/info/repo')
.then((response) => {
const data = response as { repo: { stargazers_count: number } };
setStarCount(data.repo.stargazers_count);
})
.catch((error) => {
console.error('Failed to fetch GitHub star count:', error);
});
return () => console.log('sidebar.unmounted');
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
@@ -150,6 +162,30 @@ export default function HomeSidebar({
</div>
<div className={`${styles.sidebarBottomContainer}`}>
{starCount !== null && (
<div
onClick={() => {
window.open('https://github.com/langbot-app/LangBot', '_blank');
}}
className="flex justify-center cursor-pointer p-2 rounded-lg hover:bg-accent/30 transition-colors"
>
<Badge
variant="outline"
className="hover:bg-secondary/50 px-3 py-1.5 text-sm font-medium transition-colors border-border relative overflow-hidden group"
>
<svg
className="w-4 h-4 mr-2"
viewBox="0 0 24 24"
fill="currentColor"
>
<path d="M12 2C6.477 2 2 6.477 2 12c0 4.42 2.865 8.17 6.839 9.49.5.092.682-.217.682-.482 0-.237-.008-.866-.013-1.7-2.782.604-3.369-1.34-3.369-1.34-.454-1.156-1.11-1.464-1.11-1.464-.908-.62.069-.608.069-.608 1.003.07 1.531 1.03 1.531 1.03.892 1.529 2.341 1.087 2.91.831.092-.646.35-1.086.636-1.336-2.22-.253-4.555-1.11-4.555-4.943 0-1.091.39-1.984 1.029-2.683-.103-.253-.446-1.27.098-2.647 0 0 .84-.269 2.75 1.025A9.564 9.564 0 0112 6.844c.85.004 1.705.115 2.504.337 1.909-1.294 2.747-1.025 2.747-1.025.546 1.377.203 2.394.1 2.647.64.699 1.028 1.592 1.028 2.683 0 3.842-2.339 4.687-4.566 4.935.359.309.678.919.678 1.852 0 1.336-.012 2.415-.012 2.743 0 .267.18.578.688.48C19.138 20.167 22 16.418 22 12c0-5.523-4.477-10-10-10z" />
</svg>
<div className="absolute inset-0 -translate-x-full bg-gradient-to-r from-transparent via-white/20 to-transparent group-hover:translate-x-full transition-transform duration-1000 ease-out"></div>
{starCount.toLocaleString()}
</Badge>
</div>
)}
<SidebarChild
onClick={() => {
// open docs.langbot.app

View File

@@ -104,7 +104,7 @@ export default function FileUploadZone({
id="file-upload"
className="hidden"
onChange={handleFileSelect}
accept=".pdf,.doc,.docx,.txt,.md,.html"
accept=".pdf,.doc,.docx,.txt,.md,.html,.zip"
disabled={isUploading}
/>

View File

@@ -292,7 +292,7 @@ const enUS = {
dragAndDrop: 'Drag and drop files here or click to upload',
uploading: 'Uploading...',
supportedFormats:
'Supports PDF, Word, TXT, Markdown and other document formats',
'Supports PDF, Word, TXT, Markdown, HTML, ZIP and other document formats',
uploadSuccess: 'File uploaded successfully!',
uploadError: 'File upload failed, please try again',
uploadingFile: 'Uploading file...',

View File

@@ -282,7 +282,7 @@ const zhHans = {
noResults: '暂无文档',
dragAndDrop: '拖拽文件到此处或点击上传',
uploading: '上传中...',
supportedFormats: '支持 PDF、Word、TXT、Markdown 等文档格式',
supportedFormats: '支持 PDF、Word、TXT、Markdown、HTML、ZIP 等文档格式',
uploadSuccess: '文件上传成功!',
uploadError: '文件上传失败,请重试',
uploadingFile: '上传文件中...',