Compare commits

...

22 Commits

Author SHA1 Message Date
Junyan Qin
34e2fa03ce chore: bump version 4.6.0-beta.1 for testing 2025-11-16 19:11:02 +08:00
Junyan Qin
7b63bcdc39 ci: publish pypi 2025-11-16 19:09:24 +08:00
Junyan Qin
d26e81620d fix: tests 2025-11-16 18:39:45 +08:00
Junyan Qin
e7885539a7 fix: read default-pipeline-config.json 2025-11-16 18:13:10 +08:00
Junyan Qin
f216505237 fix: read default-pipeline-config.json 2025-11-16 18:12:29 +08:00
Junyan Qin
8b11eefd0c Merge branch 'master' into copilot/create-langbot-python-package 2025-11-16 17:50:37 +08:00
Junyan Qin
418cddd657 chore: fix imports 2025-11-16 17:44:18 +08:00
Junyan Qin
75edeb7a01 chore: adjust dir structure 2025-11-16 16:28:04 +08:00
Junyan Qin
c5aa5be4d8 chore: update 2025-11-07 23:19:51 +08:00
Junyan Qin
92614062cc chore: update 2025-11-07 23:10:57 +08:00
Junyan Qin
09307d8c6d chore: update 2025-11-07 23:04:49 +08:00
Junyan Qin
894db240ae chore: update 2025-11-07 23:02:50 +08:00
Junyan Qin
f79cde5b0c chore: update 2025-11-07 22:55:33 +08:00
Junyan Qin
d43c2c498c chore: try pack templates in langbot/ 2025-11-07 22:51:30 +08:00
Junyan Qin
5f6036c5a8 chore: update pyproject.toml 2025-11-07 22:19:15 +08:00
copilot-swe-agent[bot]
dead0794b1 Simplify package configuration and document behavioral differences
- Removed redundant package-data configuration, relying on MANIFEST.in
- Added documentation about behavioral differences between package and source installation
- Clarified that include-package-data=true uses MANIFEST.in for data files

Co-authored-by: RockChinQ <45992437+RockChinQ@users.noreply.github.com>
2025-11-07 14:08:57 +00:00
copilot-swe-agent[bot]
f784bad08b Fix code review issues
- Use specific exception types instead of bare except
- Fix misleading comments about directory levels
- Remove redundant existence check before makedirs with exist_ok=True
- Use context manager for file opening to ensure proper cleanup

Co-authored-by: RockChinQ <45992437+RockChinQ@users.noreply.github.com>
2025-11-07 14:06:49 +00:00
copilot-swe-agent[bot]
4e86e1c93d Address code review feedback
- Made package-data configuration more specific to langbot package only
- Improved path detection with caching to avoid repeated file I/O
- Removed sys.path searching which was incorrect for package data
- Removed interactive input() call for non-interactive environment compatibility
- Simplified error messages for version check

Co-authored-by: RockChinQ <45992437+RockChinQ@users.noreply.github.com>
2025-11-07 14:04:47 +00:00
copilot-swe-agent[bot]
c0eec966ac Add PyPI installation documentation
- Created PYPI_INSTALLATION.md with detailed installation and usage instructions
- Updated README.md to feature uvx/pip installation as recommended method
- Updated README_EN.md with same changes for English documentation

Co-authored-by: RockChinQ <45992437+RockChinQ@users.noreply.github.com>
2025-11-07 14:02:49 +00:00
copilot-swe-agent[bot]
62d6dae4f5 Add PyPI publishing workflow and update license
- Created GitHub Actions workflow to build frontend and publish to PyPI
- Added license field to pyproject.toml to fix deprecation warning
- Updated .gitignore to exclude build artifacts
- Tested package building successfully

Co-authored-by: RockChinQ <45992437+RockChinQ@users.noreply.github.com>
2025-11-07 14:01:07 +00:00
copilot-swe-agent[bot]
cab573f3e2 Add package structure and resource path utilities
- Created langbot/ package with __init__.py and __main__.py entry point
- Added paths utility to find frontend and resource files from package installation
- Updated config loading to use resource paths
- Updated frontend serving to use resource paths
- Added MANIFEST.in for package data inclusion
- Updated pyproject.toml with build system and entry points

Co-authored-by: RockChinQ <45992437+RockChinQ@users.noreply.github.com>
2025-11-07 13:58:18 +00:00
copilot-swe-agent[bot]
8fe59da302 Initial plan 2025-11-07 13:48:46 +00:00
476 changed files with 985 additions and 998 deletions

.github/workflows/publish-to-pypi.yml (new file, 46 lines)

@@ -0,0 +1,46 @@
name: Build and Publish to PyPI
on:
workflow_dispatch:
release:
types: [published]
jobs:
build-and-publish:
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write # Required for trusted publishing to PyPI
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '22'
- name: Build frontend
run: |
cd web
npm install -g pnpm
pnpm install
pnpm build
mkdir -p ../src/langbot/web/out
cp -r out ../src/langbot/web/
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v6
with:
version: "latest"
- name: Build package
run: |
uv build
- name: Publish to PyPI
run: |
uv publish --token ${{ secrets.PYPI_TOKEN }}
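Since this workflow copies the freshly built frontend into `src/langbot/web/out` before `uv build`, the published wheel is only correct if those files actually land inside it. A minimal local sanity check, as a sketch (assuming the artifact is written to `dist/`; the wheel filename pattern here is illustrative):

```python
# Sketch: confirm the wheel produced by `uv build` bundles the pre-built frontend.
import glob
import zipfile

wheel_path = sorted(glob.glob('dist/langbot-*.whl'))[-1]   # newest wheel in dist/
with zipfile.ZipFile(wheel_path) as whl:
    bundled = [name for name in whl.namelist() if '/web/out/' in name]

print(f'{wheel_path}: {len(bundled)} frontend files bundled')
assert bundled, 'web/out is missing from the wheel'
```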

.gitignore (6 lines added)

@@ -47,3 +47,9 @@ uv.lock
plugins.bak
coverage.xml
.coverage
src/langbot/web/
# Build artifacts
/dist
/build
*.egg-info

README.md

@@ -31,6 +31,25 @@ LangBot 是一个开源的大语言模型原生即时通信机器人开发平台
## 📦 开始使用
#### 快速体验(推荐)
使用 `uvx` 一键启动(无需安装):
```bash
uvx langbot
```
或使用 `pip` 安装后运行:
```bash
pip install langbot
langbot
```
访问 http://localhost:5300 即可开始使用。
详细文档:[PyPI 安装](docs/PYPI_INSTALLATION.md)。
#### Docker Compose 部署
```bash

README_EN.md

@@ -25,6 +25,25 @@ LangBot is an open-source LLM native instant messaging robot development platform
## 📦 Getting Started
#### Quick Start (Recommended)
Use `uvx` to start with one command (no installation required):
```bash
uvx langbot
```
Or install with `pip` and run:
```bash
pip install langbot
langbot
```
Visit http://localhost:5300 to start using it.
Detailed documentation: [PyPI Installation](docs/PYPI_INSTALLATION.md).
#### Docker Compose Deployment
```bash

docs/PYPI_INSTALLATION.md (new file, 117 lines)

@@ -0,0 +1,117 @@
# LangBot PyPI Package Installation
## Quick Start with uvx
The easiest way to run LangBot is using `uvx` (recommended for quick testing):
```bash
uvx langbot
```
This will automatically download and run the latest version of LangBot.
## Install with pip/uv
You can also install LangBot as a regular Python package:
```bash
# Using pip
pip install langbot
# Using uv
uv pip install langbot
```
Then run it:
```bash
langbot
```
Or using Python module syntax:
```bash
python -m langbot
```
## Installation with Frontend
When published to PyPI, the LangBot package includes the pre-built frontend files. You don't need to build the frontend separately.
## Data Directory
When running LangBot as a package, it will create a `data/` directory in your current working directory to store configuration, logs, and other runtime data. You can run LangBot from any directory, and it will set up its data directory there.
## Command Line Options
LangBot supports the following command line options:
- `--standalone-runtime`: Use standalone plugin runtime
- `--debug`: Enable debug mode
Example:
```bash
langbot --debug
```
## Comparison with Other Installation Methods
### PyPI Package (uvx/pip)
- **Pros**: Easy to install and update; no need to clone the repository or build the frontend
- **Cons**: Less flexible for development/customization
### Docker
- **Pros**: Isolated environment, easy deployment
- **Cons**: Requires Docker
### Manual Source Installation
- **Pros**: Full control, easy to customize and develop
- **Cons**: Requires building the frontend and managing dependencies manually
## Development
If you want to contribute or customize LangBot, you should still use the manual installation method by cloning the repository:
```bash
git clone https://github.com/langbot-app/LangBot
cd LangBot
uv sync
cd web
npm install
npm run build
cd ..
uv run main.py
```
## Updating
To update to the latest version:
```bash
# With pip
pip install --upgrade langbot
# With uv
uv pip install --upgrade langbot
# With uvx (automatically uses latest)
uvx langbot
```
## System Requirements
- Python 3.10.1 or higher
- Operating System: Linux, macOS, or Windows
## Differences from Source Installation
When running LangBot from the PyPI package (via uvx or pip), there are a few behavioral differences compared to running from source:
1. **Version Check**: The package version does not prompt for user input when the Python version is incompatible. It simply prints an error message and exits. This makes it compatible with non-interactive environments like containers and CI/CD.
2. **Working Directory**: The package version does not require being run from the LangBot project root. You can run `langbot` from any directory, and it will create a `data/` directory in your current working directory.
3. **Frontend Files**: The frontend is pre-built and included in the package, so you don't need to run `npm run build` separately.
These differences are intentional to make the package more user-friendly and suitable for various deployment scenarios.
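The Data Directory and Differences sections above come down to a few lines in the packaged entry point (reproduced in `src/langbot/__main__.py` later in this diff); a sketch of that behaviour:

```python
# Sketch of the packaged behaviour described above: refuse old Python versions
# without prompting, then create ./data relative to the launch directory
# rather than the install location.
import os
import sys

if sys.version_info < (3, 10, 1):
    print('Python 3.10.1 or higher is required. Current version:', sys.version)
    sys.exit(1)

os.makedirs('data', exist_ok=True)
print('data directory:', os.path.abspath('data'))   # e.g. /home/alice/bots/prod/data
```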

View File

@@ -1,45 +0,0 @@
from v1 import client # type: ignore
import asyncio
import os
import json
class TestDifyClient:
async def test_chat_messages(self):
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
async for chunk in cln.chat_messages(inputs={}, query='调用工具查看现在几点?', user='test'):
print(json.dumps(chunk, ensure_ascii=False, indent=4))
async def test_upload_file(self):
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
file_bytes = open('img.png', 'rb').read()
print(type(file_bytes))
file = ('img2.png', file_bytes, 'image/png')
resp = await cln.upload_file(file=file, user='test')
print(json.dumps(resp, ensure_ascii=False, indent=4))
async def test_workflow_run(self):
cln = client.AsyncDifyServiceClient(api_key=os.getenv('DIFY_API_KEY'), base_url=os.getenv('DIFY_BASE_URL'))
# resp = await cln.workflow_run(inputs={}, user="test")
# # print(json.dumps(resp, ensure_ascii=False, indent=4))
# print(resp)
chunks = []
ignored_events = ['text_chunk']
async for chunk in cln.workflow_run(inputs={}, user='test'):
if chunk['event'] in ignored_events:
continue
chunks.append(chunk)
print(json.dumps(chunks, ensure_ascii=False, indent=4))
if __name__ == '__main__':
asyncio.run(TestDifyClient().test_chat_messages())

main.py (118 lines changed)

@@ -1,117 +1,3 @@
import asyncio import langbot.__main__
import argparse
# LangBot 终端启动入口
# 在此层级解决依赖项检查。
# LangBot/main.py
asciiart = r""" langbot.__main__.main()
_ ___ _
| | __ _ _ _ __ _| _ ) ___| |_
| |__/ _` | ' \/ _` | _ \/ _ \ _|
|____\__,_|_||_\__, |___/\___/\__|
|___/
⭐️ Open Source 开源地址: https://github.com/langbot-app/LangBot
📖 Documentation 文档地址: https://docs.langbot.app
"""
async def main_entry(loop: asyncio.AbstractEventLoop):
parser = argparse.ArgumentParser(description='LangBot')
parser.add_argument(
'--standalone-runtime',
action='store_true',
help='Use standalone plugin runtime / 使用独立插件运行时',
default=False,
)
parser.add_argument('--debug', action='store_true', help='Debug mode / 调试模式', default=False)
args = parser.parse_args()
if args.standalone_runtime:
from pkg.utils import platform
platform.standalone_runtime = True
if args.debug:
from pkg.utils import constants
constants.debug_mode = True
print(asciiart)
import sys
# 检查依赖
from pkg.core.bootutils import deps
missing_deps = await deps.check_deps()
if missing_deps:
print('以下依赖包未安装,将自动安装,请完成后重启程序:')
print(
'These dependencies are missing, they will be installed automatically, please restart the program after completion:'
)
for dep in missing_deps:
print('-', dep)
await deps.install_deps(missing_deps)
print('已自动安装缺失的依赖包,请重启程序。')
print('The missing dependencies have been installed automatically, please restart the program.')
sys.exit(0)
# # 检查pydantic版本如果没有 pydantic.v1则把 pydantic 映射为 v1
# import pydantic.version
# if pydantic.version.VERSION < '2.0':
# import pydantic
# sys.modules['pydantic.v1'] = pydantic
# 检查配置文件
from pkg.core.bootutils import files
generated_files = await files.generate_files()
if generated_files:
print('以下文件不存在,已自动生成:')
print('Following files do not exist and have been automatically generated:')
for file in generated_files:
print('-', file)
from pkg.core import boot
await boot.main(loop)
if __name__ == '__main__':
import os
import sys
# 必须大于 3.10.1
if sys.version_info < (3, 10, 1):
print('需要 Python 3.10.1 及以上版本,当前 Python 版本为:', sys.version)
input('按任意键退出...')
print('Your Python version is not supported. Please exit the program by pressing any key.')
exit(1)
# Check if the current directory is the LangBot project root directory
invalid_pwd = False
if not os.path.exists('main.py'):
invalid_pwd = True
else:
with open('main.py', 'r', encoding='utf-8') as f:
content = f.read()
if 'LangBot/main.py' not in content:
invalid_pwd = True
if invalid_pwd:
print('请在 LangBot 项目根目录下以命令形式运行此程序。')
input('按任意键退出...')
print('Please run this program in the LangBot project root directory in command form.')
print('Press any key to exit...')
exit(1)
loop = asyncio.new_event_loop()
loop.run_until_complete(main_entry(loop))

View File

@@ -1,115 +0,0 @@
from __future__ import annotations
import json
import typing
import os
import base64
import logging
import pydantic
import requests
from ..core import app
class Announcement(pydantic.BaseModel):
"""公告"""
id: int
time: str
timestamp: int
content: str
enabled: typing.Optional[bool] = True
def to_dict(self) -> dict:
return {
'id': self.id,
'time': self.time,
'timestamp': self.timestamp,
'content': self.content,
'enabled': self.enabled,
}
class AnnouncementManager:
"""公告管理器"""
ap: app.Application = None
def __init__(self, ap: app.Application):
self.ap = ap
async def fetch_all(self) -> list[Announcement]:
"""获取所有公告"""
try:
resp = requests.get(
url='https://api.github.com/repos/langbot-app/LangBot/contents/res/announcement.json',
proxies=self.ap.proxy_mgr.get_forward_proxies(),
timeout=5,
)
resp.raise_for_status() # 检查请求是否成功
obj_json = resp.json()
b64_content = obj_json['content']
# 解码
content = base64.b64decode(b64_content).decode('utf-8')
return [Announcement(**item) for item in json.loads(content)]
except (requests.RequestException, json.JSONDecodeError, KeyError) as e:
self.ap.logger.warning(f'获取公告失败: {e}')
pass
return [] # 请求失败时返回空列表
async def fetch_saved(self) -> list[Announcement]:
if not os.path.exists('data/labels/announcement_saved.json'):
with open('data/labels/announcement_saved.json', 'w', encoding='utf-8') as f:
f.write('[]')
with open('data/labels/announcement_saved.json', 'r', encoding='utf-8') as f:
content = f.read()
if not content:
content = '[]'
return [Announcement(**item) for item in json.loads(content)]
async def write_saved(self, content: list[Announcement]):
with open('data/labels/announcement_saved.json', 'w', encoding='utf-8') as f:
f.write(json.dumps([item.to_dict() for item in content], indent=4, ensure_ascii=False))
async def fetch_new(self) -> list[Announcement]:
"""获取新公告"""
all = await self.fetch_all()
saved = await self.fetch_saved()
to_show: list[Announcement] = []
for item in all:
# 遍历saved检查是否有相同id的公告
for saved_item in saved:
if saved_item.id == item.id:
break
else:
if item.enabled:
# 没有相同id的公告
to_show.append(item)
await self.write_saved(all)
return to_show
async def show_announcements(self) -> typing.Tuple[str, int]:
"""显示公告"""
try:
announcements = await self.fetch_new()
ann_text = ''
for ann in announcements:
ann_text += f'[公告] {ann.time}: {ann.content}\n'
# TODO statistics
return ann_text, logging.INFO
except Exception as e:
return f'获取公告时出错: {e}', logging.WARNING

pyproject.toml

@@ -1,8 +1,9 @@
 [project]
 name = "langbot"
-version = "4.5.0"
+version = "4.6.0-beta.1"
 description = "Easy-to-use global IM bot platform designed for LLM era"
 readme = "README.md"
+license-files = ["LICENSE"]
 requires-python = ">=3.10.1,<4.0"
 dependencies = [
 "aiocqhttp>=1.4.4",
@@ -85,11 +86,10 @@ keywords = [
 "onebot",
 ]
 classifiers = [
-"Development Status :: 3 - Alpha",
+"Development Status :: 5 - Production/Stable",
 "Framework :: AsyncIO",
 "Framework :: Robot Framework",
 "Framework :: Robot Framework :: Library",
-"License :: OSI Approved :: AGPL-3 License",
 "Operating System :: OS Independent",
 "Programming Language :: Python :: 3",
 "Topic :: Communications :: Chat",
@@ -100,6 +100,16 @@ Homepage = "https://langbot.app"
 Documentation = "https://docs.langbot.app"
 Repository = "https://github.com/langbot-app/LangBot"
+[project.scripts]
+langbot = "langbot.__main__:main"
+[build-system]
+requires = ["setuptools>=61.0", "wheel"]
+build-backend = "setuptools.build_meta"
+[tool.setuptools]
+package-data = { "langbot" = ["templates/*", "pkg/provider/modelmgr/requesters/*", "pkg/platform/sources/*", "web/out/**"] }
 [dependency-groups]
 dev = [
 "pre-commit>=4.2.0",

View File

@@ -26,7 +26,7 @@ markers =
 # Coverage options (when using pytest-cov)
 [coverage:run]
-source = pkg
+source = langbot.pkg
 omit =
 */tests/*
 */test_*.py

src/langbot/__init__.py (new file, 3 lines)

@@ -0,0 +1,3 @@
"""LangBot - Easy-to-use global IM bot platform designed for LLM era"""
__version__ = '4.6.0-beta.1'
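With `__version__` exposed at the package top level, the installed version can be read either from the attribute or from the distribution metadata; a short sketch (the two only agree while `pyproject.toml` and `__init__.py` are bumped together, as this change set does):

```python
# Two ways to read the installed LangBot version.
from importlib.metadata import version

import langbot

print(langbot.__version__)   # attribute set in src/langbot/__init__.py
print(version('langbot'))    # read from the installed distribution metadata
```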

src/langbot/__main__.py (new file, 104 lines)

@@ -0,0 +1,104 @@
"""LangBot entry point for package execution"""
import asyncio
import argparse
import sys
import os
# ASCII art banner
asciiart = r"""
_ ___ _
| | __ _ _ _ __ _| _ ) ___| |_
| |__/ _` | ' \/ _` | _ \/ _ \ _|
|____\__,_|_||_\__, |___/\___/\__|
|___/
⭐️ Open Source 开源地址: https://github.com/langbot-app/LangBot
📖 Documentation 文档地址: https://docs.langbot.app
"""
async def main_entry(loop: asyncio.AbstractEventLoop):
"""Main entry point for LangBot"""
parser = argparse.ArgumentParser(description='LangBot')
parser.add_argument(
'--standalone-runtime',
action='store_true',
help='Use standalone plugin runtime / 使用独立插件运行时',
default=False,
)
parser.add_argument('--debug', action='store_true', help='Debug mode / 调试模式', default=False)
args = parser.parse_args()
if args.standalone_runtime:
from langbot.pkg.utils import platform
platform.standalone_runtime = True
if args.debug:
from langbot.pkg.utils import constants
constants.debug_mode = True
print(asciiart)
# Check dependencies
from langbot.pkg.core.bootutils import deps
missing_deps = await deps.check_deps()
if missing_deps:
print('以下依赖包未安装,将自动安装,请完成后重启程序:')
print(
'These dependencies are missing, they will be installed automatically, please restart the program after completion:'
)
for dep in missing_deps:
print('-', dep)
await deps.install_deps(missing_deps)
print('已自动安装缺失的依赖包,请重启程序。')
print('The missing dependencies have been installed automatically, please restart the program.')
sys.exit(0)
# Check configuration files
from langbot.pkg.core.bootutils import files
generated_files = await files.generate_files()
if generated_files:
print('以下文件不存在,已自动生成:')
print('Following files do not exist and have been automatically generated:')
for file in generated_files:
print('-', file)
from langbot.pkg.core import boot
await boot.main(loop)
def main():
"""Main function to be called by console script entry point"""
# Check Python version
if sys.version_info < (3, 10, 1):
print('需要 Python 3.10.1 及以上版本,当前 Python 版本为:', sys.version)
print('Your Python version is not supported.')
print('Python 3.10.1 or higher is required. Current version:', sys.version)
sys.exit(1)
# Set up the working directory
# When installed as a package, we need to handle the working directory differently
# We'll create data directory in current working directory if not exists
os.makedirs('data', exist_ok=True)
loop = asyncio.new_event_loop()
try:
loop.run_until_complete(main_entry(loop))
except KeyboardInterrupt:
print('\n正在退出...')
print('Exiting...')
finally:
loop.close()
if __name__ == '__main__':
main()
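This `main()` is the target of the `[project.scripts]` entry added in `pyproject.toml`, so the `langbot` console script, `python -m langbot`, and a programmatic call all reach the same code; for example:

```python
# Equivalent to running the `langbot` console script or `python -m langbot`.
from langbot.__main__ import main

if __name__ == '__main__':
    main()
```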

View File

@@ -7,10 +7,8 @@ import os
from pathlib import Path from pathlib import Path
class AsyncCozeAPIClient: class AsyncCozeAPIClient:
def __init__(self, api_key: str, api_base: str = "https://api.coze.cn"): def __init__(self, api_key: str, api_base: str = 'https://api.coze.cn'):
self.api_key = api_key self.api_key = api_key
self.api_base = api_base self.api_base = api_base
self.session = None self.session = None
@@ -24,13 +22,11 @@ class AsyncCozeAPIClient:
"""退出时自动关闭会话""" """退出时自动关闭会话"""
await self.close() await self.close()
async def coze_session(self): async def coze_session(self):
"""确保HTTP session存在""" """确保HTTP session存在"""
if self.session is None: if self.session is None:
connector = aiohttp.TCPConnector( connector = aiohttp.TCPConnector(
ssl=False if self.api_base.startswith("http://") else True, ssl=False if self.api_base.startswith('http://') else True,
limit=100, limit=100,
limit_per_host=30, limit_per_host=30,
keepalive_timeout=30, keepalive_timeout=30,
@@ -42,12 +38,10 @@ class AsyncCozeAPIClient:
sock_read=120, sock_read=120,
) )
headers = { headers = {
"Authorization": f"Bearer {self.api_key}", 'Authorization': f'Bearer {self.api_key}',
"Accept": "text/event-stream", 'Accept': 'text/event-stream',
} }
self.session = aiohttp.ClientSession( self.session = aiohttp.ClientSession(headers=headers, timeout=timeout, connector=connector)
headers=headers, timeout=timeout, connector=connector
)
return self.session return self.session
async def close(self): async def close(self):
@@ -63,15 +57,15 @@ class AsyncCozeAPIClient:
# 处理 Path 对象 # 处理 Path 对象
if isinstance(file, Path): if isinstance(file, Path):
if not file.exists(): if not file.exists():
raise ValueError(f"File not found: {file}") raise ValueError(f'File not found: {file}')
with open(file, "rb") as f: with open(file, 'rb') as f:
file = f.read() file = f.read()
# 处理文件路径字符串 # 处理文件路径字符串
elif isinstance(file, str): elif isinstance(file, str):
if not os.path.isfile(file): if not os.path.isfile(file):
raise ValueError(f"File not found: {file}") raise ValueError(f'File not found: {file}')
with open(file, "rb") as f: with open(file, 'rb') as f:
file = f.read() file = f.read()
# 处理文件对象 # 处理文件对象
@@ -79,43 +73,39 @@ class AsyncCozeAPIClient:
file = file.read() file = file.read()
session = await self.coze_session() session = await self.coze_session()
url = f"{self.api_base}/v1/files/upload" url = f'{self.api_base}/v1/files/upload'
try: try:
file_io = io.BytesIO(file) file_io = io.BytesIO(file)
async with session.post( async with session.post(
url, url,
data={ data={
"file": file_io, 'file': file_io,
}, },
timeout=aiohttp.ClientTimeout(total=60), timeout=aiohttp.ClientTimeout(total=60),
) as response: ) as response:
if response.status == 401: if response.status == 401:
raise Exception("Coze API 认证失败,请检查 API Key 是否正确") raise Exception('Coze API 认证失败,请检查 API Key 是否正确')
response_text = await response.text() response_text = await response.text()
if response.status != 200: if response.status != 200:
raise Exception( raise Exception(f'文件上传失败,状态码: {response.status}, 响应: {response_text}')
f"文件上传失败,状态码: {response.status}, 响应: {response_text}"
)
try: try:
result = await response.json() result = await response.json()
except json.JSONDecodeError: except json.JSONDecodeError:
raise Exception(f"文件上传响应解析失败: {response_text}") raise Exception(f'文件上传响应解析失败: {response_text}')
if result.get("code") != 0: if result.get('code') != 0:
raise Exception(f"文件上传失败: {result.get('msg', '未知错误')}") raise Exception(f'文件上传失败: {result.get("msg", "未知错误")}')
file_id = result["data"]["id"] file_id = result['data']['id']
return file_id return file_id
except asyncio.TimeoutError: except asyncio.TimeoutError:
raise Exception("文件上传超时") raise Exception('文件上传超时')
except Exception as e: except Exception as e:
raise Exception(f"文件上传失败: {str(e)}") raise Exception(f'文件上传失败: {str(e)}')
async def chat_messages( async def chat_messages(
self, self,
@@ -139,22 +129,21 @@ class AsyncCozeAPIClient:
timeout: 超时时间 timeout: 超时时间
""" """
session = await self.coze_session() session = await self.coze_session()
url = f"{self.api_base}/v3/chat" url = f'{self.api_base}/v3/chat'
payload = { payload = {
"bot_id": bot_id, 'bot_id': bot_id,
"user_id": user_id, 'user_id': user_id,
"stream": stream, 'stream': stream,
"auto_save_history": auto_save_history, 'auto_save_history': auto_save_history,
} }
if additional_messages: if additional_messages:
payload["additional_messages"] = additional_messages payload['additional_messages'] = additional_messages
params = {} params = {}
if conversation_id: if conversation_id:
params["conversation_id"] = conversation_id params['conversation_id'] = conversation_id
try: try:
async with session.post( async with session.post(
@@ -164,29 +153,25 @@ class AsyncCozeAPIClient:
timeout=aiohttp.ClientTimeout(total=timeout), timeout=aiohttp.ClientTimeout(total=timeout),
) as response: ) as response:
if response.status == 401: if response.status == 401:
raise Exception("Coze API 认证失败,请检查 API Key 是否正确") raise Exception('Coze API 认证失败,请检查 API Key 是否正确')
if response.status != 200: if response.status != 200:
raise Exception(f"Coze API 流式请求失败,状态码: {response.status}") raise Exception(f'Coze API 流式请求失败,状态码: {response.status}')
async for chunk in response.content: async for chunk in response.content:
chunk = chunk.decode("utf-8") chunk = chunk.decode('utf-8')
if chunk != '\n': if chunk != '\n':
if chunk.startswith("event:"): if chunk.startswith('event:'):
chunk_type = chunk.replace("event:", "", 1).strip() chunk_type = chunk.replace('event:', '', 1).strip()
elif chunk.startswith("data:"): elif chunk.startswith('data:'):
chunk_data = chunk.replace("data:", "", 1).strip() chunk_data = chunk.replace('data:', '', 1).strip()
else: else:
yield {"event": chunk_type, "data": json.loads(chunk_data) if chunk_data else {}} # 处理本地部署时接口返回的data为空值 yield {
'event': chunk_type,
'data': json.loads(chunk_data) if chunk_data else {},
} # 处理本地部署时接口返回的data为空值
except asyncio.TimeoutError: except asyncio.TimeoutError:
raise Exception(f"Coze API 流式请求超时 ({timeout}秒)") raise Exception(f'Coze API 流式请求超时 ({timeout}秒)')
except Exception as e: except Exception as e:
raise Exception(f"Coze API 流式请求失败: {str(e)}") raise Exception(f'Coze API 流式请求失败: {str(e)}')

View File

@@ -194,28 +194,23 @@ class DingTalkClient:
'Type': 'richText', 'Type': 'richText',
'Elements': [], # 按顺序存储所有元素 'Elements': [], # 按顺序存储所有元素
'SimpleContent': '', # 兼容字段:纯文本内容 'SimpleContent': '', # 兼容字段:纯文本内容
'SimplePicture': '' # 兼容字段:第一张图片 'SimplePicture': '', # 兼容字段:第一张图片
} }
# 先收集所有文本和图片占位符 # 先收集所有文本和图片占位符
text_elements = [] text_elements = []
image_placeholders = []
# 解析富文本内容,保持原始顺序 # 解析富文本内容,保持原始顺序
for item in data['richText']: for item in data['richText']:
# 处理文本内容 # 处理文本内容
if 'text' in item and item['text'] != "\n": if 'text' in item and item['text'] != '\n':
element = { element = {'Type': 'text', 'Content': item['text']}
'Type': 'text',
'Content': item['text']
}
rich_content['Elements'].append(element) rich_content['Elements'].append(element)
text_elements.append(item['text']) text_elements.append(item['text'])
# 检查是否是图片元素 - 根据钉钉API的实际结构调整 # 检查是否是图片元素 - 根据钉钉API的实际结构调整
# 钉钉富文本中的图片通常有特定标识,可能需要根据实际返回调整 # 钉钉富文本中的图片通常有特定标识,可能需要根据实际返回调整
elif item.get("type") == "picture": elif item.get('type') == 'picture':
# 创建图片占位符 # 创建图片占位符
element = { element = {
'Type': 'image_placeholder', 'Type': 'image_placeholder',
@@ -232,10 +227,7 @@ class DingTalkClient:
if element['Type'] == 'image_placeholder': if element['Type'] == 'image_placeholder':
if image_index < len(image_list) and image_list[image_index]: if image_index < len(image_list) and image_list[image_index]:
image_url = await self.download_image(image_list[image_index]) image_url = await self.download_image(image_list[image_index])
new_elements.append({ new_elements.append({'Type': 'image', 'Picture': image_url})
'Type': 'image',
'Picture': image_url
})
image_index += 1 image_index += 1
else: else:
# 如果没有对应的图片,保留占位符或跳过 # 如果没有对应的图片,保留占位符或跳过
@@ -245,7 +237,6 @@ class DingTalkClient:
rich_content['Elements'] = new_elements rich_content['Elements'] = new_elements
# 设置兼容字段 # 设置兼容字段
all_texts = [elem['Content'] for elem in rich_content['Elements'] if elem.get('Type') == 'text'] all_texts = [elem['Content'] for elem in rich_content['Elements'] if elem.get('Type') == 'text']
rich_content['SimpleContent'] = '\n'.join(all_texts) if all_texts else '' rich_content['SimpleContent'] = '\n'.join(all_texts) if all_texts else ''
@@ -261,8 +252,6 @@ class DingTalkClient:
if all_images: if all_images:
message_data['Picture'] = all_images[0] message_data['Picture'] = all_images[0]
elif incoming_message.message_type == 'text': elif incoming_message.message_type == 'text':
message_data['Content'] = incoming_message.get_text_list()[0] message_data['Content'] = incoming_message.get_text_list()[0]

View File

@@ -43,7 +43,6 @@ class DingTalkEvent(dict):
def name(self): def name(self):
return self.get('Name', '') return self.get('Name', '')
@property @property
def conversation(self): def conversation(self):
return self.get('conversation_type', '') return self.get('conversation_type', '')

View File

@@ -1,12 +1,12 @@
# 微信公众号的加解密算法与企业微信一样,所以直接使用企业微信的加解密算法文件 # 微信公众号的加解密算法与企业微信一样,所以直接使用企业微信的加解密算法文件
import time import time
import traceback import traceback
from libs.wecom_api.WXBizMsgCrypt3 import WXBizMsgCrypt from langbot.libs.wecom_api.WXBizMsgCrypt3 import WXBizMsgCrypt
import xml.etree.ElementTree as ET import xml.etree.ElementTree as ET
from quart import Quart, request from quart import Quart, request
import hashlib import hashlib
from typing import Callable from typing import Callable
from .oaevent import OAEvent from langbot.libs.official_account_api.oaevent import OAEvent
import asyncio import asyncio

View File

@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import post_json from langbot.libs.wechatpad_api.util.http_util import post_json
class ChatRoomApi: class ChatRoomApi:

View File

@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import post_json from langbot.libs.wechatpad_api.util.http_util import post_json
import httpx import httpx
import base64 import base64

View File

@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import post_json, get_json from langbot.libs.wechatpad_api.util.http_util import post_json, get_json
class LoginApi: class LoginApi:

View File

@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import post_json from langbot.libs.wechatpad_api.util.http_util import post_json
class MessageApi: class MessageApi:

View File

@@ -1,4 +1,4 @@
from libs.wechatpad_api.util.http_util import post_json, async_request, get_json from langbot.libs.wechatpad_api.util.http_util import post_json, async_request, get_json
class UserApi: class UserApi:

View File

@@ -1,9 +1,9 @@
from libs.wechatpad_api.api.login import LoginApi from langbot.libs.wechatpad_api.api.login import LoginApi
from libs.wechatpad_api.api.friend import FriendApi from langbot.libs.wechatpad_api.api.friend import FriendApi
from libs.wechatpad_api.api.message import MessageApi from langbot.libs.wechatpad_api.api.message import MessageApi
from libs.wechatpad_api.api.user import UserApi from langbot.libs.wechatpad_api.api.user import UserApi
from libs.wechatpad_api.api.downloadpai import DownloadApi from langbot.libs.wechatpad_api.api.downloadpai import DownloadApi
from libs.wechatpad_api.api.chatroom import ChatRoomApi from langbot.libs.wechatpad_api.api.chatroom import ChatRoomApi
class WeChatPadClient: class WeChatPadClient:

View File

@@ -16,7 +16,7 @@ import struct
from Crypto.Cipher import AES from Crypto.Cipher import AES
import xml.etree.cElementTree as ET import xml.etree.cElementTree as ET
import socket import socket
from libs.wecom_ai_bot_api import ierror from langbot.libs.wecom_ai_bot_api import ierror
""" """

View File

@@ -13,9 +13,9 @@ import httpx
from Crypto.Cipher import AES from Crypto.Cipher import AES
from quart import Quart, request, Response, jsonify from quart import Quart, request, Response, jsonify
from libs.wecom_ai_bot_api import wecombotevent from langbot.libs.wecom_ai_bot_api import wecombotevent
from libs.wecom_ai_bot_api.WXBizMsgCrypt3 import WXBizMsgCrypt from langbot.libs.wecom_ai_bot_api.WXBizMsgCrypt3 import WXBizMsgCrypt
from pkg.platform.logger import EventLogger from langbot.pkg.platform.logger import EventLogger
@dataclass @dataclass
@@ -219,10 +219,7 @@ class WecomBotClient:
self.ReceiveId = '' self.ReceiveId = ''
self.app = Quart(__name__) self.app = Quart(__name__)
self.app.add_url_rule( self.app.add_url_rule(
'/callback/command', '/callback/command', 'handle_callback', self.handle_callback_request, methods=['POST', 'GET']
'handle_callback',
self.handle_callback_request,
methods=['POST', 'GET']
) )
self._message_handlers = { self._message_handlers = {
'example': [], 'example': [],
@@ -420,7 +417,7 @@ class WecomBotClient:
await self.logger.error("请求体中缺少 'encrypt' 字段") await self.logger.error("请求体中缺少 'encrypt' 字段")
return Response('Bad Request', status=400) return Response('Bad Request', status=400)
xml_post_data = f"<xml><Encrypt><![CDATA[{encrypted_msg}]]></Encrypt></xml>" xml_post_data = f'<xml><Encrypt><![CDATA[{encrypted_msg}]]></Encrypt></xml>'
ret, decrypted_xml = self.wxcpt.DecryptMsg(xml_post_data, msg_signature, timestamp, nonce) ret, decrypted_xml = self.wxcpt.DecryptMsg(xml_post_data, msg_signature, timestamp, nonce)
if ret != 0: if ret != 0:
await self.logger.error('解密失败') await self.logger.error('解密失败')
@@ -458,7 +455,7 @@ class WecomBotClient:
picurl = item.get('image', {}).get('url') picurl = item.get('image', {}).get('url')
if texts: if texts:
message_data['content'] = "".join(texts) # 拼接所有 text message_data['content'] = ''.join(texts) # 拼接所有 text
if picurl: if picurl:
base64 = await self.download_url_to_base64(picurl, self.EnCodingAESKey) base64 = await self.download_url_to_base64(picurl, self.EnCodingAESKey)
message_data['picurl'] = base64 # 只保留第一个 image message_data['picurl'] = base64 # 只保留第一个 image
@@ -466,7 +463,9 @@ class WecomBotClient:
# Extract user information # Extract user information
from_info = msg_json.get('from', {}) from_info = msg_json.get('from', {})
message_data['userid'] = from_info.get('userid', '') message_data['userid'] = from_info.get('userid', '')
message_data['username'] = from_info.get('alias', '') or from_info.get('name', '') or from_info.get('userid', '') message_data['username'] = (
from_info.get('alias', '') or from_info.get('name', '') or from_info.get('userid', '')
)
# Extract chat/group information # Extract chat/group information
if msg_json.get('chattype', '') == 'group': if msg_json.get('chattype', '') == 'group':
@@ -555,7 +554,7 @@ class WecomBotClient:
encrypted_bytes = response.content encrypted_bytes = response.content
aes_key = base64.b64decode(encoding_aes_key + "=") # base64 补齐 aes_key = base64.b64decode(encoding_aes_key + '=') # base64 补齐
iv = aes_key[:16] iv = aes_key[:16]
cipher = AES.new(aes_key, AES.MODE_CBC, iv) cipher = AES.new(aes_key, AES.MODE_CBC, iv)
@@ -564,22 +563,22 @@ class WecomBotClient:
pad_len = decrypted[-1] pad_len = decrypted[-1]
decrypted = decrypted[:-pad_len] decrypted = decrypted[:-pad_len]
if decrypted.startswith(b"\xff\xd8"): # JPEG if decrypted.startswith(b'\xff\xd8'): # JPEG
mime_type = "image/jpeg" mime_type = 'image/jpeg'
elif decrypted.startswith(b"\x89PNG"): # PNG elif decrypted.startswith(b'\x89PNG'): # PNG
mime_type = "image/png" mime_type = 'image/png'
elif decrypted.startswith((b"GIF87a", b"GIF89a")): # GIF elif decrypted.startswith((b'GIF87a', b'GIF89a')): # GIF
mime_type = "image/gif" mime_type = 'image/gif'
elif decrypted.startswith(b"BM"): # BMP elif decrypted.startswith(b'BM'): # BMP
mime_type = "image/bmp" mime_type = 'image/bmp'
elif decrypted.startswith(b"II*\x00") or decrypted.startswith(b"MM\x00*"): # TIFF elif decrypted.startswith(b'II*\x00') or decrypted.startswith(b'MM\x00*'): # TIFF
mime_type = "image/tiff" mime_type = 'image/tiff'
else: else:
mime_type = "application/octet-stream" mime_type = 'application/octet-stream'
# 转 base64 # 转 base64
base64_str = base64.b64encode(decrypted).decode("utf-8") base64_str = base64.b64encode(decrypted).decode('utf-8')
return f"data:{mime_type};base64,{base64_str}" return f'data:{mime_type};base64,{base64_str}'
async def run_task(self, host: str, port: int, *args, **kwargs): async def run_task(self, host: str, port: int, *args, **kwargs):
""" """

View File

@@ -29,7 +29,12 @@ class WecomBotEvent(dict):
""" """
用户名称 用户名称
""" """
return self.get('username', '') or self.get('from', {}).get('alias', '') or self.get('from', {}).get('name', '') or self.userid return (
self.get('username', '')
or self.get('from', {}).get('alias', '')
or self.get('from', {}).get('name', '')
or self.userid
)
@property @property
def chatname(self) -> str: def chatname(self) -> str:
@@ -65,7 +70,7 @@ class WecomBotEvent(dict):
消息id 消息id
""" """
return self.get('msgid', '') return self.get('msgid', '')
@property @property
def ai_bot_id(self) -> str: def ai_bot_id(self) -> str:
""" """

View File

@@ -340,4 +340,3 @@ class WecomClient:
async def get_media_id(self, image: platform_message.Image): async def get_media_id(self, image: platform_message.Image):
media_id = await self.upload_to_work(image=image) media_id = await self.upload_to_work(image=image)
return media_id return media_id

View File

@@ -110,7 +110,7 @@ class RouterGroup(abc.ABC):
elif auth_type == AuthType.USER_TOKEN_OR_API_KEY: elif auth_type == AuthType.USER_TOKEN_OR_API_KEY:
# Try API key first (check X-API-Key header) # Try API key first (check X-API-Key header)
api_key = quart.request.headers.get('X-API-Key', '') api_key = quart.request.headers.get('X-API-Key', '')
if api_key: if api_key:
# API key authentication # API key authentication
try: try:
@@ -124,7 +124,9 @@ class RouterGroup(abc.ABC):
token = quart.request.headers.get('Authorization', '').replace('Bearer ', '') token = quart.request.headers.get('Authorization', '').replace('Bearer ', '')
if not token: if not token:
return self.http_status(401, -1, 'No valid authentication provided (user token or API key required)') return self.http_status(
401, -1, 'No valid authentication provided (user token or API key required)'
)
try: try:
user_email = await self.ap.user_service.verify_jwt_token(token) user_email = await self.ap.user_service.verify_jwt_token(token)

View File

@@ -27,7 +27,9 @@ class PipelinesRouterGroup(group.RouterGroup):
async def _() -> str: async def _() -> str:
return self.success(data={'configs': await self.ap.pipeline_service.get_pipeline_metadata()}) return self.success(data={'configs': await self.ap.pipeline_service.get_pipeline_metadata()})
@self.route('/<pipeline_uuid>', methods=['GET', 'PUT', 'DELETE'], auth_type=group.AuthType.USER_TOKEN_OR_API_KEY) @self.route(
'/<pipeline_uuid>', methods=['GET', 'PUT', 'DELETE'], auth_type=group.AuthType.USER_TOKEN_OR_API_KEY
)
async def _(pipeline_uuid: str) -> str: async def _(pipeline_uuid: str) -> str:
if quart.request.method == 'GET': if quart.request.method == 'GET':
pipeline = await self.ap.pipeline_service.get_pipeline(pipeline_uuid) pipeline = await self.ap.pipeline_service.get_pipeline(pipeline_uuid)
@@ -47,7 +49,9 @@ class PipelinesRouterGroup(group.RouterGroup):
return self.success() return self.success()
@self.route('/<pipeline_uuid>/extensions', methods=['GET', 'PUT'], auth_type=group.AuthType.USER_TOKEN_OR_API_KEY) @self.route(
'/<pipeline_uuid>/extensions', methods=['GET', 'PUT'], auth_type=group.AuthType.USER_TOKEN_OR_API_KEY
)
async def _(pipeline_uuid: str) -> str: async def _(pipeline_uuid: str) -> str:
if quart.request.method == 'GET': if quart.request.method == 'GET':
# Get current extensions and available plugins # Get current extensions and available plugins

View File

@@ -86,7 +86,9 @@ class HTTPController:
ginst = g(self.ap, self.quart_app) ginst = g(self.ap, self.quart_app)
await ginst.initialize() await ginst.initialize()
frontend_path = 'web/out' from ....utils import paths
frontend_path = paths.get_frontend_path()
@self.quart_app.route('/') @self.quart_app.route('/')
async def index(): async def index():
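`paths.get_frontend_path()` here (and `paths.get_resource_path()` used further below in the pipeline service) belong to a utility module whose body is not shown in this diff. A hedged sketch of what such helpers might look like with `importlib.resources`; the names and layout are illustrative, not the project's actual implementation:

```python
# Hypothetical sketch only: the real langbot paths utility is not shown in this diff.
from importlib import resources


def get_resource_path(relative: str) -> str:
    """Resolve a data file shipped inside the installed langbot package."""
    node = resources.files('langbot')
    for part in relative.split('/'):
        node = node / part
    return str(node)


def get_frontend_path() -> str:
    """Resolve the pre-built frontend bundled at langbot/web/out."""
    return get_resource_path('web/out')
```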

View File

@@ -61,9 +61,7 @@ class ApiKeyService:
async def delete_api_key(self, key_id: int) -> None: async def delete_api_key(self, key_id: int) -> None:
"""Delete an API key""" """Delete an API key"""
await self.ap.persistence_mgr.execute_async( await self.ap.persistence_mgr.execute_async(sqlalchemy.delete(apikey.ApiKey).where(apikey.ApiKey.id == key_id))
sqlalchemy.delete(apikey.ApiKey).where(apikey.ApiKey.id == key_id)
)
async def update_api_key(self, key_id: int, name: str = None, description: str = None) -> None: async def update_api_key(self, key_id: int, name: str = None, description: str = None) -> None:
"""Update an API key's metadata (name, description)""" """Update an API key's metadata (name, description)"""

View File

@@ -84,13 +84,11 @@ class MCPService:
new_enable = server_data.get('enable', False) new_enable = server_data.get('enable', False)
need_remove = old_server_name and old_server_name in self.ap.tool_mgr.mcp_tool_loader.sessions need_remove = old_server_name and old_server_name in self.ap.tool_mgr.mcp_tool_loader.sessions
need_start = new_enable
if old_enable and not new_enable: if old_enable and not new_enable:
if need_remove: if need_remove:
await self.ap.tool_mgr.mcp_tool_loader.remove_mcp_server(old_server_name) await self.ap.tool_mgr.mcp_tool_loader.remove_mcp_server(old_server_name)
elif not old_enable and new_enable: elif not old_enable and new_enable:
result = await self.ap.persistence_mgr.execute_async( result = await self.ap.persistence_mgr.execute_async(
sqlalchemy.select(persistence_mcp.MCPServer).where(persistence_mcp.MCPServer.uuid == server_uuid) sqlalchemy.select(persistence_mcp.MCPServer).where(persistence_mcp.MCPServer.uuid == server_uuid)
@@ -100,7 +98,7 @@ class MCPService:
server_config = self.ap.persistence_mgr.serialize_model(persistence_mcp.MCPServer, updated_server) server_config = self.ap.persistence_mgr.serialize_model(persistence_mcp.MCPServer, updated_server)
task = asyncio.create_task(self.ap.tool_mgr.mcp_tool_loader.host_mcp_server(server_config)) task = asyncio.create_task(self.ap.tool_mgr.mcp_tool_loader.host_mcp_server(server_config))
self.ap.tool_mgr.mcp_tool_loader._hosted_mcp_tasks.append(task) self.ap.tool_mgr.mcp_tool_loader._hosted_mcp_tasks.append(task)
elif old_enable and new_enable: elif old_enable and new_enable:
if need_remove: if need_remove:
await self.ap.tool_mgr.mcp_tool_loader.remove_mcp_server(old_server_name) await self.ap.tool_mgr.mcp_tool_loader.remove_mcp_server(old_server_name)
@@ -112,7 +110,6 @@ class MCPService:
server_config = self.ap.persistence_mgr.serialize_model(persistence_mcp.MCPServer, updated_server) server_config = self.ap.persistence_mgr.serialize_model(persistence_mcp.MCPServer, updated_server)
task = asyncio.create_task(self.ap.tool_mgr.mcp_tool_loader.host_mcp_server(server_config)) task = asyncio.create_task(self.ap.tool_mgr.mcp_tool_loader.host_mcp_server(server_config))
self.ap.tool_mgr.mcp_tool_loader._hosted_mcp_tasks.append(task) self.ap.tool_mgr.mcp_tool_loader._hosted_mcp_tasks.append(task)
async def delete_mcp_server(self, server_uuid: str) -> None: async def delete_mcp_server(self, server_uuid: str) -> None:
result = await self.ap.persistence_mgr.execute_async( result = await self.ap.persistence_mgr.execute_async(

View File

@@ -30,12 +30,12 @@ class PipelineService:
def __init__(self, ap: app.Application) -> None: def __init__(self, ap: app.Application) -> None:
self.ap = ap self.ap = ap
async def get_pipeline_metadata(self) -> dict: async def get_pipeline_metadata(self) -> list[dict]:
return [ return [
self.ap.pipeline_config_meta_trigger.data, self.ap.pipeline_config_meta_trigger,
self.ap.pipeline_config_meta_safety.data, self.ap.pipeline_config_meta_safety,
self.ap.pipeline_config_meta_ai.data, self.ap.pipeline_config_meta_ai,
self.ap.pipeline_config_meta_output.data, self.ap.pipeline_config_meta_output,
] ]
async def get_pipelines(self, sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]: async def get_pipelines(self, sort_by: str = 'created_at', sort_order: str = 'DESC') -> list[dict]:
@@ -74,11 +74,16 @@ class PipelineService:
return self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline) return self.ap.persistence_mgr.serialize_model(persistence_pipeline.LegacyPipeline, pipeline)
async def create_pipeline(self, pipeline_data: dict, default: bool = False) -> str: async def create_pipeline(self, pipeline_data: dict, default: bool = False) -> str:
from ....utils import paths as path_utils
pipeline_data['uuid'] = str(uuid.uuid4()) pipeline_data['uuid'] = str(uuid.uuid4())
pipeline_data['for_version'] = self.ap.ver_mgr.get_current_version() pipeline_data['for_version'] = self.ap.ver_mgr.get_current_version()
pipeline_data['stages'] = default_stage_order.copy() pipeline_data['stages'] = default_stage_order.copy()
pipeline_data['is_default'] = default pipeline_data['is_default'] = default
pipeline_data['config'] = json.load(open('templates/default-pipeline-config.json', 'r', encoding='utf-8'))
template_path = path_utils.get_resource_path('templates/default-pipeline-config.json')
with open(template_path, 'r', encoding='utf-8') as f:
pipeline_data['config'] = json.load(f)
await self.ap.persistence_mgr.execute_async( await self.ap.persistence_mgr.execute_async(
sqlalchemy.insert(persistence_pipeline.LegacyPipeline).values(**pipeline_data) sqlalchemy.insert(persistence_pipeline.LegacyPipeline).values(**pipeline_data)
@@ -137,7 +142,9 @@ class PipelineService:
) )
await self.ap.pipeline_mgr.remove_pipeline(pipeline_uuid) await self.ap.pipeline_mgr.remove_pipeline(pipeline_uuid)
async def update_pipeline_extensions(self, pipeline_uuid: str, bound_plugins: list[dict], bound_mcp_servers: list[str] = None) -> None: async def update_pipeline_extensions(
self, pipeline_uuid: str, bound_plugins: list[dict], bound_mcp_servers: list[str] = None
) -> None:
"""Update the bound plugins and MCP servers for a pipeline""" """Update the bound plugins and MCP servers for a pipeline"""
# Get current pipeline # Get current pipeline
result = await self.ap.persistence_mgr.execute_async( result = await self.ap.persistence_mgr.execute_async(
@@ -145,23 +152,23 @@ class PipelineService:
persistence_pipeline.LegacyPipeline.uuid == pipeline_uuid persistence_pipeline.LegacyPipeline.uuid == pipeline_uuid
) )
) )
pipeline = result.first() pipeline = result.first()
if pipeline is None: if pipeline is None:
raise ValueError(f'Pipeline {pipeline_uuid} not found') raise ValueError(f'Pipeline {pipeline_uuid} not found')
# Update extensions_preferences # Update extensions_preferences
extensions_preferences = pipeline.extensions_preferences or {} extensions_preferences = pipeline.extensions_preferences or {}
extensions_preferences['plugins'] = bound_plugins extensions_preferences['plugins'] = bound_plugins
if bound_mcp_servers is not None: if bound_mcp_servers is not None:
extensions_preferences['mcp_servers'] = bound_mcp_servers extensions_preferences['mcp_servers'] = bound_mcp_servers
await self.ap.persistence_mgr.execute_async( await self.ap.persistence_mgr.execute_async(
sqlalchemy.update(persistence_pipeline.LegacyPipeline) sqlalchemy.update(persistence_pipeline.LegacyPipeline)
.where(persistence_pipeline.LegacyPipeline.uuid == pipeline_uuid) .where(persistence_pipeline.LegacyPipeline.uuid == pipeline_uuid)
.values(extensions_preferences=extensions_preferences) .values(extensions_preferences=extensions_preferences)
) )
# Reload pipeline to apply changes # Reload pipeline to apply changes
await self.ap.pipeline_mgr.remove_pipeline(pipeline_uuid) await self.ap.pipeline_mgr.remove_pipeline(pipeline_uuid)
pipeline = await self.get_pipeline(pipeline_uuid) pipeline = await self.get_pipeline(pipeline_uuid)

Some files were not shown because too many files have changed in this diff.