Compare commits

...

81 Commits

Author SHA1 Message Date
Xinrea
b070013efc bump version to 2.11.1 2025-08-10 20:56:48 +08:00
Eeeeep4
d2d9112f6c feat: smart flv conversion with progress (#155)
* fix: auto-close batch delete dialog after completion

* feat: smart FLV conversion with detailed progress and better UX

- Intelligent FLV→MP4 conversion (lossless stream copy + high-quality fallback)
- Real-time import progress with percentage tracking
- Smart file size display (auto GB/MB units)
- Optimized thumbnail generation and network file handling

* refactor: reorganize FFmpeg functions and fix network detection

- Move FFmpeg functions from video.rs to ffmpeg.rs
- Fix Windows drive letters misidentified as network paths
- Improve network vs local file detection logic

* fix: delete thumbnails when removing clipped videos
2025-08-10 17:00:31 +08:00
Xinrea
9fea18f2de ci/cd: remove unused rust-cache 2025-08-10 10:17:09 +08:00
Xinrea
74480f91ce ci/cd: fix rust cache 2025-08-10 00:05:44 +08:00
Xinrea
b2e13b631f bump version to 2.11.0 2025-08-09 23:47:39 +08:00
Xinrea
001d995c8f chore: style adjustment 2025-08-09 23:46:47 +08:00
Eeeeep4
8cb2acea88 feat: add universal video clipping support (close #78) (#145)
* feat: add universal video clipping support

- Allow clipping for all video types including imported and recorded videos
- Support secondary precise clipping from rough clips
- Fix video status validation to include completed recordings (status 0 and 1)

* feat: integrate video clipping into player and enhance subtitle timeline

- Optimize subtitle progress bar styling and timeline interaction
- Unify video clipping workflow across all video types
- Fix build issues and improve code quality with safer path handling

* feat: improve video clipping and UI consistency

- fix: resolve FFmpeg clipping start time offset issue
- fix: enhance clip selection creation logic
- style: unify interface styling and colors
- feat: complete HTTP APIs for headless mode

* fix: improve headless mode file handling and path resolution

- Add multipart file upload support for external video import in headless mode
- Fix file path resolution issues for both video files and thumbnails
- Make convertFileSrc and convertCoverSrc async to properly handle absolute path conversion in Tauri

* fix: correct video cover size and file size display in GUI import

Fix the cover size when importing videos
Fix file size display during GUI-side import
2025-08-09 23:22:53 +08:00
Xinrea
7c0d57d84e fix: clip danmu offset (#153)
* fix: account tutorial link

* feat: encoding-fix option for clip

* fix: clip on stream

* fix: danmu encoding offset

* fix: clip when no range provided
2025-08-09 21:00:28 +08:00
Xinrea
8cb875f449 bump version to 2.10.6 2025-08-07 23:31:17 +08:00
Xinrea
e6bbe65723 feat: backup api for douyin room info (#146) 2025-08-07 23:09:43 +08:00
Xinrea
f4a71a2476 bump version to 2.10.5 2025-08-07 01:01:05 +08:00
Xinrea
47b9362b0a fix: douyin manifest and ts fetch error 2025-08-06 23:28:46 +08:00
Xinrea
c1aad0806e fix: subtitle result not saved 2025-08-06 23:08:29 +08:00
Xinrea
4ccc90f9fb docs: update 2025-08-05 08:35:46 +08:00
Xinrea
7dc63440e6 docs: update 2025-08-04 23:29:17 +08:00
Xinrea
4094e8b80d docs: update 2025-08-04 22:05:31 +08:00
Xinrea
e27cbaf715 bump version to 2.10.4 2025-08-04 00:20:12 +08:00
Xinrea
1f39b27d79 fix: creation_flags on windows 2025-08-03 23:56:56 +08:00
Xinrea
f45891fd95 fix: cmd window on windows 2025-08-03 23:17:36 +08:00
Xinrea
18fe644715 bump version to 2.10.3 2025-08-03 21:18:21 +08:00
Xinrea
40cde8c69a fix: no danmaku after adding with short room id 2025-08-03 21:17:41 +08:00
Xinrea
4b0af47906 fix: douyin room info params 2025-08-03 20:45:07 +08:00
Xinrea
9365b3c8cd bump version to 2.10.2 2025-08-02 21:08:29 +08:00
Xinrea
4b9f015ea7 fix: introduce user-agent configuration to avoid access limit 2025-08-02 21:07:06 +08:00
Xinrea
c42d4a084e doc: update 2025-08-02 01:18:04 +08:00
Xinrea
5bb3feb05b bump version to 2.10.1 2025-07-31 23:08:45 +08:00
Xinrea
05f776ed8b chore: adjust logs 2025-07-31 23:07:37 +08:00
Xinrea
9cec809485 fix: button disabled when triggered by deeplinking 2025-07-31 22:50:57 +08:00
Xinrea
429f909152 feat: break recording when resolution changes (close #144) 2025-07-31 22:39:45 +08:00
Xinrea
084dd23df1 Revert "fix: start a new recording when header changes"
This reverts commit 955e284d41.
2025-07-31 21:15:26 +08:00
Xinrea
e55afdd739 docs: update 2025-07-31 00:30:02 +08:00
Xinrea
72128a132b docs: update 2025-07-30 01:48:29 +08:00
Xinrea
92ca2cddad fix: dependencies 2025-07-29 00:59:21 +08:00
Xinrea
3db0d1dfe5 feat: manual input model name (close #143) 2025-07-29 00:09:06 +08:00
Xinrea
57907323e6 bump version to 2.10.0 2025-07-27 19:52:53 +08:00
Xinrea
dbdca44c5f feat: deep-link support bsr:// 2025-07-27 19:51:58 +08:00
Xinrea
fe1dd2201f fix: prevent list corruption when deleting archived items 2025-07-26 22:52:45 +08:00
Xinrea
e0ae194cc3 bump version to 2.9.5 2025-07-26 22:40:50 +08:00
Xinrea
6fc5700457 ci/cd: add script to bump version 2025-07-26 22:40:49 +08:00
Xinrea
c4fdcf86d4 fix: bilibili stream pathway not update (close #117) 2025-07-26 22:40:46 +08:00
Xinrea
3088500c8d bump version to 2.9.4 2025-07-25 21:10:04 +08:00
Xinrea
861f3a3624 fix: tauri schema not handled by custom plugin for shaka-player 2025-07-25 21:09:41 +08:00
Xinrea
c55783e4d9 chore: update @tauri-apps/api 2025-07-25 20:13:04 +08:00
Xinrea
955e284d41 fix: start a new recording when header changes 2025-07-24 23:03:09 +08:00
Xinrea
fc4c47427e chore: adjust log level 2025-07-24 21:57:04 +08:00
Xinrea
e2d7563faa bump version to 2.9.3 2025-07-24 21:28:35 +08:00
Xinrea
27d69f7f8d fix: clip video cover not loaded 2025-07-24 21:28:10 +08:00
Xinrea
a77bb5af44 bump version to 2.9.2 2025-07-24 00:32:28 +08:00
Xinrea
00286261a4 fix: range offset caused by duration error 2025-07-24 00:23:58 +08:00
Xinrea
0b898dccaa fix: bilibili stream url extraction error caused 404 2025-07-23 22:27:57 +08:00
Xinrea
a1d9ac4e68 chore: remove ai generated docs 2025-07-23 21:56:56 +08:00
Xinrea
4150939e23 Only create records after successful ts download (#141)
* Defer record creation until first successful stream segment download

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Checkpoint before follow-up message

* Improve recording logic with directory management and error handling

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Add recorder flow diagrams for Bilibili and Douyin recorders

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Refactor recorder update_entries to prevent empty records and directories

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Refactor recorder update_entries to prevent empty records and dirs

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Fix panic in non-FMP4 stream recording by safely handling entry store

Co-authored-by: shenwuol <shenwuol@gmail.com>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: shenwuol <shenwuol@gmail.com>
2025-07-23 17:35:00 +08:00
Xinrea
8f84b7f063 fix: import missing in headless build 2025-07-23 00:17:51 +08:00
Xinrea
04b245ac64 bump version to 2.9.1 2025-07-23 00:07:23 +08:00
Xinrea
12f7e62957 chore: remove unused code 2025-07-23 00:01:23 +08:00
Xinrea
9600d310c7 fix: 400-request-error on some douyin stream 2025-07-22 23:58:43 +08:00
Xinrea
dec5a2472a feat: douyin account information fetching (#140)
* Implement Douyin account info retrieval and auto-update

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Refactor Douyin account API to use IM relation endpoint

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Fix Douyin client error handling with correct error variant

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Checkpoint before follow-up message

* Checkpoint before follow-up message

* Add id_str support for cross-platform account ID compatibility

Co-authored-by: shenwuol <shenwuol@gmail.com>

* Fix account update with deref for id_str comparison

Co-authored-by: shenwuol <shenwuol@gmail.com>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: shenwuol <shenwuol@gmail.com>
2025-07-22 23:35:49 +08:00
Xinrea
13eb7c6ea2 docs: update 2025-07-22 01:40:05 +08:00
Xinrea
2356cfa10a bump version to 2.9.0 2025-07-20 22:53:27 +08:00
Xinrea
3bfaefb3b0 doc: update 2025-07-20 22:53:03 +08:00
Xinrea
78b8c25d96 fix: ts entry timestamp 2025-07-20 22:52:54 +08:00
Xinrea
c1d2ff2b96 feat: ai agent (#139)
* feat: ai assistance

* feat: prompt for model configuration

* feat: agent optimize

* feat: archive subtitle

* feat: agent optimize

* fix: live preview callstack error

* chore: update dependencies

* fix: frontend hang with incorrect danmu ts

* fix: platform error handle

* fix: negative clip_offset

* fix: handle fs error
2025-07-20 20:50:26 +08:00
Xinrea
24aee9446a feat: using millisecond timestamp as sequence (#136)
* feat: using millisecond timestamp as sequence

* feat: log for douyin ts timestamp extract-error
2025-07-17 00:52:31 +08:00
Xinrea
2fb094ec31 bump version to 2.8.0 2025-07-15 22:52:46 +08:00
Xinrea
53897c66ee feat: whisper language setting (#134)
* feat: whisper language setting

* fix: set auto as default language setting
2025-07-15 22:38:31 +08:00
Xinrea
ca4e266ae6 feat: audio chunks for whisper (#133)
* feat: split audio into chunks for whisper

* fix: srt time

* fix: remove mp3 cover

* fix: chunks order on local whisper

* fix: improve error handling when reading chunk directory in video subtitle generation

* feat: prevent interruption of processes when using local Whisper

* fix: inconsistency of srt item formatting
2025-07-10 01:58:57 +08:00
Xinrea
6612a1e16f fix: code -352 for user/room api 2025-07-05 18:50:19 +08:00
Xinrea
55ceb65dfb fix: bili danmu (#132)
* fix: danmu message cmd

* bump version to 2.7.5
2025-07-03 01:42:48 +08:00
Xinrea
6cad3d6afb fix: crash with invalid cookie (#131) 2025-07-01 00:24:11 +08:00
Xinrea
151e1bdb8a fix: bilibili api buvid3 check 2025-06-28 21:13:54 +08:00
Xinrea
44a3cfd1ff bump version to 2.7.3 2025-06-26 00:48:16 +08:00
Xinrea
9cbc3028a7 fix: optimize whisper-online file size 2025-06-26 00:48:03 +08:00
Xinrea
8c30730d7b fix: whisper prompt for service 2025-06-26 00:19:23 +08:00
Xinrea
acfb870f9d fix: subtitle response in docker mode 2025-06-26 00:01:16 +08:00
Xinrea
3813528f50 feat: implement virtual scrolling for danmu list 2025-06-24 23:36:15 +08:00
Xinrea
e3bb014644 fix: crashed by loading large amount of cover data 2025-06-24 00:23:56 +08:00
Xinrea
76a7afde76 bump version to 2.7.0 2025-06-22 21:21:09 +08:00
Xinrea
1184f9f3f5 fix: auto close video preview after deleting 2025-06-22 21:20:04 +08:00
Xinrea
b754f8938f chore: clean up code 2025-06-22 21:15:03 +08:00
Xinrea
6b30ff04b7 feat: stt services support (#128)
* feat: add task page

* feat: online whisper service support (close #126)

* fix: blocking in recorder main loop
2025-06-22 21:06:35 +08:00
Xinrea
1c40acca63 feat: clip manage (#127)
* feat: ignore pre-release on version check

* feat: new clip-manage page

* chore: rename custom scrollbar class
2025-06-22 00:32:39 +08:00
125 changed files with 16154 additions and 3654 deletions


@@ -59,11 +59,6 @@ jobs:
if: matrix.platform == 'windows-latest' && matrix.features == 'cuda'
uses: Jimver/cuda-toolkit@v0.2.24
- name: Rust cache
uses: swatinem/rust-cache@v2
with:
workspaces: "./src-tauri -> target"
- name: Setup ffmpeg
if: matrix.platform == 'windows-latest'
working-directory: ./

.gitignore (vendored, 1 changed line)

@@ -11,6 +11,7 @@ node_modules
dist
dist-ssr
*.local
/target/
# Editor directories and files
.vscode/*


@@ -4,23 +4,27 @@
![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/xinrea/bili-shadowreplay/main.yml?label=Application%20Build)
![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/Xinrea/bili-shadowreplay/package.yml?label=Docker%20Build)
![GitHub Release](https://img.shields.io/github/v/release/xinrea/bili-shadowreplay)
![GitHub Downloads (all assets, all releases)](https://img.shields.io/github/downloads/xinrea/bili-shadowreplay/total)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/Xinrea/bili-shadowreplay)
BiliBili ShadowReplay is a tool for caching live streams and editing and submitting clips from them in real time. By marking a time range and filling in a few required details, you can produce and submit a live-stream clip, compressing the whole workflow to minutes. It also supports replaying cached past streams, with the same clip editing and submission workflow.
Currently only Bilibili and Douyin live streams are supported.
![rooms](docs/public/images/summary.png)
[![Star History Chart](https://api.star-history.com/svg?repos=Xinrea/bili-shadowreplay&type=Date)](https://www.star-history.com/#Xinrea/bili-shadowreplay&Date)
## Installation and Usage
![rooms](docs/public/images/summary.png)
See the documentation site: [BiliBili ShadowReplay](https://bsr.xinrea.cn/)
## Contributing
[Contributing](.github/CONTRIBUTING.md)
You can learn about this project through [DeepWiki](https://deepwiki.com/Xinrea/bili-shadowreplay).
Contribution guide: [Contributing](.github/CONTRIBUTING.md)
## Sponsorship


@@ -1,7 +1,8 @@
import { defineConfig } from "vitepress";
import { withMermaid } from "vitepress-plugin-mermaid";
// https://vitepress.dev/reference/site-config
export default defineConfig({
export default withMermaid({
title: "BiliBili ShadowReplay",
description: "直播录制/实时回放/剪辑/投稿工具",
themeConfig: {
@@ -18,21 +19,54 @@ export default defineConfig({
{
text: "开始使用",
items: [
{ text: "安装准备", link: "/getting-started/installation" },
{ text: "配置使用", link: "/getting-started/configuration" },
{ text: "FFmpeg 配置", link: "/getting-started/ffmpeg" },
{
text: "安装准备",
items: [
{
text: "桌面端安装",
link: "/getting-started/installation/desktop",
},
{
text: "Docker 安装",
link: "/getting-started/installation/docker",
},
],
},
{
text: "配置使用",
items: [
{ text: "账号配置", link: "/getting-started/config/account" },
{ text: "FFmpeg 配置", link: "/getting-started/config/ffmpeg" },
{ text: "Whisper 配置", link: "/getting-started/config/whisper" },
{ text: "LLM 配置", link: "/getting-started/config/llm" },
],
},
],
},
{
text: "说明文档",
items: [
{ text: "功能说明", link: "/usage/features" },
{
text: "功能说明",
items: [
{ text: "工作流程", link: "/usage/features/workflow" },
{ text: "直播间管理", link: "/usage/features/room" },
{ text: "切片功能", link: "/usage/features/clip" },
{ text: "字幕功能", link: "/usage/features/subtitle" },
{ text: "弹幕功能", link: "/usage/features/danmaku" },
],
},
{ text: "常见问题", link: "/usage/faq" },
],
},
{
text: "开发文档",
items: [{ text: "架构设计", link: "/develop/architecture" }],
items: [
{
text: "DeepWiki",
link: "https://deepwiki.com/Xinrea/bili-shadowreplay",
},
],
},
],


@@ -1 +0,0 @@
# Architecture Design

@@ -1,27 +1,12 @@
# Configuration
## Account Configuration
# Account Configuration
To add a live room, you must first configure at least one account on the same platform. On the Accounts page, you can add an account via the Add Account button.
- Bilibili accounts: QR-code login and manual cookie configuration are both supported; QR-code login is recommended
- Douyin accounts: currently only manual cookie configuration is supported
### Douyin Account Configuration
## Douyin Account Configuration
First make sure you are logged in to Douyin, then open your [profile page](https://www.douyin.com/user/self), right-click the page and choose `Inspect` to open the developer tools, switch to the `Network` tab, and refresh the page. Find the `self` request in the list (usually the first entry), click it, and look at the request headers. Locate the `Cookie` header, copy its entire value, and paste it into the `Cookie` input on the configuration page; make sure nothing is left out.
![DouyinCookie](/images/douyin_cookie.png)
## FFmpeg Configuration
To use clip generation and subtitle burning, make sure FFmpeg is configured correctly. Apart from the Windows build, which bundles FFmpeg, other platforms require a manual FFmpeg installation; see [FFmpeg Configuration](/getting-started/ffmpeg).
## Whisper Model Configuration
To use AI subtitle recognition, configure the Whisper model path on the Settings page. Model files can be downloaded from the web, for example:
- [Whisper.cpp (China mirror, somewhat outdated)](https://www.modelscope.cn/models/cjc1887415157/whisper.cpp/files)
- [Whisper.cpp](https://huggingface.co/ggerganov/whisper.cpp/tree/main)
Choose a model according to your needs; note that models marked `en` are English-only, while the others are multilingual.

@@ -0,0 +1,9 @@
# LLM Configuration
![LLM](/images/model_config.png)
The AI Agent assistant on the Assistant page requires a large language model; currently only services compatible with the OpenAI protocol can be configured.
This software does not provide an LLM service itself, so choose a provider on your own. Note that the AI Agent consumes considerably more tokens than an ordinary conversation, so make sure your token balance is sufficient.
In addition, the AI Agent requires a model that supports function calling; otherwise it cannot invoke its tools.
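A quick way to sanity-check a provider before entering it on the settings page is to call its chat-completions endpoint directly. The sketch below is illustrative only: the endpoint, key, and model name are placeholders for whichever OpenAI-compatible service you choose.
```bash
# Replace endpoint, key, and model with your provider's values (placeholders).
curl "$OPENAI_API_ENDPOINT/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "ping"}]}'
```
If a plain request like this succeeds but the agent still cannot use its tools, the model most likely lacks function-calling support.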


@@ -0,0 +1,35 @@
# Whisper Configuration
To use AI subtitle recognition, Whisper must be configured on the Settings page. You can either run a Whisper model locally or use an online Whisper service (which usually requires a paid API key).
> [!NOTE]
> There are arguably better solutions for Chinese subtitle recognition, but such services usually require uploading the file to object storage and processing it asynchronously. To keep the implementation simple, this project runs a Whisper model locally or calls an online Whisper service, so the subtitle result is returned directly with the request.
## Running a Whisper Model Locally
![WhisperLocal](/images/whisper_local.png)
To generate subtitles with a locally run Whisper model, download a Whisper.cpp model and set the model path in the settings. Model files can be downloaded from the web, for example:
- [Whisper.cpp (China mirror, somewhat outdated)](https://www.modelscope.cn/models/cjc1887415157/whisper.cpp/files)
- [Whisper.cpp](https://huggingface.co/ggerganov/whisper.cpp/tree/main)
Choose a model according to your needs; note that models marked `en` are English-only, while the others are multilingual.
A model file's size roughly reflects how many resources it consumes at runtime, so pick a model that matches your hardware. The GPU build and the CPU build also differ **enormously** in subtitle generation speed, so the GPU build is recommended for local processing (currently only Nvidia GPUs are supported).
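As a purely illustrative example (the exact file name is an assumption; pick whichever model matches your hardware and language needs), a multilingual base model can be fetched from the whisper.cpp repository like this:
```bash
# Download the multilingual "base" ggml model; the output file name here is
# just an example and should match the model path set in the settings.
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin -O whisper_model.bin
```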
## Using an Online Whisper Service
![WhisperOnline](/images/whisper_online.png)
To generate subtitles with an online Whisper service, switch to online Whisper in the settings and configure an API key. OpenAI is not the only platform that offers Whisper; many cloud providers offer Whisper services as well.
## Tuning Subtitle Recognition Quality
The settings currently include a Whisper language and a Whisper prompt; both apply to local and online Whisper alike.
Normally the `auto` language option detects the spoken language automatically and generates subtitles in that language. If you want subtitles in a different language, or the detected language is wrong, you can specify the language manually. According to the OpenAI documentation for the `language` parameter, the supported languages include:
Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Marathi, Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh.
The prompt can shape the style of the generated subtitles and also affects quality to some extent. Note that Whisper cannot understand complex prompts: use a few simple descriptions so that it favors vocabulary from the domain the prompt describes (avoiding words from unrelated domains), or so that it imitates the prompt's punctuation style.
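For reference, this is roughly how the language and prompt options map onto a raw request against an OpenAI-compatible transcription endpoint. It is only a sketch: BSR sends the request for you, and your provider's endpoint or model name may differ.
```bash
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F file=@audio_chunk.wav \
  -F model=whisper-1 \
  -F language=zh \
  -F prompt="Gaming livestream commentary, casual tone." \
  -F response_format=srt
```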


@@ -1,66 +0,0 @@
# Installation
## Desktop Installation
Desktop installers are currently provided for Windows, Linux, and macOS.
Installers come in two flavors, a regular build and a debug build: the regular build suits most users, while the debug build contains extra debugging information and is intended for developers. Since the program manages sensitive data such as accounts, download only from trusted sources; all builds are available on the [GitHub Releases](https://github.com/Xinrea/bili-shadowreplay/releases) page.
### Windows
Because the program bundles Whisper subtitle recognition support, the Windows build comes in two variants:
- **Regular build**: bundles Whisper GPU acceleration; faster subtitle recognition, larger download, Nvidia GPUs only
- **CPU build**: runs subtitle recognition inference on the CPU; slower
Choose the build that matches your GPU.
### Linux
The Linux build currently supports CPU inference only and has seen less testing, so problems may occur; please report any issues you encounter.
### macOS
The macOS build ships with Metal GPU acceleration. On the first launch after installation, macOS will warn that software downloaded from the internet cannot be opened; go to Settings → Privacy & Security and choose Open Anyway to allow the app to run.
## Docker Deployment
BiliBili ShadowReplay can also be deployed server-side with a web control interface, suitable for headless environments such as servers.
### Getting the Image
```bash
# Pull the latest version
docker pull ghcr.io/xinrea/bili-shadowreplay:latest
# Pull a specific version
docker pull ghcr.io/xinrea/bili-shadowreplay:2.5.0
# Too slow? Pull from a mirror
docker pull ghcr.nju.edu.cn/xinrea/bili-shadowreplay:latest
```
### Using the Image
Usage:
```bash
sudo docker run -it -d \
  -p 3000:3000 \
  -v $DATA_DIR:/app/data \
  -v $CACHE_DIR:/app/cache \
  -v $OUTPUT_DIR:/app/output \
  -v $WHISPER_MODEL:/app/whisper_model.bin \
  --name bili-shadowreplay \
  ghcr.io/xinrea/bili-shadowreplay:latest
```
Where:
- `$DATA_DIR`: the data directory, corresponding to the desktop version's data directory;
  on Windows it is `C:\Users\{username}\AppData\Roaming\cn.vjoi.bilishadowreplay`;
  on macOS it is `/Users/{user}/Library/Application Support/cn.vjoi.bilishadowreplay`
- `$CACHE_DIR`: the cache directory, corresponding to the desktop version's cache directory;
- `$OUTPUT_DIR`: the output directory, corresponding to the desktop version's output directory;
- `$WHISPER_MODEL`: the path to the Whisper model file, corresponding to the desktop version's Whisper model path.

@@ -0,0 +1,22 @@
# Desktop Installation
Desktop installers are currently provided for Windows, Linux, and macOS.
Installers come in two flavors, a regular build and a debug build: the regular build suits most users, while the debug build contains extra debugging information and is intended for developers. Since the program manages sensitive data such as accounts, download only from trusted sources; all builds are available on the [GitHub Releases](https://github.com/Xinrea/bili-shadowreplay/releases) page.
## Windows
Because the program bundles Whisper subtitle recognition support, the Windows build comes in two variants:
- **Regular build**: bundles Whisper GPU acceleration; faster subtitle recognition, larger download, Nvidia GPUs only
- **CPU build**: runs subtitle recognition inference on the CPU; slower
Choose the build that matches your GPU.
## Linux
The Linux build currently supports CPU inference only and has seen less testing, so problems may occur; please report any issues you encounter.
## macOS
The macOS build ships with Metal GPU acceleration. On the first launch after installation, macOS will warn that software downloaded from the internet cannot be opened; go to Settings → Privacy & Security and choose Open Anyway to allow the app to run.

@@ -0,0 +1,41 @@
# Docker Deployment
BiliBili ShadowReplay can also be deployed server-side with a web control interface, suitable for headless environments such as servers.
## Getting the Image
```bash
# Pull the latest version
docker pull ghcr.io/xinrea/bili-shadowreplay:latest
# Pull a specific version
docker pull ghcr.io/xinrea/bili-shadowreplay:2.5.0
# Too slow? Pull from a mirror
docker pull ghcr.nju.edu.cn/xinrea/bili-shadowreplay:latest
```
## Using the Image
Usage:
```bash
sudo docker run -it -d \
  -p 3000:3000 \
  -v $DATA_DIR:/app/data \
  -v $CACHE_DIR:/app/cache \
  -v $OUTPUT_DIR:/app/output \
  -v $WHISPER_MODEL:/app/whisper_model.bin \
  --name bili-shadowreplay \
  ghcr.io/xinrea/bili-shadowreplay:latest
```
Where:
- `$DATA_DIR`: the data directory, corresponding to the desktop version's data directory;
  on Windows it is `C:\Users\{username}\AppData\Roaming\cn.vjoi.bilishadowreplay`;
  on macOS it is `/Users/{user}/Library/Application Support/cn.vjoi.bilishadowreplay`
- `$CACHE_DIR`: the cache directory, corresponding to the desktop version's cache directory;
- `$OUTPUT_DIR`: the output directory, corresponding to the desktop version's output directory;
- `$WHISPER_MODEL`: the path to the Whisper model file, corresponding to the desktop version's Whisper model path.
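For example, a concrete invocation might look like the following; the host paths are purely illustrative, so substitute your own directories:
```bash
sudo docker run -it -d \
  -p 3000:3000 \
  -v /srv/bsr/data:/app/data \
  -v /srv/bsr/cache:/app/cache \
  -v /srv/bsr/output:/app/output \
  -v /srv/bsr/models/whisper_model.bin:/app/whisper_model.bin \
  --name bili-shadowreplay \
  ghcr.io/xinrea/bili-shadowreplay:latest
```
The web control interface should then be reachable on port 3000 of the host.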


@@ -11,10 +11,10 @@ hero:
actions:
- theme: brand
text: 开始使用
link: /getting-started/installation
link: /getting-started/installation/desktop
- theme: alt
text: 说明文档
link: /usage/features
link: /usage/features/workflow
features:
- icon: 📹
@@ -38,9 +38,9 @@ features:
- icon: 🔍
title: 云端部署
details: 支持 Docker 部署,提供 Web 控制界面
- icon: 📦
title: 多平台支持
details: 桌面端支持 Windows/Linux/macOS
- icon: 🤖
title: AI Agent 支持
details: 支持 AI 助手管理录播,分析直播内容,生成切片
---
## Overview
@@ -63,7 +63,7 @@ features:
## Cover Editing
![cover](/images/coveredit.png)
![cover](/images/cover_edit.png)
## Settings

Binary files not shown: 18 binary files changed — 9 images added (67 KiB to 2.3 MiB), 6 images replaced with smaller recompressed versions (e.g. 555 KiB → 195 KiB, 2.8 MiB → 2.1 MiB), 1 image removed (2.9 MiB), and 2 other binary files changed.

@@ -0,0 +1,31 @@
# FAQ
## 1. Where do I report problems?
You can file an issue at [Github Issues](https://github.com/Xinrea/bili-shadowreplay/issues/new?template=bug_report.md), or join the [feedback group](https://qm.qq.com/q/v4lrE6gyum).
1. Before filing an issue, read the other FAQ entries first to make sure your question has not already been answered;
2. Next, make sure your program is updated to the latest version;
3. Finally, be ready to provide your program's log file so the problem can be located more easily.
## 2. Where are the logs?
The Settings page of the main window has a button that opens the log directory. From there, enter the `logs` directory and find the file with the `log` extension; that is the log file you should provide to the developers.
## 3. Live preview or clip generation does not work
If you are on macOS or Linux, make sure `ffmpeg` and `ffprobe` are installed; if you are unsure how, see [FFmpeg Configuration](/getting-started/config/ffmpeg).
If you are on Windows, the program directory should already ship with `ffmpeg` and `ffprobe`; if live preview or clip generation still fails, please report it to the developers.
## 4. Error -352 when adding a Bilibili live room
The `-352` error is caused by Bilibili's risk-control mechanism. If you record a large number of Bilibili rooms, increase the room status check interval on the Settings page to avoid triggering it; if the error occurs with only a few rooms, please report it to the developers.
## 5. Why are recordings stored as fragment files?
The recording files in the cache directory are not meant for direct playback or uploading; they serve stream preview and live replay. If you need a recording for uploading, open that recording's preview window, create a selection with the keyboard shortcut, and generate a clip for the desired range; clips are regular mp4 files stored in your configured clip directory.
If you use BSR purely as a recording tool, you can enable `whole-stream clip generation` in the settings; BSR will then automatically generate a clip of the entire stream after the live ends.
![Whole-stream recording](/images/whole_clip.png)


@@ -0,0 +1 @@
# Clips

@@ -0,0 +1 @@
# Danmaku

@@ -0,0 +1,38 @@
# Live Rooms
> [!WARNING]
> Before adding and managing live rooms, make sure the account list contains a usable account for the corresponding platform.
## Adding a Live Room
### Adding a Live Room Manually
On the BSR live rooms page, you can click the button to add a live room manually. You need to choose the platform and enter the room ID.
The room ID is usually the trailing number in the live room's web address, e.g. `123456` in `https://live.bilibili.com/123456`, or `123456` in `https://live.douyin.com/123456`.
Douyin rooms are a special case: while the room is offline you cannot find an entry point to it, so you need to locate the room's web address while it is live and note its room ID.
Douyin rooms also require the streamer's sec_uid, which you can find in the URL of the streamer's profile page, e.g. `MS4wLjABAAAA` in `https://www.douyin.com/user/MS4wLjABAAAA`.
### Adding a Live Room Quickly via Deep Linking
<video src="/videos/deeplinking.mp4" loop autoplay muted style="border-radius: 10px;"></video>
While watching a stream in the browser, replace `https://` with `bsr://` in the room address in the address bar to quickly launch BSR and add the room; for example, `https://live.bilibili.com/123456` becomes `bsr://live.bilibili.com/123456`.
## Enabling/Disabling a Live Room
Click the menu button in the top-right corner of a room card and choose enable or disable.
- When enabled, recording starts automatically whenever the room goes live
- When disabled, recording does not start automatically when the room goes live
## Removing a Live Room
> [!CAUTION]
> Removing a live room deletes all recordings associated with it; proceed with caution.
Click the menu button in the top-right corner of the room card and choose remove.
<video src="/videos/room_remove.mp4" loop autoplay muted style="border-radius: 10px;"></video>

@@ -0,0 +1 @@
# Subtitles

@@ -0,0 +1,30 @@
# Workflow
- Live room: a live room on one of the supported platforms
- Recording: an archive of the live stream; each recording session automatically creates a recording entry
- Clip: a video segment cut out of the live stream
- Submission: uploading a clip to a platform (currently only Bilibili is supported)
The diagram below shows how they relate:
```mermaid
flowchart TD
A[Live room] -->|record| B[Recording 01]
A -->|record| C[Recording 02]
A -->|record| E[Recording N]
B --> F[Stream preview window]
F -->|range clip| G[Clip 01]
F -->|range clip| H[Clip 02]
F -->|range clip| I[Clip N]
G --> J[Clip preview window]
J -->|subtitle burn-in| K[New clip]
K --> J
J -->|submit| L[Bilibili]
```

index_clip.html (new file, 13 lines)

@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html lang="zh-cn">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>切片窗口</title>
</head>
<body>
<div id="app"></div>
<script type="module" src="/src/main_clip.ts"></script>
</body>
</html>

index_live.html (new file, 63 lines)

@@ -0,0 +1,63 @@
<!DOCTYPE html>
<html lang="zh-cn" class="dark">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="stylesheet" href="shaka-player/controls.min.css" />
<link rel="stylesheet" href="shaka-player/youtube-theme.css" />
<script src="shaka-player/shaka-player.ui.js"></script>
</head>
<body>
<div id="app"></div>
<script type="module" src="src/main_live.ts"></script>
<style>
input[type="range"]::-webkit-slider-thumb {
width: 12px;
/* 设置滑块按钮宽度 */
height: 12px;
/* 设置滑块按钮高度 */
border-radius: 50%;
/* 设置为圆形 */
}
html {
scrollbar-face-color: #646464;
scrollbar-base-color: #646464;
scrollbar-3dlight-color: #646464;
scrollbar-highlight-color: #646464;
scrollbar-track-color: #000;
scrollbar-arrow-color: #000;
scrollbar-shadow-color: #646464;
}
::-webkit-scrollbar {
width: 8px;
height: 3px;
}
::-webkit-scrollbar-button {
background-color: #666;
}
::-webkit-scrollbar-track {
background-color: #646464;
}
::-webkit-scrollbar-track-piece {
background-color: #000;
}
::-webkit-scrollbar-thumb {
height: 50px;
background-color: #666;
border-radius: 3px;
}
::-webkit-scrollbar-corner {
background-color: #646464;
}
</style>
</body>
</html>


@@ -1,65 +0,0 @@
<!DOCTYPE html>
<html lang="zh-cn" class="dark">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="stylesheet" href="shaka-player/controls.min.css" />
<link rel="stylesheet" href="shaka-player/youtube-theme.css" />
<script src="shaka-player/shaka-player.ui.js"></script>
</head>
<body>
<div id="app"></div>
<script type="module" src="src/live_main.ts"></script>
<style>
input[type="range"]::-webkit-slider-thumb {
width: 12px;
/* 设置滑块按钮宽度 */
height: 12px;
/* 设置滑块按钮高度 */
border-radius: 50%;
/* 设置为圆形 */
}
html {
scrollbar-face-color: #646464;
scrollbar-base-color: #646464;
scrollbar-3dlight-color: #646464;
scrollbar-highlight-color: #646464;
scrollbar-track-color: #000;
scrollbar-arrow-color: #000;
scrollbar-shadow-color: #646464;
}
::-webkit-scrollbar {
width: 8px;
height: 3px;
}
::-webkit-scrollbar-button {
background-color: #666;
}
::-webkit-scrollbar-track {
background-color: #646464;
}
::-webkit-scrollbar-track-piece {
background-color: #000;
}
::-webkit-scrollbar-thumb {
height: 50px;
background-color: #666;
border-radius: 3px;
}
::-webkit-scrollbar-corner {
background-color: #646464;
}
</style>
</body>
</html>


@@ -1,7 +1,7 @@
{
"name": "bili-shadowreplay",
"private": true,
"version": "2.6.1",
"version": "2.11.1",
"type": "module",
"scripts": {
"dev": "vite",
@@ -11,10 +11,16 @@
"tauri": "tauri",
"docs:dev": "vitepress dev docs",
"docs:build": "vitepress build docs",
"docs:preview": "vitepress preview docs"
"docs:preview": "vitepress preview docs",
"bump": "node scripts/bump.cjs"
},
"dependencies": {
"@tauri-apps/api": "^2.4.1",
"@langchain/core": "^0.3.64",
"@langchain/deepseek": "^0.1.0",
"@langchain/langgraph": "^0.3.10",
"@langchain/ollama": "^0.2.3",
"@tauri-apps/api": "^2.6.2",
"@tauri-apps/plugin-deep-link": "~2",
"@tauri-apps/plugin-dialog": "~2",
"@tauri-apps/plugin-fs": "~2",
"@tauri-apps/plugin-http": "~2",
@@ -23,6 +29,7 @@
"@tauri-apps/plugin-shell": "~2",
"@tauri-apps/plugin-sql": "~2",
"lucide-svelte": "^0.479.0",
"marked": "^16.1.1",
"qrcode": "^1.5.4"
},
"devDependencies": {
@@ -35,6 +42,7 @@
"flowbite": "^2.5.1",
"flowbite-svelte": "^0.46.16",
"flowbite-svelte-icons": "^1.6.1",
"mermaid": "^11.9.0",
"postcss": "^8.4.21",
"svelte": "^3.54.0",
"svelte-check": "^3.0.0",
@@ -44,6 +52,7 @@
"tslib": "^2.4.1",
"typescript": "^4.6.4",
"vite": "^4.0.0",
"vitepress": "^1.6.3"
"vitepress": "^1.6.3",
"vitepress-plugin-mermaid": "^2.0.17"
}
}

scripts/bump.cjs (new file, 58 lines)

@@ -0,0 +1,58 @@
#!/usr/bin/env node
const fs = require("fs");
const path = require("path");
function updatePackageJson(version) {
const packageJsonPath = path.join(process.cwd(), "package.json");
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, "utf8"));
packageJson.version = version;
fs.writeFileSync(
packageJsonPath,
JSON.stringify(packageJson, null, 2) + "\n"
);
console.log(`✅ Updated package.json version to ${version}`);
}
function updateCargoToml(version) {
const cargoTomlPath = path.join(process.cwd(), "src-tauri", "Cargo.toml");
let cargoToml = fs.readFileSync(cargoTomlPath, "utf8");
// Update the version in the [package] section
cargoToml = cargoToml.replace(/^version = ".*"$/m, `version = "${version}"`);
fs.writeFileSync(cargoTomlPath, cargoToml);
console.log(`✅ Updated Cargo.toml version to ${version}`);
}
function main() {
const args = process.argv.slice(2);
if (args.length === 0) {
console.error("❌ Please provide a version number");
console.error("Usage: yarn bump <version>");
console.error("Example: yarn bump 3.1.0");
process.exit(1);
}
const version = args[0];
// Validate version format (simple check)
if (!/^\d+\.\d+\.\d+/.test(version)) {
console.error(
"❌ Invalid version format. Please use semantic versioning (e.g., 3.1.0)"
);
process.exit(1);
}
try {
updatePackageJson(version);
updateCargoToml(version);
console.log(`🎉 Successfully bumped version to ${version}`);
} catch (error) {
console.error("❌ Error updating version:", error.message);
process.exit(1);
}
}
main();
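Together with the `bump` script entry added to package.json above, the script is invoked from the repository root roughly like this:
```bash
# Updates the version field in both package.json and src-tauri/Cargo.toml
yarn bump 2.11.1
```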

src-tauri/Cargo.lock (generated, 950 changed lines)

File diff suppressed because it is too large.

@@ -4,7 +4,7 @@ resolver = "2"
[package]
name = "bili-shadowreplay"
version = "1.0.0"
version = "2.11.1"
description = "BiliBili ShadowReplay"
authors = ["Xinrea"]
license = ""
@@ -16,7 +16,7 @@ edition = "2021"
[dependencies]
danmu_stream = { path = "crates/danmu_stream" }
serde_json = "1.0"
reqwest = { version = "0.11", features = ["blocking", "json"] }
reqwest = { version = "0.11", features = ["blocking", "json", "multipart"] }
serde_derive = "1.0.158"
serde = "1.0.158"
sysinfo = "0.32.0"
@@ -44,13 +44,14 @@ async-trait = "0.1.87"
whisper-rs = "0.14.2"
hound = "3.5.1"
uuid = { version = "1.4", features = ["v4"] }
axum = { version = "0.7", features = ["macros"] }
axum = { version = "0.7", features = ["macros", "multipart"] }
tower-http = { version = "0.5", features = ["cors", "fs"] }
futures-core = "0.3"
futures = "0.3"
tokio-util = { version = "0.7", features = ["io"] }
clap = { version = "4.5.37", features = ["derive"] }
url = "2.5.4"
srtparse = "0.2.0"
[features]
# this feature is used for production builds or when `devPath` points to the filesystem
@@ -70,6 +71,7 @@ gui = [
"tauri-utils",
"tauri-plugin-os",
"tauri-plugin-notification",
"tauri-plugin-deep-link",
"fix-path-env",
"tauri-build",
]
@@ -82,6 +84,7 @@ optional = true
[dependencies.tauri-plugin-single-instance]
version = "2"
optional = true
features = ["deep-link"]
[dependencies.tauri-plugin-dialog]
version = "2"
@@ -116,6 +119,10 @@ optional = true
version = "2"
optional = true
[dependencies.tauri-plugin-deep-link]
version = "2"
optional = true
[dependencies.fix-path-env]
git = "https://github.com/tauri-apps/fix-path-env-rs"
optional = true


@@ -4,7 +4,8 @@
"local": true,
"windows": [
"main",
"Live*"
"Live*",
"Clip*"
],
"permissions": [
"core:default",
@@ -70,6 +71,7 @@
"shell:default",
"sql:default",
"os:default",
"dialog:default"
"dialog:default",
"deep-link:default"
]
}


@@ -5,8 +5,10 @@ live_end_notify = true
clip_notify = true
post_notify = true
auto_subtitle = false
subtitle_generator_type = "whisper_online"
whisper_model = "./whisper_model.bin"
whisper_prompt = "这是一段中文 你们好"
openai_api_key = ""
clip_name_format = "[{room_id}][{live_id}][{title}][{created_at}].mp4"
[auto_generate]


@@ -38,6 +38,7 @@ urlencoding = "2.1.3"
gzip = "0.1.2"
hex = "0.4.3"
async-trait = "0.1.88"
uuid = "1.17.0"
[build-dependencies]
tonic-build = "0.10"


@@ -0,0 +1,41 @@
use std::{sync::Arc, time::Duration};
use danmu_stream::{danmu_stream::DanmuStream, provider::ProviderType, DanmuMessageType};
use tokio::time::sleep;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize logging
env_logger::init();
// Replace these with actual values
let room_id = 768756;
let cookie = "";
let stream = Arc::new(DanmuStream::new(ProviderType::BiliBili, cookie, room_id).await?);
log::info!("Start to receive danmu messages: {}", cookie);
let stream_clone = stream.clone();
tokio::spawn(async move {
loop {
log::info!("Waiting for message");
if let Ok(Some(msg)) = stream_clone.recv().await {
match msg {
DanmuMessageType::DanmuMessage(danmu) => {
log::info!("Received danmu message: {:?}", danmu.message);
}
}
} else {
log::info!("Channel closed");
break;
}
}
});
let _ = stream.start().await;
sleep(Duration::from_secs(10)).await;
stream.stop().await?;
Ok(())
}


@@ -14,6 +14,7 @@ custom_error! {pub DanmuStreamError
InvalidIdentifier {err: String} = "InvalidIdentifier {err}"
}
#[derive(Debug)]
pub enum DanmuMessageType {
DanmuMessage(DanmuMessage),
}


@@ -47,7 +47,9 @@ impl DanmuProvider for BiliDanmu {
async fn new(cookie: &str, room_id: u64) -> Result<Self, DanmuStreamError> {
// find DedeUserID=<user_id> in cookie str
let user_id = BiliDanmu::parse_user_id(cookie)?;
let client = ApiClient::new(cookie);
// add buvid3 to cookie
let cookie = format!("{};buvid3={}", cookie, uuid::Uuid::new_v4());
let client = ApiClient::new(&cookie);
Ok(Self {
client,
@@ -121,9 +123,11 @@ impl BiliDanmu {
tx: mpsc::UnboundedSender<DanmuMessageType>,
) -> Result<(), DanmuStreamError> {
let wbi_key = self.get_wbi_key().await?;
let danmu_info = self.get_danmu_info(&wbi_key, self.room_id).await?;
let real_room = self.get_real_room(&wbi_key, self.room_id).await?;
let danmu_info = self.get_danmu_info(&wbi_key, real_room).await?;
let ws_hosts = danmu_info.data.host_list.clone();
let mut conn = None;
log::debug!("ws_hosts: {:?}", ws_hosts);
// try to connect to ws_hosts; once successful, send the token to the tx
for i in ws_hosts {
let host = format!("wss://{}/sub", i.host);
@@ -149,7 +153,7 @@ impl BiliDanmu {
*self.write.write().await = Some(write);
let json = serde_json::to_string(&WsSend {
roomid: self.room_id,
roomid: real_room,
key: danmu_info.data.token,
uid: self.user_id,
protover: 3,
@@ -209,6 +213,7 @@ impl BiliDanmu {
if let Ok(ws) = ws {
match ws.match_msg() {
Ok(v) => {
log::debug!("Received message: {:?}", v);
tx.send(v).map_err(|e| DanmuStreamError::WebsocketError {
err: e.to_string(),
})?;
@@ -235,7 +240,6 @@ impl BiliDanmu {
wbi_key: &str,
room_id: u64,
) -> Result<DanmuInfo, DanmuStreamError> {
let room_id = self.get_real_room(wbi_key, room_id).await?;
let params = self
.get_sign(
wbi_key,


@@ -3,7 +3,7 @@ use serde_json::Value;
use crate::{
provider::{bilibili::dannmu_msg::BiliDanmuMessage, DanmuMessageType},
DanmuStreamError, DanmuMessage,
DanmuMessage, DanmuStreamError,
};
#[derive(Debug, Deserialize, Clone)]
@@ -83,7 +83,7 @@ impl WsStreamCtx {
fn handle_cmd(&self) -> Option<&str> {
// handle DANMU_MSG:4:0:2:2:2:0
let cmd = if let Some(c) = self.cmd.as_deref() {
if c.starts_with("DANMU_MSG") {
if c.starts_with("DM_INTERACTION") {
Some("DANMU_MSG")
} else {
Some(c)


@@ -1,6 +1,5 @@
use crate::{provider::DanmuProvider, DanmuMessage, DanmuMessageType, DanmuStreamError};
use async_trait::async_trait;
use chrono;
use deno_core::v8;
use deno_core::JsRuntime;
use deno_core::RuntimeOptions;

File diff suppressed because one or more lines are too long


@@ -1 +1 @@
{"migrated":{"identifier":"migrated","description":"permissions that were migrated from v1","local":true,"windows":["main","Live*"],"permissions":["core:default","fs:allow-read-file","fs:allow-write-file","fs:allow-read-dir","fs:allow-copy-file","fs:allow-mkdir","fs:allow-remove","fs:allow-remove","fs:allow-rename","fs:allow-exists",{"identifier":"fs:scope","allow":["**"]},"core:window:default","core:window:allow-start-dragging","core:window:allow-close","core:window:allow-minimize","core:window:allow-maximize","core:window:allow-unmaximize","core:window:allow-set-title","sql:allow-execute","shell:allow-open","dialog:allow-open","dialog:allow-save","dialog:allow-message","dialog:allow-ask","dialog:allow-confirm",{"identifier":"http:default","allow":[{"url":"https://*.hdslb.com/"},{"url":"https://afdian.com/"},{"url":"https://*.afdiancdn.com/"},{"url":"https://*.douyin.com/"},{"url":"https://*.douyinpic.com/"}]},"dialog:default","shell:default","fs:default","http:default","sql:default","os:default","notification:default","dialog:default","fs:default","http:default","shell:default","sql:default","os:default","dialog:default"]}}
{"migrated":{"identifier":"migrated","description":"permissions that were migrated from v1","local":true,"windows":["main","Live*","Clip*"],"permissions":["core:default","fs:allow-read-file","fs:allow-write-file","fs:allow-read-dir","fs:allow-copy-file","fs:allow-mkdir","fs:allow-remove","fs:allow-remove","fs:allow-rename","fs:allow-exists",{"identifier":"fs:scope","allow":["**"]},"core:window:default","core:window:allow-start-dragging","core:window:allow-close","core:window:allow-minimize","core:window:allow-maximize","core:window:allow-unmaximize","core:window:allow-set-title","sql:allow-execute","shell:allow-open","dialog:allow-open","dialog:allow-save","dialog:allow-message","dialog:allow-ask","dialog:allow-confirm",{"identifier":"http:default","allow":[{"url":"https://*.hdslb.com/"},{"url":"https://afdian.com/"},{"url":"https://*.afdiancdn.com/"},{"url":"https://*.douyin.com/"},{"url":"https://*.douyinpic.com/"}]},"dialog:default","shell:default","fs:default","http:default","sql:default","os:default","notification:default","dialog:default","fs:default","http:default","shell:default","sql:default","os:default","dialog:default","deep-link:default"]}}


@@ -37,7 +37,7 @@
],
"definitions": {
"Capability": {
"description": "A grouping and boundary mechanism developers can use to isolate access to the IPC layer.\n\nIt controls application windows' and webviews' fine grained access to the Tauri core, application, or plugin commands. If a webview or its window is not matching any capability then it has no access to the IPC layer at all.\n\nThis can be done to create groups of windows, based on their required system access, which can reduce impact of frontend vulnerabilities in less privileged windows. Windows can be added to a capability by exact name (e.g. `main-window`) or glob patterns like `*` or `admin-*`. A Window can have none, one, or multiple associated capabilities.\n\n## Example\n\n```json { \"identifier\": \"main-user-files-write\", \"description\": \"This capability allows the `main` window on macOS and Windows access to `filesystem` write related commands and `dialog` commands to enable programatic access to files selected by the user.\", \"windows\": [ \"main\" ], \"permissions\": [ \"core:default\", \"dialog:open\", { \"identifier\": \"fs:allow-write-text-file\", \"allow\": [{ \"path\": \"$HOME/test.txt\" }] }, ], \"platforms\": [\"macOS\",\"windows\"] } ```",
"description": "A grouping and boundary mechanism developers can use to isolate access to the IPC layer.\n\nIt controls application windows' and webviews' fine grained access to the Tauri core, application, or plugin commands. If a webview or its window is not matching any capability then it has no access to the IPC layer at all.\n\nThis can be done to create groups of windows, based on their required system access, which can reduce impact of frontend vulnerabilities in less privileged windows. Windows can be added to a capability by exact name (e.g. `main-window`) or glob patterns like `*` or `admin-*`. A Window can have none, one, or multiple associated capabilities.\n\n## Example\n\n```json { \"identifier\": \"main-user-files-write\", \"description\": \"This capability allows the `main` window on macOS and Windows access to `filesystem` write related commands and `dialog` commands to enable programmatic access to files selected by the user.\", \"windows\": [ \"main\" ], \"permissions\": [ \"core:default\", \"dialog:open\", { \"identifier\": \"fs:allow-write-text-file\", \"allow\": [{ \"path\": \"$HOME/test.txt\" }] }, ], \"platforms\": [\"macOS\",\"windows\"] } ```",
"type": "object",
"required": [
"identifier",
@@ -49,7 +49,7 @@
"type": "string"
},
"description": {
"description": "Description of what the capability is intended to allow on associated windows.\n\nIt should contain a description of what the grouped permissions should allow.\n\n## Example\n\nThis capability allows the `main` window access to `filesystem` write related commands and `dialog` commands to enable programatic access to files selected by the user.",
"description": "Description of what the capability is intended to allow on associated windows.\n\nIt should contain a description of what the grouped permissions should allow.\n\n## Example\n\nThis capability allows the `main` window access to `filesystem` write related commands and `dialog` commands to enable programmatic access to files selected by the user.",
"default": "",
"type": "string"
},
@@ -3152,6 +3152,12 @@
"const": "core:webview:allow-reparent",
"markdownDescription": "Enables the reparent command without any pre-configured scope."
},
{
"description": "Enables the set_webview_auto_resize command without any pre-configured scope.",
"type": "string",
"const": "core:webview:allow-set-webview-auto-resize",
"markdownDescription": "Enables the set_webview_auto_resize command without any pre-configured scope."
},
{
"description": "Enables the set_webview_background_color command without any pre-configured scope.",
"type": "string",
@@ -3254,6 +3260,12 @@
"const": "core:webview:deny-reparent",
"markdownDescription": "Denies the reparent command without any pre-configured scope."
},
{
"description": "Denies the set_webview_auto_resize command without any pre-configured scope.",
"type": "string",
"const": "core:webview:deny-set-webview-auto-resize",
"markdownDescription": "Denies the set_webview_auto_resize command without any pre-configured scope."
},
{
"description": "Denies the set_webview_background_color command without any pre-configured scope.",
"type": "string",
@@ -4208,6 +4220,60 @@
"const": "core:window:deny-unminimize",
"markdownDescription": "Denies the unminimize command without any pre-configured scope."
},
{
"description": "Allows reading the opened deep link via the get_current command\n#### This default permission set includes:\n\n- `allow-get-current`",
"type": "string",
"const": "deep-link:default",
"markdownDescription": "Allows reading the opened deep link via the get_current command\n#### This default permission set includes:\n\n- `allow-get-current`"
},
{
"description": "Enables the get_current command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-get-current",
"markdownDescription": "Enables the get_current command without any pre-configured scope."
},
{
"description": "Enables the is_registered command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-is-registered",
"markdownDescription": "Enables the is_registered command without any pre-configured scope."
},
{
"description": "Enables the register command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-register",
"markdownDescription": "Enables the register command without any pre-configured scope."
},
{
"description": "Enables the unregister command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-unregister",
"markdownDescription": "Enables the unregister command without any pre-configured scope."
},
{
"description": "Denies the get_current command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-get-current",
"markdownDescription": "Denies the get_current command without any pre-configured scope."
},
{
"description": "Denies the is_registered command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-is-registered",
"markdownDescription": "Denies the is_registered command without any pre-configured scope."
},
{
"description": "Denies the register command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-register",
"markdownDescription": "Denies the register command without any pre-configured scope."
},
{
"description": "Denies the unregister command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-unregister",
"markdownDescription": "Denies the unregister command without any pre-configured scope."
},
{
"description": "This permission set configures the types of dialogs\navailable from the dialog plugin.\n\n#### Granted Permissions\n\nAll dialog types are enabled.\n\n\n\n#### This default permission set includes:\n\n- `allow-ask`\n- `allow-confirm`\n- `allow-message`\n- `allow-save`\n- `allow-open`",
"type": "string",


@@ -37,7 +37,7 @@
],
"definitions": {
"Capability": {
"description": "A grouping and boundary mechanism developers can use to isolate access to the IPC layer.\n\nIt controls application windows' and webviews' fine grained access to the Tauri core, application, or plugin commands. If a webview or its window is not matching any capability then it has no access to the IPC layer at all.\n\nThis can be done to create groups of windows, based on their required system access, which can reduce impact of frontend vulnerabilities in less privileged windows. Windows can be added to a capability by exact name (e.g. `main-window`) or glob patterns like `*` or `admin-*`. A Window can have none, one, or multiple associated capabilities.\n\n## Example\n\n```json { \"identifier\": \"main-user-files-write\", \"description\": \"This capability allows the `main` window on macOS and Windows access to `filesystem` write related commands and `dialog` commands to enable programatic access to files selected by the user.\", \"windows\": [ \"main\" ], \"permissions\": [ \"core:default\", \"dialog:open\", { \"identifier\": \"fs:allow-write-text-file\", \"allow\": [{ \"path\": \"$HOME/test.txt\" }] }, ], \"platforms\": [\"macOS\",\"windows\"] } ```",
"description": "A grouping and boundary mechanism developers can use to isolate access to the IPC layer.\n\nIt controls application windows' and webviews' fine grained access to the Tauri core, application, or plugin commands. If a webview or its window is not matching any capability then it has no access to the IPC layer at all.\n\nThis can be done to create groups of windows, based on their required system access, which can reduce impact of frontend vulnerabilities in less privileged windows. Windows can be added to a capability by exact name (e.g. `main-window`) or glob patterns like `*` or `admin-*`. A Window can have none, one, or multiple associated capabilities.\n\n## Example\n\n```json { \"identifier\": \"main-user-files-write\", \"description\": \"This capability allows the `main` window on macOS and Windows access to `filesystem` write related commands and `dialog` commands to enable programmatic access to files selected by the user.\", \"windows\": [ \"main\" ], \"permissions\": [ \"core:default\", \"dialog:open\", { \"identifier\": \"fs:allow-write-text-file\", \"allow\": [{ \"path\": \"$HOME/test.txt\" }] }, ], \"platforms\": [\"macOS\",\"windows\"] } ```",
"type": "object",
"required": [
"identifier",
@@ -49,7 +49,7 @@
"type": "string"
},
"description": {
"description": "Description of what the capability is intended to allow on associated windows.\n\nIt should contain a description of what the grouped permissions should allow.\n\n## Example\n\nThis capability allows the `main` window access to `filesystem` write related commands and `dialog` commands to enable programatic access to files selected by the user.",
"description": "Description of what the capability is intended to allow on associated windows.\n\nIt should contain a description of what the grouped permissions should allow.\n\n## Example\n\nThis capability allows the `main` window access to `filesystem` write related commands and `dialog` commands to enable programmatic access to files selected by the user.",
"default": "",
"type": "string"
},
@@ -3152,6 +3152,12 @@
"const": "core:webview:allow-reparent",
"markdownDescription": "Enables the reparent command without any pre-configured scope."
},
{
"description": "Enables the set_webview_auto_resize command without any pre-configured scope.",
"type": "string",
"const": "core:webview:allow-set-webview-auto-resize",
"markdownDescription": "Enables the set_webview_auto_resize command without any pre-configured scope."
},
{
"description": "Enables the set_webview_background_color command without any pre-configured scope.",
"type": "string",
@@ -3254,6 +3260,12 @@
"const": "core:webview:deny-reparent",
"markdownDescription": "Denies the reparent command without any pre-configured scope."
},
{
"description": "Denies the set_webview_auto_resize command without any pre-configured scope.",
"type": "string",
"const": "core:webview:deny-set-webview-auto-resize",
"markdownDescription": "Denies the set_webview_auto_resize command without any pre-configured scope."
},
{
"description": "Denies the set_webview_background_color command without any pre-configured scope.",
"type": "string",
@@ -4208,6 +4220,60 @@
"const": "core:window:deny-unminimize",
"markdownDescription": "Denies the unminimize command without any pre-configured scope."
},
{
"description": "Allows reading the opened deep link via the get_current command\n#### This default permission set includes:\n\n- `allow-get-current`",
"type": "string",
"const": "deep-link:default",
"markdownDescription": "Allows reading the opened deep link via the get_current command\n#### This default permission set includes:\n\n- `allow-get-current`"
},
{
"description": "Enables the get_current command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-get-current",
"markdownDescription": "Enables the get_current command without any pre-configured scope."
},
{
"description": "Enables the is_registered command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-is-registered",
"markdownDescription": "Enables the is_registered command without any pre-configured scope."
},
{
"description": "Enables the register command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-register",
"markdownDescription": "Enables the register command without any pre-configured scope."
},
{
"description": "Enables the unregister command without any pre-configured scope.",
"type": "string",
"const": "deep-link:allow-unregister",
"markdownDescription": "Enables the unregister command without any pre-configured scope."
},
{
"description": "Denies the get_current command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-get-current",
"markdownDescription": "Denies the get_current command without any pre-configured scope."
},
{
"description": "Denies the is_registered command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-is-registered",
"markdownDescription": "Denies the is_registered command without any pre-configured scope."
},
{
"description": "Denies the register command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-register",
"markdownDescription": "Denies the register command without any pre-configured scope."
},
{
"description": "Denies the unregister command without any pre-configured scope.",
"type": "string",
"const": "deep-link:deny-unregister",
"markdownDescription": "Denies the unregister command without any pre-configured scope."
},
{
"description": "This permission set configures the types of dialogs\navailable from the dialog plugin.\n\n#### Granted Permissions\n\nAll dialog types are enabled.\n\n\n\n#### This default permission set includes:\n\n- `allow-ask`\n- `allow-confirm`\n- `allow-message`\n- `allow-save`\n- `allow-open`",
"type": "string",

File diff suppressed because it is too large.

@@ -1,6 +1,6 @@
use std::path::{Path, PathBuf};
use chrono::Utc;
use chrono::Local;
use serde::{Deserialize, Serialize};
use crate::{recorder::PlatformType, recorder_manager::ClipRangeParams};
@@ -15,10 +15,16 @@ pub struct Config {
pub post_notify: bool,
#[serde(default = "default_auto_subtitle")]
pub auto_subtitle: bool,
#[serde(default = "default_subtitle_generator_type")]
pub subtitle_generator_type: String,
#[serde(default = "default_whisper_model")]
pub whisper_model: String,
#[serde(default = "default_whisper_prompt")]
pub whisper_prompt: String,
#[serde(default = "default_openai_api_endpoint")]
pub openai_api_endpoint: String,
#[serde(default = "default_openai_api_key")]
pub openai_api_key: String,
#[serde(default = "default_clip_name_format")]
pub clip_name_format: String,
#[serde(default = "default_auto_generate_config")]
@@ -27,6 +33,10 @@ pub struct Config {
pub status_check_interval: u64,
#[serde(skip)]
pub config_path: String,
#[serde(default = "default_whisper_language")]
pub whisper_language: String,
#[serde(default = "default_user_agent")]
pub user_agent: String,
}
#[derive(Deserialize, Serialize, Clone)]
@@ -39,6 +49,10 @@ fn default_auto_subtitle() -> bool {
false
}
fn default_subtitle_generator_type() -> String {
"whisper".to_string()
}
fn default_whisper_model() -> String {
"whisper_model.bin".to_string()
}
@@ -47,6 +61,14 @@ fn default_whisper_prompt() -> String {
"这是一段中文 你们好".to_string()
}
fn default_openai_api_endpoint() -> String {
"https://api.openai.com/v1".to_string()
}
fn default_openai_api_key() -> String {
"".to_string()
}
fn default_clip_name_format() -> String {
"[{room_id}][{live_id}][{title}][{created_at}].mp4".to_string()
}
@@ -62,6 +84,14 @@ fn default_status_check_interval() -> u64 {
30
}
fn default_whisper_language() -> String {
"auto".to_string()
}
fn default_user_agent() -> String {
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36".to_string()
}
impl Config {
pub fn load(
config_path: &PathBuf,
@@ -89,12 +119,17 @@ impl Config {
clip_notify: true,
post_notify: true,
auto_subtitle: false,
subtitle_generator_type: default_subtitle_generator_type(),
whisper_model: default_whisper_model(),
whisper_prompt: default_whisper_prompt(),
openai_api_endpoint: default_openai_api_endpoint(),
openai_api_key: default_openai_api_key(),
clip_name_format: default_clip_name_format(),
auto_generate: default_auto_generate_config(),
status_check_interval: default_status_check_interval(),
config_path: config_path.to_str().unwrap().into(),
whisper_language: default_whisper_language(),
user_agent: default_user_agent(),
};
config.save();
@@ -121,6 +156,18 @@ impl Config {
self.save();
}
#[allow(dead_code)]
pub fn set_whisper_language(&mut self, language: &str) {
self.whisper_language = language.to_string();
self.save();
}
#[allow(dead_code)]
pub fn set_user_agent(&mut self, user_agent: &str) {
self.user_agent = user_agent.to_string();
self.save();
}
pub fn generate_clip_name(&self, params: &ClipRangeParams) -> PathBuf {
let platform = PlatformType::from_str(&params.platform).unwrap();
@@ -136,13 +183,31 @@ impl Config {
let format_config = format_config.replace("{platform}", platform.as_str());
let format_config = format_config.replace("{room_id}", &params.room_id.to_string());
let format_config = format_config.replace("{live_id}", &params.live_id);
let format_config = format_config.replace("{x}", &params.x.to_string());
let format_config = format_config.replace("{y}", &params.y.to_string());
let format_config = format_config.replace(
"{x}",
&params
.range
.as_ref()
.map_or("0".to_string(), |r| r.start.to_string()),
);
let format_config = format_config.replace(
"{y}",
&params
.range
.as_ref()
.map_or("0".to_string(), |r| r.end.to_string()),
);
let format_config = format_config.replace(
"{created_at}",
&Utc::now().format("%Y-%m-%d_%H-%M-%S").to_string(),
&Local::now().format("%Y-%m-%d_%H-%M-%S").to_string(),
);
let format_config = format_config.replace(
"{length}",
&params
.range
.as_ref()
.map_or("0".to_string(), |r| r.duration().to_string()),
);
let format_config = format_config.replace("{length}", &(params.y - params.x).to_string());
let output = self.output.clone();
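For clarity, a minimal sketch of how the reworked placeholders resolve (values are illustrative; `Range` is the struct introduced in ffmpeg.rs later in this changeset):
// Illustrative only: {x}/{y}/{length} now come from the optional range instead of params.x/params.y.
let range = Some(Range { start: 30.0, end: 120.0 });
let x = range.as_ref().map_or("0".to_string(), |r| r.start.to_string());
let y = range.as_ref().map_or("0".to_string(), |r| r.end.to_string());
let length = range.as_ref().map_or("0".to_string(), |r| r.duration().to_string());
assert_eq!((x.as_str(), y.as_str(), length.as_str()), ("30", "120", "90"));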

View File

@@ -7,6 +7,7 @@ pub mod account;
pub mod message;
pub mod record;
pub mod recorder;
pub mod task;
pub mod video;
pub struct Database {

View File

@@ -9,7 +9,8 @@ use rand::Rng;
#[derive(Debug, Clone, serde::Serialize, sqlx::FromRow)]
pub struct AccountRow {
pub platform: String,
pub uid: u64,
pub uid: u64, // Keep for Bilibili compatibility
pub id_str: Option<String>, // New field for string IDs like Douyin sec_uid
pub name: String,
pub avatar: String,
pub csrf: String,
@@ -50,9 +51,10 @@ impl Database {
return Err(DatabaseError::InvalidCookiesError);
}
// parse uid
let uid = if platform == PlatformType::BiliBili {
cookies
// parse uid and id_str based on platform
let (uid, id_str) = if platform == PlatformType::BiliBili {
// For Bilibili, extract numeric uid from cookies
let uid = cookies
.split("DedeUserID=")
.collect::<Vec<&str>>()
.get(1)
@@ -63,15 +65,18 @@ impl Database {
.unwrap()
.to_string()
.parse::<u64>()
.map_err(|_| DatabaseError::InvalidCookiesError)?
.map_err(|_| DatabaseError::InvalidCookiesError)?;
(uid, None)
} else {
// generate a random uid
rand::thread_rng().gen_range(10000..=i32::MAX) as u64
// For Douyin, use temporary uid and will set id_str later with real sec_uid
let temp_uid = rand::thread_rng().gen_range(10000..=i32::MAX) as u64;
(temp_uid, Some(format!("temp_{}", temp_uid)))
};
let account = AccountRow {
platform: platform.as_str().to_string(),
uid,
id_str,
name: "".into(),
avatar: "".into(),
csrf: csrf.unwrap(),
@@ -79,7 +84,7 @@ impl Database {
created_at: Utc::now().to_rfc3339(),
};
sqlx::query("INSERT INTO accounts (uid, platform, name, avatar, csrf, cookies, created_at) VALUES ($1, $2, $3, $4, $5, $6, $7)").bind(account.uid as i64).bind(&account.platform).bind(&account.name).bind(&account.avatar).bind(&account.csrf).bind(&account.cookies).bind(&account.created_at).execute(&lock).await?;
sqlx::query("INSERT INTO accounts (uid, platform, id_str, name, avatar, csrf, cookies, created_at) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)").bind(account.uid as i64).bind(&account.platform).bind(&account.id_str).bind(&account.name).bind(&account.avatar).bind(&account.csrf).bind(&account.cookies).bind(&account.created_at).execute(&lock).await?;
Ok(account)
}
@@ -120,6 +125,52 @@ impl Database {
Ok(())
}
pub async fn update_account_with_id_str(
&self,
old_account: &AccountRow,
new_id_str: &str,
name: &str,
avatar: &str,
) -> Result<(), DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
// If the id_str changed, we need to delete the old record and create a new one
if old_account.id_str.as_deref() != Some(new_id_str) {
// Delete the old record (for Douyin accounts, we use uid to identify)
sqlx::query("DELETE FROM accounts WHERE uid = $1 and platform = $2")
.bind(old_account.uid as i64)
.bind(&old_account.platform)
.execute(&lock)
.await?;
// Insert the new record with updated id_str
sqlx::query("INSERT INTO accounts (uid, platform, id_str, name, avatar, csrf, cookies, created_at) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)")
.bind(old_account.uid as i64)
.bind(&old_account.platform)
.bind(new_id_str)
.bind(name)
.bind(avatar)
.bind(&old_account.csrf)
.bind(&old_account.cookies)
.bind(&old_account.created_at)
.execute(&lock)
.await?;
} else {
// id_str is the same, just update name and avatar
sqlx::query(
"UPDATE accounts SET name = $1, avatar = $2 WHERE uid = $3 and platform = $4",
)
.bind(name)
.bind(avatar)
.bind(old_account.uid as i64)
.bind(&old_account.platform)
.execute(&lock)
.await?;
}
Ok(())
}
pub async fn get_accounts(&self) -> Result<Vec<AccountRow>, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
Ok(sqlx::query_as::<_, AccountRow>("SELECT * FROM accounts")

View File

@@ -123,10 +123,12 @@ impl Database {
pub async fn get_recent_record(
&self,
room_id: u64,
offset: u64,
limit: u64,
) -> Result<Vec<RecordRow>, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
if room_id == 0 {
Ok(sqlx::query_as::<_, RecordRow>(
"SELECT * FROM records ORDER BY created_at DESC LIMIT $1 OFFSET $2",
)
@@ -134,5 +136,15 @@ impl Database {
.bind(offset as i64)
.fetch_all(&lock)
.await?)
} else {
Ok(sqlx::query_as::<_, RecordRow>(
"SELECT * FROM records WHERE room_id = $1 ORDER BY created_at DESC LIMIT $2 OFFSET $3",
)
.bind(room_id as i64)
.bind(limit as i64)
.bind(offset as i64)
.fetch_all(&lock)
.await?)
}
}
}
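A minimal usage sketch of the new `room_id` filter, assuming a `db: Database` handle inside an async context (the room id is illustrative):
// room_id == 0 keeps the old behaviour: most recent records across all rooms.
let all_rooms = db.get_recent_record(0, 0, 20).await?;
// Any other room_id restricts the query to that room.
let one_room = db.get_recent_record(1789714684, 0, 20).await?;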

View File

@@ -10,6 +10,7 @@ pub struct RecorderRow {
pub created_at: String,
pub platform: String,
pub auto_start: bool,
pub extra: String,
}
// recorders
@@ -18,6 +19,7 @@ impl Database {
&self,
platform: PlatformType,
room_id: u64,
extra: &str,
) -> Result<RecorderRow, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let recorder = RecorderRow {
@@ -25,14 +27,16 @@ impl Database {
created_at: Utc::now().to_rfc3339(),
platform: platform.as_str().to_string(),
auto_start: true,
extra: extra.to_string(),
};
let _ = sqlx::query(
"INSERT INTO recorders (room_id, created_at, platform, auto_start) VALUES ($1, $2, $3, $4)",
"INSERT OR REPLACE INTO recorders (room_id, created_at, platform, auto_start, extra) VALUES ($1, $2, $3, $4, $5)",
)
.bind(room_id as i64)
.bind(&recorder.created_at)
.bind(platform.as_str())
.bind(recorder.auto_start)
.bind(extra)
.execute(&lock)
.await?;
Ok(recorder)
@@ -56,7 +60,7 @@ impl Database {
pub async fn get_recorders(&self) -> Result<Vec<RecorderRow>, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
Ok(sqlx::query_as::<_, RecorderRow>(
"SELECT room_id, created_at, platform, auto_start FROM recorders",
"SELECT room_id, created_at, platform, auto_start, extra FROM recorders",
)
.fetch_all(&lock)
.await?)

View File

@@ -0,0 +1,86 @@
use super::Database;
use super::DatabaseError;
#[derive(Debug, Clone, serde::Serialize, sqlx::FromRow)]
pub struct TaskRow {
pub id: String,
#[sqlx(rename = "type")]
pub task_type: String,
pub status: String,
pub message: String,
pub metadata: String,
pub created_at: String,
}
impl Database {
pub async fn add_task(&self, task: &TaskRow) -> Result<(), DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let _ = sqlx::query(
"INSERT INTO tasks (id, type, status, message, metadata, created_at) VALUES ($1, $2, $3, $4, $5, $6)",
)
.bind(&task.id)
.bind(&task.task_type)
.bind(&task.status)
.bind(&task.message)
.bind(&task.metadata)
.bind(&task.created_at)
.execute(&lock)
.await?;
Ok(())
}
pub async fn get_tasks(&self) -> Result<Vec<TaskRow>, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let tasks = sqlx::query_as::<_, TaskRow>("SELECT * FROM tasks")
.fetch_all(&lock)
.await?;
Ok(tasks)
}
pub async fn update_task(
&self,
id: &str,
status: &str,
message: &str,
metadata: Option<&str>,
) -> Result<(), DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
if let Some(metadata) = metadata {
let _ = sqlx::query(
"UPDATE tasks SET status = $1, message = $2, metadata = $3 WHERE id = $4",
)
.bind(status)
.bind(message)
.bind(metadata)
.bind(id)
.execute(&lock)
.await?;
} else {
let _ = sqlx::query("UPDATE tasks SET status = $1, message = $2 WHERE id = $3")
.bind(status)
.bind(message)
.bind(id)
.execute(&lock)
.await?;
}
Ok(())
}
pub async fn delete_task(&self, id: &str) -> Result<(), DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let _ = sqlx::query("DELETE FROM tasks WHERE id = $1")
.bind(id)
.execute(&lock)
.await?;
Ok(())
}
pub async fn finish_pending_tasks(&self) -> Result<(), DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let _ = sqlx::query("UPDATE tasks SET status = 'failed' WHERE status = 'pending'")
.execute(&lock)
.await?;
Ok(())
}
}
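A minimal sketch of how the new task table might be driven, assuming a `db: Database` handle and an async context (all field values are illustrative):
let task = TaskRow {
    id: "task-0001".to_string(), // illustrative id
    task_type: "clip_range".to_string(),
    status: "pending".to_string(),
    message: "".to_string(),
    metadata: "{}".to_string(),
    created_at: chrono::Utc::now().to_rfc3339(),
};
db.add_task(&task).await?;
db.update_task(&task.id, "success", "done", None).await?;
// On startup, anything still marked pending can be failed in bulk:
db.finish_pending_tasks().await?;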

View File

@@ -17,17 +17,34 @@ pub struct VideoRow {
pub tags: String,
pub area: i64,
pub created_at: String,
pub platform: String,
}
#[derive(Debug, Clone, serde::Serialize, sqlx::FromRow)]
pub struct VideoNoCover {
pub id: i64,
pub room_id: u64,
pub file: String,
pub length: i64,
pub size: i64,
pub status: i64,
pub bvid: String,
pub title: String,
pub desc: String,
pub tags: String,
pub area: i64,
pub created_at: String,
pub platform: String,
}
impl Database {
pub async fn get_videos(&self, room_id: u64) -> Result<Vec<VideoRow>, DatabaseError> {
pub async fn get_videos(&self, room_id: u64) -> Result<Vec<VideoNoCover>, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
Ok(
sqlx::query_as::<_, VideoRow>("SELECT * FROM videos WHERE room_id = $1;")
let videos = sqlx::query_as::<_, VideoNoCover>("SELECT * FROM videos WHERE room_id = $1;")
.bind(room_id as i64)
.fetch_all(&lock)
.await?,
)
.await?;
Ok(videos)
}
pub async fn get_video(&self, id: i64) -> Result<VideoRow, DatabaseError> {
@@ -66,7 +83,7 @@ impl Database {
pub async fn add_video(&self, video: &VideoRow) -> Result<VideoRow, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let sql = sqlx::query("INSERT INTO videos (room_id, cover, file, length, size, status, bvid, title, desc, tags, area, created_at) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)")
let sql = sqlx::query("INSERT INTO videos (room_id, cover, file, length, size, status, bvid, title, desc, tags, area, created_at, platform) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)")
.bind(video.room_id as i64)
.bind(&video.cover)
.bind(&video.file)
@@ -79,6 +96,7 @@ impl Database {
.bind(&video.tags)
.bind(video.area)
.bind(&video.created_at)
.bind(&video.platform)
.execute(&lock)
.await?;
let video = VideoRow {
@@ -97,4 +115,22 @@ impl Database {
.await?;
Ok(())
}
pub async fn get_all_videos(&self) -> Result<Vec<VideoNoCover>, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let videos =
sqlx::query_as::<_, VideoNoCover>("SELECT * FROM videos ORDER BY created_at DESC;")
.fetch_all(&lock)
.await?;
Ok(videos)
}
pub async fn get_video_cover(&self, id: i64) -> Result<String, DatabaseError> {
let lock = self.db.read().await.clone().unwrap();
let video = sqlx::query_as::<_, VideoRow>("SELECT * FROM videos WHERE id = $1")
.bind(id)
.fetch_one(&lock)
.await?;
Ok(video.cover)
}
}

View File

@@ -1,15 +1,55 @@
use std::fmt;
use std::path::{Path, PathBuf};
use std::process::Stdio;
use crate::progress_reporter::ProgressReporterTrait;
use async_ffmpeg_sidecar::event::FfmpegEvent;
use crate::progress_reporter::{ProgressReporter, ProgressReporterTrait};
use crate::subtitle_generator::whisper_online;
use crate::subtitle_generator::{
whisper_cpp, GenerateResult, SubtitleGenerator, SubtitleGeneratorType,
};
use async_ffmpeg_sidecar::event::{FfmpegEvent, LogLevel};
use async_ffmpeg_sidecar::log_parser::FfmpegLogParser;
use tokio::io::BufReader;
use serde::{Deserialize, Serialize};
use tokio::io::{AsyncBufReadExt, BufReader};
// Video metadata structure
#[derive(Debug)]
pub struct VideoMetadata {
pub duration: f64,
pub width: u32,
pub height: u32,
}
#[cfg(target_os = "windows")]
const CREATE_NO_WINDOW: u32 = 0x08000000;
#[cfg(target_os = "windows")]
#[allow(unused_imports)]
use std::os::windows::process::CommandExt;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Range {
pub start: f64,
pub end: f64,
}
impl fmt::Display for Range {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "[{}, {}]", self.start, self.end)
}
}
impl Range {
pub fn duration(&self) -> f64 {
self.end - self.start
}
}
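A tiny sketch of how a `Range` behaves and how it feeds the `-ss`/`-t` arguments below:
let range = Range { start: 30.0, end: 120.0 };
assert_eq!(range.duration(), 90.0);
assert_eq!(range.to_string(), "[30, 120]");
// clip_from_m3u8 passes "-ss 30" and "-t 90" to ffmpeg for this range.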
pub async fn clip_from_m3u8(
reporter: Option<&impl ProgressReporterTrait>,
m3u8_index: &Path,
output_path: &Path,
range: Option<&Range>,
fix_encoding: bool,
) -> Result<(), String> {
// first check output folder exists
let output_folder = output_path.parent().unwrap();
@@ -21,9 +61,28 @@ pub async fn clip_from_m3u8(
std::fs::create_dir_all(output_folder).unwrap();
}
let child = tokio::process::Command::new(ffmpeg_path())
.args(["-i", &format!("{}", m3u8_index.display())])
.args(["-c", "copy"])
let mut ffmpeg_process = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
ffmpeg_process.creation_flags(CREATE_NO_WINDOW);
let child_command = ffmpeg_process.args(["-i", &format!("{}", m3u8_index.display())]);
if let Some(range) = range {
child_command
.args(["-ss", &range.start.to_string()])
.args(["-t", &range.duration().to_string()]);
}
if fix_encoding {
child_command
.args(["-c:v", "libx264"])
.args(["-c:a", "aac"])
.args(["-preset", "fast"]);
} else {
child_command.args(["-c", "copy"]);
}
let child = child_command
.args(["-y", output_path.to_str().unwrap()])
.args(["-progress", "pipe:2"])
.stderr(Stdio::piped())
@@ -45,13 +104,17 @@ pub async fn clip_from_m3u8(
if reporter.is_none() {
continue;
}
log::debug!("Clip progress: {}", p.time);
reporter
.unwrap()
.update(format!("编码中:{}", p.time).as_str())
}
FfmpegEvent::LogEOF => break,
FfmpegEvent::Log(_level, content) => {
log::debug!("{}", content);
FfmpegEvent::Log(level, content) => {
// log error if content contains error
if content.contains("error") || level == LogLevel::Error {
log::error!("Clip error: {}", content);
}
}
FfmpegEvent::Error(e) => {
log::error!("Clip error: {}", e);
@@ -75,20 +138,92 @@ pub async fn clip_from_m3u8(
}
}
pub async fn extract_audio(file: &Path) -> Result<(), String> {
pub async fn extract_audio_chunks(file: &Path, format: &str) -> Result<PathBuf, String> {
// ffmpeg -i fixed_\[30655190\]1742887114_0325084106_81.5.mp4 -ar 16000 test.wav
log::info!("Extract audio task start: {}", file.display());
let output_path = file.with_extension("wav");
let output_path = file.with_extension(format);
let mut extract_error = None;
let child = tokio::process::Command::new(ffmpeg_path())
.args(["-i", file.to_str().unwrap()])
.args(["-ar", "16000"])
.args([output_path.to_str().unwrap()])
.args(["-y"])
.args(["-progress", "pipe:2"])
.stderr(Stdio::piped())
.spawn();
// Lower the sample rate to speed up processing while keeping enough audio quality for speech recognition
let sample_rate = if format == "mp3" { "22050" } else { "16000" };
// First, get the duration of the input file
let duration = get_audio_duration(file).await?;
log::info!("Audio duration: {} seconds", duration);
// Split into chunks of 30 seconds
let chunk_duration = 30;
let chunk_count = (duration as f64 / chunk_duration as f64).ceil() as usize;
log::info!(
"Splitting into {} chunks of {} seconds each",
chunk_count,
chunk_duration
);
// Create output directory for chunks
let output_dir = output_path.parent().unwrap();
let base_name = output_path.file_stem().unwrap().to_str().unwrap();
let chunk_dir = output_dir.join(format!("{}_chunks", base_name));
if !chunk_dir.exists() {
std::fs::create_dir_all(&chunk_dir)
.map_err(|e| format!("Failed to create chunk directory: {}", e))?;
}
// Use ffmpeg segment feature to split audio into chunks
let segment_pattern = chunk_dir.join(format!("{}_%03d.{}", base_name, format));
// Build the optimized ffmpeg argument list
let file_str = file.to_str().unwrap();
let chunk_duration_str = chunk_duration.to_string();
let segment_pattern_str = segment_pattern.to_str().unwrap();
let mut args = vec![
"-i",
file_str,
"-ar",
sample_rate,
"-vn",
"-f",
"segment",
"-segment_time",
&chunk_duration_str,
"-reset_timestamps",
"1",
"-y",
"-progress",
"pipe:2",
];
// Add format-specific encoding parameters
if format == "mp3" {
args.extend_from_slice(&[
"-c:a",
"mp3",
"-b:a",
"64k", // 降低比特率以提高速度
"-compression_level",
"0", // 最快压缩
]);
} else {
args.extend_from_slice(&[
"-c:a",
"pcm_s16le", // 使用PCM编码速度更快
]);
}
// Add performance tuning parameters
args.extend_from_slice(&[
"-threads", "0", // use all available CPU cores
]);
args.push(segment_pattern_str);
let mut ffmpeg_process = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
ffmpeg_process.creation_flags(CREATE_NO_WINDOW);
let child = ffmpeg_process.args(&args).stderr(Stdio::piped()).spawn();
if let Err(e) = child {
return Err(e.to_string());
@@ -106,9 +241,7 @@ pub async fn extract_audio(file: &Path) -> Result<(), String> {
extract_error = Some(e.to_string());
}
FfmpegEvent::LogEOF => break,
FfmpegEvent::Log(_level, content) => {
log::debug!("{}", content);
}
FfmpegEvent::Log(_level, _content) => {}
_ => {}
}
}
@@ -122,11 +255,114 @@ pub async fn extract_audio(file: &Path) -> Result<(), String> {
log::error!("Extract audio error: {}", error);
Err(error)
} else {
log::info!("Extract audio task end: {}", output_path.display());
Ok(())
log::info!(
"Extract audio task end: {} chunks created in {}",
chunk_count,
chunk_dir.display()
);
Ok(chunk_dir)
}
}
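A hedged sketch of the resulting layout (paths are illustrative): for `video.mp4` split into 30-second WAV chunks, the segments land in a sibling `video_chunks` directory:
let chunk_dir = extract_audio_chunks(Path::new("video.mp4"), "wav").await?;
// e.g. video_chunks/video_000.wav, video_chunks/video_001.wav, ...
for entry in std::fs::read_dir(&chunk_dir).unwrap().flatten() {
    println!("{}", entry.path().display());
}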
/// Get the duration of an audio/video file in seconds
async fn get_audio_duration(file: &Path) -> Result<u64, String> {
// Use ffprobe with format option to get duration
let mut ffprobe_process = tokio::process::Command::new(ffprobe_path());
#[cfg(target_os = "windows")]
ffprobe_process.creation_flags(CREATE_NO_WINDOW);
let child = ffprobe_process
.args(["-v", "quiet"])
.args(["-show_entries", "format=duration"])
.args(["-of", "csv=p=0"])
.args(["-i", file.to_str().unwrap()])
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn();
if let Err(e) = child {
return Err(format!("Failed to spawn ffprobe process: {}", e));
}
let mut child = child.unwrap();
let stdout = child.stdout.take().unwrap();
let reader = BufReader::new(stdout);
let mut parser = FfmpegLogParser::new(reader);
let mut duration = None;
while let Ok(event) = parser.parse_next_event().await {
match event {
FfmpegEvent::LogEOF => break,
FfmpegEvent::Log(_level, content) => {
// The new command outputs duration directly as a float
if let Ok(seconds_f64) = content.trim().parse::<f64>() {
duration = Some(seconds_f64.ceil() as u64);
log::debug!("Parsed duration: {} seconds", seconds_f64);
}
}
_ => {}
}
}
if let Err(e) = child.wait().await {
log::error!("Failed to get duration: {}", e);
return Err(e.to_string());
}
duration.ok_or_else(|| "Failed to parse duration".to_string())
}
/// Get the precise duration of a video segment (TS/MP4) in seconds
pub async fn get_segment_duration(file: &Path) -> Result<f64, String> {
// Use ffprobe to get the exact duration of the segment
let mut ffprobe_process = tokio::process::Command::new(ffprobe_path());
#[cfg(target_os = "windows")]
ffprobe_process.creation_flags(CREATE_NO_WINDOW);
let child = ffprobe_process
.args(["-v", "quiet"])
.args(["-show_entries", "format=duration"])
.args(["-of", "csv=p=0"])
.args(["-i", file.to_str().unwrap()])
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn();
if let Err(e) = child {
return Err(format!(
"Failed to spawn ffprobe process for segment: {}",
e
));
}
let mut child = child.unwrap();
let stdout = child.stdout.take().unwrap();
let reader = BufReader::new(stdout);
let mut parser = FfmpegLogParser::new(reader);
let mut duration = None;
while let Ok(event) = parser.parse_next_event().await {
match event {
FfmpegEvent::LogEOF => break,
FfmpegEvent::Log(_level, content) => {
// Parse the exact duration as f64 for precise timing
if let Ok(seconds_f64) = content.trim().parse::<f64>() {
duration = Some(seconds_f64);
log::debug!("Parsed segment duration: {} seconds", seconds_f64);
}
}
_ => {}
}
}
if let Err(e) = child.wait().await {
log::error!("Failed to get segment duration: {}", e);
return Err(e.to_string());
}
duration.ok_or_else(|| "Failed to parse segment duration".to_string())
}
pub async fn encode_video_subtitle(
reporter: &impl ProgressReporterTrait,
file: &Path,
@@ -140,10 +376,9 @@ pub async fn encode_video_subtitle(
let output_filename = format!("[subtitle]{}", file.file_name().unwrap().to_str().unwrap());
let output_path = file.with_file_name(&output_filename);
// check output path exists
// check output path exists - log but allow overwrite
if output_path.exists() {
log::info!("Output path already exists: {}", output_path.display());
return Err("Output path already exists".to_string());
log::info!("Output path already exists, will overwrite: {}", output_path.display());
}
let mut command_error = None;
@@ -163,7 +398,11 @@ pub async fn encode_video_subtitle(
let vf = format!("subtitles={}:force_style='{}'", subtitle, srt_style);
log::info!("vf: {}", vf);
let child = tokio::process::Command::new(ffmpeg_path())
let mut ffmpeg_process = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
ffmpeg_process.creation_flags(CREATE_NO_WINDOW);
let child = ffmpeg_process
.args(["-i", file.to_str().unwrap()])
.args(["-vf", vf.as_str()])
.args(["-c:v", "libx264"])
@@ -194,9 +433,7 @@ pub async fn encode_video_subtitle(
reporter.update(format!("压制中:{}", p.time).as_str());
}
FfmpegEvent::LogEOF => break,
FfmpegEvent::Log(_level, content) => {
log::debug!("{}", content);
}
FfmpegEvent::Log(_level, _content) => {}
_ => {}
}
}
@@ -225,10 +462,9 @@ pub async fn encode_video_danmu(
let danmu_filename = format!("[danmu]{}", file.file_name().unwrap().to_str().unwrap());
let output_path = file.with_file_name(danmu_filename);
// check output path exists
// check output path exists - log but allow overwrite
if output_path.exists() {
log::info!("Output path already exists: {}", output_path.display());
return Err("Output path already exists".to_string());
log::info!("Output path already exists, will overwrite: {}", output_path.display());
}
let mut command_error = None;
@@ -246,7 +482,11 @@ pub async fn encode_video_danmu(
format!("'{}'", subtitle.display())
};
let child = tokio::process::Command::new(ffmpeg_path())
let mut ffmpeg_process = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
ffmpeg_process.creation_flags(CREATE_NO_WINDOW);
let child = ffmpeg_process
.args(["-i", file.to_str().unwrap()])
.args(["-vf", &format!("ass={}", subtitle)])
.args(["-c:v", "libx264"])
@@ -273,7 +513,7 @@ pub async fn encode_video_danmu(
command_error = Some(e.to_string());
}
FfmpegEvent::Progress(p) => {
log::info!("Encode video danmu progress: {}", p.time);
log::debug!("Encode video danmu progress: {}", p.time);
if reporter.is_none() {
continue;
}
@@ -281,9 +521,7 @@ pub async fn encode_video_danmu(
.unwrap()
.update(format!("压制中:{}", p.time).as_str());
}
FfmpegEvent::Log(_level, content) => {
log::debug!("{}", content);
}
FfmpegEvent::Log(_level, _content) => {}
FfmpegEvent::LogEOF => break,
_ => {}
}
@@ -303,6 +541,166 @@ pub async fn encode_video_danmu(
}
}
pub async fn generic_ffmpeg_command(args: &[&str]) -> Result<String, String> {
let mut ffmpeg_process = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
ffmpeg_process.creation_flags(CREATE_NO_WINDOW);
let child = ffmpeg_process.args(args).stderr(Stdio::piped()).spawn();
if let Err(e) = child {
return Err(e.to_string());
}
let mut child = child.unwrap();
let stderr = child.stderr.take().unwrap();
let reader = BufReader::new(stderr);
let mut parser = FfmpegLogParser::new(reader);
let mut logs = Vec::new();
while let Ok(event) = parser.parse_next_event().await {
match event {
FfmpegEvent::Log(_level, content) => {
logs.push(content);
}
FfmpegEvent::LogEOF => break,
_ => {}
}
}
if let Err(e) = child.wait().await {
log::error!("Generic ffmpeg command error: {}", e);
return Err(e.to_string());
}
Ok(logs.join("\n"))
}
#[allow(clippy::too_many_arguments)]
pub async fn generate_video_subtitle(
reporter: Option<&ProgressReporter>,
file: &Path,
generator_type: &str,
whisper_model: &str,
whisper_prompt: &str,
openai_api_key: &str,
openai_api_endpoint: &str,
language_hint: &str,
) -> Result<GenerateResult, String> {
match generator_type {
"whisper" => {
if whisper_model.is_empty() {
return Err("Whisper model not configured".to_string());
}
if let Ok(generator) = whisper_cpp::new(Path::new(&whisper_model), whisper_prompt).await
{
let chunk_dir = extract_audio_chunks(file, "wav").await?;
let mut full_result = GenerateResult {
subtitle_id: "".to_string(),
subtitle_content: vec![],
generator_type: SubtitleGeneratorType::Whisper,
};
let mut chunk_paths = vec![];
for entry in std::fs::read_dir(&chunk_dir)
.map_err(|e| format!("Failed to read chunk directory: {}", e))?
{
let entry =
entry.map_err(|e| format!("Failed to read directory entry: {}", e))?;
let path = entry.path();
chunk_paths.push(path);
}
// sort chunk paths by name
chunk_paths
.sort_by_key(|path| path.file_name().unwrap().to_str().unwrap().to_string());
let mut results = Vec::new();
for path in chunk_paths {
let result = generator
.generate_subtitle(reporter, &path, language_hint)
.await;
results.push(result);
}
for (i, result) in results.iter().enumerate() {
if let Ok(result) = result {
full_result.subtitle_id = result.subtitle_id.clone();
full_result.concat(result, 30 * i as u64);
}
}
// delete chunk directory
let _ = tokio::fs::remove_dir_all(chunk_dir).await;
Ok(full_result)
} else {
Err("Failed to initialize Whisper model".to_string())
}
}
"whisper_online" => {
if openai_api_key.is_empty() {
return Err("API key not configured".to_string());
}
if let Ok(generator) = whisper_online::new(
Some(openai_api_endpoint),
Some(openai_api_key),
Some(whisper_prompt),
)
.await
{
let chunk_dir = extract_audio_chunks(file, "mp3").await?;
let mut full_result = GenerateResult {
subtitle_id: "".to_string(),
subtitle_content: vec![],
generator_type: SubtitleGeneratorType::WhisperOnline,
};
let mut chunk_paths = vec![];
for entry in std::fs::read_dir(&chunk_dir)
.map_err(|e| format!("Failed to read chunk directory: {}", e))?
{
let entry =
entry.map_err(|e| format!("Failed to read directory entry: {}", e))?;
let path = entry.path();
chunk_paths.push(path);
}
// sort chunk paths by name
chunk_paths
.sort_by_key(|path| path.file_name().unwrap().to_str().unwrap().to_string());
let mut results = Vec::new();
for path in chunk_paths {
let result = generator
.generate_subtitle(reporter, &path, language_hint)
.await;
results.push(result);
}
for (i, result) in results.iter().enumerate() {
if let Ok(result) = result {
full_result.subtitle_id = result.subtitle_id.clone();
full_result.concat(result, 30 * i as u64);
}
}
// delete chunk directory
let _ = tokio::fs::remove_dir_all(chunk_dir).await;
Ok(full_result)
} else {
Err("Failed to initialize Whisper Online".to_string())
}
}
_ => Err(format!(
"Unknown subtitle generator type: {}",
generator_type
)),
}
}
/// Trying to run ffmpeg for version
pub async fn check_ffmpeg() -> Result<String, String> {
let child = tokio::process::Command::new(ffmpeg_path())
@@ -342,6 +740,52 @@ pub async fn check_ffmpeg() -> Result<String, String> {
}
}
pub async fn get_video_resolution(file: &str) -> Result<String, String> {
// ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=s=x:p=0 input.mp4
let mut ffprobe_process = tokio::process::Command::new(ffprobe_path());
#[cfg(target_os = "windows")]
ffprobe_process.creation_flags(CREATE_NO_WINDOW);
let child = ffprobe_process
.arg("-i")
.arg(file)
.arg("-v")
.arg("error")
.arg("-select_streams")
.arg("v:0")
.arg("-show_entries")
.arg("stream=width,height")
.arg("-of")
.arg("csv=s=x:p=0")
.stdout(Stdio::piped())
.spawn();
if let Err(e) = child {
log::error!("Faild to spwan ffprobe process: {e}");
return Err(e.to_string());
}
let mut child = child.unwrap();
let stdout = child.stdout.take();
if stdout.is_none() {
log::error!("Failed to take ffprobe output");
return Err("Failed to take ffprobe output".into());
}
let stdout = stdout.unwrap();
let reader = BufReader::new(stdout);
let mut lines = reader.lines();
let line = lines.next_line().await.unwrap();
if line.is_none() {
return Err("Failed to parse resolution from output".into());
}
let line = line.unwrap();
let resolution = line.split("x").collect::<Vec<&str>>();
if resolution.len() != 2 {
return Err("Failed to parse resolution from output".into());
}
Ok(format!("{}x{}", resolution[0], resolution[1]))
}
fn ffmpeg_path() -> PathBuf {
let mut path = Path::new("ffmpeg").to_path_buf();
if cfg!(windows) {
@@ -350,3 +794,365 @@ fn ffmpeg_path() -> PathBuf {
path
}
fn ffprobe_path() -> PathBuf {
let mut path = Path::new("ffprobe").to_path_buf();
if cfg!(windows) {
path.set_extension("exe");
}
path
}
// Parse an FFmpeg time string (e.g. "00:01:23.45")
fn parse_time_string(time_str: &str) -> Result<f64, String> {
let parts: Vec<&str> = time_str.split(':').collect();
if parts.len() != 3 {
return Err("Invalid time format".to_string());
}
let hours: f64 = parts[0].parse().map_err(|_| "Invalid hours")?;
let minutes: f64 = parts[1].parse().map_err(|_| "Invalid minutes")?;
let seconds: f64 = parts[2].parse().map_err(|_| "Invalid seconds")?;
Ok(hours * 3600.0 + minutes * 60.0 + seconds)
}
// Clip from a video file
pub async fn clip_from_video_file(
reporter: Option<&impl ProgressReporterTrait>,
input_path: &Path,
output_path: &Path,
start_time: f64,
duration: f64,
) -> Result<(), String> {
let output_folder = output_path.parent().unwrap();
if !output_folder.exists() {
std::fs::create_dir_all(output_folder).unwrap();
}
let mut ffmpeg_process = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
ffmpeg_process.creation_flags(CREATE_NO_WINDOW);
let child = ffmpeg_process
.args(["-i", &format!("{}", input_path.display())])
.args(["-ss", &start_time.to_string()])
.args(["-t", &duration.to_string()])
.args(["-c:v", "libx264"])
.args(["-c:a", "aac"])
.args(["-preset", "fast"])
.args(["-crf", "23"])
.args(["-avoid_negative_ts", "make_zero"])
.args(["-y", output_path.to_str().unwrap()])
.args(["-progress", "pipe:2"])
.stderr(Stdio::piped())
.spawn();
if let Err(e) = child {
return Err(format!("启动ffmpeg进程失败: {}", e));
}
let mut child = child.unwrap();
let stderr = child.stderr.take().unwrap();
let reader = BufReader::new(stderr);
let mut parser = FfmpegLogParser::new(reader);
let mut clip_error = None;
while let Ok(event) = parser.parse_next_event().await {
match event {
FfmpegEvent::Progress(p) => {
if let Some(reporter) = reporter {
// Parse the time string (e.g. "00:01:23.45")
if let Ok(current_time) = parse_time_string(&p.time) {
let progress = (current_time / duration * 100.0).min(100.0);
reporter.update(&format!("切片进度: {:.1}%", progress));
}
}
}
FfmpegEvent::LogEOF => break,
FfmpegEvent::Log(level, content) => {
if content.contains("error") || level == LogLevel::Error {
log::error!("切片错误: {}", content);
}
}
FfmpegEvent::Error(e) => {
log::error!("切片错误: {}", e);
clip_error = Some(e.to_string());
}
_ => {}
}
}
if let Err(e) = child.wait().await {
return Err(e.to_string());
}
if let Some(error) = clip_error {
Err(error)
} else {
log::info!("切片任务完成: {}", output_path.display());
Ok(())
}
}
// Get video metadata
pub async fn extract_video_metadata(file_path: &Path) -> Result<VideoMetadata, String> {
let mut ffprobe_process = tokio::process::Command::new("ffprobe");
#[cfg(target_os = "windows")]
ffprobe_process.creation_flags(CREATE_NO_WINDOW);
let output = ffprobe_process
.args([
"-v", "quiet",
"-print_format", "json",
"-show_format",
"-show_streams",
"-select_streams", "v:0",
&format!("{}", file_path.display())
])
.output()
.await
.map_err(|e| format!("执行ffprobe失败: {}", e))?;
if !output.status.success() {
return Err(format!("ffprobe执行失败: {}", String::from_utf8_lossy(&output.stderr)));
}
let json_str = String::from_utf8_lossy(&output.stdout);
let json: serde_json::Value = serde_json::from_str(&json_str)
.map_err(|e| format!("解析ffprobe输出失败: {}", e))?;
// Parse the video stream info
let streams = json["streams"].as_array()
.ok_or("未找到视频流信息")?;
if streams.is_empty() {
return Err("未找到视频流".to_string());
}
let video_stream = &streams[0];
let format = &json["format"];
let duration = format["duration"].as_str()
.and_then(|d| d.parse::<f64>().ok())
.unwrap_or(0.0);
let width = video_stream["width"].as_u64().unwrap_or(0) as u32;
let height = video_stream["height"].as_u64().unwrap_or(0) as u32;
Ok(VideoMetadata {
duration,
width,
height,
})
}
// Generate a video thumbnail
pub async fn generate_thumbnail(
video_path: &Path,
output_path: &Path,
timestamp: f64,
) -> Result<(), String> {
let output_folder = output_path.parent().unwrap();
if !output_folder.exists() {
std::fs::create_dir_all(output_folder).unwrap();
}
let mut ffmpeg_process = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
ffmpeg_process.creation_flags(CREATE_NO_WINDOW);
let output = ffmpeg_process
.args(["-i", &format!("{}", video_path.display())])
.args(["-ss", &timestamp.to_string()])
.args(["-vframes", "1"])
.args(["-y", output_path.to_str().unwrap()])
.output()
.await
.map_err(|e| format!("生成缩略图失败: {}", e))?;
if !output.status.success() {
return Err(format!("ffmpeg生成缩略图失败: {}", String::from_utf8_lossy(&output.stderr)));
}
// Log the generated thumbnail info
if let Ok(metadata) = std::fs::metadata(output_path) {
log::info!("生成缩略图完成: {} (文件大小: {} bytes)", output_path.display(), metadata.len());
} else {
log::info!("生成缩略图完成: {}", output_path.display());
}
Ok(())
}
// Parse an FFmpeg time string into seconds (format: "HH:MM:SS.mmm")
pub fn parse_ffmpeg_time(time_str: &str) -> Result<f64, String> {
let parts: Vec<&str> = time_str.split(':').collect();
if parts.len() != 3 {
return Err(format!("Invalid time format: {}", time_str));
}
let hours: f64 = parts[0].parse().map_err(|_| format!("Invalid hours: {}", parts[0]))?;
let minutes: f64 = parts[1].parse().map_err(|_| format!("Invalid minutes: {}", parts[1]))?;
let seconds: f64 = parts[2].parse().map_err(|_| format!("Invalid seconds: {}", parts[2]))?;
Ok(hours * 3600.0 + minutes * 60.0 + seconds)
}
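A quick check of the expected parsing behaviour (tolerating float rounding):
let secs = parse_ffmpeg_time("00:01:23.45").unwrap();
assert!((secs - 83.45).abs() < 1e-9);
assert!(parse_ffmpeg_time("1:23").is_err()); // must be HH:MM:SS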
// Generic helper for running an FFmpeg conversion
pub async fn execute_ffmpeg_conversion(
mut cmd: tokio::process::Command,
total_duration: f64,
reporter: &ProgressReporter,
mode_name: &str,
) -> Result<(), String> {
use std::process::Stdio;
use tokio::io::BufReader;
use async_ffmpeg_sidecar::event::FfmpegEvent;
use async_ffmpeg_sidecar::log_parser::FfmpegLogParser;
let mut child = cmd
.stderr(Stdio::piped())
.spawn()
.map_err(|e| format!("启动FFmpeg进程失败: {}", e))?;
let stderr = child.stderr.take().unwrap();
let reader = BufReader::new(stderr);
let mut parser = FfmpegLogParser::new(reader);
let mut conversion_error = None;
while let Ok(event) = parser.parse_next_event().await {
match event {
FfmpegEvent::Progress(p) => {
if total_duration > 0.0 {
// Parse the time string into a float (format: "HH:MM:SS.mmm")
if let Ok(current_time) = parse_ffmpeg_time(&p.time) {
let progress = (current_time / total_duration * 100.0).min(100.0);
reporter.update(&format!("正在转换视频格式... {:.1}% ({})", progress, mode_name));
} else {
reporter.update(&format!("正在转换视频格式... {} ({})", p.time, mode_name));
}
} else {
reporter.update(&format!("正在转换视频格式... {} ({})", p.time, mode_name));
}
}
FfmpegEvent::LogEOF => break,
FfmpegEvent::Log(level, content) => {
if matches!(level, async_ffmpeg_sidecar::event::LogLevel::Error) && content.contains("Error") {
conversion_error = Some(content);
}
}
FfmpegEvent::Error(e) => {
conversion_error = Some(e);
}
_ => {} // ignore other event types
}
}
let status = child.wait().await.map_err(|e| format!("等待FFmpeg进程失败: {}", e))?;
if !status.success() {
let error_msg = conversion_error.unwrap_or_else(|| format!("FFmpeg退出码: {}", status.code().unwrap_or(-1)));
return Err(format!("视频格式转换失败 ({}): {}", mode_name, error_msg));
}
reporter.update(&format!("视频格式转换完成 100% ({})", mode_name));
Ok(())
}
// Try stream-copy conversion (lossless and fast)
pub async fn try_stream_copy_conversion(
source: &Path,
dest: &Path,
reporter: &ProgressReporter,
) -> Result<(), String> {
// Get the video duration to compute progress
let metadata = extract_video_metadata(source).await?;
let total_duration = metadata.duration;
reporter.update("正在转换视频格式... 0% (无损模式)");
// Build the ffmpeg command - stream copy mode
let mut cmd = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
cmd.creation_flags(0x08000000); // CREATE_NO_WINDOW
cmd.args([
"-i", &source.to_string_lossy(),
"-c:v", "copy", // 直接复制视频流,零损失
"-c:a", "copy", // 直接复制音频流,零损失
"-avoid_negative_ts", "make_zero", // 修复时间戳问题
"-movflags", "+faststart", // 优化web播放
"-progress", "pipe:2", // 输出进度到stderr
"-y", // 覆盖输出文件
&dest.to_string_lossy(),
]);
execute_ffmpeg_conversion(cmd, total_duration, reporter, "无损转换").await
}
// High-quality re-encode conversion (good compatibility, high quality)
pub async fn try_high_quality_conversion(
source: &Path,
dest: &Path,
reporter: &ProgressReporter,
) -> Result<(), String> {
// Get the video duration to compute progress
let metadata = extract_video_metadata(source).await?;
let total_duration = metadata.duration;
reporter.update("正在转换视频格式... 0% (高质量模式)");
// Build the ffmpeg command - high-quality re-encode
let mut cmd = tokio::process::Command::new(ffmpeg_path());
#[cfg(target_os = "windows")]
cmd.creation_flags(0x08000000); // CREATE_NO_WINDOW
cmd.args([
"-i", &source.to_string_lossy(),
"-c:v", "libx264", // H.264编码器
"-preset", "slow", // 慢速预设,更好的压缩效率
"-crf", "18", // 高质量设置 (18-23范围越小质量越高)
"-c:a", "aac", // AAC音频编码器
"-b:a", "192k", // 高音频码率
"-avoid_negative_ts", "make_zero", // 修复时间戳问题
"-movflags", "+faststart", // 优化web播放
"-progress", "pipe:2", // 输出进度到stderr
"-y", // 覆盖输出文件
&dest.to_string_lossy(),
]);
execute_ffmpeg_conversion(cmd, total_duration, reporter, "高质量转换").await
}
// Video format conversion with progress (smart quality-preservation strategy)
pub async fn convert_video_format(
source: &Path,
dest: &Path,
reporter: &ProgressReporter,
) -> Result<(), String> {
// Try lossless stream copy first; fall back to high-quality re-encoding if it fails
match try_stream_copy_conversion(source, dest, reporter).await {
Ok(()) => Ok(()),
Err(stream_copy_error) => {
reporter.update("流复制失败,使用高质量重编码模式...");
log::warn!("Stream copy failed: {}, falling back to re-encoding", stream_copy_error);
try_high_quality_conversion(source, dest, reporter).await
}
}
}
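A hedged usage sketch, assuming a `reporter: &ProgressReporter` has already been created by the import task (paths are illustrative):
// Tries lossless stream copy first, then falls back to libx264/aac re-encoding.
convert_video_format(
    Path::new("imports/source.flv"),
    Path::new("cache/source.mp4"),
    reporter,
)
.await?;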
// tests
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_get_video_size() {
let file = Path::new("/Users/xinreasuper/Desktop/shadowreplay-test/output2/[1789714684][1753965688317][摄像头被前夫抛妻弃子直播挣点奶粉][2025-07-31_12-58-14].mp4");
let resolution = get_video_resolution(file.to_str().unwrap()).await.unwrap();
println!("Resolution: {}", resolution);
}
}

View File

@@ -3,6 +3,7 @@ use crate::recorder::bilibili::client::{QrInfo, QrStatus};
use crate::state::State;
use crate::state_type;
use hyper::header::HeaderValue;
#[cfg(feature = "gui")]
use tauri::State as TauriState;
@@ -20,6 +21,10 @@ pub async fn add_account(
platform: String,
cookies: &str,
) -> Result<AccountRow, String> {
// check if cookies is valid
if let Err(e) = cookies.parse::<HeaderValue>() {
return Err(format!("Invalid cookies: {}", e));
}
let account = state.db.add_account(&platform, cookies).await?;
if platform == "bilibili" {
let account_info = state.client.get_user_info(&account, account.uid).await?;
@@ -32,6 +37,37 @@ pub async fn add_account(
&account_info.user_avatar_url,
)
.await?;
} else if platform == "douyin" {
// Get user info from Douyin API
let douyin_client = crate::recorder::douyin::client::DouyinClient::new(
&state.config.read().await.user_agent,
&account,
);
match douyin_client.get_user_info().await {
Ok(user_info) => {
// For Douyin, use sec_uid as the primary identifier in id_str field
let avatar_url = user_info
.avatar_thumb
.url_list
.first()
.cloned()
.unwrap_or_default();
state
.db
.update_account_with_id_str(
&account,
&user_info.sec_uid,
&user_info.nickname,
&avatar_url,
)
.await?;
}
Err(e) => {
log::warn!("Failed to get Douyin user info: {}", e);
// Keep the account but with default values
}
}
}
Ok(account)
}

View File

@@ -172,6 +172,42 @@ pub async fn update_whisper_prompt(state: state_type!(), whisper_prompt: String)
Ok(())
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn update_subtitle_generator_type(
state: state_type!(),
subtitle_generator_type: String,
) -> Result<(), ()> {
log::info!(
"Updating subtitle generator type to {}",
subtitle_generator_type
);
let mut config = state.config.write().await;
config.subtitle_generator_type = subtitle_generator_type;
config.save();
Ok(())
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn update_openai_api_key(state: state_type!(), openai_api_key: String) -> Result<(), ()> {
log::info!("Updating openai api key");
let mut config = state.config.write().await;
config.openai_api_key = openai_api_key;
config.save();
Ok(())
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn update_openai_api_endpoint(
state: state_type!(),
openai_api_endpoint: String,
) -> Result<(), ()> {
log::info!("Updating openai api endpoint to {}", openai_api_endpoint);
let mut config = state.config.write().await;
config.openai_api_endpoint = openai_api_endpoint;
config.save();
Ok(())
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn update_auto_generate(
state: state_type!(),
@@ -198,3 +234,21 @@ pub async fn update_status_check_interval(
state.config.write().await.save();
Ok(())
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn update_whisper_language(
state: state_type!(),
whisper_language: String,
) -> Result<(), ()> {
log::info!("Updating whisper language to {}", whisper_language);
state.config.write().await.whisper_language = whisper_language;
state.config.write().await.save();
Ok(())
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn update_user_agent(state: state_type!(), user_agent: String) -> Result<(), ()> {
log::info!("Updating user agent to {}", user_agent);
state.config.write().await.set_user_agent(&user_agent);
Ok(())
}

View File

@@ -3,6 +3,7 @@ pub mod config;
pub mod macros;
pub mod message;
pub mod recorder;
pub mod task;
pub mod utils;
pub mod video;

View File

@@ -24,6 +24,7 @@ pub async fn add_recorder(
state: state_type!(),
platform: String,
room_id: u64,
extra: String,
) -> Result<RecorderRow, String> {
log::info!("Add recorder: {} {}", platform, room_id);
let platform = PlatformType::from_str(&platform).unwrap();
@@ -50,11 +51,11 @@ pub async fn add_recorder(
match account {
Ok(account) => match state
.recorder_manager
.add_recorder(&account, platform, room_id, true)
.add_recorder(&account, platform, room_id, &extra, true)
.await
{
Ok(()) => {
let room = state.db.add_recorder(platform, room_id).await?;
let room = state.db.add_recorder(platform, room_id, &extra).await?;
state
.db
.new_message("添加直播间", &format!("添加了新直播间 {}", room_id))
@@ -136,6 +137,40 @@ pub async fn get_archive(
.await?)
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn get_archive_subtitle(
state: state_type!(),
platform: String,
room_id: u64,
live_id: String,
) -> Result<String, String> {
let platform = PlatformType::from_str(&platform);
if platform.is_none() {
return Err("Unsupported platform".to_string());
}
Ok(state
.recorder_manager
.get_archive_subtitle(platform.unwrap(), room_id, &live_id)
.await?)
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn generate_archive_subtitle(
state: state_type!(),
platform: String,
room_id: u64,
live_id: String,
) -> Result<String, String> {
let platform = PlatformType::from_str(&platform);
if platform.is_none() {
return Err("Unsupported platform".to_string());
}
Ok(state
.recorder_manager
.generate_archive_subtitle(platform.unwrap(), room_id, &live_id)
.await?)
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn delete_archive(
state: state_type!(),
@@ -143,10 +178,13 @@ pub async fn delete_archive(
room_id: u64,
live_id: String,
) -> Result<(), String> {
let platform = PlatformType::from_str(&platform).unwrap();
let platform = PlatformType::from_str(&platform);
if platform.is_none() {
return Err("Unsupported platform".to_string());
}
state
.recorder_manager
.delete_archive(platform, room_id, &live_id)
.delete_archive(platform.unwrap(), room_id, &live_id)
.await?;
state
.db
@@ -165,10 +203,13 @@ pub async fn get_danmu_record(
room_id: u64,
live_id: String,
) -> Result<Vec<DanmuEntry>, String> {
let platform = PlatformType::from_str(&platform).unwrap();
let platform = PlatformType::from_str(&platform);
if platform.is_none() {
return Err("Unsupported platform".to_string());
}
Ok(state
.recorder_manager
.get_danmu(platform, room_id, &live_id)
.get_danmu(platform.unwrap(), room_id, &live_id)
.await?)
}
@@ -188,10 +229,13 @@ pub async fn export_danmu(
state: state_type!(),
options: ExportDanmuOptions,
) -> Result<String, String> {
let platform = PlatformType::from_str(&options.platform).unwrap();
let platform = PlatformType::from_str(&options.platform);
if platform.is_none() {
return Err("Unsupported platform".to_string());
}
let mut danmus = state
.recorder_manager
.get_danmu(platform, options.room_id, &options.live_id)
.get_danmu(platform.unwrap(), options.room_id, &options.live_id)
.await?;
log::debug!("First danmu entry: {:?}", danmus.first());
@@ -249,10 +293,11 @@ pub async fn get_today_record_count(state: state_type!()) -> Result<i64, String>
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn get_recent_record(
state: state_type!(),
room_id: u64,
offset: u64,
limit: u64,
) -> Result<Vec<RecordRow>, String> {
match state.db.get_recent_record(offset, limit).await {
match state.db.get_recent_record(room_id, offset, limit).await {
Ok(records) => Ok(records),
Err(e) => Err(format!("Failed to get recent record: {}", e)),
}
@@ -266,10 +311,13 @@ pub async fn set_enable(
enabled: bool,
) -> Result<(), String> {
log::info!("Set enable for recorder {platform} {room_id} {enabled}");
let platform = PlatformType::from_str(&platform).unwrap();
let platform = PlatformType::from_str(&platform);
if platform.is_none() {
return Err("Unsupported platform".to_string());
}
state
.recorder_manager
.set_enable(platform, room_id, enabled)
.set_enable(platform.unwrap(), room_id, enabled)
.await;
Ok(())
}

View File

@@ -0,0 +1,15 @@
#[cfg(feature = "gui")]
use tauri::State as TauriState;
use crate::state::State;
use crate::{database::task::TaskRow, state_type};
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn get_tasks(state: state_type!()) -> Result<Vec<TaskRow>, String> {
Ok(state.db.get_tasks().await?)
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn delete_task(state: state_type!(), id: &str) -> Result<(), String> {
Ok(state.db.delete_task(id).await?)
}

View File

@@ -228,7 +228,7 @@ pub async fn open_live(
format!("Live:{}:{}", room_id, live_id),
tauri::WebviewUrl::App(
format!(
"live_index.html?platform={}&room_id={}&live_id={}",
"index_live.html?platform={}&room_id={}&live_id={}",
platform.as_str(),
room_id,
live_id
@@ -259,3 +259,46 @@ pub async fn open_live(
Ok(())
}
#[cfg(feature = "gui")]
#[tauri::command]
pub async fn open_clip(state: state_type!(), video_id: i64) -> Result<(), String> {
log::info!("Open clip window: {}", video_id);
let builder = tauri::WebviewWindowBuilder::new(
&state.app_handle,
format!("Clip:{}", video_id),
tauri::WebviewUrl::App(format!("index_clip.html?id={}", video_id).into()),
)
.title(format!("Clip window:{}", video_id))
.theme(Some(Theme::Light))
.inner_size(1200.0, 800.0)
.effects(WindowEffectsConfig {
effects: vec![
tauri_utils::WindowEffect::Tabbed,
tauri_utils::WindowEffect::Mica,
],
state: None,
radius: None,
color: None,
});
if let Err(e) = builder.decorations(true).build() {
log::error!("clip window build failed: {}", e);
}
Ok(())
}
#[cfg_attr(feature = "gui", tauri::command)]
pub async fn list_folder(_state: state_type!(), path: String) -> Result<Vec<String>, String> {
let path = PathBuf::from(path);
let entries = std::fs::read_dir(path);
if entries.is_err() {
return Err(format!("Read directory failed: {}", entries.err().unwrap()));
}
let mut files = Vec::new();
for entry in entries.unwrap().flatten() {
files.push(entry.path().to_str().unwrap().to_string());
}
Ok(files)
}

File diff suppressed because it is too large

View File

@@ -3,8 +3,12 @@ use std::fmt::{self, Display};
use crate::{
config::Config,
database::{
account::AccountRow, message::MessageRow, record::RecordRow, recorder::RecorderRow,
video::VideoRow,
account::AccountRow,
message::MessageRow,
record::RecordRow,
recorder::RecorderRow,
task::TaskRow,
video::{VideoNoCover, VideoRow},
},
handlers::{
account::{
@@ -12,21 +16,24 @@ use crate::{
},
config::{
get_config, update_auto_generate, update_clip_name_format, update_notify,
update_status_check_interval, update_subtitle_setting, update_whisper_model,
update_whisper_prompt,
update_openai_api_endpoint, update_openai_api_key, update_status_check_interval,
update_subtitle_generator_type, update_subtitle_setting, update_user_agent,
update_whisper_language, update_whisper_model, update_whisper_prompt,
},
message::{delete_message, get_messages, read_message},
recorder::{
add_recorder, delete_archive, export_danmu, fetch_hls, get_archive, get_archives,
get_danmu_record, get_recent_record, get_recorder_list, get_room_info,
get_today_record_count, get_total_length, remove_recorder, send_danmaku, set_enable,
ExportDanmuOptions,
add_recorder, delete_archive, export_danmu, fetch_hls, generate_archive_subtitle,
get_archive, get_archive_subtitle, get_archives, get_danmu_record, get_recent_record,
get_recorder_list, get_room_info, get_today_record_count, get_total_length,
remove_recorder, send_danmaku, set_enable, ExportDanmuOptions,
},
utils::{console_log, get_disk_info, DiskInfo},
task::{delete_task, get_tasks},
utils::{console_log, get_disk_info, list_folder, DiskInfo},
video::{
cancel, clip_range, delete_video, encode_video_subtitle, generate_video_subtitle,
get_video, get_video_subtitle, get_video_typelist, get_videos, update_video_cover,
update_video_subtitle, upload_procedure,
cancel, clip_range, clip_video, delete_video, encode_video_subtitle, generate_video_subtitle,
generic_ffmpeg_command, get_all_videos, get_file_size, get_video, get_video_cover, get_video_subtitle,
get_video_typelist, get_videos, import_external_video, update_video_cover, update_video_subtitle,
upload_procedure,
},
AccountInfo,
},
@@ -45,7 +52,7 @@ use crate::{
};
use axum::{extract::Query, response::sse};
use axum::{
extract::{DefaultBodyLimit, Json, Path},
extract::{DefaultBodyLimit, Json, Path, Multipart},
http::StatusCode,
response::{IntoResponse, Sse},
routing::{get, post},
@@ -245,12 +252,44 @@ async fn handler_update_whisper_model(
Ok(Json(ApiResponse::success(())))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateWhisperLanguageRequest {
whisper_language: String,
}
async fn handler_update_whisper_language(
state: axum::extract::State<State>,
Json(whisper_language): Json<UpdateWhisperLanguageRequest>,
) -> Result<Json<ApiResponse<()>>, ApiError> {
update_whisper_language(state.0, whisper_language.whisper_language)
.await
.expect("Failed to update whisper language");
Ok(Json(ApiResponse::success(())))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateSubtitleSettingRequest {
auto_subtitle: bool,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateUserAgentRequest {
user_agent: String,
}
async fn handler_update_user_agent(
state: axum::extract::State<State>,
Json(user_agent): Json<UpdateUserAgentRequest>,
) -> Result<Json<ApiResponse<()>>, ApiError> {
update_user_agent(state.0, user_agent.user_agent)
.await
.expect("Failed to update user agent");
Ok(Json(ApiResponse::success(())))
}
async fn handler_update_subtitle_setting(
state: axum::extract::State<State>,
Json(subtitle_setting): Json<UpdateSubtitleSettingRequest>,
@@ -293,6 +332,54 @@ async fn handler_update_whisper_prompt(
Ok(Json(ApiResponse::success(())))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateSubtitleGeneratorTypeRequest {
subtitle_generator_type: String,
}
async fn handler_update_subtitle_generator_type(
state: axum::extract::State<State>,
Json(param): Json<UpdateSubtitleGeneratorTypeRequest>,
) -> Result<Json<ApiResponse<()>>, ApiError> {
update_subtitle_generator_type(state.0, param.subtitle_generator_type)
.await
.expect("Failed to update subtitle generator type");
Ok(Json(ApiResponse::success(())))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateOpenaiApiEndpointRequest {
openai_api_endpoint: String,
}
async fn handler_update_openai_api_endpoint(
state: axum::extract::State<State>,
Json(param): Json<UpdateOpenaiApiEndpointRequest>,
) -> Result<Json<ApiResponse<()>>, ApiError> {
update_openai_api_endpoint(state.0, param.openai_api_endpoint)
.await
.expect("Failed to update openai api endpoint");
Ok(Json(ApiResponse::success(())))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateOpenaiApiKeyRequest {
openai_api_key: String,
}
async fn handler_update_openai_api_key(
state: axum::extract::State<State>,
Json(param): Json<UpdateOpenaiApiKeyRequest>,
) -> Result<Json<ApiResponse<()>>, ApiError> {
update_openai_api_key(state.0, param.openai_api_key)
.await
.expect("Failed to update openai api key");
Ok(Json(ApiResponse::success(())))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct UpdateAutoGenerateRequest {
@@ -363,13 +450,14 @@ async fn handler_get_recorder_list(
struct AddRecorderRequest {
platform: String,
room_id: u64,
extra: String,
}
async fn handler_add_recorder(
state: axum::extract::State<State>,
Json(param): Json<AddRecorderRequest>,
) -> Result<Json<ApiResponse<RecorderRow>>, ApiError> {
let recorder = add_recorder(state.0, param.platform, param.room_id)
let recorder = add_recorder(state.0, param.platform, param.room_id, param.extra)
.await
.expect("Failed to add recorder");
Ok(Json(ApiResponse::success(recorder)))
@@ -436,6 +524,40 @@ async fn handler_get_archive(
Ok(Json(ApiResponse::success(archive)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct GetArchiveSubtitleRequest {
platform: String,
room_id: u64,
live_id: String,
}
async fn handler_get_archive_subtitle(
state: axum::extract::State<State>,
Json(param): Json<GetArchiveSubtitleRequest>,
) -> Result<Json<ApiResponse<String>>, ApiError> {
let subtitle =
get_archive_subtitle(state.0, param.platform, param.room_id, param.live_id).await?;
Ok(Json(ApiResponse::success(subtitle)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct GenerateArchiveSubtitleRequest {
platform: String,
room_id: u64,
live_id: String,
}
async fn handler_generate_archive_subtitle(
state: axum::extract::State<State>,
Json(param): Json<GenerateArchiveSubtitleRequest>,
) -> Result<Json<ApiResponse<String>>, ApiError> {
let subtitle =
generate_archive_subtitle(state.0, param.platform, param.room_id, param.live_id).await?;
Ok(Json(ApiResponse::success(subtitle)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct DeleteArchiveRequest {
@@ -502,6 +624,7 @@ async fn handler_get_today_record_count(
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct GetRecentRecordRequest {
room_id: u64,
offset: u64,
limit: u64,
}
@@ -510,7 +633,8 @@ async fn handler_get_recent_record(
state: axum::extract::State<State>,
Json(param): Json<GetRecentRecordRequest>,
) -> Result<Json<ApiResponse<Vec<RecordRow>>>, ApiError> {
let recent_record = get_recent_record(state.0, param.offset, param.limit).await?;
let recent_record =
get_recent_record(state.0, param.room_id, param.offset, param.limit).await?;
Ok(Json(ApiResponse::success(recent_record)))
}
@@ -606,14 +730,36 @@ async fn handler_get_video(
struct GetVideosRequest {
room_id: u64,
}
async fn handler_get_videos(
state: axum::extract::State<State>,
Json(param): Json<GetVideosRequest>,
) -> Result<Json<ApiResponse<Vec<VideoRow>>>, ApiError> {
) -> Result<Json<ApiResponse<Vec<VideoNoCover>>>, ApiError> {
let videos = get_videos(state.0, param.room_id).await?;
Ok(Json(ApiResponse::success(videos)))
}
async fn handler_get_all_videos(
state: axum::extract::State<State>,
) -> Result<Json<ApiResponse<Vec<VideoNoCover>>>, ApiError> {
let videos = get_all_videos(state.0).await?;
Ok(Json(ApiResponse::success(videos)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct GetVideoCoverRequest {
id: i64,
}
async fn handler_get_video_cover(
state: axum::extract::State<State>,
Json(param): Json<GetVideoCoverRequest>,
) -> Result<Json<ApiResponse<String>>, ApiError> {
let video_cover = get_video_cover(state.0, param.id).await?;
Ok(Json(ApiResponse::success(video_cover)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct DeleteVideoRequest {
@@ -650,6 +796,57 @@ async fn handler_update_video_cover(
Ok(Json(ApiResponse::success(())))
}
// API handler for base64 image data
async fn handler_image_base64(
Path(video_id): Path<i64>,
state: axum::extract::State<State>,
) -> Result<impl IntoResponse, StatusCode> {
// Fetch the video cover
let cover = match get_video_cover(state.0, video_id).await {
Ok(cover) => cover,
Err(_) => return Err(StatusCode::NOT_FOUND),
};
// Check whether this is a base64 data URL
if cover.starts_with("data:image/") {
if let Some(base64_start) = cover.find("base64,") {
let base64_data = &cover[base64_start + 7..]; // 跳过 "base64,"
// 解码base64数据
use base64::{Engine as _, engine::general_purpose};
if let Ok(image_data) = general_purpose::STANDARD.decode(base64_data) {
// Determine the MIME type
let content_type = if cover.contains("data:image/png") {
"image/png"
} else if cover.contains("data:image/jpeg") || cover.contains("data:image/jpg") {
"image/jpeg"
} else if cover.contains("data:image/gif") {
"image/gif"
} else if cover.contains("data:image/webp") {
"image/webp"
} else {
"image/png" // 默认
};
let mut response = axum::response::Response::new(axum::body::Body::from(image_data));
let headers = response.headers_mut();
headers.insert(
axum::http::header::CONTENT_TYPE,
content_type.parse().unwrap(),
);
headers.insert(
axum::http::header::CACHE_CONTROL,
"public, max-age=3600".parse().unwrap(),
);
return Ok(response);
}
}
}
Err(StatusCode::NOT_FOUND)
}
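A minimal client-side sketch of consuming this route, for illustration only — the host, port, and video id are assumptions (the route is registered below as GET /api/image/:video_id on the 0.0.0.0:3000 listener), and it assumes reqwest and tokio are available:

use reqwest::header::CONTENT_TYPE;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical video id; in practice it comes from get_videos/get_all_videos.
    let video_id = 42;
    let resp = reqwest::get(format!("http://127.0.0.1:3000/api/image/{}", video_id)).await?;
    if !resp.status().is_success() {
        println!("cover not found: {}", resp.status());
        return Ok(());
    }
    // The handler sets Content-Type from the decoded data URL plus a one-hour cache header.
    let mime = resp
        .headers()
        .get(CONTENT_TYPE)
        .and_then(|v| v.to_str().ok())
        .unwrap_or("application/octet-stream")
        .to_string();
    let bytes = resp.bytes().await?;
    println!("got {} bytes of {}", bytes.len(), mime);
    Ok(())
}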
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct GenerateVideoSubtitleRequest {
@@ -661,8 +858,8 @@ async fn handler_generate_video_subtitle(
state: axum::extract::State<State>,
Json(param): Json<GenerateVideoSubtitleRequest>,
) -> Result<Json<ApiResponse<String>>, ApiError> {
generate_video_subtitle(state.0, param.event_id.clone(), param.id).await?;
Ok(Json(ApiResponse::success(param.event_id)))
let result = generate_video_subtitle(state.0, param.event_id.clone(), param.id).await?;
Ok(Json(ApiResponse::success(result)))
}
#[derive(Debug, Serialize, Deserialize)]
@@ -718,6 +915,64 @@ async fn handler_encode_video_subtitle(
)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct ImportExternalVideoRequest {
event_id: String,
file_path: String,
title: String,
original_name: String,
size: i64,
room_id: u64,
}
async fn handler_import_external_video(
state: axum::extract::State<State>,
Json(param): Json<ImportExternalVideoRequest>,
) -> Result<Json<ApiResponse<String>>, ApiError> {
import_external_video(state.0, param.event_id.clone(), param.file_path.clone(), param.title, param.original_name, param.size, param.room_id).await?;
Ok(Json(ApiResponse::success(param.event_id)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct ClipVideoRequest {
event_id: String,
parent_video_id: i64,
start_time: f64,
end_time: f64,
clip_title: String,
}
async fn handler_clip_video(
state: axum::extract::State<State>,
Json(param): Json<ClipVideoRequest>,
) -> Result<Json<ApiResponse<String>>, ApiError> {
clip_video(
state.0,
param.event_id.clone(),
param.parent_video_id,
param.start_time,
param.end_time,
param.clip_title,
).await?;
Ok(Json(ApiResponse::success(param.event_id)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct GetFileSizeRequest {
file_path: String,
}
async fn handler_get_file_size(
_state: axum::extract::State<State>,
Json(param): Json<GetFileSizeRequest>,
) -> Result<Json<ApiResponse<u64>>, ApiError> {
let file_size = get_file_size(param.file_path).await?;
Ok(Json(ApiResponse::success(file_size)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct ConsoleLogRequest {
@@ -819,6 +1074,136 @@ async fn handler_export_danmu(
Ok(Json(ApiResponse::success(result)))
}
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
struct DeleteTaskRequest {
id: String,
}
async fn handler_delete_task(
state: axum::extract::State<State>,
Json(params): Json<DeleteTaskRequest>,
) -> Result<Json<ApiResponse<()>>, ApiError> {
delete_task(state.0, &params.id).await?;
Ok(Json(ApiResponse::success(())))
}
async fn handler_get_tasks(
state: axum::extract::State<State>,
) -> Result<Json<ApiResponse<Vec<TaskRow>>>, ApiError> {
let tasks = get_tasks(state.0).await?;
Ok(Json(ApiResponse::success(tasks)))
}
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
struct GenericFfmpegCommandRequest {
args: Vec<String>,
}
async fn handler_generic_ffmpeg_command(
state: axum::extract::State<State>,
Json(params): Json<GenericFfmpegCommandRequest>,
) -> Result<Json<ApiResponse<String>>, ApiError> {
let result = generic_ffmpeg_command(state.0, params.args).await?;
Ok(Json(ApiResponse::success(result)))
}
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
struct ListFolderRequest {
path: String,
}
async fn handler_list_folder(
state: axum::extract::State<State>,
Json(params): Json<ListFolderRequest>,
) -> Result<Json<ApiResponse<Vec<String>>>, ApiError> {
let result = list_folder(state.0, params.path).await?;
Ok(Json(ApiResponse::success(result)))
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct FileUploadResponse {
file_path: String,
file_name: String,
file_size: u64,
}
async fn handler_upload_file(
state: axum::extract::State<State>,
mut multipart: Multipart,
) -> Result<Json<ApiResponse<FileUploadResponse>>, ApiError> {
if state.readonly {
return Err(ApiError("Server is in readonly mode".to_string()));
}
let mut file_name = String::new();
let mut file_data = Vec::new();
let mut _room_id = 0u64;
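// The roomId field is read below but not otherwise used here; the underscore prefix silences the unused-variable lint.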
while let Some(field) = multipart.next_field().await.map_err(|e| e.to_string())? {
let name = field.name().unwrap_or("").to_string();
match name.as_str() {
"file" => {
file_name = field.file_name().unwrap_or("unknown").to_string();
file_data = field.bytes().await.map_err(|e| e.to_string())?.to_vec();
}
"roomId" => {
let room_id_str = field.text().await.map_err(|e| e.to_string())?;
_room_id = room_id_str.parse().unwrap_or(0);
}
_ => {}
}
}
if file_name.is_empty() || file_data.is_empty() {
return Err(ApiError("No file uploaded".to_string()));
}
// Create the upload directory
let config = state.config.read().await;
let upload_dir = std::path::Path::new(&config.cache).join("uploads");
if !upload_dir.exists() {
std::fs::create_dir_all(&upload_dir).map_err(|e| e.to_string())?;
}
// Generate a unique filename to avoid collisions
let timestamp = chrono::Utc::now().timestamp();
let extension = std::path::Path::new(&file_name)
.extension()
.and_then(|ext| ext.to_str())
.unwrap_or("");
let base_name = std::path::Path::new(&file_name)
.file_stem()
.and_then(|stem| stem.to_str())
.unwrap_or("upload");
let unique_filename = if extension.is_empty() {
format!("{}_{}", base_name, timestamp)
} else {
format!("{}_{}.{}", base_name, timestamp, extension)
};
let file_path = upload_dir.join(&unique_filename);
// Write the file to disk
tokio::fs::write(&file_path, &file_data).await.map_err(|e| e.to_string())?;
let file_size = file_data.len() as u64;
let file_path_str = file_path.to_string_lossy().to_string();
log::info!("File uploaded: {} ({} bytes)", file_path_str, file_size);
Ok(Json(ApiResponse::success(FileUploadResponse {
file_path: file_path_str,
file_name: unique_filename,
file_size,
})))
}
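For illustration only, a client-side upload against this handler could look as follows; the host, port, and file name are assumptions, the field names "file" and "roomId" match the multipart fields read above, and reqwest's multipart feature is assumed to be enabled:

use reqwest::multipart::{Form, Part};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical local file to upload into the cache's uploads directory.
    let data = tokio::fs::read("clip.mp4").await?;
    let form = Form::new()
        .part("file", Part::bytes(data).file_name("clip.mp4"))
        .text("roomId", "12345");
    let resp = reqwest::Client::new()
        .post("http://127.0.0.1:3000/api/upload_file")
        .multipart(form)
        .send()
        .await?;
    // On success the response carries the stored path, the timestamped file name, and the size.
    println!("{}", resp.text().await?);
    Ok(())
}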
async fn handler_hls(
state: axum::extract::State<State>,
Path(uri): Path<String>,
@@ -951,6 +1336,11 @@ async fn handler_output(
Some("m4v") => "video/x-m4v",
Some("mkv") => "video/x-matroska",
Some("avi") => "video/x-msvideo",
Some("jpg") | Some("jpeg") => "image/jpeg",
Some("png") => "image/png",
Some("gif") => "image/gif",
Some("webp") => "image/webp",
Some("svg") => "image/svg+xml",
_ => "application/octet-stream",
};
@@ -962,7 +1352,10 @@ async fn handler_output(
content_type.parse().unwrap(),
);
// Add Content-Disposition header to force download
// Only set Content-Disposition for non-media files to allow inline playback/display
if !matches!(content_type,
"video/mp4" | "video/webm" | "video/x-m4v" | "video/x-matroska" | "video/x-msvideo" |
"image/jpeg" | "image/png" | "image/gif" | "image/webp" | "image/svg+xml") {
let filename = path.file_name().and_then(|n| n.to_str()).unwrap_or("file");
headers.insert(
axum::http::header::CONTENT_DISPOSITION,
@@ -970,6 +1363,7 @@ async fn handler_output(
.parse()
.unwrap(),
);
}
let content_length = end - start + 1;
headers.insert(
@@ -1097,6 +1491,14 @@ pub async fn start_api_server(state: State) {
"/api/generate_video_subtitle",
post(handler_generate_video_subtitle),
)
.route(
"/api/generate_archive_subtitle",
post(handler_generate_archive_subtitle),
)
.route(
"/api/generic_ffmpeg_command",
post(handler_generic_ffmpeg_command),
)
.route(
"/api/update_video_subtitle",
post(handler_update_video_subtitle),
@@ -1106,6 +1508,11 @@ pub async fn start_api_server(state: State) {
"/api/encode_video_subtitle",
post(handler_encode_video_subtitle),
)
.route(
"/api/import_external_video",
post(handler_import_external_video),
)
.route("/api/clip_video", post(handler_clip_video))
.route("/api/update_notify", post(handler_update_notify))
.route(
"/api/update_status_check_interval",
@@ -1115,10 +1522,27 @@ pub async fn start_api_server(state: State) {
"/api/update_whisper_prompt",
post(handler_update_whisper_prompt),
)
.route(
"/api/update_subtitle_generator_type",
post(handler_update_subtitle_generator_type),
)
.route(
"/api/update_openai_api_endpoint",
post(handler_update_openai_api_endpoint),
)
.route(
"/api/update_openai_api_key",
post(handler_update_openai_api_key),
)
.route(
"/api/update_auto_generate",
post(handler_update_auto_generate),
);
)
.route(
"/api/update_whisper_language",
post(handler_update_whisper_language),
)
.route("/api/update_user_agent", post(handler_update_user_agent));
} else {
log::info!("Running in readonly mode, some api routes are disabled");
}
@@ -1135,6 +1559,10 @@ pub async fn start_api_server(state: State) {
.route("/api/get_room_info", post(handler_get_room_info))
.route("/api/get_archives", post(handler_get_archives))
.route("/api/get_archive", post(handler_get_archive))
.route(
"/api/get_archive_subtitle",
post(handler_get_archive_subtitle),
)
.route("/api/get_danmu_record", post(handler_get_danmu_record))
.route("/api/get_total_length", post(handler_get_total_length))
.route(
@@ -1146,13 +1574,21 @@ pub async fn start_api_server(state: State) {
.route("/api/clip_range", post(handler_clip_range))
.route("/api/get_video", post(handler_get_video))
.route("/api/get_videos", post(handler_get_videos))
.route("/api/get_video_cover", post(handler_get_video_cover))
.route("/api/get_all_videos", post(handler_get_all_videos))
.route("/api/get_video_typelist", post(handler_get_video_typelist))
.route("/api/get_video_subtitle", post(handler_get_video_subtitle))
.route("/api/get_file_size", post(handler_get_file_size))
.route("/api/delete_task", post(handler_delete_task))
.route("/api/get_tasks", post(handler_get_tasks))
.route("/api/export_danmu", post(handler_export_danmu))
// Utils commands
.route("/api/get_disk_info", post(handler_get_disk_info))
.route("/api/console_log", post(handler_console_log))
.route("/api/list_folder", post(handler_list_folder))
.route("/api/fetch", post(handler_fetch))
.route("/api/upload_file", post(handler_upload_file))
.route("/api/image/:video_id", get(handler_image_base64))
.route("/hls/*uri", get(handler_hls))
.route("/output/*uri", get(handler_output))
.route("/api/sse", get(handler_sse));
@@ -1163,8 +1599,21 @@ pub async fn start_api_server(state: State) {
.with_state(state);
let addr = "0.0.0.0:3000";
log::info!("API server listening on http://{}", addr);
log::info!("Starting API server on http://{}", addr);
let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
axum::serve(listener, router).await.unwrap();
let listener = match tokio::net::TcpListener::bind(addr).await {
Ok(listener) => {
log::info!("API server listening on http://{}", addr);
listener
}
Err(e) => {
log::error!("Failed to bind to address {}: {}", addr, e);
log::error!("Please check if the port is already in use or try a different port");
return;
}
};
if let Err(e) = axum::serve(listener, router).await {
log::error!("Server error: {}", e);
}
}

View File

@@ -26,6 +26,7 @@ use chrono::Utc;
use config::Config;
use database::Database;
use recorder::bilibili::client::BiliClient;
use recorder::PlatformType;
use recorder_manager::RecorderManager;
use simplelog::ConfigBuilder;
use state::State;
@@ -42,7 +43,6 @@ use std::os::windows::fs::MetadataExt;
#[cfg(feature = "gui")]
use {
recorder::PlatformType,
tauri::{Manager, WindowEvent},
tauri_plugin_sql::{Migration, MigrationKind},
};
@@ -117,6 +117,9 @@ async fn setup_logging(log_dir: &Path) -> Result<(), Box<dyn std::error::Error>>
),
])?;
// log the current package version
log::info!("Current version: {}", env!("CARGO_PKG_VERSION"));
Ok(())
}
@@ -141,6 +144,34 @@ fn get_migrations() -> Vec<Migration> {
sql: r#"ALTER TABLE recorders ADD COLUMN auto_start INTEGER NOT NULL DEFAULT 1;"#,
kind: MigrationKind::Up,
},
// add platform column to videos table
Migration {
version: 3,
description: "add_platform_column",
sql: r#"ALTER TABLE videos ADD COLUMN platform TEXT;"#,
kind: MigrationKind::Up,
},
// add task table to record encode/upload task
Migration {
version: 4,
description: "add_task_table",
sql: r#"CREATE TABLE tasks (id TEXT PRIMARY KEY, type TEXT, status TEXT, message TEXT, metadata TEXT, created_at TEXT);"#,
kind: MigrationKind::Up,
},
// add id_str column to support string IDs like Douyin sec_uid while keeping uid for Bilibili compatibility
Migration {
version: 5,
description: "add_id_str_column",
sql: r#"ALTER TABLE accounts ADD COLUMN id_str TEXT;"#,
kind: MigrationKind::Up,
},
// add extra column to recorders
Migration {
version: 6,
description: "add_extra_column_to_recorders",
sql: r#"ALTER TABLE recorders ADD COLUMN extra TEXT;"#,
kind: MigrationKind::Up,
},
]
}
@@ -188,7 +219,7 @@ async fn setup_server_state(args: Args) -> Result<State, Box<dyn std::error::Err
return Err(e.into());
}
};
let client = Arc::new(BiliClient::new()?);
let client = Arc::new(BiliClient::new(&config.user_agent)?);
let config = Arc::new(RwLock::new(config));
let db = Arc::new(Database::new());
// connect to sqlite database
@@ -214,10 +245,68 @@ async fn setup_server_state(args: Args) -> Result<State, Box<dyn std::error::Err
.expect("Failed to run migrations");
db.set(db_pool).await;
db.finish_pending_tasks().await?;
let progress_manager = Arc::new(ProgressManager::new());
let emitter = EventEmitter::new(progress_manager.get_event_sender());
let recorder_manager = Arc::new(RecorderManager::new(emitter, db.clone(), config.clone()));
// Update account infos for headless mode
let accounts = db.get_accounts().await?;
for account in accounts {
let platform = PlatformType::from_str(&account.platform).unwrap();
if platform == PlatformType::BiliBili {
match client.get_user_info(&account, account.uid).await {
Ok(account_info) => {
if let Err(e) = db
.update_account(
&account.platform,
account_info.user_id,
&account_info.user_name,
&account_info.user_avatar_url,
)
.await
{
log::error!("Error when updating Bilibili account info {}", e);
}
}
Err(e) => {
log::error!("Get Bilibili user info failed {}", e);
}
}
} else if platform == PlatformType::Douyin {
// Update Douyin account info
use crate::recorder::douyin::client::DouyinClient;
let douyin_client = DouyinClient::new(&config.read().await.user_agent, &account);
match douyin_client.get_user_info().await {
Ok(user_info) => {
let avatar_url = user_info
.avatar_thumb
.url_list
.first()
.cloned()
.unwrap_or_default();
if let Err(e) = db
.update_account_with_id_str(
&account,
&user_info.sec_uid,
&user_info.nickname,
&avatar_url,
)
.await
{
log::error!("Error when updating Douyin account info {}", e);
}
}
Err(e) => {
log::error!("Get Douyin user info failed {}", e);
}
}
}
}
let _ = try_rebuild_archives(&db, config.read().await.cache.clone().into()).await;
Ok(State {
@@ -252,7 +341,7 @@ async fn setup_app_state(app: &tauri::App) -> Result<State, Box<dyn std::error::
}
};
let client = Arc::new(BiliClient::new()?);
let client = Arc::new(BiliClient::new(&config.user_agent)?);
let config = Arc::new(RwLock::new(config));
let config_clone = config.clone();
let dbs = app.state::<tauri_plugin_sql::DbInstances>().inner();
@@ -266,6 +355,7 @@ async fn setup_app_state(app: &tauri::App) -> Result<State, Box<dyn std::error::
tauri_plugin_sql::DbPool::Sqlite(pool) => Some(pool),
};
db_clone.set(sqlite_pool.unwrap().clone()).await;
db_clone.finish_pending_tasks().await?;
let recorder_manager = Arc::new(RecorderManager::new(
app.app_handle().clone(),
@@ -288,12 +378,9 @@ async fn setup_app_state(app: &tauri::App) -> Result<State, Box<dyn std::error::
// update account infos
for account in accounts {
// only update bilibili account
let platform = PlatformType::from_str(&account.platform).unwrap();
if platform != PlatformType::BiliBili {
continue;
}
if platform == PlatformType::BiliBili {
match client_clone.get_user_info(&account, account.uid).await {
Ok(account_info) => {
if let Err(e) = db_clone
@@ -305,11 +392,41 @@ async fn setup_app_state(app: &tauri::App) -> Result<State, Box<dyn std::error::
)
.await
{
log::error!("Error when updating account info {}", e);
log::error!("Error when updating Bilibili account info {}", e);
}
}
Err(e) => {
log::error!("Get user info failed {}", e);
log::error!("Get Bilibili user info failed {}", e);
}
}
} else if platform == PlatformType::Douyin {
// Update Douyin account info
use crate::recorder::douyin::client::DouyinClient;
let douyin_client = DouyinClient::new(&config_clone.read().await.user_agent, &account);
match douyin_client.get_user_info().await {
Ok(user_info) => {
let avatar_url = user_info
.avatar_thumb
.url_list
.first()
.cloned()
.unwrap_or_default();
if let Err(e) = db_clone
.update_account_with_id_str(
&account,
&user_info.sec_uid,
&user_info.nickname,
&avatar_url,
)
.await
{
log::error!("Error when updating Douyin account info {}", e);
}
}
Err(e) => {
log::error!("Get Douyin user info failed {}", e);
}
}
}
}
@@ -362,7 +479,8 @@ fn setup_plugins(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<tauri::W
fn setup_event_handlers(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<tauri::Wry> {
builder.on_window_event(|window, event| {
if let WindowEvent::CloseRequested { api, .. } = event {
if !window.label().starts_with("Live") {
// main window is not closable
if window.label() == "main" {
window.hide().unwrap();
api.prevent_close();
}
@@ -387,8 +505,13 @@ fn setup_invoke_handlers(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<
crate::handlers::config::update_subtitle_setting,
crate::handlers::config::update_clip_name_format,
crate::handlers::config::update_whisper_prompt,
crate::handlers::config::update_subtitle_generator_type,
crate::handlers::config::update_openai_api_key,
crate::handlers::config::update_openai_api_endpoint,
crate::handlers::config::update_auto_generate,
crate::handlers::config::update_status_check_interval,
crate::handlers::config::update_whisper_language,
crate::handlers::config::update_user_agent,
crate::handlers::message::get_messages,
crate::handlers::message::read_message,
crate::handlers::message::delete_message,
@@ -398,6 +521,8 @@ fn setup_invoke_handlers(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<
crate::handlers::recorder::get_room_info,
crate::handlers::recorder::get_archives,
crate::handlers::recorder::get_archive,
crate::handlers::recorder::get_archive_subtitle,
crate::handlers::recorder::generate_archive_subtitle,
crate::handlers::recorder::delete_archive,
crate::handlers::recorder::get_danmu_record,
crate::handlers::recorder::export_danmu,
@@ -412,6 +537,8 @@ fn setup_invoke_handlers(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<
crate::handlers::video::cancel,
crate::handlers::video::get_video,
crate::handlers::video::get_videos,
crate::handlers::video::get_all_videos,
crate::handlers::video::get_video_cover,
crate::handlers::video::delete_video,
crate::handlers::video::get_video_typelist,
crate::handlers::video::update_video_cover,
@@ -419,12 +546,20 @@ fn setup_invoke_handlers(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<
crate::handlers::video::get_video_subtitle,
crate::handlers::video::update_video_subtitle,
crate::handlers::video::encode_video_subtitle,
crate::handlers::video::generic_ffmpeg_command,
crate::handlers::video::import_external_video,
crate::handlers::video::clip_video,
crate::handlers::video::get_file_size,
crate::handlers::task::get_tasks,
crate::handlers::task::delete_task,
crate::handlers::utils::show_in_folder,
crate::handlers::utils::export_to_file,
crate::handlers::utils::get_disk_info,
crate::handlers::utils::open_live,
crate::handlers::utils::open_clip,
crate::handlers::utils::open_log_folder,
crate::handlers::utils::console_log,
crate::handlers::utils::list_folder,
])
}
@@ -432,7 +567,7 @@ fn setup_invoke_handlers(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<
fn main() -> Result<(), Box<dyn std::error::Error>> {
let _ = fix_path_env::fix();
let builder = tauri::Builder::default();
let builder = tauri::Builder::default().plugin(tauri_plugin_deep_link::init());
let builder = setup_plugins(builder);
let builder = setup_event_handlers(builder);
let builder = setup_invoke_handlers(builder);

View File

@@ -81,6 +81,11 @@ pub trait Recorder: Send + Sync + 'static {
async fn info(&self) -> RecorderInfo;
async fn comments(&self, live_id: &str) -> Result<Vec<DanmuEntry>, errors::RecorderError>;
async fn is_recording(&self, live_id: &str) -> bool;
async fn get_archive_subtitle(&self, live_id: &str) -> Result<String, errors::RecorderError>;
async fn generate_archive_subtitle(
&self,
live_id: &str,
) -> Result<String, errors::RecorderError>;
async fn enable(&self);
async fn disable(&self);
}

View File

@@ -6,9 +6,11 @@ use super::entry::{EntryStore, Range};
use super::errors::RecorderError;
use super::PlatformType;
use crate::database::account::AccountRow;
use crate::ffmpeg::get_video_resolution;
use crate::progress_manager::Event;
use crate::progress_reporter::EventEmitter;
use crate::recorder_manager::RecorderEvent;
use crate::subtitle_generator::item_to_srt;
use super::danmu::{DanmuEntry, DanmuStorage};
use super::entry::TsEntry;
@@ -20,9 +22,12 @@ use danmu_stream::DanmuMessageType;
use errors::BiliClientError;
use m3u8_rs::{Playlist, QuotedOrUnquoted, VariantStream};
use regex::Regex;
use std::path::Path;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio::fs::File;
use tokio::io::{AsyncReadExt, AsyncWriteExt, BufReader};
use tokio::sync::{broadcast, Mutex, RwLock};
use tokio::task::JoinHandle;
use url::Url;
@@ -64,9 +69,12 @@ pub struct BiliRecorder {
danmu_storage: Arc<RwLock<Option<DanmuStorage>>>,
live_end_channel: broadcast::Sender<RecorderEvent>,
enabled: Arc<RwLock<bool>>,
last_segment_offset: Arc<RwLock<Option<i64>>>, // offset of the last segment processed in the previous pass
current_header_info: Arc<RwLock<Option<HeaderInfo>>>, // stores the current header url and resolution
danmu_task: Arc<Mutex<Option<JoinHandle<()>>>>,
record_task: Arc<Mutex<Option<JoinHandle<()>>>>,
master_manifest: Arc<RwLock<Option<String>>>,
}
impl From<DatabaseError> for super::errors::RecorderError {
@@ -93,9 +101,15 @@ pub struct BiliRecorderOptions {
pub channel: broadcast::Sender<RecorderEvent>,
}
#[derive(Debug, Clone)]
struct HeaderInfo {
url: String,
resolution: String,
}
impl BiliRecorder {
pub async fn new(options: BiliRecorderOptions) -> Result<Self, super::errors::RecorderError> {
let client = BiliClient::new()?;
let client = BiliClient::new(&options.config.read().await.user_agent)?;
let room_info = client
.get_room_info(&options.account, options.room_id)
.await?;
@@ -136,9 +150,11 @@ impl BiliRecorder {
danmu_storage: Arc::new(RwLock::new(None)),
live_end_channel: options.channel,
enabled: Arc::new(RwLock::new(options.auto_start)),
last_segment_offset: Arc::new(RwLock::new(None)),
current_header_info: Arc::new(RwLock::new(None)),
danmu_task: Arc::new(Mutex::new(None)),
record_task: Arc::new(Mutex::new(None)),
master_manifest: Arc::new(RwLock::new(None)),
};
log::info!("Recorder for room {} created.", options.room_id);
Ok(recorder)
@@ -150,6 +166,8 @@ impl BiliRecorder {
*self.live_stream.write().await = None;
*self.last_update.write().await = Utc::now().timestamp();
*self.danmu_storage.write().await = None;
*self.last_segment_offset.write().await = None;
*self.current_header_info.write().await = None;
}
async fn should_record(&self) -> bool {
@@ -255,10 +273,12 @@ impl BiliRecorder {
return true;
}
let master_manifest =
m3u8_rs::parse_playlist_res(master_manifest.as_ref().unwrap().as_bytes())
let master_manifest = master_manifest.unwrap();
*self.master_manifest.write().await = Some(master_manifest.clone());
let master_manifest = m3u8_rs::parse_playlist_res(master_manifest.as_bytes())
.map_err(|_| super::errors::RecorderError::M3u8ParseFailed {
content: master_manifest.as_ref().unwrap().clone(),
content: master_manifest.clone(),
});
if master_manifest.is_err() {
log::error!(
@@ -313,26 +333,12 @@ impl BiliRecorder {
let stream = new_stream.unwrap();
let should_update_stream = self.live_stream.read().await.is_none()
|| !self
.live_stream
.read()
.await
.as_ref()
.unwrap()
.is_same(&stream)
|| self.force_update.load(Ordering::Relaxed);
if should_update_stream {
log::info!(
"[{}]Update to a new stream: {:?} => {}",
self.room_id,
self.live_stream.read().await.clone(),
stream
);
self.force_update.store(false, Ordering::Relaxed);
let new_stream = self.fetch_real_stream(stream).await;
let new_stream = self.fetch_real_stream(&stream).await;
if new_stream.is_err() {
log::error!(
"[{}]Fetch real stream failed: {}",
@@ -345,6 +351,13 @@ impl BiliRecorder {
let new_stream = new_stream.unwrap();
*self.live_stream.write().await = Some(new_stream);
*self.last_update.write().await = Utc::now().timestamp();
log::info!(
"[{}]Update to a new stream: {:?} => {}",
self.room_id,
self.live_stream.read().await.clone(),
stream
);
}
true
@@ -399,13 +412,14 @@ impl BiliRecorder {
if let Ok(Some(msg)) = danmu_stream.recv().await {
match msg {
DanmuMessageType::DanmuMessage(danmu) => {
let ts = Utc::now().timestamp_millis();
self.emitter.emit(&Event::DanmuReceived {
room: self.room_id,
ts: danmu.timestamp,
ts,
content: danmu.message.clone(),
});
if let Some(storage) = self.danmu_storage.write().await.as_ref() {
storage.add_line(danmu.timestamp, &danmu.message).await;
storage.add_line(ts, &danmu.message).await;
}
}
}
@@ -450,6 +464,10 @@ impl BiliRecorder {
}
Err(e) => {
log::error!("Failed fetching index content from {}", stream.index());
log::error!(
"Master manifest: {}",
self.master_manifest.read().await.as_ref().unwrap()
);
Err(super::errors::RecorderError::BiliClientError { err: e })
}
}
@@ -461,6 +479,7 @@ impl BiliRecorder {
return Err(super::errors::RecorderError::NoStreamAvailable);
}
let stream = stream.unwrap();
let index_content = self
.client
.read()
@@ -475,6 +494,7 @@ impl BiliRecorder {
url: stream.index(),
});
}
let mut header_url = String::from("");
let re = Regex::new(r"h.*\.m4s").unwrap();
if let Some(captures) = re.captures(&index_content) {
@@ -483,12 +503,24 @@ impl BiliRecorder {
if header_url.is_empty() {
log::warn!("Parse header url failed: {}", index_content);
}
Ok(header_url)
}
async fn get_resolution(
&self,
header_url: &str,
) -> Result<String, super::errors::RecorderError> {
log::debug!("Get resolution from {}", header_url);
let resolution = get_video_resolution(header_url)
.await
.map_err(|e| super::errors::RecorderError::FfmpegError { err: e })?;
Ok(resolution)
}
async fn fetch_real_stream(
&self,
stream: BiliStream,
stream: &BiliStream,
) -> Result<BiliStream, super::errors::RecorderError> {
let index_content = self
.client
@@ -497,16 +529,9 @@ impl BiliRecorder {
.get_index_content(&self.account, &stream.index())
.await?;
if index_content.is_empty() {
return Err(super::errors::RecorderError::InvalidStream { stream });
}
let index_content = self
.client
.read()
.await
.get_index_content(&self.account, &stream.index())
.await?;
if index_content.is_empty() {
return Err(super::errors::RecorderError::InvalidStream { stream });
return Err(super::errors::RecorderError::InvalidStream {
stream: stream.clone(),
});
}
if index_content.contains("Not Found") {
return Err(super::errors::RecorderError::IndexNotFound {
@@ -517,27 +542,23 @@ impl BiliRecorder {
// this index content provides another m3u8 url
// example: https://765b047cec3b099771d4b1851136046f.v.smtcdns.net/d1--cn-gotcha204-3.bilivideo.com/live-bvc/246284/live_1323355750_55526594/index.m3u8?expires=1741318366&len=0&oi=1961017843&pt=h5&qn=10000&trid=1007049a5300422eeffd2d6995d67b67ca5a&sigparams=cdn,expires,len,oi,pt,qn,trid&cdn=cn-gotcha204&sign=7ef1241439467ef27d3c804c1eda8d4d&site=1c89ef99adec13fab3a3592ee4db26d3&free_type=0&mid=475210&sche=ban&bvchls=1&trace=16&isp=ct&rg=East&pv=Shanghai&source=puv3_onetier&p2p_type=-1&score=1&suffix=origin&deploy_env=prod&flvsk=e5c4d6fb512ed7832b706f0a92f7a8c8&sk=246b3930727a89629f17520b1b551a2f&pp=rtmp&hot_cdn=57345&origin_bitrate=657300&sl=1&info_source=cache&vd=bc&src=puv3&order=1&TxLiveCode=cold_stream&TxDispType=3&svr_type=live_oc&tencent_test_client_ip=116.226.193.243&dispatch_from=OC_MGR61.170.74.11&utime=1741314857497
let new_url = index_content.lines().last().unwrap();
let base_url = new_url.split('/').next().unwrap();
let host = base_url.split('/').next().unwrap();
// extra is params after index.m3u8
let extra = new_url.split(base_url).last().unwrap();
let new_stream = BiliStream::new(StreamType::FMP4, base_url, host, extra);
return Box::pin(self.fetch_real_stream(new_stream)).await;
}
Ok(stream)
}
async fn extract_liveid(&self, header_url: &str) -> i64 {
log::debug!("[{}]Extract liveid from {}", self.room_id, header_url);
let re = Regex::new(r"h(\d+).m4s").unwrap();
if let Some(cap) = re.captures(header_url) {
let liveid: i64 = cap.get(1).unwrap().as_str().parse().unwrap();
*self.live_id.write().await = liveid.to_string();
liveid
} else {
log::error!("Extract liveid failed: {}", header_url);
0
// extract host: cn-gotcha204-3.bilivideo.com
let host = new_url.split('/').nth(2).unwrap_or_default();
let extra = new_url.split('?').nth(1).unwrap_or_default();
// extract base url: live-bvc/246284/live_1323355750_55526594/
let base_url = new_url
.split('/')
.skip(3)
.take_while(|&part| !part.contains('?') && part != "index.m3u8")
.collect::<Vec<&str>>()
.join("/")
+ "/";
let new_stream = BiliStream::new(StreamType::FMP4, base_url.as_str(), host, extra);
return Box::pin(self.fetch_real_stream(&new_stream)).await;
}
Ok(stream.clone())
}
async fn get_work_dir(&self, live_id: &str) -> String {
@@ -557,8 +578,24 @@ impl BiliRecorder {
}
let current_stream = current_stream.unwrap();
let parsed = self.get_playlist().await;
if parsed.is_err() {
self.force_update.store(true, Ordering::Relaxed);
return Err(parsed.err().unwrap());
}
let playlist = parsed.unwrap();
let mut timestamp: i64 = self.live_id.read().await.parse::<i64>().unwrap_or(0);
let mut work_dir = self.get_work_dir(timestamp.to_string().as_str()).await;
let mut work_dir;
let mut is_first_record = false;
// Get url from EXT-X-MAP
let header_url = self.get_header_url().await?;
if header_url.is_empty() {
return Err(super::errors::RecorderError::EmptyHeader);
}
let full_header_url = current_stream.ts_url(&header_url);
// Download the header if no header entry has been stored yet
if (self.entry_store.read().await.as_ref().is_none()
|| self
@@ -571,16 +608,52 @@ impl BiliRecorder {
.is_none())
&& current_stream.format == StreamType::FMP4
{
// Get url from EXT-X-MAP
let header_url = self.get_header_url().await?;
if header_url.is_empty() {
return Err(super::errors::RecorderError::EmptyHeader);
timestamp = Utc::now().timestamp_millis();
*self.live_id.write().await = timestamp.to_string();
work_dir = self.get_work_dir(timestamp.to_string().as_str()).await;
is_first_record = true;
let file_name = header_url.split('/').next_back().unwrap();
let mut header = TsEntry {
url: file_name.to_string(),
sequence: 0,
length: 0.0,
size: 0,
ts: timestamp,
is_header: true,
};
// Create work directory before download
tokio::fs::create_dir_all(&work_dir)
.await
.map_err(|e| super::errors::RecorderError::IoError { err: e })?;
// Download header
match self
.client
.read()
.await
.download_ts(&full_header_url, &format!("{}/{}", work_dir, file_name))
.await
{
Ok(size) => {
if size == 0 {
log::error!("Download header failed: {}", full_header_url);
// Clean up empty directory since header download failed
if let Err(cleanup_err) = tokio::fs::remove_dir_all(&work_dir).await {
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
timestamp = self.extract_liveid(&header_url).await;
if timestamp == 0 {
log::error!("[{}]Parse timestamp failed: {}", self.room_id, header_url);
return Err(super::errors::RecorderError::InvalidTimestamp);
return Err(super::errors::RecorderError::InvalidStream {
stream: current_stream,
});
}
header.size = size;
// Now that download succeeded, create the record and setup stores
self.db
.add_record(
PlatformType::BiliBili,
@@ -591,35 +664,14 @@ impl BiliRecorder {
None,
)
.await?;
// now work dir is confirmed
work_dir = self.get_work_dir(timestamp.to_string().as_str()).await;
let entry_store = EntryStore::new(&work_dir).await;
*self.entry_store.write().await = Some(entry_store);
// danmau file
// danmu file
let danmu_file_path = format!("{}{}", work_dir, "danmu.txt");
*self.danmu_storage.write().await = DanmuStorage::new(&danmu_file_path).await;
let full_header_url = current_stream.ts_url(&header_url);
let file_name = header_url.split('/').next_back().unwrap();
let mut header = TsEntry {
url: file_name.to_string(),
sequence: 0,
length: 0.0,
size: 0,
ts: timestamp,
is_header: true,
};
// Download header
match self
.client
.read()
.await
.download_ts(&full_header_url, &format!("{}/{}", work_dir, file_name))
.await
{
Ok(size) => {
header.size = size;
self.entry_store
.write()
.await
@@ -627,68 +679,200 @@ impl BiliRecorder {
.unwrap()
.add_entry(header)
.await;
let new_resolution = self.get_resolution(&full_header_url).await?;
log::info!(
"[{}] Initial header resolution: {} {}",
self.room_id,
header_url,
new_resolution
);
*self.current_header_info.write().await = Some(HeaderInfo {
url: header_url.clone(),
resolution: new_resolution,
});
}
Err(e) => {
log::error!("Download header failed: {}", e);
// Clean up empty directory since header download failed
if let Err(cleanup_err) = tokio::fs::remove_dir_all(&work_dir).await {
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
return Err(e.into());
}
}
} else {
work_dir = self.get_work_dir(timestamp.to_string().as_str()).await;
// For non-FMP4 streams, check if we need to initialize
if self.entry_store.read().await.as_ref().is_none() {
timestamp = Utc::now().timestamp_millis();
*self.live_id.write().await = timestamp.to_string();
work_dir = self.get_work_dir(timestamp.to_string().as_str()).await;
is_first_record = true;
}
}
// check resolution change
let current_header_info = self.current_header_info.read().await.clone();
if current_header_info.is_some() {
let current_header_info = current_header_info.unwrap();
if current_header_info.url != header_url {
let new_resolution = self.get_resolution(&full_header_url).await?;
log::debug!(
"[{}] Header url changed: {} => {}, resolution: {} => {}",
self.room_id,
current_header_info.url,
header_url,
current_header_info.resolution,
new_resolution
);
if current_header_info.resolution != new_resolution {
self.reset().await;
return Err(super::errors::RecorderError::ResolutionChanged {
err: format!(
"Resolution changed: {} => {}",
current_header_info.resolution, new_resolution
),
});
}
}
}
match parsed {
Ok(Playlist::MasterPlaylist(pl)) => log::debug!("Master playlist:\n{:?}", pl),
Ok(Playlist::MediaPlaylist(pl)) => {
match playlist {
Playlist::MasterPlaylist(pl) => log::debug!("Master playlist:\n{:?}", pl),
Playlist::MediaPlaylist(pl) => {
let mut new_segment_fetched = false;
let mut sequence = pl.media_sequence;
let last_sequence = self
.entry_store
.read()
.await
.as_ref()
.unwrap()
.last_sequence();
for ts in pl.segments {
if sequence <= last_sequence {
sequence += 1;
continue;
}
new_segment_fetched = true;
.map(|store| store.last_sequence)
.unwrap_or(0); // For first-time recording, start from 0
// Parse BILI-AUX offsets to calculate precise durations for FMP4
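// Each #EXT-X-BILI-AUX value is a '|'-separated string whose first field is a hex offset in milliseconds from the stream start.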
let mut segment_offsets = Vec::new();
for ts in pl.segments.iter() {
let mut seg_offset: i64 = 0;
for tag in ts.unknown_tags {
for tag in &ts.unknown_tags {
if tag.tag == "BILI-AUX" {
if let Some(rest) = tag.rest {
if let Some(rest) = &tag.rest {
let parts: Vec<&str> = rest.split('|').collect();
if parts.is_empty() {
continue;
if !parts.is_empty() {
let offset_hex = parts.first().unwrap();
if let Ok(offset) = i64::from_str_radix(offset_hex, 16) {
seg_offset = offset;
}
}
let offset_hex = parts.first().unwrap().to_string();
seg_offset = i64::from_str_radix(&offset_hex, 16).unwrap();
}
break;
}
}
segment_offsets.push(seg_offset);
}
// Stream start timestamp (live_start_time from room info), used for FMP4 timestamp calculation
let stream_start_timestamp = self.room_info.read().await.live_start_time;
// Get the last segment offset from previous processing
let mut last_offset = *self.last_segment_offset.read().await;
for (i, ts) in pl.segments.iter().enumerate() {
let sequence = pl.media_sequence + i as u64;
if sequence <= last_sequence {
continue;
}
let ts_url = current_stream.ts_url(&ts.uri);
if Url::parse(&ts_url).is_err() {
log::error!("Ts url is invalid. ts_url={} original={}", ts_url, ts.uri);
continue;
}
// Calculate precise timestamp from stream start + BILI-AUX offset for FMP4
let ts_mili = if current_stream.format == StreamType::FMP4
&& stream_start_timestamp > 0
&& i < segment_offsets.len()
{
let seg_offset = segment_offsets[i];
stream_start_timestamp * 1000 + seg_offset
} else {
// Fallback to current time if parsing fails or not FMP4
Utc::now().timestamp_millis()
};
// encode segment offset into filename
let file_name = ts.uri.split('/').next_back().unwrap_or(&ts.uri);
let mut ts_length = pl.target_duration as f64;
let ts = timestamp * 1000 + seg_offset;
// calculate entry length using offset
// the default #EXTINF is 1.0, which is not accurate
if let Some(last) = self.entry_store.read().await.as_ref().unwrap().last_ts() {
// skip this entry as it is already in cache or stream changed
if ts <= last {
continue;
}
ts_length = (ts - last) as f64 / 1000.0;
}
let ts_length = pl.target_duration as f64;
// Calculate precise duration from BILI-AUX offsets for FMP4
let precise_length_from_aux =
if current_stream.format == StreamType::FMP4 && i < segment_offsets.len() {
let current_offset = segment_offsets[i];
// Get the previous offset for duration calculation
let prev_offset = if i > 0 {
// Use previous segment in current M3U8
Some(segment_offsets[i - 1])
} else {
// Use saved last offset from previous M3U8 processing
last_offset
};
if let Some(prev) = prev_offset {
let duration_ms = current_offset - prev;
if duration_ms > 0 {
Some(duration_ms as f64 / 1000.0) // Convert ms to seconds
} else {
None
}
} else {
// No previous offset available, use target duration
None
}
} else {
None
};
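// Illustrative arithmetic for the calculation above: consecutive BILI-AUX offsets 0x2e9a8 (190_888 ms) and 0x2f7f0 (194_544 ms) yield a segment length of 3.656 s.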
let client = self.client.clone();
let mut retry = 0;
let mut work_dir_created_for_non_fmp4 = false;
// For non-FMP4 streams, create record on first successful ts download
if is_first_record && current_stream.format != StreamType::FMP4 {
// Create work directory before first ts download
tokio::fs::create_dir_all(&work_dir)
.await
.map_err(|e| super::errors::RecorderError::IoError { err: e })?;
work_dir_created_for_non_fmp4 = true;
}
loop {
if retry > 3 {
log::error!("Download ts failed after retry");
// Clean up empty directory if first ts download failed for non-FMP4
if is_first_record
&& current_stream.format != StreamType::FMP4
&& work_dir_created_for_non_fmp4
{
if let Err(cleanup_err) = tokio::fs::remove_dir_all(&work_dir).await
{
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
}
break;
}
match client
@@ -700,11 +884,84 @@ impl BiliRecorder {
Ok(size) => {
if size == 0 {
log::error!("Segment with size 0, stream might be corrupted");
// Clean up empty directory if first ts download failed for non-FMP4
if is_first_record
&& current_stream.format != StreamType::FMP4
&& work_dir_created_for_non_fmp4
{
if let Err(cleanup_err) =
tokio::fs::remove_dir_all(&work_dir).await
{
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
}
return Err(super::errors::RecorderError::InvalidStream {
stream: current_stream,
});
}
// Create record and setup stores on first successful download for non-FMP4
if is_first_record && current_stream.format != StreamType::FMP4 {
self.db
.add_record(
PlatformType::BiliBili,
timestamp.to_string().as_str(),
self.room_id,
&self.room_info.read().await.room_title,
self.cover.read().await.clone(),
None,
)
.await?;
let entry_store = EntryStore::new(&work_dir).await;
*self.entry_store.write().await = Some(entry_store);
// danmu file
let danmu_file_path = format!("{}{}", work_dir, "danmu.txt");
*self.danmu_storage.write().await =
DanmuStorage::new(&danmu_file_path).await;
is_first_record = false;
}
// Get precise duration - prioritize BILI-AUX for FMP4, fallback to ffprobe if needed
let precise_length = if let Some(aux_duration) =
precise_length_from_aux
{
aux_duration
} else if current_stream.format != StreamType::FMP4 {
// For regular TS segments, use direct ffprobe
let file_path = format!("{}/{}", work_dir, file_name);
match crate::ffmpeg::get_segment_duration(std::path::Path::new(
&file_path,
))
.await
{
Ok(duration) => {
log::debug!(
"Precise TS segment duration: {}s (original: {}s)",
duration,
ts_length
);
duration
}
Err(e) => {
log::warn!("Failed to get precise TS duration for {}: {}, using fallback", file_name, e);
ts_length
}
}
} else {
// FMP4 segment without BILI-AUX info, use fallback
log::debug!("No BILI-AUX data available for FMP4 segment {}, using target duration", file_name);
ts_length
};
self.entry_store
.write()
.await
@@ -713,26 +970,56 @@ impl BiliRecorder {
.add_entry(TsEntry {
url: file_name.into(),
sequence,
length: ts_length,
length: precise_length,
size,
ts,
ts: ts_mili,
is_header: false,
})
.await;
// Update last offset for next segment calculation
if current_stream.format == StreamType::FMP4
&& i < segment_offsets.len()
{
last_offset = Some(segment_offsets[i]);
}
new_segment_fetched = true;
break;
}
Err(e) => {
retry += 1;
log::warn!("Download ts failed, retry {}: {}", retry, e);
}
}
}
sequence += 1;
// If this is the last retry and it's the first record for non-FMP4, clean up
if retry > 3
&& is_first_record
&& current_stream.format != StreamType::FMP4
&& work_dir_created_for_non_fmp4
{
if let Err(cleanup_err) =
tokio::fs::remove_dir_all(&work_dir).await
{
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
}
}
}
}
}
if new_segment_fetched {
*self.last_update.write().await = Utc::now().timestamp();
// Save the last offset for next M3U8 processing
if current_stream.format == StreamType::FMP4 {
*self.last_segment_offset.write().await = last_offset;
}
self.db
.update_record(
timestamp.to_string().as_str(),
@@ -755,7 +1042,8 @@ impl BiliRecorder {
}
}
// check the current stream is too slow or not
if let Some(last_ts) = self.entry_store.read().await.as_ref().unwrap().last_ts() {
if let Some(entry_store) = self.entry_store.read().await.as_ref() {
if let Some(last_ts) = entry_store.last_ts() {
if last_ts < Utc::now().timestamp() - 10 {
log::error!("Stream is too slow, last entry ts is at {}", last_ts);
return Err(super::errors::RecorderError::SlowStream {
@@ -764,9 +1052,6 @@ impl BiliRecorder {
}
}
}
Err(e) => {
self.force_update.store(true, Ordering::Relaxed);
return Err(e);
}
}
@@ -816,11 +1101,12 @@ impl BiliRecorder {
None
};
self.entry_store.read().await.as_ref().unwrap().manifest(
!live_status || range.is_some(),
true,
range,
)
if let Some(entry_store) = self.entry_store.read().await.as_ref() {
entry_store.manifest(!live_status || range.is_some(), true, range)
} else {
// Return empty manifest if entry_store is not initialized yet
"#EXTM3U\n#EXT-X-VERSION:3\n".to_string()
}
}
}
@@ -877,10 +1163,8 @@ impl super::Recorder for BiliRecorder {
continue;
}
tokio::time::sleep(Duration::from_secs(
self_clone.config.read().await.status_check_interval,
))
.await;
let interval = self_clone.config.read().await.status_check_interval;
tokio::time::sleep(Duration::from_secs(interval)).await;
}
}));
}
@@ -968,7 +1252,11 @@ impl super::Recorder for BiliRecorder {
Ok(if live_id == *self.live_id.read().await {
// just return current cache content
match self.danmu_storage.read().await.as_ref() {
Some(storage) => storage.get_entries().await,
Some(storage) => {
storage
.get_entries(self.first_segment_ts(live_id).await)
.await
}
None => Vec::new(),
}
} else {
@@ -986,7 +1274,9 @@ impl super::Recorder for BiliRecorder {
return Ok(Vec::new());
}
let storage = storage.unwrap();
storage.get_entries().await
storage
.get_entries(self.first_segment_ts(live_id).await)
.await
})
}
@@ -994,6 +1284,92 @@ impl super::Recorder for BiliRecorder {
*self.live_id.read().await == live_id && *self.live_status.read().await
}
async fn get_archive_subtitle(
&self,
live_id: &str,
) -> Result<String, super::errors::RecorderError> {
// read subtitle file under work_dir
let work_dir = self.get_work_dir(live_id).await;
let subtitle_file_path = format!("{}/{}", work_dir, "subtitle.srt");
let subtitle_file = File::open(subtitle_file_path).await;
if subtitle_file.is_err() {
return Err(super::errors::RecorderError::SubtitleNotFound {
live_id: live_id.to_string(),
});
}
let subtitle_file = subtitle_file.unwrap();
let mut subtitle_file = BufReader::new(subtitle_file);
let mut subtitle_content = String::new();
subtitle_file.read_to_string(&mut subtitle_content).await?;
Ok(subtitle_content)
}
async fn generate_archive_subtitle(
&self,
live_id: &str,
) -> Result<String, super::errors::RecorderError> {
// generate subtitle file under work_dir
let work_dir = self.get_work_dir(live_id).await;
let subtitle_file_path = format!("{}/{}", work_dir, "subtitle.srt");
let mut subtitle_file = File::create(subtitle_file_path).await?;
// first build a temporary clip to run the subtitle generator on
// generate a tmp m3u8 index file
let m3u8_index_file_path = format!("{}/{}", work_dir, "tmp.m3u8");
let m3u8_content = self.m3u8_content(live_id, 0, 0).await;
tokio::fs::write(&m3u8_index_file_path, m3u8_content).await?;
log::info!("M3U8 index file generated: {}", m3u8_index_file_path);
// generate a tmp clip file
let clip_file_path = format!("{}/{}", work_dir, "tmp.mp4");
if let Err(e) = crate::ffmpeg::clip_from_m3u8(
None::<&crate::progress_reporter::ProgressReporter>,
Path::new(&m3u8_index_file_path),
Path::new(&clip_file_path),
None,
false,
)
.await
{
return Err(super::errors::RecorderError::SubtitleGenerationFailed {
error: e.to_string(),
});
}
log::info!("Temp clip file generated: {}", clip_file_path);
// generate subtitle file
let config = self.config.read().await;
let result = crate::ffmpeg::generate_video_subtitle(
None,
Path::new(&clip_file_path),
"whisper",
&config.whisper_model,
&config.whisper_prompt,
&config.openai_api_key,
&config.openai_api_endpoint,
&config.whisper_language,
)
.await;
// write subtitle file
if let Err(e) = result {
return Err(super::errors::RecorderError::SubtitleGenerationFailed {
error: e.to_string(),
});
}
log::info!("Subtitle generated");
let result = result.unwrap();
let subtitle_content = result
.subtitle_content
.iter()
.map(item_to_srt)
.collect::<Vec<String>>()
.join("");
subtitle_file.write_all(subtitle_content.as_bytes()).await?;
log::info!("Subtitle file written");
// remove tmp file
tokio::fs::remove_file(&m3u8_index_file_path).await?;
tokio::fs::remove_file(&clip_file_path).await?;
log::info!("Tmp file removed");
Ok(subtitle_content)
}
async fn enable(&self) {
*self.enabled.write().await = true;
}

View File

@@ -10,6 +10,7 @@ use crate::database::account::AccountRow;
use crate::progress_reporter::ProgressReporter;
use crate::progress_reporter::ProgressReporterTrait;
use base64::Engine;
use chrono::TimeZone;
use pct_str::PctString;
use pct_str::URIReserved;
use regex::Regex;
@@ -42,6 +43,7 @@ pub struct RoomInfo {
pub room_keyframe_url: String,
pub room_title: String,
pub user_id: u64,
pub live_start_time: i64,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
@@ -138,25 +140,12 @@ impl BiliStream {
}
})
}
pub fn is_same(&self, other: &BiliStream) -> bool {
// Extract live_id part from path (e.g., live_1848752274_71463808)
let get_live_id = |path: &str| {
path.split('/')
.find(|part| part.starts_with("live_"))
.unwrap_or("")
.to_string()
};
let self_live_id = get_live_id(&self.path);
let other_live_id = get_live_id(&other.path);
self_live_id == other_live_id
}
}
impl BiliClient {
pub fn new() -> Result<BiliClient, BiliClientError> {
pub fn new(user_agent: &str) -> Result<BiliClient, BiliClientError> {
let mut headers = reqwest::header::HeaderMap::new();
headers.insert("user-agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36".parse().unwrap());
headers.insert("user-agent", user_agent.parse().unwrap());
if let Ok(client) = Client::builder().timeout(Duration::from_secs(10)).build() {
Ok(BiliClient { client, headers })
@@ -214,7 +203,11 @@ impl BiliClient {
pub async fn logout(&self, account: &AccountRow) -> Result<(), BiliClientError> {
let url = "https://passport.bilibili.com/login/exit/v2";
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let params = [("csrf", account.csrf.clone())];
let _ = self
.client
@@ -241,7 +234,11 @@ impl BiliClient {
});
let params = self.get_sign(params).await?;
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let resp = self
.client
.get(format!(
@@ -283,7 +280,11 @@ impl BiliClient {
room_id: u64,
) -> Result<RoomInfo, BiliClientError> {
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let response = self
.client
.get(format!(
@@ -332,6 +333,22 @@ impl BiliClient {
let live_status = res["data"]["live_status"]
.as_u64()
.ok_or(BiliClientError::InvalidValue)? as u8;
// "live_time": "2025-08-09 18:33:35",
let live_start_time_str = res["data"]["live_time"]
.as_str()
.ok_or(BiliClientError::InvalidValue)?;
let live_start_time = if live_start_time_str == "0000-00-00 00:00:00" {
0
} else {
let naive =
chrono::NaiveDateTime::parse_from_str(live_start_time_str, "%Y-%m-%d %H:%M:%S")
.map_err(|_| BiliClientError::InvalidValue)?;
chrono::Local
.from_local_datetime(&naive)
.earliest()
.ok_or(BiliClientError::InvalidValue)?
.timestamp()
};
Ok(RoomInfo {
room_id,
room_title,
@@ -339,6 +356,7 @@ impl BiliClient {
room_keyframe_url,
user_id,
live_status,
live_start_time,
})
}
@@ -359,7 +377,11 @@ impl BiliClient {
url: &String,
) -> Result<String, BiliClientError> {
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let response = self
.client
.get(url.to_owned())
@@ -382,11 +404,11 @@ impl BiliClient {
.headers(self.headers.clone())
.send()
.await?;
let mut file = std::fs::File::create(file_path)?;
let mut file = tokio::fs::File::create(file_path).await?;
let bytes = res.bytes().await?;
let size = bytes.len() as u64;
let mut content = std::io::Cursor::new(bytes);
std::io::copy(&mut content, &mut file)?;
tokio::io::copy(&mut content, &mut file).await?;
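// Writing with tokio's async file APIs avoids blocking the runtime while the download is saved to disk.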
Ok(size)
}
@@ -476,7 +498,11 @@ impl BiliClient {
video_file: &Path,
) -> Result<PreuploadResponse, BiliClientError> {
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let url = format!(
"https://member.bilibili.com/preupload?name={}&r=upos&profile=ugcfx/bup",
video_file.file_name().unwrap().to_str().unwrap()
@@ -715,7 +741,11 @@ impl BiliClient {
video: &profile::Video,
) -> Result<VideoSubmitData, BiliClientError> {
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let url = format!(
"https://member.bilibili.com/x/vu/web/add/v3?ts={}&csrf={}",
chrono::Local::now().timestamp(),
@@ -733,19 +763,19 @@ impl BiliClient {
.await
{
Ok(raw_resp) => {
let json = raw_resp.json().await?;
if let Ok(resp) = serde_json::from_value::<GeneralResponse>(json) {
let json: Value = raw_resp.json().await?;
if let Ok(resp) = serde_json::from_value::<GeneralResponse>(json.clone()) {
match resp.data {
response::Data::VideoSubmit(data) => Ok(data),
_ => Err(BiliClientError::InvalidResponse),
}
} else {
println!("Parse response failed");
log::error!("Parse response failed: {}", json);
Err(BiliClientError::InvalidResponse)
}
}
Err(e) => {
println!("Send failed {}", e);
log::error!("Send failed {}", e);
Err(BiliClientError::InvalidResponse)
}
}
@@ -761,7 +791,11 @@ impl BiliClient {
chrono::Local::now().timestamp(),
);
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let params = [("csrf", account.csrf.clone()), ("cover", cover.to_string())];
match self
.client
@@ -773,19 +807,19 @@ impl BiliClient {
.await
{
Ok(raw_resp) => {
let json = raw_resp.json().await?;
if let Ok(resp) = serde_json::from_value::<GeneralResponse>(json) {
let json: Value = raw_resp.json().await?;
if let Ok(resp) = serde_json::from_value::<GeneralResponse>(json.clone()) {
match resp.data {
response::Data::Cover(data) => Ok(data.url),
_ => Err(BiliClientError::InvalidResponse),
}
} else {
println!("Parse response failed");
log::error!("Parse response failed: {}", json);
Err(BiliClientError::InvalidResponse)
}
}
Err(e) => {
println!("Send failed {}", e);
log::error!("Send failed {}", e);
Err(BiliClientError::InvalidResponse)
}
}
@@ -799,7 +833,11 @@ impl BiliClient {
) -> Result<(), BiliClientError> {
let url = "https://api.live.bilibili.com/msg/send".to_string();
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let params = [
("bubble", "0"),
("msg", message),
@@ -829,7 +867,11 @@ impl BiliClient {
) -> Result<Vec<response::Typelist>, BiliClientError> {
let url = "https://member.bilibili.com/x/vupre/web/archive/pre?lang=cn";
let mut headers = self.headers.clone();
headers.insert("cookie", account.cookies.parse().unwrap());
if let Ok(cookies) = account.cookies.parse() {
headers.insert("cookie", cookies);
} else {
return Err(BiliClientError::InvalidCookie);
}
let resp: GeneralResponse = self
.client
.get(url)

View File

@@ -10,6 +10,7 @@ custom_error! {pub BiliClientError
InvalidUrl = "Invalid url",
InvalidFormat = "Invalid stream format",
InvalidStream = "Invalid stream",
InvalidCookie = "Invalid cookie",
UploadError{err: String} = "Upload error: {err}",
UploadCancelled = "Upload was cancelled by user",
EmptyCache = "Empty cache",

View File

@@ -65,7 +65,20 @@ impl DanmuStorage {
.await;
}
pub async fn get_entries(&self) -> Vec<DanmuEntry> {
self.cache.read().await.clone()
// get entries with ts relative to live start time
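// Both the cached entry.ts and live_start_ts are epoch milliseconds; entries recorded before the live start are dropped.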
pub async fn get_entries(&self, live_start_ts: i64) -> Vec<DanmuEntry> {
let mut danmus: Vec<DanmuEntry> = self
.cache
.read()
.await
.iter()
.map(|entry| DanmuEntry {
ts: entry.ts - live_start_ts,
content: entry.content.clone(),
})
.collect();
// filter out danmus with ts < 0
danmus.retain(|entry| entry.ts >= 0);
danmus
}
}

View File

@@ -10,6 +10,7 @@ use crate::database::Database;
use crate::progress_manager::Event;
use crate::progress_reporter::EventEmitter;
use crate::recorder_manager::RecorderEvent;
use crate::subtitle_generator::item_to_srt;
use crate::{config::Config, database::account::AccountRow};
use async_trait::async_trait;
use chrono::Utc;
@@ -18,8 +19,11 @@ use danmu_stream::danmu_stream::DanmuStream;
use danmu_stream::provider::ProviderType;
use danmu_stream::DanmuMessageType;
use rand::random;
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;
use tokio::fs::File;
use tokio::io::{AsyncReadExt, AsyncWriteExt, BufReader};
use tokio::sync::{broadcast, Mutex, RwLock};
use tokio::task::JoinHandle;
@@ -55,11 +59,13 @@ pub struct DouyinRecorder {
db: Arc<Database>,
account: AccountRow,
room_id: u64,
room_info: Arc<RwLock<Option<response::DouyinRoomInfoResponse>>>,
sec_user_id: String,
room_info: Arc<RwLock<Option<client::DouyinBasicRoomInfo>>>,
stream_url: Arc<RwLock<Option<String>>>,
entry_store: Arc<RwLock<Option<EntryStore>>>,
danmu_store: Arc<RwLock<Option<DanmuStorage>>>,
live_id: Arc<RwLock<String>>,
danmu_room_id: Arc<RwLock<String>>,
live_status: Arc<RwLock<LiveStatus>>,
is_recording: Arc<RwLock<bool>>,
running: Arc<RwLock<bool>>,
@@ -79,16 +85,17 @@ impl DouyinRecorder {
#[cfg(not(feature = "headless"))] app_handle: AppHandle,
emitter: EventEmitter,
room_id: u64,
sec_user_id: &str,
config: Arc<RwLock<Config>>,
account: &AccountRow,
db: &Arc<Database>,
enabled: bool,
channel: broadcast::Sender<RecorderEvent>,
) -> Result<Self, super::errors::RecorderError> {
let client = client::DouyinClient::new(account);
let room_info = client.get_room_info(room_id).await?;
let client = client::DouyinClient::new(&config.read().await.user_agent, account);
let room_info = client.get_room_info(room_id, sec_user_id).await?;
let mut live_status = LiveStatus::Offline;
if room_info.data.room_status == 0 {
if room_info.status == 0 {
live_status = LiveStatus::Live;
}
@@ -99,7 +106,9 @@ impl DouyinRecorder {
db: db.clone(),
account: account.clone(),
room_id,
sec_user_id: sec_user_id.to_string(),
live_id: Arc::new(RwLock::new(String::new())),
danmu_room_id: Arc::new(RwLock::new(String::new())),
entry_store: Arc::new(RwLock::new(None)),
danmu_store: Arc::new(RwLock::new(None)),
client,
@@ -128,9 +137,13 @@ impl DouyinRecorder {
}
async fn check_status(&self) -> bool {
match self.client.get_room_info(self.room_id).await {
match self
.client
.get_room_info(self.room_id, &self.sec_user_id)
.await
{
Ok(info) => {
let live_status = info.data.room_status == 0; // room_status == 0 means the stream is live
let live_status = info.status == 0; // room_status == 0 means the stream is live
*self.room_info.write().await = Some(info.clone());
@@ -151,7 +164,7 @@ impl DouyinRecorder {
.title("BiliShadowReplay - 直播开始")
.body(format!(
"{} 开启了直播:{}",
info.data.user.nickname, info.data.data[0].title
info.user_name, info.room_title
))
.show()
.unwrap();
@@ -163,7 +176,7 @@ impl DouyinRecorder {
.title("BiliShadowReplay - 直播结束")
.body(format!(
"{} 关闭了直播:{}",
info.data.user.nickname, info.data.data[0].title
info.user_name, info.room_title
))
.show()
.unwrap();
@@ -196,63 +209,18 @@ impl DouyinRecorder {
}
// Get stream URL when live starts
if !info.data.data[0]
.stream_url
.as_ref()
.unwrap()
.hls_pull_url
.is_empty()
{
*self.live_id.write().await = info.data.data[0].id_str.clone();
// create a new record
let cover_url = info.data.data[0]
.cover
.as_ref()
.map(|cover| cover.url_list[0].clone());
let cover = if let Some(url) = cover_url {
Some(self.client.get_cover_base64(&url).await.unwrap())
} else {
None
};
if let Err(e) = self
.db
.add_record(
PlatformType::Douyin,
self.live_id.read().await.as_str(),
self.room_id,
&info.data.data[0].title,
cover,
None,
)
.await
{
log::error!("Failed to add record: {}", e);
if !info.hls_url.is_empty() {
// Only set stream URL, don't create record yet
// Record will be created when first ts download succeeds
let new_stream_url = self.get_best_stream_url(&info).await;
if new_stream_url.is_none() {
log::error!("No stream url found in room_info: {:#?}", info);
return false;
}
// setup entry store
let work_dir = self.get_work_dir(self.live_id.read().await.as_str()).await;
let entry_store = EntryStore::new(&work_dir).await;
*self.entry_store.write().await = Some(entry_store);
// setup danmu store
let danmu_file_path = format!("{}{}", work_dir, "danmu.txt");
let danmu_store = DanmuStorage::new(&danmu_file_path).await;
*self.danmu_store.write().await = danmu_store;
// start danmu task
if let Some(danmu_task) = self.danmu_task.lock().await.as_mut() {
danmu_task.abort();
}
if let Some(danmu_stream_task) = self.danmu_stream_task.lock().await.as_mut() {
danmu_stream_task.abort();
}
let live_id = self.live_id.read().await.clone();
let self_clone = self.clone();
*self.danmu_task.lock().await = Some(tokio::spawn(async move {
log::info!("Start fetching danmu for live {}", live_id);
let _ = self_clone.danmu().await;
}));
log::info!("New douyin stream URL: {}", new_stream_url.clone().unwrap());
*self.stream_url.write().await = Some(new_stream_url.unwrap());
*self.danmu_room_id.write().await = info.room_id_str.clone();
}
true
@@ -266,14 +234,14 @@ impl DouyinRecorder {
async fn danmu(&self) -> Result<(), super::errors::RecorderError> {
let cookies = self.account.cookies.clone();
let live_id = self
.live_id
let danmu_room_id = self
.danmu_room_id
.read()
.await
.clone()
.parse::<u64>()
.unwrap_or(0);
let danmu_stream = DanmuStream::new(ProviderType::Douyin, &cookies, live_id).await;
let danmu_stream = DanmuStream::new(ProviderType::Douyin, &cookies, danmu_room_id).await;
if danmu_stream.is_err() {
let err = danmu_stream.err().unwrap();
log::error!("Failed to create danmu stream: {}", err);
@@ -290,13 +258,14 @@ impl DouyinRecorder {
if let Ok(Some(msg)) = danmu_stream.recv().await {
match msg {
DanmuMessageType::DanmuMessage(danmu) => {
let ts = Utc::now().timestamp_millis();
self.emitter.emit(&Event::DanmuReceived {
room: self.room_id,
ts: danmu.timestamp,
ts,
content: danmu.message.clone(),
});
if let Some(storage) = self.danmu_store.read().await.as_ref() {
storage.add_line(danmu.timestamp, &danmu.message).await;
storage.add_line(ts, &danmu.message).await;
}
}
}
@@ -314,6 +283,7 @@ impl DouyinRecorder {
async fn reset(&self) {
*self.entry_store.write().await = None;
*self.live_id.write().await = String::new();
*self.danmu_room_id.write().await = String::new();
*self.last_update.write().await = Utc::now().timestamp();
*self.stream_url.write().await = None;
}
@@ -327,18 +297,8 @@ impl DouyinRecorder {
)
}
async fn get_best_stream_url(
&self,
room_info: &response::DouyinRoomInfoResponse,
) -> Option<String> {
let stream_data = room_info.data.data[0]
.stream_url
.as_ref()
.unwrap()
.live_core_sdk_data
.pull_data
.stream_data
.clone();
async fn get_best_stream_url(&self, room_info: &client::DouyinBasicRoomInfo) -> Option<String> {
let stream_data = room_info.stream_data.clone();
// parse stream_data into stream_info
let stream_info = serde_json::from_str::<stream_info::StreamInfo>(&stream_data);
if let Ok(stream_info) = stream_info {
@@ -356,6 +316,25 @@ impl DouyinRecorder {
}
}
fn parse_stream_url(&self, stream_url: &str) -> (String, String) {
// Parse stream URL to extract base URL and query parameters
// Example: http://7167739a741646b4651b6949b2f3eb8e.livehwc3.cn/pull-hls-l26.douyincdn.com/third/stream-693342996808860134_or4.m3u8?sub_m3u8=true&user_session_id=16090eb45ab8a2f042f7c46563936187&major_anchor_level=common&edge_slice=true&expire=67d944ec&sign=47b95cc6e8de20d82f3d404412fa8406
let base_url = stream_url
.rfind('/')
.map(|i| &stream_url[..=i])
.unwrap_or(stream_url)
.to_string();
let query_params = stream_url
.find('?')
.map(|i| &stream_url[i..])
.unwrap_or("")
.to_string();
(base_url, query_params)
}
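// Tracing parse_stream_url on the example URL in the comment above (host hash and
// signature abbreviated): everything up to and including the last '/' becomes the
// base URL, and everything from '?' onward becomes the query string, i.e.
//   base_url     == "http://<hash>.livehwc3.cn/pull-hls-l26.douyincdn.com/third/"
//   query_params == "?sub_m3u8=true&user_session_id=...&expire=67d944ec&sign=..."
// update_entries below then rebuilds each segment URL as base_url + uri + query_params
// (joining with '&' instead when the segment URI already carries its own query).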
async fn update_entries(&self) -> Result<u128, RecorderError> {
let task_begin_time = std::time::Instant::now();
@@ -367,70 +346,163 @@ impl DouyinRecorder {
}
if self.stream_url.read().await.is_none() {
let new_stream_url = self.get_best_stream_url(room_info.as_ref().unwrap()).await;
if new_stream_url.is_none() {
return Err(RecorderError::NoStreamAvailable);
}
log::info!("New douyin stream URL: {}", new_stream_url.clone().unwrap());
*self.stream_url.write().await = Some(new_stream_url.unwrap());
}
let stream_url = self.stream_url.read().await.as_ref().unwrap().clone();
let mut stream_url = self.stream_url.read().await.as_ref().unwrap().clone();
// Get m3u8 playlist
let (playlist, updated_stream_url) = self.client.get_m3u8_content(&stream_url).await?;
*self.stream_url.write().await = Some(updated_stream_url);
*self.stream_url.write().await = Some(updated_stream_url.clone());
stream_url = updated_stream_url;
let mut new_segment_fetched = false;
let work_dir = self.get_work_dir(self.live_id.read().await.as_str()).await;
let mut is_first_segment = self.entry_store.read().await.is_none();
let work_dir;
// Create work directory if not exists
tokio::fs::create_dir_all(&work_dir).await?;
// If this is the first segment, prepare but don't create directories yet
if is_first_segment {
// Generate live_id for potential use
let live_id = Utc::now().timestamp_millis().to_string();
*self.live_id.write().await = live_id.clone();
work_dir = self.get_work_dir(&live_id).await;
} else {
work_dir = self.get_work_dir(self.live_id.read().await.as_str()).await;
}
let last_sequence = self
.entry_store
let last_sequence = if is_first_segment {
0
} else {
self.entry_store
.read()
.await
.as_ref()
.unwrap()
.last_sequence();
.last_sequence
};
for (i, segment) in playlist.segments.iter().enumerate() {
let sequence = playlist.media_sequence + i as u64;
for segment in playlist.segments.iter() {
let formated_ts_name = segment.uri.clone();
let sequence = extract_sequence_from(&formated_ts_name);
if sequence.is_none() {
log::error!(
"No timestamp extracted from douyin ts name: {}",
formated_ts_name
);
continue;
}
let sequence = sequence.unwrap();
if sequence <= last_sequence {
continue;
}
new_segment_fetched = true;
let mut uri = segment.uri.clone();
// if uri contains ?params, remove it
if let Some(pos) = uri.find('?') {
uri = uri[..pos].to_string();
}
// example: pull-l3.douyincdn.com_stream-405850027547689439_or4-1752675567719.ts
let uri = segment.uri.clone();
let ts_url = if uri.starts_with("http") {
uri.clone()
} else {
// Get the base URL without the filename and query parameters
let base_url = stream_url
.rfind('/')
.map(|i| &stream_url[..=i])
.unwrap_or(&stream_url);
// Get the query parameters
let query = stream_url.find('?').map(|i| &stream_url[i..]).unwrap_or("");
// Combine: base_url + new_filename + query_params
format!("{}{}{}", base_url, uri, query)
// Parse the stream URL to extract base URL and query parameters
let (base_url, query_params) = self.parse_stream_url(&stream_url);
// Check if the segment URI already has query parameters
if uri.contains('?') {
// If segment URI has query params, append m3u8 query params with &
format!("{}{}&{}", base_url, uri, &query_params[1..]) // Remove leading ? from query_params
} else {
// If segment URI has no query params, append m3u8 query params with ?
format!("{}{}{}", base_url, uri, query_params)
}
};
let file_name = format!("{}.ts", sequence);
// Download segment with retry mechanism
let mut retry_count = 0;
let max_retries = 3;
let mut download_success = false;
let mut work_dir_created = false;
// Download segment
match self
.client
.download_ts(&ts_url, &format!("{}/{}", work_dir, file_name))
while retry_count < max_retries && !download_success {
let file_name = format!("{}.ts", sequence);
let file_path = format!("{}/{}", work_dir, file_name);
// If this is the first segment, create work directory before first download attempt
if is_first_segment && !work_dir_created {
// Create work directory only when we're about to download
if let Err(e) = tokio::fs::create_dir_all(&work_dir).await {
log::error!("Failed to create work directory: {}", e);
return Err(e.into());
}
work_dir_created = true;
}
match self.client.download_ts(&ts_url, &file_path).await {
Ok(size) => {
if size == 0 {
log::error!("Download segment failed (empty response): {}", ts_url);
retry_count += 1;
if retry_count < max_retries {
tokio::time::sleep(Duration::from_millis(500)).await;
continue;
}
break;
}
// If this is the first successful download, create record and initialize stores
if is_first_segment {
// Create database record
let room_info = room_info.as_ref().unwrap();
let cover_url = room_info.cover.clone();
let cover = if let Some(url) = cover_url {
Some(self.client.get_cover_base64(&url).await.unwrap_or_default())
} else {
None
};
if let Err(e) = self
.db
.add_record(
PlatformType::Douyin,
self.live_id.read().await.as_str(),
self.room_id,
&room_info.room_title,
cover,
None,
)
.await
{
Ok(size) => {
log::error!("Failed to add record: {}", e);
}
// Setup entry store
let entry_store = EntryStore::new(&work_dir).await;
*self.entry_store.write().await = Some(entry_store);
// Setup danmu store
let danmu_file_path = format!("{}{}", work_dir, "danmu.txt");
let danmu_store = DanmuStorage::new(&danmu_file_path).await;
*self.danmu_store.write().await = danmu_store;
// Start danmu task
if let Some(danmu_task) = self.danmu_task.lock().await.as_mut() {
danmu_task.abort();
}
if let Some(danmu_stream_task) =
self.danmu_stream_task.lock().await.as_mut()
{
danmu_stream_task.abort();
}
let live_id = self.live_id.read().await.clone();
let self_clone = self.clone();
*self.danmu_task.lock().await = Some(tokio::spawn(async move {
log::info!("Start fetching danmu for live {}", live_id);
let _ = self_clone.danmu().await;
}));
is_first_segment = false;
}
let ts_entry = TsEntry {
url: file_name,
sequence,
@@ -447,20 +519,98 @@ impl DouyinRecorder {
.unwrap()
.add_entry(ts_entry)
.await;
new_segment_fetched = true;
download_success = true;
}
Err(e) => {
log::error!("Failed to download segment: {}", e);
log::warn!(
"Failed to download segment (attempt {}/{}): {} - URL: {}",
retry_count + 1,
max_retries,
e,
ts_url
);
retry_count += 1;
if retry_count < max_retries {
tokio::time::sleep(Duration::from_millis(1000 * retry_count as u64))
.await;
continue;
}
// If all retries failed, check if it's a 400 error
if e.to_string().contains("400") {
log::error!(
"HTTP 400 error for segment, stream URL may be expired: {}",
ts_url
);
*self.stream_url.write().await = None;
// Clean up empty directory if first segment failed
if is_first_segment && work_dir_created {
if let Err(cleanup_err) = tokio::fs::remove_dir_all(&work_dir).await
{
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
}
return Err(RecorderError::NoStreamAvailable);
}
// Clean up empty directory if first segment failed
if is_first_segment && work_dir_created {
if let Err(cleanup_err) = tokio::fs::remove_dir_all(&work_dir).await {
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
}
return Err(e.into());
}
}
}
if !download_success {
log::error!(
"Failed to download segment after {} retries: {}",
max_retries,
ts_url
);
// Clean up empty directory if first segment failed after all retries
if is_first_segment && work_dir_created {
if let Err(cleanup_err) = tokio::fs::remove_dir_all(&work_dir).await {
log::warn!(
"Failed to cleanup empty work directory {}: {}",
work_dir,
cleanup_err
);
}
}
continue;
}
}
if new_segment_fetched {
*self.last_update.write().await = Utc::now().timestamp();
self.update_record().await;
}
// if no new segment fetched for 10 seconds
if *self.last_update.read().await + 10 < Utc::now().timestamp() {
log::warn!("No new segment fetched for 10 seconds");
*self.stream_url.write().await = None;
*self.last_update.write().await = Utc::now().timestamp();
return Err(RecorderError::NoStreamAvailable);
}
Ok(task_begin_time.elapsed().as_millis())
}
@@ -511,6 +661,13 @@ impl DouyinRecorder {
}
}
fn extract_sequence_from(name: &str) -> Option<u64> {
use regex::Regex;
let re = Regex::new(r"(\d+)\.ts").ok()?;
let captures = re.captures(name)?;
captures.get(1)?.as_str().parse().ok()
}
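// For the segment naming shown earlier, e.g.
//   "pull-l3.douyincdn.com_stream-405850027547689439_or4-1752675567719.ts",
// the regex matches the digit run that directly precedes ".ts", so this returns
// Some(1752675567719); names without such a run yield None, and update_entries
// logs an error and skips those segments.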
#[async_trait]
impl Recorder for DouyinRecorder {
async fn run(&self) {
@@ -558,10 +715,9 @@ impl Recorder for DouyinRecorder {
.await;
continue;
}
tokio::time::sleep(Duration::from_secs(
self_clone.config.read().await.status_check_interval,
))
.await;
let interval = self_clone.config.read().await.status_check_interval;
tokio::time::sleep(Duration::from_secs(interval)).await;
}
log::info!("recording thread {} quit.", self_clone.room_id);
}));
@@ -598,6 +754,87 @@ impl Recorder for DouyinRecorder {
m3u8_content
}
async fn get_archive_subtitle(
&self,
live_id: &str,
) -> Result<String, super::errors::RecorderError> {
let work_dir = self.get_work_dir(live_id).await;
let subtitle_file_path = format!("{}/{}", work_dir, "subtitle.srt");
let subtitle_file = File::open(subtitle_file_path).await;
if subtitle_file.is_err() {
return Err(super::errors::RecorderError::SubtitleNotFound {
live_id: live_id.to_string(),
});
}
let subtitle_file = subtitle_file.unwrap();
let mut subtitle_file = BufReader::new(subtitle_file);
let mut subtitle_content = String::new();
subtitle_file.read_to_string(&mut subtitle_content).await?;
Ok(subtitle_content)
}
async fn generate_archive_subtitle(
&self,
live_id: &str,
) -> Result<String, super::errors::RecorderError> {
// generate subtitle file under work_dir
let work_dir = self.get_work_dir(live_id).await;
let subtitle_file_path = format!("{}/{}", work_dir, "subtitle.srt");
let mut subtitle_file = File::create(subtitle_file_path).await?;
// first generate a tmp clip file
// generate a tmp m3u8 index file
let m3u8_index_file_path = format!("{}/{}", work_dir, "tmp.m3u8");
let m3u8_content = self.m3u8_content(live_id, 0, 0).await;
tokio::fs::write(&m3u8_index_file_path, m3u8_content).await?;
// generate a tmp clip file
let clip_file_path = format!("{}/{}", work_dir, "tmp.mp4");
if let Err(e) = crate::ffmpeg::clip_from_m3u8(
None::<&crate::progress_reporter::ProgressReporter>,
Path::new(&m3u8_index_file_path),
Path::new(&clip_file_path),
None,
false,
)
.await
{
return Err(super::errors::RecorderError::SubtitleGenerationFailed {
error: e.to_string(),
});
}
// generate subtitle file
let config = self.config.read().await;
let result = crate::ffmpeg::generate_video_subtitle(
None,
Path::new(&clip_file_path),
"whisper",
&config.whisper_model,
&config.whisper_prompt,
&config.openai_api_key,
&config.openai_api_endpoint,
&config.whisper_language,
)
.await;
// write subtitle file
if let Err(e) = result {
return Err(super::errors::RecorderError::SubtitleGenerationFailed {
error: e.to_string(),
});
}
let result = result.unwrap();
let subtitle_content = result
.subtitle_content
.iter()
.map(item_to_srt)
.collect::<Vec<String>>()
.join("");
subtitle_file.write_all(subtitle_content.as_bytes()).await?;
// remove tmp file
tokio::fs::remove_file(&m3u8_index_file_path).await?;
tokio::fs::remove_file(&clip_file_path).await?;
Ok(subtitle_content)
}
async fn first_segment_ts(&self, live_id: &str) -> i64 {
if *self.live_id.read().await == live_id {
let entry_store = self.entry_store.read().await;
@@ -616,17 +853,11 @@ impl Recorder for DouyinRecorder {
let room_info = self.room_info.read().await;
let room_cover_url = room_info
.as_ref()
.and_then(|info| {
info.data
.data
.first()
.and_then(|data| data.cover.as_ref())
.map(|cover| cover.url_list[0].clone())
})
.and_then(|info| info.cover.clone())
.unwrap_or_default();
let room_title = room_info
.as_ref()
.and_then(|info| info.data.data.first().map(|data| data.title.clone()))
.map(|info| info.room_title.clone())
.unwrap_or_default();
RecorderInfo {
room_id: self.room_id,
@@ -638,15 +869,15 @@ impl Recorder for DouyinRecorder {
user_info: UserInfo {
user_id: room_info
.as_ref()
.map(|info| info.data.user.sec_uid.clone())
.map(|info| info.sec_user_id.clone())
.unwrap_or_default(),
user_name: room_info
.as_ref()
.map(|info| info.data.user.nickname.clone())
.map(|info| info.user_name.clone())
.unwrap_or_default(),
user_avatar: room_info
.as_ref()
.map(|info| info.data.user.avatar_thumb.url_list[0].clone())
.map(|info| info.user_avatar.clone())
.unwrap_or_default(),
},
total_length: if let Some(store) = self.entry_store.read().await.as_ref() {
@@ -666,7 +897,11 @@ impl Recorder for DouyinRecorder {
Ok(if live_id == *self.live_id.read().await {
// just return current cache content
match self.danmu_store.read().await.as_ref() {
Some(storage) => storage.get_entries().await,
Some(storage) => {
storage
.get_entries(self.first_segment_ts(live_id).await)
.await
}
None => Vec::new(),
}
} else {
@@ -684,7 +919,9 @@ impl Recorder for DouyinRecorder {
return Ok(Vec::new());
}
let storage = storage.unwrap();
storage.get_entries().await
storage
.get_entries(self.first_segment_ts(live_id).await)
.await
})
}

View File

@@ -2,17 +2,13 @@ use crate::database::account::AccountRow;
use base64::Engine;
use m3u8_rs::{MediaPlaylist, Playlist};
use reqwest::{Client, Error as ReqwestError};
use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use super::response::DouyinRoomInfoResponse;
use std::fmt;
const USER_AGENT: &str = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36";
#[derive(Debug)]
pub enum DouyinClientError {
Network(ReqwestError),
Network(String),
Io(std::io::Error),
Playlist(String),
}
@@ -29,7 +25,7 @@ impl fmt::Display for DouyinClientError {
impl From<ReqwestError> for DouyinClientError {
fn from(err: ReqwestError) -> Self {
DouyinClientError::Network(err)
DouyinClientError::Network(err.to_string())
}
}
@@ -39,27 +35,42 @@ impl From<std::io::Error> for DouyinClientError {
}
}
#[derive(Debug, Clone)]
pub struct DouyinBasicRoomInfo {
pub room_id_str: String,
pub room_title: String,
pub cover: Option<String>,
pub status: i64,
pub hls_url: String,
pub stream_data: String,
// user related
pub user_name: String,
pub user_avatar: String,
pub sec_user_id: String,
}
#[derive(Clone)]
pub struct DouyinClient {
client: Client,
cookies: String,
account: AccountRow,
}
impl DouyinClient {
pub fn new(account: &AccountRow) -> Self {
let client = Client::builder().user_agent(USER_AGENT).build().unwrap();
pub fn new(user_agent: &str, account: &AccountRow) -> Self {
let client = Client::builder().user_agent(user_agent).build().unwrap();
Self {
client,
cookies: account.cookies.clone(),
account: account.clone(),
}
}
pub async fn get_room_info(
&self,
room_id: u64,
) -> Result<DouyinRoomInfoResponse, DouyinClientError> {
sec_user_id: &str,
) -> Result<DouyinBasicRoomInfo, DouyinClientError> {
let url = format!(
"https://live.douyin.com/webcast/room/web/enter/?aid=6383&app_name=douyin_web&live_id=1&device_platform=web&language=zh-CN&enter_from=web_live&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-CN&browser_platform=MacIntel&browser_name=Chrome&browser_version=122.0.0.0&web_rid={}",
"https://live.douyin.com/webcast/room/web/enter/?aid=6383&app_name=douyin_web&live_id=1&device_platform=web&language=zh-CN&enter_from=web_live&a_bogus=0&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-CN&browser_platform=MacIntel&browser_name=Chrome&browser_version=122.0.0.0&web_rid={}",
room_id
);
@@ -67,14 +78,257 @@ impl DouyinClient {
.client
.get(&url)
.header("Referer", "https://live.douyin.com/")
.header("User-Agent", USER_AGENT)
.header("Cookie", self.cookies.clone())
.header("Cookie", self.account.cookies.clone())
.send()
.await?
.json::<DouyinRoomInfoResponse>()
.await?;
Ok(resp)
let status = resp.status();
let text = resp.text().await?;
if text.is_empty() {
log::warn!("Empty room info response, trying H5 API");
return self.get_room_info_h5(room_id, sec_user_id).await;
}
if status.is_success() {
if let Ok(data) = serde_json::from_str::<DouyinRoomInfoResponse>(&text) {
let cover = data
.data
.data
.first()
.and_then(|data| data.cover.as_ref())
.map(|cover| cover.url_list[0].clone());
return Ok(DouyinBasicRoomInfo {
room_id_str: data.data.data[0].id_str.clone(),
sec_user_id: sec_user_id.to_string(),
cover,
room_title: data.data.data[0].title.clone(),
user_name: data.data.user.nickname.clone(),
user_avatar: data.data.user.avatar_thumb.url_list[0].clone(),
status: data.data.room_status,
hls_url: data.data.data[0]
.stream_url
.as_ref()
.map(|stream_url| stream_url.hls_pull_url.clone())
.unwrap_or_default(),
stream_data: data.data.data[0]
.stream_url
.as_ref()
.map(|s| s.live_core_sdk_data.pull_data.stream_data.clone())
.unwrap_or_default(),
});
} else {
log::error!("Failed to parse room info response: {}", text);
return self.get_room_info_h5(room_id, sec_user_id).await;
}
}
log::error!("Failed to get room info: {}", status);
return self.get_room_info_h5(room_id, sec_user_id).await;
}
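// Fallback chain, as implemented above: the desktop `webcast/room/web/enter`
// endpoint is queried first; an empty body, a JSON parse failure, or a
// non-success status each fall back to get_room_info_h5, which hits the
// `webcast/room/reflow/info` endpoint using the stored sec_user_id.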
pub async fn get_room_info_h5(
&self,
room_id: u64,
sec_user_id: &str,
) -> Result<DouyinBasicRoomInfo, DouyinClientError> {
// Build the full set of URL parameters, following the biliup implementation
let room_id_str = room_id.to_string();
// https://webcast.amemv.com/webcast/room/reflow/info/?type_id=0&live_id=1&version_code=99.99.99&app_id=1128&room_id=10000&sec_user_id=MS4wLjAB&aid=6383&device_platform=web&browser_language=zh-CN&browser_platform=Win32&browser_name=Mozilla&browser_version=5.0
let url_params = [
("type_id", "0"),
("live_id", "1"),
("version_code", "99.99.99"),
("app_id", "1128"),
("room_id", &room_id_str),
("sec_user_id", sec_user_id),
("aid", "6383"),
("device_platform", "web"),
];
// Build the URL
let query_string = url_params
.iter()
.map(|(k, v)| format!("{}={}", k, v))
.collect::<Vec<_>>()
.join("&");
let url = format!(
"https://webcast.amemv.com/webcast/room/reflow/info/?{}",
query_string
);
log::info!("get_room_info_h5: {}", url);
let resp = self
.client
.get(&url)
.header("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36")
.header("Referer", "https://live.douyin.com/")
.header("Cookie", self.account.cookies.clone())
.send()
.await?;
let status = resp.status();
let text = resp.text().await?;
if status.is_success() {
// Try to parse as H5 response format
if let Ok(h5_data) =
serde_json::from_str::<super::response::DouyinH5RoomInfoResponse>(&text)
{
// Extract RoomBasicInfo from H5 response
let room = &h5_data.data.room;
let owner = &room.owner;
let cover = room
.cover
.as_ref()
.and_then(|c| c.url_list.first().cloned());
let hls_url = room
.stream_url
.as_ref()
.map(|s| s.hls_pull_url.clone())
.unwrap_or_default();
return Ok(DouyinBasicRoomInfo {
room_id_str: room.id_str.clone(),
room_title: room.title.clone(),
cover,
status: if room.status == 2 { 0 } else { 1 },
hls_url,
user_name: owner.nickname.clone(),
user_avatar: owner
.avatar_thumb
.url_list
.first()
.unwrap_or(&String::new())
.clone(),
sec_user_id: owner.sec_uid.clone(),
stream_data: room
.stream_url
.as_ref()
.map(|s| s.live_core_sdk_data.pull_data.stream_data.clone())
.unwrap_or_default(),
});
}
// If that fails, try to parse as a generic JSON to see what we got
if let Ok(json_value) = serde_json::from_str::<serde_json::Value>(&text) {
log::error!(
"Unexpected response structure: {}",
serde_json::to_string_pretty(&json_value).unwrap_or_default()
);
// Check if it's an error response
if let Some(status_code) = json_value.get("status_code").and_then(|v| v.as_i64()) {
if status_code != 0 {
let error_msg = json_value
.get("status_message")
.and_then(|v| v.as_str())
.unwrap_or("Unknown error");
return Err(DouyinClientError::Network(format!(
"API returned error status_code: {} - {}",
status_code, error_msg
)));
}
}
// Check whether this is an "invalid session" error
if let Some(status_message) =
json_value.get("status_message").and_then(|v| v.as_str())
{
if status_message.contains("invalid session") {
return Err(DouyinClientError::Network(
"Invalid session - please check your cookies. Make sure you have valid sessionid, passport_csrf_token, and other authentication cookies from douyin.com".to_string(),
));
}
}
return Err(DouyinClientError::Network(format!(
"Failed to parse h5 room info response: {}",
text
)));
} else {
log::error!("Failed to parse h5 room info response: {}", text);
return Err(DouyinClientError::Network(format!(
"Failed to parse h5 room info response: {}",
text
)));
}
}
log::error!("Failed to get h5 room info: {}", status);
Err(DouyinClientError::Network(format!(
"Failed to get h5 room info: {} {}",
status, text
)))
}
pub async fn get_user_info(&self) -> Result<super::response::User, DouyinClientError> {
// Use the IM spotlight relation API to get user info
let url = "https://www.douyin.com/aweme/v1/web/im/spotlight/relation/";
let resp = self
.client
.get(url)
.header("Referer", "https://www.douyin.com/")
.header("Cookie", self.account.cookies.clone())
.send()
.await?;
let status = resp.status();
let text = resp.text().await?;
if status.is_success() {
if let Ok(data) = serde_json::from_str::<super::response::DouyinRelationResponse>(&text)
{
if data.status_code == 0 {
let owner_sec_uid = &data.owner_sec_uid;
// Find the user's own info in the followings list by matching sec_uid
if let Some(followings) = &data.followings {
for following in followings {
if following.sec_uid == *owner_sec_uid {
let user = super::response::User {
id_str: following.uid.clone(),
sec_uid: following.sec_uid.clone(),
nickname: following.nickname.clone(),
avatar_thumb: following.avatar_thumb.clone(),
follow_info: super::response::FollowInfo::default(),
foreign_user: 0,
open_id_str: "".to_string(),
};
return Ok(user);
}
}
}
// If not found in followings, create a minimal user info from owner_sec_uid
let user = super::response::User {
id_str: "".to_string(), // We don't have the numeric UID
sec_uid: owner_sec_uid.clone(),
nickname: "抖音用户".to_string(), // Default nickname
avatar_thumb: super::response::AvatarThumb { url_list: vec![] },
follow_info: super::response::FollowInfo::default(),
foreign_user: 0,
open_id_str: "".to_string(),
};
return Ok(user);
}
} else {
log::error!("Failed to parse user info response: {}", text);
return Err(DouyinClientError::Network(format!(
"Failed to parse user info response: {}",
text
)));
}
}
log::error!("Failed to get user info: {}", status);
Err(DouyinClientError::Io(std::io::Error::new(
std::io::ErrorKind::NotFound,
"Failed to get user info from Douyin relation API",
)))
}
pub async fn get_cover_base64(&self, url: &str) -> Result<String, DouyinClientError> {
@@ -98,6 +352,7 @@ impl DouyinClient {
// #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2560000
// http://7167739a741646b4651b6949b2f3eb8e.livehwc3.cn/pull-hls-l26.douyincdn.com/third/stream-693342996808860134_or4.m3u8?sub_m3u8=true&user_session_id=16090eb45ab8a2f042f7c46563936187&major_anchor_level=common&edge_slice=true&expire=67d944ec&sign=47b95cc6e8de20d82f3d404412fa8406
if content.contains("BANDWIDTH") {
log::info!("Master manifest with playlist URL: {}", url);
let new_url = content.lines().last().unwrap();
return Box::pin(self.get_m3u8_content(new_url)).await;
}
@@ -115,15 +370,16 @@ impl DouyinClient {
let response = self.client.get(url).send().await?;
if response.status() != reqwest::StatusCode::OK {
return Err(DouyinClientError::Network(
response.error_for_status().unwrap_err(),
));
let error = response.error_for_status().unwrap_err();
log::error!("HTTP error: {} for URL: {}", error, url);
return Err(DouyinClientError::Network(error.to_string()));
}
let content = response.bytes().await?;
let mut file = File::create(path).await?;
file.write_all(&content).await?;
Ok(content.len() as u64)
let mut file = tokio::fs::File::create(path).await?;
let bytes = response.bytes().await?;
let size = bytes.len() as u64;
let mut content = std::io::Cursor::new(bytes);
tokio::io::copy(&mut content, &mut file).await?;
Ok(size)
}
}

View File

@@ -182,8 +182,7 @@ pub struct Extra {
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct PullDatas {
}
pub struct PullDatas {}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
@@ -436,8 +435,7 @@ pub struct Stats {
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct LinkerMap {
}
pub struct LinkerMap {}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
@@ -478,13 +476,11 @@ pub struct LinkerDetail {
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct LinkerMapStr {
}
pub struct LinkerMapStr {}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct PlaymodeDetail {
}
pub struct PlaymodeDetail {}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
@@ -590,3 +586,207 @@ pub struct User {
#[serde(rename = "open_id_str")]
pub open_id_str: String,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct DouyinRelationResponse {
pub extra: Option<Extra2>,
pub followings: Option<Vec<Following>>,
#[serde(rename = "owner_sec_uid")]
pub owner_sec_uid: String,
#[serde(rename = "status_code")]
pub status_code: i64,
#[serde(rename = "log_pb")]
pub log_pb: Option<LogPb>,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Extra2 {
#[serde(rename = "fatal_item_ids")]
pub fatal_item_ids: Vec<String>,
pub logid: String,
pub now: i64,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct LogPb {
#[serde(rename = "impr_id")]
pub impr_id: String,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Following {
#[serde(rename = "account_cert_info")]
pub account_cert_info: String,
#[serde(rename = "avatar_signature")]
pub avatar_signature: String,
#[serde(rename = "avatar_small")]
pub avatar_small: AvatarSmall,
#[serde(rename = "avatar_thumb")]
pub avatar_thumb: AvatarThumb,
#[serde(rename = "birthday_hide_level")]
pub birthday_hide_level: i64,
#[serde(rename = "commerce_user_level")]
pub commerce_user_level: i64,
#[serde(rename = "custom_verify")]
pub custom_verify: String,
#[serde(rename = "enterprise_verify_reason")]
pub enterprise_verify_reason: String,
#[serde(rename = "follow_status")]
pub follow_status: i64,
#[serde(rename = "follower_status")]
pub follower_status: i64,
#[serde(rename = "has_e_account_role")]
pub has_e_account_role: bool,
#[serde(rename = "im_activeness")]
pub im_activeness: i64,
#[serde(rename = "im_role_ids")]
pub im_role_ids: Vec<serde_json::Value>,
#[serde(rename = "is_im_oversea_user")]
pub is_im_oversea_user: i64,
pub nickname: String,
#[serde(rename = "sec_uid")]
pub sec_uid: String,
#[serde(rename = "short_id")]
pub short_id: String,
pub signature: String,
#[serde(rename = "social_relation_sub_type")]
pub social_relation_sub_type: i64,
#[serde(rename = "social_relation_type")]
pub social_relation_type: i64,
pub uid: String,
#[serde(rename = "unique_id")]
pub unique_id: String,
#[serde(rename = "verification_type")]
pub verification_type: i64,
#[serde(rename = "webcast_sp_info")]
pub webcast_sp_info: serde_json::Value,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct AvatarSmall {
pub uri: String,
#[serde(rename = "url_list")]
pub url_list: Vec<String>,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct DouyinH5RoomInfoResponse {
pub data: H5Data,
pub extra: H5Extra,
#[serde(rename = "status_code")]
pub status_code: i64,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5Data {
pub room: H5Room,
pub user: H5User,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5Room {
pub id: u64,
#[serde(rename = "id_str")]
pub id_str: String,
pub status: i64,
pub title: String,
pub cover: Option<H5Cover>,
#[serde(rename = "stream_url")]
pub stream_url: Option<H5StreamUrl>,
pub owner: H5Owner,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5Cover {
#[serde(rename = "url_list")]
pub url_list: Vec<String>,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5StreamUrl {
pub provider: i64,
pub id: u64,
#[serde(rename = "id_str")]
pub id_str: String,
#[serde(rename = "default_resolution")]
pub default_resolution: String,
#[serde(rename = "rtmp_pull_url")]
pub rtmp_pull_url: String,
#[serde(rename = "flv_pull_url")]
pub flv_pull_url: H5FlvPullUrl,
#[serde(rename = "hls_pull_url")]
pub hls_pull_url: String,
#[serde(rename = "hls_pull_url_map")]
pub hls_pull_url_map: H5HlsPullUrlMap,
#[serde(rename = "live_core_sdk_data")]
pub live_core_sdk_data: LiveCoreSdkData,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5FlvPullUrl {
#[serde(rename = "FULL_HD1")]
pub full_hd1: Option<String>,
#[serde(rename = "HD1")]
pub hd1: Option<String>,
#[serde(rename = "SD1")]
pub sd1: Option<String>,
#[serde(rename = "SD2")]
pub sd2: Option<String>,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5HlsPullUrlMap {
#[serde(rename = "FULL_HD1")]
pub full_hd1: Option<String>,
#[serde(rename = "HD1")]
pub hd1: Option<String>,
#[serde(rename = "SD1")]
pub sd1: Option<String>,
#[serde(rename = "SD2")]
pub sd2: Option<String>,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5Owner {
pub nickname: String,
#[serde(rename = "avatar_thumb")]
pub avatar_thumb: H5AvatarThumb,
#[serde(rename = "sec_uid")]
pub sec_uid: String,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5AvatarThumb {
#[serde(rename = "url_list")]
pub url_list: Vec<String>,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5User {
pub nickname: String,
#[serde(rename = "avatar_thumb")]
pub avatar_thumb: Option<H5AvatarThumb>,
#[serde(rename = "sec_uid")]
pub sec_uid: String,
}
#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct H5Extra {
pub now: i64,
}

View File

@@ -51,13 +51,22 @@ impl TsEntry {
pub fn ts_seconds(&self) -> i64 {
// Due to a legacy issue, douyin entries store ts in seconds while bilibili entries store ts in milliseconds.
// This was fixed after 2.5.6, but entry.log files generated by earlier versions still need to be supported.
if self.ts > 1619884800000 {
if self.ts > 10000000000 {
self.ts / 1000
} else {
self.ts
}
}
pub fn ts_mili(&self) -> i64 {
// if already in ms, return as is
if self.ts > 10000000000 {
self.ts
} else {
self.ts * 1000
}
}
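// Sketch of the unit heuristic shared by both methods: values above 10_000_000_000
// are treated as milliseconds, since 10^10 lies between realistic second-precision
// timestamps (~1.7 * 10^9 today) and millisecond ones (~1.7 * 10^12).
//   ts = 1_700_000_000      -> ts_seconds() == 1_700_000_000, ts_mili() == 1_700_000_000_000
//   ts = 1_700_000_000_000  -> ts_seconds() == 1_700_000_000, ts_mili() == 1_700_000_000_000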
pub fn date_time(&self) -> String {
let date_str = Utc
.timestamp_opt(self.ts_seconds(), 0)
@@ -74,7 +83,7 @@ impl TsEntry {
let mut content = String::new();
content += &format!("#EXTINF:{:.2},\n", self.length);
content += &format!("#EXTINF:{:.4},\n", self.length);
content += &format!("{}\n", self.url);
content
@@ -100,9 +109,7 @@ pub struct EntryStore {
entries: Vec<TsEntry>,
total_duration: f64,
total_size: u64,
last_sequence: u64,
pub continue_sequence: u64,
pub last_sequence: u64,
}
impl EntryStore {
@@ -125,7 +132,6 @@ impl EntryStore {
total_duration: 0.0,
total_size: 0,
last_sequence: 0,
continue_sequence: 0,
};
entry_store.load(work_dir).await;
@@ -150,9 +156,7 @@ impl EntryStore {
let entry = entry.unwrap();
if entry.sequence > self.last_sequence {
self.last_sequence = entry.sequence;
}
self.last_sequence = std::cmp::max(self.last_sequence, entry.sequence);
if entry.is_header {
self.header = Some(entry.clone());
@@ -163,8 +167,6 @@ impl EntryStore {
self.total_duration += entry.length;
self.total_size += entry.size;
}
self.continue_sequence = self.last_sequence + 100;
}
pub async fn add_entry(&mut self, entry: TsEntry) {
@@ -180,9 +182,7 @@ impl EntryStore {
self.log_file.flush().await.unwrap();
if self.last_sequence < entry.sequence {
self.last_sequence = entry.sequence;
}
self.last_sequence = std::cmp::max(self.last_sequence, entry.sequence);
self.total_duration += entry.length;
self.total_size += entry.size;
@@ -200,16 +200,14 @@ impl EntryStore {
self.total_size
}
pub fn last_sequence(&self) -> u64 {
self.last_sequence
}
pub fn last_ts(&self) -> Option<i64> {
self.entries.last().map(|entry| entry.ts)
}
/// Get first timestamp in milliseconds
pub fn first_ts(&self) -> Option<i64> {
self.entries.first().map(|e| e.ts)
self.entries.first().map(|x| x.ts_mili())
}
/// Get last timestamp in milliseconds
pub fn last_ts(&self) -> Option<i64> {
self.entries.last().map(|x| x.ts_mili())
}
/// Generate a hls manifest for selected range.
@@ -257,6 +255,7 @@ impl EntryStore {
if entries_in_range.is_empty() {
m3u8_content += end_content;
log::warn!("No entries in range, return empty manifest");
return m3u8_content;
}

View File

@@ -20,4 +20,8 @@ custom_error! {pub RecorderError
DouyinClientError {err: DouyinClientError} = "DouyinClient error: {err}",
IoError {err: std::io::Error} = "IO error: {err}",
DanmuStreamError {err: danmu_stream::DanmuStreamError} = "Danmu stream error: {err}",
SubtitleNotFound {live_id: String} = "Subtitle not found: {live_id}",
SubtitleGenerationFailed {error: String} = "Subtitle generation failed: {error}",
FfmpegError {err: String} = "FFmpeg error: {err}",
ResolutionChanged {err: String} = "Resolution changed: {err}",
}

View File

@@ -1,22 +0,0 @@
use actix_web::Response;
fn handle_hls_request(ts_path: Option<&str>) -> Response {
if let Some(ts_path) = ts_path {
if let Ok(content) = std::fs::read(ts_path) {
return Response::builder()
.status(200)
.header("Content-Type", "video/mp2t")
.header("Cache-Control", "no-cache")
.header("Access-Control-Allow-Origin", "*")
.body(content)
.unwrap();
}
}
Response::builder()
.status(404)
.header("Content-Type", "text/plain")
.header("Cache-Control", "no-cache")
.header("Access-Control-Allow-Origin", "*")
.body(b"Not Found".to_vec())
.unwrap()
}

View File

@@ -3,7 +3,7 @@ use crate::danmu2ass;
use crate::database::video::VideoRow;
use crate::database::{account::AccountRow, record::RecordRow};
use crate::database::{Database, DatabaseError};
use crate::ffmpeg::{clip_from_m3u8, encode_video_danmu};
use crate::ffmpeg::{clip_from_m3u8, encode_video_danmu, Range};
use crate::progress_reporter::{EventEmitter, ProgressReporter};
use crate::recorder::bilibili::{BiliRecorder, BiliRecorderOptions};
use crate::recorder::danmu::DanmuEntry;
@@ -32,22 +32,19 @@ pub struct RecorderList {
pub recorders: Vec<RecorderInfo>,
}
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct ClipRangeParams {
pub title: String,
pub cover: String,
pub platform: String,
pub room_id: u64,
pub live_id: String,
/// Clip range start in seconds
pub x: i64,
/// Clip range end in seconds
pub y: i64,
/// Timestamp of first stream segment in seconds
pub offset: i64,
pub range: Option<Range>,
/// Encode danmu after clip
pub danmu: bool,
pub local_offset: i64,
/// Fix encoding after clip
pub fix_encoding: bool,
}
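// Range comes from crate::ffmpeg (imported above) and its definition is outside
// this diff. Based purely on how it is used below (range.start, range.duration(),
// and Display via r.to_string()), a plausible shape is roughly:
//
//   pub struct Range { pub start: f64, pub end: f64 }
//   impl Range { pub fn duration(&self) -> f64 { self.end - self.start } }
//
// This is an assumption for illustration only, not the actual definition in ffmpeg.rs.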
#[derive(Debug, Clone)]
@@ -202,14 +199,13 @@ impl RecorderManager {
let clip_config = ClipRangeParams {
title: live_record.title,
cover: "".into(),
platform: live_record.platform,
platform: live_record.platform.clone(),
room_id,
live_id: live_id.to_string(),
x: 0,
y: 0,
offset: recorder.first_segment_ts(live_id).await,
range: None,
danmu: encode_danmu,
local_offset: 0,
fix_encoding: false,
};
let clip_filename = self.config.read().await.generate_clip_name(&clip_config);
@@ -249,6 +245,7 @@ impl RecorderManager {
desc: "".into(),
tags: "".into(),
area: 0,
platform: live_record.platform.clone(),
})
.await
{
@@ -291,7 +288,8 @@ impl RecorderManager {
let platform = PlatformType::from_str(&recorder.platform).unwrap();
let room_id = recorder.room_id;
let auto_start = recorder.auto_start;
recorder_map.insert((platform, room_id), auto_start);
let extra = recorder.extra;
recorder_map.insert((platform, room_id), (auto_start, extra));
}
let mut recorders_to_add = Vec::new();
for (platform, room_id) in recorder_map.keys() {
@@ -306,7 +304,7 @@ impl RecorderManager {
if self.is_migrating.load(std::sync::atomic::Ordering::Relaxed) {
break;
}
let auto_start = recorder_map.get(&(platform, room_id)).unwrap();
let (auto_start, extra) = recorder_map.get(&(platform, room_id)).unwrap();
let account = self
.db
.get_account_by_platform(platform.clone().as_str())
@@ -318,7 +316,7 @@ impl RecorderManager {
let account = account.unwrap();
if let Err(e) = self
.add_recorder(&account, platform, room_id, *auto_start)
.add_recorder(&account, platform, room_id, extra, *auto_start)
.await
{
log::error!("Failed to add recorder: {}", e);
@@ -333,6 +331,7 @@ impl RecorderManager {
account: &AccountRow,
platform: PlatformType,
room_id: u64,
extra: &str,
auto_start: bool,
) -> Result<(), RecorderManagerError> {
let recorder_id = format!("{}:{}", platform.as_str(), room_id);
@@ -362,6 +361,7 @@ impl RecorderManager {
self.app_handle.clone(),
self.emitter.clone(),
room_id,
extra,
self.config.clone(),
account,
&self.db,
@@ -474,14 +474,21 @@ impl RecorderManager {
params: &ClipRangeParams,
) -> Result<PathBuf, RecorderManagerError> {
let range_m3u8 = format!(
"{}/{}/{}/playlist.m3u8?start={}&end={}",
params.platform, params.room_id, params.live_id, params.x, params.y
"{}/{}/{}/playlist.m3u8",
params.platform, params.room_id, params.live_id
);
let manifest_content = self.handle_hls_request(&range_m3u8).await?;
let manifest_content = String::from_utf8(manifest_content)
let mut manifest_content = String::from_utf8(manifest_content)
.map_err(|e| RecorderManagerError::ClipError { err: e.to_string() })?;
// if manifest is for stream, replace EXT-X-PLAYLIST-TYPE:EVENT to EXT-X-PLAYLIST-TYPE:VOD, and add #EXT-X-ENDLIST
if manifest_content.contains("#EXT-X-PLAYLIST-TYPE:EVENT") {
manifest_content =
manifest_content.replace("#EXT-X-PLAYLIST-TYPE:EVENT", "#EXT-X-PLAYLIST-TYPE:VOD");
manifest_content += "\n#EXT-X-ENDLIST\n";
}
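// Example: a manifest taken from a stream that is still live, e.g.
//   #EXT-X-PLAYLIST-TYPE:EVENT
//   #EXTINF:4.0000,
//   123.ts
// is rewritten to declare PLAYLIST-TYPE:VOD and gains a trailing #EXT-X-ENDLIST,
// so the clipping step sees a finished playlist rather than one it would keep
// polling for new segments.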
let cache_path = self.config.read().await.cache.clone();
let cache_path = Path::new(&cache_path);
let random_filename = format!("manifest_{}.m3u8", uuid::Uuid::new_v4());
@@ -496,7 +503,15 @@ impl RecorderManager {
.await
.map_err(|e| RecorderManagerError::ClipError { err: e.to_string() })?;
if let Err(e) = clip_from_m3u8(reporter, &tmp_manifest_file_path, &clip_file).await {
if let Err(e) = clip_from_m3u8(
reporter,
&tmp_manifest_file_path,
&clip_file,
params.range.as_ref(),
params.fix_encoding,
)
.await
{
log::error!("Failed to generate clip file: {}", e);
return Err(RecorderManagerError::ClipError { err: e.to_string() });
}
@@ -504,6 +519,14 @@ impl RecorderManager {
// remove temp file
let _ = tokio::fs::remove_file(tmp_manifest_file_path).await;
// check clip_file exists
if !clip_file.exists() {
log::error!("Clip file not found: {}", clip_file.display());
return Err(RecorderManagerError::ClipError {
err: "Clip file not found".into(),
});
}
if !params.danmu {
return Ok(clip_file);
}
@@ -515,20 +538,24 @@ impl RecorderManager {
}
log::info!(
"Filter danmus in range [{}, {}] with global offset {} and local offset {}",
params.x,
params.y,
params.offset,
"Filter danmus in range {} with local offset {}",
params
.range
.as_ref()
.map_or("None".to_string(), |r| r.to_string()),
params.local_offset
);
let mut danmus = danmus.unwrap();
log::debug!("First danmu entry: {:?}", danmus.first());
// update entry ts to offset
if let Some(range) = &params.range {
// update entry ts to offset and filter danmus in range
for d in &mut danmus {
d.ts -= (params.x + params.offset + params.local_offset) * 1000;
d.ts -= (range.start as i64 + params.local_offset) * 1000;
}
if range.duration() > 0.0 {
danmus.retain(|x| x.ts >= 0 && x.ts <= (range.duration() as i64) * 1000);
}
if params.x != 0 || params.y != 0 {
danmus.retain(|x| x.ts >= 0 && x.ts <= (params.y - params.x) * 1000);
}
if danmus.is_empty() {
@@ -598,6 +625,36 @@ impl RecorderManager {
Ok(self.db.get_record(room_id, live_id).await?)
}
pub async fn get_archive_subtitle(
&self,
platform: PlatformType,
room_id: u64,
live_id: &str,
) -> Result<String, RecorderManagerError> {
let recorder_id = format!("{}:{}", platform.as_str(), room_id);
if let Some(recorder_ref) = self.recorders.read().await.get(&recorder_id) {
let recorder = recorder_ref.as_ref();
Ok(recorder.get_archive_subtitle(live_id).await?)
} else {
Err(RecorderManagerError::NotFound { room_id })
}
}
pub async fn generate_archive_subtitle(
&self,
platform: PlatformType,
room_id: u64,
live_id: &str,
) -> Result<String, RecorderManagerError> {
let recorder_id = format!("{}:{}", platform.as_str(), room_id);
if let Some(recorder_ref) = self.recorders.read().await.get(&recorder_id) {
let recorder = recorder_ref.as_ref();
Ok(recorder.generate_archive_subtitle(live_id).await?)
} else {
Err(RecorderManagerError::NotFound { room_id })
}
}
pub async fn delete_archive(
&self,
platform: PlatformType,

View File

@@ -3,12 +3,85 @@ use std::path::Path;
use crate::progress_reporter::ProgressReporterTrait;
pub mod whisper;
pub mod whisper_cpp;
pub mod whisper_online;
// subtitle_generator types
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq)]
pub enum SubtitleGeneratorType {
Whisper,
WhisperOnline,
}
#[derive(Debug, Clone)]
pub struct GenerateResult {
pub generator_type: SubtitleGeneratorType,
pub subtitle_id: String,
pub subtitle_content: Vec<srtparse::Item>,
}
impl GenerateResult {
pub fn concat(&mut self, other: &GenerateResult, offset_seconds: u64) {
let mut to_extend = other.subtitle_content.clone();
let last_item_index = self.subtitle_content.len();
for (i, item) in to_extend.iter_mut().enumerate() {
item.pos = last_item_index + i + 1;
item.start_time = add_offset(&item.start_time, offset_seconds);
item.end_time = add_offset(&item.end_time, offset_seconds);
}
self.subtitle_content.extend(to_extend);
}
}
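// Example: if `self` already holds items at pos 1..=2 and `other` holds two items
// starting at 00:00:01, then self.concat(&other, 30) appends them as pos 3 and 4
// with start/end times shifted to 00:00:31 onwards; offset_seconds is presumably
// the playback offset of the chunk that produced `other`.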
fn add_offset(item: &srtparse::Time, offset: u64) -> srtparse::Time {
let mut total_seconds = item.seconds + offset;
let mut total_minutes = item.minutes;
let mut total_hours = item.hours;
// Handle seconds overflow (>= 60)
if total_seconds >= 60 {
let additional_minutes = total_seconds / 60;
total_seconds %= 60;
total_minutes += additional_minutes;
}
// Handle minutes overflow (>= 60)
if total_minutes >= 60 {
let additional_hours = total_minutes / 60;
total_minutes %= 60;
total_hours += additional_hours;
}
srtparse::Time {
hours: total_hours,
minutes: total_minutes,
seconds: total_seconds,
milliseconds: item.milliseconds,
}
}
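// Example: a 45 s offset applied to 00:59:30,250 carries through both overflow
// branches: seconds become 75 -> 15 with one extra minute, minutes become 60 -> 0
// with one extra hour, giving 01:00:15,250. Milliseconds are left untouched.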
pub fn item_to_srt(item: &srtparse::Item) -> String {
let start_time = format!(
"{:02}:{:02}:{:02},{:03}",
item.start_time.hours,
item.start_time.minutes,
item.start_time.seconds,
item.start_time.milliseconds
);
let end_time = format!(
"{:02}:{:02}:{:02},{:03}",
item.end_time.hours,
item.end_time.minutes,
item.end_time.seconds,
item.end_time.milliseconds
);
format!(
"{}\n{} --> {}\n{}\n\n",
item.pos, start_time, end_time, item.text
)
}
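// For an item with pos 1, text "hello", spanning 00:00:01,000 to 00:00:02,500,
// this produces the standard SRT block
//   1
//   00:00:01,000 --> 00:00:02,500
//   hello
// followed by a blank line, so joining the per-item strings yields a valid .srt file.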
impl SubtitleGeneratorType {
@@ -16,12 +89,14 @@ impl SubtitleGeneratorType {
pub fn as_str(&self) -> &'static str {
match self {
SubtitleGeneratorType::Whisper => "whisper",
SubtitleGeneratorType::WhisperOnline => "whisper_online",
}
}
#[allow(dead_code)]
pub fn from_str(s: &str) -> Option<Self> {
match s {
"whisper" => Some(SubtitleGeneratorType::Whisper),
"whisper_online" => Some(SubtitleGeneratorType::WhisperOnline),
_ => None,
}
}
@@ -31,8 +106,8 @@ impl SubtitleGeneratorType {
pub trait SubtitleGenerator {
async fn generate_subtitle(
&self,
reporter: &impl ProgressReporterTrait,
video_path: &Path,
output_path: &Path,
) -> Result<String, String>;
reporter: Option<&impl ProgressReporterTrait>,
audio_path: &Path,
language_hint: &str,
) -> Result<GenerateResult, String>;
}
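// A sketch of calling the updated trait when no reporter is available: because the
// reporter parameter is generic, callers must still name a concrete reporter type,
// mirroring the douyin recorder above (assuming ProgressReporter implements
// ProgressReporterTrait; the generator variable and path are placeholders):
//
//   let result = generator
//       .generate_subtitle(
//           None::<&crate::progress_reporter::ProgressReporter>,
//           Path::new("audio.wav"),
//           "auto",
//       )
//       .await?;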

View File

@@ -1,9 +1,11 @@
use async_trait::async_trait;
use crate::progress_reporter::ProgressReporterTrait;
use crate::{
progress_reporter::ProgressReporterTrait,
subtitle_generator::{GenerateResult, SubtitleGeneratorType},
};
use async_std::sync::{Arc, RwLock};
use std::path::Path;
use tokio::io::AsyncWriteExt;
use whisper_rs::{FullParams, SamplingStrategy, WhisperContext, WhisperContextParameters};
use super::SubtitleGenerator;
@@ -34,10 +36,10 @@ pub async fn new(model: &Path, prompt: &str) -> Result<WhisperCPP, String> {
impl SubtitleGenerator for WhisperCPP {
async fn generate_subtitle(
&self,
reporter: &impl ProgressReporterTrait,
reporter: Option<&impl ProgressReporterTrait>,
audio_path: &Path,
output_path: &Path,
) -> Result<String, String> {
language_hint: &str,
) -> Result<GenerateResult, String> {
log::info!("Generating subtitle for {:?}", audio_path);
let start_time = std::time::Instant::now();
let audio = hound::WavReader::open(audio_path).map_err(|e| e.to_string())?;
@@ -52,8 +54,8 @@ impl SubtitleGenerator for WhisperCPP {
let mut params = FullParams::new(SamplingStrategy::Greedy { best_of: 1 });
// and set the language to translate to to auto
params.set_language(None);
// and set the language
params.set_language(Some(language_hint));
params.set_initial_prompt(self.prompt.as_str());
// we also explicitly disable anything that prints to stdout
@@ -68,7 +70,9 @@ impl SubtitleGenerator for WhisperCPP {
let mut inter_samples = vec![Default::default(); samples.len()];
if let Some(reporter) = reporter {
reporter.update("处理音频中");
}
if let Err(e) = whisper_rs::convert_integer_to_float_audio(&samples, &mut inter_samples) {
return Err(e.to_string());
}
@@ -80,17 +84,14 @@ impl SubtitleGenerator for WhisperCPP {
let samples = samples.unwrap();
if let Some(reporter) = reporter {
reporter.update("生成字幕中");
}
if let Err(e) = state.full(params, &samples[..]) {
log::error!("failed to run model: {}", e);
return Err(e.to_string());
}
// open the output file
let mut output_file = tokio::fs::File::create(output_path).await.map_err(|e| {
log::error!("failed to create output file: {}", e);
e.to_string()
})?;
// fetch the results
let num_segments = state.full_n_segments().map_err(|e| e.to_string())?;
let mut subtitle = String::new();
@@ -102,8 +103,14 @@ impl SubtitleGenerator for WhisperCPP {
let format_time = |timestamp: f64| {
let hours = (timestamp / 3600.0).floor();
let minutes = ((timestamp - hours * 3600.0) / 60.0).floor();
let seconds = timestamp - hours * 3600.0 - minutes * 60.0;
format!("{:02}:{:02}:{:06.3}", hours, minutes, seconds).replace(".", ",")
let seconds = (timestamp - hours * 3600.0 - minutes * 60.0).floor();
let milliseconds = ((timestamp - hours * 3600.0 - minutes * 60.0 - seconds)
* 1000.0)
.floor() as u32;
format!(
"{:02}:{:02}:{:02},{:03}",
hours, minutes, seconds, milliseconds
)
};
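// e.g. format_time(75.5) == "00:01:15,500": hours and minutes are floored out
// first, the remaining whole seconds are floored, and the fractional remainder
// becomes the comma-separated millisecond field required by SRT.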
let line = format!(
@@ -117,18 +124,16 @@ impl SubtitleGenerator for WhisperCPP {
subtitle.push_str(&line);
}
output_file
.write_all(subtitle.as_bytes())
.await
.map_err(|e| {
log::error!("failed to write to output file: {}", e);
e.to_string()
})?;
log::info!("Subtitle generated: {:?}", output_path);
log::info!("Time taken: {} seconds", start_time.elapsed().as_secs_f64());
Ok(subtitle)
let subtitle_content = srtparse::from_str(&subtitle)
.map_err(|e| format!("Failed to parse subtitle: {}", e))?;
Ok(GenerateResult {
generator_type: SubtitleGeneratorType::Whisper,
subtitle_id: "".to_string(),
subtitle_content,
})
}
}
@@ -178,10 +183,9 @@ mod tests {
.await
.unwrap();
let audio_path = Path::new("tests/audio/test.wav");
let output_path = Path::new("tests/audio/test.srt");
let reporter = MockReporter::new();
let result = whisper
.generate_subtitle(&reporter, audio_path, output_path)
.generate_subtitle(Some(&reporter), audio_path, "auto")
.await;
if let Err(e) = result {
println!("Error: {}", e);

View File

@@ -0,0 +1,247 @@
use async_trait::async_trait;
use reqwest::Client;
use serde::Deserialize;
use std::path::Path;
use tokio::fs;
use crate::{
progress_reporter::ProgressReporterTrait,
subtitle_generator::{GenerateResult, SubtitleGenerator, SubtitleGeneratorType},
};
#[derive(Debug, Clone)]
pub struct WhisperOnline {
client: Client,
api_url: String,
api_key: Option<String>,
prompt: Option<String>,
}
#[derive(Debug, Deserialize)]
struct WhisperResponse {
segments: Vec<WhisperSegment>,
}
#[derive(Debug, Deserialize)]
struct WhisperSegment {
start: f64,
end: f64,
text: String,
}
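// Sketch of the subset of the verbose_json transcription payload these structs
// deserialize; extra fields (e.g. "language", "duration", per-segment token data)
// are simply ignored by serde:
//   { "segments": [ { "start": 0.0, "end": 2.5, "text": "hello" } ] }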
pub async fn new(
api_url: Option<&str>,
api_key: Option<&str>,
prompt: Option<&str>,
) -> Result<WhisperOnline, String> {
let client = Client::builder()
.timeout(std::time::Duration::from_secs(300)) // 5 minutes timeout
.build()
.map_err(|e| format!("Failed to create HTTP client: {}", e))?;
let api_url = api_url.unwrap_or("https://api.openai.com/v1");
let api_url = api_url.to_string() + "/audio/transcriptions";
Ok(WhisperOnline {
client,
api_url: api_url.to_string(),
api_key: api_key.map(|k| k.to_string()),
prompt: prompt.map(|p| p.to_string()),
})
}
#[async_trait]
impl SubtitleGenerator for WhisperOnline {
async fn generate_subtitle(
&self,
reporter: Option<&impl ProgressReporterTrait>,
audio_path: &Path,
language_hint: &str,
) -> Result<GenerateResult, String> {
log::info!("Generating subtitle online for {:?}", audio_path);
let start_time = std::time::Instant::now();
// Read audio file
if let Some(reporter) = reporter {
reporter.update("读取音频文件中");
}
let audio_data = fs::read(audio_path)
.await
.map_err(|e| format!("Failed to read audio file: {}", e))?;
// Get file extension for proper MIME type
let file_extension = audio_path
.extension()
.and_then(|ext| ext.to_str())
.unwrap_or("wav");
let mime_type = match file_extension.to_lowercase().as_str() {
"wav" => "audio/wav",
"mp3" => "audio/mpeg",
"m4a" => "audio/mp4",
"flac" => "audio/flac",
_ => "audio/wav",
};
// Build form data with proper file part
let file_part = reqwest::multipart::Part::bytes(audio_data)
.mime_str(mime_type)
.map_err(|e| format!("Failed to set MIME type: {}", e))?
.file_name(
audio_path
.file_name()
.unwrap_or_default()
.to_string_lossy()
.to_string(),
);
let mut form = reqwest::multipart::Form::new()
.part("file", file_part)
.text("model", "whisper-1")
.text("response_format", "verbose_json")
.text("temperature", "0.0");
form = form.text("language", language_hint.to_string());
if let Some(prompt) = self.prompt.clone() {
form = form.text("prompt", prompt);
}
// Build HTTP request
let mut req_builder = self.client.post(&self.api_url);
if let Some(api_key) = &self.api_key {
req_builder = req_builder.header("Authorization", format!("Bearer {}", api_key));
}
if let Some(reporter) = reporter {
reporter.update("上传音频中");
}
let response = req_builder
.timeout(std::time::Duration::from_secs(3 * 60)) // 3 minutes timeout
.multipart(form)
.send()
.await
.map_err(|e| format!("HTTP request failed: {}", e))?;
let status = response.status();
if !status.is_success() {
let error_text = response.text().await.unwrap_or_default();
log::error!("API request failed with status {}: {}", status, error_text);
return Err(format!(
"API request failed with status {}: {}",
status, error_text
));
}
// Get the raw response text first for debugging
let response_text = response
.text()
.await
.map_err(|e| format!("Failed to get response text: {}", e))?;
// Try to parse as JSON
let whisper_response: WhisperResponse =
serde_json::from_str(&response_text).map_err(|e| {
println!("{}", response_text);
log::error!(
"Failed to parse JSON response. Raw response: {}",
response_text
);
format!("Failed to parse response: {}", e)
})?;
// Generate SRT format subtitle
let mut subtitle = String::new();
for (i, segment) in whisper_response.segments.iter().enumerate() {
let format_time = |timestamp: f64| {
let hours = (timestamp / 3600.0).floor();
let minutes = ((timestamp - hours * 3600.0) / 60.0).floor();
let seconds = (timestamp - hours * 3600.0 - minutes * 60.0).floor();
let milliseconds = ((timestamp - hours * 3600.0 - minutes * 60.0 - seconds)
* 1000.0)
.floor() as u32;
format!(
"{:02}:{:02}:{:02},{:03}",
hours, minutes, seconds, milliseconds
)
};
let line = format!(
"{}\n{} --> {}\n{}\n\n",
i + 1,
format_time(segment.start),
format_time(segment.end),
segment.text.trim(),
);
subtitle.push_str(&line);
}
log::info!("Time taken: {} seconds", start_time.elapsed().as_secs_f64());
let subtitle_content = srtparse::from_str(&subtitle)
.map_err(|e| format!("Failed to parse subtitle: {}", e))?;
Ok(GenerateResult {
generator_type: SubtitleGeneratorType::WhisperOnline,
subtitle_id: "".to_string(),
subtitle_content,
})
}
}
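The request built above is an OpenAI-compatible `POST {api_url}/audio/transcriptions` call: a multipart form carrying the audio file, `model=whisper-1`, `response_format=verbose_json`, a temperature of 0, plus optional `language` and `prompt` fields, authenticated with a bearer token when an API key is configured. A minimal TypeScript sketch of the same request, for illustration only (the function and type names below are assumptions, not project code):

```ts
interface WhisperSegment { start: number; end: number; text: string; }
interface WhisperResponse { segments: WhisperSegment[]; }

// Sketch of the transcription request that WhisperOnline::generate_subtitle builds above.
async function transcribe(
  audio: Blob,
  fileName: string,
  apiUrl = "https://api.openai.com/v1",
  apiKey?: string,
  language = "auto",
  prompt?: string,
): Promise<WhisperResponse> {
  const form = new FormData();
  form.append("file", audio, fileName);
  form.append("model", "whisper-1");
  form.append("response_format", "verbose_json");
  form.append("temperature", "0.0");
  form.append("language", language);
  if (prompt) form.append("prompt", prompt);

  const res = await fetch(`${apiUrl}/audio/transcriptions`, {
    method: "POST",
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : undefined,
    body: form,
  });
  if (!res.ok) {
    throw new Error(`API request failed with status ${res.status}: ${await res.text()}`);
  }
  return (await res.json()) as WhisperResponse;
}
```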
#[cfg(test)]
mod tests {
use super::*;
use async_trait::async_trait;
// Mock reporter for testing
#[derive(Clone)]
struct MockReporter {}
#[async_trait]
impl ProgressReporterTrait for MockReporter {
fn update(&self, message: &str) {
println!("Mock update: {}", message);
}
async fn finish(&self, success: bool, message: &str) {
if success {
println!("Mock finish: {}", message);
} else {
println!("Mock error: {}", message);
}
}
}
impl MockReporter {
fn new() -> Self {
MockReporter {}
}
}
#[tokio::test]
async fn test_create_whisper_online() {
let result = new(Some("https://api.openai.com/v1"), Some("test-key"), None).await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_generate_subtitle() {
let result = new(Some("https://api.openai.com/v1"), Some("sk-****"), None).await;
assert!(result.is_ok());
let result = result.unwrap();
let result = result
.generate_subtitle(
Some(&MockReporter::new()),
Path::new("tests/audio/test.wav"),
"auto",
)
.await;
println!("{:?}", result);
assert!(result.is_ok());
let result = result.unwrap();
println!("{:?}", result.subtitle_content);
}
}
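The `format_time` closure and the segment loop above are what turn `verbose_json` segments into SRT: each segment gets a running index, a `HH:MM:SS,mmm --> HH:MM:SS,mmm` line, and its trimmed text, separated by blank lines. The same arithmetic, restated as a TypeScript sketch for illustration:

```ts
// Convert fractional seconds to an SRT timestamp, e.g. 3725.5 -> "01:02:05,500".
function srtTimestamp(seconds: number): string {
  const pad = (n: number, width = 2) => Math.floor(n).toString().padStart(width, "0");
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = Math.floor(seconds % 60);
  const ms = Math.floor((seconds % 1) * 1000);
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms, 3)}`;
}

// Render segments as numbered SRT blocks, mirroring the Rust loop above.
function toSrt(segments: { start: number; end: number; text: string }[]): string {
  return segments
    .map((seg, i) => `${i + 1}\n${srtTimestamp(seg.start)} --> ${srtTimestamp(seg.end)}\n${seg.text.trim()}\n`)
    .join("\n");
}
```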


@@ -22,6 +22,11 @@
"plugins": {
"sql": {
"preload": ["sqlite:data_v2.db"]
},
"deep-link": {
"desktop": {
"schemes": ["bsr"]
}
}
},
"app": {


@@ -5,9 +5,47 @@
import Setting from "./page/Setting.svelte";
import Account from "./page/Account.svelte";
import About from "./page/About.svelte";
import { log } from "./lib/invoker";
import { log, onOpenUrl } from "./lib/invoker";
import Clip from "./page/Clip.svelte";
import Task from "./page/Task.svelte";
import AI from "./page/AI.svelte";
import { onMount } from "svelte";
let active = "总览";
onMount(async () => {
await onOpenUrl((urls: string[]) => {
console.log("Received Deep Link:", urls);
if (urls.length > 0) {
const url = urls[0];
// extract platform and room_id from url
// url example:
// bsr://live.bilibili.com/167537?live_from=85001&spm_id_from=333.1365.live_users.item.click
// bsr://live.douyin.com/200525029536
let platform = "";
let room_id = "";
if (url.startsWith("bsr://live.bilibili.com/")) {
// 1. remove bsr://live.bilibili.com/
// 2. remove all query params
room_id = url.replace("bsr://live.bilibili.com/", "").split("?")[0];
platform = "bilibili";
}
if (url.startsWith("bsr://live.douyin.com/")) {
room_id = url.replace("bsr://live.douyin.com/", "").split("?")[0];
platform = "douyin";
}
if (platform && room_id) {
// switch to room page
active = "直播间";
}
}
});
});
log.info("App loaded");
</script>
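The deep-link handler above recognizes two URL shapes under the `bsr` scheme registered in `tauri.conf.json` earlier in this diff, strips the host prefix, and drops query parameters to recover the room id. The same parsing, factored into a standalone helper as an illustration (the helper name is an assumption):

```ts
// Parse a bsr:// deep link into platform + room id, or null if it is not recognized.
// Examples from the comments above:
//   bsr://live.bilibili.com/167537?live_from=85001 -> { platform: "bilibili", roomId: "167537" }
//   bsr://live.douyin.com/200525029536             -> { platform: "douyin",   roomId: "200525029536" }
function parseBsrUrl(url: string): { platform: string; roomId: string } | null {
  const hosts: Record<string, string> = {
    "bsr://live.bilibili.com/": "bilibili",
    "bsr://live.douyin.com/": "douyin",
  };
  for (const [prefix, platform] of Object.entries(hosts)) {
    if (url.startsWith(prefix)) {
      const roomId = url.slice(prefix.length).split("?")[0]; // drop query parameters
      if (roomId) return { platform, roomId };
    }
  }
  return null;
}
```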
@@ -28,6 +66,15 @@
<div class="page" class:visible={active == "直播间"}>
<Room />
</div>
<div class="page" class:visible={active == "切片"}>
<Clip />
</div>
<div class="page" class:visible={active == "任务"}>
<Task />
</div>
<div class="page" class:visible={active == "助手"}>
<AI />
</div>
<div class="page" class:visible={active == "账号"}>
<Account />
</div>

src/AppClip.svelte (new file, 106 lines)

@@ -0,0 +1,106 @@
<script lang="ts">
import { invoke, convertFileSrc, convertCoverSrc } from "./lib/invoker";
import { onMount } from "svelte";
import VideoPreview from "./lib/VideoPreview.svelte";
import type { Config, VideoItem } from "./lib/interface";
import { set_title } from "./lib/invoker";
let video: VideoItem | null = null;
let videos: any[] = [];
let showVideoPreview = false;
let roomId: number | null = null;
let config: Config = null;
invoke("get_config").then((c) => {
config = c as Config;
});
onMount(async () => {
const videoId = new URLSearchParams(window.location.search).get("id");
if (videoId) {
try {
// 获取视频信息
const videoData = await invoke("get_video", { id: parseInt(videoId) });
roomId = (videoData as VideoItem).room_id;
// update window title to file name
set_title((videoData as VideoItem).file);
// 获取房间下的所有视频列表
if (roomId !== null && roomId !== undefined) {
const videoList = (await invoke("get_videos", { roomId: roomId })) as VideoItem[];
videos = await Promise.all(videoList.map(async (v) => {
return {
id: v.id,
value: v.id,
name: v.file,
file: await convertFileSrc(v.file),
cover: v.cover,
};
}));
}
// find video in videos
let new_video = videos.find((v) => v.id === parseInt(videoId));
handleVideoChange(new_video);
// 显示视频预览
showVideoPreview = true;
} catch (error) {
console.error("Failed to load video:", error);
}
}
});
async function handleVideoChange(newVideo: VideoItem) {
if (newVideo) {
// get cover from video
const cover = await invoke("get_video_cover", { id: newVideo.id }) as string;
// 对于非空的封面路径使用convertCoverSrc转换
if (cover && cover.trim() !== "") {
newVideo.cover = await convertCoverSrc(cover, newVideo.id);
} else {
newVideo.cover = "";
}
}
video = newVideo;
}
async function handleVideoListUpdate() {
if (roomId !== null && roomId !== undefined) {
const videosData = await invoke("get_videos", { roomId });
videos = await Promise.all((videosData as VideoItem[]).map(async (v) => {
return {
id: v.id,
value: v.id,
name: v.file,
file: await convertFileSrc(v.file),
cover: v.cover, // 这里保持原样因为get_videos返回的是VideoNoCover类型不包含完整封面数据
};
}));
}
}
</script>
{#if showVideoPreview && video && (roomId !== null && roomId !== undefined)}
<VideoPreview
bind:show={showVideoPreview}
{video}
{videos}
{roomId}
onVideoChange={handleVideoChange}
onVideoListUpdate={handleVideoListUpdate}
/>
{:else}
<main
class="flex items-center justify-center h-screen bg-[#1c1c1e] text-white"
>
<div class="text-center">
<div
class="animate-spin h-8 w-8 border-2 border-[#0A84FF] border-t-transparent rounded-full mx-auto mb-4"
></div>
<p>加载中...</p>
</div>
</main>
{/if}
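Because `convertFileSrc` (and `convertCoverSrc`) are awaited in this component, mapping the raw video list cannot stay a plain synchronous `Array.map`; each element becomes a promise and `Promise.all` resolves them together before assignment. A sketch of that pattern under assumed type names (illustrative, not project code):

```ts
interface RawVideo { id: number; file: string; cover: string; }
interface PlayableVideo { id: number; value: number; name: string; file: string; cover: string; }

// Resolve every file path to a playable URL before the list is handed to the UI.
async function toPlayable(
  list: RawVideo[],
  convertFileSrc: (path: string) => Promise<string>,
): Promise<PlayableVideo[]> {
  return Promise.all(
    list.map(async (v) => ({
      id: v.id,
      value: v.id,
      name: v.file,
      file: await convertFileSrc(v.file), // async path-to-URL conversion
      cover: v.cover,
    })),
  );
}
```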


@@ -4,26 +4,24 @@
set_title,
TAURI_ENV,
convertFileSrc,
convertCoverSrc,
listen,
log,
} from "./lib/invoker";
import Player from "./lib/Player.svelte";
import type { AccountInfo, RecordItem } from "./lib/db";
import type { RecordItem } from "./lib/db";
import { ChevronRight, ChevronLeft, Play, Pen } from "lucide-svelte";
import {
type Profile,
type VideoItem,
type Config,
type Marker,
type ProgressUpdate,
type ProgressFinished,
type DanmuEntry,
clipRange,
generateEventId,
} from "./lib/interface";
import TypeSelect from "./lib/TypeSelect.svelte";
import MarkerPanel from "./lib/MarkerPanel.svelte";
import CoverEditor from "./lib/CoverEditor.svelte";
import VideoPreview from "./lib/VideoPreview.svelte";
import { onDestroy, onMount } from "svelte";
const urlParams = new URLSearchParams(window.location.search);
@@ -35,88 +33,161 @@
log.info("AppLive loaded", room_id, platform, live_id);
// get profile in local storage with a default value
let profile: Profile = get_profile();
let config: Config = null;
invoke("get_config").then((c) => {
config = c as Config;
console.log(config);
});
function get_profile(): Profile {
const profile_str = window.localStorage.getItem("profile-" + room_id);
if (profile_str && profile_str.includes("videos")) {
return JSON.parse(profile_str);
}
return default_profile();
}
$: {
window.localStorage.setItem("profile-" + room_id, JSON.stringify(profile));
}
function default_profile(): Profile {
return {
videos: [],
cover: "",
cover43: null,
title: "",
copyright: 1,
tid: 27,
tag: "",
desc_format_id: 9999,
desc: "",
recreate: -1,
dynamic: "",
interactive: 0,
act_reserve_create: 0,
no_disturbance: 0,
no_reprint: 0,
subtitle: {
open: 0,
lan: "",
},
dolby: 0,
lossless_music: 0,
up_selection_reply: false,
up_close_danmu: false,
up_close_reply: false,
web_os: 0,
};
}
let current_clip_event_id = null;
let current_post_event_id = null;
let danmu_enabled = false;
let fix_encoding = false;
// 弹幕相关变量
let danmu_records: DanmuEntry[] = [];
let filtered_danmu: DanmuEntry[] = [];
let danmu_search_text = "";
// 虚拟滚动相关变量
let danmu_container_height = 0;
let danmu_item_height = 80; // 预估每个弹幕项的高度
let visible_start_index = 0;
let visible_end_index = 0;
let scroll_top = 0;
let container_ref: HTMLElement;
let scroll_timeout: ReturnType<typeof setTimeout>;
// 计算可见区域的弹幕
function calculate_visible_danmu() {
if (!container_ref || filtered_danmu.length === 0) return;
const container_height = container_ref.clientHeight;
const buffer = 10; // 缓冲区,多渲染几个项目
visible_start_index = Math.max(
0,
Math.floor(scroll_top / danmu_item_height) - buffer
);
visible_end_index = Math.min(
filtered_danmu.length,
Math.ceil((scroll_top + container_height) / danmu_item_height) + buffer
);
}
// 处理滚动事件(带防抖)
function handle_scroll(event: Event) {
const target = event.target as HTMLElement;
scroll_top = target.scrollTop;
// 清除之前的定时器
if (scroll_timeout) {
clearTimeout(scroll_timeout);
}
// 防抖处理,避免频繁计算
scroll_timeout = setTimeout(() => {
calculate_visible_danmu();
}, 16); // 约60fps
}
// 监听容器大小变化
function handle_resize() {
if (container_ref) {
danmu_container_height = container_ref.clientHeight;
calculate_visible_danmu();
}
}
// 监听弹幕数据变化,更新过滤结果
$: {
if (danmu_records) {
// 如果当前有搜索文本,重新过滤
if (danmu_search_text) {
filter_danmu();
} else {
// 否则直接复制所有弹幕
filtered_danmu = [...danmu_records];
}
// 重新计算可见区域
calculate_visible_danmu();
}
}
// 监听容器引用变化
$: if (container_ref) {
handle_resize();
}
// 过滤弹幕
function filter_danmu() {
filtered_danmu = danmu_records.filter((danmu) => {
// 只按内容过滤
if (
danmu_search_text &&
!danmu.content.toLowerCase().includes(danmu_search_text.toLowerCase())
) {
return false;
}
return true;
});
}
// 监听弹幕搜索变化
$: {
if (danmu_search_text !== undefined && danmu_records) {
filter_danmu();
}
}
// 格式化时间(ts 为毫秒)
function format_time(milliseconds: number): string {
const seconds = Math.floor(milliseconds / 1000);
const minutes = Math.floor(seconds / 60);
const hours = Math.floor(minutes / 60)
.toString()
.padStart(2, "0");
const remaining_seconds = (seconds % 60).toString().padStart(2, "0");
const remaining_minutes = (minutes % 60).toString().padStart(2, "0");
return `${hours}:${remaining_minutes}:${remaining_seconds}`;
}
// 将时长(单位: 秒)格式化为 "X小时 Y分 Z秒"
function format_duration_seconds(totalSecondsFloat: number): string {
const totalSeconds = Math.max(0, Math.floor(totalSecondsFloat));
const hours = Math.floor(totalSeconds / 3600);
const minutes = Math.floor((totalSeconds % 3600) / 60);
const seconds = totalSeconds % 60;
const parts = [] as string[];
if (hours > 0) parts.push(`${hours}小时`);
if (minutes > 0) parts.push(`${minutes}分`);
parts.push(`${seconds}秒`);
return parts.join(" ");
}
// 跳转到弹幕时间点
function seek_to_danmu(danmu: DanmuEntry) {
if (player) {
const time_in_seconds = danmu.ts / 1000;
player.seek(time_in_seconds);
}
}
const update_listener = listen<ProgressUpdate>(`progress-update`, (e) => {
console.log("progress-update event", e.payload.id);
let event_id = e.payload.id;
if (event_id === current_clip_event_id) {
update_clip_prompt(e.payload.content);
} else if (event_id === current_post_event_id) {
update_post_prompt(e.payload.content);
}
});
const finished_listener = listen<ProgressFinished>(
`progress-finished`,
(e) => {
console.log("progress-finished event", e.payload.id);
let event_id = e.payload.id;
if (event_id === current_clip_event_id) {
console.log("clip event finished", event_id);
update_clip_prompt(`生成切片`);
if (!e.payload.success) {
alert("请检查 ffmpeg 是否配置正确:" + e.payload.message);
}
current_clip_event_id = null;
} else if (event_id === current_post_event_id) {
update_post_prompt(`投稿`);
if (!e.payload.success) {
alert(e.payload.message);
}
current_post_event_id = null;
}
}
);
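The danmu list above is virtualized by hand: only the rows that overlap the scroll viewport (plus a buffer of ten rows on each side) are rendered, and two spacer `div`s sized from the estimated 80 px row height keep the scrollbar geometry correct, while `handle_scroll` debounces the recalculation to roughly once per frame. The index arithmetic, pulled out into a standalone sketch (names are assumptions):

```ts
// Compute which rows of a fixed-height list to render for the current scroll position.
// itemHeight is an estimated per-row height; buffer rows are rendered beyond both edges.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemCount: number,
  itemHeight: number,
  buffer = 10,
): { start: number; end: number; topPad: number; bottomPad: number } {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - buffer);
  const end = Math.min(itemCount, Math.ceil((scrollTop + viewportHeight) / itemHeight) + buffer);
  return {
    start,
    end,
    topPad: start * itemHeight,                // spacer height above the rendered slice
    bottomPad: (itemCount - end) * itemHeight, // spacer height below it
  };
}

// Example: 5000 rows of ~80 px in a 600 px viewport scrolled to 8000 px
// renders roughly rows 90..118 instead of all 5000.
const range = visibleRange(8000, 600, 5000, 80);
```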
@@ -124,6 +195,11 @@
onDestroy(() => {
update_listener?.then((fn) => fn());
finished_listener?.then((fn) => fn());
// 清理滚动定时器
if (scroll_timeout) {
clearTimeout(scroll_timeout);
}
});
let archive: RecordItem = null;
@@ -140,7 +216,7 @@
end = parseFloat(localStorage.getItem(`${live_id}_end`)) - focus_start;
}
console.log("Loaded start and end", start, end);
function generateCover() {
const video = document.getElementById("video") as HTMLVideoElement;
@@ -154,17 +230,13 @@
return canvas.toDataURL();
}
let preview = false;
let show_cover_editor = false;
let show_clip_confirm = false;
let text_style = {
position: { x: 8, y: 8 },
fontSize: 24,
color: "#FF7F00",
};
let uid_selected = 0;
let video_selected = 0;
let accounts = [];
let videos = [];
let selected_video = null;
@@ -180,24 +252,19 @@
// Initialize video element when component is mounted
onMount(() => {
video = document.getElementById("video") as HTMLVideoElement;
});
invoke("get_accounts").then((account_info: AccountInfo) => {
accounts = account_info.accounts.map((a) => {
return {
value: a.uid,
name: a.name,
platform: a.platform,
};
});
accounts = accounts.filter((a) => a.platform === "bilibili");
// 初始化虚拟滚动
setTimeout(() => {
if (container_ref) {
handle_resize();
}
}, 100);
});
get_video_list();
invoke("get_archive", { roomId: room_id, liveId: live_id }).then(
(a: RecordItem) => {
console.log(a);
archive = a;
set_title(`[${room_id}]${archive.title}`);
}
@@ -211,37 +278,33 @@
}
}
function update_post_prompt(str: string) {
const span = document.getElementById("post-prompt");
if (span) {
span.textContent = str;
}
}
async function get_video_list() {
videos = (
(await invoke("get_videos", { roomId: room_id })) as VideoItem[]
).map((v) => {
const videoList = (await invoke("get_videos", { roomId: room_id })) as VideoItem[];
videos = await Promise.all(videoList.map(async (v) => {
return {
id: v.id,
value: v.id,
name: v.file,
file: convertFileSrc(config.output + "/" + v.file),
file: await convertFileSrc(v.file),
cover: v.cover,
};
});
}));
}
function find_video(e) {
async function find_video(e) {
if (!e.target) {
selected_video = null;
return;
}
const id = parseInt(e.target.value);
selected_video = videos.find((v) => {
let target_video = videos.find((v) => {
return v.value == id;
});
console.log("video selected", videos, selected_video, e, id);
if (target_video) {
const rawCover = await invoke("get_video_cover", { id: id }) as string;
target_video.cover = await convertCoverSrc(rawCover, id);
}
selected_video = target_video;
}
async function generate_clip() {
@@ -263,21 +326,22 @@
update_clip_prompt(`切片生成中`);
let event_id = generateEventId();
current_clip_event_id = event_id;
let new_video = await clipRange(event_id, {
let new_video = (await clipRange(event_id, {
title: archive.title,
room_id: room_id,
platform: platform,
cover: new_cover,
live_id: live_id,
x: Math.floor(focus_start + start),
y: Math.floor(focus_start + end),
range: {
start: focus_start + start,
end: focus_start + end,
},
danmu: danmu_enabled,
offset: global_offset,
local_offset:
parseInt(localStorage.getItem(`local_offset:${live_id}`) || "0", 10) ||
0,
});
console.log("video file generatd:", new_video);
fix_encoding,
})) as VideoItem;
await get_video_list();
video_selected = new_video.id;
selected_video = videos.find((v) => {
@@ -288,30 +352,6 @@
}
}
async function do_post() {
if (!selected_video) {
return;
}
let event_id = generateEventId();
current_post_event_id = event_id;
update_post_prompt(`投稿上传中`);
// update profile in local storage
window.localStorage.setItem("profile-" + room_id, JSON.stringify(profile));
invoke("upload_procedure", {
uid: uid_selected,
eventId: event_id,
roomId: room_id,
videoId: video_selected,
cover: selected_video.cover,
profile: profile,
}).then(async () => {
video_selected = 0;
await get_video_list();
});
}
async function cancel_clip() {
if (!current_clip_event_id) {
return;
@@ -319,13 +359,6 @@
invoke("cancel", { eventId: current_clip_event_id });
}
async function cancel_post() {
if (!current_post_event_id) {
return;
}
invoke("cancel", { eventId: current_post_event_id });
}
async function delete_video() {
if (!selected_video) {
return;
@@ -363,6 +396,10 @@
a.download = video_name;
a.click();
}
async function open_clip(video_id: number) {
await invoke("open_clip", { videoId: video_id });
}
</script>
<main>
@@ -407,6 +444,7 @@
bind:end
bind:global_offset
bind:this={player}
bind:danmu_records
{focus_start}
{focus_end}
{platform}
@@ -422,19 +460,6 @@
markers = markers.sort((a, b) => a.offset - b.offset);
}}
/>
<VideoPreview
bind:show={preview}
video={selected_video}
roomId={room_id}
{videos}
onVideoChange={(video) => {
selected_video = video;
}}
onClose={() => {
preview = false;
}}
onVideoListUpdate={get_video_list}
/>
</div>
<div
class="flex relative h-screen border-solid bg-gray-950 border-l-2 border-gray-800 text-white transition-all duration-300 ease-in-out"
@@ -462,18 +487,11 @@
class:opacity-100={!rpanel_collapsed}
class:invisible={rpanel_collapsed}
>
<!-- 顶部标题栏 -->
<div
class="flex-none sticky top-0 z-10 backdrop-blur-xl bg-[#1c1c1e]/80 px-6 py-4 border-b border-gray-800/50"
>
<h2 class="text-lg font-medium">视频投稿</h2>
</div>
<!-- 内容区域 -->
<div class="flex-1 overflow-y-auto">
<div class="px-6 py-4 space-y-8">
<div class="flex-1 overflow-hidden flex flex-col">
<div class="px-6 py-4 space-y-8 flex flex-col h-full">
<!-- 切片操作区 -->
<section class="space-y-3">
<section class="space-y-3 flex-shrink-0">
<div class="flex items-center justify-between">
<h3 class="text-sm font-medium text-gray-300">切片列表</h3>
<div class="flex space-x-2">
@@ -541,29 +559,18 @@
{/if}
</div>
</section>
<!-- 封面预览 -->
{#if selected_video && selected_video.id != -1}
<section>
<section class="flex-shrink-0">
<div class="group">
<div
class="text-sm text-gray-400 mb-2 flex items-center justify-between"
>
<span>视频封面</span>
<button
class="text-[#0A84FF] hover:text-[#0A84FF]/80 transition-colors duration-200 flex items-center space-x-1"
on:click={() => (show_cover_editor = true)}
>
<Pen class="w-4 h-4" />
<span class="text-xs">创建新封面</span>
</button>
</div>
<!-- svelte-ignore a11y-click-events-have-key-events -->
<div
id="capture"
class="relative rounded-xl overflow-hidden bg-black/20 border border-gray-800/50 cursor-pointer group"
on:click={() => {
on:click={async () => {
pauseVideo();
preview = true;
await open_clip(selected_video.id);
}}
>
<div
@@ -586,160 +593,92 @@
</section>
{/if}
<!-- 表单区域 -->
<section class="space-y-8">
<!-- 基本信息 -->
<div class="space-y-4">
<h3 class="text-sm font-medium text-gray-400">基本信息</h3>
<!-- 标题 -->
<div class="space-y-2">
<label
for="title"
class="block text-sm font-medium text-gray-300">标题</label
>
<!-- 弹幕列表区 -->
<section class="space-y-3 flex flex-col flex-1 min-h-0">
<div class="flex items-center justify-between flex-shrink-0">
<h3 class="text-sm font-medium text-gray-300">弹幕列表</h3>
</div>
<div class="space-y-3 flex flex-col flex-1 min-h-0">
<!-- 搜索 -->
<div class="space-y-2 flex-shrink-0">
<input
id="title"
type="text"
bind:value={profile.title}
placeholder="输入视频标题"
bind:value={danmu_search_text}
placeholder="搜索弹幕内容..."
class="w-full px-3 py-2 bg-[#2c2c2e] text-white rounded-lg
border border-gray-800/50 focus:border-[#0A84FF]
transition duration-200 outline-none
hover:border-gray-700/50"
placeholder-gray-500"
/>
</div>
<!-- 视频分区 -->
<div class="space-y-2">
<label
for="tid"
class="block text-sm font-medium text-gray-300"
>视频分区</label
>
<div class="w-full" id="tid">
<TypeSelect bind:value={profile.tid} />
</div>
<!-- 弹幕统计 -->
<div class="text-xs text-gray-400 flex-shrink-0">
{danmu_records.length} 条弹幕,显示 {filtered_danmu.length}
</div>
<!-- 投稿账号 -->
<div id="uid" class="space-y-2">
<label
for="uid"
class="block text-sm font-medium text-gray-300"
>投稿账号</label
<!-- 弹幕列表 -->
<div
bind:this={container_ref}
on:scroll={handle_scroll}
class="flex-1 overflow-y-auto space-y-2 sidebar-scrollbar min-h-0 danmu-container"
>
<select
bind:value={uid_selected}
class="w-full px-3 py-2 bg-[#2c2c2e] text-white rounded-lg
border border-gray-800/50 focus:border-[#0A84FF]
transition duration-200 outline-none appearance-none
hover:border-gray-700/50"
<!-- 顶部占位符 -->
<div
style="height: {visible_start_index * danmu_item_height}px;"
/>
<!-- 可见的弹幕项 -->
{#each filtered_danmu.slice(visible_start_index, visible_end_index) as danmu, index (visible_start_index + index)}
<!-- svelte-ignore a11y-click-events-have-key-events -->
<div
class="p-3 bg-[#2c2c2e] rounded-lg border border-gray-800/50
hover:border-[#0A84FF]/50 transition-all duration-200
cursor-pointer group danmu-item"
style="content-visibility: auto; contain-intrinsic-size: {danmu_item_height}px;"
on:click={() => seek_to_danmu(danmu)}
>
{#each accounts as account}
<option value={account.value}>{account.name}</option>
<div class="flex items-start justify-between">
<div class="flex-1 min-w-0">
<p
class="text-sm text-white break-words leading-relaxed"
>
{danmu.content}
</p>
</div>
<div class="ml-3 flex-shrink-0">
<span
class="text-xs text-gray-400 bg-[#1c1c1e] px-2 py-1 rounded
group-hover:text-[#0A84FF] transition-colors duration-200"
>
{format_time(danmu.ts)}
</span>
</div>
</div>
</div>
{/each}
</select>
</div>
</div>
<!-- 详细信息 -->
<div class="space-y-4">
<h3 class="text-sm font-medium text-gray-400">详细信息</h3>
<!-- 描述 -->
<div class="space-y-2">
<label
for="desc"
class="block text-sm font-medium text-gray-300">描述</label
>
<textarea
id="desc"
bind:value={profile.desc}
placeholder="输入视频描述"
class="w-full px-3 py-2 bg-[#2c2c2e] text-white rounded-lg
border border-gray-800/50 focus:border-[#0A84FF]
transition duration-200 outline-none resize-none h-32
hover:border-gray-700/50"
<!-- 底部占位符 -->
<div
style="height: {(filtered_danmu.length -
visible_end_index) *
danmu_item_height}px;"
/>
</div>
<!-- 标签 -->
<div class="space-y-2">
<label
for="tag"
class="block text-sm font-medium text-gray-300">标签</label
>
<input
id="tag"
type="text"
bind:value={profile.tag}
placeholder="输入视频标签,用逗号分隔"
class="w-full px-3 py-2 bg-[#2c2c2e] text-white rounded-lg
border border-gray-800/50 focus:border-[#0A84FF]
transition duration-200 outline-none
hover:border-gray-700/50"
/>
{#if filtered_danmu.length === 0}
<div class="text-center py-8 text-gray-500">
{danmu_records.length === 0
? "暂无弹幕数据"
: "没有匹配的弹幕"}
</div>
<!-- 动态 -->
<div class="space-y-2">
<label
for="dynamic"
class="block text-sm font-medium text-gray-300">动态</label
>
<textarea
id="dynamic"
bind:value={profile.dynamic}
placeholder="输入动态内容"
class="w-full px-3 py-2 bg-[#2c2c2e] text-white rounded-lg
border border-gray-800/50 focus:border-[#0A84FF]
transition duration-200 outline-none resize-none h-32
hover:border-gray-700/50"
/>
{/if}
</div>
</div>
</section>
<!-- 投稿按钮 -->
{#if selected_video}
<div class="h-10" />
{/if}
</div>
</div>
<!-- 底部按钮 -->
{#if selected_video}
<div
class="flex-none sticky bottom-0 px-6 py-4 bg-gradient-to-t from-[#1c1c1e] via-[#1c1c1e] to-transparent"
>
<div class="flex gap-3">
<button
on:click={do_post}
disabled={current_post_event_id != null}
class="flex-1 px-4 py-2.5 bg-[#0A84FF] text-white rounded-lg
transition-all duration-200 hover:bg-[#0A84FF]/90
disabled:opacity-50 disabled:cursor-not-allowed
flex items-center justify-center space-x-2"
>
{#if current_post_event_id != null}
<div
class="w-4 h-4 border-2 border-current border-t-transparent rounded-full animate-spin"
/>
{/if}
<span id="post-prompt">投稿</span>
</button>
{#if current_post_event_id != null}
<button
on:click={() => cancel_post()}
class="w-24 px-3 py-2 bg-red-500 text-white rounded-lg
transition-all duration-200 hover:bg-red-500/90
flex items-center justify-center"
>
取消
</button>
{/if}
</div>
</div>
{/if}
</div>
</div>
</div>
@@ -747,56 +686,82 @@
<!-- Clip Confirmation Dialog -->
{#if show_clip_confirm}
<div class="fixed inset-0 z-[100] flex items-center justify-center">
<div
class="fixed inset-0 bg-gray-900/50 backdrop-blur-sm flex items-center justify-center z-50"
class="absolute inset-0 bg-black/60 backdrop-blur-md"
role="button"
tabindex="0"
aria-label="关闭对话框"
on:click={() => (show_clip_confirm = false)}
on:keydown={(e) => {
if (e.key === "Escape" || e.key === "Enter" || e.key === " ") {
e.preventDefault();
show_clip_confirm = false;
}
}}
/>
<div
role="dialog"
aria-modal="true"
class="relative mx-4 w-full max-w-md rounded-2xl bg-[#1c1c1e] border border-white/10 shadow-2xl ring-1 ring-black/5"
>
<div class="bg-[#1c1c1e] rounded-lg p-6 max-w-md w-full mx-4">
<h3 class="text-lg font-medium text-white mb-4">确认生成切片</h3>
<div class="space-y-4">
<div class="text-sm text-gray-300">
<p>切片时长: {(end - start).toFixed(2)}</p>
<div class="p-5">
<h3 class="text-[17px] font-semibold text-white">确认生成切片</h3>
<p class="mt-1 text-[13px] text-white/70">请确认以下设置后继续</p>
<div class="mt-4 rounded-xl bg-[#2c2c2e] border border-white/10 p-3">
<div class="text-[13px] text-white/80">切片时长</div>
<div
class="mt-0.5 text-[22px] font-semibold tracking-tight text-white"
>
{format_duration_seconds(end - start)}
</div>
<div class="flex items-center space-x-2">
</div>
<div class="mt-3 space-y-3">
<label class="flex items-center gap-2.5">
<input
type="checkbox"
id="confirm-danmu-checkbox"
bind:checked={danmu_enabled}
class="w-4 h-4 text-[#0A84FF] bg-[#2c2c2e] border-gray-800 rounded focus:ring-[#0A84FF] focus:ring-offset-[#1c1c1e]"
class="h-4 w-4 rounded border-white/30 bg-[#2c2c2e] text-[#0A84FF] accent-[#0A84FF] focus:outline-none focus:ring-2 focus:ring-[#0A84FF]/40"
/>
<label for="confirm-danmu-checkbox" class="text-sm text-gray-300"
>压制弹幕</label
>
<span class="text-[13px] text-white/80">压制弹幕</span>
</label>
<label class="flex items-center gap-2.5">
<input
type="checkbox"
id="confirm-fix-encoding-checkbox"
bind:checked={fix_encoding}
class="h-4 w-4 rounded border-white/30 bg-[#2c2c2e] text-[#0A84FF] accent-[#0A84FF] focus:outline-none focus:ring-2 focus:ring-[#0A84FF]/40"
/>
<span class="text-[13px] text-white/80">修复编码</span>
</label>
</div>
<div class="flex justify-end space-x-3">
</div>
<div
class="flex items-center justify-end gap-2 rounded-b-2xl border-t border-white/10 bg-[#111113] px-5 py-3"
>
<button
on:click={() => (show_clip_confirm = false)}
class="px-4 py-2 text-gray-300 hover:text-white transition-colors duration-200"
class="px-3.5 py-2 text-[13px] rounded-lg border border-white/20 text-white/90 hover:bg-white/10 transition-colors"
>
取消
</button>
<button
on:click={confirm_generate_clip}
class="px-4 py-2 bg-[#0A84FF] text-white rounded-lg hover:bg-[#0A84FF]/90 transition-colors duration-200"
class="px-3.5 py-2 text-[13px] rounded-lg bg-[#0A84FF] text-white shadow-[inset_0_1px_0_rgba(255,255,255,.15)] hover:bg-[#0A84FF]/90 transition-colors"
>
确认生成
</button>
</div>
</div>
</div>
</div>
{/if}
<CoverEditor
bind:show={show_cover_editor}
video={selected_video}
on:coverUpdate={(event) => {
selected_video = {
...selected_video,
cover: event.detail.cover,
};
}}
/>
<style>
main {
width: 100vw;
@@ -826,4 +791,34 @@
background-color: rgb(3 7 18 / var(--tw-bg-opacity));
transform: translateY(-50%);
}
/* 弹幕列表滚动条样式 */
.sidebar-scrollbar::-webkit-scrollbar {
width: 6px;
}
.sidebar-scrollbar::-webkit-scrollbar-track {
background: rgba(44, 44, 46, 0.3);
border-radius: 3px;
}
.sidebar-scrollbar::-webkit-scrollbar-thumb {
background: rgba(10, 132, 255, 0.5);
border-radius: 3px;
}
.sidebar-scrollbar::-webkit-scrollbar-thumb:hover {
background: rgba(10, 132, 255, 0.7);
}
/* 虚拟滚动优化 */
.danmu-container {
will-change: scroll-position;
contain: layout style paint;
}
.danmu-item {
contain: layout style paint;
will-change: transform;
}
</style>

src/lib/AIMessage.svelte (new file, 214 lines)

@@ -0,0 +1,214 @@
<script lang="ts">
import {
Bot,
Check,
X,
AlertTriangle,
} from "lucide-svelte";
import { AIMessage } from "@langchain/core/messages";
import { marked } from "marked";
export let message: AIMessage;
export let formatTime: (date: Date) => string;
export let onToolCallConfirm: ((toolCall: any) => void) | undefined =
undefined;
export let onToolCallReject: ((toolCall: any) => void) | undefined =
undefined;
export let toolCallState: 'confirmed' | 'rejected' | 'none' = 'none';
export let isSensitiveToolCall: boolean = false;
// 检查是否被内容过滤
$: isContentFiltered = message.response_metadata?.finish_reason === "content_filter";
// 获取消息时间戳,如果没有则使用当前时间
$: messageTime = message.additional_kwargs?.timestamp
? new Date(message.additional_kwargs.timestamp as string)
: new Date();
// 将 Markdown 转换为 HTML
$: htmlContent = marked(
typeof message.content === "string"
? message.content
: Array.isArray(message.content)
? message.content
.map((c) => (typeof c === "string" ? c : JSON.stringify(c)))
.join("\n")
: JSON.stringify(message.content)
);
// 检查消息是否包含表格
$: hasTable = message.content && typeof message.content === 'string' &&
(message.content.includes('|') || message.content.includes('---') ||
message.content.includes('|--') || message.content.includes('| -'));
// 处理工具调用确认
function handleToolCallConfirm(toolCall: any) {
if (onToolCallConfirm) {
onToolCallConfirm(toolCall);
}
}
// 处理工具调用拒绝
function handleToolCallReject(toolCall: any) {
if (onToolCallReject) {
onToolCallReject(toolCall);
}
}
</script>
<div class="flex justify-start">
<div class="flex items-start space-x-3" class:max-w-2xl={!hasTable} class:max-w-4xl={hasTable}>
<div
class="w-8 h-8 rounded-full bg-blue-500 flex items-center justify-center flex-shrink-0"
>
<Bot class="w-4 h-4 text-white" />
</div>
<div class="flex flex-col space-y-1">
<div class="flex items-center space-x-2">
<span class="text-sm font-medium text-gray-700 dark:text-gray-300">
小轴
</span>
<span class="text-xs text-gray-500 dark:text-gray-400">
{formatTime(messageTime)}
</span>
</div>
<div
class="bg-white dark:bg-gray-800 rounded-2xl px-4 py-3 shadow-sm border border-gray-200 dark:border-gray-700"
>
<!-- 内容过滤警告 -->
{#if isContentFiltered}
<div class="mb-3 p-3 bg-yellow-50 dark:bg-yellow-900/20 border border-yellow-200 dark:border-yellow-700 rounded-lg">
<div class="flex items-center space-x-2">
<AlertTriangle class="w-4 h-4 text-yellow-600 dark:text-yellow-400" />
<span class="text-sm font-medium text-yellow-800 dark:text-yellow-200">
内容被过滤
</span>
</div>
<p class="text-xs text-yellow-700 dark:text-yellow-300 mt-1">
由于内容安全策略,部分回复内容可能已被过滤。请尝试重新表述您的问题。
</p>
</div>
{/if}
<div
class="text-gray-900 dark:text-white text-sm leading-relaxed prose prose-sm max-w-none [&_.prose]:bg-transparent [&_.prose_*]:bg-transparent [&_p]:bg-transparent [&_div]:bg-transparent [&_span]:bg-transparent [&_code]:bg-gray-100 dark:bg-gray-700 [&_pre]:bg-gray-100 dark:bg-gray-700 [&_blockquote]:bg-transparent [&_ul]:bg-transparent [&_ol]:bg-transparent [&_li]:bg-transparent [&_h1]:bg-transparent [&_h2]:bg-transparent [&_h3]:bg-transparent [&_h4]:bg-transparent [&_h5]:bg-transparent [&_h6]:bg-transparent [&_p]:m-0 [&_p]:p-0 [&_div]:m-0 [&_div]:p-0 [&_ul]:m-0 [&_ul]:p-0 [&_ol]:m-0 [&_ol]:p-0 [&_li]:m-0 [&_li]:p-0 [&_li]:mb-0 [&_li]:mt-0 [&_h1]:m-0 [&_h1]:p-0 [&_h2]:m-0 [&_h2]:p-0 [&_h3]:m-0 [&_h3]:p-0 [&_h4]:m-0 [&_h4]:p-0 [&_h5]:m-0 [&_h5]:p-0 [&_h6]:m-0 [&_h6]:p-0 [&_blockquote]:m-0 [&_blockquote]:p-0"
>
{#if hasTable}
<div class="table-container">
{@html htmlContent}
</div>
{:else}
{@html htmlContent}
{/if}
</div>
{#if message.tool_calls && message.tool_calls.length > 0}
<div class="space-y-2 mt-3">
{#each message.tool_calls as tool_call}
<div
class="bg-blue-50 dark:bg-blue-900/20 border border-blue-200 dark:border-blue-700 rounded-lg p-3"
>
<div class="flex items-center space-x-2 mb-2">
<div
class="w-5 h-5 rounded bg-blue-500 flex items-center justify-center"
>
<svg
class="w-3 h-3 text-white"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="2"
d="M10.325 4.317c.426-1.756 2.924-1.756 3.35 0a1.724 1.724 0 002.573 1.066c1.543-.94 3.31.826 2.37 2.37a1.724 1.724 0 001.065 2.572c1.756.426 1.756 2.924 0 3.35a1.724 1.724 0 00-1.066 2.573c.94 1.543-.826 3.31-2.37 2.37a1.724 1.724 0 00-2.572 1.065c-.426 1.756-2.924 1.756-3.35 0a1.724 1.724 0 00-2.573-1.066c-1.543.94-3.31-.826-2.37-2.37a1.724 1.724 0 00-1.065-2.572c-1.756-.426-1.756-2.924 0-3.35a1.724 1.724 0 001.066-2.573c-.94-1.543.826-3.31 2.37-2.37.996.608 2.296.07 2.572-1.065z"
></path>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="2"
d="M15 12a3 3 0 11-6 0 3 3 0 016 0z"
></path>
</svg>
</div>
<span
class="text-sm font-medium text-blue-700 dark:text-blue-300"
>
工具调用: {tool_call.name}
</span>
</div>
{#if tool_call.args && Object.keys(tool_call.args).length > 0}
<div class="mb-2">
<div
class="text-xs font-medium text-gray-600 dark:text-gray-400 mb-1"
>
参数:
</div>
<div class="bg-gray-50 dark:bg-gray-800 rounded p-2">
<pre
class="text-xs text-gray-700 dark:text-gray-300 whitespace-pre-wrap break-words">{JSON.stringify(
tool_call.args,
null,
2
)}</pre>
</div>
</div>
{/if}
{#if tool_call.id}
<div class="text-xs text-gray-500 dark:text-gray-400 mb-2">
ID: {tool_call.id}
</div>
{/if}
<!-- 工具调用状态或操作按钮 -->
{#if isSensitiveToolCall}
<div
class="flex items-center justify-between mt-3 pt-2 border-t border-blue-200 dark:border-blue-700"
>
{#if tool_call.id && toolCallState === 'confirmed'}
<!-- 显示状态 -->
<div class="flex items-center space-x-2 text-green-600 dark:text-green-400">
<Check class="w-4 h-4" />
<span class="text-sm font-medium">已确认</span>
</div>
{:else if toolCallState === 'rejected'}
<div class="flex items-center space-x-2 text-red-600 dark:text-red-400">
<X class="w-4 h-4" />
<span class="text-sm font-medium">已拒绝</span>
</div>
{:else if onToolCallConfirm || onToolCallReject}
<!-- 显示操作按钮 -->
{#if onToolCallReject}
<button
on:click={() => handleToolCallReject(tool_call)}
class="flex items-center space-x-1 px-4 py-2 bg-red-500 hover:bg-red-600 active:bg-red-700 text-white text-xs font-medium rounded-lg shadow-sm transition-all duration-200 focus:outline-none focus:ring-2 focus:ring-red-500 focus:ring-offset-2 dark:focus:ring-offset-gray-800"
>
<X class="w-3 h-3" />
<span>拒绝</span>
</button>
{/if}
{#if onToolCallConfirm}
<button
on:click={() => handleToolCallConfirm(tool_call)}
class="flex items-center space-x-1 px-4 py-2 bg-blue-500 hover:bg-blue-600 active:bg-blue-700 text-white text-xs font-medium rounded-lg shadow-sm transition-all duration-200 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-offset-2 dark:focus:ring-offset-gray-800"
>
<Check class="w-3 h-3" />
<span>确认</span>
</button>
{/if}
{/if}
</div>
{/if}
</div>
{/each}
</div>
{/if}
</div>
</div>
</div>
</div>
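Two small pieces of logic in the script above are worth calling out: the LangChain message content can be a string, an array of content parts, or something else entirely, so it is flattened to one markdown string before being passed to `marked`; and a rough pipe/dash heuristic decides whether the bubble should widen for tables. Both restated as a sketch (helper names are assumptions):

```ts
import type { AIMessage } from "@langchain/core/messages";

// Flatten AIMessage.content (string | array of parts | other) into one markdown string,
// mirroring the reactive statement above.
function contentToMarkdown(message: AIMessage): string {
  const c = message.content;
  if (typeof c === "string") return c;
  if (Array.isArray(c)) {
    return c.map((part) => (typeof part === "string" ? part : JSON.stringify(part))).join("\n");
  }
  return JSON.stringify(c);
}

// Crude table heuristic: any pipe or markdown rule widens the message bubble.
function looksLikeTable(markdown: string): boolean {
  return markdown.includes("|") || markdown.includes("---");
}

// Rendering is then simply marked(contentToMarkdown(message)), as in the component above.
```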


@@ -1,5 +1,14 @@
<script>
import { Info, LayoutDashboard, Settings, Users, Video } from "lucide-svelte";
import {
FileVideo,
Info,
LayoutDashboard,
List,
Settings,
Users,
Video,
Brain,
} from "lucide-svelte";
import { hasNewVersion } from "./stores/version";
import SidebarItem from "./SidebarItem.svelte";
import { createEventDispatcher } from "svelte";
@@ -30,6 +39,21 @@
<Video class="w-5 h-5" />
</div>
</SidebarItem>
<SidebarItem label="切片" {activeUrl} on:activeChange={navigate}>
<div slot="icon">
<FileVideo class="w-5 h-5" />
</div>
</SidebarItem>
<SidebarItem label="任务" {activeUrl} on:activeChange={navigate}>
<div slot="icon">
<List class="w-5 h-5" />
</div>
</SidebarItem>
<SidebarItem label="助手" {activeUrl} on:activeChange={navigate}>
<div slot="icon">
<Brain class="w-5 h-5" />
</div>
</SidebarItem>
<SidebarItem label="账号" {activeUrl} on:activeChange={navigate}>
<div slot="icon">
<Users class="w-5 h-5" />


@@ -236,7 +236,10 @@
function handleVideoLoaded() {
isVideoLoaded = true;
duration = videoElement.duration;
// 延迟一点时间确保视频完全准备好
setTimeout(() => {
updateCoverFromVideo();
}, 100);
}
function handleTimeUpdate() {
@@ -248,10 +251,13 @@
const time = parseFloat(target.value);
if (videoElement) {
videoElement.currentTime = time;
updateCoverFromVideo();
}
}
function handleVideoSeeked() {
updateCoverFromVideo();
}
function formatTime(seconds: number): string {
const mins = Math.floor(seconds / 60);
const secs = Math.floor(seconds % 60);
@@ -259,15 +265,46 @@
}
function updateCoverFromVideo() {
if (!videoElement) return;
if (!videoElement || !isVideoLoaded) return;
// 确保视频已经准备好并且有有效的尺寸
if (videoElement.videoWidth === 0 || videoElement.videoHeight === 0) {
// 如果视频尺寸无效,等待一下再重试
setTimeout(() => {
if (
videoElement &&
videoElement.videoWidth > 0 &&
videoElement.videoHeight > 0
) {
updateCoverFromVideo();
}
}, 100);
return;
}
try {
const tempCanvas = document.createElement("canvas");
tempCanvas.width = videoElement.videoWidth;
tempCanvas.height = videoElement.videoHeight;
const tempCtx = tempCanvas.getContext("2d");
tempCtx.drawImage(videoElement, 0, 0, tempCanvas.width, tempCanvas.height);
if (!tempCtx) {
log.error("Failed to get canvas context");
return;
}
tempCtx.drawImage(
videoElement,
0,
0,
tempCanvas.width,
tempCanvas.height
);
videoFrame = tempCanvas.toDataURL("image/jpeg");
loadBackgroundImage();
} catch (error) {
log.error("Failed to capture video frame:", error);
}
}
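Frame capture above reduces to three steps: wait until the video element reports a real decoded size, draw the current frame onto an off-screen canvas, then export it as a JPEG data URL for the cover editor. A standalone, browser-only sketch of that core step (illustrative):

```ts
// Capture the current frame of a <video> element as a JPEG data URL,
// or return null if no frame is decoded yet or the canvas context is unavailable.
function captureVideoFrame(video: HTMLVideoElement): string | null {
  if (video.videoWidth === 0 || video.videoHeight === 0) {
    return null; // caller can retry after "loadedmetadata" or "seeked"
  }
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  if (!ctx) return null;
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  return canvas.toDataURL("image/jpeg");
}
```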
function handleClose() {
@@ -333,6 +370,10 @@
// 监听 show 变化,当模态框显示时重新绘制
$: if (show && ctx) {
setTimeout(() => {
if (isVideoLoaded && videoElement) {
// 如果视频已加载,更新封面
updateCoverFromVideo();
}
loadBackgroundImage();
resizeCanvas();
}, 50);
@@ -402,6 +443,7 @@
crossorigin="anonymous"
on:loadedmetadata={handleVideoLoaded}
on:timeupdate={handleTimeUpdate}
on:seeked={handleVideoSeeked}
/>
<!-- Video Controls -->


@@ -0,0 +1,41 @@
<script lang="ts">
import { User } from "lucide-svelte";
import { HumanMessage } from "@langchain/core/messages";
export let message: HumanMessage;
export let formatTime: (date: Date) => string;
// 获取消息时间戳,如果没有则使用当前时间
$: messageTime = message.additional_kwargs?.timestamp
? new Date(message.additional_kwargs.timestamp as string | number)
: new Date();
</script>
<div class="flex justify-end">
<div class="flex items-start space-x-3 max-w-2xl">
<div class="flex flex-col space-y-1">
<div class="flex items-center space-x-2">
<span class="text-sm font-medium text-gray-700 dark:text-gray-300">
</span>
<span class="text-xs text-gray-500 dark:text-gray-400">
{formatTime(messageTime)}
</span>
</div>
<div
class="bg-white dark:bg-gray-800 rounded-2xl px-4 py-3 shadow-sm border border-gray-200 dark:border-gray-700"
>
<div class="text-gray-900 dark:text-white text-sm leading-relaxed">
{message.content}
</div>
</div>
</div>
<div
class="w-8 h-8 rounded-full bg-gray-500 flex items-center justify-center flex-shrink-0"
>
<User class="w-4 h-4 text-white" />
</div>
</div>
</div>


@@ -0,0 +1,399 @@
<script lang="ts">
import { invoke, TAURI_ENV, ENDPOINT, listen } from "../lib/invoker";
import { Upload, X, CheckCircle } from "lucide-svelte";
import { createEventDispatcher, onDestroy } from "svelte";
import { open } from "@tauri-apps/plugin-dialog";
import type { ProgressUpdate, ProgressFinished } from "./interface";
export let showDialog = false;
export let roomId: number | null = null;
const dispatch = createEventDispatcher();
let selectedFilePath: string | null = null;
let selectedFileName: string = "";
let selectedFileSize: number = 0;
let videoTitle = "";
let importing = false;
let uploading = false;
let uploadProgress = 0;
let dragOver = false;
let fileInput: HTMLInputElement;
let importProgress = "";
let currentImportEventId: string | null = null;
// 格式化文件大小
function formatFileSize(sizeInBytes: number): string {
if (sizeInBytes === 0) return "0 B";
const units = ["B", "KB", "MB", "GB", "TB"];
const k = 1024;
let unitIndex = 0;
let size = sizeInBytes;
// 找到合适的单位
while (size >= k && unitIndex < units.length - 1) {
size /= k;
unitIndex++;
}
// 对于GB以上显示2位小数MB显示2位小数KB及以下显示1位小数
const decimals = unitIndex >= 3 ? 2 : (unitIndex >= 2 ? 2 : 1);
return size.toFixed(decimals) + " " + units[unitIndex];
}
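For reference, the formatter above divides by 1024 until the value fits the unit, then keeps one decimal for B/KB and two from MB upward. A standalone restatement with a few expected outputs (a sketch, not project code):

```ts
function formatFileSize(sizeInBytes: number): string {
  if (sizeInBytes === 0) return "0 B";
  const units = ["B", "KB", "MB", "GB", "TB"];
  let unitIndex = 0;
  let size = sizeInBytes;
  while (size >= 1024 && unitIndex < units.length - 1) {
    size /= 1024;
    unitIndex++;
  }
  const decimals = unitIndex >= 2 ? 2 : 1; // MB and above get two decimals, B/KB one
  return `${size.toFixed(decimals)} ${units[unitIndex]}`;
}

console.assert(formatFileSize(1536) === "1.5 KB");
console.assert(formatFileSize(5 * 1024 ** 2) === "5.00 MB");
console.assert(formatFileSize(3 * 1024 ** 3) === "3.00 GB");
```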
// 进度监听器
const progressUpdateListener = listen<ProgressUpdate>('progress-update', (e) => {
if (e.payload.id === currentImportEventId) {
importProgress = e.payload.content;
}
});
const progressFinishedListener = listen<ProgressFinished>('progress-finished', (e) => {
if (e.payload.id === currentImportEventId) {
if (e.payload.success) {
// 导入成功,关闭对话框并刷新列表
showDialog = false;
selectedFilePath = null;
selectedFileName = "";
selectedFileSize = 0;
videoTitle = "";
dispatch("imported");
} else {
alert("导入失败: " + e.payload.message);
}
importing = false;
currentImportEventId = null;
importProgress = "";
}
});
onDestroy(() => {
progressUpdateListener?.then(fn => fn());
progressFinishedListener?.then(fn => fn());
});
async function handleFileSelect() {
if (TAURI_ENV) {
// Tauri模式使用文件对话框
try {
const selected = await open({
multiple: false,
filters: [{
name: '视频文件',
extensions: ['mp4', 'mkv', 'avi', 'mov', 'wmv', 'flv', 'm4v', 'webm']
}]
});
if (selected && typeof selected === 'string') {
await setSelectedFile(selected);
}
} catch (error) {
console.error("文件选择失败:", error);
}
} else {
// Web模式触发文件输入
fileInput?.click();
}
}
async function handleFileInputChange(event: Event) {
const target = event.target as HTMLInputElement;
const file = target.files?.[0];
if (file) {
// 提前设置文件信息,提升用户体验
selectedFileName = file.name;
videoTitle = file.name.replace(/\.[^/.]+$/, ""); // 去掉扩展名
selectedFileSize = file.size;
await uploadFile(file);
}
}
async function handleDrop(event: DragEvent) {
event.preventDefault();
dragOver = false;
if (TAURI_ENV) return; // Tauri模式不支持拖拽
const files = event.dataTransfer?.files;
if (files && files.length > 0) {
const file = files[0];
// 检查文件类型
const allowedTypes = ['video/mp4', 'video/x-msvideo', 'video/quicktime', 'video/x-ms-wmv', 'video/x-flv', 'video/x-m4v', 'video/webm', 'video/x-matroska'];
if (allowedTypes.includes(file.type) || file.name.match(/\.(mp4|mkv|avi|mov|wmv|flv|m4v|webm)$/i)) {
// 提前设置文件信息,提升用户体验
selectedFileName = file.name;
videoTitle = file.name.replace(/\.[^/.]+$/, ""); // 去掉扩展名
selectedFileSize = file.size;
await uploadFile(file);
} else {
alert("请选择支持的视频文件格式 (MP4, MKV, AVI, MOV, WMV, FLV, M4V, WebM)");
}
}
}
async function uploadFile(file: File) {
uploading = true;
uploadProgress = 0;
try {
const formData = new FormData();
formData.append('file', file);
formData.append('roomId', String(roomId || 0));
const xhr = new XMLHttpRequest();
// 监听上传进度
xhr.upload.addEventListener('progress', (e) => {
if (e.lengthComputable) {
uploadProgress = Math.round((e.loaded / e.total) * 100);
}
});
// 处理上传完成
xhr.addEventListener('load', async () => {
if (xhr.status === 200) {
const response = JSON.parse(xhr.responseText);
if (response.code === 0 && response.data) {
// 使用本地文件信息,更快更准确
await setSelectedFile(response.data.filePath, file.size);
} else {
throw new Error(response.message || '上传失败');
}
} else {
throw new Error(`上传失败: HTTP ${xhr.status}`);
}
uploading = false;
});
xhr.addEventListener('error', () => {
alert("上传失败:网络错误");
uploading = false;
});
xhr.open('POST', `${ENDPOINT}/api/upload_file`);
xhr.send(formData);
} catch (error) {
console.error("上传失败:", error);
alert("上传失败: " + error);
uploading = false;
}
}
async function setSelectedFile(filePath: string, fileSize?: number) {
selectedFilePath = filePath;
selectedFileName = filePath.split(/[/\\]/).pop() || '';
videoTitle = selectedFileName.replace(/\.[^/.]+$/, ""); // 去掉扩展名
if (fileSize !== undefined) {
selectedFileSize = fileSize;
} else {
// 获取文件大小 (Tauri模式)
try {
selectedFileSize = await invoke("get_file_size", { filePath });
} catch (e) {
selectedFileSize = 0;
}
}
}
async function startImport() {
if (!selectedFilePath) return;
importing = true;
importProgress = "准备导入...";
try {
const eventId = "import_" + Date.now();
currentImportEventId = eventId;
await invoke("import_external_video", {
eventId: eventId,
filePath: selectedFilePath,
title: videoTitle,
originalName: selectedFileName,
size: selectedFileSize,
roomId: roomId || 0
});
// 注意成功处理移到了progressFinishedListener中
} catch (error) {
console.error("导入失败:", error);
alert("导入失败: " + error);
importing = false;
currentImportEventId = null;
importProgress = "";
}
}
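The import above follows the progress pattern used throughout this diff: the frontend invents an event id, passes it to the backend command, and the `progress-update` / `progress-finished` listeners ignore every payload whose id does not match. Distilled into a reusable helper as a sketch (the helper name and import path are assumptions; `invoke`/`listen` are the project's invoker wrappers, where `listen` resolves to an unlisten function):

```ts
import { invoke, listen } from "./lib/invoker";

interface ProgressFinished { id: string; success: boolean; message: string; }

// Run a long backend command and settle when its progress-finished event arrives.
// Events are correlated purely by the event id handed to the command.
async function runWithProgress(command: string, args: Record<string, unknown>): Promise<void> {
  const eventId = `${command}_${Date.now()}`;
  return new Promise<void>((resolve, reject) => {
    const unlisten = listen<ProgressFinished>("progress-finished", async (e) => {
      if (e.payload.id !== eventId) return; // some other task finished
      (await unlisten)();                   // stop listening for this task
      if (e.payload.success) {
        resolve();
      } else {
        reject(new Error(e.payload.message));
      }
    });
    // Kick off the backend task; it reports progress under the same eventId.
    invoke(command, { eventId, ...args }).catch(reject);
  });
}
```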
function closeDialog() {
showDialog = false;
selectedFilePath = null;
selectedFileName = "";
selectedFileSize = 0;
videoTitle = "";
uploading = false;
uploadProgress = 0;
importing = false;
currentImportEventId = null;
importProgress = "";
}
function handleDragOver(event: DragEvent) {
event.preventDefault();
if (!TAURI_ENV) {
dragOver = true;
}
}
function handleDragLeave() {
dragOver = false;
}
</script>
<!-- 隐藏的文件输入 -->
{#if !TAURI_ENV}
<input
bind:this={fileInput}
type="file"
accept="video/*"
style="display: none"
on:change={handleFileInputChange}
/>
{/if}
{#if showDialog}
<div class="fixed inset-0 bg-black/20 dark:bg-black/40 backdrop-blur-sm z-50 flex items-center justify-center p-4">
<div class="bg-white dark:bg-[#323234] rounded-xl shadow-xl w-full max-w-[600px] max-h-[90vh] overflow-hidden flex flex-col">
<div class="flex-1 overflow-y-auto">
<div class="p-6 space-y-4">
<div class="flex justify-between items-center">
<h3 class="text-lg font-medium text-gray-900 dark:text-white">导入外部视频</h3>
<button on:click={closeDialog} class="text-gray-400 hover:text-gray-600">
<X class="w-5 h-5" />
</button>
</div>
<!-- 文件选择区域 -->
<div
class="border-2 border-dashed rounded-lg p-8 text-center transition-colors {
dragOver ? 'border-blue-400 bg-blue-50 dark:bg-blue-900/20' :
'border-gray-300 dark:border-gray-600'
}"
on:dragover={handleDragOver}
on:dragleave={handleDragLeave}
on:drop={handleDrop}
>
{#if uploading}
<!-- 上传进度 -->
<div class="space-y-4">
<Upload class="w-12 h-12 text-blue-500 mx-auto animate-bounce" />
<p class="text-sm text-gray-900 dark:text-white font-medium">上传中...</p>
<div class="w-full bg-gray-200 dark:bg-gray-700 rounded-full h-2">
<div class="bg-blue-500 h-2 rounded-full transition-all" style="width: {uploadProgress}%"></div>
</div>
<p class="text-xs text-gray-500">{uploadProgress}%</p>
</div>
{:else if selectedFilePath}
<!-- 已选择文件 -->
<div class="space-y-4">
<div class="flex items-center justify-center">
<CheckCircle class="w-12 h-12 text-green-500 mx-auto" />
</div>
<p class="text-sm text-gray-900 dark:text-white font-medium">{selectedFileName}</p>
<p class="text-xs text-gray-500">大小: {formatFileSize(selectedFileSize)}</p>
<p class="text-xs text-gray-400 break-all" title={selectedFilePath}>{selectedFilePath}</p>
<button
on:click={() => {
selectedFilePath = null;
selectedFileName = "";
selectedFileSize = 0;
videoTitle = "";
}}
class="text-sm text-red-500 hover:text-red-700"
>
重新选择
</button>
</div>
{:else}
<!-- 选择文件提示 -->
<div class="space-y-4">
<Upload class="w-12 h-12 text-gray-400 mx-auto" />
{#if TAURI_ENV}
<p class="text-sm text-gray-600 dark:text-gray-400">
点击按钮选择视频文件
</p>
{:else}
<p class="text-sm text-gray-600 dark:text-gray-400">
拖拽视频文件到此处,或点击按钮选择文件
</p>
{/if}
<p class="text-xs text-gray-500 dark:text-gray-500">
支持 MP4, MKV, AVI, MOV, WMV, FLV, M4V, WebM 格式
</p>
</div>
{/if}
{#if !uploading && !selectedFilePath}
<button
on:click={handleFileSelect}
class="mt-4 px-4 py-2 bg-blue-500 text-white rounded-lg hover:bg-blue-600 transition-colors"
>
{TAURI_ENV ? '选择文件' : '选择或拖拽文件'}
</button>
{/if}
</div>
<!-- 视频信息编辑 -->
{#if selectedFilePath}
<div class="space-y-4">
<div>
<label for="video-title-input" class="block text-sm font-medium text-gray-700 dark:text-gray-300 mb-2">
视频标题
</label>
<input
id="video-title-input"
type="text"
bind:value={videoTitle}
class="w-full px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-white focus:outline-none focus:ring-2 focus:ring-blue-500 dark:focus:ring-blue-400"
placeholder="输入视频标题"
/>
</div>
</div>
{/if}
</div>
</div>
<!-- 操作按钮 - 固定在底部 -->
<div class="border-t border-gray-200 dark:border-gray-700 p-4 bg-gray-50 dark:bg-[#2a2a2c]">
<div class="flex justify-end space-x-3">
<button
on:click={closeDialog}
class="px-4 py-2 text-gray-700 dark:text-gray-300 hover:bg-gray-100 dark:hover:bg-gray-600 rounded-lg transition-colors"
>
取消
</button>
<button
on:click={startImport}
disabled={!selectedFilePath || importing || !videoTitle.trim() || uploading}
class="px-4 py-2 bg-green-500 text-white rounded-lg hover:bg-green-600 disabled:opacity-50 disabled:cursor-not-allowed transition-colors flex items-center space-x-2"
>
{#if importing}
<svg class="animate-spin h-4 w-4" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
<path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
{/if}
<span>{importing ? (importProgress || "导入中...") : "开始导入"}</span>
</button>
</div>
</div>
</div>
</div>
{/if}

Some files were not shown because too many files have changed in this diff.